Author: post Bina

  • Advisory: Pahalgam Attack themed decoys used by APT36 to target the Indian Government



    Seqrite Labs' APT team has discovered "Pahalgam Terror Attack"-themed documents being used by the Pakistan-linked APT group Transparent Tribe (APT36) to target Indian Government and defence personnel. The campaign involves both credential phishing and the deployment of malicious payloads, with fake domains impersonating the Jammu & Kashmir Police and the Indian Air Force (IAF) created shortly after the April 22, 2025 attack. This advisory documents the phishing PDFs and domains, which can be used to uncover similar activity, along with a macro-laced document used to deploy the group's well-known Crimson RAT.

    Analysis

    The PDF in question was created on April 24, 2025, with the author listed as "Kalu Badshah". The file names of this phishing document reference the Indian Government's response measures to the attack:

    • “Action Points & Response by Govt Regarding Pahalgam Terror Attack .pdf”
    • “Report Update Regarding Pahalgam Terror Attack.pdf”
    Picture 1

    The content of the document is masked, and the link embedded within it is the primary attack vector. If clicked, it leads to a fake login page as part of a social engineering effort to lure victims. The embedded URL is:

    • hxxps://jkpolice[.]gov[.]in[.]kashmirattack[.]exposed/service/home/

    The hostname mimics the legitimate Jammu & Kashmir Police website (jkpolice[.]gov[.]in), an official Indian police site, by using it as a subdomain prefix of the attacker-registered domain kashmirattack[.]exposed.

    Picture 2

    The addition of "kashmirattack" signals a thematic connection to the sensitive geopolitical issue, in this case the recent attack in the Kashmir region. Once government credentials for @gov.in or @nic.in accounts are entered, they are sent directly to the attacker-controlled host. Pivoting on the author's name, we observed multiple such phishing documents.
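The subdomain-prefix trick can be caught with a simple heuristic. The sketch below is illustrative (the domain list and logic are assumptions, not Seqrite's detection): it flags hostnames that begin with a known legitimate domain but resolve under a different registered domain. A production version would derive the registered domain via the Public Suffix List rather than plain string matching:

```python
# Known-good government hostnames (illustrative subset from this campaign).
LEGIT_GOV_DOMAINS = {"jkpolice.gov.in", "email.gov.in", "iaf.nic.in", "indianarmy.nic.in"}

def looks_spoofed(hostname: str) -> bool:
    """Flag hostnames that *start with* a legitimate government domain but
    are actually registered elsewhere, e.g.
    jkpolice.gov.in.kashmirattack.exposed."""
    for legit in LEGIT_GOV_DOMAINS:
        if hostname == legit:
            return False            # the real site
        if hostname.startswith(legit + "."):
            return True             # legit name used as a subdomain prefix
    return False

print(looks_spoofed("jkpolice.gov.in.kashmirattack.exposed"))  # True
print(looks_spoofed("jkpolice.gov.in"))                        # False
```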

    Picture 3

    Multiple file names, themed around various government and defence meetings, have been observed for these phishing documents, showcasing how quickly the group crafts lures around ongoing events in the country:

    • Report & Update Regarding Pahalgam Terror Attack.pdf
    • Report Update Regarding Pahalgam Terror Attack.pdf
    • Action Points & Response by Govt Regarding Pahalgam Terror Attack .pdf
    • J&K Police Letter Dated 17 April 2025.pdf
    • ROD on Review Meeting held on 10 April 2025 by Secy DRDO.pdf
    • RECORD OF DISCUSSION TECHNICAL REVIEW MEETING NOTICE, 07 April 2025 (1).pdf
    • MEETING NOTICE – 13th JWG meeting between India and Nepal.pdf
    • Agenda Points for Joint Venture Meeting at IHQ MoD on 04 March 2025.pdf
    • DO Letter Integrated HQ of MoD dated 3 March.pdf
    • Collegiate Meeting Notice & Action Points MoD 24 March.pdf
    • Letter to the Raksha Mantri Office Dated 26 Feb 2025.pdf
    • pdf
    • Alleged Case of Sexual Harassment by Senior Army Officer.pdf
    • Agenda Points of Meeting of Dept of Defence held at 11March 25.html
    • Action Points of Meeting of Dept of Defence held at 10March 25.html
    • Agenda Points of Meeting of External Affairs Dept 10 March 25.pdf.html

    PowerPoint PPAM Dropper

    A PowerPoint add-in file carrying the same name as the phishing document, "Report & Update Regarding Pahalgam Terror Attack.ppam", has been identified; it contains malicious macros. The macros extract two embedded files into a hidden, dynamically named directory under the user's profile, select the payload based on the Windows version, and finally open a decoy file embedding the same phishing URL while executing the Crimson RAT payload.

    Picture 4

    The final Crimson RAT payload has the internal name "jnmxrvt hcsm.exe" and is dropped as "WEISTT.jpg", with a similar PDB convention:

    • C:\jnmhxrv cstm\jnmhxrv cstm\obj\Debug\jnmhxrv cstm.pdb

    All three RAT payloads carry a compilation timestamp of 2025-04-21, just before the Pahalgam terror attack. As usual, a hardcoded default IP address is present as a decoy; the actual C2, after decoding, is 93.127.133[.]58. Apart from retrieving system and user information, the RAT supports the following 22 commands for command and control.

    Command | Functionality
    procl / getavs | Get a list of all processes
    endpo | Kill a process by PID
    scrsz | Set the screen size to capture
    cscreen | Get a screenshot
    dirs | Get all disk drives
    stops | Stop screen capture
    filsz | Get file information (name, creation time, size)
    dowf | Download a file from the C2
    cnls | Stop uploading, downloading, and screen capture
    scren | Get screenshots continuously
    thumb | Get a 200×150 GIF thumbnail of the screen
    putsrt | Set persistence via the Run registry key
    udlt | Download and execute a file from the C2 under the name 'vdhairtn'
    delt | Delete a file
    file | Exfiltrate a file to the C2
    info | Get machine info (computer name, username, IP, OS name, etc.)
    runf | Execute a command
    afile | Exfiltrate a file to the C2 with additional information
    listf | Search files by extension
    dowr | Download a file from the C2 (no execution)
    fles | Get the list of files in a directory
    fldr | Get the list of folders in a directory
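To illustrate how a RAT typically structures such a command set, here is a hypothetical dispatch-table sketch. This is an analyst's illustration of the pattern only; Crimson RAT itself is a .NET binary and this code is not recovered from it:

```python
# Hypothetical dispatch table mirroring two of the commands listed above.
# Analyst's illustration -- not code extracted from Crimson RAT.
import getpass
import platform

def cmd_info(_arg=None):
    # Mirrors 'info': machine info (computer name, username, OS name).
    return f"{platform.node()}|{getpass.getuser()}|{platform.system()}"

def cmd_delt(path):
    return f"would delete {path}"   # stubbed for safety

HANDLERS = {
    "info": cmd_info,
    "delt": cmd_delt,
}

def dispatch(line: str) -> str:
    verb, _, arg = line.partition("=")
    handler = HANDLERS.get(verb)
    return handler(arg or None) if handler else "unknown command"

print(dispatch("delt=x.txt"))  # would delete x.txt
```

A dispatch table makes adding new verbs cheap, which is one reason families like this accumulate large command sets over time.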

    Infrastructure and Attribution

    The phishing domains identified through hunting were registered within a day or two of the corresponding documents' creation.

    Domain | Created | IP(s) | ASN(s)
    jkpolice[.]gov[.]in[.]kashmirattack[.]exposed | 2025-04-24 | 37.221.64.134, 78.40.143.189 | AS 200019 (Alexhost Srl), AS 45839 (Shinjiru Technology)
    iaf[.]nic[.]in[.]ministryofdefenceindia[.]org | 2025-04-16 | 37.221.64.134 | AS 200019 (Alexhost Srl)
    email[.]gov[.]in[.]ministryofdefenceindia[.]org | 2025-04-16 | 45.141.58.224 | AS 213373 (IP Connect Inc)
    email[.]gov[.]in[.]departmentofdefenceindia[.]link | 2025-02-18 | 45.141.59.167 | AS 213373 (IP Connect Inc)
    email[.]gov[.]in[.]departmentofdefence[.]de | 2025-04-10 | 45.141.58.224 | AS 213373 (IP Connect Inc)
    email[.]gov[.]in[.]briefcases[.]email | 2025-04-06 | 45.141.58.224, 78.40.143.98 | AS 213373 (IP Connect Inc), AS 45839 (Shinjiru Technology)
    email[.]gov[.]in[.]modindia[.]link | 2025-03-02 | 84.54.51.12 | AS 200019 (Alexhost Srl)
    email[.]gov[.]in[.]defenceindia[.]ltd | 2025-03-20 | 45.141.58.224, 45.141.58.33 | AS 213373 (IP Connect Inc)
    email[.]gov[.]in[.]indiadefencedepartment[.]link | 2025-02-25 | 45.141.59.167 | AS 213373 (IP Connect Inc)
    email[.]gov[.]in[.]departmentofspace[.]info | 2025-04-20 | 45.141.58.224 | AS 213373 (IP Connect Inc)
    email[.]gov[.]in[.]indiangov[.]download | 2025-04-06 | 45.141.58.33, 78.40.143.98 | AS 213373 (IP Connect Inc), AS 45839 (Shinjiru Technology)
    indianarmy[.]nic[.]in[.]departmentofdefence[.]de | 2025-04-10 | 176.65.143.215 | AS 215208
    indianarmy[.]nic[.]in[.]ministryofdefenceindia[.]org | 2025-04-16 | 176.65.143.215 | AS 215208
    email[.]gov[.]in[.]indiandefence[.]work | 2025-03-10 | 45.141.59.72 | AS 213373 (IP Connect Inc)
    email[.]gov[.]in[.]indiangov[.]download | 2025-04-06 | 78.40.143.98 | AS 45839 (Shinjiru Technology)
    email[.]gov[.]in[.]drdosurvey[.]info | 2025-03-19 | 192.64.118.76 | AS 22612 (NAMECHEAP-NET)

    This kind of attack is typical in hacktivism, where the goal is to create chaos or spread a political message by exploiting sensitive or emotionally charged issues. In this case, the threat actor is exploiting existing tensions surrounding Kashmir to maximize the impact of their campaign and extract intelligence around these issues.

    The suspicious domains are part of a phishing and disinformation infrastructure consistent with tactics previously used by APT36 (Transparent Tribe), a group with a long history of targeting:

    • Indian military personnel
    • Government agencies
    • Defense and research organizations
    • Activists and journalists focused on Kashmir

    PPAM files have been used for initial access for many years, embedding malicious executables as OLE objects. Domain impersonation, creating deceptive URLs that mimic Indian government or military infrastructure, has been observed consistently since last year. The group often exploits sensitive topics such as the Kashmir conflict, border skirmishes, and military movements to craft spear-phishing lures. These campaigns, which deliver Crimson RAT hidden behind fake documents or malicious links on spoofed domains, are therefore attributed to APT36 with high confidence.

    Potential Impact: Geopolitical and Cybersecurity Implications

    The combination of a geopolitical theme and cybersecurity tactics suggests that this document is part of a broader disinformation campaign. The reference to Kashmir, a region with longstanding political and territorial disputes, indicates the attacker’s intention to exploit sensitive topics to stir unrest or create division.

    Additionally, PDF files are a proven delivery mechanism for malicious links in campaigns that aim to influence public perception, spread propaganda, or cause disruption. Here's how the impact could manifest:

    • Disruption of Sensitive Operations: If an official or government worker were to interact with this document, it could compromise their personal or organizational security.
    • Information Operations: The document could lead to the exposure of sensitive documents or the dissemination of false information, thereby creating confusion and distrust among the public.
    • Espionage and Data Breaches: The phishing attempt could ultimately lead to the theft of sensitive data or the deployment of malware within the target’s network, paving the way for further exploitation.

    Recommendations

    Email & Document Screening: Implement advanced threat protection to scan PDFs and attachments for embedded malicious links or payloads.

    Restrict Macro Execution: Disable macros by default, especially from untrusted sources, across all endpoints.

    Network Segmentation & Access Controls: Limit access to sensitive systems and data; apply the principle of least privilege.

    User Awareness & Training: Conduct regular training on recognizing phishing, disinformation, and geopolitical manipulation tactics.

    Incident Response Preparedness: Ensure a tested response plan is in place for phishing, disinformation, or suspected nation-state activity.

    Threat Intelligence Integration: Leverage geopolitical threat intel to identify targeted campaigns and proactively block indicators of compromise (IOCs).

    Monitor for Anomalous Behaviour: Use behavioural analytics to detect unusual access patterns or data exfiltration attempts.

    IOCs

    Phishing Documents

    c4fb60217e3d43eac92074c45228506a

    172fff2634545cf59d59c179d139e0aa

    7b08580a4f6995f645a5bf8addbefa68

    1b71434e049fb8765d528ecabd722072

    c4f591cad9d158e2fbb0ed6425ce3804

    5f03629508f46e822cf08d7864f585d3

    f5cd5f616a482645bbf8f4c51ee38958

    fa2c39adbb0ca7aeab5bc5cd1ffb2f08

    00cd306f7cdcfe187c561dd42ab40f33

    ca27970308b2fdeaa3a8e8e53c86cd3e

    Phishing Domains

    jkpolice[.]gov[.]in[.]kashmirattack[.]exposed

    iaf[.]nic[.]in[.]ministryofdefenceindia[.]org

    email[.]gov[.]in[.]ministryofdefenceindia[.]org

    email[.]gov[.]in[.]departmentofdefenceindia[.]link

    email[.]gov[.]in[.]departmentofdefence[.]de

    email[.]gov[.]in[.]briefcases[.]email

    email[.]gov[.]in[.]modindia[.]link

    email[.]gov[.]in[.]defenceindia[.]ltd

    email[.]gov[.]in[.]indiadefencedepartment[.]link

    email[.]gov[.]in[.]departmentofspace[.]info

    email[.]gov[.]in[.]indiangov[.]download

    indianarmy[.]nic[.]in[.]departmentofdefence[.]de

    indianarmy[.]nic[.]in[.]ministryofdefenceindia[.]org

    email[.]gov[.]in[.]indiandefence[.]work

    email[.]gov[.]in[.]indiangov[.]download

    email[.]gov[.]in[.]drdosurvey[.]info

    Phishing URLs

    hxxps://iaf[.]nic[.]in[.]ministryofdefenceindia[.]org/publications/default[.]htm

    hxxps://jkpolice[.]gov[.]in[.]kashmiraxxack[.]exposed/service/home

    hxxps://email[.]gov[.]in[.]ministryofdefenceindia[.]org/service/home/

    hxxps://email[.]gov[.]in[.]departmentofdefenceindia[.]link/service/home/

    hxxps://email[.]gov[.]in[.]departmentofdefence[.]de/service/home/

    hxxps://email[.]gov[.]in[.]indiangov[.]download/service/home/

    hxxps://indianarmy[.]nic[.]in[.]departmentofdefence[.]de/publications/publications-site-main/index[.]html

    hxxps://indianarmy[.]nic[.]in[.]ministryofdefenceindia[.]org/publications/publications-site-main/index[.]htm

    hxxps://email[.]gov[.]in[.]briefcases[.]email/service/home/

    hxxps://email[.]gov[.]in[.]modindia[.]link/service/home/

    hxxps://email[.]gov[.]in[.]defenceindia[.]ltd/service/home/

    hxxps://email[.]gov[.]in[.]indiadefencedepartment[.]link/service/home/

    hxxps://email[.]gov[.]in[.]departmentofspace[.]info/service/home/

    hxxps://email[.]gov[.]in[.]indiandefence[.]work/service/home/

    PPAM/XLAM

    d946e3e94fec670f9e47aca186ecaabe

    e18c4172329c32d8394ba0658d5212c2

    2fde001f4c17c8613480091fa48b55a0

    c1f4c9f969f955dec2465317b526b600

    Crimson RAT

    026e8e7acb2f2a156f8afff64fd54066

    fb64c22d37c502bde55b19688d40c803

    70b8040730c62e4a52a904251fa74029

    3efec6ffcbfe79f71f5410eb46f1c19e

    b03211f6feccd3a62273368b52f6079d

    93.127.133.58 (Ports – 1097, 17241, 19821, 21817, 23221, 27425)

    104.129.27.14 (Ports – 8108, 16197, 19867, 28784, 30123)

    MITRE ATT&CK

    Tactic | Technique ID | Name
    Reconnaissance | T1598.003 | Phishing for Information: Spearphishing Link
    Resource Development | T1583.001 | Acquire Infrastructure: Domains
    Initial Access | T1566.001 | Phishing: Spearphishing Attachment
    Execution | T1204.001 | User Execution: Malicious Link
    Execution | T1059.005 | Command and Scripting Interpreter: Visual Basic
    Persistence | T1547.001 | Boot or Logon Autostart Execution: Registry Run Keys / Startup Folder
    Discovery | T1033 | System Owner/User Discovery
    Discovery | T1057 | Process Discovery
    Discovery | T1082 | System Information Discovery
    Discovery | T1083 | File and Directory Discovery
    Collection | T1005 | Data from Local System
    Collection | T1113 | Screen Capture
    Exfiltration | T1041 | Exfiltration Over C2 Channel

    Authors:

    Sathwik Ram Prakki

    Rhishav Kanjilal



    Source link

  • measuring things to ensure prosperity | Code4IT



    Non-functional requirements matter, but we often forget to validate them. You can measure them by setting up Fitness Functions.


    Just creating an architecture is not enough; you should also make sure that the components you are building are, in the end, the ones needed by your system.

    Is your system fast enough? Is it passing all the security checks? What about testability, maintainability, and other -ilities?

    Fitness Functions are components of the architecture that do not execute functional operations, but, using a set of tests and measurements, allow you to validate that the system respects all the non-functional requirements defined upfront.

    Fitness Functions: because non-functional requirements matter

    An architecture is made of two main categories of requirements: functional requirements and non-functional requirements.

    Functional requirements are the easiest to define and test: if one of the requirements is "a user with role Admin must be able to see all data", then writing a suite of tests for this specific requirement is pretty straightforward.

    Non-functional requirements are, for sure, as important as functional requirements, but they are often overlooked or left vague. "The system must be fast": OK, how fast? What do you mean by "fast"? What is an acceptable value of "fast"?

    If we don’t have a clear understanding of non-functional requirements, then it’s impossible to measure them.

    And once we have defined a way to measure them, how can we ensure that we are meeting our expectations? Here’s where Fitness Functions come in handy.

    In fact, Fitness Functions are specific components that focus on non-functional requirements, executing some calculations and providing metrics that help architects and developers ensure that the system’s architecture aligns with business goals, technical requirements, and other quality attributes.

    Why Fitness Functions are crucial for future-proof architectures

    When creating an architecture, you must think of the most important -ilities for that specific case. How can you ensure that the technical choices you made meet the expectations?

    By being related to specific and measurable metrics, Fitness Functions provide a way to assess the architecture’s quality and performance, reducing the reliance on subjective opinions by using objective measurements. A metric can be a simple number (e.g., “maximum number of requests per second”), a percentage value (like “percentage of code covered by tests”) or other values that are still measurable.

    Knowing how the system behaves in regards to these measures allows architects to work on the continuous improvement of the system: teams can identify areas for improvement and make decisions based not on personal opinion but on actual data to enhance the system.

    Having a centralized place to view the historical values of a measure helps you understand whether you have made progress or whether, as time goes by, quality has degraded.

    Still talking about the historical values of the measures, having a clear understanding of what is the current status of such metrics can help in identifying potential issues early in the development process, allowing teams to address them before they become critical problems.

    For example, by using Fitness Functions, you can ensure that the system is able to handle a certain amount of users per second: having proper measurements, you can identify which functionalities are less performant and, in case of high traffic, may bring the whole system down.

    You are already using Fitness Functions, but you didn’t know

    Fitness Functions sound like complex things to handle.

    Even though you can create your own functions, most probably you are already using them without knowing it. Lots of tools are available out there that cover several metrics, and I’m sure you’ve already used some of them (or, at least, you’ve already heard of them).

    Tools like SonarQube and NDepend use Fitness Functions to evaluate code quality based on metrics such as code complexity, duplication, and adherence to coding standards. Those metrics are calculated based on static analysis of the code, and teams can define thresholds under which a system can be at risk of losing maintainability. An example of metric related to code quality is Code Coverage: the higher, the better (even though 100% of code coverage does not guarantee your code is healthy).

    Tools like JMeter or K6 help you measure system performance under various conditions: having a history of load testing results can help ensure that, as you add new functionalities to the system, the performance on some specific modules does not downgrade.

    All in all, most of the Fitness Functions can be set to be part of CI/CD pipelines: for example, you can configure a CD pipeline to block the deployment of the code on a specific system if the load testing results of the new code are worse than the previous version. Or you could block a Pull Request if the code coverage percentage is getting lower.
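As a concrete sketch, a coverage-based fitness function for a pipeline gate can be as small as a comparison between the previous and current build's coverage. The function name and tolerance parameter below are my own for illustration, not from any specific CI tool:

```python
def coverage_gate(previous: float, current: float, tolerance: float = 0.0) -> bool:
    """A minimal fitness function: pass only when code coverage has not
    dropped below the previous build's value (minus an optional tolerance)."""
    return current >= previous - tolerance

# Example: block the PR if coverage fell from 81.2% to 78.9%.
print(coverage_gate(81.2, 78.9))  # False -> fail the build
print(coverage_gate(81.2, 81.5))  # True  -> pass
```

The same shape works for any scalar metric, such as requests per second from a load test: store the last accepted value, compare, and fail the pipeline step on regression.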

    Further readings

    A good way to start experimenting with Load Testing is by running them locally. A nice open-source project is K6: you can install it on your local machine, define the load phases, and analyze the final result.

    🔗 Getting started with Load testing with K6 on Windows 11 | Code4IT

    This article first appeared on Code4IT 🐧

    But, even if you don't really care about load testing (maybe because your system is not expected to handle lots of users), I'm sure you still care about code quality and its tests. When using .NET, you can collect code coverage reports using Cobertura. Then, if you are using Azure DevOps, you may want to stop a Pull Request if the code coverage percentage has decreased.

    Here’s how to do all of this:

    🔗 Cobertura, YAML, and Code Coverage Protector: how to view Code Coverage report on Azure DevOps | Code4IT

    Wrapping up

    Sometimes, there are things that we use every day, but we don’t know how to name them: Fitness Functions are one of them – and they are the foundation of future-proof software systems.

    You can create your own Fitness Functions based on whatever you can (and need to) measure: from average page load time to star-rated customer satisfaction. In conjunction with a clear dashboard, you can provide a view of the history of such metrics.

    I hope you enjoyed this article! Let’s keep in touch on LinkedIn, Twitter or BlueSky! 🤜🤛

    Happy coding!

    🐧





    Source link

  • Threat Actors are Targeting the US Tax Season with New Tactics of the Stealerium Infostealer



    Introduction

    A security researcher from Seqrite Labs has uncovered a malicious campaign targeting U.S. citizens as Tax Day approaches on April 15. Seqrite Labs has identified multiple phishing attacks leveraging tax-related themes as a vector for social engineering, aiming to exfiltrate user credentials and deploy malware. These campaigns predominantly rely on phishing emails with redirection techniques and exploit malicious LNK files to further their objectives.

    Each year, cybercriminals exploit the tax season to deploy various social engineering tactics that compromise sensitive personal and financial data. These adversaries craft highly deceptive campaigns designed to trick taxpayers into divulging confidential information, making fraudulent payments to counterfeit services, or inadvertently installing malicious payloads on their devices, thereby exposing them to identity theft and financial loss.

    Infection Chain:

    Fig 1: Infection chain

    Initial analysis about campaign:

    While tax-season phishing attacks pose a risk to a broad spectrum of individuals, our analysis indicates that certain demographics are disproportionately vulnerable. Specifically, high-risk targets include individuals with limited knowledge of government tax processes, such as green card holders, small business owners, and new taxpayers.

    Our findings reveal that threat actors are leveraging a sophisticated phishing technique in which they deliver files via email with deceptive extensions. One such example is a file named "104842599782-4.pdf.lnk", which utilizes a malicious LNK extension. This tactic exploits user trust by masquerading as a legitimate document, ultimately leading to the execution of malicious payloads upon interaction.
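A simple defensive check for this tactic is to flag the double-extension pattern. The sketch below is a minimal heuristic, and the extension sets are illustrative assumptions:

```python
# Extensions commonly abused as the *outer* (executed) extension, and
# document extensions commonly faked as the *inner* one. Illustrative sets.
SCRIPTY_EXTS = {".lnk", ".hta", ".js", ".vbs", ".scr"}
DOC_EXTS = {".pdf", ".docx", ".xlsx", ".txt"}

def deceptive_double_extension(filename: str) -> bool:
    """Flag names like '104842599782-4.pdf.lnk' where a document extension
    is immediately followed by an executable/script extension."""
    parts = filename.lower().rsplit(".", 2)
    if len(parts) < 3:
        return False
    _, inner, outer = parts
    return "." + inner in DOC_EXTS and "." + outer in SCRIPTY_EXTS

print(deceptive_double_extension("104842599782-4.pdf.lnk"))  # True
print(deceptive_double_extension("report.pdf"))              # False
```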

    Decoy Document:

    Threat actors are disseminating a transcript related to tax sessions, targeting individuals through email by sharing it as a malicious attachment. These cybercriminals are leveraging this document as a vector to deliver harmful payloads, thereby compromising the security of the recipients.

     

    Fig 2: Decoy Document

    Technical Analysis:

    We have retrieved the LNK file, identified as “04842599782-4.pdf.lnk,” which was utilized in the attack. This LNK file embeds a Base64-encoded payload within its structure.

    Fig 3: Inside LNK File

    Upon decoding the string, we extracted a PowerShell command line that itself contains another Base64-encoded payload embedded within it.

    Fig 4: Encoded PowerShell Command Line

     

    Subsequently, upon decoding the nested Base64 string, we uncovered the final PowerShell command line embedded within the payload.

    Fig 5: Decoded Command Line
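The layered decoding shown in Figs 3-5 follows a common pattern: decode one Base64 layer, look for the next Base64 token inside the result, repeat. A generic sketch of that loop (simplified: real PowerShell -EncodedCommand payloads are UTF-16LE, and the token regex here is a rough assumption):

```python
import base64
import re

def peel_base64(blob: str, max_layers: int = 5) -> list[str]:
    """Repeatedly decode Base64 layers, collecting each decoded stage.
    Analysts use this pattern to unwrap nested PowerShell loaders."""
    stages = []
    current = blob
    for _ in range(max_layers):
        try:
            decoded = base64.b64decode(current, validate=True).decode("utf-8", "replace")
        except Exception:
            break
        stages.append(decoded)
        # Look for another Base64-like token inside the decoded stage.
        m = re.search(r"[A-Za-z0-9+/=]{24,}", decoded)
        if not m:
            break
        current = m.group(0)
    return stages

# Synthetic two-layer example (not the actual payload from this campaign):
inner = base64.b64encode(b"IEX (New-Object Net.WebClient)").decode()
outer = base64.b64encode(f"powershell -enc {inner}".encode()).decode()
print(peel_base64(outer))
```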

    The extracted PowerShell command line initiated the download of rev_pf2_yas.txt, which itself is a PowerShell script (Payload.ps1) containing yet another Base64-encoded payload embedded within it.

    Fig 6: 2nd PowerShell command with Base64 Encoded

    Decoding the above Base64-encoded command line yields the final executable stage below.

    Fig 7: Decoded PowerShell Command

    According to the PowerShell command line, the script Payload.ps1 (or rev_pf2_yas.txt) initiated the download of an additional file, revolaomt.rar, from the Command and Control (C2) server. This archive contained a malicious executable, named either Setup.exe or revolaomt.exe.

    Detailed analysis of Setup.exe / revolaomt.exe:

    Fig 8: Detect it Easy

    Upon detailed examination of the Setup.exe binary, it was identified as a PyInstaller-packaged Python executable. Subsequent extraction and decompilation revealed embedded Python bytecode artifacts, including DCTYKS.pyc and additional Python module components.

    Fig 9: PyInstaller-packaged Python executable
    Fig 10: Inside DCTYKS.pyc

    Upon analysis of the DCTYKS.pyc sample, it was determined that the file contains obfuscated or encrypted payload data, which is programmatically decrypted at runtime and subsequently executed, as illustrated in the figure above.

    Fig 11: Encoded DCTYKS.pyc with Base64

    Upon successful decryption of the script, it was observed that the sample embeds a Base64-encoded executable payload. The decrypted payload leverages process injection techniques to target mstsc.exe for execution. Further analysis of the second-stage payload revealed it to be a .NET-compiled binary.

    Analysis of the 2nd Payload (Stealerium malware):

    Fig 12: .NET Base Malware sample

    The second-stage payload is identified as a .NET-based malware sample. Upon inspection of its class structures, methods, and overall functionality, the sample exhibits strong behavioural and structural similarities to the Stealerium malware family, specifically aligning with version 1.0.35.

    Stealerium is an open-source information-stealing malware designed to exfiltrate sensitive data from web browsers, cryptocurrency wallets, and popular applications such as Discord, Steam, and Telegram. It performs extensive system reconnaissance by harvesting details including active processes, desktop screenshots, and available Wi-Fi network configurations. Additionally, the malware incorporates sophisticated anti-analysis mechanisms to identify execution within virtualized environments and detect the presence of debugging tools.

    Anti-Analysis

    Fig 13: Anti Analysis Techniques
    Fig 14: GitHub URLs
    Fig 15: Detecting Suspicious ENV

    This AntiAnalysis class is part of malware designed to detect sandboxes, virtual machines, emulators, suspicious processes, services, usernames, and more. It checks system attributes against blacklists fetched from online sources (GitHub). If any suspicious environment is detected, it logs the finding and may trigger self-destruction. This helps the malware avoid analysis in controlled or security-research setups.

    Mutex Creation

    Fig 16: Mutex Creation

    This MutexControl class prevents multiple instances of the malware from running at the same time. It tries to create a system-wide mutex using a name from Config.Mutex (QT1bm11ocWPx). If the mutex already exists, it means another instance is running, so it exits the process. If an error occurs during this check, it logs the error and exits too.
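The single-instance pattern MutexControl implements maps onto any atomic create-if-absent primitive. Below is a cross-platform stand-in using an exclusive lock file, purely to illustrate the logic; Stealerium itself uses the Windows named-mutex API:

```python
import os
import tempfile

LOCK_NAME = "QT1bm11ocWPx.lock"   # mirrors the mutex name noted above

def acquire_single_instance(lock_dir: str = tempfile.gettempdir()) -> bool:
    """Cross-platform stand-in for CreateMutex: atomically create a lock
    file; failure means another instance already holds it."""
    path = os.path.join(lock_dir, LOCK_NAME)
    try:
        fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
        os.close(fd)
        return True
    except FileExistsError:
        return False

with tempfile.TemporaryDirectory() as d:
    print(acquire_single_instance(d))  # True  (first instance)
    print(acquire_single_instance(d))  # False (second instance would exit)
```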

    Fig 17: Configuration of StringsCrypt.DecryptConfig

    It configures necessary values by decrypting them with StringsCrypt.DecryptConfig. It handles the decryption of the server base URL and WebSocket address. If enabled, it also decodes cryptocurrency wallet addresses from Base64 and decrypts them using AES-256 encryption.

    “hxxp://91.211.249.142:7816”

    Random Directory Creation

    Fig 18: Random Directory Creation

    The InitWorkDir() method generates a random subdirectory under %LOCALAPPDATA%, creates it if it doesn’t exist, and hides it for stealth purposes. This is likely used for storing data or maintaining persistence without detection.

    A random directory such as \AppData\Local\e9d3e2dd2788c322ffd2c9defddf7728 is created with the hidden attribute set.

    Bot Registration

    Fig 19: BOT Registration

    The RegisterBot method initiates an HTTP POST request to register a bot instance, utilizing a unique hash identifier and an authorization token for authentication. It serializes the registration payload, appends the necessary HTTP headers, and logs the server response or any encountered exceptions. The method returns a boolean value—true upon successful execution, and false if an exception is raised during the process.

    RequestUri: ‘http[:]//91[.]211[.]249[.]142:7816/api/bot/v1/register’

     

    Stealer Activity From Browser:

    Fig 20: Stealer activity from Browser

    It extracts browser-related data (passwords, cookies, credit cards, history, bookmarks, autofill) from a given user data profile path.

    FileZilla Credentials stealer activity

    Fig 21: FileZilla Credential Stealer activity

    The above code is part of a password-stealing component targeting FileZilla, an FTP client.

    Gaming Platform Data Extraction Modules

    Fig 22: Gaming platform data extraction

    This component under bt.Stub.Target.Gaming is designed to collect data from the following platforms:

    • BattleNet
    • Minecraft
    • Steam
    • Uplay

    Each class likely implements routines to extract user data, game configurations, or sensitive files for exfiltration.

    Fig 23: Checks for a Minecraft installation

    It checks for a Minecraft installation and creates a save directory to exfiltrate various data like mods, files, versions, logs, and screenshots. It conditionally captures logs and screenshots based on the Config.GrabberModule setting.

    Messenger Data Stealer Modules

    It targets various communication platforms to extract user data or credentials from:

    • Discord
    • Element
    • ICQ
    • Outlook
    • Pidgin
    • Signal
    • Skype
    • Telegram
    • Tox

    Below is one example of Outlook Credentials Harvesting

    It targets specific registry keys associated with Outlook profiles to extract sensitive information like email addresses, server names, usernames, and passwords. It gathers data for multiple mail clients (SMTP, POP3, IMAP) and writes the collected information to a file (Outlook.txt).

    Fig 24: Messenger Data Extraction

     

    Webcam Screenshot Capture

    It attempts to take a screenshot using a connected webcam, saving the image as a JPEG file. If a single camera is connected, it triggers a series of messages to capture the webcam image, which is then saved to the specified path (camera.jpg or a timestamped filename). The method is controlled by a configuration setting (Config.WebcamScreenshot).

     

    Fig 25: Webcam Screen shot captures

     

    Wi-Fi Password Retrieval

     

    It retrieves the Wi-Fi password for a given network profile by running the command netsh wlan show profile and extracting the password from the output. The command uses findstr Key to filter the password line, which is then split and trimmed to obtain the value.

     

    Fig 26: WI-FI Password Retrieval

     

    VPN Data Extraction

    It targets various VPN applications to exfiltrate sensitive information such as login credentials:

    • NordVpn
    • OpenVpn
    • ProtonVpn

    For example, it extracts and saves NordVPN credentials from the user.config file found in NordVPN installation directories. It looks for “Username” and “Password” settings, decodes them, and writes them to a file (accounts.txt) in the specified savePath.

     

    Fig 27: VPN Data Extraction
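To make the described behaviour concrete, here is a hedged Python approximation of that user.config parsing (the malware itself is .NET; the XML layout and the sample values below are assumptions based on the behaviour described above):

```python
import base64
import xml.etree.ElementTree as ET

def extract_nordvpn_creds(user_config_xml):
    # NordVPN's user.config is a .NET settings file in which each
    # <setting> element carries a name attribute and a <value> child.
    # The stealer looks for "Username" and "Password" entries and
    # decodes their base64-encoded values.
    creds = {}
    root = ET.fromstring(user_config_xml)
    for setting in root.iter("setting"):
        if setting.get("name") in ("Username", "Password"):
            value = setting.findtext("value") or ""
            creds[setting.get("name")] = base64.b64decode(value).decode("utf-8", "ignore")
    return creds

# Hypothetical sample file content, for illustration only.
sample = """<configuration><userSettings><NordVpn.Properties.Settings>
  <setting name="Username" serializeAs="String"><value>dXNlckBleGFtcGxlLmNvbQ==</value></setting>
  <setting name="Password" serializeAs="String"><value>cGFzc3dvcmQxMjM=</value></setting>
</NordVpn.Properties.Settings></userSettings></configuration>"""
print(extract_nordvpn_creds(sample))
```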

     

    Porn Detection & Screenshot Capture

    Fig 28: Porn Detection & Snapshot Captures.

    It detects adult content by checking if the active window’s title contains specific keywords related to NSFW content (configured in Config.PornServices). If such content is detected, it triggers a screenshot capture.
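The check itself amounts to a case-insensitive substring match against the active window title. A minimal Python sketch (the keyword list here is illustrative, not the actual contents of Config.PornServices):

```python
NSFW_KEYWORDS = ["porn", "sex", "hentai", "xxx"]  # illustrative stand-in for Config.PornServices

def is_nsfw_title(window_title, keywords=NSFW_KEYWORDS):
    # Case-insensitive substring match, as described above; a hit
    # would trigger the screenshot-capture routine.
    title = window_title.lower()
    return any(keyword in title for keyword in keywords)

print(is_nsfw_title("Best Python Tutorials - Chrome"))  # False
```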

    Conclusion:

    Based on our recent proactive threat analysis, we’ve identified that cybercriminals are actively targeting U.S. citizens around the tax filing period scheduled for April 15. These threat actors are leveraging the occasion to deploy Stealerium malware, using deceptive tactics to trick users.

    Stealerium malware is designed to steal Personally Identifiable Information (PII) from infected devices and transmit it to attacker-controlled bots for further exploitation.

    To safeguard your data and devices, we strongly recommend using Seqrite Endpoint Security, which provides advanced protection against such evolving threats.

    Stay secure. Stay protected with Seqrite.

    TTPs

    Tactic | Technique ID | Name
    Initial Access | T1566.001 | Phishing: Spearphishing Attachment
    Execution | T1059.001 | Command and Scripting Interpreter: PowerShell
    Defense Evasion | T1140 | Deobfuscate/Decode Files or Information
    Defense Evasion | T1027 | Obfuscated Files or Information
    Defense Evasion | T1497 | Virtualization/Sandbox Evasion
    Defense Evasion | T1497.001 | Virtualization/Sandbox Evasion: System Checks
    Credential Access | T1555.003 | Credentials from Password Stores: Credentials from Web Browsers
    Credential Access | T1539 | Steal Web Session Cookie
    Discovery | T1217 | Browser Information Discovery
    Discovery | T1016.002 | System Network Configuration Discovery: Wi-Fi Discovery
    Collection | T1113 | Screen Capture
    Exfiltration | T1567.004 | Exfiltration Over Web Service: Exfiltration Over Webhook

     

    Seqrite Protections:

    • HEUR:Trojan.Win32.PH
    • Trojan.49490.GC
    • Trojan.49489.GC

    IoCs:

    File Name | SHA-256
    Setup.exe / revolaomt.exe | 6a9889fee93128a9cdcb93d35a2fec9c6127905d14c0ceed14f5f1c4f58542b8
    104842599782-4.pdf.lnk | 48328ce3a4b2c2413acb87a4d1f8c3b7238db826f313a25173ad5ad34632d9d7
    payload_1.ps1 / fgrsdt_rev_hx4_ln_x.txt | 10f217c72f62aed40957c438b865f0bcebc7e42a5e947051edee1649adf0cbf2
    revolaomt.rar | 31705d906058e7324027e65ce7f4f7a30bcf6c30571aa3f020e91678a22a835a
    104842599782-4.html | ff5e3e3bf67d292c73491fab0d94533a712c2935bb4a9135546ca4a416ba8ca1

     

    C2:

    • hxxp[:]//91[.]211[.]249[.]142:7816/
    • hxxp[:]//185[.]237[.]165[.]230/

     

    Authors:

    Dixit Panchal
    Kartik Jivani
    Soumen Burma



    Source link

  • Calling AWS Bedrock from code. Using Python in a Jupyter notebook | by Thomas Reid


    Image by Author

    Using Python in a Jupyter notebook

    Many of you will know that every man and his dog is producing AI products or LLMs and integrating them with their products. Not surprisingly, AWS, the biggest cloud services provider, is also getting in on the act.

    What is Bedrock?

    Its AI offering is called Bedrock, and the following blurb from its website describes what Bedrock is.

    Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Stability AI, and Amazon via a single API, along with a broad set of capabilities you need to build generative AI applications, simplifying development while maintaining privacy and security. With Amazon Bedrock’s comprehensive capabilities, you can easily experiment with a variety of top FMs, privately customize them with your data using techniques such as fine-tuning and retrieval augmented generation (RAG), and create managed agents that execute complex business tasks — from booking travel and processing insurance claims to creating ad campaigns and managing inventory — all without writing any code. Since Amazon Bedrock is serverless, you don’t have to manage any infrastructure, and you can securely integrate and deploy generative AI…
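As a first taste of what the article walks through, here is a minimal Python sketch of calling Bedrock with boto3. The model ID and region below are examples, and you need AWS credentials plus Bedrock model access for the invocation itself to succeed:

```python
import json

def build_claude_request(prompt, max_tokens=256):
    # JSON body in the format expected by Anthropic models on Bedrock.
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    })

def invoke(prompt, model_id="anthropic.claude-3-haiku-20240307-v1:0"):
    # Requires `pip install boto3` plus configured AWS credentials.
    import boto3
    client = boto3.client("bedrock-runtime", region_name="us-east-1")  # example region
    response = client.invoke_model(modelId=model_id, body=build_claude_request(prompt))
    return json.loads(response["body"].read())["content"][0]["text"]

print(build_claude_request("Say hello."))
```

Splitting the request-body construction from the network call keeps the serialization logic easy to test in a notebook without touching AWS.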



    Source link

  • End of April Sale! 🎁

    End of April Sale! 🎁


    At Browserling and Online Tools, we love sales.

    We just created a new automated sale: now, each year on the last day of April, we run an End of April Sale!

    🔥 onlinetools.com/pricing

    🔥 browserling.com/#pricing

    Buy a subscription now and see you next time!



    Source link

  • Let a thousand programming publications bloom. | by Tony Stubblebine


    I get that this is a big pivot given that we switched to a new editor recently. But things are changing at Medium and I think this will ultimately be a boon for everyone, authors, readers and publications.

    I would like to inspire some (but not all) of you to start a publication and give you some guidelines on how to do it well. If you are an author, there are many other publications to write for and hopefully there will soon be even more (check the comments for suggestions).

    Medium has always had publications that acted as something in between a group blog and a sub-reddit. Publication editors help set a quality bar, give feedback on your posts, and bring you an audience. Publications are a pillar of the Medium experience.

    But the publication opportunities that (I think) are exciting are changing. In the past, the way to have a successful publication was to publish on anything and everything. So Medium was dominated by broad, high volume publications. Better Programming was one of those pubs and we published on topics that might not have a lot of overlapping readers. How many of you are currently programming in all of these languages: Go, Rust, Javascript, Ruby, Python, Swift, Kotlin, and Dart?

    Better Programming has published stories on all of those topics and more, and so by definition we were often publishing stories that a lot of you don’t want to read. The direction Medium is heading is to optimize for publications that are more focused than Better Programming has been.

    There are two types of focuses that I’m personally excited about. One is that publications are de facto communities of enthusiasts. The other is that publications bring a level of expertise to Medium’s boost program. Caveat: these are just what I’m excited about — maybe you have more creative ideas than I do.

    Both cases beg for publications that are focused.

    If you want to build an enthusiast community of people who love Kotlin, who want to write about their Kotlin projects and what they are learning, then you don’t also need authors in your publication who are writing about Swift.

    Similarly, Medium is leaning on the expertise of publication editors to contribute as nominators in the Boost program. It’s hard to bring credible expertise when your focus is too broad. Most nominators also have first hand expertise beyond what they publish. So, if I were to run Better Programming myself, I think I could credibly nominate within Rails (I’ve built several companies on that stack) and Regular Expressions (I wrote a book), but I’m clueless on nearly everything else.

    Running a publication isn’t for everyone and it isn’t a get-rich-quick scheme. The best publications are run out of authentic interest in a topic and nothing more. In technical topics, there can be some financial rewards, which I’ll get to. But mainly it’s best to think of this as a way to harness a passion you may have. I know that the community building impulse is strong in many of you because I’ve seen how many people have started publications on Medium over the years.

    For any of you who are interested, I’m going to give you some tips on starting a publication. These aren’t exactly a recipe, but I’ll try to arrange them in order.

    1. Pubs are easy to start and at minimum you have yourself as a possible author to fill the pub with stories. Here’s a link to get going.
    2. If you want to accept other authors, then you need to set up instructions. Almost all publications that accept other authors set up a “write for us” page with instructions, make it a tab on the publication, write a style guide, and then create a Google form to handle new author applications. Copy ours.
    3. Do you want to focus on inclusivity? If so, then your role is probably more about support and encouragement and less about setting a high editorial bar. People get squeamish about being judged but the thing I’ve long recognized is that all writing was useful to the writer and is often useful to at least a few people, but very little writing is going to trend on Reddit or HackerNews.
    4. Do you want to focus on exclusivity, i.e. finding the best of the best ideas and information on a topic? Medium’s Boost program gives publication editors a tool to recruit authors: “I can help boost your stories to more readers.” You can’t just boost anything, it has to be the best of the best. And so focusing on that is a very exclusive approach. I often think of a publication here about Runners where the editor is using his access to the Boost to work with professional running coaches, professional runners, and the former editor-in-chief of Runners World. That must be so fun for him! The programming equivalent is different for each programming language, so I’ll use an example from the language I got started in: if I started a publication for Perl, I’d use the boost as a way to recruit Larry Wall.
    5. Consider becoming a Boost nominator but also consider that doing that will require having a strong nose for the best of the best. Of course every story on Medium is “high quality” but there are certain stories that are important, accurate, helpful and maybe even more than that. This isn’t official policy, but unofficially, it would be reasonable to submit an application to be a Boost Nominator once you have a publication with three authors and ten stories.
    6. Getting a publication started requires recruiting authors. Hopefully you know some already, even if they aren’t on Medium. I think that if you don’t know a subject well enough that you also already know other people with similar enthusiasm and expertise in that subject, then starting a publication isn’t for you. That’s not a hard rule, but I’m saying it from experience. After recruiting from your own network, the way almost every other publication has recruited authors is by monitoring relevant tags on Medium and then using the private note feature to invite recently published stories into your publication.
    7. Let’s talk money. If you are a publication that Boosts stories, you will get paid an honorarium. Plus, if you build an audience, your own stories might make more money. But you are missing the big picture if this is the most important thing to you. Writing and editing is a form of portfolio building. The software engineering field pays so much money, way beyond what Medium pays for writing. So focusing on getting paid from Medium is the ultimate example of a local maximum, because you can make 1000x more by building a reputation and using it to get a job or a raise. This is just fact.

    If you do start a programming publication that is looking for authors or you’ve already started a programming publication like that, post a link in the responses along with a link to your submission guidelines.

    Authors: I looked up Better Programming’s stats. 4.6k authors have published 16.8k stories to Better Programming. Those stories generated 151M page views. Not all of them were behind the paywall, but the ones that were earned authors $999 thousand. It’s been a huge honor to play a role in that, and my thanks go out to the editors who’ve made it happen and to all of you for writing. Medium is still a great home for you; it’s just that you should find new places to publish.



    Source link

  • HTTP Logging in ASP.NET: how to automatically log all incoming HTTP requests (and its downsides!)

    HTTP Logging in ASP.NET: how to automatically log all incoming HTTP requests (and its downsides!)


    Aren’t you tired of adding manual logs to your HTTP APIs to log HTTP requests and responses? By using a built-in middleware in ASP.NET, you will be able to centralize logs management and have a clear view of all the incoming HTTP requests.


    Just a second! 🫷
    If you are here, it means that you are a software developer.
    So, you know that storage, networking, and domain management have a cost.

    If you want to support this blog, please ensure that you have disabled the adblocker for this site.
    I configured Google AdSense to show as few ADS as possible – I don’t want to bother you with lots of ads, but I still need to add some to pay for the resources for my site.

    Thank you for your understanding.
    Davide

    Whenever we publish a service, it is important to add proper logging to the application. Logging helps us understand how the system works and behaves, and it’s a fundamental component that allows us to troubleshoot problems that occur during the actual usage of the application.

    In this blog, we have talked several times about logging. However, we mostly focused on the logs that were written manually.

    In this article, we will learn how to log incoming HTTP requests to help us understand how our APIs are being used from the outside.

    Scaffolding the empty project

    To showcase this type of logging, I created an ASP.NET API. It’s a very simple application with CRUD operations on an in-memory collection.

    [ApiController]
    [Route("[controller]")]
    public class BooksController : ControllerBase
    {
        private readonly List<Book> booksCatalogue = Enumerable.Range(1, 5).Select(index => new Book
        {
            Id = index,
            Title = $"Book with ID {index}"
        }).ToList();
    
        private readonly ILogger<BooksController> _logger;
    
        public BooksController(ILogger<BooksController> logger)
        {
            _logger = logger;
        }
    }
    

    These CRUD operations are exposed via HTTP APIs, following the usual verb-based convention.

    For example:

    [HttpGet("{id}")]
    public ActionResult<Book> GetBook([FromRoute] int id)
    {
        _logger.LogInformation("Looking if in my collection with {TotalBooksCount} books there is one with ID {SearchedId}"
                , booksCatalogue.Count, id);
    
        Book? book = booksCatalogue.SingleOrDefault(x => x.Id == id);
    
        return book switch
        {
            null => NotFound(),
            _ => Ok(book)
        };
    }
    

    As you can see, I have added some custom logs: before searching for the element with the specified ID, I also wrote a log message such as “Looking if in my collection with 5 books there is one with ID 2”.

    Where can I find the message? For the sake of this article, I decided to use Seq!

    Seq is a popular log sink (well, as you may know, my favourite one!) that is easy to install and to integrate with .NET. I’ve thoroughly explained how to use Seq in conjunction with ASP.NET in this article and in other ones.

    In short, the most important change in your application is to add Seq as the log sink, like this:

    builder.Services.AddLogging(lb => {
        lb.AddSeq();
    });
    

    Now, whenever I call the GET endpoint, I can see the related log messages appear in Seq:

    Custom log messages

    But sometimes it’s not enough. I want to see more details, and I want them to be applied everywhere!

    How to add HTTP Logging to an ASP.NET application

    HTTP Logging is a way of logging most of the details of the incoming HTTP operations, tracking both the requests and the responses.

    With HTTP Logging, you don’t need to manually write custom logs to access the details of incoming requests: you just need to add its related middleware, configure it as you want, and have all the required logs available for all your endpoints.

    Adding it is pretty straightforward: you first need to add the HttpLogging middleware to the list of services:

    builder.Services.AddHttpLogging(lb => { });
    

    so that you can use it, by calling app.UseHttpLogging(), once the WebApplication instance is built.

    There’s still a problem, though: all the logs generated via HttpLogging are, by default, ignored, as logs coming from their namespace (named Microsoft.AspNetCore.HttpLogging.HttpLoggingMiddleware) are at Information log level, thus ignored because of the default configurations.

    You either have to update the appsettings.json file to tell the logging system to process logs from that namespace:

    {
      "Logging": {
        "LogLevel": {
          "Default": "Information",
          "Microsoft.AspNetCore": "Warning",
          "Microsoft.AspNetCore.HttpLogging.HttpLoggingMiddleware": "Information"
        }
      }
    }
    

    or, alternatively, you need to do the same when setting up the logging system in the Program class:

    builder.Services.AddLogging(lb => {
      lb.AddSeq();
    + lb.AddFilter("Microsoft.AspNetCore.HttpLogging.HttpLoggingMiddleware", LogLevel.Information);
    });
    

    We then have all our pieces in place: let’s execute the application!

    First, you can spin up the API; you should be able to see the Swagger page:

    Swagger page for our application&rsquo;s API

    From here, you can call the GET endpoint:

    Http response of the API call, as seen on Swagger

    You should now be able to see all the logs in Seq:

    Logs list in Seq

    As you can see from the screenshot above, I have a log entry for the request and one for the response. Also, of course, I have the custom message I added manually in the C# method.

    Understanding HTTP Request logs

    Let’s focus on the data logged for the HTTP request.

    If we open the log related to the HTTP request, we can see all these values:

    Details of the HTTP Request

    Among these details, we can see properties such as:

    • the host name (localhost:7164)
    • the method (GET)
    • the path (/books/4)

    and much more.

    You can see all the properties as standalone items, but you can also have a grouped view of all the properties by accessing the HttpLog element:

    Details of the HTTP Log element

    Notice that for some elements we do not have access to the actual value, as the value is set to [Redacted]. This is a default configuration that prevents logging sensitive values (and disclosing them), as well as writing too much content to the log sink (the more you write, the less performant the queries become, and you also pay more!).

    Among other redacted values, you can see that even the Cookie value is not directly available – for the same reasons explained before.

    Understanding HTTP Response logs

    Of course, we can see some interesting data in the Response log:

    Details of the HTTP Response

    Here, among some other properties such as the Host Name, we can see the Status Code and the Trace Id (which, as you may notice, is the same as the one in the Request).

    As you can see, the log item does not contain the body of the response.

    Also, just as it happens with the Request, we do not have access to the list of HTTP Headers.

    How to save space, storage, and money by combining log entries

    For every HTTP operation, we end up with 2 log entries: one for the Request and one for the Response.

    However, it would be more practical to have both request and response info stored in the same log item so we can understand more easily what is happening.

    Lucky for us, this functionality is already in place. We just need to set the CombineLogs property to true when we add the HttpLogging functionality:

    builder.Services.AddHttpLogging(lb =>
    {
    +  lb.CombineLogs = true;
    }
    );
    

    Then, we are able to see the data for both the request and the related response in the same log element.

    Request and Response combined logs

    The downsides of using HTTP Logging

    Even though everything looks nice and pretty, adding HTTP Logging has some serious consequences.

    First of all, remember that you are doing extra work for every incoming HTTP request. Simply processing and storing the log messages can degrade the application’s performance: you are spending processing resources to interpret the HTTP context, create the correct log entry, and store it.

    Depending on how your APIs are structured, you may need to strip out sensitive data: HTTP Logging, by default, logs almost everything (except for the parts stored as [Redacted]). Since you don’t want to store the content of the requests as plain text, you may need custom logic to redact the parts of the request and response you want to hide: for that, you can implement a custom IHttpLoggingInterceptor.

    Finally, consider that logging occupies storage, and storage has a cost. The more you log, the higher the cost. You should define proper strategies to avoid excessive storage costs while keeping valuable logs.

    Further readings

    There is a lot more, as always. In this article, I focused on the most essential parts, but the road to having proper HTTP Logs is still long.

    You may want to start from the official documentation, of course!

    🔗 HTTP logging in ASP.NET Core | Microsoft Docs

    This article first appeared on Code4IT 🐧

    All the logs produced for this article were stored on Seq. You can find more info about installing and integrating Seq in ASP.NET Core in this article:

    🔗 Easy logging management with Seq and ILogger in ASP.NET | Code4IT

    Wrapping up

    HTTP Logging can be a good tool for understanding the application behaviour and detecting anomalies. However, as you can see, there are some important downsides that need to be considered.

    I hope you enjoyed this article! Let’s keep in touch on LinkedIn or Twitter! 🤜🤛

    Happy coding!

    🐧





    Source link

  • Animating in Frames: Repeating Image Transition

    Animating in Frames: Repeating Image Transition


    In our last Motion Highlights collection, I added a really amazing reel by Joana Correia, a truly skilled motion designer. Her works are so thoughtful and browsing her latest projects I stumbled upon one of her other reels wrapping up her last year. Right at the beginning, there is this:

    This small excerpt showcases a really interesting sliced repetition effect, which inspired me to try something new: animating “frames” of the same image along a path. I’m not sure if this is any good, but for some reason, I really like it. It feels like it could fit well within editorial design contexts.

    It’s just a proof of concept, but I hope it sparks some new ideas for you too! 🙂

    Configuration

    There are better ways to do this obviously, but since this is a proof of concept and we want to be able to show various effects in our demo, I decided to do it like this. So here’s how the configuration works.

    Each grid item can override the global animation settings by specifying data- attributes directly in the HTML. This allows fine-tuning of transitions on a per-item basis.

    You can customize the following options for each .grid__item:

    • clipPathDirection (data-clip-path-direction): Direction for the clip-path animation (top-bottom, bottom-top, left-right, right-left).
    • steps (data-steps): Number of mover elements created between the grid item and the panel.
    • stepDuration (data-step-duration): Duration (in seconds) of each mover animation step.
    • stepInterval (data-step-interval): Delay (in seconds) between each mover’s animation start.
    • rotationRange (data-rotation-range): Maximum random rotation (±value, in degrees) applied to movers.
    • wobbleStrength (data-wobble-strength): Maximum random positional wobble (in pixels) during motion path generation.
    • moverPauseBeforeExit (data-mover-pause-before-exit): Pause duration (in seconds) before movers exit.
    • panelRevealEase (data-panel-reveal-ease): Easing function used when revealing the panel.
    • gridItemEase (data-grid-item-ease): Easing function for animating grid item exits.
    • moverEnterEase (data-mover-enter-ease): Easing function for movers entering.
    • moverExitEase (data-mover-exit-ease): Easing function for movers exiting.
    • panelRevealDurationFactor (data-panel-reveal-duration-factor): Multiplier to adjust panel reveal animation timing.
    • clickedItemDurationFactor (data-clicked-item-duration-factor): Multiplier to adjust clicked grid item animation timing.
    • gridItemStaggerFactor (data-grid-item-stagger-factor): Multiplier for staggered grid item animations (based on distance).
    • moverBlendMode (data-mover-blend-mode): CSS mix-blend-mode to apply to movers (normal, screen, etc.).
    • pathMotion (data-path-motion): Path motion type: linear (straight) or sine (curved).
    • sineAmplitude (data-sine-amplitude): Height of the sine wave if using sine path motion (in pixels).
    • sineFrequency (data-sine-frequency): Frequency of the sine wave motion (higher = more waves).

    Example

    <figure class="grid__item"
            data-clip-path-direction="left-right"
            data-steps="8"
            data-rotation-range="20"
            data-path-motion="sine"
            data-sine-amplitude="60"
            data-sine-frequency="6.28">
    
      <div class="grid__item-image" style="background-image: url(assets/img32.webp)"></div>
      
      <figcaption class="grid__item-caption">
        <h3>Aura — K21</h3>
        <p>Model: Lily Cooper</p>
      </figcaption>
      
    </figure>

    This item will fly with 8 movers, stronger rotation wobble, a sine wave path, and panel opening from left to right.

    Try it out and play with it and I really hope you enjoy it!



    Source link

  • How to create Custom Attributes, and why they are useful | Code4IT

    How to create Custom Attributes, and why they are useful | Code4IT



    In C#, attributes are used to describe the meaning of some elements, such as classes, methods, and interfaces.

    I’m sure you’ve already used them before. Examples are:

    • the [Required] attribute when you define the properties of a model to be validated;
    • the [Test] attribute when creating Unit Tests using NUnit;
    • the [Get] and the [FromBody] attributes used to define API endpoints.

    As you can see, all the attributes do not specify the behaviour, but rather, they express the meaning of a specific element.

    In this article, we will learn how to create custom attributes in C# and some possible interesting usages of such custom attributes.

    Create a custom attribute by inheriting from System.Attribute

    Creating a custom attribute is pretty straightforward: you just need to create a class that inherits from System.Attribute.

    [AttributeUsage(AttributeTargets.Interface | AttributeTargets.Class | AttributeTargets.Method, AllowMultiple = true)]
    public class ApplicationModuleAttribute : Attribute
    {
        public Module BelongingModule { get; }
    
        public ApplicationModuleAttribute(Module belongingModule)
        {
            BelongingModule = belongingModule;
        }
    }
    
    public enum Module
    {
        Authentication,
        Catalogue,
        Cart,
        Payment
    }
    

    Ideally, the class name should end with the suffix -Attribute: in this way, you can use the attribute using the short form [ApplicationModule] rather than using the whole class name, like [ApplicationModuleAttribute]. In fact, C# attributes can be resolved by convention.

    Depending on the expected usage, a custom attribute can have one or more constructors and can expose one or more properties. In this example, I created a constructor that accepts an enum.
    I can then use this attribute by calling [ApplicationModule(Module.Cart)].

    Define where a Custom Attribute can be applied

    Have a look at the attribute applied to the class definition:

    [AttributeUsage(AttributeTargets.Interface | AttributeTargets.Class | AttributeTargets.Method, AllowMultiple = true)]
    

    This attribute tells us that the ApplicationModule can be applied to interfaces, classes, and methods.

    System.AttributeTargets is an enum that lists all the elements an attribute can be attached to. The AttributeTargets enum is defined as:

    [Flags]
    public enum AttributeTargets
    {
        Assembly = 1,
        Module = 2,
        Class = 4,
        Struct = 8,
        Enum = 16,
        Constructor = 32,
        Method = 64,
        Property = 128,
        Field = 256,
        Event = 512,
        Interface = 1024,
        Parameter = 2048,
        Delegate = 4096,
        ReturnValue = 8192,
        GenericParameter = 16384,
        All = 32767
    }
    

    Have you noticed it? It’s actually a Flagged enum, whose values are powers of 2: this trick allows us to join two or more values using the OR operator.

    There’s another property to notice: AllowMultiple. When set to true, this property tells us that it’s possible to use apply more than one attribute of the same type to the same element, like this:

    [ApplicationModule(Module.Cart)]
    [ApplicationModule(Module.Catalogue)]
    public class ItemDetailsService { }
    

    Or, if you want, you can inline them:

    [ApplicationModule(Module.Cart), ApplicationModule(Module.Catalogue)]
    public class ItemDetailsService { }
    

    Practical usage of Custom Attributes

    You can use custom attributes to declare which components or business areas an element belongs to.

    In the previous example, I defined an enum that enlists all the business modules supported by my application:

    public enum Module
    {
        Authentication,
        Catalogue,
        Cart,
        Payment
    }
    

    This way, whenever I define an interface, I can explicitly tell which components it belongs to:

    [ApplicationModule(Module.Catalogue)]
    public interface IItemDetails
    {
        [ApplicationModule(Module.Catalogue)]
        string ShowItemDetails(string itemId);
    }
    
    [ApplicationModule(Module.Cart)]
    public interface IItemDiscounts
    {
        [ApplicationModule(Module.Cart)]
        bool CanHaveDiscounts(string itemId);
    }
    

    Not only that: I can have one single class implement both interfaces and mark it as related to both the Catalogue and the Cart areas.

    [ApplicationModule(Module.Cart)]
    [ApplicationModule(Module.Catalogue)]
    public class ItemDetailsService : IItemDetails, IItemDiscounts
    {
        [ApplicationModule(Module.Catalogue)]
        public string ShowItemDetails(string itemId) => throw new NotImplementedException();
    
        [ApplicationModule(Module.Cart)]
        public bool CanHaveDiscounts(string itemId) => throw new NotImplementedException();
    }
    

    Notice that I also explicitly enriched the two inner methods with the related attribute – even if it’s not necessary.

    Further readings

    As you noticed, the AttributeTargets is a Flagged Enum. Don’t you know what they are and how to define them? I’ve got you covered! I wrote two articles about Enums, and you can find info about Flagged Enums in both articles:

    🔗 5 things you should know about enums in C# | Code4IT

    and
    🔗 5 more things you should know about enums in C# | Code4IT

    This article first appeared on Code4IT 🐧

    There are some famous but not-so-obvious examples of attributes that you should know: DebuggerDisplay and InternalsVisibleTo.

    DebuggerDisplay can be useful for improving your debugging sessions.

    🔗 Simplify debugging with DebuggerDisplay attribute dotNET | Code4IT

    InternalsVisibleTo can be used to give external projects access to internal classes: for example, you can use that attribute when writing unit tests.

    🔗 Testing internal members with InternalsVisibleTo | Code4IT

    Wrapping up

    In this article, I showed you how to create custom attributes in C# to specify which modules a class or a method belongs to. This trick can be useful if you want to speed up the analysis of your repository: if you need to retrieve all the classes that are used for the Cart module (for example, because you want to move them to an external library), you can just search for Module.Cart across the repository and have a full list of elements.

    In particular, this approach can be useful for the exposed components, such as API controllers. Knowing that two or more modules use the same Controller can help you understand if a change in the API structure is necessary.

    Another good usage of this attribute is automatic documentation: you could create a tool that automatically enlists all the interfaces, API endpoints, and classes grouped by the belonging module. The possibilities are infinite!

    I hope you enjoyed this article! Let’s keep in touch on LinkedIn, Twitter or BlueSky! 🤜🤛

    Happy coding!

    🐧






  • Zero Trust Best Practices for Enterprises and Businesses


    Cybersecurity threats are becoming more sophisticated and frequent in today’s digital landscape. Whether a large enterprise or a growing small business, organizations must pivot from traditional perimeter-based security models to a more modern, robust approach—Zero Trust Security. At its core, Zero Trust operates on a simple yet powerful principle: never trust, always verify.

    Implementing Zero Trust is not a one-size-fits-all approach. It requires careful planning, integration of the right technologies, and ongoing management. Here are some key Zero Trust best practices to help both enterprises and small businesses establish a strong Zero Trust foundation:

    1. Leverage IAM and AD Integrations

    A successful Zero-Trust strategy begins with Identity and Access Management (IAM). Integrating IAM solutions with Active Directory (AD) or other identity providers helps centralize user authentication and enforce policies more effectively. These integrations allow for a unified view of user roles, permissions, and access patterns, essential for controlling who gets access to what and when.

    IAM and AD integrations also enable seamless single sign-on (SSO) capabilities, improving user experience while ensuring access control policies are consistently applied across your environment.

    If your organization does not have an Identity Provider (IdP) or AD, choose a Zero Trust solution with a User Management feature for Local Users.

    2. Ensure Zero Trust for Both On-Prem and Remote Users

    Gone are the days when security could rely solely on protecting the corporate network perimeter. With the rise of hybrid work models, extending zero-trust principles beyond traditional office setups is critical. This means ensuring that both on-premises and remote users are subject to the same authentication, authorization, and continuous monitoring processes.

    Cloud-native Zero Trust Network Access (ZTNA) solutions help enforce consistent policies across all users, regardless of location or device. This is especially important for businesses with distributed teams or those who rely on contractors and third-party vendors.

    3. Implement MFA for All Users for Enhanced Security

    Multi-factor authentication (MFA) is one of the most effective ways to protect user identities and prevent unauthorized access. By requiring at least two forms of verification, such as a password and a one-time code sent to a mobile device, MFA dramatically reduces the risk of credential theft and phishing attacks.

    MFA should be mandatory for all users, including privileged administrators and third-party collaborators. It’s a low-hanging fruit that can yield high-security dividends for organizations of all sizes.

    4. Ensure Proper Device Posture Rules

    Zero Trust doesn’t stop at verifying users—it must also verify their devices’ health and security posture. Whether it’s a company-issued laptop or a personal mobile phone, devices should meet specific security criteria before being granted access to corporate resources.

    This includes checking for up-to-date antivirus software, secure OS configurations, and encryption settings. By enforcing device posture rules, businesses can reduce the attack surface and prevent compromised endpoints from becoming a gateway to sensitive data.
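    To make the idea concrete, here is a minimal sketch of such a posture check. The posture fields and the minimum OS version are hypothetical, illustrative criteria, not the checks any specific ZTNA product performs; real agents collect far richer device telemetry.

    ```csharp
    using System;
    using System.Collections.Generic;

    // Hypothetical device report for illustration only.
    public record DevicePosture(bool AntivirusUpToDate, bool DiskEncrypted, Version OsVersion);

    public static class PostureRules
    {
        // Illustrative baseline, not a product default.
        private static readonly Version MinOsVersion = new(10, 0);

        // Returns the list of rule violations; an empty list means the device is compliant.
        public static IReadOnlyList<string> Evaluate(DevicePosture d)
        {
            var failures = new List<string>();
            if (!d.AntivirusUpToDate) failures.Add("antivirus definitions out of date");
            if (!d.DiskEncrypted) failures.Add("disk encryption disabled");
            if (d.OsVersion < MinOsVersion) failures.Add("OS version below baseline");
            return failures;
        }

        public static bool IsCompliant(DevicePosture d) => Evaluate(d).Count == 0;
    }
    ```

    An access broker would run checks like these before granting a session, and re-evaluate them periodically rather than only at login.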

    5. Adopt Role-Based Access Control

    Access should always be granted on a need-to-know basis. Implementing Role-Based Access Control (RBAC) ensures that users only have access to the data and applications required to perform their job functions, nothing more, nothing less.

    This minimizes the risk of internal threats and of lateral movement within the network in case of a breach. For small businesses, RBAC also helps simplify user management and audit processes, especially when roles are clearly defined and policies are enforced consistently.
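    The need-to-know mapping described above can be sketched as a simple role-to-permission table. The role and permission names below are hypothetical examples, not tied to any particular product or directory schema.

    ```csharp
    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Illustrative permissions; a real system would model these per application.
    public enum Permission { ReadOrders, WriteOrders, ManageUsers }

    public class RbacPolicy
    {
        // Each role grants only the permissions needed for that job function.
        private readonly Dictionary<string, HashSet<Permission>> _rolePermissions = new()
        {
            ["Support"] = new() { Permission.ReadOrders },
            ["Sales"]   = new() { Permission.ReadOrders, Permission.WriteOrders },
            ["Admin"]   = new() { Permission.ReadOrders, Permission.WriteOrders, Permission.ManageUsers },
        };

        // A user is allowed only if at least one of their roles grants the permission.
        public bool IsAllowed(IEnumerable<string> userRoles, Permission permission)
            => userRoles.Any(r => _rolePermissions.TryGetValue(r, out var perms)
                                  && perms.Contains(permission));
    }
    ```

    Keeping the table small and role-driven, rather than granting permissions per user, is what makes audits and periodic reviews tractable.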

    6. Regularly Review and Update Policies

    Zero Trust is not a one-time setup; it’s a continuous process. As businesses evolve, so do user roles, devices, applications, and threat landscapes. That’s why it’s essential to review and update your security policies regularly.

    Conduct periodic audits to identify outdated permissions, inactive accounts, and policy misconfigurations. Use analytics and monitoring tools to assess real-time risk levels and fine-tune access controls accordingly. This iterative approach ensures that your Zero Trust architecture remains agile and responsive to emerging threats.

    Final Thoughts

    Zero Trust is more than just a buzzword; it’s a strategic shift that aligns security with modern business realities. Adopting these Zero Trust best practices can help you build a more resilient and secure IT environment, whether you are a large enterprise or a small business.

    By focusing on identity, device security, access control, and continuous policy refinement, organizations can reduce risk exposure and stay ahead of today’s ever-evolving cyber threats.

    Ready to take the next step in your Zero Trust journey? Start with what you have, plan for what you need, and adopt a security-first mindset across your organization.

    Embrace the Seqrite Zero Trust Access Solution and create a secure and resilient environment for your organization’s digital assets. Contact us today.

     


