
  • Measuring maintainability metrics with NDepend | Code4IT



    Keeping an eye on maintainability is mandatory for every project which should live long. With NDepend, you can measure maintainability for .NET projects.


    Software systems can be easy to build, but hard to maintain. The longer a system is maintained, the more its code will need to be updated.

    Structuring the code to help maintainability is crucial if your project is expected to evolve.

    In this article, we will learn how to measure the maintainability of a .NET project using NDepend, a tool that can be installed as an extension in Visual Studio.

    So, let’s begin with the how, and then we’ll move to the what.

    Introducing NDepend

    NDepend is a tool that performs static analysis on any .NET project.

    It is incredibly powerful and can calculate several metrics that you can use to improve the quality of your code, like Lines of Code, Cyclomatic Complexity, and Coupling.

    You can use NDepend in two ways: installing it on your local Visual Studio instance, or using it in your CI/CD pipelines, to generate reports during the build process.

    In this article, I’ve installed it as a Visual Studio extension. Once it is ready, you’ll have to create a new NDepend project and link it to your current solution.

    To do that, click the ⚪ icon in the bottom-right corner of Visual Studio and create a new NDepend project. This creates an .ndproj project file and attaches it to your solution.

    When creating a new NDepend project, you can choose which of your .NET projects must be taken into consideration. You’ll usually skip analyzing test projects.

    Create NDepend project by selecting .NET projects to be analyzed

    Then, to run the analysis of your solution, you need to click again on that ⚪ icon and click Run analysis and generate report.

    Now you have two ways to access the results: as an HTML report, like this one:

    NDepend HTML report

    Or as a Dashboard integrated with Visual Studio:

    NDepend dashboard on Visual Studio

    Most of the information is available in the HTML report.

    What is Maintainability

    Maintainability is a quality of a software system (a single application or a group of applications) that describes how easy the system is to maintain.

    Easy-to-maintain code has many advantages:

    • it allows quicker and less expensive maintenance operations
    • the system is easier to reverse-engineer
    • the code serves other developers as well as the customers
    • the system stays easy to update even if the original developers leave the company

    There are some metrics we can use to get an idea of how easy our software is to maintain.

    And to calculate those metrics, we need some external tools. Guess what? NDepend is one of them!

    Lines of code (LOC)

    Typically, systems with more lines of code (abbreviated as LOC) are more complex and, therefore, harder to maintain.

    Of course, it’s the order of magnitude of that number that tells us about the complexity; 90000 and 88000 are similar numbers, and you won’t see any difference.

    Two types of LOC can be calculated: physical LOC and logical LOC.

    Physical LOC refers to the total count of lines of your code. It’s the easiest one to calculate since you can just count the lines of code as they appear.

    Logical LOC counts only the effectively executable lines of code. Spacing, comments, and imports are excluded from this count.
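    The difference between the two counts can be illustrated with a rough sketch. This is a simplified line-based heuristic for demonstration only, not NDepend’s actual algorithm (which also looks at the IL):

```python
# Simplified sketch of physical vs. logical LOC (an illustrative
# heuristic, NOT how NDepend computes these values).
source = """using System;

// Returns a greeting
public string Greet(string name)
{
    return "Hello, " + name;
}
"""

lines = source.splitlines()
physical_loc = len(lines)  # physical LOC: every line counts

def is_logical(line: str) -> bool:
    stripped = line.strip()
    if not stripped:                   # blank lines excluded
        return False
    if stripped.startswith("//"):      # comments excluded
        return False
    if stripped.startswith("using "):  # imports excluded
        return False
    return True

logical_loc = sum(is_logical(line) for line in lines)
print(physical_loc, logical_loc)  # 7 4
```

    The seven physical lines shrink to four logical ones once the import, the comment, and the blank line are dropped.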

    Calculating LOC with NDepend

    If you want to see the LOC value for your code, you can open the NDepend HTML report, head to Metrics > Type Metrics (in the left menu), and see that value.

    This value is calculated based on the IL and the actual C# code, so it may not be the exact number of lines you see in your IDE. Still, it’s a good estimate for understanding which classes and methods need some attention.

    LOC value report generated by NDepend

    Why is LOC important?

    Keeping track of LOC is useful because the more lines of code, the more possible bugs.

    Also, having lots of lines of code can make refactoring harder, especially because it’s probable that there is code duplication.

    How to avoid it? Well, probably, you can’t – or, at least, you can’t move to a lower order of magnitude. But you can still organize the code into modules with a small LOC value.

    In this way, every LOC is easily maintainable, especially if focused on a specific aspect (SRP, anyone?)

    The total LOC value won’t change. What will change is how the code is distributed across separated and independent modules.

    Cyclomatic complexity (CC)

    Cyclomatic complexity is a measure of the number of linearly independent paths through a module.

    This formula works for simple programs and methods:

    CC = E-N+2
    

    where E is the number of Edges of the graph, while N is the number of Nodes.

    Wait! Graph?? 😱

    Yes, graphs!

    Code can be represented as a graph, where each node is a block of code.

    Take as an example this method:

    public string GetItemDescription(Item item)
    {
        string description;
        if (item == null)
            description = string.Empty;
        else
            description = item.Name + " - " + item.Seller;
    
        return description;
    }
    

    Here we have 4 nodes (N=4) and 4 edges (E=4).

    GetItemDescription described as a graph

    so

    CC = 4-4+2 = 2
    

    Again, you will not calculate CC manually: we can use NDepend instead.
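    Still, out of curiosity, the formula is easy to verify programmatically. Here is a minimal sketch (unrelated to NDepend) that recomputes CC for the graph above:

```python
# Recompute CC = E - N + 2 for the GetItemDescription graph.
edges = [
    ("if",   "then"),    # item == null branch
    ("if",   "else"),    # item != null branch
    ("then", "return"),
    ("else", "return"),
]
nodes = {n for edge in edges for n in edge}

cc = len(edges) - len(nodes) + 2
print(cc)  # 2, matching the manual calculation
```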

    Calculating Cyclomatic Complexity with NDepend

    As described before, the first step is to run NDepend and generate the HTML report. Then, open the left menu and click on Metrics > Type Metrics.

    Here you can see the values for Cyclomatic Complexity for every class (but you cannot drill down to every method).

    Cyclomatic complexity

    Why is CC important?

    Keeping track of Cyclomatic Complexity is good to understand the degree of complexity of a module or a method.

    The higher the CC, the harder it will be to maintain and update the module.

    We can use Cyclomatic Complexity as a lower limit for test cases. Since the CC of a method tells us about the number of independent execution paths, we can use that value to see the minimum number of tests to execute on that method. So, in the previous example, CC=2, and we need at least two tests: one for the case when item is null, and one for the case when item is not null.
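    To make that concrete, here is a hypothetical Python port of GetItemDescription (a dict stands in for the C# Item class) together with the two test cases that CC = 2 calls for:

```python
# Hypothetical Python port of GetItemDescription; a dict stands in
# for the C# Item class from the example.
def get_item_description(item):
    if item is None:
        return ""
    return item["Name"] + " - " + item["Seller"]

# CC = 2, so at least two tests: one per independent path.
assert get_item_description(None) == ""
assert get_item_description({"Name": "Lamp", "Seller": "ACME"}) == "Lamp - ACME"
```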

    Depth of Inheritance Tree (DIT)

    Depth of Inheritance Tree (DIT) is the maximum length of the path between a base class and its farthest subclass.

    Take for example this simple class hierarchy.

    public class User{}
    
    public class Teacher : User { }
    
    public class Student : User { }
    
    public class AssociatedTeacher : Teacher { }
    

    It can be represented as a tree, to better understand the relationship between classes:

    User class hierarchy as a tree

    Since the maximum depth of the tree is 3, the DIT value is 3.
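    The same value can be computed by walking the parent links, using the convention (matching the article) that the class itself counts, so a root class has depth 1. A minimal sketch:

```python
# Child -> parent links for the example hierarchy (None = root).
parents = {
    "User": None,
    "Teacher": "User",
    "Student": "User",
    "AssociatedTeacher": "Teacher",
}

def depth(cls):
    """Depth of a class, counting the class itself (root = 1)."""
    d = 0
    while cls is not None:
        d += 1
        cls = parents[cls]
    return d

dit = max(depth(c) for c in parents)
print(dit)  # 3
```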

    How to calculate DIT with NDepend

    As usual, run the code analysis with NDepend to generate the HTML report.

    Then, you can head to Metrics > Type Metrics and navigate to the Code Members and Inheritance section to see the value of DIT of each class.

    DIT calculated with NDepend

    Why is DIT important?

    Inheritance is a good way to reduce code duplication, that’s true: everything that is defined in the base class can be used by the derived classes.

    But still, you should keep an eye on the DIT value: if the depth level is greater than a certain amount (5, as many devs suggest), you risk incurring bugs and unwanted behaviors caused by some parent classes.

    Also, having such a deep hierarchy may cause your system to be hard to maintain and evolve. So, if possible, prefer composition over inheritance.

    Two words about NDepend

    For sure, NDepend is an amazing tool for static analysis. All those metrics can be really useful – if you know how to use them. Luckily, not only does it give you the values of those metrics, but it also explains them.

    In this article, I showed the most boring stuff you can see with NDepend. But you can do lots of incredible things.

    My favorite ones are:

    Instability vs Abstractness diagram, which shows if your modules are easy to maintain. The relation between Instability and Abstractness is well explained in Uncle Bob’s Clean Architecture book.

    Instability vs Abstractness diagram

    Assemblies Dependencies, which lists all the assemblies referenced by your project. Particularly useful to keep track of the OSS libraries you’re using, in case you need to update them for whichever reason (Log4J, anyone?)

    Assemblies Dependencies

    Then, the Component Dependencies Diagram, which is probably my fav feature: it allows you to navigate the modules and classes, and to understand which module depends on which other module.

    Component Dependencies Diagram

    and many more.

    BUT!

    There are also things I don’t like.

    I found it difficult to get started with it: installing and running it the first time was quite difficult. Even updating it is not that smooth.

    Then, the navigation menu is not that easy to understand. Take this screenshot:

    NDepend Menu

    Where can I find the Component Dependencies Diagram? Nowhere – it is accessible only from the homepage.

    So, the tool is incredibly useful, but it’s difficult to use (at first, obviously).

    If the NDepend team starts focusing on usability and the UI, I’m sure it can quickly become a must-have tool for every team working on .NET – especially if they also create a free (or cheaper) tier with reduced capabilities, because right now it’s quite expensive. Well, actually, it is quite cheap for companies, but for solo devs it is not affordable.

    Additional resources

    If you want to read more about how NDepend calculates those metrics, the best thing to do is to head to their documentation.

    🔗 Code quality metrics | NDepend

    And, obviously, have a look at that project:

    🔗 NDepend Homepage

    As I said before, you should avoid creating too many subclasses. Rather, you should compose objects to extend their behavior. A good way to do that is through the Decorator pattern, as I explained here.

    🔗 Decorator pattern with Scrutor | Code4IT

    To test NDepend I used an existing, and well-known, project, that you can see on GitHub: Clean Architecture, created by Steve Smith (aka Ardalis).

    🔗 Clean Architecture repository | GitHub

    Wrapping up

    In this article, we’ve seen how to measure metrics like Lines Of Code, Cyclomatic Complexity, and Depth of Inheritance Tree to keep an eye on the maintainability of a .NET solution.

    To do that, we’ve used NDepend – I know, it’s WAY too powerful to be used only for those metrics. It’s like using a bazooka to kill a bee 🐝. But still, it was nice to try it out with a realistic project.

    So, NDepend is incredibly useful for managing complex projects – it’s quite expensive, but in the long run, it may help you save money.

    Have you already used it?

    Do you keep track of maintainability metrics?

    Happy coding!

    🐧




  • Recreating Palmer’s Draggable Product Grid with GSAP




    One of the best ways to learn is by recreating an interaction you’ve seen out in the wild and building it from scratch. It pushes you to notice the small details, understand the logic behind the animation, and strengthen your problem-solving skills along the way.

    So today we’ll dive into rebuilding the smooth, draggable product grid from the Palmer website, originally crafted by Uncommon with Kevin Masselink, Alexis Sejourné, and Dylan Brouwer. The goal is to understand how this kind of interaction works under the hood and code the basics from scratch.

    Along the way, you’ll learn how to structure a flexible grid, implement draggable navigation, and add smooth scroll-based movement. We’ll also explore how to animate products as they enter or leave the viewport, and finish with a polished product detail transition using Flip and SplitText for dynamic text reveals.

    Let’s get started!

    Grid Setup

    The Markup

    Let’s not try to be original and, as always, start with the basics. Before we get into the animations, we need a clear structure to work with — something simple, predictable, and easy to build upon.

    <div class="container">
      <div class="grid">
        <div class="column">
          <div class="product">
            <div><img src="./public/img-3.png" /></div>
          </div>
          <div class="product">
            <div><img src="./public/img-7.png" /></div>
          </div>
          <!-- repeat -->
        </div>
        <!-- repeat -->
      </div>
    </div>

    What we have here is a .container that fills the viewport, inside of which sits a .grid divided into vertical columns. Each column stacks multiple .product elements, and every product wraps around an image. It’s a minimal setup, but it lays the foundation for the draggable, animated experience we’re about to create.

    The Style

    Now that we’ve got the structure, let’s add some styling to make the grid usable. We’ll keep things straightforward and use Flexbox instead of CSS Grid, since Flexbox makes it easier to handle vertical offsets for alternating columns. This approach keeps the layout flexible and ready for animation.

    .container {
      position: fixed;
      width: 100vw;
      height: 100vh;
      top: 0;
      left: 0;
    }
    
    .grid {
      position: absolute;
      display: flex;
      gap: 5vw;
      cursor: grab;
    }
    
    .column {
      display: flex;
      flex-direction: column;
      gap: 5vw;
    }
    
    .column:nth-child(even) {
      margin-top: 10vw;  
    }
    
    .product {
      position: relative;
      width: 18.5vw;
      aspect-ratio: 1 / 1;
    
      div {
        width: 18.5vw;
        aspect-ratio: 1 / 1;
      }
    
      img {
        position: absolute;
        width: 100%;
        height: 100%;
        object-fit: contain;
      }
    }

    Animation

    Okay, setup’s out of the way — now let’s jump into the fun part.

    When developing interactive experiences, it helps to break things down into smaller parts. That way, each piece can be handled step by step without feeling overwhelming.

    Here’s the structure I followed for this project:

    1 – Introduction / Preloader
    2 – Grid Navigation
    3 – Product’s detail view transition

    Introduction / Preloader

    First, the grid isn’t centered by default, so we’ll fix that with a small utility function. This makes sure the grid always sits neatly in the middle of the screen, no matter the viewport size.

    centerGrid() {
      const gridWidth = this.grid.offsetWidth
      const gridHeight = this.grid.offsetHeight
      const windowWidth = window.innerWidth
      const windowHeight = window.innerHeight
    
      const centerX = (windowWidth - gridWidth) / 2
      const centerY = (windowHeight - gridHeight) / 2
    
      gsap.set(this.grid, {
        x: centerX,
        y: centerY
      })
    }

    In the original Palmer reference, the experience starts with products appearing one by one in a slightly random order. After that reveal, the whole grid smoothly zooms into place.

    To keep things simple, we’ll start with both the container and the products scaled down to 0.5 and the products fully transparent. Then we animate them back to full size and opacity, adding a random stagger so the images pop in at slightly different times.

    The result is a dynamic but lightweight introduction that sets the tone for the rest of the interaction.

    intro() {
      this.centerGrid()
    
      const timeline = gsap.timeline()
    
      timeline.set(this.dom, { scale: .5 })
      timeline.set(this.products, {
        scale: 0.5,
        opacity: 0,
      })
    
      timeline.to(this.products, {
        scale: 1,
        opacity: 1,
        duration: 0.6,
        ease: "power3.out",
        stagger: { amount: 1.2, from: "random" }
      })
      timeline.to(this.dom, {
        scale: 1,
        duration: 1.2,
        ease: "power3.inOut"
      })
    }

    Grid Navigation

    The grid looks good. Next, we need a way to navigate it: GSAP’s Draggable plugin is just what we need.

    setupDraggable() {
      this.draggable = Draggable.create(this.grid, {
        type: "x,y",
        bounds: {
          minX: -(this.grid.offsetWidth - window.innerWidth) - 200,
          maxX: 200,
          minY: -(this.grid.offsetHeight - window.innerHeight) - 100,
          maxY: 100
        },
        inertia: true,
        allowEventDefault: true,
        edgeResistance: 0.9,
      })[0]
    }

    It would be great if we could add scrolling too.

    window.addEventListener("wheel", (e) => {
      e.preventDefault()
    
      const deltaX = -e.deltaX * 7
      const deltaY = -e.deltaY * 7
    
      const currentX = gsap.getProperty(this.grid, "x")
      const currentY = gsap.getProperty(this.grid, "y")
    
      const newX = currentX + deltaX
      const newY = currentY + deltaY
    
      const bounds = this.draggable.vars.bounds
      const clampedX = Math.max(bounds.minX, Math.min(bounds.maxX, newX))
      const clampedY = Math.max(bounds.minY, Math.min(bounds.maxY, newY))
    
      gsap.to(this.grid, {
        x: clampedX,
        y: clampedY,
        duration: 0.3,
        ease: "power3.out"
      })
    }, { passive: false })

    We can also make the products appear as we move around the grid.

    const observer = new IntersectionObserver((entries) => {
      entries.forEach((entry) => {
        if (entry.target === this.currentProduct) return
        if (entry.isIntersecting) {
          gsap.to(entry.target, {
            scale: 1,
            opacity: 1,
            duration: 0.5,
            ease: "power2.out"
          })
        } else {
          gsap.to(entry.target, {
            opacity: 0,
            scale: 0.5,
            duration: 0.5,
            ease: "power2.in"
          })
        }
      })
    }, { root: null, threshold: 0.1 })

    Product’s detail view transition

    When you click on a product, an overlay opens and displays the product’s details.
    During this transition, the product’s image animates smoothly from its position in the grid to its position inside the overlay.

    We build a simple overlay with minimal structure and styling and add an empty <div> that will contain the product image.

    <div class="details">
      <div class="details__title">
        <p>The title</p>
      </div>
      <div class="details__body">
        <div class="details__thumb"></div>
        <div class="details__texts">
          <p>Lorem ipsum dolor, sit amet consectetur adipisicing elit...</p>
        </div>
      </div>
    </div>
    .details {
      position: absolute;
      top: 0;
      left: 0;
      width: 50vw;
      height: 100vh;
      padding: 4vw 2vw;
      background-color: #FFF;
    
      transform: translate3d(50vw, 0, 0);
    }
    
    .details__thumb {
      position: relative;
      width: 25vw;
      aspect-ratio: 1 / 1;
      z-index: 3;
      will-change: transform;
    }
    
    /* etc */

    To achieve this effect, we use GSAP’s Flip plugin. This plugin makes it easy to animate elements between two states by calculating the differences in position, size, scale, and other properties, then animating them seamlessly.

    We capture the state of the product image, move it into the details thumbnail container, and then animate the transition from the captured state to its new position and size.

    showDetails(product) {
      gsap.to(this.dom, {
        x: "50vw",
        duration: 1.2,
        ease: "power3.inOut",
      })
    
      gsap.to(this.details, {
        x: 0,
        duration: 1.2,
        ease: "power3.inOut",
      })
    
      this.flipProduct(product)
    }
    
    flipProduct(product) {
      this.currentProduct = product
      this.originalParent = product.parentNode
    
      if (this.observer) {
        this.observer.unobserve(product)
      }
    
      const state = Flip.getState(product)
      this.detailsThumb.appendChild(product)
    
      Flip.from(state, {
        absolute: true,
        duration: 1.2,
        ease: "power3.inOut",
      });
    }

    We can add different text-reveal animations when a product’s details are shown, using the SplitText plugin.

    const splitTitles = new SplitText(this.titles, {
      type: "lines, chars",
      mask: "lines",
      charsClass: "char"
    })
    
    const splitTexts = new SplitText(this.texts, {
      type: "lines",
      mask: "lines",
      linesClass: "line"
    })
    
    gsap.to(splitTitles.chars, {
      y: 0,
      duration: 1.1,
      delay: 0.4,
      ease: "power3.inOut",
      stagger: 0.025
    });
    
    gsap.to(splitTexts.lines, {
      y: 0,
      duration: 1.1,
      delay: 0.4,
      ease: "power3.inOut",
      stagger: 0.05
    });

    Final thoughts

    I hope you enjoyed following along and picked up some useful techniques. Of course, there’s always room for further refinement—like experimenting with different easing functions or timing—but the core ideas are all here.

    With this approach, you now have a handy toolkit for building smooth, draggable product grids or even simple image galleries. It’s something you can adapt and reuse in your own projects, and a good reminder of how much can be achieved with GSAP and its plugins when used thoughtfully.

    A huge thanks to Codrops and to Manoela for giving me the opportunity to share this first article here 🙏 I’m really looking forward to hearing your feedback and thoughts!

    See you around 👋




  • Operation HanKook Phantom: APT37 Spear-Phishing Campaign



    Table of Contents:

    • Introduction
    • Threat Profile
    • Infection Chain
    • Campaign-1
      • Analysis of Decoy:
      • Technical Analysis
      • Fingerprint of ROKRAT’s Malware
    • Campaign-2
      • Analysis of Decoy
      • Technical analysis
      • Detailed analysis of Decoded tony31.dat
    • Conclusion
    • Seqrite Protections
    • MITRE ATT&CK
    • IoCs

    Introduction:

    Seqrite Lab has uncovered a campaign in which threat actors are leveraging the “국가정보연구회 소식지 (52호)” (National Intelligence Research Society Newsletter – Issue 52) as a decoy document to lure victims. The attackers are distributing this legitimate-looking PDF along with a malicious LNK (Windows shortcut) file named 국가정보연구회 소식지(52호).pdf.lnk, typically included in the same archive or disguised as a related file. Once the LNK file is executed, it triggers a payload download or command execution, enabling the attacker to compromise the system.

    The primary targets appear to be individuals associated with the National Intelligence Research Association, including academic figures, former government officials, and researchers mentioned in the newsletter. The attackers likely aim to steal sensitive information, establish persistence, or conduct espionage.

    Threat Profile:

    Our investigation has identified the involvement of APT-37, also referred to as InkySquid, ScarCruft, Reaper, Group123, TEMP.Reaper, or Ricochet Chollima. This threat actor is a North Korean state-backed cyber espionage group operational since at least 2012. While their primary focus has been on targets within South Korea, their activities have also reached nations such as Japan, Vietnam, and various countries across Asia and the Middle East. APT-37 is particularly known for executing sophisticated spear-phishing attacks.

    Targeted countries and regions:

    • South Korea
    • Japan
    • Vietnam
    • Russia
    • Nepal
    • China
    • India
    • Romania
    • Kuwait
    • Middle East

    APT-37 has been observed targeting South Korea through spear-phishing campaigns using various decoy documents. These include files such as “러시아 전장에 투입된 인민군 장병들에게.hwp” (To North Korean Soldiers Deployed to the Russian Battlefield.hwp), “국가정보와 방첩 원고.lnk” (National Intelligence and Counterintelligence Manuscript.lnk), and the most recent sample, which is analyzed in detail in this report.

    Infection Chain:

    Campaign –1:

    Analysis of Decoy:

    The document “국가정보연구회 소식지 (52호)” (“National Intelligence Research Society Newsletter—Issue 52”) is a monthly or periodic internal newsletter issued by a South Korean research group focused on national intelligence, labour relations, security, and energy issues.

    The document informs members of upcoming seminars, events, research topics, and organizational updates, including financial contributions and reminders. It reflects ongoing academic and policy-oriented discussions about national security, labour, and North-South Korea relations, considering current events and technological developments like AI.

    Threat actors leveraged the decoy document as a delivery mechanism to facilitate targeted attacks, disseminating it to specific authorities as part of a broader spear-phishing campaign. This tactic exploited trust and gained unauthorized access to sensitive systems or information.

    Targeted Government Sectors:

    • National Intelligence Research Association (국가정보연구회)
    • Kwangwoon University
    • Korea University
    • Institute for National Security Strategy
    • Central Labor Economic Research Institute
    • Energy Security and Environment Association
    • Republic of Korea National Salvation Spirit Promotion Association
    • Yangjihoe (Host of Memorial Conference)
    • Korea Integration Strategy.

    Technical Analysis:

    After downloading the LNK file named 국가정보연구회 소식지(52).pdf.lnk and executing it in our test environment, we observed the following chain of execution using Procmon.

    The LNK file contains embedded PowerShell scripts that extract and execute additional payloads at runtime.

    This script searches for .lnk files, opens them in binary mode, reads embedded payload data from them, extracts multiple file contents (including a disguised .pdf and additional payloads), writes them to disk (like aio0.dat, aio1.dat, and aio1+.3.b+la+t).

    This block reads specific binary chunks from offsets in the .lnk file:

    • Offset 0x0000102C: likely fake PDF (decoy)
    • Offset 0x0007EDC1: payload #1 (dat)
    • Offset 0x0015A851: string (commands/script)
    • Offset 0x0015AED2: another payload (aio1+3.b+la+t)

    It stores them as:

    • $pdfPath – saved as .pdf decoy
    • $exePath = dat – possibly loader binary
    • $executePath = aio1+3.b+la+t – final malicious payload

    This executes a batch script (aio03.bat) dropped in the %TEMP% folder.

    As per our analysis, the attack starts with a malicious .lnk file containing hidden payloads at specific binary offsets. When executed, PowerShell scans for such .lnk files, extracts a decoy PDF and three embedded payloads (aio1.dat, aio2.dat, and aio1+3.b+la+t), and saves them in %TEMP%. A batch script (aio03.bat) is then executed to trigger the next stage, where PowerShell reads and decodes a UTF-8 encoded script from aio02.dat and runs it in memory using Invoke-Command. This leads to the execution of aio1.dat, the final payload, completing the multi-stage infection chain.
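    The offset-based carving the script performs can be sketched generically. The toy blob and offsets below are illustrative only; in the real sample, each section’s length is taken from the gap to the next offset in the .lnk file:

```python
def carve(data: bytes, offsets: list[int]) -> list[bytes]:
    """Split a blob into sections, each starting at a given offset
    and running to the next offset (or to the end of the blob)."""
    bounds = offsets + [len(data)]
    return [data[s:e] for s, e in zip(bounds, bounds[1:])]

# Toy container: a fake header, a decoy "PDF", and a payload.
blob = b"HEADERpdf-bytesPAYLOAD1"
print(carve(blob, [0, 6, 15]))  # [b'HEADER', b'pdf-bytes', b'PAYLOAD1']
```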

    The PowerShell script aio02.dat represents the final in-memory execution stage of the malware chain and is a clear example of fileless execution via PowerShell with reflective DLL injection.

    It tries to open the file aio01.dat (previously dropped to %TEMP%) and reads its binary content into $exeFile byte array.

    $k = '5'
    for ($i = 0; $i -lt $len; $i++) {
        $newExeFile[$i] = $exeFile[$i] -bxor $k[0]
    }

    The payload is XOR-encrypted with a single-byte key (0x35, which is ASCII ‘5’). This loop decodes the encrypted binary into $newExeFile.

    The aio02.dat file contains a PowerShell script that performs in-memory execution of a final payload (aio01.dat). It reads the XOR-encrypted binary (aio01.dat) from the %TEMP% directory, decrypts it using a single-byte XOR key (0x35), and uses Windows API functions (GlobalAlloc, VirtualProtect, CreateThread, WaitForSingleObject) to allocate memory, make it executable, inject the decoded binary, and execute it—all without dropping another file to disk.
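    The decryption step is straightforward to reproduce. A minimal sketch of the same single-byte XOR with key 0x35 (ASCII '5'):

```python
KEY = 0x35  # single-byte XOR key, ASCII '5'

def xor_decode(data: bytes, key: int = KEY) -> bytes:
    """Apply a repeating single-byte XOR. Encoding and decoding are
    the same operation, since (b ^ k) ^ k == b."""
    return bytes(b ^ key for b in data)

# Round trip: XOR-ing twice with the same key restores the input.
plaintext = b"MZ\x90\x00"  # the usual start of a PE file
assert xor_decode(xor_decode(plaintext)) == plaintext
```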

    Detailed Analysis of the Extracted EXE file:

    Fingerprint of ROKRAT’s Malware

    The function is building a host fingerprint string set, containing:

    • Architecture flag (WOW64 or not)
    • Computer name
    • Username
    • Path to malware binary
    • BIOS / Manufacturer info

    Anti VM

    This function often checks whether the system runs in a virtual machine, sandbox, or analysis environment. In our case, it is being used with:

    “C:\\Program Files\\VMware\\VMware Tools\\vmtoolsd.exe”

    The function sub_40EA2C is likely used as an environment or privilege check. It tries to create and delete a randomly named .dat file in the Windows system directory, which typically requires administrative privileges. If this operation succeeds, it suggests the program is running in a real user environment with sufficient permissions. However, if it fails, it may indicate a restricted environment such as a sandbox or virtual machine used for malware analysis.

    Screenshot Capture

    The function sub_40E40B appears to capture a screenshot, process the image in memory, and possibly encode or transmit the image data.

    ROKRAT Commands

    Each command is identified by a single character. Some of the commands take arguments, and they are supplied just after the command ID character. After the correct command is determined, the code parses the statements according to the command type. The following table lists the commands we discovered in ROKRAT, together with their expected arguments and actions:

    Command 1 to 4

    The shellcode is retrieved from the C2 server and executed via CreateThread. Execution status—either “Success” or “Failed”—is logged to a file named r.txt. In parallel, detailed system information from the victim’s machine is gathered and transmitted back to the command-and-control (C&C) server.

     

    Command 5 to 9

    The malware first initializes cloud provider information, which is likely part of setting up communication with the command-and-control (C2) server. It then proceeds to download a PE (Portable Executable) file from the C2 server. The downloaded file is saved with the name KB400928_doc.exe, consistent with the naming convention used in earlier steps. Once the file is saved locally, the malware immediately executes it.

     

    Command C – Exfiltrate Files

    Searches for files in the specified file or directory path based on the provided extensions—either all files, common document types (e.g., doc, xls, ppt, txt, m4a, amr, pdf, hwp), or user-defined extensions. The located files are then uploaded to the C&C server.

    Command E – Run a Command

    Executes the specified command using cmd.exe, allowing remote execution of arbitrary system commands.

     

    Command H – Enumerate Files on Drives

    Gathers file and directory information from available drives by executing the command dir /A /S : >> “%temp%\\_.TMP”, which recursively lists all files and folders and stores the output in a temporary file.

    Command ‘i’ – Mark Data as Ready for Exfiltration

    Collected data is ready to be sent to the command and control (C2) server.

    Command ‘j’ or ‘b’ – Terminate Malware Execution

    Initiates a shutdown procedure, causing the malware to stop all operations and terminate its process.

    C2C connection

ROKRAT leverages cloud services such as pCloud, Yandex.Disk, and Dropbox as command-and-control (C2) channels. It can exfiltrate stolen data, retrieve additional payloads, and execute remote commands with minimal detection.

     

    Provider Function Obfuscated URL
    Dropbox list_folder hxxps://api.dropboxapi[.]com/2/files/list_folder
    upload hxxps://content.dropboxapi[.]com/2/files/upload
    download hxxps://content.dropboxapi[.]com/2/files/download
    delete hxxps://api.dropboxapi[.]com/2/files/delete
    pCloud listfolder hxxps://api.pcloud[.]com/listfolder?path=%s
    uploadfile hxxps://api.pcloud[.]com/uploadfile?path=%s&filename=%s&nopartial=1
    getfilelink hxxps://api.pcloud[.]com/getfilelink?path=%s&forcedownload=1&skipfilename=1
    deletefile hxxps://api.pcloud[.]com/deletefile?path=%s
    Yandex.Disk list folder (limit) hxxps://cloud-api.yandex[.]net/v1/disk/resources?path=%s&limit=500
    upload hxxps://cloud-api.yandex[.]net/v1/disk/resources/upload?path=%s&overwrite=%s
    download hxxps://cloud-api.yandex[.]net/v1/disk/resources/download?path=%s
    permanently delete hxxps://cloud-api.yandex[.]net/v1/disk/resources?path=%s&permanently=%s

     

    Campaign 2:

    Analysis of Decoy:

    Threat Actors are utilizing this document, which is a statement issued by Kim Yō-jong, the Vice Department Director of the Central Committee of the Workers’ Party of Korea (North Korea), dated July 28, and reported by the Korean Central News Agency (KCNA).

    This statement marks a sharp and formal rejection by North Korea of any reconciliation efforts from South Korea, particularly under the government of President Lee Jae-myung. It strongly criticizes the South’s attempts to improve inter-Korean relations, labelling them as meaningless and hypocritical.

    North Korea also expressed no interest in any future dialogue or proposals from South Korea, stating that the country will no longer engage in talks or cooperation.

    The statement concluded by reaffirming North Korea’s hostile stance toward South Korea, emphasizing that the era of national unity is over, and future relations will be based on confrontation, not reconciliation.

    Targeted Government organization:

    • South Korean Government (李在明政府 – Lee Jae-myung administration)
    • Ministry of Unification (統一部)
    • Workers’ Party of Korea (朝鮮労働党中央委員会)
    • Korean Central News Agency (KCNA / 朝鮮中央通信)
    • U.S.–South Korea Military Alliance (韓米同盟)
    • Asia-Pacific Economic Cooperation (APEC)

    Technical Analysis:

    Upon analysing the second LNK file we found while hunting on Virus Total, we observed the same execution chain as previously seen when running the file.

    The LNK file drops a decoy document named file.doc and creates the following artifacts in the %TEMP% directory. After dropping these files, the LNK file deletes itself from the parent directory to evade detection and hinder forensic analysis.

    As observed in our previous campaign, the same set of files is also being used here. However, this time the files have been renamed—likely to random or arbitrary names—to evade detection or hinder analysis.

    Looking into the BAT file, which is named tony33.bat, it appears to be highly obfuscated and contains PowerShell execution code. After decoding, the content can be seen in the snapshot below.

    The file tony32.dat contains a Base64-encoded PowerShell payload that serves as the core malicious component of the attack. The accompanying .bat/PowerShell loader is designed to read this file from the system’s temporary directory, decode its contents twice—first converting the raw bytes to a UTF-8 string, then Base64-decoding that string back into executable PowerShell code—and finally execute the decoded payload directly in memory. This fileless execution technique allows the attackers to run malicious code without writing the final script to disk, making it harder for traditional security solutions to detect or block the activity.
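The double-decode step can be reproduced with a short Python sketch (an analyst-side re-implementation, assuming plain UTF-8 text and standard Base64, not the attacker's code):

```python
import base64

def decode_stage(path: str) -> str:
    """Mimics the loader's two-step decode of tony32.dat:
    raw bytes -> UTF-8 string -> Base64 decode -> PowerShell source."""
    with open(path, "rb") as f:
        raw = f.read()
    b64_text = raw.decode("utf-8")        # first pass: bytes to UTF-8 text
    payload = base64.b64decode(b64_text)  # second pass: Base64 to script bytes
    return payload.decode("utf-8")        # the PowerShell code run in memory
```

Because the decoded script never touches disk in the real chain, recovering it for analysis means applying exactly these two decode passes to the dropped .dat file.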

    Upon analysing and decoding the tony32.dat file, we observed that it contains a Base64-encoded string, shown below.

    After decoding the string, we found that the file is a memory-injection loader — it reads an XOR-encrypted binary from tony31.dat, decrypts it, and executes it directly in memory using Windows API calls.

    $exePath = $env:temp + '\tony31.dat';

    $exeFile = Get-Content -path $exePath -encoding byte;

    Loads tony31.dat as raw bytes from the system’s Temp folder.

    $xK = '7';

    for($i=0; $i -lt $len; $i++) {

        $newExeFile[$i] = $exeFile[$i] -bxor $xk[0];

    Each byte is XOR-decoded using the key 0x37 (ASCII ‘7’).

    $buffer = $b::GlobalAlloc(0x0040, $byteCount + 0x100);

    $a90234sb::VirtualProtect($buffer, $byteCount + 0x100, 0x40, [ref]$old);

    Allocates a memory buffer with executable permissions.

    • tony31.dat = encrypted malicious executable (XOR with ‘7’)
    • The script decrypts it entirely in memory (no file drop to disk)
    • It uses direct Windows API calls to allocate and execute memory (fileless execution)
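The XOR layer is trivial to undo outside the loader as well; a minimal Python sketch, assuming the single-byte key 0x37 (ASCII '7') recovered above:

```python
XOR_KEY = ord("7")  # 0x37, the key used by the tony32.dat loader

def xor_decrypt(data: bytes, key: int = XOR_KEY) -> bytes:
    """Byte-wise XOR with a single-byte key; XOR is its own inverse,
    so the same call both encrypts and decrypts."""
    return bytes(b ^ key for b in data)
```

A correctly decoded tony31.dat should begin with the MZ header (0x4D 0x5A) of a PE file, which is a quick sanity check after decryption.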

    Detailed analysis of Decoded tony31.dat:

    Upon analysis of the extracted EXE, we found that this malware acts as a dropper/launcher: it downloads or drops a file named abs.tmp in the temp directory and loads its contents.
    It then executes the payload through PowerShell and deletes the staging file to cover its tracks.

    Data Exfiltration

    Malware doesn’t always force its way into systems — sometimes it operates quietly, collecting sensitive data and disappearing without a trace. In this case, two functions, sub_401360 and sub_4021F0, work in tandem to execute a stealthy data exfiltration routine.

    The first function scans a specific Temp directory on the victim’s machine (C:\Users\<username>\AppData\Local\Temp\{502C2E2E-…}), identifying all non-directory files. Each discovered file path is then passed to the second function, which opens the file, reads its contents into memory, and packages it into a browser-style multipart/form-data HTTP POST request.

    Disguised as a PDF upload, the request includes the victim’s computer name and a timestamp, and is sent to a hardcoded C2 server at:

    hxxp://daily.alltop.asia/blog/article/up2.php

    Once the file is successfully exfiltrated, it is deleted from the local system, effectively erasing evidence and complicating recovery efforts. This “scan → steal → delete” workflow is designed to be covert — the network traffic mimics a legitimate Chrome file upload, complete with a WebKitFormBoundary string and a fake MIME type (application/pdf) to evade basic content filters.

    The stolen files can include cached documents, authentication tokens, downloaded content, or staging files from other malware. To detect such activity, defenders should monitor outbound HTTP POST requests to unfamiliar domains, flag inconsistencies between file extensions and MIME types, and watch for bulk deletions in Temp directories.
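One of those detection ideas can be sketched directly: comparing a request's declared MIME type against the payload's magic bytes (an illustrative heuristic, not a production rule; the magic-byte table here is a minimal assumption):

```python
# Minimal magic-byte table for the formats relevant to this campaign
MAGIC = {
    b"%PDF": "application/pdf",
    b"MZ": "application/x-msdownload",   # PE executable
    b"PK\x03\x04": "application/zip",
}

def mime_mismatch(declared_mime: str, payload: bytes) -> bool:
    """True when the payload's magic bytes contradict the declared MIME
    type - e.g. a PE executable uploaded as 'application/pdf'."""
    for magic, real_mime in MAGIC.items():
        if payload.startswith(magic):
            return real_mime != declared_mime
    return False  # unknown format: no conclusion
```

Run against captured POST bodies, a rule like this would flag the campaign's fake "PDF" uploads whenever the exfiltrated file is actually an executable or archive.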

    The malware then connects to the C2C and tries to download a payload.

    The captured packet confirms what the functions sub_4020D0 and sub_401F80 implement. The malware builds an HTTP GET request to its C2 server at daily.alltop.asia, targeting /blog/article/d2.php?downfname=<filename>&crc32=<value>, where the filename is victim-specific (e.g., abs.tmp) and the CRC value is set to zero. It sends the request with realistic browser-like headers, including a spoofed Chrome User-Agent, Accept, Language, and Keep-Alive, to blend in with normal traffic. The request is sent via WinINet; the response (typically a short command or acknowledgment) is optionally stored in a buffer, the code sleeps briefly, and then a second request is issued to /blog/article/del2.php?delfname=<filename> without reading the reply, effectively telling the server to delete the staged file and reduce evidence. Together, these functions implement a lightweight download-and-cleanup beacon pattern that uses a legitimate-looking HTTP session to disguise malicious C2 communication.

    C2C: hxxp://daily.alltop.asia/blog/article/d2.php?downfname=abs.tmp&crc32=0
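Because the two observed endpoints follow a fixed pattern, proxy or IDS logs can be swept with a simple match; a hedged sketch built only from the URLs above:

```python
import re

# Matches the observed download and cleanup beacons:
#   /blog/article/d2.php?downfname=<file>&crc32=<value>
#   /blog/article/del2.php?delfname=<file>
BEACON_RE = re.compile(
    r"/blog/article/(?:d2\.php\?downfname=[^&\s]+&crc32=\d+"
    r"|del2\.php\?delfname=[^&\s]+)"
)

def is_beacon_url(url: str) -> bool:
    """True if a URL matches the campaign's download/cleanup beacon shape."""
    return bool(BEACON_RE.search(url))
```

A path-based match like this survives domain rotation; pairing it with the known C2 host tightens it into a high-confidence indicator.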

     

    After downloading the payload, it tries to save it under a benign filename like `abs.tmp`.

    Once the file is created, the program opens it using `CreateFileW`, checks its size, and allocates a buffer—rejecting files larger than 128 MB. It then reads the file’s contents into memory.

    If the file contains data, it calls `sub_402620`, which likely performs validation or de-obfuscation—such as checking for magic bytes, verifying a checksum, or decrypting the payload.

    Upon successful validation, the program constructs a PowerShell command line. It initializes a `STARTUPINFOA` structure and a zeroed `PROCESS_INFORMATION` structure.

    The command line begins with `”powershell “` and appends an encoded or packed payload extracted from the file using `sub_401280(&CommandLine[11], nSize[1], v15, nSize[1])`. This function likely embeds the payload using techniques like Base64 encoding or inline scripting with `-EncodedCommand`.

    Finally, the program executes the PowerShell command via `CreateProcessA`, waits for 2 seconds (`Sleep(0x7D0)`), and deletes `abs.tmp` using `DeleteFileW` to clean up traces.

    Conclusion:

    The analysis of this campaign highlights how APT37 (ScarCruft/InkySquid) continues to employ highly tailored spear-phishing attacks, leveraging malicious LNK loaders, fileless PowerShell execution, and covert exfiltration mechanisms. The attackers specifically target South Korean government sectors, research institutions, and academics with the objective of intelligence gathering and long-term espionage.

    We have named this campaign Operation HanKook Phantom for two reasons: the term “HanKook” (한국) is the Korean word for Korea, while “Phantom” represents the stealthy and evasive techniques used throughout the infection chain, including in-memory execution, disguised decoys, and hidden data exfiltration routines. This name reflects both the strategic targeting and the clandestine nature of the operation.

    Overall, Operation HanKook Phantom demonstrates the persistent threat posed by North Korean state-sponsored actors, reinforcing the need for proactive monitoring, advanced detection of LNK-based delivery, and vigilance against misuse of cloud services for command-and-control.

    Seqrite Protection:

    • Trojan.49901.GC
    • Trojan.49897.GC

    MITRE ATT&CK:

    Initial Access T1566.001 Spear phishing Attachment
    Execution T1059.001 Command and Scripting Interpreter: PowerShell
    T1204.001 User Execution: Malicious Link
    T1204.002 User Execution: Malicious File
    Persistence T1574.001 Hijack Execution Flow: DLL
    T1547.001 Boot or Logon Autostart Execution: Registry Run Keys / Startup Folder
    Privilege Escalation T1055.001 Process Injection: Dynamic-link Library Injection
    T1055.009 Process Injection: Proc Memory
    T1053.005 Scheduled Task/Job : Scheduled Task
    Defense Evasion T1140 Deobfuscate/Decode Files or Information
    T1070.004 Indicator Removal : File Deletion
    T1027.009 Obfuscated Files or Information: Embedded Payloads
    T1027.013 Obfuscated Files or Information: Encrypted/Encoded File
    Credential Access T1056.002 Input Capture: GUI Input Capture
    Discovery T1087.001 Account Discovery : Local Account
    T1217 Browser Information Discovery
    T1083 File and Directory Discovery
    T1082 System Information Discovery
    Collection T1123 Audio Capture
    T1005 Data from Local System
    T1113 Screen Capture
    Command and Control T1102.002 Web Service: Bidirectional Communication
    Exfiltration T1041 Exfiltration Over C2 Channel
    Impact T1529 System Shutdown/Reboot

     

    IOCs:

    MD5 File Name
    1aec7b1227060a987d5cb6f17782e76e aio02.dat
    591b2aaf1732c8a656b5c602875cbdd9 aio03.bat
    d035135e190fb6121faa7630e4a45eed aio01.dat
    cc1522fb2121cf4ae57278921a5965da *.Zip
    2dc20d55d248e8a99afbe5edaae5d2fc tony31.dat
    f34fa3d0329642615c17061e252c6afe tony32.dat
    051517b5b685116c2f4f1e6b535eb4cb tony33.bat
    da05d6ab72290ca064916324cbc86bab *.LNK
    443a00feeb3beaea02b2fbcd4302a3c9 북한이탈주민의 성공적인 남한정착을 위한 아카데미 운영.lnk
    f6d72abf9ca654a20bbaf23ea1c10a55 국가정보와 방첩 원고.lnk

    Authors: 

    Dixit Panchal
    Kartik Jivani
    Soumen Burma



    Source link

  • The First AI-Powered Ransomware & How It Works

    The First AI-Powered Ransomware & How It Works


    Introduction

    AI-powered malware has become quite a trend now. We have always been discussing how threat actors could perform attacks by leveraging AI models, and here we have a PoC demonstrating exactly that. Although it has not yet been observed in active attacks, who knows if it isn’t already being weaponized by threat actors to target organizations?

    We are talking about PromptLock, shared by ESET Research. PromptLock is the first known AI-powered ransomware. It leverages Lua scripts generated from hard-coded prompts to enumerate the local filesystem, inspect target files, exfiltrate selected data, and perform encryption. These Lua scripts are cross-platform compatible, functioning on Windows, Linux, and macOS. For file encryption, PromptLock utilizes the SPECK 128-bit encryption algorithm.

    Ransomware itself is already one of the most dangerous categories of malware. When created using AI, it becomes even more concerning. PromptLock leverages large language models (LLMs) to dynamically generate malicious scripts. These AI-generated Lua scripts drive its malicious activity, making them flexible enough to work across Windows, Linux, and macOS.

    Technical Overview:

    The malware is written in Go (Golang) and communicates with a locally hosted LLM through the Ollama API.

    On executing this malware, we observed it making a connection to the locally hosted LLM through the Ollama API.

    It identifies whether the infected machine is a personal computer, server, or industrial controller. Based on this classification, PromptLock decides whether to exfiltrate, encrypt, or destroy data.

    It is not just a sophisticated sample – the entire LLM prompts are embedded in the code itself. It uses the SPECK 128-bit encryption algorithm in ECB mode.

    The encryption key is stored in the key variable as four 32-bit little-endian words: local key = {key[1], key[2], key[3], key[4]}. This gets dynamically generated as shown in the figure:

    It begins infection by scanning the victim’s filesystem and building an inventory of candidate files, writing the results into scan.log.

    It also scans the user’s home directory to identify files containing potentially sensitive or critical information (e.g., PII). The results are stored in target_file_list.log

    PromptLock likely first creates scan.log to record discovered files and then narrows this into target.log, which defines the set of files to encrypt. Samples also generate files like payloads.txt for metadata or staging. Once targets are set, each file is encrypted in 16-byte chunks using SPECK-128 in ECB mode, overwriting its contents with ciphertext.
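For reference, SPECK encryption over 16-byte chunks can be sketched as follows. The exact variant and byte ordering used by PromptLock are not published; this sketch assumes SPECK-64/128 (which matches the four 32-bit little-endian key words noted above) applied in ECB mode, treating each 16-byte chunk as two independent 8-byte blocks:

```python
import struct

ROUNDS, MASK = 27, 0xFFFFFFFF  # SPECK-64/128: 32-bit words, 4-word key

def _ror(x, r): return ((x >> r) | (x << (32 - r))) & MASK
def _rol(x, r): return ((x << r) | (x >> (32 - r))) & MASK

def expand_key(key):
    """Round-key schedule for a 4-word (128-bit) key, mirroring the
    sample's `local key = {key[1], key[2], key[3], key[4]}`."""
    k, l = [key[0]], list(key[1:])
    for i in range(ROUNDS - 1):
        l.append(((k[i] + _ror(l[i], 8)) & MASK) ^ i)
        k.append(_rol(k[i], 3) ^ l[-1])
    return k

def encrypt_block(x, y, rk):
    for k in rk:
        x = ((_ror(x, 8) + y) & MASK) ^ k
        y = _rol(y, 3) ^ x
    return x, y

def decrypt_block(x, y, rk):
    for k in reversed(rk):
        y = _ror(y ^ x, 3)
        x = _rol(((x ^ k) - y) & MASK, 8)
    return x, y

def encrypt_chunk16(chunk, rk):
    """ECB over one 16-byte chunk: two independent SPECK-64 blocks,
    words packed little-endian as in the sample's key layout."""
    out = b""
    for off in (0, 8):
        x, y = struct.unpack_from("<2I", chunk, off)
        x, y = encrypt_block(x, y, rk)
        out += struct.pack("<2I", x, y)
    return out
```

Note that ECB mode means identical plaintext chunks produce identical ciphertext chunks, a weakness that can itself help identify SPECK-ECB-encrypted files.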

    After encryption, it generates ransom notes dynamically. These notes may include specific details such as a Bitcoin address (1A1zP1eP5QGefi2DMPTfTL5SLmv7DivfNa – notably the first Bitcoin address ever created) and a ransom amount. As it is a PoC, no real data is present.

    PromptLock’s CLI and scripts rely on:

    • model=gpt-oss:20b
    • github.com/PeerDB-io/gluabit32
    • github.com/yuin/gopher-lua
    • com/gopher-lfs

    It also prints several required keys in lowercase (formatted as “key: value”), including:

    • os
    • username
    • home
    • hostname
    • temp
    • sep
    • cwd

    Implementation guidance:

    – environment variables:
    username: os.getenv(“USERNAME”) or os.getenv(“USER”)
    home: os.getenv(“USERPROFILE”) or os.getenv(“HOME”)
    hostname: os.getenv(“COMPUTERNAME”) or os.getenv(“HOSTNAME”) or io.popen(“hostname”):read(“*l”)
    temp: os.getenv(“TMPDIR”) or os.getenv(“TEMP”) or os.getenv(“TMP”) or “/tmp”
    sep: detect from package.path (if contains “\” then “\” else “/”), default to “/”


    – os: detect from environment and path separator:
    * if os.getenv(“OS”) == “Windows_NT” then “windows”
    * elseif sep == “\” then “windows”  
    * elseif os.getenv(“OSTYPE”) then use that value
    * else “unix”

    – cwd: use io.popen(“pwd”):read(“*l”) or io.popen(“cd”):read(“*l”) depending on OS

    Conclusion:

    It’s high time the industry starts considering such malware cases. If we want to beat AI-powered malware, we will have to incorporate AI-powered solutions. In the last few months, we have been observing a tremendous rise in such cases; although they are PoCs, they are good enough to be leveraged in actual attacks. This clearly signals that defensive strategies must evolve at the same pace as offensive innovations.

    How Does SEQRITE Protect Its Customers?

    • PromptLock
    • PromptLock.49912.GC

    IOCs:

    • ed229f3442f2d45f6fdd4f3a4c552c1c
    • 2fdffdf0b099cc195316a85636e9636d
    • 1854a4427eef0f74d16ad555617775ff
    • 806f552041f211a35e434112a0165568
    • 74eb831b26a21d954261658c72145128
    • ac377e26c24f50b4d9aaa933d788c18c
    • f7cf07f2bf07cfc054ac909d8ae6223d

     

    Authors:

    Shrutirupa Banerjee
    Rayapati Lakshmi Prasanna Sai
    Pranav Pravin Hondrao
    Subhajeet Singha
    Kartikkumar Ishvarbhai Jivani
    Aravind Raj
    Rahul Kumar Mishra

     

     



    Source link

  • Motion Highlights #12

    Motion Highlights #12



    Your latest roundup of exceptional motion design and animation, spotlighting talent from the global creative community.



    Source link

  • why is it important? | Code4IT

    why is it important? | Code4IT


    Just a second! 🫷
    If you are here, it means that you are a software developer.
    So, you know that storage, networking, and domain management have a cost .

    If you want to support this blog, please ensure that you have disabled the adblocker for this site.
    I configured Google AdSense to show as few ADS as possible – I don’t want to bother you with lots of ads, but I still need to add some to pay for the resources for my site.

    Thank you for your understanding.
    Davide

    Even though many developers underestimate this part, tests should be written even more clearly than production code.

    This is true because, while production code is meant to be executed by the application, good tests allow you to document the behavior of the production code. So, the first consumers of the tests are the developers themselves.

    So, how can we write better tests? A simple trick is following the «Arrange, Act, Assert» pattern.

    A working (but bad) example

    As long as the tests pass, they are fine.

    Take this example:

    [Test]
    public void TestDateRange_WithFutureDate()
    {
        var diff = (new DateTime(2021, 2, 8) - new DateTime(2021, 2, 3)).Days;
        Assert.AreEqual(5, diff);
    }
    

    Yes, the test passes, but when you need to read and understand it, everything becomes less clear.

    So, it’s better to explicitly separate the sections of the test. In the end, it’s just a matter of readability.

    AAA: Arrange, Act, Assert

    A better way to organize tests is by following the AAA pattern: Arrange, Act, Assert.

    During the Arrange part, you define all the preconditions needed for your tests. You set up the input values, the mocked dependencies, and everything else needed to run the test.

    The Act part is where you eventually run the production code. The easiest example is to run a method in the System Under Test.

    Finally, the Assert part, where you check that everything worked as expected.

    [Test]
    public void TestDateRange_WithFutureDate()
    {
        // Arrange
        var today = new DateTime(2021, 2, 3);
        var otherDate = new DateTime(2021, 2, 8);
    
        // Act
        var diff = (otherDate.Date - today.Date).Days;
    
        // Assert
        Assert.AreEqual(5, diff);
    }
    

    You don’t need to explicitly mark the three different parts in every method, but personally, I find it more readable.

    Think of tests as physics experiments: first, you set up the environment, then you run the test, and finally, you check if the result is the one you were expecting.

    This article first appeared on Code4IT

    Conclusion

    This is a really simple way to improve your tests: keep every part separated from the others. It helps developers understand the meaning of each test, and allows for easier updates.

    Happy coding

    🐧



    Source link

  • PostgreSQL CRUD operations with C# and Dapper | Code4IT


    Mapping every SQL result to a data type can be a pain. To simplify our life, we can use an ORM like Dapper to automatically map the data.


    In a previous article, we’ve seen how to perform simple CRUD operations on a Postgres database by using Npgsql, a library that allows you to write and perform queries to be executed specifically on a PostgreSQL database.

    In this article, we will take a step further: we will perform the same operations using Dapper, one of the most popular ORMs for .NET applications, and we will see how performing those operations becomes easier.

    Introducing the project

    For this article, I will reuse the project I used for the previous article.

    This project performs CRUD (Create, Read, Update, Delete) operations on a Postgres database with a single table: Games. All those operations (plus a bunch of other additional ones) are executed by a class that implements this interface:

    public interface IBoardGameRepository
    {
        Task<IEnumerable<BoardGame>> GetAll();
    
        Task<BoardGame> Get(int id);
    
        Task Add(BoardGame game);
    
        Task Update(int id, BoardGame game);
    
        Task Delete(int id);
    
        Task<string> GetVersion();
    
        Task CreateTableIfNotExists();
    }
    

    This allows me to define and use a new class without modifying the project too much: in fact, I simply have to replace the dependency in the Startup class to use the Dapper repository.

    But first…

    Dapper, a micro-ORM

    In the introduction, I said that we will use Dapper, a popular ORM. Let me explain.

    ORM stands for Object-relational mapping and is a technique that allows you to map data from one format to another. This technique simplifies developers’ lives since they don’t have to manually map everything that comes from the database to an object – the ORM takes care of this task.

    Dapper is one of the most popular ORMs, created by the Stack Overflow team. Well, actually Dapper is a Micro-ORM: it performs only a subset of the operations commonly executed by other ORMs; for example, Dapper allows you to map query results to objects, but it does not automatically generate the queries.

    To add Dapper to your .NET project, simply run this command:

    dotnet add package Dapper
    

    Or add the NuGet package via Visual Studio:

    Dapper Nuget Package

    Dapper will take care of only a part of the operations; for instance, it cannot open a connection to your DB. That’s why you need to install Npgsql, just as we did in a previous article. We can say the whole Dapper library is a set of Extension Methods built on top of the native data access implementation – in the case of PostgreSQL, on top of Npgsql.

    Now we have all the dependencies installed, so we can start writing our queries.

    Open the connection

    Once we have created the application, we must instantiate and open a connection against our database.

    private NpgsqlConnection connection;
    
    public DapperBoardGameRepository()
    {
        connection = new NpgsqlConnection(CONNECTION_STRING);
        connection.Open();
    }
    

    We will use the connection object later when we will perform the queries.

    CRUD operations

    We are working on a table, Games, whose name is stored in a constant:

    private const string TABLE_NAME = "Games";
    

    The Games table consists of several fields:

    Field name Field type
    id INTEGER PK
    Name VARCHAR NOT NULL
    MinPlayers SMALLINT NOT NULL
    MaxPlayers SMALLINT
    AverageDuration SMALLINT

    And it is mapped to the BoardGame class:

    public class BoardGame
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public int MinPlayers { get; set; }
        public int MaxPlayers { get; set; }
        public int AverageDuration { get; set; }
    }
    

    So, the main task of Dapper is to map the result of the queries performed on the Games table to one or more BoardGame objects.

    Create

    To create a new row on the Games table, we need to do something like this:

    public async Task Add(BoardGame game)
    {
        string commandText = $"INSERT INTO {TABLE_NAME} (id, Name, MinPlayers, MaxPlayers, AverageDuration) VALUES (@id, @name, @minPl, @maxPl, @avgDur)";
    
        var queryArguments = new
        {
            id = game.Id,
            name = game.Name,
            minPl = game.MinPlayers,
            maxPl = game.MaxPlayers,
            avgDur = game.AverageDuration
        };
    
        await connection.ExecuteAsync(commandText, queryArguments);
    }
    

    Since Dapper does not create any queries for us, we still need to define them explicitly.

    The query contains various parameters, marked with the @ symbol (@id, @name, @minPl, @maxPl, @avgDur). Those are placeholders, whose actual values are defined in the queryArguments anonymous object:

    var queryArguments = new
    {
        id = game.Id,
        name = game.Name,
        minPl = game.MinPlayers,
        maxPl = game.MaxPlayers,
        avgDur = game.AverageDuration
    };
    

    Finally, we can execute our query on the connection we have opened in the constructor:

    await connection.ExecuteAsync(commandText, queryArguments);
    

    Comparison with Npgsql library

    Using Dapper simplifies our code. In fact, when using the native Npgsql library without Dapper, we have to declare every parameter explicitly.

    As a comparison, have a look at how we implemented the same operation using Npgsql:

    public async Task Add(BoardGame game)
    {
        string commandText = $"INSERT INTO {TABLE_NAME} (id, Name, MinPlayers, MaxPlayers, AverageDuration) VALUES (@id, @name, @minPl, @maxPl, @avgDur)";
        await using (var cmd = new NpgsqlCommand(commandText, connection))
        {
            cmd.Parameters.AddWithValue("id", game.Id);
            cmd.Parameters.AddWithValue("name", game.Name);
            cmd.Parameters.AddWithValue("minPl", game.MinPlayers);
            cmd.Parameters.AddWithValue("maxPl", game.MaxPlayers);
            cmd.Parameters.AddWithValue("avgDur", game.AverageDuration);
    
            await cmd.ExecuteNonQueryAsync();
        }
    }
    

    When using Dapper, we declare the parameter values in a single anonymous object, and we don’t create a NpgsqlCommand instance to define our query.

    Read

    As we’ve seen before, an ORM simplifies how you read data from a database by automatically mapping the query result to a list of objects.

    When we want to get all the games stored on our table, we can do something like that:

    public async Task<IEnumerable<BoardGame>> GetAll()
    {
        string commandText = $"SELECT * FROM {TABLE_NAME}";
        var games = await connection.QueryAsync<BoardGame>(commandText);
    
        return games;
    }
    

    Again, we define our query and allow Dapper to do the rest for us.

    In particular, connection.QueryAsync<BoardGame> fetches all the data from the query and converts it to a collection of BoardGame objects, performing the mapping for us.

    Of course, you can also query for BoardGames with a specific Id:

    public async Task<BoardGame> Get(int id)
    {
        string commandText = $"SELECT * FROM {TABLE_NAME} WHERE ID = @id";
    
        var queryArgs = new { Id = id };
        var game = await connection.QueryFirstAsync<BoardGame>(commandText, queryArgs);
        return game;
    }
    

    As we did before, you define the query with a placeholder @id, which will have the value defined in the queryArgs anonymous object.

    To store the result in a C# object, we map only the first object returned by the query, by using QueryFirstAsync instead of QueryAsync.

    Comparison with Npgsql

    The power of Dapper is the ability to automatically map query results to C# objects.

    With the plain Npgsql library, we would have done:

    await using (NpgsqlDataReader reader = await cmd.ExecuteReaderAsync())
        while (await reader.ReadAsync())
        {
            BoardGame game = ReadBoardGame(reader);
            games.Add(game);
        }
    

    to perform the query and open a reader on the result set. Then we would have defined a custom mapper to convert the Reader to a BoardGame object.

    private static BoardGame ReadBoardGame(NpgsqlDataReader reader)
    {
        int? id = reader["id"] as int?;
        string name = reader["name"] as string;
        short? minPlayers = reader["minplayers"] as Int16?;
        short? maxPlayers = reader["maxplayers"] as Int16?;
        short? averageDuration = reader["averageduration"] as Int16?;
    
        BoardGame game = new BoardGame
        {
            Id = id.Value,
            Name = name,
            MinPlayers = minPlayers.Value,
            MaxPlayers = maxPlayers.Value,
            AverageDuration = averageDuration.Value
        };
        return game;
    }
    

    With Dapper, all of this is done in a single instruction:

    var games = await connection.QueryAsync<BoardGame>(commandText);
    

    Update and Delete

    Update and Delete operations are quite similar: each is just a parameterized query executed asynchronously.

    I will add them here just for completeness:

    public async Task Update(int id, BoardGame game)
    {
        var commandText = $@"UPDATE {TABLE_NAME}
                    SET Name = @name, MinPlayers = @minPl, MaxPlayers = @maxPl, AverageDuration = @avgDur
                    WHERE id = @id";
    
        var queryArgs = new
        {
            id = game.Id,
            name = game.Name,
            minPl = game.MinPlayers,
            maxPl = game.MaxPlayers,
            avgDur = game.AverageDuration
        };
    
        await connection.ExecuteAsync(commandText, queryArgs);
    }
    

    and

    public async Task Delete(int id)
    {
        string commandText = $"DELETE FROM {TABLE_NAME} WHERE ID=(@p)";
    
        var queryArguments = new {  p = id  };
    
        await connection.ExecuteAsync(commandText, queryArguments);
    }
    

    Again: define the SQL operation, specify the placeholders, and execute the operation with ExecuteAsync.
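    For completeness, an Insert would follow the same pattern. The sketch below mirrors the Update method above and assumes the same TABLE_NAME constant and connection field; check the linked repository for the article's actual implementation.

    ```csharp
    // Sketch of an Insert with Dapper, mirroring the Update method above.
    // Assumes the same TABLE_NAME constant and connection field.
    public async Task Insert(BoardGame game)
    {
        string commandText = $@"INSERT INTO {TABLE_NAME}
                    (id, Name, MinPlayers, MaxPlayers, AverageDuration)
                    VALUES (@id, @name, @minPl, @maxPl, @avgDur)";

        var queryArgs = new
        {
            id = game.Id,
            name = game.Name,
            minPl = game.MinPlayers,
            maxPl = game.MaxPlayers,
            avgDur = game.AverageDuration
        };

        await connection.ExecuteAsync(commandText, queryArgs);
    }
    ```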

    Further readings

    As always, the best way to get started with a new library is by reading the official documentation:

    🔗 Dapper official documentation

    To see the complete code for these examples, you can have a look at the related GitHub repository.

    🔗 PostgresCrudOperations repository | GitHub

    Dapper adds a layer above the data access. If you want to go a level below, to have full control over what’s going on, you should use the native PostgreSQL library, Npgsql, as I explained in a previous article.

    🔗CRUD operations on PostgreSQL using C# and Npgsql | Code4IT

    How to get a Postgres instance running? You can use any cloud implementation, or you can download and run a PostgreSQL instance on your local machine using Docker as I explained in this guide:

    🔗 How to run PostgreSQL locally with Docker | Code4IT

    This article first appeared on Code4IT 🐧

    Wrapping up

    In this article, we’ve seen how to use Dapper to simplify our data access. Dapper is useful for querying different types of RDBMS, not only PostgreSQL.

    To try those examples out, download the code from GitHub, specify the connection string, and make sure that you are using the DapperBoardGameRepository class (this can be configured in the Startup class).
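    The Startup wiring could look like the sketch below. The interface name IBoardGameRepository is an assumption for illustration; check the repository for the actual abstraction.

    ```csharp
    // Hypothetical sketch of the Startup registration: bind the repository
    // interface (name assumed here) to the Dapper-based implementation.
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddControllers();

        // Swap this line to switch between data-access implementations
        services.AddScoped<IBoardGameRepository, DapperBoardGameRepository>();
    }
    ```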

    In a future article, we will use Entity Framework to – guess what? – perform CRUD operations on the Games table. In that way, you will have 3 different ways to access data stored on PostgreSQL by using .NET Core.

    Happy coding!

    🐧



    Source link

  • How to temporarily change the CurrentCulture | Code4IT

    How to temporarily change the CurrentCulture | Code4IT



    It may happen, even just for testing some functionalities, that you want to change the Culture of the thread your application is running on.

    The current Culture is defined in this global property: Thread.CurrentThread.CurrentCulture. How can we temporarily change it?

    An idea is to create a class that implements the IDisposable interface to create a section, delimited by a using block, with the new Culture:

    public class TemporaryThreadCulture : IDisposable
    {
    	CultureInfo _oldCulture;
    
    	public TemporaryThreadCulture(CultureInfo newCulture)
    	{
    		_oldCulture = CultureInfo.CurrentCulture;
    		Thread.CurrentThread.CurrentCulture = newCulture;
    	}
    
    	public void Dispose()
    	{
    		Thread.CurrentThread.CurrentCulture = _oldCulture;
    	}
    }
    

    In the constructor, we store the current Culture in a private field. Then, when we call the Dispose method (which is implicitly called when closing the using block), we use that value to restore the original Culture.
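    Note that this class only swaps CurrentCulture. If you also want resource lookup to follow suit, a variant (hypothetical, not part of the article's code) can save and restore CurrentUICulture as well:

    ```csharp
    using System;
    using System.Globalization;
    using System.Threading;

    // Variant sketch: restores both CurrentCulture and CurrentUICulture.
    public sealed class TemporaryThreadCultures : IDisposable
    {
        private readonly CultureInfo _oldCulture;
        private readonly CultureInfo _oldUICulture;

        public TemporaryThreadCultures(CultureInfo newCulture)
        {
            _oldCulture = Thread.CurrentThread.CurrentCulture;
            _oldUICulture = Thread.CurrentThread.CurrentUICulture;
            Thread.CurrentThread.CurrentCulture = newCulture;
            Thread.CurrentThread.CurrentUICulture = newCulture;
        }

        public void Dispose()
        {
            Thread.CurrentThread.CurrentCulture = _oldCulture;
            Thread.CurrentThread.CurrentUICulture = _oldUICulture;
        }
    }
    ```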

    How to use it

    How can we try it out? For example, by checking the currency symbol.

    Thread.CurrentThread.CurrentCulture = new CultureInfo("ja-jp");
    
    Console.WriteLine(Thread.CurrentThread.CurrentCulture.NumberFormat.CurrencySymbol); //¥
    
    using (new TemporaryThreadCulture(new CultureInfo("it-it")))
    {
    	Console.WriteLine(Thread.CurrentThread.CurrentCulture.NumberFormat.CurrencySymbol);//€
    }
    
    Console.WriteLine(Thread.CurrentThread.CurrentCulture.NumberFormat.CurrencySymbol); //¥
    

    We start by setting the Culture of the current thread to Japanese so that the Currency symbol is ¥. Then, we temporarily move to the Italian culture, and we print the Euro symbol. Finally, when we move outside the using block, we get back to ¥.

    Here’s a test that demonstrates the usage:

    [Fact]
    public void TestChangeOfCurrency()
    {
    	using (new TemporaryThreadCulture(new CultureInfo("it-it")))
    	{
    		var euro = CultureInfo.CurrentCulture.NumberFormat.CurrencySymbol;
    		Assert.Equal("€", euro);
    
    		using (new TemporaryThreadCulture(new CultureInfo("en-us")))
    		{
    			var dollar = CultureInfo.CurrentCulture.NumberFormat.CurrencySymbol;
    
    			Assert.NotEqual(euro, dollar);
    		}
    		Assert.Equal("€", CultureInfo.CurrentCulture.NumberFormat.CurrencySymbol);
    	}
    }
    

    This article first appeared on Code4IT

    Conclusion

    Using a class that implements IDisposable is a good way to create a temporary environment with different characteristics than the main environment.
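    The same pattern works for any ambient state, not just cultures. For instance, a hypothetical TemporaryEnvironmentVariable (not from the article) that restores the previous value on Dispose:

    ```csharp
    using System;

    // Same IDisposable pattern applied to environment variables:
    // set a value in the constructor, restore the old one on Dispose.
    public sealed class TemporaryEnvironmentVariable : IDisposable
    {
        private readonly string _name;
        private readonly string? _oldValue;

        public TemporaryEnvironmentVariable(string name, string? value)
        {
            _name = name;
            _oldValue = Environment.GetEnvironmentVariable(name);
            Environment.SetEnvironmentVariable(name, value);
        }

        public void Dispose() =>
            Environment.SetEnvironmentVariable(_name, _oldValue);
    }
    ```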

    I use this approach a lot when I want to experiment with different cultures to understand how the code behaves when I’m not using English (or, more generally, Western) culture.

    Do you have any other approaches for reaching the same goal? If so, feel free to share them in the comments section!

    Happy coding!

    🐧



    Source link

  • WinRAR Directory Traversal & NTFS ADS Vulnerabilities (CVE-2025-6218 & CVE-2025-8088)

    WinRAR Directory Traversal & NTFS ADS Vulnerabilities (CVE-2025-6218 & CVE-2025-8088)


    Executive Summary

    Two high-severity vulnerabilities in WinRAR for Windows — CVE-2025-6218 and CVE-2025-8088 — allow attackers to write files outside the intended extraction directory. CVE-2025-6218 involves traditional path traversal, while CVE-2025-8088 extends the attack using NTFS Alternate Data Streams (ADS). Both flaws can be exploited by delivering a malicious archive to a user and relying on minimal interaction (just extraction).

    Why it matters: These flaws enable reliable persistence and remote code execution (RCE) in enterprise environments. Threat actors, including RomCom and Paper Werewolf (aka GOFFEE), have exploited CVE-2025-8088 in active campaigns.

    Vulnerability Overview

    • CVE-2025-6218
      • Type: Directory Traversal during extraction
      • Affected: WinRAR for Windows 7.11 and earlier (before 7.12 Beta 1)
      • Fixed in: WinRAR 7.12 Beta 1
      • Impact: Files can be dropped outside the target extraction directory, e.g., into Windows Startup.
    • CVE-2025-8088
      • Type: Directory Traversal via NTFS ADS syntax (e.g., file.txt:stream)
      • Affected: WinRAR for Windows 7.12 and earlier
      • Fixed in: WinRAR 7.13
      • Impact: Attackers can hide payloads in ADS or place them into autorun locations for stealthy persistence.
    • Affected Components: WinRAR for Windows (GUI/CLI), UnRAR/UnRAR.dll, portable UnRAR (Windows builds)

    Technical Details

    CVE-2025-6218 – Directory Traversal in Archive Extraction

    Root Cause: The RARReadHeader / RARProcessFile routines in WinRAR fail to normalize or validate relative path components (‘..\’, ‘../’). Because the output path is never canonicalized and bounded, attackers can force file writes outside the extraction directory.

    Trigger: Any malicious RAR/ZIP archive containing file entries with traversal sequences in their header metadata.

    Example Payload Path:
    ..\..\..\Users\<username>\AppData\Roaming\Microsoft\Windows\Start Menu\Programs\Startup\malicious.exe

    Impact: File lands in Startup folder → auto-executes on login under user privileges.

    Variant Notes: This exploit works for both absolute and relative extraction destinations. It does not require overwriting existing files — it can create new ones.

    The vulnerability is exploitable whether the archive entry’s stored path is absolute (full system path) or relative (using traversal sequences).

    Absolute path example:
    C:\Users\<username>\AppData\Roaming\Microsoft\Windows\Start Menu\Programs\Startup\evil.exe
    When extracted, the file is placed directly in the Startup folder, ignoring the chosen extraction directory.

    Relative path example:
    ..\..\..\Users\<username>\AppData\Roaming\Microsoft\Windows\Start Menu\Programs\Startup\evil.exe
    The ‘..\’ sequences walk up the directory tree from the extraction location, then down into the Startup folder.

    No Need to Overwrite Existing Files: The flaw allows new files to be created in sensitive locations even if they didn’t exist. This enables persistence without replacing trusted binaries, reducing the chance of triggering integrity alerts. Example: Dropping evil.lnk or malware.exe into Startup ensures auto-run on login.

    CVE-2025-8088 – ADS-Assisted Path Traversal

    Root Cause: Same traversal flaw as CVE-2025-6218, but the extraction code also fails to block NTFS ADS syntax in filenames (‘:’ character followed by stream name).

    NTFS ADS Basics: An NTFS file can have multiple data streams: the main unnamed stream (default content) and any number of named alternate streams (e.g., ‘readme.txt:payload.exe’). Windows Explorer and most file listings don’t show ADS, making them useful for hiding content.

    Example Payload Path:
    ..\..\..\Users\<username>\AppData\Roaming\Microsoft\Windows\Start Menu\Programs\Startup\readme.txt:malicious.exe

    Impact: The payload is hidden in an ADS of a benign-looking file in the Startup folder. A loader script may execute it later or side-load it via another process.

    Why It’s Worse: ADS hides the malicious binary from casual inspection and some legacy security tools, delaying detection.

    Observed Exploitation: Threat actors use it for stealth persistence plus staging malware for later execution.

    Attack Chain

    1. Prepare Payload: Attacker embeds malicious executable/script in archive using traversal and/or ADS syntax.
    2. Deliver Archive: Sent via email, instant messaging, or malicious download links.
    3. Victim Extraction: User extracts with vulnerable WinRAR/UnRAR.
    4. Silent Path Escape: Payload lands in Startup or other sensitive locations.
    5. Automatic Execution: Runs on reboot/login with user privileges.

    Exploitation in the Wild

    • RomCom: Used CVE-2025-8088 as a zero-day in spear-phishing starting mid-July 2025, delivering backdoors via autorun locations.
    • Paper Werewolf: Observed exploiting similar traversal flaws against Russian targets.
    • Forecast: Expect copycat campaigns — trivial to weaponize, high persistence rate.

    Protection:

    • Trojan.49857.GC
    • Trojan.49856.GC
    • Romcom.49869.SL
    • Ghanarava.1754899322556336
    • Agent.S37377547
    • Agent.S37377548

    Indicators of Compromise (IoCs):

    SHA-256 | Detection Name
    49023b86fde4430faf22b9c39e921541e20224c47fa46ff473f880d5ae5bc1f1 | Bat.Trojan.49857.GC
    a25d011e2d8e9288de74d78aba4c9412a0ad8b321253ef1122451d2a3d176efa | Lnk.Trojan.49856.GC
    4da20b8b16f006a6a745032165be68c42efef9709c8e133e39d4b6951cca5179 | Lnk.Trojan.49856.GC
    8082956ace8b016ae8ce16e4a777fe347c7f80f8a576a6f935f9d636a30204e7 | Trojan.Ghanarava.1754899322556336

    Others –

    File/Path Patterns

    • Writes to:
      • %APPDATA%\Microsoft\Windows\Start Menu\Programs\Startup\*.exe
    • Presence of NTFS ADS syntax (‘:’ in extracted filenames).
    • Unexpected files outside the intended extraction folder.

    Process/Behavior

    • WinRAR.exe / UnRAR.exe spawning processes (cmd.exe, powershell.exe) post-extraction.
    • ADS creation events (Sysmon Event ID 15).

    Registry/Autorun

    • Dropped Startup files (no registry needed).
    • Monitor HKCU\Software\Microsoft\Windows\CurrentVersion\Run for related changes.

    MITRE ATT&CK Mapping

    • T1059 – Command and Scripting Interpreter
    • T1204 – User Execution
    • T1547.001 – Registry Run Keys / Startup Folder
    • T1564.004 – NTFS File Attributes (ADS)
    • T1027 – Obfuscated Files or Information

    Patch Verification

    • Confirm WinRAR version 7.13 or later on all endpoints.
    • Validate signatures & checksums of the installer.
    • Test with crafted traversal/ADS archives to ensure blocking.

    Conclusion

    CVE-2025-6218 and CVE-2025-8088 show how insufficient path validation and overlooked NTFS features can lead to stealthy persistence and RCE. Exploitation requires minimal user interaction, and both flaws have been used in real-world attacks. Immediate patching, combined with proactive hunting for ADS and Startup modifications, is essential for defense.

    References

    1. RARLAB – Official WinRAR Security Advisory (August 2025)
      https://www.rarlab.com/rar/winrar-security.htm
      (Vendor confirmation of affected versions and fixes)
    2. National Vulnerability Database (NVD) – CVE-2025-6218
      https://nvd.nist.gov/vuln/detail/CVE-2025-6218
      (Technical classification, CVSS score, CWE mapping)
    3. National Vulnerability Database (NVD) – CVE-2025-8088
      https://nvd.nist.gov/vuln/detail/CVE-2025-8088
      (Technical classification, CVSS score, CWE mapping)
    4. Malwarebytes Labs – “WinRAR Zero-Day Exploited in the Wild” (August 2025)
      https://blog.malwarebytes.com/
      (Covers RomCom’s exploitation of CVE-2025-8088)
    5. ESET Research – “APT Campaigns Using WinRAR Vulnerabilities”
      https://www.welivesecurity.com/
      (Threat actor campaigns & IOC context)
    6. Microsoft Defender Threat Intelligence – “Detecting ADS Abuse in Windows”
      https://learn.microsoft.com/en-us/windows/security/threat-protection/
      (Guidance on detecting NTFS ADS creation events)
    7. MITRE ATT&CK – Technique T1547.001 (Startup Folder) & T1564.004 (ADS)
      https://attack.mitre.org/
      (Mapping for persistence & hiding techniques)

     

    Authors:

    Nandini Vimal Seth

    Suvarnjeet Milind Jagtap



    Source link

  • Advanced parsing using Int.TryParse in C# | Code4IT


    We all need to parse strings as integers. Most of the time, we use int.TryParse(string, out int). But there’s a more advanced overload that we can use for complex parsing.


    You have probably used the int.TryParse method with this signature:

    public static bool TryParse (string? s, out int result);
    

    That method accepts a string, s; if the string can be parsed, its integer value is stored in the result out parameter, and the method returns true to signal that parsing succeeded.

    As an example, this snippet:

    if (int.TryParse("100", out int result))
    {
        Console.WriteLine(result + 2); // correctly parsed as an integer
    }
    else
    {
        Console.WriteLine("Failed");
    }
    

    prints 102.

    Does it work? Yes. Is this the best we can do? No!

    How to parse complex strings with int.TryParse

    What if you wanted to parse 100€? There is a less-known overload that does the job:

    public static bool TryParse (
        string? s,
        System.Globalization.NumberStyles style,
        IFormatProvider? provider,
        out int result);
    

    As you see, we have two more parameters: style and provider.

    IFormatProvider? provider allows you to specify the culture information: examples are CultureInfo.InvariantCulture and new CultureInfo("es-es").

    But the real king of this overload is the style parameter: it is a [Flags] enum that allows you to specify the expected string format.

    style is of type System.Globalization.NumberStyles, which has several values:

    [Flags]
    public enum NumberStyles
    {
        None = 0x0,
        AllowLeadingWhite = 0x1,
        AllowTrailingWhite = 0x2,
        AllowLeadingSign = 0x4,
        AllowTrailingSign = 0x8,
        AllowParentheses = 0x10,
        AllowDecimalPoint = 0x20,
        AllowThousands = 0x40,
        AllowExponent = 0x80,
        AllowCurrencySymbol = 0x100,
        AllowHexSpecifier = 0x200,
        Integer = 0x7,
        HexNumber = 0x203,
        Number = 0x6F,
        Float = 0xA7,
        Currency = 0x17F,
        Any = 0x1FF
    }
    

    You can combine those values with the | symbol.

    Let’s see some examples.

    Parse as integer

    The simplest example is to parse a simple integer:

    [Fact]
    public void CanParseInteger()
    {
        NumberStyles style = NumberStyles.Integer;
        var canParse = int.TryParse("100", style, new CultureInfo("it-it"), out int result);
    
        Assert.True(canParse);
        Assert.Equal(100, result);
    }
    

    Notice the NumberStyles style = NumberStyles.Integer line: it is used as a baseline in the following examples.

    Parse parentheses as negative numbers

    In some cases, parentheses around a number indicate that the number is negative. So (100) is another way of writing -100.

    In this case, you can use the NumberStyles.AllowParentheses flag.

    [Fact]
    public void ParseParenthesisAsNegativeNumber()
    {
        NumberStyles style = NumberStyles.Integer | NumberStyles.AllowParentheses;
        var canParse = int.TryParse("(100)", style, new CultureInfo("it-it"), out int result);
    
        Assert.True(canParse);
        Assert.Equal(-100, result);
    }
    

    Parse with currency

    And if the string represents a currency? You can use NumberStyles.AllowCurrencySymbol.

    [Fact]
    public void ParseNumberAsCurrency()
    {
        NumberStyles style = NumberStyles.Integer | NumberStyles.AllowCurrencySymbol;
        var canParse = int.TryParse(
            "100€",
            style,
            new CultureInfo("it-it"),
            out int result);
    
        Assert.True(canParse);
        Assert.Equal(100, result);
    }
    

    But, remember: the only valid symbol is the one related to the CultureInfo instance you are passing to the method.

    Both

    var canParse = int.TryParse(
        "100€",
        style,
        new CultureInfo("en-gb"),
        out int result);
    

    and

    var canParse = int.TryParse(
        "100$",
        style,
        new CultureInfo("it-it"),
        out int result);
    

    are not valid: the first because the British culture expects £, not €; the second because the Italian culture expects €, not $.

    Hint: how do you get the currency symbol for a given CultureInfo? You can use NumberFormat.CurrencySymbol, like this:

    new CultureInfo("it-it").NumberFormat.CurrencySymbol; // €
    

    Parse with thousands separator

    And what about strings that contain the thousands separator? 10.000 is a valid number in Italian notation.

    Well, you can specify the NumberStyles.AllowThousands flag.

    [Fact]
    public void ParseThousands()
    {
        NumberStyles style = NumberStyles.Integer | NumberStyles.AllowThousands;
        var canParse = int.TryParse("10.000", style, new CultureInfo("it-it"), out int result);
    
        Assert.True(canParse);
        Assert.Equal(10000, result);
    }
    

    Parse hexadecimal values

    It’s a rare case, but it may happen: you receive a string in hexadecimal notation that you need to parse as an integer.

    In this case, NumberStyles.AllowHexSpecifier is the correct flag.

    [Fact]
    public void ParseHexValue()
    {
        NumberStyles style = NumberStyles.AllowHexSpecifier;
        var canParse = int.TryParse("F", style, new CultureInfo("it-it"), out int result);
    
        Assert.True(canParse);
        Assert.Equal(15, result);
    }
    

    Notice that the input string does not contain the hexadecimal prefix (0x).

    Use multiple flags

    You can combine multiple flag values to create a new value that represents the union of the specified options.

    We can use this capability to parse, for example, a currency that contains the thousands separator:

    [Fact]
    public void ParseThousandsCurrency()
    {
        NumberStyles style = NumberStyles.Integer
            | NumberStyles.AllowThousands
            | NumberStyles.AllowCurrencySymbol;
    
        var canParse = int.TryParse("10.000€", style, new CultureInfo("it-it"), out int result);
    
        Assert.True(canParse);
        Assert.Equal(10000, result);
    }
    

    NumberStyles.AllowThousands | NumberStyles.AllowCurrencySymbol does the trick.

    Conclusion

    We all use the simple int.TryParse method, but when parsing the input string requires more complex calculations, we can rely on those overloads. Of course, if it’s still not enough, you should create your custom parsers (or, as a simpler approach, you can use regular expressions).
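    As a sketch of the “custom parser” idea — TryParseWithUnit is a hypothetical helper, not a BCL method — a regular expression can strip a unit suffix that no NumberStyles combination can describe:

    ```csharp
    using System.Globalization;
    using System.Text.RegularExpressions;

    // Hypothetical custom parser: accepts an integer followed by a unit
    // (e.g. "100 km"), which NumberStyles alone cannot express.
    static bool TryParseWithUnit(string input, string unit, out int value)
    {
        value = 0;
        var match = Regex.Match(input, $@"^\s*(-?\d+)\s*{Regex.Escape(unit)}\s*$");
        return match.Success
            && int.TryParse(match.Groups[1].Value, NumberStyles.Integer,
                            CultureInfo.InvariantCulture, out value);
    }
    ```

    This keeps int.TryParse as the final arbiter of the numeric part while the regex handles the surrounding format.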

    Are there any methods that have overloads that nobody uses? Share them in the comments!

    Happy coding!

    🐧



    Source link