
  • Revolutionizing XDR with Gen AI Cybersecurity



    In today's digital era, cyber threats evolve at an alarming pace. Advanced persistent threats (APTs) infiltrate networks and exfiltrate sensitive data over time. Security teams grapple with overwhelming alert volumes, siloed tools, and manual processes that delay responses. Seqrite XDR, empowered by Gen AI cybersecurity, offers a transformative solution. This blog delves into the power of XDR, the role of Gen AI in enhancing it, and the unmatched capabilities of Seqrite XDR with Seqrite Intelligent Assistant (SIA), a Gen AI-powered virtual security analyst.

    What is XDR?

    Extended Detection and Response (XDR) is a comprehensive cybersecurity platform. It integrates security across endpoints, networks, and cloud environments, surpassing traditional endpoint protection. XDR provides a unified approach to threat management, enabling organizations to stay ahead of sophisticated attacks. Its core capabilities include:

    • Holistic Visibility: Monitors all attack surfaces for complete oversight.
    • Advanced Threat Detection: Leverages analytics to identify complex threats.
    • Automated Response: Swiftly isolates or mitigates risks.
    • Proactive Threat Hunting: Searches for indicators of compromise (IOCs).
    • Efficient Incident Management: Streamlines investigation and remediation processes.

    XDR eliminates the fragmentation of siloed tools. It reduces operational complexity. It empowers security teams to respond with speed and precision, ensuring robust protection against modern cyber threats.

    How Gen AI Enhances XDR

    Gen AI in cybersecurity is a game-changer for XDR. It processes massive datasets in real time, uncovering patterns that evade human analysts. By integrating Gen AI cybersecurity, XDR platforms become smarter and more responsive. Key enhancements include:

    • Real-Time Anomaly Detection: Identifies threats instantly with unparalleled accuracy.
    • Automated Incident Summaries: Delivers concise insights for rapid decision-making.
    • Contextual Threat Mapping: Correlates alerts with frameworks like MITRE ATT&CK.
    • Intelligent Analyst Support: Provides natural-language guidance for investigations.

    Gen AI in cybersecurity reduces false positives by 40-70%. It prioritizes critical alerts, reducing alert fatigue. It enables security teams to focus on high-impact threats, enhancing overall efficiency. With Gen AI in cybersecurity, XDR becomes a proactive shield against evolving dangers.

    Seqrite XDR with Gen AI Capabilities

    Seqrite XDR is a leading cybersecurity solution. It combines advanced analytics, machine learning, and multi-layered security to combat sophisticated threats. Integrated with SIA, a Gen AI-powered virtual security analyst, Seqrite XDR sets a new standard. Its capabilities include:

    • SIA-Powered Investigations: SIA processes prompts like “Investigate incident UUID-12345” for rapid, detailed analysis.
    • Multi-Layered Protection: Defends against zero-day threats with robust defenses.
    • Real-Time Threat Hunting: Uses IOCs and MITRE TTP-based rules for precise detection.
    • Playbook Automation: Streamlines manual and automatic response workflows.
    • Intuitive Dashboard: Offers unified visibility into endpoints, alerts, and incidents.
    • Scalability and Flexibility: Adapts to growing business and IT needs.
    • Compliance Support: Provides real-time monitoring and audit logs for regulatory adherence.

    SIA leverages Gen AI cybersecurity to simplify complex tasks. It reduces analyst workload by 50%. It integrates Endpoint Protection Platform (EPP) capabilities, ensuring comprehensive protection. Seqrite XDR’s unified platform uncovers hidden threats that siloed tools miss. It delivers actionable insights through SIA’s conversational interface, enabling faster investigations.

    Ready to revolutionize your cybersecurity? Seqrite XDR with SIA harnesses Gen AI cybersecurity to deliver unmatched protection. Contact Seqrite at 1800-212-7377 or visit Seqrite XDR to experience AI-driven security.

    Discover Seqrite XDR Today

     



    Source link

  • Enhancing Retry Patterns with a bit of randomness | Code4IT



    Operations may fail for transient reasons. How can you implement retry patterns? And how can a simple Jitter help you stabilize the system?


    When building complex systems, you may encounter situations where you have to retry an operation several times before giving up due to transient errors.

    How can you implement proper retry strategies? And how can a little thing called "Jitter" help avoid the so-called "Thundering Herd problem"?

    Retry Patterns and their strategies

    Retry patterns are strategies for retrying operations that fail due to transient, temporary errors, such as packet loss or a temporarily unavailable resource.

    Suppose you have a database that can handle up to 3 requests per second (yay! so performant!).

    By coincidence, three clients try to execute an operation at the exact same instant. What happens now?

    Well, the DB becomes temporarily unavailable, and it won’t be able to serve those requests. So, since this issue occurred by chance, you just have to wait and retry.

    How long should we wait before the next attempt?

    You can imagine that the time between one attempt and the next follows a mathematical function, where the wait time (called Backoff) depends on the attempt number:

    Backoff = f(RetryAttemptNumber)
    

    With that in mind, we can think of two main retry strategies: linear backoff retries and exponential backoff retries.

    Linear backoff retries

    The simplest way to handle retries is with Linear backoff.

    Let’s continue with the mathematical function analogy. In this case, the function we can use is a linear function.

    We can simplify the idea by saying that, regardless of the attempt number, the delay between one retry and the next one stays constant.

    Linear backoff

    Let’s see an example in C#. Say that you have defined an operation that may fail randomly, stored in an Action instance. You can call the following RetryOperationWithLinearBackoff method to execute the operation passed in input with a linear retry.

    static void RetryOperationWithLinearBackoff(Action operation)
    {
        int maxRetries = 5;
        double delayInSeconds = 5.0;
    
        for (int attempt = 0; attempt < maxRetries; attempt++)
        {
            try
            {
                operation();
                return;
            }
            catch (Exception e)
            {
                Console.WriteLine($"Retrying in {delayInSeconds:F2} seconds...");
                Thread.Sleep(TimeSpan.FromSeconds(delayInSeconds));
            }
        }
    }
    

    The input operation will be retried up to 5 times, and every time it fails, the system waits 5 seconds before the next retry.

    Linear backoff is simple to implement, as you just saw. However, it falls short when the system is in a faulty state and takes a long time to recover. With linear retries and a fixed maximum number of retries, the timespan during which an operation can be retried is limited: you can run out of attempts while the downstream system is still recovering.

    There may be better ways.

    Exponential backoff retries

    An alternative is to use Exponential Backoff.

    With this approach, the backoff becomes longer after every attempt: usually it doubles at every retry, which is why it is called "exponential" backoff.

    This way, if the downstream system takes a long time to recover, the top-level operation has a better chance of being completed successfully.

    Exponential Backoff

    Of course, the downside of this approach is that to get a response from the operation (did it complete? did it fail?), you will have to wait longer; it all depends on the number of retries. For example, with a 2-second base delay and 5 attempts, the waits are 2, 4, 8, 16, and 32 seconds, more than a minute in total.
    So, the top-level operation can time out while it waits to access a resource, because the retries become increasingly spread out.

    A simple implementation in C# would be something like this:

    static void RetryOperationWithExponentialBackoff(Action operation)
    {
        int maxRetries = 5;
        double baseDelayInSeconds = 2.0;
    
        for (int attempt = 0; attempt < maxRetries; attempt++)
        {
            try
            {
                operation();
                return;
            }
            catch (Exception e)
            {
                double exponentialDelay = baseDelayInSeconds * Math.Pow(2, attempt);
                Console.WriteLine($"Retrying in {exponentialDelay:F2} seconds...");
                Thread.Sleep(TimeSpan.FromSeconds(exponentialDelay));
            }
        }
    }
    

    The key to understanding the exponential backoff is how the delay is calculated:

    double exponentialDelay = baseDelayInSeconds * Math.Pow(2, attempt);
    

    Understanding the Thundering Herd problem

    The “basic” versions of these retry patterns are effective in overcoming temporary service unavailability, but they can inadvertently cause a thundering herd problem. This occurs when multiple clients retry simultaneously, overwhelming the system with a surge of requests, potentially leading to further failures.

    Suppose that a hypothetical downstream system becomes unavailable if 5 or more requests occur simultaneously.

    What happens when five requests start at the exact same moment? They start, overwhelm the system, and they all fail.

    Their retries will always be in sync, since the backoff schedule is deterministic (yes, the delay can grow over time, but it grows in the same way for every client).

    So, all five requests will wait for the same amount of time before the next retry. This means that they will always stay in sync.

    Let's make it clearer with these simple diagrams, where each color represents a different client trying to perform the operation, and the number inside the star represents the attempt number.

    In the case of linear backoff, all the requests are always in sync.

    Multiple retries with linear backoff

    The same happens when using exponential backoff: even if the backoff grows exponentially, all the requests stay in sync, making the system unstable.

    Multiple retries with exponential backoff

    What is Jitter?

    Jitter refers to the introduction of randomness into timing mechanisms: the term was first adopted when talking about network communications, but it has since come into use in other areas of system design.

    Jitter helps mitigate the risk of synchronized retries that can lead to spikes in server load, by forcing clients that try to access a resource simultaneously to perform their operations with a slightly randomized delay.

    In fact, by randomizing the delay intervals between retries, jitter ensures that retries are spread out over time, reducing the likelihood of overwhelming a service.

    Benefits of Jitter in Distributed Systems

    This is where Jitter comes in handy: it adds a random interval around the moment a retry should happen, to keep retries from staying in sync.

    Exponential Backoff with Jitter

    Jitter introduces randomness to the delay intervals between retries. By staggering these retries, jitter helps distribute the load more evenly over time.

    This reduces the risk of server overload and allows backend systems to recover and process requests efficiently. Implementing jitter can transform a simple retry mechanism into a robust strategy that enhances system reliability and performance.

    Incorporating jitter into your system design offers several advantages:

    • Reduced Load Spikes: By spreading out retries, Jitter minimizes sudden surges in traffic and prevents multiple clients from retrying at the same time, avoiding server overload.
    • Enhanced System Stability: With less synchronized activity, systems remain more stable, even during peak usage times.
    • Improved Resource Utilization: Requests are processed more evenly over time, keeping the load on servers consistent.
    • Greater Resilience: Systems become more resilient to transient errors and network fluctuations, reducing the likelihood of cascading failures.

    Let’s review the retry methods we defined before.

    static void RetryOperationWithLinearBackoffAndJitter(Action operation)
    {
        int maxRetries = 5;
        double baseDelayInSeconds = 5.0;
    
        Random random = new Random();
    
        for (int attempt = 0; attempt < maxRetries; attempt++)
        {
            try
            {
                operation();
                return;
            }
            catch (Exception e)
            {
                double jitter = random.NextDouble() * 4 - 2; // Random jitter between -2 and 2 seconds
                double delay = baseDelayInSeconds + jitter;
                Console.WriteLine($"Retrying in {delay:F2} seconds...");
                Thread.Sleep(TimeSpan.FromSeconds(delay));
            }
        }
    }
    

    And, for Exponential Backoff,

    static void RetryOperationWithExponentialBackoffAndJitter(Action operation)
    {
        int maxRetries = 5;
        double baseDelayInSeconds = 2.0;
    
        Random random = new Random();
    
        for (int attempt = 0; attempt < maxRetries; attempt++)
        {
            try
            {
                operation();
                return;
            }
            catch (Exception e)
            {
                // Exponential backoff with jitter
                double exponentialDelay = baseDelayInSeconds * Math.Pow(2, attempt);
                double jitter = random.NextDouble() * (exponentialDelay / 2);
                double delay = exponentialDelay + jitter;
                Console.WriteLine($"Retrying in {delay:F2} seconds...");
                Thread.Sleep(TimeSpan.FromSeconds(delay));
            }
        }
    }
    

    In both cases, the key is in creating the delay variable: a random value (the Jitter) is added to the delay.

    Notice that the Jitter can also be a negative value!

    Further readings

    Retry patterns and Jitter make your system more robust, but if badly implemented, they can make your code a mess. So, a question arises: should you focus on improving performance or on writing cleaner code?

    🔗 Code opinion: performance or clean code? | Code4IT

    This article first appeared on Code4IT 🐧

    Clearly, if the downstream system is not able to handle too many requests, you may need to implement a way to limit the number of incoming requests in a timeframe. You can choose between 4 well-known algorithms to implement Rate Limiting.

    🔗 4 algorithms to implement Rate Limiting, with comparison | Code4IT

    Wrapping up

    While adding jitter may seem like a minor tweak, its impact on distributed systems can be significant. By introducing randomness into retry patterns, jitter helps create a more balanced, efficient, and robust system.

    As we continue to build and scale our systems, incorporating jitter is a best practice that can prevent cascading failures and optimize performance. All in all, a little randomness can be just what your system needs to thrive.

    I hope you enjoyed this article! Let’s keep in touch on LinkedIn, Twitter or BlueSky! 🤜🤛

    Happy coding!

    🐧





    Source link

  • Targeting Taiwan & Japan with DLL Implants



    • Introduction
    • Initial Findings
    • Infection Chain
    • Technical Analysis
      • Stage 1 – Malicious LNK Script
      • Stage 2 – Malicious Pterois Implant
      • Stage 3 – Malicious Isurus Implant
      • Stage 4 – Malicious Cobalt Strike Shellcode
    • Infrastructure and Hunting
    • Attribution
    • Conclusion
    • Seqrite Protection
    • IOCs
    • MITRE ATT&CK

    Introduction

    Seqrite Labs APT team has recently uncovered a campaign, which we have termed Swan Vector, targeting nations across the East China Sea such as Taiwan and Japan. The campaign is aimed at educational institutes and the mechanical engineering industry, with lures delivering fake candidate resumes that act as decoys.

    The malware ecosystem involved in this campaign comprises a total of four stages. The first is a malicious LNK; in the second stage, the shortcut file executes the DLL implant Pterois via a well-known LOLBin. Pterois uses stealthy methods to run and to download the third stage, which contains multiple files, including a legitimate Windows executable that is then used to execute another implant, Isurus, via DLL sideloading. Isurus in turn executes the fourth stage, the malicious Cobalt Strike shellcode downloaded by Pterois.

    In this blog, we'll explore the sophistication of this campaign and cover every minute technical detail we encountered during our analysis. We will examine the various stages of the campaign, starting with the analysis of the shortcut (.LNK) file, moving through the multiple DLL implants, and ending with an analysis of the shellcode and a final overview.

    Initial Findings

    Recently, in April, our team found a malicious archive named 歐買尬金流問題資料_20250413 (6).rar, which can be translated as "Oh My God Payment Flow Problem Data – 2025/04/13 (6)". The archive was used as the preliminary source of infection and contains several files, including an LNK file and a file with a .PNG extension.

    The archive contains a malicious LNK file named 詳細記載提領延遲問題及相關交易紀錄.pdf.lnk, which translates to "Detailed Documentation of Withdrawal Delay Issues and Related Transaction Records.pdf.lnk". It is responsible for running the DLL payload masqueraded as a PNG file, Chen_YiChun.png. This DLL is executed via the well-known LOLBin RunDLL32.exe, which then downloads another set of implants and a decoy PDF file.

    Looking into the decoy

    As the first DLL implant, Pterois, was executed via the LOLBin, a decoy file named rirekisho2025 was downloaded and stored inside the Temp directory alongside the other implants and binaries; rirekisho is roughly the Japanese term for a curriculum vitae (CV 2025).

    On the first page there is a Japanese resume/employment history form, "履歴書・職歴経歴書", dated in the Reiwa era format (令和5年4月). The form has a basic header section with fields for personal information, including name (氏名), date, gender selection (男/女), birth date, address fields, email address (E-Mail), and contact numbers. There is also a photo placeholder box in the upper right corner. The decoy appears to be mostly blank, with rows for entering education and work history details. Notable fields include entries for different years (月), degree/qualification levels, and employment dates. At the bottom, there are sections for licenses/certifications and additional notes.

    On the second page, there are two identical sections labeled "職歴 1" and "職歴 2" for employment history entries. Each section contains fields for company name, position, employment dates, and a large notes section. The fields are arranged in a similar layout, with spaces for company/organization name (会社・団体名), position title, dates of employment, and work-related details. There is also a section in red text with additional notes about documents or materials (調査、調査料、ファイル等).

    On the third and last page, there is one more employment history section, "職歴 3", with the same structure as the previous page: company name, position, employment dates, and notes. Below this are five additional employment history sections with repeated fields for company name, position, and employment dates, though these appear more condensed than the earlier sections. Each section follows the same pattern of requesting employment-related information in a structured format.

    Next, we will look into the infection chain and technical analysis.

    Infection Chain.

    Technical Analysis.

    We will break down the technical capabilities of this campaign into four different parts.

    Stage 1 – Malicious LNK Script.

    The archive contains a malicious LNK file named 詳細記載提領延遲問題及相關交易紀錄.pdf.lnk, which translates to "Detailed Record of Withdrawal Delay Issues and Related Transaction Records". The same LNK has also been seen under the name 針對提領系統與客服流程的改進建議.pdf.lnk, which translates to "Suggestions for Improving the Withdrawal System and Customer Service Process". The creation time of the LNK is 2025-03-04. Upon analyzing the contents of this malicious LNK file, we found that its sole purpose is to spawn an instance of the LOLBin rundll32.exe, which is then used to execute a malicious DLL implant named Pterois through its export function Trpo with an interesting argument, 1LwalLoUdSinfGqYUx8vBCJ3Kqq_LCxIg. We will look at how this argument is leveraged by the implant later in this technical analysis.

    Stage 2 – Malicious Pterois Implant.

    Upon examining the malicious RAR archive, alongside the malicious LNK file we found another file with a .PNG extension, Chen_YiChun.png. Initial analysis showed that the file is actually a DLL implant, which we have named Pterois. Now, let us examine the technicalities of this implant. As seen in the LNK analysis, rundll32.exe is used to execute this DLL file's export function Trpo.

    The implant has two primary features: the first is API hashing, and the second is downloading the next stage of malware. The first function is responsible for resolving all APIs from DLLs such as NTDLL, UCRTBase, Kernel32, and the other libraries and APIs required for its tasks. This is done by initially accessing the Process Environment Block (PEB) to retrieve the list of loaded modules. The code then traverses this list using InMemoryOrderModuleList, which contains linked LDR_DATA_TABLE_ENTRY structures, each representing a loaded DLL. Within each LDR_DATA_TABLE_ENTRY, the BaseDllName field (a UNICODE_STRING) holds just the DLL's filename (e.g., ntdll.dll), and the DllBase field contains its base address in memory.

    During traversal, the function converts the BaseDllName to an ANSI string, normalizes it to uppercase, and computes a case-insensitive SDBM hash of the resulting string. This computed hash is compared against a target hash provided to the function. If a match is found, the corresponding DLL's base address is obtained from the DllBase field and returned. Once the DLL's base address is returned, the code uses a similar case-insensitive SDBM hashing algorithm to resolve API function addresses within NTDLL.DLL. It does this by parsing the DLL's Export Table, computing the SDBM hash of each exported function name, and comparing it to a target hash to find the matching function address. A simple Python sketch reproducing this hashing is shown at the end of this stage's analysis. In this first function, a total of four functions are resolved, and the APIs from the other two dynamically linked libraries, ucrtbase.dll and Kernel32.dll, are resolved in the same manner.

    In the next set of functions, which resolve APIs from DLLs such as Iphlpapi.dll, shell32.dll, and WinHTTP.dll, the implant first resolves the DLL's base address just as before. Once that is returned, it uses a simple pseudo-anti-analysis technique: timer objects are used to load these DLLs. It first creates a timer queue using RtlCreateTimerQueue; once the timer queue is created, RtlCreateTimer is used to run a callback function, LoadLibraryW in this case, which loads the DLL. Then GetModuleHandleW is used to get a handle to Iphlpapi.dll. Once this succeeds, RtlDeleteTimerQueue is used to delete and free the timer queue. Finally, the GetAdaptersInfo API is resolved via its hash. The other DLLs are loaded in the same manner.

    Next, we will look at the later part of the implant: the set of functions responsible for downloading the next stager. The function starts by retrieving the entire command line, comprising the LOLBin and the argument, which is then truncated to 1LwalLoUdSinfGqYUx8vBCJ3Kqq_LCxIg, a hardcoded file ID. The implant then abuses Google Drive as a command-and-control server by first establishing authentication with legitimate OAuth credentials. After obtaining a valid access token through a properly formatted OAuth exchange, it uses the Google Drive API to retrieve files by specific hardcoded file IDs, including malicious executables, DLLs, and configuration files, which it downloads to predetermined paths in C:\Windows\Temp.

    It sets the appropriate Content-Type header, "application/x-www-form-urlencoded", to ensure the request is processed correctly by Google's authentication servers. Following this exchange, it performs precise JSON parsing, extracting the "access_token" field from Google's response using cJSON_GetObjectItem. The memory dump clearly displays the obtained OAuth token beginning with "ya29.a0AZYk", confirming a successful authentication process. Once this token is parsed and extracted, it is stored and subsequently used to authorize API calls to Google Drive, allowing the implant to download additional payloads while appearing as legitimate Google Drive traffic. The parsed JSON extracted from memory looks something like this.

    Once the files are downloaded, another part of this implant uses CreateThread to execute the downloaded decoy and other files. The decoy is then displayed on screen, and the task of the Pterois implant is done. Finally, once the entire task is complete, the implant performs self-deletion to cover its tracks and reduce the chance of detection.

    The self-deletion routine uses a delayed execution technique by spawning a cmd.exe process that pings localhost before deleting the file, ensuring the deletion occurs after the current process has completed and released its file handles.
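    To close out this stage, here is the simple Python sketch of the case-insensitive SDBM hashing referenced above. It is a minimal illustration using the standard SDBM constants; the implant's actual target hash values are not reproduced here, and the names passed in are just examples.

    def sdbm_hash(name: str) -> int:
        # Case-insensitive SDBM hash over the ASCII name, truncated to 32 bits
        h = 0
        for ch in name.upper():
            h = (ord(ch) + (h << 6) + (h << 16) - h) & 0xFFFFFFFF
        return h

    # Print the hashes the resolver would compare against its hardcoded targets
    for name in ("ntdll.dll", "kernel32.dll", "GetAdaptersInfo"):
        print(f"{name}: {sdbm_hash(name):#010x}")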

    Next, we will look into the other DLL implant, which has been downloaded by this malicious loader.

    Stage 3 – Malicious Isurus Implant.

    The previous implant downloads a total of four samples, one of which is a legitimate Windows-signed binary, PrintDialog.exe. The other file, PrintDialog.dll, is the second implant, with a compilation timestamp of 2025-04-08 03:02:59 UTC. It is responsible for running the shellcode stored in the ra.ini file and abuses a well-known technique, DLL sideloading: the malicious DLL is placed in the same directory as PrintDialog.exe, which does not specify an explicit path, so the implant, which we call Isurus, is loaded and performs its malicious tasks.

    Looking at the export table, we can see that the malicious implant exports only two functions: the normal DllEntryPoint and the malicious DllGetActivationFactory export function. Inside this export function, Isurus performs API resolution via hashes, extracts the shellcode, and loads and executes it in memory. The implant initially resolves APIs using the PEB-walking technique, traversing the Process Environment Block (PEB) to locate the base addresses of the needed DLLs such as ntdll.dll and kernel32.dll. Once the base address of a target DLL is identified, the implant manually parses the PE (Portable Executable) headers of the DLL to locate the Export Directory Table. To resolve specific APIs, the implant employs a hashing algorithm, CRC32: instead of looking up an export by name, the loader computes a hash of each function name in the export table and compares it to precomputed constants embedded in the code.

    Now, let us look at how this implant extracts and loads the shellcode. It first opens the existing file ra.ini with read permissions using the CreateFileW API; once it gets the handle, GetFileSize is used to obtain the size of the file, and the contents are read via ReadFile. Then, using the hardcoded RC4 key wquefbqw, the shellcode is decrypted and returned (see the Python sketch below). After extraction, the shellcode is executed directly in memory using a syscall-based execution technique. This approach involves loading the appropriate syscall numbers into the EAX register and invoking low-level system calls to allocate memory, write the shellcode, change memory protections, and ultimately execute it, all without relying on higher-level Windows API functions. The PDB path of this implant also depicts its functionality:

    • C:\Users\test\source\repos\sysldr\x64\Release\weqfdqwefq.pdb
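    For reference, here is a minimal Python sketch of the RC4 decryption step described above. The key wquefbqw comes from the analysis; the input and output file names are placeholders used only for illustration.

    def rc4(key: bytes, data: bytes) -> bytes:
        # Standard RC4: key scheduling followed by keystream generation
        S = list(range(256))
        j = 0
        for i in range(256):
            j = (j + S[i] + key[i % len(key)]) % 256
            S[i], S[j] = S[j], S[i]
        out = bytearray()
        i = j = 0
        for byte in data:
            i = (i + 1) % 256
            j = (j + S[i]) % 256
            S[i], S[j] = S[j], S[i]
            out.append(byte ^ S[(S[i] + S[j]) % 256])
        return bytes(out)

    # Placeholder file names; ra.ini is the encrypted shellcode container described above
    with open("ra.ini", "rb") as f:
        shellcode = rc4(b"wquefbqw", f.read())
    with open("shellcode.bin", "wb") as f:
        f.write(shellcode)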

    In the next part, we will look into the malicious shellcode and its workings.

    Stage 4 – Malicious Cobalt Strike Shellcode.

    Upon looking into the file, we found that the shellcode is stored in an encrypted format. We decrypted it with the RC4 key using a simple Python script (along the lines of the sketch shown earlier). Analyzing the decrypted shellcode, we found that it is a Cobalt Strike beacon. Here is the extracted beacon config:

    Process Injection Targets:
    windir\syswow64\bootcfg.exe
    windir\sysnative\bootcfg.exe
    Infrastructural information:
    hxxps://52.199.49.4:7284/jquery-3.3.1.min.js
    hxxps://52.199.49.4:7284/jquery-3.3.2.min.js
    Request Body :
    GET /jquery-3.3.1.min.js HTTP/1.1
    Host: 52.199.49.4:7284
    
    
    User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.122 Safari/537.36
    Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
    Referer: http://code.jquery.com/
    Accept-Encoding: gzip, deflate
    Cookie: __cfduid=dT98nN_EYDF96RONtS1uMjE0IZIWy9GljNoWh6rXhEndZDFhNo_Ha4AmFQKcUn9C4ZUUqLTAI6-6HUu3jA-WcnuttiUnceIu3FbAlBPitw52PirDxM_nP460iXUlVqW6Lvv__Wr3k09xnyWZN4besu1gVlk3JWS2hX_yt5EioqY
    Connection: Keep-Alive
    Cache-Control: no-cache
    
    
    HTTP Settings GET Hash:
    52407f3c97939e9c8735462df5f7457d
    HTTP Settings POST Hash:
    7c48240b065248a8e23eb02a44bc910a

    Due to the extensive documentation and prevalence of Cobalt Strike in offensive security operations, an in-depth analysis is deemed unnecessary. Nonetheless, the extracted beacon configuration confirms that the threat actor leveraged Cobalt Strike as a component of their intrusion toolkit in this campaign.

    Infrastructure and Hunting.

    While reverse-engineering the implants, we found that the threat actor had been using Google Drive as a command-and-control (C2) framework, which also leaked a number of details, such as sensitive API keys and more. We found details related to the threat actor's infrastructure, such as the associated Gmail address and a list of implants that had been staged by the threat actor for other campaigns and have not yet been used in the wild (ITW).

    Information related to the threat actor's Google Drive account:

    {
      "user": {
        "kind": "drive#user",
        "displayName": "Swsanavector56",
        "photoLink": "https://lh3.googleusercontent.com/a/ACg8ocKiv7cWvdPxivqyPdYB70M1QTLrTsWUb-QHii8yNv60kYx8eA=s64",
        "me": true,
        "permissionId": "09484302754176848006",
        "emailAddress": "swsanavector42@gmail.com"
      }
    }

    List of files found inside the Google Drive:

    File Name File ID Type Size SHA-256 Hash
    PrintDialog.exe 14gFG2NsJ60CEDsRxE5aXvFN0Fs83YMMG EXE 123,032 bytes 7a942f65e8876aeec0a1372fcd4d53aa1f84d2279904b2b86c49d765e5a29d6f
    PrintDialog.dll 1VMrUQlxvKZZ-fRyQ8m3Ai8ZEhkzE3g5T DLL 108,032 bytes a9b33572237b100edf1d4c7b0a2071d68406e5931ab3957a962fcce4bfc2cc49
    ra.ini 1JAXiUPz6kvzOlokDMDxDhA4ohidt094b INI 265,734 bytes 0f303988e5905dffc3202ad371c3d1a49bd3ea5e22da697031751a80e21a13a7
    rirekisho2025.pdf 17hO28MbwD2assMsmA47UJnNbKB2fpM_A PDF 796,062 bytes 8710683d2ec2d04449b821a85b6ccd6b5cb874414fd4684702f88972a9d4cfdd
    rirekisho2021_01.pdf 1LwalLoUdSinfGqYUx8vBCJ3Kqq_LCxIg PDF 796,062 bytes 8710683d2ec2d04449b821a85b6ccd6b5cb874414fd4684702f88972a9d4cfdd
    wbemcomn.dll 1aY5oX6EIe4hfGD6QgAAzmCcwxM4DoLke DLL 181,760 bytes c7b9ae61046eed01651a72afe7a31de088056f1c1430b368b1acda0b58299e28
    svhost.exe 1P8_PG2DGtLWA3q8F4XPy43GMLznZFtQv EXE 209,920 bytes e0c6f9abfc11911747a7533f3282e7ff0c10fc397129228621bcb3a51f5be980
    0g9pglZr74.ini 1UE7gNfUIuTRzgjIv188hRIZG3YNtbvkV INI 265,734 bytes 9fb57a4c6576a98003de6bf441e4306f72c83f783630286758f5b468abaa105d
    KpEvjK3KG2.enc 1RxJi1RZMhcF31F1lgQ9TJfXMuvSJkYQl ENC 265,734 bytes e86feaa258df14e3023c7a74b7733f0b568cc75092248bec77de723dba52dd12
    LoggingPlatform.dll 1lZgq1ZNkK88eJsl6GlcvpzRuFlBgxEOF DLL 112,640 bytes 9df9bb3c13e4d20a83b0ac453e6a2908b77fc2bf841761b798b903efb2d0f4f7
    0g9pglZr74.ini 1ky1fEzC6v70U8-RbHBZG_i3YI79Ir8Og INI 265,734 bytes 9fb57a4c6576a98003de6bf441e4306f72c83f783630286758f5b468abaa105d
    python310.dll 1RuMLCJJ5hcFiVXbcg8kZK3giueWiVbTJ DLL 189,952 bytes e1b2d0396914f84d27ef780dd6fdd8bae653d721eea523f0ade8f45ac9a10faf
    ra.ini 13ooFQAYZ27Bx015UQG3qkHR293wlcL90 INI 265,734 bytes 777961d51eb92466ca4243fa32143520d49077a3f7c77a2fcbec183ebf975182
    pythonw.exe 19n1ta4hyQguQQmR8C6SAsZuGNQF4-ddU EXE 97,000 bytes 040d121a3179f49cd3f33f4bc998bc8f78b7f560bfd93f279224d69e76a06e92
    python.xml 1k4Q18FByEXW98Rr1CXyVVC-Kj8T0NBDW XML 1,526 bytes c8ed52278ec00a6fbc9697661db5ffbcbe19c5ab331b182f7fd0f9f7249b5896
    OneDriveFileLauncher.exe 137tczdqf5R7RMRoOb9fI_YjZuncd_TUn EXE 392,760 bytes 7bf5e1f3e29beccca7f25d7660545161598befff88506d6e3648b7b438181a75
    wbemcomn.dll 1xUPkhfaWIgYs5HSmxYPC_sZT4QKm_T7i DLL 181,760 bytes c7b9ae61046eed01651a72afe7a31de088056f1c1430b368b1acda0b58299e28
    0g9pglZr74.ini 1Ylpf9XVnztxeGk-joNw9df3b0Mv8wYU3 INI 265,734 bytes 9fb57a4c6576a98003de6bf441e4306f72c83f783630286758f5b468abaa105d
    svhost.exe 1wo1gZ9acixvy925lM6QAkz6Uaj6cRXxx EXE 209,920 bytes e0c6f9abfc11911747a7533f3282e7ff0c10fc397129228621bcb3a51f5be980
    llv 1ZuzB7x0zzgz34eNhHp_TI3auPhHj8Xhc Folder

    We also observed that the host address where Cobalt Strike was hosted falls under ASN 16509, with the IP geolocated in Japan. Apart from the Google Drive C2, we also found that the Gmail address had been used to create accounts and perform activities, since removed, across multiple platforms such as Google Maps, YouTube, and Apple services.

    Attribution.

    While attribution remains a key perspective when analyzing the current and future motives of threat actors, we have observed a modus operandi similar to this campaign before, particularly in terms of DLL sideloading techniques. The Winnti APT group has previously exploited PrintDialog.exe using this method. Additionally, when examining the second implant, Isurus, we found some similarities with the codebase used by the Lazarus group, which has employed DLL sideloading against wmiapsrv.exe, a file that was found uploaded to the threat actor's Google Drive account. We have also found a few similarities between Swan Vector and APT10's recent targeting across Japan and Taiwan.

    While these observations alone do not provide concrete attribution, when combined with linguistic analysis, implant maturity, and other collected artifacts, we are attributing this threat actor to the East Asian geosphere with medium confidence.

    Conclusion.

    Our analysis and research indicate that the threat actor is based in East Asia and has been active since December 2024, targeting multiple hiring-based entities across Taiwan and Japan. The threat actor relies on custom-developed implants, comprising a downloader, shellcode loaders, and Cobalt Strike, as their key tools, and leans heavily on multiple evasion techniques such as API hashing, direct syscalls, function callbacks, DLL sideloading, and self-deletion to avoid leaving traces on the target machine.

    We believe the threat actor will use the implants described above, which appear to be staged for upcoming campaigns, to perform DLL sideloading against applications such as Python, the WMI Performance Adapter Service, and the OneDrive launcher executable in order to execute their malicious Cobalt Strike beacon with CV-based decoys.

    Seqrite Protection.

    • Pterois.S36007342
    • Trojan.49524.GC
    • Trojan.49518.GC

    Indicators-Of-Compromise (IOCs)

    Decoys (PDFs)

    Filename SHA-256
    rirekisho2021_01.pdf 8710683d2ec2d04449b821a85b6ccd6b5cb874414fd4684702f88972a9d4cfdd
    rirekisho2025.pdf 8710683d2ec2d04449b821a85b6ccd6b5cb874414fd4684702f88972a9d4cfdd

    IP/Domains

    Malicious Implants

    Filename SHA-256
    wbemcomn.dll c7b9ae61046eed01651a72afe7a31de088056f1c1430b368b1acda0b58299e28
    LoggingPlatform.dll 9df9bb3c13e4d20a83b0ac453e6a2908b77fc2bf841761b798b903efb2d0f4f7
    PrintDialog.dll a9b33572237b100edf1d4c7b0a2071d68406e5931ab3957a962fcce4bfc2cc49
    python310.dll e1b2d0396914f84d27ef780dd6fdd8bae653d721eea523f0ade8f45ac9a10faf
    Chen_YiChun.png de839d6c361c7527eeaa4979b301ac408352b5b7edeb354536bd50225f19cfa5
    針對提領系統與客服流程的改進建議.pdf.lnk 9c83faae850406df7dc991f335c049b0b6a64e12af4bf61d5fb7281ba889ca82

    Shellcode and other suspicious binaries

    Filename SHA-256
    0g9pglZr74.ini 9fb57a4c6576a98003de6bf441e4306f72c83f783630286758f5b468abaa105d
    ra.ini 0f303988e5905dffc3202ad371c3d1a49bd3ea5e22da697031751a80e21a13a7
    python.xml c8ed52278ec00a6fbc9697661db5ffbcbe19c5ab331b182f7fd0f9f7249b5896
    KpEvjK3KG2.enc e86feaa258df14e3023c7a74b7733f0b568cc75092248bec77de723dba52dd12

    MITRE ATT&CK.

    Tactic Technique ID Technique Name Sub-technique ID Sub-technique Name
    Initial Access T1566 Phishing T1566.001 Spearphishing Attachment
    Execution T1129 Shared Modules
    Execution T1106 Native API
    Execution T1204 User Execution T1204.002 Malicious File
    Persistence T1574 Hijack Execution Flow T1574.001 DLL Sideloading
    Privilege Escalation T1055 Process Injection T1055.003 Thread Execution Hijacking
    Privilege Escalation T1055 Process Injection T1055.004 Asynchronous Procedure Call
    Defense Evasion T1218 System Binary Proxy Execution T1218.011 Rundll32
    Defense Evasion T1027 Obfuscated Files or Information T1027.007 Dynamic API Resolution
    Defense Evasion T1027 Obfuscated Files or Information T1027.012 LNK Icon Smuggling
    Defense Evasion T1027 Obfuscated Files or Information T1027.013 Encrypted/Encoded File
    Defense Evasion T1070 Indicator Removal T1070.004 File Deletion
    Command and Control T1102 Web Service

    Source link

  • Deploy CoreML Models on the Server with Vapor | by Drew Althage


    Recently, at Sovrn, we had an AI Hackathon where we were encouraged to experiment with anything related to machine learning. The Hackathon yielded some fantastic projects from across the company, everything from SQL query generators to chatbots that can answer questions about our products, along with other incredible work. I thought this would be a great opportunity to learn more about Apple's ML tools and maybe even build something with real business value.

    A few of my colleagues and I teamed up to play with CreateML and CoreML to see if we could integrate some ML functionality into our iOS app. We got a model trained and integrated into our app in several hours, which was pretty amazing. But we quickly realized that we had a few problems to solve before we could actually ship this thing.

    • The model was hefty. It was about 50MB. That’s a lot of space to take up in our app bundle.
    • We wanted to update the model without releasing a new app version.
    • We wanted to use the model in the web browser as well.

    We didn’t have time to solve all of these problems. But the other day I was exploring the Vapor web framework and the thought hit me, “Why not deploy CoreML models on the server?”



    Source link

  • Automate Stock Analysis with Python and Yfinance: Generate Excel Reports



    In this article, we will explore how to analyze stocks using Python and Excel. We will fetch historical data for three popular stocks—Realty Income (O), McDonald's (MCD), and Johnson & Johnson (JNJ)—calculate returns, factor in dividends, and visualize the results in an Excel report.
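    The sketch below gives a flavour of that workflow using yfinance and pandas. The tickers come from the article; the date range, column choices, and output file name are assumptions made purely for illustration.

    import pandas as pd
    import yfinance as yf

    tickers = ["O", "MCD", "JNJ"]
    summary = []

    for t in tickers:
        # history() returns OHLCV data plus Dividends and Stock Splits columns
        hist = yf.Ticker(t).history(start="2020-01-01", auto_adjust=False)
        price_return = hist["Close"].iloc[-1] / hist["Close"].iloc[0] - 1
        dividends = hist["Dividends"].sum()
        summary.append({
            "Ticker": t,
            "Price return %": round(price_return * 100, 2),
            "Dividends per share": round(dividends, 2),
        })

    # Export the summary as a simple Excel report
    pd.DataFrame(summary).to_excel("stock_report.xlsx", index=False)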





    Source link

  • WebAssembly with Go: Taking Web Apps to the Next Level | by Ege Aytin


    Let’s dive a bit deeper into the heart of our WebAssembly integration by exploring the key segments of our Go-based WASM code.

    The first step involves preparing and specifying our Go code to be compiled for a WebAssembly runtime.

    //go:build wasm
    // +build wasm

    These lines serve as directives to the Go compiler, signaling that the following code is designated for a WebAssembly runtime environment. Specifically:

    • //go:build wasm: A build constraint ensuring the code is compiled only for WASM targets, adhering to modern syntax.
    • // +build wasm: An analogous constraint, utilizing older syntax for compatibility with prior Go versions.

    In essence, these directives guide the compiler to include this code segment only when compiling for a WebAssembly architecture, ensuring an appropriate setup and function within this specific runtime.

    package main

    import (
        "context"
        "encoding/json"
        "syscall/js"

        "google.golang.org/protobuf/encoding/protojson"

        "github.com/Permify/permify/pkg/development"
    )

    var dev *development.Development

    func run() js.Func {
        // The `run` function returns a new JavaScript function
        // that wraps the Go function.
        return js.FuncOf(func(this js.Value, args []js.Value) interface{} {

            // t will be used to store the unmarshaled JSON data.
            // The use of an empty interface{} type means it can hold any type of value.
            var t interface{}

            // Unmarshal JSON from JavaScript function argument (args[0]) to Go's data structure (map).
            // args[0].String() gets the JSON string from the JavaScript argument,
            // which is then converted to bytes and unmarshaled (parsed) into the map `t`.
            err := json.Unmarshal([]byte(args[0].String()), &t)

            // If an error occurs during unmarshaling (parsing) the JSON,
            // it returns an array with the error message "invalid JSON" to JavaScript.
            if err != nil {
                return js.ValueOf([]interface{}{"invalid JSON"})
            }

            // Attempt to assert that the parsed JSON (`t`) is a map with string keys.
            // This step ensures that the unmarshaled JSON is of the expected type (map).
            input, ok := t.(map[string]interface{})

            // If the assertion is false (`ok` is false),
            // it returns an array with the error message "invalid JSON" to JavaScript.
            if !ok {
                return js.ValueOf([]interface{}{"invalid JSON"})
            }

            // Run the main logic of the application with the parsed input.
            // It's assumed that `dev.Run` processes `input` in some way and returns any errors encountered during that process.
            errors := dev.Run(context.Background(), input)

            // If no errors are present (the length of the `errors` slice is 0),
            // return an empty array to JavaScript to indicate success with no errors.
            if len(errors) == 0 {
                return js.ValueOf([]interface{}{})
            }

            // If there are errors, each error in the `errors` slice is marshaled (converted) to a JSON string.
            // `vs` is a slice that will store each of these JSON error strings.
            vs := make([]interface{}, 0, len(errors))

            // Iterate through each error in the `errors` slice.
            for _, r := range errors {
                // Convert the error `r` to a JSON string and store it in `result`.
                // If an error occurs during this marshaling, it returns an array with that error message to JavaScript.
                result, err := json.Marshal(r)
                if err != nil {
                    return js.ValueOf([]interface{}{err.Error()})
                }
                // Add the JSON error string to the `vs` slice.
                vs = append(vs, string(result))
            }

            // Return the `vs` slice (containing all JSON error strings) to JavaScript.
            return js.ValueOf(vs)
        })
    }

    Within the realm of Permify, the run function stands as a cornerstone, executing a crucial bridging operation between JavaScript inputs and Go’s processing capabilities. It orchestrates real-time data interchange in JSON format, safeguarding that Permify’s core functionalities are smoothly and instantaneously accessible via a browser interface.

    Digging into run:

    • JSON Data Interchange: Translating JavaScript inputs into a format utilizable by Go, the function unmarshals JSON, transferring data between JS and Go, assuring that the robust processing capabilities of Go can seamlessly manipulate browser-sourced inputs.
    • Error Handling: Ensuring clarity and user-awareness, it conducts meticulous error-checking during data parsing and processing, returning relevant error messages back to the JavaScript environment to ensure user-friendly interactions.
    • Contextual Processing: By employing dev.Run, it processes the parsed input within a certain context, managing application logic while handling potential errors to assure steady data management and user feedback.
    • Bidirectional Communication: As errors are marshaled back into JSON format and returned to JavaScript, the function ensures a two-way data flow, keeping both environments in synchronized harmony.

    Thus, through adeptly managing data, error-handling, and ensuring a fluid two-way communication channel, run serves as an integral bridge, linking JavaScript and Go to ensure the smooth, real-time operation of Permify within a browser interface. This facilitation of interaction not only heightens user experience but also leverages the respective strengths of JavaScript and Go within the Permify environment.

    // Continuing from the previously discussed code...

    func main() {
        // Instantiate a channel, 'ch', with no buffer, acting as a synchronization point for the goroutine.
        ch := make(chan struct{}, 0)

        // Create a new instance of 'Container' from the 'development' package and assign it to the global variable 'dev'.
        dev = development.NewContainer()

        // Attach the previously defined 'run' function to the global JavaScript object,
        // making it callable from the JavaScript environment.
        js.Global().Set("run", run())

        // Utilize a channel receive expression to halt the 'main' goroutine, preventing the program from terminating.
        <-ch
    }

    1. ch := make(chan struct{}, 0): A synchronization channel is created to coordinate the activity of goroutines (concurrent threads in Go).
    2. dev = development.NewContainer(): Initializes a new container instance from the development package and assigns it to dev.
    3. js.Global().Set("run", run()): Exposes the Go run function to the global JavaScript context, enabling JavaScript to call Go functions.
    4. <-ch: Halts the main goroutine indefinitely, ensuring that the Go WebAssembly module remains active in the JavaScript environment.

    In summary, the code establishes a Go environment running within WebAssembly that exposes specific functionality (run function) to the JavaScript side and keeps itself active and available for function calls from JavaScript.

    Before we delve into Permify’s rich functionalities, it’s paramount to elucidate the steps of converting our Go code into a WASM module, priming it for browser execution.

    For enthusiasts eager to delve deep into the complete Go codebase, don’t hesitate to browse our GitHub repository: Permify Wasm Code.

    Kickstart the transformation of our Go application into a WASM binary with this command:

    GOOS=js GOARCH=wasm go build -o permify.wasm main.go

    This directive cues the Go compiler to churn out a .wasm binary attuned for JavaScript environments, with main.go as the source. The output, permify.wasm, is a concise rendition of our Go capabilities, primed for web deployment.

    In conjunction with the WASM binary, the Go ecosystem offers an indispensable JavaScript piece named wasm_exec.js. It’s pivotal for initializing and facilitating our WASM module within a browser setting. You can typically locate this essential script inside the Go installation, under misc/wasm.

    However, to streamline your journey, we’ve hosted wasm_exec.js right here for direct access: wasm_exec.

    cp "$(go env GOROOT)/misc/wasm/wasm_exec.js" .

    Equipped with these pivotal assets — the WASM binary and its companion JavaScript — the stage is set for its amalgamation into our frontend.

    To kick things off, ensure you have a directory structure that clearly separates your WebAssembly-related code from the rest of your application. Based on your given structure, the loadWasm folder seems to be where all the magic happens:

    loadWasm/

    ├── index.tsx // Your main React component that integrates WASM.
    ├── wasm_exec.js // Provided by Go, bridges the gap between Go's WASM and JS.
    └── wasmTypes.d.ts // TypeScript type declarations for WebAssembly.

    To view the complete structure and delve into the specifics of each file, refer to the Permify Playground on GitHub.

    Inside the wasmTypes.d.ts, global type declarations are made which expand upon the Window interface to acknowledge the new methods brought in by Go’s WebAssembly:

    declare global {
      export interface Window {
        Go: any;
        run: (shape: string) => any[];
      }
    }
    export {};

    This ensures TypeScript recognizes the Go constructor and the run method when called on the global window object.

    In index.tsx, several critical tasks are accomplished:

    • Import Dependencies: First off, we import the required JS and TypeScript declarations:
    import "./wasm_exec.js";
    import "./wasmTypes.d.ts";
    • WebAssembly Initialization: The asynchronous function loadWasm takes care of the entire process:
    async function loadWasm(): Promise<void> {
      const goWasm = new window.Go();
      const result = await WebAssembly.instantiateStreaming(
        fetch("play.wasm"),
        goWasm.importObject
      );
      goWasm.run(result.instance);
    }

    Here, new window.Go() initializes the Go WASM environment. WebAssembly.instantiateStreaming fetches the WASM module, compiles it, and creates an instance. Finally, goWasm.run activates the WASM module.

    • React Component with Loader UI: The LoadWasm component uses the useEffect hook to asynchronously load the WebAssembly when the component mounts:
    export const LoadWasm: React.FC<React.PropsWithChildren<{}>> = (props) => {
      const [isLoading, setIsLoading] = React.useState(true);

      useEffect(() => {
        loadWasm().then(() => {
          setIsLoading(false);
        });
      }, []);

      if (isLoading) {
        return (
          <div className="wasm-loader-background h-screen">
            <div className="center-of-screen">
              <SVG src={toAbsoluteUrl("/media/svg/rocket.svg")} />
            </div>
          </div>
        );
      } else {
        return <React.Fragment>{props.children}</React.Fragment>;
      }
    };

    While loading, an SVG rocket is displayed to indicate that initialization is ongoing. This feedback is crucial, as users might otherwise be uncertain about what is happening behind the scenes. Once loading completes, the children components or content are rendered.

    Given your Go WASM exposes a method named run, you can invoke it as follows:

    function Run(shape) {
      return new Promise((resolve) => {
        let res = window.run(shape);
        resolve(res);
      });
    }

    This function essentially acts as a bridge, allowing the React frontend to communicate with the Go backend logic encapsulated in the WASM.

    To integrate a button that triggers the WebAssembly function when clicked, follow these steps:

    1. Creating the Button Component

    First, we’ll create a simple React component with a button:

    import React from "react";

    type RunButtonProps = {
      shape: string;
      onResult: (result: any[]) => void;
    };

    function RunButton({ shape, onResult }: RunButtonProps) {
      const handleClick = async () => {
        let result = await Run(shape);
        onResult(result);
      };

      return <button onClick={handleClick}>Run WebAssembly</button>;
    }

    In the code above, the RunButton component accepts two props:

    • shape: The shape argument to pass to the WebAssembly run function.
    • onResult: A callback function that receives the result of the WebAssembly function and can be used to update the state or display the result in the UI.
    2. Integrating the Button in the Main Component

    Now, in your main component (or wherever you’d like to place the button), integrate the RunButton:

    import React, { useState } from "react";
    import RunButton from "./path_to_RunButton_component"; // Replace with the actual path

    function App() {
    const [result, setResult] = useState<any[]>([]);

    // Define the shape content
    const shapeContent = {
    schema: `|-
    entity user {}

    entity account {
    relation owner @user
    relation following @user
    relation follower @user

    attribute public boolean
    action view = (owner or follower) or public
    }

    entity post {
    relation account @account

    attribute restricted boolean

    action view = account.view

    action comment = account.following not restricted
    action like = account.following not restricted
    }`,
    relationships: [
    "account:1#owner@user:kevin",
    "account:2#owner@user:george",
    "account:1#following@user:george",
    "account:2#follower@user:kevin",
    "post:1#account@account:1",
    "post:2#account@account:2",
    ],
    attributes: [
    "account:1$public|boolean:true",
    "account:2$public|boolean:false",
    "post:1$restricted|boolean:false",
    "post:2$restricted|boolean:true",
    ],
    scenarios: [
    {
    name: "Account Viewing Permissions",
    description:
    "Evaluate account viewing permissions for 'kevin' and 'george'.",
    checks: [
    {
    entity: "account:1",
    subject: "user:kevin",
    assertions: {
    view: true,
    },
    },
    ],
    },
    ],
    };

    return (
    <div>
    <RunButton shape={JSON.stringify(shapeContent)} onResult={setResult} />
    <div>
    Results:
    <ul>
    {result.map((item, index) => (
    <li key={index}>{item}</li>
    ))}
    </ul>
    </div>
    </div>
    );
    }

    In this example, App is a component that contains the RunButton. When the button is clicked, the result from the WebAssembly function is displayed in a list below the button.

    Throughout this exploration, the integration of WebAssembly with Go was unfolded, illuminating the pathway toward enhanced web development and optimal user interactions within browsers.

    The journey involved setting up the Go environment, converting Go code to WebAssembly, and executing it within a web context, ultimately giving life to the interactive platform showcased at play.permify.co.

    This platform stands not only as an example but also as a beacon, illustrating the concrete and potent capabilities achievable when intertwining these technological domains.



    Source link

  • Python – Data Wrangling with Excel and Pandas – Useful code



    Data wrangling with Excel and Pandas is quite a useful tool in the belt of any Excel professional, financial professional, data analyst, or developer. Really, everyone can benefit from well-defined libraries that make people's lives easier. These are the libraries used:

    Additionally, a function for making a unique Excel name is used:
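
    A minimal sketch of what such an import block and helper usually look like (assuming pandas with openpyxl as the Excel engine and a timestamp-based file name; the exact code from the post may differ):

    import os
    from datetime import datetime

    import pandas as pd  # openpyxl is assumed to be installed as the Excel engine


    def unique_excel_name(prefix="report", folder="."):
        # Append a timestamp so repeated exports never overwrite each other.
        stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
        return os.path.join(folder, f"{prefix}_{stamp}.xlsx")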

    An example from the video, where Jupyter Notebook is used.

    In the YouTube video below, the following 8 tricks are discussed (a brief sketch of the first two follows the list):

    # Trick 1 – Simple reading of worksheet from Excel workbook

    # Trick 2 – Combine Reports

    # Trick 3 – Fix Missing Values

    # Trick 4 – Formatting the exported Excel file

    # Trick 5 – Merging Excel Files

    # Trick 6 – Smart Filtering

    # Trick 7 – Merging Tables

    # Trick 8 – Export Dataframe to Excel
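
    As a rough illustration of Trick 1 and Trick 2 (a sketch only; file and sheet names are placeholders, not the ones from the repository):

    import glob

    import pandas as pd

    # Trick 1 - read a single worksheet from an Excel workbook
    sales = pd.read_excel("sales.xlsx", sheet_name="January")

    # Trick 2 - combine several monthly reports into one DataFrame
    reports = [pd.read_excel(f) for f in glob.glob("reports/*.xlsx")]
    combined = pd.concat(reports, ignore_index=True)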

    The whole code, together with the Excel files, is available on GitHub here.

    https://www.youtube.com/watch?v=SXXc4WySZS4

    Enjoy it!




  • Matrix Sentinels: Building Dynamic Particle Trails with TSL

    Matrix Sentinels: Building Dynamic Particle Trails with TSL


    While experimenting with particle systems, I challenged myself to create particles with tails, similar to snakes moving through space. At first, I didn’t have access to TSL, so I tested basic ideas, like using noise derivatives and calculating previous steps for each particle, but none of them worked as expected.

    I spent a long time pondering how to make it work, but all my solutions involved heavy testing with WebGL and GPGPU, which seemed like it would require too much code for a simple proof of concept. That’s when TSL (Three.js Shader Language) came into play. With its Compute Shaders, I was able to compute arrays and feed the results into materials, making it easier to test ideas quickly and efficiently. This allowed me to accomplish the task without much time lost.

    Now, let’s dive into the step-by-step process of building the particle system, from setting up the environment to creating the trails and achieving that fluid movement.

    Step 1: Set Up the Particle System

    First, we’ll define the necessary uniforms that will be used to create and control the particles in the system.

    uniforms = {
        color: uniform( new THREE.Color( 0xffffff ).setRGB( 1, 1, 1 ) ),
        size: uniform( 0.489 ),
    
        uFlowFieldInfluence: uniform( 0.5 ),
        uFlowFieldStrength: uniform( 3.043 ),
        uFlowFieldFrequency: uniform( 0.207 ),
    }

    Next, create the variables that will define the parameters of the particle system. The “tails_count” variable determines how many segments each snake will have, while the “particles_count” defines the total number of segments in the scene. The “story_count” variable represents the number of frames used to store the position data for each segment. Increasing this value will increase the distance between segments, as we will store the position history of each one. The “story_snake” variable holds the history of one snake, while “full_story_length” stores the history for all snakes. These variables will be enough to bring the concept to life.

    tails_count = 7 // n-1 tail points per snake
    particles_count = this.tails_count * 200 // must be a multiple of tails_count
    story_count = 5 // frames of position history stored per segment
    story_snake = this.tails_count * this.story_count // history length for one snake
    full_story_length = ( this.particles_count / this.tails_count ) * this.story_snake // history for all snakes

    Next, we need to create the buffers required for the computational shaders. The most important buffer to focus on is the “positionStoryBuffer,” which will store the position history of all segments. To understand how it works, imagine a train: the head of the train sets the direction, and the cars follow in the same path. By saving the position history of the head, we can use that data to determine the position of each car by referencing its position in the history.

    const positionsArray = new Float32Array( this.particles_count * 3 )
    const lifeArray = new Float32Array( this.particles_count )
    
    const positionInitBuffer = instancedArray( positionsArray, 'vec3' );
    const positionBuffer = instancedArray( positionsArray, 'vec3' );
    
    // Tails
    const positionStoryBuffer = instancedArray( new Float32Array( this.particles_count * this.tails_count * this.story_count ), 'vec3' );
    
    const lifeBuffer = instancedArray( lifeArray, 'float' );

    Now, let’s create the particle system with a material. I chose a standard material because it allows us to use an emissiveNode, which will interact with Bloom effects. For each segment, we’ll use a sphere and disable frustum culling to ensure the particles don’t accidentally disappear off the screen.

    const particlesMaterial = new THREE.MeshStandardNodeMaterial( {
        metalness: 1.0,
        roughness: 0
    } );
        
    particlesMaterial.emissiveNode = color(0x00ff00)
    
    const sphereGeometry = new THREE.SphereGeometry( 0.1, 32, 32 );
    
    const particlesMesh = this.particlesMesh = new THREE.InstancedMesh( sphereGeometry, particlesMaterial, this.particles_count );
    particlesMesh.instanceMatrix.setUsage( THREE.DynamicDrawUsage );
    particlesMesh.frustumCulled = false;
    
    this.scene.add( this.particlesMesh )
    

    Step 2: Initialize Particle Positions

    To initialize the positions of the particles, we’ll use a computational shader to reduce CPU usage and speed up page loading. We randomly generate the particle positions, which form a pseudo-cube shape. To keep the particles always visible on screen, we assign them a lifetime after which they disappear and won’t reappear from their starting positions. The “cycleStep” helps us assign each snake its own random positions, ensuring the tails are generated in the same location as the head. Finally, we send this data to the computation process.

    const computeInit = this.computeInit = Fn( () => {
        const position = positionBuffer.element( instanceIndex )
        const positionInit = positionInitBuffer.element( instanceIndex );
        const life = lifeBuffer.element( instanceIndex )
    
        // Position
        position.xyz = vec3(
            hash( instanceIndex.add( uint( Math.random() * 0xffffff ) ) ),
            hash( instanceIndex.add( uint( Math.random() * 0xffffff ) ) ),
            hash( instanceIndex.add( uint( Math.random() * 0xffffff ) ) )
        ).sub( 0.5 ).mul( vec3( 5, 5, 5 ) );
    
        // Copy Init
        positionInit.assign( position )
    
        const cycleStep = uint( float( instanceIndex ).div( this.tails_count ).floor() )
    
        // Life
        const lifeRandom = hash( cycleStep.add( uint( Math.random() * 0xffffff ) ) )
        life.assign( lifeRandom )
    
    } )().compute( this.particles_count );
    
    this.renderer.computeAsync( this.computeInit ).then( () => {
        this.initialCompute = true
    } )
    Initialization of particle position

    Step 3: Compute Position History

    For each frame, we compute the position history for each segment. The key aspect of the “computePositionStory” function is that new positions are recorded only from the head of the snake, and all positions are shifted one step forward using a queue algorithm.

    const computePositionStory = this.computePositionStory = Fn( () => {
        const positionStory = positionStoryBuffer.element( instanceIndex )
    
        const cycleStep = instanceIndex.mod( uint( this.story_snake ) )
        const lastPosition = positionBuffer.element( uint( float( instanceIndex.div( this.story_snake ) ).floor().mul( this.tails_count ) ) )
    
        If( cycleStep.equal( 0 ), () => { // Head
            positionStory.assign( lastPosition )
        } )
    
        positionStoryBuffer.element( instanceIndex.add( 1 ) ).assign( positionStoryBuffer.element( instanceIndex ) )
    
    } )().compute( this.full_story_length );

    Step 4: Update Particle Positions

    Next, we update the positions of all particles, taking into account the recorded history of their positions. First, we use simplex noise to generate the new positions of the particles, allowing our snakes to move smoothly through space. Each particle also has its own lifetime, during which it moves and eventually resets to its original position. The key part of this function is determining which particle is the head and which is the tail. For the head, we generate a new position based on simplex noise, while for the tail, we use positions from the saved history.

    const computeUpdate = this.computeUpdate = Fn( () => {
    
        const position = positionBuffer.element( instanceIndex )
        const positionInit = positionInitBuffer.element( instanceIndex )
    
        const life = lifeBuffer.element( instanceIndex );
    
        const _time = time.mul( 0.2 )
    
        const uFlowFieldInfluence = this.uniforms.uFlowFieldInfluence
        const uFlowFieldStrength = this.uniforms.uFlowFieldStrength
        const uFlowFieldFrequency = this.uniforms.uFlowFieldFrequency
    
        If( life.greaterThanEqual( 1 ), () => {
            life.assign( life.mod( 1 ) )
            position.assign( positionInit )
    
        } ).Else( () => {
            life.addAssign( deltaTime.mul( 0.2 ) )
        } )
    
        // Strength
        const strength = simplexNoise4d( vec4( position.mul( 0.2 ), _time.add( 1 ) ) ).toVar()
        const influence = uFlowFieldInfluence.sub( 0.5 ).mul( -2.0 ).toVar()
        strength.assign( smoothstep( influence, 1.0, strength ) )
    
        // Flow field
        const flowField = vec3(
            simplexNoise4d( vec4( position.mul( uFlowFieldFrequency ).add( 0 ), _time ) ),
            simplexNoise4d( vec4( position.mul( uFlowFieldFrequency ).add( 1.0 ), _time ) ),
            simplexNoise4d( vec4( position.mul( uFlowFieldFrequency ).add( 2.0 ), _time ) )
        ).normalize()
    
        const cycleStep = instanceIndex.mod( uint( this.tails_count ) )
    
        If( cycleStep.equal( 0 ), () => { // Head
            const newPos = position.add( flowField.mul( deltaTime ).mul( uFlowFieldStrength ) /* * strength */ )
            position.assign( newPos )
        } ).Else( () => { // Tail
            const prevTail = positionStoryBuffer.element( instanceIndex.mul( this.story_count ) )
            position.assign( prevTail )
        } )
    
    } )().compute( this.particles_count );

    To display the particle positions, we’ll create a simple function called “positionNode.” This function will not only output the positions but also apply a slight magnification effect to the head of the snake.

    particlesMaterial.positionNode = Fn( () => {
        const position = positionBuffer.element( instanceIndex );
    
        const cycleStep = instanceIndex.mod( uint( this.tails_count ) )
        const finalSize = this.uniforms.size.toVar()
    
        If( cycleStep.equal( 0 ), () => {
            finalSize.addAssign( 0.5 )
        } )
    
        return positionLocal.mul( finalSize ).add( position )
    } )()

    The final step is to run these compute passes on each frame.

    async update( deltaTime ) {
    
        // Compute update
        if( this.initialCompute) {
            await this.renderer.computeAsync( this.computePositionStory )
            await this.renderer.computeAsync( this.computeUpdate )
        }
    }

    Conclusion

    Now, you should be able to easily create position history buffers for other problem-solving tasks, and with TSL, this process becomes quick and efficient. I believe this project has potential for further development, such as transferring position data to model bones. This could enable the creation of beautiful, flying dragons or similar effects in 3D space. For this, a custom bone structure tailored to the project would be needed.




  • Easy logging management with Seq and ILogger in ASP.NET | Code4IT

    Easy logging management with Seq and ILogger in ASP.NET | Code4IT


    Seq is one of the best Log Sinks out there: it’s easy to install and configure, and can be added to an ASP.NET application with just a line of code.


    Logging is one of the most essential parts of any application.

    Wouldn’t it be great if we could scaffold and use a logging platform with just a few lines of code?

    In this article, we are going to learn how to install and use Seq as a destination for our logs, and how to make an ASP.NET 8 API application send its logs to Seq by using the native logging implementation.

    Seq: a sink and dashboard to manage your logs

    In the context of logging management, a “sink” is a receiver of the logs generated by one or many applications; it can be a cloud-based system, but it’s not mandatory: even a file on your local file system can be considered a sink.

    Seq is a Sink, and works by exposing a server that stores logs and events generated by an application. Clearly, other than just storing the logs, Seq allows you to view them, access their details, perform queries over the collection of logs, and much more.

    It’s free for individual use, and comes with several pricing plans depending on usage and the size of the team.

    Let’s start small and install the free version.

    We have two options:

    1. Download it locally, using an installer (here’s the download page);
    2. Use Docker: pull the datalust/seq image locally and run the container on your Docker engine.

    Both ways will give you the same result.

    However, if you already have experience with Docker, I suggest you use the second approach.

    Once you have Docker installed and running locally, open a terminal.

    First, you have to pull the Seq image locally (I know, it’s not mandatory, but I prefer doing it in a separate step):
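
    docker pull datalust/seq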

    Then, when you have it downloaded, you can start a new instance of Seq locally, exposing the UI on a specific port.

    docker run --name seq -d --restart unless-stopped -e ACCEPT_EULA=Y -p 5341:80 datalust/seq:latest
    

    Let’s break down the previous command:

    • docker run: This command is used to create and start a new Docker container.
    • --name seq: This option assigns the name seq to the container. Naming containers can make them easier to manage.
    • -d: This flag runs the container in detached mode, meaning it runs in the background.
    • --restart unless-stopped: This option ensures that the container will always restart unless it is explicitly stopped. This is useful for ensuring that the container remains running even after a reboot or if it crashes.
    • -e ACCEPT_EULA=Y: This sets an environment variable inside the container. In this case, it sets ACCEPT_EULA to Y, which likely indicates that you accept the End User License Agreement (EULA) for the software running in the container.
    • -p 5341:80: This maps port 5341 on your host machine to port 80 in the container. This allows you to access the service running on port 80 inside the container via port 5341 on your host.
    • datalust/seq:latest: This specifies the Docker image to use for the container. datalust/seq is the image name, and latest is the tag, indicating that you want to use the latest version of this image.

    So, this command runs a container named seq in the background, ensures it restarts unless stopped, sets an environment variable to accept the EULA, maps a host port to a container port, and uses the latest version of the datalust/seq image.

    It’s important to pay attention to the port used: by default, Seq uses port 5341 for both the UI and the API. If you prefer to use another port, feel free to do that – just remember that you’ll need some additional configuration.

    Now that Seq is installed on your machine, you can access its UI. Guess what? It’s on localhost:5341!

    Seq brand new instance

    However, Seq is “just” a container for our logs – we still have to produce them.

    A sample ASP.NET API project

    I’ve created a simple API project that exposes CRUD operations for a data model stored in memory (we don’t really care about the details).

    [ApiController]
    [Route("[controller]")]
    public class BooksController : ControllerBase
    {
        // In-memory catalogue of books; how it is populated is not relevant here.
        private readonly List<Book> booksCatalogue = new();

        public BooksController()
        {
        }

        [HttpGet("{id}")]
        public ActionResult<Book> GetBook([FromRoute] int id)
        {
            Book? book = booksCatalogue.SingleOrDefault(x => x.Id == id);
            return book switch
            {
                null => NotFound(),
                _ => Ok(book)
            };
        }
    }
    

    As you can see, the details here are not important.

    Even the Main method is the default one:

    var builder = WebApplication.CreateBuilder(args);
    
    builder.Services.AddControllers();
    
    builder.Services.AddEndpointsApiExplorer();
    builder.Services.AddSwaggerGen();
    
    var app = builder.Build();
    
    if (app.Environment.IsDevelopment())
    {
        app.UseSwagger();
        app.UseSwaggerUI();
    }
    
    app.UseHttpsRedirection();
    
    app.MapControllers();
    
    app.Run();
    

    We have the Controllers, we have Swagger… well, nothing fancy.

    Let’s mix it all together.

    How to integrate Seq with an ASP.NET application

    If you want to use Seq in an ASP.NET application (be it an API application or anything else), you have to add it to the startup pipeline.

    First, you have to install the proper NuGet package: Seq.Extensions.Logging.

    The Seq.Extensions.Logging NuGet package

    Then, you have to add it to your Services, calling the AddSeq() method:

    var builder = WebApplication.CreateBuilder(args);
    
    builder.Services.AddControllers();
    
    builder.Services.AddEndpointsApiExplorer();
    builder.Services.AddSwaggerGen();
    
    + builder.Services.AddLogging(lb => lb.AddSeq());
    
    var app = builder.Build();
    

    Now, Seq is ready to intercept whatever kind of log arrives at the specified port (remember, in our case, we are using the default one: 5341).
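
    If you mapped Seq to a different port, you can point the logger at it by passing the server URL to AddSeq; a minimal sketch (the URL below is just the default, adjust it to match your mapping):

    builder.Services.AddLogging(lb => lb.AddSeq("http://localhost:5341"));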

    We can try it out by adding an ILogger to the BooksController constructor:

    private readonly ILogger<BooksController> _logger;
    
    public BooksController(ILogger<BooksController> logger)
    {
        _logger = logger;
    }
    

    So that we can use the _logger instance to create logs as we want, using the necessary Log Level:

    [HttpGet("{id}")]
    public ActionResult<Book> GetBook([FromRoute] int id)
    {
        _logger.LogInformation("I am Information");
        _logger.LogWarning("I am Warning");
        _logger.LogError("I am Error");
        _logger.LogCritical("I am Critical");
    
        Book? book = booksCatalogue.SingleOrDefault(x => x.Id == id);
        return book switch
        {
            null => NotFound(),
            _ => Ok(book)
        };
    }
    

    Log messages on Seq

    Using Structured Logging with ILogger and Seq

    One of the best things about Seq is that it automatically handles Structured Logging.

    [HttpGet("{id}")]
    public ActionResult<Book> GetBook([FromRoute] int id)
    {
        _logger.LogInformation("Looking if in my collection with {TotalBooksCount} books there is one with ID {SearchedId}"
     , booksCatalogue.Count, id);
    
        Book? book = booksCatalogue.SingleOrDefault(x => x.Id == id);
        return book switch
        {
            null => NotFound(),
            _ => Ok(book)
        };
    }
    

    Have a look at this line:

    _logger.LogInformation("Looking if in my collection with {TotalBooksCount} books there is one with ID {SearchedId}"
     , booksCatalogue.Count, id);
    

    This line generates a string message, replaces all the placeholders, and, on top of that, creates two properties, SearchedId and TotalBooksCount; you can now define queries using these values.

    Structured Logs in Seq allow you to view additional logging properties

    Further readings

    I have to admit it: logging management is one of my favourite topics.

    I’ve already written a sort of introduction to Seq in the past, but at that time, I did not use the native ILogger, but Serilog, a well-known logging library that added some more functionalities on top of the native logger.

    🔗 Logging with Serilog and Seq | Code4IT

    This article first appeared on Code4IT 🐧

    In particular, Serilog can be useful for propagating Correlation IDs across multiple services so that you can fetch all the logs generated by a specific operation, even though they belong to separate applications.

    🔗 How to log Correlation IDs in .NET APIs with Serilog

    Feel free to search through my blog all the articles related to logging – I’m sure you will find interesting stuff!

    Wrapping up

    I think Seq is the best tool for local development: it’s easy to download and install, supports structured logging, and can be easily added to an ASP.NET application with just a line of code.

    I usually add it to my private projects, especially when the operations I run are complex enough to require some well-structured logs.

    Given how easy it is to install, sometimes I use it for my work projects too: when I have to fix a bug but don’t want to use the centralized logging platform (since it’s quite complex to use), I add Seq as a destination sink, run the application, and analyze the logs on my local machine. Then, of course, I remove its reference, as I want it to be just a discardable piece of configuration.

    I hope you enjoyed this article! Let’s keep in touch on LinkedIn, Twitter or BlueSky! 🤜🤛

    Happy coding!

    🐧






  • Python – Simple Stock Analysis with yfinance – Useful code

    Python – Simple Stock Analysis with yfinance – Useful code


    Sometimes stock charts are useful; sometimes they are not. In general, do your own research – none of this is financial advice.

    And while doing that, if you want to analyze stocks with just a few lines of Python, this article might help. This simple yet powerful script helps you spot potential buy and sell opportunities for Apple (AAPL) using two classic technical indicators: moving averages and RSI.

    Understanding the Strategy

    1. SMA Crossover: The Trend Following Signal

    The script first calculates two Simple Moving Averages (SMA): a shorter one and a longer one.

    The crossover strategy is simple: when the shorter SMA crosses above the longer one, the trend is considered bullish; when it crosses below, the trend is considered bearish.

    This works because moving averages smooth out price noise, helping identify the overall trend direction.

    2. RSI: The Overbought/Oversold Indicator

    The Relative Strength Index (RSI) measures whether a stock is overbought or oversold: readings above a high threshold (commonly 70) suggest overbought conditions, while readings below a low threshold (commonly 30) suggest oversold ones.

    By combining SMA crossovers (trend confirmation) and RSI extremes (timing), we get stronger signals.

    This plot is generated with fewer than 40 lines of Python code.

    The code looks like this:
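
    A minimal sketch of this approach with yfinance looks roughly like the following (window lengths and thresholds are common defaults and may differ from the ones used in the video):

    import yfinance as yf
    import matplotlib.pyplot as plt

    # One year of daily prices for Apple
    data = yf.Ticker("AAPL").history(period="1y")

    # Simple Moving Averages: a short and a long window
    data["SMA20"] = data["Close"].rolling(20).mean()
    data["SMA50"] = data["Close"].rolling(50).mean()

    # 14-day RSI
    delta = data["Close"].diff()
    gain = delta.clip(lower=0).rolling(14).mean()
    loss = -delta.clip(upper=0).rolling(14).mean()
    data["RSI"] = 100 - 100 / (1 + gain / loss)

    # Trend from the SMA crossover, timing from RSI extremes
    data["buy"] = (data["SMA20"] > data["SMA50"]) & (data["RSI"] < 30)
    data["sell"] = (data["SMA20"] < data["SMA50"]) & (data["RSI"] > 70)

    data[["Close", "SMA20", "SMA50"]].plot(figsize=(12, 6), title="AAPL")
    plt.show()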

    The full code is explained in much more detail in the YouTube video below:

    https://www.youtube.com/watch?v=m0ayASmrZmE

    And it is available on GitHub as well.


