Blog

  • [ENG] .NET 5, 6, and 7 for busy developers | Dotnet Sheff



    [ENG] .NET 5, 6, and 7 for busy developers | Dotnet Sheff



    Source link

  • Android Cryptojacker Masquerades as Banking App to Mine Cryptocurrency on Locked Devices

    Android Cryptojacker Masquerades as Banking App to Mine Cryptocurrency on Locked Devices


    The global craze around cryptocurrency has fueled both innovation and exploitation. While many legally chase digital gold, cybercriminals hijack devices to mine it covertly. Recently, we encountered a phishing website impersonating a well-known bank, hosting a fake Android app. While the app does not function like a real banking application, it uses the bank’s name and icon to mislead users. Behind the scenes, it silently performs cryptocurrency mining, abusing user devices for illicit gain.

    Cryptocurrency mining (or crypto mining) uses computing power to validate and record transactions on a blockchain network. In return, miners are rewarded with new cryptocurrency coins.

    This process involves solving complex mathematical puzzles that require significant CPU or GPU resources. While large-scale miners often use powerful rigs equipped with high-end GPUs or ASICs for maximum efficiency, individuals can also legitimately mine cryptocurrencies using personal devices like PCs or smartphones.

    Because of Google Play Store policies related to cryptocurrency mining, even legitimate apps that perform on-device mining are not allowed to be published on the Play Store. As a result, users often install such mining applications from third-party sources or unofficial app stores, which increases the risk of encountering malicious or compromised apps disguised as legitimate ones.

    Threat actors take advantage of this situation by spreading fake apps on third-party stores and websites. These malicious apps have cryptocurrency mining code embedded within them, allowing attackers to secretly use victims’ devices to mine cryptocurrency for their own benefit.

    By legitimate cryptocurrency mining apps, we mean apps that disclose mining activities, obtain user consent, and ensure that the mining profits go directly to the user. In contrast, cryptocurrency mining malware, also known as cryptojackers, mines secretly and without permission, hijacking device resources so that the attacker keeps all the profits.

    What Are the Effects of Mining Malware (cryptojackers) Installed on an Android Device?

    • Battery Drain: The mining process involves constant, intensive CPU usage, which leads to rapid battery depletion.
    • Overheating: Continuous computations generate excessive heat, significantly increasing the device’s temperature.
    • Potential Hardware Damage: Prolonged overheating and stress may cause irreversible damage to internal components like the battery, CPU, or motherboard.
    • High Data Usage: Cryptocurrency mining applications communicate frequently with mining pools, leading to unexpected data usage.
    • Performance Lag: The app consumes processing power, making the device slow, laggy, or unresponsive.

    In a recent case, the phishing site (getxapp[.]in) impersonates Axis Bank and hosts a fake application called Axis Card. The malware author has embedded XMRig to perform cryptocurrency mining in the background. XMRig is open-source cryptocurrency mining software designed to mine Monero and other coins.

    Figure 1. Phishing Site

    Figure 2 illustrates the attack flow of this campaign. The user initially downloads the malware-laced application either from a phishing site or through social media platforms like WhatsApp. Upon execution, the app displays a fake update screen but provides no actual functionality, causing the user to ignore it.

    In the background, however, the malware begins monitoring the device’s status, particularly the battery level and screen lock state. Once the device is locked, the malicious app silently downloads an encrypted .so payload, decrypts it, and initiates cryptomining activity.

    If the user unlocks the device, the mining process immediately halts, and the malware returns to the monitoring phase—waiting for the next lock event. This lock–unlock loop allows the miner to operate stealthily and persistently. Over time, this prolonged background mining can lead to excessive heat, battery drain, and permanent hardware damage to the device.

    Figure 2. Attack flow of this malware application

    Technical analysis:

    Figure 3 shows details of the malware application hosted on this fake website.

    Figure 3. File information

    Figure 4 highlights the permissions declared by the application in its manifest file. Generally, Android mining applications require only the android.permission.INTERNET permission, as it allows them to connect to remote mining servers and carry out operations over the network. This permission is no longer classified as dangerous and is automatically granted by the Android system without requiring explicit user consent.

    Many miner apps also request the WAKE_LOCK permission to prevent the device from sleeping, ensuring uninterrupted mining activity even when the screen is off. Additionally, miners often use the android.intent.action.BOOT_COMPLETED broadcast to automatically restart after a device reboot, thereby maintaining persistence.

    In this case, the application requests Internet permission along with a few other suspicious permissions.

    Figure 4. Permissions declared by the malware in its AndroidManifest file

     Malware execution

    The app begins by asking for permission to run in the background, which is commonly abused in mining operations to stay active without user interaction. It then displays a fake update screen claiming new features have been added, with a prominent UPDATE button. Clicking the button shows an Install prompt, but instead of installing anything, it ends with a message saying the installer has expired. Interestingly, the app declares the REQUEST_INSTALL_PACKAGES permission, suggesting it intends to install another APK. However, no actual installation occurs, indicating the entire update flow is likely staged for deception or redirection.

    Figure 5. Application execution flow

    In the background, the malware repeatedly attempts to download a malicious binary from one of several hardcoded URLs. These URLs point to platforms such as GitHub, Cloudflare Pages, and a custom domain (uasecurity[.]org), all of which are used to host the miner payload. Figure 6 illustrates this behavior.

    Figure 6. Code used to download the payload binary

    Figure 7 shows a screenshot of the GitHub repository hxxps[:]//github[.]com/backend-url-provider/access, which is used to host the miner payloads libmine-arm32.so and libmine-arm64.so. Both files are encrypted to evade static detection and hinder analysis.

    Figure 7. Screenshot of the GitHub page hosting the payload binary.

    The malware first decrypts the downloaded binary using an AES algorithm (Figure 8). In the next step (Figure 9), the decrypted binary is written to a file named d-miner within the app’s private storage. Once written, the file is marked as executable.

    Figure 8. Payload decryption code
    Figure 9. Decrypted payload saved as the d-miner file

    To decrypt the retrieved payload, a custom Java-based decryption method was used. Figure 10 confirms that the resulting .so file is based on or directly derived from XMRig’s Android build. The extracted strings reference internal configuration paths, usage instructions, version details, and mining-related URLs. These artifacts clearly validate that the primary purpose of this native library is CPU-based cryptomining.

    Figure 10. Strings view from the .so file opened in JEB showing references to XMRig.

    Figure 11 illustrates the method NMuU8KNchX5bP8Oy(), which constructs the command-line arguments required by the XMRig miner for execution. It attempts to connect directly to the Monero mining pool at pool.uasecurity.org:9000, or alternatively to a proxy pool at pool-proxy.uasecurity.org:9000, depending on availability.

    Figure 11: XMRig initializer code

    After determining the working pool endpoint, the method constructs and returns an array of command-line arguments used to launch an XMRig miner with the following configuration:

    • -o <pool>: The mining pool endpoint (direct or proxy)
    • -k: Keepalive flag
    • --tls: Enable TLS encryption
    • -u <wallet>: Monero wallet address where mined coins are sent
    • --coin monero: Specifies the coin
    • -p <password>: Generated from the current date and a UUID
    • --nicehash: Adjusts the mining strategy for NiceHash compatibility

    The code shown in Figure 12 demonstrates how the d-miner execution is initiated. First, it calls NMuU8KNchX5bP8Oy() to retrieve the arguments. Second, it obtains the path to the d-miner file. Finally, it executes d-miner using the retrieved arguments and file path.

    Figure 12. Code used to start d-miner execution

    The following code snippet is responsible for uploading the report.txt file generated by the malware. This file captures the stdout output of the XMRig mining process, providing insight into the miner’s execution and activity.

    Figure 13. Code used to upload report.txt

     Logcat Reveals the Complete Picture:

    The malware author has logged every action performed by the application, including the standard output (stdout) data it sends to the mining pool, making Logcat a valuable source for understanding the malware’s full behavior.

    Periodic Device Monitoring:

    Upon execution, the app checks—every 5 seconds—the battery level, charging status, recent installation status, and whether the device is locked. (See Figure 14)

    Figure 14. Logcat_screenshot_1

    Mining Triggered on Device Lock:

    As soon as the device is locked (i.e., isDeviceLocked becomes true), the malware initiates its mining process. It connects to a Monero mining pool (pool.uasecurity.org) over TLS and receives a mining job using the RandomX algorithm. The malware then allocates approximately 2.3 GB of RAM and starts mining using 8 CPU threads. (See Figure 15)

    Figure 15. Logcat_screenshot_2

    Mining Stops on Device Unlock:

    As soon as the device is unlocked, the malware halts its mining activity and transitions into a monitoring state. (See Figure 16)

    Figure 16. Logcat_screenshot_3

    Mining Resumes on Device Lock:

    Once the device is locked again, the malware resumes mining activity. (See Figure 17)

    Figure 17. Logcat_screenshot_4

    Effect on the device

    The malware significantly strains the device by consuming high CPU and memory resources, leading to overheating and degraded performance.

    The top command output clearly shows the d-miner process running under the app’s user (u0_a606), consuming over 746% CPU and 27.5% memory. (See Figure 18) This confirms continuous cryptomining activity in the background, heavily impacting device performance.

    Figure 18. Increased CPU usage

    Figure 19 shows how the device temperature rises steadily over a 30-minute span while the phone remained locked, increasing from 32.0 °C to 45.0 °C. This gradual rise confirms that the miner continues to operate in the background, causing sustained CPU usage and abnormal heat buildup even when the device is idle.

    Figure 19. Increased device temperature

    Prolonged activity may damage the device’s hardware or battery and pose safety risks if left unnoticed.

    MITRE ATT&CK Tactics and Techniques:

    Figure 20

    Quick Heal Detection of Android Malware

    Quick Heal detects such malicious applications as variants of Android.Dminer.A.

    It is recommended that all mobile users install a trusted antivirus like “Quick Heal Mobile Security for Android” to mitigate such threats and stay protected. Our antivirus software restricts users from downloading malicious applications on their mobile devices. Download your Android protection here

    Conclusion:

    This campaign highlights how threat actors abuse trusted banking names like Axis Bank to distribute malware through phishing sites. The malware embeds XMRig, a cryptocurrency miner that runs silently in the background, leading to excessive CPU usage, abnormal heating, and potential long-term hardware damage. Beyond phishing sites, such malware can also spread via social media platforms, often disguised under familiar or reputable names to trick users. This reinforces the importance of user awareness, cautious app installation behavior, and robust mobile security solutions to defend against such threats.

    IOCs:

    Figure 21

    URLs:

    hxxps://getxapp[.]in

    hxxps://accessor.pages[.]dev

    hxxps://uasecurity[.]org/

    hxxps://github[.]com/backend-url-provider/access/raw/refs/heads/main/

    Mining pool domains:

    pool.uasecurity[.]org

    pool-proxy.uasecurity[.]org

    Wallet address: 44DhRjPJrQeNDqomajQjBvdD39UiQvoeh67ABYSWMZWEWKCB3Tzhvtw2jB9KC3UARF1gsBuhvEoNEd2qSDz76BYEPYNuPKD

     

    TIPS TO STAY DIGITALLY SAFE: 

    • Download applications only from trusted sources like Google Play Store.
    • Do not click on any links received through messages or other social media platforms, as they may intentionally or inadvertently point to malicious sites.
    • Read the pop-up messages you get from the Android system before accepting or allowing any new permissions.
    • Be extremely cautious about what applications you download on your phone, as malware authors can easily spoof the original applications’ names, icons, and developer details.
    • For enhanced protection of your phone, always use a good antivirus like Quick Heal Mobile Security for Android.

    Don’t wait! Secure your smartphones today with Quick Heal Total Security for Mobiles & Smartphones – Buy or Renew Today!



    Source link

  • How to automatically refresh configurations with Azure App Configuration in ASP.NET Core | Code4IT

    How to automatically refresh configurations with Azure App Configuration in ASP.NET Core | Code4IT


    ASP.NET allows you to poll Azure App Configuration to always get the most updated values without restarting your applications. It’s simple, but you have to think thoroughly.


    In a previous article, we learned how to centralize configurations using Azure App Configuration, a service provided by Azure to share configurations in a secure way. Using Azure App Configuration, you’ll be able to store the most critical configurations in a single place and apply them to one or more environments or projects.

    We used a very simple example with a limitation: you have to restart your applications to make changes effective. In fact, ASP.NET connects to Azure App Config, loads the configurations in memory, and serves these configs until the next application restart.

    In this article, we’re gonna learn how to make configurations dynamic: by the end of this article, we will be able to see the changes to our configs reflected in our applications without restarting them.

    Since this one is a kind of improvement of the previous article, you should read it first.

    Let me summarize here the code showcased in the previous article. We have an ASP.NET Core API application whose only purpose is to return the configurations stored in an object, whose shape is this one:

    {
      "MyNiceConfig": {
        "PageSize": 6,
        "Host": {
          "BaseUrl": "https://www.mydummysite.com",
          "Password": "123-go"
        }
      }
    }
    

    In the constructor of the API controller, I injected an IOptions<MyConfig> instance that holds the current data stored in the application.

     public ConfigDemoController(IOptions<MyConfig> config)
            => _config = config;
    

    The only HTTP Endpoint is a GET: it just accesses that value and returns it to the client.

    [HttpGet()]
    public IActionResult Get()
    {
        return Ok(_config.Value);
    }
    

    Finally, I created a new instance of Azure App Configuration, and I used a connection string to integrate Azure App Configuration with the existing configurations by calling:

    builder.Configuration.AddAzureAppConfiguration(ConnectionString);
    

    Now we can move on and make configurations dynamic.

    Sentinel values: a guard value to monitor changes in the configurations

    On Azure App Configuration, you have to update the configurations manually, one by one. Unfortunately, there is no way to update them in a single batch: you can import them in a batch, but you have to update them individually.

    Imagine that you have a service that accesses an external API whose BaseUrl and API Key are stored on Az App Configuration. We now need to move to another API: we then have to update both BaseUrl and API Key. The application is running, and we want to update the info about the external API. If we updated the application configurations every time something is updated on Az App Configuration, we would end up with an invalid state – for example, we would have the new BaseUrl and the old API Key.

    Therefore, we have to define a configuration value that acts as a sort of versioning key for the whole list of configurations. In Azure App Configuration’s jargon, it’s called Sentinel.

    A Sentinel is nothing but a version key: it’s a string value that the application uses to understand whether it needs to reload the whole list of configurations. Since it’s just a string, you can set any value, as long as it changes over time. My suggestion is to use the UTC timestamp of the moment you updated the values, such as 202306051522. This way, in case of errors, you can tell when any of these values last changed (but not which ones), and, depending on the pricing tier you are using, you can compare the current values with the previous ones.

    So, head back to the Configuration Explorer page and add a new value: I called it Sentinel.

    Sentinel value on Azure App Configuration

    As I said, you can use any value. For the sake of this article, I’m gonna use a simple number.

    Define how to refresh configurations using ASP.NET Core app startup

    We can finally move to the code!

    If you recall, in the previous article we added a NuGet package, Microsoft.Azure.AppConfiguration.AspNetCore, and then we added Azure App Configuration as a configurations source by calling

    builder.Configuration.AddAzureAppConfiguration(ConnectionString);
    

    That instruction is used to load all the configurations, without managing polling and updates. Therefore, we must remove it.

    Instead of that instruction, add this other one:

    builder.Configuration.AddAzureAppConfiguration(options =>
    {
        options
        .Connect(ConnectionString)
        .Select(KeyFilter.Any, LabelFilter.Null)
        // Configure to reload configuration if the registered sentinel key is modified
        .ConfigureRefresh(refreshOptions =>
                  refreshOptions.Register("Sentinel", label: LabelFilter.Null, refreshAll: true)
            .SetCacheExpiration(TimeSpan.FromSeconds(3))
          );
    });
    

    Let’s deep dive into each part:

    options.Connect(ConnectionString) just tells ASP.NET that the configurations must be loaded from that specific connection string.

    .Select(KeyFilter.Any, LabelFilter.Null) loads all keys that have no Label;

    and, finally, the most important part:

    .ConfigureRefresh(refreshOptions =>
                refreshOptions.Register(key: "Sentinel", label: LabelFilter.Null, refreshAll: true)
          .SetCacheExpiration(TimeSpan.FromSeconds(3))
        );
    

    Here we are specifying that all values must be refreshed (refreshAll: true) when the key named "Sentinel" (key: "Sentinel") is updated. Then, the retrieved values are cached for 3 seconds (SetCacheExpiration(TimeSpan.FromSeconds(3))).

    Here I used 3 seconds as a refresh time. This means that, if the application is used continuously, the application will poll Azure App Configuration every 3 seconds – it’s clearly a bad idea! So, pick the correct value depending on the change expectations. The default value for cache expiration is 30 seconds.

    Notice that the previous instruction adds Azure App Configuration to the Configuration object, and not as a service used by .NET. In fact, the method is builder.Configuration.AddAzureAppConfiguration. We need two more steps.

    First of all, add Azure App Configuration to the IServiceCollection object:

    builder.Services.AddAzureAppConfiguration();
    

    Finally, we have to add it to our existing middlewares by calling

    app.UseAzureAppConfiguration();
    

    The final result is this:

    public static void Main(string[] args)
    {
        var builder = WebApplication.CreateBuilder(args);
    
        const string ConnectionString = "......";
    
        // Load configuration from Azure App Configuration
        builder.Configuration.AddAzureAppConfiguration(options =>
        {
            options.Connect(ConnectionString)
                    .Select(KeyFilter.Any, LabelFilter.Null)
                    // Configure to reload configuration if the registered sentinel key is modified
                    .ConfigureRefresh(refreshOptions =>
                        refreshOptions.Register(key: "Sentinel", label: LabelFilter.Null, refreshAll: true)
                        .SetCacheExpiration(TimeSpan.FromSeconds(3)));
        });
    
        // Add the service to IServiceCollection
        builder.Services.AddAzureAppConfiguration();
    
        builder.Services.AddControllers();
        builder.Services.Configure<MyConfig>(builder.Configuration.GetSection("MyNiceConfig"));
    
        var app = builder.Build();
    
        // Add the middleware
        app.UseAzureAppConfiguration();
    
        app.UseHttpsRedirection();
    
        app.MapControllers();
    
        app.Run();
    }
    

    IOptionsMonitor: accessing and monitoring configuration values

    It’s time to run the project and look at the result: some of the values are coming from Azure App Configuration.

    Default config coming from Azure App Configuration

    Now we can update them: without restarting the application, update the PageSize value, and don’t forget to update the Sentinel too. Call again the endpoint, and… nothing happens! 😯

    This is because in our controller we are using IOptions<T> instead of IOptionsMonitor<T>. As we’ve learned in a previous article, IOptionsMonitor<T> is a singleton instance that always gets the most updated config values. It also emits an event when the configurations have been refreshed.

    So, head back to the ConfigDemoController, and replace the way we retrieve the config:

    [ApiController]
    [Route("[controller]")]
    public class ConfigDemoController : ControllerBase
    {
        private readonly IOptionsMonitor<MyConfig> _config;
    
        public ConfigDemoController(IOptionsMonitor<MyConfig> config)
        {
            _config = config;
            _config.OnChange(Update);
        }
    
        [HttpGet()]
        public IActionResult Get()
        {
            return Ok(_config.CurrentValue);
        }
    
        private void Update(MyConfig arg1, string? arg2)
        {
          Console.WriteLine($"Configs have been updated! PageSize is {arg1.PageSize}, " +
                    $" Password is {arg1.Host.Password}");
        }
    }
    

    When using IOptionsMonitor<T>, you can retrieve the current values of the configuration object by accessing the CurrentValue property. Also, you can define an event listener to be attached to the OnChange event.

    We can finally run the application and update the values on Azure App Configuration.

    Again, update one of the values, update the sentinel, and wait. After 3 seconds, you’ll see a message popping up in the console: it’s the text defined in the Update method.

    Then, call the endpoint again (without restarting the application), and admire the updated values!

    You can see a live demo here:

    Demo of configurations refreshed dynamically

    As you can see, the first call after updating the Sentinel value still returns the old values. But, in the meantime, the cache expires and the configurations are reloaded, so the next call returns the updated values from Azure.

    My 2 cents on timing

    As we’ve learned, the config values are stored in a memory cache with an expiration time. Every time the cache expires, we need to retrieve the configurations from Azure App Configuration again (in particular, by checking whether the Sentinel value has been updated in the meantime). Don’t underestimate the cache expiration value, as each choice has pros and cons:

    • a short timespan keeps the values always up to date, making your application more reactive to changes. However, it also means polling the Azure App Configuration endpoints more often, making your application busier and hitting request-count limits sooner;
    • a long timespan makes your application more performant because there are fewer requests to the Configuration endpoints, but it also means that changes applied on Azure take a while to reach your application.

    There is also another issue with long timespans: if the same configurations are used by different services, you might end up in a dirty state. Say that you have UserService and PaymentService, and both use some configurations stored on Azure whose caching expiration is 10 minutes. Now, the following actions happen:

    1. UserService starts
    2. PaymentService starts
    3. Someone updates the values on Azure
    4. UserService restarts, while PaymentService doesn’t.

    We will end up in a situation where UserService has the most updated values, while PaymentService doesn’t. There will be a time window (in our example, up to 10 minutes) in which the configurations are misaligned.

    Also, take costs and limitations into consideration: with the Free tier you have 1,000 requests per day, while with the Standard tier you have 30,000 per hour per replica. Using the default cache expiration (30 seconds) in an application with a continuous flow of users means calling the endpoint 2,880 times per day (2 times per minute × 1,440 minutes per day): way more than the Free tier allows.

    So, think thoroughly before choosing an expiration time!

    Further readings

    This article is a continuation of a previous one, and I suggest you read the other one to understand how to set up Azure App Configuration and how to integrate it in an ASP.NET Core API application in case you don’t want to use dynamic configuration.

    🔗 Azure App Configuration and ASP.NET Core API: a smart and secure way to manage configurations | Code4IT

    This article first appeared on Code4IT 🐧

    Also, we learned that using IOptions we are not getting the most updated values: in fact, we need to use IOptionsMonitor. Check out this article to understand the other differences in the IOptions family.

    🔗 Understanding IOptions, IOptionsMonitor, and IOptionsSnapshot in ASP.NET Core | Code4IT

    Finally, I briefly talked about pricing. As of July 2023, there are just 2 pricing tiers, with different limitations.

    🔗 App Configuration pricing | Microsoft Learn

    Wrapping up

    In my opinion, smart configuration handling is essential for the hard times when you have to understand why an error is happening only in a specific environment.

    Centralizing configurations is a good idea, as it allows developers to simulate a whole environment by just changing a few values on the application.

    Making configurations live without restarting your applications manually can be a good idea, but you have to analyze it thoroughly.

    I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛

    Happy coding!

    🐧





    Source link

  • Designing Momentum: The Story Behind Meet Your Legend

    Designing Momentum: The Story Behind Meet Your Legend



    We partnered with Meet Your Legend to bring their groundbreaking vision to life — a mentorship platform that seamlessly blends branding, UI/UX, full-stack development, and immersive digital animation.

    Meet Your Legend isn’t just another online learning platform. It’s a bridge between generations of creatives. Focused on VFX, animation, and video game production, it connects aspiring talent — whether students, freelancers, or in-house studio professionals — with the industry’s most accomplished mentors. These are the legends behind the scenes: lead animators, FX supervisors, creative directors, and technical wizards who’ve shaped some of the biggest productions in modern entertainment.

    Our goal? To create a vivid digital identity and interactive platform that captures three core qualities:

    • The energy of creativity
    • The precision of industry-level expertise
    • The dynamism of motion graphics and storytelling

    At the heart of everything was a single driving idea: movement. Not just visual movement — but career momentum, the transfer of knowledge, and the emotional propulsion behind creativity itself.

    We built the brand identity around the letter “M” — stylized with an elongated tail that represents momentum, legacy, and forward motion. This tail forms a graphic throughline across the platform. Mentor names, modules, and animations plug into it, creating a modular and adaptable system that evolves with the content and contributors.

    From the visual system to the narrative structure, we wanted every interaction to feel alive — dynamic, immersive, and unapologetically aspirational.

    The Concept

    The site’s architecture is built around a narrative arc, not just a navigation system.

    Users aren’t dropped into a menu or a generic homepage. Instead, they’re invited into a story. From the moment the site loads, there’s a sense of atmosphere and anticipation — an introduction to the platform’s mission, mood, and voice before unveiling the core offering: the mentors themselves, or as the platform calls them, “The Legends.”

    Each element of the experience is structured with intention. We carefully designed the content flow to evoke a sense of reverence, curiosity, and inspiration. Think of it as a cinematic trailer for a mentorship journey.

    We weren’t just explaining the brand — we were immersing visitors in it.

    Typography & Color System

    The typography system plays a crucial role in reinforcing the platform’s dual personality: technical sophistication meets expressive creativity.

    We paired two distinct sans-serif fonts:

    – A light-weight, technical font to convey structure, clarity, and approachability — ideal for body text and interface elements

    – A bold, expressive typeface that commands attention — perfect for mentor names, quotes, calls to action, and narrative highlights

    The contrast between these two fonts helps create rhythm, pacing, and emotional depth across the experience.

    The color palette is deliberately cinematic and memorable:

    • Flash orange signals energy, creative fire, and boldness. It’s the spark — the invitation to engage.
    • A range of neutrals — beige, brown, and warm grays — offer a sense of balance, maturity, and professionalism. These tones ground the experience and create contrast for vibrant elements.

    Together, the system is both modern and timeless — a tribute to craft, not trend.

    Technology Stack

    We brought the platform to life with a modern and modular tech stack designed for both performance and storytelling:

    • WordPress (headless CMS) for scalable, easy-to-manage content that supports a dynamic editorial workflow
    • GSAP (GreenSock Animation Platform) for fluid, timeline-based animations across scroll and interactions
    • Three.js / WebGL for high-performance visual effects, shaders, and real-time graphical experiences
    • Custom booking system powered by Make, Google Calendar, Whereby, and Stripe — enabling seamless scheduling, video sessions, and payments

    This stack allowed us to deliver a responsive, cinematic experience without compromising speed or maintainability.

    Loader Experience

    Even the loading screen is part of the story.

    We designed a cinematic prelude using the “M” tail as a narrative element. This loader animation doesn’t just fill time — it sets the stage. Meanwhile, key phrases from the creative world — terms like motion 2D & 3D, vfx, cgi, and motion capture — animate in and out of view, building excitement and immersing users in the language of the craft.

    It’s a sensory preview of what’s to come, priming the visitor for an experience rooted in industry and artistry.

    Title Reveal Effects

    Typography becomes motion.

    To bring the brand’s kinetic DNA to life, we implemented a custom mask-reveal effect for major headlines. Each title glides into view with trailing motion, echoing the flowing “M” mark. This creates a feeling of elegance, control, and continuity — like a shot dissolving in a film edit.

    These transitions do more than delight — they reinforce the platform’s identity, delivering brand through movement.

    Menu Interaction

    We didn’t want the menu to feel like a utility. We wanted it to feel like a scene transition.

    The menu unfolds within the iconic M-shape — its structure serving as both interface and metaphor. As users open it, they reveal layers: content categories, mentor profiles, and stories. Every motion is deliberate, reminiscent of opening a timeline in an editing suite or peeling back layers in a 3D model.

    It’s tactile, immersive, and true to the world the platform celebrates.

    Gradient & WebGL Shader

    A major visual motif was the idea of “burning film” — inspired by analog processes but expressed through modern code.

    To bring this to life, we created a custom WebGL shader, incorporating a reactive orange gradient from the brand palette. As users move their mouse or scroll, the shader responds in real-time, adding a subtle but powerful VFX-style distortion to the screen.

    This isn’t just decoration. It’s a living texture — a symbol of the heat, friction, and passion that fuel creative careers.

    Scroll-Based Storytelling

    The homepage isn’t static. It’s a stage for narrative progression.

    We designed the flow as a scroll-driven experience where content and story unfold in sync. From an opening slider that introduces the brand, to immersive sections that highlight individual mentors and their work, each moment is carefully choreographed.

    Users aren’t just reading — they’re experiencing a sequence, like scenes in a movie or levels in a game. It’s structured, emotional, and deeply human.

    Who We Are

    We are a digital studio at the intersection of design, storytelling, and interaction. Our approach is rooted in concept and craft. We build digital experiences that are not only visually compelling but emotionally resonant.

    From bold brands to immersive websites, we design with movement in mind — movement of pixels, of emotion, and of purpose.

    Because we believe great design doesn’t just look good — it moves you.



    Source link

  • 2 ways to define ASP.NET Core custom Middleware | Code4IT

    2 ways to define ASP.NET Core custom Middleware | Code4IT


    Customizing the behavior of an HTTP request is easy: you can use a middleware defined as a delegate or as a class.


    Sometimes you need to create custom logic that must be applied to all HTTP requests received by your ASP.NET Core application. In these cases, you can create a custom middleware: pieces of code that are executed sequentially for all incoming requests.

    The order of middlewares matters. Here’s a nice schema published on the Microsoft website:

    Middleware order

    A Middleware, in fact, can manipulate the incoming HttpRequest and the resulting HttpResponse objects.

    In this article, we’re gonna learn 2 ways to create a middleware in .NET.

    Middleware as inline delegates

    The easiest way is to use a delegate function, defined after building the WebApplication object.

    By calling the Use method, you can update the HttpContext object passed as a first parameter.

    app.Use(async (HttpContext context, Func<Task> task) =>
    {
        context.Response.Headers.TryAdd("custom-header", "a-value");
    
        await task.Invoke();
    });
    

    Note that you have to call the Invoke method to call the next middleware.

    There is a similar overload that accepts a RequestDelegate instance instead of Func<Task>. According to the Microsoft documentation, the RequestDelegate-based overload (introduced in ASP.NET Core 6) is actually the one to prefer, because it avoids a couple of internal per-request allocations required by the Func<Task> overload.
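
    For reference, here is a minimal sketch of the same middleware written with the RequestDelegate-based overload (the header name and value are the same placeholder values used above):

    app.Use(async (HttpContext context, RequestDelegate next) =>
    {
        context.Response.Headers.TryAdd("custom-header", "a-value");

        // 'next' is the next middleware in the pipeline: invoke it directly.
        await next(context);
    });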

    Middleware as standalone classes

    The alternative to delegates is defining a custom class.

    You can call it whatever you want, but you have some constraints to follow when creating the class:

    • it must have a public constructor with a parameter of type RequestDelegate (that will be used to invoke the next middleware);
    • it must expose a public method named Invoke or InvokeAsync that accepts an HttpContext as its first parameter and returns a Task.

    Here’s an example:

    public class MyCustomMiddleware
    {
        private readonly RequestDelegate _next;
    
        public MyCustomMiddleware(RequestDelegate next)
        {
            _next = next;
        }
    
        public async Task InvokeAsync(HttpContext context)
        {
            context.Response.Headers.TryAdd("custom-name", "custom-value");
            await _next(context);
        }
    }
    

    Then, to add it to your application, you have to call

    app.UseMiddleware<MyCustomMiddleware>();
    

    Delegates or custom classes?

    Both are valid methods, but each of them performs well in specific cases.

    For simple scenarios, go with inline delegates: they are easy to define, easy to read, and quite performant. But they are a bit difficult to test.

    For complex scenarios, go with custom classes: this way you can define complex behaviors in a single class, organize your code better, and use Dependency Injection to pass services and configurations to the middleware. Also, defining the middleware as a class makes it more testable. The downside is that, as of .NET 7, using a class-based middleware relies on reflection: UseMiddleware invokes the middleware by looking for a public method named Invoke or InvokeAsync. So, theoretically, using classes is less performant than using delegates (I haven’t benchmarked it yet, though!).
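
    To make the Dependency Injection point concrete, here is a minimal, hypothetical sketch (AuditMiddleware and IMyScopedService are illustrative names, not types from this article): singleton-lifetime dependencies such as ILogger<T> can be injected through the constructor, while scoped services must be passed as extra InvokeAsync parameters, because the middleware instance itself is created only once at startup.

    public interface IMyScopedService
    {
        void Track(PathString path);
    }

    public class AuditMiddleware
    {
        private readonly RequestDelegate _next;
        private readonly ILogger<AuditMiddleware> _logger;

        // Singleton-lifetime services can be injected via the constructor.
        public AuditMiddleware(RequestDelegate next, ILogger<AuditMiddleware> logger)
        {
            _next = next;
            _logger = logger;
        }

        // Scoped services are resolved per request as InvokeAsync parameters.
        public async Task InvokeAsync(HttpContext context, IMyScopedService service)
        {
            _logger.LogInformation("Handling {Path}", context.Request.Path);
            service.Track(context.Request.Path);

            await _next(context);
        }
    }

    As usual, you enable it with app.UseMiddleware<AuditMiddleware>(), after registering IMyScopedService in the DI container.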

    Wrapping up

    On Microsoft documentation you can find a well-explained introduction to Middlewares:

    🔗 ASP.NET Core Middleware | Microsoft docs

    And some suggestions on how to write a custom middleware as standalone classes:

    🔗 Write custom ASP.NET Core middleware | Microsoft docs

    I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛

    Happy coding!

    🐧





    Source link

  • Espionage Campaigns Uncovered by Seqrite Labs

    Espionage Campaigns Uncovered by Seqrite Labs


    Seqrite Labs APT-Team has identified and tracked UNG0002, also known as Unknown Group 0002, a cluster of espionage-oriented operations conducting campaigns across multiple Asian jurisdictions, including China, Hong Kong, and Pakistan. This threat entity demonstrates a strong preference for shortcut files (LNK), VBScript, and post-exploitation tools such as Cobalt Strike and Metasploit, while consistently deploying CV-themed decoy documents to lure victims.

    The cluster’s operations span two major campaigns: Operation Cobalt Whisper (May 2024 – September 2024) and Operation AmberMist (January 2025 – May 2025). During Operation Cobalt Whisper, 20 infection chains were observed targeting defense, electrotechnical engineering, and civil aviation sectors. The more recent Operation AmberMist campaign has evolved to target gaming, software development, and academic institutions with improved lightweight implants including Shadow RAT, Blister DLL Implant, and INET RAT.

    In the recent Operation AmberMist, the threat entity also abused the ClickFix technique – a social engineering method that tricks victims into executing malicious PowerShell scripts through fake CAPTCHA verification pages. Additionally, UNG0002 leverages DLL sideloading techniques, particularly abusing legitimate Windows applications like Rasphone and Node-Webkit binaries to execute malicious payloads.

    • Multi-Stage Attacks: UNG0002 employs sophisticated infection chains using malicious LNK files, VBScript, batch scripts, and PowerShell to deploy custom RAT implants including Shadow RAT, INET RAT, and Blister DLL.
    • ClickFix Social Engineering: The group utilizes fake CAPTCHA verification pages to trick victims into executing malicious PowerShell scripts, notably spoofing Pakistan’s Ministry of Maritime Affairs website.
    • Abusing DLL Sideloading: In the recent campaign, consistent abuse of legitimate Windows applications (Rasphone, Node-Webkit) for DLL sideloading to execute malicious payloads while evading detection.
    • CV-Themed Decoy Documents: Use of realistic resume documents targeting specific industries, including fake profiles of game UI designers and computer science students from prestigious institutions.
    • Persistent Infrastructure: Maintained command and control infrastructure with consistent naming patterns and operational security across multiple campaigns spanning over a year.

    • Targeted Industry Focus: Systematic targeting of defense, electrotechnical engineering, energy, civil aviation, academia, medical institutions, cybersecurity researchers, gaming, and software development sectors.
    • Attribution Challenges: UNG0002 represents an evolving threat cluster that demonstrates high adaptability by mimicking techniques from other threat actor playbooks to complicate attribution efforts, with Seqrite Labs assessing with high confidence that the group originates from South-East Asia and focuses on espionage activities. As more intelligence becomes available, associated campaigns may be expanded or refined in the future.

    UNG0002 represents a sophisticated and persistent threat entity from South-East Asia that has maintained consistent operations targeting multiple Asian jurisdictions since at least May 2024. The group demonstrates high adaptability and technical proficiency, continuously evolving its toolset while maintaining consistent tactics, techniques, and procedures.

    The threat actor’s focus on specific geographic regions (China, Hong Kong, Pakistan) and targeted industries suggests a strategic approach to intelligence gathering, i.e., classic espionage activity. The use of legitimate-looking decoy documents, social engineering techniques, and pseudo-advanced evasion methods indicates a well-resourced and experienced operation.

    UNG0002 demonstrates consistent operational patterns across both Operation Cobalt Whisper and Operation AmberMist, maintaining similar infrastructure naming conventions, payload delivery mechanisms, and target selection criteria. The group’s evolution from using primarily Cobalt Strike and Metasploit frameworks to developing custom implants like Shadow RAT, INET RAT, and Blister DLL indicates their persistent nature.

    Notable technical artifacts include PDB paths revealing development environments, such as C:\Users\The Freelancer\source\repos\JAN25\mustang\x64\Release\mustang.pdb for Shadow RAT and C:\Users\Shockwave\source\repos\memcom\x64\Release\memcom.pdb for INET RAT, pointing to the potential code names “Mustang” and “ShockWave” and hinting at mimicry of already-existing threat groups. An in-depth technical analysis of the complete infection chains and detailed campaign specifics can be found in our comprehensive whitepaper.

    Attributing threat activity to a specific group is always a complex task. It requires detailed analysis across several areas, including targeting patterns, tactics and techniques (TTPs), geographic focus, and any possible slip-ups in operational security. UNG0002 is an evolving cluster that Seqrite Labs is actively monitoring. As more intelligence becomes available, we may expand or refine the associated campaigns. Based on our current findings, we assess with high confidence that this group originates from South-East Asia, focuses on espionage, and demonstrates a high level of adaptability, often mimicking techniques seen in other threat actor playbooks to complicate attribution. We also appreciate other researchers in the community, like malwarehunterteam, for hunting these campaigns.

    • Non-PE [Script-Based Files, Shortcut, C2-Config, Encrypted Shellcode blobs] (File Type: SHA-256 hashes):
    LNK (Shortcut): 4ca4f673e4389a352854f5feb0793dac43519ade8049b5dd9356d0cbe0f06148, 55dc772d1b59c387b5f33428d5167437dc2d6e2423765f4080ee3b6a04947ae9, 4b410c47465359ef40d470c9286fb980e656698c4ee4d969c86c84fbd012af0d
    SCT (Scriptlet): c49e9b556d271a853449ec915e4a929f5fa7ae04da4dc714c220ed0d703a36f7
    VBS (VBScript): ad97b1c79735b1b97c4c4432cacac2fce6316889eafb41a0d97f2b0e565ee850, c722651d72c47e224007c2111e0489a028521ccdf5331c92e6cd9cfe07076918, 2140adec9cde046b35634e93b83da4cc9a8aa0a71c21e32ba1dce2742314e8dc
    Batch Script (.bat): a31d742d7e36fefed01971d8cba827c71e69d59167e080d2f551210c85fddaa5
    PowerShell (.ps1): a31d742d7e36fefed01971d8cba827c71e69d59167e080d2f551210c85fddaa5
    TXT – C2 Config: 2df309018ab935c47306b06ebf5700dcf790fff7cebabfb99274fe867042ecf0, b7f1d82fb80e02b9ebe955e8f061f31dc60f7513d1f9ad0a831407c1ba0df87e
    Shellcode (.dat): 2c700126b22ea8b22b8b05c2da05de79df4ab7db9f88267316530fa662b4db2c
    • Payload binaries (SHA-256 hash, Malware Type, Notes):
    c3ccfe415c3d3b89bde029669f42b7f04df72ad2da4bd15d82495b58ebde46d6 (Blister DLL Implant): Used in Operation AmberMist, DLL sideloaded via Node-Webkit
    4c79934beb1ea19f17e39fd1946158d3dd7d075aa29d8cd259834f8cd7e04ef8 (Blister DLL Implant): Same family as above, possible variant
    2bdd086a5fce1f32ea41be86febfb4be7782c997cfcb028d2f58fee5dd4b0f8a (INET RAT): Shadow RAT rewrite with anti-analysis and C2 flexibility
    90c9e0ee1d74b596a0acf1e04b41c2c5f15d16b2acd39d3dc8f90b071888ac99 (Shadow RAT): Deployed via Rasphone with decoy and config loader

    MITRE ATT&CK mapping (Tactic - Technique (Technique ID): Observed Behavior / Example):

    • Reconnaissance - Spearphishing for Information (T1598.002): Use of job-themed resumes (e.g., Zhang Wanwan & Li Mingyue CVs) to target specific sectors.
    • Resource Development - Develop Capabilities (T1587): Custom implants: INET RAT (rewrite of Shadow RAT), use of Blister DLL loader.
    • Resource Development - Acquire Infrastructure (T1583.001, T1583.006): Use of spoofed domains (e.g., moma[.]islamabadpk[.]site); ASN usage.
    • Initial Access - Spear Phishing Attachment (T1566.001): Use of malicious ZIPs with LNKs and VBS (e.g., 张婉婉简历.zip, 李明月_CV.pdf.lnk).
    • Initial Access - Drive-by Compromise (ClickFix technique) (T1189): Malicious site tricks user into pasting PowerShell copied to clipboard.
    • Execution - Command and Scripting Interpreter (PowerShell, VBScript, Batch) (T1059): Multi-stage execution via VBS ➝ BAT ➝ PowerShell.
    • Execution - Signed Binary Proxy Execution (wscript, rasphone, regsvr32) (T1218): Use of LOLBINs like wscript.exe, regsvr32.exe, rasphone.exe for execution and sideloading.
    • Execution - Scripting (Scriptlets – .sct files) (T1059.005): Use of run.sct via regsvr32 for further payload execution.
    • Persistence - Scheduled Task/Job (T1053.005): Tasks like SysUpdater, UtilityUpdater scheduled for recurring execution.
    • Privilege Escalation - DLL Search Order Hijacking (T1574.001): DLL sideloading via rasphone.exe, node-webkit for Shadow RAT, Blister loader.
    • Defense Evasion - Obfuscated Files or Information (T1027): Scripts with obfuscation, hex-encoded C2 configs, junk code in SCTs.
    • Defense Evasion - Deobfuscate/Decode Files or Information (T1140): INET RAT decrypting C2 configuration from list.txt.
    • Defense Evasion - Software Packing (Shellcode loader) (T1027.002): Blister decrypts and injects shellcode from update.dat using AES.
    • Defense Evasion - Indirect Command Execution (T1202): Executing SCT through regsvr32, using P/Invoke to load DLLs.
    • Credential Access - Input Capture (potential within Shadow/INET RAT) (T1056): RAT capabilities imply possible credential theft.
    • Discovery - System Information Discovery (T1082): INET RAT collects computer/user names upon execution.
    • Command & Control - Application Layer Protocol: Web Protocols (T1071.001): Shadow/INET RATs communicate over HTTP(S).
    • Command & Control - Ingress Tool Transfer (T1105): Payloads and decoys downloaded from external servers.
    • Collection - Data from Local System (T1005): Likely via RATs for file collection or clipboard access.
    • Exfiltration - Exfiltration Over C2 Channel (T1041): Shadow/INET RAT reverse shell features suggest data tunneling over the same HTTP channel.

     

    Authors

    Sathwik Ram Prakki

    Subhajeet Singha

     



    Source link

  • Advanced Integration Tests for .NET 7 API with WebApplicationFactory and NUnit | Code4IT

    Advanced Integration Tests for .NET 7 API with WebApplicationFactory and NUnit | Code4IT


    Integration Tests are incredibly useful: a few Integration Tests are often more useful than lots of Unit Tests. Let’s learn some advanced capabilities of WebApplicationFactory.


    In a previous article, we learned a quick way to create Integration Tests for ASP.NET API by using WebApplicationFactory. That was a nice introductory article. But now we will delve into more complex topics and examples.

    In my opinion, a few Integration Tests and just the necessary number of Unit tests are better than hundreds of Unit Tests and no Integration Tests at all. In general, the Testing Diamond should be preferred over the Testing Pyramid (well, in most cases).

    In this article, we are going to create advanced Integration Tests by defining custom application settings, customizing dependencies to be used only during tests, defining custom logging, and performing complex operations in our tests.

    For the sake of this article, I created a sample API application that exposes one single endpoint whose purpose is to retrieve some info about the URL passed in the query string. For example,

    GET /SocialPostLink?uri=https%3A%2F%2Ftwitter.com%2FBelloneDavide%2Fstatus%2F1682305491785973760
    

    will return

    {
      "instanceName": "Real",
      "info": {
        "socialNetworkName": "Twitter",
        "sourceUrl": "https://twitter.com/BelloneDavide/status/1682305491785973760",
        "username": "BelloneDavide",
        "id": "1682305491785973760"
      }
    }
    

    For completeness, instanceName is a value coming from the appsettings.json file, while info is an object that holds some info about the social post URL passed as input.

    Internally, the code uses the Chain of Responsibility pattern: there is a handler that “knows” whether it can handle a specific URL; if so, it processes the input; otherwise, it calls the next handler.

    There is also a Factory that builds the chain, and finally, a Service that instantiates the Factory and then resolves the dependencies.
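
    To give an idea of the shape of that chain, here is a rough, hypothetical sketch (class and member names are illustrative, not the project’s real ones): each handler either recognizes the URL and parses it, or forwards it to the next handler.

    public record LinkInfo(string SocialNetworkName, string SourceUrl, string Username, string Id);

    public abstract class SocialLinkHandler
    {
        private SocialLinkHandler? _next;

        // Returns the next handler, so the Factory can chain the calls.
        public SocialLinkHandler SetNext(SocialLinkHandler next)
        {
            _next = next;
            return next;
        }

        // Handle the URL if possible; otherwise, delegate to the next handler.
        public LinkInfo? Handle(Uri uri)
            => CanHandle(uri) ? Parse(uri) : _next?.Handle(uri);

        protected abstract bool CanHandle(Uri uri);
        protected abstract LinkInfo Parse(Uri uri);
    }

    public class TwitterLinkHandler : SocialLinkHandler
    {
        protected override bool CanHandle(Uri uri) => uri.Host.EndsWith("twitter.com");

        protected override LinkInfo Parse(Uri uri)
            => new("Twitter", uri.ToString(), uri.Segments[1].TrimEnd('/'), uri.Segments[^1]);
    }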

    As you can see, this solution can become complex. We could run lots of Unit Tests to validate that the Chain of Responsibility works as expected. We can even write a Unit Tests suite for the Factory.

    Class Diagram

    But, at the end of the day, we don’t really care about the internal structure of the project: as long as it works as expected, we could even use a huge switch block (clearly, with all the consequences of this choice). So, let’s write some Integration Tests.

    How to create a custom WebApplicationFactory in .NET

    When creating Integration Tests for .NET APIs you have to instantiate a new instance of WebApplicationFactory, a class coming from the Microsoft.AspNetCore.Mvc.Testing NuGet Package.

    Since we are going to define it once and reuse it across all the tests, let’s create a new class that extends WebApplicationFactory, and add some custom behavior to it.

    public class IntegrationTestWebApplicationFactory : WebApplicationFactory<Program>
    {
    
    }
    

    Let’s focus on the Program class: as you can see, the WebApplicationFactory class requires an entry point. Generally speaking, it’s the Program class of our application.

    If you hover on WebApplicationFactory<Program> and hit CTRL+. on Visual Studio, the autocomplete proposes two alternatives: one is the Program class defined in your APIs, while the other one is the Program class defined in Microsoft.VisualStudio.TestPlatform.TestHost. Choose the one for your API application! The WebApplicationFactory class will then instantiate your API following the instructions defined in your Program class, thus resolving all the dependencies and configurations as if you were running your application locally.

    What to do if you don’t have the Program class? If you use top-level statements, you don’t have the Program class, because it’s “implicit”, so you cannot reference it directly. The workaround is to create a new partial class named Program and leave it empty: this way, you have a class name that can be used to reference the API definition:

    public partial class Program { }
    

    Here you can override some definitions of the WebHost to be created by calling ConfigureWebHost:

    public class IntegrationTestWebApplicationFactory : WebApplicationFactory<Program>
    {
        protected override void ConfigureWebHost(IWebHostBuilder builder)
        {
              builder.ConfigureAppConfiguration((host, configurationBuilder) => { });
        }
    }
    

    How to use WebApplicationFactory in your NUnit tests

    It’s time to start working on some real Integration Tests!

    As we said before, we have only one HTTP endpoint, defined like this:

    
    private readonly ISocialLinkParser _parser;
    private readonly ILogger<SocialPostLinkController> _logger;
    private readonly IConfiguration _config;
    
    public SocialPostLinkController(ISocialLinkParser parser, ILogger<SocialPostLinkController> logger, IConfiguration config)
    {
        _parser = parser;
        _logger = logger;
        _config = config;
    }
    
    [HttpGet]
    public IActionResult Get([FromQuery] string uri)
    {
        _logger.LogInformation("Received uri {Uri}", uri);
        if (Uri.TryCreate(uri, new UriCreationOptions {  }, out Uri _uri))
        {
            var linkInfo = _parser.GetLinkInfo(_uri);
            _logger.LogInformation("Uri {Uri} is of type {Type}", uri, linkInfo.SocialNetworkName);
    
            var instance = new Instance
            {
                InstanceName = _config.GetValue<string>("InstanceName"),
                Info = linkInfo
            };
            return Ok(instance);
        }
        else
        {
            _logger.LogWarning("Uri {Uri} is not a valid Uri", uri);
            return BadRequest();
        }
    }
    

    We have 2 flows to validate:

    • If the input URI is valid, the HTTP Status code should be 200;
    • If the input URI is invalid, the HTTP Status code should be 400;

    We could simply write Unit Tests for this purpose, but let me write Integration Tests instead.

    First of all, we have to create a test class and instantiate a new IntegrationTestWebApplicationFactory. Then, every time a test runs, we will create a new HttpClient that automatically includes all the services and configurations defined in the API application.

    public class ApiIntegrationTests : IDisposable
    {
        private IntegrationTestWebApplicationFactory _factory;
        private HttpClient _client;
    
        [OneTimeSetUp]
        public void OneTimeSetup() => _factory = new IntegrationTestWebApplicationFactory();
    
        [SetUp]
        public void Setup() => _client = _factory.CreateClient();
    
        public void Dispose() => _factory?.Dispose();
    }
    

    As you can see, the test class implements IDisposable so that we can call Dispose() on the IntegrationTestWebApplicationFactory instance.

    From now on, we can use the _client instance to work with the in-memory instance of the API.

    One of the best parts of it is that, since it’s an in-memory instance, we can even debug our API application. When you create a test and put a breakpoint in the production code, you can hit it and see the actual values as if you were running the application in a browser.

    Now that we have the instance of HttpClient, we can create two tests to ensure that the two cases we defined before are valid. If the input string is a valid URI, return 200:

    [Test]
    public async Task Should_ReturnHttp200_When_UrlIsValid()
    {
        string inputUrl = "https://twitter.com/BelloneDavide/status/1682305491785973760";
    
        var result = await _client.GetAsync($"SocialPostLink?uri={inputUrl}");
    
        Assert.That(result.StatusCode, Is.EqualTo(HttpStatusCode.OK));
    }
    

    Otherwise, return Bad Request:

    [Test]
    public async Task Should_ReturnBadRequest_When_UrlIsNotValid()
    {
        string inputUrl = "invalid-url";
    
        var result = await _client.GetAsync($"/SocialPostLink?uri={inputUrl}");
    
        Assert.That(result.StatusCode, Is.EqualTo(HttpStatusCode.BadRequest));
    }
    

    How to create test-specific configurations using InMemoryCollection

    WebApplicationFactory is highly configurable thanks to the ConfigureWebHost method. For instance, you can customize the settings injected into your services.

    Usually, you want to rely on the exact same configurations defined in your appsettings.json file to ensure that the system behaves correctly with the “real” configurations.

    For example, I defined the key “InstanceName” in the appsettings.json file, with value “Real”; that value is used to create the returned Instance object.
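    For reference, the relevant part of the appsettings.json file could look like this (a minimal sketch; the real file will likely contain other keys too):

    {
      "InstanceName": "Real"
    }


    We can validate that the value is actually read from that source thanks to this test: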

    [Test]
    public async Task Should_ReadInstanceNameFromSettings()
    {
        string inputUrl = "https://twitter.com/BelloneDavide/status/1682305491785973760";
    
        var result = await _client.GetFromJsonAsync<Instance>($"/SocialPostLink?uri={inputUrl}");
    
        Assert.That(result.InstanceName, Is.EqualTo("Real"));
    }
    

    Other times, though, you might want to override a specific configuration key.

    The ConfigureAppConfiguration method allows you to customize how you manage Configurations by adding or removing sources.

    If you want to add some configurations specific to the WebApplicationFactory, you can use AddInMemoryCollection, a method that allows you to add configurations in a key-value format:

    protected override void ConfigureWebHost(IWebHostBuilder builder)
    {
        builder.ConfigureAppConfiguration((host, configurationBuilder) =>
        {
            configurationBuilder.AddInMemoryCollection(
                new List<KeyValuePair<string, string?>>
                {
                    new KeyValuePair<string, string?>("InstanceName", "FromTests")
                });
        });
    }
    

    Even if you had the InstanceName configured in your appsettings.json file, the value is now overridden and set to FromTests.

    You can validate this change by simply replacing the expected value in the previous test:

    [Test]
    public async Task Should_ReadInstanceNameFromSettings()
    {
        string inputUrl = "https://twitter.com/BelloneDavide/status/1682305491785973760";
    
        var result = await _client.GetFromJsonAsync<Instance>($"/SocialPostLink?uri={inputUrl}");
    
        Assert.That(result.InstanceName, Is.EqualTo("FromTests"));
    }
    

    If you also want to discard all the other existing configuration sources, call configurationBuilder.Sources.Clear() before AddInMemoryCollection: this way, only the in-memory values are used.
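    As a quick sketch, the ConfigureWebHost override would then look like this:

    protected override void ConfigureWebHost(IWebHostBuilder builder)
    {
        builder.ConfigureAppConfiguration((host, configurationBuilder) =>
        {
            // Discard appsettings.json, environment variables, and every other source
            configurationBuilder.Sources.Clear();

            // Re-add only the test-specific key-value pairs
            configurationBuilder.AddInMemoryCollection(
                new List<KeyValuePair<string, string?>>
                {
                    new KeyValuePair<string, string?>("InstanceName", "FromTests")
                });
        });
    }
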

    How to set up custom dependencies for your tests

    Maybe you don’t want to resolve all the existing dependencies, but just a subset of them. For example, you might not want to call external APIs with a limited number of free API calls to avoid paying for the test-related calls. You can then rely on Stub classes that simulate the dependency by giving you full control of the behavior.

    We want to replace an existing class with a stub: let’s create a StubSocialLinkParser class that will be used instead of SocialLinkParser:

    public class StubSocialLinkParser : ISocialLinkParser
    {
        public LinkInfo GetLinkInfo(Uri postUri) => new LinkInfo
        {
            SocialNetworkName = "test from stub",
            Id = "test id",
            SourceUrl = postUri,
            Username = "test username"
        };
    }
    

    We can then customize Dependency Injection to use StubSocialLinkParser in place of SocialLinkParser by registering it within the ConfigureTestServices method. ConfigureTestServices runs after the application’s own registrations, so the stub registration is the one that gets resolved:

    builder.ConfigureTestServices(services =>
    {
        services.AddScoped<ISocialLinkParser, StubSocialLinkParser>();
    });
    

    Finally, we can create a method to validate this change:

    [Test]
    public async Task Should_UseStubName()
    {
        string inputUrl = "https://twitter.com/BelloneDavide/status/1682305491785973760";
    
        var result = await _client.GetFromJsonAsync<Instance>($"/SocialPostLink?uri={inputUrl}");
    
        Assert.That(result.Info.SocialNetworkName, Is.EqualTo("test from stub"));
    }
    

    How to create Integration Tests on specific resolved dependencies

    Now we are going to test that SocialLinkParser does its job, regardless of the internal implementation. Right now we use the Chain of Responsibility pattern and rely on the ISocialLinksFactory interface to create the correct sequence of handlers. But we don’t know how the code will be structured in the future: maybe we will replace it all with a huge if-else sequence. The most important part is that the code works, regardless of the internal implementation.

    We can proceed in two ways: writing tests on the interface or writing tests on the concrete class.

    For the sake of this article, we are going to run tests on the SocialLinkParser class. Not the interface, but the concrete class. The first step is to add the class to the DI engine in the Program class:

    builder.Services.AddScoped<SocialLinkParser>();
    

    Now we can create a test to validate that it is working:

    [Test]
    public async Task Should_ResolveDependency()
    {
        using (var _scope = _factory.Services.CreateScope())
        {
            var service = _scope.ServiceProvider.GetRequiredService<SocialLinkParser>();
            Assert.That(service, Is.Not.Null);
            Assert.That(service, Is.AssignableTo<SocialLinkParser>());
        }
    }
    

    As you can see, we are creating an IServiceScope by calling _factory.Services.CreateScope(). Since we have to discard this scope after the test run, we have to place it within a using block. Then, we can resolve a new instance of SocialLinkParser by calling _scope.ServiceProvider.GetRequiredService<SocialLinkParser>() and write all the tests we want on the concrete implementation of the class.

    The benefit of this approach is that you have all the internal dependencies already resolved, without relying on mocks. You can then ensure that everything, from that point on, works as you expect.

    Here I created the scope within a using block. There is another approach that I prefer: create the scope instance in the SetUp method, and call Dispose() on it in the TearDown phase:

    protected IServiceScope _scope;
    protected SocialLinkParser _sut;
    private IntegrationTestWebApplicationFactory _factory;
    
    [OneTimeSetUp]
    public void OneTimeSetup() => _factory = new IntegrationTestWebApplicationFactory();
    
    [SetUp]
    public void Setup()
    {
        _scope = _factory.Services.CreateScope();
        _sut = _scope.ServiceProvider.GetRequiredService<SocialLinkParser>();
    }
    
    [TearDown]
    public void TearDown()
    {
        _sut = null;
        _scope.Dispose();
    }
    
    public void Dispose() => _factory?.Dispose();
    

    You can see an example of the implementation here in the SocialLinkParserTests class.
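    For instance, a test on the concrete class could look like the following sketch (the assertions are hypothetical: like the stub above, they assume the parser reports the original URL back in the SourceUrl property; the real SocialLinkParserTests class contains the actual checks):

    [Test]
    public void Should_ParseSocialLink()
    {
        // _sut is the SocialLinkParser resolved from the application's DI container in the SetUp method
        var uri = new Uri("https://twitter.com/BelloneDavide/status/1682305491785973760");

        var linkInfo = _sut.GetLinkInfo(uri);

        Assert.That(linkInfo, Is.Not.Null);
        Assert.That(linkInfo.SourceUrl, Is.EqualTo(uri));
    }
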

    Where are my logs?

    Sometimes you just want to see the logs generated by your application to help you debug an issue (yes, you can simply debug the application!). But, unless properly configured, the application logs will not be available to you.

    But you can easily add logs to the console by adding the Console sink in your ConfigureTestServices method:

    builder.ConfigureTestServices(services =>
    {
        services.AddLogging(builder => builder.AddConsole().AddDebug());
    });
    

    Now you will be able to see all the logs you generated in the Output panel of Visual Studio by selecting the Tests source:

    Logs appear in the Output panel of Visual Studio

    Beware that you are still reading the logging configurations from the appsettings file! If your project is configured to log directly to a sink (such as Datadog or Seq), your tests will send those logs to the specified sinks too. Therefore, you should get rid of all the other logging sources by calling ClearProviders():

    services.AddLogging(builder => builder.ClearProviders().AddConsole().AddDebug());
    

    Full example

    In this article, we’ve configured many parts of our WebApplicationFactory. Here’s the final result:

    public class IntegrationTestWebApplicationFactory : WebApplicationFactory<Program>
    {
        protected override void ConfigureWebHost(IWebHostBuilder builder)
        {
            builder.ConfigureAppConfiguration((host, configurationBuilder) =>
            {
                // Remove other settings sources, if necessary
                configurationBuilder.Sources.Clear();
    
                //Create custom key-value pairs to be used as settings
                configurationBuilder.AddInMemoryCollection(
                    new List<KeyValuePair<string, string?>>
                    {
                        new KeyValuePair<string, string?>("InstanceName", "FromTests")
                    });
            });
    
            builder.ConfigureTestServices(services =>
            {
                //Add stub classes
                services.AddScoped<ISocialLinkParser, StubSocialLinkParser>();
    
                //Configure logging
                services.AddLogging(builder => builder.ClearProviders().AddConsole().AddDebug());
            });
        }
    }
    

    You can find the source code used for this article on my GitHub; feel free to download it and toy with it!

    Further readings

    This is an in-depth article about Integration Tests in .NET. I already wrote an article about it with a simpler approach that you might enjoy:

    🔗 How to run Integration Tests for .NET API | Code4IT

    This article first appeared on Code4IT 🐧

    As I often say, a few Integration Tests are often more useful than a ton of Unit Tests. Focusing on Integration Tests rather than on Unit Tests has the benefit of ensuring that the system behaves correctly regardless of the internal implementation.

    In this article, I used the Chain of Responsibility pattern, so Unit Tests would be tightly coupled to the Handlers. If we decided to move to another pattern, we would have to delete all the existing tests and rewrite everything from scratch.

    Therefore, in my opinion, the Testing Diamond is often more efficient than the Testing Pyramid, as I explained here:

    🔗 Testing Pyramid vs Testing Diamond (and how they affect Code Coverage) | Code4IT

    Wrapping up

    This was a huge article, I know.

    Again, feel free to download and run the example code I shared on my GitHub.

    I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛

    Happy coding!

    🐧





    Source link

  • In-Demand Front-End Development Skills to Future-Proof Your Career



    In-Demand Front-End Development Skills to Future-Proof Your Career



    Source link

  • A Guide for ASP.NET Core Developers | Code4IT

    A Guide for ASP.NET Core Developers | Code4IT


    Feature Flags are a technique that allows you to control the visibility and functionality of features in your software without changing the code. They enable you to experiment with new features, perform gradual rollouts, and revert changes quickly if needed.


    To turn functionalities on or off in an application, you can use simple if(condition) statements. That would work, of course. But it would not be flexible, and you’d have to scatter those checks all around the application.

    There is another way, though: Feature Flags. Feature Flags allow you to effortlessly enable and disable functionalities, such as Middlewares, HTML components, and API controllers. Using ASP.NET Core, you have Feature Flags almost ready to be used: it’s just a matter of installing one NuGet package and using the correct syntax.

    In this article, we are going to create and consume Feature Flags in an ASP.NET Core application. We will start from the very basics and then see how to use complex, built-in filters. We will consume Feature Flags in a generic C# code, and then we will see how to include them in a Razor application and in ASP.NET Core APIs.

    How to add the Feature Flags functionality on ASP.NET Core applications

    The very first step to do is to install the Microsoft.FeatureManagement.AspNetCore NuGet package:

    Microsoft.FeatureManagement.AspNetCore NuGet package

    This package contains everything you need to integrate Feature Flags in an ASP.NET application, from reading configurations from the appsettings.json file to the utility methods we will see later in this article.

    Now that we have the package installed, we can integrate it into our application. The first step is to call AddFeatureManagement on the IServiceCollection object available in the Main method:

    var builder = WebApplication.CreateBuilder(args);
    
    builder.Services.AddFeatureManagement();
    

    By default, this method looks for Feature Flags in a configuration section named FeatureManagement.

    If you want to use another name, you can specify it by accessing the Configuration object. For example, if your section name is MyWonderfulFlags, you must use this line instead of the previous one:

    builder.Services.AddFeatureManagement(builder.Configuration.GetSection("MyWonderfulFlags"));
    

    But, for now, let’s stick with the default section name: FeatureManagement.

    Define Feature Flag values in the appsettings file

    As we saw, we have to create a section named FeatureManagement in the appsettings file. This section will contain a collection of keys, each representing a Feature Flag and an associated value.

    For now, let’s say that the value is a simple boolean (we will see an advanced case later!).

    For example, we can define our section like this:

    {
      "FeatureManagement": {
        "Header": true,
        "Footer": true,
        "PrivacyPage": false
      }
    }
    

    Using Feature Flags in generic C# code

    The simplest way to use Feature Flags is by accessing the value directly in the C# code.

    By calling AddFeatureManagement, we have also injected the IFeatureManager interface, which comes in handy to check whether a flag is enabled.

    You can then inject it in a class constructor and reference it:

    private readonly IFeatureManager _featureManager;
    
    public MyClass(IFeatureManager featureManager)
    {
        _featureManager = featureManager;
    }
    
    public async Task DoSomething()
    {
        bool privacyEnabled = await _featureManager.IsEnabledAsync("PrivacyPage");
        if(privacyEnabled)
        {
            // do something specific
        }
    }
    

    This is the simplest way. Looks like it’s nothing more than a simple if statement. Is it?

    Applying a Feature Flag to a Controller or a Razor Model using the FeatureGate attribute

    When rolling out new versions of your application, you might want to enable or disable an API Controller or a whole Razor Page, depending on the value of a Feature Flag.

    There is a simple way to achieve this result: using the FeatureGate attribute.

    Suppose you want to hide the “Privacy” Razor page depending on its related flag, PrivacyPage. You then have to apply the FeatureGate attribute to the whole Model class (in our case, PrivacyModel), specifying that the flag to watch out for is PrivacyPage:

    [FeatureGate("PrivacyPage")]
    public class PrivacyModel : PageModel
    {
    
        public PrivacyModel()
        {
        }
    
        public void OnGet()
        {
        }
    }
    

    Depending on the value of the flag, we will have two results:

    • if the flag is enabled, we will see the whole page normally;
    • if the flag is disabled, we will receive a 404 – Not Found response.

    Let’s have a look at the attribute definition:

    //
    // Summary:
    //     An attribute that can be placed on MVC controllers, controller actions, or Razor
    //     pages to require all or any of a set of features to be enabled.
    [AttributeUsage(AttributeTargets.Class | AttributeTargets.Method, AllowMultiple = true)]
    public class FeatureGateAttribute : ActionFilterAttribute, IAsyncPageFilter, IFilterMetadata
    

    As you can see, you can apply the attribute to any class or method that is related to API controllers or Razor pages. This allows you to support several scenarios:

    • add a flag on a whole API Controller by applying the attribute to the related class;
    • add a flag on a specific Controller Action, allowing you, for example, to leave the GET Action always available while gating the POST Action behind the flag;
    • add a flag to a whole Razor Model, hiding or showing the related page depending on the flag value.

    You can apply the attribute to a custom class or method unrelated to the MVC pipeline, but it will be ineffective.

    If you try with

    public void OnGet()
    {
        var hello = Hello();
        ViewData["message"] = hello;
    }
    
    [FeatureGate("PrivacyPage")]
    private string Hello()
    {
        return "Ciao!";
    }
    

    the Hello method will be called as usual. The same happens for the OnGet method: yes, it represents the way to access the Razor Page, but you cannot hide it; the only way is to apply the flag to the whole Model.

    You can use multiple Feature Flags on the same FeatureGate attribute. If you need to hide or show a component based on various Feature Flags, you can simply add the required keys in the attribute parameters list:

    [HttpGet]
    [FeatureGate("PrivacyPage", "Footer")]
    public IActionResult Get()
    {
        return Ok("Hey there!");
    }
    

    Now, the GET endpoint will be available only if both PrivacyPage and Footer are enabled.

    Finally, you can define that the component is available if at least one of the flags is enabled by setting the requirementType parameter to RequirementType.Any:

    [HttpGet]
    [FeatureGate(requirementType: RequirementType.Any, "PrivacyPage", "Footer")]
    public IActionResult Get()
    {
        return Ok("Hey there!");
    }
    

    How to use Feature Flags in Razor Pages

    The Microsoft.FeatureManagement.AspNetCore NuGet package brings a lot of functionalities. Once installed, you can use Feature Flags in your Razor pages.

    To use such functionalities, though, you have to add the related tag helper: open the _ViewImports.cshtml file and add the following line:

    @addTagHelper *, Microsoft.FeatureManagement.AspNetCore
    

    Now, you can use the feature tag.

    Say you want to show an HTML tag when the Header flag is on. You can use the feature tag this way:

    <feature name="Header">
        <p>The header flag is on.</p>
    </feature>
    

    You can also show some content when the flag is off by setting the negate attribute to true. This comes in handy when you want to display alternative content instead:

    <feature name="ShowPicture">
        <img src="image.png" />
    </feature>
    <feature name="ShowPicture" negate="true">
        <p>There should have been an image, here!</p>
    </feature>
    

    Clearly, if ShowPicture is on, it shows the image; otherwise, it displays a text message.

    Similar to the FeatureGate attribute, you can apply multiple flags and choose whether all of them or at least one must be on to show the content by setting the requirement attribute to Any (remember: the default value is All):

    <feature name="Header, Footer" requirement="All">
        <p>Both header and footer are enabled.</p>
    </feature>
    
    <feature name="Header, Footer" requirement="Any">
        <p>Either header or footer is enabled.</p>
    </feature>
    

    Conditional Feature Filters: a way to activate flags based on specific advanced conditions

    Sometimes, you want to activate features using complex conditions. For example:

    • activate a feature only for a percentage of requests;
    • activate a feature only during a specific timespan;

    Let’s see how to use the percentage filter.

    The first step is to add the related Feature Filter to the FeatureManagement functionality. In our case, we will add the Microsoft.FeatureManagement.FeatureFilters.PercentageFilter.

    builder.Services.AddFeatureManagement()
        .AddFeatureFilter<PercentageFilter>();
    

    Now we just have to define the related flag in the appsettings file. We can no longer use a simple boolean value: we need a complex object. Let’s configure the ShowPicture flag to use the Percentage filter.

    {
      "ShowPicture": {
        "EnabledFor": [
          {
            "Name": "Percentage",
            "Parameters": {
              "Value": 60
            }
          }
        ]
      }
    }
    

    Have a look at the structure. Now we have:

    • a field named EnabledFor;
    • EnabledFor is an array, not a single object;
    • every object within the array is made of two fields: Name, which must match the filter name, and Parameters, which is a generic object whose value depends on the type of filter.

    In this example, we have set "Value": 60. This means that the flag will be active in around 60% of calls. In the remaining 40%, the flag will be off.

    Now, I encourage you to toy with this filter:

    • Apply it to a section or a page;
    • Run the application;
    • Refresh the page several times without restarting the application.

    You’ll see the component appear and disappear.
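    The time-based scenario mentioned earlier works in a very similar way. Here is a sketch (an assumption based on the built-in TimeWindowFilter shipped with the package, which reads Start and End dates from the Parameters object; adjust names and dates to your needs). First, register the filter:

    builder.Services.AddFeatureManagement()
        .AddFeatureFilter<TimeWindowFilter>();


    Then, configure the flag with a time window:

    {
      "ShowPicture": {
        "EnabledFor": [
          {
            "Name": "TimeWindow",
            "Parameters": {
              "Start": "Wed, 01 Jan 2025 00:00:00 GMT",
              "End": "Fri, 31 Jan 2025 00:00:00 GMT"
            }
          }
        ]
      }
    }


    With this configuration, the flag is on only between the two dates.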

    Further readings

    We learned about setting “simple” configurations in an ASP.NET Core application in a previous article. You should read it to have a better understanding of how we can define configurations.

    🔗Azure App Configuration and ASP.NET Core API: a smart and secure way to manage configurations | Code4IT

    Here, we focused on the Feature Flags. As we saw, most functionalities come out of the box with ASP.NET Core.

    In particular, we learned how to use the <feature> tag on a Razor page. You can read more on the official documentation (even though we already covered almost everything!):

    🔗FeatureTagHelper Class | Microsoft Docs

    This article first appeared on Code4IT 🐧

    Wrapping up

    In this article, we learned how to use Feature Flags in an ASP.NET application on Razor pages and API Controllers.

    Feature Flags can be tremendously useful when activating or deactivating a feature in some specific cases. For example, you can roll out a functionality in production by activating the related flag. Suppose you find an error in that functionality. In that case, you just have to turn off the flag and investigate locally the cause of the issue.

    I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛

    Happy coding!

    🐧





    Source link