Centralizing configurations can be useful for several reasons: security, consistency, deployability. In this article, we’re gonna use Azure App Configuration to centralize the configurations used in a .NET API application.
Almost every application requires some sort of configuration: connection strings, default values, and so on.
It’s not a good practice to keep all the configurations in your codebase: if your code leaks online, you’ll have all your connection strings, private settings, and API keys exposed online.
In this article, we’re gonna make our application more secure by moving our configurations to the cloud and using Azure App Configuration to access them safely from our application.
But first, as always, we need a dummy project to demonstrate such capabilities.
I created a simple .NET 7 API project with just one endpoint, /ConfigDemo, that returns the values from the settings.
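A minimal sketch of what that endpoint might look like, assuming the settings are bound to a MyConfig class registered from the MyNiceConfig section (the exact shape of the class, beyond the PageSize and Host:Password values used later, is illustrative):

using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Options;

[ApiController]
[Route("[controller]")]
public class ConfigDemoController : ControllerBase
{
    private readonly IOptions<MyConfig> _config;

    public ConfigDemoController(IOptions<MyConfig> config)
    {
        _config = config;
    }

    // GET /ConfigDemo: returns the values currently bound from the MyNiceConfig section
    [HttpGet]
    public IActionResult Get() => Ok(_config.Value);
}

// Registered in Program.cs with:
// builder.Services.Configure<MyConfig>(builder.Configuration.GetSection("MyNiceConfig"));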
We want to update the page size and the password. Locate Configuration Explorer in the left menu, click on Create, and add a new value for each configuration. Remember: nested configurations can be defined using the : sign: to update the password, the key must be MyNiceConfig:Host:Password. Important: do not set labels or tags, for now: they are advanced topics, and require some additional settings that we will probably explore in future articles.
Once you’ve overridden both values, you should be able to see something like this:
How to integrate Azure App Configuration in a .NET application
Now we are ready to integrate Azure App Configuration with our .NET APIs.
First things first: we must install the Microsoft.Azure.AppConfiguration.AspNetCore NuGet Package:
Then, we need to find a way to connect to our App Configuration instance. There are two ways: using Azure Active Directory (Azure AD) or using a simple Access Key. We’re gonna use the latter.
Get back to Azure, and locate the Access Keys menu item. Then head to Read-only keys, and copy the full connection string.
Do NOT store it in your repository! There are smarter, more secure ways to store and use such connection strings:
Environment variables: for example, run the application with dotnet run --MYKEY=<your_connection_string>;
launchsettings.json key: you can use different configurations based on the current profile;
secrets store: hidden values only available on your machine. Can be set using dotnet user-secrets set MyKey "<your_connection_string>";
pipeline configurations: you can define such values in your CI/CD pipelines;
But still, for the sake of this example, I will store the connection string in a local variable 😁
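To give an idea of the wiring, here is a minimal sketch of registering Azure App Configuration in Program.cs, assuming the connection string is kept in a local constant (the value is a placeholder; normally it would come from one of the sources listed above):

// Program.cs
const string ConnectionString = "<your_connection_string>";

var builder = WebApplication.CreateBuilder(args);

// Add Azure App Configuration as an additional configuration source
builder.Configuration.AddAzureAppConfiguration(ConnectionString);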
You can now run the APIs and call the previous endpoint to see the new results.
Why should you use Azure App Configuration?
In my opinion, having a proper way to handle configurations is crucial for the success of a project.
Centralizing configurations can be useful in three different ways:
Your application is more secure since you don’t risk having the credentials exposed on the web;
You can share configurations across different services: say that you have 4 services that access the same external APIs that require a Client Secret. Centralizing the config helps in having consistent values across the different services and, for example, updating the secret for all the applications in just one place;
Use different configs based on the environment: with Azure App Configuration you can use a set of tags and labels to determine which configs must be loaded in which environment. This greatly simplifies the management of configurations across different environments.
But notice that, using the basic approach that we used in this article, configurations coming from Azure are loaded at the startup of the application: configs are static until you restart the application. You can configure your application to poll Azure App Configuration to always have the most updated values without needing to restart it, but that will be the topic of a future article.
Further readings
Configuration management is one of the keys to the success of a project: if settings are difficult to manage and difficult to set locally for debugging, you’ll lose a lot of time (true story! 😩).
However, there are several ways to set configurations for a .NET application, such as Environment Variables, launchSettings, and so on.
The global craze around cryptocurrency has fueled both innovation and exploitation. While many legally chase digital gold, cybercriminals hijack devices to mine it covertly. Recently, we encountered a phishing website impersonating a well-known bank, hosting a fake Android app. While the app does not function like a real banking application, it uses the bank’s name and icon to mislead users. Behind the scenes, it silently performs cryptocurrency mining, abusing user devices for illicit gain.
Cryptocurrency mining (or crypto mining) uses computing power to validate and record transactions on a blockchain network. In return, miners are rewarded with new cryptocurrency coins.
This process involves solving complex mathematical puzzles that require significant CPU or GPU resources. While large-scale miners often use powerful rigs equipped with high-end GPUs or ASICs for maximum efficiency, individuals can also legitimately mine cryptocurrencies using personal devices like PCs or smartphones.
Because of Google Play Store policies related to cryptocurrency mining, even legitimate apps that perform on-device mining are not allowed to be published on the Play Store. As a result, users often install such mining applications from third-party sources or unofficial app stores, which increases the risk of encountering malicious or compromised apps disguised as legitimate ones.
Threat actors take advantage of this situation by spreading fake apps on third-party stores and websites. These malicious apps have cryptocurrency mining code embedded within them, allowing attackers to secretly use victims’ devices to mine cryptocurrency for their own benefit.
Here, we refer to legitimate cryptocurrency mining apps that disclose mining activities, obtain user consent, and ensure that the mining profits go directly to the user. In contrast, cryptocurrency mining malware, also known as cryptojackers, secretly mines without permission, hijacking device resources so that the attacker gains all the profits.
What Are the Effects of Mining Malware (cryptojackers) Installed on an Android Device?
Battery Drain: The mining process involves constant, intensive CPU usage, which leads to rapid battery depletion.
Potential Hardware Damage: Prolonged overheating and stress may cause irreversible damage to internal components like the battery, CPU, or motherboard.
High Data Usage: Cryptocurrency mining applications communicate frequently with mining pools, leading to unexpected data usage.
Performance Lag: The app consumes processing power, making the device slow, laggy, or unresponsive.
In a recent case, the phishing site (getxapp[.]in) impersonates Axis Bank and hosts a fake application called “Axis Card.” The malware author has embedded XMRig to perform cryptocurrency mining in the background. XMRig is an open-source cryptocurrency mining software designed to mine Monero and other coins.
Figure 1. Phishing Site
Figure 2 illustrates the attack flow of this campaign. The user initially downloads the malware-laced application either from a phishing site or through social media platforms like WhatsApp. Upon execution, the app displays a fake update screen but provides no actual functionality, causing the user to ignore it.
In the background, however, the malware begins monitoring the device’s status, particularly the battery level and screen lock state. Once the device is locked, the malicious app silently downloads an encrypted .so payload, decrypts it, and initiates cryptomining activity.
If the user unlocks the device, the mining process immediately halts, and the malware returns to the monitoring phase—waiting for the next lock event. This lock–unlock loop allows the miner to operate stealthily and persistently. Over time, this prolonged background mining can lead to excessive heat, battery drain, and permanent hardware damage to the device.
Figure 2. Attack flow of this malware application
Technical analysis:
Figure 3 shows details of the malware application hosted on this fake website.
Figure 3. File information
Figure 4 highlights the permissions declared by the application in its manifest file. Generally, Android mining applications require only the android.permission.INTERNET permission, as it allows them to connect to remote mining servers and carry out operations over the network. This permission is no longer classified as dangerous and is automatically granted by the Android system without requiring explicit user consent.
Many miner apps also request the WAKE_LOCK permission to prevent the device from sleeping, ensuring uninterrupted mining activity even when the screen is off. Additionally, miners often use the android.intent.action.BOOT_COMPLETED broadcast to automatically restart after a device reboot, thereby maintaining persistence.
In this case, the application requests Internet permission along with a few other suspicious permissions.
Figure 4. Permissions declared by the malware in its AndroidManifest file
Malware execution
The app begins by asking for permission to run in the background, which is commonly abused in mining operations to stay active without user interaction. It then displays a fake update screen claiming new features have been added, with a prominent UPDATE button. Clicking the button shows an Install prompt, but instead of installing anything, it ends with a message saying the installer has expired. Interestingly, the app declares the REQUEST_INSTALL_PACKAGES permission, suggesting it intends to install another APK. However, no actual installation occurs, indicating the entire update flow is likely staged for deception or redirection.
Figure 5. Application execution flow
In the background, the malware repeatedly attempts to download a malicious binary from one of several hardcoded URLs. These URLs point to platforms such as GitHub, Cloudflare Pages, and a custom domain (uasecurity[.]org), all of which are used to host the miner payload. Figure 6 illustrates this behavior.
Figure 6. Code used to download the payload binary
Figure 7 shows a screenshot of the GitHub repository hxxps[:]//github[.]com/backend-url-provider/access, which is used to host the miner payloads libmine-arm32.so and libmine-arm64.so. Both files are encrypted to evade static detection and hinder analysis.
Figure 7. Screenshot of the GitHub page hosting the payload binary.
The malware first decrypts the downloaded binary using an AES algorithm (Figure 8). In the next step (Figure 9), the decrypted binary is written to a file named d-miner within the app’s private storage. Once written, the file is marked as executable.
To recover the plaintext payload for analysis, a custom Java-based decryption method was used. Figure 10 confirms that the resulting .so file is based on or directly derived from XMRig’s Android build. The extracted strings reference internal configuration paths, usage instructions, version details, and mining-related URLs. These artifacts clearly validate that the primary purpose of this native library is CPU-based cryptomining.
Figure 10. Strings view from the .so file opened in JEB showing references to XMRig.
Figure 11 illustrates the method NMuU8KNchX5bP8Oy(), which constructs the command-line arguments required by the XMRig miner for execution. It attempts to connect directly to the Monero mining pool at pool.uasecurity.org:9000, or alternatively to a proxy pool at pool-proxy.uasecurity.org:9000, depending on availability.
Figure 11. XMRig initializer code
After determining the working pool endpoint, the method constructs and returns an array of command-line arguments used to launch an XMRig miner with the following configuration:
-o <pool>: The mining pool endpoint (direct or proxy)
-k: Keepalive flag
--tls: Enables TLS encryption
-u <wallet>: Monero wallet address where mined coins are sent
--coin monero: Specifies the coin to mine
-p <password>: Password generated using the current date and a UUID
--nicehash: Adjusts the mining strategy for NiceHash compatibility
The code shown in Figure 12 demonstrates how the d-miner execution is initiated. First, it calls NMuU8KNchX5bP8Oy() to retrieve the arguments. Second, it obtains the path to the d-miner file. Finally, it executes d-miner using the retrieved arguments and file path.
Figure 12. Code used to start d-miner execution
The following code snippet is responsible for uploading the report.txt file generated by the malware. This file captures the stdout output of the XMRig mining process, providing insight into the miner’s execution and activity.
Figure 13. Code used to upload the report file
Logcat Reveals the Complete Picture:
The malware author logs every action performed by the application, including the standard output (stdout) data sent to the mining pool, making Logcat a valuable source for understanding the malware’s full behavior.
Periodic Device Monitoring:
Upon execution, the app checks—every 5 seconds—the battery level, charging status, recent installation status, and whether the device is locked. (See Figure 14)
Figure 14. Logcat_screenshot_1
Mining Triggered on Device Lock:
As soon as the device is locked (i.e., isDeviceLocked becomes true), the malware initiates its mining process. It connects to a Monero mining pool (pool.uasecurity.org) over TLS and receives a mining job using the RandomX algorithm. The malware then allocates approximately 2.3 GB of RAM and starts mining using 8 CPU threads. (See Figure 15)
Figure 15. Logcat_screenshot_2
Mining Stops on Device Unlock:
As soon as the device is unlocked, the malware halts its mining activity and transitions into a monitoring state. (See Figure 16)
Figure 16. Logcat_screenshot_3
Mining Resumes on Device Lock:
Once the device is locked again, the malware resumes mining activity. (See Figure 17)
Figure 17. Logcat_screenshot_4
Effect on the device
The malware significantly strains the device by consuming high CPU and memory resources, leading to overheating and degraded performance.
The top command output clearly shows the d-miner process running under the app’s user (u0_a606), consuming over 746% CPU and 27.5% memory. (See Figure 18) This confirms continuous cryptomining activity in the background, heavily impacting device performance.
Figure 18. Increased CPU usage
Figure 19 shows how the device temperature rises steadily over a 30-minute span while the phone remained locked, increasing from 32.0 °C to 45.0 °C. This gradual rise confirms that the miner continues to operate in the background, causing sustained CPU usage and abnormal heat buildup even when the device is idle.
Figure 19. Increased device temperature
Prolonged activity may damage the device’s hardware or battery and pose safety risks if left unnoticed.
MITRE ATT&CK Tactics and Techniques:
Figure 20
Quick Heal Detection of Android Malware
Quick Heal detects such malicious applications with variants of Android.Dminer.A
It is recommended that all mobile users install a trusted antivirus like “Quick Heal Mobile Security for Android” to mitigate such threats and stay protected. Our antivirus software restricts users from downloading malicious applications on their mobile devices. Download your Android protection here
Conclusion:
This campaign highlights how threat actors abuse trusted banking names like Axis Bank to distribute malware through phishing sites. The malware embeds XMRig, a cryptocurrency miner that runs silently in the background, leading to excessive CPU usage, abnormal heating, and potential long-term hardware damage. Beyond phishing sites, such malware can also spread via social media platforms, often disguised under familiar or reputable names to trick users. This reinforces the importance of user awareness, cautious app installation behavior, and robust mobile security solutions to defend against such threats.
Do not click on any links received through messages or any other social media platforms as they may be intentionally or inadvertently pointing to malicious sites.
Read the pop-up messages you get from the Android system before accepting or allowing any new permissions.
Be extremely cautious about what applications you download on your phone, as malware authors can easily spoof the original applications’ names, icons, and developer details.
ASP.NET allows you to poll Azure App Configuration to always get the most updated values without restarting your applications. It’s simple, but you have to think thoroughly.
In a previous article, we learned how to centralize configurations using Azure App Configuration, a service provided by Azure to share configurations in a secure way. Using Azure App Configuration, you’ll be able to store the most critical configurations in a single place and apply them to one or more environments or projects.
We used a very simple example with a limitation: you have to restart your applications to make changes effective. In fact, ASP.NET connects to Azure App Config, loads the configurations in memory, and serves these configs until the next application restart.
In this article, we’re gonna learn how to make configurations dynamic: by the end of this article, we will be able to see the changes to our configs reflected in our applications without restarting them.
Since this one is a kind of improvement of the previous article, you should read it first.
Let me summarize here the code showcased in the previous article. We have an ASP.NET Core API application whose only purpose is to return the configurations stored in an object, whose shape is this one:
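A sketch of that object, reconstructed from the values used later in this article (the nested type name and the BaseUrl property are assumptions):

public class MyConfig
{
    public int PageSize { get; set; }
    public MyHost Host { get; set; }
}

public class MyHost
{
    public string BaseUrl { get; set; }
    public string Password { get; set; }
}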
Finally, I created a new instance of Azure App Configuration, and I used a connection string to integrate Azure App Configuration with the existing configurations by calling:
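A sketch of that call, consistent with the full listing shown later (the connection string value is a placeholder):

const string ConnectionString = "<your_connection_string>";

builder.Configuration.AddAzureAppConfiguration(options =>
    options.Connect(ConnectionString)
        .Select(KeyFilter.Any, LabelFilter.Null));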
Now we can move on and make configurations dynamic.
Sentinel values: a guard value to monitor changes in the configurations
On Azure App Configuration, you have to update the configurations manually one by one. Unfortunately, there is no way to update them in a single batch. You can import them in a batch, but you have to update them individually.
Imagine that you have a service that accesses an external API whose BaseUrl and API Key are stored on Az App Configuration. We now need to move to another API: we then have to update both BaseUrl and API Key. The application is running, and we want to update the info about the external API. If we updated the application configurations every time something is updated on Az App Configuration, we would end up with an invalid state – for example, we would have the new BaseUrl and the old API Key.
Therefore, we have to define a configuration value that acts as a sort of versioning key for the whole list of configurations. In Azure App Configuration’s jargon, it’s called Sentinel.
A Sentinel is nothing but a version key: it’s a string value that the application uses to understand whether it needs to reload the whole list of configurations. Since it’s just a string, you can set any value, as long as it changes over time. My suggestion is to use the UTC date of the moment you updated the value, such as 202306051522. This way, in case of errors, you can tell when any of these values last changed (but not which ones), and, depending on the pricing tier you are using, you can compare the current values with the previous ones.
So, head back to the Configuration Explorer page and add a new value: I called it Sentinel.
As I said, you can use any value. For the sake of this article, I’m gonna use a plain number.
Define how to refresh configurations using ASP.NET Core app startup
We can finally move to the code!
If you recall, in the previous article we added a NuGet package, Microsoft.Azure.AppConfiguration.AspNetCore, and then we added Azure App Configuration as a configurations source by calling
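Here is a sketch of that registration, now extended with the refresh options (reconstructed from the complete listing shown later in this article):

builder.Configuration.AddAzureAppConfiguration(options =>
{
    options.Connect(ConnectionString)
        .Select(KeyFilter.Any, LabelFilter.Null)
        // Reload every value when the Sentinel key changes
        .ConfigureRefresh(refreshOptions =>
            refreshOptions.Register(key: "Sentinel", label: LabelFilter.Null, refreshAll: true)
                .SetCacheExpiration(TimeSpan.FromSeconds(3)));
});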
Here we are specifying that all values must be refreshed (refreshAll: true) when the key named Sentinel (key: "Sentinel") is updated. Then, the values are cached for 3 seconds (SetCacheExpiration(TimeSpan.FromSeconds(3))).
Here I used 3 seconds as a refresh time. This means that, if the application is used continuously, the application will poll Azure App Configuration every 3 seconds – it’s clearly a bad idea! So, pick the correct value depending on how often you expect changes. The default value for cache expiration is 30 seconds.
Notice that the previous instruction adds Azure App Configuration to the Configuration object, and not as a service used by .NET. In fact, the method is builder.Configuration.AddAzureAppConfiguration. We need two more steps.
First of all, add Azure App Configuration to the IServiceCollection object:
builder.Services.AddAzureAppConfiguration();
Finally, we have to add it to our existing middlewares by calling
app.UseAzureAppConfiguration();
The final result is this:
public static void Main(string[] args)
{
    var builder = WebApplication.CreateBuilder(args);

    const string ConnectionString = "......";

    // Load configuration from Azure App Configuration
    builder.Configuration.AddAzureAppConfiguration(options =>
    {
        options.Connect(ConnectionString)
            .Select(KeyFilter.Any, LabelFilter.Null)
            // Configure to reload configuration if the registered sentinel key is modified
            .ConfigureRefresh(refreshOptions =>
                refreshOptions.Register(key: "Sentinel", label: LabelFilter.Null, refreshAll: true)
                    .SetCacheExpiration(TimeSpan.FromSeconds(3)));
    });

    // Add the service to IServiceCollection
    builder.Services.AddAzureAppConfiguration();

    builder.Services.AddControllers();
    builder.Services.Configure<MyConfig>(builder.Configuration.GetSection("MyNiceConfig"));

    var app = builder.Build();

    // Add the middleware
    app.UseAzureAppConfiguration();

    app.UseHttpsRedirection();

    app.MapControllers();

    app.Run();
}
IOptionsMonitor: accessing and monitoring configuration values
It’s time to run the project and look at the result: some of the values are coming from Azure App Configuration.
Now we can update them: without restarting the application, update the PageSize value, and don’t forget to update the Sentinel too. Call again the endpoint, and… nothing happens! 😯
This is because in our controller we are using IOptions<T> instead of IOptionsMonitor<T>. As we’ve learned in a previous article, IOptionsMonitor<T> is a singleton instance that always gets the most updated config values. It also emits an event when the configurations have been refreshed.
So, head back to the ConfigDemoController, and replace the way we retrieve the config:
[ApiController]
[Route("[controller]")]
public class ConfigDemoController : ControllerBase
{
    private readonly IOptionsMonitor<MyConfig> _config;

    public ConfigDemoController(IOptionsMonitor<MyConfig> config)
    {
        _config = config;
        _config.OnChange(Update);
    }

    [HttpGet()]
    public IActionResult Get()
    {
        return Ok(_config.CurrentValue);
    }

    private void Update(MyConfig arg1, string? arg2)
    {
        Console.WriteLine($"Configs have been updated! PageSize is {arg1.PageSize}, " +
            $" Password is {arg1.Host.Password}");
    }
}
When using IOptionsMonitor<T>, you can retrieve the current values of the configuration object by accessing the CurrentValue property. Also, you can attach an event listener to the OnChange event.
We can finally run the application and update the values on Azure App Configuration.
Again, update one of the values, update the sentinel, and wait. After 3 seconds, you’ll see a message popping up in the console: it’s the text defined in the Update method.
Then, call again the application (again, without restarting it), and admire the updated values!
You can see a live demo here:
As you can see, the first call after updating the Sentinel value still returns the old values. But, in the meantime, the cache expires and the configurations are refreshed, so the next call retrieves the updated values from Azure.
My 2 cents on timing
As we’ve learned, the config values are stored in a memory cache, with an expiration time. Every time the cache expires, we need to retrieve the configurations from Azure App Configuration again (in particular, by checking if the Sentinel value has been updated in the meantime). Don’t underestimate the cache expiration value, as each choice comes with pros and cons:
a short timespan keeps the values always up to date, making your application more reactive to changes. But it also means you are polling the Azure App Configuration endpoints very often, making your application busier and risking the request-count limits;
a long timespan keeps your application more performant because there are fewer requests to the configuration endpoints, but it also means the application picks up changes only some time after they have been applied on Azure.
There is also another issue with long timespans: if the same configurations are used by different services, you might end up in a dirty state. Say that you have UserService and PaymentService, and both use some configurations stored on Azure whose caching expiration is 10 minutes. Now, the following actions happen:
UserService starts
PaymentService starts
Someone updates the values on Azure
UserService restarts, while PaymentService doesn’t.
We will end up in a situation where UserService has the most updated values, while PaymentService doesn’t. There will be a time window (in our example, up to 10 minutes) in which the configurations are misaligned.
Also, take costs and limitations into consideration: with the Free tier you have 1,000 requests per day, while with the Standard tier, you have 30,000 per hour per replica. Using the default cache expiration (30 seconds) in an application with a continuous flow of users means that you are gonna call the endpoint 2,880 times per day (2 times per minute × 1,440 minutes per day) – way more than what the Free tier allows.
So, think thoroughly before choosing an expiration time!
Further readings
This article is a continuation of a previous one, and I suggest you read the other one to understand how to set up Azure App Configuration and how to integrate it in an ASP.NET Core API application in case you don’t want to use dynamic configuration.
Also, we learned that using IOptions we are not getting the most updated values: in fact, we need to use IOptionsMonitor. Check out this article to understand the other differences in the IOptions family.
In my opinion, smart configuration handling is essential for the hard times when you have to understand why an error is happening only in a specific environment.
Centralizing configurations is a good idea, as it allows developers to simulate a whole environment by just changing a few values on the application.
Making configurations live without restarting your applications manually can be a good idea, but you have to analyze it thoroughly.
I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛
We partnered with Meet Your Legend to bring their groundbreaking vision to life — a mentorship platform that seamlessly blends branding, UI/UX, full-stack development, and immersive digital animation.
Meet Your Legend isn’t just another online learning platform. It’s a bridge between generations of creatives. Focused on VFX, animation, and video game production, it connects aspiring talent — whether students, freelancers, or in-house studio professionals — with the industry’s most accomplished mentors. These are the legends behind the scenes: lead animators, FX supervisors, creative directors, and technical wizards who’ve shaped some of the biggest productions in modern entertainment.
Our goal? To create a vivid digital identity and interactive platform that captures three core qualities:
The energy of creativity
The precision of industry-level expertise
The dynamism of motion graphics and storytelling
At the heart of everything was a single driving idea: movement. Not just visual movement — but career momentum, the transfer of knowledge, and the emotional propulsion behind creativity itself.
We built the brand identity around the letter “M” — stylized with an elongated tail that represents momentum, legacy, and forward motion. This tail forms a graphic throughline across the platform. Mentor names, modules, and animations plug into it, creating a modular and adaptable system that evolves with the content and contributors.
From the visual system to the narrative structure, we wanted every interaction to feel alive — dynamic, immersive, and unapologetically aspirational.
The Concept
The site’s architecture is built around a narrative arc, not just a navigation system.
Users aren’t dropped into a menu or a generic homepage. Instead, they’re invited into a story. From the moment the site loads, there’s a sense of atmosphere and anticipation — an introduction to the platform’s mission, mood, and voice before unveiling the core offering: the mentors themselves, or as the platform calls them, “The Legends.”
Each element of the experience is structured with intention. We carefully designed the content flow to evoke a sense of reverence, curiosity, and inspiration. Think of it as a cinematic trailer for a mentorship journey.
We weren’t just explaining the brand — we were immersing visitors in it.
Typography & Color System
The typography system plays a crucial role in reinforcing the platform’s dual personality: technical sophistication meets expressive creativity.
We paired two distinct sans-serif fonts:
– A light-weight, technical font to convey structure, clarity, and approachability — ideal for body text and interface elements
– A bold, expressive typeface that commands attention — perfect for mentor names, quotes, calls to action, and narrative highlights
The contrast between these two fonts helps create rhythm, pacing, and emotional depth across the experience.
The color palette is deliberately cinematic and memorable:
Flash orange signals energy, creative fire, and boldness. It’s the spark — the invitation to engage.
A range of neutrals — beige, brown, and warm grays — offer a sense of balance, maturity, and professionalism. These tones ground the experience and create contrast for vibrant elements.
Together, the system is both modern and timeless — a tribute to craft, not trend.
Technology Stack
We brought the platform to life with a modern and modular tech stack designed for both performance and storytelling:
WordPress (headless CMS) for scalable, easy-to-manage content that supports a dynamic editorial workflow
GSAP (GreenSock Animation Platform) for fluid, timeline-based animations across scroll and interactions
Three.js / WebGL for high-performance visual effects, shaders, and real-time graphical experiences
Custom booking system powered by Make, Google Calendar, Whereby, and Stripe — enabling seamless scheduling, video sessions, and payments
This stack allowed us to deliver a responsive, cinematic experience without compromising speed or maintainability.
Loader Experience
Even the loading screen is part of the story.
We designed a cinematic prelude using the “M” tail as a narrative element. This loader animation doesn’t just fill time — it sets the stage. Meanwhile, key phrases from the creative world — terms like motion 2D & 3D, vfx, cgi, and motion capture — animate in and out of view, building excitement and immersing users in the language of the craft.
It’s a sensory preview of what’s to come, priming the visitor for an experience rooted in industry and artistry.
Title Reveal Effects
Typography becomes motion.
To bring the brand’s kinetic DNA to life, we implemented a custom mask-reveal effect for major headlines. Each title glides into view with trailing motion, echoing the flowing “M” mark. This creates a feeling of elegance, control, and continuity — like a shot dissolving in a film edit.
These transitions do more than delight — they reinforce the platform’s identity, delivering brand through movement.
Menu Interaction
We didn’t want the menu to feel like a utility. We wanted it to feel like a scene transition.
The menu unfolds within the iconic M-shape — its structure serving as both interface and metaphor. As users open it, they reveal layers: content categories, mentor profiles, and stories. Every motion is deliberate, reminiscent of opening a timeline in an editing suite or peeling back layers in a 3D model.
It’s tactile, immersive, and true to the world the platform celebrates.
Gradient & WebGL Shader
A major visual motif was the idea of “burning film” — inspired by analog processes but expressed through modern code.
To bring this to life, we created a custom WebGL shader, incorporating a reactive orange gradient from the brand palette. As users move their mouse or scroll, the shader responds in real-time, adding a subtle but powerful VFX-style distortion to the screen.
This isn’t just decoration. It’s a living texture — a symbol of the heat, friction, and passion that fuel creative careers.
Scroll-Based Storytelling
The homepage isn’t static. It’s a stage for narrative progression.
We designed the flow as a scroll-driven experience where content and story unfold in sync. From an opening slider that introduces the brand, to immersive sections that highlight individual mentors and their work, each moment is carefully choreographed.
Users aren’t just reading — they’re experiencing a sequence, like scenes in a movie or levels in a game. It’s structured, emotional, and deeply human.
Who We Are
We are a digital studio at the intersection of design, storytelling, and interaction. Our approach is rooted in concept and craft. We build digital experiences that are not only visually compelling but emotionally resonant.
From bold brands to immersive websites, we design with movement in mind — movement of pixels, of emotion, and of purpose.
Because we believe great design doesn’t just look good — it moves you.
Customizing the behavior of an HTTP request is easy: you can use a middleware defined as a delegate or as a class.
Sometimes you need to create custom logic that must be applied to all HTTP requests received by your ASP.NET Core application. In these cases, you can create a custom middleware: pieces of code that are executed sequentially for all incoming requests.
The order of middlewares matters. Here’s a nice schema published on the Microsoft website:
A Middleware, in fact, can manipulate the incoming HttpRequest and the resulting HttpResponse objects.
In this article, we’re gonna learn 2 ways to create a middleware in .NET.
Middleware as inline delegates
The easiest way is to define a delegate function, which must be registered after building the WebApplication.
By calling the Use method, you can update the HttpContext object passed as a first parameter.
Note that you have to call the Invoke method to call the next middleware.
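A minimal sketch of such an inline middleware (the log messages are just illustrative):

var app = builder.Build();

app.Use(async (HttpContext context, Func<Task> next) =>
{
    // Code executed before the rest of the pipeline
    Console.WriteLine($"Incoming request: {context.Request.Path}");

    // Call the next middleware
    await next.Invoke();

    // Code executed after the rest of the pipeline
    Console.WriteLine($"Response status code: {context.Response.StatusCode}");
});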
There is a similar overload that accepts a RequestDelegate instance as input instead of Func<Task>, but it is considered less performant: you should, in fact, prefer the one with Func<Task>.
Middleware as standalone classes
The alternative to delegates is defining a custom class.
You can call it whatever you want, but there are some constraints to follow when creating the class (a minimal sketch follows the list):
it must have a public constructor with a single parameter whose type is RequestDelegate (that will be used to invoke the next middleware);
it must expose a public method named Invoke or InvokeAsync that accepts as a first parameter an HttpContext and returns a Task;
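Here is a minimal sketch of such a class (the name and the log message are illustrative):

public class MyCustomMiddleware
{
    // Used to invoke the next middleware in the pipeline
    private readonly RequestDelegate _next;

    public MyCustomMiddleware(RequestDelegate next)
    {
        _next = next;
    }

    public async Task InvokeAsync(HttpContext context)
    {
        // Custom logic executed before the rest of the pipeline
        Console.WriteLine($"Handling {context.Request.Path}");

        await _next(context);
    }
}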
Then, to add it to your application, you have to call
app.UseMiddleware<MyCustomMiddleware>();
Delegates or custom classes?
Both are valid methods, but each of them performs well in specific cases.
For simple scenarios, go with inline delegates: they are easy to define, easy to read, and quite performant. But they are a bit difficult to test.
For complex scenarios, go with custom classes: this way you can define complex behaviors in a single class, organize your code better, and use Dependency Injection to pass services and configurations to the middleware. Also, defining the middleware as a class makes it more testable. The downside is that, as of .NET 7, using such a middleware relies on reflection: UseMiddleware invokes the middleware by looking for a public method named Invoke or InvokeAsync. So, theoretically, using classes is less performant than using delegates (I haven’t benchmarked it yet, though!).
Wrapping up
In the Microsoft documentation, you can find a well-explained introduction to middlewares:
Seqrite Labs APT-Team has identified and tracked UNG0002, also known as Unknown Group 0002, a set of espionage-oriented operations grouped under the same cluster and conducting campaigns across multiple Asian jurisdictions, including China, Hong Kong, and Pakistan. This threat entity demonstrates a strong preference for using shortcut files (LNK), VBScript, and post-exploitation tools such as Cobalt Strike and Metasploit, while consistently deploying CV-themed decoy documents to lure victims.
The cluster’s operations span two major campaigns: Operation Cobalt Whisper (May 2024 – September 2024) and Operation AmberMist (January 2025 – May 2025). During Operation Cobalt Whisper, 20 infection chains were observed targeting defense, electrotechnical engineering, and civil aviation sectors. The more recent Operation AmberMist campaign has evolved to target gaming, software development, and academic institutions with improved lightweight implants including Shadow RAT, Blister DLL Implant, and INET RAT.
In the recent operation AmberMist, the threat entity has also abused the ClickFix Technique – a social engineering method that tricks victims into executing malicious PowerShell scripts through fake CAPTCHA verification pages. Additionally, UNG0002 leverages DLL sideloading techniques, particularly abusing legitimate Windows applications like Rasphone and Node-Webkit binaries to execute malicious payloads.
Multi-Stage Attacks: UNG0002 employs sophisticated infection chains using malicious LNK files, VBScript, batch scripts, and PowerShell to deploy custom RAT implants including Shadow RAT, INET RAT, and Blister DLL.
ClickFix Social Engineering: The group utilizes fake CAPTCHA verification pages to trick victims into executing malicious PowerShell scripts, notably spoofing Pakistan’s Ministry of Maritime Affairs website.
Abusing DLL Sideloading: In the recent campaign, consistent abuse of legitimate Windows applications (Rasphone, Node-Webkit) for DLL sideloading to execute malicious payloads while evading detection.
CV-Themed Decoy Documents: Use of realistic resume documents targeting specific industries, including fake profiles of game UI designers and computer science students from prestigious institutions.
Persistent Infrastructure: Maintained command and control infrastructure with consistent naming patterns and operational security across multiple campaigns spanning over a year.
Targeted Industry Focus: Systematic targeting of defense, electrotechnical engineering, energy, civil aviation, academia, medical institutions, cybersecurity researchers, gaming, and software development sectors.
Attribution Challenges: UNG0002 represents an evolving threat cluster that demonstrates high adaptability by mimicking techniques from other threat actor playbooks to complicate attribution efforts, with Seqrite Labs assessing with high confidence that the group originates from South-East Asia and focuses on espionage activities. As more intelligence becomes available, associated campaigns may be expanded or refined in the future.
UNG0002 represents a sophisticated and persistent threat entity from South Asia that has maintained consistent operations targeting multiple Asian jurisdictions since at least May 2024. The group demonstrates high adaptability and technical proficiency, continuously evolving their toolset while maintaining consistent tactics, techniques, and procedures.
The threat actor’s focus on specific geographic regions (China, Hong Kong, Pakistan) and targeted industries suggests a strategic approach to intelligence gathering, i.e., classic espionage activity. Their use of legitimate-looking decoy documents, social engineering techniques, and pseudo-advanced evasion methods indicates a well-resourced and experienced operation.
UNG0002 demonstrates consistent operational patterns across both Operation Cobalt Whisper and Operation AmberMist, maintaining similar infrastructure naming conventions, payload delivery mechanisms, and target selection criteria. The group’s evolution from using primarily Cobalt Strike and Metasploit frameworks to developing custom implants like Shadow RAT, INET RAT, and Blister DLL indicates their persistent nature.
Notable technical artifacts include PDB paths revealing development environments such as C:\Users\The Freelancer\source\repos\JAN25\mustang\x64\Release\mustang.pdb for Shadow RAT and C:\Users\Shockwave\source\repos\memcom\x64\Release\memcom.pdb for INET RAT, indicating potential code names “Mustang” and “ShockWave” which indicate the mimicry of already-existing threat groups. An in-depth technical analysis of the complete infection chains and detailed campaign specifics can be found in our comprehensive whitepaper.
Attributing threat activity to a specific group is always a complex task. It requires detailed analysis across several areas, including targeting patterns, tactics and techniques (TTPs), geographic focus, and any possible slip-ups in operational security. UNG0002 is an evolving cluster that Seqrite Labs is actively monitoring. As more intelligence becomes available, we may expand or refine the associated campaigns. Based on our current findings, we assess with high confidence that this group originates from South-East Asia and demonstrates a high level of adaptability, often mimicking techniques seen in other threat actor playbooks to complicate attribution, with a clear focus on espionage. We also appreciate other researchers in the community, like malwarehunterteam, for hunting these campaigns.
Integration Tests are incredibly useful: a few Integration Tests are often more useful than lots of Unit Tests. Let’s learn some advanced capabilities of WebApplicationFactory.
In a previous article, we learned a quick way to create Integration Tests for ASP.NET API by using WebApplicationFactory. That was a nice introductory article. But now we will delve into more complex topics and examples.
In my opinion, a few Integration Tests and just the necessary number of Unit tests are better than hundreds of Unit Tests and no Integration Tests at all. In general, the Testing Diamond should be preferred over the Testing Pyramid (well, in most cases).
In this article, we are going to create advanced Integration Tests by defining custom application settings, customizing dependencies to be used only during tests, defining custom logging, and performing complex operations in our tests.
For the sake of this article, I created a sample API application that exposes one single endpoint whose purpose is to retrieve some info about the URL passed in the query string. For example,
GET /SocialPostLink?uri=https%3A%2F%2Ftwitter.com%2FBelloneDavide%2Fstatus%2F1682305491785973760
For completeness, instanceName is a value coming from the appsettings.json file, while info is an object that holds some info about the social post URL passed as input.
Internally, the code is using the Chain of Responsibility pattern: there is a handler that “knows” if it can handle a specific URL; if so, it just elaborates the input; otherwise, it calls the next handler.
There is also a Factory that builds the chain, and finally, a Service that instantiates the Factory and then resolves the dependencies.
As you can see, this solution can become complex. We could run lots of Unit Tests to validate that the Chain of Responsibility works as expected. We can even write a Unit Tests suite for the Factory.
But, at the end of the day, we don’t really care about the internal structure of the project: as long as it works as expected, we could even use a huge switch block (clearly, with all the consequences of this choice). So, let’s write some Integration Tests.
How to create a custom WebApplicationFactory in .NET
When creating Integration Tests for .NET APIs you have to instantiate a new instance of WebApplicationFactory, a class coming from the Microsoft.AspNetCore.Mvc.Testing NuGet Package.
Since we are going to define it once and reuse it across all the tests, let’s create a new class that extends WebApplicationFactory, and add some custom behavior to it.
Let’s focus on the Program class: as you can see, the WebApplicationFactory class requires an entry point. Generally speaking, it’s the Program class of our application.
If you hover over WebApplicationFactory<Program> and hit CTRL+. in Visual Studio, the autocomplete proposes two alternatives: one is the Program class defined in your APIs, while the other one is the Program class defined in Microsoft.VisualStudio.TestPlatform.TestHost. Choose the one for your API application! The WebApplicationFactory class will then instantiate your API following the instructions defined in your Program class, thus resolving all the dependencies and configurations as if you were running your application locally.
What if you don’t have the Program class? If you use top-level statements, the Program class is “implicit”, so you cannot reference it directly. The workaround is to create a new partial class named Program and leave it empty: this way, you have a class name that can be used to reference the API definition:
public partial class Program { }
Here you can override some definitions of the WebHost to be created by calling ConfigureWebHost:
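A skeleton of such a factory might look like this (the customizations shown in the following sections go inside ConfigureWebHost):

using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Mvc.Testing;

public class IntegrationTestWebApplicationFactory : WebApplicationFactory<Program>
{
    protected override void ConfigureWebHost(IWebHostBuilder builder)
    {
        // Test-specific configurations, services, and logging are added here
    }
}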
How to use WebApplicationFactory in your NUnit tests
It’s time to start working on some real Integration Tests!
As we said before, we have only one HTTP endpoint, defined like this:
private readonly ISocialLinkParser _parser;
private readonly ILogger<SocialPostLinkController> _logger;
private readonly IConfiguration _config;

public SocialPostLinkController(ISocialLinkParser parser, ILogger<SocialPostLinkController> logger, IConfiguration config)
{
    _parser = parser;
    _logger = logger;
    _config = config;
}

[HttpGet]
public IActionResult Get([FromQuery] string uri)
{
    _logger.LogInformation("Received uri {Uri}", uri);
    if (Uri.TryCreate(uri, new UriCreationOptions { }, out Uri _uri))
    {
        var linkInfo = _parser.GetLinkInfo(_uri);
        _logger.LogInformation("Uri {Uri} is of type {Type}", uri, linkInfo.SocialNetworkName);

        var instance = new Instance
        {
            InstanceName = _config.GetValue<string>("InstanceName"),
            Info = linkInfo
        };
        return Ok(instance);
    }
    else
    {
        _logger.LogWarning("Uri {Uri} is not a valid Uri", uri);
        return BadRequest();
    }
}
We have 2 flows to validate:
If the input URI is valid, the HTTP Status code should be 200;
If the input URI is invalid, the HTTP Status code should be 400;
We could simply write Unit Tests for this purpose, but let me write Integration Tests instead.
First of all, we have to create a test class and create a new instance of IntegrationTestWebApplicationFactory. Then, every time a test runs, we will create a new HttpClient that automatically includes all the services and configurations defined in the API application.
As you can see, the test class implements IDisposable so that we can call Dispose() on the IntegrationTestWebApplicationFactory instance.
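A sketch of such a test class, assuming NUnit (the class name is illustrative):

using System;
using System.Net.Http;
using NUnit.Framework;

public class ApiIntegrationTests : IDisposable
{
    private readonly IntegrationTestWebApplicationFactory _factory;
    private HttpClient _client;

    public ApiIntegrationTests()
    {
        _factory = new IntegrationTestWebApplicationFactory();
    }

    [SetUp]
    public void Setup()
    {
        // A fresh HttpClient for every test, pointing to the in-memory API
        _client = _factory.CreateClient();
    }

    public void Dispose()
    {
        _factory.Dispose();
    }
}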
From now on, we can use the _client instance to work with the in-memory instance of the API.
One of the best parts of it is that, since it’s an in-memory instance, we can even debug our API application. When you create a test and put a breakpoint in the production code, you can hit it and see the actual values as if you were running the application in a browser.
Now that we have the instance of HttpClient, we can create two tests to ensure that the two cases we defined before are valid. If the input string is a valid URI, return 200:
[Test]
public async Task Should_ReturnHttp200_When_UrlIsValid()
{
string inputUrl = "https://twitter.com/BelloneDavide/status/1682305491785973760";
var result = await _client.GetAsync($"SocialPostLink?uri={inputUrl}");
Assert.That(result.StatusCode, Is.EqualTo(HttpStatusCode.OK));
}
Otherwise, return Bad Request:
[Test]
public async Task Should_ReturnBadRequest_When_UrlIsNotValid()
{
string inputUrl = "invalid-url";
var result = await _client.GetAsync($"/SocialPostLink?uri={inputUrl}");
Assert.That(result.StatusCode, Is.EqualTo(HttpStatusCode.BadRequest));
}
How to create test-specific configurations using InMemoryCollection
WebApplicationFactory is highly configurable thanks to the ConfigureWebHost method. For instance, you can customize the settings injected into your services.
Usually, you want to rely on the exact same configurations defined in your appsettings.json file to ensure that the system behaves correctly with the “real” configurations.
For example, I defined the key “InstanceName” in the appsettings.json file with the value “Real”, which is used to create the returned Instance object. We can validate that the value is read from that source thanks to this test:
[Test]
public async Task Should_ReadInstanceNameFromSettings()
{
string inputUrl = "https://twitter.com/BelloneDavide/status/1682305491785973760";
var result = await _client.GetFromJsonAsync<Instance>($"/SocialPostLink?uri={inputUrl}");
Assert.That(result.InstanceName, Is.EqualTo("Real"));
}
But some other times you might want to override a specific configuration key.
The ConfigureAppConfiguration method allows you to customize how you manage Configurations by adding or removing sources.
If you want to add some configurations specific to the WebApplicationFactory, you can use AddInMemoryCollection, a method that allows you to add configurations in a key-value format:
protected override void ConfigureWebHost(IWebHostBuilder builder)
{
builder.ConfigureAppConfiguration((host, configurationBuilder) =>
{
configurationBuilder.AddInMemoryCollection(
new List<KeyValuePair<string, string?>>
{
new KeyValuePair<string, string?>("InstanceName", "FromTests")
});
});
}
Even if you had the InstanceName configured in your appsettings.json file, the value is now overridden and set to FromTests.
You can validate this change by simply replacing the expected value in the previous test:
[Test]
public async Task Should_ReadInstanceNameFromSettings()
{
string inputUrl = "https://twitter.com/BelloneDavide/status/1682305491785973760";
var result = await _client.GetFromJsonAsync<Instance>($"/SocialPostLink?uri={inputUrl}");
Assert.That(result.InstanceName, Is.EqualTo("FromTests"));
}
If you also want to discard all the other existing configuration sources, you can call configurationBuilder.Sources.Clear() before AddInMemoryCollection and remove all the other existing configurations.
How to set up custom dependencies for your tests
Maybe you don’t want to resolve all the existing dependencies, but just a subset of them. For example, you might not want to call an external API that offers a limited number of free calls, to avoid paying for test-related requests. You can then rely on Stub classes that simulate the dependency while giving you full control of the behavior.
We want to replace an existing class with a stub: let’s create a stub class that will be used instead of SocialLinkParser:
public class StubSocialLinkParser : ISocialLinkParser
{
    public LinkInfo GetLinkInfo(Uri postUri) => new LinkInfo
    {
        SocialNetworkName = "test from stub",
        Id = "test id",
        SourceUrl = postUri,
        Username = "test username"
    };
}
We can then customize Dependency Injection to use StubSocialLinkParser in place of SocialLinkParser by specifying the dependency within the ConfigureTestServices method:
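Here is a sketch of that registration inside the custom factory (ConfigureTestServices is an extension method from the Microsoft.AspNetCore.TestHost namespace):

protected override void ConfigureWebHost(IWebHostBuilder builder)
{
    builder.ConfigureTestServices(services =>
    {
        // The last registration wins: the stub replaces the real parser during tests
        services.AddScoped<ISocialLinkParser, StubSocialLinkParser>();
    });
}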
Finally, we can create a method to validate this change:
[Test]
public async Task Should_UseStubName()
{
string inputUrl = "https://twitter.com/BelloneDavide/status/1682305491785973760";
var result = await _client.GetFromJsonAsync<Instance>($"/SocialPostLink?uri={inputUrl}");
Assert.That(result.Info.SocialNetworkName, Is.EqualTo("test from stub"));
}
How to create Integration Tests on specific resolved dependencies
Now we are going to test that the SocialLinkParser does its job, regardless of the internal implementation. Right now we have used the Chain of Responsibility pattern, and we rely on the ISocialLinksFactory interface to create the correct sequence of handlers. But we don’t know in the future how we will define the code: maybe we will replace it all with a huge if-else sequence – the most important part is that the code works, regardless of the internal implementation.
We can proceed in two ways: writing tests on the interface or writing tests on the concrete class.
For the sake of this article, we are going to run tests on the SocialLinkParser class. Not the interface, but the concrete class. The first step is to add the class to the DI engine in the Program class:
builder.Services.AddScoped<SocialLinkParser>();
Now we can create a test to validate that it is working:
[Test]
public async Task Should_ResolveDependency()
{
using (var _scope = _factory.Services.CreateScope())
{
var service = _scope.ServiceProvider.GetRequiredService<SocialLinkParser>();
Assert.That(service, Is.Not.Null);
Assert.That(service, Is.AssignableTo<SocialLinkParser>());
}
}
As you can see, we are creating an IServiceScope by calling _factory.Services.CreateScope(). Since we have to discard this scope after the test run, we have to place it within a using block. Then, we can resolve a new instance of SocialLinkParser by calling _scope.ServiceProvider.GetRequiredService<SocialLinkParser>() and write all the tests we want against the concrete implementation of the class.
The benefit of this approach is that you have all the internal dependencies already resolved, without relying on mocks. You can then ensure that everything, from that point on, works as you expect.
Here I created the scope within a using block. There is another approach that I prefer: create the scope instance in the SetUp method, and call Dispose() on it in the TearDown phase:
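Here is a sketch of that approach, assuming NUnit (the field names are illustrative):

private IServiceScope _scope;
private SocialLinkParser _sut;

[SetUp]
public void Setup()
{
    _scope = _factory.Services.CreateScope();
    _sut = _scope.ServiceProvider.GetRequiredService<SocialLinkParser>();
}

[TearDown]
public void TearDown()
{
    _scope.Dispose();
}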
Sometimes you just want to see the logs generated by your application to help you debug an issue (yes, you can simply debug the application!). But, unless properly configured, the application logs will not be available to you.
But you can easily add logs to the console by adding the Console sink in your ConfigureTestServices method:
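A sketch of that customization, assuming the standard Microsoft.Extensions.Logging console provider:

builder.ConfigureTestServices(services =>
{
    services.AddLogging(loggingBuilder => loggingBuilder.AddConsole());
});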
Now you will be able to see all the logs you generated in the Output panel of Visual Studio by selecting the Tests source:
Beware that you are still reading the logging configurations from the appsettings file! If you have configured your project to log directly to a sink (such as DataDog or SEQ), your tests will send those logs to the specified sinks. Therefore, you should get rid of all the other logging sources by calling ClearProviders():
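A sketch of that cleanup, again inside ConfigureTestServices:

builder.ConfigureTestServices(services =>
{
    services.AddLogging(loggingBuilder =>
    {
        // Remove every logging provider configured by the application (e.g. external sinks)
        loggingBuilder.ClearProviders();
        loggingBuilder.AddConsole();
    });
});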
As I often say, a few Integration Tests are often more useful than a ton of Unit Tests. Focusing on Integration Tests instead of Unit Tests has the benefit of ensuring that the system behaves correctly regardless of the internal implementation.
In this article, I used the Chain of Responsibility pattern, so Unit Tests would be tightly coupled to the Handlers. If we decided to move to another pattern, we would have to delete all the existing tests and rewrite everything from scratch.
Therefore, in my opinion, the Testing Diamond is often more efficient than the Testing Pyramid, as I explained here: