Customizing the behavior of an HTTP request is easy: you can use a middleware defined as a delegate or as a class.
Sometimes you need to create custom logic that must be applied to all HTTP requests received by your ASP.NET Core application. In these cases, you can create a custom middleware: pieces of code that are executed sequentially for all incoming requests.
The order of middlewares matters. Here’s a nice schema published on the Microsoft website:
A Middleware, in fact, can manipulate the incoming HttpRequest and the resulting HttpResponse objects.
In this article, we’re gonna learn 2 ways to create a middleware in .NET.
Middleware as inline delegates
The easiest way is to define an inline delegate right after building the WebApplication.
By calling the Use method, you can update the HttpContext object passed as a first parameter.
Note that you have to invoke the next delegate to pass control to the next middleware in the pipeline.
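Here's a minimal sketch of an inline middleware (the header name is just an example):

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

app.Use(async (context, next) =>
{
    // runs before the rest of the pipeline
    context.Response.Headers["X-Custom-Header"] = "my-value";

    // invoke the next middleware in the pipeline
    await next();

    // runs after the rest of the pipeline has completed
});

app.Run();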
There is a similar overload that accepts a RequestDelegate instance instead of a Func<Task>, but it is considered less performant: you should prefer the one with Func<Task>.
Middleware as standalone classes
The alternative to delegates is defining a custom class.
You can call it whatever you want, but you have some constraints to follow when creating the class:
it must have a public constructor with a single parameter whose type is RequestDelegate (that will be used to invoke the next middleware);
it must expose a public method named Invoke or InvokeAsync that accepts as a first parameter an HttpContext and returns a Task;
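A minimal sketch that satisfies both constraints could look like this (the pre/post logic is just a placeholder):

public class MyCustomMiddleware
{
    private readonly RequestDelegate _next;

    public MyCustomMiddleware(RequestDelegate next)
    {
        _next = next;
    }

    public async Task InvokeAsync(HttpContext context)
    {
        // logic executed before the rest of the pipeline

        await _next(context);

        // logic executed after the rest of the pipeline has completed
    }
}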
Then, to add it to your application, you have to call
app.UseMiddleware<MyCustomMiddleware>();
Delegates or custom classes?
Both are valid methods, but each of them performs well in specific cases.
For simple scenarios, go with inline delegates: they are easy to define, easy to read, and quite performant. But they are a bit difficult to test.
For complex scenarios, go with custom classes: this way you can define complex behaviors in a single class, organize your code better, and use Dependency Injection to pass services and configurations to the middleware. Also, defining the middleware as a class makes it more testable. The downside is that, as of .NET 7, class-based middlewares rely on reflection: UseMiddleware invokes the middleware by looking for a public method named Invoke or InvokeAsync. So, theoretically, using classes is less performant than using delegates (I haven’t benchmarked it yet, though!).
Wrapping up
On the Microsoft documentation, you can find a well-explained introduction to Middlewares:
Seqrite Labs APT-Team has identified and tracked UNG0002, also known as Unknown Group 0002, a cluster of espionage-oriented operations conducting campaigns across multiple Asian jurisdictions, including China, Hong Kong, and Pakistan. This threat entity demonstrates a strong preference for shortcut files (LNK), VBScript, and post-exploitation tools such as Cobalt Strike and Metasploit, while consistently deploying CV-themed decoy documents to lure victims.
The cluster’s operations span two major campaigns: Operation Cobalt Whisper (May 2024 – September 2024) and Operation AmberMist (January 2025 – May 2025). During Operation Cobalt Whisper, 20 infection chains were observed targeting defense, electrotechnical engineering, and civil aviation sectors. The more recent Operation AmberMist campaign has evolved to target gaming, software development, and academic institutions with improved lightweight implants including Shadow RAT, Blister DLL Implant, and INET RAT.
In the recent operation AmberMist, the threat entity has also abused the ClickFix Technique – a social engineering method that tricks victims into executing malicious PowerShell scripts through fake CAPTCHA verification pages. Additionally, UNG0002 leverages DLL sideloading techniques, particularly abusing legitimate Windows applications like Rasphone and Node-Webkit binaries to execute malicious payloads.
Multi-Stage Attacks: UNG0002 employs sophisticated infection chains using malicious LNK files, VBScript, batch scripts, and PowerShell to deploy custom RAT implants including Shadow RAT, INET RAT, and Blister DLL.
ClickFix Social Engineering: The group utilizes fake CAPTCHA verification pages to trick victims into executing malicious PowerShell scripts, notably spoofing Pakistan’s Ministry of Maritime Affairs website.
Abusing DLL Sideloading: In the recent campaign, consistent abuse of legitimate Windows applications (Rasphone, Node-Webkit) for DLL sideloading to execute malicious payloads while evading detection.
CV-Themed Decoy Documents: Use of realistic resume documents targeting specific industries, including fake profiles of game UI designers and computer science students from prestigious institutions.
Persistent Infrastructure: Maintained command and control infrastructure with consistent naming patterns and operational security across multiple campaigns spanning over a year.
Targeted Industry Focus: Systematic targeting of defense, electrotechnical engineering, energy, civil aviation, academia, medical institutions, cybersecurity researchers, gaming, and software development sectors.
Attribution Challenges: UNG0002 represents an evolving threat cluster that demonstrates high adaptability by mimicking techniques from other threat actor playbooks to complicate attribution efforts, with Seqrite Labs assessing with high confidence that the group originates from South-East Asia and focuses on espionage activities. As more intelligence becomes available, associated campaigns may be expanded or refined in the future.
UNG0002 represents a sophisticated and persistent threat entity from South-East Asia that has maintained consistent operations targeting multiple Asian jurisdictions since at least May 2024. The group demonstrates high adaptability and technical proficiency, continuously evolving its toolset while maintaining consistent tactics, techniques, and procedures.
The threat actor’s focus on specific geographic regions (China, Hong Kong, Pakistan) and targeted industries suggests a strategic approach to intelligence gathering, i.e., classic espionage activity. Their use of legitimate-looking decoy documents, social engineering techniques, and pseudo-advanced evasion methods indicates a well-resourced and experienced operation.
UNG0002 demonstrates consistent operational patterns across both Operation Cobalt Whisper and Operation AmberMist, maintaining similar infrastructure naming conventions, payload delivery mechanisms, and target selection criteria. The group’s evolution from using primarily Cobalt Strike and Metasploit frameworks to developing custom implants like Shadow RAT, INET RAT, and Blister DLL indicates their persistent nature.
Notable technical artifacts include PDB paths revealing development environments, such as C:\Users\The Freelancer\source\repos\JAN25\mustang\x64\Release\mustang.pdb for Shadow RAT and C:\Users\Shockwave\source\repos\memcom\x64\Release\memcom.pdb for INET RAT, pointing to potential code names “Mustang” and “ShockWave” and suggesting mimicry of already-existing threat groups. An in-depth technical analysis of the complete infection chains and detailed campaign specifics can be found in our comprehensive whitepaper.
Attributing threat activity to a specific group is always a complex task. It requires detailed analysis across several areas, including targeting patterns, tactics and techniques (TTPs), geographic focus, and any possible slip-ups in operational security. UNG0002 is an evolving cluster that Seqrite Labs is actively monitoring. As more intelligence becomes available, we may expand or refine the associated campaigns. Based on our current findings, we assess with high confidence that this group originates from South-East Asia, focuses on espionage, and demonstrates a high level of adaptability, often mimicking techniques seen in other threat actor playbooks to complicate attribution. We also appreciate other researchers in the community, like MalwareHunterTeam, for hunting these campaigns.
Integration Tests are incredibly useful: a few Integration Tests are often more useful than lots of Unit Tests. Let’s learn some advanced capabilities of WebApplicationFactory.
In a previous article, we learned a quick way to create Integration Tests for ASP.NET API by using WebApplicationFactory. That was a nice introductory article. But now we will delve into more complex topics and examples.
In my opinion, a few Integration Tests and just the necessary number of Unit tests are better than hundreds of Unit Tests and no Integration Tests at all. In general, the Testing Diamond should be preferred over the Testing Pyramid (well, in most cases).
In this article, we are going to create advanced Integration Tests by defining custom application settings, customizing dependencies to be used only during tests, defining custom logging, and performing complex operations in our tests.
For the sake of this article, I created a sample API application that exposes one single endpoint whose purpose is to retrieve some info about the URL passed in the query string. For example,
GET /SocialPostLink?uri=https%3A%2F%2Ftwitter.com%2FBelloneDavide%2Fstatus%2F1682305491785973760
For completeness, instanceName is a value coming from the appsettings.json file, while info is an object that holds some info about the social post URL passed as input.
Internally, the code is using the Chain of Responsibility pattern: there is a handler that “knows” if it can handle a specific URL; if so, it just elaborates the input; otherwise, it calls the next handler.
There is also a Factory that builds the chain, and finally, a Service that instantiates the Factory and then resolves the dependencies.
As you can see, this solution can become complex. We could run lots of Unit Tests to validate that the Chain of Responsibility works as expected. We could even write a Unit Test suite for the Factory.
But, at the end of the day, we don’t really care about the internal structure of the project: as long as it works as expected, we could even use a huge switch block (clearly, with all the consequences of this choice). So, let’s write some Integration Tests.
How to create a custom WebApplicationFactory in .NET
When creating Integration Tests for .NET APIs, you have to instantiate a new instance of WebApplicationFactory, a class coming from the Microsoft.AspNetCore.Mvc.Testing NuGet package.
Since we are going to define it once and reuse it across all the tests, let’s create a new class that extends WebApplicationFactory, and add some custom behavior to it.
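As a minimal sketch, the custom factory can start out empty (the class name is the one we'll use throughout the article):

public class IntegrationTestWebApplicationFactory : WebApplicationFactory<Program>
{
}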
Let’s focus on the Program class: as you can see, the WebApplicationFactory class requires an entry point. Generally speaking, it’s the Program class of our application.
If you hover on WebApplicationFactory<Program> and hit CTRL+. in Visual Studio, the autocomplete proposes two alternatives: one is the Program class defined in your APIs, while the other one is the Program class defined in Microsoft.VisualStudio.TestPlatform.TestHost. Choose the one for your API application! The WebApplicationFactory class will then instantiate your API following the instructions defined in your Program class, thus resolving all the dependencies and configurations as if you were running your application locally.
What should you do if you don’t have a Program class? If you use top-level statements, the Program class is implicit, so you cannot reference it directly. The solution is to create a new partial class named Program and leave it empty: this way, you have a class name that can be used to reference the API definition:
public partial class Program { }
Here you can override some definitions of the WebHost to be created by calling ConfigureWebHost:
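(A sketch of an empty override; we will fill it in later in the article.)

protected override void ConfigureWebHost(IWebHostBuilder builder)
{
    // here we will customize configurations, services, and logging for the tests
}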
How to use WebApplicationFactory in your NUnit tests
It’s time to start working on some real Integration Tests!
As we said before, we have only one HTTP endpoint, defined like this:
private readonly ISocialLinkParser _parser;
private readonly ILogger<SocialPostLinkController> _logger;
private readonly IConfiguration _config;

public SocialPostLinkController(ISocialLinkParser parser, ILogger<SocialPostLinkController> logger, IConfiguration config)
{
    _parser = parser;
    _logger = logger;
    _config = config;
}

[HttpGet]
public IActionResult Get([FromQuery] string uri)
{
    _logger.LogInformation("Received uri {Uri}", uri);
    if (Uri.TryCreate(uri, new UriCreationOptions { }, out Uri _uri))
    {
        var linkInfo = _parser.GetLinkInfo(_uri);
        _logger.LogInformation("Uri {Uri} is of type {Type}", uri, linkInfo.SocialNetworkName);

        var instance = new Instance
        {
            InstanceName = _config.GetValue<string>("InstanceName"),
            Info = linkInfo
        };
        return Ok(instance);
    }
    else
    {
        _logger.LogWarning("Uri {Uri} is not a valid Uri", uri);
        return BadRequest();
    }
}
We have 2 flows to validate:
If the input URI is valid, the HTTP Status code should be 200;
If the input URI is invalid, the HTTP Status code should be 400;
We could simply write Unit Tests for this purpose, but let me write Integration Tests instead.
First of all, we have to create a test class and instantiate a new IntegrationTestWebApplicationFactory. Then, every time a test runs, we will create a new HttpClient that automatically includes all the services and configurations defined in the API application.
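Here's a minimal sketch of such a test class (names and structure are illustrative):

public class ApiIntegrationTests : IDisposable
{
    private readonly IntegrationTestWebApplicationFactory _factory;
    private HttpClient _client;

    public ApiIntegrationTests()
    {
        // one factory instance shared by all the tests in this class
        _factory = new IntegrationTestWebApplicationFactory();
    }

    [SetUp]
    public void Setup()
    {
        // a fresh HttpClient for every test, backed by the in-memory API
        _client = _factory.CreateClient();
    }

    public void Dispose() => _factory.Dispose();
}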
As you can see, the test class implements IDisposable so that we can call Dispose() on the IntegrationTestWebApplicationFactory instance.
From now on, we can use the _client instance to work with the in-memory instance of the API.
One of the best parts of it is that, since it’s an in-memory instance, we can even debug our API application. When you create a test and put a breakpoint in the production code, you can hit it and see the actual values as if you were running the application in a browser.
Now that we have the instance of HttpClient, we can create two tests to ensure that the two cases we defined before are valid. If the input string is a valid URI, return 200:
[Test]
public async Task Should_ReturnHttp200_When_UrlIsValid()
{
    string inputUrl = "https://twitter.com/BelloneDavide/status/1682305491785973760";

    var result = await _client.GetAsync($"SocialPostLink?uri={inputUrl}");

    Assert.That(result.StatusCode, Is.EqualTo(HttpStatusCode.OK));
}
Otherwise, return Bad Request:
[Test]
public async Task Should_ReturnBadRequest_When_UrlIsNotValid()
{
    string inputUrl = "invalid-url";

    var result = await _client.GetAsync($"/SocialPostLink?uri={inputUrl}");

    Assert.That(result.StatusCode, Is.EqualTo(HttpStatusCode.BadRequest));
}
How to create test-specific configurations using InMemoryCollection
WebApplicationFactory is highly configurable thanks to the ConfigureWebHost method. For instance, you can customize the settings injected into your services.
Usually, you want to rely on the exact same configurations defined in your appsettings.json file to ensure that the system behaves correctly with the “real” configurations.
For example, I defined the key “InstanceName” in the appsettings.json file, whose value is “Real” and which is used to create the returned Instance object. We can validate that the value is being read from that source thanks to this test:
[Test]
public async Task Should_ReadInstanceNameFromSettings()
{
    string inputUrl = "https://twitter.com/BelloneDavide/status/1682305491785973760";

    var result = await _client.GetFromJsonAsync<Instance>($"/SocialPostLink?uri={inputUrl}");

    Assert.That(result.InstanceName, Is.EqualTo("Real"));
}
But some other times you might want to override a specific configuration key.
The ConfigureAppConfiguration method allows you to customize how you manage Configurations by adding or removing sources.
If you want to add some configurations specific to the WebApplicationFactory, you can use AddInMemoryCollection, a method that allows you to add configurations in a key-value format:
protected override void ConfigureWebHost(IWebHostBuilder builder)
{
    builder.ConfigureAppConfiguration((host, configurationBuilder) =>
    {
        configurationBuilder.AddInMemoryCollection(
            new List<KeyValuePair<string, string?>>
            {
                new KeyValuePair<string, string?>("InstanceName", "FromTests")
            });
    });
}
Even if you had the InstanceName configured in your appsettings.json file, the value is now overridden and set to FromTests.
You can validate this change by simply replacing the expected value in the previous test:
[Test]
public async Task Should_ReadInstanceNameFromSettings()
{
    string inputUrl = "https://twitter.com/BelloneDavide/status/1682305491785973760";

    var result = await _client.GetFromJsonAsync<Instance>($"/SocialPostLink?uri={inputUrl}");

    Assert.That(result.InstanceName, Is.EqualTo("FromTests"));
}
If you also want to discard all the other existing configuration sources, you can call configurationBuilder.Sources.Clear() before AddInMemoryCollection and remove all the other existing configurations.
How to set up custom dependencies for your tests
Maybe you don’t want to resolve all the existing dependencies, but just a subset of them. For example, you might not want to call external APIs with a limited number of free API calls to avoid paying for the test-related calls. You can then rely on Stub classes that simulate the dependency by giving you full control of the behavior.
We want to replace an existing class with a Stub one: we are going to create a stub class that will be used instead of SocialLinkParser:
public class StubSocialLinkParser : ISocialLinkParser
{
    public LinkInfo GetLinkInfo(Uri postUri) => new LinkInfo
    {
        SocialNetworkName = "test from stub",
        Id = "test id",
        SourceUrl = postUri,
        Username = "test username"
    };
}
We can then customize Dependency Injection to use StubSocialLinkParser in place of SocialLinkParser by specifying the dependency within the ConfigureTestServices method:
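(A sketch of that registration inside our custom factory; registering the stub after the application's own services means the stub wins when ISocialLinkParser is resolved.)

protected override void ConfigureWebHost(IWebHostBuilder builder)
{
    builder.ConfigureTestServices(services =>
    {
        services.AddScoped<ISocialLinkParser, StubSocialLinkParser>();
    });
}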
Finally, we can create a method to validate this change:
[Test]
public async Task Should_UseStubName()
{
    string inputUrl = "https://twitter.com/BelloneDavide/status/1682305491785973760";

    var result = await _client.GetFromJsonAsync<Instance>($"/SocialPostLink?uri={inputUrl}");

    Assert.That(result.Info.SocialNetworkName, Is.EqualTo("test from stub"));
}
How to create Integration Tests on specific resolved dependencies
Now we are going to test that the SocialLinkParser does its job, regardless of the internal implementation. Right now we have used the Chain of Responsibility pattern, and we rely on the ISocialLinksFactory interface to create the correct sequence of handlers. But we don’t know in the future how we will define the code: maybe we will replace it all with a huge if-else sequence – the most important part is that the code works, regardless of the internal implementation.
We can proceed in two ways: writing tests on the interface or writing tests on the concrete class.
For the sake of this article, we are going to run tests on the SocialLinkParser class. Not the interface, but the concrete class. The first step is to add the class to the DI engine in the Program class:
builder.Services.AddScoped<SocialLinkParser>();
Now we can create a test to validate that it is working:
[Test]
public async Task Should_ResolveDependency()
{
    using (var _scope = _factory.Services.CreateScope())
    {
        var service = _scope.ServiceProvider.GetRequiredService<SocialLinkParser>();

        Assert.That(service, Is.Not.Null);
        Assert.That(service, Is.AssignableTo<SocialLinkParser>());
    }
}
As you can see, we are creating an IServiceScope by calling _factory.Services.CreateScope(). Since we have to discard this scope after the test run, we have to place it within a using block. Then, we can create a new instance of SocialLinkParser by calling _scope.ServiceProvider.GetRequiredService<SocialLinkParser>() and create all the tests we want on the concrete implementation of the class.
The benefit of this approach is that you have all the internal dependencies already resolved, without relying on mocks. You can then ensure that everything, from that point on, works as you expect.
Here I created the scope within a using block. There is another approach that I prefer: create the scope instance in the SetUp method, and call Dispose() on it in the TearDown phase:
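(A sketch of that alternative; the field name is illustrative.)

private IServiceScope _scope;

[SetUp]
public void Setup()
{
    _scope = _factory.Services.CreateScope();
}

[TearDown]
public void TearDown()
{
    _scope.Dispose();
}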
Sometimes you just want to see the logs generated by your application to help you debug an issue (yes, you can simply debug the application!). But, unless properly configured, the application logs will not be available to you.
But you can easily add logs to the console by adding the Console sink in your ConfigureTestServices method:
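(A sketch, assuming you register the Console provider via AddLogging.)

builder.ConfigureTestServices(services =>
{
    services.AddLogging(loggingBuilder => loggingBuilder.AddConsole());
});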
Now you will be able to see all the logs you generated in the Output panel of Visual Studio by selecting the Tests source:
Beware that you are still reading the configurations for logging from the appsettings file! If you have specified in your project to log directly to a sink (such as DataDog or SEQ), your tests will send those logs to the specified sinks. Therefore, you should get rid of all the other logging sources by calling ClearProviders():
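(A sketch that extends the previous snippet, keeping only the Console provider.)

services.AddLogging(loggingBuilder =>
{
    loggingBuilder.ClearProviders();
    loggingBuilder.AddConsole();
});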
As I often say, a few Integration Tests are often more useful than a ton of Unit Tests. Focusing on Integration Tests instead of on Unit Tests has the benefit of ensuring that the system behaves correctly regardless of the internal implementation.
In this article, I used the Chain of Responsibility pattern, so Unit Tests would be tightly coupled to the Handlers. If we decided to move to another pattern, we would have to delete all the existing tests and rewrite everything from scratch.
Therefore, in my opinion, the Testing Diamond is often more efficient than the Testing Pyramid, as I explained here:
Feature Flags are a technique that allows you to control the visibility and functionality of features in your software without changing the code. They enable you to experiment with new features, perform gradual rollouts, and revert changes quickly if needed.
To turn functionalities on or off in an application, you can use simple if(condition) statements. That would work, of course. But it would not be flexible, and you’d have to scatter those checks all around the application.
There is another way, though: Feature Flags. Feature Flags allow you to effortlessly enable and disable functionalities, such as Middlewares, HTML components, and API controllers. Using ASP.NET Core, you have Feature Flags almost ready to be used: it’s just a matter of installing one NuGet package and using the correct syntax.
In this article, we are going to create and consume Feature Flags in an ASP.NET Core application. We will start from the very basics and then see how to use complex, built-in filters. We will consume Feature Flags in generic C# code, and then we will see how to include them in a Razor application and in ASP.NET Core APIs.
How to add the Feature Flags functionality on ASP.NET Core applications
The very first step is to install the Microsoft.FeatureManagement.AspNetCore NuGet package:
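For example, from the command line:

dotnet add package Microsoft.FeatureManagement.AspNetCore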
This package contains everything you need to integrate Feature Flags in an ASP.NET application, from reading configurations from the appsettings.json file to the utility methods we will see later in this article.
Now that we have the package installed, we can integrate it into our application. The first step is to call AddFeatureManagement on the IServiceCollection object available in the Main method:
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddFeatureManagement();
By default, this method looks for Feature Flags in a configuration section named FeatureManagement.
If you want to use another name, you can specify it by accessing the Configuration object. For example, if your section name is MyWonderfulFlags, you must use this line instead of the previous one:
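(A sketch, assuming the flags live in a section named MyWonderfulFlags.)

builder.Services.AddFeatureManagement(builder.Configuration.GetSection("MyWonderfulFlags"));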
But, for now, let’s stick with the default section name: FeatureManagement.
Define Feature Flag values in the appsettings file
As we saw, we have to create a section named FeatureManagement in the appsettings file. This section will contain a collection of keys, each representing a Feature Flag and an associated value.
For now, let’s say that the value is a simple boolean (we will see an advanced case later!).
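For example, a minimal FeatureManagement section with the flags used throughout this article might look like this (the values are just examples):

{
  "FeatureManagement": {
    "Header": true,
    "Footer": true,
    "PrivacyPage": false,
    "ShowPicture": true
  }
}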
The simplest way to use Feature Flags is by accessing the value directly in the C# code.
By calling AddFeatureManagement, we have also injected the IFeatureManager interface, which comes in handy to check whether a flag is enabled.
You can then inject it in a class constructor and reference it:
private readonly IFeatureManager _featureManager;

public MyClass(IFeatureManager featureManager)
{
    _featureManager = featureManager;
}

public async Task DoSomething()
{
    bool privacyEnabled = await _featureManager.IsEnabledAsync("PrivacyPage");
    if (privacyEnabled)
    {
        // do something specific
    }
}
This is the simplest way. Looks like it’s nothing more than a simple if statement. Is it?
Applying a Feature Flag to a Controller or a Razor Model using the FeatureGate attribute
When rolling out new versions of your application, you might want to enable or disable an API Controller or a whole Razor Page, depending on the value of a Feature Flag.
There is a simple way to achieve this result: using the FeatureGate attribute.
Suppose you want to hide the “Privacy” Razor page depending on its related flag, PrivacyPage. You then have to apply the FeatureGate attribute to the whole Model class (in our case, PrivacyModel), specifying that the flag to watch out for is PrivacyPage:
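(A sketch of the decorated Razor Page model; the page content itself is omitted.)

[FeatureGate("PrivacyPage")]
public class PrivacyModel : PageModel
{
    public void OnGet()
    {
    }
}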
Depending on the value of the flag, we will have two results:
if the flag is enabled, we will see the whole page normally;
if the flag is disabled, we will receive a 404 – Not Found response.
Let’s have a look at the attribute definition:
//
// Summary:
//     An attribute that can be placed on MVC controllers, controller actions, or Razor
//     pages to require all or any of a set of features to be enabled.
[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method, AllowMultiple = true)]
public class FeatureGateAttribute : ActionFilterAttribute, IAsyncPageFilter, IFilterMetadata
As you can see, you can apply the attribute to any class or method that is related to API controllers or Razor pages. This allows you to support several scenarios:
add a flag on a whole API Controller by applying the attribute to the related class;
add a flag on a specific Controller Action, allowing you, for example, to expose the GET Action but apply the attribute to the POST Action.
add a flag to a whole Razor Model, hiding or showing the related page depending on the flag value.
You can apply the attribute to a custom class or method unrelated to the MVC pipeline, but it will be ineffective.
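For example, here is a sketch of an ordinary class (a hypothetical MyService with a Hello method) decorated with the attribute:

public class MyService
{
    [FeatureGate("PrivacyPage")]
    public string Hello() => "Hello!";
}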
Here, the Hello method will be called as usual, regardless of the flag value. The same happens for the OnGet method: yes, it represents the way to access the Razor Page, but you cannot hide it by decorating the method; the only way is to apply the flag to the whole Model.
You can use multiple Feature Flags on the same FeatureGate attribute. If you need to hide or show a component based on various Feature Flags, you can simply add the required keys in the attribute parameters list:
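(A sketch of an action decorated with two flags.)

[FeatureGate("PrivacyPage", "Footer")]
[HttpGet]
public IActionResult Get() => Ok();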
Now, the GET endpoint will be available only if both PrivacyPage and Footer are enabled.
Finally, you can define that the component is available if at least one of the flags is enabled by setting the requirementType parameter to RequirementType.Any:
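(A sketch of the same action, this time requiring at least one of the flags.)

[FeatureGate(RequirementType.Any, "PrivacyPage", "Footer")]
[HttpGet]
public IActionResult Get() => Ok();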
The Microsoft.FeatureManagement.AspNetCore NuGet package brings a lot of functionalities. Once installed, you can use Feature Flags in your Razor pages.
To use such functionalities, though, you have to add the related tag helper: open the _ViewImports.cshtml file and add the following line:
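(The registration below is the standard one documented for the Microsoft.FeatureManagement.AspNetCore package.)

@addTagHelper *, Microsoft.FeatureManagement.AspNetCore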
Say you want to show an HTML tag when the Header flag is on. You can use the feature tag this way:
<feature name="Header">
  <p>The header flag is on.</p>
</feature>
You can also show some content when the flag is off, by setting the negate attribute to true. This comes in handy when you want to display alternative content when the flag is off:
<feature name="ShowPicture">
  <img src="image.png" />
</feature>
<feature name="ShowPicture" negate="true">
  <p>There should have been an image, here!</p>
</feature>
Clearly, if ShowPicture is on, it shows the image; otherwise, it displays a text message.
Similar to the FeatureGate attribute, you can apply multiple flags and choose whether all of them or at least one must be on to show the content by setting the requirement attribute to Any (remember: the default value is All):
<feature name="Header, Footer" requirement="All">
  <p>Both header and footer are enabled.</p>
</feature>
<feature name="Header, Footer" requirement="Any">
  <p>Either header or footer is enabled.</p>
</feature>
Conditional Feature Filters: a way to activate flags based on specific advanced conditions
Sometimes, you want to activate features using complex conditions. For example:
activate a feature only for a percentage of requests;
activate a feature only during a specific timespan;
Let’s see how to use the percentage filter.
The first step is to add the related Feature Filter to the FeatureManagement functionality. In our case, we will add the Microsoft.FeatureManagement.FeatureFilters.PercentageFilter.
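(A sketch of that registration.)

builder.Services.AddFeatureManagement()
    .AddFeatureFilter<PercentageFilter>();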
Now we just have to define the related flag in the appsettings file. We can no longer use a simple boolean value; we need a complex object. Let’s configure the ShowPicture flag to use the Percentage filter.
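A sketch of such a configuration, using the Percentage filter with the 60% value discussed below:

"FeatureManagement": {
  "ShowPicture": {
    "EnabledFor": [
      {
        "Name": "Percentage",
        "Parameters": {
          "Value": 60
        }
      }
    ]
  }
}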
Every object within the EnabledFor array is made of two fields: Name, which must match the filter name, and Parameters, a generic object whose value depends on the type of filter.
In this example, we have set "Value": 60. This means that the flag will be active in around 60% of calls. In the remaining 40%, the flag will be off.
Now, I encourage you to toy with this filter:
Apply it to a section or a page.
Run the application.
Refresh the page several times without restarting the application.
You’ll see the component appear and disappear.
Further readings
We learned about setting “simple” configurations in an ASP.NET Core application in a previous article. You should read it to have a better understanding of how we can define configurations.
Here, we focused on the Feature Flags. As we saw, most functionalities come out of the box with ASP.NET Core.
In particular, we learned how to use the <feature> tag on a Razor page. You can read more on the official documentation (even though we already covered almost everything!):
In this article, we learned how to use Feature Flags in an ASP.NET application on Razor pages and API Controllers.
Feature Flags can be tremendously useful when activating or deactivating a feature in some specific cases. For example, you can roll out a functionality in production by activating the related flag. Suppose you find an error in that functionality. In that case, you just have to turn off the flag and investigate locally the cause of the issue.
I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛
Designing visuals that respond to real-time data or user input usually means switching between multiple tools — one for animation, another for logic, and yet another for implementation. This back-and-forth can slow down iteration, make small changes cumbersome, and create a disconnect between design and behavior.
If you’ve spent any time with Rive, you know it’s built to close that gap. It lets you design, animate, and add interaction all in one place — and with features like state machines and data binding, you can make your animations respond directly to variables and user actions.
To demonstrate how we use data binding in Rive, we built a small interactive project — a gold calculator. The task was simple: calculate the price of 5g and 10g gold bars, from 1 to 6 bars, using external data for the current gold price per gram. The gold price can be dynamic, typically coming from market data, but in this case we used a manually set value.
Let’s break down how the calculator is built, step by step, starting with the layout and structure of the file.
1. File Structure
The layout is built for mobile, using a 440×900 px artboard. It’s structured around three layout groups:
Title with gold price per gram
Controls for choosing gold bar amount and weight
Gold bar illustration
The title section includes a text layout made of two text runs: one holds static text like the label, while the other is dynamic and connected to external data using data binding. This allows the gold price to update in real time when the data changes.
In the controls section, we added plus and minus buttons to set the number of gold bars. These are simple layouts with icons inside. Below them, there are two buttons to switch between 5g and 10g options. They’re styled as rounded layouts with text inside.
In the state machine, two timelines define the tab states: one for when the 10g button is active, using a solid black background and white text, and another for 5g, with reversed styles. Switching between these two updates the active tab visually.
The total price section also uses two text runs — one for the currency icon and one for the total value. This value changes based on the selected weight and quantity, and is driven by data binding.
2. Gold Bar Illustration
The illustration is built using a nested artboard with a single vector gold bar. Inside the calculator layout, we duplicated this artboard to show anywhere from 1 to 6 bars depending on the user’s selection.
Since there are two weight options, we made the gold bar resize visually — wider for 10g and narrower for 5g. To do that, we used N-Slices so that the edges stay intact and only the middle stretches. The sliced group sits inside a fixed-size layout, and the artboard is set to Hug its contents, which lets it resize automatically.
We created two timelines to control the bar size: one where the width is 88px for 10g, and another at 74px for 5g. The switch between them is controlled by a number variable called Size-gram gold, where 5g is represented by 0 and 10g by 1, with 1 set as the default value.
In the state machine, we connected this variable to the two timelines (with the 10g timeline set as the default): when it’s set to 0, the layout switches to 5g; when it’s 1, it switches to 10g. This makes the size update based on user selection without any manual switching. To keep the transition smooth, a 150ms animation duration is added.
3. Visualizing 1–6 Gold Bars
To show different quantities of gold bars in the main calculator layout, we created a tiered structure using three stacked layout groups with a vertical gap of -137. Each tier is offset vertically to form a simple pyramid arrangement, with everything positioned in the bottom-left corner of the screen.
The first tier contains three duplicated nested artboards of a single gold bar. Each of these is wrapped in a Hug layout, which allows them to resize correctly based on the weight. The second tier includes two gold bars and an empty layout. This empty layout is used for spacing — it creates a visual shift when we need to display exactly four bars. The top tier has just one gold bar centered.
All three tiers are bottom-centered, which keeps the pyramid shape consistent as bars are added or removed.
To control how many bars are visible, we created 6 timelines in Animate mode — one for each quantity from 1 to 6. To hide or show each gold bar, two techniques are used: adjusting the opacity of the nested artboard (100% to show, 0% to hide) and modifying the layout that wraps it. When a bar is hidden, the layout is set to a fixed width of 0px; when visible, it uses Hug settings to restore its size automatically.
Each timeline has its own combination of these settings depending on which bars should appear. For example, in the timeline with 4 bars, we needed to prevent the fourth bar from jumping to the center of the row. To keep it properly spaced, we assigned a fixed width of 80px to the empty layout used for shifting. On the other timelines, that same layout is hidden by setting its width to 0px.
This system makes it easy to switch between quantities while preserving the visual structure.
4. State Machine and Data Binding Setup
With the visuals and layouts ready, we moved on to setting up the logic with data binding and state transitions.
4.1 External Gold Price
First, we created a number variable called Gold price gram. This value can be updated externally — for example, connected to a trading database — so the calculator always shows the current market price of gold. In our case, we used a static value of 151.75, which can also be updated manually by the user.
To display this in the UI, we bound Text Run 2 in the title layout to this variable. A converter in the Strings tab called “Convert to String Price” is then created and applied to that text run. This converter formats the number correctly for display and will be reused later.
4.2 Gold Bar Size Control
We already had a number variable called Size-gram gold, which controls the weight of the gold bar used in the nested artboard illustration.
In the Listeners panel, two listeners are created. The first is set to target the 5g tab, uses a Pointer Down action, and assigns Size-gram gold = 0. The second targets the 10g tab, also with a Pointer Down action, and assigns Size-gram gold = 1.
Next, two timelines (one for each tab state) are brought into the state machine. The 10g timeline is used as the default state, with transitions added: one from 10g to 5g when Size-gram gold = 0, and one back to 10g when Size-gram gold = 1. Each transition has a duration of 100ms to keep the switching smooth.
4.3 Gold Bar Quantity
Next, we added another number variable, Quantity-gold, to track the number of selected bars. The default value is set to 1. In the Converters under Numeric, two “Calculate” converters are created — one that adds “+1” and one that subtracts “-1”.
In the Listeners panel, the plus button is assigned an action: Quantity-gold = Quantity-gold, using the “+1” converter. This way, clicking the plus button increases the count by 1. The same is done for the minus button, assigning Quantity-gold = Quantity-gold and attaching the “-1” converter. Clicking the minus button decreases the count by 1.
Inside the state machine, six timelines are connected to represent bar counts from 1 to 6. Each transition uses the Quantity-gold value to trigger the correct timeline.
By default, the plus button would keep increasing the value endlessly, but the goal is to limit the max to six bars. On the timeline where six gold bars are active, the plus button is disabled by setting its click area scale to 0 and lowering its opacity to create a “disabled” visual state. On all other timelines, those properties are returned to their active values.
The same logic is applied to the minus button to prevent values lower than one. On the timeline with one bar, the button is disabled, and on all others, it returns to its active state.
Almost there!
4.4 Total Price Logic
For the 5g bar price, we calculated it using this formula:
Total Price = Gold price gram * Quantity-gold * 5
In Converters → Numeric, a Formula converter was created and named Total Price 5g Formula to calculate the total price. In the example, it looked like:
{{View Model Price/Gold price gram}}*{{View Model Price/Quantity-gold}}*5.0
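For example, with the gold price at 151.75 and three bars selected, this evaluates to 151.75 × 3 × 5 = 2,276.25.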
Since we needed to display this number as text, the Total Price number variable was also converted into a string. For that, we used an existing converter called “Convert to String Price.”
To use both converters together, a Group of converters was created and named Total Price 5g Group, which included the Total Price 5g Formula converter followed by the Convert to String Price converter.
Then, the text for the price variable was data bound by adding the Total Price variable in the Property field and selecting Total Price 5g Group in the Convert field.
To handle the 10g case, which is double the price, two options are explored — either creating a new converter that multiplies by 10 or multiplying the existing result by 2.
Eventually, a second text element is added along with a new group of converters specifically for 10g. This includes a new formula:
Total Price = Gold price gram * Quantity-gold * 10
A formula converter and a group with both that formula and the string converter are created and named “Total Price 10g Group.”
Using timelines where the 5g and 10g buttons are in their active states, we adjusted the transparency of the text elements. This way, the total price connected to the 5g converters group is visible when the 5g button is selected, and the price from the 10g converters group appears when the 10g button is selected.
It works perfectly.
After this setup, the Gold price gram variable can be connected to live external data, allowing the gold price in the calculator to reflect the current market value in real time.
Wrapping Up
This gold calculator project is a simple example, but it shows how data binding in Rive can be used to connect visual design with real-time logic — without needing to jump between separate tools or write custom code. By combining state machines, variables, and converters, you can build interfaces that are not only animated but also smart and responsive.
Whether you’re working on a product UI, a prototype, or a standalone interactive graphic, Rive gives you a way to bring together motion and behavior in a single space. If you’re already experimenting with Rive, data binding opens up a whole new layer of possibilities to explore.
In today’s fast-evolving threat landscape, enterprises often focus heavily on external cyberattacks, overlooking one of the most potent and damaging risks: insider threats. Whether it’s a malicious employee, a careless contractor, or a compromised user account, insider threats strike from within the perimeter, making them harder to detect, contain, and mitigate.
As organizations become more hybrid, decentralized, and cloud-driven, moving away from implicit trust is more urgent than ever. Zero Trust Network Access (ZTNA) is emerging as a critical solution, silently transforming how businesses do insider threat mitigation.
Understanding the Insider Threat Landscape
Insider threats are not always malicious. They can stem from:
Disgruntled or rogue employees intentionally leaking data
Well-meaning staff misconfiguring systems or falling for phishing emails
Contractors or third-party vendors with excessive access
Compromised user credentials obtained via social engineering
According to multiple cybersecurity studies, insider incidents now account for over 30% of all breaches, and their average cost rises yearly.
The real challenge? Traditional security models operate on implicit trust. Once inside the network, users often have wide, unchecked access, which creates fertile ground for lateral movement, privilege abuse, and data exfiltration.
ZTNA in Action: Redefining Trust, Access, and Visibility
Zero Trust Network Access challenges the outdated notion of “trust but verify.” Instead, it enforces “never trust, always verify”—even for users already inside the network.
ZTNA provides access based on identity, device posture, role, and context, ensuring that every access request is continuously validated. This approach is a game-changer for insider threat mitigation.
Granular Access Control
ZTNA enforces least privilege access, meaning users only get access to the specific applications or data they need—nothing more. Even if an insider intends to exfiltrate data, their reach is limited.
For example, a finance team member can access their accounting software, but cannot see HR or R&D files, no matter how hard they try.
Micro-Segmentation for Blast Radius Reduction
ZTNA divides the network into isolated micro-segments. This restricts lateral movement, so even if an insider compromises one segment, they cannot hop across systems undetected.
This segmentation acts like watertight compartments in a ship, containing the damage and preventing full-scale breaches.
Device and Risk Posture Awareness
ZTNA solutions assess device health before granting access. Access can be denied or limited if an employee logs in from an outdated or jailbroken device. This becomes crucial when insider risks stem from compromised endpoints.
Continuous Monitoring and Behavioral Analytics
ZTNA enables real-time visibility into who accessed what, from where, and for how long. Any deviation from expected behavior can trigger alerts or require re-authentication. For instance:
A user downloading an unusually high volume of files
Repeated access attempts outside business hours
Use of shadow IT apps or unauthorized tools
With continuous risk scoring and adaptive access, suspicious insider behavior can be curtailed before damage is done.
Real-World Relevance: Insider Threats in Indian Enterprises
As Indian organizations ramp up their digital transformation and cloud adoption, they face new risks tied to employee churn, contractor access, and remote work culture. In addition to the growing compliance pressure from laws like the Digital Personal Data Protection (DPDP) Act, it has become clear that relying on static access controls is no longer an option.
ZTNA’s dynamic, context-aware model perfectly fits this reality, offering a more resilient and regulation-ready access framework.
How Seqrite ZTNA Helps with Insider Threat Mitigation
Seqrite ZTNA is built to offer secure, identity-based access for modern Indian enterprises. It goes beyond authentication to deliver:
Role-based, micro-segmented access to specific apps and data
Granular control policies based on risk level, device posture, and location
Centralized visibility and detailed audit logs for every user action
Seamless experience for users, without the complexity of traditional solutions
Whether you’re securing remote teams, contractors, or sensitive internal workflows, Seqrite ZTNA gives you the tools to limit, monitor, and respond to insider threats—without slowing down productivity.
Final Thoughts
Insider threats aren’t hypothetical—they’re already inside your network. And as organizations become more distributed, the threat surface only widens. Traditional access models offer little defense for insider threat mitigation.
ZTNA isn’t just about external threats; it’s a silent guardian against internal risks. Enforcing continuous validation, granular access, and real-time visibility transforms your weakest points into strongholds.
Sometimes just a minor change can affect performance. Here’s a simple trick: initialize your collections by specifying the initial size!
When you initialize a collection, like a List, you create it with the default size.
Whenever you add an item to a collection, .NET checks that there is enough capacity to hold the new item. If not, it resizes the collection by doubling the inner capacity.
Resizing the collection takes time and memory.
Therefore, when possible, you should initialize the collection with the expected number of items it will contain.
Initialize a List
In the case of a List, you can simply replace new List<T>() with new List<T>(size). By specifying the initial size in the constructor’s parameters, you’ll have a good performance improvement.
Let’s create a benchmark using BenchmarkDotNet and .NET 8.0.100-rc.1.23455.8 (at the time of writing, .NET 8 is still in preview. However, we can get an idea of the average performance).
The benchmark is pretty simple:
[MemoryDiagnoser]
public class CollectionWithSizeInitializationBenchmarks
{
    [Params(100, 1000, 10000, 100000)]
    public int Size;

    [Benchmark]
    public void WithoutInitialization()
    {
        List<int> list = new List<int>();
        for (int i = 0; i < Size; i++)
        {
            list.Add(i);
        }
    }

    [Benchmark(Baseline = true)]
    public void WithInitialization()
    {
        List<int> list = new List<int>(Size);
        for (int i = 0; i < Size; i++)
        {
            list.Add(i);
        }
    }
}
The only difference is in the list initialization: in the WithInitialization, we have List<int> list = new List<int>(Size);.
Have a look at the benchmark result, split by time and memory execution.
Starting with the execution time, we can see that without list initialization, we have an average 1.7x performance degradation.
| Method | Size | Mean | Ratio |
| --- | --- | --- | --- |
| WithoutInitialization | 100 | 299.659 ns | 1.77 |
| WithInitialization | 100 | 169.121 ns | 1.00 |
| WithoutInitialization | 1000 | 1,549.343 ns | 1.58 |
| WithInitialization | 1000 | 944.862 ns | 1.00 |
| WithoutInitialization | 10000 | 16,307.082 ns | 1.80 |
| WithInitialization | 10000 | 9,035.945 ns | 1.00 |
| WithoutInitialization | 100000 | 388,089.153 ns | 1.73 |
| WithInitialization | 100000 | 227,040.318 ns | 1.00 |
If we talk about memory allocation, we allocate on average 2.5x more memory compared to collections initialized with a size.
| Method | Size | Allocated | Alloc Ratio |
| --- | --- | --- | --- |
| WithoutInitialization | 100 | 1184 B | 2.60 |
| WithInitialization | 100 | 456 B | 1.00 |
| WithoutInitialization | 1000 | 8424 B | 2.08 |
| WithInitialization | 1000 | 4056 B | 1.00 |
| WithoutInitialization | 10000 | 131400 B | 3.28 |
| WithInitialization | 10000 | 40056 B | 1.00 |
| WithoutInitialization | 100000 | 1049072 B | 2.62 |
| WithInitialization | 100000 | 400098 B | 1.00 |
Initialize a HashSet
Similar to what we’ve done with Lists, we can see significant improvements when correctly initializing other data types, such as HashSets.
Let’s run the same benchmarks, but this time, let’s initialize a HashSet<int> instead of a List<int>.
The code is pretty similar:
[Benchmark]
public void WithoutInitialization()
{
    var set = new HashSet<int>();
    for (int i = 0; i < Size; i++)
    {
        set.Add(i);
    }
}

[Benchmark(Baseline = true)]
public void WithInitialization()
{
    var set = new HashSet<int>(Size);
    for (int i = 0; i < Size; i++)
    {
        set.Add(i);
    }
}
What can we say about performance improvements?
If we talk about execution time, we can see an average of 2x improvements.
| Method | Size | Mean | Ratio |
| --- | --- | --- | --- |
| WithoutInitialization | 100 | 1,122.2 ns | 2.02 |
| WithInitialization | 100 | 558.4 ns | 1.00 |
| WithoutInitialization | 1000 | 12,215.6 ns | 2.74 |
| WithInitialization | 1000 | 4,478.4 ns | 1.00 |
| WithoutInitialization | 10000 | 148,603.7 ns | 1.90 |
| WithInitialization | 10000 | 78,293.3 ns | 1.00 |
| WithoutInitialization | 100000 | 1,511,011.6 ns | 1.96 |
| WithInitialization | 100000 | 810,657.8 ns | 1.00 |
If we look at memory allocation, when we don’t initialize the HashSet we allocate roughly 3x more memory. Impressive!
| Method | Size | Allocated | Alloc Ratio |
| --- | --- | --- | --- |
| WithoutInitialization | 100 | 5.86 KB | 3.28 |
| WithInitialization | 100 | 1.79 KB | 1.00 |
| WithoutInitialization | 1000 | 57.29 KB | 3.30 |
| WithInitialization | 1000 | 17.35 KB | 1.00 |
| WithoutInitialization | 10000 | 526.03 KB | 3.33 |
| WithInitialization | 10000 | 157.99 KB | 1.00 |
| WithoutInitialization | 100000 | 4717.4 KB | 2.78 |
| WithInitialization | 100000 | 1697.64 KB | 1.00 |
Wrapping up
Do you need other good reasons to initialize your collection capacity when possible? 😉
I used BenchmarkDotNet to create these benchmarks. If you want an introduction to this tool, you can have a look at how I used it to measure the performance of Enums:
Hello Robo is a New York based digital product design agency that turns complex technology into intuitive, usable interfaces. We work with forward-thinking teams to create market-ready digital products that are easy to use and hard to ignore.
Earlier this year, the design team at Hello Robo decided to update our brand and website to speak the language of our current clients — AI, space, aviation, and robotics — after realizing the old, “startup-y” look sold us short.
The new design and copy showcase our ability to tame complex systems with clear thinking and precise interfaces, signaling to deep-tech teams that we understand their world and can make their products make sense.
We wanted our site to do only two things, but do them well:
Have the design language to appeal to our existing and new target clients
Since most of our work can't be shared publicly, let design, motion, and interaction give our visitors a sense of what we are great at.
Research
Before sketching a single screen, our design lead on this project, Daria Krauskopf, did what we do before starting any project at Hello Robo: she talked with our customers. We asked every existing client two questions:
What do you think we do?
What’s one thing you think we’re absolutely great at?
The replies were almost word-for-word:
“You do excellent product design—not crazy, unachievable vision design, and not MVPs either. You’re absolutely great at taking complex, technical systems and turning them into beautiful interfaces that our users actually love to use.”
That became the foundation for how we approached the new site.
Design & Art Direction
We love robots—and robotics inspires everything we do. For the new site, we moved away from soft colors and rounded corners and leaned into a more hi-tech visual language: dark backgrounds, thin lines, sharper shapes. Daria wanted the design to feel more precise, more engineered—something that would resonate with the kind of clients we work with in aviation, robotics, and defense. Every visual choice was about clarity, control, and intention.
A few boards from the new Hello Robo brand, reimagined by our designer Hanna Shpak
Animation and Interaction
All of our interface work is rooted in interaction and motion—because real-world products aren’t static. They always change and respond to user input and actions. We wanted the site to reflect that. Not with flashy effects or distracting transitions, but with just enough subtle animation to guide, respond, and feel alive. Everything moves with purpose—quiet, responsive, and smooth.
Case Studies
We didn’t want our case studies to be just a scroll of pretty images. Each one is built as a story—showing not just what we made, but how it worked and why it mattered. We walk through key features, the thinking behind UX decisions, and the problems we solved for each client. It’s less about showing off visuals, and more about showing how we think.
Final words
In the end, we got what we set out to build: a clearer visual and verbal language that reflects who we are and who we work with. The site feels more aligned with the complexity and ambition of our clients—and with the way we approach design: thoughtful, precise, and grounded in real product work. It’s not trying to impress with noise. It’s built to resonate with the kind of teams who care about clarity, systems, and getting things right.