Tag: How

  • How Readymag’s free layout model drives unconventional web design

    Readymag is a design tool for creating websites on a blank canvas. Grids and templates remain useful, but Readymag also makes room for another approach, one where designers can experiment more freely with composition, storytelling, and visual rhythm. As the web evolves, the free layout model feels increasingly relevant beyond art or experimental work. 

    Between structure and freedom

    Design history often swings between order and freedom. Some seek clarity and repetition, while others chase the chance to break rules for expression and surprise. Web design reflects this tension, shaped from the start by both technical limits and visual experimentation.

    Printing technology once dictated strict, grid-based layouts, later formalized by the Swiss school of graphic design. Early web technologies echoed this logic, making grids the default structure for clarity and usability. Yet many have pushed against it. Avant-garde and postmodern designers experimented with chaotic compositions, and on the web, Flash-era sites turned pages into performances.

    Today, grid and freedom approaches coexist. Tools like Readymag make it possible to borrow from both as needed, sometimes emphasizing structure, sometimes prioritizing expressiveness through typography, imagery, and motion.

    The philosophy and psychology of freedom

    If the grid in design symbolizes order, free layout is its breakaway gesture. Beyond altering page composition, it reflects deeper psychological and philosophical drives: the urge to experiment, assert individuality, and search for new meanings. Printing presses produce flawless, identical letters. A handwritten mark is always unique. Free layout works the same way: it allows designers to create something unique and memorable.

    Working without the grid means inviting randomness, juxtaposing the incompatible, chasing unexpected solutions. Not all experiments yield finished products, but they often shape new languages. In this sense, free layout isn’t chaos for chaos’s sake—it’s a laboratory where future standards are born.

    Freedom also changes the user’s experience. While grids reduce cognitive load, free composition is useful in creating emphasis and rhythm. Psychologists note that attention sharpens when expectations are disrupted. The most engaging designs often draw on both approaches, balancing clarity with moments of surprise.

    How it works in practice

    While the philosophy of free layout may sound abstract, tools make it tangible. Each editor or builder imposes its own logic: some enforce rigid structures, others allow almost unlimited freedom. Comparing them shows how this philosophy plays out in practice.

    Classic digital design tools like Photoshop were built as a blank canvas: the designer chooses whether or not to use a grid. Interface tools like Figma also offer both modes—you can stick to columns and auto-layout, or position elements freely and experiment with composition.

    By contrast, pure web builders follow code logic. They work with containers, sections, and grids. Here the designer acts like an architect, assembling a structure that will display consistently across devices, support responsiveness, and guarantee predictability. Freedom is limited in favor of stability and usability.

    Readymag stands apart. Its philosophy is closer to InDesign than to HTML: a blank canvas where elements can be placed however the designer wishes. The power of this approach is in prioritizing storytelling, impression, and experimentation. 

    Storytelling and creativity

    Free layout gives the author a key tool: to direct attention the way a filmmaker frames a shot. Magazine longreads, promo pages, art projects—all of these rely on narrative. The reader needs to be guided through the story, tension built, emphasis placed. A strict grid often hinders this: it imposes uniform rhythm, equalizes blocks, and drains momentum. Free layout, by contrast, enables visual drama—a headline slicing into a photo, text running diagonally, an illustration spilling past the frame. Reading turns into an experience.

    The best websites of recent years show this in practice. They use deliberately broken grids: elements that float, shift, and create the sense of a living space. The unconventional arrangement itself becomes part of the story. Users don’t just read or look; they walk through the composition. Chaotic typography or abrupt animation goes beyond simple illustration and becomes a metaphor.

    Let’s explore a few examples of how this works in practice (all the websites below were made by Readymag users).

    This multimedia longread on the Nagorno-Karabakh conflict traces its history and recent escalation through text and imagery. The design relies on bold typography, layered photographs, and shifting compositions that alternate between grid-like order and free placement. Scrolling becomes a narrative device: sections unfold with rhythm and contrast, guiding the reader while leaving space for visual tension and moments of surprise. The result is a reading experience that balances structure with expressiveness, reflecting the gravity of the subject through form as well as content.

    On this website, a collection of P.Y.E. sunglasses is presented through an immersive layout. Scrolling triggers rotations, shifts, and lens-like distortions, turning the screen into an expressive, almost performative space. Here, free composition sets the mood and builds a narrative around the product. Yet when it comes to the catalog itself, the design switches back to a clear grid, allowing for easy comparison of models and prices.

    Everything.can.be.scanned collects ordinary objects—tickets, pill packs, toys, scraps—and presents them as digital scans. The interface abandons order: items float in cluttered compositions, and the user is invited to drag them around, building their own arrangements. Texts and playful interactions, like catching disappearing shadows, add layers of exploration. Here, free layout is not just an aesthetic choice but the core mechanic, turning randomness into a way of seeing.

    Hayal & Hakikat recounts the story of Ottoman-era convicts through archival portraits that appear in sequence as the user scrolls. The repetition of images creates a grid-like rhythm, while interruptions like shifts in placement and sudden pauses break the order and add dramatic tension. The balance of structure and disruption mirrors the subject itself, turning the act of looking into part of the narrative.

    The analogy with film and theater is clear. Editing isn’t built from uniform shots: directors speed or slow the rhythm, insert sharp cuts, break continuity for dramatic effect. Theater works the same way—through pauses, sudden light changes, an actor stepping into the audience. On the web, free layout plays that role. It can disrupt the scrolling rhythm, halt attention, force the user to reset expectations. It is a language of emotion rather than information. More than a compositional device, it becomes a narrative tool—shaping story dynamics, heightening drama, setting rhythm. Where the goal is to engage, surprise, and immerse, it often proves stronger than the traditional grid.

    The future

    Today, freeform layout on the web is still often seen as a niche tool used in art projects and experimental media. But as technology evolves, it’s becoming clear that its philosophy can move beyond experimentation and grow into one of the fundamental languages of the future internet.

    A similar shift once happened in print. The transition from letterpress to phototypesetting and then to modern printing technologies expanded what was possible on the page and gave designers more freedom with layouts. The web is going through the same process: early constraints shaped a grid-based logic, but new technologies and tools like Readymag make it much simpler to experiment with custom arrangements when the project calls for it.

    User expectations are also changing. A generation raised on games, TikTok, and memes is attuned not to linear order but to flow, interplay, unpredictability. For them, strict grids may feel corporate, even dull. This suggests that in the future, grid-based and freeform layouts will continue to coexist, each used where it works best, and often together in the same design.




  • How to add a caching layer in .NET 5 with Decorator pattern and Scrutor | Code4IT


    You should not add the caching logic in the same component used for retrieving data from external sources: you’d better use the Decorator Pattern. We’ll see how to use it, what benefits it brings to your application, and how to use Scrutor to add it to your .NET projects.


    When fetching external resources – like performing a GET on some remote APIs – you often need to cache the result. Even a simple caching mechanism can boost the performance of your application: the fewer actual calls to the external system, the faster the response time of the overall application.

    We should not add the caching layer directly to the classes that get the data we want to cache, because it will make our code less extensible and testable. On the contrary, we might want to decorate those classes with a specific caching layer.

    In this article, we will see how we can use the Decorator Pattern to add a cache layer to our repositories (external APIs, database access, or whatever else) by using Scrutor, a NuGet package that allows you to decorate services.

    Before exploring what the Decorator Pattern is and how we can use it to add a cache layer, let me explain the context of our simple application.

    We are exposing an API with only a single endpoint, GetBySlug, which returns some data about the RSS item with the specified slug if present on my blog.

    To do that, we have defined a simple interface:

    public interface IRssFeedReader
    {
        RssItem GetItem(string slug);
    }
    

    That interface is implemented by the RssFeedReader class, which uses the SyndicationFeed class (that comes from the System.ServiceModel.Syndication namespace) to get the correct item from my RSS feed:

    public class RssFeedReader : IRssFeedReader
    {
        public RssItem GetItem(string slug)
        {
            var url = "https://www.code4it.dev/rss.xml";
            using var reader = XmlReader.Create(url);
            var feed = SyndicationFeed.Load(reader);
    
            SyndicationItem item = feed.Items.FirstOrDefault(item => item.Id.EndsWith(slug));
    
            if (item == null)
                return null;
    
            return new RssItem
            {
                Title = item.Title.Text,
                Url = item.Links.First().Uri.AbsoluteUri,
                Source = "RSS feed"
            };
        }
    }
    

    The RssItem class is incredibly simple:

    public class RssItem
    {
        public string Title { get; set; }
        public string Url { get; set; }
        public string Source { get; set; }
    }
    

    Pay attention to the Source property: we’re gonna use it later.

    Then, in the ConfigureServices method, we need to register the service:

    services.AddSingleton<IRssFeedReader, RssFeedReader>();
    

    Singleton, Scoped, or Transient? If you don’t know the difference, here’s an article for you!

    Lastly, our endpoint will use the IRssFeedReader interface to perform the operations, without knowing the actual type:

    public class RssInfoController : ControllerBase
    {
        private readonly IRssFeedReader _rssFeedReader;
    
        public RssInfoController(IRssFeedReader rssFeedReader)
        {
            _rssFeedReader = rssFeedReader;
        }
    
        [HttpGet("{slug}")]
        public ActionResult<RssItem> GetBySlug(string slug)
        {
            var item = _rssFeedReader.GetItem(slug);
    
            if (item != null)
                return Ok(item);
            else
                return NotFound();
        }
    }
    

    When we run the application and try to find an article I published, we retrieve the data directly from the RSS feed (as you can see from the value of Source).

    Retrieving data directly from the RSS feed

    The application is quite easy, right?

    Let’s translate it into a simple diagram:

    Base Class diagram

    The sequence diagram is simple as well – it’s almost obvious!

    Base sequence diagram

    Now it’s time to see what the Decorator pattern is, and how we can apply it to our situation.

    Introducing the Decorator pattern

    The Decorator pattern is a design pattern that allows you to add behavior to a class at runtime, without modifying that class. Since the caller works with interfaces and ignores the type of the concrete class, it’s easy to “trick” it into believing it is using the simple class: all we have to do is to add a new class that implements the expected interface, make it call the original class, and add new functionalities to that.

    Quite confusing, huh?

    To make it easier to understand, I’ll show you a simplified version of the pattern:

    Simplified Decorator pattern Class diagram

    In short, the Client needs to use an IService. Instead of passing a BaseService to it (as usual, via Dependency Injection), we pass the Client an instance of DecoratedService (which implements IService as well). DecoratedService contains a reference to another IService (this time, the actual type is BaseService) and calls it to perform the doSomething operation. But DecoratedService doesn’t merely call IService.doSomething(): it enriches its behavior with new capabilities (like caching, logging, and so on).
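
    Here’s a minimal C# sketch of that diagram (the names mirror the diagram; the concrete behavior is just an illustration):

    public interface IService
    {
        string DoSomething();
    }

    public class BaseService : IService
    {
        public string DoSomething() => "base result";
    }

    public class DecoratedService : IService
    {
        // the wrapped IService (at runtime, a BaseService)
        private readonly IService _inner;

        public DecoratedService(IService inner)
        {
            _inner = inner;
        }

        public string DoSomething()
        {
            // delegate to the wrapped instance, then enrich the result;
            // this is where caching, logging, and so on would live
            var result = _inner.DoSomething();
            return $"decorated: {result}";
        }
    }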

    In this way, our services are focused on a single aspect (Single Responsibility Principle) and can be extended with new functionalities (Open-closed Principle).

    Enough theory! There are plenty of online resources about the Decorator pattern, so now let’s see how the pattern can help us add a cache layer.

    Ah, I forgot to mention that the original pattern defines another object between IService and DecoratedService, but it’s useless for the purpose of this article, so we are fine anyway.

    Implementing the Decorator with Scrutor

    Have you noticed that we almost have all our pieces already in place?

    If we compare the Decorator pattern objects with our application’s classes, we can notice that:

    • Client corresponds to our RssInfoController controller: it’s the one that calls our services
    • IService corresponds to IRssFeedReader: it’s the interface consumed by the Client
    • BaseService corresponds to RssFeedReader: it’s the class that implements the operations from its interface, and that we want to decorate.

    So, we need a class that decorates RssFeedReader. Let’s call it CachedFeedReader: it checks if the searched item has already been processed, and, if not, calls the decorated class to perform the base operation.

    public class CachedFeedReader : IRssFeedReader
    {
        private readonly IRssFeedReader _rssFeedReader;
        private readonly IMemoryCache _memoryCache;
    
        public CachedFeedReader(IRssFeedReader rssFeedReader, IMemoryCache memoryCache)
        {
            _rssFeedReader = rssFeedReader;
            _memoryCache = memoryCache;
        }
    
        public RssItem GetItem(string slug)
        {
            var isFromCache = _memoryCache.TryGetValue(slug, out RssItem item);
            if (!isFromCache)
            {
                item = _rssFeedReader.GetItem(slug);

                // cache the result only when the decorated reader actually found an item,
                // so we never cache (and later dereference) a null value
                if (item != null)
                    _memoryCache.Set(slug, item);
            }
            else
            {
                item.Source = "Cache";
            }

            return item;
        }
    }
    

    There are a few points you have to notice in the previous snippet:

    • this class implements the IRssFeedReader interface;
    • we are passing an instance of IRssFeedReader in the constructor, which is the class that we are decorating;
    • we are performing other operations both before and after calling the base operation (so, calling _rssFeedReader.GetItem(slug));
    • we are setting the value of the Source property to “Cache” if the object comes from the cache – its value is “RSS feed” the first time we retrieve the item.

    Now we have all the parts in place.

    To decorate the RssFeedReader with this new class, you have to install a NuGet package called Scrutor.

    Open your project and install it via UI or using the command line by running dotnet add package Scrutor.

    Now head to the ConfigureServices method and use the Decorate extension method to decorate a specific interface with a new service:

    services.AddSingleton<IRssFeedReader, RssFeedReader>(); // this one was already present
    services.Decorate<IRssFeedReader, CachedFeedReader>(); // add a new decorator to IRssFeedReader
    

    … and that’s it! You don’t have to update any other classes; everything is transparent for the clients.

    If we run the application again, we can see that the first call to the endpoint returns the data from the RSS feed, and all the following calls return data from the cache.

    Retrieving data directly from cache instead of from the RSS feed

    We can now update our class diagram to add the new CachedFeedReader class.

    Decorated RssFeedReader Class diagram

    And, of course, the sequence diagram changed a bit too.

    Decorated RssFeedReader sequence diagram

    Benefits of the Decorator pattern

    Using the Decorator pattern brings many benefits.

    Every component is focused on only one thing: we are separating responsibilities across different components so that every single component does only one thing and does it well. RssFeedReader fetches RSS data, CachedFeedReader defines caching mechanisms.

    Every component is easily testable: we can test our caching strategy by mocking the IRssFeedReader dependency, without worrying about the concrete classes called by RssFeedReader. On the contrary, if we put the caching and RSS-fetching functionalities in the RssFeedReader class itself, we would have a lot of trouble testing our caching strategies, since we cannot mock the XmlReader.Create and SyndicationFeed.Load methods.

    We can easily add new decorators: say that we want to log the duration of every call. Instead of putting the logging in the RssFeedReader class or in the CachedFeedReader class, we can simply create a new class that implements IRssFeedReader and add it to the list of decorators.
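
    As a sketch of what such a decorator could look like (the LoggedFeedReader name and the Stopwatch-based timing are my assumptions, not code from the article; ILogger comes from Microsoft.Extensions.Logging):

    public class LoggedFeedReader : IRssFeedReader
    {
        private readonly IRssFeedReader _rssFeedReader;
        private readonly ILogger<LoggedFeedReader> _logger;

        public LoggedFeedReader(IRssFeedReader rssFeedReader, ILogger<LoggedFeedReader> logger)
        {
            _rssFeedReader = rssFeedReader;
            _logger = logger;
        }

        public RssItem GetItem(string slug)
        {
            // measure how long the decorated call takes
            var stopwatch = System.Diagnostics.Stopwatch.StartNew();
            var item = _rssFeedReader.GetItem(slug);
            stopwatch.Stop();

            _logger.LogInformation("GetItem({Slug}) took {ElapsedMs} ms", slug, stopwatch.ElapsedMilliseconds);
            return item;
        }
    }

    Registering it is just one more line – services.Decorate<IRssFeedReader, LoggedFeedReader>(); – and, since Scrutor wraps whatever is currently registered, adding it after the caching decorator means the logger would also measure the time spent in the cache layer.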

    An example of Decorator outside the programming world? The following video from YouTube, where you can see that each cup (component) has only one responsibility, and can be easily decorated with many other cups.

    https://www.youtube.com/watch?v=T_7aVZZDGNM

    🔗Scrutor project on GitHub

    🔗An Atypical ASP.NET Core 5 Design Patterns Guide | Carl-Hugo Marcotte

    🔗GitHub repository for this article

    Wrapping up

    In this article, we’ve seen that the Decorator pattern allows us to write better code by focusing each component on a single task and by making them easy to compose and extend.

    We’ve done it thanks to Scrutor, a NuGet package that allows you to decorate services with just a simple configuration.

    I hope you liked this article.

    Happy coding! 🐧




  • How to customize fields generation in Visual Studio 2019 | Code4IT


    Every time you ask Visual Studio to generate properties for you, it creates them with a simple, default format. But we can customize them by updating some options on our IDE. Let’s learn how!


    We, as developers, hate repetitive tasks, don’t we? In fact, we often auto-generate code by using our IDE’s capabilities. Yet, sometimes the auto-generated code does not follow our team rules or our personal taste, so we have to rename stuff every single time.

    For instance, say that your golden rule is to have your readonly fields named with a _ prefix: private readonly IService _myService instead of private readonly IService myService. Renaming the fields every time is… boring!

    In this article, you will learn how to customize Visual Studio 2019 to get the most out of the auto-generated code. In particular, we will customize the names of the readonly fields generated when we add a dependency in a class constructor.

    The usual autocomplete

    If you work properly, you make heavy use of Dependency Injection. And, if you do, you will often define dependencies in a class’s constructor.

    Now, let’s have two simple actors: a class, MyService, and an interface, IMyDependency. We want to inject the IMyDependency service into the MyService constructor.

    public MyService(IMyDependency myDependency)
    {
    
    }
    

    To store the reference to IMyDependency somewhere, you usually click on the lightbulb that appears in the left margin or hit CTRL+. (Control and period). This command will prompt you with some actions, like creating and initializing a new field:

    Default field generation without underscore

    This automatic task then creates a private readonly IMyDependency myDependency field and assigns it the dependency defined in the constructor.

    private readonly IMyDependency myDependency;
    
    public MyService(IMyDependency myDependency)
    {
        this.myDependency = myDependency;
    }
    

    Now, let’s say that we want our fields to have an underscore as a prefix: we must manually rename myDependency to _myDependency. Ok, not that big an issue, but we can still save some time by avoiding doing it manually.

    Setting up the right configurations

    To configure how fields are generated automatically, head to Visual Studio and, in the top menu, navigate to Tools and then Options.

    Then, browse to Text Editor > C# > Code Style > Naming

    Navigation path in the Options window

    Here we have all the symbols that we can customize.

    The first thing to do is to create a custom naming style. On the right side of the options panel, click on the “Manage naming styles” button, and then on the “+” button. You will see a form that you can fill with your custom styles; the Sample Identifier field shows you the result of the generated fields.

    In the following picture you can see the result you can obtain if you fill in all the fields: our fields will have a _ prefix, an Svc suffix, the words will be separated by a - symbol, and the name will be uppercase. As a result, the field name will be _EXAMPLE-IDENTIFIERSvc.

    Naming Style window with all the fields filled

    Since we’re only interested in adding a _ prefix and making the text camelCase, well… just add those settings! And don’t forget to specify a style name, like _fieldName.

    Close the form, and add a new Specification to the list: define that the new style must be applied to every Private or Internal Field, and assign it the newly created style (in my case, _fieldName). And… we’re done!

    Specification orders

    Final result

    Now that we have everything in place, we can try adding a dependency to our MyService class:

    Adding field on constructor

    As you can see, now the generated field is named _myDependency instead of myDependency.
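
    The generated code should now look like this (note that the this. qualifier is no longer needed, since the field name no longer clashes with the constructor parameter):

    private readonly IMyDependency _myDependency;

    public MyService(IMyDependency myDependency)
    {
        _myDependency = myDependency;
    }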

    And the same happens when you instantiate a new MyService and then pass a new dependency in the constructor: Visual Studio automatically creates a new constructor with the missing dependency and assigns it to a private field (though, in this case, it is not marked as readonly).

    Adding field from New statement

    Wrapping up

    In this article, we’ve learned how to configure Visual Studio 2019 to create private fields in a custom format, like adding a prefix to the field name.

    In my opinion, knowing the capabilities and possible customizations of your IDE is one of the most underrated skills. We spend most of our time working in an IDE – in my case, Visual Studio – so we should get to know it better to get the best from it and simplify our dev life.

    Are there any other smart customizations that you want to share? Tell us about it in the comment section below!

    So, for now, happy coding!

    🐧




  • How to test HttpClientFactory with Moq


    Mocking IHttpClientFactory is hard, but luckily we can use some advanced features of Moq to write better tests.


    When working on any .NET application, one of the most common things you’ll see is using dependency injection to inject an IHttpClientFactory instance into the constructor of a service. And, of course, you should test that service. To write good unit tests, it is a good practice to mock the dependencies to have full control over their behavior. A well-known library to mock dependencies is Moq; integrating it is pretty simple: if you have to mock a dependency of type IMyService, you can create mocks of it by using Mock<IMyService>.

    But here comes a problem: mocking IHttpClientFactory is not that simple: just using Mock<IHttpClientFactory> is not enough.

    In this article, we will learn how to mock IHttpClientFactory dependencies, how to define the behavior for HTTP calls, and finally, we will deep dive into the advanced features of Moq that allow us to mock that dependency. Let’s go!

    Introducing the issue

    To fully understand the problem, we need a concrete example.

    The following class implements a service with a method that, given an input string, sends it to a remote client using a DELETE HTTP call:

    public class MyExternalService
    {
        private readonly IHttpClientFactory _httpClientFactory;
    
        public MyExternalService(IHttpClientFactory httpClientFactory)
        {
            _httpClientFactory = httpClientFactory;
        }
    
        public async Task DeleteObject(string objectName)
        {
            string path = $"/objects?name={objectName}";
            var client = _httpClientFactory.CreateClient("ext_service");
    
            var httpResponse = await client.DeleteAsync(path);
    
            httpResponse.EnsureSuccessStatusCode();
        }
    }
    

    The key point to notice is that we are injecting an instance of IHttpClientFactory; we are also creating a new HttpClient every time it’s needed by using _httpClientFactory.CreateClient("ext_service").

    As you may know, you should not instantiate new HttpClient objects every time to avoid the risk of socket exhaustion (see links below).

    There is a huge problem with this approach: it’s not easy to test it. You cannot simply mock the IHttpClientFactory dependency, but you have to manually handle the HttpClient and keep track of its internals.

    Of course, we will not use real IHttpClientFactory instances: we don’t want our application to perform real HTTP calls. We need to mock that dependency.

    Think of mocked dependencies as movie stunt doubles: you don’t want your main stars to get hurt while performing action scenes. In the same way, you don’t want your application to perform actual operations when running tests.

    Creating mocks is like using stunt doubles for action scenes

    We will use Moq to test the method and check that the HTTP call is correctly adding the objectName variable in the query string.

    How to create mocks of IHttpClientFactory with Moq

    Let’s begin with the full code for creating a mocked IHttpClientFactory:

    var handlerMock = new Mock<HttpMessageHandler>(MockBehavior.Strict);
    
    HttpResponseMessage result = new HttpResponseMessage();
    
    handlerMock
        .Protected()
        .Setup<Task<HttpResponseMessage>>(
            "SendAsync",
            ItExpr.IsAny<HttpRequestMessage>(),
            ItExpr.IsAny<CancellationToken>()
        )
        .ReturnsAsync(result)
        .Verifiable();
    
    var httpClient = new HttpClient(handlerMock.Object) {
            BaseAddress = new Uri("https://www.code4it.dev/")
        };
    
    var mockHttpClientFactory = new Mock<IHttpClientFactory>();
    
    mockHttpClientFactory.Setup(_ => _.CreateClient("ext_service")).Returns(httpClient);
    
    service = new MyExternalService(mockHttpClientFactory.Object);
    

    A lot of stuff is going on, right?

    Let’s break it down to fully understand what all those statements mean.

    Mocking HttpMessageHandler

    The first instruction we meet is

    var handlerMock = new Mock<HttpMessageHandler>(MockBehavior.Strict);
    

    What does it mean?

    HttpMessageHandler is the fundamental part of every HTTP request in .NET: it performs a SendAsync call to the specified endpoint with all the info defined in a HttpRequestMessage object passed as a parameter.

    Since we are interested in what happens to the HttpMessageHandler, we need to mock it and store the result in a variable.

    Have you noticed that MockBehavior.Strict? This is an optional parameter that makes the mock throw an exception when it doesn’t have a corresponding setup. To try it, remove that argument from the constructor and comment out the handlerMock.Setup() part: when you run the tests, you’ll receive an error of type Moq.MockException.
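
    For reference, creating the mock without that argument falls back to Moq’s default behavior:

    // MockBehavior.Loose is Moq's default: calls without a matching setup
    // silently return default values instead of throwing
    var looseHandlerMock = new Mock<HttpMessageHandler>();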

    Next step: defining the behavior of the mocked HttpMessageHandler

    Defining the behavior of HttpMessageHandler

    Now we have to define what happens when we use the handlerMock object in any HTTP operation:

    HttpResponseMessage result = new HttpResponseMessage();
    
    handlerMock
        .Protected()
        .Setup<Task<HttpResponseMessage>>(
            "SendAsync",
            ItExpr.IsAny<HttpRequestMessage>(),
            ItExpr.IsAny<CancellationToken>()
        )
        .ReturnsAsync(result)
        .Verifiable();
    

    The first thing we meet is that Protected(). Why?

    To fully understand why we need it, and what the next operations mean, we need to have a look at the definition of HttpMessageHandler:

    // Summary: A base type for HTTP message handlers.
    public abstract class HttpMessageHandler : IDisposable
    {
        /// Other stuff here...
    
        // Summary: Send an HTTP request as an asynchronous operation.
        protected internal abstract Task<HttpResponseMessage> SendAsync(
            HttpRequestMessage request,
            CancellationToken cancellationToken);
    }
    

    From this snippet, we can see that we have a method, SendAsync, which accepts an HttpRequestMessage object and a CancellationToken, and which is the one that deals with HTTP requests. But this method is protected. Therefore we need to use Protected() to access the protected methods of the HttpMessageHandler class, and we must set them up by using the method name and the parameters in the Setup method.

    With Protected() you can access protected members

    Two details to notice, then:

    • We specify the method to set up by using its name as a string: “SendAsync”
    • To say that we don’t care about the actual values of the parameters, we use ItExpr instead of It because we are dealing with the setup of a protected member.

    If SendAsync were a public method, we would have done something like this:

    handlerMock
        .Setup(_ => _.SendAsync(
            It.IsAny<HttpRequestMessage>(), It.IsAny<CancellationToken>())
        );
    

    But, since it is a protected method, we need to use the approach listed above.

    Then, we define that the call to SendAsync returns an object of type HttpResponseMessage: here we don’t care about the content of the response, so we can leave it in this way without further customizations.

    Creating HttpClient

    Now that we have defined the behavior of the HttpMessageHandler object, we can pass it to the HttpClient constructor to create a new instance of HttpClient that acts as we need.

    var httpClient = new HttpClient(handlerMock.Object) {
            BaseAddress = new Uri("https://www.code4it.dev/")
        };
    

    Here I’ve set the value of the BaseAddress property to a valid URI to avoid null references when performing the HTTP call. You can even use non-existing URLs: the important thing is that the URL must be well-formed.

    Configuring the IHttpClientFactory instance

    We are finally ready to create the IHttpClientFactory!

    var mockHttpClientFactory = new Mock<IHttpClientFactory>();
    
    mockHttpClientFactory.Setup(_ => _.CreateClient("ext_service")).Returns(httpClient);
    
    var service = new MyExternalService(mockHttpClientFactory.Object);
    

    So, we create the Mock of IHttpClientFactory and define the instance of HttpClient that will be returned when calling CreateClient("ext_service"). Finally, we’re passing the instance of IHttpClientFactory to the constructor of MyExternalService.

    How to verify the calls performed by IHttpClientFactory

    Now, suppose that in our test we’ve performed the operation under test.

    // setup IHttpClientFactory
    await service.DeleteObject("my-name");
    

    How can we check if the HttpClient actually called an endpoint with “my-name” in the query string? As before, let’s look at the whole code, and then let’s analyze every part of it.

    // verify that the query string contains "my-name"
    
    handlerMock.Protected()
     .Verify(
        "SendAsync",
        Times.Exactly(1), // we expected a single external request
        ItExpr.Is<HttpRequestMessage>(req =>
            req.RequestUri.Query.Contains("my-name")// Query string contains my-name
        ),
        ItExpr.IsAny<CancellationToken>()
        );
    

    Accessing the protected instance

    As we’ve already seen, the object that performs the HTTP operation is the HttpMessageHandler, which here we’ve mocked and stored in the handlerMock variable.

    Then we need to verify what happened when calling the SendAsync method, which is a protected method; thus we use Protected to access that member.

    Checking the query string

    The core part of our assertion is this:

    ItExpr.Is<HttpRequestMessage>(req =>
        req.RequestUri.Query.Contains("my-name")// Query string contains my-name
    ),
    

    Again, we are accessing a protected member, so we need to use ItExpr instead of It.

    The Is<HttpRequestMessage> method accepts a function Func<HttpRequestMessage, bool> that we can use to determine if a property of the HttpRequestMessage under test – in our case, we named that variable as req – matches the specified predicate. If so, the test passes.

    Refactoring the code

    Imagine having to repeat that code for every test method in your class – what a mess!

    So we can refactor it: first of all, we can move the HttpMessageHandler mock to the SetUp method:

    [SetUp]
    public void Setup()
    {
        this.handlerMock = new Mock<HttpMessageHandler>(MockBehavior.Strict);
    
        HttpResponseMessage result = new HttpResponseMessage();
    
        this.handlerMock
        .Protected()
        .Setup<Task<HttpResponseMessage>>(
            "SendAsync",
            ItExpr.IsAny<HttpRequestMessage>(),
            ItExpr.IsAny<CancellationToken>()
        )
        .Returns(Task.FromResult(result))
        .Verifiable()
        ;
    
        var httpClient = new HttpClient(handlerMock.Object) {
            BaseAddress = new Uri("https://www.code4it.dev/")
            };
    
        var mockHttpClientFactory = new Mock<IHttpClientFactory>();
    
        mockHttpClientFactory.Setup(_ => _.CreateClient("ext_service")).Returns(httpClient);
    
        this.service = new MyExternalService(mockHttpClientFactory.Object);
    }
    

    and keep a reference to handlerMock and service in some private members.
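
    Those references can be plain private members on the test class (a sketch, assuming NUnit, as the [SetUp] attribute suggests):

    // shared across tests; re-initialized by the SetUp method before each test
    private Mock<HttpMessageHandler> handlerMock;
    private MyExternalService service;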

    Then, we can move the assertion part to a different method, maybe to an extension method:

    public static void Verify(this Mock<HttpMessageHandler> mock, Func<HttpRequestMessage, bool> match)
    {
        mock.Protected().Verify(
            "SendAsync",
            Times.Exactly(1), // we expected a single external request
            ItExpr.Is<HttpRequestMessage>(req => match(req)
            ),
            ItExpr.IsAny<CancellationToken>()
        );
    }
    

    So that our test can be simplified to just a bunch of lines:

    [Test]
    public async Task Method_Should_ReturnSomething_When_Condition()
    {
        //Arrange occurs in the SetUp phase
    
        //Act
        await service.DeleteObject("my-name");
    
        //Assert
        handlerMock.Verify(r => r.RequestUri.Query.Contains("my-name"));
    }
    

    Further readings

    🔗 Example repository | GitHub

    🔗 Why we need HttpClientFactory | Microsoft Docs

    🔗 HttpMessageHandler class | Microsoft Docs

    🔗 Mock objects with static, complex data by using Manifest resources | Code4IT

    🔗 Moq documentation | GitHub

    🔗 How you can create extension methods in C# | Code4IT

    Wrapping up

    In this article, we’ve seen how tricky it can be to test services that rely on IHttpClientFactory instances. Luckily, we can rely on tools like Moq to mock the dependencies and have full control over the behavior of those dependencies.

    Mocking IHttpClientFactory is hard, I know. But here we’ve found a way to overcome those difficulties and make our tests easy to write and to understand.

    There are lots of NuGet packages out there that help us mock that dependency: do you use any of them? What is your favourite, and why?

    Happy coding!

    🐧




  • How to log to Console with .NET Core and Serilog | Code4IT


    Serilog is a famous logger for .NET projects. In this article, we will learn how to integrate it in a .NET API project and output the logs on a Console.


    Having meaningful logs is crucial for any application: without logs, we would not be able to see if errors occur, what’s the status of the application, if there are strange behaviors that should worry us, and so on.

    To define a good logging strategy, we need two parts, equally important: adding logs to our code and analyzing the data produced by our logs.

    In this article, we will see how to add Serilog, a popular logger library, to our .NET projects: we will learn how to configure it to print the logs on a Console.

    Why logging on console

    I can guess what you’re thinking:

    why should we write logs on Console? We should store them somewhere, to analyze them!

    And… you’d be right!

    But still, printing logs on Console can be useful in many ways.

    First of all, by printing on Console you can check that the logging is actually working, and you haven’t missed a configuration.

    Then, writing on Console is great when debugging locally: just spin up your application, run the code you need, and check what happened on the logs; in this way you can understand the internal state of the application, which warnings and errors occurred, and more.

    Lastly, because of an odd strategy that I’ve seen implemented in many projects: print the logs on Console, add an agent that reads them and stores them in memory, and then send all the logs to the destination platform at once; in this way, you’ll perform fewer HTTP requests against those platforms, saving money and avoiding the connection limits of the destination platform.

    Now that we have good reasons to log on Console, well… let’s do it!

    Adding Serilog on Program class

    For this article, we will add Serilog logs to a simple .NET API project.

    Create a new API project – you know, the one with the WeatherForecast controller.

    Then, navigate to the Program class: by default, it should look like this:

    public class Program
    {
        public static void Main(string[] args)
        {
            CreateHostBuilder(args).Build().Run();
        }
    
        public static IHostBuilder CreateHostBuilder(string[] args) =>
            Host.CreateDefaultBuilder(args)
            .ConfigureWebHostDefaults(webBuilder =>
            {
                webBuilder.UseStartup<Startup>();
            });
    }
    

    There are no references to any logger, and, of course, to Serilog.

    So the first thing to do is to install it: via NuGet install Serilog.AspNetCore and Serilog.Extensions.Logging. The first one allows you to add Serilog to an ASP.NET project, while the second one allows you to use the native .NET logger in the code with all the capabilities provided by Serilog.
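
    If you prefer the command line, you can install both packages with the .NET CLI:

    dotnet add package Serilog.AspNetCore
    dotnet add package Serilog.Extensions.Logging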

    Then, we need to add the logger to our project:

    public class Program
    {
        public static void Main(string[] args)
        {
    +        Log.Logger = new LoggerConfiguration()
    +                .CreateLogger();
    
            CreateHostBuilder(args).Build().Run();
        }
    
        public static IHostBuilder CreateHostBuilder(string[] args) =>
            Host.CreateDefaultBuilder(args)
    +        .UseSerilog((hostingContext, loggerConfiguration) =>
    +                    loggerConfiguration.ReadFrom.Configuration(hostingContext.Configuration))
                .ConfigureWebHostDefaults(webBuilder =>
                {
                    webBuilder.UseStartup<Startup>();
                });
    }
    

    There are two snippets to understand:

    Log.Logger = new LoggerConfiguration().CreateLogger();
    

    creates a new logger with the specified configurations (in our case, we use the default values), and then assigns the newly created logger to the globally-shared logger Log.Logger.

    Log.Logger lives in the Serilog namespace, so you have to add it to the using list.
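
    That is, at the top of the Program file:

    using Serilog;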

    Then, we have this second part:

    .UseSerilog((hostingContext, loggerConfiguration) =>
            loggerConfiguration.ReadFrom.Configuration(hostingContext.Configuration)
        )
    

    This snippet defines where to get the Serilog configurations (in this case, from the same place used by the hosting context), and then sets Serilog as the logging provider.

    Inject the logger into constructors

    Since we have bound the Serilog logger to the native .NET one – the one coming from Microsoft.Extensions.Logging – we can use the native logger everywhere in the project.

    Add a dependency to ILogger<T> in your constructor, where T is the name of the class itself:

    public class WeatherForecastController : ControllerBase
    {
    
        private readonly ILogger<WeatherForecastController> _logger;
    
        public WeatherForecastController(ILogger<WeatherForecastController> logger)
        {
            _logger = logger;
        }
    }
    

    Now you can use the different levels of logging and Structured Data (see links below) to add more info:

    _logger.LogInformation("Getting random items. There are {AvailableItems} possible values", Summaries.Count());
    
    _logger.LogWarning("This is a warning");
    
    try
    {
        throw new ArgumentException();
    }
    catch (Exception ex)
    {
        _logger.LogError(ex, "And this is an error");
    }
    

    Update the AppSettings file

    But that’s not enough. We aren’t saying that our logs should be printed on Console. To do that, we must update the appsettings.json file and add some new configurations.

    "Serilog": {
        "Using": [ "Serilog.Sinks.Console" ],
        "MinimumLevel": {
            "Default": "Verbose",
            "Override": {
                "Microsoft": "Warning",
                "Microsoft.AspNetCore": "Warning",
                "System": "Error"
            }
        },
        "WriteTo": [
            {
            "Name": "Async",
            "Args": {
                "configure": [
                {
                    "Name": "Console",
                    "Args": {
                        "formatter": "Serilog.Formatting.Compact.RenderedCompactJsonFormatter, Serilog.Formatting.Compact"
                    }
                }
                ]
            }
            }
        ]
    }
    

    As usual, let’s break it down.

    The first thing to notice is the root of the JSON section: Serilog. This value is the default when defining the configuration values for Serilog (remember the loggerConfiguration.ReadFrom.Configuration(hostingContext.Configuration)? It binds the settings automagically!)

    The Using section defines the types of Sinks that will be used. A Sink is just the destination of the logs. So, just download the Serilog.Sinks.Console NuGet package and add that value to the Using array to use the Console as a Sink.

    Then, we have the MinimumLevel object: it defines the minimum levels of logs that will be taken into consideration. Here the default value is Verbose, but you’ll probably want it to be Warning in your production environment: in this way, all the logs with a level lower than Warning will be ignored.

    Lastly, we have the WriteTo section, which defines the exact configurations of the sinks. Notice the Async value: writing to the Console is a blocking operation, so we wrap the Console sink in the Async sink, which moves the writes to a background worker and avoids slowing down the application. So, after you’ve installed the Serilog.Sinks.Async NuGet package, you must add the Async value to that object. And then you can configure the different Sinks: here I’m adding a simple JSON formatter to the Console Sink.
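
    Again, both sinks can be installed from the command line:

    dotnet add package Serilog.Sinks.Console
    dotnet add package Serilog.Sinks.Async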

    Run the application

    We’re finally ready to run our application.

    Just run it with the usual IIS profile and… nothing happens! Where is the Console??

    With IIS you cannot see any Console, since it simply does not exist – if the application runs as a web application, we don’t need the Console.

    So, you have to change the running profile and select the name of your application (in my case, SerilogLoggingOnConsole).

    Use the correct running profile

    Then you can run the application, navigate to an endpoint, and see the logs!

    Serilog logs as plain text

    But I don’t like how logs are displayed, too many details!

    Let me add a theme: in the AppSettings file, I can add a theme configuration:

    "Args": {
        "configure": [
        {
            "Name": "Console",
            "Args": {
    +        "theme": "Serilog.Sinks.SystemConsole.Themes.AnsiConsoleTheme::Code, Serilog.Sinks.Console",
            "formatter": "Serilog.Formatting.Compact.RenderedCompactJsonFormatter, Serilog.Formatting.Compact"
            }
        }
        ]
    }
    

    This makes Serilog show the logs with a different shape:

    Serilog logs with a simple theme

    So, just by updating the AppSettings file, you can fine-tune the behavior and the output of the logger. In this way, you can have your release process update the AppSettings file and define custom properties for every deployment environment.

    Further reading

    If you want to learn more about the different topics discussed in this article:

    🔗 Serilog Structured Data | Code4IT

    🔗 Serilog Console Sink | GitHub

    🔗 How to integrate Serilog and Seq | Code4IT

    Wrapping up

    In this article, we’ve seen how to integrate Serilog in a .NET application to print the logs on the application Console.

    Time to recap the key points:

    • install the Serilog, Serilog.AspNetCore, and Serilog.Extensions.Logging NuGet packages to integrate the basic functionalities of Serilog
    • download the Serilog.Sinks.Console and Serilog.Sinks.Async NuGet packages to use the Console as a destination of your logs
    • update the Program class to specify that the application must use Serilog
    • use ILogger<T> instead of Serilog.ILogger
    • define the settings in the appsettings.json file instead of directly in the code

    Finally, if you want to see the full example, here’s the GitHub repository used for this article

    Happy coding!

    🐧




  • How to resolve dependencies in .NET APIs based on current HTTP Request


    Did you know that in .NET you can resolve specific dependencies using Factories? We’ll use them to switch between concrete classes based on the current HTTP Request


    Say that you have an interface and that you want to specify its concrete class at runtime using the native Dependency Injection engine provided by .NET.

    For instance, imagine that you have a .NET API project and that the flag that tells the application which dependency to use is set in the HTTP Request.

    Can we do it? Of course, yes – otherwise I wouldn’t be here writing this article 😅 Let’s learn how!

    Why use different dependencies?

    But first: does all of this make sense? Is there any case when you want to inject different services at runtime?

    Let me share with you a story: once I had to create an API project which exposed just a single endpoint: Process(string ID).

    That endpoint read the item with that ID from a DB – an object composed of some data and a few hundred child IDs – and then called an external service to download an XML file for every child ID in the object; every downloaded XML file was saved on the file system of the server where the API was deployed. Finally, a TXT file with the list of the items correctly saved on the file system was generated.

    Quite an easy task: read from DB, call some APIs, store the file, store the report file. Nothing more.

    But, how to run it locally without saving hundreds of files for every HTTP call?

    I decided to add a simple Query Parameter to the HTTP path and let .NET decide whether to use the concrete class or a fake one. Let’s see how.

    Define the services on ConfigureServices

    As you may know, the dependencies are defined in the ConfigureServices method inside the Startup class.

    Here we can define our dependencies. For this example, we have an interface, IFileSystemAccess, which is implemented by two classes: FakeFileSystemAccess and RealFileSystemAccess.

    So, to define those mutable dependencies, you can follow this snippet:

    public void ConfigureServices(IServiceCollection services)
    {
        services.AddControllers();
    
        services.AddHttpContextAccessor();
    
        services.AddTransient<FakeFileSystemAccess>();
        services.AddTransient<RealFileSystemAccess>();
    
        services.AddScoped<IFileSystemAccess>(provider =>
        {
            var context = provider.GetRequiredService<IHttpContextAccessor>();
    
            var useFakeFileSystemAccess = context.HttpContext?.Request?.Query?.ContainsKey("fake-fs") ?? false;
    
            if (useFakeFileSystemAccess)
                return provider.GetRequiredService<FakeFileSystemAccess>();
            else
                return provider.GetRequiredService<RealFileSystemAccess>();
        });
    }
    

    As usual, let’s break it down:

    Inject dependencies using a Factory

    Let’s begin with the king of the article:

    services.AddScoped<IFileSystemAccess>(provider =>
    {
        // ...
    });
    

    We can define our dependencies by using a factory. For instance, now we are using the AddScoped Extension Method (wanna know some interesting facts about Extension Methods?):

    //
    // Summary:
    //     Adds a scoped service of the type specified in TService with a factory specified
    //     in implementationFactory to the specified Microsoft.Extensions.DependencyInjection.IServiceCollection.
    //
    // Parameters:
    //   services:
    //     The Microsoft.Extensions.DependencyInjection.IServiceCollection to add the service
    //     to.
    //
    //   implementationFactory:
    //     The factory that creates the service.
    //
    // Type parameters:
    //   TService:
    //     The type of the service to add.
    //
    // Returns:
    //     A reference to this instance after the operation has completed.
    public static IServiceCollection AddScoped<TService>(this IServiceCollection services, Func<IServiceProvider, TService> implementationFactory) where TService : class;
    

    This Extension Method allows us to get the information about the services already injected in the current IServiceCollection instance and use them to define how to instantiate the actual dependency for the TService – in our case, IFileSystemAccess.

    Why is this a Scoped dependency? As you might remember from a previous article, in .NET we have 3 lifetimes for dependencies: Singleton, Scoped, and Transient. Scoped dependencies are created once per HTTP request: therefore, they are the best choice for this specific example.

    Reading from Query String

    Since we need to read a value from the query string, we need to access the HttpRequest object.

    That’s why we have:

    var context = provider.GetRequiredService<IHttpContextAccessor>();
    var useFakeFileSystemAccess = context.HttpContext?.Request?.Query?.ContainsKey("fake-fs") ?? false;
    

    Here I’m getting the HTTP Context and checking if the fake-fs key is defined. Yes, I know, I’m not checking its actual value: I’m just checking whether the key exists or not.
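
    If you also wanted to validate the value, a small variation of that check (my sketch, not the article’s code) could be:

    var query = context.HttpContext?.Request?.Query;

    // use the fake implementation only when the parameter is explicitly set to "true"
    var useFakeFileSystemAccess =
        query != null && string.Equals(query["fake-fs"], "true", StringComparison.OrdinalIgnoreCase);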

    IHttpContextAccessor is the key part of this snippet: it’s a service that acts as a wrapper around the HttpContext object. You can inject it everywhere in your code, but under one condition: you have to define it in the ConfigureServices method.

    How? Well, that’s simple:

    services.AddHttpContextAccessor();
    

    Injecting the dependencies based on the request

    Finally, we can define which dependency must be injected for the current HTTP Request:

    if (useFakeFileSystemAccess)
        return provider.GetRequiredService<FakeFileSystemAccess>();
    else
        return provider.GetRequiredService<RealFileSystemAccess>();
    

    Remember that we are inside a factory method: this means that, depending on the value of useFakeFileSystemAccess, we are defining the concrete class of IFileSystemAccess.

    GetRequiredService<T> returns the instance of type T injected in the DI engine. This implies that we have to inject the two different services before accessing them. That’s why you see:

    services.AddTransient<FakeFileSystemAccess>();
    services.AddTransient<RealFileSystemAccess>();
    

    Those two lines of code serve two different purposes:

1. they make those services available to the GetRequiredService method;
2. they let the DI engine resolve, in turn, the dependencies that those services themselves require (the full registration is recomposed right below).
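To see all the pieces in context, here is the whole registration recomposed from the snippets in this article:

    public void ConfigureServices(IServiceCollection services)
    {
        services.AddHttpContextAccessor();

        services.AddTransient<FakeFileSystemAccess>();
        services.AddTransient<RealFileSystemAccess>();

        services.AddScoped<IFileSystemAccess>(provider =>
        {
            var context = provider.GetRequiredService<IHttpContextAccessor>();
            var useFakeFileSystemAccess = context.HttpContext?.Request?.Query?.ContainsKey("fake-fs") ?? false;

            if (useFakeFileSystemAccess)
                return provider.GetRequiredService<FakeFileSystemAccess>();
            else
                return provider.GetRequiredService<RealFileSystemAccess>();
        });
    }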

    Running the example

    Now that we have everything in place, it’s time to put it into practice.

    First of all, we need a Controller with the endpoint we will call:

    [ApiController]
    [Route("[controller]")]
    public class StorageController : ControllerBase
    {
        private readonly IFileSystemAccess _fileSystemAccess;
    
        public StorageController(IFileSystemAccess fileSystemAccess)
        {
            _fileSystemAccess = fileSystemAccess;
        }
    
        [HttpPost]
        public async Task<IActionResult> SaveContent([FromBody] FileInfo content)
        {
            string filename = $"file-{Guid.NewGuid()}.txt";
            var saveResult = await _fileSystemAccess.WriteOnFile(filename, content.Content);
            return Ok(saveResult);
        }
    
        public class FileInfo
        {
            public string Content { get; set; }
        }
    }
    

    Nothing fancy: this POST endpoint receives an object with some text, and calls IFileSystemAccess to store the file. Then, it returns the result of the operation.

    Then, we have the interface:

    public interface IFileSystemAccess
    {
        Task<FileSystemSaveResult> WriteOnFile(string fileName, string content);
    }
    
    public class FileSystemSaveResult
    {
        public FileSystemSaveResult(string message)
        {
            Message = message;
        }
    
        public string Message { get; set; }
    }
    

    which is implemented by the two classes:

    public class FakeFileSystemAccess : IFileSystemAccess
    {
        public Task<FileSystemSaveResult> WriteOnFile(string fileName, string content)
        {
            return Task.FromResult(new FileSystemSaveResult("Used mock File System access"));
        }
    }
    

    and

    public class RealFileSystemAccess : IFileSystemAccess
    {
        public async Task<FileSystemSaveResult> WriteOnFile(string fileName, string content)
        {
            await File.WriteAllTextAsync(fileName, content);
            return new FileSystemSaveResult("Used real File System access");
        }
    }
    

As you might have guessed, only RealFileSystemAccess actually writes to the file system. But both classes return an object with a message that tells us which implementation completed the operation.

    Let’s see it in practice:

    First of all, let’s call the endpoint without anything in Query String:

    Without specifying the flag in Query String, we are using the real file system access

    And, then, let’s add the key:

By adding the flag, we are using the mock class, so that we don’t create real files

    As expected, depending on the query string, we can see two different results.
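If you want to reproduce the calls yourself, they might look like this with curl (the port, 5001, is just an assumption about your local setup):

    curl -X POST "https://localhost:5001/Storage" -H "Content-Type: application/json" -d "{\"content\":\"hello\"}"

    curl -X POST "https://localhost:5001/Storage?fake-fs=true" -H "Content-Type: application/json" -d "{\"content\":\"hello\"}"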

    Of course, you can use this strategy not only with values from the Query String, but also from HTTP Headers, cookies, and whatever comes with the HTTP Request.
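For example, a header-based variant of the check inside the factory might look like this (the X-Use-Fake-FS header name is hypothetical):

    // Inside the same factory: switch on a custom HTTP header instead of the query string.
    var useFakeFileSystemAccess = context.HttpContext?.Request?.Headers.ContainsKey("X-Use-Fake-FS") ?? false;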

    Further readings

    If you remember, we’ve defined the dependency to IFileSystemAccess as Scoped. Why? What are the other lifetimes native on .NET?

    🔗 Dependency Injection lifetimes in .NET | Code4IT

    Also, AddScoped is the Extension Method that we used to build our dependencies thanks to a Factory. Here’s an article about some advanced topics about Extension Methods:

    🔗 How you can create Extension Methods in C# | Code4IT

    Finally, the repository for the code used for this article:

    🔗 DependencyInjectionByHttpRequest project | GitHub

    Wrapping up

    In this article, we’ve seen that we can use a Factory to define at runtime which class will be used when resolving a Dependency.

We’ve used a simple check based on the current HTTP request, but of course, there are many other ways to achieve a similar result.

    What would you use instead? Have you ever used a similar approach? And why?

    Happy coding!

    🐧



    Source link

  • Critical SAP Vulnerability & How to Protect Your Enterprise

    Critical SAP Vulnerability & How to Protect Your Enterprise


    Executive Summary

    CVE-2025-31324 is a critical remote code execution (RCE) vulnerability affecting the SAP NetWeaver Development Server, one of the core components used in enterprise environments for application development and integration. The vulnerability stems from improper validation of uploaded model files via the exposed metadatauploader endpoint. By exploiting this weakness, attackers can upload malicious files—typically crafted as application/octet-stream ZIP/JAR payloads—that the server mistakenly processes as trusted content.

    The risk is significant because SAP systems form the backbone of global business operations, handling finance, supply chain, human resources, and customer data. Successful exploitation enables adversaries to gain unauthenticated remote code execution, which can lead to:

    • Persistent foothold in enterprise networks
    • Theft of sensitive business data and intellectual property
    • Disruption of critical SAP-driven processes
    • Lateral movement toward other high-value assets within the organization

    Given the scale at which SAP is deployed across Fortune 500 companies and government institutions, CVE-2025-31324 poses a high-impact threat that defenders must address with urgency and precision.

    Vulnerability Overview

    • CVE ID: CVE-2025-31324
    • Type: Unauthenticated Arbitrary File Upload → Remote Code Execution (RCE)
• CVSS Score: 10.0 (Critical) (based on vector: AV:N/AC:L/PR:N/UI:N/S:C/C:H/I:H/A:H)
    • Criticality: High – full compromise of SAP systems possible
    • Affected Products: SAP NetWeaver Application Server (Development Server module), versions prior to September 2025 patchset
    • Exploitation: Active since March 2025, widely weaponized after August 2025 exploit release
    • Business Impact: Persistent attacker access, data theft, lateral movement, and potential disruption of mission-critical ERP operations

    Threat Landscape & Exploitation

Active exploitation began in March–April 2025, with attackers uploading web shells like helper.jsp, cache.jsp, or randomly named .jsp files to SAP servers. On Linux systems, a stealthy backdoor named Auto-Color was deployed, enabling reverse shells, file manipulation, and evasive operation.

In August 2025, the exploit script was publicly posted by “Scattered LAPSUS$ Hunters – ShinyHunters,” triggering a new wave of widespread automated attacks. The script includes identifiable branding and taunts, which are valuable signals for defenders.

    Technical Details

    Root Cause:
    The ‘metadatauploader’ endpoint fails to sanitize uploaded binary model files. It trusts client-supplied ‘Content-Type: application/octet-stream’ payloads and parses them as valid SAP model metadata.

Trigger: a crafted HTTP POST request to the /developmentserver/metadatauploader endpoint carrying a binary body.

Observed Payloads: begin with PK (the ZIP header), embedding .properties files plus compiled bytecode that triggers code execution when parsed.

    Impact: Arbitrary code execution within SAP NetWeaver server context, often leading to full system compromise.

    Exploitation in the Wild

    March–April 2025: First observed exploitation with JSP web shells.

    August 2025: Public exploit tool released by Scattered LAPSUS$ Hunters – ShinyHunters, fueling mass automated attacks.

Reported Impact: over 1,200 exposed SAP NetWeaver development servers found via Shodan showed exploit attempts. Multiple confirmed intrusions occurred across the manufacturing, retail, and telecom sectors, with data exfiltration and reverse-shell deployment confirmed in at least 8 large enterprises.

    Exploitation

    Attack Chain:
    1. Prepare Payload – Attacker builds a ZIP/JAR containing malicious model definitions or classes.
    2. Deliver Payload – Send crafted HTTP POST to /metadatauploader with application/octet-stream.
    3. Upload Accepted – Server writes/loads the malicious file without validation.
    4. Execution – Code is executed when the model is processed by NetWeaver.

    Indicators in PCAP:
    – POST /developmentserver/metadatauploader requests
    – Content-Type: application/octet-stream with PK-prefixed binary content

    Protection

– Patch: Apply the SAP September 2025 security updates immediately.
– IPS/IDS Detection (a sketch of this matching logic follows this list):
• Match on POST requests to /metadatauploader containing CONTENTTYPE=MODEL.
• Detect binary payloads beginning with PK in the HTTP body.
– EDR/XDR: Monitor SAP processes spawning unexpected child processes (cmd.exe, powershell.exe, etc.).
– Best Practice: Restrict development server exposure to trusted networks only.
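As an illustration only, here is a minimal C# sketch of that matching logic. The HttpRequestInfo model and all names below are invented for the example; real IPS/IDS rules live in the inspection engine, not in application code.

    using System;

    // Simplified request model, invented for this sketch.
    public record HttpRequestInfo(string Method, string Path, string ContentType, byte[] Body);

    public static class Cve202531324Detector
    {
        public static bool LooksSuspicious(HttpRequestInfo request)
        {
            // Indicator 1: POST to the vulnerable endpoint
            bool targetsEndpoint =
                string.Equals(request.Method, "POST", StringComparison.OrdinalIgnoreCase)
                && request.Path.Contains("/developmentserver/metadatauploader", StringComparison.OrdinalIgnoreCase);

            // Indicator 2: client-supplied binary content type
            bool binaryContentType =
                request.ContentType?.StartsWith("application/octet-stream", StringComparison.OrdinalIgnoreCase) == true;

            // Indicator 3: body begins with the ZIP magic bytes "PK" (0x50, 0x4B)
            bool zipPayload =
                request.Body is { Length: >= 2 } body && body[0] == 0x50 && body[1] == 0x4B;

            return targetsEndpoint && binaryContentType && zipPayload;
        }
    }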

    Indicators of Compromise (IoCs)

Artifact (SHA-256 hash) – Details

1f72bd2643995fab4ecf7150b6367fa1b3fab17afd2abed30a98f075e4913087 – Helper.jsp webshell
794cb0a92f51e1387a6b316b8b5ff83d33a51ecf9bf7cc8e88a619ecb64f1dcf – Cache.jsp webshell
0a866f60537e9decc2d32cbdc7e4dcef9c5929b84f1b26b776d9c2a307c7e36e – rrr141.jsp webshell
4d4f6ea7ebdc0fbf237a7e385885d51434fd2e115d6ea62baa218073729f5249 – rrxx1.jsp webshell

    Network:
    – URI: /developmentserver/metadatauploader?CONTENTTYPE=MODEL&CLIENT=1
    – Headers: Content-Type: application/octet-stream
    – Binary body beginning with PK

    Files:
    – Unexpected ZIP/JAR in SAP model directories
– Modified .properties files in upload paths

Processes:
– SAP NetWeaver spawning system binaries

    MITRE ATT&CK Mapping

    – T1190 – Exploit Public-Facing Application
– T1059 – Command and Scripting Interpreter
    – T1105 – Ingress Tool Transfer
    – T1071.001 – Application Layer Protocol: Web Protocols

    Patch Verification

    – Confirm SAP NetWeaver patched to September 2025 release.
    – Test with crafted metadatauploader request – patched servers reject binary payloads.

    Conclusion

    CVE-2025-31324 highlights the risks of insecure upload endpoints in enterprise middleware. A single unvalidated file upload can lead to complete SAP system compromise. Given SAP’s role in core business operations, this vulnerability should be treated as high-priority with immediate patching and network monitoring for exploit attempts.

    References

    – SAP Security Advisory (September 2025) – CVE-2025-31324
    – NVD – https://nvd.nist.gov/vuln/detail/CVE-2025-31324
    – MITRE ATT&CK Framework – https://attack.mitre.org/techniques/T1190/

     

    Quick Heal Protection

All Quick Heal customers are protected from this vulnerability by the following signatures:

    • HTTP/CVE-2025-31324!VS.49935
    • HTTP/CVE-2025-31324!SP.49639

     

    Authors:
    Satyarth Prakash
    Vineet Sarote
    Adrip Mukherjee



    Source link

  • How to parse JSON Lines (JSONL) with C# | Code4IT


JSONL is JSON’s less famous sibling: it allows you to store JSON objects separated by new lines. We will learn how to parse a JSONL string with C#.


    For sure, you already know JSON: it’s one of the most commonly used formats to share data as text.

Did you know that there are different flavors of JSON? One of them is JSONL: it represents a JSON document where the items sit on separate lines instead of being wrapped in an array.

    It’s quite a rare format to find, so it can be tricky to understand how it works and how to parse it. In this article, we will learn how to parse a JSONL file with C#.

    Introducing JSONL

    As explained in the JSON Lines documentation, a JSONL file is a file composed of different items separated by a \n character.

    So, instead of having

    [{ "name": "Davide" }, { "name": "Emma" }]
    

    you have a list of items without an array grouping them.

    { "name" : "Davide" }
    { "name" : "Emma" }
    

I must admit that I’d never heard of that format until a few months ago. Or, even better, I’d already used JSONL files without knowing it: JSONL is a common format for logs, where every entry is appended to the file in a continuous stream.

    Also, JSONL has some characteristics:

    • every item is a valid JSON item
    • every line is separated by a \n character (or by \r\n, but \r is ignored)
    • it is encoded using UTF-8

    So, now, it’s time to parse it!

    Parsing the file

    Say that you’re creating a videogame, and you want to read all the items found by your character:

    class Item {
        public int Id { get; set; }
        public string Name { get; set; }
        public string Category { get; set; }
    }
    

    The items list can be stored in a JSONL file, like this:

    {  "id": 1,  "name": "dynamite",  "category": "weapon" }
    {  "id": 2,  "name": "ham",  "category": "food" }
    {  "id": 3,  "name": "nail",  "category": "tool" }
    

    Now, all we have to do is to read the file and parse it.

    Assuming that we’ve read the content from a file and that we’ve stored it in a string called content, we can use Newtonsoft to parse those lines.

    As usual, let’s see how to parse the file, and then we’ll deep dive into what’s going on. (Note: the following snippet comes from this question on Stack Overflow)

// needs: using Newtonsoft.Json; using System.IO; using System.Collections.Generic;
static List<Item> ParseJsonLines(string content)
{
    List<Item> items = new List<Item>();

    var jsonReader = new JsonTextReader(new StringReader(content))
    {
        SupportMultipleContent = true // This!!!
    };

    var jsonSerializer = new JsonSerializer();
    while (jsonReader.Read())
    {
        Item item = jsonSerializer.Deserialize<Item>(jsonReader);
        items.Add(item);
    }
    return items;
}
    

    Let’s break it down:

    var jsonReader = new JsonTextReader(new StringReader(content))
    {
        SupportMultipleContent = true // This!!!
    };
    

    The first thing to do is to create an instance of JsonTextReader, a class coming from the Newtonsoft.Json namespace. The constructor accepts a TextReader instance or any derived class. So we can use a StringReader instance that represents a stream from a specified string.

The key part of this snippet (and, somehow, of the whole article) is the SupportMultipleContent property: when set to true, it allows the JsonTextReader to keep reading JSON content past the end of the first document, instead of stopping at the first complete object.

    Its definition, in fact, says that:

    //
    // Summary:
    //     Gets or sets a value indicating whether multiple pieces of JSON content can be
    //     read from a continuous stream without erroring.
    //
    // Value:
    //     true to support reading multiple pieces of JSON content; otherwise false. The
    //     default is false.
    public bool SupportMultipleContent { get; set; }
    

    Finally, we can read the content:

    var jsonSerializer = new JsonSerializer();
    while (jsonReader.Read())
    {
        Item item = jsonSerializer.Deserialize<Item>(jsonReader);
        items.Add(item);
    }
    

    Here we create a new JsonSerializer (again, coming from Newtonsoft), and use it to read one item at a time.

    The while (jsonReader.Read()) allows us to read the stream till the end. And, to parse each item found on the stream, we use jsonSerializer.Deserialize<Item>(jsonReader);.

The Deserialize method is smart enough to parse every item even without a , symbol separating them, because we have set SupportMultipleContent to true.

    Once we have the Item object, we can do whatever we want, like adding it to a list.
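To tie it all together, here is a minimal usage sketch of the ParseJsonLines method defined above (the file name items.jsonl is just an example, and the snippet assumes top-level statements or an async method):

    // Read the JSONL content from disk, parse it, and print each item.
    string content = await File.ReadAllTextAsync("items.jsonl");
    List<Item> items = ParseJsonLines(content);

    foreach (var item in items)
        Console.WriteLine($"{item.Id}: {item.Name} ({item.Category})");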

    Further readings

    As we’ve learned, there are different flavors of JSON. You can read an overview of them on Wikipedia.

    🔗 JSON Lines introduction | Wikipedia

Of course, the best place to learn more about a format is its official documentation.

    🔗 JSON Lines documentation | Jsonlines

    This article exists thanks to Imran Qadir Baksh’s question on Stack Overflow, and, of course, to Yuval Itzchakov’s answer.

    🔗 Line delimited JSON serializing and de-serializing | Stack Overflow

    Since we’ve used Newtonsoft (aka: JSON.NET), you might want to have a look at its website.

🔗 SupportMultipleContent property | Newtonsoft

    Finally, the repository used for this article.

    🔗 JsonLinesReader repository | GitHub

    Conclusion

    You might be thinking:

    Why has Davide written an article about a comment on Stack Overflow?? I could have just read the same info there!

    Well, if you were interested only in the main snippet, you would’ve been right!

    But this article exists for two main reasons.

    First, I wanted to highlight that JSON is not always the best choice for everything: it always depends on what we need. For continuous streams of items, JSONL is a good (if not the best) choice. Don’t choose the most used format: choose what best fits your needs!

Second, I wanted to remark that we should not be too attached to a specific library: I generally prefer native tools, so, for reading JSON files, my first choice is System.Text.Json. But it’s not always the best choice. Yes, we could write some workaround (like the second answer on Stack Overflow), but… is it worth it? Sometimes it’s better to use another library, even if just for one specific task. So, you could use System.Text.Json for the whole project, except for the part where you need to read a JSONL file.
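For reference, a naive in-memory workaround with System.Text.Json can be fairly short (the streaming behavior of the linked Stack Overflow answer is what adds complexity); here's a sketch, assuming the whole content fits in memory:

    // needs: using System.Text.Json; using System.Linq; using System.Collections.Generic;
    var options = new JsonSerializerOptions { PropertyNameCaseInsensitive = true };

    List<Item> items = content
        .Split('\n', StringSplitOptions.RemoveEmptyEntries | StringSplitOptions.TrimEntries) // TrimEntries also drops \r
        .Select(line => JsonSerializer.Deserialize<Item>(line, options)!)
        .ToList();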

Have you ever come across some unusual format? How did you deal with it?

    Happy coding!

    🐧



    Source link

• How to run PostgreSQL locally with Docker | Code4IT

    How to run PostgreSQL locally with Docker | Code4IT


    PostgreSQL is a famous relational database. In this article, we will learn how to run it locally using Docker.


PostgreSQL is an open-source relational database with a growing community supporting the project.

There are several ways to host a Postgres database online so that you can use it to store data for your live applications. But, for local development, you might want to spin up a Postgres database on your local machine.

    In this article, we will learn how to run PostgreSQL on a Docker container for local development.

    Pull Postgres Docker Image

    As you may know, Docker allows you to download images of almost everything you want in order to run them locally (or wherever you want) without installing too much stuff.

    The best way to check the available versions is to head to DockerHub and search for postgres.

    Postgres image on DockerHub

    Here you’ll find a description of the image, all the documentation related to the installation parameters, and more.

If you have Docker already installed, just open a terminal and run

docker pull postgres

to download the latest image of PostgreSQL.

    Docker pull result

    Run the Docker Container

    Now that we have the image in our local environment, we can spin up a container and specify some parameters.

    Below, you can see the full command.

docker run \
    --name myPostgresDb \
    -p 5455:5432 \
    -e POSTGRES_USER=postgresUser \
    -e POSTGRES_PASSWORD=postgresPW \
    -e POSTGRES_DB=postgresDB \
    -d \
    postgres
    

    Time to explain each and every part! 🔎

    docker run is the command used to create and run a new container based on an already downloaded image.

    --name myPostgresDb is the name we assign to the container that we are creating.

    -p 5455:5432 is the port mapping. Postgres natively exposes the port 5432, and we have to map that port (that lives within Docker) to a local port. In this case, the local 5455 port maps to Docker’s 5432 port.

    -e POSTGRES_USER=postgresUser, -e POSTGRES_PASSWORD=postgresPW, and -e POSTGRES_DB=postgresDB set some environment variables. Of course, we’re defining the username and password of the admin user, as well as the name of the database.

-d indicates that the container runs in detached mode, meaning it runs as a background process.

    postgres is the name of the image we are using to create the container.

    As a result, you will see the newly created container on the CLI (running docker ps) or view it using some UI tool like Docker Desktop:

    Containers running on Docker Desktop

    If you forgot which environment variables you’ve defined for that container, you can retrieve them using Docker Desktop or by running docker exec myPostgresDb env, as shown below:

    List all environment variables associated to a Container

    Note: environment variables may change with newer image versions. Always refer to the official docs, specifically to the documentation related to the image version you are consuming.

    Now that we have Postgres up and running, we can work with it.

    You can work with the DB using the console, or, if you prefer, using a UI.

    I prefer the second approach (yes, I know, it’s not cool as using the terminal, but it works), so I downloaded pgAdmin.

    There, you can connect to the server by using the environment variable you’ve defined when running docker run. Remember that the hostname is simply localhost.

    Connect to Postgres by using pgAdmin
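If you prefer to check the connection from code, here's a minimal C# sketch using the Npgsql NuGet package (not covered in this article), with the connection string built from the docker run parameters above:

    using Npgsql;

    // Connection string assembled from the parameters we passed to docker run.
    var connectionString = "Host=localhost;Port=5455;Username=postgresUser;Password=postgresPW;Database=postgresDB";

    await using var connection = new NpgsqlConnection(connectionString);
    await connection.OpenAsync();

    await using var command = new NpgsqlCommand("SELECT version();", connection);
    var version = await command.ExecuteScalarAsync();
    Console.WriteLine(version); // prints the PostgreSQL server version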

And we’ve finished! 🥳 Now you can work with a local instance of Postgres and shut it down or remove it when you don’t need it anymore.
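Stopping and removing the container are one-liners with the standard Docker CLI:

docker stop myPostgresDb
docker rm myPostgresDb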

    Additional resources

    I’ve already introduced Docker in another article, where I explained how to run MongoDB locally:

    🔗 First steps with Docker | Code4IT

    As usual, the best resource is the official website:

    🔗 PostgreSQL image | DockerHub

Finally, a special mention to Francesco Ciulla, who taught me how to run Postgres with Docker while I taught him how to query it with C#. Yes, mutual help! 👏

    🔗 Francesco Ciulla’s blog

    Wrapping up

    In this article, we’ve seen how to download and install a PostgreSQL database on our local environment by using Docker.

    It’s just a matter of running a few commands and paying attention to the parameters passed in input.

    In a future article, we will learn how to perform CRUD operations on a PostgreSQL database using C#.

    For now, happy coding!

    🐧



    Source link

  • The First AI-Powered Ransomware & How It Works

    The First AI-Powered Ransomware & How It Works


    Introduction

    AI-powered malware has become quite a trend now. We have always been discussing how threat actors could perform attacks by leveraging AI models, and here we have a PoC demonstrating exactly that. Although it has not yet been observed in active attacks, who knows if it isn’t already being weaponized by threat actors to target organizations?

    We are talking about PromptLock, shared by ESET Research. PromptLock is the first known AI-powered ransomware. It leverages Lua scripts generated from hard-coded prompts to enumerate the local filesystem, inspect target files, exfiltrate selected data, and perform encryption. These Lua scripts are cross-platform compatible, functioning on Windows, Linux, and macOS. For file encryption, PromptLock utilizes the SPECK 128-bit encryption algorithm.

    Ransomware itself is already one of the most dangerous categories of malware. When created using AI, it becomes even more concerning. PromptLock leverages large language models (LLMs) to dynamically generate malicious scripts. These AI-generated Lua scripts drive its malicious activity, making them flexible enough to work across Windows, Linux, and macOS.

    Technical Overview:

    The malware is written in Go (Golang) and communicates with a locally hosted LLM through the Ollama API.

On execution, the malware can be observed making connections to the locally hosted LLM through the Ollama API.

    It identifies whether the infected machine is a personal computer, server, or industrial controller. Based on this classification, PromptLock decides whether to exfiltrate, encrypt, or destroy data.

It is not just a sophisticated sample: the entire LLM prompts are hard-coded within the binary itself. It uses the SPECK 128-bit encryption algorithm in ECB mode.

    The encryption key is stored in the key variable as four 32-bit little-endian words: local key = {key[1], key[2], key[3], key[4]}. This gets dynamically generated as shown in the figure:

    It begins infection by scanning the victim’s filesystem and building an inventory of candidate files, writing the results into scan.log.

It also scans the user’s home directory to identify files containing potentially sensitive or critical information (e.g., PII). The results are stored in target_file_list.log.

    Probably, PromptLock first creates scan.log to record discovered files and then narrows this into target.log, which defines the set to encrypt. Samples also generate files like payloads.txt for metadata or staging. Once targets are set, each file is encrypted in 16-byte chunks using SPECK-128 in ECB mode, overwriting contents with ciphertext.

After encryption, it generates ransom notes dynamically. These notes may include specific details such as a Bitcoin address (1A1zP1eP5QGefi2DMPTfTL5SLmv7DivfNa, which is notably the first Bitcoin address ever created) and a ransom amount. As it is a PoC, no real data is present.

    PromptLock’s CLI and scripts rely on:

    • model=gpt-oss:20b
    • com/PeerDB-io/gluabit32
    • com/yuin/gopher-lua
    • com/gopher-lfs

It also prints several required keys in lowercase (formatted as “key: value”), including:

• os
• username
• home
• hostname
• temp
• sep
• cwd

    Implementation guidance:

– environment variables:
username: os.getenv("USERNAME") or os.getenv("USER")
home: os.getenv("USERPROFILE") or os.getenv("HOME")
hostname: os.getenv("COMPUTERNAME") or os.getenv("HOSTNAME") or io.popen("hostname"):read("*l")
temp: os.getenv("TMPDIR") or os.getenv("TEMP") or os.getenv("TMP") or "/tmp"
sep: detect from package.path (if it contains "\" then "\" else "/"), default to "/"

– os: detect from environment and path separator:
* if os.getenv("OS") == "Windows_NT" then "windows"
* elseif sep == "\" then "windows"
* elseif os.getenv("OSTYPE") then use that value
* else "unix"

– cwd: use io.popen("pwd"):read("*l") or io.popen("cd"):read("*l") depending on OS

    Conclusion:

It’s high time the industry started taking such malware cases seriously. If we want to beat AI-powered malware, we will have to incorporate AI-powered solutions. In the last few months, we have observed a tremendous rise in such cases; although these are PoCs, they are good enough to be leveraged in actual attacks. This clearly signals that defensive strategies must evolve at the same pace as offensive innovations.

How Does SEQRITE Protect Its Customers?

SEQRITE customers are protected from this threat by the following detection signatures:

• PromptLock
• PromptLock.49912.GC

    IOCs:

    • ed229f3442f2d45f6fdd4f3a4c552c1c
    • 2fdffdf0b099cc195316a85636e9636d
    • 1854a4427eef0f74d16ad555617775ff
    • 806f552041f211a35e434112a0165568
    • 74eb831b26a21d954261658c72145128
    • ac377e26c24f50b4d9aaa933d788c18c
• f7cf07f2bf07cfc054ac909d8ae6223d

     

    Authors:

    Shrutirupa Banerjee
    Rayapati Lakshmi Prasanna Sai
    Pranav Pravin Hondrao
    Subhajeet Singha
    Kartikkumar Ishvarbhai Jivani
    Aravind Raj
    Rahul Kumar Mishra

     

     



    Source link