
  • How to test HttpClientFactory with Moq


    Mocking IHttpClientFactory is hard, but luckily we can use some advanced features of Moq to write better tests.

    When working on any .NET application, one of the most common things you’ll see is using dependency injection to inject an IHttpClientFactory instance into the constructor of a service. And, of course, you should test that service. To write good unit tests, it is a good practice to mock the dependencies to have full control over their behavior. A well-known library to mock dependencies is Moq; integrating it is pretty simple: if you have to mock a dependency of type IMyService, you can create mocks of it by using Mock<IMyService>.

    But here comes a problem: mocking IHttpClientFactory is not that simple: just using Mock<IHttpClientFactory> is not enough.

    In this article, we will learn how to mock IHttpClientFactory dependencies, how to define the behavior for HTTP calls, and finally, we will deep dive into the advanced features of Moq that allow us to mock that dependency. Let’s go!

    Introducing the issue

    To fully understand the problem, we need a concrete example.

    The following class implements a service with a method that, given an input string, sends it to a remote client using a DELETE HTTP call:

    public class MyExternalService
    {
        private readonly IHttpClientFactory _httpClientFactory;
    
        public MyExternalService(IHttpClientFactory httpClientFactory)
        {
            _httpClientFactory = httpClientFactory;
        }
    
        public async Task DeleteObject(string objectName)
        {
            string path = $"/objects?name={objectName}";
            var client = _httpClientFactory.CreateClient("ext_service");
    
            var httpResponse = await client.DeleteAsync(path);
    
            httpResponse.EnsureSuccessStatusCode();
        }
    }
    

    The key point to notice is that we are injecting an instance of IHttpClientFactory; we are also creating a new HttpClient every time it’s needed by using _httpClientFactory.CreateClient("ext_service").

    As you may know, you should not instantiate new HttpClient objects every time to avoid the risk of socket exhaustion (see links below).
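
    For reference, a named client like “ext_service” is normally registered once in the DI container via AddHttpClient. Here is just a minimal sketch of such a registration (the base address is a placeholder, not something defined in this article):

    public void ConfigureServices(IServiceCollection services)
    {
        // register the named client that CreateClient("ext_service") will return
        services.AddHttpClient("ext_service", client =>
        {
            // hypothetical base address: the real value depends on the external service
            client.BaseAddress = new Uri("https://external-service.example.com/");
        });
    }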

    There is a huge problem with this approach: it’s not easy to test it. You cannot simply mock the IHttpClientFactory dependency, but you have to manually handle the HttpClient and keep track of its internals.

    Of course, we will not use real IHttpClientFactory instances: we don’t want our application to perform real HTTP calls. We need to mock that dependency.

    Think of mocked dependencies as movie stunt doubles: you don’t want your main stars to get hurt while performing action scenes. In the same way, you don’t want your application to perform actual operations when running tests.

    Creating mocks is like using stunt doubles for action scenes

    We will use Moq to test the method and check that the HTTP call is correctly adding the objectName variable in the query string.

    How to create mocks of IHttpClientFactory with Moq

    Let’s begin with the full code for creating a mocked IHttpClientFactory:

    var handlerMock = new Mock<HttpMessageHandler>(MockBehavior.Strict);
    
    HttpResponseMessage result = new HttpResponseMessage();
    
    handlerMock
        .Protected()
        .Setup<Task<HttpResponseMessage>>(
            "SendAsync",
            ItExpr.IsAny<HttpRequestMessage>(),
            ItExpr.IsAny<CancellationToken>()
        )
        .ReturnsAsync(result)
        .Verifiable();
    
    var httpClient = new HttpClient(handlerMock.Object) {
            BaseAddress = new Uri("https://www.code4it.dev/")
        };
    
    var mockHttpClientFactory = new Mock<IHttpClientFactory>();
    
    mockHttpClientFactory.Setup(_ => _.CreateClient("ext_service")).Returns(httpClient);
    
    service = new MyExternalService(mockHttpClientFactory.Object);
    

    A lot of stuff is going on, right?

    Let’s break it down to fully understand what all those statements mean.

    Mocking HttpMessageHandler

    The first instruction we meet is

    var handlerMock = new Mock<HttpMessageHandler>(MockBehavior.Strict);
    

    What does it mean?

    HttpMessageHandler is the fundamental part of every HTTP request in .NET: it performs a SendAsync call to the specified endpoint with all the info defined in a HttpRequestMessage object passed as a parameter.

    Since we are interested in what happens to the HttpMessageHandler, we need to mock it and store the result in a variable.

    Have you noticed that MockBehavior.Strict? This is an optional parameter that makes the mock throw an exception when it doesn’t have a corresponding setup. To try it, remove that argument from the constructor and comment out the handlerMock.Setup() part: when you run the tests, you’ll receive an error of type Moq.MockException.

    Next step: defining the behavior of the mocked HttpMessageHandler

    Defining the behavior of HttpMessageHandler

    Now we have to define what happens when we use the handlerMock object in any HTTP operation:

    HttpResponseMessage result = new HttpResponseMessage();
    
    handlerMock
        .Protected()
        .Setup<Task<HttpResponseMessage>>(
            "SendAsync",
            ItExpr.IsAny<HttpRequestMessage>(),
            ItExpr.IsAny<CancellationToken>()
        )
        .ReturnsAsync(result)
        .Verifiable();
    

    The first thing we meet is that Protected(). Why?

    To fully understand why we need it, and what is the meaning of the next operations, we need to have a look at the definition of HttpMessageHandler:

    // Summary: A base type for HTTP message handlers.
    public abstract class HttpMessageHandler : IDisposable
    {
        /// Other stuff here...
    
        // Summary: Send an HTTP request as an asynchronous operation.
        protected internal abstract Task<HttpResponseMessage> SendAsync(
            HttpRequestMessage request,
            CancellationToken cancellationToken);
    }
    

    From this snippet, we can see that we have a method, SendAsync, which accepts an HttpRequestMessage object and a CancellationToken, and which is the one that deals with HTTP requests. But this method is protected. Therefore we need to use Protected() to access the protected methods of the HttpMessageHandler class, and we must set them up by using the method name and the parameters in the Setup method.

    With Protected() you can access protected members

    Two details to notice, then:

    • We specify the method to set up by using its name as a string: “SendAsync”
    • To say that we don’t care about the actual values of the parameters, we use ItExpr instead of It because we are dealing with the setup of a protected member.

    If SendAsync were a public method, we could have done something like this:

    handlerMock
        .Setup(_ => _.SendAsync(
            It.IsAny<HttpRequestMessage>(), It.IsAny<CancellationToken>())
        );
    

    But, since it is a protected method, we need to use the Protected() approach shown above.

    Then, we define that the call to SendAsync returns an object of type HttpResponseMessage: here we don’t care about the content of the response, so we can leave it in this way without further customizations.
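
    If a test ever needed a specific status code or body, the response returned by the mocked handler could be customized – a quick sketch, with a made-up JSON payload:

    HttpResponseMessage result = new HttpResponseMessage(HttpStatusCode.NotFound)
    {
        // hypothetical body: only needed if the code under test reads the content
        Content = new StringContent("{\"error\":\"object not found\"}", Encoding.UTF8, "application/json")
    };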

    Creating HttpClient

    Now that we have defined the behavior of the HttpMessageHandler object, we can pass it to the HttpClient constructor to create a new instance of HttpClient that acts as we need.

    var httpClient = new HttpClient(handlerMock.Object) {
            BaseAddress = new Uri("https://www.code4it.dev/")
        };
    

    Here I’ve set the value of the BaseAddress property to a valid URI to avoid null references when performing the HTTP call. You can even use non-existent URLs: the important thing is that the URL is well-formed.

    Configuring the IHttpClientFactory instance

    We are finally ready to create the IHttpClientFactory!

    var mockHttpClientFactory = new Mock<IHttpClientFactory>();
    
    mockHttpClientFactory.Setup(_ => _.CreateClient("ext_service")).Returns(httpClient);
    
    var service = new MyExternalService(mockHttpClientFactory.Object);
    

    So, we create the Mock of IHttpClientFactory and define the instance of HttpClient that will be returned when calling CreateClient("ext_service"). Finally, we’re passing the instance of IHttpClientFactory to the constructor of MyExternalService.

    How to verify the calls performed by IHttpClientFactory

    Now, suppose that in our test we’ve performed the operation under test.

    // setup IHttpClientFactory
    await service.DeleteObject("my-name");
    

    How can we check if the HttpClient actually called an endpoint with “my-name” in the query string? As before, let’s look at the whole code, and then let’s analyze every part of it.

    // verify that the query string contains "my-name"
    
    handlerMock.Protected()
     .Verify(
        "SendAsync",
        Times.Exactly(1), // we expected a single external request
        ItExpr.Is<HttpRequestMessage>(req =>
            req.RequestUri.Query.Contains("my-name")// Query string contains my-name
        ),
        ItExpr.IsAny<CancellationToken>()
        );
    

    Accessing the protected instance

    As we’ve already seen, the object that performs the HTTP operation is the HttpMessageHandler, which here we’ve mocked and stored in the handlerMock variable.

    Then we need to verify what happened when calling the SendAsync method, which is a protected method; thus we use Protected to access that member.

    Checking the query string

    The core part of our assertion is this:

    ItExpr.Is<HttpRequestMessage>(req =>
        req.RequestUri.Query.Contains("my-name")// Query string contains my-name
    ),
    

    Again, we are accessing a protected member, so we need to use ItExpr instead of It.

    The Is<HttpRequestMessage> method accepts a function Func<HttpRequestMessage, bool> that we can use to determine if a property of the HttpRequestMessage under test – in our case, we named that variable as req – matches the specified predicate. If so, the test passes.
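
    The predicate can be as strict as you need. For example, here’s a sketch of a verification that also checks the HTTP method and the path (the values reflect the DeleteObject example above):

    handlerMock.Protected()
     .Verify(
        "SendAsync",
        Times.Exactly(1),
        ItExpr.Is<HttpRequestMessage>(req =>
            req.Method == HttpMethod.Delete                // it must be a DELETE call
            && req.RequestUri.AbsolutePath == "/objects"   // on the expected path
            && req.RequestUri.Query.Contains("my-name")    // with the name in the query string
        ),
        ItExpr.IsAny<CancellationToken>()
        );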

    Refactoring the code

    Imagine having to repeat that code for every test method in your class – what a mess!

    So we can refactor it: first of all, we can move the HttpMessageHandler mock to the SetUp method:

    [SetUp]
    public void Setup()
    {
        this.handlerMock = new Mock<HttpMessageHandler>(MockBehavior.Strict);
    
        HttpResponseMessage result = new HttpResponseMessage();
    
        this.handlerMock
        .Protected()
        .Setup<Task<HttpResponseMessage>>(
            "SendAsync",
            ItExpr.IsAny<HttpRequestMessage>(),
            ItExpr.IsAny<CancellationToken>()
        )
        .Returns(Task.FromResult(result))
    .Verifiable();
    
        var httpClient = new HttpClient(handlerMock.Object) {
            BaseAddress = new Uri("https://www.code4it.dev/")
            };
    
        var mockHttpClientFactory = new Mock<IHttpClientFactory>();
    
        mockHttpClientFactory.Setup(_ => _.CreateClient("ext_service")).Returns(httpClient);
    
        this.service = new MyExternalService(mockHttpClientFactory.Object);
    }
    

    and keep a reference to handlerMock and service in some private members.

    Then, we can move the assertion part to a different method, maybe to an extension method:

    public static void Verify(this Mock<HttpMessageHandler> mock, Func<HttpRequestMessage, bool> match)
    {
        mock.Protected().Verify(
            "SendAsync",
            Times.Exactly(1), // we expected a single external request
        ItExpr.Is<HttpRequestMessage>(req => match(req)),
            ItExpr.IsAny<CancellationToken>()
        );
    }
    

    So that our test can be simplified to just a bunch of lines:

    [Test]
    public async Task Method_Should_ReturnSomething_When_Condition()
    {
        //Arrange occurs in the SetUp phase
    
        //Act
        await service.DeleteObject("my-name");
    
        //Assert
        handlerMock.Verify(r => r.RequestUri.Query.Contains("my-name"));
    }
    

    Further readings

    🔗 Example repository | GitHub

    🔗 Why we need HttpClientFactory | Microsoft Docs

    🔗 HttpMessageHandler class | Microsoft Docs

    🔗 Mock objects with static, complex data by using Manifest resources | Code4IT

    🔗 Moq documentation | GitHub

    🔗 How you can create extension methods in C# | Code4IT

    Wrapping up

    In this article, we’ve seen how tricky it can be to test services that rely on IHttpClientFactory instances. Luckily, we can rely on tools like Moq to mock the dependencies and have full control over the behavior of those dependencies.

    Mocking IHttpClientFactory is hard, I know. But here we’ve found a way to overcome those difficulties and make our tests easy to write and to understand.

    There are lots of NuGet packages out there that help us mock that dependency: do you use any of them? What is your favourite, and why?

    Happy coding!

    🐧

  • use the same name for the same concept | Code4IT


    As I always say, naming things is hard. We’ve already talked about this in a previous article.

    By creating a simple and coherent dictionary, your classes will have better names because you are representing the same idea with the same word. This improves code readability and searchability. Also, by simply looking at the names of your classes, you can grasp their meaning.

    Say that we have 3 objects that perform similar operations: they download some content from external sources.

    class YouTubeDownloader {    }
    
    class TwitterDownloadManager {    }
    
    class FacebookDownloadHandler {    }
    

    Here we are using three different words for the same concept: Downloader, DownloadManager, DownloadHandler. Why??

    So, if you want to find all the similar classes, you can’t even search for “Downloader” in your IDE.

    The solution? Use the same name to indicate the same concept!

    class YouTubeDownloader {    }
    
    class TwitterDownloader {    }
    
    class FacebookDownloader {    }
    

    It’s as simple as that! Just a small change can drastically improve the readability and usability of your code!

    So, consider also this kind of small issue when reviewing PRs.

    Conclusion

    A common dictionary helps to understand the code without misunderstandings. Of course, this tip does not apply only to class names, but to variables too. Avoid using synonyms for the same object (e.g., video and clip). Instead of synonyms, use more specific names (YouTubeVideo instead of Video).
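
    A tiny, hypothetical example of that last tip:

    // instead of mixing synonyms for the same concept...
    class Video { }
    class Clip { }

    // ...pick one concept and make the names specific
    class YouTubeVideo { }
    class VimeoVideo { }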

    Any other ideas?

    👉 Let’s discuss it on Twitter or on the comment section below!

    🐧

  • How to log to Console with .NET Core and Serilog | Code4IT


    Serilog is a famous logger for .NET projects. In this article, we will learn how to integrate it in a .NET API project and output the logs on a Console.

    Having meaningful logs is crucial for any application: without logs, we would not be able to see if errors occur, what’s the status of the application, if there are strange behaviors that should worry us, and so on.

    To define a good logging strategy, we need two parts, equally important: adding logs to our code and analyzing the data produced by our logs.

    In this article, we will see how to add Serilog, a popular logger library, to our .NET projects: we will learn how to configure it to print the logs on a Console.

    Why logging on console

    I can guess what you’re thinking:

    why should we write logs on Console? We should store them somewhere, to analyze them!

    And… you’d be right!

    But still, printing logs on Console can be useful in many ways.

    First of all, by printing on Console you can check that the logging is actually working, and you haven’t missed a configuration.

    Then, writing on Console is great when debugging locally: just spin up your application, run the code you need, and check what happened on the logs; in this way you can understand the internal state of the application, which warnings and errors occurred, and more.

    Lastly, because of an odd strategy that I’ve seen implemented in many projects: print the logs on Console, add an agent that reads them and stores them in memory, and then send all the logs to the destination platform at once; in this way, you’ll perform fewer HTTP requests against those platforms, saving money and avoiding hitting the connection limits of the destination platform.

    Now that we have good reasons to log on Console, well… let’s do it!

    Adding Serilog on Program class

    For this article, we will add Serilog logs to a simple .NET API project.

    Create a new API project – you know, the one with the WeatherForecast controller.

    Then, navigate to the Program class: by default, it should look like this:

    public class Program
    {
        public static void Main(string[] args)
        {
            CreateHostBuilder(args).Build().Run();
        }
    
        public static IHostBuilder CreateHostBuilder(string[] args) =>
            Host.CreateDefaultBuilder(args)
            .ConfigureWebHostDefaults(webBuilder =>
            {
                webBuilder.UseStartup<Startup>();
            });
    }
    

    There are no references to any logger, and, of course, to Serilog.

    So the first thing to do is to install it: via NuGet install Serilog.AspNetCore and Serilog.Extensions.Logging. The first one allows you to add Serilog to an ASP.NET project, while the second one allows you to use the native .NET logger in the code with all the capabilities provided by Serilog.

    Then, we need to add the logger to our project:

    public class Program
    {
        public static void Main(string[] args)
        {
    +        Log.Logger = new LoggerConfiguration()
    +                .CreateLogger();
    
            CreateHostBuilder(args).Build().Run();
        }
    
        public static IHostBuilder CreateHostBuilder(string[] args) =>
            Host.CreateDefaultBuilder(args)
    +        .UseSerilog((hostingContext, loggerConfiguration) =>
    +                    loggerConfiguration.ReadFrom.Configuration(hostingContext.Configuration))
                .ConfigureWebHostDefaults(webBuilder =>
                {
                    webBuilder.UseStartup<Startup>();
                });
    }
    

    There are two snippets to understand:

    Log.Logger = new LoggerConfiguration().CreateLogger();
    

    creates a new logger with the specified configurations (in our case, we use the default values), and then assigns the newly created logger to the globally-shared logger Log.Logger.

    Log.Logger lives in the Serilog namespace, so you have to add it to the using list.
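
    Just to give you an idea of what that configuration can contain, here is a sketch of a purely code-based setup (it assumes the Serilog.Sinks.Console package, which we will meet later in this article; here, instead, we will read the configuration from the settings file):

    Log.Logger = new LoggerConfiguration()
        .MinimumLevel.Debug()   // accept logs from Debug level upwards
        .WriteTo.Console()      // send them to the Console sink
        .CreateLogger();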

    Then, we have this second part:

    .UseSerilog((hostingContext, loggerConfiguration) =>
            loggerConfiguration.ReadFrom.Configuration(hostingContext.Configuration)
        )
    

    This snippet defines where to get the Serilog configurations (in this case, from the same place used by the hosting context), and then sets Serilog as the logging provider.

    Inject the logger into constructors

    Since we have bound the Serilog logger to the one native on .NET – the one coming from Microsoft.Extensions.Logging – we can use the native logger everywhere in the project.

    Add a dependency to ILogger<T> in your constructor, where T is the name of the class itself:

    public class WeatherForecastController : ControllerBase
    {
    
        private readonly ILogger<WeatherForecastController> _logger;
    
        public WeatherForecastController(ILogger<WeatherForecastController> logger)
        {
            _logger = logger;
        }
    }
    

    So that you can use the different levels of logging and the Structured Data (see links below) to add more info:

    _logger.LogInformation("Getting random items. There are {AvailableItems} possible values", Summaries.Count());
    
    _logger.LogWarning("This is a warning");
    
    try
    {
        throw new ArgumentException();
    }
    catch (Exception ex)
    {
        _logger.LogError(ex, "And this is an error");
    }
    

    Update the AppSettings file

    But that’s not enough: we still haven’t told Serilog that our logs should be printed on Console. To do that, we must update the appsettings.json file and add some new configurations.

    "Serilog": {
        "Using": [ "Serilog.Sinks.Console" ],
        "MinimumLevel": {
            "Default": "Verbose",
            "Override": {
                "Microsoft": "Warning",
                "Microsoft.AspNetCore": "Warning",
                "System": "Error"
            }
        },
        "WriteTo": [
            {
            "Name": "Async",
            "Args": {
                "configure": [
                {
                    "Name": "Console",
                    "Args": {
                        "formatter": "Serilog.Formatting.Compact.RenderedCompactJsonFormatter, Serilog.Formatting.Compact"
                    }
                }
                ]
            }
            }
        ]
    }
    

    As usual, let’s break it down.

    The first thing to notice is the root of the JSON section: Serilog. This value is the default when defining the configuration values for Serilog (remember the loggerConfiguration.ReadFrom.Configuration(hostingContext.Configuration)? It binds the settings automagically!)

    The Using section defines the types of Sinks that will be used. A Sink is just the destination of the logs. So, just download the Serilog.Sinks.Console NuGet package and add that value to the Using array to use the Console as a Sink.

    Then, we have the MinimumLevel object: it defines the minimum levels of logs that will be taken into consideration. Here the default value is Verbose, but you’ll probably want it to be Warning in your production environment: in this way, all the logs with a level lower than Warning will be ignored.

    Lastly, we have the WriteTo section, which defines the exact configurations of the sinks. Notice the Async value: it wraps the inner sinks so that log writes happen on a background worker instead of blocking the application. So, after you’ve installed the Serilog.Sinks.Async NuGet package, you must add the Async value to that object. And then you can configure the different Sinks: here I’m adding a compact JSON formatter to the Console Sink.

    Run the application

    We’re finally ready to run our application.

    Just run it with the usual IIS profile and… nothing happens! Where is the Console??

    With IIS you cannot see any Console, since it simply does not exist – if the application runs as a web application, we don’t need the Console.

    So, you have to change the running profile and select the name of your application (in my case, SerilogLoggingOnConsole).

    Use the correct running profile

    Then you can run the application, navigate to an endpoint, and see the logs!

    Serilog logs as plain text

    But I don’t like how logs are displayed, too many details!

    Let me add a theme: in the AppSettings file, I can add a theme configuration:

    "Args": {
        "configure": [
        {
            "Name": "Console",
            "Args": {
    +        "theme": "Serilog.Sinks.SystemConsole.Themes.AnsiConsoleTheme::Code, Serilog.Sinks.Console",
            "formatter": "Serilog.Formatting.Compact.RenderedCompactJsonFormatter, Serilog.Formatting.Compact"
            }
        }
        ]
    }
    

    This makes Serilog show the logs with a different shape:

    Serilog logs with a simple theme

    So, just by updating the AppSettings file, you can fine-tune the behavior and the output of the logger. In this way, you can have your release pipeline update the AppSettings file and define custom properties for every deployment environment.

    Further reading

    If you want to learn more about the different topics discussed in this article:

    🔗 Serilog Structured Data | Code4IT

    🔗 Serilog Console Sink | GitHub

    🔗 How to integrate Serilog and Seq | Code4IT

    Wrapping up

    In this article, we’ve seen how to integrate Serilog in a .NET application to print the logs on the application Console.

    Time to recap the key points:

    • install the Serilog, Serilog.AspNetCore, and Serilog.Extensions.Logging NuGet packages to integrate the basic functionalities of Serilog
    • download the Serilog.Sinks.Console and Serilog.Sinks.Async NuGet packages to use the Console as a destination of your logs
    • update the Program class to specify that the application must use Serilog
    • use ILogger<T> instead of Serilog.ILogger
    • define the settings in the appsettings.json file instead of directly in the code

    Finally, if you want to see the full example, here’s the GitHub repository used for this article

    Happy coding!

    🐧

  • C# Tip: use the Ping class instead of an HttpClient to ping an endpoint

    What if you wanted to see if a remote website is up and running?

    Probably, the first thing that may come to your mind is to use a common C# class: HttpClient. But it may cause you some trouble.

    There is another way to ping an endpoint: using the Ping class.

    Why not use HttpClient

    Say that you need to know if the host at code4it.dev is live. With HttpClient you might use something like this:

    async Task Main()
    {
        var url = "https://code4it.dev";
    
        var isUp = await IsWebsiteUp_Get(url);
    
        Console.WriteLine("The website is {0}", isUp ? "up" : "down");
    }
    
    private async Task<bool> IsWebsiteUp_Get(string url)
    {
        var httpClient = new HttpClient(); // yes, I know, I should use HttpClientFactory!
        var httpResponse = await httpClient.GetAsync(url);
        return httpResponse.IsSuccessStatusCode;
    }
    

    There are some possible issues with this approach: what if there is no resource available at the root? You will have to define a specific path. And what happens if the defined resource is under authentication? IsWebsiteUp_Get will always return false, even when the site is actually up.

    Also, it is possible that the endpoint does not accept GET requests. So, we can send a HEAD request instead:

    private async Task<bool> IsWebsiteUp_Head(string url)
    {
        var httpClient = new HttpClient();
        HttpRequestMessage request = new HttpRequestMessage
        {
            RequestUri = new Uri(url),
            Method = HttpMethod.Head // Not GET, but HEAD
        };
        var result = await httpClient.SendAsync(request);
        return result.IsSuccessStatusCode;
    }
    

    We have the same issues described before, but at least we are not bound to a specific HTTP verb.

    So, we need to find another way.

    How to use Ping

    By using the Ping class, we can get rid of those checks and evaluate the status of the Host, not of a specific resource.

    private async Task<bool> IsWebsiteUp_Ping(string url)
    {
        Ping ping = new Ping();
        var hostName = new Uri(url).Host;
    
        PingReply result = await ping.SendPingAsync(hostName);
        return result.Status == IPStatus.Success;
    }
    

    The Ping class lives in the System.Net.NetworkInformation namespace and allows you to perform the same operation as the ping command you usually run from the command line.
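
    SendPingAsync also has overloads that accept a timeout (in milliseconds), and it throws a PingException when, for example, the host name cannot be resolved. So a slightly more defensive version could look like this sketch:

    private async Task<bool> IsWebsiteUp_PingWithTimeout(string url)
    {
        try
        {
            using Ping ping = new Ping();
            string hostName = new Uri(url).Host;

            // wait at most 3 seconds for the reply
            PingReply result = await ping.SendPingAsync(hostName, 3000);
            return result.Status == IPStatus.Success;
        }
        catch (PingException)
        {
            // e.g. the host name could not be resolved
            return false;
        }
    }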

    Conclusion

    We’ve seen why you should use Ping instead of HttpClient to perform a ping-like operation.

    There’s more than this: head to this more complete article to learn more.

    👉 Let’s discuss it on Twitter or on the comment section below.

    🐧

  • syntax cheat sheet | Code4IT


    Moq and NSubstitute are two of the most used libraries to mock dependencies in your Unit Tests. How do they differ? How can we move from one library to the other?

    When writing Unit Tests, you usually want to mock dependencies. In this way, you can define the behavior of those dependencies, and have full control of the system under test.

    For .NET applications, two of the most used mocking libraries are Moq and NSubstitute. They allow you to create and customize the behavior of the services injected into your classes. Even though they have similar functionalities, their syntax is slightly different.

    In this article, we will learn how the two libraries implement the most used functionalities; in this way, you can easily move from one to another if needed.

    A real-ish example

    As usual, let’s use a real example.

    For this article, I’ve created a dummy class, StringsWorker, that does nothing but call another service, IStringUtility.

    public class StringsWorker
    {
        private readonly IStringUtility _stringUtility;
    
        public StringsWorker(IStringUtility stringUtility)
            => _stringUtility = stringUtility;
    
        public string[] TransformArray(string[] items)
            => _stringUtility.TransformAll(items);
    
        public string[] TransformSingleItems(string[] items)
            => items.Select(i => _stringUtility.Transform(i)).ToArray();
    
        public string TransformString(string originalString)
            => _stringUtility.Transform(originalString);
    }
    

    To test the StringsWorker class, we will mock its only dependency, IStringUtility. This means that we won’t use a concrete class that implements IStringUtility, but rather we will use Moq and NSubstitute to mock it, defining its behavior and simulating real method calls.

    Of course, to use the two libraries, you have to install them in each test project.

    How to define mocked dependencies

    The first thing to do is to instantiate a new mock.

    With Moq, you create a new instance of Mock<IStringUtility>, and then inject its Object property into the StringsWorker constructor:

    private Mock<IStringUtility> moqMock;
    private StringsWorker sut;
    
    public MoqTests()
    {
        moqMock = new Mock<IStringUtility>();
        sut = new StringsWorker(moqMock.Object);
    }
    

    With NSubstitute, instead, you declare it with Substitute.For<IStringUtility>() – which returns an IStringUtility, not wrapped in any class – and then you inject it into the StringsWorker constructor:

    private IStringUtility nSubsMock;
    private StringsWorker sut;
    
    public NSubstituteTests()
    {
        nSubsMock = Substitute.For<IStringUtility>();
        sut = new StringsWorker(nSubsMock);
    }
    

    Now we can customize moqMock and nSubsMock to add behaviors and verify the calls to those dependencies.

    Define method result for a specific input value: the Returns() method

    Say that we want to customize our dependency so that, every time we pass “ciao” as a parameter to the Transform method, it returns “hello”.

    With Moq we use a combination of Setup and Returns.

    moqMock.Setup(_ => _.Transform("ciao")).Returns("hello");
    

    With NSubstitute we don’t use Setup, but we directly call Returns.

    nSubsMock.Transform("ciao").Returns("hello");
    

    Define method result regardless of the input value: It.IsAny() vs Arg.Any()

    Now we don’t care about the actual value passed to the Transform method: we want that, regardless of its value, the method always returns “hello”.

    With Moq, we use It.IsAny<T>() and specify the type of T:

    moqMock.Setup(_ => _.Transform(It.IsAny<string>())).Returns("hello");
    

    With NSubstitute, we use Arg.Any<T>():

    nSubsMock.Transform(Arg.Any<string>()).Returns("hello");
    

    Define method result based on a filter on the input: It.Is() vs Arg.Is()

    Say that we want to return a specific result only when a condition on the input parameter is met.

    For example, every time we pass a string that starts with “IT” to the Transform method, it must return “ciao”.

    With Moq, we use It.Is<T>(func) and we pass an expression as an input.

    moqMock.Setup(_ => _.Transform(It.Is<string>(s => s.StartsWith("IT")))).Returns("ciao");
    

    Similarly, with NSubstitute, we use Arg.Is<T>(func).

    nSubsMock.Transform(Arg.Is<string>(s => s.StartsWith("IT"))).Returns("ciao");
    

    Small trivia: for NSubstitute, the filter is of type Expression<Predicate<T>>, while for Moq it is of type Expression<Func<TValue, bool>>: don’t worry, you can write them in the same way!

    Throwing exceptions

    Since you should test not only happy paths, but even those where an error occurs, you should write tests in which the injected service throws an exception, and verify that that exception is handled correctly.

    With both libraries, you can throw a generic exception by specifying its type:

    //Moq
    moqMock.Setup(_ => _.TransformAll(null)).Throws<ArgumentException>();
    
    //NSubstitute
    nSubsMock.TransformAll(null).Throws<ArgumentException>();
    

    You can also throw a specific exception instance – maybe because you want to add an error message:

    var myException = new ArgumentException("My message");
    
    //Moq
    moqMock.Setup(_ => _.TransformAll(null)).Throws(myException);
    
    //NSubstitute
    nSubsMock.TransformAll(null).Throws(myException);
    

    If you don’t want to handle that exception, but you want to propagate it up, you can verify it in this way:

    Assert.Throws<ArgumentException>(() => sut.TransformArray(null));
    

    Verify received calls: Verify() vs Received()

    Sometimes, to understand if the code follows the execution paths as expected, you might want to verify that a method has been called with some parameters.

    To verify it, you can use the Verify method on Moq.

    moqMock.Verify(_ => _.Transform("hello"));
    

    Or, if you use NSubstitute, you can use the Received method.

    nSubsMock.Received().Transform("hello");
    

    Similarly to what we’ve seen before, you can use It.IsAny, It.Is, Arg.Any, and Arg.Is to verify some properties of the parameters passed as input.
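
    For example, here’s a sketch that verifies the argument by a condition rather than by its exact value:

    // Moq
    moqMock.Verify(_ => _.Transform(It.Is<string>(s => s.StartsWith("IT"))));

    // NSubstitute
    nSubsMock.Received().Transform(Arg.Is<string>(s => s.StartsWith("IT")));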

    Verify the exact count of received calls

    Other times, you might want to verify that a method has been called exactly N times.

    With Moq, you can add a parameter to the Verify method:

    sut.TransformSingleItems(new string[] { "a", "b", "c" });
    
    moqMock.Verify(_ => _.Transform(It.IsAny<string>()), Times.Exactly(3));
    

    Note that you can specify different values for that parameter, like Times.Exactly, Times.Never, Times.Once, Times.AtLeast, and so on.

    With NSubstitute, on the contrary, you can only specify a defined value, added as a parameter to the Received method.

    sut.TransformSingleItems(new string[] { "a", "b", "c" });
    
    nSubsMock.Received(3).Transform(Arg.Any<string>());
    
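    As a related case, to assert that a method has never been called, you can use Times.Never() with Moq and DidNotReceive() with NSubstitute – a quick sketch:

    // Moq
    moqMock.Verify(_ => _.TransformAll(It.IsAny<string[]>()), Times.Never());

    // NSubstitute
    nSubsMock.DidNotReceive().TransformAll(Arg.Any<string[]>());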

    Reset received calls

    As you remember, the mocked dependencies have been instantiated within the constructor, so every test method uses the same instance. This may cause some trouble, especially when checking how many calls the dependencies have received (because the count of received calls accumulates across all the test methods run before). Therefore, we need to reset the count of the received calls.

    In NUnit, you can define a method that runs before each test method – as long as it is decorated with the SetUp attribute:

    [SetUp]
    public void Setup()
    {
      // reset count
    }
    

    Here we can reset the number of recorded method invocations on the dependencies and make sure that every test method starts from a clean state.

    With Moq, you can use Invocations.Clear():

    [SetUp]
    public void Setup()
    {
        moqMock.Invocations.Clear();
    }
    

    While, with NSubstitute, you can use ClearReceivedCalls():

    [SetUp]
    public void Setup()
    {
        nSubsMock.ClearReceivedCalls();
    }
    

    Further reading

    As always, the best way to learn what a library can do is to head to its documentation. So, here you can find the links to Moq and NSubstitute docs.

    🔗 Moq documentation | GitHub

    🔗 NSubstitute documentation | NSubstitute

    If you already use Moq but you are having some trouble testing and configuring IHttpClientFactory instances, I got you covered:

    🔗 How to test HttpClientFactory with Moq | Code4IT

    Finally, if you want to see the complete code of this article, you can find it on GitHub; I’ve written the exact same tests with both libraries so that you can compare them more easily.

    🔗 GitHub repository for the code used in this article | GitHub

    Conclusion

    In this article, we’ve seen how Moq and NSubstitute allow us to perform some basic operations when writing unit tests with C#. They are similar, but each one has a specific set of functionalities that the other library lacks – or, at least, that I’m not aware exist in both.

    Which library do you use, Moq or NSubstitute? Or maybe, another one?

    Happy coding!
    🐧

  • Don’t use too many method arguments | Code4IT

    Many times, we tend to add too many parameters to a function. But that’s not the best idea: on the contrary, when a function requires too many arguments, grouping them into coherent objects helps you write simpler code.

    Why? How can we do it? What are the main issues with having too many params? Have a look at the following snippet:

    void SendPackage(
        string name,
        string lastname,
        string city,
        string country,
        string packageId
        ) { }
    

    If you need to use another field about the address or the person, you will need to add a new parameter and update all the existing calls to match the new method signature.

    What if we added a State argument? Is this part of the address (state = “Italy”) or something related to the package (state = Damaged)?

    Storing this field in the correct object helps clarify its meaning.

    void SendPackage(Person person, string packageId) { }
    
    class Person {
        public string Name { get; set; }
        public string LastName { get; set; }
        public Address Address {get; set;}
    }
    
    class Address {
        public string City { get; set; }
        public string Country { get; set; }
    }
    

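    To answer the State question above, here is a hypothetical way to place that field so its meaning is unambiguous (PackageState is an invented type, used just for illustration):

    class Address {
        public string City { get; set; }
        public string Country { get; set; }
        public string State { get; set; }        // geographic state or region
    }

    enum PackageState { Intact, Damaged, Lost }  // condition of the package
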
    Another reason to avoid using lots of parameters? To avoid merge conflicts.

    Say that two devs, Alice and Bob, are working on some functionalities that impact the SendPackage method. Alice, on her branch, adds a new param, bool withPriority. In the meantime, Bob, on his branch, adds bool applyDiscount. Then, both Alice and Bob merge their branches into the main one. What’s the result? Of course, a conflict: the method now has two boolean parameters, and the order in which they end up in the final signature may cause trouble. Even more so because every call to the SendPackage method now has one (or two) new params, whose values depend on the context. So, after the merge, a value intended for Alice’s withPriority parameter might end up being passed as Bob’s applyDiscount.

    Conclusion

    To recap, why do we need to reduce the number of parameters?

    • to give context and meaning to those parameters
    • to avoid errors for positional parameters
    • to avoid merge conflicts

    👉 Let’s discuss it on Twitter or on the comment section below!

    🐧

  • How to resolve dependencies in .NET APIs based on current HTTP Request


    Did you know that in .NET you can resolve specific dependencies using Factories? We’ll use them to switch between concrete classes based on the current HTTP Request.

    Say that you have an interface and that you want to specify its concrete class at runtime using the native Dependency Injection engine provided by .NET.

    For instance, imagine that you have a .NET API project and that the flag that tells the application which dependency to use is set in the HTTP Request.

    Can we do it? Of course, yes – otherwise I wouldn’t be here writing this article 😅 Let’s learn how!

    Why use different dependencies?

    But first: does all of this make sense? Is there any case when you want to inject different services at runtime?

    Let me share with you a story: once I had to create an API project which exposed just a single endpoint: Process(string ID).

    That endpoint read the item with that ID from a DB – an object composed of some data and some hundreds of children IDs – and then called an external service to download an XML file for every child ID in the object; then, every downloaded XML file would be saved on the file system of the server where the API was deployed to. Finally, a TXT file with the list of the items correctly saved on the file system was generated.

    Quite an easy task: read from DB, call some APIs, store the file, store the report file. Nothing more.

    But, how to run it locally without saving hundreds of files for every HTTP call?

    I decided to add a simple Query Parameter to the HTTP path and let .NET decide whether to use the concrete class or a fake one. Let’s see how.

    Define the services on ConfigureServices

    As you may know, the dependencies are defined in the ConfigureServices method inside the Startup class.

    Here we can define our dependencies. For this example, we have an interface, IFileSystemAccess, which is implemented by two classes: FakeFileSystemAccess and RealFileSystemAccess.

    So, to define those mutable dependencies, you can follow this snippet:

    public void ConfigureServices(IServiceCollection services)
    {
        services.AddControllers();
    
        services.AddHttpContextAccessor();
    
        services.AddTransient<FakeFileSystemAccess>();
        services.AddTransient<RealFileSystemAccess>();
    
        services.AddScoped<IFileSystemAccess>(provider =>
        {
            var context = provider.GetRequiredService<IHttpContextAccessor>();
    
            var useFakeFileSystemAccess = context.HttpContext?.Request?.Query?.ContainsKey("fake-fs") ?? false;
    
            if (useFakeFileSystemAccess)
                return provider.GetRequiredService<FakeFileSystemAccess>();
            else
                return provider.GetRequiredService<RealFileSystemAccess>();
        });
    }
    

    As usual, let’s break it down:

    Inject dependencies using a Factory

    Let’s begin with the king of the article:

    services.AddScoped<IFileSystemAccess>(provider =>
    {
    }
    

    We can define our dependencies by using a factory. For instance, now we are using the AddScoped Extension Method (wanna know some interesting facts about Extension Methods?):

    //
    // Summary:
    //     Adds a scoped service of the type specified in TService with a factory specified
    //     in implementationFactory to the specified Microsoft.Extensions.DependencyInjection.IServiceCollection.
    //
    // Parameters:
    //   services:
    //     The Microsoft.Extensions.DependencyInjection.IServiceCollection to add the service
    //     to.
    //
    //   implementationFactory:
    //     The factory that creates the service.
    //
    // Type parameters:
    //   TService:
    //     The type of the service to add.
    //
    // Returns:
    //     A reference to this instance after the operation has completed.
    public static IServiceCollection AddScoped<TService>(this IServiceCollection services, Func<IServiceProvider, TService> implementationFactory) where TService : class;
    

    This Extension Method allows us to get the information about the services already injected in the current IServiceCollection instance and use them to define how to instantiate the actual dependency for the TService – in our case, IFileSystemAccess.

    Why is this a Scoped dependency? As you might remember from a previous article, in .NET we have 3 lifetimes for dependencies: Singleton, Scoped, and Transient. Scoped dependencies are the ones that get instantiated once per HTTP request: therefore, they are the best choice for this specific example.

    Reading from Query String

    Since we need to read a value from the query string, we need to access the HttpRequest object.

    That’s why we have:

    var context = provider.GetRequiredService<IHttpContextAccessor>();
    var useFakeFileSystemAccess = context.HttpContext?.Request?.Query?.ContainsKey("fake-fs") ?? false;
    

    Here I’m getting the HTTP Context and checking if the fake-fs key is defined. Yes, I know, I’m not checking its actual value: I’m just checking whether the key exists or not.

    IHttpContextAccessor is the key part of this snippet: it is a service that acts as a wrapper around the HttpContext object. You can inject it everywhere in your code, but under one condition: you have to register it in the ConfigureServices method.

    How? Well, that’s simple:

    services.AddHttpContextAccessor();
    

    Injecting the dependencies based on the request

    Finally, we can define which dependency must be injected for the current HTTP Request:

    if (useFakeFileSystemAccess)
        return provider.GetRequiredService<FakeFileSystemAccess>();
    else
        return provider.GetRequiredService<RealFileSystemAccess>();
    

    Remember that we are inside a factory method: this means that, depending on the value of useFakeFileSystemAccess, we are defining the concrete class of IFileSystemAccess.

    GetRequiredService<T> returns the instance of type T injected in the DI engine. This implies that we have to inject the two different services before accessing them. That’s why you see:

    services.AddTransient<FakeFileSystemAccess>();
    services.AddTransient<RealFileSystemAccess>();
    

    Those two lines of code serve two different purposes:

    1. they make those services available to the GetRequiredService method;
    2. they resolve all the dependencies injected in those services

    Running the example

    Now that we have everything in place, it’s time to put it into practice.

    First of all, we need a Controller with the endpoint we will call:

    [ApiController]
    [Route("[controller]")]
    public class StorageController : ControllerBase
    {
        private readonly IFileSystemAccess _fileSystemAccess;
    
        public StorageController(IFileSystemAccess fileSystemAccess)
        {
            _fileSystemAccess = fileSystemAccess;
        }
    
        [HttpPost]
        public async Task<IActionResult> SaveContent([FromBody] FileInfo content)
        {
            string filename = $"file-{Guid.NewGuid()}.txt";
            var saveResult = await _fileSystemAccess.WriteOnFile(filename, content.Content);
            return Ok(saveResult);
        }
    
        public class FileInfo
        {
            public string Content { get; set; }
        }
    }
    

    Nothing fancy: this POST endpoint receives an object with some text, and calls IFileSystemAccess to store the file. Then, it returns the result of the operation.

    Then, we have the interface:

    public interface IFileSystemAccess
    {
        Task<FileSystemSaveResult> WriteOnFile(string fileName, string content);
    }
    
    public class FileSystemSaveResult
    {
        public FileSystemSaveResult(string message)
        {
            Message = message;
        }
    
        public string Message { get; set; }
    }
    

    which is implemented by the two classes:

    public class FakeFileSystemAccess : IFileSystemAccess
    {
        public Task<FileSystemSaveResult> WriteOnFile(string fileName, string content)
        {
            return Task.FromResult(new FileSystemSaveResult("Used mock File System access"));
        }
    }
    

    and

    public class RealFileSystemAccess : IFileSystemAccess
    {
        public async Task<FileSystemSaveResult> WriteOnFile(string fileName, string content)
        {
            await File.WriteAllTextAsync(fileName, content);
            return new FileSystemSaveResult("Used real File System access");
        }
    }
    

    As you might have imagined, only RealFileSystemAccess actually writes on the file system. But both of them return an object with a message that tells us which class completed the operation.

    Let’s see it in practice:

    First of all, let’s call the endpoint without anything in Query String:

    Without specifying the flag in Query String, we are using the real file system access

    And, then, let’s add the key:

    By adding the flag, we are using the mock class, so that we don’t create real files

    As expected, depending on the query string, we can see two different results.

    Of course, you can use this strategy not only with values from the Query String, but also from HTTP Headers, cookies, and whatever comes with the HTTP Request.
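
    For instance, here’s a minimal sketch of the same factory reading a hypothetical HTTP header (named X-Use-Fake-Fs just for this example) instead of the query string:

    services.AddScoped<IFileSystemAccess>(provider =>
    {
        var context = provider.GetRequiredService<IHttpContextAccessor>();

        // switch implementation based on a (made-up) request header
        var useFakeFileSystemAccess = context.HttpContext?.Request?.Headers.ContainsKey("X-Use-Fake-Fs") ?? false;

        if (useFakeFileSystemAccess)
            return provider.GetRequiredService<FakeFileSystemAccess>();
        else
            return provider.GetRequiredService<RealFileSystemAccess>();
    });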

    Further readings

    If you remember, we’ve defined the dependency to IFileSystemAccess as Scoped. Why? What are the other lifetimes native on .NET?

    🔗 Dependency Injection lifetimes in .NET | Code4IT

    Also, AddScoped is the Extension Method that we used to build our dependencies thanks to a Factory. Here’s an article about some advanced topics about Extension Methods:

    🔗 How you can create Extension Methods in C# | Code4IT

    Finally, the repository for the code used for this article:

    🔗 DependencyInjectionByHttpRequest project | GitHub

    Wrapping up

    In this article, we’ve seen that we can use a Factory to define at runtime which class will be used when resolving a Dependency.

    We’ve used a simple check based on the current HTTP request, but of course, there are many other ways to achieve a similar result.

    What would you use instead? Have you ever used a similar approach? And why?

    Happy coding!

    🐧

  • Use a SortedSet to avoid duplicates and sort items | Code4IT

    Using the right data structure is crucial to building robust and efficient applications. So, why use a List or a HashSet to sort items (and remove duplicates) when you have a SortedSet?

    As you probably know, you can create collections of items without duplicates by using a HashSet<T> object.

    It is quite useful to remove duplicates from a list of items of the same type.

    How can we ensure that we always have sorted items? The answer is simple: SortedSet<T>!

    HashSet: a collection without duplicates

    A simple HashSet creates a collection of unordered items without duplicates.

    This example

    var hashSet = new HashSet<string>();
    hashSet.Add("Turin");
    hashSet.Add("Naples");
    hashSet.Add("Rome");
    hashSet.Add("Bari");
    hashSet.Add("Rome");
    hashSet.Add("Turin");
    
    
    var resultHashSet = string.Join(',', hashSet);
    Console.WriteLine(resultHashSet);
    

    prints this string: Turin,Naples,Rome,Bari. Here the insertion order happens to be preserved, but keep in mind that HashSet<T> does not guarantee any particular ordering.

    SortedSet: a sorted collection without duplicates

    To sort those items, we have two approaches.

    You can simply sort the collection once you’ve finished adding items:

    var hashSet = new HashSet<string>();
    hashSet.Add("Turin");
    hashSet.Add("Naples");
    hashSet.Add("Rome");
    hashSet.Add("Bari");
    hashSet.Add("Rome");
    hashSet.Add("Turin");
    
    var items = hashSet.ToList<string>().OrderBy(s => s);
    
    
    var resultHashSet = string.Join(',', items);
    Console.WriteLine(resultHashSet);
    

    Or, even better, use the right data structure: a SortedSet<T>

    var sortedSet = new SortedSet<string>();
    
    sortedSet.Add("Turin");
    sortedSet.Add("Naples");
    sortedSet.Add("Rome");
    sortedSet.Add("Bari");
    sortedSet.Add("Rome");
    sortedSet.Add("Turin");
    
    
    var resultSortedSet = string.Join(',', sortedSet);
    Console.WriteLine(resultSortedSet);
    

    Both results print Bari,Naples,Rome,Turin. But the second approach does not require you to sort a whole list: it is more efficient, both in terms of time and of memory.

    Use custom sorting rules

    What if we wanted to use a SortedSet with a custom object, like User?

    public class User {
        public string FirstName { get; set; }
        public string LastName { get; set; }
    
        public User(string firstName, string lastName)
        {
            FirstName = firstName;
            LastName = lastName;
        }
    }
    

    Of course, we can do that:

    var set = new SortedSet<User>();
    
    set.Add(new User("Davide", "Bellone"));
    set.Add(new User("Scott", "Hanselman"));
    set.Add(new User("Safia", "Abdalla"));
    set.Add(new User("David", "Fowler"));
    set.Add(new User("Maria", "Naggaga"));
    set.Add(new User("Davide", "Bellone"));//DUPLICATE!
    
    foreach (var user in set)
    {
        Console.WriteLine($"{user.LastName} {user.FirstName}");
    }
    

    But we will get a runtime exception as soon as the set needs to compare two users: our class doesn’t know how to compare its instances!

    That’s why we must update our User class so that it implements the IComparable interface:

    public class User : IComparable
    {
        public string FirstName { get; set; }
        public string LastName { get; set; }
    
        public User(string firstName, string lastName)
        {
            FirstName = firstName;
            LastName = lastName;
        }
    
        public int CompareTo(object obj)
        {
            var other = (User)obj;
            var lastNameComparison = LastName.CompareTo(other.LastName);
    
            return (lastNameComparison != 0)
                ? lastNameComparison :
                (FirstName.CompareTo(other.FirstName));
        }
    }
    

    In this way, everything works as expected:

    Abdalla Safia
    Bellone Davide
    Fowler David
    Hanselman Scott
    Naggaga Maria
    

    Notice that the second Davide Bellone has disappeared since it was a duplicate.
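
    As an alternative (not covered in the original article), if you can’t or don’t want to modify the User class, you can pass a custom IComparer<User> to the SortedSet constructor; a minimal sketch, assuming the same User class as above:

    // Compare by LastName, then FirstName, without touching the User class
    var byName = Comparer<User>.Create((a, b) =>
    {
        var lastNameComparison = string.Compare(a.LastName, b.LastName);
        return lastNameComparison != 0
            ? lastNameComparison
            : string.Compare(a.FirstName, b.FirstName);
    });

    var set = new SortedSet<User>(byName);
    set.Add(new User("Davide", "Bellone"));
    set.Add(new User("Scott", "Hanselman"));
    set.Add(new User("Davide", "Bellone")); // still detected as a duplicate

    Console.WriteLine(set.Count); // 2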

    This article first appeared on Code4IT

    Wrapping up

    Choosing the right data type is crucial for building robust and performant applications.

    In this article, we’ve used a SortedSet to insert items in a collection and expect them to be sorted and without duplicates.

    I’ve never used it in a project. So, how did I know that? I just explored the libraries I was using!

    From time to time, spend a few minutes reading the documentation, skim through the most common libraries, and so on: you’ll find lots of stuff that you didn’t even know existed!

    Play with your code! Explore it. Be curious.

    And have fun!

    🐧



    Source link

  • How to parse JSON Lines (JSONL) with C# | Code4IT


    JSONL is JSON’s less famous sibling: it allows you to store JSON objects by separating them with a new line. We will learn how to parse a JSONL string with C#.


    For sure, you already know JSON: it’s one of the most commonly used formats to share data as text.

    Did you know that there are different flavors of JSON? One of them is JSONL: it represents a JSON document where each item sits on its own line instead of being wrapped in an array.

    It’s quite a rare format to find, so it can be tricky to understand how it works and how to parse it. In this article, we will learn how to parse a JSONL file with C#.

    Introducing JSONL

    As explained in the JSON Lines documentation, a JSONL file is a file composed of different items separated by a \n character.

    So, instead of having

    [{ "name": "Davide" }, { "name": "Emma" }]
    

    you have a list of items without an array grouping them.

    { "name" : "Davide" }
    { "name" : "Emma" }
    

    I must admit that I’d never heard of that format until a few months ago. Or, better said, I had already used JSONL files without knowing it: JSONL is a common format for logs, where every entry is added to the file in a continuous stream.

    Also, JSONL has some characteristics:

    • every item is a valid JSON item
    • every line is separated by a \n character (or by \r\n, but \r is ignored)
    • it is encoded using UTF-8

    So, now, it’s time to parse it!

    Parsing the file

    Say that you’re creating a videogame, and you want to read all the items found by your character:

    class Item {
        public int Id { get; set; }
        public string Name { get; set; }
        public string Category { get; set; }
    }
    

    The items list can be stored in a JSONL file, like this:

    {  "id": 1,  "name": "dynamite",  "category": "weapon" }
    {  "id": 2,  "name": "ham",  "category": "food" }
    {  "id": 3,  "name": "nail",  "category": "tool" }
    

    Now, all we have to do is to read the file and parse it.

    Assuming that we’ve read the content from a file and that we’ve stored it in a string called content, we can use Newtonsoft to parse those lines.
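
    For completeness, that assumption can be as simple as this (the file name here is just an example):

    // JSONL files are UTF-8, which is also the default for File.ReadAllText
    string content = File.ReadAllText("items.jsonl");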

    As usual, let’s see how to parse the file, and then we’ll deep dive into what’s going on. (Note: the following snippet comes from this question on Stack Overflow)

    // Requires: using Newtonsoft.Json; using System.IO; using System.Collections.Generic;
    List<Item> ParseItems(string content)
    {
        List<Item> items = new List<Item>();

        var jsonReader = new JsonTextReader(new StringReader(content))
        {
            SupportMultipleContent = true // This!!!
        };

        var jsonSerializer = new JsonSerializer();
        while (jsonReader.Read())
        {
            Item item = jsonSerializer.Deserialize<Item>(jsonReader);
            items.Add(item);
        }

        return items;
    }
    

    Let’s break it down:

    var jsonReader = new JsonTextReader(new StringReader(content))
    {
        SupportMultipleContent = true // This!!!
    };
    

    The first thing to do is to create an instance of JsonTextReader, a class coming from the Newtonsoft.Json namespace. The constructor accepts a TextReader instance or any derived class. So we can use a StringReader instance that represents a stream from a specified string.

    The key part of this snippet (and, somehow, of the whole article) is the SupportMultipleContent property: when set to true, it allows the JsonTextReader to keep reading multiple JSON fragments from the same stream.

    Its definition, in fact, says that:

    //
    // Summary:
    //     Gets or sets a value indicating whether multiple pieces of JSON content can be
    //     read from a continuous stream without erroring.
    //
    // Value:
    //     true to support reading multiple pieces of JSON content; otherwise false. The
    //     default is false.
    public bool SupportMultipleContent { get; set; }
    

    Finally, we can read the content:

    var jsonSerializer = new JsonSerializer();
    while (jsonReader.Read())
    {
        Item item = jsonSerializer.Deserialize<Item>(jsonReader);
        items.Add(item);
    }
    

    Here we create a new JsonSerializer (again, coming from Newtonsoft), and use it to read one item at a time.

    The while (jsonReader.Read()) allows us to read the stream till the end. And, to parse each item found on the stream, we use jsonSerializer.Deserialize<Item>(jsonReader);.

    The Deserialize method is smart enough to parse every item even without a , symbol separating them, because we have set SupportMultipleContent to true.

    Once we have the Item object, we can do whatever we want, like adding it to a list.

    Further readings

    As we’ve learned, there are different flavors of JSON. You can read an overview of them on Wikipedia.

    🔗 JSON Lines introduction | Wikipedia

    Of course, the best place to learn more about a format is its official documentation.

    🔗 JSON Lines documentation | Jsonlines

    This article exists thanks to Imran Qadir Baksh’s question on Stack Overflow, and, of course, to Yuval Itzchakov’s answer.

    🔗 Line delimited JSON serializing and de-serializing | Stack Overflow

    Since we’ve used Newtonsoft (aka: JSON.NET), you might want to have a look at its website.

    🔗 SupportMultipleContent property | Newtonsoft

    Finally, the repository used for this article.

    🔗 JsonLinesReader repository | GitHub

    Conclusion

    You might be thinking:

    Why has Davide written an article about a comment on Stack Overflow?? I could have just read the same info there!

    Well, if you were interested only in the main snippet, you would’ve been right!

    But this article exists for two main reasons.

    First, I wanted to highlight that JSON is not always the best choice for everything: it always depends on what we need. For continuous streams of items, JSONL is a good (if not the best) choice. Don’t choose the most used format: choose what best fits your needs!

    Second, I wanted to remark that we should not be too attached to a specific library: I generally prefer native tools, so, for reading JSON files, my first choice is System.Text.Json. But it’s not always the best choice. Yes, we could write some complex workaround (like the second answer on Stack Overflow), but… is it worth it? Sometimes it’s better to use another library, even if just for one specific task. So, you could use System.Text.Json for the whole project except for the part where you need to read a JSONL file.
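
    For reference, here is a minimal sketch of the simplest System.Text.Json alternative; it assumes every object sits on a single line (which is exactly what the JSONL spec mandates) and requires .NET 5 or later for StringSplitOptions.TrimEntries:

    using System.Linq;
    using System.Text.Json;

    var options = new JsonSerializerOptions { PropertyNameCaseInsensitive = true };

    // Split on new lines, skip blanks, and deserialize each line on its own
    List<Item> items = content
        .Split('\n', StringSplitOptions.RemoveEmptyEntries | StringSplitOptions.TrimEntries)
        .Select(line => JsonSerializer.Deserialize<Item>(line, options)!)
        .ToList();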

    Have you ever come across unusual formats? How did you deal with them?

    Happy coding!

    🐧



    Source link

  • Keep the parameters in a consistent order | Code4IT



    If you have a set of related functions, always use a consistent order of parameters.

    Take this bad example:

    IEnumerable<Section> GetSections(Context context);
    
    void AddSectionToContext(Context context, Section newSection);
    
    void AddSectionsToContext(IEnumerable<Section> newSections, Context context);
    

    Notice the order of the parameters passed to AddSectionToContext and AddSectionsToContext: they are swapped!

    Quite confusing, isn’t it?

    Confusion intensifies

    For sure, the code is harder to understand, since the order of the parameters is not what the reader expects it to be.

    But, even worse, this issue may lead to hard-to-find bugs, especially when parameters are of the same type.

    Think of this example:

    IEnumerable<Item> GetPhotos(string type, string country);
    
    IEnumerable<Item> GetVideos(string country, string type);
    

    Well, what could possibly go wrong?!?

    We have two ways to prevent possible issues:

    1. use a consistent order: for instance, type is always the first parameter
    2. pass objects instead: you’ll write a bit more code, but you’ll prevent those issues (see the sketch below)
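
    A minimal sketch of the second option, using a hypothetical MediaFilter parameter object (Item is the same type as in the snippet above):

    // Hypothetical parameter object: the call site names the values,
    // so accidentally swapping them becomes much harder
    public record MediaFilter(string Type, string Country);

    public interface IMediaCatalog
    {
        IEnumerable<Item> GetPhotos(MediaFilter filter);
        IEnumerable<Item> GetVideos(MediaFilter filter);
    }

    // Usage (catalog is a hypothetical IMediaCatalog instance):
    // explicit names instead of two anonymous strings
    var photos = catalog.GetPhotos(new MediaFilter(Type: "landscape", Country: "Italy"));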

    To read more about this code smell, check out this article by Maxi Contieri!

    This article first appeared on Code4IT

    Conclusion

    To recap, always pay attention to the order of the parameters!

    • always keep them in the same order
    • use an easy-to-understand order (remember the Principle of Least Surprise?)
    • use objects instead, if necessary.

    👉 Let’s discuss it on Twitter or in the comment section below!

    🐧





    Source link