
  • How to customize fields generation in Visual Studio 2019 | Code4IT


    Every time you ask Visual Studio to generate properties for you, it creates them with a simple, default format. But we can customize them by updating some options on our IDE. Let’s learn how!


    Just a second! 🫷
    If you are here, it means that you are a software developer.
    So, you know that storage, networking, and domain management have a cost.

    If you want to support this blog, please ensure that you have disabled the adblocker for this site.
    I configured Google AdSense to show as few ads as possible – I don’t want to bother you with lots of ads, but I still need some to pay for the resources for my site.

    Thank you for your understanding.
    Davide

    We, as developers, hate repetitive tasks, don’t we? In fact, we often auto-generate code using our IDE’s capabilities. Yet, sometimes the auto-generated code does not follow our team rules or our personal taste, so we have to rename stuff every single time.

    For instance, say that your golden rule is to name your readonly fields with a _ prefix: private readonly IService _myService instead of private readonly IService myService. Renaming the fields every time is… boring!

    In this article, you will learn how to customize Visual Studio 2019 to get the most out of the auto-generated code. In particular, we will customize the names of the readonly fields generated when we add a dependency to a class constructor.

    The usual autocomplete

    If you work properly, you make heavy use of Dependency Injection. And, if you do, you often define dependencies in a class’ constructor.

    Now, let’s have two simple actors: a class, MyService, and an interface, IMyDependency. We want to inject the IMyDependency service into the MyService constructor.

    public MyService(IMyDependency myDependency)
    {
    
    }
    

    To store the reference to IMyDependency somewhere, you usually click on the lightbulb that appears in the left margin, or hit CTRL+. This command prompts you with some actions, like creating and initializing a new field:

    Default field generation without underscore

    This action creates a private readonly IMyDependency myDependency field and assigns it the value passed to the constructor.

    private readonly IMyDependency myDependency;
    
    public MyService(IMyDependency myDependency)
    {
        this.myDependency = myDependency;
    }
    

    Now, let’s say that we want our fields to have an underscore as a prefix: so we must manually rename myDependency to _myDependency. Ok, not a big issue, but we can still save some time by avoiding doing it manually.

    Setting up the right configurations

    To configure how fields are auto-generated, open Visual Studio and, in the top menu, navigate to Tools and then Options.

    Then, browse to Text Editor > C# > Code Style > Naming

    Navigation path in the Options window

    Here we have all the symbols that we can customize.

    The first thing to do is to create a custom naming style. On the right side of the options panel, click on the “Manage naming styles” button, and then on the “+” button. You will see a form that you can fill with your custom styles; the Sample Identifier field shows you the result of the generated fields.

    In the following picture you can see the result you can obtain if you fill all the fields: our fields will have a _ prefix and an Svc suffix, the words will be separated by a - symbol, and the name will be uppercase. As a result, the field name will be _EXAMPLE-IDENTIFIERSvc

    Naming Style window with all the fields filled

    Since we’re only interested in adding a _ prefix and keeping the text in camelCase, well… just add those settings! And don’t forget to specify a style name, like _fieldName.

    Close the form, and add a new Specification to the list: define that the new style must be applied to every Private or Internal Field, and assign it the newly created style (in my case, _fieldName). And… we’re done!
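    By the way, if your team shares code-style settings through an .editorconfig file, the same rule can be expressed there too, so it travels with the repository instead of living only in your local IDE options. Here is a minimal sketch (the rule and style names are arbitrary), assuming the standard dotnet_naming_* keys:

    ```ini
    # Symbols: every private or internal field
    dotnet_naming_symbols.private_fields.applicable_kinds = field
    dotnet_naming_symbols.private_fields.applicable_accessibilities = private, internal

    # Style: _ prefix + camelCase, e.g. _myDependency
    dotnet_naming_style.underscore_camel_case.required_prefix = _
    dotnet_naming_style.underscore_camel_case.capitalization = camel_case

    # Rule: bind the symbols to the style
    dotnet_naming_rule.private_fields_underscore.symbols = private_fields
    dotnet_naming_rule.private_fields_underscore.style = underscore_camel_case
    dotnet_naming_rule.private_fields_underscore.severity = suggestion
    ```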

    Specification orders

    Final result

    Now that we have everything in place, we can try adding a dependency to our MyService class:

    Adding field on constructor

    As you can see, now the generated field is named _myDependency instead of myDependency.

    And the same happens when you instantiate a new instance of MyService and then you pass a new dependency in the constructor: Visual Studio automatically creates a new constructor with the missing dependency and assigns it to a private field (which, in this case, is not marked as readonly).

    Adding field from New statement

    Wrapping up

    In this article, we’ve learned how to configure Visual Studio 2019 to create private fields in a custom format, like adding a prefix to the field name.

    In my opinion, knowing the capabilities and possible customizations of your IDE is one of the most underrated skills. We spend most of our time working in an IDE – in my case, Visual Studio – so we should get to know it better to get the best from it and simplify our dev life.

    Are there any other smart customizations that you want to share? Tell us about it in the comment section below!

    So, for now, happy coding!

    🐧




  • define Using Aliases to avoid ambiguity | Code4IT



    Sometimes we need to use objects with the same name but from different namespaces. How to remove that ambiguity? By Using Aliases!


    You may have to reference classes or services that come from different namespaces or packages, but that have the same name. It may become tricky to understand which reference refers to a specific type.

    Yes, you could use the fully qualified name of the class. Or, you could use namespace aliases to write cleaner and easier-to-understand code.

    It’s just a matter of modifying your using statements. Let’s see how!

    The general approach

    Say that you are working on an application that receives info about football matches from different sources using NuGet packages, and then manipulates the data to follow some business rules.

    Both services, ShinyData and JuanStatistics (totally random names!), provide an object called Match. Of course, those objects live in their specific namespaces.

    Since you are using the packages’ native implementations, you cannot rename the classes to avoid the ambiguity. So you’ll end up with code like this:

    void Main()
    {
        var shinyMatch = new ShinyData.Football.Statistics.Match();
        var juanMatch = new JuanStatistics.Stats.Football.Objects.Match();
    }
    

    Writing the fully qualified namespace every time can easily become boring. The code becomes less readable too!

    Luckily we have 2 solutions. Or, better, a solution that we can apply in two different ways.

    Namespace aliases – a cleaner solution

    The following solution will not work:

    using ShinyData.Football.Statistics;
    using JuanStatistics.Stats.Football.Objects;
    
    void Main()
    {
        var shinyMatch = new Match();
        var juanMatch = new Match();
    }
    

    because, of course, the compiler is not able to understand the exact type of shinyMatch and juanMatch.

    But we can use a nice functionality of C#: namespace aliases. It simply means that we can name an imported namespace and use the alias to reference the related classes.

    Using alias for the whole namespace

    using Shiny = ShinyData.Football.Statistics;
    using Juan = JuanStatistics.Stats.Football.Objects;
    
    void Main()
    {
        var shinyMatch = new Shiny.Match();
        var juanMatch = new Juan.Match();
    }
    

    This simple trick boosts the readability of your code.

    Using alias for a specific class

    Can we go another step further? Yes! We can even specify aliases for a specific class!

    using ShinyMatch = ShinyData.Football.Statistics.Match;
    using JuanMatch = JuanStatistics.Stats.Football.Objects.Match;
    
    void Main()
    {
        var shinyMatch = new ShinyMatch();
        var juanMatch = new JuanMatch();
    }
    

    Now we can create an instance of ShinyMatch which, since it is an alias listed among the using statements, is of type ShinyData.Football.Statistics.Match.

    Define alias for generics

    Not only can you use an alias for a simple class: you can also use it for generics.

    Say that the ShinyData namespace defines a generic class, like CustomDictionary<T>. You can reference it just as you did before!

    using ShinyMatch = ShinyData.Football.Statistics.Match;
    using JuanMatch = JuanStatistics.Stats.Football.Objects.Match;
    using ShinyDictionary = ShinyData.Football.Statistics.CustomDictionary<int>;
    
    void Main()
    {
        var shinyMatch = new ShinyMatch();
        var juanMatch = new JuanMatch();
    
        var dictionary = new ShinyDictionary();
    }
    

    Unfortunately, we have some limitations:

    • we must always specify the inner type of the generic: CustomDictionary&lt;int&gt; is valid, but the open generic CustomDictionary&lt;T&gt; is not
    • we cannot use a class defined with an alias as the inner type: CustomDictionary&lt;ShinyMatch&gt; is invalid, unless we use the fully qualified name
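    For instance, the second limitation can be worked around by spelling out the inner type completely. A sketch, using the same (made-up) namespaces as above:

    ```csharp
    // Invalid: the inner type cannot be another alias
    // using ShinyDictionary = ShinyData.Football.Statistics.CustomDictionary<ShinyMatch>;

    // Valid: use the fully qualified name of the inner type
    using ShinyDictionary =
        ShinyData.Football.Statistics.CustomDictionary<ShinyData.Football.Statistics.Match>;
    ```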

    Conclusion

    We’ve seen how we can define namespace aliases to simplify our C# code: just add a name to an imported namespace in the using statement, and reference it on your code.

    What would you reference, the namespace or the specific class?

    👉 Let’s discuss it on Twitter or on the comment section below.

    🐧






  • use the same name for the same concept | Code4IT



    As I always say, naming things is hard. We’ve already talked about this in a previous article.

    By creating a simple and coherent dictionary, your classes will have better names because you are representing the same idea with the same name. This improves code readability and searchability. Also, by simply looking at the names of your classes you can grasp the meaning of them.

    Say that we have 3 objects that perform similar operations: they download some content from external sources.

    class YouTubeDownloader {    }
    
    class TwitterDownloadManager {    }
    
    class FacebookDownloadHandler {    }
    

    Here we are using 3 different words for the same concept: Downloader, DownloadManager, DownloadHandler. Why??

    Also, since the names don’t match, you can’t even find all the similar classes by searching for “Downloader” in your IDE.

    The solution? Use the same name to indicate the same concept!

    class YouTubeDownloader {    }
    
    class TwitterDownloader {    }
    
    class FacebookDownloader {    }
    

    It’s as simple as that! Just a small change can drastically improve the readability and usability of your code!

    So, consider also this small kind of issue when reviewing PRs.

    Conclusion

    A common dictionary helps to understand the code without misunderstandings. Of course, this tip does not refer only to class names, but to variables too. Avoid using synonyms for objects (eg: video and clip). Instead of synonyms, use more specific names (YouTubeVideo instead of Video).
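    To make that last point concrete, here is a small sketch (the class names are invented for illustration) of the same idea applied to types:

    ```csharp
    // Synonyms hide the relationship between these types:
    // is a Clip different from a Video? Who knows!
    class Video { }
    class Clip { }

    // Specific, coherent names make the distinction explicit
    class YouTubeVideo { }
    class VimeoVideo { }
    ```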

    Any other ideas?

    👉 Let’s discuss it on Twitter or on the comment section below!

    🐧






  • How to log to Console with .NET Core and Serilog | Code4IT


    Serilog is a famous logger for .NET projects. In this article, we will learn how to integrate it in a .NET API project and output the logs on a Console.


    Having meaningful logs is crucial for any application: without logs, we would not be able to see if errors occur, what’s the status of the application, if there are strange behaviors that should worry us, and so on.

    To define a good logging strategy, we need two parts, equally important: adding logs to our code and analyzing the data produced by our logs.

    In this article, we will see how to add Serilog, a popular logger library, to our .NET projects: we will learn how to configure it to print the logs on a Console.

    Why logging on console

    I can guess what you’re thinking:

    why should we write logs on Console? We should store them somewhere, to analyze them!

    And… you’d be right!

    But still, printing logs on Console can be useful in many ways.

    First of all, by printing on Console you can check that the logging is actually working, and you haven’t missed a configuration.

    Then, writing on Console is great when debugging locally: just spin up your application, run the code you need, and check what happened on the logs; in this way you can understand the internal state of the application, which warnings and errors occurred, and more.

    Lastly, because of an odd strategy that I’ve seen implemented in many projects: print the logs on console, add an agent that reads them and stores them in memory, and then send all the logs to the destination platform at once; in this way, you’ll perform fewer HTTP requests against those platforms, making you save money and avoiding reaching the connection limit of the destination platform.

    Now that we have good reasons to log on Console, well… let’s do it!

    Adding Serilog on Program class

    For this article, we will add Serilog logs to a simple .NET API project.

    Create a new API project – you know, the one with the WeatherForecast controller.

    Then, navigate to the Program class: by default, it should look like this:

    public class Program
    {
        public static void Main(string[] args)
        {
            CreateHostBuilder(args).Build().Run();
        }
    
        public static IHostBuilder CreateHostBuilder(string[] args) =>
            Host.CreateDefaultBuilder(args)
            .ConfigureWebHostDefaults(webBuilder =>
            {
                webBuilder.UseStartup<Startup>();
            });
    }
    

    There are no references to any logger, and, of course, to Serilog.

    So the first thing to do is to install it: via NuGet install Serilog.AspNetCore and Serilog.Extensions.Logging. The first one allows you to add Serilog to an ASP.NET project, while the second one allows you to use the native .NET logger in the code with all the capabilities provided by Serilog.

    Then, we need to add the logger to our project:

    public class Program
    {
        public static void Main(string[] args)
        {
    +        Log.Logger = new LoggerConfiguration()
    +                .CreateLogger();
    
            CreateHostBuilder(args).Build().Run();
        }
    
        public static IHostBuilder CreateHostBuilder(string[] args) =>
            Host.CreateDefaultBuilder(args)
    +        .UseSerilog((hostingContext, loggerConfiguration) =>
    +                    loggerConfiguration.ReadFrom.Configuration(hostingContext.Configuration))
                .ConfigureWebHostDefaults(webBuilder =>
                {
                    webBuilder.UseStartup<Startup>();
                });
    }
    

    There are two snippets to understand:

    Log.Logger = new LoggerConfiguration().CreateLogger();
    

    creates a new logger with the specified configurations (in our case, we use the default values), and then assigns the newly created logger to the globally-shared logger Log.Logger.

    Log.Logger lives in the Serilog namespace, so you have to add it to the using list.
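    By the way, the configuration doesn’t have to come from the settings file: you can also define the sinks directly in code. A minimal sketch (it assumes the Serilog.Sinks.Console package is installed):

    ```csharp
    using Serilog;

    Log.Logger = new LoggerConfiguration()
        .MinimumLevel.Debug()   // ignore events below Debug level
        .WriteTo.Console()      // print events to the Console sink
        .CreateLogger();

    Log.Information("Hello from {AppName}!", "SerilogLoggingOnConsole");
    ```

    In this article, though, we will keep the configuration in appsettings.json, so that it can change per environment without recompiling.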

    Then, we have this second part:

    .UseSerilog((hostingContext, loggerConfiguration) =>
            loggerConfiguration.ReadFrom.Configuration(hostingContext.Configuration)
        )
    

    This snippet defines where to get the Serilog configurations (in this case, from the same place used by the hosting context), and then sets Serilog as the logging provider.

    Inject the logger into constructors

    Since we have bound the Serilog logger to the native .NET one – the one coming from Microsoft.Extensions.Logging – we can use the native logger everywhere in the project.

    Add a dependency to ILogger<T> in your constructor, where T is the name of the class itself:

    public class WeatherForecastController : ControllerBase
    {
    
        private readonly ILogger<WeatherForecastController> _logger;
    
        public WeatherForecastController(ILogger<WeatherForecastController> logger)
        {
            _logger = logger;
        }
    }
    

    Now you can use the different levels of logging and Structured Data (see links below) to add more info:

    _logger.LogInformation("Getting random items. There are {AvailableItems} possible values", Summaries.Count());
    
    _logger.LogWarning("This is a warning");
    
    try
    {
        throw new ArgumentException();
    }
    catch (Exception ex)
    {
        _logger.LogError(ex, "And this is an error");
    }
    

    Update the AppSettings file

    But that’s not enough: we haven’t yet said that our logs should be printed on Console. To do that, we must update the appsettings.json file and add some new configurations.

    "Serilog": {
        "Using": [ "Serilog.Sinks.Console" ],
        "MinimumLevel": {
            "Default": "Verbose",
            "Override": {
                "Microsoft": "Warning",
                "Microsoft.AspNetCore": "Warning",
                "System": "Error"
            }
        },
        "WriteTo": [
            {
            "Name": "Async",
            "Args": {
                "configure": [
                {
                    "Name": "Console",
                    "Args": {
                        "formatter": "Serilog.Formatting.Compact.RenderedCompactJsonFormatter, Serilog.Formatting.Compact"
                    }
                }
                ]
            }
            }
        ]
    }
    

    As usual, let’s break it down.

    The first thing to notice is the root of the JSON section: Serilog. This value is the default when defining the configuration values for Serilog (remember the loggerConfiguration.ReadFrom.Configuration(hostingContext.Configuration)? It binds the settings automagically!)

    The Using section defines the types of Sinks that will be used. A Sink is just the destination of the logs. So, just download the Serilog.Sinks.Console NuGet package and add that value to the Using array to use the Console as a Sink.

    Then, we have the MinimumLevel object: it defines the minimum levels of logs that will be taken into consideration. Here the default value is Verbose, but you’ll probably want it to be Warning in your production environment: in this way, all the logs with a level lower than Warning will be ignored.

    Lastly, we have the WriteTo section, which defines the exact configurations of the sinks. Notice the Async value: writing to the Console is a blocking operation, so the Async sink wraps the other sinks and writes the log events on a background thread, without slowing down the application. So, after you’ve installed the Serilog.Sinks.Async NuGet package, you must add the Async value to that object. And then you can configure the different Sinks: here I’m adding a simple JSON Formatter to the Console Sink.

    Run the application

    We’re finally ready to run our application.

    Just run it with the usual IIS profile and… nothing happens! Where is the Console??

    With IIS you cannot see any Console, since it simply does not exist – if the application runs as a web application, we don’t need the Console.

    So, you have to change the running profile and select the name of your application (in my case, SerilogLoggingOnConsole).

    Use the correct running profile

    Then you can run the application, navigate to an endpoint, and see the logs!

    Serilog logs as plain text

    But I don’t like how logs are displayed, too many details!

    Let me add a theme: in the AppSettings file, I can add a theme configuration:

    "Args": {
        "configure": [
        {
            "Name": "Console",
            "Args": {
    +        "theme": "Serilog.Sinks.SystemConsole.Themes.AnsiConsoleTheme::Code, Serilog.Sinks.Console",
            "formatter": "Serilog.Formatting.Compact.RenderedCompactJsonFormatter, Serilog.Formatting.Compact"
            }
        }
        ]
    }
    

    This makes Serilog show the logs with a different shape:

    Serilog logs with a simple theme

    So, just by updating the AppSettings file, you can fine-tune the behavior and the output of the logger. In this way, you can have your Release pipelines update the AppSettings file and define custom properties for every deployment environment.

    Further reading

    If you want to learn more about the different topics discussed in this article:

    🔗 Serilog Structured Data | Code4IT

    🔗 Serilog Console Sink | GitHub

    🔗 How to integrate Serilog and Seq | Code4IT

    Wrapping up

    In this article, we’ve seen how to integrate Serilog in a .NET application to print the logs on the application Console.

    Time to recap the key points:

    • install the Serilog, Serilog.AspNetCore, and Serilog.Extensions.Logging NuGet packages to integrate the basic functionalities of Serilog
    • download the Serilog.Sinks.Console and Serilog.Sinks.Async NuGet packages to use the Console as a destination of your logs
    • update the Program class to specify that the application must use Serilog
    • use ILogger<T> instead of Serilog.ILogger
    • define the settings in the appsettings.json file instead of directly in the code

    Finally, if you want to see the full example, here’s the GitHub repository used for this article

    Happy coding!

    🐧




  • syntax cheat sheet | Code4IT


    Moq and NSubstitute are two of the most used libraries to mock dependencies in your Unit Tests. How do they differ? How can we move from one library to the other?


    When writing Unit Tests, you usually want to mock dependencies. In this way, you can define the behavior of those dependencies, and have full control of the system under test.

    For .NET applications, two of the most used mocking libraries are Moq and NSubstitute. They allow you to create and customize the behavior of the services injected into your classes. Even though they have similar functionalities, their syntax is slightly different.

    In this article, we will learn how the two libraries implement the most used functionalities; in this way, you can easily move from one to another if needed.

    A real-ish example

    As usual, let’s use a real example.

    For this article, I’ve created a dummy class, StringsWorker, that does nothing but call another service, IStringUtility.

    public class StringsWorker
    {
        private readonly IStringUtility _stringUtility;
    
        public StringsWorker(IStringUtility stringUtility)
            => _stringUtility = stringUtility;
    
        public string[] TransformArray(string[] items)
            => _stringUtility.TransformAll(items);
    
        public string[] TransformSingleItems(string[] items)
            => items.Select(i => _stringUtility.Transform(i)).ToArray();
    
        public string TransformString(string originalString)
            => _stringUtility.Transform(originalString);
    }
    

    To test the StringsWorker class, we will mock its only dependency, IStringUtility. This means that we won’t use a concrete class that implements IStringUtility, but rather we will use Moq and NSubstitute to mock it, defining its behavior and simulating real method calls.

    Of course, to use the two libraries, you have to install them in each test project.

    How to define mocked dependencies

    The first thing to do is to instantiate a new mock.

    With Moq, you create a new instance of Mock<IStringUtility>, and then inject its Object property into the StringsWorker constructor:

    private Mock<IStringUtility> moqMock;
    private StringsWorker sut;
    
    public MoqTests()
    {
        moqMock = new Mock<IStringUtility>();
        sut = new StringsWorker(moqMock.Object);
    }
    

    With NSubstitute, instead, you declare it with Substitute.For<IStringUtility>() – which returns an IStringUtility, not wrapped in any class – and then you inject it into the StringsWorker constructor:

    private IStringUtility nSubsMock;
    private StringsWorker sut;
    
    public NSubstituteTests()
    {
        nSubsMock = Substitute.For<IStringUtility>();
        sut = new StringsWorker(nSubsMock);
    }
    

    Now we can customize moqMock and nSubsMock to add behaviors and verify the calls to those dependencies.

    Define method result for a specific input value: the Returns() method

    Say that we want to customize our dependency so that, every time we pass “ciao” as a parameter to the Transform method, it returns “hello”.

    With Moq we use a combination of Setup and Returns.

    moqMock.Setup(_ => _.Transform("ciao")).Returns("hello");
    

    With NSubstitute we don’t use Setup, but we directly call Returns.

    nSubsMock.Transform("ciao").Returns("hello");
    

    Define method result regardless of the input value: It.IsAny() vs Arg.Any()

    Now we don’t care about the actual value passed to the Transform method: we want that, regardless of its value, the method always returns “hello”.

    With Moq, we use It.IsAny<T>() and specify the type of T:

    moqMock.Setup(_ => _.Transform(It.IsAny<string>())).Returns("hello");
    

    With NSubstitute, we use Arg.Any<T>():

    nSubsMock.Transform(Arg.Any<string>()).Returns("hello");
    

    Define method result based on a filter on the input: It.Is() vs Arg.Is()

    Say that we want to return a specific result only when a condition on the input parameter is met.

    For example, every time we pass a string that starts with “IT” to the Transform method, it must return “ciao”.

    With Moq, we use It.Is<T>(func) and we pass an expression as an input.

    moqMock.Setup(_ => _.Transform(It.Is<string>(s => s.StartsWith("IT")))).Returns("ciao");
    

    Similarly, with NSubstitute, we use Arg.Is<T>(func).

    nSubsMock.Transform(Arg.Is<string>(s => s.StartsWith("IT"))).Returns("ciao");
    

    Small trivia: for NSubstitute, the filter is of type Expression<Predicate<T>>, while for Moq it is of type Expression<Func<TValue, bool>>: don’t worry, you can write them in the same way!

    Throwing exceptions

    Since you should test not only the happy paths but also those where an error occurs, you should write tests in which the injected service throws an exception, and verify that the exception is handled correctly.

    With both libraries, you can throw a generic exception by specifying its type:

    //Moq
    moqMock.Setup(_ => _.TransformAll(null)).Throws<ArgumentException>();
    
    //NSubstitute
    nSubsMock.TransformAll(null).Throws<ArgumentException>();
    

    You can also throw a specific exception instance – maybe because you want to add an error message:

    var myException = new ArgumentException("My message");
    
    //Moq
    moqMock.Setup(_ => _.TransformAll(null)).Throws(myException);
    
    //NSubstitute
    nSubsMock.TransformAll(null).Throws(myException);
    

    If you don’t want to handle that exception, but you want to propagate it up, you can verify it in this way:

    Assert.Throws<ArgumentException>(() => sut.TransformArray(null));
    

    Verify received calls: Verify() vs Received()

    Sometimes, to understand if the code follows the execution paths as expected, you might want to verify that a method has been called with some parameters.

    To verify it, you can use the Verify method on Moq.

    moqMock.Verify(_ => _.Transform("hello"));
    

    Or, if you use NSubstitute, you can use the Received method.

    nSubsMock.Received().Transform("hello");
    

    Similar as we’ve seen before, you can use It.IsAny, It.Is, Arg.Any and Arg.Is to verify some properties of the parameters passed as input.
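    For instance, here is a sketch of both flavors, reusing the StringsWorker setup from above, to verify that Transform was called with a string starting with “IT”:

    ```csharp
    // Moq: the matcher goes inside the Verify expression
    moqMock.Verify(_ => _.Transform(It.Is<string>(s => s.StartsWith("IT"))));

    // NSubstitute: the matcher goes in the Received call's arguments
    nSubsMock.Received().Transform(Arg.Is<string>(s => s.StartsWith("IT")));
    ```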

    Verify the exact count of received calls

    Other times, you might want to verify that a method has been called exactly N times.

    With Moq, you can add a parameter to the Verify method:

    sut.TransformSingleItems(new string[] { "a", "b", "c" });
    
    moqMock.Verify(_ => _.Transform(It.IsAny<string>()), Times.Exactly(3));
    

    Note that you can pass different values for that parameter, like Times.Exactly, Times.Never, Times.Once, Times.AtLeast, and so on.

    With NSubstitute, on the contrary, you can only specify the exact number of calls, passed as a parameter to the Received method.

    sut.TransformSingleItems(new string[] { "a", "b", "c" });
    
    nSubsMock.Received(3).Transform(Arg.Any<string>());
    

    Reset received calls

    As you remember, the mocked dependencies are instantiated within the constructor, so every test method uses the same instance. This may cause some trouble, especially when checking how many calls the dependencies have received (because the count of received calls accumulates across all the test methods run before). Therefore, we need to reset the count of the received calls.

    In NUnit, you can define a method that runs before each test method by decorating it with the SetUp attribute:

    [SetUp]
    public void Setup()
    {
      // reset count
    }
    

    Here we can reset the number of recorded method invocations on the dependencies and make sure that our test methods always use clean instances.

    With Moq, you can use Invocations.Clear():

    [SetUp]
    public void Setup()
    {
        moqMock.Invocations.Clear();
    }
    

    While, with NSubstitute, you can use ClearReceivedCalls():

    [SetUp]
    public void Setup()
    {
        nSubsMock.ClearReceivedCalls();
    }
    

    Further reading

    As always, the best way to learn what a library can do is head to its documentation. So, here you can find the links to Moq and NSubstitute docs.

    🔗 Moq documentation | GitHub

    🔗 NSubstitute documentation | NSubstitute

    If you already use Moq but you are having trouble testing and configuring IHttpClientFactory instances, I’ve got you covered:

    🔗 How to test HttpClientFactory with Moq | Code4IT

    Finally, if you want to see the complete code of this article, you can find it on GitHub; I’ve written the exact same tests with both libraries so that you can compare them more easily.

    🔗 GitHub repository for the code used in this article | GitHub

    Conclusion

    In this article, we’ve seen how Moq and NSubstitute allow us to perform some basic operations when writing unit tests with C#. They are similar, but each of them has a specific set of functionalities that the other is missing – or, at least, that I haven’t found in both.

    Which library do you use, Moq or NSubstitute? Or maybe, another one?

    Happy coding!
    🐧




  • Don’t use too many method arguments | Code4IT

    Don’t use too many method arguments | Code4IT



    Many times, we tend to add too many parameters to a function. But that’s not the best idea: on the contrary, when a function requires too many arguments, grouping them into coherent objects helps you write simpler code.

    Why? How can we do it? What are the main issues with having too many params? Have a look at the following snippet:

    void SendPackage(
        string name,
        string lastname,
        string city,
        string country,
        string packageId
        ) { }
    

    If you need another field about the address or the person, you will need to add a new parameter and update every caller to match the new signature.

    What if we added a State argument? Is this part of the address (state = “Italy”) or something related to the package (state = Damaged)?

    Storing this field in the correct object helps clarify its meaning.

    void SendPackage(Person person, string packageId) { }
    
    class Person {
        public string Name { get; set; }
        public string LastName { get; set; }
        public Address Address {get; set;}
    }
    
    class Address {
        public string City { get; set; }
        public string Country { get; set; }
    }
    

    Another reason to avoid using lots of parameters? To avoid merge conflicts.

    Say that two devs, Alice and Bob, are working on functionalities that impact the SendPackage method. Alice, on her branch, adds a new parameter, bool withPriority. In the meantime, Bob, on his branch, adds bool applyDiscount. Then both Alice and Bob merge their branches into the main one. What’s the result? A conflict, of course: the method now has two boolean parameters, and the order in which they end up in the final signature may cause trouble. Even worse, every call to the SendPackage method now has one (or two) new parameters whose value depends on the context. So, after the merge, the value Bob intended for applyDiscount might end up being passed where Alice’s withPriority belongs.
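    The Alice-and-Bob scenario above can be sketched in code. This is a hypothetical example (the ShippingOptions type and the string output are made up for illustration): two adjacent booleans are easy to swap silently, while an options object names every flag at the call site.

```csharp
using System;

public class Person
{
    public string Name { get; set; }
}

// Hypothetical options object: grouping the two flags that Alice and Bob
// each added makes every call site self-describing.
public class ShippingOptions
{
    public bool WithPriority { get; set; }
    public bool ApplyDiscount { get; set; }
}

public class Shipping
{
    // After the merge: two adjacent booleans that a caller can silently swap.
    public static string SendPackage(Person person, string packageId, bool withPriority, bool applyDiscount)
        => $"{packageId}: priority={withPriority}, discount={applyDiscount}";

    // With an options object, each flag is named where it is set.
    public static string SendPackage(Person person, string packageId, ShippingOptions options)
        => $"{packageId}: priority={options.WithPriority}, discount={options.ApplyDiscount}";
}

public class Program
{
    public static void Main()
    {
        var person = new Person { Name = "Alice" };

        // Meant "priority, no discount", but the flags are swapped: it still compiles.
        Console.WriteLine(Shipping.SendPackage(person, "PKG-1", false, true));

        // Here a swap is impossible: every value is labeled.
        Console.WriteLine(Shipping.SendPackage(person, "PKG-1",
            new ShippingOptions { WithPriority = true, ApplyDiscount = false }));
    }
}
```

    The compiler accepts both calls in the first style, which is exactly why this bug survives a merge unnoticed.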

    Conclusion

    To recap, why do we need to reduce the number of parameters?

    • to give context and meaning to those parameters
    • to avoid errors for positional parameters
    • to avoid merge conflicts

    👉 Let’s discuss it on Twitter or on the comment section below!

    🐧






  • Use a SortedSet to avoid duplicates and sort items | Code4IT

    Use a SortedSet to avoid duplicates and sort items | Code4IT


    Using the right data structure is crucial to building robust and efficient applications. So, why use a List or a HashSet to sort items (and remove duplicates) when you have a SortedSet?


    As you probably know, you can create collections of items without duplicates by using a HashSet<T> object.

    It is quite useful to remove duplicates from a list of items of the same type.

    How can we ensure that we always have sorted items? The answer is simple: SortedSet<T>!

    HashSet: a collection without duplicates

    A simple HashSet creates a collection of unordered items without duplicates.

    This example

    var hashSet = new HashSet<string>();
    hashSet.Add("Turin");
    hashSet.Add("Naples");
    hashSet.Add("Rome");
    hashSet.Add("Bari");
    hashSet.Add("Rome");
    hashSet.Add("Turin");
    
    
    var resultHashSet = string.Join(',', hashSet);
    Console.WriteLine(resultHashSet);
    

    prints this string: Turin,Naples,Rome,Bari. Here the insertion order happens to be preserved, but keep in mind that HashSet<T> does not guarantee any particular ordering.

    SortedSet: a sorted collection without duplicates

    To sort those items, we have two approaches.

    You can simply sort the collection once you’ve finished adding items:

    var hashSet = new HashSet<string>();
    hashSet.Add("Turin");
    hashSet.Add("Naples");
    hashSet.Add("Rome");
    hashSet.Add("Bari");
    hashSet.Add("Rome");
    hashSet.Add("Turin");
    
    var items = hashSet.OrderBy(s => s);
    
    
    var resultHashSet = string.Join(',', items);
    Console.WriteLine(resultHashSet);
    

    Or, even better, use the right data structure: a SortedSet<T>

    var sortedSet = new SortedSet<string>();
    
    sortedSet.Add("Turin");
    sortedSet.Add("Naples");
    sortedSet.Add("Rome");
    sortedSet.Add("Bari");
    sortedSet.Add("Rome");
    sortedSet.Add("Turin");
    
    
    var resultSortedSet = string.Join(',', sortedSet);
    Console.WriteLine(resultSortedSet);
    

    Both results print Bari,Naples,Rome,Turin. But the second approach does not require you to sort a whole list: it is more efficient in terms of both time and memory.
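    The sorted structure pays off in other ways, too. A quick sketch, using only the standard SortedSet<T> API: since the items are kept ordered on insertion, you get the minimum, the maximum, and ordered range views essentially for free.

```csharp
using System;
using System.Collections.Generic;

public class Program
{
    public static void Main()
    {
        // Items are kept sorted as they are inserted: no extra sorting pass needed.
        var sortedSet = new SortedSet<string> { "Turin", "Naples", "Rome", "Bari" };
        Console.WriteLine(string.Join(",", sortedSet)); // Bari,Naples,Rome,Turin

        // The ordered structure also gives you cheap ordered queries:
        Console.WriteLine(sortedSet.Min); // Bari
        Console.WriteLine(sortedSet.Max); // Turin

        // GetViewBetween returns a live, ordered view of a range of items:
        Console.WriteLine(string.Join(",", sortedSet.GetViewBetween("C", "S"))); // Naples,Rome
    }
}
```

    With a plain HashSet you would have to sort the whole collection before answering any of these queries.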

    Use custom sorting rules

    What if we wanted to use a SortedSet with a custom object, like User?

    public class User {
        public string FirstName { get; set; }
        public string LastName { get; set; }
    
        public User(string firstName, string lastName)
        {
            FirstName = firstName;
            LastName = lastName;
        }
    }
    

    Of course, we can do that:

    var set = new SortedSet<User>();
    
    set.Add(new User("Davide", "Bellone"));
    set.Add(new User("Scott", "Hanselman"));
    set.Add(new User("Safia", "Abdalla"));
    set.Add(new User("David", "Fowler"));
    set.Add(new User("Maria", "Naggaga"));
    set.Add(new User("Davide", "Bellone"));//DUPLICATE!
    
    foreach (var user in set)
    {
        Console.WriteLine($"{user.LastName} {user.FirstName}");
    }
    

    But we will get a runtime error: our class doesn’t know how to compare two User instances!

    That’s why we must update our User class so that it implements the IComparable interface:

    public class User : IComparable
    {
        public string FirstName { get; set; }
        public string LastName { get; set; }
    
        public User(string firstName, string lastName)
        {
            FirstName = firstName;
            LastName = lastName;
        }
    
        public int CompareTo(object obj)
        {
            var other = (User)obj;
            var lastNameComparison = LastName.CompareTo(other.LastName);
    
            return (lastNameComparison != 0)
                ? lastNameComparison :
                (FirstName.CompareTo(other.FirstName));
        }
    }
    

    In this way, everything works as expected:

    Abdalla Safia
    Bellone Davide
    Fowler David
    Hanselman Scott
    Naggaga Maria
    

    Notice that the second Davide Bellone has disappeared since it was a duplicate.
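    As an alternative sketch: if you can’t (or don’t want to) modify the User class itself, SortedSet<T> also accepts an IComparer<T> in its constructor, keeping the comparison logic separate from the model. The UserByNameComparer name here is made up for the example.

```csharp
using System;
using System.Collections.Generic;

public class User
{
    public string FirstName { get; set; }
    public string LastName { get; set; }

    public User(string firstName, string lastName)
    {
        FirstName = firstName;
        LastName = lastName;
    }
}

// The sorting rule lives outside the User class, so different collections
// can sort the same type in different ways.
public class UserByNameComparer : IComparer<User>
{
    public int Compare(User x, User y)
    {
        var byLastName = string.CompareOrdinal(x.LastName, y.LastName);
        return byLastName != 0
            ? byLastName
            : string.CompareOrdinal(x.FirstName, y.FirstName);
    }
}

public class Program
{
    public static void Main()
    {
        var set = new SortedSet<User>(new UserByNameComparer());
        set.Add(new User("Davide", "Bellone"));
        set.Add(new User("Scott", "Hanselman"));
        set.Add(new User("Davide", "Bellone")); // duplicate, ignored

        foreach (var user in set)
            Console.WriteLine($"{user.LastName} {user.FirstName}");
        // Bellone Davide
        // Hanselman Scott
    }
}
```

    This is handy when the type comes from a library you can’t change, or when different parts of the code need different orderings.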

    This article first appeared on Code4IT

    Wrapping up

    Choosing the right data type is crucial for building robust and performant applications.

    In this article, we’ve used a SortedSet to insert items in a collection and expect them to be sorted and without duplicates.

    I’ve never used it in a project. So, how did I know that? I just explored the libraries I was using!

    From time to time, spend some minutes reading the documentation, have a glimpse of the most common libraries, and so on: you’ll find lots of stuff that you’ve never thought existed!

    Toy with your code! Explore it. Be curious.

    And have fun!

    🐧




  • How to parse JSON Lines (JSONL) with C# | Code4IT


    JSONL is JSON’s less famous sibling: it allows you to store JSON objects by separating them with a new line. We will learn how to parse a JSONL string with C#.


    For sure, you already know JSON: it’s one of the most commonly used formats to share data as text.

    Did you know that there are different flavors of JSON? One of them is JSONL: it represents a JSON document where the items sit on separate lines instead of being wrapped in an array.

    It’s quite a rare format to find, so it can be tricky to understand how it works and how to parse it. In this article, we will learn how to parse a JSONL file with C#.

    Introducing JSONL

    As explained in the JSON Lines documentation, a JSONL file is a file composed of different items separated by a \n character.

    So, instead of having

    [{ "name": "Davide" }, { "name": "Emma" }]
    

    you have a list of items without an array grouping them.

    { "name" : "Davide" }
    { "name" : "Emma" }
    

    I must admit that I’d never heard of that format until a few months ago. Or, better: I had already used JSONL files without knowing it. JSONL is a common format for logs, where every entry is appended to the file in a continuous stream.

    Also, JSONL has some characteristics:

    • every item is a valid JSON item
    • every line is separated by a \n character (or by \r\n, but \r is ignored)
    • it is encoded using UTF-8
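    The characteristics above are what make JSONL so append-friendly for logs: every line is a complete JSON object, so adding an entry never requires touching the existing content. A minimal sketch using the built-in System.Text.Json (the file name, the JsonlLogger type, and the entry shape are made up for the example):

```csharp
using System;
using System.IO;
using System.Text.Json;

public class JsonlLogger
{
    // Append one entry: serialize the object and add a newline.
    // No need to read, re-parse, or rewrite the rest of the file.
    public static void Append(string path, object entry)
        => File.AppendAllText(path, JsonSerializer.Serialize(entry) + "\n");
}

public class Program
{
    public static void Main()
    {
        string path = Path.Combine(Path.GetTempPath(), "app-log.jsonl");
        File.Delete(path); // start from a clean file for the demo

        JsonlLogger.Append(path, new { level = "info", message = "started" });
        JsonlLogger.Append(path, new { level = "info", message = "done" });

        Console.WriteLine(File.ReadAllLines(path).Length); // 2
    }
}
```

    Compare this with a classic JSON array: to append an item there you would have to parse the document, add the item, and rewrite the whole file.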

    So, now, it’s time to parse it!

    Parsing the file

    Say that you’re creating a videogame, and you want to read all the items found by your character:

    class Item {
        public int Id { get; set; }
        public string Name { get; set; }
        public string Category { get; set; }
    }
    

    The items list can be stored in a JSONL file, like this:

    {  "id": 1,  "name": "dynamite",  "category": "weapon" }
    {  "id": 2,  "name": "ham",  "category": "food" }
    {  "id": 3,  "name": "nail",  "category": "tool" }
    

    Now, all we have to do is to read the file and parse it.

    Assuming that we’ve read the content from a file and that we’ve stored it in a string called content, we can use Newtonsoft to parse those lines.

    As usual, let’s see how to parse the file, and then we’ll deep dive into what’s going on. (Note: the following snippet comes from this question on Stack Overflow)

    List<Item> items = new List<Item>();
    
    var jsonReader = new JsonTextReader(new StringReader(content))
    {
        SupportMultipleContent = true // This!!!
    };
    
    var jsonSerializer = new JsonSerializer();
    while (jsonReader.Read())
    {
        Item item = jsonSerializer.Deserialize<Item>(jsonReader);
        items.Add(item);
    }
    return items;
    

    Let’s break it down:

    var jsonReader = new JsonTextReader(new StringReader(content))
    {
        SupportMultipleContent = true // This!!!
    };
    

    The first thing to do is to create an instance of JsonTextReader, a class coming from the Newtonsoft.Json namespace. The constructor accepts a TextReader instance or any derived class. So we can use a StringReader instance that represents a stream from a specified string.

    The key part of this snippet (and, somehow, of the whole article) is the SupportMultipleContent property: when set to true, it allows the JsonTextReader to keep reading multiple JSON fragments from the same stream.

    Its definition, in fact, says that:

    //
    // Summary:
    //     Gets or sets a value indicating whether multiple pieces of JSON content can be
    //     read from a continuous stream without erroring.
    //
    // Value:
    //     true to support reading multiple pieces of JSON content; otherwise false. The
    //     default is false.
    public bool SupportMultipleContent { get; set; }
    

    Finally, we can read the content:

    var jsonSerializer = new JsonSerializer();
    while (jsonReader.Read())
    {
        Item item = jsonSerializer.Deserialize<Item>(jsonReader);
        items.Add(item);
    }
    

    Here we create a new JsonSerializer (again, coming from Newtonsoft), and use it to read one item at a time.

    The while (jsonReader.Read()) allows us to read the stream till the end. And, to parse each item found on the stream, we use jsonSerializer.Deserialize<Item>(jsonReader);.

    The Deserialize method is smart enough to parse every item even without a , symbol separating them, because we have set SupportMultipleContent to true.

    Once we have the Item object, we can do whatever we want, like adding it to a list.
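    As a side note: since JSONL guarantees one complete object per line, a hedged alternative without Newtonsoft is to split on newlines and deserialize each line with the built-in System.Text.Json. This is a sketch that assumes well-formed one-object-per-line input (unlike SupportMultipleContent, it won’t handle objects spanning multiple lines):

```csharp
using System;
using System.Collections.Generic;
using System.Text.Json;

public class Item
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Category { get; set; }
}

public class JsonLines
{
    public static List<Item> Parse(string content)
    {
        var items = new List<Item>();
        var options = new JsonSerializerOptions { PropertyNameCaseInsensitive = true };

        // One complete JSON object per line: split and deserialize each line.
        // A trailing '\r' is harmless, since CR counts as JSON whitespace.
        foreach (var line in content.Split('\n', StringSplitOptions.RemoveEmptyEntries))
            items.Add(JsonSerializer.Deserialize<Item>(line, options));

        return items;
    }
}

public class Program
{
    public static void Main()
    {
        string content = "{ \"id\": 1, \"name\": \"dynamite\", \"category\": \"weapon\" }\n"
                       + "{ \"id\": 2, \"name\": \"ham\", \"category\": \"food\" }";

        foreach (var item in JsonLines.Parse(content))
            Console.WriteLine($"{item.Id} {item.Name}");
        // 1 dynamite
        // 2 ham
    }
}
```

    For huge files you’d read line by line with a StreamReader instead of loading the whole content into a string, but the per-line deserialization stays the same.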

    Further readings

    As we’ve learned, there are different flavors of JSON. You can read an overview of them on Wikipedia.

    🔗 JSON Lines introduction | Wikipedia

    Of course, the best place to learn more about a format is its official documentation.

    🔗 JSON Lines documentation | Jsonlines

    This article exists thanks to Imran Qadir Baksh’s question on Stack Overflow, and, of course, to Yuval Itzchakov’s answer.

    🔗 Line delimited JSON serializing and de-serializing | Stack Overflow

    Since we’ve used Newtonsoft (aka: JSON.NET), you might want to have a look at its website.

    🔗SupportMultipleContent property | Newtonsoft

    Finally, here’s the repository used for this article.

    🔗 JsonLinesReader repository | GitHub

    Conclusion

    You might be thinking:

    Why has Davide written an article about a comment on Stack Overflow?? I could have just read the same info there!

    Well, if you were interested only in the main snippet, you would’ve been right!

    But this article exists for two main reasons.

    First, I wanted to highlight that JSON is not always the best choice for everything: it always depends on what we need. For continuous streams of items, JSONL is a good (if not the best) choice. Don’t choose the most used format: choose what best fits your needs!

    Second, I wanted to remark that we should not be too attached to a specific library: I generally prefer native tools, so, for reading JSON files, my first choice is System.Text.Json. But it’s not always the best choice. Yes, we could write some complex workaround (like the second answer on Stack Overflow), but… is it worth it? Sometimes it’s better to use another library, even if just for one specific task. So, you could use System.Text.Json for the whole project, except for the part where you need to read a JSONL file.

    Have you ever met some unusual formats? How did you deal with them?

    Happy coding!

    🐧




  • Keep the parameters in a consistent order | Code4IT

    Keep the parameters in a consistent order | Code4IT



    If you have a set of related functions, always use a consistent order of parameters.

    Take this bad example:

    IEnumerable<Section> GetSections(Context context);
    
    void AddSectionToContext(Context context, Section newSection);
    
    void AddSectionsToContext(IEnumerable<Section> newSections, Context context);
    

    Notice the order of the parameters passed to AddSectionToContext and AddSectionsToContext: they are swapped!

    Quite confusing, isn’t it?

    Confusion intensifies

    For sure, the code is harder to understand, since the order of the parameters is not what the reader expects it to be.

    But, even worse, this issue may lead to hard-to-find bugs, especially when parameters are of the same type.

    Think of this example:

    IEnumerable<Item> GetPhotos(string type, string country);
    
    IEnumerable<Item> GetVideos(string country, string type);
    

    Well, what could possibly go wrong?!?

    We have two ways to prevent possible issues:

    1. use a consistent order: for instance, type is always the first parameter
    2. pass objects instead: you’ll add a bit more code, but you’ll prevent those issues
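    A quick sketch of option 2 (the MediaQuery name and the string output are made up for the example): with a criteria object, the caller names each value, so there is nothing positional left to swap.

```csharp
using System;

// Hypothetical criteria object: replaces the two ambiguous string parameters.
public class MediaQuery
{
    public string Type { get; set; }
    public string Country { get; set; }
}

public class MediaCatalog
{
    public static string Describe(MediaQuery query)
        => $"{query.Type} from {query.Country}";
}

public class Program
{
    public static void Main()
    {
        // Both call sites read the same way, whatever order the properties are set in:
        Console.WriteLine(MediaCatalog.Describe(new MediaQuery { Type = "photo", Country = "IT" }));
        Console.WriteLine(MediaCatalog.Describe(new MediaQuery { Country = "IT", Type = "video" }));
        // photo from IT
        // video from IT
    }
}
```

    The cost is a small extra class; the gain is that a GetPhotos/GetVideos pair can never disagree on parameter order again.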

    To read more about this code smell, check out this article by Maxi Contieri!

    This article first appeared on Code4IT

    Conclusion

    To recap, always pay attention to the order of the parameters!

    • always keep them in the same order
    • use easy-to-understand order (remember the Principle of Least Surprise?)
    • use objects instead, if necessary.

    👉 Let’s discuss it on Twitter or in the comment section below!

    🐧






  • Profiling .NET code with MiniProfiler | Code4IT


    Is your application slow? How to find bottlenecks? If so, you can use MiniProfiler to profile a .NET API application and analyze the timings of the different operations.


    Sometimes your project does not perform as well as you would expect. Bottlenecks occur, and it can be hard to understand where and why.

    So, the best thing you can do is profile your code and analyze the execution times to understand which parts impact your application’s performance the most.

    In this article, we will learn how to use Miniprofiler to profile code in a .NET 5 API project.

    Setting up the project

    For this article, I’ve created a simple project. This project tells you the average temperature of a place by specifying the country code (eg: IT), and the postal code (eg: 10121, for Turin).

    There is only one endpoint, /Weather, that accepts the CountryCode and the PostalCode as input and returns the temperature in Celsius.

    To retrieve the data, the application calls two external free services: Zippopotam to get the current coordinates, and OpenMeteo to get the daily temperature using those coordinates.

    Sequence diagram

    Let’s see how to profile the code to see the timings of every operation.

    Installing MiniProfiler

    As usual, we need to install a NuGet package: since we are working on a .NET 5 API project, you can install the MiniProfiler.AspNetCore.Mvc package, and you’re good to go.

    MiniProfiler provides tons of packages you can use to profile your code: for example, you can profile Entity Framework, Redis, PostgreSql, and more.

    MiniProfiler packages on NuGet

    Once you’ve installed it, you can add it to your project by updating the Startup class.

    In the Configure method, you can simply add MiniProfiler to the ASP.NET pipeline.
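    The original snippet for Configure is not shown here; a minimal sketch of that registration, assuming the standard .NET 5 Startup shape, could look like this (app.UseMiniProfiler() is the real MiniProfiler.AspNetCore middleware call; the other middlewares are the usual defaults):

```csharp
public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    // Register MiniProfiler early, so it can time the whole request handling.
    app.UseMiniProfiler();

    app.UseRouting();
    app.UseEndpoints(endpoints => endpoints.MapControllers());
}
```

    Placing it before the routing and endpoint middlewares means the timings cover everything downstream of it.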

    Then, you’ll need to configure it in the ConfigureServices method:

    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMiniProfiler(options =>
            {
                options.RouteBasePath = "/profiler";
                options.ColorScheme = StackExchange.Profiling.ColorScheme.Dark;
            });
    
        services.AddControllers();
        // more...
    }
    

    As you might expect, the core of this method is AddMiniProfiler. It allows you to set MiniProfiler up by configuring an object of type MiniProfilerOptions. There are lots of things you can configure, as you can see on GitHub.

    For this example, I’ve updated the color scheme to use Dark Mode, and I’ve defined the base path of the page that shows the results. The default is mini-profiler-resources, so the results would be available at /mini-profiler-resources/results. With this setting, the result is available at /profiler/results.

    Defining traces

    Time to define our traces!

    When you fire up the application, a MiniProfiler object is created and shared across the project. This object exposes several methods. The most used is Step: it allows you to define a portion of code to profile, by wrapping it into a using block.

    using (MiniProfiler.Current.Step("Getting lat-lng info"))
    {
        (latitude, longitude) = await _locationService.GetLatLng(countryCode, postalCode);
    }
    

    The snippet above defines a step, giving it a name (“Getting lat-lng info”), and profiles everything that happens within those lines of code.

    You can also use nested steps by simply adding a parent step:

    using (MiniProfiler.Current.Step("Get temperature for specified location"))
    {
        using (MiniProfiler.Current.Step("Getting lat-lng info"))
        {
            (latitude, longitude) = await _locationService.GetLatLng(countryCode, postalCode);
        }
    
        using (MiniProfiler.Current.Step("Getting temperature info"))
        {
            temperature = await _weatherService.GetTemperature(latitude, longitude);
        }
    }
    

    In this way, you can create a better structure of traces and perform better analyses. Of course, this method doesn’t know what happens inside the GetLatLng method. If there’s another Step, it will be taken into consideration too.

    You can also use inline steps to trace an operation and return its value on the same line:

    var response = await MiniProfiler.Current.Inline(() => httpClient.GetAsync(fullUrl), "Http call to OpenMeteo");
    

    Inline traces the operation and returns the return value from that method. Notice that it works even for async methods! 🤩

    Viewing the result

    Now that we have everything in place, we can run our application.

    To get better data, you should run the application in a specific way.

    First of all, use the RELEASE configuration. You can change it in the project properties, heading to the Build tab:

    Visual Studio tab for choosing the build configuration

    Then, you should run the application without the debugger attached. You can simply hit Ctrl+F5, or head to the Debug menu and click Start Without Debugging.

    Visual Studio menu to run the application without debugger

    Now, run the application and call the endpoint. Once you’ve got the result, you can navigate to the report page.

    Remember the options.RouteBasePath = "/profiler" option? It’s the one that specifies the path to this page.

    If you head to /profiler/results, you will see a page similar to this one:

    MiniProfiler results

    On the left column, you can see the hierarchy of the messages we’ve defined in the code. On the right column, you can see the timings for each operation.

    Association of every MiniProfiler call to the related result

    Did you notice the Show trivial button in the bottom-right corner of the report? It displays the operations that took so little time that they can easily be ignored. By clicking on that button, you’ll see many more things, such as all the operations that the .NET engine performs to handle your HTTP requests, like the Action Filters.

    Trivial operations on MiniProfiler

    Lastly, the More columns button shows, well… more columns! You will see the aggregate timing (the operation + all its children), and the timing from the beginning of the request.

    More Columns showed on MiniProfiler

    The mystery of x-miniprofiler-ids

    Now, there’s one particular thing about MiniProfiler that I haven’t understood: the meaning of x-miniprofiler-ids.

    This value is an array of IDs, one for every time we’ve profiled something with MiniProfiler during this session.

    You can find this array in the HTTP response headers:

    x-miniprofiler-ids HTTP header

    I noticed that every time you perform a call to that endpoint, it adds some values to this array.

    My question is: so what? What can we do with those IDs? Can we use them to filter data, or to see the results in some particular ways?

    If you know how to use those IDs, please drop a message in the comments section 👇

    If you want to run this project and play with MiniProfiler, I’ve shared this project on GitHub.

    🔗 ProfilingWithMiniprofiler repository | GitHub

    In this project, I’ve used Zippopotam to retrieve latitude and longitude given a location.

    🔗 Zippopotam

    Once I retrieved the coordinates, I used Open Meteo to get the weather info for that position.

    🔗 Open Meteo documentation | OpenMeteo

    And then, obviously, I used MiniProfiler to profile my code.

    🔗 MiniProfiler repository | GitHub

    I’ve already used MiniProfiler to analyze the performance of an application, and thanks to this library I was able to improve the response time from 14 seconds (yes, seconds!) to less than 3. I’ve explained all the steps in 2 articles.

    🔗 How I improved the performance of an endpoint by 82% – part 1 | Code4IT

    🔗 How I improved the performance of an endpoint by 82% – part 2 | Code4IT

    Wrapping up

    In this article, we’ve seen how we can profile .NET applications using MiniProfiler.

    This NuGet Package works for almost every version of .NET, from the dear old .NET Framework to the most recent one, .NET 6.

    A suggestion: configure it so that you can turn it off easily, maybe via an environment variable. This way, you can disable tracing when it’s no longer required and speed up the application.
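    A hedged sketch of that idea (the ENABLE_PROFILING variable name is made up for the example): register MiniProfiler only when an environment variable enables it, and remember to guard the corresponding app.UseMiniProfiler() call in Configure with the same check.

```csharp
public void ConfigureServices(IServiceCollection services)
{
    // Hypothetical toggle: profiling is registered only when explicitly enabled,
    // so production traffic pays no profiling cost by default.
    if (Environment.GetEnvironmentVariable("ENABLE_PROFILING") == "true")
    {
        services.AddMiniProfiler(options =>
        {
            options.RouteBasePath = "/profiler";
        });
    }

    services.AddControllers();
}
```

    An alternative is to read the flag from appsettings.json via IConfiguration, which lets you flip it per environment without redeploying code.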

    Ever used it? Any alternative tools?

    And, most of all, what the f**k is that x-miniprofiler-ids array??😶

    Happy coding!

    🐧


