Category: Temporary Data Storage

  • Avoid using too many Imports in your classes | Code4IT




Actually, this article is not about a tip for writing cleaner code; rather, it aims to point out a code smell.

Of course, once you find this code smell in your code, you can act to eliminate it and, as a consequence, end up with cleaner code.

The code smell is easy to identify: open your classes and have a look at the list of imports (in C#, the using statements at the top of the file).

    A real example of too many imports

    Here’s a real-life example (I censored the names, of course):

    using MyCompany.CMS.Data;
    using MyCompany.CMS.Modules;
    using MyCompany.CMS.Rendering;
    using MyCompany.Witch.Distribution;
    using MyCompany.Witch.Distribution.Elements;
    using MyCompany.Witch.Distribution.Entities;
    using Microsoft.Extensions.Logging;
    using Newtonsoft.Json;
    using Newtonsoft.Json.Serialization;
    using MyProject.Controllers.VideoPlayer.v1.DataSource;
    using MyProject.Controllers.VideoPlayer.v1.Vod;
    using MyProject.Core;
    using MyProject.Helpers.Common;
    using MyProject.Helpers.DataExplorer;
    using MyProject.Helpers.Entities;
    using MyProject.Helpers.Extensions;
    using MyProject.Helpers.Metadata;
    using MyProject.Helpers.Roofline;
    using MyProject.ModelsEntities;
    using MyProject.Models.ViewEntities.Tags;
    using MyProject.Modules.EditorialDetail.Core;
    using MyProject.Modules.VideoPlayer.Models;
    using System;
    using System.Collections.Generic;
    using System.ComponentModel;
    using System.Linq;
    
    namespace MyProject.Modules.Video
    

Sound familiar?

    If we exclude the imports necessary to use some C# functionalities

    using System;
    using System.Collections.Generic;
    using System.ComponentModel;
    using System.Linq;
    

    We have lots of dependencies on external modules.

    This means that if something changes in one of the classes that are part of those namespaces, we may end up with code that is difficult to update.

    Class dependencies

Also, guess what comes with all those imports? A constructor with too many parameters (in fact, this class has 11 dependencies injected in the constructor) and code that is too long and difficult to understand (in fact, this class has 500+ lines).

A solution? Refactor your project to avoid scattering those dependencies.
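
For example, one option is to group dependencies that always change together behind a single facade. Here's a minimal sketch of the idea (all the type names are hypothetical, not taken from the censored class above):

public interface IVideoMetadataFacade
{
    VideoMetadata GetMetadata(int videoId);
}

internal class VideoMetadataFacade : IVideoMetadataFacade
{
    // The fine-grained helpers now live behind a single entry point,
    // so the consuming class needs one dependency instead of several.
    private readonly IMetadataHelper _metadataHelper;
    private readonly IDataExplorer _dataExplorer;

    public VideoMetadataFacade(IMetadataHelper metadataHelper, IDataExplorer dataExplorer)
    {
        _metadataHelper = metadataHelper;
        _dataExplorer = dataExplorer;
    }

    public VideoMetadata GetMetadata(int videoId)
        => _metadataHelper.Enrich(_dataExplorer.Find(videoId));
}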

    Wrapping up

Having all those imports (in C#, the using keyword) is a good indicator that your code does too many things. You should focus on minimizing those imports without cheating (for example, by using global usings).

    Happy coding!

    🐧




• How I automated my blogging workflow with GitHub, PowerShell, and Azure | Code4IT



    After 100 articles, I’ve found some neat ways to automate my blogging workflow. I will share my experience and the tools I use from the very beginning to the very end.


    This is my 100th article 🥳 To celebrate it, I want to share with you the full process I use for writing and publishing articles.

    In this article I will share all the automation and tools I use for writing, starting from the moment an idea for an article pops up in my mind to what happens weeks after an article has been published.

    I hope to give you some ideas to speed up your publishing process. Of course, I’m open to suggestions to improve my own flow: perhaps (well, certainly), you use better tools and processes, so feel free to share them.

    Introducing my blog architecture

    To better understand what’s going on, I need a very brief overview of the architecture of my blog.

    It is written in Gatsby, a framework based on ReactJS that, in short, allows you to transform Markdown files into blog posts (it does many other things, but they are not important for the purpose of this article).

So, my whole blog is stored in a private GitHub repository. Every time I push changes to the master branch, a new deployment is triggered, and my changes appear on the blog within a few minutes.

As I said, I use Gatsby. But the key point here is that my blog is stored in a GitHub repo: this means that everything you'll read here is valid for any Git-based site generator or headless CMS, such as Gatsby, Hugo, Next.js, and Jekyll.

    Now that you know some general aspects, it’s time to deep dive into my writing process.

    Before writing: organizing ideas with GitHub

    My central source, as you might have already understood, is GitHub.

    There, I write all my notes and keep track of the status of my articles.

    Everything is quite well organized, and with the support of some automation, I can speed up my publishing process.

    Github Projects to track the status of the articles

GitHub Projects is the part of GitHub that allows you to organize GitHub Issues and track their status.

    GitHub projects

    I’ve created 2 GitHub Projects: one for the main articles (like this one), and one for my C# and Clean Code Tips.

    In this way, I can use different columns and have more flexibility when handling the status of the tasks.

    GitHub issues templates

    As I said, to write my notes I use GitHub issues.

When I add a new Issue, the first thing is to define which type of article I want to write. And, since many weeks or months can pass between when I come up with the idea for an article and when I start writing it, I need to organize my ideas in a structured way.

    To do that, I use GitHub templates. When I create a new Issue, I choose which kind of article I’m going to write.

    The list of GitHub issues templates I use

    Based on the layout, I can add different info. For instance, when I want to write a new “main” article, I see this form

    Article creation form as generated by a template

    which is prepopulated with some fields:

    • Title: with a placeholder ([Article] )
    • Content: with some sections (the titles, translated from Italian, mean Topics, Links, General notes)
    • Labels: I automatically assign the Article label to the issue (you’ll see later why I do that)

    How can you create GitHub issue templates? All you need is a Markdown file under the .github/ISSUE_TEMPLATE folder with content similar to this one.

    ---
    name: New article
    about: New blog article
    title: "[Article] - "
    labels: Article
    assignees: bellons91
    ---
    
    ## Argomenti
    
    ## Link
    
    ## Appunti vari
    

    And you’re good to go!

    GitHub action to assign issues to a project

    Now I have GitHub Projects and different GitHub Issues Templates. How can I join the different parts? Well, with GitHub Actions!

    With GitHub Actions, you can automate almost everything that happens in GitHub (and outside) using YAML files.

    So, here’s mine:

    Auto-assign to project GitHub Action

    For better readability, you can find the Gist here.

    This action looks for opened and labeled issues and pull requests, and based on the value of the label it assigns the element to the correct project.

In this way, after I choose a template, fill in the fields, and add additional labels (like C#, Docker, and so on), I can see my newly created issue directly in the Articles board. Neat 😎

    Writing

Now it’s time to write!

    As I said, I’m using Gatsby, so all my articles are stored in a GitHub repository and written in Markdown.

    For every article I write, I use a separate git branch: in this way, I’m free to update the content already online (in case of a typo) without publishing my drafts.

    But, of course, I automated it! 😎

PowerShell script to scaffold a new article

    Every article lives in its /content/posts/{year}/{folder-name}/article.md file. And they all have a cover image in a file named cover.png.

    Also, every MD file begins with a Frontmatter section, like this:

    ---
    title: "How I automated my publishing flow with Gatsby, GitHub, PowerShell and Azure"
    path: "/blog/automate-articles-creations-github-powershell-azure"
    tags: ["MainArticle"]
    featuredImage: "./cover.png"
    excerpt: "a description for 072-how-i-create-articles"
    created: 4219-11-20
    updated: 4219-11-20
    ---
    

But, you know, I was tired of creating everything from scratch. So I wrote a PowerShell script to do everything for me.

    PowerShell script to scaffold a new article

    You can find the code in this Gist.

    This script performs several actions:

    1. Switches to the Master branch and downloads the latest updates
    2. Asks for the article slug that will be used to create the folder name
    3. Creates a new branch using the article slug as a name
    4. Creates a new folder that will contain all the files I will be using for my article (markdown content and images)
    5. Creates the article file with the Frontmatter part populated with dummy values
    6. Copies a placeholder image into this folder; this image will be the temporary cover image

    In this way, with a single command, I can scaffold a new article with all the files I need to get started.

Ok, but how can I run a PowerShell script in a Gatsby repository?

I added this script to the package.json file:

    "create-article": "@powershell -NoProfile -ExecutionPolicy Unrestricted -Command ./article-creator.ps1",
    

    where article-creator.ps1 is the name of the file that contains the script.

    Now I can simply run npm run create-article to have a new empty article in a new branch, already updated with everything published in the Master branch.

    Markdown preview on VS Code

    I use Visual Studio Code to write my articles: I like it because it’s quite fast and with lots of functionalities to write in Markdown (you can pick your favorites in the Extensions store).

One of my favorites is the side-by-side preview. To see the result of your Markdown in a side panel, press CTRL+SHIFT+P and select Open Preview to the Side.

    Here’s what I can see right now while I’m writing:

    Markdown preview on the side with VS Code

    Grammar check with Grammarly

Then, it’s time for a grammar check. I use Grammarly, which helps me fix lots of errors (well, lately, only a few: it means I’ve improved a lot! 😎).

    I copy the Markdown in their online editor, fix the issues, and copy it back into my repo.

    Fun fact: the online editor recognizes that you’re using Markdown and automatically checks only the actual text, ignoring all the symbols you use in Markdown (like brackets).

    Unprofessional, but fun, cover images

    One of the tasks I like the most is creating my cover images.

I don’t use stock images; I prefer less professional but more original cover images.

    Some of the cover images for my articles

    You can see all of them here.

    Creating and scheduling PR on GitHub with Templates and Actions

Now that my article is complete, I can mark it as ready to be scheduled.

To do that, I open a Pull Request toward the master branch and, again, add some automation!

    I have created a PR template in an MD file, which I use to create a draft of the PR content.

    Pull Request form on GitHub

    In this way, I can define which task (so, which article) is related to this PR, using the “Closes” formula (“Closes #111174” means that I’m closing the Issue with ID 111174).

Also, I can define when this PR will be merged into master, using the /schedule tag.

    It works because I have integrated into my workflow a GitHub Action, merge-schedule, that reads the date from that field to understand when the PR must be merged.

    YAML of Merge Schedule action

    So, every Tuesday at 8 AM, this action runs to check if there are any PRs that can be merged. If so, the PR will be merged into master, and the CI/CD pipeline builds the site and publishes the new content.

As usual, you can find the code of this action here.

    After the PR is merged, I also receive an email that notifies me of the action.

    After publishing

    Once a new article is online, I like to give it some visibility.

    To do that, I heavily rely on Azure Logic Apps.

    Azure Logic App for sharing on Twitter

    My blog exposes an RSS feed. And, obviously, when a new article is created, a new item appears in the feed.

    I use it to trigger an Azure Logic App to publish a message on Twitter:

    Azure Logic App workflow for publishing on Twitter

    The Logic App reads the newly published feed item and uses its metadata to create a message that will be shared on Twitter.

    If you prefer, you can use a custom Azure Function! The choice is yours!
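
If you went the Azure Function route, a timer-triggered function could read the feed and build the message itself. Here's a rough sketch (the schedule, the feed URL, and the logging-only "tweet" step are all illustrative assumptions, not my actual setup):

using System.Linq;
using System.ServiceModel.Syndication;
using System.Xml;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class ShareLatestArticle
{
    [FunctionName("ShareLatestArticle")]
    public static void Run([TimerTrigger("0 0 9 * * *")] TimerInfo timer, ILogger log)
    {
        // Read the blog's RSS feed (URL is illustrative)
        using var reader = XmlReader.Create("https://www.code4it.dev/rss.xml");
        SyndicationFeed feed = SyndicationFeed.Load(reader);

        var latest = feed.Items.OrderByDescending(i => i.PublishDate).First();
        string message = $"📰 New article: {latest.Title.Text} {latest.Links[0].Uri}";

        // Here you would call the Twitter API; logging stands in for that step
        log.LogInformation("Would tweet: {Message}", message);
    }
}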

    Cross-post reminder with Azure Logic Apps

Similarly, I use an Azure Logic App to send myself an email reminding me to cross-post my articles to other platforms.

    Azure Logic App workflow for crosspost reminders

    I’ve added a delay so that my content lives longer, and I can repost it even after weeks or months.

Unfortunately, I have to cross-post my articles manually, and this is quite time-consuming, especially when there are lots of images: in my MD files I use relative paths, so when porting my content to different platforms I have to find the absolute URL for every image.

    And, my friends, this is everything that happens in the background of my blog!

    What I’m still missing

I’ve put a lot of effort into my blog, and I’m incredibly proud of it!

    But still, there are a few things I’d like to improve.

    SEO Tools/analysis

I’ve never considered SEO. Or, rather, keywords.

    I write for the sake of writing, and because I love it. And I don’t like to stuff my content with keywords just to rank better on search engines.

I take care of the details, like alt texts and well-structured sections. But I’m not able to follow the “rules” for finding the best keywords.

    Maybe I should use some SEO tools to find the best keywords for me. But I don’t want to bend to that way of creating content.

    Also, I should spend more time thinking of the correct title and section titles.

    Any idea?

    Easy upgrade of Gatsby/Migrate to other headless CMSs

    Lastly, I’d like to find another theme or platform and leave the one I’m currently using.

    Not because I don’t like it. But because many dependencies are outdated, and the theme I’m using hasn’t been updated since 2019.

    Wrapping up

    That’s it: in this article, I’ve explained everything that I do when writing a blog post.

    Feel free to take inspiration from my automation to improve your own workflow, and contact me if you have some nice improvements or ideas: I’m all ears!

    So, for now, happy coding!

    🐧




  • Convert ExpandoObjects to IDictionary | Code4IT




    In C#, ExpandoObjects are dynamically-populated objects without a predefined shape.

    dynamic myObj = new ExpandoObject();
myObj.Name = "Davide";
    myObj.Age = 30;
    

    Name and Age are not part of the definition of ExpandoObject: they are two fields I added without declaring their type.

This is a dynamic object, so I can add new fields as I want. Say that I need to add my City: I can simply write

myObj.City = "Turin";

without creating any field on the ExpandoObject class.

    Now: how can I retrieve all the values? Probably the best way is by converting the ExpandoObject into a Dictionary.

    Create a new Dictionary

    Using an IDictionary makes it easy to access the keys of the object.

    If you have an ExpandoObject that will not change, you can use it to create a new IDictionary:

    dynamic myObj = new ExpandoObject();
myObj.Name = "Davide";
    myObj.Age = 30;
    
    
    IDictionary<string, object?> dict = new Dictionary<string, object?>(myObj);
    
    //dict.Keys: [Name, Age]
    
myObj.City = "Turin";
    
    //dict.Keys: [Name, Age]
    

Notice that we use the ExpandoObject to create a new IDictionary. This means that if we add a new field to the ExpandoObject after the Dictionary has been created, that new field will not be present in the Dictionary.

    Cast to IDictionary

If you want to use an IDictionary to get the ExpandoObject keys, and you need to stay in sync with the ExpandoObject’s state, you just have to cast that object to an IDictionary:

    dynamic myObj = new ExpandoObject();
myObj.Name = "Davide";
    myObj.Age = 30;
    
    IDictionary<string, object?> dict = myObj;
    
    //dict.Keys: [Name, Age]
    
myObj.City = "Turin";
    
    //dict.Keys: [Name, Age, City]
    

    This works because ExpandoObject implements IDictionary, so you can simply cast to IDictionary without instantiating a new object.

    Here’s the class definition:

    public sealed class ExpandoObject :
    	IDynamicMetaObjectProvider,
    	IDictionary<string, object?>,
    	ICollection<KeyValuePair<string, object?>>,
    	IEnumerable<KeyValuePair<string, object?>>,
    	IEnumerable,
    	INotifyPropertyChanged
    
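Since the cast exposes the very same object, the synchronization works in both directions: changes made through the dictionary are visible on the dynamic object too. A quick sketch:

dynamic myObj = new ExpandoObject();
myObj.Name = "Davide";

IDictionary<string, object?> dict = myObj;
dict["City"] = "Turin"; // added through the dictionary...

Console.WriteLine(myObj.City); // ...but visible on the dynamic object: prints "Turin"
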

    Wrapping up

Both approaches are correct. They both expose the same keys at first, but they behave differently when a new value is added to the ExpandoObject.

    Can you think of any pros and cons of each approach?

    Happy coding!

    🐧




• 3 ways to check the object passed to mocks with Moq in C# | Code4IT



In unit tests, sometimes you need to perform deep checks on the object passed to the mocked service. We will learn 3 ways to do that with Moq and C#.


    When writing unit tests, you can use Mocks to simulate the usage of class dependencies.

Even though some developers are strongly against the use of mocks, they can be useful, especially when the mocked operation does not return any value but you still want to check that you’ve called a specific method with the correct values.

    In this article, we will learn 3 ways to check the values passed to the mocks when using Moq in our C# Unit Tests.

    To better explain those 3 ways, I created this method:

    public void UpdateUser(User user, Preference preference)
    {
        var userDto = new UserDto
        {
            Id = user.id,
            UserName = user.username,
            LikesBeer = preference.likesBeer,
            LikesCoke = preference.likesCoke,
            LikesPizza = preference.likesPizza,
        };
    
        _userRepository.Update(userDto);
    }
    

    UpdateUser simply accepts two objects, user and preference, combines them into a single UserDto object, and then calls the Update method of _userRepository, which is an interface injected in the class constructor.

    As you can see, we are not interested in the return value from _userRepository.Update. Rather, we are interested in checking that we are calling it with the right values.

    We can do it in 3 ways.

    Verify each property with It.Is

    The simplest, most common way is by using It.Is<T> within the Verify method.

    [Test]
    public void VerifyEachProperty()
    {
        // Arrange
        var user = new User(1, "Davide");
        var preferences = new Preference(true, true, false);
    
        UserDto expected = new UserDto
        {
            Id = 1,
            UserName = "Davide",
            LikesBeer = true,
            LikesCoke = false,
            LikesPizza = true,
        };
    
        //Act
    
        userUpdater.UpdateUser(user, preferences);
    
        //Assert
        userRepo.Verify(_ => _.Update(It.Is<UserDto>(u =>
            u.Id == expected.Id
            && u.UserName == expected.UserName
            && u.LikesPizza == expected.LikesPizza
            && u.LikesBeer == expected.LikesBeer
            && u.LikesCoke == expected.LikesCoke
        )));
    }
    

    In the example above, we used It.Is<UserDto> to check the exact item that was passed to the Update method of userRepo.

Notice that it accepts a parameter: an expression of type Expression<Func<UserDto, bool>> that you can use to define when your expectations are met.

    In this particular case, we’ve checked each and every property within that function:

    u =>
        u.Id == expected.Id
        && u.UserName == expected.UserName
        && u.LikesPizza == expected.LikesPizza
        && u.LikesBeer == expected.LikesBeer
        && u.LikesCoke == expected.LikesCoke
    

This approach works well when you have to check only a few fields. But the more fields you add, the longer and messier that code becomes.

Also, if this approach fails, it’s hard to pinpoint the cause of the failure, because nothing indicates which specific field did not match the expectations.

    Here’s an example of an error message:

    Expected invocation on the mock at least once, but was never performed: _ => _.Update(It.Is<UserDto>(u => (((u.Id == 1 && u.UserName == "Davidde") && u.LikesPizza == True) && u.LikesBeer == True) && u.LikesCoke == False))
    
    Performed invocations:
    
    Mock<IUserRepository:1> (_):
        IUserRepository.Update(UserDto { UserName = Davide, Id = 1, LikesPizza = True, LikesCoke = False, LikesBeer = True })
    

    Can you spot the error? And what if you were checking 15 fields instead of 5?

    Verify with external function

Another approach is to externalize the function.

    [Test]
    public void WithExternalFunction()
    {
        //Arrange
        var user = new User(1, "Davide");
        var preferences = new Preference(true, true, false);
    
        UserDto expected = new UserDto
        {
            Id = 1,
            UserName = "Davide",
            LikesBeer = true,
            LikesCoke = false,
            LikesPizza = true,
        };
    
        //Act
        userUpdater.UpdateUser(user, preferences);
    
        //Assert
        userRepo.Verify(_ => _.Update(It.Is<UserDto>(u => AreEqual(u, expected))));
    }
    
    private bool AreEqual(UserDto u, UserDto expected)
    {
        Assert.AreEqual(expected.UserName, u.UserName);
        Assert.AreEqual(expected.Id, u.Id);
        Assert.AreEqual(expected.LikesBeer, u.LikesBeer);
        Assert.AreEqual(expected.LikesCoke, u.LikesCoke);
        Assert.AreEqual(expected.LikesPizza, u.LikesPizza);
    
        return true;
    }
    

    Here, we are passing an external function to the It.Is<T> method.

    This approach allows us to define more explicit and comprehensive checks.

    The good parts of it are that you will gain more control over the assertions, and you will also have better error messages in case a test fails:

    Expected string length 6 but was 7. Strings differ at index 5.
    Expected: "Davide"
    But was:  "Davidde"
    

The bad part is that you will stuff your test class with lots of different methods, and the class can easily become hard to maintain. Unfortunately, we cannot use local functions.

On the other hand, external functions can be combined and reused across test cases.

    Intercepting the function parameters with Callback

    Lastly, we can use a hidden gem of Moq: Callbacks.

With Callbacks, you can store in a local variable a reference to the object that was passed to the mocked method.

    [Test]
    public void CompareWithCallback()
    {
        // Arrange
    
        var user = new User(1, "Davide");
        var preferences = new Preference(true, true, false);
    
        UserDto actual = null;
        userRepo.Setup(_ => _.Update(It.IsAny<UserDto>()))
            .Callback(new InvocationAction(i => actual = (UserDto)i.Arguments[0]));
    
        UserDto expected = new UserDto
        {
            Id = 1,
            UserName = "Davide",
            LikesBeer = true,
            LikesCoke = false,
            LikesPizza = true,
        };
    
        //Act
        userUpdater.UpdateUser(user, preferences);
    
        //Assert
        Assert.IsTrue(AreEqual(expected, actual));
    }
    

In this way, you can use that object locally and run assertions directly on it, without relying on the Verify method.

    Or, if you use records, you can use the auto-equality checks to simplify the Verify method as I did in the previous example.
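
For instance, if UserDto were declared as a record (an assumption: it isn't in the snippets above), its value-based equality would let you pass the expected instance straight to Verify, with no It.Is at all:

// Hypothetical record version of UserDto
public record UserDto
{
    public int Id { get; init; }
    public string UserName { get; init; }
    public bool LikesBeer { get; init; }
    public bool LikesCoke { get; init; }
    public bool LikesPizza { get; init; }
}

// Records compare by value, so this matches any call made with an equivalent object
userRepo.Verify(_ => _.Update(expected));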

    Wrapping up

    In this article, we’ve explored 3 ways to perform checks on the objects passed to dependencies mocked with Moq.

    Each way has its pros and cons, and it’s up to you to choose the approach that fits you the best.

    I personally prefer the second and third approaches, as they allow me to perform better checks on the passed values.

    What about you?

    For now, happy coding!

    🐧




• Tests should be even more well-written than production code | Code4IT




    You surely take care of your code to make it easy to read and understand, right? RIGHT??

    Well done! 👏

But most developers tend to write good production code (the code actually executed by your system) and very poor test code.

Production code is meant to be run, while tests are also meant to document your code; therefore, there must be no doubt about the meaning of a test and the reason behind it.
This also means that all the names must be explicit enough to help readers understand how and why a test should pass.

    This is a valid C# test:

    [Test]
    public void TestHtmlParser()
    {
        HtmlDocument doc = new HtmlDocument();
        doc.LoadHtml("<p>Hello</p>");
        var node = doc.DocumentNode.ChildNodes[0];
        var parser = new HtmlParser();
    
        Assert.AreEqual("Hello", parser.ParseContent(node));
    }
    

    What is the meaning of this test? We should be able to understand it just by reading the method name.

Also, notice that here we are creating the HtmlNode object; imagine this node creation repeated in every test method: you would see the same lines of code over and over again.

    Thus, we can refactor this test in this way:

     [Test]
    public void HtmlParser_ExtractsContent_WhenHtmlIsParagraph()
    {
        //Arrange
        string paragraphContent = "Hello";
        string htmlParagraph = $"<p>{paragraphContent}</p>";
        HtmlNode htmlNode = CreateHtmlNode(htmlParagraph);
        var htmlParser = new HtmlParser();
    
        //Act
        var parsedContent = htmlParser.ParseContent(htmlNode);
    
        //Assert
        Assert.AreEqual(paragraphContent, parsedContent);
    }
    

    This test is definitely better:

• you can understand its meaning by reading the test name
• the code is concise, and some creation parts are refactored out (a possible CreateHtmlNode helper is sketched below)
• we’ve clearly separated the 3 parts of the test: Arrange, Act, Assert (we’ve already talked about it here)
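
For completeness, CreateHtmlNode could be a small private helper that wraps the creation steps we removed from the test body (a sketch, assuming the same HtmlAgilityPack types used in the first version):

private static HtmlNode CreateHtmlNode(string html)
{
    // Wraps the boilerplate that used to be repeated in every test
    var doc = new HtmlDocument();
    doc.LoadHtml(html);
    return doc.DocumentNode.ChildNodes[0];
}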

    Wrapping up

    Tests are still part of your project, even though they are not used directly by your customers.

    Never skip tests, and never write them in a rush. After all, when you encounter a bug, the first thing you should do is write a test to reproduce the bug, and then validate the fix using that same test.

    So, keep writing good code, for tests too!

    Happy coding!

    🐧




• 8 things about Records in C# you probably didn’t know | Code4IT



C# recently introduced Records, a new way of defining types. In this article, we will see 8 things you probably didn’t know about C# Records.


Records are the new data type introduced in 2021 with C# 9 and .NET 5.

    public record Person(string Name, int Id);
    

    Records are the third way of defining data types in C#; the other two are class and struct.

Since they’re quite a new idea in .NET, we should spend some time experimenting with them and trying to understand their possibilities and functionality.

In this article, we will see 8 properties of Records that you should know before using them, to get the best out of this new data type.

    1- Records are immutable

    By default, Records are immutable. This means that, once you’ve created one instance, you cannot modify any of its fields:

    var me = new Person("Davide", 1);
    me.Name = "AnotherMe"; // won't compile!
    

    This operation is not legit.

    Even the compiler complains:

    Init-only property or indexer ‘Person.Name’ can only be assigned in an object initializer, or on ’this’ or ‘base’ in an instance constructor or an ‘init’ accessor.

    2- Records implement equality

    The other main property of Records is that they implement equality out-of-the-box.

    [Test]
    public void EquivalentInstances_AreEqual()
    {
        var me = new Person("Davide", 1);
        var anotherMe = new Person("Davide", 1);
    
        Assert.That(anotherMe, Is.EqualTo(me));
        Assert.That(me, Is.Not.SameAs(anotherMe));
    }
    

    As you can see, I’ve created two instances of Person with the same fields. They are considered equal, but they are not the same instance.

    3- Records can be cloned or updated using ‘with’

    Ok, so if we need to update the field of a Record, what can we do?

    We can use the with keyword:

    [Test]
    public void WithProperty_CreatesNewInstance()
    {
        var me = new Person("Davide", 1);
        var anotherMe = me with { Id = 2 };
    
        Assert.That(anotherMe, Is.Not.EqualTo(me));
        Assert.That(me, Is.Not.SameAs(anotherMe));
    }
    

    Take a look at me with { Id = 2 }: that operation creates a clone of me and updates the Id field.

    Of course, you can use with to create a new instance identical to the original one.

    [Test]
    public void With_CreatesNewInstance()
    {
        var me = new Person("Davide", 1);
    
        var anotherMe = me with { };
    
        Assert.That(anotherMe, Is.EqualTo(me));
        Assert.That(me, Is.Not.SameAs(anotherMe));
    }
    

    4- Records can be structs and classes

    Basically, Records act as Classes.

    public record Person(string Name, int Id);
    

    Sometimes that’s not what you want. Since C# 10 you can declare Records as Structs:

    public record struct Point(int X, int Y);
    

    Clearly, everything we’ve seen before is still valid.

    [Test]
    public void EquivalentStructsInstances_AreEqual()
    {
        var a = new Point(2, 1);
        var b = new Point(2, 1);
    
        Assert.That(b, Is.EqualTo(a));
        //Assert.That(a, Is.Not.SameAs(b));// does not compile!
    }
    

Well, almost everything: you cannot use Is.SameAs() because, since structs are value types, two values will always be distinct. You’ll get notified about it by the compiler, with an error that says:

    The SameAs constraint always fails on value types as the actual and the expected value cannot be the same reference

    5- Records are actually not immutable

    We’ve seen that you cannot update existing Records. Well, that’s not totally correct.

    That assertion is true in the case of “simple” Records like Person:

    public record Person(string Name, int Id);
    

    But things change when we use another way of defining Records:

    public record Pair
    {
        public Pair(string Key, string Value)
        {
            this.Key = Key;
            this.Value = Value;
        }
    
        public string Key { get; set; }
        public string Value { get; set; }
    }
    

    We can explicitly declare the properties of the Record to make it look more like plain classes.

Using this approach, we can still use the auto-equality functionality of Records:

    [Test]
    public void ComplexRecordsAreEquatable()
    {
        var a = new Pair("Capital", "Roma");
        var b = new Pair("Capital", "Roma");
    
        Assert.That(b, Is.EqualTo(a));
    }
    

    But we can update a single field without creating a brand new instance:

    [Test]
    public void ComplexRecordsAreNotImmutable()
    {
        var b = new Pair("Capital", "Roma");
        b.Value = "Torino";
    
        Assert.That(b.Value, Is.EqualTo("Torino"));
    }
    
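As a side note: if you want explicit properties and immutability at the same time, you can replace set with init accessors, which restore the init-only behavior of positional Records (a minimal sketch):

public record Pair
{
    public Pair(string key, string value)
    {
        Key = key;
        Value = value;
    }

    // init-only setters: assignable at construction time, immutable afterwards
    public string Key { get; init; }
    public string Value { get; init; }
}
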

Also, even with the basic Record definition, immutability applies only to the properties themselves: if a property holds a mutable reference type, its content can still change.

For example, the ComplexPair type is a Record that accepts a list of strings in its definition.

    public record ComplexPair(string Key, string Value, List<string> Metadata);
    

    That list of strings is not immutable: you can add and remove items as you wish:

    [Test]
    public void ComplexRecordsAreNotImmutable2()
    {
        var b = new ComplexPair("Capital", "Roma", new List<string> { "City" });
        b.Metadata.Add("Another Value");
    
        Assert.That(b.Metadata.Count, Is.EqualTo(2));
    }
    

In the example above, you can see that I added a new item to the Metadata list without creating a new object.

    6- Records can have subtypes

    A neat feature is that we can create a hierarchy of Records in a very simple manner.

    Do you remember the Person definition?

    public record Person(string Name, int Id);
    

    Well, you can define a subtype just as you would do with plain classes:

    public record Employee(string Name, int Id, string Role) : Person(Name, Id);
    

Of course, all the usual rules of casting and inheritance are still valid.

    [Test]
    public void Records_CanHaveSubtypes()
    {
        Person meEmp = new Employee("Davide", 1, "Chief");
    
        Assert.That(meEmp, Is.AssignableTo<Employee>());
        Assert.That(meEmp, Is.AssignableTo<Person>());
    }
    

    7- Records can be abstract

    …and yes, we can have Abstract Records!

    public abstract record Box(int Volume, string Material);
    

This means that we cannot instantiate new Records whose type is marked as abstract.

    var box = new Box(2, "Glass"); // cannot create it, it's abstract
    

    On the contrary, we need to create concrete types to instantiate new objects:

    public record PlasticBox(int Volume) : Box(Volume, "Plastic");
    

    Again, all the rules we already know are still valid.

    [Test]
    public void Records_CanBeAbstract()
    {
        var plasticBox = new PlasticBox(2);
    
        Assert.That(plasticBox, Is.AssignableTo<Box>());
        Assert.That(plasticBox, Is.AssignableTo<PlasticBox>());
    }
    

8- Records can be sealed

    Finally, Records can be marked as Sealed.

    public sealed record Point3D(int X, int Y, int Z);
    

    Marking a Record as Sealed means that we cannot declare subtypes.

public record ColoredPoint3D(int X, int Y, int Z, string RgbColor) : Point3D(X, Y, Z); // Will not compile!
    

    This can be useful when exposing your types to external systems.

    This article first appeared on Code4IT

    Additional resources

    As usual, a few links you might want to read to learn more about Records in C#.

    The first one is a tutorial from the Microsoft website that teaches you the basics of Records:

    🔗 Create record types | Microsoft Docs

    The second one is a splendid article by Gary Woodfine where he explores the internals of C# Records, and more:

🔗 C# Records – The good, bad & ugly | Gary Woodfine.

    Finally, if you’re interested in trivia about C# stuff we use but we rarely explore, here’s an article I wrote a while ago about GUIDs in C# – you’ll find some neat stuff in there!

🔗 5 things you didn’t know about Guid in C# | Code4IT

    Wrapping up

    In this article, we’ve seen 8 things you probably didn’t know about Records in C#.

    Records are quite new in the .NET ecosystem, so we can expect more updates and functionalities.

    Is there anything else we should add? Or maybe something you did not expect?

    Happy coding!

    🐧




  • C# Tip: use IHttpClientFactory to generate HttpClient instances




    The problem with HttpClient

When you create lots of HttpClient instances, you may incur Socket Exhaustion.

This happens because sockets are a finite resource, and they are not released exactly when you Dispose of them, but a bit later. So, when you create lots of clients, you may exhaust the available sockets.

Even with using statements, you may end up with Socket Exhaustion.

    class ResourceChecker
    {
        public async Task<bool> ResourceExists(string url)
        {
            using (HttpClient client = new HttpClient())
            {
                var response = await client.GetAsync(url);
                return response.IsSuccessStatusCode;
            }
        }
    }
    

    Actually, the real issue lies in the disposal of HttpMessageHandler instances. With simple HttpClient objects, you have no control over them.

    Introducing HttpClientFactory

The IHttpClientFactory interface creates HttpClient instances for you.

    class ResourceChecker
    {
        private IHttpClientFactory _httpClientFactory;
    
        public ResourceChecker(IHttpClientFactory httpClientFactory)
        {
            _httpClientFactory = httpClientFactory;
        }
    
        public async Task<bool> ResourceExists(string url)
        {
            HttpClient client = _httpClientFactory.CreateClient();
            var response = await client.GetAsync(url);
            return response.IsSuccessStatusCode;
        }
    }
    

    The purpose of IHttpClientFactory is to solve that issue with HttpMessageHandler.

An interesting feature of IHttpClientFactory is that you can customize it with general configurations that will be applied to all the HttpClient instances it generates in a certain way. For instance, you can define HTTP headers, the base URL, and other properties in a single place, and have those properties applied everywhere.
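
For example, here's a sketch of a named client whose defaults are applied to every instance created with that name (the name and header values are illustrative):

services.AddHttpClient("github", client =>
{
    // Applied to every HttpClient created with the name "github"
    client.BaseAddress = new Uri("https://api.github.com/");
    client.DefaultRequestHeaders.Add("Accept", "application/vnd.github.v3+json");
});

// Later, anywhere in the app:
HttpClient client = _httpClientFactory.CreateClient("github");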

    How to add it to .NET Core APIs or Websites

    How can you use HttpClientFactory in your .NET projects?

    If you have the Startup class, you can simply add an instruction to the ConfigureServices method:

    public void ConfigureServices(IServiceCollection services)
    {
        services.AddHttpClient();
    }
    

    You can find that extension method under the Microsoft.Extensions.DependencyInjection namespace.
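
If your project uses the minimal hosting model instead (no Startup class), the equivalent registration goes in Program.cs:

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddHttpClient();

var app = builder.Build();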

    Wrapping up

In this article, we’ve seen why you should not instantiate HttpClients manually and why you should use IHttpClientFactory instead.

    Happy coding!

    🐧




• How to improve Serilog logging in .NET 6 by using Scopes | Code4IT



    Logs are important. Properly structured logs can be the key to resolving some critical issues. With Serilog’s Scopes, you can enrich your logs with info about the context where they happened.


    Even though it’s not one of the first things we usually set up when creating a new application, logging is a real game-changer in the long run.

When an error occurs, proper logging gives us more info about the context where it happened, so that we can more easily identify the root cause.

    In this article, we will use Scopes, one of the functionalities of Serilog, to create better logs for our .NET 6 application. In particular, we’re going to create a .NET 6 API application in the form of Minimal APIs.

    We will also use Seq, just to show you the final result.

    Adding Serilog in our Minimal APIs

    We’ve already explained what Serilog and Seq are in a previous article.

To summarize, Serilog is an open-source .NET library for logging. One of its best features is that messages are in the form of a template (called Structured Logs), and you can enrich the logs with values calculated automatically, such as the method name or exception details.

    To add Serilog to your application, you simply have to run dotnet add package Serilog.AspNetCore.

Since we’re using Minimal APIs, we don’t have the Startup file anymore; instead, we will need to add it to the Program.cs file:

    builder.Host.UseSerilog((ctx, lc) => lc
        .WriteTo.Console() );
    

    Then, to create those logs, you will need to add a specific dependency in your classes:

    public class ItemsRepository : IItemsRepository
    {
        private readonly ILogger<ItemsRepository> _logger;
    
        public ItemsRepository(ILogger<ItemsRepository> logger)
        {
            _logger = logger;
        }
    }
    

    As you can see, we’re injecting an ILogger<ItemsRepository>: specifying the related class automatically adds some more context to the logs that we will generate.

    Installing Seq and adding it as a Sink

    Seq is a logging platform that is a perfect fit for Serilog logs. If you don’t have it already installed, head to their download page and install it locally (you can even install it as a Docker container 🤩).

    In the installation wizard, you can select the HTTP port that will expose its UI. Once everything is in place, you can open that page on your localhost and see a page like this:

    Seq empty page on localhost

    On this page, we will see all the logs we write.

    But wait! ⚠ We still have to add Seq as a sink for Serilog.

A sink is nothing but a destination for the logs. When using .NET APIs, we can define our sinks both in the appsettings.json file and in the Program.cs file. We will use the second approach.

    First of all, you will need to install a NuGet package to add Seq as a sink: dotnet add package Serilog.Sinks.Seq.

    Then, you have to update the Serilog definition we’ve seen before by adding a .WriteTo.Seq instruction:

    builder.Host.UseSerilog((ctx, lc) => lc
        .WriteTo.Console()
        .WriteTo.Seq("http://localhost:5341")
        );
    

Notice that we’ve also specified the port that exposes our Seq instance.

    Now, every time we log something, we will see our logs both on the Console and on Seq.

    How to add scopes

    The time has come: we can finally learn how to add Scopes using Serilog!

    Setting up the example

    For this example, I’ve created a simple controller, ItemsController, which exposes two endpoints: Get and Add. With these two endpoints, we are able to add and retrieve items stored in an in-memory collection.

    This class has 2 main dependencies: IItemsRepository and IUsersItemsRepository. Each of these interfaces has its own concrete class, each with a private logger injected in the constructor:

    public ItemsRepository(ILogger<ItemsRepository> logger)
    {
        _logger = logger;
    }
    

    and, similarly

    public UsersItemRepository(ILogger<UsersItemRepository> logger)
    {
        _logger = logger;
    }
    

    How do those classes use their own _logger instances?

    For example, the UsersItemRepository class exposes an AddItem method that adds a specific item to the list of items already possessed by a specific user.

    public void AddItem(string username, Item item)
    {
        if (!_usersItems.ContainsKey(username))
        {
            _usersItems.Add(username, new List<Item>());
            _logger.LogInformation("User was missing from the list. Just added");
        }
        _usersItems[username].Add(item);
        _logger.LogInformation("Added item for to the user's catalogue");
    }
    

    We are logging some messages, such as “User was missing from the list. Just added”.

    Something similar happens in the ItemsRepository class, where we have a GetItem method that returns the required item if it exists, and null otherwise.

    public Item GetItem(int itemId)
    {
        _logger.LogInformation("Retrieving item {ItemId}", itemId);
        return _allItems.FirstOrDefault(i => i.Id == itemId);
    }
    

    Finally, who’s gonna call these methods?

    [HttpPost(Name = "AddItems")]
    public IActionResult Add(string userName, int itemId)
    {
        var item = _itemsRepository.GetItem(itemId);
    
        if (item == null)
        {
            _logger.LogWarning("Item does not exist");
    
            return NotFound();
        }
        _usersItemsRepository.AddItem(userName, item);
    
        return Ok(item);
    }
    

    Ok then, we’re ready to run the application and see the result.

    When I call that endpoint by passing “davide” as userName and “1” as itemId, we can see these logs:

    Simple logging on Seq

We can see the 3 log messages, but they are unrelated to each other. In fact, if we expand the logs to see the actual values we’ve logged, we can see that only the “Retrieving item 1” log has some information about the item ID we want to associate with the user.

    Expanding logs on Seq

    Using BeginScope with Serilog

    Finally, it’s time to define the Scope.

    It’s as easy as adding a simple using statement; see how I added the scope to the Add method in the Controller:

    [HttpPost(Name = "AddItems")]
    public IActionResult Add(string userName, int itemId)
    {
        using (_logger.BeginScope("Adding item {ItemId} for user {UserName}", itemId, userName))
        {
            var item = _itemsRepository.GetItem(itemId);
    
            if (item == null)
            {
                _logger.LogWarning("Item does not exist");
    
                return NotFound();
            }
            _usersItemsRepository.AddItem(userName, item);
    
            return Ok(item);
        }
    }
    

    Here’s the key!

    using (_logger.BeginScope("Adding item {ItemId} for user {UserName}", itemId, userName))
    

    With this single instruction, we are actually performing 2 operations:

    1. we are adding a Scope to each message – “Adding item 1 for user davide”
2. we are adding ItemId and UserName to each log entry that falls within this block, in every method down the call chain.

    Let’s run the application again, and we will see this result:

    Expanded logs on Seq with Scopes

So, now you can use these new properties to get some info about the context in which a log entry was written, and you can use the ItemId and UserName fields to search for other related logs.

    You can also nest scopes, of course.
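
Here's a tiny sketch of nested scopes (the property names are made up): each log written in the inner block carries the properties of both scopes.

using (_logger.BeginScope("Processing request {RequestId}", requestId))
{
    using (_logger.BeginScope("Updating user {UserName}", userName))
    {
        // This entry carries both RequestId and UserName
        _logger.LogInformation("User updated");
    }
}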

    Why scopes instead of Correlation ID?

    You might be thinking

    Why can’t I just use correlation IDs?

    Well, the answer is pretty simple: correlation IDs are meant to correlate different logs in a specific request, and, often, across services. You generally use Correlation IDs that represent a specific call to your API and act as a Request ID.

    For sure, that can be useful. But, sometimes, not enough.

    Using scopes you can also “correlate” distinct HTTP requests that have something in common.

If I call the AddItem endpoint twice, I can filter by both UserName and ItemId and see all the related logs across distinct HTTP calls.

    Let’s see a real example: I have called the endpoint with different values

    • id=1, username=“davide”
    • id=1, username=“luigi”
    • id=2, username=“luigi”

Since the scope references both properties, we can filter by UserName and discover that Luigi has added both Item 1 and Item 2.

    Filtering logs by UserName

    At the same time, we can filter by ItemId and discover that the item with id = 2 has been added only once.

    Filtering logs by ItemId

    Ok, then, in the end, Scopes or Correlation IDs? The answer is simple:

    Both is good

    This article first appeared on Code4IT

    Read more

    As always, the best place to find the info about a library is its documentation.

    🔗 Serilog website

    If you prefer some more practical articles, I’ve already written one to help you get started with Serilog and Seq (and with Structured Logs):

    🔗 Logging with Serilog and Seq | Code4IT

    as well as one about adding Serilog to Console applications (which is slightly different from adding Serilog to .NET APIs)

    🔗 How to add logs on Console with .NET Core and Serilog | Code4IT

Then, you might want to deep dive into Serilog’s BeginScope. Here’s a neat article by Nicholas Blumhardt. Also, have a look at the comments: you’ll find interesting points to consider.

    🔗 The semantics of ILogger.BeginScope | Nicholas Blumhardt

    Finally, two must-read articles about logging best practices.

    The first one is by Thiago Nascimento Figueiredo:

    🔗 Logs – Why, good practices, and recommendations | Dev.to

and the second one is by Liron Tal:

    🔗 9 Logging Best Practices Based on Hands-on Experience | Loom Systems

    Wrapping up

    In this article, we’ve added Scopes to our logs to enrich them with some common fields that can be useful to investigate in case of errors.

Remember to read the last 3 links I’ve shared above: they’re pure gold, and you’ll thank me later 😎

    Happy coding!

    🐧




• Avoid subtle duplication of code and logic | Code4IT




    Duplication is not only about lines of code, but also about data usage and meaning.
    Reducing it will help us minimize the impact of every change.

    Take this class as an example:

    class BookShelf
    {
        private Book[] myBooks = new Book[]
        {
             new Book(1, "C# in depth"),
             new Book(2, "I promessi paperi")
        };
    
        public int Count() => myBooks.Length;
        public bool IsEmpty() => myBooks.Length == 0;
        public bool HasElements() => myBooks.Length > 0;
    }
    

Here, Count, IsEmpty, and HasElements all use the same logic to check the size of the collection: calling myBooks.Length.

    What happens if you have to change the myBooks collection and replace the array of Books with a collection that does not expose the Length property? You will have to replace the logic everywhere!

    So, a better approach is to “centralize” the way to count the items in the collection in this way:

    class BookShelf
    {
        private Book[] myBooks = new Book[]
        {
             new Book(1, "C# in depth"),
             new Book(2, "I promessi paperi")
        };
    
        public int Count() => myBooks.Length;
        public bool IsEmpty() => Count() == 0;
        public bool HasElements() => Count() > 0;
    }
    

If you need to replace the myBooks data type, you will simply have to update the Count method; everything else stays the same.

Also, HasElements and IsEmpty are a logical duplication. If both aren’t necessary, you should remove one of them. Remove the one most often used in its negative form: if you find lots of if(!HasElements()), consider replacing it with if(IsEmpty()). Always prefer the positive form!

    Yes, I know, this is an extreme example: it’s too simple. But think of a more complex class or data flow in which you reuse the same logical flow, even if you’re not really using the exact same lines of code.

By duplicating the logic, you will need to write more tests that do the same thing. Also, if you find a flaw in your logic, you may fix it in some places and forget to fix it in other methods.

    Centralizing it will allow you to build safer code that is easier to test and update.

    A simple way to avoid “logical” duplication? Abstract classes!
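
For instance, here's a sketch of the abstract-class idea applied to the example above: the base class owns the shared logic, and subclasses only provide the count.

abstract class ItemsContainer
{
    public abstract int Count();
    public bool IsEmpty() => Count() == 0;
    public bool HasElements() => Count() > 0;
}

class BookShelf : ItemsContainer
{
    private Book[] myBooks = new Book[]
    {
         new Book(1, "C# in depth"),
         new Book(2, "I promessi paperi")
    };

    public override int Count() => myBooks.Length;
}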

    Well, there are many others… that I expect you to tell me in the comments section!

    Happy coding!

    🐧




  • How to solve InvalidOperationException for constructors using HttpClientFactory in C#



    A suitable constructor for type ‘X’ could not be located. What a strange error message! Luckily it’s easy to solve.


    A few days ago I was preparing the demo for a new article. The demo included a class with an IHttpClientFactory service injected into the constructor. Nothing more.

    Then, running the application (well, actually, executing the code), this error popped out:

    System.InvalidOperationException: A suitable constructor for type ‘X’ could not be located. Ensure the type is concrete and all parameters of a public constructor are either registered as services or passed as arguments. Also ensure no extraneous arguments are provided.

    How to solve it? It’s easy. But first, let me show you what I did in the wrong version.

    Setting up the wrong example

    For this example, I created an elementary project.
    It’s a .NET 7 API project, with only one controller, GenderController, which calls another service defined in the IGenderizeService interface.

    public interface IGenderizeService
    {
        Task<GenderProbability> GetGenderProbabiliy(string name);
    }
    

    IGenderizeService is implemented by a class, GenderizeService, which is the one that fails to load and, therefore, causes the exception to be thrown. The class calls an external endpoint, parses the result, and then returns it to the caller:

    public class GenderizeService : IGenderizeService
    {
        private readonly IHttpClientFactory _httpClientFactory;
    
        public GenderizeService(IHttpClientFactory httpClientFactory)
        {
            _httpClientFactory = httpClientFactory;
        }
    
        public async Task<GenderProbability> GetGenderProbabiliy(string name)
        {
            var httpClient = _httpClientFactory.CreateClient();
    
            var response = await httpClient.GetAsync($"?name={name}");
    
            var result = await response.Content.ReadFromJsonAsync<GenderProbability>();
    
            return result;
        }
    }
    

Finally, I’ve defined the services in the Program class, and then I’ve specified the base URL for the HttpClient instance generated in the GenderizeService class:

    // some code
    
    builder.Services.AddScoped<IGenderizeService, GenderizeService>();
    
    builder.Services.AddHttpClient<IGenderizeService, GenderizeService>(
        client => client.BaseAddress = new Uri("https://api.genderize.io/")
        );
    
    var app = builder.Build();
    
    // some more code
    

    That’s it! Can you spot the error?

    2 ways to solve the error

The error was quite simple, but it took me a while to spot it:

    In the constructor I was injecting an IHttpClientFactory:

    public GenderizeService(IHttpClientFactory httpClientFactory)
    

    while in the host definition I was declaring an HttpClient for a specific class:

    builder.Services.AddHttpClient<IGenderizeService, GenderizeService>
    

Apparently, even though we’ve specified how to create an HttpClient for a specific class, we cannot build that class by injecting an IHttpClientFactory.

    So, here are 2 ways to solve it.

    Use named HttpClient in HttpClientFactory

    Named HttpClients are a helpful way to define a specific HttpClient and use it across different services.

    It’s as simple as assigning a name to an HttpClient instance and then using the same name when you need that specific client.

So, define it at startup:

    builder.Services.AddHttpClient("genderize",
                client => client.BaseAddress = new Uri("https://api.genderize.io/")
            );
    

    and retrieve it using CreateClient:

    public GenderizeService(IHttpClientFactory httpClientFactory)
    {
        _httpClientFactory = httpClientFactory;
    }
    
    public async Task<GenderProbability> GetGenderProbabiliy(string name)
    {
        var httpClient = _httpClientFactory.CreateClient("genderize");
    
        var response = await httpClient.GetAsync($"?name={name}");
    
        var result = await response.Content.ReadFromJsonAsync<GenderProbability>();
    
        return result;
    }
    

    💡 Quick tip: define the HttpClient names in a constant field shared across the whole system!
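
One possible shape for that tip (the class and constant names are illustrative):

public static class HttpClientNames
{
    public const string Genderize = "genderize";
}

// At registration:
builder.Services.AddHttpClient(HttpClientNames.Genderize,
    client => client.BaseAddress = new Uri("https://api.genderize.io/"));

// At usage:
var httpClient = _httpClientFactory.CreateClient(HttpClientNames.Genderize);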

    Inject HttpClient instead of IHttpClientFactory

    The other way is by injecting an HttpClient instance instead of an IHttpClientFactory.

    So we can restore the previous version of the Startup part:

    builder.Services.AddHttpClient<IGenderizeService, GenderizeService>(
                client => client.BaseAddress = new Uri("https://api.genderize.io/")
            );
    

    and, instead of injecting an IHttpClientFactory, we can directly inject an HttpClient instance:

    public class GenderizeService : IGenderizeService
    {
        private readonly HttpClient _httpClient;
    
        public GenderizeService(HttpClient httpClient)
        {
            _httpClient = httpClient;
        }
    
        public async Task<GenderProbability> GetGenderProbabiliy(string name)
        {
            //var httpClient = _httpClientFactory.CreateClient("genderize");
    
            var response = await _httpClient.GetAsync($"?name={name}");
    
            var result = await response.Content.ReadFromJsonAsync<GenderProbability>();
    
            return result;
        }
    }
    

    We no longer need to call _httpClientFactory.CreateClient because the injected instance of HttpClient is already customized with the settings we’ve defined at Startup.

    Further readings

    I’ve briefly talked about HttpClientFactory in one article of my C# tips series:

🔗 C# Tip: use IHttpClientFactory to generate HttpClient instances | Code4IT

    And, more in detail, I’ve also talked about one way to mock HttpClientFactory instances in unit tests using Moq:

    🔗 How to test HttpClientFactory with Moq | Code4IT

    Finally, why do we need to use HttpClientFactories instead of HttpClients?

    🔗 Use IHttpClientFactory to implement resilient HTTP requests | Microsoft Docs

    This article first appeared on Code4IT

    Wrapping up

    Yes, it was that easy!

    We received the error message

    A suitable constructor for type ‘X’ could not be located.

    because we were mixing two ways to customize and use HttpClient instances.

    But we’ve only opened Pandora’s box: we will come back to this topic soon!

    For now, Happy coding!

    🐧


