Category: Storing temporary data

  • Profiling .NET code with MiniProfiler | Code4IT


    Is your application slow? How do you find the bottlenecks? You can use MiniProfiler to profile a .NET API application and analyze the timings of the different operations.


    Sometimes your project does not perform as well as you would expect. Bottlenecks occur, and it can be hard to understand where and why.

    So, the best thing you can do is profile your code and analyze the execution times to understand which parts impact your application's performance the most.

    In this article, we will learn how to use MiniProfiler to profile code in a .NET 5 API project.

    Setting up the project

    For this article, I've created a simple project. This project tells you the average temperature of a place, given its country code (e.g., IT) and postal code (e.g., 10121, for Turin).

    There is only one endpoint, /Weather, that accepts the CountryCode and the PostalCode as input, and returns the temperature in Celsius.

    To retrieve the data, the application calls two external free services: Zippopotam to get the coordinates of the location, and OpenMeteo to get the daily temperature using those coordinates.

    Sequence diagram

    Let’s see how to profile the code to see the timings of every operation.

    Installing MiniProfiler

    As usual, we need to install a NuGet package: since we are working on a .NET 5 API project, you can install the MiniProfiler.AspNetCore.Mvc package, and you're good to go.

    MiniProfiler provides tons of packages you can use to profile your code: for example, you can profile Entity Framework, Redis, PostgreSQL, and more.

    MiniProfiler packages on NuGet

    Once you’ve installed it, we can add it to our project by updating the Startup class.

    In the Configure method, you can simply add MiniProfiler to the ASP.NET pipeline:
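
    A minimal sketch of that call (UseMiniProfiler is the extension method shipped with MiniProfiler's ASP.NET Core integration):

    public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
    {
        // Registers the MiniProfiler middleware in the request pipeline
        app.UseMiniProfiler();
    
        // ...the rest of the pipeline: routing, endpoints, and so on
    }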

    Then, you’ll need to configure it in the ConfigureServices method:

    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMiniProfiler(options =>
            {
                options.RouteBasePath = "/profiler";
                options.ColorScheme = StackExchange.Profiling.ColorScheme.Dark;
            });
    
        services.AddControllers();
        // more...
    }
    

    As you might expect, the king of this method is AddMiniProfiler. It allows you to set up MiniProfiler by configuring an object of type MiniProfilerOptions. There are lots of things you can configure, as you can see on GitHub.

    For this example, I’ve updated the color scheme to use Dark Mode, and I’ve defined the base path of the page that shows the results. The default is mini-profiler-resources, so the results would be available at /mini-profiler-resources/results. With this setting, the result is available at /profiler/results.

    Defining traces

    Time to define our traces!

    When you fire up the application, a MiniProfiler object is created and shared across the project. This object exposes several methods. The most used is Step: it allows you to define a portion of code to profile, by wrapping it into a using block.

    using (MiniProfiler.Current.Step("Getting lat-lng info"))
    {
        (latitude, longitude) = await _locationService.GetLatLng(countryCode, postalCode);
    }
    

    The snippet above defines a step, giving it a name (“Getting lat-lng info”), and profiles everything that happens within those lines of code.

    You can also use nested steps by simply adding a parent step:

    using (MiniProfiler.Current.Step("Get temperature for specified location"))
    {
        using (MiniProfiler.Current.Step("Getting lat-lng info"))
        {
            (latitude, longitude) = await _locationService.GetLatLng(countryCode, postalCode);
        }
    
        using (MiniProfiler.Current.Step("Getting temperature info"))
        {
            temperature = await _weatherService.GetTemperature(latitude, longitude);
        }
    }
    

    In this way, you can create a better structure of traces and perform better analyses. Of course, this method doesn’t know what happens inside the GetLatLng method. If there’s another Step, it will be taken into consideration too.

    You can also use inline steps to trace an operation and return its value on the same line:

    var response = await MiniProfiler.Current.Inline(() => httpClient.GetAsync(fullUrl), "Http call to OpenMeteo");
    

    Inline traces the operation and returns the return value from that method. Notice that it works even for async methods! 🤩

    Viewing the result

    Now that we have everything in place, we can run our application.

    To get better data, you should run the application in a specific way.

    First of all, use the RELEASE configuration. You can change it in the project properties, heading to the Build tab:

    Visual Studio tab for choosing the build configuration

    Then, you should run the application without the debugger attached. You can simply hit Ctrl+F5, or head to the Debug menu and click Start Without Debugging.

    Visual Studio menu to run the application without debugger

    Now, run the application and call the endpoint. Once you’ve got the result, you can navigate to the report page.

    Remember the options.RouteBasePath = "/profiler" option? It’s the one that specifies the path to this page.

    If you head to /profiler/results, you will see a page similar to this one:

    MiniProfiler results

    On the left column, you can see the hierarchy of the messages we’ve defined in the code. On the right column, you can see the timings for each operation.

    Association of every MiniProfiler call to the related result

    Did you notice the Show trivial button in the bottom-right corner of the report? It displays the operations that took so little time that they can easily be ignored. By clicking that button, you'll see many more entries, such as all the operations that the .NET engine performs to handle your HTTP requests, like the Action Filters.

    Trivial operations on MiniProfiler

    Lastly, the More columns button shows, well… more columns! You will see the aggregate timing (the operation + all its children), and the timing from the beginning of the request.

    More Columns showed on MiniProfiler

    The mystery of x-miniprofiler-ids

    Now, there’s one particular thing that I haven’t understood of MiniProfiler: the meaning of x-miniprofiler-ids.

    This value is an array of IDs that represents everything we've profiled with MiniProfiler during this session.

    You can find this array in the HTTP response headers:

    x-miniprofiler-ids HTTP header

    I noticed that every time you perform a call to that endpoint, it adds some values to this array.

    My question is: so what? What can we do with those IDs? Can we use them to filter data, or to see the results in some particular ways?

    If you know how to use those IDs, please drop a message in the comments section 👇

    If you want to run this project and play with MiniProfiler, I've shared it on GitHub.

    🔗 ProfilingWithMiniprofiler repository | GitHub

    In this project, I’ve used Zippopotam to retrieve latitude and longitude given a location

    🔗 Zippopotam

    Once I retrieved the coordinates, I used Open Meteo to get the weather info for that position.

    🔗 Open Meteo documentation | OpenMeteo

    And then, obviously, I used MiniProfiler to profile my code.

    🔗 MiniProfiler repository | GitHub

    I’ve already used MiniProfiler for analyzing the performances of an application, and thanks to this library I was able to improve the response time from 14 seconds (yes, seconds!) to less than 3. I’ve explained all the steps in 2 articles.

    🔗 How I improved the performance of an endpoint by 82% – part 1 | Code4IT

    🔗 How I improved the performance of an endpoint by 82% – part 2 | Code4IT

    Wrapping up

    In this article, we’ve seen how we can profile .NET applications using MiniProfiler.

    This NuGet Package works for almost every version of .NET, from the dear old .NET Framework to the most recent one, .NET 6.

    A suggestion: configure it in a way that lets you turn it off easily, maybe using environment variables. This gives you the possibility to switch it off when tracing is no longer required, and to speed up the application.
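
    A minimal sketch of that idea (the ENABLE_MINIPROFILER variable name is hypothetical, something you would define yourself):

    public void ConfigureServices(IServiceCollection services)
    {
        // Hypothetical flag, read from an environment variable you control
        bool enableProfiling = Environment.GetEnvironmentVariable("ENABLE_MINIPROFILER") == "true";
    
        if (enableProfiling)
        {
            services.AddMiniProfiler(options => options.RouteBasePath = "/profiler");
        }
    }

    Remember to guard the corresponding UseMiniProfiler call in Configure with the same flag.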

    Ever used it? Any alternative tools?

    And, most of all, what the f**k is that x-miniprofiler-ids array??😶

    Happy coding!

    🐧




  • use yield return to return one item at the time | Code4IT



    Yield is a keyword that allows you to return one item at a time instead of creating a full list and returning it as a whole.


    To me, yield return has always been one of the most difficult things to understand.

    Now that I’ve understood it (not thoroughly, but enough to explain it), it’s my turn to share my learnings.

    So, what does yield return mean? How is it related to collections of items?

    Using Lists

    Say that you’re returning a collection of items and that you need to iterate over them.

    A first approach could be creating a list with all the items, returning it to the caller, and iterating over the collection:

    IEnumerable<int> WithList()
    {
        List<int> items = new List<int>();
    
        for (int i = 0; i < 10; i++)
        {
            Console.WriteLine($"Added item {i}");
            items.Add(i);
        }
    
        return items;
    }
    
    void Main()
    {
        var items = WithList();
    
        foreach (var i in items)
        {
            Console.WriteLine($"This is Mambo number {i}");
        }
    }
    

    This snippet creates the whole collection and then prints the values inside that list. On the console, you’ll see this text:

    Added item 0
    Added item 1
    Added item 2
    Added item 3
    Added item 4
    Added item 5
    Added item 6
    Added item 7
    Added item 8
    Added item 9
    This is Mambo number 0
    This is Mambo number 1
    This is Mambo number 2
    This is Mambo number 3
    This is Mambo number 4
    This is Mambo number 5
    This is Mambo number 6
    This is Mambo number 7
    This is Mambo number 8
    This is Mambo number 9
    

    This means that, if you need to operate over a collection with 1 million items, you'll first create ALL the items, and only then perform operations on each of them. This approach has two main disadvantages: it's slow (especially if you only need to work with a subset of those items), and it occupies a lot of memory.

    With Yield

    We can take another approach: using the yield return keywords:

    IEnumerable<int> WithYield()
    {
        for (int i = 0; i < 10; i++)
        {
            Console.WriteLine($"Returning item {i}");
    
            yield return i;
        }
    }
    
    void Main()
    {
        var items = WithYield();
    
        foreach (var i in items)
        {
            Console.WriteLine($"This is Mambo number {i}");
        }
    }
    

    With this method, the order of messages is different:

    Returning item 0
    This is Mambo number 0
    Returning item 1
    This is Mambo number 1
    Returning item 2
    This is Mambo number 2
    Returning item 3
    This is Mambo number 3
    Returning item 4
    This is Mambo number 4
    Returning item 5
    This is Mambo number 5
    Returning item 6
    This is Mambo number 6
    Returning item 7
    This is Mambo number 7
    Returning item 8
    This is Mambo number 8
    Returning item 9
    This is Mambo number 9
    

    So, instead of creating the whole list, we create one item at a time, and only when needed.
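
    To see this laziness at work, you can stop the enumeration early: only the requested items are ever produced. A small sketch reusing the WithYield method above (Take comes from System.Linq):

    var items = WithYield();
    
    // Prints only "Returning item 0" and "Returning item 1":
    // the remaining eight items are never created.
    foreach (var i in items.Take(2))
    {
        Console.WriteLine($"This is Mambo number {i}");
    }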

    Benefits of Yield

    As I said before, there are several benefits to yield: the application is more performant in terms of both execution time and memory usage.

    It’s like an automatic iterator: every time you get a result, the iterator advances to the next item.

    Just a note: yield works only for methods that return IAsyncEnumerable<T>, IEnumerable<T>, IEnumerable, IEnumerator<T>, or IEnumerator.

    You cannot use it with a method that returns, for instance, List<T>, because, as the error message says,

    The body of X cannot be an iterator block because List<int> is not an iterator interface type

    Cannot use yield return with lists

    A real use case

    If you use NUnit as a test suite, you’ve probably already used this keyword.

    In particular, when using the TestCaseSource attribute, you specify the name of the class that outputs the test cases.

    public class MyTestClass
    {
        [TestCaseSource(typeof(DivideCases))]
        public void DivideTest(int n, int d, int q)
        {
            Assert.AreEqual(q, n / d);
        }
    }
    
    class DivideCases : IEnumerable
    {
        public IEnumerator GetEnumerator()
        {
            yield return new object[] { 12, 3, 4 };
            yield return new object[] { 12, 2, 6 };
            yield return new object[] { 12, 4, 3 };
        }
    }
    

    When executing the tests, an iterator returns a test case at a time, without creating a full list of test cases.

    The previous snippet is taken directly from NUnit's documentation for the TestCaseSource attribute, which you can find here.

    Wrapping up

    Yes, yield is quite a difficult keyword to understand.

    To read more, head to the official docs.

    Another good resource is “C# – Use yield return to minimize memory usage” by Makolyte. You should definitely check it out!

    And, if you want, check out the conversation I had about this keyword on Twitter.

    Happy coding!

    🐧






  • should we trust Open Source after Log4J’s issues? | Code4IT


    With Log4J's vulnerability, we've all been reminded that systems are vulnerable, and OSS is not immune either. What should we do now?


    After the Log4J vulnerability, we should reflect on how open source impacts our projects, and what are the benefits and disadvantages of using such libraries.

    The following article is more of an opinion: just some random thoughts about what happened and what we can learn from this event.

    A recap of the Log4J vulnerability

    To give some context to those who have never heard (or forgot) about the Log4J vulnerability, here’s a short recap.

    Log4J is a popular Java library for logging. So popular that it has been absorbed into the Apache ecosystem.

    For some reason I haven’t understood, the logger evaluates the log messages instead of just treating them as strings. So, a kind of SQL injection (but for logs) could be executed: by sending a specific string format to services that use Log4J, that string is evaluated and executed on the server; as a result, external scripts could be run on the server, allowing attackers to access your server. Of course, it’s not a detailed and 100% accurate description: there are plenty of resources on the Internet if you want to deep dive into this topic.

    Some pieces of evidence show that the earliest exploitation of this vulnerability happened on Dec 1, 2021, as stated by Matthew Prince, CEO of Cloudflare, in this Tweet. But the vulnerability became public 9 days later.

    Benefits of OSS projects

    The source code of Log4J is publicly available on GitHub.

    This means that:

    it’s free to use (yes, OSS != free, but it’s rare to find paid OSS projects)
    you can download and run the source code
    you can inspect the code and propose changes
    it saves you time: you don't have to reinvent the wheel, since everything has already been done by others.

    Issues with OSS projects

    Given that the source code is publicly accessible, attackers can study it to find security flaws, and – of course – take advantage of those vulnerabilities before the community notices them.

    Most of the time, OSS projects are created by single developers to solve their own specific problems. Then they share those repositories to help their peers and allow other devs to work on the library. All the coding is done for free and in their spare time. As you can expect, the quality is deeply impacted by this.

    What to do with OSS projects?

    So, what should we do with all those OSS projects? Should we stop using them?

    I don’t think so. just because those kinds of issues can arise, it doesn’t mean that they will arise so often.

    Also, it’s pretty stupid to build everything from scratch “just in case”. Just because attackers don’t know the source code, it doesn’t mean that they can’t find a way to access your systems.

    On the other hand, we should not blindly use every library we see on GitHub. It’s not true that just because it’s open source, it’s safe to use – as the Log4J story taught us.

    So, what should we do?

    I don’t have an answer. But for sure we can perform some operations when working on our projects.

    We should review which external packages we’re using, and keep track of their version. Every N months, we should write a recap (even an Excel file is enough) to update the list of packages we’re using. In this way, if a vulnerability is discovered for a package, and a patch is released, we can immediately apply that patch to our applications.

    Finding installed dependencies for .NET projects is quite simple: you can open the csproj file and see the list of NuGet packages you’re using.

    NuGet packages listed in the csproj file

    The problem with this approach is that you don’t see the internal dependencies: if one of the packages you’re using depends on another package with a known vulnerability, your application may be vulnerable too.
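
    For .NET, one way to surface those transitive packages is the dotnet CLI (the command below is part of the standard .NET SDK):

    dotnet list package --include-transitive

    It prints, for each project, both the top-level NuGet packages and the transitive ones they pull in.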

    How can you list all your dependencies? Are there any tools that work with your programming language? Drop a comment below, it can help other devs!

    Then, before choosing one project over another, we should answer (at least) three questions. Does this package solve our problem? Does the code look safe to use? Is the repository active, or is it stale?

    Spend some time skimming the source code, looking for weird pieces of code. Pay attention when they evaluate the code (possible issues like with Log4J), when they perform unexpected calls to external services (are they tracking you?), and so on.

    Look at the repository history: is the repository still being updated? Is it maintained by a single person, or is there a community around it?

    You can find this info on GitHub under the Insights tab.

    In the following picture, you can see the contributions to the Log4J library (available here):

    Contributions graph to Log4J repository

    Does this repo have tests? Without tests (or, maybe worse, with meaningless tests), the package should not be considered safe. Have a look at the code and at the CI pipelines, if publicly available.

    Finally, a hope for the future: defining a standard and some procedures to rate the security of a package/repository. I don't know if it is feasible, but it would be a good addition to the OSS world.

    Further readings

    If you’re interested in the general aspects of the Log4J vulnerability, you can have a look at this article by the Wall Street Journal:

    🔗What Is the Log4j Vulnerability? What to Know | The Wall Street Journal

    If you prefer a more technical article, DataDog’s blog got you covered. In particular, jump to the “How the Log4Shell vulnerability works” section.

    🔗Takeaways from the Log4j Log4Shell vulnerability | DataDog

    But if you prefer some even more technical info, you can head to the official vulnerability description:

    🔗CVE-2021-45046 vulnerability

    Here’s the JIRA ticket created to track it:

    🔗JNDI lookups in layout (not message patterns) enabled in Log4j2 < 2.16.0 | Jira

    Wrapping up

    This was not the usual article/tutorial; it was more of an opinion on the current status of OSS and on what we should do to avoid issues like those caused by Log4J.

    It’s not the first vulnerability, and for sure it won’t be the only one.

    What do you think? Should we move away from OSS?

    How would you improve the OSS world?

    Happy coding!

    🐧






  • How to run PostgreSQL locally with Docker | Code4IT



    PostgreSQL is a famous relational database. In this article, we will learn how to run it locally using Docker.


    PostgreSQL is a relational database characterized by being open source and by a growing community supporting the project.

    There are several ways to host a Postgres database online so that you can use it to store data for your live applications. But for local development, you might want to spin up a Postgres database on your local machine.

    In this article, we will learn how to run PostgreSQL on a Docker container for local development.

    Pull Postgres Docker Image

    As you may know, Docker allows you to download images of almost everything you want in order to run them locally (or wherever you want) without installing too much stuff.

    The best way to check the available versions is to head to DockerHub and search for postgres.

    Postgres image on DockerHub

    Here you’ll find a description of the image, all the documentation related to the installation parameters, and more.

    If you have Docker already installed, just open a terminal and run
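
    docker pull postgres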

    to download the latest image of PostgreSQL.

    Docker pull result

    Run the Docker Container

    Now that we have the image in our local environment, we can spin up a container and specify some parameters.

    Below, you can see the full command.

    docker run \
        --name myPostgresDb \
        -p 5455:5432 \
        -e POSTGRES_USER=postgresUser \
        -e POSTGRES_PASSWORD=postgresPW \
        -e POSTGRES_DB=postgresDB \
        -d \
        postgres
    

    Time to explain each and every part! 🔎

    docker run is the command used to create and run a new container based on an already downloaded image.

    --name myPostgresDb is the name we assign to the container that we are creating.

    -p 5455:5432 is the port mapping. Postgres natively exposes port 5432, and we have to map that port (which lives within Docker) to a local one. In this case, the local port 5455 maps to Docker's port 5432.

    -e POSTGRES_USER=postgresUser, -e POSTGRES_PASSWORD=postgresPW, and -e POSTGRES_DB=postgresDB set some environment variables. Of course, we’re defining the username and password of the admin user, as well as the name of the database.

    -d indicates that the container runs in detached mode, meaning that it runs as a background process.

    postgres is the name of the image we are using to create the container.

    As a result, you will see the newly created container on the CLI (by running docker ps) or in a UI tool like Docker Desktop:

    Containers running on Docker Desktop

    If you forgot which environment variables you’ve defined for that container, you can retrieve them using Docker Desktop or by running docker exec myPostgresDb env, as shown below:

    List all environment variables associated to a Container

    Note: environment variables may change with newer image versions. Always refer to the official docs, specifically to the documentation related to the image version you are consuming.

    Now that we have Postgres up and running, we can work with it.

    You can work with the DB using the console, or, if you prefer, using a UI.
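
    If you go the console route, a quick option is to run psql directly inside the container (a sketch based on the container name and credentials defined above):

    docker exec -it myPostgresDb psql -U postgresUser -d postgresDB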

    I prefer the second approach (yes, I know, it's not as cool as using the terminal, but it works), so I downloaded pgAdmin.

    There, you can connect to the server by using the environment variables you've defined when running docker run. Remember that the hostname is simply localhost.

    Connect to Postgres by using pgAdmin

    And we've finished! 🥳 Now you can work with a local instance of Postgres, and shut it down and remove it when you don't need it anymore.

    Additional resources

    I’ve already introduced Docker in another article, where I explained how to run MongoDB locally:

    🔗 First steps with Docker | Code4IT

    As usual, the best resource is the official website:

    🔗 PostgreSQL image | DockerHub

    Finally, a special mention to Francesco Ciulla, who taught me how to run Postgres with Docker while I taught him how to query it with C#. Yes, mutual help! 👏

    🔗 Francesco Ciulla’s blog

    Wrapping up

    In this article, we’ve seen how to download and install a PostgreSQL database on our local environment by using Docker.

    It’s just a matter of running a few commands and paying attention to the parameters passed in input.

    In a future article, we will learn how to perform CRUD operations on a PostgreSQL database using C#.

    For now, happy coding!

    🐧




  • Avoid mental mappings | Code4IT




    Every name must be meaningful and clear. If names are not obvious, other developers (or your future self) may misinterpret what you meant.

    Avoid using mental mapping to abbreviate names, unless the abbreviation is obvious or common.

    Names should not be based on mental mappings; even worse are mappings without context.

    Bad mental mappings

    Take this bad example:

    public void RenderWOSpace()
    

    What is a WOSpace? Without context, readers won’t understand its meaning. Ok, some people use WO as an abbreviation of without.

    So, a better name is, of course:

    public void RenderWithoutSpace()
    

    Acceptable mappings

    Some abbreviations are quite obvious and are totally fine to be used.

    For instance, standard abbreviations, like km for kilometer.

    public int DistanceInKm()
    

    or variables used, for instance, in a loop:

    for (int i = 0; i < 100; i++) { }
    

    or in lambdas:

    int[] collection = new int[] { 2, 3, 5, 8 };
    collection.Where(c => c < 5);
    

    It all depends on the scope: the narrower the scope, the more meaningless (don't get me wrong!) the variable name can be.

    An edge case

    Sometimes, a common (almost obvious) abbreviation can have multiple meanings. What does DB mean? Database? Decibel? It all depends on the context!

    So, a _dbConnection obviously refers to the database. But what about defaultDb: is it the default decibel value or the default database?
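
    A hypothetical snippet to make the ambiguity concrete (both readings are plausible in an audio application that also talks to a database):

    int defaultDb = 60; // the default decibel level?
    
    // ...or, somewhere else in the same project:
    // var defaultDb = ConnectToDatabase("main"); // the default database? (ConnectToDatabase is made up)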

    This article first appeared on Code4IT

    Conclusion

    As usual, clarity is the key to good code: a name, be it for classes, modules, or variables, should be explicit and obvious to everyone.

    So, always use meaningful names!

    Happy coding!

    🐧




  • CRUD operations on PostgreSQL using C# and Npgsql | Code4IT



    Once we have a Postgres instance running, we can perform operations on it. We will use Npgsql to query a Postgres instance with C#.


    PostgreSQL is one of the most famous relational databases. It has got tons of features, and it is open source.

    In a previous article, we’ve seen how to run an instance of Postgres by using Docker.

    In this article, we will learn how to perform CRUD operations in C# by using Npgsql.

    Introducing the project

    To query a Postgres database, I’ve created a simple .NET API application with CRUD operations.

    We will operate on a single table that stores info for my board game collection. Of course, we will Create, Read, Update and Delete items from the DB (otherwise it would not be an article about CRUD operations 😅).

    Before we start writing code, we need to install Npgsql, a NuGet package that acts as a data provider for PostgreSQL.

    Npgsql NuGet package

    Open the connection

    Once we have created the application, we can instantiate and open a connection against our database.

    private NpgsqlConnection connection;
    
    public NpgsqlBoardGameRepository()
    {
        connection = new NpgsqlConnection(CONNECTION_STRING);
        connection.Open();
    }
    

    We simply create a NpgsqlConnection object, and we keep a reference to it. We will use that reference to perform queries against our DB.

    Connection string

    The only parameter we can pass as input to the NpgsqlConnection constructor is the connection string.

    You must compose it by specifying the host address, the port, the database name we are connecting to, and the credentials of the user that is querying the DB.

    private const string CONNECTION_STRING = "Host=localhost:5455;" +
        "Username=postgresUser;" +
        "Password=postgresPW;" +
        "Database=postgresDB";
    

    If you instantiate Postgres using Docker following the steps I described in a previous article, most of the connection string configurations we use here match the Environment variables we’ve defined before.
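
    As a side note, Npgsql also accepts the port as a separate key, so an equivalent connection string (assuming the standard Host and Port keywords) would be:

    private const string CONNECTION_STRING = "Host=localhost;" +
        "Port=5455;" +
        "Username=postgresUser;" +
        "Password=postgresPW;" +
        "Database=postgresDB";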

    CRUD operations

    Now that everything is in place, it’s time to operate on our DB!

    We are working on a table, Games, whose name is stored in a constant:

    private const string TABLE_NAME = "Games";
    

    The Games table consists of several fields:

    Field name         Field type
    id                 INTEGER PK
    Name               VARCHAR NOT NULL
    MinPlayers         SMALLINT NOT NULL
    MaxPlayers         SMALLINT
    AverageDuration    SMALLINT

    This table is mapped to the BoardGame class:

    public class BoardGame
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public int MinPlayers { get; set; }
        public int MaxPlayers { get; set; }
        public int AverageDuration { get; set; }
    }
    

    To double-check the results, you can use a UI tool to access the Database. For instance, if you use pgAdmin, you can find the list of databases running on a host.

    Database listing on pgAdmin

    And, if you want to see the content of a particular table, you can select it under Schemas > public > Tables > tablename, and then select View > All Rows

    How to view table rows on pgAdmin

    Create

    First things first, we have to insert some data in our DB.

    public async Task Add(BoardGame game)
    {
        string commandText = $"INSERT INTO {TABLE_NAME} (id, Name, MinPlayers, MaxPlayers, AverageDuration) VALUES (@id, @name, @minPl, @maxPl, @avgDur)";
        await using (var cmd = new NpgsqlCommand(commandText, connection))
        {
            cmd.Parameters.AddWithValue("id", game.Id);
            cmd.Parameters.AddWithValue("name", game.Name);
            cmd.Parameters.AddWithValue("minPl", game.MinPlayers);
            cmd.Parameters.AddWithValue("maxPl", game.MaxPlayers);
            cmd.Parameters.AddWithValue("avgDur", game.AverageDuration);
    
            await cmd.ExecuteNonQueryAsync();
        }
    }
    

    The commandText string contains the full command to be issued. In this case, it’s a simple INSERT statement.

    We use the commandText string to create a NpgsqlCommand object, specifying the query and the connection on which we will perform that query. Note that the command must be disposed after use: wrap it in a using block.

    Then, we add the parameters to the query. AddWithValue accepts two parameters: the first is the name of the key, with the same name defined in the query but without the @ symbol (in the query we use @minPl, and as a parameter we use minPl); the second is the value to assign to that parameter.

    Never, ever create the query by concatenating the input params into the string: that would expose you to SQL Injection attacks.

    Finally, we can execute the query asynchronously with ExecuteNonQueryAsync.
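
    To put it all together, here is a hypothetical call site (the repository class name comes from the constructor shown earlier; the board game values are made up for the example):

    var repository = new NpgsqlBoardGameRepository();
    
    await repository.Add(new BoardGame
    {
        Id = 1,
        Name = "Azul",
        MinPlayers = 2,
        MaxPlayers = 4,
        AverageDuration = 45
    });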

    Read

    Now that we have some games stored in our table, we can retrieve those items:

    public async Task<BoardGame> Get(int id)
    {
        string commandText = $"SELECT * FROM {TABLE_NAME} WHERE ID = @id";
        await using (NpgsqlCommand cmd = new NpgsqlCommand(commandText, connection))
        {
            cmd.Parameters.AddWithValue("id", id);
    
            await using (NpgsqlDataReader reader = await cmd.ExecuteReaderAsync())
                while (await reader.ReadAsync())
                {
                    BoardGame game = ReadBoardGame(reader);
                    return game;
                }
        }
        return null;
    }
    

    Again, we define the query as text, use it to create a NpgsqlCommand, specify the parameters' values, and then execute the query.

    The ExecuteReaderAsync method returns a NpgsqlDataReader object that we can use to fetch the data. We advance the position of the stream with reader.ReadAsync(), and then we convert the current row with ReadBoardGame(reader), in this way:

    private static BoardGame ReadBoardGame(NpgsqlDataReader reader)
    {
        int? id = reader["id"] as int?;
        string name = reader["name"] as string;
        short? minPlayers = reader["minplayers"] as Int16?;
        short? maxPlayers = reader["maxplayers"] as Int16?;
        short? averageDuration = reader["averageduration"] as Int16?;
    
        BoardGame game = new BoardGame
        {
            Id = id.Value,
            Name = name,
            MinPlayers = minPlayers.Value,
            MaxPlayers = maxPlayers.Value,
            AverageDuration = averageDuration.Value
        };
        return game;
    }
    

    This method simply reads the data associated with each column (for instance, reader["averageduration"]), converts each value to its data type, and then builds and returns a BoardGame object.

    Update

    Updating items is similar to inserting a new item.

    public async Task Update(int id, BoardGame game)
    {
        var commandText = $@"UPDATE {TABLE_NAME}
                    SET Name = @name, MinPlayers = @minPl, MaxPlayers = @maxPl, AverageDuration = @avgDur
                    WHERE id = @id";
    
        await using (var cmd = new NpgsqlCommand(commandText, connection))
        {
            cmd.Parameters.AddWithValue("id", game.Id);
            cmd.Parameters.AddWithValue("name", game.Name);
            cmd.Parameters.AddWithValue("minPl", game.MinPlayers);
            cmd.Parameters.AddWithValue("maxPl", game.MaxPlayers);
            cmd.Parameters.AddWithValue("avgDur", game.AverageDuration);
    
            await cmd.ExecuteNonQueryAsync();
        }
    }
    

    Of course, the query is different, but the general structure is the same: create the query, create the Command, add parameters, and execute the query with ExecuteNonQueryAsync.

    Delete

    Just for completeness, here’s how to delete an item by specifying its id.

    public async Task Delete(int id)
    {
        string commandText = $"DELETE FROM {TABLE_NAME} WHERE ID=(@p)";
        await using (var cmd = new NpgsqlCommand(commandText, connection))
        {
            cmd.Parameters.AddWithValue("p", id);
            await cmd.ExecuteNonQueryAsync();
        }
    }
    

    Always the same story, so I have nothing to add.

    ExecuteNonQueryAsync vs ExecuteReaderAsync

    As you’ve seen, some operations use ExecuteNonQueryAsync, while some others use ExecuteReaderAsync. Why?

    ExecuteNonQuery and ExecuteNonQueryAsync execute commands against a connection. Those methods do not return data from the database, but only the number of rows affected. They are used to perform INSERT, UPDATE, and DELETE operations.

    On the contrary, ExecuteReader and ExecuteReaderAsync are used to perform queries on the database and return a DbDataReader object, which is a read-only stream of rows retrieved from the data source. They are used in conjunction with SELECT queries.

    Bonus 1: Create the table if not already existing

    Of course, you can also create tables programmatically.

    public async Task CreateTableIfNotExists()
    {
        var sql = $"CREATE TABLE if not exists {TABLE_NAME}" +
            $"(" +
            $"id serial PRIMARY KEY, " +
            $"Name VARCHAR (200) NOT NULL, " +
            $"MinPlayers SMALLINT NOT NULL, " +
            $"MaxPlayers SMALLINT, " +
            $"AverageDuration SMALLINT" +
            $")";
    
        using var cmd = new NpgsqlCommand(sql, connection);
    
        await cmd.ExecuteNonQueryAsync();
    }
    

    Again, nothing fancy: create the command text, create a NpgsqlCommand object, and execute the command.

    Bonus 2: Check the database version

    To check if the database is up and running, and your credentials are correct (those set in the connection string), you might want to retrieve the DB version.

    You can do it in 2 ways.

    With the following method, you query for the version directly on the database.

    public async Task<string> GetVersion()
    {
        var sql = "SELECT version()";
    
        using var cmd = new NpgsqlCommand(sql, connection);
    
        var versionFromQuery = (await cmd.ExecuteScalarAsync()).ToString();
    
        return versionFromQuery;
    }
    

    This method returns lots of info that directly depend on the database instance. In my case, I see PostgreSQL 14.1 (Debian 14.1-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit.

    The other way is to use the PostgreSqlVersion property.

    public Version GetVersion()
    {
        // PostgreSqlVersion is a plain property of the connection:
        // no query is issued, so there is no need for async here
        Version versionFromConnection = connection.PostgreSqlVersion;
    
        return versionFromConnection;
    }
    

    PostgreSqlVersion returns a Version object containing some fields like Major, Minor, Revision, and more.

    PostgresVersion from connection info

    You can call the ToString method of that object to get a value like “14.1”.

    Additional readings

    In a previous article, we’ve seen how to download and run a PostgreSQL instance on your local machine using Docker.

    🔗 How to run PostgreSQL locally with Docker | Code4IT

    To query PostgreSQL with C#, we used the Npgsql NuGet package. So, you might want to read the official documentation.

    🔗 Npgsql documentation | Npgsql

    In particular, an important part to consider is the mapping between C# and SQL data types:

    🔗 PostgreSQL to C# type mapping | Npgsql

    When talking about parameters to be passed to the query, I mentioned the SQL Injection vulnerability. Here you can read more about it.

    🔗 SQL Injection | Imperva

    Finally, here you can find the repository used for this article.

    🔗 Repository used for this article | GitHub

    This article first appeared on Code4IT 🐧

    Wrapping up

    In this article, we’ve learned how to perform simple operations on a PostgreSQL database to retrieve and update the content of a table.

    This is the most basic way to perform those operations. You explicitly write the queries and issue them without much stuff in between.

    In future articles, we will see some other ways to perform the same operations in C#, but using other tools and packages. Maybe Entity Framework? Maybe Dapper? Stay tuned!

    Happy coding!

    🐧




  • Exception handling with WHEN clause | Code4IT




    From C# 6 on, you can use the when keyword to specify a condition before handling an exception.

    Consider this – pretty useless, I have to admit – type of exception:

    public class RandomException : System.Exception
    {
        public int Value { get; }
        public RandomException()
        {
            Value = (new Random()).Next();
        }
    }
    

    This exception type contains a Value property which is populated with a random value when the exception is thrown.

    What if you want to print a different message depending on whether the Value property is odd or even?

    You can do it this way:

    try
    {
        throw new RandomException();
    }
    catch (RandomException re)
    {
        if(re.Value % 2 == 0)
            Console.WriteLine("Exception with even value");
        else
            Console.WriteLine("Exception with odd value");
    }
    

    But, well, you should keep your catch blocks as simple as possible.

    That’s where the when keyword comes in handy.

    CSharp when clause

    You can use it to create two distinct catch blocks, each of which handles its case in the cleanest way possible.

    try
    {
        throw new RandomException();
    }
    catch (RandomException re) when (re.Value % 2 == 0)
    {
        Console.WriteLine("Exception with even value");
    }
    catch (RandomException re)
    {
        Console.WriteLine("Exception with odd value");
    }
    

    You must use the when keyword in conjunction with a condition, which can also reference the current instance of the exception being caught. In fact, here the condition references the Value property of the RandomException instance.
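
    Note that a filter that evaluates to false simply makes the runtime skip that catch block and look for the next matching one; if nothing matches, the exception propagates to the caller. A tiny sketch of that behavior:

    try
    {
        throw new RandomException();
    }
    catch (RandomException re) when (re.Value < 0)
    {
        // Never hit: Random.Next() only returns non-negative values,
        // so this filter is always false and the block is skipped.
        Console.WriteLine("Exception with negative value");
    }
    // No other catch block here: the exception bubbles up to the caller.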

    A real usage: HTTP response errors

    Ok, that example with the random exception is a bit… useless?

    Let’s see a real example: handling different HTTP status codes in case of failing HTTP calls.

    In the following snippet, I call an endpoint that returns a specified status code (506, in my case).

    try
    {
        var endpoint = "https://mock.codes/506";
        var httpClient = new HttpClient();
        var response = await httpClient.GetAsync(endpoint);
        response.EnsureSuccessStatusCode();
    }
    catch (HttpRequestException ex) when (ex.StatusCode == (HttpStatusCode)506)
    {
        Console.WriteLine("Handle 506: Variant also negotiates");
    }
    catch (HttpRequestException ex)
    {
        Console.WriteLine("Handle another status code");
    }
    

    If the response is not a success, response.EnsureSuccessStatusCode() throws an exception of type HttpRequestException. The thrown exception contains some info about the returned status code, which we can use to route the exception handling to the correct catch block using when (ex.StatusCode == (HttpStatusCode)506).

    Quite interesting, uh? 😉

    This article first appeared on Code4IT

    To read more, you can head to the official documentation, even though there's not much there.

    Happy coding!

    🐧




  • injecting and testing the current time with TimeProvider and FakeTimeProvider | Code4IT




    Things that depend on concrete resources are difficult to handle when testing. Think of the file system: for tests to work properly, you have to ensure that the file system is structured exactly as you expect it to be.

    A similar issue occurs with dates: if you write tests based on the current date, they will fail the next time you run them.

    In short, you should find a way to abstract these functionalities, to make them usable in the tests.

    In this article, we are going to focus on the handling of dates: we’ll learn what the TimeProvider class is, how to use it and how to mock it.

    The old way for handling dates: a custom interface

    Back in the day, the most straightforward approach to adding abstraction around date management was to manually create an interface, or an abstract class, to wrap the access to the current date:

    public interface IDateTimeWrapper
    {
      DateTime GetCurrentDate();
    }
    

    Then, the standard implementation implemented the interface by using only the UTC date:

    public class DateTimeWrapper : IDateTimeWrapper
    {
      public DateTime GetCurrentDate() => DateTime.UtcNow;
    }
    

    A similar approach is to have an abstract class instead:

    public abstract class DateTimeWrapper
    {
      public virtual DateTime GetCurrentDate() => DateTime.UtcNow;
    }
    

    Easy: you then have to add an instance of it in the DI engine, and you are good to go.

    The only problem? You have to do it for every project you are working on. Quite a waste of time!

    How to use TimeProvider in a .NET application to get the current date

    Along with .NET 8, the .NET team released an abstract class named TimeProvider. Beyond providing an abstraction for the local time, this class exposes methods for working with high-precision timestamps and time zones.
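
    For instance, the high-precision side looks like this (GetTimestamp and GetElapsedTime are part of the TimeProvider API, useful for measuring durations without touching the wall clock):

    long start = TimeProvider.System.GetTimestamp();
    
    // ...some work to measure...
    
    TimeSpan elapsed = TimeProvider.System.GetElapsedTime(start);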

    It’s important to notice that dates are returned as DateTimeOffset, and not as DateTime instances.

    TimeProvider is available out of the box, even in a plain .NET console application, through the TimeProvider.System singleton:

    static void Main(string[] args)
    {
      Console.WriteLine("Hello, World!");
      
      DateTimeOffset utc = TimeProvider.System.GetUtcNow();
      Console.WriteLine(utc);
    
      DateTimeOffset local = TimeProvider.System.GetLocalNow();
      Console.WriteLine(local);
    }
    

    On the contrary, if you need to use Dependency Injection, for example, in .NET APIs, you have to inject it as a singleton, like this:

    builder.Services.AddSingleton(TimeProvider.System);
    

    So that you can use it like this:

    public class SummerVacationCalendar
    {
        private readonly TimeProvider _timeProvider;
    
        public SummerVacationCalendar(TimeProvider timeProvider)
        {
            this._timeProvider = timeProvider;
        }
    
        public bool ItsVacationTime()
        {
            var today = _timeProvider.GetLocalNow();
            return today.Month == 8;
        }
    }
    

    How to test TimeProvider with FakeTimeProvider

    Now, how can we test the ItsVacationTime method of the SummerVacationCalendar class?

    We can use the Microsoft.Extensions.TimeProvider.Testing NuGet library, also from Microsoft, which provides a FakeTimeProvider class that acts as a stub for the TimeProvider abstract class:

    TimeProvider.Testing NuGet package

    By using the FakeTimeProvider class, you can set the current UTC and Local time, as well as configure the other options provided by TimeProvider.

    Here’s an example:

    [Fact]
    public void WhenItsAugust_ShouldReturnTrue()
    {
     // Arrange
      var fakeTime = new FakeTimeProvider();
      fakeTime.SetUtcNow(new DateTimeOffset(2025, 8, 14, 22, 24, 12, TimeSpan.Zero));
      var sut = new SummerVacationCalendar(fakeTime);
    
     // Act
      var isVacation = sut.ItsVacationTime();
    
     // Assert
      Assert.True(isVacation);
    }
    
    [Fact]
    public void WhenItsNotAugust_ShouldReturnFalse()
    {
     // Arrange
      var fakeTime = new FakeTimeProvider();
      fakeTime.SetUtcNow(new DateTimeOffset(2025, 3, 14, 22, 24, 12, TimeSpan.Zero));
      var sut = new SummerVacationCalendar(fakeTime);
    
     // Act
      var isVacation = sut.ItsVacationTime();
    
     // Assert
      Assert.False(isVacation);
    }
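
    Beyond SetUtcNow, FakeTimeProvider also lets you advance the fake clock manually and pin a local time zone (Advance and SetLocalTimeZone are part of the testing library's API):

    var fakeTime = new FakeTimeProvider();
    fakeTime.SetUtcNow(new DateTimeOffset(2025, 8, 1, 10, 0, 0, TimeSpan.Zero));
    fakeTime.SetLocalTimeZone(TimeZoneInfo.FindSystemTimeZoneById("Europe/Rome"));
    
    fakeTime.Advance(TimeSpan.FromHours(2)); // the fake clock now reads 12:00 UTC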
    

    Further readings

    Actually, TimeProvider provides way more functionalities than just returning the UTC and the Local time.

    Maybe we’ll explore them in the future. But for now, do you know how the DateTimeKind enumeration impacts the way you create new DateTimes?

    🔗 C# tip: create correct DateTimes with DateTimeKind | Code4IT

    This article first appeared on Code4IT 🐧

    However, always remember to test the code not against the actual time but against static values. And if, for some reason, you cannot add TimeProvider to your classes, there are other, less intrusive strategies that you can use (which can work for other types of dependencies as well, like the file system):

    🔗 3 ways to inject DateTime and test it | Code4IT

    Wrapping up

    I hope you enjoyed this article! Let’s keep in touch on LinkedIn, Twitter or BlueSky! 🤜🤛

    Happy coding!

    🐧






  • Try Cross-browser Testing! (For Free!)



    TLDR: You can cross-browser test your website in real browsers for free without installing anything by using Browserling. It runs all browsers (Chrome, Firefox, Safari, Edge, etc) on all systems so you don’t need to download them or keep your own browser stack.

    What Is Cross-browser Testing?

    Cross-browser testing means checking how a website looks and works in different browsers. Every browser, like Chrome, Firefox, Edge, or Safari, shows websites a little differently. Sometimes your site looks fine in one but breaks in another. Cross-browser testing makes sure your site works for everyone.

    Why Do I Need It?

    Because your visitors don’t all use the same browser. Some people are on Chrome, others on Safari or Firefox, and some still use Internet Explorer. If your site only works on one browser, you’ll lose visitors. Cross-browser testing helps you catch bugs before your users do.

    Can I Test Mobile Browsers Too?

    Yes, cross-browser testing tools like Browserling let you check both desktop and mobile versions. You can quickly switch between screen sizes and devices to see how your site looks on phones, tablets, and desktops.

    Do I Have to Install Different Browsers?

    Nope! That’s the best part. You don’t need to clutter your computer with ten different browsers. Instead, cross-browser testing runs them in the cloud. You just pick the browser you want and test right from your own browser window.

    Is It Safe?

    Totally. You’re not installing anything shady, and you’re not downloading random browsers from sketchy websites. Everything runs on Browserling’s secure servers.

    What If I Just Want to Test a Quick Fix?

    That’s exactly what the free version is for. Got a CSS bug? A weird layout issue? Just load up the browser you need, test your page, and see how it behaves.

    How Is This Different From Developer Tools?

    Dev tools are built into browsers and help you inspect your site, but they can’t show you how your site looks in browsers you don’t have. Cross-browser testing lets you actually run your site in those missing browsers and see the real deal.

    Is It Good for Developers and Testers?

    For sure. Developers use cross-browser testing to make websites look right across platforms. QA testers use it to make sure new releases don’t break old browsers. Even hobbyists can use it to make their personal sites look better.

    Is It Free?

    Yes, Browserling has a free plan with limited time per session. If you need more testing power, they also have paid options. But for quick checks, the free plan is usually enough.

    What Is Browserling?

    Browserling is a free cloud-based cross-browser testing service. It lets you open real browsers on real machines and test your sites instantly. The latest geo-browsing feature allows you to route your tests through 20+ countries to see how websites behave across regions or to bypass sites that try to block datacenter traffic. Plus, the latest infrastructure update added admin rights, WSL with Ubuntu/Kali, build tools, custom resolutions, and more.

    Who Uses Browserling?

    Browserling is trusted by developers, IT teams, schools, banks, and even governments. Anyone who needs websites to “just work” across browsers uses Browserling. Millions of people test their sites on it every month.

    Happy testing!



    Source link
