Tag: Using

  • How to Build an Immersive 3D Circular Carousel in WordPress Using Droip



    A flat carousel is nice. 

    But what if your cards could float in 3D space and orbit around like planets on your WordPress site?

    You read that right. Droip, the modern no-code website builder, now makes it possible to design immersive 3D interactions in WordPress without any third-party plugins or coding.

    In this tutorial, you’ll build a 3D circular marquee (a rotating ring of cards that tilt, orbit, and feel alive), all inside Droip’s visual editor.

    What We’re Building 

    Imagine a hula hoop standing upright in front of you. 

    Now, place 12 cards evenly around that hoop. As the hoop spins, cards travel around, some face you, some tilt away, and the one at the back hides in perspective. 

    With Droip’s advanced interactions, you can create this striking 3D effect with just a bit of math.

This is the illusion we’ll create: a dynamic 3D ring of cards, built with Droip’s advanced transform and animation tools. See it live and get a feel for what you’ll be building.

    You can use this 3D Marquee to showcase portfolios, products, or creative content as an example of the advanced interactions now possible in WordPress with a modern WordPress website builder.

    Part 1: Planning The Key Pieces

    Before we start creating, let’s plan out what we’ll need to make the 3D circular marquee work:

    • Stage (the hoop): A parent element that spins, carrying all the cards.
    • Cards (the orbiting items): Each card sits at a fixed angle around the circle.
    • Perspective: A visual depth setting that makes near cards appear closer and far ones smaller.
    • Tilt: A subtle rotation that gives realism to the motion.
    • Animation: The continuous rotation that makes the ring orbit infinitely.

    Spacing Cards Around the Circle

    We’ll have 12 cards around a 360° ring, meaning each card sits 30° apart. Think of it like clock positions:

    • Card 0: 0° (front)
    • Card 3: 90° (right side)
    • Card 6: 180° (back)
    • Card 9: 270° (left side)

    Each card will be rotated by its angle and pushed outward to form the circular ring.

    The 3D Transforms

    Every card uses a combination of transforms to position correctly:

    rotateY(angle), moveZ(radius)

    Here’s what happens:

    • rotateY(angle): turns the card to its position around the circle.
    • moveZ(radius): moves it outward from the center onto the ring.

    That’s all you need to place the cards evenly in a circle. 

    Why rotate, then move?

If you move along Z first and then rotate around Y, the card is pushed straight out toward the viewer and only then turned in place, so every card ends up stacked at the same point, merely facing a different direction.

rotateY(angle) followed by moveZ(radius) instead means “turn the element to its angle, then push it out along its new forward direction”, which places each card on the circumference, facing outward.
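You can sanity-check this with a few lines of code. Here’s a minimal C# sketch (my own illustration, not Droip output, assuming standard CSS-style transform chaining) that computes where each card lands under both orders:

using System;

// rotateY(deg) then moveZ(r): turn first, then push out along the
// card's new forward axis - the card lands on the circumference.
static (double X, double Z) RotateThenMove(double deg, double r)
{
    double a = deg * Math.PI / 180.0;
    return (r * Math.Sin(a), r * Math.Cos(a));
}

// moveZ(r) then rotateY(deg): push out along the original Z axis,
// then rotate the card in place - every card stacks at (0, r).
static (double X, double Z) MoveThenRotate(double deg, double r) => (0, r);

for (int i = 0; i < 12; i++)
{
    var onRing = RotateThenMove(i * 30, 850);
    var stacked = MoveThenRotate(i * 30, 850);
    Console.WriteLine($"Card {i,2}: rotate->move x={onRing.X,5:F0} z={onRing.Z,5:F0} | move->rotate x={stacked.X,5:F0} z={stacked.Z,5:F0}");
}

Running it shows the first order spacing the cards 30° apart on an 850px ring, while the second leaves them all piled at the same spot.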

    Part 2: Building the 3D Circular Marquee in the Droip Visual Editor

    Now that you know how the structure works, let’s start building everything visually inside Droip.

    Step 1: Create the Wrapper and base layout

    1. Add a Div and rename it to Wrapper.
    2. Set Width: 100%, Height: 100vh, and choose a nice background (solid or gradient).
    3. Inside it, add two children:
      • Custom Cursor (Optional)
      • Banner (the section that holds our 3D Marquee)

    Step 2: Create the custom cursor (Optional)

    Next, we’ll add a custom cursor. Totally optional, but it gives your build that extra touch of uniqueness and polish.

    1. Inside the Wrapper, add a Div and rename it Cursor.
2. Size: 32×32px; Position: absolute; top: 0; left: 0; z-index: 100.
    3. Add a Shape element (your cursor visual) inside the Cursor div. Resize the shape element to 32×32px. You can add your preferred cursor shape by simply replacing the SVG. 
4. For interactions (making this custom shape act like a cursor), select the Cursor div and click on Interaction:
    • Trigger: Scroll into view.
    • Animation: Cursor Trail.
    • Scope: Viewport.
    • Smoothing: 75%.

    Now your cursor will smoothly follow your movement in preview mode.

    Step 3: Create the Banner (base for marquee) 

    Inside the Wrapper, add another Div and rename it Banner.

    Set up the following properties:

    • Width: 100vw
    • Height: 100vh
    • Position: relative
    • Z-index: 1

    This Banner will serve as the main stage for your 3D Marquee. Later in the tutorial, we’ll add an interaction here for the click-to-scale zoom effect.

    Step 4: Create the Container & 3D Transform wrapper

    Now it’s time to set up the structure that will hold and control our 3D elements.

    Inside the Banner, add a Div and rename it Container. This will act as the main layout holder for the 3D stage.

    Configure the Container:

    • Width: 100%
    • Max-width: 800px
    • Margin: auto (to center it on the page)
    • Position: relative
    • Z-index: 2

    Next, inside the Container, add another Div and rename it 3D Transform. This element will define the 3D space where all your cards will orbit.

    Set the following properties:

    • Width/Height: 100%
    • Position: absolute; top: 0; left: 0
    • Z-index: 100

    Now, in the Effects > Transform panel:

    • Enable Preserve 3D: this ensures all child elements (like your cards) exist in a true 3D environment.
    • Set Child Perspective to 9000px: this gives the illusion of depth, where closer objects appear larger and farther ones appear smaller.
    • Optionally, apply Scale X/Y: 0.8 if you want to reduce the overall stage size slightly.

    In short, this step creates the 3D “space” your rotating cards will live in — like setting up the stage before the show begins.

    Step 5: Create the 3D Marquee (Orbit Center)

Now we’ll create the core of the carousel: the rotating stage that all your cards will attach to.

    Inside the 3D Transform, add a Div and rename it 3D Marquee. This element acts as the orbit center. When it spins, all the cards will revolve around it.

    Set up the 3D Marquee as follows:

• Width: 435px (this sets the card size)
    • Height: auto
    • Position: relative
    • Enable Preserve 3D (so its child elements, the cards, maintain their depth in 3D space).
    • Rotate X: -10° – this slightly tilts the ring backward, giving a more natural perspective when viewed from the front.
    • Scale: X: 1, Y: 1

    In simple terms: this is your spinning hub. When the animation runs, this element will rotate continuously, carrying all the cards with it to create that smooth, orbiting 3D effect.

    Step 6: Create the Card Template (One Card Structure)

    Next, we’ll build a single card that will serve as the template. Once complete, we’ll duplicate it 11 more times to complete the ring.

    1. Create the Front Card

    Inside 3D Marquee, add a Div and rename it Front Card.

    Configure it:

    • Width/Height: 100% (the final position will be controlled via transforms)
    • Border-radius: 20px
    • Position: absolute
    • Enable Preserve 3D in the transforms panel

Note: This is the element where you’ll later apply rotateY(…) and MoveZ(…) to position it around the circle.

    2. Add the 3D Container

    Inside Front Card, add another Div and rename it to Card-3D. This acts as a 3D wrapper so we can rotate and position the card in space without affecting its internal layout.

    Settings:

    • Width/Height: 100%
    • Position: relative
    • Z-index: 3
    • Enable Preserve 3D

    3. Add the Popup (Visible Front Face)

    Inside Card-3D, add a Div and rename it Popup. This holds the main content, the image or design that users interact with.

    Settings:

    • Width/Height: 100%
    • Background: White
    • Border-radius: 20px

    Inside Popup, add an Image element:

    • Width/Height: 100%
    • Border-radius: 12px

    4. Add the Backface

    Inside the Popup, add another Div and rename it Backface.

    Settings:

    • Padding: 12px
    • Width/Height: 100%
    • Background: #FEDEFF 
    • Border-radius: 20px
• Position: absolute; top: 0; left: 0; z-index: 1
• Transforms: Rotate Y = 180° (so it appears when the card flips)
• Hide the real back of the card by toggling off backface-visibility

    Now you have a complete single card ready to be duplicated and positioned around the orbit. Each card will inherit the 3D rotation and spacing we’ll set in the next step.

    Step 7: Duplicate Cards and Position Them Around the Orbit

    Now that we have a single card ready, we’ll create all 12 cards for the carousel and place them evenly around the circular orbit.

    Duplicate the Card-Template

    • Right-click on your Front Card and select Duplicate. This creates a new card that copies all the styles of the original card.
    • Duplicate the class holding the transform styles. This gives the new card its own separate class for rotation/position.
• Do this 11 times so that, together with the original Front Card, you have 12 cards. Rename the duplicates Card 1 through Card 11.

    💡 Tip: Duplicating the card class is important so each card’s transform is independent.

    Set Each Card’s Position with 3D Transforms

    For each card, set the Transform fields (Rotate Y + Move Z). Use these exact values:

1. Front Card: rotateY(0deg), MoveZ(850px)
2. Card 1: rotateY(30deg), MoveZ(850px)
3. Card 2: rotateY(60deg), MoveZ(850px)
4. Card 3: rotateY(90deg), MoveZ(850px)
5. Card 4: rotateY(120deg), MoveZ(850px)
6. Card 5: rotateY(150deg), MoveZ(850px)
7. Card 6: rotateY(180deg), MoveZ(850px)
8. Card 7: rotateY(-150deg), MoveZ(850px)
9. Card 8: rotateY(-120deg), MoveZ(850px)
10. Card 9: rotateY(-90deg), MoveZ(850px)
11. Card 10: rotateY(-60deg), MoveZ(850px)
12. Card 11: rotateY(-30deg), MoveZ(850px)

    At this point, if Preserve 3D and Perspective are correctly set, you should see a ring of cards in 3D space.

    Step 8: Animate the Orbit (Rotate the 3D Marquee)

    Now that your cards are all in place, let’s bring the marquee to life by making it spin.

    1. In the Layers panel, select Page, then go to Interactions and select Page Load.
    2. Choose the 3D Marquee div as your animation target — this is the parent element that holds all the cards.
    3. Add a Rotate action and set these values:
    • Duration: 30s (or any speed you like)
    • X: -10°
    • Y: 360°
    • Loop: Infinite

    Hit Preview, and you’ll see your entire 3D ring smoothly spinning in space — just like a rotating carousel!

    💡 Tip: The -10° tilt keeps the spin looking natural and adds depth to the orbit, rather than a flat, top-down rotation.

    Step 9: Add Click-to-Scale Interaction on the Banner (Zoom Toggle)

    Let’s make your 3D Marquee more fun to play with by adding a click-to-zoom effect, so users can zoom in and out of the carousel with a single click.

    1. Select the Banner. This is the background container holding your 3D Marquee.
    2. Go to Interactions and create a new one with:
      • Trigger: Mouse Click (Tap)
      • Target: 3D Transform

    The Banner acts as the clickable area. When you click it, the animation targets the 3D Transform div (which contains everything inside the 3D scene).

    Now we’ll set up a two-step toggle animation:

Step 1: First Click (Zoom In)

    Create two responses and name them:

    We’re creating both Zoom In/Out and Zoom In/Out (Tab) because desktop and tablet screens behave differently. A zoom value that looks perfect on a wide desktop might push the 3D ring out of view or look oversized on a smaller tablet screen.

    So by having two versions, Droip automatically applies the right animation depending on the device, keeping the zoom effect centered and balanced across all viewports.

    Zoom In:

    • Scale X: 2, Y: 2
    • Move Y: -250

    Zoom In (Tab):

    • Scale X: 1, Y: 1
    • Move Y: 0

    Step 2: Second Click (Zoom Out)

    Duplicate the first set and rename them:

    Zoom Out:

    • Scale X: 0.8, Y: 0.8
    • Move Y: 0

    Zoom Out (Tab):

    • Scale X: 0.4, Y: 0.4
    • Move Y: 0

    Now, when you click anywhere on the Banner, the whole 3D scene smoothly zooms in and out, making it feel alive and responsive.

    💡 Tip: Adjust the scale and movement values to find your perfect zoom balance for desktop and tablet views.

    Final Preview

    That’s it! You’ve just built a fully interactive 3D circular marquee inside Droip with no code, no plugins. 

    It might seem like a lot at first, but once you get the hang of it, you’ll realize how much power Droip gives you. 

With this modern WordPress website builder, almost any advanced web interaction is now possible in WordPress, all visually.




  • define Using Aliases to avoid ambiguity | Code4IT



    Sometimes we need to use objects with the same name but from different namespaces. How to remove that ambiguity? By Using Aliases!


    You may have to reference classes or services that come from different namespaces or packages, but that have the same name. It may become tricky to understand which reference refers to a specific type.

    Yes, you could use the fully qualified name of the class. Or, you could use namespace aliases to write cleaner and easier-to-understand code.

    It’s just a matter of modifying your using statements. Let’s see how!

    The general approach

    Say that you are working on an application that receives info about football matches from different sources using NuGet packages, and then manipulates the data to follow some business rules.

    Both services, ShinyData and JuanStatistics (totally random names!), provide an object called Match. Of course, those objects live in their specific namespaces.

Since you are using the native implementation, you cannot rename the classes to avoid ambiguity. So you’ll end up with code like this:

    void Main()
    {
        var shinyMatch = new ShinyData.Football.Statistics.Match();
        var juanMatch = new JuanStatistics.Stats.Football.Objects.Match();
    }
    

    Writing the fully qualified namespace every time can easily become boring. The code becomes less readable too!

    Luckily we have 2 solutions. Or, better, a solution that we can apply in two different ways.

    Namespace aliases – a cleaner solution

    The following solution will not work:

    using ShinyData.Football.Statistics;
    using JuanStatistics.Stats.Football.Objects;
    
    void Main()
    {
        var shinyMatch = new Match();
        var juanMatch = new Match();
    }
    

    because, of course, the compiler is not able to understand the exact type of shinyMatch and juanMatch.

    But we can use a nice functionality of C#: namespace aliases. It simply means that we can name an imported namespace and use the alias to reference the related classes.

    Using alias for the whole namespace

    using Shiny = ShinyData.Football.Statistics;
    using Juan = JuanStatistics.Stats.Football.Objects;
    
    void Main()
    {
        var shinyMatch = new Shiny.Match();
        var juanMatch = new Juan.Match();
    }
    

    This simple trick boosts the readability of your code.

    Using alias for a specific class

    Can we go another step further? Yes! We can even specify aliases for a specific class!

    using ShinyMatch = ShinyData.Football.Statistics.Match;
    using JuanMatch = JuanStatistics.Stats.Football.Objects.Match;
    
    void Main()
    {
        var shinyMatch = new ShinyMatch();
        var juanMatch = new JuanMatch();
    }
    

    Now we can create an instance of ShinyMatch which, since it is an alias listed among the using statements, is of type ShinyData.Football.Statistics.Match.

    Define alias for generics

Not only can you use it for a simple class, but also for generics.

    Say that the ShinyData namespace defines a generic class, like CustomDictionary<T>. You can reference it just as you did before!

    using ShinyMatch = ShinyData.Football.Statistics.Match;
    using JuanMatch = JuanStatistics.Stats.Football.Objects.Match;
    using ShinyDictionary = ShinyData.Football.Statistics.CustomDictionary<int>;
    
    void Main()
    {
        var shinyMatch = new ShinyMatch();
        var juanMatch = new JuanMatch();
    
        var dictionary = new ShinyDictionary();
    }
    

    Unluckily we have some limitations:

    • we must always specify the inner type of the generic: CustomDictionary<int> is valid, but CustomDictionary<T> is not valid
• we cannot use a class defined with an alias as the inner type: CustomDictionary<ShinyMatch> is invalid, unless we use the fully qualified name (see the example below)
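For example, here’s how that second limitation plays out (a snippet of mine, reusing the namespaces from above):

// Invalid: the inner type cannot be another alias such as ShinyMatch.
// using MatchDictionary = ShinyData.Football.Statistics.CustomDictionary<ShinyMatch>;

// Valid: the inner type is spelled out with its fully qualified name.
using MatchDictionary = ShinyData.Football.Statistics.CustomDictionary<ShinyData.Football.Statistics.Match>;

void Main()
{
    var matches = new MatchDictionary();
}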

    Conclusion

We’ve seen how we can define namespace aliases to simplify our C# code: just add a name to an imported namespace in the using statement, and use that alias in your code.

    What would you reference, the namespace or the specific class?

👉 Let’s discuss it on Twitter or in the comments section below.

    🐧






• CRUD operations on PostgreSQL using C# and Npgsql | Code4IT



    Once we have a Postgres instance running, we can perform operations on it. We will use Npgsql to query a Postgres instance with C#


    PostgreSQL is one of the most famous relational databases. It has got tons of features, and it is open source.

    In a previous article, we’ve seen how to run an instance of Postgres by using Docker.

    In this article, we will learn how to perform CRUD operations in C# by using Npgsql.

    Introducing the project

    To query a Postgres database, I’ve created a simple .NET API application with CRUD operations.

    We will operate on a single table that stores info for my board game collection. Of course, we will Create, Read, Update and Delete items from the DB (otherwise it would not be an article about CRUD operations 😅).

Before we start writing, we need to install Npgsql, a NuGet package that acts as a data provider for PostgreSQL.

Npgsql NuGet package

    Open the connection

    Once we have created the application, we can instantiate and open a connection against our database.

    private NpgsqlConnection connection;
    
    public NpgsqlBoardGameRepository()
    {
        connection = new NpgsqlConnection(CONNECTION_STRING);
        connection.Open();
    }
    

    We simply create a NpgsqlConnection object, and we keep a reference to it. We will use that reference to perform queries against our DB.

    Connection string

    The only parameter we can pass as input to the NpgsqlConnection constructor is the connection string.

    You must compose it by specifying the host address, the port, the database name we are connecting to, and the credentials of the user that is querying the DB.

    private const string CONNECTION_STRING = "Host=localhost:5455;" +
        "Username=postgresUser;" +
        "Password=postgresPW;" +
        "Database=postgresDB";
    

    If you instantiate Postgres using Docker following the steps I described in a previous article, most of the connection string configurations we use here match the Environment variables we’ve defined before.

    CRUD operations

    Now that everything is in place, it’s time to operate on our DB!

    We are working on a table, Games, whose name is stored in a constant:

    private const string TABLE_NAME = "Games";
    

    The Games table consists of several fields:

• id: INTEGER, primary key
• Name: VARCHAR, NOT NULL
• MinPlayers: SMALLINT, NOT NULL
• MaxPlayers: SMALLINT
• AverageDuration: SMALLINT

    This table is mapped to the BoardGame class:

    public class BoardGame
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public int MinPlayers { get; set; }
        public int MaxPlayers { get; set; }
        public int AverageDuration { get; set; }
    }
    

    To double-check the results, you can use a UI tool to access the Database. For instance, if you use pgAdmin, you can find the list of databases running on a host.

    Database listing on pgAdmin

And, if you want to see the content of a particular table, you can select it under Schemas > public > Tables > tablename, and then select View > All Rows.

    How to view table rows on pgAdmin

    Create

    First things first, we have to insert some data in our DB.

    public async Task Add(BoardGame game)
    {
        string commandText = $"INSERT INTO {TABLE_NAME} (id, Name, MinPlayers, MaxPlayers, AverageDuration) VALUES (@id, @name, @minPl, @maxPl, @avgDur)";
        await using (var cmd = new NpgsqlCommand(commandText, connection))
        {
            cmd.Parameters.AddWithValue("id", game.Id);
            cmd.Parameters.AddWithValue("name", game.Name);
            cmd.Parameters.AddWithValue("minPl", game.MinPlayers);
            cmd.Parameters.AddWithValue("maxPl", game.MaxPlayers);
            cmd.Parameters.AddWithValue("avgDur", game.AverageDuration);
    
            await cmd.ExecuteNonQueryAsync();
        }
    }
    

    The commandText string contains the full command to be issued. In this case, it’s a simple INSERT statement.

We use the commandText string to create an NpgsqlCommand object, specifying the query and the connection on which to run it. Note that the command must be disposed after use: wrap it in a using block.

Then we add the parameters to the query. AddWithValue accepts two arguments: the first is the parameter name, which matches the placeholder in the query without the @ symbol (in the query we use @minPl, so the parameter name is minPl); the second is the value to assign to it.

Never, ever build the query by concatenating the input params into the string: that would expose you to SQL Injection attacks.
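To see the contrast, here’s what the vulnerable version of the Add method’s command would look like (my own anti-example, never to be used):

// DANGEROUS: values are concatenated into the SQL text, so a Name such as
// "x'); DROP TABLE Games; --" would be executed as SQL by the server.
string unsafeCommandText = $"INSERT INTO {TABLE_NAME} (id, Name) VALUES ({game.Id}, '{game.Name}')";

With parameters, the values travel separately from the SQL text and are never interpreted as commands.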

    Finally, we can execute the query asynchronously with ExecuteNonQueryAsync.

    Read

    Now that we have some games stored in our table, we can retrieve those items:

    public async Task<BoardGame> Get(int id)
    {
        string commandText = $"SELECT * FROM {TABLE_NAME} WHERE ID = @id";
        await using (NpgsqlCommand cmd = new NpgsqlCommand(commandText, connection))
        {
            cmd.Parameters.AddWithValue("id", id);
    
            await using (NpgsqlDataReader reader = await cmd.ExecuteReaderAsync())
                while (await reader.ReadAsync())
                {
                    BoardGame game = ReadBoardGame(reader);
                    return game;
                }
        }
        return null;
    }
    

    Again, we define the query as a text, use it to create a NpgsqlCommand, specify the parameters’ values, and then we execute the query.

    The ExecuteReaderAsync method returns a NpgsqlDataReader object that we can use to fetch the data. We update the position of the stream with reader.ReadAsync(), and then we convert the current data with ReadBoardGame(reader) in this way:

    private static BoardGame ReadBoardGame(NpgsqlDataReader reader)
    {
        int? id = reader["id"] as int?;
        string name = reader["name"] as string;
        short? minPlayers = reader["minplayers"] as Int16?;
        short? maxPlayers = reader["maxplayers"] as Int16?;
        short? averageDuration = reader["averageduration"] as Int16?;
    
        BoardGame game = new BoardGame
        {
            Id = id.Value,
            Name = name,
            MinPlayers = minPlayers.Value,
            MaxPlayers = maxPlayers.Value,
            AverageDuration = averageDuration.Value
        };
        return game;
    }
    

This method simply reads the data associated with each column (for instance, reader["averageduration"]), converts each value to its data type, and then builds and returns a BoardGame object.

    Update

    Updating items is similar to inserting a new item.

    public async Task Update(int id, BoardGame game)
    {
        var commandText = $@"UPDATE {TABLE_NAME}
                    SET Name = @name, MinPlayers = @minPl, MaxPlayers = @maxPl, AverageDuration = @avgDur
                    WHERE id = @id";
    
        await using (var cmd = new NpgsqlCommand(commandText, connection))
        {
            cmd.Parameters.AddWithValue("id", game.Id);
            cmd.Parameters.AddWithValue("name", game.Name);
            cmd.Parameters.AddWithValue("minPl", game.MinPlayers);
            cmd.Parameters.AddWithValue("maxPl", game.MaxPlayers);
            cmd.Parameters.AddWithValue("avgDur", game.AverageDuration);
    
            await cmd.ExecuteNonQueryAsync();
        }
    }
    

    Of course, the query is different, but the general structure is the same: create the query, create the Command, add parameters, and execute the query with ExecuteNonQueryAsync.

    Delete

    Just for completeness, here’s how to delete an item by specifying its id.

    public async Task Delete(int id)
    {
        string commandText = $"DELETE FROM {TABLE_NAME} WHERE ID=(@p)";
        await using (var cmd = new NpgsqlCommand(commandText, connection))
        {
            cmd.Parameters.AddWithValue("p", id);
            await cmd.ExecuteNonQueryAsync();
        }
    }
    

    Always the same story, so I have nothing to add.

    ExecuteNonQueryAsync vs ExecuteReaderAsync

    As you’ve seen, some operations use ExecuteNonQueryAsync, while some others use ExecuteReaderAsync. Why?

    ExecuteNonQuery and ExecuteNonQueryAsync execute commands against a connection. Those methods do not return data from the database, but only the number of rows affected. They are used to perform INSERT, UPDATE, and DELETE operations.

    On the contrary, ExecuteReader and ExecuteReaderAsync are used to perform queries on the database and return a DbDataReader object, which is a read-only stream of rows retrieved from the data source. They are used in conjunction with SELECT queries.
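For example (my own sketch, reusing the Delete method above), you can capture that return value to check whether the command actually touched any rows:

int affectedRows = await cmd.ExecuteNonQueryAsync();

// affectedRows is 0 if no Game with that id existed
bool deleted = affectedRows > 0;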

    Bonus 1: Create the table if not already existing

    Of course, you can also create tables programmatically.

    public async Task CreateTableIfNotExists()
    {
        var sql = $"CREATE TABLE if not exists {TABLE_NAME}" +
            $"(" +
            $"id serial PRIMARY KEY, " +
            $"Name VARCHAR (200) NOT NULL, " +
            $"MinPlayers SMALLINT NOT NULL, " +
            $"MaxPlayers SMALLINT, " +
            $"AverageDuration SMALLINT" +
            $")";
    
        using var cmd = new NpgsqlCommand(sql, connection);
    
        await cmd.ExecuteNonQueryAsync();
    }
    

    Again, nothing fancy: create the command text, create a NpgsqlCommand object, and execute the command.

    Bonus 2: Check the database version

    To check if the database is up and running, and your credentials are correct (those set in the connection string), you might want to retrieve the DB version.

    You can do it in 2 ways.

    With the following method, you query for the version directly on the database.

    public async Task<string> GetVersion()
    {
        var sql = "SELECT version()";
    
        using var cmd = new NpgsqlCommand(sql, connection);
    
        var versionFromQuery = (await cmd.ExecuteScalarAsync()).ToString();
    
        return versionFromQuery;
    }
    

    This method returns lots of info that directly depend on the database instance. In my case, I see PostgreSQL 14.1 (Debian 14.1-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit.

    The other way is to use PostgreSqlVersion.

public Version GetVersion()
{
    var versionFromConnection = connection.PostgreSqlVersion;

    return versionFromConnection;
}
    

    PostgreSqlVersion returns a Version object containing some fields like Major, Minor, Revision, and more.

    PostgresVersion from connection info

    You can call the ToString method of that object to get a value like “14.1”.

    Additional readings

    In a previous article, we’ve seen how to download and run a PostgreSQL instance on your local machine using Docker.

    🔗 How to run PostgreSQL locally with Docker | Code4IT

To query PostgreSQL with C#, we used the Npgsql NuGet package. So, you might want to read the official documentation.

🔗 Npgsql documentation | Npgsql

In particular, an important part to consider is the mapping between C# and SQL data types:

🔗 PostgreSQL to C# type mapping | Npgsql

    When talking about parameters to be passed to the query, I mentioned the SQL Injection vulnerability. Here you can read more about it.

    🔗 SQL Injection | Imperva

    Finally, here you can find the repository used for this article.

    🔗 Repository used for this article | GitHub

    This article first appeared on Code4IT 🐧

    Wrapping up

    In this article, we’ve learned how to perform simple operations on a PostgreSQL database to retrieve and update the content of a table.

    This is the most basic way to perform those operations. You explicitly write the queries and issue them without much stuff in between.

    In future articles, we will see some other ways to perform the same operations in C#, but using other tools and packages. Maybe Entity Framework? Maybe Dapper? Stay tuned!

    Happy coding!

    🐧




  • Advanced parsing using Int.TryParse in C# | Code4IT


    We all need to parse strings as integers. Most of the time, we use int.TryParse(string, out int). But there’s a more advanced overload that we can use for complex parsing.


    You have probably used the int.TryParse method with this signature:

    public static bool TryParse (string? s, out int result);
    

That C# method accepts a string, s, and tries to parse it: if parsing succeeds, the resulting integer is stored in the result out parameter, and the method returns true to signal that the parsing was successful.

    As an example, this snippet:

    if (int.TryParse("100", out int result))
    {
        Console.WriteLine(result + 2); // correctly parsed as an integer
    }
    else
    {
        Console.WriteLine("Failed");
    }
    

    prints 102.

    Does it work? Yes. Is this the best we can do? No!

    How to parse complex strings with int.TryParse

    What if you wanted to parse 100€? There is a less-known overload that does the job:

    public static bool TryParse (
        string? s,
        System.Globalization.NumberStyles style,
        IFormatProvider? provider,
        out int result);
    

    As you see, we have two more parameters: style and provider.

    IFormatProvider? provider allows you to specify the culture information: examples are CultureInfo.InvariantCulture and new CultureInfo("es-es").

    But the real king of this overload is the style parameter: it is a Flagged Enum which allows you to specify the expected string format.

    style is of type System.Globalization.NumberStyles, which has several values:

    [Flags]
    public enum NumberStyles
    {
        None = 0x0,
        AllowLeadingWhite = 0x1,
        AllowTrailingWhite = 0x2,
        AllowLeadingSign = 0x4,
        AllowTrailingSign = 0x8,
        AllowParentheses = 0x10,
        AllowDecimalPoint = 0x20,
        AllowThousands = 0x40,
        AllowExponent = 0x80,
        AllowCurrencySymbol = 0x100,
        AllowHexSpecifier = 0x200,
        Integer = 0x7,
        HexNumber = 0x203,
        Number = 0x6F,
        Float = 0xA7,
        Currency = 0x17F,
        Any = 0x1FF
    }
    

    You can combine those values with the | symbol.

    Let’s see some examples.

    Parse as integer

    The simplest example is to parse a simple integer:

    [Fact]
    void CanParseInteger()
    {
        NumberStyles style = NumberStyles.Integer;
        var canParse = int.TryParse("100", style, new CultureInfo("it-it"), out int result);
    
        Assert.True(canParse);
        Assert.Equal(100, result);
    }
    

    Notice the NumberStyles style = NumberStyles.Integer;, used as a baseline.

    Parse parenthesis as negative numbers

In some cases, parentheses around a number indicate that the number is negative. So (100) is another way of writing -100.

    In this case, you can use the NumberStyles.AllowParentheses flag.

    [Fact]
    void ParseParenthesisAsNegativeNumber()
    {
        NumberStyles style = NumberStyles.Integer | NumberStyles.AllowParentheses;
        var canParse = int.TryParse("(100)", style, new CultureInfo("it-it"), out int result);
    
        Assert.True(canParse);
        Assert.Equal(-100, result);
    }
    

    Parse with currency

    And if the string represents a currency? You can use NumberStyles.AllowCurrencySymbol.

    [Fact]
    void ParseNumberAsCurrency()
    {
        NumberStyles style = NumberStyles.Integer | NumberStyles.AllowCurrencySymbol;
var canParse = int.TryParse(
    "100€",
    style,
    new CultureInfo("it-it"),
    out int result);
    
        Assert.True(canParse);
        Assert.Equal(100, result);
    }
    

    But, remember: the only valid symbol is the one related to the CultureInfo instance you are passing to the method.

    Both

    var canParse = int.TryParse(
        "100€",
        style,
        new CultureInfo("en-gb"),
        out int result);
    

    and

    var canParse = int.TryParse(
        "100$",
        style,
        new CultureInfo("it-it"),
        out int result);
    

are not valid: one because we are using English culture to parse Euros, the other because we are using Italian culture to parse Dollars.

Hint: how to get the currency symbol given a CultureInfo? You can use NumberFormat.CurrencySymbol, like this:

    new CultureInfo("it-it").NumberFormat.CurrencySymbol; // €
    

    Parse with thousands separator

And what to do when the string contains the separator for thousands? 10.000 is a valid number in Italian notation.

    Well, you can specify the NumberStyles.AllowThousands flag.

    [Fact]
    void ParseThousands()
    {
        NumberStyles style = NumberStyles.Integer | NumberStyles.AllowThousands;
        var canParse = int.TryParse("10.000", style, new CultureInfo("it-it"), out int result);
    
        Assert.True(canParse);
        Assert.Equal(10000, result);
    }
    

    Parse hexadecimal values

It’s a rare case, but it may happen: you receive a string in hexadecimal notation, but you need to parse it as an integer.

    In this case, NumberStyles.AllowHexSpecifier is the correct flag.

    [Fact]
    void ParseHexValue()
    {
        NumberStyles style = NumberStyles.AllowHexSpecifier;
        var canParse = int.TryParse("F", style, new CultureInfo("it-it"), out int result);
    
        Assert.True(canParse);
        Assert.Equal(15, result);
    }
    

    Notice that the input string does not contain the Hexadecimal prefix.
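For instance, here’s a quick check (my own test, following the documented behavior): adding the 0x prefix makes the parsing fail, because x is not a valid hexadecimal digit.

[Fact]
void ParseHexValueWithPrefixFails()
{
    NumberStyles style = NumberStyles.AllowHexSpecifier;
    var canParse = int.TryParse("0xF", style, new CultureInfo("it-it"), out int result);

    Assert.False(canParse);
}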

    Use multiple flags

    You can compose multiple Flagged Enums to create a new value that represents the union of the specified values.

    We can use this capability to parse, for example, a currency that contains the thousands separator:

    [Fact]
    void ParseThousandsCurrency()
    {
NumberStyles style =
    NumberStyles.Integer
    | NumberStyles.AllowThousands
    | NumberStyles.AllowCurrencySymbol;
    
        var canParse = int.TryParse("10.000€", style, new CultureInfo("it-it"), out int result);
    
        Assert.True(canParse);
        Assert.Equal(10000, result);
    }
    

    NumberStyles.AllowThousands | NumberStyles.AllowCurrencySymbol does the trick.

    Conclusion

    We all use the simple int.TryParse method, but when parsing the input string requires more complex calculations, we can rely on those overloads. Of course, if it’s still not enough, you should create your custom parsers (or, as a simpler approach, you can use regular expressions).

    Are there any methods that have overloads that nobody uses? Share them in the comments!

    Happy coding!

    🐧




• Avoid using too many Imports in your classes | Code4IT




Actually, this article is not about a tip for writing cleaner code: it aims at pointing out a code smell.

Of course, once you find this code smell in your code, you can act to eliminate it and, as a consequence, end up with cleaner code.

    The code smell is easy to identify: open your classes and have a look at the imports list (in C#, the using on top of the file).

    A real example of too many imports

    Here’s a real-life example (I censored the names, of course):

    using MyCompany.CMS.Data;
    using MyCompany.CMS.Modules;
    using MyCompany.CMS.Rendering;
    using MyCompany.Witch.Distribution;
    using MyCompany.Witch.Distribution.Elements;
    using MyCompany.Witch.Distribution.Entities;
    using Microsoft.Extensions.Logging;
    using Newtonsoft.Json;
    using Newtonsoft.Json.Serialization;
    using MyProject.Controllers.VideoPlayer.v1.DataSource;
    using MyProject.Controllers.VideoPlayer.v1.Vod;
    using MyProject.Core;
    using MyProject.Helpers.Common;
    using MyProject.Helpers.DataExplorer;
    using MyProject.Helpers.Entities;
    using MyProject.Helpers.Extensions;
    using MyProject.Helpers.Metadata;
    using MyProject.Helpers.Roofline;
    using MyProject.ModelsEntities;
    using MyProject.Models.ViewEntities.Tags;
    using MyProject.Modules.EditorialDetail.Core;
    using MyProject.Modules.VideoPlayer.Models;
    using System;
    using System.Collections.Generic;
    using System.ComponentModel;
    using System.Linq;
    
    namespace MyProject.Modules.Video
    

    Sounds familiar?

    If we exclude the imports necessary to use some C# functionalities

    using System;
    using System.Collections.Generic;
    using System.ComponentModel;
    using System.Linq;
    

    We have lots of dependencies on external modules.

    This means that if something changes in one of the classes that are part of those namespaces, we may end up with code that is difficult to update.

    Class dependencies

Also, guess what comes with all those imports? A constructor with too many parameters (in fact, this class has 11 dependencies injected in the constructor) and code that is too long and difficult to understand (in fact, this class has 500+ lines).

A solution? Refactor your project to avoid scattering those dependencies.

    Wrapping up

    Having all those imports (in C# we use the keyword using) is a good indicator that your code does too many things. You should focus on minimizing those imports without cheating (like using global imports).

    Happy coding!

    🐧




• How to improve Serilog logging in .NET 6 by using Scopes | Code4IT



    Logs are important. Properly structured logs can be the key to resolving some critical issues. With Serilog’s Scopes, you can enrich your logs with info about the context where they happened.


    Even though it’s not one of the first things we usually set up when creating a new application, logging is a real game-changer in the long run.

When an error occurs, proper logging gives us more info about the context where it happened, so we can more easily identify the root cause.

    In this article, we will use Scopes, one of the functionalities of Serilog, to create better logs for our .NET 6 application. In particular, we’re going to create a .NET 6 API application in the form of Minimal APIs.

    We will also use Seq, just to show you the final result.

    Adding Serilog in our Minimal APIs

    We’ve already explained what Serilog and Seq are in a previous article.

To summarize, Serilog is an open-source .NET library for logging. One of its best features is that messages are written as templates (called Structured Logs), and you can enrich the logs with values calculated automatically, such as the method name or exception details.
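For example, here’s the difference in practice (my own two-line sketch; itemId is any int variable): the template version stores ItemId as a named property you can search on, while string interpolation flattens it into plain text.

// Structured: ItemId travels as a searchable property
_logger.LogInformation("Retrieving item {ItemId}", itemId);

// Not structured: the value is baked into the message string
_logger.LogInformation($"Retrieving item {itemId}");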

    To add Serilog to your application, you simply have to run dotnet add package Serilog.AspNetCore.

Since we’re using Minimal APIs, we don’t have the Startup file anymore; instead, we will need to add it to the Program.cs file:

    builder.Host.UseSerilog((ctx, lc) => lc
        .WriteTo.Console() );
    

    Then, to create those logs, you will need to add a specific dependency in your classes:

    public class ItemsRepository : IItemsRepository
    {
        private readonly ILogger<ItemsRepository> _logger;
    
        public ItemsRepository(ILogger<ItemsRepository> logger)
        {
            _logger = logger;
        }
    }
    

    As you can see, we’re injecting an ILogger<ItemsRepository>: specifying the related class automatically adds some more context to the logs that we will generate.

    Installing Seq and adding it as a Sink

    Seq is a logging platform that is a perfect fit for Serilog logs. If you don’t have it already installed, head to their download page and install it locally (you can even install it as a Docker container 🤩).

    In the installation wizard, you can select the HTTP port that will expose its UI. Once everything is in place, you can open that page on your localhost and see a page like this:

    Seq empty page on localhost

    On this page, we will see all the logs we write.

    But wait! ⚠ We still have to add Seq as a sink for Serilog.

    A sink is nothing but a destination for the logs. When using .NET APIs we can define our sinks both on the appsettings.json file and on the Program.cs file. We will use the second approach.

    First of all, you will need to install a NuGet package to add Seq as a sink: dotnet add package Serilog.Sinks.Seq.

    Then, you have to update the Serilog definition we’ve seen before by adding a .WriteTo.Seq instruction:

    builder.Host.UseSerilog((ctx, lc) => lc
        .WriteTo.Console()
        .WriteTo.Seq("http://localhost:5341")
        );
    

Notice that we’ve also specified the port that exposes our Seq instance.

    Now, every time we log something, we will see our logs both on the Console and on Seq.

    How to add scopes

    The time has come: we can finally learn how to add Scopes using Serilog!

    Setting up the example

    For this example, I’ve created a simple controller, ItemsController, which exposes two endpoints: Get and Add. With these two endpoints, we are able to add and retrieve items stored in an in-memory collection.

    This class has 2 main dependencies: IItemsRepository and IUsersItemsRepository. Each of these interfaces has its own concrete class, each with a private logger injected in the constructor:

    public ItemsRepository(ILogger<ItemsRepository> logger)
    {
        _logger = logger;
    }
    

    and, similarly

    public UsersItemRepository(ILogger<UsersItemRepository> logger)
    {
        _logger = logger;
    }
    

    How do those classes use their own _logger instances?

    For example, the UsersItemRepository class exposes an AddItem method that adds a specific item to the list of items already possessed by a specific user.

    public void AddItem(string username, Item item)
    {
        if (!_usersItems.ContainsKey(username))
        {
            _usersItems.Add(username, new List<Item>());
            _logger.LogInformation("User was missing from the list. Just added");
        }
        _usersItems[username].Add(item);
        _logger.LogInformation("Added item for to the user's catalogue");
    }
    

    We are logging some messages, such as “User was missing from the list. Just added”.

    Something similar happens in the ItemsRepository class, where we have a GetItem method that returns the required item if it exists, and null otherwise.

    public Item GetItem(int itemId)
    {
        _logger.LogInformation("Retrieving item {ItemId}", itemId);
        return _allItems.FirstOrDefault(i => i.Id == itemId);
    }
    

    Finally, who’s gonna call these methods?

    [HttpPost(Name = "AddItems")]
    public IActionResult Add(string userName, int itemId)
    {
        var item = _itemsRepository.GetItem(itemId);
    
        if (item == null)
        {
            _logger.LogWarning("Item does not exist");
    
            return NotFound();
        }
        _usersItemsRepository.AddItem(userName, item);
    
        return Ok(item);
    }
    

    Ok then, we’re ready to run the application and see the result.

    When I call that endpoint by passing “davide” as userName and “1” as itemId, we can see these logs:

    Simple logging on Seq

We can see the 3 log messages, but they are unrelated to each other. In fact, if we expand the logs to see the actual values we’ve logged, we can see that only the “Retrieving item 1” log has some information about the item ID we want to associate with the user.

    Expanding logs on Seq

    Using BeginScope with Serilog

    Finally, it’s time to define the Scope.

    It’s as easy as adding a simple using statement; see how I added the scope to the Add method in the Controller:

    [HttpPost(Name = "AddItems")]
    public IActionResult Add(string userName, int itemId)
    {
        using (_logger.BeginScope("Adding item {ItemId} for user {UserName}", itemId, userName))
        {
            var item = _itemsRepository.GetItem(itemId);
    
            if (item == null)
            {
                _logger.LogWarning("Item does not exist");
    
                return NotFound();
            }
            _usersItemsRepository.AddItem(userName, item);
    
            return Ok(item);
        }
    }
    

    Here’s the key!

    using (_logger.BeginScope("Adding item {ItemId} for user {UserName}", itemId, userName))
    

    With this single instruction, we are actually performing 2 operations:

    1. we are adding a Scope to each message – “Adding item 1 for user davide”
    2. we are adding ItemId and UserName to each log entry that falls in this block, in every method in the method chain.

    Let’s run the application again, and we will see this result:

    Expanded logs on Seq with Scopes

    So, now you can use these new properties to get some info about the context of when this log happened, and you can use the ItemId and UserName fields to search for other related logs.

    You can also nest scopes, of course.
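For example, here’s a small sketch of mine (OrderId and Country are made-up properties, not part of this demo project): log entries written in the inner block carry the properties of both scopes.

using (_logger.BeginScope("Processing order {OrderId}", orderId))
{
    _logger.LogInformation("Order loaded"); // enriched with OrderId

    using (_logger.BeginScope("Shipping to {Country}", country))
    {
        _logger.LogInformation("Label printed"); // enriched with OrderId AND Country
    }
}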

    Why scopes instead of Correlation ID?

    You might be thinking

    Why can’t I just use correlation IDs?

    Well, the answer is pretty simple: correlation IDs are meant to correlate different logs in a specific request, and, often, across services. You generally use Correlation IDs that represent a specific call to your API and act as a Request ID.

    For sure, that can be useful. But, sometimes, not enough.

    Using scopes you can also “correlate” distinct HTTP requests that have something in common.

If I call the AddItem endpoint twice, I can filter both by UserName and by ItemId and see all the related logs across distinct HTTP calls.

    Let’s see a real example: I have called the endpoint with different values

    • id=1, username=“davide”
    • id=1, username=“luigi”
    • id=2, username=“luigi”

Since the scope references both properties, we can filter by UserName and discover that Luigi has added both Item 1 and Item 2.

    Filtering logs by UserName

    At the same time, we can filter by ItemId and discover that the item with id = 2 has been added only once.

    Filtering logs by ItemId

    Ok, then, in the end, Scopes or Correlation IDs? The answer is simple:

    Both is good

    This article first appeared on Code4IT

    Read more

    As always, the best place to find the info about a library is its documentation.

    🔗 Serilog website

    If you prefer some more practical articles, I’ve already written one to help you get started with Serilog and Seq (and with Structured Logs):

    🔗 Logging with Serilog and Seq | Code4IT

    as well as one about adding Serilog to Console applications (which is slightly different from adding Serilog to .NET APIs)

    🔗 How to add logs on Console with .NET Core and Serilog | Code4IT

Then, you might want to deep dive into Serilog’s BeginScope. Here’s a neat article by Nicholas Blumhardt. Also, have a look at the comments; you’ll find interesting points to consider.

    🔗 The semantics of ILogger.BeginScope | Nicholas Blumhardt

    Finally, two must-read articles about logging best practices.

    The first one is by Thiago Nascimento Figueiredo:

    🔗 Logs – Why, good practices, and recommendations | Dev.to

    and the second one is by Llron Tal:

    🔗 9 Logging Best Practices Based on Hands-on Experience | Loom Systems

    Wrapping up

    In this article, we’ve added Scopes to our logs to enrich them with some common fields that can be useful to investigate in case of errors.

    Remember to read the last 3 links I’ve shared above, they’re pure gold – you’ll thank me later 😎

    Happy coding!

    🐧




  • How to solve InvalidOperationException for constructors using HttpClientFactory in C#



    A suitable constructor for type ‘X’ could not be located. What a strange error message! Luckily it’s easy to solve.


    A few days ago I was preparing the demo for a new article. The demo included a class with an IHttpClientFactory service injected into the constructor. Nothing more.

    Then, running the application (well, actually, executing the code), this error popped out:

    System.InvalidOperationException: A suitable constructor for type ‘X’ could not be located. Ensure the type is concrete and all parameters of a public constructor are either registered as services or passed as arguments. Also ensure no extraneous arguments are provided.

    How to solve it? It’s easy. But first, let me show you what I did in the wrong version.

    Setting up the wrong example

    For this example, I created an elementary project.
    It’s a .NET 7 API project, with only one controller, GenderController, which calls another service defined in the IGenderizeService interface.

    public interface IGenderizeService
    {
        Task<GenderProbability> GetGenderProbabiliy(string name);
    }
    

    IGenderizeService is implemented by a class, GenderizeService, which is the one that fails to load and, therefore, causes the exception to be thrown. The class calls an external endpoint, parses the result, and then returns it to the caller:

    public class GenderizeService : IGenderizeService
    {
        private readonly IHttpClientFactory _httpClientFactory;
    
        public GenderizeService(IHttpClientFactory httpClientFactory)
        {
            _httpClientFactory = httpClientFactory;
        }
    
        public async Task<GenderProbability> GetGenderProbabiliy(string name)
        {
            var httpClient = _httpClientFactory.CreateClient();
    
            var response = await httpClient.GetAsync($"?name={name}");
    
            var result = await response.Content.ReadFromJsonAsync<GenderProbability>();
    
            return result;
        }
    }
    

Finally, I’ve defined the services in the Program class, and then specified the base URL for the HttpClient instance generated in the GenderizeService class:

    // some code
    
    builder.Services.AddScoped<IGenderizeService, GenderizeService>();
    
    builder.Services.AddHttpClient<IGenderizeService, GenderizeService>(
        client => client.BaseAddress = new Uri("https://api.genderize.io/")
        );
    
    var app = builder.Build();
    
    // some more code
    

    That’s it! Can you spot the error?

    2 ways to solve the error

    The error was quite simple, but it took me a while to spot:

    In the constructor I was injecting an IHttpClientFactory:

    public GenderizeService(IHttpClientFactory httpClientFactory)
    

    while in the host definition I was declaring an HttpClient for a specific class:

    builder.Services.AddHttpClient<IGenderizeService, GenderizeService>
    

Apparently, even if we’ve specified how to create an instance for a specific class, we cannot build it using an IHttpClientFactory: the typed-client registration expects the class to accept an HttpClient directly in its constructor.

    So, here are 2 ways to solve it.

    Use named HttpClient in HttpClientFactory

    Named HttpClients are a helpful way to define a specific HttpClient and use it across different services.

    It’s as simple as assigning a name to an HttpClient instance and then using the same name when you need that specific client.

    So, define it in the Startup method:

    builder.Services.AddHttpClient("genderize",
                client => client.BaseAddress = new Uri("https://api.genderize.io/")
            );
    

    and retrieve it using CreateClient:

    public GenderizeService(IHttpClientFactory httpClientFactory)
    {
        _httpClientFactory = httpClientFactory;
    }
    
    public async Task<GenderProbability> GetGenderProbabiliy(string name)
    {
        var httpClient = _httpClientFactory.CreateClient("genderize");
    
        var response = await httpClient.GetAsync($"?name={name}");
    
        var result = await response.Content.ReadFromJsonAsync<GenderProbability>();
    
        return result;
    }
    

    💡 Quick tip: define the HttpClient names in a constant field shared across the whole system!
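A minimal sketch of that tip (my own illustration; the class and field names are made up):

public static class HttpClientNames
{
    public const string Genderize = "genderize";
}

// registration
builder.Services.AddHttpClient(HttpClientNames.Genderize,
    client => client.BaseAddress = new Uri("https://api.genderize.io/"));

// usage
var httpClient = _httpClientFactory.CreateClient(HttpClientNames.Genderize);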

    Inject HttpClient instead of IHttpClientFactory

    The other way is by injecting an HttpClient instance instead of an IHttpClientFactory.

    So we can restore the previous registration in the Program class:

    builder.Services.AddHttpClient<IGenderizeService, GenderizeService>(
                client => client.BaseAddress = new Uri("https://api.genderize.io/")
            );
    

    and, instead of injecting an IHttpClientFactory, we can directly inject an HttpClient instance:

    public class GenderizeService : IGenderizeService
    {
        private readonly HttpClient _httpClient;
    
        public GenderizeService(HttpClient httpClient)
        {
            _httpClient = httpClient;
        }
    
        public async Task<GenderProbability> GetGenderProbability(string name)
        {
            //var httpClient = _httpClientFactory.CreateClient("genderize");
    
            var response = await _httpClient.GetAsync($"?name={name}");
    
            var result = await response.Content.ReadFromJsonAsync<GenderProbability>();
    
            return result;
        }
    }
    

    We no longer need to call _httpClientFactory.CreateClient because the injected instance of HttpClient is already customized with the settings we’ve defined at Startup.

    Further readings

    I’ve briefly talked about HttpClientFactory in one article of my C# tips series:

    🔗 C# Tip: use IHttpClientFactory to generate HttpClient instance | Code4IT

    And, more in detail, I’ve also talked about one way to mock HttpClientFactory instances in unit tests using Moq:

    🔗 How to test HttpClientFactory with Moq | Code4IT

    Finally, why do we need to use HttpClientFactories instead of HttpClients?

    🔗 Use IHttpClientFactory to implement resilient HTTP requests | Microsoft Docs

    This article first appeared on Code4IT

    Wrapping up

    Yes, it was that easy!

    We received the error message

    A suitable constructor for type ‘X’ could not be located.

    because we were mixing two ways to customize and use HttpClient instances.

    But we’ve only opened Pandora’s box: we will come back to this topic soon!

    For now, Happy coding!

    🐧



    Source link

  • How to propagate HTTP Headers (and Correlation IDs) using HttpClients in C#

    How to propagate HTTP Headers (and Correlation IDs) using HttpClients in C#


    Propagating HTTP Headers can be useful, especially when dealing with Correlation IDs. It’s time to customize our HttpClients!


    Imagine this: you have a system made up of different applications that communicate via HTTP. There’s some sort of entry point, exposed to the clients, that orchestrates the calls to the other applications. How do you correlate those requests?

    A good idea is to use a Correlation ID: one common approach for HTTP-based systems is passing a value to the “public” endpoint using HTTP headers; that value is then passed along to all the other systems involved in that operation, as if to say “hey, these incoming requests in the internal systems happened because of THAT SPECIFIC request to the public endpoint”. Of course, it’s more complex than this, but you get the idea.

    Now. How can we propagate an HTTP Header in .NET? I found this solution on GitHub, provided by no less than David Fowler. In this article, I’m gonna dissect his code to see how he built this solution.

    Important update: there’s a NuGet package that implements these functionalities: Microsoft.AspNetCore.HeaderPropagation. Consider this article as an excuse to understand what happens behind the scenes of an HTTP call, and use it to learn how to customize and extend those functionalities. Here’s how to integrate that package.
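
    To give you a quick preview, here’s a rough sketch of how that package is wired up, based on my reading of its docs (note that it exposes extension methods with the same AddHeaderPropagation name as the ones we’ll build by hand below; double-check the exact API against the official documentation):

    // Program.cs, using the Microsoft.AspNetCore.HeaderPropagation NuGet package
    builder.Services.AddHeaderPropagation(options =>
        options.Headers.Add("my-correlation-id"));
    
    builder.Services.AddHttpClient("items")
        .AddHeaderPropagation(); // attach the propagation to this specific client
    
    var app = builder.Build();
    
    app.UseHeaderPropagation(); // the middleware that captures the incoming header values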

    Just interested in the C# methods?

    As I said, I’m not reinventing anything new: the source code I’m using for this article is available on GitHub (see link above), but still, I’ll paste the code here, for simplicity.

    First of all, we have two extension methods that add some custom functionalities to the IServiceCollection.

    public static class HeaderPropagationExtensions
    {
        public static IServiceCollection AddHeaderPropagation(this IServiceCollection services, Action<HeaderPropagationOptions> configure)
        {
            services.AddHttpContextAccessor();
            services.ConfigureAll(configure);
            services.TryAddEnumerable(ServiceDescriptor.Singleton<IHttpMessageHandlerBuilderFilter, HeaderPropagationMessageHandlerBuilderFilter>());
            return services;
        }
    
        public static IHttpClientBuilder AddHeaderPropagation(this IHttpClientBuilder builder, Action<HeaderPropagationOptions> configure)
        {
            builder.Services.AddHttpContextAccessor();
            builder.Services.Configure(builder.Name, configure);
            builder.AddHttpMessageHandler((sp) =>
            {
                var options = sp.GetRequiredService<IOptionsMonitor<HeaderPropagationOptions>>();
                var contextAccessor = sp.GetRequiredService<IHttpContextAccessor>();
    
                return new HeaderPropagationMessageHandler(options.Get(builder.Name), contextAccessor);
            });
    
            return builder;
        }
    }
    

    Then we have a Filter that will be used to customize how the HttpClients must be built.

    internal class HeaderPropagationMessageHandlerBuilderFilter : IHttpMessageHandlerBuilderFilter
    {
        private readonly HeaderPropagationOptions _options;
        private readonly IHttpContextAccessor _contextAccessor;
    
        public HeaderPropagationMessageHandlerBuilderFilter(IOptions<HeaderPropagationOptions> options, IHttpContextAccessor contextAccessor)
        {
            _options = options.Value;
            _contextAccessor = contextAccessor;
        }
    
        public Action<HttpMessageHandlerBuilder> Configure(Action<HttpMessageHandlerBuilder> next)
        {
            return builder =>
            {
                builder.AdditionalHandlers.Add(new HeaderPropagationMessageHandler(_options, _contextAccessor));
                next(builder);
            };
        }
    }
    

    Next, a simple class that holds the headers we want to propagate:

    public class HeaderPropagationOptions
    {
        public IList<string> HeaderNames { get; set; } = new List<string>();
    }
    

    And, lastly, the handler that actually propagates the headers:

    public class HeaderPropagationMessageHandler : DelegatingHandler
    {
        private readonly HeaderPropagationOptions _options;
        private readonly IHttpContextAccessor _contextAccessor;
    
        public HeaderPropagationMessageHandler(HeaderPropagationOptions options, IHttpContextAccessor contextAccessor)
        {
            _options = options;
            _contextAccessor = contextAccessor;
        }
    
        protected override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, System.Threading.CancellationToken cancellationToken)
        {
            if (_contextAccessor.HttpContext != null)
            {
                foreach (var headerName in _options.HeaderNames)
                {
                    // Get the incoming header value
                    var headerValue = _contextAccessor.HttpContext.Request.Headers[headerName];
                    if (StringValues.IsNullOrEmpty(headerValue))
                    {
                        continue;
                    }
    
                    request.Headers.TryAddWithoutValidation(headerName, (string[])headerValue);
                }
            }
    
            return base.SendAsync(request, cancellationToken);
        }
    }
    

    Ok, and how can we use all of this?

    It’s quite easy: if you want to propagate the my-correlation-id header for all the HttpClients created in your application, you just have to add this line to your Startup method.

    builder.Services.AddHeaderPropagation(options => options.HeaderNames.Add("my-correlation-id"));
    

    Time to study this code!

    How to “enrich” HTTP requests using DelegatingHandler

    Let’s start with the HeaderPropagationMessageHandler class:

    public class HeaderPropagationMessageHandler : DelegatingHandler
    {
        private readonly HeaderPropagationOptions _options;
        private readonly IHttpContextAccessor _contextAccessor;
    
        public HeaderPropagationMessageHandler(HeaderPropagationOptions options, IHttpContextAccessor contextAccessor)
        {
            _options = options;
            _contextAccessor = contextAccessor;
        }
    
        protected override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, System.Threading.CancellationToken cancellationToken)
        {
            if (_contextAccessor.HttpContext != null)
            {
                foreach (var headerName in _options.HeaderNames)
                {
                    // Get the incoming header value
                    var headerValue = _contextAccessor.HttpContext.Request.Headers[headerName];
                    if (StringValues.IsNullOrEmpty(headerValue))
                    {
                        continue;
                    }
    
                    request.Headers.TryAddWithoutValidation(headerName, (string[])headerValue);
                }
            }
    
            return base.SendAsync(request, cancellationToken);
        }
    }
    

    This class lies in the middle of the HTTP Request pipeline. It can extend the functionalities of HTTP Clients because it inherits from System.Net.Http.DelegatingHandler.

    If you recall from a previous article, the SendAsync method is the real core of any HTTP call performed using .NET’s HttpClients, and here we’re enriching that method by propagating some HTTP headers.

     protected override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, System.Threading.CancellationToken cancellationToken)
    {
        if (_contextAccessor.HttpContext != null)
        {
            foreach (var headerName in _options.HeaderNames)
            {
                // Get the incoming header value
                var headerValue = _contextAccessor.HttpContext.Request.Headers[headerName];
                if (StringValues.IsNullOrEmpty(headerValue))
                {
                    continue;
                }
    
                request.Headers.TryAddWithoutValidation(headerName, (string[])headerValue);
            }
        }
    
        return base.SendAsync(request, cancellationToken);
    }
    

    By using _contextAccessor we can access the current HTTP Context. From there, we retrieve the current HTTP headers, check if one of them must be propagated (by looking up _options.HeaderNames), and finally, we add the header to the outgoing HTTP call by using TryAddWithoutValidation.

    HTTP Headers are “cloned” and propagated

    Notice that we’ve used `TryAddWithoutValidation` instead of `Add`: this way, we can use whichever HTTP header key we want without worrying about invalid names (such as ones containing a newline). Invalid header names will simply be ignored, as opposed to the Add method, which would throw an exception.
    Finally, we continue with the HTTP call by executing `base.SendAsync`, passing the `HttpRequestMessage` object now enriched with additional headers.
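
    To see the difference in isolation, here’s a tiny sketch (the comments describe the expected outcome; treat them as indicative):

    using var request = new HttpRequestMessage(HttpMethod.Get, "https://example.com/");
    
    // Add validates the header name and throws for invalid ones:
    // request.Headers.Add("invalid\nname", "123"); // would throw
    
    // TryAddWithoutValidation never throws: it just reports the outcome
    bool added = request.Headers.TryAddWithoutValidation("invalid\nname", "123");
    
    Console.WriteLine(added); // false: the invalid header is simply ignored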

    Using HttpMessageHandlerBuilder to configure how HttpClients must be built

    The Microsoft.Extensions.Http.IHttpMessageHandlerBuilderFilter interface allows you to apply some custom configurations to the HttpMessageHandlerBuilder right before the HttpMessageHandler object is built.

    internal class HeaderPropagationMessageHandlerBuilderFilter : IHttpMessageHandlerBuilderFilter
    {
        private readonly HeaderPropagationOptions _options;
        private readonly IHttpContextAccessor _contextAccessor;
    
        public HeaderPropagationMessageHandlerBuilderFilter(IOptions<HeaderPropagationOptions> options, IHttpContextAccessor contextAccessor)
        {
            _options = options.Value;
            _contextAccessor = contextAccessor;
        }
    
        public Action<HttpMessageHandlerBuilder> Configure(Action<HttpMessageHandlerBuilder> next)
        {
            return builder =>
            {
                builder.AdditionalHandlers.Add(new HeaderPropagationMessageHandler(_options, _contextAccessor));
                next(builder);
            };
        }
    }
    

    The Configure method allows you to customize how the HttpMessageHandler will be built: we are adding a new instance of the HeaderPropagationMessageHandler class we’ve seen before to the current HttpMessageHandlerBuilder’s AdditionalHandlers collection. All the handlers registered in the list will then be used to build the HttpMessageHandler object we’ll use to send and receive requests.


    By having a look at the definition of HttpMessageHandlerBuilder you can grasp a bit of what happens when we’re creating HttpClients in .NET.

    namespace Microsoft.Extensions.Http
    {
        public abstract class HttpMessageHandlerBuilder
        {
            protected HttpMessageHandlerBuilder();
    
            public abstract IList<DelegatingHandler> AdditionalHandlers { get; }
    
            public abstract string Name { get; set; }
    
            public abstract HttpMessageHandler PrimaryHandler { get; set; }
    
            public virtual IServiceProvider Services { get; }
    
            protected internal static HttpMessageHandler CreateHandlerPipeline(HttpMessageHandler primaryHandler, IEnumerable<DelegatingHandler> additionalHandlers);
    
            public abstract HttpMessageHandler Build();
        }
    
    }
    

    Ah, and remember the wise words you can read in the docs of that class:

    The Microsoft.Extensions.Http.HttpMessageHandlerBuilder is registered in the service collection as a transient service.

    Nice 😎

    Share the behavior with all the HTTP Clients in the .NET application

    Now that we’ve defined the custom behavior of HTTP clients, we need to integrate it into our .NET application.

    public static IServiceCollection AddHeaderPropagation(this IServiceCollection services, Action<HeaderPropagationOptions> configure)
    {
        services.AddHttpContextAccessor();
        services.ConfigureAll(configure);
        services.TryAddEnumerable(ServiceDescriptor.Singleton<IHttpMessageHandlerBuilderFilter, HeaderPropagationMessageHandlerBuilderFilter>());
        return services;
    }
    

    Here, we’re gonna extend the IServiceCollection with those functionalities. First, we call AddHttpContextAccessor, which allows us to access the current HTTP Context (the one we’ve used in the HeaderPropagationMessageHandler class).

    Then, services.ConfigureAll(configure) registers the HeaderPropagationOptions that will be used by HeaderPropagationMessageHandlerBuilderFilter. Without that line, we wouldn’t be able to specify the names of the headers to be propagated.

    Finally, we have this line:

    services.TryAddEnumerable(ServiceDescriptor.Singleton<IHttpMessageHandlerBuilderFilter, HeaderPropagationMessageHandlerBuilderFilter>());
    

    Honestly, this one took me a while to grasp: TryAddEnumerable doesn’t limit how many classes can implement IHttpMessageHandlerBuilderFilter (a plain Add also supports multiple implementations); rather, it skips the registration if the exact same (service, implementation) pair has already been added. In practice, it makes AddHeaderPropagation safe to call multiple times without registering duplicate filters, which with Add could end up adding the same header values twice. You can verify it with the sketch below; if you’ve spotted other subtleties of this method, drop a comment below! 👇
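
    Here’s a small, self-contained sketch of that behavior (IGreeter and its implementations are made-up names for the example):

    using Microsoft.Extensions.DependencyInjection;
    using Microsoft.Extensions.DependencyInjection.Extensions;
    
    var services = new ServiceCollection();
    
    services.TryAddEnumerable(ServiceDescriptor.Singleton<IGreeter, EnglishGreeter>());
    
    // The same (service, implementation) pair is skipped the second time:
    services.TryAddEnumerable(ServiceDescriptor.Singleton<IGreeter, EnglishGreeter>());
    
    // A different implementation of the same service is still added:
    services.TryAddEnumerable(ServiceDescriptor.Singleton<IGreeter, ItalianGreeter>());
    
    Console.WriteLine(services.Count); // 2
    
    public interface IGreeter { }
    public class EnglishGreeter : IGreeter { }
    public class ItalianGreeter : IGreeter { }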

    Wherever you access the ServiceCollection object (be it in the Startup or in the Program class), you can propagate HTTP headers for every HttpClient by using

    builder.Services.AddHeaderPropagation(options =>
        options.HeaderNames.Add("my-correlation-id")
    );
    

    Yes, AddHeaderPropagation is the method we’ve seen in the previous paragraph!

    Seeing it in action

    Now we have all the pieces in place.

    It’s time to run it 😎

    To fully understand it, I strongly suggest forking this repository I’ve created and running it locally, placing some breakpoints here and there.

    As a recap: in the Program class, I’ve added these lines to create a named HttpClient specifying its BaseAddress property. Then I’ve added the HeaderPropagation as we’ve seen before.

    builder.Services.AddHttpClient("items")
                        .ConfigureHttpClient(c => c.BaseAddress = new Uri("https://en5xof8r16a6h.x.pipedream.net/"));
    
    builder.Services.AddHeaderPropagation(options =>
        options.HeaderNames.Add("my-correlation-id")
    );
    

    There’s also a simple Controller that acts as an entry point and that, using an HttpClient, sends data to another endpoint (the one defined in the previous snippet).

    [HttpPost]
    public async Task<IActionResult> PostAsync([FromQuery] string value)
    {
        var item = new Item(value);
    
        var httpClient = _httpClientFactory.CreateClient("items");
        await httpClient.PostAsJsonAsync("/", item);
        return NoContent();
    }
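
    For context, here’s roughly how the controller hosting that action could look (the class name and route are assumptions, not taken verbatim from the repository):

    [ApiController]
    [Route("[controller]")]
    public class ItemsController : ControllerBase
    {
        private readonly IHttpClientFactory _httpClientFactory;
    
        // The factory is injected so that we can resolve the named "items" client
        public ItemsController(IHttpClientFactory httpClientFactory)
            => _httpClientFactory = httpClientFactory;
    
        // ... the PostAsync action shown above goes here
    }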
    

    What happens at start-up time

    When a .NET application starts up, the Main method in the Program class acts as an entry point and registers all the dependencies and configurations required.

    We will then call builder.Services.AddHeaderPropagation, which is the method present in the HeaderPropagationExtensions class.

    All the configurations are then set, but no actual operations are being executed.

    The application then starts normally, waiting for incoming requests.

    What happens at runtime

    Now, when we call the PostAsync method by passing an HTTP header such as my-correlation-id:123, things get interesting.

    The first operation is

    var httpClient = _httpClientFactory.CreateClient("items");
    

    While creating the HttpClient, the engine calls all the registered IHttpMessageHandlerBuilderFilter instances and invokes their Configure method. So, you’ll see the execution move to HeaderPropagationMessageHandlerBuilderFilter’s Configure.

    public Action<HttpMessageHandlerBuilder> Configure(Action<HttpMessageHandlerBuilder> next)
    {
        return builder =>
        {
            builder.AdditionalHandlers.Add(new HeaderPropagationMessageHandler(_options, _contextAccessor));
            next(builder);
        };
    }
    

    Of course, you’re also executing the HeaderPropagationMessageHandler constructor.

    The HttpClient is now ready: when we call httpClient.PostAsJsonAsync("/", item) we’re also executing all the registered DelegatingHandler instances, such as our HeaderPropagationMessageHandler. In particular, we’re executing the SendAsync method and adding the required HTTP Headers to the outgoing HTTP calls.

    We will then see the same HTTP Header on the destination endpoint.
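
    If you want to trigger the flow yourself, here’s a minimal sketch of a test call (the localhost URL and the /items route are assumptions based on a default local run; adjust them to your setup):

    using var client = new HttpClient();
    
    using var request = new HttpRequestMessage(HttpMethod.Post, "https://localhost:5001/items?value=book");
    request.Headers.Add("my-correlation-id", "123");
    
    // The entry point creates the "items" HttpClient, and our handler
    // copies my-correlation-id onto the outgoing call.
    var response = await client.SendAsync(request);
    
    Console.WriteLine(response.StatusCode);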

    We did it!

    Propagating CorrelationId to a specific HttpClient

    You can also specify which headers need to be propagated on single HTTP Clients:

    public static IHttpClientBuilder AddHeaderPropagation(this IHttpClientBuilder builder, Action<HeaderPropagationOptions> configure)
    {
        builder.Services.AddHttpContextAccessor();
        builder.Services.Configure(builder.Name, configure);
    
        builder.AddHttpMessageHandler((sp) =>
        {
            var options = sp.GetRequiredService<IOptionsMonitor<HeaderPropagationOptions>>();
            var contextAccessor = sp.GetRequiredService<IHttpContextAccessor>();
    
            return new HeaderPropagationMessageHandler(options.Get(builder.Name), contextAccessor);
        });
    
        return builder;
    }
    

    This works similarly, but registers the handler only for a specific HttpClient.

    For instance, you can have 2 distinct HttpClients that propagate only a specific set of HTTP Headers:

    builder.Services.AddHttpClient("items")
            .AddHeaderPropagation(options => options.HeaderNames.Add("my-correlation-id"));
    
    builder.Services.AddHttpClient("customers")
            .AddHeaderPropagation(options => options.HeaderNames.Add("another-correlation-id"));
    

    Further readings

    Finally, some additional resources if you want to read more.

    For sure, you should check out (and star⭐) David Fowler’s code:

    🔗 Original code | GitHub

    If you’re not sure about what extension methods are (or you can’t answer this question: how does inheritance work with extension methods?), then you can have a look at this article:

    🔗 How you can create extension methods in C# | Code4IT

    We heavily rely on HttpClient and HttpClientFactory. How can you test them? Well, by mocking the SendAsync method!

    🔗 How to test HttpClientFactory with Moq | Code4IT

    We’ve seen the role HttpMessageHandlerBuilder plays when building HttpClients. You can explore that class starting from the documentation.

    🔗 HttpMessageHandlerBuilder Class | Microsoft Docs

    We’ve already seen how to inject and use HttpContext in our applications:

    🔗 How to access the HttpContext in .NET API

    Finally, the repository that you can fork to toy with it:

    🔗 PropagateCorrelationIdOnHttpClients | GitHub

    This article first appeared on Code4IT

    Conclusion

    What a ride!

    We’ve seen how to add functionalities to HttpClients and to HTTP messages. All integrated into the .NET pipeline!

    We’ve learned how to propagate generic HTTP Headers. Of course, you can pick any custom HTTP header and promote it to be your Correlation ID.

    Again, I invite you to download the code and toy with it – it’s incredibly interesting 😎

    Happy coding!

    🐧



    Source link

  • How to deploy .NET APIs on Azure using GitHub actions | Code4IT

    How to deploy .NET APIs on Azure using GitHub actions | Code4IT


    Building APIs with .NET is easy. Deploying them on Azure is easy too, with GitHub Actions!


    With Continuous Delivery (CD), you can deploy your code in a fast-paced and stable way.

    To deploy applications, you’ll need workflows that run and automate the process. In that way, you don’t have to perform repetitive tasks and the whole process becomes less error-prone.

    In this article, we will learn how to implement CD pipelines using GitHub Actions. In particular, we will focus on the case of a .NET API application that will be deployed on Azure.

    Create a .NET API project

    Since the focus of this article is on the deployment part, we won’t create complex APIs. Just a simple Hello World is enough.

    To do that, we’re gonna use dotnet Minimal API – a way to create APIs without scaffolding lots of files and configurations.

    Our API, the BooksAPI, has a single endpoint: /, the root, which simply returns “Hello World!”.

    All our code is stored in the Program file:

    var builder = WebApplication.CreateBuilder(args);
    
    var app = builder.Build();
    
    app.UseHttpsRedirection();
    
    app.MapGet("/", () => "Hello World!");
    
    app.Run();
    

    Nothing fancy: run the application locally, and navigate to the root. You will see the Hello World message.

    Lastly, put your code on GitHub: initialize a repository and publish it – it can be either public or private.

    Create an App Service on Azure

    Now, to deploy an application, we need to define its destination. We’re going to deploy it on Azure, so you need an Azure account before moving on.

    Open the Azure Portal, navigate to the App Service section, and create a new one.

    Configure it as you wish, and then proceed until you have it up and running.

    Once everything is done, you should have something like this:

    Azure App Service overview

    The application is now ready to be used: next, we need to deploy our code to it.

    Generate the GitHub Action YAML file for deploying .NET APIs on Azure

    It’s time to create our Continuous Delivery pipeline.

    Luckily, GitHub already provides lots of templates for GitHub Actions. We will need one specific for our .NET APIs.

    On GitHub, navigate to your repository, head to the Actions menu, and select New workflow.

    New Workflow button on GitHub

    You will see several predefined actions that allow you to do stuff with your repository. We are now interested in the one called “Deploy a .NET Core app to an Azure Web App”:

    Template for deploying the .NET Application on Azure

    Clicking on “Configure”, you will see a template. Read the instructions carefully, as they will guide you through the correct configuration of the GitHub Action.

    In particular, you will have to update the environment variables specified in this section:

    env:
      AZURE_WEBAPP_NAME: your-app-name # set this to the name of your Azure Web App
      AZURE_WEBAPP_PACKAGE_PATH: "." # set this to the path to your web app project, defaults to the repository root
      DOTNET_VERSION: "5" # set this to the .NET Core version to use
    

    Clearly, AZURE_WEBAPP_NAME must match the name you’ve defined on Azure, while DOTNET_VERSION must match the version you’re using to create your dotnet APIs.

    For my specific project, I’ve replaced that section with

    env:
      AZURE_WEBAPP_NAME: BooksAPI<myName> # set this to the name of your Azure Web App
      AZURE_WEBAPP_PACKAGE_PATH: "." # set this to the path to your web app project, defaults to the repository root
      DOTNET_VERSION: "6.0" # set this to the .NET Core version to use
    

    🟧 DOTNET_VERSION also requires the minor version of dotnet. Setting 6 will not work: you need to specify 6.0. 🟧

    Now you can save your YAML file in your repository: it will be saved under ./.github/workflows.

    So, as a reference, here’s the full YAML file I’m using to deploy my APIs:

    name: Build and deploy ASP.Net Core app to an Azure Web App
    
    env:
      AZURE_WEBAPP_NAME: BooksAPI<myName>
      AZURE_WEBAPP_PACKAGE_PATH: "."
      DOTNET_VERSION: "6.0"
    
    on:
      push:
        branches: ["master"]
      workflow_dispatch:
    
    permissions:
      contents: read
    
    jobs:
      build:
        runs-on: ubuntu-latest
    
        steps:
          - uses: actions/checkout@v3
    
          - name: Set up .NET Core
            uses: actions/setup-dotnet@v2
            with:
              dotnet-version: ${{ env.DOTNET_VERSION }}
    
          - name: Set up dependency caching for faster builds
            uses: actions/cache@v3
            with:
              path: ~/.nuget/packages
              key: ${{ runner.os }}-nuget-${{ hashFiles('**/packages.lock.json') }}
              restore-keys: |
                ${{ runner.os }}-nuget-
    
          - name: Build with dotnet
            run: dotnet build --configuration Release
    
          - name: dotnet publish
            run: dotnet publish -c Release -o ${{env.DOTNET_ROOT}}/myapp
    
          - name: Upload artifact for deployment job
            uses: actions/upload-artifact@v3
            with:
              name: .net-app
              path: ${{env.DOTNET_ROOT}}/myapp
    
      deploy:
        permissions:
          contents: none
        runs-on: ubuntu-latest
        needs: build
        environment:
          name: "Development"
          url: ${{ steps.deploy-to-webapp.outputs.webapp-url }}
    
        steps:
          - name: Download artifact from build job
            uses: actions/download-artifact@v3
            with:
              name: .net-app
    
          - name: Deploy to Azure Web App
            id: deploy-to-webapp
            uses: azure/webapps-deploy@v2
            with:
              app-name: ${{ env.AZURE_WEBAPP_NAME }}
              publish-profile: ${{ secrets.AZURE_WEBAPP_PUBLISH_PROFILE }}
              package: ${{ env.AZURE_WEBAPP_PACKAGE_PATH }}
    

    As you can see, we have 2 distinct jobs: build and deploy.

    In the build job, we check out our code, restore the NuGet dependencies, build the project, pack it, and store the final result as an artifact.

    In the deploy job, we retrieve the newly created artifact and publish it on Azure.

    Store the Publish profile as GitHub Secret

    As you can see in the instructions of the workflow file, you have to

    Create a secret in your repository named AZURE_WEBAPP_PUBLISH_PROFILE, paste the publish profile contents as the value of the secret.

    That “Create a secret in your repository named AZURE_WEBAPP_PUBLISH_PROFILE” statement was not clear to me: I thought you had to create that key within your .NET project. It turns out you can create secrets related to repositories on GitHub (so, it’s language-agnostic).

    A Publish profile is a file that contains information and settings used to deploy applications to Azure. It’s nothing but an XML file that lists the possible ways to deploy your application, such as FTP, Web Deploy, Zip Deploy, and so on.

    We have to get our publish profile and save it into GitHub secrets.

    To retrieve the Publish profile, head to the Azure App Service page and click Get publish profile to download the file.

    Get Publish Profile button on Azure Portal

    Now, get back to GitHub and head to Settings > Security > Secrets > Actions.

    Here you can create a new secret related to your repository.

    Create a new one, name it AZURE_WEBAPP_PUBLISH_PROFILE, and paste the content of the Publish profile file you’ve just downloaded.

    You will then see something like this:

    GitHub secret for Publish profile

    Notice that the secret name must be AZURE_WEBAPP_PUBLISH_PROFILE. That constraint is set because we are accessing the Publish profile by key:

    - name: Deploy to Azure Web App
      id: deploy-to-webapp
      uses: azure/webapps-deploy@v2
      with:
        app-name: ${{ env.AZURE_WEBAPP_NAME }}
        publish-profile: ${{ secrets.AZURE_WEBAPP_PUBLISH_PROFILE }}
        package: ${{ env.AZURE_WEBAPP_PACKAGE_PATH }}
    

    In particular, notice the publish-profile: ${{ secrets.AZURE_WEBAPP_PUBLISH_PROFILE }} part.

    Clearly, the two names must match: nothing stops you from changing the name of the secret in both the YAML file and the GitHub Secret page.

    Final result

    It’s time to see the final result.

    Update the application code (I’ve slightly modified the Hello world message), and push your changes to GitHub.

    Under the Actions tab, you will see your CD pipeline run.

    CD workflow run

    Once it’s completed, you can head to your application root and see the final result.

    Final result of the API

    Further readings

    Automating repetitive tasks allows you to perform more actions with fewer errors. Generally speaking, the more stuff you can automate, the better.

    My own blog heavily relies on automation: scaffolding content, tracking ideas, and publishing online…

    If you want to peek at what I do, here are my little secrets:

    🔗 From idea to publishing, and beyond: how I automated my blogging workflow with GitHub, PowerShell, and Azure | Code4IT

    In this article, we’ve only built and deployed our application. We can do more: run tests and keep track of code coverage. If you want to learn how you can do it using Azure DevOps, here we go:

    🔗 Cobertura, YAML, and Code Coverage Protector: how to view Code Coverage report on Azure DevOps | Code4IT

    This article first appeared on Code4IT 🐧

    Wrapping up

    I have to admit that I struggled a lot in setting up the CD pipeline. I was using the one proposed by default on Visual Studio – but it didn’t work.

    Using the template found on GitHub worked almost instantly – I just had to figure out what they meant by repository secrets.

    Now we have everything in place. Since the workflow is stored in a text file within my repository, if I have to create and deploy a new API project, I can simply copy that file and fix the references.

    Nice and easy, right? 😉

    Happy coding!

    🐧



    Source link

  • How to create an API Gateway using Azure API Management | Code4IT

    How to create an API Gateway using Azure API Management | Code4IT


    In a microservices architecture, an API Gateway hides your real endpoints. We will create one using Azure API Management


    If you’re building an application that exposes several services, you might not want to expose them on different hosts. Consumers will have a hard time configuring their application with all the different hostnames, and you will be forced to maintain the same URLs even if you need to move to other platforms or, for instance, you want to transform a REST endpoint into an Azure Function.

    In this case, you should mask the real endpoints beneath a facade: maybe… an API Gateway? 🙂

    In this article, we will learn how to configure Azure API Management (from now on: APIM) service to create an API Gateway and “hide” our real services.

    Demo: publish .NET API services and locate the OpenAPI definition

    For the sake of this article, we will work with 2 API services: BooksService and VideosService.

    They are both .NET 6 APIs, deployed on Azure using GitHub Actions (using the steps I described in a previous article).

    Both services expose their Swagger pages and a bunch of endpoints that we’re gonna hide behind Azure APIM.

    Swagger pages

    How to create Azure API Management (APIM) Service from Azure Portal

    Now, we want to hide their real endpoints. The clients will then only know about the existence of the API Gateway, and not of the two separate API services:

    An API Gateway hides origin endpoints to clients

    It’s time to create our APIM resource.👷‍♂️

    Head to the Azure Portal, and create a new API Management instance. I suggest reading the short overview of the functionalities provided by Azure API Management services as listed in the screenshot below.

    API Management description on Azure Portal

    The wizard will ask you for some info, such as the resource name, the region, and an email used to send communications (honestly speaking, I still haven’t figured out why they’re asking for your email).

    Fill in all the fields, pick your preferred pricing tier (mine is Developer: it doesn’t have an SLA and is quite cheap), and then proceed with the service creation.

    After several minutes (it took 50 minutes – fifty!💢 – to scaffold my instance), you will have your instance ready to be used.

    API management dashboard

    We are now ready to add our APIs and expose them to our clients.

    How to add APIs to Azure API Management using Swagger definition (OpenAPI)

    As we’ve seen in a previous article, Swagger creates a JSON file that describes the operations available in your APIs, as well as the object structures accepted as input and returned as output.

    Let me use as an example the Books API: once that API project is deployed on the cloud (it’s not mandatory to use Azure: it will work the same using other cloud vendors), you will see the Swagger UI and the related JSON definition.

    Swagger UI for BooksAPI

    We have 3 endpoints, /, /echo, and /books; those endpoints are described in the swagger.json file linked in the Swagger page; put that link aside: we will use it soon.

    Finally, we can add our Books APIs to our Azure API Management service! Head to the resource on Azure, locate the APIs menu item on the left panel, and create a new API definition using OpenAPI (which is the standard used by Swagger to create its UI).

    Import API from OpenAPI specification

    You will see a form that allows you to create new resources from OpenAPI specifications.

    Paste the link to the swagger.json file you located before, populate the required fields and, if you want, add a prefix to identify these endpoints: I chose MyBooks.

    Wizard to import APIs from OpenAPI

    You will then see your APIs appear in the panel shown below. It is composed of different parts:

    • The list of services exposed. In the screenshot below, BooksAPI, Echo API, and VideosAPI;
    • The list of endpoints exposed for each service: here, BooksAPI exposes endpoints at /, /echo, and /books;
    • A list of policies that are applied to the inbound requests before hitting the real endpoint;
    • The real endpoint used when calling the facade exposed by APIM;
    • A list of policies applied to the outbound requests after the origin has processed the requests.

    API detail panel

    For now, we will ignore both Inbound and Outbound processing, as they will be the topic of a future article.

    Consuming APIs exposed on the API Gateway

    We’re ready to go! Head back to the Azure API Management service dashboard and locate the URL of the API Gateway under Custom domains > Gateway URL.

    Where to find the Gateway URL

    This will be the root URL that our clients will use.

    We can then access Books API and Videos API both on the Origin and the Gateway (we’re doing it just for demonstrating that things are working; clients will only use the APIs exposed by the API Gateway).

    The Videos API maintains the exact same structure, mapping the endpoints as they are defined in Origin.

    Videos API on Origin and on API Gateway

    On the contrary, to access the Books APIs we have to access the /mybooks path (because we defined it a few steps ago when we imported the BooksAPI from OpenAPI definition: it’s the API Url Suffix field), as shown below:

    Books API on Origin and on API Gateway

    Further readings

    As usual, a bunch of interesting readings 📚

    In this article, we’ve only scratched the surface of Azure API Management. There’s way more – and you can read about it on the Microsoft Docs website:

    🔗 What is Azure API Management? | Microsoft docs

    To integrate Azure APIM, we used two simple dotNET 6 Web APIs deployed on Azure. If you wanna know how to set up GitHub Actions to build and deploy dotNET APIs, I recently published an article on that topic.

    🔗 How to deploy .NET APIs on Azure using GitHub actions | Code4IT

    Lastly, since we’ve talked about Swagger, here’s an article where I dissected how you can integrate Swagger in dotNET Core applications:

    🔗 Understanding Swagger integration in .NET Core | Code4IT

    This article first appeared on Code4IT 🐧

    Wrapping up

    This can be just the beginning of a long journey; APIM allows you to highly customize your API Gateway by defining API access by user role, creating API documentation using custom templates and themes, and a lot of different stuff.

    We will come back to this topic soon.

    Happy coding!

    🐧



    Source link