Tag: Azure

  • how to view Code Coverage report on Azure DevOps | Code4IT


    Code coverage is a good indicator of the health of your projects. We’ll see how to show Cobertura reports associated with your builds on Azure DevOps and how to display the progress on the Dashboard.


    Code coverage is a good indicator of the health of your project: the more your project is covered by tests, the lower the probability of having easy-to-find bugs in it.

    Even though 100% code coverage is a good result, it is not enough: you have to check whether your tests are meaningful and bring value to the project. It really doesn’t make sense to cover every line of your production code with tests that only exercise the happy path: you also have to cover the edge cases!
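
    To make the point concrete, here’s a small, hypothetical example (the Calculator class and its xUnit tests are made up for illustration): the first test covers the normal flow, but only the second one exercises the edge case that actually protects you from bugs.

    using System;
    using Xunit;
    
    public static class Calculator
    {
        public static int Divide(int dividend, int divisor)
        {
            if (divisor == 0)
                throw new ArgumentException("Divisor cannot be zero", nameof(divisor));
    
            return dividend / divisor;
        }
    }
    
    public class CalculatorTests
    {
        [Fact]
        public void Divide_ReturnsQuotient_OnHappyPath()
        {
            Assert.Equal(5, Calculator.Divide(10, 2));
        }
    
        // Without this test, the throw branch is never exercised,
        // even though the overall coverage number may still look good.
        [Fact]
        public void Divide_Throws_WhenDivisorIsZero()
        {
            Assert.Throws<ArgumentException>(() => Calculator.Divide(10, 0));
        }
    }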

    But, even if it’s not enough on its own, having an idea of the code coverage of your project is a good practice: it helps you understand where you should write more tests and, eventually, helps you remove some bugs.

    In a previous article, we’ve seen how to use Coverlet and Cobertura to view the code coverage report on Visual Studio (of course, for .NET projects).

    In this article, we’re gonna see how to show that report on Azure DevOps: by using a specific command (or, even better, a set of flags) in your YAML pipeline definition, we are going to display that report for every build we run on Azure DevOps. This simple addition will help you see the status of a specific build and, if needed, update the code to add more tests.

    Then, in the second part of this article, we’re gonna see how to view the coverage history on your Azure DevOps dashboard, by using a plugin called Code Coverage Protector.

    But first, let’s start with the YAML pipelines!

    Coverlet – the NuGet package for code coverage

    As already explained in my previous article, the very first thing to do to add code coverage calculation is to install a NuGet package called Coverlet. This package must be installed in every test project in your Solution.

    So, running a simple dotnet add package coverlet.msbuild on your test projects is enough!

    Create YAML tasks to add code coverage

    Once we have Coverlet installed, it’s time to add the code coverage evaluation to the CI pipeline.

    We need to add two steps to our YAML file: one for collecting the code coverage on test projects, and one for actually publishing it.

    Run tests and collect code coverage results

    Since we are working with .NET Core applications, we need to use a DotNetCoreCLI@2 task to run dotnet test. But we need to specify some attributes: in the arguments field, add /p:CollectCoverage=true to tell the task to collect code coverage results, and /p:CoverletOutputFormat=cobertura to specify which kind of code coverage format we want to receive as output.

    The task will have this form:

    - task: DotNetCoreCLI@2
      displayName: "Run tests"
      inputs:
        command: "test"
        projects: "**/*[Tt]est*/*.csproj"
        publishTestResults: true
        arguments: "--configuration $(buildConfiguration) /p:CollectCoverage=true /p:CoverletOutputFormat=cobertura"
    

    You can see the code coverage preview directly in the log panel of the running build. The ASCII table shows the code coverage percentage for each module, broken down into the lines, branches, and methods covered by tests.

    Logging dotnet test

    Another interesting thing to notice is that this task generates two files: a trx file, that contains the test results info (which tests passed, which ones failed, and other info), and a coverage.cobertura.xml, that is the file we will use in the next step to publish the coverage results.

    dotnet test generated files

    Publish code coverage results

    Now that we have the coverage.cobertura.xml file, the last thing to do is to publish it.

    Create a task of type PublishCodeCoverageResults@1, specify that the result format is Cobertura, and then specify the location of the file to be published.

    - task: PublishCodeCoverageResults@1
      displayName: "Publish code coverage results"
      inputs:
        codeCoverageTool: "Cobertura"
        summaryFileLocation: "**/*coverage.cobertura.xml"
    

    Final result

    Now that we know which tasks to add, we can write the most basic version of a build pipeline:

    trigger:
      - master
    
    pool:
      vmImage: "windows-latest"
    
    variables:
      solution: "**/*.sln"
      buildPlatform: "Any CPU"
      buildConfiguration: "Release"
    
    steps:
      - task: DotNetCoreCLI@2
        displayName: "Build"
        inputs:
          command: "build"
      - task: DotNetCoreCLI@2
        displayName: "Run tests"
        inputs:
          command: "test"
          projects: "**/*[Tt]est*/*.csproj"
          publishTestResults: true
          arguments: "--configuration $(buildConfiguration) /p:CollectCoverage=true /p:CoverletOutputFormat=cobertura"
      - task: PublishCodeCoverageResults@1
        displayName: "Publish code coverage results"
        inputs:
          codeCoverageTool: "Cobertura"
          summaryFileLocation: "**/*coverage.cobertura.xml"
    

    So, here, we simply build the solution, run the tests and publish both test and code coverage results.

    Where can we see the results?

    If we go to the build execution details, we can see the tests and coverage results under the Tests and coverage section.

    Build summary panel

    By clicking on the Code Coverage tab, we can jump to the full report, where we can see how many lines and branches we have covered.

    Test coverage report

    Then, by clicking on a class (in this case, CodeCoverage.MyArray), we can navigate to the class details and see which lines have been covered by tests.

    Test coverage details on the MyArray class

    Code Coverage Protector: an Azure DevOps plugin

    Now what? We should keep track of the code coverage percentage over time. But opening every build execution to check the progress is not a good idea, is it? We should find another way to see the trend.

    A really useful plugin to manage this use case is Code Coverage Protector, developed by Dave Smits: among other things, it allows you to display the status of code coverage directly on your Azure DevOps Dashboards.

    To install it, head to the plugin page on the marketplace and click get it free.

    “Code Coverage Protector plugin”

    Once you have installed it, you can add one or more of its widgets to your project’s Dashboard, define which Build pipeline it must refer to, select which metric must be taken into consideration (line, branch, class, and so on), and set up a few other options (like the size of the widget).

    “Code Coverage Protector widget on Azure Dashboard”

    So, now, with just one look you can see the progress of your project.

    Wrapping up

    In this article, we’ve seen how to publish code coverage reports for .NET applications on Azure DevOps. We’ve used Cobertura and Coverlet to generate the reports, some YAML configurations to show them in the related build panel, and Code Coverage Protector to show the progress in your Azure DevOps dashboard.

    If you want to do one further step, you could use Code Coverage Protector as a build step to make your builds fail if the current Code Coverage percentage is less than the one from the previous builds.

    Happy coding!






  • [ITA] Azure DevOps: build and release pipelines to deploy with confidence


    About the author

    Davide Bellone is a Principal Backend Developer with more than 10 years of professional experience with Microsoft platforms and frameworks.

    He loves learning new things and sharing these learnings with others: that’s why he writes on this blog and is involved as speaker at tech conferences.

    He’s a Microsoft MVP 🏆, conference speaker (here’s his Sessionize Profile) and content creator on LinkedIn.




  • [ITA] Azure DevOps: plan, build, and release projects | Global Azure Verona




  • Azure Service Bus and C#


    Azure Service Bus is a message broker generally used for sharing messages between applications. In this article, we’re gonna see an introduction to Azure Service Bus and how to work with it using .NET and C#.


    Azure Service Bus is a message broker that allows you to implement queues and pub/sub topics. It is incredibly common to use queues to manage the communication between microservices: they are a simple way to send messages between applications without binding them tightly.

    In this introduction, we’re going to learn the basics of Azure Service Bus: what it is, how to create a Bus and a Queue, how to send and receive messages on the Bus with C#, and more.

    This is the first part of a series about Azure Service Bus. We will see:

    1. An introduction to Azure Service Bus with C#
    2. Queues vs Topics
    3. Handling Azure Service Bus errors with .NET

    But, for now, let’s start from the basics.

    What is Azure Service Bus?

    Azure Service Bus is a complex structure that allows you to send content through a queue.

    As you may already know, a queue is… well, a queue! First in, first out!

    This means that the messages will be delivered in the same order as they were sent.

    Queue of penguins

    Why is using a queue becoming more and more common for scalable applications?
    Let’s consider this use case: you are developing a microservices-based application. With the usual approach, communication occurs via HTTP. This means that:

    • if the receiver is unreachable, the HTTP message is lost (unless you add some kind of retry policy)
    • if you have to scale out, you will need to add a traffic manager/load balancer to manage which instance must process the HTTP Request

    On the contrary, by using a queue,

    • if the receiver is down, the message stays in the queue until the receiver becomes available again
    • if you have to scale out, nothing changes, because the first instance that receives the message removes it from the queue, so you will not have multiple receivers that process the same message.

    How to create an Azure Service Bus instance

    It is really simple to create a new Service Bus on Azure!

    Just open the Azure Portal, head to the Service Bus section, and start creating a new resource.

    You will be prompted to choose which subscription will be linked to this new resource and what the name of that resource will be.

    Lastly, you will have to choose the pricing tier to apply.

    Service Bus creation wizard on Azure UI

    There are 3 pricing tiers available:

    • Basic: its price depends on how many messages you send. At the time of writing, with the Basic tier you pay $0.05 for every million messages sent.
    • Standard: similar to the Basic tier, but it allows you to have both Queues and Topics. You’ll see the difference between Queues and Topics in the next article.
    • Premium: zone-redundant, with both Queues and Topics; of course, quite expensive.

    So now, you can create the resource and see it directly in the browser.

    Policies and Connection Strings

    The first thing to do to connect to the Azure Service Bus is to create a Policy that allows you to perform specific operations on the Bus.

    By default, under the Shared access policies tab you’ll see a policy called RootManageSharedAccessKey: this is the default Policy that allows you to send and receive messages on the Bus.

    To get the connection string, click on that Policy and head to Primary Connection String:

    How to define Service Bus Policy via UI

    A connection string for the Service Bus looks like this:

    Endpoint=sb://c4it-testbus.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=my-secret-key
    

    Let’s break it down:

    The first part represents the Host name: this is the value you’ve set in the creation wizard, and the one you can see on the Overview tab:

    Service Bus instance Host name

    Then, you’ll see the SharedAccessKeyName field, which contains the name of the policy to use (in this case, RootManageSharedAccessKey).

    Then, we have the secret Key. If you select the Primary Connection String, you will use the Primary Key; the same goes for the Secondary Connection String and the Secondary Key.

    Keep that connection string handy, we’re gonna use it in a moment!

    Adding a queue

    Now that we have created the general infrastructure, we need to create a Queue. This is the core of the bus – all the messages pass through a queue.

    To create one, on the Azure site head to Entities > Queues and create a new queue.

    You will be prompted to add different values, but for now, we are only interested in defining its name.

    Write the name of the queue and click Create.

    Create queue panel on Azure UI

    Once you’ve created your queue (for this example, I’ve named it PizzaOrders), you’ll be able to see it in the Queues list and see its details.

    You can even define one or more policies for that specific queue, just as we did before: you’ll be able to generate a connection string similar to the one we’ve already analyzed. The only difference is that, here, the connection string contains a new field, EntityPath, whose value is the name of the related queue.

    So, a full connection string will have this form:

    Service Bus connection string breakdown
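
    For example, assuming the PizzaOrders queue created below and the same placeholder credentials as before (the policy name here is hypothetical), a queue-scoped connection string would look something like this:

    Endpoint=sb://c4it-testbus.servicebus.windows.net/;SharedAccessKeyName=my-queue-policy;SharedAccessKey=my-secret-key;EntityPath=PizzaOrders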

    ServiceBusExplorer – an OSS UI for accessing Azure Service Bus

    How can you see what happens inside the Service Bus?

    You have two options: use the Service Bus Explorer tool directly on Azure:

    Service Bus Explorer on Azure UI

    Or use an external tool.

    I honestly prefer to use ServiceBusExplorer, a project that you can download from Chocolatey: this open-source tool allows you to see what is happening inside Azure Service Bus. Just insert your connection string and… voilà! You’re ready to go!

    ServiceBusExplorer project on Windows

    With this tool, you can see the status of all the queues, as well as send, read, and delete messages.

    If you want to save a connection, you have to open the tool as Administrator; otherwise, you won’t have enough rights to save it.

    How to send and receive messages with .NET 5

    To test it, we’re gonna create a simple project that manages pizza orders.
    A .NET 5 API application receives a list of pizzas to be ordered, then it creates a new message for every pizza received and sends them into the PizzaOrders queue.

    With another application, we’re gonna receive the order of every single pizza by reading it from the same queue.

    For both applications, you’ll need to install the Azure.Messaging.ServiceBus NuGet package.

    How to send messages on Azure Service Bus

    The API application that receives pizza orders from the clients is very simple: just a controller with a single action.

    [ApiController]
    [Route("[controller]")]
    public class PizzaOrderController : ControllerBase
    {
        private string ConnectionString = ""; //hidden
    
        private string QueueName = "PizzaOrders";
    
        [HttpPost]
        public async Task<IActionResult> CreateOrder(IEnumerable<PizzaOrder> orders)
        {
            await ProcessOrder(orders);
            return Ok();
        }
    }
    

    Nothing fancy: it just receives a list of PizzaOrder objects with this shape:

    public class PizzaOrder
    {
        public string Name { get; set; }
        public string[] Toppings { get; set; }
    }
    

    and processes them.

    As you can imagine, the core of the application is the ProcessOrder method.

    private async Task ProcessOrder(IEnumerable<PizzaOrder> orders)
    {
        await using (ServiceBusClient client = new ServiceBusClient(ConnectionString))
        {
            ServiceBusSender sender = client.CreateSender(QueueName);
    
            foreach (var order in orders)
            {
                string jsonEntity = JsonSerializer.Serialize(order);
                ServiceBusMessage serializedContents = new ServiceBusMessage(jsonEntity);
                await sender.SendMessageAsync(serializedContents);
            }
        }
    }
    

    Let’s break it down.

    We need to create a client to connect to the Service Bus by using the specified Connection string:

    await using (ServiceBusClient client = new ServiceBusClient(ConnectionString))
    {
    }
    

    This client must be disposed after its use.

    Then, we need to create a ServiceBusSender object whose sole role is to send messages to a specific queue:

    ServiceBusSender sender = client.CreateSender(QueueName);
    

    Lastly, for every pizza order, we convert the object into a string and we send it as a message in the queue.

    // Serialize as JSON string
    string jsonEntity = JsonSerializer.Serialize(order);
    
    // Create Bus Message
    ServiceBusMessage serializedContents = new ServiceBusMessage(jsonEntity);
    
    // Send the message on the Bus
    await sender.SendMessageAsync(serializedContents);
    

    Hey! Never used async, await, and Task? If you want a short (but quite thorough) introduction to asynchronous programming, head to this article!

    And that’s it! Now the message is available on the PizzaOrders queue and can be received by any client subscribed to it.

    Pizza Order message as shown on ServiceBusExplorer

    Here I serialized the PizzaOrder into a JSON string. This is not mandatory: you can send messages in whichever format you want: JSON, XML, plain text, BinaryData… It’s up to you!

    Also, you can add lots of properties to each message. To read the full list, head to the ServiceBusMessage Class documentation.
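
    For instance, here’s a minimal sketch – continuing the sender snippet above, with made-up values – of a few of those properties:

    ServiceBusMessage message = new ServiceBusMessage(jsonEntity)
    {
        MessageId = Guid.NewGuid().ToString(), // unique id, useful for duplicate detection
        ContentType = "application/json",      // tells the consumer how to interpret the body
        Subject = "PizzaOrder"                 // application-defined label
    };
    
    // Custom key/value pairs travel with the message as metadata
    message.ApplicationProperties["Source"] = "PizzaOrderController";
    
    await sender.SendMessageAsync(message);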

    How to receive messages on Azure Service Bus

    Once we have the messages on the Bus, we need to read them.

    To demonstrate how to read messages from a queue using C#, I have created a simple Console App, named PizzaChef. The first thing to do, of course, is to install the Azure.Messaging.ServiceBus NuGet package.

    As usual, we need a ServiceBusClient object to access the resources on Azure Service Bus. Just as we did before, create a new Client in this way:

    ServiceBusClient serviceBusClient = new ServiceBusClient(ConnectionString);
    

    This time, instead of using a ServiceBusSender, we need to create a ServiceBusProcessor object which, of course, will process all the messages coming from the Queue. Since receiving a message from the queue is an asynchronous operation, we need to register event handlers both for when we receive a message and for when an error occurs:

    ServiceBusProcessor _ordersProcessor = serviceBusClient.CreateProcessor(QueueName);
    _ordersProcessor.ProcessMessageAsync += PizzaItemMessageHandler;
    _ordersProcessor.ProcessErrorAsync += PizzaItemErrorHandler;
    

    For now, let’s add a minimal placeholder implementation of both handlers (the non-async one must at least return a completed Task to compile).

    private Task PizzaItemErrorHandler(ProcessErrorEventArgs arg)
    {
        // we'll implement this in the last article of the series
        return Task.CompletedTask;
    }
    
    private async Task PizzaItemMessageHandler(ProcessMessageEventArgs args)
    {
        // we'll implement this in a moment
        await Task.CompletedTask;
    }
    

    Note: in this article I’ll implement only the PizzaItemMessageHandler method. The PizzaItemErrorHandler, however, must be at least declared, even if empty: you will get an exception if you forget about it. Anyways, we’ll implement it in the last article of this series, the one about error handling.

    To read the content received in the PizzaItemMessageHandler method, you must simply access the Message.Body property of the args parameter:

    string body = args.Message.Body.ToString();
    

    And, from here, you can do whatever you want with the body of the message. For instance, deserialize it into an object. Of course, you can reuse the PizzaOrder class we used before, or create a new class with more properties that is still compatible with the content of the message.

    public class ProcessedPizzaOrder
    {
        public string Name { get; set; }
        public string[] Toppings { get; set; }
    
        public override string ToString()
        {
            if (Toppings?.Any() == true)
                return $"Pizza {Name} with some toppings: {string.Join(',', Toppings)}";
            else
                return $"Pizza {Name} without toppings";
        }
    }
    

    Lastly, we need to mark the message as complete.

    await args.CompleteMessageAsync(args.Message);
    

    Now we can see the full example of the PizzaItemMessageHandler implementation:

    private async Task PizzaItemMessageHandler(ProcessMessageEventArgs args)
    {
        try
        {
            string body = args.Message.Body.ToString();
            Console.WriteLine("Received " + body);
    
            var processedPizza = JsonSerializer.Deserialize<ProcessedPizzaOrder>(body);
    
            Console.WriteLine($"Processing {processedPizza}");
    
            // complete the message: the message is deleted from the queue.
            await args.CompleteMessageAsync(args.Message);
        }
        catch (System.Exception ex)
        {
            // handle exception
        }
    }
    

    Does it work? NO.

    We forgot to start processing the incoming messages. It’s simple: in the Main method, right after the declaration of the ServiceBusProcessor object, we need to call StartProcessingAsync to start processing and, similarly, StopProcessingAsync to stop it.

    Here’s the full example of the Main method: pay attention to the calls to Start and Stop processing.

    private static async Task Main(string[] args)
    {
        ServiceBusProcessor _ordersProcessor = null;
        try
        {
            ServiceBusClient serviceBusClient = new ServiceBusClient(ConnectionString);
    
            _ordersProcessor = serviceBusClient.CreateProcessor(QueueName);
            _ordersProcessor.ProcessMessageAsync += PizzaItemMessageHandler;
            _ordersProcessor.ProcessErrorAsync += PizzaItemErrorHandler;
            await _ordersProcessor.StartProcessingAsync();
    
            Console.WriteLine("Waiting for pizza orders");
            Console.ReadKey();
        }
        catch (Exception)
        {
            throw;
        }
        finally
        {
            if (_ordersProcessor != null)
                await _ordersProcessor.StopProcessingAsync();
        }
    }
    

    While the call to StartProcessingAsync is mandatory (otherwise, how would you receive messages?), the call to StopProcessingAsync, in a console application, can be skipped, since we are destroying the application anyway. At least, I think so: I still haven’t found anything that says whether to call it or skip it. If you know more, please contact me on Twitter or, even better, here in the comments section – so that we can keep the conversation going.

    Wrapping up

    This is part of what I’ve learned from my first approach with Azure Service Bus, and the use of Queues in microservice architectures.

    Is there anything else I should say? Have you ever used queues in your applications? As usual, feel free to drop a comment in the section below, or to contact me on Twitter.

    In the next article, we’re gonna explore another Azure Service Bus feature, called… Topic! We will learn how to use Topics and what the difference is between a Queue and a Topic.

    But, for now, happy coding!




  • Handling Azure Service Bus errors with .NET | Code4IT


    Senders and Receivers handle errors on Azure Service Bus differently. We’ll see how to catch them, what they mean and how to fix them. We’ll also introduce Dead Letters.


    In this article, we are gonna see which kinds of errors you may get on Azure Service Bus and how to fix them. We will look at the simpler errors first – the ones you get when the configuration in your code is wrong, or you haven’t declared the required handlers properly – and then we will have a quick look at Dead Letters and what they represent.

    This is the last part of the series about Azure Service Bus. The series covers:

    1. Introduction to Azure Service Bus
    2. Queues vs Topics
    3. Handling Azure Service Bus errors with .NET (this article)

    For this article, we’re going to introduce some errors in the code we used in the previous examples.

    Just to recap the context: our system receives pizza orders via HTTP APIs and processes them by putting messages on a Topic on Azure Service Bus. Then, a different application, which is listening for notifications on that Topic, reads the messages and performs some dummy operations.

    Common exceptions with .NET SDK

    To introduce the exceptions, we’d better keep at hand the code we used in the previous examples.

    Let’s recall that a connection string has a form like this:

    string ConnectionString = "Endpoint=sb://<myHost>.servicebus.windows.net/;SharedAccessKeyName=<myPolicy>;SharedAccessKey=<myKey>=";
    

    To send a message in the Queue, remember that we have 3 main steps:

    1. create a new ServiceBusClient instance using the connection string
    2. create a new ServiceBusSender specifying the name of the queue or topic (in our case, the Topic)
    3. send the message by calling the SendMessageAsync method
    await using (ServiceBusClient client = new ServiceBusClient(ConnectionString))
    {
        ServiceBusSender sender = client.CreateSender(TopicName);
    
        foreach (var order in validOrders)
        {
    
            // Create Bus Message
            ServiceBusMessage serializedContents = CreateServiceBusMessage(order);
    
            // Send the message on the Bus
            await sender.SendMessageAsync(serializedContents);
        }
    }
    

    To receive messages from a Topic, we need the following steps:

    1. create a new ServiceBusClient instance as we did before
    2. create a new ServiceBusProcessor instance by specifying the name of the Topic and of the Subscription
    3. define a handler for incoming messages
    4. define a handler for error handling
    ServiceBusClient serviceBusClient = new ServiceBusClient(ConnectionString);
    ServiceBusProcessor _ordersProcessor = serviceBusClient.CreateProcessor(TopicName, SubscriptionName);
    _ordersProcessor.ProcessMessageAsync += PizzaInvoiceMessageHandler;
    _ordersProcessor.ProcessErrorAsync += PizzaItemErrorHandler;
    await _ordersProcessor.StartProcessingAsync();
    

    Of course, I recommend reading the previous articles to get a full understanding of the examples.

    Now it’s time to introduce some errors and see what happens.

    No such host is known

    When the connection string is invalid because the host name is wrong, you get an Azure.Messaging.ServiceBus.ServiceBusException exception with this message: No such host is known. ErrorCode: HostNotFound.

    What is the host? It’s the first part of the connection string. For example, in a connection string like

    Endpoint=sb://myHost.servicebus.windows.net/;SharedAccessKeyName=myPolicy;SharedAccessKey=myKey
    

    the host is myHost.servicebus.windows.net.

    So we can easily understand why this error happens: that host name does not exist (or, more probably, there’s a typo).

    A curious fact about this exception: it is thrown later than I expected. I was expecting it to be thrown when initializing the ServiceBusClient instance, but it is actually thrown only when a message is being sent using SendMessageAsync.

    Code is executed correctly even though the host name is wrong

    You can perform all the operations you want without receiving any error until you really access the resources on the Bus.

    Put token failed: The messaging entity X could not be found

    Another message you may receive is Put token failed. status-code: 404, status-description: The messaging entity ‘X’ could not be found.

    The reason is pretty straightforward: the resource you are trying to use does not exist; by resource I mean a Queue, a Topic, or a Subscription.

    Again, that exception is thrown only when interacting directly with Azure Service Bus.

    Put token failed: the token has an invalid signature

    If the connection string is not valid because of invalid SharedAccessKeyName or SharedAccessKey, you will get an exception of type System.UnauthorizedAccessException with the following message: Put token failed. status-code: 401, status-description: InvalidSignature: The token has an invalid signature.

    The best way to fix it is to head to the Azure portal and copy the credentials again, as I explained in the introductory article.
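
    Since all these exceptions surface only when the sender actually touches the Bus, a pragmatic option on the sender side is to wrap the send call in a try/catch. Here’s a minimal sketch, reusing the sender and serializedContents objects from the examples above:

    try
    {
        await sender.SendMessageAsync(serializedContents);
    }
    catch (ServiceBusException ex)
    {
        // Wrong host or missing Queue/Topic/Subscription end up here;
        // ex.Reason gives a machine-readable failure category.
        Console.WriteLine($"Service Bus error ({ex.Reason}): {ex.Message}");
        throw;
    }
    catch (UnauthorizedAccessException ex)
    {
        // Invalid SharedAccessKeyName or SharedAccessKey in the connection string.
        Console.WriteLine($"Invalid credentials: {ex.Message}");
        throw;
    }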

    Cannot begin processing without ProcessErrorAsync handler set.

    Let’s recall a statement from my first article about Azure Service Bus:

    The PizzaItemErrorHandler, however, must be at least declared, even if empty: you will get an exception if you forget about it.

    That’s odd, but true: you have to define handlers for both success and failure.

    If you don’t, and you only declare the ProcessMessageAsync handler, like in this example:

    ServiceBusClient serviceBusClient = new ServiceBusClient(ConnectionString);
    ServiceBusProcessor _ordersProcessor = serviceBusClient.CreateProcessor(TopicName, SubscriptionName);
    _ordersProcessor.ProcessMessageAsync += PizzaInvoiceMessageHandler;
    //_ordersProcessor.ProcessErrorAsync += PizzaItemErrorHandler;
    await _ordersProcessor.StartProcessingAsync();
    

    you will get an exception with the message: Cannot begin processing without ProcessErrorAsync handler set.

    An exception is thrown when the ProcessErrorAsync handler is not defined

    So, the simplest way to solve this error is… to create the handler for ProcessErrorAsync, even empty. But why do we need it, then?

    Why do we need the ProcessErrorAsync handler?

    As I said, yes, you could declare that handler and leave it empty. But if it exists, there must be a reason, right?

    The handler has this signature:

    private Task PizzaItemErrorHandler(ProcessErrorEventArgs arg)
    

    and acts as a catch block for the receivers: all the errors we’ve seen in the first part of the article can be handled here. Of course, we are not directly receiving an instance of Exception, but we can access it by navigating the arg object.

    As an example, let’s break the host part of the connection string again. When running the application, we can see that the error is caught in the PizzaItemErrorHandler method, and the arg argument contains many fields that we can use to handle the error. One of them is Exception, which wraps the exception types we’ve already seen.

    Error handling on ProcessErrorAsync

    This means that in this method you have to define your error handling, add logs, and do whatever else may help your application manage errors.
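
    As a starting point, here’s a minimal sketch of such a handler that just logs the most useful fields of the ProcessErrorEventArgs object:

    private Task PizzaItemErrorHandler(ProcessErrorEventArgs arg)
    {
        // Where the error happened: receiving, completing, the processor itself, ...
        Console.WriteLine($"Error source: {arg.ErrorSource}");
    
        // The entity (Queue/Topic/Subscription) and namespace involved
        Console.WriteLine($"Entity: {arg.EntityPath} on {arg.FullyQualifiedNamespace}");
    
        // The wrapped exception (e.g. ServiceBusException, UnauthorizedAccessException)
        Console.WriteLine($"Exception: {arg.Exception}");
    
        return Task.CompletedTask;
    }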

    The same handler can be used to manage errors that occur while performing operations on a message: if an exception is thrown when processing an incoming message, you have two choices: handle it in the ProcessMessageAsync handler with a try-catch block, or leave the error handling to the ProcessErrorAsync handler.

    ProcessErrorEventArgs details

    In the above picture, I’ve simulated an error while processing an incoming message by throwing a new DivideByZeroException. As a result, the PizzaItemErrorHandler method is called, and the arg argument contains info about the thrown exception.

    I personally prefer separating the two error handling situations: in the ProcessMessageAsync method I handle errors that occur in the business logic, when operating on an already received message; in the ProcessErrorAsync method I handle errors coming from the infrastructure, like errors in the connection string, invalid credentials, and so on.

    Dead Letters: when messages become stale

    When talking about queues, you’ll often come across the term dead letter. What does it mean?

    Dead letters are unprocessed messages: a message “dies” when it cannot be processed for a certain period of time, either because it has become obsolete or because it simply cannot be processed – maybe because it is malformed.

    Messages like these are moved to a specific queue called the Dead Letter Queue (DLQ): messages are moved here to avoid filling the normal queue with messages that will never be processed.

    You can inspect the messages in the DLQ to try to understand why they failed and put them back into the main queue.
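
    If you prefer inspecting the DLQ from code instead of from a UI, the SDK lets you point a receiver at the dead-letter sub-queue. A minimal sketch, assuming the PizzaOrders queue and connection string from the previous articles (for a Topic subscription there is an equivalent CreateReceiver overload that also takes the subscription name):

    await using ServiceBusClient client = new ServiceBusClient(ConnectionString);
    
    // SubQueue.DeadLetter targets the DLQ of the given queue
    ServiceBusReceiver dlqReceiver = client.CreateReceiver("PizzaOrders",
        new ServiceBusReceiverOptions { SubQueue = SubQueue.DeadLetter });
    
    ServiceBusReceivedMessage deadLetter = await dlqReceiver.ReceiveMessageAsync();
    
    if (deadLetter != null)
    {
        // Why the message was dead-lettered, if a reason was recorded
        Console.WriteLine($"{deadLetter.DeadLetterReason}: {deadLetter.DeadLetterErrorDescription}");
        Console.WriteLine(deadLetter.Body.ToString());
    }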

    Dead Letter Queue on ServiceBusExplorer

    In the above picture, you can see how the DLQ can be navigated using Service Bus Explorer: you can see all the messages in the DLQ, update them (not only the content, but also the associated metadata), and put them back into the main Queue to be processed.

    Wrapping up

    In this article, we’ve seen some of the errors you can meet when working with Azure Service Bus and .NET.

    We’ve seen the most common exceptions and how to manage them on both the Sender and the Receiver side: on the Receiver, you must handle them in the ProcessErrorAsync handler.

    Finally, we’ve seen what a Dead Letter is, and how you can recover messages moved to the DLQ.

    This is the last part of this series about Azure Service Bus and .NET: there’s a lot more to talk about, like diving deeper into the DLQ and understanding retry patterns.

    For more info, you can read this article about retry mechanisms on the .NET SDK available on Microsoft Docs, and have a look at this article by Felipe Polo Ruiz.

    Happy coding! 🐧




  • how I automated my blogging workflow with GitHub, PowerShell, and Azure | Code4IT



    After 100 articles, I’ve found some neat ways to automate my blogging workflow. I will share my experience and the tools I use from the very beginning to the very end.


    This is my 100th article 🥳 To celebrate it, I want to share with you the full process I use for writing and publishing articles.

    In this article I will share all the automation and tools I use for writing, starting from the moment an idea for an article pops up in my mind to what happens weeks after an article has been published.

    I hope to give you some ideas to speed up your publishing process. Of course, I’m open to suggestions to improve my own flow: perhaps (well, certainly), you use better tools and processes, so feel free to share them.

    Introducing my blog architecture

    To better understand what’s going on, I need a very brief overview of the architecture of my blog.

    It is written in Gatsby, a framework based on ReactJS that, in short, allows you to transform Markdown files into blog posts (it does many other things, but they are not important for the purpose of this article).

    So, my whole blog is stored in a private GitHub repository. Every time I push some changes to the master branch, a new deployment is triggered, and I can see my changes on the blog within a few minutes.

    As I said, I use Gatsby. But the key point here is that my blog is stored in a GitHub repo: this means that everything you’ll read here is valid for any Headless CMS based on Git, such as Gatsby, Hugo, NextJS, and Jekyll.

    Now that you know some general aspects, it’s time to deep dive into my writing process.

    Before writing: organizing ideas with GitHub

    My central source, as you might have already understood, is GitHub.

    There, I write all my notes and keep track of the status of my articles.

    Everything is quite well organized, and with the support of some automation, I can speed up my publishing process.

    Github Projects to track the status of the articles

    GitHub Projects is the part of GitHub that allows you to organize GitHub Issues and track their status.

    GitHub projects

    I’ve created 2 GitHub Projects: one for the main articles (like this one), and one for my C# and Clean Code Tips.

    In this way, I can use different columns and have more flexibility when handling the status of the tasks.

    GitHub issues templates

    As I said, to write my notes I use GitHub issues.

    When I add a new Issue, the first thing is to define which type of article I want to write. And, since sometimes many weeks or months pass between when I come up with the idea for an article and when I start writing it, I need to organize my ideas in a structured way.

    To do that, I use GitHub templates. When I create a new Issue, I choose which kind of article I’m going to write.

    The list of GitHub issues templates I use

    Based on the layout, I can add different info. For instance, when I want to write a new “main” article, I see this form

    Article creation form as generated by a template

    which is prepopulated with some fields:

    • Title: with a placeholder ([Article] )
    • Content: with some sections (the titles, translated from Italian, mean Topics, Links, General notes)
    • Labels: I automatically assign the Article label to the issue (you’ll see later why I do that)

    How can you create GitHub issue templates? All you need is a Markdown file under the .github/ISSUE_TEMPLATE folder with content similar to this one.

    ---
    name: New article
    about: New blog article
    title: "[Article] - "
    labels: Article
    assignees: bellons91
    ---
    
    ## Argomenti
    
    ## Link
    
    ## Appunti vari
    

    And you’re good to go!

    GitHub action to assign issues to a project

    Now I have GitHub Projects and different GitHub Issues Templates. How can I join the different parts? Well, with GitHub Actions!

    With GitHub Actions, you can automate almost everything that happens in GitHub (and outside) using YAML files.

    So, here’s mine:

    Auto-assign to project GitHub Action

    For better readability, you can find the Gist here.

    This action looks for opened and labeled issues and pull requests, and based on the value of the label it assigns the element to the correct project.

    In this way, after I choose a template, fill in the fields, and add additional labels (like C#, Docker, and so on), I can see the newly created issue directly in the Articles board. Neat 😎

    Writing

    Now it’s time for the actual writing!

    As I said, I’m using Gatsby, so all my articles are stored in a GitHub repository and written in Markdown.

    For every article I write, I use a separate git branch: in this way, I’m free to update the content already online (in case of a typo) without publishing my drafts.

    But, of course, I automated it! 😎

    Powershell script to scaffold a new article

    Every article lives in its /content/posts/{year}/{folder-name}/article.md file. And they all have a cover image in a file named cover.png.

    Also, every MD file begins with a Frontmatter section, like this:

    ---
    title: "How I automated my publishing flow with Gatsby, GitHub, PowerShell and Azure"
    path: "/blog/automate-articles-creations-github-powershell-azure"
    tags: ["MainArticle"]
    featuredImage: "./cover.png"
    excerpt: "a description for 072-how-i-create-articles"
    created: 4219-11-20
    updated: 4219-11-20
    ---
    

    But, you know, I was tired of creating everything from scratch. So I wrote a PowerShell script to do everything for me.

    PowerShell script to scaffold a new article

    You can find the code in this Gist.

    This script performs several actions:

    1. Switches to the Master branch and downloads the latest updates
    2. Asks for the article slug that will be used to create the folder name
    3. Creates a new branch using the article slug as a name
    4. Creates a new folder that will contain all the files I will be using for my article (markdown content and images)
    5. Creates the article file with the Frontmatter part populated with dummy values
    6. Copies a placeholder image into this folder; this image will be the temporary cover image

    In this way, with a single command, I can scaffold a new article with all the files I need to get started.

    Ok, but how can I run a PowerShell script in a Gatsby repository?

    I added this script to the package.json file:

    "create-article": "@powershell -NoProfile -ExecutionPolicy Unrestricted -Command ./article-creator.ps1",
    

    where article-creator.ps1 is the name of the file that contains the script.

    Now I can simply run npm run create-article to have a new empty article in a new branch, already updated with everything published in the Master branch.

    Markdown preview on VS Code

    I use Visual Studio Code to write my articles: I like it because it’s quite fast and has lots of functionalities for writing in Markdown (you can pick your favorites in the Extensions store).

    One of my favorites is the Preview on Side. To see the result of your Markdown in a side panel, press CTRL+SHIFT+P and select Open Preview to the Side.

    Here’s what I can see right now while I’m writing:

    Markdown preview on the side with VS Code

    Grammar check with Grammarly

    Then, it’s time for a grammar check. I use Grammarly, which helps me fix lots of errors (well, lately, only a few: it means I’ve improved a lot! 😎).

    I copy the Markdown in their online editor, fix the issues, and copy it back into my repo.

    Fun fact: the online editor recognizes that you’re using Markdown and automatically checks only the actual text, ignoring all the symbols you use in Markdown (like brackets).

    Unprofessional, but fun, cover images

    One of the tasks I like the most is creating my cover images.

    I don’t use stock images; I prefer less professional but more original cover images.

    Some of the cover images for my articles

    You can see all of them here.

    Creating and scheduling PR on GitHub with Templates and Actions

    Now that my article is complete, I can set it as ready for being scheduled.

    To do that, I open a Pull Request to the Master Branch, and, again, add some kind of automation!

    I have created a PR template in an MD file, which I use to create a draft of the PR content.

    Pull Request form on GitHub

    In this way, I can define which task (so, which article) is related to this PR, using the “Closes” formula (“Closes #111174” means that I’m closing the Issue with ID 111174).

    Also, I can define when this PR will be merged on Master, using the /schedule tag.

    It works because I have integrated into my workflow a GitHub Action, merge-schedule, that reads the date from that field to understand when the PR must be merged.

    YAML of Merge Schedule action

    So, every Tuesday at 8 AM, this action runs to check if there are any PRs that can be merged. If so, the PR will be merged into master, and the CI/CD pipeline builds the site and publishes the new content.

    As usual, you can find the code of this action here

    After the PR is merged, I also receive an email that notifies me of the action.

    After publishing

    Once a new article is online, I like to give it some visibility.

    To do that, I heavily rely on Azure Logic Apps.

    Azure Logic App for sharing on Twitter

    My blog exposes an RSS feed. And, obviously, when a new article is created, a new item appears in the feed.

    I use it to trigger an Azure Logic App to publish a message on Twitter:

    Azure Logic App workflow for publishing on Twitter

    The Logic App reads the newly published feed item and uses its metadata to create a message that will be shared on Twitter.

    If you prefer, you can use a custom Azure Function! The choice is yours!

    Cross-post reminder with Azure Logic Apps

    Similarly, I use an Azure Logic App to send myself an email reminding me to cross-post my articles to other platforms.

    Azure Logic App workflow for crosspost reminders

    I’ve added a delay so that my content lives longer, and I can repost it even after weeks or months.

    Unluckily, I have to cross-post my articles manually. This is quite time-consuming, especially when there are lots of images: in my MD files I use relative paths, so when porting my content to different platforms I have to find the absolute URL for every image.

    And, my friends, this is everything that happens in the background of my blog!

    What I’m still missing

    I’ve put a lot of effort into my blog, and I’m incredibly proud of it!

    But still, there are a few things I’d like to improve.

    SEO Tools/analysis

    I’ve never considered SEO. Or, better, Keywords.

    I write for the sake of writing, and because I love it. And I don’t like to stuff my content with keywords just to rank better on search engines.

    I take care of things like alt texts, well-structured sections, and so on. But I’m not able to follow the “rules” to find the best keywords.

    Maybe I should use some SEO tools to find the best keywords for me. But I don’t want to bend to that way of creating content.

    Also, I should spend more time thinking of the correct title and section titles.

    Any idea?

    Easy upgrade of Gatsby/Migrate to other headless CMSs

    Lastly, I’d like to find another theme or platform and leave the one I’m currently using.

    Not because I don’t like it. But because many dependencies are outdated, and the theme I’m using hasn’t been updated since 2019.

    Wrapping up

    That’s it: in this article, I’ve explained everything that I do when writing a blog post.

    Feel free to take inspiration from my automation to improve your own workflow, and contact me if you have some nice improvements or ideas: I’m all ears!

    So, for now, happy coding!

    🐧




  • How to deploy .NET APIs on Azure using GitHub actions | Code4IT



    Building APIs with .NET is easy. Deploying them on Azure is easy too, with GitHub Actions!


    With Continuous Delivery (CD), you can deploy your code in a fast-paced and stable way.

    To deploy applications, you’ll need workflows that run and automate the process. In that way, you don’t have to perform repetitive tasks and the whole process becomes less error-prone.

    In this article, we will learn how to implement CD pipelines using GitHub Actions. In particular, we will focus on the case of a .NET API application that will be deployed on Azure.

    Create a .NET API project

    Since the focus of this article is on the deployment part, we won’t create complex APIs. Just a simple Hello World is enough.

    To do that, we’re gonna use dotnet Minimal API – a way to create APIs without scaffolding lots of files and configurations.

    Our API, the BooksAPI, has a single endpoint: /, the root, which simply returns “Hello World!”.

    All our code is stored in the Program file:

    var builder = WebApplication.CreateBuilder(args);
    
    var app = builder.Build();
    
    app.UseHttpsRedirection();
    
    app.MapGet("/", () => "Hello World!");
    
    app.Run();
    

    Nothing fancy: run the application locally, and navigate to the root. You will see the Hello World message.

    Lastly, put your code on GitHub: initialize a repository and publish it on GitHub – it can either be a public or a private repository.

    Create an App Service on Azure

    Now, to deploy an application, we need to define its destination. We’re going to deploy it on Azure, so you need an Azure account before moving on.

    Open the Azure Portal, navigate to the App Service section, and create a new one.

    Configure it as you wish, and then proceed until you have it up and running.

    Once everything is done, you should have something like this:

    Azure App Service overview

    Now the application is ready to be used: we now need to deploy our code here.

    Generate the GitHub Action YAML file for deploying .NET APIs on Azure

    It’s time to create our Continuous Delivery pipeline.

    Luckily, GitHub already provides lots of templates for GitHub Actions. We will need one specific for our .NET APIs.

    On GitHub, navigate to your repository, head to the Actions menu, and select New workflow.

    New Workflow button on GitHub

    You will see several predefined actions that allow you to do stuff with your repository. We are now interested in the one called “Deploy a .NET Core app to an Azure Web App”:

    Template for deploying the .NET Application on Azure

    By clicking on “Configure”, you will see a template. Read the instructions carefully, as they will guide you to the correct configuration of the GitHub Action.

    In particular, you will have to update the environment variables specified in this section:

    env:
      AZURE_WEBAPP_NAME: your-app-name # set this to the name of your Azure Web App
      AZURE_WEBAPP_PACKAGE_PATH: "." # set this to the path to your web app project, defaults to the repository root
      DOTNET_VERSION: "5" # set this to the .NET Core version to use
    

    Clearly, AZURE_WEBAPP_NAME must match the name you’ve defined on Azure, while DOTNET_VERSION must match the version you’re using to create your dotnet APIs.

    For my specific project, I’ve replaced that section with

    env:
      AZURE_WEBAPP_NAME: BooksAPI<myName> # set this to the name of your Azure Web App
      AZURE_WEBAPP_PACKAGE_PATH: "." # set this to the path to your web app project, defaults to the repository root
      DOTNET_VERSION: "6.0" # set this to the .NET Core version to use
    

    🟧 DOTNET_VERSION also requires the minor version of dotnet. Setting 6 will not work: you need to specify 6.0. 🟧

    Now you can save your YAML file in your repository: it will be saved under ./.github/workflows.

    So, as a reference, here’s the full YAML file I’m using to deploy my APIs:

    name: Build and deploy ASP.Net Core app to an Azure Web App
    
    env:
      AZURE_WEBAPP_NAME: BooksAPI<myName>
      AZURE_WEBAPP_PACKAGE_PATH: "."
      DOTNET_VERSION: "6.0"
    
    on:
      push:
        branches: ["master"]
      workflow_dispatch:
    
    permissions:
      contents: read
    
    jobs:
      build:
        runs-on: ubuntu-latest
    
        steps:
          - uses: actions/checkout@v3
    
          - name: Set up .NET Core
            uses: actions/setup-dotnet@v2
            with:
              dotnet-version: ${{ env.DOTNET_VERSION }}
    
          - name: Set up dependency caching for faster builds
            uses: actions/cache@v3
            with:
              path: ~/.nuget/packages
              key: ${{ runner.os }}-nuget-${{ hashFiles('**/packages.lock.json') }}
              restore-keys: |
                            ${{ runner.os }}-nuget-
    
          - name: Build with dotnet
            run: dotnet build --configuration Release
    
          - name: dotnet publish
            run: dotnet publish -c Release -o ${{env.DOTNET_ROOT}}/myapp
    
          - name: Upload artifact for deployment job
            uses: actions/upload-artifact@v3
            with:
              name: .net-app
              path: ${{env.DOTNET_ROOT}}/myapp
    
      deploy:
        permissions:
          contents: none
        runs-on: ubuntu-latest
        needs: build
        environment:
          name: "Development"
          url: ${{ steps.deploy-to-webapp.outputs.webapp-url }}
    
        steps:
          - name: Download artifact from build job
            uses: actions/download-artifact@v3
            with:
              name: .net-app
    
          - name: Deploy to Azure Web App
            id: deploy-to-webapp
            uses: azure/webapps-deploy@v2
            with:
              app-name: ${{ env.AZURE_WEBAPP_NAME }}
              publish-profile: ${{ secrets.AZURE_WEBAPP_PUBLISH_PROFILE }}
              package: ${{ env.AZURE_WEBAPP_PACKAGE_PATH }}
    

    As you can see, we have 2 distinct jobs: build and deploy.

    In the build phase, we check out our code, restore the NuGet dependencies, build the project, pack it and store the final result as an artifact.

    In the deploy job, we retrieve the newly created artifact and publish it on Azure.

    Store the Publish profile as GitHub Secret

    As you can see in the instructions of the workflow file, you have to

    Create a secret in your repository named AZURE_WEBAPP_PUBLISH_PROFILE, paste the publish profile contents as the value of the secret.

    That Create a secret in your repository named AZURE_WEBAPP_PUBLISH_PROFILE statement was not clear to me: I thought you had to create that key within your .NET project. Turns out you can create secrets related to repositories on GitHub (so, it’s language-agnostic).

    A Publish profile is a file that contains information and settings used to deploy applications to Azure. It’s nothing but an XML file that lists the possible ways to deploy your application, such as FTP, Web Deploy, Zip Deploy, and so on.

    We have to get our publish profile and save it into GitHub secrets.

    To retrieve the Publish profile, head to the Azure App Service page and click Get publish profile to download the file.

    Get Publish Profile button on Azure Portal

    Now, get back to GitHub and head to Settings > Security > Secrets > Actions.

    Here you can create a new secret related to your repository.

    Create a new one, name it AZURE_WEBAPP_PUBLISH_PROFILE, and paste the content of the Publish profile file you’ve just downloaded.

    You will then see something like this:

    GitHub secret for Publish profile

    Notice that the secret name must be AZURE_WEBAPP_PUBLISH_PROFILE. That constraint is set because we are accessing the Publish profile by key:

    - name: Deploy to Azure Web App
        id: deploy-to-webapp
        uses: azure/webapps-deploy@v2
        with:
            app-name: ${{ env.AZURE_WEBAPP_NAME }}
            publish-profile: ${{ secrets.AZURE_WEBAPP_PUBLISH_PROFILE }}
            package: ${{ env.AZURE_WEBAPP_PACKAGE_PATH }}
    

    In particular, notice the publish-profile: ${{ secrets.AZURE_WEBAPP_PUBLISH_PROFILE }} part.

    Clearly, the two names must match: nothing stops you from changing the name of the secret in both the YAML file and the GitHub Secret page.

    Final result

    It’s time to see the final result.

    Update the application code (I’ve slightly modified the Hello world message), and push your changes to GitHub.

    Under the Actions tab, you will see your CD pipeline run.

    CD workflow run

    Once it’s completed, you can head to your application root and see the final result.

    Final result of the API

    Further readings

    Automating repetitive tasks allows you to perform more actions with fewer errors. Generally speaking, the more stuff you can automate, the better.

    My own blog heavily relies on automation: scaffolding content, tracking ideas, and publishing online…

    If you want to peek at what I do, here are my little secrets:

    🔗 From idea to publishing, and beyond: how I automated my blogging workflow with GitHub, PowerShell, and Azure | Code4IT

    In this article, we’ve only built and deployed our application. We can do more: run tests and keep track of code coverage. If you want to learn how you can do it using Azure DevOps, here we go:

    🔗 Cobertura, YAML, and Code Coverage Protector: how to view Code Coverage report on Azure DevOps | Code4IT

    This article first appeared on Code4IT 🐧

    Wrapping up

    I have to admit that I struggled a lot in setting up the CD pipeline. I was using the one proposed by default on Visual Studio – but it didn’t work.

    Using the template found on GitHub worked almost instantly – I just had to figure out what they meant by repository secrets.

    Now we have everything in place. Since the workflow is stored in a text file within my repository, if I have to create and deploy a new API project I can simply do that by copying that file and fixing the references.

    Nice and easy, right? 😉

    Happy coding!

    🐧




  • How to create an API Gateway using Azure API Management | Code4IT

    How to create an API Gateway using Azure API Management | Code4IT


    In a microservices architecture, an API Gateway hides your real endpoints. We will create one using Azure API Management


    If you’re building an application that exposes several services, you might not want to expose them on different hosts. Consumers would have a hard time configuring their applications with all the different hostnames, and you would be forced to keep the same URLs even if you need to move to another platform or, for instance, you want to transform a REST endpoint into an Azure Function.

    In this case, you should mask the real endpoints beneath a facade: maybe… an API Gateway? 🙂

    In this article, we will learn how to configure Azure API Management (from now on: APIM) service to create an API Gateway and “hide” our real services.

    Demo: publish .NET API services and locate the OpenAPI definition

    For the sake of this article, we will work with 2 API services: BooksService and VideosService.

    They are both .NET 6 APIs, deployed on Azure using GitHub Actions (using the steps I described in a previous article).

    Both services expose their Swagger pages and a bunch of endpoints that we are going to hide behind Azure APIM.

    Swagger pages

    How to create Azure API Management (APIM) Service from Azure Portal

    Now, we want to hide their real endpoints. The clients will then only know about the existence of the API Gateway, and not of the two separate API services:

    An API Gateway hides origin endpoints to clients

    It’s time to create our APIM resource.👷‍♂️

    Head to the Azure Portal, and create a new API Management instance. I suggest reading the short overview of the functionalities provided by Azure API Management services as listed in the screenshot below.

    API Management description on Azure Portal

    The wizard will ask you for some info, such as the resource name, the region, and an email used to send communications (honestly speaking, I still haven’t figured out why they’re asking for your email).

    Fill in all the fields, pick your preferred pricing tier (mine is Developer: it doesn’t have an SLA and is quite cheap), and then proceed with the service creation.

    After several minutes (it took 50 minutes – fifty!💢 – to scaffold my instance), you will have your instance ready to be used.

    API management dashboard

    We are now ready to add our APIs and expose them to our clients.

    How to add APIs to Azure API Management using Swagger definition (OpenAPI)

    As we’ve seen in a previous article, Swagger creates a JSON file that describes the operations available in your APIs, as well as the object structures accepted as input and returned as output.

    Let me use as an example the Books API: once that API project is deployed on the cloud (it’s not mandatory to use Azure: it will work the same using other cloud vendors), you will see the Swagger UI and the related JSON definition.

    Swagger UI for BooksAPI

    We have 3 endpoints, /, /echo, and /books; those endpoints are described in the swagger.json file linked in the Swagger page; put that link aside: we will use it soon.
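
    Just to give an idea of what lies behind that definition, here is a minimal sketch of how an API exposing those three endpoints could look as a .NET 6 minimal API (the endpoint bodies are invented for illustration purposes, and the Swashbuckle.AspNetCore package is assumed for Swagger generation; the real BooksService is not shown in this article):

    var builder = WebApplication.CreateBuilder(args);
    builder.Services.AddEndpointsApiExplorer();
    builder.Services.AddSwaggerGen(); // generates the swagger.json we will import into APIM

    var app = builder.Build();
    app.UseSwagger();
    app.UseSwaggerUI();

    // The three endpoints described in the Swagger definition
    app.MapGet("/", () => "BooksAPI is up and running");
    app.MapGet("/echo", (string? message) => message ?? "echo");
    app.MapGet("/books", () => new[] { new { Id = 1, Title = "The Hobbit" } });

    app.Run();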

    Finally, we can add our Books APIs to our Azure API Management service! Head to the resource on Azure, locate the APIs menu item on the left panel, and create a new API definition using OpenAPI (the standard specification that Swagger UI is built upon).

    Import API from OpenAPI specification

    You will see a form that allows you to create new resources from OpenAPI specifications.

    Paste the link to the swagger.json file you located before, populate the required fields and, if you want, add a prefix to identify these endpoints: I chose MyBooks.

    Wizard to import APIs from OpenAPI

    You will then see your APIs appear in the panel shown below. It is composed of different parts:

    • The list of services exposed. In the screenshot below, BooksAPI, Echo API, and VideosAPI;
    • The list of endpoints exposed for each service: here, BooksAPI exposes endpoints at /, /echo, and /books;
    • A list of policies that are applied to the inbound requests before hitting the real endpoint;
    • The real endpoint used when calling the facade exposed by APIM;
    • A list of policies applied to the outbound requests after the origin has processed the requests.

    API detail panel

    For now, we will ignore both Inbound and Outbound processing, as they will be the topic of a future article.

    Consuming APIs exposed on the API Gateway

    We’re ready to go! Head back to the Azure API Management service dashboard and locate the URL of the API Gateway under Custom domains > Gateway URL.

    Where to find the Gateway URL

    This will be the root URL that our clients will use.

    We can then access Books API and Videos API both on the Origin and the Gateway (we’re doing it just for demonstrating that things are working; clients will only use the APIs exposed by the API Gateway).

    The Videos API maintains the exact same structure, mapping the endpoints as they are defined in Origin.

    Videos API on Origin and on API Gateway

    On the contrary, to access the Books APIs we have to access the /mybooks path (because we defined it a few steps ago when we imported the BooksAPI from OpenAPI definition: it’s the API Url Suffix field), as shown below:

    Books API on Origin and on API Gateway
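
    To make the mapping concrete, here is a tiny sketch of a client calling the same endpoint both on the Origin and on the Gateway (the hostnames below are invented for illustration; replace them with your actual App Service and APIM URLs):

    using System.Net.Http;

    using var client = new HttpClient();

    // Hypothetical URLs: the same /books endpoint, reachable directly on the origin...
    var fromOrigin = await client.GetStringAsync("https://mybooksapi.azurewebsites.net/books");

    // ...and through the APIM facade, where the MyBooks suffix we defined becomes part of the path
    var fromGateway = await client.GetStringAsync("https://my-apim-instance.azure-api.net/mybooks/books");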

    Further readings

    As usual, a bunch of interesting readings 📚

    In this article, we’ve only scratched the surface of Azure API Management. There’s a lot more – and you can read about it on the Microsoft Docs website:

    🔗 What is Azure API Management? | Microsoft docs

    To integrate Azure APIM, we used two simple .NET 6 Web APIs deployed on Azure. If you wanna know how to set up GitHub Actions to build and deploy .NET APIs, I recently published an article on that topic.

    🔗 How to deploy .NET APIs on Azure using GitHub actions | Code4IT

    Lastly, since we’ve talked about Swagger, here’s an article where I dissected how you can integrate Swagger in .NET Core applications:

    🔗 Understanding Swagger integration in .NET Core | Code4IT

    This article first appeared on Code4IT 🐧

    Wrapping up

    This can be just the beginning of a long journey; APIM allows you to highly customize your API Gateway by defining API access by user role, creating API documentation using custom templates and themes, and a lot of different stuff.

    We will come back to this topic soon.

    Happy coding!

    🐧




  • How to automatically refresh configurations with Azure App Configuration in ASP.NET Core | Code4IT

    How to automatically refresh configurations with Azure App Configuration in ASP.NET Core | Code4IT


    ASP.NET allows you to poll Azure App Configuration to always get the most updated values without restarting your applications. It’s simple, but you have to think thoroughly.


    In a previous article, we learned how to centralize configurations using Azure App Configuration, a service provided by Azure to share configurations in a secure way. Using Azure App Configuration, you’ll be able to store the most critical configurations in a single place and apply them to one or more environments or projects.

    We used a very simple example with a limitation: you have to restart your applications to make changes effective. In fact, ASP.NET connects to Azure App Config, loads the configurations in memory, and serves these configs until the next application restart.

    In this article, we’re gonna learn how to make configurations dynamic: by the end of this article, we will be able to see the changes to our configs reflected in our applications without restarting them.

    Since this one is a kind of improvement of the previous article, you should read it first.

    Let me summarize here the code showcased in the previous article. We have an ASP.NET Core API application whose only purpose is to return the configurations stored in an object, whose shape is this one:

    {
      "MyNiceConfig": {
        "PageSize": 6,
        "Host": {
          "BaseUrl": "https://www.mydummysite.com",
          "Password": "123-go"
        }
      }
    }
    

    In the constructor of the API controller, I injected an IOptions<MyConfig> instance that holds the configuration values currently loaded by the application.

    public ConfigDemoController(IOptions<MyConfig> config)
        => _config = config;
    

    The only HTTP Endpoint is a GET: it just accesses that value and returns it to the client.

    [HttpGet()]
    public IActionResult Get()
    {
        return Ok(_config.Value);
    }
    

    Finally, I created a new instance of Azure App Configuration, and I used a connection string to integrate Azure App Configuration with the existing configurations by calling:

    builder.Configuration.AddAzureAppConfiguration(ConnectionString);
    

    Now we can move on and make configurations dynamic.

    Sentinel values: a guard value to monitor changes in the configurations

    On Azure App Configuration, you have to update the configurations manually, one by one. Unfortunately, there is no way to update them in a single batch: you can import them in bulk, but you have to update them individually.

    Imagine that you have a service that accesses an external API whose BaseUrl and API Key are stored on Az App Configuration. We now need to move to another API: we then have to update both BaseUrl and API Key. The application is running, and we want to update the info about the external API. If we updated the application configurations every time something is updated on Az App Configuration, we would end up with an invalid state – for example, we would have the new BaseUrl and the old API Key.

    Therefore, we have to define a configuration value that acts as a sort of versioning key for the whole list of configurations. In Azure App Configuration’s jargon, it’s called Sentinel.

    A Sentinel is nothing but a version key: it’s a string value used by the application to understand whether it needs to reload the whole list of configurations. Since it’s just a string, you can set any value, as long as it changes over time. My suggestion is to use the UTC timestamp of the moment you updated the values, such as 202306051522. This way, in case of errors, you can tell when any of these values last changed (but not which ones) and, depending on the pricing tier you are using, you can compare the current values with the previous ones.
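
    If you like the timestamp approach, a trivial way to generate such a value (this is just a convenience snippet, not something required by Azure App Configuration) is:

    // Produces something like "202306051522": UTC year, month, day, hour, minute.
    // Paste the output as the new Sentinel value every time you update the configurations.
    string sentinel = DateTime.UtcNow.ToString("yyyyMMddHHmm");
    Console.WriteLine(sentinel);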

    So, head back to the Configuration Explorer page and add a new value: I called it Sentinel.

    Sentinel value on Azure App Configuration

    As I said, you can use any value. For the sake of this article, I’m gonna use a simple number.

    Define how to refresh configurations using ASP.NET Core app startup

    We can finally move to the code!

    If you recall, in the previous article we added a NuGet package, Microsoft.Azure.AppConfiguration.AspNetCore, and then we added Azure App Configuration as a configurations source by calling

    builder.Configuration.AddAzureAppConfiguration(ConnectionString);
    

    That instruction is used to load all the configurations, without managing polling and updates. Therefore, we must remove it.

    Instead of that instruction, add this other one:

    builder.Configuration.AddAzureAppConfiguration(options =>
    {
        options
        .Connect(ConnectionString)
        .Select(KeyFilter.Any, LabelFilter.Null)
        // Configure to reload configuration if the registered sentinel key is modified
        .ConfigureRefresh(refreshOptions =>
                  refreshOptions.Register("Sentinel", label: LabelFilter.Null, refreshAll: true)
            .SetCacheExpiration(TimeSpan.FromSeconds(3))
          );
    });
    

    Let’s deep dive into each part:

    options.Connect(ConnectionString) just tells ASP.NET that the configurations must be loaded from that specific connection string.

    .Select(KeyFilter.Any, LabelFilter.Null) loads all keys that have no Label;

    and, finally, the most important part:

    .ConfigureRefresh(refreshOptions =>
                refreshOptions.Register(key: "Sentinel", label: LabelFilter.Null, refreshAll: true)
          .SetCacheExpiration(TimeSpan.FromSeconds(3))
        );
    

    Here we are specifying that all values must be refreshed (refreshAll: true) when the key named Sentinel (key: "Sentinel") is updated. Then, the retrieved values are cached for 3 seconds (SetCacheExpiration(TimeSpan.FromSeconds(3))).

    Here I used 3 seconds as the refresh time. This means that, if the application is used continuously, it will poll Azure App Configuration every 3 seconds – clearly a bad idea! So, pick the right value depending on how often you expect the configurations to change. The default cache expiration is 30 seconds.

    Notice that the previous instruction adds Azure App Configuration to the Configuration object, and not as a service used by .NET. In fact, the method is builder.Configuration.AddAzureAppConfiguration. We need two more steps.

    First of all, add Azure App Configuration to the IServiceCollection object:

    builder.Services.AddAzureAppConfiguration();
    

    Finally, we have to add it to our existing middlewares by calling

    app.UseAzureAppConfiguration();
    

    The final result is this:

    public static void Main(string[] args)
    {
        var builder = WebApplication.CreateBuilder(args);
    
        const string ConnectionString = "......";
    
        // Load configuration from Azure App Configuration
        builder.Configuration.AddAzureAppConfiguration(options =>
        {
            options.Connect(ConnectionString)
                    .Select(KeyFilter.Any, LabelFilter.Null)
                    // Configure to reload configuration if the registered sentinel key is modified
                    .ConfigureRefresh(refreshOptions =>
                        refreshOptions.Register(key: "Sentinel", label: LabelFilter.Null, refreshAll: true)
                        .SetCacheExpiration(TimeSpan.FromSeconds(3)));
        });
    
        // Add the service to IServiceCollection
        builder.Services.AddAzureAppConfiguration();
    
        builder.Services.AddControllers();
        builder.Services.Configure<MyConfig>(builder.Configuration.GetSection("MyNiceConfig"));
    
        var app = builder.Build();
    
        // Add the middleware
        app.UseAzureAppConfiguration();
    
        app.UseHttpsRedirection();
    
        app.MapControllers();
    
        app.Run();
    }
    

    IOptionsMonitor: accessing and monitoring configuration values

    It’s time to run the project and look at the result: some of the values are coming from Azure App Configuration.

    Default config coming from Azure App Configuration

    Now we can update them: without restarting the application, update the PageSize value, and don’t forget to update the Sentinel too. Call again the endpoint, and… nothing happens! 😯

    This is because in our controller we are using IOptions<T> instead of IOptionsMonitor<T>. As we’ve learned in a previous article, IOptionsMonitor<T> is a singleton instance that always gets the most updated config values. It also emits an event when the configurations have been refreshed.

    So, head back to the ConfigDemoController, and replace the way we retrieve the config:

    [ApiController]
    [Route("[controller]")]
    public class ConfigDemoController : ControllerBase
    {
        private readonly IOptionsMonitor<MyConfig> _config;
    
        public ConfigDemoController(IOptionsMonitor<MyConfig> config)
        {
            _config = config;
            _config.OnChange(Update);
        }
    
        [HttpGet()]
        public IActionResult Get()
        {
            return Ok(_config.CurrentValue);
        }
    
        private void Update(MyConfig arg1, string? arg2)
        {
          Console.WriteLine($"Configs have been updated! PageSize is {arg1.PageSize}, " +
                    $" Password is {arg1.Host.Password}");
        }
    }
    

    When using IOptionsMonitor<T>, you can retrieve the current values of the configuration object by accessing the CurrentValue property. Also, you can define an event listener to be attached to the OnChange event.

    We can finally run the application and update the values on Azure App Configuration.

    Again, update one of the values, update the sentinel, and wait. After 3 seconds, you’ll see a message popping up in the console: it’s the text defined in the Update method.

    Then, call again the application (again, without restarting it), and admire the updated values!

    You can see a live demo here:

    Demo of configurations refreshed dynamically

    As you can see, the first request after updating the Sentinel value still returns the old values. But, in the meantime, the cache has expired, so the next request retrieves the updated values from Azure.

    My 2 cents on timing

    As we’ve learned, the config values are stored in a memory cache with an expiration time. Every time the cache expires, we need to retrieve the configurations from Azure App Configuration again (in particular, by checking if the Sentinel value has been updated in the meantime). Don’t underestimate the cache expiration value, as both short and long durations have pros and cons:

    • a short timespan keeps the values always up-to-date, making your application more reactive to changes. But it also means that you are polling too often the Azure App Configuration endpoints, making your application busier and incurring limitations due to the requests count;
    • a long timespan keeps your application more performant because there are fewer requests to the Configuration endpoints, but it also forces you to have the configurations updated after a while from the update applied on Azure.

    There is also another issue with long timespans: if the same configurations are used by different services, you might end up in a dirty state. Say that you have UserService and PaymentService, and both use some configurations stored on Azure whose caching expiration is 10 minutes. Now, the following actions happen:

    1. UserService starts
    2. PaymentService starts
    3. Someone updates the values on Azure
    4. UserService restarts, while PaymentService doesn’t.

    We will end up in a situation where UserService has the most updated values, while PaymentService doesn’t. There will be a time window (in our example, up to 10 minutes) in which the configurations are misaligned.

    Also, take costs and limitations into consideration: with the Free tier you have 1,000 requests per day, while with the Standard tier you have 30,000 per hour per replica. Using the default cache expiration (30 seconds) in an application with a continuous flow of users means that you are gonna call the endpoint 2,880 times per day (2 requests per minute × 1,440 minutes per day) – way more than the Free tier allows.
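
    If you want to play with the numbers yourself, here’s a back-of-the-envelope sketch (it assumes a single instance that keeps polling around the clock, which is the worst case):

    // Rough estimate of how many requests per day a given cache expiration produces,
    // assuming the application polls Azure App Configuration continuously.
    TimeSpan cacheExpiration = TimeSpan.FromSeconds(30);
    double requestsPerDay = TimeSpan.FromDays(1) / cacheExpiration; // 2880
    Console.WriteLine($"{requestsPerDay} requests per day");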

    So, think thoroughly before choosing an expiration time!

    Further readings

    This article is a continuation of a previous one, and I suggest you read the other one to understand how to set up Azure App Configuration and how to integrate it in an ASP.NET Core API application in case you don’t want to use dynamic configuration.

    🔗 Azure App Configuration and ASP.NET Core API: a smart and secure way to manage configurations | Code4IT

    This article first appeared on Code4IT 🐧

    Also, we learned that using IOptions we are not getting the most updated values: in fact, we need to use IOptionsMonitor. Check out this article to understand the other differences in the IOptions family.

    🔗 Understanding IOptions, IOptionsMonitor, and IOptionsSnapshot in ASP.NET Core | Code4IT

    Finally, I briefly talked about pricing. As of July 2023, there are just 2 pricing tiers, with different limitations.

    🔗 App Configuration pricing | Microsoft Learn

    Wrapping up

    In my opinion, smart configuration handling is essential for the hard times when you have to understand why an error is happening only in a specific environment.

    Centralizing configurations is a good idea, as it allows developers to simulate a whole environment by just changing a few values on the application.

    Making configurations live without restarting your applications manually can be a good idea, but you have to analyze it thoroughly.

    I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛

    Happy coding!

    🐧






  • How to integrate Feature Flags stored on Azure App Configuration in an ASP.NET Core Application | Code4IT

    How to integrate Feature Flags stored on Azure App Configuration in an ASP.NET Core Application | Code4IT


    Learn how to use Feature Flags in ASP.NET Core apps and read values from Azure App Configuration. Understand how to use filters, like the Percentage filter, to control feature activation, and learn how to take full control of the cache expiration of the values.


    Feature Flags let you remotely control the activation of features without code changes. They help you to test, release, and manage features safely and quickly by driving changes using centralized configurations.

    In a previous article, we learned how to integrate Feature Flags in ASP.NET Core applications. Also, a while ago, we learned how to integrate Azure App Configuration in an ASP.NET Core application.

    In this article, we are going to join the two streams in a single article: in fact, we will learn how to manage Feature Flags using Azure App Configuration to centralize our configurations.

    It’s a sort of evolution from the previous article. Instead of changing the static configurations and redeploying the whole application, we are going to move the Feature Flags to Azure so that you can enable or disable those flags in just one click.

    A recap of Feature Flags read from the appsettings file

    Let’s reuse the example shown in the previous article.

    We have an ASP.NET Core application (in that case, we were building a Razor application, but it’s not important for the sake of this article), with some configurations defined in the appsettings file under the FeatureManagement key:

    {
      "FeatureManagement": {
        "Header": true,
        "Footer": true,
        "PrivacyPage": false,
        "ShowPicture": {
          "EnabledFor": [
            {
              "Name": "Percentage",
              "Parameters": { "Value": 60 }
            }
          ]
        }
      }
    }
    

    We have already dived deep into Feature Flags in an ASP.NET Core application in the previous article. However, let me summarize it here.

    First of all, you have to define your flags in the appsettings.json file using the structure we saw before.

    To use Feature Flags in ASP.NET Core you have to install the Microsoft.FeatureManagement.AspNetCore NuGet package.

    Then, you have to tell ASP.NET to use Feature Flags by calling:

    builder.Services.AddFeatureManagement();
    

    Finally, you are able to consume those flags in three ways (the first two are sketched in the snippet after this list):

    • inject the IFeatureManager interface and call IsEnabled or IsEnabledAsync;
    • use the FeatureGate attribute on a Controller class or a Razor model;
    • use the <feature> tag in a Razor page to show or hide a portion of HTML.
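
    Here’s a minimal sketch of the first two approaches, using the flags defined above (the controller name and its routes are invented just for illustration):

    using Microsoft.AspNetCore.Mvc;
    using Microsoft.FeatureManagement;
    using Microsoft.FeatureManagement.Mvc;

    [ApiController]
    [Route("[controller]")]
    public class FlagsDemoController : ControllerBase
    {
        private readonly IFeatureManager _featureManager;

        public FlagsDemoController(IFeatureManager featureManager)
            => _featureManager = featureManager;

        // 1) Ask IFeatureManager for a flag programmatically
        [HttpGet]
        public async Task<IActionResult> Get()
        {
            bool showFooter = await _featureManager.IsEnabledAsync("Footer");
            return Ok(new { showFooter });
        }

        // 2) Gate a whole endpoint behind a flag:
        // when "PrivacyPage" is disabled, this action returns a 404
        [FeatureGate("PrivacyPage")]
        [HttpGet("privacy")]
        public IActionResult Privacy() => Ok("Privacy page is enabled");
    }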

    How to create Feature Flags on Azure App Configuration

    We are ready to move our Feature Flags to Azure App Configuration. Needless to say, you need an Azure subscription 😉

    Log in to the Azure Portal, head to “Create a resource”, and create a new App Configuration:

    Azure App configuration in the Marketplace

    I’m going to reuse the same instance I created in the previous article – you can see the full details in the How to create an Azure App Configuration instance section.

    Now we have to configure the same keys defined in the appsettings file: Header, Footer, and PrivacyPage.

    Open the App Configuration instance and locate the “Feature Manager” menu item in the left panel. This is the central place for creating, removing, and managing your Feature Flags. Here, you can see that I have already added the Header and Footer, and you can see their current state: “Footer” is enabled, while “Header” is not.

    Feature Flags manager dashboard

    How can I add the PrivacyPage flag? It’s elementary: click the “Create” button and fill in the fields.

    You have to define a Name and a Key (they can also be different), and if you want, you can add a Label and a Description. You can also define whether the flag should be active by checking the “Enable feature flag” checkbox.

    Feature Flag definition form

    Read Feature Flags from Azure App Configuration in an ASP.NET Core application

    It’s time to integrate Azure App Configuration with our ASP.NET Core application.

    Before moving to the code, we have to locate the connection string and store it somewhere.

    Head back to the App Configuration resource and locate the “Access keys” menu item under the “Settings” section.

    Access Keys page with connection strings

    From here, copy the connection string (I suggest that you use the Read-only Keys) and store it somewhere.

    Before proceeding, you have to install the Microsoft.Azure.AppConfiguration.AspNetCore NuGet package.

    Now, we can add Azure App Configuration as a source for our configurations by connecting to the connection string and by declaring that we are going to use Feature Flags:

    builder.Configuration.AddAzureAppConfiguration(options =>
        options.Connect(connectionString).UseFeatureFlags()
    );
    

    That’s not enough. We also need to tell ASP.NET that we are going to consume these configurations by registering the related services on the Services collection:

    builder.Services.AddAzureAppConfiguration();
    
    builder.Services.AddFeatureManagement();
    

    Finally, once we have built our application with the usual builder.Build(), we have to add the Azure App Configuration middleware:

    app.UseAzureAppConfiguration();
    

    To try it out, run the application and validate that the flags are being applied. You can then enable or disable those flags on Azure and either restart the application or wait 30 seconds for the flag values to be refreshed, and verify that the changes are reflected in your application.

    Using the Percentage filter on Azure App Configuration

    Suppose you want to enable a functionality only to a percentage of sessions (sessions, not users!). In that case, you can use the Percentage filter.

    The previous article had a specific section dedicated to the PercentageFilter, so you might want to check it out.

    As a recap, we defined the flag as:

    {
      "ShowPicture": {
        "EnabledFor": [
          {
            "Name": "Percentage",
            "Parameters": {
              "Value": 60
            }
          }
        ]
      }
    }
    

    And added the PercentageFilter filter to ASP.NET with:

    builder.Services.AddFeatureManagement()
        .AddFeatureFilter<PercentageFilter>();
    

    Clearly, we can define such flags on Azure as well.

    Head back to the Azure Portal and add a new Feature Flag (you can also add a Feature Filter to any existing flag). Even though the PercentageFilter comes out-of-the-box with the FeatureManagement NuGet package, it is not available as a predefined filter on the Azure portal, so you have to define it as a custom one.

    You have to define the filter with the following values:

    • Filter Type must be “Custom”;
    • Custom filter name must be “Percentage”;
    • You must add a new key, “Value”, and set its value to “60”.

    Custom filter used to create Percentage Filter

    The configuration we just added reflects the JSON value we previously had in the appsettings file: 60% of the requests will activate the flag, while the remaining 40% will not.

    Define the cache expiration interval for Feature Flags

    By default, Feature Flags are stored in an internal cache for 30 seconds.

    Sometimes, it’s not the best choice for your project; you may prefer a longer duration to avoid additional calls to the App Configuration platform; other times, you’d like to have the changes immediately available.

    You can then define the cache expiration interval you need by configuring the options for the Feature Flags:

    builder.Configuration.AddAzureAppConfiguration(options =>
        options.Connect(connectionString).UseFeatureFlags(featureFlagOptions =>
        {
            featureFlagOptions.CacheExpirationInterval = TimeSpan.FromSeconds(10);
        })
    );
    

    This way, Feature Flag values are stored in the internal cache for 10 seconds. Then, when you reload the page, the configurations are reread from Azure App Configuration and the flags are applied with the new values.

    Further readings

    This is the final article of a path I built during these months to explore how to use configurations in ASP.NET Core.

    We started by learning how to set configuration values in an ASP.NET Core application, as explained here:

    🔗 3 (and more) ways to set configuration values in ASP.NET Core

    Then, we learned how to read and use them with the IOptions family:

    🔗 Understanding IOptions, IOptionsMonitor, and IOptionsSnapshot in ASP.NET Core

    From here, we learned how to read the same configurations from Azure App Configuration, to centralize our settings:

    🔗 Azure App Configuration and ASP.NET Core API: a smart and secure way to manage configurations | Code4IT

    Then, we configured our applications to automatically refresh the configurations using a Sentinel value:

    🔗 How to automatically refresh configurations with Azure App Configuration in ASP.NET Core

    Finally, we introduced Feature Flags in our apps:

    🔗 Feature Flags 101: A Guide for ASP.NET Core Developers | Code4IT

    And then we got to this article!

    This article first appeared on Code4IT 🐧

    Wrapping up

    In this article, we have configured an ASP.NET Core application to read the Feature Flags stored on Azure App Configuration.

    Here’s the minimal code you need to add Feature Flags for ASP.NET Core API Controllers:

    var builder = WebApplication.CreateBuilder(args);
    
    string connectionString = "my connection string";
    
    builder.Services.AddControllers();
    
    builder.Configuration.AddAzureAppConfiguration(options =>
        options.Connect(connectionString)
        .UseFeatureFlags(featureFlagOptions =>
            {
                featureFlagOptions.CacheExpirationInterval = TimeSpan.FromSeconds(10);
            }
        )
    );
    
    builder.Services.AddAzureAppConfiguration();
    
    builder.Services.AddFeatureManagement()
        .AddFeatureFilter<PercentageFilter>();
    
    var app = builder.Build();
    
    app.UseRouting();
    app.UseAzureAppConfiguration();
    app.MapControllers();
    app.Run();
    

    I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛

    Happy coding!

    🐧




