Tag: How

  • Build your own Static Code Analysis tool in .NET by knowing how Assembly, Type, MethodInfo, ParameterInfo work. | Code4IT


    Why buy a whole tool when you can build your own? Learn how the Type system works in .NET, and create your own minimal type analyser.


Analysing your code is helpful to get an idea of the overall quality. At the same time, having an automatic tool that identifies specific characteristics or performs some analysis for you can be useful.

Sure, there are many fantastic tools available, but sometimes a simple utility class that you can build as needed and run without setting up a complex infrastructure is enough.

In this article, we are going to see how to navigate assemblies, classes, methods and parameters to perform some custom analysis.

    For this article, my code is structured into 3 Assemblies:

    • CommonClasses, a Class Library that contains some utility classes;
    • NetCoreScripts, a Class Library that contains the code we are going to execute;
    • ScriptsRunner, a Console Application that runs the scripts defined in the NetCoreScripts library.

    The dependencies between the modules are shown below: ScriptsRunner depends on NetCoreScripts, and NetCoreScripts depends on CommonClasses.

    Class library dependencies

    In this article, we are going to write the examples in the NetCoreScripts class library, in a class named AssemblyAnalysis.

    How to load an Assembly in C#, with different methods

    The starting point to analyse an Assembly is, well, to have an Assembly.

    So, in the Scripts Class Library (the middle one), I wrote:

    var assembly = DefineAssembly();
    

    In the DefineAssembly method we can choose the Assembly we are going to analyse.

    Load the Assembly containing a specific class

    The easiest way is to do something like this:

    private static Assembly DefineAssembly()
        => typeof(AssemblyAnalysis).Assembly;
    

    Where AssemblyAnalysis is the class that contains our scripts.

    Similarly, we can get the Assembly info for a class belonging to another Assembly, like this:

    private static Assembly DefineAssembly()
        => typeof(CommonClasses.BaseExecutable).Assembly;
    

    In short, you can access the Assembly info of whichever class you know – if you can reference it directly, of course!

    Load the current, the calling, and the executing Assembly

    The Assembly class provides you with some methods that may look similar, but give you totally different info depending on how your code is structured.

    Remember the ScriptsRunner –> NetCoreScripts –> CommonClasses sequence? To better explain how things work, let’s run the following examples in a method in the CommonClasses class library (the last one in the dependency chain).

    var executing = System.Reflection.Assembly.GetExecutingAssembly();
    var calling = System.Reflection.Assembly.GetCallingAssembly();
    var entry = System.Reflection.Assembly.GetEntryAssembly();
    

    Assembly.GetExecutingAssembly returns the Assembly that contains the actual code instructions (so, in short, the Assembly that actually contains the code). In this case, it’s the CommonClasses Assembly.

    Assembly.GetCallingAssembly returns the caller Assembly, so the one that references the Executing Assembly. In this case, given that the CommonClasses library is referenced only by the NetCoreScripts library, well, we are getting info about the NetCoreScripts class library.

    Assembly.GetEntryAssembly returns the info of the Assembly that is executing the whole application – so, the entry point. In our case, it’s the ScriptsRunner Console Application.

    Deciding which one to choose is crucial, especially when you are going to distribute your libraries, for example, as NuGet packages. For sure, you’ll know the Executing Assembly. Most probably, depending on how the project is structured, you’ll also know the Calling Assembly. But almost certainly you won’t know the Entry Assembly.

| Method name | Meaning | In this example… |
| --- | --- | --- |
| GetExecutingAssembly | The current Assembly | CommonClasses |
| GetCallingAssembly | The caller Assembly | NetCoreScripts |
| GetEntryAssembly | The top-level executor | ScriptsRunner |

    How to retrieve classes of a given .NET Assembly

    Now you have an Assembly to analyse. It’s time to load the classes belonging to your Assembly.

    You can start with assembly.GetTypes(): this method returns all the types (in the form of a Type array) belonging to the Assembly.

    For each Type you can access several properties, such as IsClass, IsPublic, IsAbstract, IsGenericType, IsEnum and so on. The full list of properties of a Type is available 🔗here.

    You may want to analyse public classes: therefore, you can do something like:

    private static List<Type> GetAllPublicTypes(Assembly assembly) => assembly
                .GetTypes()
                .Where(t => t.IsClass && t.IsPublic)
                .ToList();
    

    How to list the Methods belonging to a C# Type

    Given a Type, you can extract the info about all the available methods.

    The Type type contains several methods that can help you find useful information, such as GetConstructors.

    In our case, we are only interested in public methods, declared in that class (and not inherited from a base class):

    private static MethodInfo[] GetPublicMethods(Type type) =>
        type.GetMethods(BindingFlags.Instance | BindingFlags.Static | BindingFlags.Public | BindingFlags.DeclaredOnly);
    

The BindingFlags enum is a 🔗Flagged Enum: its values can be combined with a bitwise OR operation.

| Value | Description | Example |
| --- | --- | --- |
| Public | Includes public members. | public void Print() |
| NonPublic | Includes non-public members (private, protected, etc.). | private void Calculate() |
| Instance | Includes instance (non-static) members. | public void Save() |
| Static | Includes static members. | public static void Log(string msg) |
| FlattenHierarchy | Includes static members up the inheritance chain. | public static void Helper() (exists in the base class) |
| DeclaredOnly | Only members declared in the given type, not inherited. | public void MyTypeSpecific() (does not exist in the base class) |
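
As a minimal sketch, the flags in the table above can be combined with a bitwise OR and tested with HasFlag:

using System;
using System.Reflection;

// Ask for public members, both instance and static, declared directly
// on the type (inherited members are excluded by DeclaredOnly).
BindingFlags flags = BindingFlags.Public
                   | BindingFlags.Instance
                   | BindingFlags.Static
                   | BindingFlags.DeclaredOnly;

// HasFlag checks whether a combination contains a given value.
Console.WriteLine(flags.HasFlag(BindingFlags.Public));    // True
Console.WriteLine(flags.HasFlag(BindingFlags.NonPublic)); // False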

    How to get the parameters of a MethodInfo object

    The final step is to retrieve the list of parameters from a MethodInfo object.

This step is pretty easy: just call the GetParameters() method:

    public ParameterInfo[] GetParameters(MethodInfo method) => method.GetParameters();
    

    A ParameterInfo object contains several pieces of information, such as the name, the type and the default value of the parameter.

    Let’s consider this silly method:

    public static void RandomCity(string[] cities, string fallback = "Rome")
    { }
    

    If we have a look at its parameters, we will find the following values:

    Properties of a ParameterInfo object
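
For instance, here's a minimal sketch of reading those values via code, assuming RandomCity is declared in the AssemblyAnalysis class we've been using so far:

MethodInfo method = typeof(AssemblyAnalysis).GetMethod(nameof(RandomCity))!;

foreach (ParameterInfo parameter in method.GetParameters())
{
    // For "cities": Name = "cities", ParameterType = System.String[], no default value.
    // For "fallback": Name = "fallback", ParameterType = System.String, DefaultValue = "Rome".
    Console.WriteLine($"{parameter.Name} ({parameter.ParameterType})");

    if (parameter.HasDefaultValue)
        Console.WriteLine($"  default: {parameter.DefaultValue}");
}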

    Bonus tip: Auto-properties act as Methods

    Let’s focus a bit more on the properties of a class.

    Consider this class:

    public class User
    {
      public string Name { get; set; }
    }
    

    There are no methods; only one public property.

    But hey! It turns out that properties, under the hood, are treated as methods. In fact, you can find two methods, named get_Name and set_Name, that act as an access point to the Name property.

    Automatic Getter and Setter of the Name property in C#
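
A minimal sketch to verify it, listing only the methods declared on User:

foreach (MethodInfo method in typeof(User).GetMethods(
    BindingFlags.Instance | BindingFlags.Public | BindingFlags.DeclaredOnly))
{
    // Prints get_Name and set_Name: the compiler-generated accessors of the Name property.
    // IsSpecialName is true for these accessor methods.
    Console.WriteLine($"{method.Name} (IsSpecialName: {method.IsSpecialName})");
}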

    Further readings

    Do you remember that exceptions are, in the end, Types?

    And that, in the catch block, you can filter for exceptions of a specific type or with a specific condition?

    If not, check this article out!

    🔗 Exception handling with WHEN clause | Code4IT

    This article first appeared on Code4IT 🐧

    Wrapping up (plus the full example)

    From here, you can use all this info to build whatever you want. Personally, I used it to analyse my current project, checking how many methods accept more than N parameters as input, and which classes have the highest number of public methods.

    In short, an example of a simple code analyser can be this one:

    public void Execute()
    {
        var assembly = DefineAssembly();
        var paramsInfo = AnalyzeAssembly(assembly);
    
        AnalyzeParameters(paramsInfo);
    }
    
    private static Assembly DefineAssembly()
        => Assembly.GetExecutingAssembly();
    
    public static List<ParamsMethodInfo> AnalyzeAssembly(Assembly assembly)
    {
        List<ParamsMethodInfo> all = new List<ParamsMethodInfo>();
        var types = GetAllPublicTypes(assembly);
    
        foreach (var type in types)
        {
            var publicMethods = GetPublicMethods(type);
    
            foreach (var method in publicMethods)
            {
                var parameters = method.GetParameters();
                if (parameters.Length > 0)
                {
                    var f = parameters.First();
                }
    
                all.Add(new ParamsMethodInfo(
                    assembly.GetName().Name,
                    type.Name,
                    method
                    ));
            }
        }
        return all;
    }
    
    private static MethodInfo[] GetPublicMethods(Type type) =>
        type.GetMethods(BindingFlags.Instance | BindingFlags.Static | BindingFlags.Public | BindingFlags.DeclaredOnly);
    
    private static List<Type> GetAllPublicTypes(Assembly assembly) => assembly.GetTypes()
                .Where(t => t.IsClass && t.IsPublic)
                .ToList();
    
    public class ParamsMethodInfo(string AssemblyName, string ClassName, MethodInfo Method)
    {
        public string MethodName => Method.Name;
        public ParameterInfo[] Parameters => Method.GetParameters();
    }
    

    And then, in the AnalyzeParameters, you can add your own logic.
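
As a hint of what that logic could look like, here's a minimal sketch that flags methods accepting more than a given number of parameters (the threshold and the output format are arbitrary choices):

private static void AnalyzeParameters(List<ParamsMethodInfo> paramsInfo, int maxParameters = 3)
{
    // Report the methods that accept more parameters than the chosen threshold.
    var offenders = paramsInfo.Where(m => m.Parameters.Length > maxParameters);

    foreach (var method in offenders)
    {
        Console.WriteLine($"{method.MethodName} has {method.Parameters.Length} parameters");
    }
}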

    As you can see, you don’t need to adopt complex tools to perform operations like this: just knowing that you can access the static details of each class and method can be enough (of course, it depends on the use!).

    I hope you enjoyed this article! Let’s keep in touch on LinkedIn, Twitter or BlueSky! 🤜🤛

    Happy coding!

    🐧






  • How to customize Conventional Commits in a .NET application using GitHooks | Code4IT


    Using Conventional Commits you can define a set of rules useful for writing meaningful commit messages. Using NPM. Yes, in a dotNET application!


Setting team conventions is a crucial step to prepare the project to live long and prosper 🖖

A good way to set some clarity is by enforcing rules on Git commit messages: you can require devs to specify the reason behind their code changes, so that you can understand the history and the reason for each commit. Also, if you have well-crafted commit messages, Pull Requests become easier to understand, leading to better code.

    Conventional Commits help you set such rules, and help you level up your commit history. In this article, we will learn how to add Conventional Commits in a .NET application.

    Conventional Commits

    Conventional Commits are a set of rules that help you write commit messages using a format that has multiple purposes:

    • they help developers understand the history of a git branch;
    • they help PR reviewers focus on the Pull Request by understanding the changes proposed by the developer;
    • using automated tools, they help versioning the application – this is useful when using Semantic Versioning;
    • they allow you to create automated Changelog files.

    So, what does an average Conventional Commit look like?

    There’s not just one way to specify such formats.

    For example, you can specify that you’ve added a new feature (feat) to your APIs and describe it shortly:

    feat(api): send an email to the customer
    

    Or you can explain that you’ve fixed a bug (using fix) and add a full description of the scope of the commit.

    fix: prevent racing condition
    
    Introduce a request id and a reference to latest request. Dismiss
    incoming responses other than from latest request.
    

    There are several types of commits that you can support, such as:

    • feat, used when you add a new feature to the application;
    • fix, when you fix a bug;
    • docs, used to add or improve documentation to the project;
    • refactor, used – well – after some refactoring;
• test, used when adding tests or fixing broken ones.

All of this prevents developers from writing commit messages such as “something”, “fixed bug”, “some stuff”.

    Some Changes

    So, now, it’s time to include Conventional Commits in our .NET applications.

    What is our goal?

    For the sake of this article, I’m going to add Conventional Commits in a .NET 7 API project. The same approach works for all the other types of .NET projects: as long as you have a Solution to work with, I’ve got you covered.

    Well, actually, the following approach can be used by every project, not only those based on .NET: the reason I wrote this article is that many dotnet developers are not confident in using and configuring NPM packages, so my personal goal with this article is to give you the basics of such tools and configurations.

    For the sake of this article, I’m going to explain how to add Conventional Commits with a custom format.

    Say that you want to associate each commit to a Jira task. As you may know, Jira tasks have an ID composed of a project prefix and a numeric Id. So, for a project named FOO, you can have a task with Id FOO-123.

    The goal of this article is, then, to force developers to create Commit messages such as

    feat/FOO-123: commit short description
    

    or, if you want to add a full description of the commit,

    feat/FOO-123: commit short description
    
    Here we can have the full description of the task.
    And it can also be on multiple lines.
    

    We are going to work at Solution level; you don’t even need an IDE: just Notepad and a Terminal are fine. Before continuing, open your solution folder and a Console pointing to the same folder.

    Install NPM in your folder

    Yes, even if the main application is built with .NET, we are gonna need some NPM packages to set up our Conventional Commits.

    First things first: head to the Command Line and run
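
npm init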

    After specifying some configurations (Package name? Licence? Author?), you will have a brand new package.json file.

    Now we can move on and add a GIT Hook.

    Husky: integrate GIT Hooks to improve commit messages

To use Conventional Commits we have to “intercept” our Git actions: we will need to run a specific tool right after a commit message has been written, validate it, and, in case it does not follow the rules we’ve set, abort the operation.

    We will use Husky 🔗: it’s a facility package that allows us to do stuff with our commit messages and, in general, integrate work with Git Hooks.

    Head to the terminal, and install Husky by running

    npm install husky --save-dev
    

    This command will add a dependency to Husky, as you can see from the new item listed in the package.json file:

    "devDependencies": {
        "husky": "^8.0.3"
    }
    

    Finally, to enable Git Hooks, we have to run

    npm pkg set scripts.prepare="husky install"
    

    and notice the new section in the package.json.

    "scripts": {
        "prepare": "husky install"
    },
    

    Even with just these simple steps, we can see a first result: if you run git commit you will see a text editor open. Here you can write your commit message.

    Git commit message editor

    Save and close the file. The commit message has been applied, as you can see by running git log --oneline.

    CommitLint: a package to validate Commit messages

    We need to install and configure CommitLint, the NPM package that does the dirty job.

    On the same terminal as before, run

    npm install --save-dev @commitlint/config-conventional @commitlint/cli
    

to install both @commitlint/config-conventional, which adds the generic functionality, and @commitlint/cli, which allows us to run the scripts via CLI.

    You will see both packages listed in your package.json file:

    "devDependencies": {
        "@commitlint/cli": "^17.4.2",
        "@commitlint/config-conventional": "^17.4.2",
        "husky": "^8.0.3"
    }
    

    Next step: scaffold the file that handles the configurations on how we want our Commit Messages to be structured.

    On the root, create a brand new file, commitlint.config.js, and paste this snippet:

    module.exports = {
      extends: ["@commitlint/config-conventional"],
    }
    

    This snippet tells Commitlint to use the default conventions, such as feat(api): send an email.

    To test the default rules without issuing any real commit, we have to install the previous packages globally, so that they can be accessed outside the scope of the git hooks:

    npm install -g @commitlint/cli @commitlint/config-conventional
    

    and, in a console, we can run

    echo 'foo: a message with wrong format' | commitlint
    

    and see the error messages

    Testing commitlint with errors

    At this point, we still don’t have CommitLint ready to validate our commit messages. In fact, if you try to commit your changes with an invalid message, you will see that the message passes the checks (because there are no checks!), and your changes get committed.

    We need to do some more steps.

    First of all, we have to create a folder named .husky that will be used by Husky to understand which commands are supported.

    Notice: you have to keep the dot at the beginning of the folder name: it’s .husky, not husky.

    Then we need to add a new file within that folder to tell Husky that it needs to run CommitLint.

    npx husky add .husky/commit-msg  'npx --no -- commitlint --edit ${1}'
    

    We’re almost ready: everything is set, but we need to activate the functionality. So you just have to run
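
npx husky install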

    to see it working:

    CommitLint correctly validates the commit message

    Commitlint.config.js: defining explicit rules on Git Messages

    Now, remember that we want to enforce certain rules on the commit message.

    We don’t want them to be like

    feat(api): send an email to the customer when a product is shipped
    

    but rather like

    feat/FOO-123: commit short description
    
    Here we can have the full description of the task.
    And it can also be on multiple lines.
    

    This means that we have to configure the commitlint.config.js file to override default values.

    Let’s have a look at a valid Commitlint file:

    module.exports = {
      extends: ["./node_modules/@commitlint/config-conventional"],
      parserPreset: {
        parserOpts: {
          headerPattern: /^(\w*)\/FOO-(\w*): (.*)$/,
          headerCorrespondence: ["type", "scope", "subject"],
        },
      },
      rules: {
        "type-enum": [2, "always", ["feat", "fix", "hot", "chore"]],
        "header-min-length": [2, "always", 10],
        "header-max-length": [2, "always", 50],
        "body-max-line-length": [2, "always", 72],
        "subject-case": [
          2,
          "never",
          ["sentence-case", "start-case", "pascal-case", "upper-case"],
        ],
      },
    }
    

    Time to deep dive into those sections:

    The ParserOpts section: define how CommitList should parse text

    The first part tells the parser how to parse the header message:

    parserOpts: {
        headerPattern: /^(\w*)\/FOO-(\w*): (.*)$/,
        headerCorrespondence: ["type", "scope", "subject"],
    },
    

It’s a regular expression, where every capturing group has its correspondence in the headerCorrespondence array.

    So, in the message hello/FOO-123: my tiny message, we will have type=hello, scope=123, subject=my tiny message.

    Rules: define specific rules for each message section

    The rules section defines the rules to be applied to each part of the message structure.

    rules:
    {
        "type-enum": [2, "always", ["feat", "fix", "hot", "chore"]],
        "header-min-length": [2, "always", 10],
        "header-max-length": [2, "always", 50],
        "body-max-line-length": [2, "always", 72],
        "subject-case": [
            2,
            "never",
            ["sentence-case", "start-case", "pascal-case", "upper-case"],
        ],
    },
    

    The first value is a number that expresses the severity of the rule:

    • 0: the rule is disabled;
    • 1: show a warning;
    • 2: it’s an error.

    The second value defines if the rule must be applied (using always), or if it must be reversed (using never).

The third value provides generic arguments for the related rule. For example, "header-max-length": [2, "always", 50] states that the header must always be at most 50 characters long.

    You can read more about each and every configuration on the official documentation 🔗.

    Setting the commit structure using .gitmessage

    Now that everything is set, we can test it.

    But not before helping devs with a simple trick! As you remember, when you run git commit without specifying the message, an editor appears with some hints about the structure of the commit message.

    Default commit editor

    You can set your own text with hints about the structure of the messages.

    You just need to create a file named .gitmessage and put some text in it, such as:

    # <type>/FOO-<jira-ticket-id>: <title>
    # YOU CAN WRITE WHATEVER YOU WANT HERE
    # allowed types: feat | fix | hot | chore
    # Example:
    #
    # feat/FOO-01: first commit
    #
    # No more than 50 chars. #### 50 chars is here:  #
    
    # Remember blank line between title and body.
    
    # Body: Explain *what* and *why* (not *how*)
    # Wrap at 72 chars. ################################## which is here:  #
    #
    

    Now, we have to tell Git to use that file as a template:

    git config commit.template ./.gitmessage
    

    and.. TA-DAH! Here’s your message template!

    Customized message template

    Putting all together

    Finally, we have everything in place: git hooks, commit template, and template hints.

If we run git commit, we will see the editor open with the template we’ve defined before. Now, type A message with wrong format, save, close the editor, and you’ll see that the commit is aborted.

    Commit message with wrong format gets rejected

Now run git commit again: you’ll see the editor once more; type feat/FOO-123: a valid message, and you’ll see it working.

    Further readings

    Conventional Commits is a project that lists a set of specifications for writing such good messages. You can read more here:

    🔗 Conventional Commits

    As we saw before, there are a lot of configurations that you can set for your commits. You can see the full list here:

    🔗 CommitLint rules

    This article first appeared on Code4IT 🐧

    This new kind of commit message works well with Semantic Versioning, which can be useful to publish package versions with a meaningful version number, such as 2.0.1:
    🔗 Semantic Versioning

    And, to close the loop, Semantic Versioning can be easily integrated with CI pipelines. If you use .NET APIs and want to deploy your APIs to Azure using GitHub Actions, you can start from this article and add SemVer:
    🔗 How to deploy .NET APIs on Azure using GitHub actions

    Wrapping up

In this article, we’ve learned what Conventional Commits are, how to add them using Husky and NPM, and how to configure our folder to use such tools.

The steps we’ve seen before work for every type of application, even those not related to dotnet.

    So, to recap everything, we have to:

    1. Install NPM: npm init;
    2. Install Husky: npm install husky --save-dev;
    3. Enable Husky: npm pkg set scripts.prepare="husky install";
    4. Install CommitLint: npm install --save-dev @commitlint/config-conventional @commitlint/cli;
5. Create the commitlint.config.js file: module.exports = { extends: ["@commitlint/config-conventional"] };
    6. Create the Husky folder: mkdir .husky;
    7. Link Husky and CommitLint: npx husky add .husky/commit-msg 'npx --no -- commitlint --edit ${1}';
    8. Activate the whole functionality: npx husky install;

    Then, you can customize the commitlint.config.js file and, if you want, create a better .gitmessage file.

    I hope you enjoyed this article! Let’s keep in touch on Twitter or on LinkedIn, if you want! 🤜🤛

    Happy coding!

    🐧






  • How to download an online file and store it on file system with C# | Code4IT


    Downloading a file from a remote resource seems an easy task: download the byte stream and copy it to a local file. Beware of edge cases!


    Downloading files from an online source and saving them on the local machine seems an easy task.

    And guess what? It is!

    In this article, we will learn how to download an online file, perform some operations on it – such as checking its file extension – and store it in a local folder. We will also learn how to deal with edge cases: what if the file does not exist? Can we overwrite existing files?

    How to download a file stream from an online resource using HttpClient

Ok, this is easy. If you have the file URL, you can just download it using HttpClient.

    HttpClient httpClient = new HttpClient();
    Stream fileStream = await httpClient.GetStreamAsync(fileUrl);
    

Instantiating HttpClient directly can cause some trouble, especially when you create many instances. As a matter of fact, using HttpClientFactory is preferred, as we’ve already explained in a previous article.

    But, ok, it looks easy – way too easy! There are two more parts to consider.

    How to handle errors while downloading a stream of data

    You know, shit happens!

    There are at least 2 cases that stop you from downloading a file: the file does not exist or the file requires authentication to be accessed.

In both cases, an HttpRequestException is thrown, with the following stack trace:

    at System.Net.Http.HttpResponseMessage.EnsureSuccessStatusCode()
    at System.Net.Http.HttpClient.GetStreamAsyncCore(HttpRequestMessage request, CancellationToken cancellationToken)
    

    As you can see, we are implicitly calling EnsureSuccessStatusCode while getting the stream of data.

    You can tell the consumer that we were not able to download the content in two ways: throw a custom exception or return Stream.Null. We will use Stream.Null for the sake of this article.

    Note: always throw custom exceptions and add context to them: this way, you’ll add more useful info to consumers and logs, and you can hide implementation details.
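
As a minimal sketch of that advice (the exception name and message here are arbitrary):

public class FileDownloadException : Exception
{
    public string FileUrl { get; }

    public FileDownloadException(string fileUrl, Exception innerException)
        : base($"Unable to download the file at {fileUrl}", innerException)
    {
        // Keep the URL as context and wrap the original exception, so consumers
        // and logs get useful info without exposing HttpClient implementation details.
        FileUrl = fileUrl;
    }
}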

    So, let me refactor the part that downloads the file stream and put it in a standalone method:

    async Task<Stream> GetFileStream(string fileUrl)
    {
        HttpClient httpClient = new HttpClient();
        try
        {
            Stream fileStream = await httpClient.GetStreamAsync(fileUrl);
            return fileStream;
        }
        catch (Exception ex)
        {
            return Stream.Null;
        }
    }
    

    so that we can use Stream.Null to check for the existence of the stream.

    How to store a file in your local machine

    Now we have our stream of data. We need to store it somewhere.

    We will need to copy our input stream to a FileStream object, placed within a using block.

    using (FileStream outputFileStream = new FileStream(path, FileMode.Create))
    {
        await fileStream.CopyToAsync(outputFileStream);
    }
    

    Possible errors and considerations

When creating the FileStream instance, we have to pass to the constructor both the full path of the file (including the file name) and FileMode.Create, which tells the stream what type of operations should be supported.

    FileMode is an enum coming from the System.IO namespace, and has different values. Each value fits best for some use cases.

    public enum FileMode
    {
        CreateNew = 1,
        Create,
        Open,
        OpenOrCreate,
        Truncate,
        Append
    }
    

    Again, there are some edge cases that we have to consider:

• the destination folder does not exist: you will get a DirectoryNotFoundException. You can easily fix it by calling Directory.CreateDirectory to generate the whole hierarchy of folders defined in the path;
• the destination file already exists: depending on the value of FileMode, you will see a different behavior. FileMode.Create overwrites the file, while FileMode.CreateNew throws an IOException if the file already exists.

    Full Example

    It’s time to put the pieces together:

    async Task DownloadAndSave(string sourceFile, string destinationFolder, string destinationFileName)
    {
        Stream fileStream = await GetFileStream(sourceFile);
    
        if (fileStream != Stream.Null)
        {
            await SaveStream(fileStream, destinationFolder, destinationFileName);
        }
    }
    
    async Task<Stream> GetFileStream(string fileUrl)
    {
        HttpClient httpClient = new HttpClient();
        try
        {
            Stream fileStream = await httpClient.GetStreamAsync(fileUrl);
            return fileStream;
        }
        catch (Exception ex)
        {
            return Stream.Null;
        }
    }
    
    async Task SaveStream(Stream fileStream, string destinationFolder, string destinationFileName)
    {
        if (!Directory.Exists(destinationFolder))
            Directory.CreateDirectory(destinationFolder);
    
        string path = Path.Combine(destinationFolder, destinationFileName);
    
        using (FileStream outputFileStream = new FileStream(path, FileMode.CreateNew))
        {
            await fileStream.CopyToAsync(outputFileStream);
        }
    }
    

    Bonus tips: how to deal with file names and extensions

    You have the file URL, and you want to get its extension and its plain file name.

    You can use some methods from the Path class:

    string image = "https://website.come/csharptips/format-interpolated-strings/featuredimage.png";
    Path.GetExtension(image); // .png
    Path.GetFileNameWithoutExtension(image); // featuredimage
    

But not every image has a file extension. For example, Twitter cover images have this format: https://pbs.twimg.com/profile_banners/1164441929679065088/1668758793/1080x360

    string image = "https://pbs.twimg.com/profile_banners/1164441929679065088/1668758793/1080x360";
    Path.GetExtension(image); // [empty string]
    Path.GetFileNameWithoutExtension(image); // 1080x360
    
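If you need an extension anyway, a minimal sketch of a fallback could be something like this (the default extension is an arbitrary choice):

static string GetExtensionOrDefault(string fileUrl, string fallbackExtension = ".png")
{
    // Path.GetExtension returns an empty string when the URL has no extension,
    // so we fall back to a default one.
    string extension = Path.GetExtension(fileUrl);
    return string.IsNullOrEmpty(extension) ? fallbackExtension : extension;
}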

    Further readings

    As I said, you should not instantiate a new HttpClient() every time. You should use HttpClientFactory instead.

    If you want to know more details, I’ve got an article for you:

    🔗 C# Tip: use IHttpClientFactory to generate HttpClient instances | Code4IT

    This article first appeared on Code4IT 🐧

    Wrapping up

    This was a quick article, quite easy to understand – I hope!

    My main point here is that not everything is always as easy as it seems – you should always consider edge cases!

    I hope you enjoyed this article! Let’s keep in touch on Twitter or on LinkedIn, if you want! 🤜🤛

    Happy coding!

    🐧






  • Beyond the Corporate Mold: How 21 TSI Sets the Future of Sports in Motion



    21 TSI isn’t your typical sports holding company. Overseeing a portfolio of brands in the sports equipment space, the team set out to break from the mold of the standard corporate website. Instead, they envisioned a digital experience that would reflect their DNA—where innovation, design, and technology converge into a rich, immersive journey.

    The result is a site that goes beyond static content, inviting users to explore through motion, interactivity, and meticulously crafted visuals. Developed through a close collaboration between type8 Studio and DEPARTMENT Maison de Création, the project pushes creative and technical boundaries to deliver a seamless, engaging experience.

    Concept & Art Direction

    The creative direction led by Paul Barbin played a crucial role in shaping the website’s identity. The design embraces a minimalist yet bold aesthetic—strictly monochromatic, anchored by a precise and structured typographic system. The layout is intentionally clean, but the experience stays dynamic thanks to well-orchestrated WebGL animations and subtle interactions.

    Grid & Style

    The definition of the grid played a fundamental role in structuring and clarifying the brand’s message. More than just a layout tool, the grid became a strategic framework—guiding content organization, enhancing readability, and ensuring visual consistency across all touchpoints.

    We chose an approach inspired by the Swiss style, also known as the International Typographic Style, celebrated for its clarity, precision, and restraint. This choice reflects our commitment to clear, direct, and functional communication, with a strong focus on user experience. The grid allows each message to be delivered with intention, striking a subtle balance between aesthetics and efficiency.

    A unique aspect of the project was the integration of AI-generated imagery. These visuals were thoughtfully curated and refined to align with the brand’s futuristic and enigmatic identity, further reinforcing the immersive nature of the website.

    Interaction & Motion Design

    The experience of 21 TSI is deeply rooted in movement. The site feels alive—constantly shifting and morphing in response to user interactions. Every detail works together to evoke a sense of fluidity:

    • WebGL animations add depth and dimension, making the site feel tactile and immersive.
    • Morphing transitions enable smooth navigation between sections, avoiding abrupt visual breaks.
    • Cursor distortion effects introduce a subtle layer of interactivity, letting users influence their journey through motion.
    • Scroll-based animations strike a careful balance between engagement and clarity, ensuring motion enhances the experience without overwhelming it.

    This dynamic approach creates a browsing experience that feels both organic and responsive—keeping users engaged without ever overwhelming them.

    Technical Implementation & Motion Design

    For this project, we chose a technology stack designed to deliver high performance and smooth interactions, all while maintaining the flexibility needed for creative exploration:

    • OGL: A lightweight alternative to Three.js, used for WebGL-powered animations and visual effects.
    • Anime.js: Handles motion design elements and precise animation timing.
    • Locomotive Scroll: Enables smooth, controlled scroll behavior throughout the site.
    • Eleventy (11ty): A static site generator that ensures fast load times and efficient content management.
    • Netlify: Provides seamless deployment and version control, keeping the development workflow agile.

    One of the key technical challenges was optimizing performance across devices while preserving the same fluid motion experience. Carefully balancing GPU-intensive WebGL elements with lightweight animations made seamless performance possible.

    Challenges & Solutions

    One of the primary challenges was ensuring that the high level of interactivity didn’t compromise usability. The team worked extensively to refine transitions so they felt natural, while keeping navigation intuitive. Balancing visual complexity with performance was equally critical—avoiding unnecessary elements while preserving a rich, engaging experience.

    Another challenge was the use of AI-generated visuals. While they introduced unique artistic possibilities, these assets required careful curation and refinement to align with the creative vision. Ensuring coherence between the AI-generated content and the designed elements was a meticulous process.

    Conclusion

    The 21 TSI website is a deep exploration of digital storytelling through design and interactivity. It captures the intersection of technology and aesthetics, offering an experience that goes well beyond a traditional corporate presence.

    The project was recognized with multiple awards, including Website of the Day on CSS Design Awards, FWA of the Day, and Awwwards, reinforcing its impact in the digital design space.

    This collaboration between type8 Studio and Paul Barbin of DEPARTMENT Maison de Création showcases how thoughtful design, innovative technology, and a strong artistic vision can come together to craft a truly immersive web experience.




  • How to add Dependency Injection, Configurations, and Logging in a .NET 7 Console Application | Code4IT


    By default, you cannot use Dependency Injection, custom logging, and configurations from settings in a Console Application. Unless you create a custom Host!


    Sometimes, you just want to create a console application to run a complex script. Just because it is a “simple” console application, it doesn’t mean that you should not use best practices, such as using Dependency Injection.

    Also, you might want to test the code: Dependency Injection allows you to test the behavior of a class without having a strict dependency on the referenced concrete classes: you can use stubs and mocks, instead.

    In this article, we’re going to learn how to add Dependency Injection in a .NET 7 console application. The same approach can be used for other versions of .NET. We will also add logging, using Serilog, and configurations coming from an appsettings.json file.

    We’re going to start small, with the basic parts, and gradually move on to more complex scenarios. We’re gonna create a simple, silly console application: we will inject a bunch of services, and print a message on the console.

    We have a root class:

    public class NumberWorker
    {
        private readonly INumberService _service;
    
        public NumberWorker(INumberService service) => _service = service;
    
        public void PrintNumber()
        {
            var number = _service.GetPositiveNumber();
            Console.WriteLine($"My wonderful number is {number}");
        }
    }
    

    that injects an INumberService, implemented by NumberService:

    public interface INumberService
    {
        int GetPositiveNumber();
    }
    
    public class NumberService : INumberService
    {
        private readonly INumberRepository _repo;
    
        public NumberService(INumberRepository repo) => _repo = repo;
    
        public int GetPositiveNumber()
        {
            int number = _repo.GetNumber();
            return Math.Abs(number);
        }
    }
    

    which, in turn, uses an INumberRepository implemented by NumberRepository:

    public interface INumberRepository
    {
        int GetNumber();
    }
    
    public class NumberRepository : INumberRepository
    {
        public int GetNumber()
        {
            return -42;
        }
    }
    

    The console application will create a new instance of NumberWorker and call the PrintNumber method.

    Now, we have to build the dependency tree and inject such services.

    How to create an IHost to use a host for a Console Application

    The first step to take is to install some NuGet packages that will allow us to add a custom IHost container so that we can add Dependency Injection and all the customization we usually add in projects that have a StartUp (or a Program) class, such as .NET APIs.

We need to install 2 NuGet packages, Microsoft.Extensions.Hosting.Abstractions and Microsoft.Extensions.Hosting, which will be used to create a new IHost that builds the dependency tree.

    By navigating your csproj file, you should be able to see something like this:

    <ItemGroup>
        <PackageReference Include="Microsoft.Extensions.Hosting" Version="7.0.1" />
        <PackageReference Include="Microsoft.Extensions.Hosting.Abstractions" Version="7.0.0" />
    </ItemGroup>
    

    Now we are ready to go! First, add the following using statements:

    using Microsoft.Extensions.DependencyInjection;
    using Microsoft.Extensions.Hosting;
    

    and then, within the Program class, add this method:

private static IHost CreateHost() =>
  Host.CreateDefaultBuilder()
      .ConfigureServices((context, services) =>
      {
          services.AddSingleton<INumberRepository, NumberRepository>();
          services.AddSingleton<INumberService, NumberService>();
      })
      .Build();
    

    Host.CreateDefaultBuilder() creates the default IHostBuilder – similar to the IWebHostBuilder, but without any reference to web components.

Then we add all the dependencies, using services.AddSingleton<T, K>. Notice that it’s not necessary to add services.AddSingleton<NumberWorker>: when we create the concrete instance, the dependency tree will be resolved without registering the root itself.

    Finally, once we have everything in place, we call Build() to create a new instance of IHost.

    Now, we just have to run it!

In the Main method, create the IHost instance by calling CreateHost(). Then, by using the ActivatorUtilities class (coming from the Microsoft.Extensions.DependencyInjection namespace), create a new instance of NumberWorker, so that you can call PrintNumber():

    private static void Main(string[] args)
    {
      IHost host = CreateHost();
      NumberWorker worker = ActivatorUtilities.CreateInstance<NumberWorker>(host.Services);
      worker.PrintNumber();
    }
    

    Now you are ready to run the application, and see the message on the console:

    Basic result on Console

    Read configurations from appsettings.json for a Console Library

    We want to make our system configurable and place our configurations in an appsettings.json file.

    As we saw in a recent article 🔗, we can use IOptions<T> to inject configurations in the constructor. For the sake of this article, I’m gonna use a POCO class, NumberConfig, that is mapped to a configuration section and injected into the classes.

    public class NumberConfig
    {
        public int DefaultNumber { get; set; }
    }
    

    Now we need to manually create an appsettings.json file within the project folder, and add a new section that will hold the values of the configuration:

    {
      "Number": {
        "DefaultNumber": -899
      }
    }
    

    and now we can add the configuration binding in our CreateHost() method, within the ConfigureServices section:

    services.Configure<NumberConfig>(context.Configuration.GetSection("Number"));
    

    Finally, we can update the NumberRepository to accept the configurations in input and use them to return the value:

    public class NumberRepository : INumberRepository
    {
        private readonly NumberConfig _config;
    
        public NumberRepository(IOptions<NumberConfig> options) => _config = options.Value;
    
        public int GetNumber() => _config.DefaultNumber;
    }
    

    Run the project to admire the result, and… BOOM! It will not work! You should see the message “My wonderful number is 0”, even though the number we set on the config file is -899.

    This happens because we must include the appsettings.json file in the result of the compilation. Right-click on that file, select the Properties menu, and set the “Copy to Output Directory” to “Copy always”:

    Copy always the appsettings file to the Output Directory

    Now, build and run the project, and you’ll see the correct message: “My wonderful number is 899”.

Clearly, the same values can be accessed via IConfiguration.
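
For instance, a minimal sketch of reading the same value straight from the configuration object, inside the ConfigureServices callback (it requires the Microsoft.Extensions.Configuration using):

// context is the HostBuilderContext passed to ConfigureServices.
int defaultNumber = context.Configuration.GetValue<int>("Number:DefaultNumber"); // -899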

    Add Serilog logging to log on Console and File

    Finally, we can add Serilog logs to our console applications – as well as define Sinks.

    To add Serilog, you first have to install these NuGet packages:

    • Serilog.Extensions.Hosting and Serilog.Formatting.Compact to add the basics of Serilog;
    • Serilog.Settings.Configuration to read logging configurations from settings (if needed);
    • Serilog.Sinks.Console and Serilog.Sinks.File to add the Console and the File System as Sinks.

    Let’s get back to the CreateHost() method, and add a new section right after ConfigureServices:

    .UseSerilog((context, services, configuration) => configuration
        .ReadFrom.Configuration(context.Configuration)
        .ReadFrom.Services(services)
        .Enrich.FromLogContext()
        .WriteTo.Console()
        .WriteTo.File($"report-{DateTimeOffset.UtcNow.ToString("yyyy-MM-dd-HH-mm-ss")}.txt", restrictedToMinimumLevel: LogEventLevel.Warning)
        )
    

Here we’re telling Serilog that we need to read the config from settings, enrich the log context, and write both to the Console and to a File (the latter only if the log level is Warning or higher).

    Then, add an ILogger here and there, and admire the final result:

    Serilog Logging is visible on the Console
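
For reference, a minimal sketch of what “adding an ILogger here and there” could look like, injecting it into the NumberWorker class (it requires the Microsoft.Extensions.Logging using):

public class NumberWorker
{
    private readonly INumberService _service;
    private readonly ILogger<NumberWorker> _logger;

    public NumberWorker(INumberService service, ILogger<NumberWorker> logger)
        => (_service, _logger) = (service, logger);

    public void PrintNumber()
    {
        var number = _service.GetPositiveNumber();

        // Goes to the Console sink; the File sink only receives Warning and above.
        _logger.LogInformation("My wonderful number is {Number}", number);
        _logger.LogWarning("This message ends up in the log file too");
    }
}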

    Final result

To wrap up, here’s the final implementation of the Program class and the CreateHost method:

    private static void Main(string[] args)
    {
        IHost host = CreateHost();
        NumberWorker worker = ActivatorUtilities.CreateInstance<NumberWorker>(host.Services);
        worker.PrintNumber();
    }
    
    private static IHost CreateHost() =>
      Host
      .CreateDefaultBuilder()
      .ConfigureServices((context, services) =>
      {
          services.Configure<NumberConfig>(context.Configuration.GetSection("Number"));
    
          services.AddSingleton<INumberRepository, NumberRepository>();
          services.AddSingleton<INumberService, NumberService>();
      })
      .UseSerilog((context, services, configuration) => configuration
          .ReadFrom.Configuration(context.Configuration)
          .ReadFrom.Services(services)
          .Enrich.FromLogContext()
          .WriteTo.Console()
          .WriteTo.File($"report-{DateTimeOffset.UtcNow.ToString("yyyy-MM-dd-HH-mm-ss")}.txt", restrictedToMinimumLevel: LogEventLevel.Warning)
          )
      .Build();
    

    Further readings

    As always, a few resources to learn more about the topics discussed in this article.

    First and foremost, have a look at this article with a full explanation of Generic Hosts in a .NET Core application:

    🔗 .NET Generic Host in ASP.NET Core | Microsoft docs

    Then, if you recall, we’ve already learned how to print Serilog logs to the Console:

    🔗 How to log to Console with .NET Core and Serilog | Code4IT

    This article first appeared on Code4IT 🐧

    Lastly, we accessed configurations using IOptions<NumberConfig>. Did you know that there are other ways to access config?

    🔗 Understanding IOptions, IOptionsMonitor, and IOptionsSnapshot in .NET 7 | Code4IT

    as well as defining configurations for your project?

    🔗 3 (and more) ways to set configuration values in .NET | Code4IT

    Wrapping up

    In this article, we’ve learned how we can customize a .NET Console application to use dependency injection, external configurations, and Serilog logging.

    I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛

    Happy coding!

    🐧






  • How to automatically refresh configurations with Azure App Configuration in ASP.NET Core | Code4IT


ASP.NET allows you to poll Azure App Configuration so that you always get the most updated values without restarting your applications. It’s simple, but you have to think it through.


    In a previous article, we learned how to centralize configurations using Azure App Configuration, a service provided by Azure to share configurations in a secure way. Using Azure App Configuration, you’ll be able to store the most critical configurations in a single place and apply them to one or more environments or projects.

    We used a very simple example with a limitation: you have to restart your applications to make changes effective. In fact, ASP.NET connects to Azure App Config, loads the configurations in memory, and serves these configs until the next application restart.

    In this article, we’re gonna learn how to make configurations dynamic: by the end of this article, we will be able to see the changes to our configs reflected in our applications without restarting them.

    Since this one is a kind of improvement of the previous article, you should read it first.

    Let me summarize here the code showcased in the previous article. We have an ASP.NET Core API application whose only purpose is to return the configurations stored in an object, whose shape is this one:

    {
      "MyNiceConfig": {
        "PageSize": 6,
        "Host": {
          "BaseUrl": "https://www.mydummysite.com",
          "Password": "123-go"
        }
      }
    }
    

    In the constructor of the API controller, I injected an IOptions<MyConfig> instance that holds the current data stored in the application.

     public ConfigDemoController(IOptions<MyConfig> config)
            => _config = config;
    

    The only HTTP Endpoint is a GET: it just accesses that value and returns it to the client.

    [HttpGet()]
    public IActionResult Get()
    {
        return Ok(_config.Value);
    }
    

    Finally, I created a new instance of Azure App Configuration, and I used a connection string to integrate Azure App Configuration with the existing configurations by calling:

    builder.Configuration.AddAzureAppConfiguration(ConnectionString);
    

    Now we can move on and make configurations dynamic.

    Sentinel values: a guard value to monitor changes in the configurations

On Azure App Configuration, you have to update the configurations manually, one by one. Unfortunately, there is no way to update them in a single batch: you can import them in a batch, but you have to update them individually.

    Imagine that you have a service that accesses an external API whose BaseUrl and API Key are stored on Az App Configuration. We now need to move to another API: we then have to update both BaseUrl and API Key. The application is running, and we want to update the info about the external API. If we updated the application configurations every time something is updated on Az App Configuration, we would end up with an invalid state – for example, we would have the new BaseUrl and the old API Key.

    Therefore, we have to define a configuration value that acts as a sort of versioning key for the whole list of configurations. In Azure App Configuration’s jargon, it’s called Sentinel.

A Sentinel is nothing but a version key: it’s a string value used by the application to understand whether it needs to reload the whole list of configurations. Since it’s just a string, you can set any value, as long as it changes over time. My suggestion is to use the UTC date of the moment you updated the value, such as 202306051522. This way, in case of errors, you can tell when any of these values last changed (though not which ones), and, depending on the pricing tier you are using, you can compare the current values with the previous ones.

    So, head back to the Configuration Explorer page and add a new value: I called it Sentinel.

    Sentinel value on Azure App Configuration

As I said, you can use any value. For the sake of this article, I’m gonna use a simple number.

    Define how to refresh configurations using ASP.NET Core app startup

    We can finally move to the code!

    If you recall, in the previous article we added a NuGet package, Microsoft.Azure.AppConfiguration.AspNetCore, and then we added Azure App Configuration as a configurations source by calling

    builder.Configuration.AddAzureAppConfiguration(ConnectionString);
    

    That instruction is used to load all the configurations, without managing polling and updates. Therefore, we must remove it.

    Instead of that instruction, add this other one:

    builder.Configuration.AddAzureAppConfiguration(options =>
    {
        options
        .Connect(ConnectionString)
        .Select(KeyFilter.Any, LabelFilter.Null)
        // Configure to reload configuration if the registered sentinel key is modified
        .ConfigureRefresh(refreshOptions =>
                  refreshOptions.Register("Sentinel", label: LabelFilter.Null, refreshAll: true)
            .SetCacheExpiration(TimeSpan.FromSeconds(3))
          );
    });
    

    Let’s deep dive into each part:

    options.Connect(ConnectionString) just tells ASP.NET that the configurations must be loaded from that specific connection string.

    .Select(KeyFilter.Any, LabelFilter.Null) loads all keys that have no Label;

    and, finally, the most important part:

    .ConfigureRefresh(refreshOptions =>
                refreshOptions.Register(key: "Sentinel", label: LabelFilter.Null, refreshAll: true)
          .SetCacheExpiration(TimeSpan.FromSeconds(3))
        );
    

Here we are specifying that all values must be refreshed (refreshAll: true) when the key named Sentinel (key: "Sentinel") is updated. Then, those values are cached for 3 seconds (SetCacheExpiration(TimeSpan.FromSeconds(3))).

    Here I used 3 seconds as a refresh time. This means that, if the application is used continuously, the application will poll Azure App Configuration every 3 seconds – it’s clearly a bad idea! So, pick the correct value depending on the change expectations. The default value for cache expiration is 30 seconds.

    Notice that the previous instruction adds Azure App Configuration to the Configuration object, and not as a service used by .NET. In fact, the method is builder.Configuration.AddAzureAppConfiguration. We need two more steps.

    First of all, add Azure App Configuration to the IServiceCollection object:

    builder.Services.AddAzureAppConfiguration();
    

    Finally, we have to add it to our existing middlewares by calling

    app.UseAzureAppConfiguration();
    

    The final result is this:

    public static void Main(string[] args)
    {
        var builder = WebApplication.CreateBuilder(args);
    
        const string ConnectionString = "......";
    
        // Load configuration from Azure App Configuration
        builder.Configuration.AddAzureAppConfiguration(options =>
        {
            options.Connect(ConnectionString)
                    .Select(KeyFilter.Any, LabelFilter.Null)
                    // Configure to reload configuration if the registered sentinel key is modified
                    .ConfigureRefresh(refreshOptions =>
                        refreshOptions.Register(key: "Sentinel", label: LabelFilter.Null, refreshAll: true)
                        .SetCacheExpiration(TimeSpan.FromSeconds(3)));
        });
    
        // Add the service to IServiceCollection
        builder.Services.AddAzureAppConfiguration();
    
        builder.Services.AddControllers();
        builder.Services.Configure<MyConfig>(builder.Configuration.GetSection("MyNiceConfig"));
    
        var app = builder.Build();
    
        // Add the middleware
        app.UseAzureAppConfiguration();
    
        app.UseHttpsRedirection();
    
        app.MapControllers();
    
        app.Run();
    }
    

    IOptionsMonitor: accessing and monitoring configuration values

    It’s time to run the project and look at the result: some of the values are coming from Azure App Configuration.

    Default config coming from Azure App Configuration

    Now we can update them: without restarting the application, update the PageSize value, and don’t forget to update the Sentinel too. Call the endpoint again, and… nothing happens! 😯

    This is because in our controller we are using IOptions<T> instead of IOptionsMonitor<T>. As we’ve learned in a previous article, IOptionsMonitor<T> is a singleton instance that always gets the most updated config values. It also emits an event when the configurations have been refreshed.

    So, head back to the ConfigDemoController, and replace the way we retrieve the config:

    [ApiController]
    [Route("[controller]")]
    public class ConfigDemoController : ControllerBase
    {
        private readonly IOptionsMonitor<MyConfig> _config;

        public ConfigDemoController(IOptionsMonitor<MyConfig> config)
        {
            _config = config;
            _config.OnChange(Update);
        }

        [HttpGet()]
        public IActionResult Get()
        {
            return Ok(_config.CurrentValue);
        }

        private void Update(MyConfig arg1, string? arg2)
        {
            Console.WriteLine($"Configs have been updated! PageSize is {arg1.PageSize}, " +
                $" Password is {arg1.Host.Password}");
        }
    }
    

    When using IOptionsMonitor<T>, you can retrieve the current values of the configuration object by accessing the CurrentValue property. Also, you can attach a listener to the OnChange event, which is raised every time the configurations are refreshed.
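
    One more detail worth knowing: OnChange returns an IDisposable that unregisters the listener. Since controllers are created on every request, a variation of the controller above could keep that handle and release it when the controller is disposed. The following is just a sketch of the idea, not part of the original sample:

    [ApiController]
    [Route("[controller]")]
    public class ConfigDemoController : ControllerBase, IDisposable
    {
        private readonly IOptionsMonitor<MyConfig> _config;
        private readonly IDisposable? _changeListener;

        public ConfigDemoController(IOptionsMonitor<MyConfig> config)
        {
            _config = config;
            // Keep the registration handle so the listener can be detached later
            _changeListener = _config.OnChange(Update);
        }

        [HttpGet()]
        public IActionResult Get() => Ok(_config.CurrentValue);

        private void Update(MyConfig updatedConfig, string? name)
            => Console.WriteLine($"Configs have been updated! PageSize is {updatedConfig.PageSize}");

        // ASP.NET Core disposes controllers that implement IDisposable at the end of the request
        public void Dispose() => _changeListener?.Dispose();
    }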

    We can finally run the application and update the values on Azure App Configuration.

    Again, update one of the values, update the sentinel, and wait. After 3 seconds, you’ll see a message popping up in the console: it’s the text defined in the Update method.

    Then, call again the application (again, without restarting it), and admire the updated values!

    You can see a live demo here:

    Demo of configurations refreshed dynamically

    As you can see, the first call after updating the Sentinel value still returns the old values. In the meantime, though, the cache has expired and the refresh has been triggered, so the next call returns the values retrieved from Azure.

    My 2 cents on timing

    As we’ve learned, the config values are stored in a memory cache with an expiration time. Every time the cache expires, we need to retrieve the configurations from Azure App Configuration again (in particular, by checking whether the Sentinel value has been updated in the meantime). Don’t underestimate the cache expiration value, as both short and long timespans have pros and cons:

    • a short timespan keeps the values always up-to-date, making your application more reactive to changes. But it also means that you are polling the Azure App Configuration endpoints very often, making your application busier and putting you at risk of hitting the request limits of your tier;
    • a long timespan makes your application more performant because there are fewer requests to the Configuration endpoints, but it also means that the application picks up changes only a while after they are applied on Azure.

    There is also another issue with long timespans: if the same configurations are used by different services, you might end up in a dirty state. Say that you have UserService and PaymentService, and both use some configurations stored on Azure whose caching expiration is 10 minutes. Now, the following actions happen:

    1. UserService starts
    2. PaymentService starts
    3. Someone updates the values on Azure
    4. UserService restarts, while PaymentService doesn’t.

    We will end up in a situation where UserService has the most updated values, while PaymentService doesn’t. There will be a time window (in our example, up to 10 minutes) in which the configurations are misaligned.

    Also, take costs and limitations into consideration: with the Free tier you have 1,000 requests per day, while with the Standard tier you have 30,000 per hour per replica. Using the default cache expiration (30 seconds) in an application with a continuous flow of users means that you are gonna call the endpoint 2,880 times per day (2 calls per minute × 1,440 minutes per day) – way more than the Free tier allows.

    So, think thoroughly before choosing an expiration time!

    Further readings

    This article is a continuation of a previous one, and I suggest you read the other one to understand how to set up Azure App Configuration and how to integrate it in an ASP.NET Core API application in case you don’t want to use dynamic configuration.

    🔗 Azure App Configuration and ASP.NET Core API: a smart and secure way to manage configurations | Code4IT

    This article first appeared on Code4IT 🐧

    Also, we learned that using IOptions we are not getting the most updated values: in fact, we need to use IOptionsMonitor. Check out this article to understand the other differences in the IOptions family.

    🔗 Understanding IOptions, IOptionsMonitor, and IOptionsSnapshot in ASP.NET Core | Code4IT

    Finally, I briefly talked about pricing. As of July 2023, there are just 2 pricing tiers, with different limitations.

    🔗 App Configuration pricing | Microsoft Learn

    Wrapping up

    In my opinion, smart configuration handling is essential for the hard times when you have to understand why an error is happening only in a specific environment.

    Centralizing configurations is a good idea, as it allows developers to simulate a whole environment by just changing a few values on the application.

    Refreshing configurations live, without manually restarting your applications, can be a good idea, but you have to analyze the tradeoffs thoroughly.

    I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛

    Happy coding!

    🐧





    Source link

  • How an XDR Platform Transforms Your SOC Operations

    How an XDR Platform Transforms Your SOC Operations


    XDR solutions are revolutionizing how security teams handle threats by dramatically reducing false positives and streamlining operations. In fact, modern XDR platforms generate significantly fewer false positives than traditional SIEM threat analytics, allowing security teams to focus on genuine threats rather than chasing shadows. We’ve seen firsthand how security operations centers (SOCs) struggle with alert fatigue, fragmented visibility, and resource constraints. However, an XDR platform addresses these challenges by unifying information from multiple sources and providing a holistic view of threats. This integration enables organizations to operate advanced threat detection and response with fewer SOC resources, making it a cost-effective approach to modern security operations.

    An XDR platform consolidates security data into a single system, ensuring that SOC teams and surrounding departments can operate from the same information base. Consequently, this unified approach not only streamlines operations but also minimizes breach risks, making it an essential component of contemporary cybersecurity strategies.

    In this article, we’ll explore how XDR transforms SOC operations, why traditional tools fall short, and the practical benefits of implementing this technology in your security framework.

    The SOC Challenge: Why Traditional Tools Fall Short

    Security Operations Centers (SOCs) today face unprecedented challenges with their traditional security tools. While security teams strive to protect organizations, they’re increasingly finding themselves overwhelmed by fundamental limitations in their security infrastructure.

    Alert overload and analyst fatigue

    Modern SOC teams are drowning in alerts. As per Vectra AI, an overwhelming 71% of SOC practitioners worry they’ll miss real attacks buried in alert floods, while 51% believe they simply cannot keep pace with mounting security threats.

    Siloed tools and fragmented visibility

    The tool sprawl in security operations creates massive blind spots. According to Vectra AI findings, 73% of SOCs have more than 10 security tools in place, while 45% juggle more than 20 different tools. Despite this arsenal, 47% of practitioners don’t trust their tools to work as needed.

    Many organizations struggle with siloed security data across disparate systems. Each department stores logs, alerts, and operational details in separate repositories that rarely communicate with one another. This fragmentation means threat hunting becomes guesswork because critical artifacts sit in systems that no single team can access.

    Slow response times and manual processes

    Traditional SOCs rely heavily on manual processes, significantly extending detection and response times. When investigating incidents, analysts must manually piece together information from different silos, losing precious time during active cyber incidents.

    According to research by Palo Alto Networks, automation can reduce SOC response times by up to 50%, significantly limiting breach impacts. Unfortunately, most traditional SOCs lack this capability. The workflow in traditional environments is characterized by manual processes that exacerbate alert fatigue while dealing with massive threat alert volumes.

    The complexity of investigations further slows response. When an incident occurs, analysts must combine data from various sources to understand the full scope of an attack, a time-consuming process that allows threats to linger in systems longer than necessary.

    What is an XDR Platform and How Does It Work?

    Extended Detection and Response (XDR) platforms represent the evolution of cybersecurity technology, breaking down traditional barriers between security tools. Unlike siloed solutions, XDR solutions provide a holistic approach to threat management through unified visibility and coordinated response.

    Unified data collection across endpoints, network, and cloud

    At its core, an XDR platform aggregates and correlates data from multiple security layers into a centralized repository. This comprehensive data collection encompasses:

    • Endpoints (computers, servers, mobile devices)
    • Network infrastructure and traffic
    • Cloud environments and workloads
    • Email systems and applications
    • Identity and access management

    This integration eliminates blind spots that typically plague security operations. By collecting telemetry from across the entire attack surface, XDR platforms provide security teams with complete visibility into potential threats. The system automatically ingests, cleans, and standardizes this data, ensuring consistent, high-quality information for analysis.

    Real-time threat detection using AI and ML

    XDR platforms leverage advanced analytics, artificial intelligence, and machine learning to identify suspicious patterns and anomalies that human analysts might miss. These capabilities enable:

    • Automatic correlation of seemingly unrelated events across different security layers
    • Identification of sophisticated multi-vector attacks through pattern recognition
    • Real-time monitoring and analysis of data streams for immediate threat identification
    • Reduction in false positives through contextual understanding of alerts

    The AI-powered capabilities enable XDR platforms to detect threats at a scale and speed impossible for human analysts alone. Moreover, these systems continuously learn and adapt to evolving threats through machine learning models.

    Automated response and orchestration capabilities

    Once threats are detected, XDR platforms can initiate automated responses without requiring manual intervention. This automation includes:

    • Isolation of compromised devices to contain threats
    • Blocking of malicious IP addresses and domains
    • Execution of predefined response playbooks for consistent remediation
    • Prioritization of incidents based on severity for efficient resource allocation

    Key Benefits of XDR for SOC Operations

    Implementing an XDR platform delivers immediate, measurable advantages to security operations centers struggling with traditional tools and fragmented systems. SOC teams gain specific capabilities that fundamentally transform their effectiveness against modern threats.

    Faster threat detection and reduced false positives

    The strategic advantage of XDR solutions begins with their ability to dramatically reduce alert volume. XDR tools automatically group related alerts into unified incidents, representing entire attack sequences rather than isolated events. This correlation across different security layers identifies complex attack patterns that traditional solutions might miss.

    Improved analyst productivity through automation

    As per the Tines report, 64% of analysts spend over half their time on tedious manual work, with 66% believing that half of their tasks could be automated. XDR platforms address this challenge through built-in orchestration and automation that offload repetitive tasks. Specifically, XDR can automate threat detection through machine learning, streamline incident response processes, and generate AI-powered incident reports. This automation allows SOC teams to detect sophisticated attacks with fewer resources while reducing response time.

    Centralized visibility and simplified workflows

    XDR provides a single pane view that eliminates “swivel chair integration,” where analysts manually interface across multiple security systems. This unified approach aggregates data from endpoints, networks, applications, and cloud environments into a consolidated platform. As a result, analysts gain swift investigation capabilities with instant access to all forensic artifacts, events, and threat intelligence in one location. This centralization particularly benefits teams during complex investigations, enabling them to quickly understand the complete attack story.

    Better alignment with compliance and audit needs

    XDR strengthens regulatory compliance through detailed documentation and monitoring capabilities. The platform generates comprehensive logs and audit trails of security events, user activities, and system changes, helping organizations demonstrate compliance to regulators. Additionally, XDR’s continuous monitoring adapts to new threats and regulatory changes, ensuring consistent compliance over time. Through centralized visibility and data aggregation, XDR effectively monitors data flows and access patterns, preventing unauthorized access to sensitive information.

    Conclusion

    XDR platforms clearly represent a significant advancement in cybersecurity technology.  At Seqrite, we offer a comprehensive XDR platform designed to help organizations simplify their SOC operations, improve detection accuracy, and automate responses. If you are looking to strengthen your cybersecurity posture with an effective and scalable XDR solution, Seqrite XDR is built to help you stay ahead of evolving threats.

     



    Source link

  • How to integrate Feature Flags stored on Azure App Configuration in an ASP.NET Core Application | Code4IT

    How to integrate Feature Flags stored on Azure App Configuration in an ASP.NET Core Application | Code4IT


    Learn how to use Feature Flags in ASP.NET Core apps and read values from Azure App Configuration. Understand how to use filters, like the Percentage filter, to control feature activation, and learn how to take full control of the cache expiration of the values.

    Table of Contents

    Just a second! 🫷
    If you are here, it means that you are a software developer.
    So, you know that storage, networking, and domain management have a cost .

    If you want to support this blog, please ensure that you have disabled the adblocker for this site.
    I configured Google AdSense to show as few ADS as possible – I don’t want to bother you with lots of ads, but I still need to add some to pay for the resources for my site.

    Thank you for your understanding.
    Davide

    Feature Flags let you remotely control the activation of features without code changes. They help you to test, release, and manage features safely and quickly by driving changes using centralized configurations.

    In a previous article, we learned how to integrate Feature Flags in ASP.NET Core applications. Also, a while ago, we learned how to integrate Azure App Configuration in an ASP.NET Core application.

    In this article, we are going to join the two streams in a single article: in fact, we will learn how to manage Feature Flags using Azure App Configuration to centralize our configurations.

    It’s a sort of evolution from the previous article. Instead of changing the static configurations and redeploying the whole application, we are going to move the Feature Flags to Azure so that you can enable or disable those flags in just one click.

    A recap of Feature Flags read from the appsettings file

    Let’s reuse the example shown in the previous article.

    We have an ASP.NET Core application (in that case, we were building a Razor application, but it’s not important for the sake of this article), with some configurations defined in the appsettings file under the FeatureManagement key:

    {
      "FeatureManagement": {
        "Header": true,
        "Footer": true,
        "PrivacyPage": false,
        "ShowPicture": {
          "EnabledFor": [
            {
              "Name": "Percentage",
              "Parameters": { "Value": 60 }
            }
          ]
        }
      }
    }
    

    We have already taken a deep dive into Feature Flags in an ASP.NET Core application in the previous article. However, let me summarize it.

    First of all, you have to define your flags in the appsettings.json file using the structure we saw before.

    To use Feature Flags in ASP.NET Core you have to install the Microsoft.FeatureManagement.AspNetCore NuGet package.
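
    If you prefer working from the command line, the same package can be added with the .NET CLI:

    dotnet add package Microsoft.FeatureManagement.AspNetCore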

    Then, you have to tell ASP.NET to use Feature Flags by calling:

    builder.Services.AddFeatureManagement();
    

    Finally, you are able to consume those flags in three ways (a minimal sketch of the first two follows the list):

    • inject the IFeatureManager interface and call IsEnabledAsync;
    • use the FeatureGate attribute on a Controller class or a Razor model;
    • use the <feature> tag in a Razor page to show or hide a portion of HTML
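
    To make the first two options more concrete, here is a minimal, illustrative sketch – the flag names come from the appsettings example above, while the controller and its actions are hypothetical:

    using Microsoft.AspNetCore.Mvc;
    using Microsoft.FeatureManagement;
    using Microsoft.FeatureManagement.Mvc;

    public class HomeController : Controller
    {
        private readonly IFeatureManager _featureManager;

        public HomeController(IFeatureManager featureManager)
            => _featureManager = featureManager;

        // 1) Inject IFeatureManager and query the flag by name
        public async Task<IActionResult> Index()
        {
            bool showHeader = await _featureManager.IsEnabledAsync("Header");
            return View(showHeader);
        }

        // 2) Guard a whole action (or controller) with the FeatureGate attribute
        [FeatureGate("PrivacyPage")]
        public IActionResult Privacy() => View();
    }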

    How to create Feature Flags on Azure App Configuration

    We are ready to move our Feature Flags to Azure App Configuration. Needless to say, you need an Azure subscription 😉

    Log in to the Azure Portal, head to “Create a resource”, and create a new App Configuration:

    Azure App configuration in the Marketplace

    I’m going to reuse the same instance I created in the previous article – you can see the full details in the How to create an Azure App Configuration instance section.

    Now we have to configure the same keys defined in the appsettings file: Header, Footer, and PrivacyPage.

    Open the App Configuration instance and locate the “Feature Manager” menu item in the left panel. This is the central place for creating, removing, and managing your Feature Flags. Here, you can see that I have already added the Header and Footer, and you can see their current state: “Footer” is enabled, while “Header” is not.

    Feature Flags manager dashboard

    How can I add the PrivacyPage flag? It’s elementary: click the “Create” button and fill in the fields.

    You have to define a Name and a Key (they can also be different), and if you want, you can add a Label and a Description. You can also define whether the flag should be active by checking the “Enable feature flag” checkbox.

    Feature Flag definition form

    Read Feature Flags from Azure App Configuration in an ASP.NET Core application

    It’s time to integrate Azure App Configuration with our ASP.NET Core application.

    Before moving to the code, we have to locate the connection string and store it somewhere.

    Head back to the App Configuration resource and locate the “Access keys” menu item under the “Settings” section.

    Access Keys page with connection strings

    From here, copy the connection string (I suggest that you use the Read-only Keys) and store it somewhere.

    Before proceeding, you have to install the Microsoft.Azure.AppConfiguration.AspNetCore NuGet package.

    Now, we can add Azure App Configuration as a source for our configurations by connecting to the connection string and by declaring that we are going to use Feature Flags:

    builder.Configuration.AddAzureAppConfiguration(options =>
        options.Connect(connectionString).UseFeatureFlags()
    );
    

    That’s not enough. We also need to tell ASP.NET that we are going to consume these configurations by registering the related services on the Services property:

    builder.Services.AddAzureAppConfiguration();
    
    builder.Services.AddFeatureManagement();
    

    Finally, once we have built our application with the usual builder.Build(), we have to add the Azure App Configuration middleware:

    app.UseAzureAppConfiguration();
    

    To try it out, run the application and validate that the flags are being applied. You can enable or disable those flags on Azure and restart the application to see the changes immediately. Alternatively, you can wait 30 seconds for the flag values to be refreshed and see the changes applied without a restart.

    Using the Percentage filter on Azure App Configuration

    Suppose you want to enable a functionality only for a percentage of sessions (sessions, not users!). In that case, you can use the Percentage filter.

    The previous article had a specific section dedicated to the PercentageFilter, so you might want to check it out.

    As a recap, we defined the flag as:

    {
      "ShowPicture": {
        "EnabledFor": [
          {
            "Name": "Percentage",
            "Parameters": {
              "Value": 60
            }
          }
        ]
      }
    }
    

    And added the PercentageFilter to ASP.NET with:

    builder.Services.AddFeatureManagement()
        .AddFeatureFilter<PercentageFilter>();
    

    Clearly, we can define such flags on Azure as well.

    Head back to the Azure Portal and add a new Feature Flag (or edit an existing one). This time, you have to add a Feature Filter to it. Even though the PercentageFilter ships out-of-the-box with the FeatureManagement NuGet package, it is not listed among the built-in filters on the Azure portal, so you have to define it as a custom filter.

    You have to define the filter with the following values:

    • Filter Type must be “Custom”;
    • Custom filter name must be “Percentage”
    • You must add a new key, “Value”, and set its value to “60”.

    Custom filter used to create Percentage Filter

    The configuration we just added reflects the JSON value we previously had in the appsettings file: 60% of the requests will activate the flag, while the remaining 40% will not.

    Define the cache expiration interval for Feature Flags

    By default, Feature Flags are stored in an internal cache for 30 seconds.

    Sometimes that’s not the best choice for your project: you may prefer a longer duration to reduce the calls to the App Configuration platform, or, on the contrary, you may want the changes to be available almost immediately.

    You can then define the cache expiration interval you need by configuring the options for the Feature Flags:

    builder.Configuration.AddAzureAppConfiguration(options =>
        options.Connect(connectionString).UseFeatureFlags(featureFlagOptions =>
        {
            featureFlagOptions.CacheExpirationInterval = TimeSpan.FromSeconds(10);
        })
    );
    

    This way, Feature Flag values are stored in the internal cache for 10 seconds. Then, when you reload the page, the configurations are reread from Azure App Configuration and the flags are applied with the new values.

    Further readings

    This is the final article of a path I built during these months to explore how to use configurations in ASP.NET Core.

    We started by learning how to set configuration values in an ASP.NET Core application, as explained here:

    🔗 3 (and more) ways to set configuration values in ASP.NET Core

    Then, we learned how to read and use them with the IOptions family:

    🔗 Understanding IOptions, IOptionsMonitor, and IOptionsSnapshot in ASP.NET Core

    From here, we learned how to read the same configurations from Azure App Configuration, to centralize our settings:

    🔗 Azure App Configuration and ASP.NET Core API: a smart and secure way to manage configurations | Code4IT

    Then, we configured our applications to automatically refresh the configurations using a Sentinel value:

    🔗 How to automatically refresh configurations with Azure App Configuration in ASP.NET Core

    Finally, we introduced Feature Flags in our apps:

    🔗 Feature Flags 101: A Guide for ASP.NET Core Developers | Code4IT

    And then we got to this article!

    This article first appeared on Code4IT 🐧

    Wrapping up

    In this article, we have configured an ASP.NET Core application to read the Feature Flags stored on Azure App Configuration.

    Here’s the minimal code you need to add Feature Flags for ASP.NET Core API Controllers:

    var builder = WebApplication.CreateBuilder(args);
    
    string connectionString = "my connection string";
    
    builder.Services.AddControllers();
    
    builder.Configuration.AddAzureAppConfiguration(options =>
        options.Connect(connectionString)
        .UseFeatureFlags(featureFlagOptions =>
            {
                featureFlagOptions.CacheExpirationInterval = TimeSpan.FromSeconds(10);
            }
        )
    );
    
    builder.Services.AddAzureAppConfiguration();
    
    builder.Services.AddFeatureManagement()
        .AddFeatureFilter<PercentageFilter>();
    
    var app = builder.Build();
    
    app.UseRouting();
    app.UseAzureAppConfiguration();
    app.MapControllers();
    app.Run();
    

    I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛

    Happy coding!

    🐧





    Source link

  • How to create Unit Tests for Model Validation | Code4IT


    Just a second! 🫷
    If you are here, it means that you are a software developer.
    So, you know that storage, networking, and domain management have a cost .

    If you want to support this blog, please ensure that you have disabled the adblocker for this site.
    I configured Google AdSense to show as few ADS as possible – I don’t want to bother you with lots of ads, but I still need to add some to pay for the resources for my site.

    Thank you for your understanding.
    Davide

    Model validation is fundamental to any project: it brings security and robustness, acting as a first shield against invalid states.

    You should then add Unit Tests focused on model validation. In fact, when defining the input model, you should always consider both the valid and, even more, the invalid models, making sure that all the invalid models are rejected.

    BDD is a good approach for this scenario, and you can use TDD to implement it gradually.

    Okay, but how can you validate that the models and model attributes you defined are correct?

    Let’s define a simple model:

    public class User
    {
        [Required]
        [MinLength(3)]
        public string FirstName { get; set; }
    
        [Required]
        [MinLength(3)]
        public string LastName { get; set; }
    
        [Range(18, 100)]
        public int Age { get; set; }
    }
    

    Have we defined our model correctly? Are we covering all the edge cases? A well-written Unit Test suite is our best friend here!

    We have two choices: we can write Integration Tests to send requests to our system, which is running an in-memory server, and check the response we receive. Or we can use the internal Validator class, the one used by ASP.NET to validate input models, to create slim and fast Unit Tests. Let’s use the second approach.

    Here’s a utility method we can use in our tests:

    public static IList<ValidationResult> ValidateModel(object model)
    {
        var results = new List<ValidationResult>();
    
        var validationContext = new ValidationContext(model, null, null);
    
        Validator.TryValidateObject(model, validationContext, results, true);
    
        if (model is IValidatableObject validatableModel)
           results.AddRange(validatableModel.Validate(validationContext));
    
        return results;
    }
    

    In short, we create a validation context without any external dependency, focused only on the input model: new ValidationContext(model, null, null).

    Next, we validate each field by calling TryValidateObject and store the validation results in the results list.

    Finally, if the model implements the IValidatableObject interface, which exposes the Validate method, we also call that Validate() method and add the returned validation errors to the results list created before.

    As you can see, we can handle both validation coming from attributes on the fields, such as [Required], and custom validation defined in the model class’s Validate() method.
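
    For example, a model with a custom, cross-field rule could look like this (the class and the rule are hypothetical, just to show where Validate() fits):

    public class BookingRequest : IValidatableObject
    {
        [Required]
        public string CustomerName { get; set; }

        public DateTime CheckIn { get; set; }
        public DateTime CheckOut { get; set; }

        // Custom validation: runs in addition to the attribute-based rules above
        public IEnumerable<ValidationResult> Validate(ValidationContext validationContext)
        {
            if (CheckOut <= CheckIn)
                yield return new ValidationResult(
                    "CheckOut must be later than CheckIn",
                    new[] { nameof(CheckIn), nameof(CheckOut) });
        }
    }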

    Now, we can use this method to verify whether the validation passes and, in case it fails, which errors are returned:

    [Test]
    public void User_ShouldPassValidation_WhenModelIsValid()
    {
        var model = new User { FirstName = "Davide", LastName = "Bellone", Age = 32 };
        var validationResult = ModelValidationHelper.ValidateModel(model);
        Assert.That(validationResult, Is.Empty);
    }

    [Test]
    public void User_ShouldNotPassValidation_WhenLastNameIsEmpty()
    {
        var model = new User { FirstName = "Davide", LastName = null, Age = 32 };
        var validationResult = ModelValidationHelper.ValidateModel(model);
        Assert.That(validationResult, Is.Not.Empty);
    }

    [Test]
    public void User_ShouldNotPassValidation_WhenAgeIsLessThan18()
    {
        var model = new User { FirstName = "Davide", LastName = "Bellone", Age = 10 };
        var validationResult = ModelValidationHelper.ValidateModel(model);
        Assert.That(validationResult, Is.Not.Empty);
    }
    

    Further readings

    Model Validation allows you to create more robust APIs. To improve robustness, you can follow Postel’s law:

    🔗 Postel’s law for API Robustness | Code4IT

    This article first appeared on Code4IT 🐧

    Model validation, in my opinion, is one of the cases where Unit Tests are way better than Integration Tests. It’s a perfect example of the Testing Diamond, the best way (in most cases) to structure a test suite:

    🔗 Testing Pyramid vs Testing Diamond (and how they affect Code Coverage) | Code4IT

    If you still prefer writing Integration Tests for this kind of operation, you can rely on the WebApplicationFactory class and use it in your NUnit tests:

    🔗 Advanced Integration Tests for .NET 7 API with WebApplicationFactory and NUnit | Code4IT

    Wrapping up

    Model validation is crucial. Testing the correctness of model validation can make or break your application. Please don’t skip it!

    I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛

    Happy coding!

    🐧





    Source link

  • OhMyPosh and CascadiaCode not working on PowerShell? How to fix it in Windows 10 and Windows 11 Integrated Terminal | Code4IT

    OhMyPosh and CascadiaCode not working on PowerShell? How to fix it in Windows 10 and Windows 11 Integrated Terminal | Code4IT


    Learn how to integrate Oh My Posh, a cross-platform tool that lets you create beautiful and informative prompts for PowerShell.

    Table of Contents

    Just a second! 🫷
    If you are here, it means that you are a software developer.
    So, you know that storage, networking, and domain management have a cost .

    If you want to support this blog, please ensure that you have disabled the adblocker for this site.
    I configured Google AdSense to show as few ADS as possible – I don’t want to bother you with lots of ads, but I still need to add some to pay for the resources for my site.

    Thank you for your understanding.
    Davide

    The content of the blog you are reading right now is stored in a Git repository. Every time I create an article, I create a new Git Branch to isolate the changes.

    To generate the skeleton of the articles, I use the command line (well, I generally use PowerShell). In particular, since I’m using both Windows 10 and Windows 11 – depending on the laptop I’m working on – I use the Integrated Terminal, which allows you to define the style, the fonts, and so on for every terminal profile configured in the settings.

    Windows terminal with default style

    The default profile is pretty basic: no info is shown except for the current path – I want to customize the appearance.

    I want to show the status of the Git repository, including:

    • repository name
    • branch name
    • outgoing commits

    There are lots of articles that teach how to use OhMyPosh with Cascadia Code. Unfortunately, I couldn’t make them work.

    In this article, I show you how I fixed it: a step-by-step guide I wrote while installing everything on my local machine. I hope it works for you as well!

    Step 1: Create the $PROFILE file if it does not exist

    In PowerShell, you can customize the current execution by updating the $PROFILE file.

    Clearly, you first have to check if the profile file exists.

    Open the PowerShell and type:

    $PROFILE # You can also use $profile lowercase - it's the same!
    

    This command shows you the expected path of this file. The file, if it exists, is stored in that location.

    The Profile file is expected to be under a specific folder whose path can be found using the $PROFILE command

    Here, the $PROFILE file should be available under the folder C:\Users\d.bellone\Documents\WindowsPowerShell. In my case, though, it does not exist!

    The Profile file is expected to be under a specific path, but it may not exist

    Therefore, you must create it manually: head to that folder and create a file named Microsoft.PowerShell_profile.ps1.

    Note: it might happen that not even the WindowsPowerShell folder exists. If it’s missing, well, create it!

    Step 2: Install OhMyPosh using Winget, Scoop, or PowerShell

    To use OhMyPosh, we have to – of course – install it.

    As explained in the official documentation, we have three ways to install OhMyPosh, depending on the tool you prefer.

    If you use Winget, just run:

    winget install JanDeDobbeleer.OhMyPosh -s winget
    

    If you prefer Scoop, the command is:

    scoop install https://github.com/JanDeDobbeleer/oh-my-posh/releases/latest/download/oh-my-posh.json
    

    And, if you like working with PowerShell, execute:

    Set-ExecutionPolicy Bypass -Scope Process -Force; Invoke-Expression ((New-Object System.Net.WebClient).DownloadString('https://ohmyposh.dev/install.ps1'))
    

    I used Winget, and you can see the installation process here:

    Install OhMyPosh with Winget

    Now, to apply these changes, you have to restart the PowerShell.

    Step 3: Add OhMyPosh to the PowerShell profile

    Open the Microsoft.PowerShell_profile.ps1 file and add the following line:

    oh-my-posh init pwsh | Invoke-Expression
    

    This command is executed every time you open the PowerShell with the default profile, and it initializes OhMyPosh to have it available during the current session.

    Now, you can save and close the file.

    Hint: you can open the profile file with Notepad by running notepad $PROFILE.

    Step 4: Set the Execution Policy to RemoteSigned

    Restart the terminal. In all probability, you will see an error like this:

    “The file .ps1 is not digitally signed” error

    The error message

    The file <path>\Microsoft.PowerShell_profile.ps1 is
    not digitally signed. You cannot run this script on the current system

    means that PowerShell does not trust the script it’s trying to load.

    To see which Execution Policy is currently active, run:
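
    Get-ExecutionPolicy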

    You’ll probably see that the value is AllSigned.

    To enable the execution of scripts created on your local machine, you have to set the Execution Policy to RemoteSigned, by running the following command in a PowerShell opened in administrator mode:

    Set-ExecutionPolicy RemoteSigned
    

    Let’s see the definition of the RemoteSigned Execution policy as per SQLShack’s article:

    This is also a safe PowerShell Execution policy to set in an enterprise environment. This policy dictates that any script that was not created on the system that the script is running on, should be signed. Therefore, this will allow you to write your own script and execute it.

    So, yeah, feel free to proceed and set the new Execution policy to have your PowerShell profile loaded correctly every time you open a new PowerShell instance.

    Now, OhMyPosh can run in the current profile.

    Head to a Git repository and notice that… it’s not working! 🤬 Or, well, we have the Git information, but we are missing some icons and glyphs.

    Oh My Posh is loaded correctly, but some icons are missing due to the wrong font

    Step 5: Use CaskaydiaCove, not Cascadia Code, as a font

    We still have to install the correct font with the missing icons.

    We will install it using Chocolatey, a package manager for Windows.

    To check if you have it installed, run:
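
    # Any Chocolatey command is enough for this check; printing the version is the simplest option
    choco --version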

    Now, to install the correct font family, open a PowerShell with administration privileges and run:

    choco install cascadia-code-nerd-font
    

    Once the installation is complete, you must tell Integrated Terminal to use the correct font by following these steps:

    1. open the Settings page (by hitting CTRL + ,)
    2. select the profile you want to update (in my case, I’ll update the default profile)
    3. open the Appearance section
    4. under Font face select CaskaydiaCove Nerd Font

    PowerShell profile settings - Font Face should be CaskaydiaCove Nerd Font

    Now close the Integrated Terminal to apply the changes.

    Open it again, navigate to a Git repository, and admire the result.

    OhMyPosh with icons and fonts loaded correctly

    Further readings

    The first time I read about OhMyPosh, it was on Scott Hanselman’s blog. I couldn’t make his solution work – and that’s the reason I wrote this article. However, in his article, he shows how he customized his own Terminal with more glyphs and icons, so you should give it a read.

    🔗 My Ultimate PowerShell prompt with Oh My Posh and the Windows Terminal | Scott Hanselman’s blog

    We customized our PowerShell profile with just one simple configuration. However, you can do a lot more. You can read Ruud’s in-depth article about PowerShell profiles.

    🔗 How to Create a PowerShell Profile – Step-by-Step | Lazyadmin

    One of the core parts of this article is that we have to use CaskaydiaCove as a font instead of the (in)famous Cascadia Code. But why?

    🔗 Why CaskaydiaCove and not Cascadia Code? | GitHub

    Finally, as I said at the beginning of this article, I use Git and Git Branches to handle the creation and management of my blog articles. That’s just the tip of the iceberg! 🏔️

    If you want to steal my (previous) workflow, have a look at the behind-the-scenes of my blogging process (note: in the meanwhile, a lot of things have changed, but these steps can still be helpful for you)

    🔗 From idea to publishing, and beyond: how I automated my blogging workflow with GitHub, PowerShell, and Azure | Code4IT

    This article first appeared on Code4IT 🐧

    Wrapping up

    In this article, we learned how to install OhMyPosh in PowerShell and overcome all the errors that you (well, I) won’t find described in other articles.

    I wrote this step-by-step article alongside installing these tools on my local machine, so I’m confident the solution will work.

    Did this solution work for you? Let me know! 📨

    I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛

    Happy coding!

    🐧





    Source link