Tag: and

  • Advanced Integration Tests for .NET 7 API with WebApplicationFactory and NUnit | Code4IT



    Integration Tests are incredibly useful: a few Integration Tests are often more useful than lots of Unit Tests. Let’s learn some advanced capabilities of WebApplicationFactory.


    In a previous article, we learned a quick way to create Integration Tests for ASP.NET API by using WebApplicationFactory. That was a nice introductory article. But now we will delve into more complex topics and examples.

    In my opinion, a few Integration Tests and just the necessary number of Unit tests are better than hundreds of Unit Tests and no Integration Tests at all. In general, the Testing Diamond should be preferred over the Testing Pyramid (well, in most cases).

    In this article, we are going to create advanced Integration Tests by defining custom application settings, customizing dependencies to be used only during tests, defining custom logging, and performing complex operations in our tests.

    For the sake of this article, I created a sample API application that exposes one single endpoint whose purpose is to retrieve some info about the URL passed in the query string. For example,

    GET /SocialPostLink?uri=https%3A%2F%2Ftwitter.com%2FBelloneDavide%2Fstatus%2F1682305491785973760
    

    will return

    {
      "instanceName": "Real",
      "info": {
        "socialNetworkName": "Twitter",
        "sourceUrl": "https://twitter.com/BelloneDavide/status/1682305491785973760",
        "username": "BelloneDavide",
        "id": "1682305491785973760"
      }
    }
    

    For completeness, instanceName is a value coming from the appsettings.json file, while info is an object that holds some info about the social post URL passed as input.

    Internally, the code is using the Chain of Responsibility pattern: there is a handler that “knows” if it can handle a specific URL; if so, it just elaborates the input; otherwise, it calls the next handler.

    There is also a Factory that builds the chain, and finally, a Service that instantiates the Factory and then resolves the dependencies.
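
    To make this more concrete, here is a minimal, hypothetical sketch of what a handler in such a chain could look like (the interface and class names are illustrative, not necessarily the ones used in the sample project):

    public interface ISocialLinkHandler
    {
        LinkInfo? GetLinkInfo(Uri uri);
    }

    public class TwitterLinkHandler : ISocialLinkHandler
    {
        private readonly ISocialLinkHandler? _next;

        public TwitterLinkHandler(ISocialLinkHandler? next) => _next = next;

        public LinkInfo? GetLinkInfo(Uri uri) =>
            uri.Host.Contains("twitter.com")
                // this handler "knows" how to handle Twitter URLs...
                ? new LinkInfo { SocialNetworkName = "Twitter", SourceUrl = uri }
                // ...otherwise it passes the input to the next handler in the chain
                : _next?.GetLinkInfo(uri);
    }
    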

    As you can see, this solution can become complex. We could run lots of Unit Tests to validate that the Chain of Responsibility works as expected. We could even write a Unit Test suite for the Factory.

    Class Diagram

    But, at the end of the day, we don’t really care about the internal structure of the project: as long as it works as expected, we could even use a huge switch block (clearly, with all the consequences of this choice). So, let’s write some Integration Tests.

    How to create a custom WebApplicationFactory in .NET

    When creating Integration Tests for .NET APIs, you have to create a new instance of WebApplicationFactory, a class coming from the Microsoft.AspNetCore.Mvc.Testing NuGet package.

    Since we are going to define it once and reuse it across all the tests, let’s create a new class that extends WebApplicationFactory, and add some custom behavior to it.

    public class IntegrationTestWebApplicationFactory : WebApplicationFactory<Program>
    {
    
    }
    

    Let’s focus on the Program class: as you can see, the WebApplicationFactory class requires an entry point. Generally speaking, it’s the Program class of our application.

    If you hover on WebApplicationFactory<Program> and hit CTRL+. in Visual Studio, the autocomplete proposes two alternatives: one is the Program class defined in your APIs, while the other one is the Program class defined in Microsoft.VisualStudio.TestPlatform.TestHost. Choose the one for your API application! The WebApplicationFactory class will then instantiate your API following the instructions defined in your Program class, thus resolving all the dependencies and configurations as if you were running your application locally.

    What if you don’t have a Program class? If you use top-level statements, the Program class is “implicit”, so you cannot reference it directly. The workaround is to create a new partial class named Program and leave it empty: this way, you have a class name that can be used to reference the API definition:

    public partial class Program { }
    

    Within this class, you can customize the WebHost to be created by overriding the ConfigureWebHost method:

    public class IntegrationTestWebApplicationFactory : WebApplicationFactory<Program>
    {
        protected override void ConfigureWebHost(IWebHostBuilder builder)
        {
              builder.ConfigureAppConfiguration((host, configurationBuilder) => { });
        }
    }
    

    How to use WebApplicationFactory in your NUnit tests

    It’s time to start working on some real Integration Tests!

    As we said before, we have only one HTTP endpoint, defined like this:

    
    private readonly ISocialLinkParser _parser;
    private readonly ILogger<SocialPostLinkController> _logger;
    private readonly IConfiguration _config;
    
    public SocialPostLinkController(ISocialLinkParser parser, ILogger<SocialPostLinkController> logger, IConfiguration config)
    {
        _parser = parser;
        _logger = logger;
        _config = config;
    }
    
    [HttpGet]
    public IActionResult Get([FromQuery] string uri)
    {
        _logger.LogInformation("Received uri {Uri}", uri);
        if (Uri.TryCreate(uri, new UriCreationOptions {  }, out Uri _uri))
        {
            var linkInfo = _parser.GetLinkInfo(_uri);
            _logger.LogInformation("Uri {Uri} is of type {Type}", uri, linkInfo.SocialNetworkName);
    
            var instance = new Instance
            {
                InstanceName = _config.GetValue<string>("InstanceName"),
                Info = linkInfo
            };
            return Ok(instance);
        }
        else
        {
            _logger.LogWarning("Uri {Uri} is not a valid Uri", uri);
            return BadRequest();
        }
    }
    

    We have 2 flows to validate:

    • If the input URI is valid, the HTTP Status code should be 200;
    • If the input URI is invalid, the HTTP Status code should be 400;

    We could simply write Unit Tests for this purpose, but let me write Integration Tests instead.

    First of all, we have to create a test class and instantiate IntegrationTestWebApplicationFactory once. Then, every time a test runs, we create a new HttpClient that automatically includes all the services and configurations defined in the API application.

    public class ApiIntegrationTests : IDisposable
    {
        private IntegrationTestWebApplicationFactory _factory;
        private HttpClient _client;
    
        [OneTimeSetUp]
        public void OneTimeSetup() => _factory = new IntegrationTestWebApplicationFactory();
    
        [SetUp]
        public void Setup() => _client = _factory.CreateClient();
    
        public void Dispose() => _factory?.Dispose();
    }
    

    As you can see, the test class implements IDisposable so that we can call Dispose() on the IntegrationTestWebApplicationFactory instance.

    From now on, we can use the _client instance to work with the in-memory instance of the API.

    One of the best parts of it is that, since it’s an in-memory instance, we can even debug our API application. When you create a test and put a breakpoint in the production code, you can hit it and see the actual values as if you were running the application in a browser.

    Now that we have the instance of HttpClient, we can create two tests to ensure that the two cases we defined before are valid. If the input string is a valid URI, return 200:

    [Test]
    public async Task Should_ReturnHttp200_When_UrlIsValid()
    {
        string inputUrl = "https://twitter.com/BelloneDavide/status/1682305491785973760";
    
        var result = await _client.GetAsync($"SocialPostLink?uri={inputUrl}");
    
        Assert.That(result.StatusCode, Is.EqualTo(HttpStatusCode.OK));
    }
    

    Otherwise, return Bad Request:

    [Test]
    public async Task Should_ReturnBadRequest_When_UrlIsNotValid()
    {
        string inputUrl = "invalid-url";
    
        var result = await _client.GetAsync($"/SocialPostLink?uri={inputUrl}");
    
        Assert.That(result.StatusCode, Is.EqualTo(HttpStatusCode.BadRequest));
    }
    

    How to create test-specific configurations using InMemoryCollection

    WebApplicationFactory is highly configurable thanks to the ConfigureWebHost method. For instance, you can customize the settings injected into your services.

    Usually, you want to rely on the exact same configurations defined in your appsettings.json file to ensure that the system behaves correctly with the “real” configurations.

    For example, I defined the key “InstanceName” in the appsettings.json file with the value “Real”; that value is then used to populate the returned Instance object.
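
    To give you an idea, the relevant slice of the appsettings.json file could look like this (an illustrative sketch; only the key discussed in this article is shown):

    {
      "InstanceName": "Real"
    }
    

    We can validate that the value is actually read from that source with this test: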

    [Test]
    public async Task Should_ReadInstanceNameFromSettings()
    {
        string inputUrl = "https://twitter.com/BelloneDavide/status/1682305491785973760";
    
        var result = await _client.GetFromJsonAsync<Instance>($"/SocialPostLink?uri={inputUrl}");
    
        Assert.That(result.InstanceName, Is.EqualTo("Real"));
    }
    

    But some other times you might want to override a specific configuration key.

    The ConfigureAppConfiguration method allows you to customize how you manage Configurations by adding or removing sources.

    If you want to add some configurations specific to the WebApplicationFactory, you can use AddInMemoryCollection, a method that allows you to add configurations in a key-value format:

    protected override void ConfigureWebHost(IWebHostBuilder builder)
    {
        builder.ConfigureAppConfiguration((host, configurationBuilder) =>
        {
            configurationBuilder.AddInMemoryCollection(
                new List<KeyValuePair<string, string?>>
                {
                    new KeyValuePair<string, string?>("InstanceName", "FromTests")
                });
        });
    }
    

    Even if you had the InstanceName configured in your appsettings.json file, the value is now overridden and set to FromTests.

    You can validate this change by simply replacing the expected value in the previous test:

    [Test]
    public async Task Should_ReadInstanceNameFromSettings()
    {
        string inputUrl = "https://twitter.com/BelloneDavide/status/1682305491785973760";
    
        var result = await _client.GetFromJsonAsync<Instance>($"/SocialPostLink?uri={inputUrl}");
    
        Assert.That(result.InstanceName, Is.EqualTo("FromTests"));
    }
    

    If you also want to discard all the other existing configuration sources, you can call configurationBuilder.Sources.Clear() before AddInMemoryCollection and remove all the other existing configurations.
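
    A minimal sketch of that combination (the same code appears in the full example at the end of this article):

    protected override void ConfigureWebHost(IWebHostBuilder builder)
    {
        builder.ConfigureAppConfiguration((host, configurationBuilder) =>
        {
            // Remove the default sources (appsettings.json, environment variables, ...)
            configurationBuilder.Sources.Clear();

            // Add only the test-specific values
            configurationBuilder.AddInMemoryCollection(
                new List<KeyValuePair<string, string?>>
                {
                    new KeyValuePair<string, string?>("InstanceName", "FromTests")
                });
        });
    }
    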

    How to set up custom dependencies for your tests

    Maybe you don’t want to resolve all the existing dependencies, but just a subset of them. For example, you might not want to call external APIs with a limited number of free API calls to avoid paying for the test-related calls. You can then rely on Stub classes that simulate the dependency by giving you full control of the behavior.

    We want to replace an existing class with a stub: in this case, we are going to create a stub class to be used instead of SocialLinkParser:

    public class StubSocialLinkParser : ISocialLinkParser
    {
        public LinkInfo GetLinkInfo(Uri postUri) => new LinkInfo
        {
            SocialNetworkName = "test from stub",
            Id = "test id",
            SourceUrl = postUri,
            Username = "test username"
        };
    }
    

    We can then customize Dependency Injection to use StubSocialLinkParser in place of SocialLinkParser by specifying the dependency within the ConfigureTestServices method:

    builder.ConfigureTestServices(services =>
    {
        services.AddScoped<ISocialLinkParser, StubSocialLinkParser>();
    });
    

    Finally, we can create a method to validate this change:

    [Test]
    public async Task Should_UseStubName()
    {
        string inputUrl = "https://twitter.com/BelloneDavide/status/1682305491785973760";
    
        var result = await _client.GetFromJsonAsync<Instance>($"/SocialPostLink?uri={inputUrl}");
    
        Assert.That(result.Info.SocialNetworkName, Is.EqualTo("test from stub"));
    }
    

    How to create Integration Tests on specific resolved dependencies

    Now we are going to test that the SocialLinkParser does its job, regardless of the internal implementation. Right now we have used the Chain of Responsibility pattern, and we rely on the ISocialLinksFactory interface to create the correct sequence of handlers. But we don’t know in the future how we will define the code: maybe we will replace it all with a huge if-else sequence – the most important part is that the code works, regardless of the internal implementation.

    We can proceed in two ways: writing tests on the interface or writing tests on the concrete class.

    For the sake of this article, we are going to run tests on the SocialLinkParser class. Not the interface, but the concrete class. The first step is to add the class to the DI engine in the Program class:

    builder.Services.AddScoped<SocialLinkParser>();
    

    Now we can create a test to validate that it is working:

    [Test]
    public async Task Should_ResolveDependency()
    {
        using (var _scope = _factory.Services.CreateScope())
        {
            var service = _scope.ServiceProvider.GetRequiredService<SocialLinkParser>();
            Assert.That(service, Is.Not.Null);
            Assert.That(service, Is.AssignableTo<SocialLinkParser>());
        }
    }
    

    As you can see, we are creating an IServiceScope by calling _factory.Services.CreateScope(). Since we have to discard this scope after the test run, we have to place it within a using block. Then, we can create a new instance of SocialLinkParser by calling _scope.ServiceProvider.GetRequiredService<SocialLinkParser>() and create all the tests we want on the concrete implementation of the class.

    The benefit of this approach is that you have all the internal dependencies already resolved, without relying on mocks. You can then ensure that everything, from that point on, works as you expect.

    Here I created the scope within a using block. There is another approach that I prefer: create the scope instance in the SetUp method, and call Dispose() on it in the TearDown phase:

    protected IServiceScope _scope;
    protected SocialLinkParser _sut;
    private IntegrationTestWebApplicationFactory _factory;
    
    [OneTimeSetUp]
    public void OneTimeSetup() => _factory = new IntegrationTestWebApplicationFactory();
    
    [SetUp]
    public void Setup()
    {
        _scope = _factory.Services.CreateScope();
        _sut = _scope.ServiceProvider.GetRequiredService<SocialLinkParser>();
    }
    
    [TearDown]
    public void TearDown()
    {
        _sut = null;
        _scope.Dispose();
    }
    
    public void Dispose() => _factory?.Dispose();
    

    You can see an example of the implementation here in the SocialLinkParserTests class.
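
    With this setup, each test can use the resolved _sut instance directly. Here’s a minimal, hypothetical example (the expected values come from the sample response shown at the beginning of this article):

    [Test]
    public void Should_RecognizeTwitterLinks()
    {
        var uri = new Uri("https://twitter.com/BelloneDavide/status/1682305491785973760");

        var linkInfo = _sut.GetLinkInfo(uri);

        Assert.That(linkInfo.SocialNetworkName, Is.EqualTo("Twitter"));
        Assert.That(linkInfo.Username, Is.EqualTo("BelloneDavide"));
    }
    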

    Where are my logs?

    Sometimes you just want to see the logs generated by your application to help you debug an issue (yes, you can simply debug the application!). But, unless properly configured, the application logs will not be available to you.

    But you can easily add logs to the console by adding the Console sink in the ConfigureTestServices method:

    builder.ConfigureTestServices(services =>
    {
        services.AddLogging(builder => builder.AddConsole().AddDebug());
    });
    

    Now you will be able to see all the logs you generated in the Output panel of Visual Studio by selecting the Tests source:

    Logs appear in the Output panel of VisualStudio

    Beware that you are still reading the configurations for logging from the appsettings file! If you have specified in your project to log directly to a sink (such as DataDog or SEQ), your tests will send those logs to the specified sinks. Therefore, you should get rid of all the other logging sources by calling ClearProviders():

    services.AddLogging(builder => builder.ClearProviders().AddConsole().AddDebug());
    

    Full example

    In this article, we’ve configured many parts of our WebApplicationFactory. Here’s the final result:

    public class IntegrationTestWebApplicationFactory : WebApplicationFactory<Program>
    {
        protected override void ConfigureWebHost(IWebHostBuilder builder)
        {
            builder.ConfigureAppConfiguration((host, configurationBuilder) =>
            {
                // Remove other settings sources, if necessary
                configurationBuilder.Sources.Clear();
    
                //Create custom key-value pairs to be used as settings
                configurationBuilder.AddInMemoryCollection(
                    new List<KeyValuePair<string, string?>>
                    {
                        new KeyValuePair<string, string?>("InstanceName", "FromTests")
                    });
            });
    
            builder.ConfigureTestServices(services =>
            {
                //Add stub classes
                services.AddScoped<ISocialLinkParser, StubSocialLinkParser>();
    
                //Configure logging
                services.AddLogging(builder => builder.ClearProviders().AddConsole().AddDebug());
            });
        }
    }
    

    You can find the source code used for this article on my GitHub; feel free to download it and toy with it!

    Further readings

    This is an in-depth article about Integration Tests in .NET. I already wrote an article about it with a simpler approach that you might enjoy:

    🔗 How to run Integration Tests for .NET API | Code4IT

    This article first appeared on Code4IT 🐧

    As I often say, a few Integration Tests are often more useful than a ton of Unit Tests. Focusing on Integration Tests instead of Unit Tests has the benefit of ensuring that the system behaves correctly regardless of the internal implementation.

    In this article, I used the Chain of Responsibility pattern, so Unit Tests would be tightly coupled to the Handlers. If we decided to move to another pattern, we would have to delete all the existing tests and rewrite everything from scratch.

    Therefore, in my opinion, the Testing Diamond is often more efficient than the Testing Pyramid, as I explained here:

    🔗 Testing Pyramid vs Testing Diamond (and how they affect Code Coverage) | Code4IT

    Wrapping up

    This was a huge article, I know.

    Again, feel free to download and run the example code I shared on my GitHub.

    I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛

    Happy coding!

    🐧






  • Pre-commit hooks with Husky.NET – build, format, and test your .NET application before a Git commit | Code4IT



    A Git commit represents the status of a system. Learn how to validate that your code builds, is well-formatted, and all the tests pass by adding a Git hook!


    If you need to run operations before completing a Git commit, you can rely on Git Hooks.

    Git hooks are scripts that run automatically whenever a particular event occurs in a Git repository. They let you customize Git’s internal behaviour and trigger customizable actions at key points in the development life cycle.

    Extending Git hooks allows you to plug in custom functionalities to the regular Git flow, such as Git message validation, code formatting, etc.

    I’ve already described how to use Husky with NPM, but here I’m gonna use Husky.NET, the version of Husky created for .NET-based applications.

    Git hooks: a way to extend Git operations

    As we said, Git hooks are actions that run during specific phases of Git operations.

    Git hooks fall into 4 categories:

    • client-side hooks related to the committing workflow: they execute when you run git commit on your local repository;
    • client-side hooks related to the email workflow: they are executed when running git am, which is a command that allows you to integrate mails and Git repositories (I’ve never used it. If you are interested in this functionality, here’s the official documentation);
    • client-side hooks related to other operations: these hooks run on your local repository when performing operations like git rebase;
    • server-side hooks: they run after a commit is received on the remote repository, and they can reject a git push operation.

    Let’s focus on the client-side hooks that run when you commit changes using git commit.

    • pre-commit: the first hook invoked by git commit (if you don’t use the -m flag, it is invoked before asking you to insert a commit message); it can be used to inspect the snapshot that is about to be committed.
    • prepare-commit-msg: invoked by git commit; it can be used to edit the default commit message when it is generated by an automated tool.
    • commit-msg: invoked by git commit; it can be used to validate or modify the commit message after it is entered by the user.
    • post-commit: invoked after the git commit execution has run correctly; it is generally used to fire notifications.

    How to install Husky.NET and its dependencies in a .NET Application

    Husky.NET must be installed in the root folder of the solution.

    You first have to create a tool-manifest file in the root folder by running:

    dotnet new tool-manifest
    

    This command creates a file named dotnet-tools.json under the .config folder: here you can see the list of external tools used by dotnet.

    After running the command, you will see that the dotnet-tools.json file contains this element:

    {
      "version": 1,
      "isRoot": true,
      "tools": {}
    }
    

    Now you can add Husky as a dotnet tool by running:

    dotnet tool install Husky
    

    After running the command, the file will contain something like this:

    {
      "version": 1,
      "isRoot": true,
      "tools": {
        "husky": {
          "version": "0.6.2",
          "commands": ["husky"]
        }
      }
    }
    

    Now that we have added it to our dependencies, we can add Husky to an existing .NET application by running:

    dotnet husky install
    

    If you open the root folder, you should be able to see these 3 folders:

    • .git, which contains the info about the Git repository;
    • .config that contains the description of the tools, such as dotnet-tools;
    • .husky that contains the files we are going to use to define our Git hooks.

    Finally, you can add a new hook by running, for example,

    dotnet husky add pre-commit -c "echo 'Hello world!'"
    git add .husky/pre-commit
    

    This command creates a new file, pre-commit (without file extension), under the .husky folder. By default, it appears like this:

    #!/bin/sh
    . "$(dirname "$0")/_/husky.sh"
    
    ## husky task runner examples -------------------
    ## Note : for local installation use 'dotnet' prefix. e.g. 'dotnet husky'
    
    ## run all tasks
    #husky run
    
    ### run all tasks with group: 'group-name'
    #husky run --group group-name
    
    ## run task with name: 'task-name'
    #husky run --name task-name
    
    ## pass hook arguments to task
    #husky run --args "$1" "$2"
    
    ## or put your custom commands -------------------
    #echo 'Husky.Net is awesome!'
    
    echo 'Hello world!'
    

    The default content is pretty useless; it’s time to customize that hook.

    Notice that the latest command has also generated a task-runner.json file; we will use it later.

    Your first pre-commit hook

    To customize the script, open the file located at .husky/pre-commit.

    Here, you can add whatever you want.

    In the example below, I run commands that compile the code, format the text (using dotnet format with the rules defined in the .editorconfig file), and then run all the tests.

    #!/bin/sh
    . "$(dirname "$0")/_/husky.sh"
    
    echo 'Building code'
    dotnet build
    
    echo 'Formatting code'
    dotnet format
    
    echo 'Running tests'
    dotnet test
    

    Then, add it to Git, and you are ready to go. 🚀 But wait…

    3 ways to manage dotnet format with Husky.NET

    There is a problem with the approach in the example above.

    Let’s simulate a usage flow:

    1. you modify a C# class;
    2. you run git commit -m "message";
    3. the pre-commit hook runs dotnet build;
    4. the pre-commit hook runs dotnet format;
    5. the pre-commit hook runs dotnet test;
    6. after the hooks, the commit is created.

    What is the final result?

    Since dotnet format modifies the source files, and given that the snapshot has already been created before executing the hook, all the modified files will not be part of the final commit!

    Also, dotnet format executes linting on every file in the solution, not only those that are part of the current snapshot. The operation might then take a lot of time, depending on the size of the repository, and most of the time, it will not update any file (because you’ve already formatted everything in a previous run).

    We have to work out a way to fix this issue. I’ll suggest three approaches.

    Include all the changes using Git add

    The first approach is quite simple: run git add . after dotnet format.
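
    A minimal sketch of the adjusted pre-commit file, based on the hook we defined earlier:

    #!/bin/sh
    . "$(dirname "$0")/_/husky.sh"
    
    echo 'Building code'
    dotnet build
    
    echo 'Formatting code'
    dotnet format
    
    # re-stage the files that dotnet format may have just modified
    git add .
    
    echo 'Running tests'
    dotnet test
    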

    So, the flow becomes:

    1. you modify a C# class;
    2. you run git commit -m "message";
    3. the pre-commit hook runs dotnet build;
    4. the pre-commit hook runs dotnet format;
    5. the pre-commit hook runs git add .;
    6. the pre-commit hook runs dotnet test;
    7. Git creates the commit.

    This is the most straightforward approach, but it has some downsides:

    • dotnet format is executed on every file in the solution. The more your project grows, the slower your commits become;
    • git add . adds to the current snapshot all the files modified, even those you did not add to this commit on purpose (maybe because you have updated many files and want to create two distinct commits).

    So, it works, but we can do better.

    Execute a dry run of dotnet-format

    You can add the --verify-no-changes flag to the dotnet format command: this flag returns an error if at least one file needs to be updated because of a formatting rule.
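
    In the pre-commit file, the format step then becomes a dry run, something like:

    dotnet format --verify-no-changes
    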

    Let’s see how the flow changes if one file needs to be formatted.

    1. you modify a C# class;
    2. you run git commit -m "message";
    3. the pre-commit hook runs dotnet build;
    4. the pre-commit hook runs dotnet format --verify-no-changes;
    5. the pre-commit hook returns an error and aborts the operation;
    6. you run dotnet format on the whole solution to fix all the formatting issues;
    7. you run git add .;
    8. you run git commit -m "message";
    9. the pre-commit hook runs dotnet build;
    10. the pre-commit hook runs dotnet format --verify-no-changes. Now, there is nothing to format, and we can proceed;
    11. the pre-commit hook runs dotnet test;
    12. Git creates the commit.

    Notice that, this way, if there is something to format, the whole commit is aborted. You will then have to run dotnet format on the entire solution, fix the errors, add the changes to the snapshot, and restart the flow.

    It’s a longer process, but it allows you to have complete control over the formatted files.

    Also, you won’t risk including in the snapshot the files you want to keep staged in order to add them to a subsequent commit.

    Run dotnet-format only on the staged files using Husky.NET Task Runner

    The third approach is the most complex but with the best result.

    If you recall, during the initialization, Husky added two files in the .husky folder: pre-commit and task-runner.json.

    The key to this solution is the task-runner.json file. This file allows you to create custom scripts with a name, a group, the command to be executed, and its related parameters.

    By default, you will see this content:

    {
      "tasks": [
        {
          "name": "welcome-message-example",
          "command": "bash",
          "args": ["-c", "echo Husky.Net is awesome!"],
          "windows": {
            "command": "cmd",
            "args": ["/c", "echo Husky.Net is awesome!"]
          }
        }
      ]
    }
    

    To make sure that dotnet format runs only on the staged files, you must create a new task like this:

    {
      "name": "dotnet-format-staged-files",
      "group": "pre-commit-operations",
      "command": "dotnet",
      "args": ["format", "--include", "${staged}"],
      "include": ["**/*.cs"]
    }
    

    Here, we have specified a name, dotnet-format-staged-files, the command to run, dotnet, with some parameters listed in the args array. Notice that we can filter the list of files to be formatted by using the ${staged} parameter, which is populated by Husky.NET.

    We have also added this task to a group named pre-commit-operations that we can use to reference a list of tasks to be executed together.

    If you want to run a specific task, you can use dotnet husky run --name taskname. In our example, the command would be dotnet husky run --name dotnet-format-staged-files.

    If you want to run a set of tasks belonging to the same group, you can run dotnet husky run --group groupname. In our example, the command would be dotnet husky run --group pre-commit-operations.

    The last step is to call these tasks from within our pre-commit file. So, replace the old dotnet format command with one of the above commands.

    Final result and optimizations of the pre-commit hook

    Now that everything is in place, we can improve the script to make it faster.

    Let’s see which parts we can optimize.

    The first step is the build phase. For sure, we have to run dotnet build to see if the project builds correctly. You can consider adding the --no-restore flag to skip the restore step before building.

    Then we have the format phase: we can avoid formatting every file using one of the steps defined before. I’ll replace the plain dotnet format with the execution of the script defined in the Task Runner (it’s the third approach we saw).

    Then, we have the test phase. We can add both the --no-restore and the --no-build flags to the command since we have already built everything before. But wait! The format phase updated the content of our files, so we still have to build the whole solution. Unless we swap the build and the format phases.

    So, here we have the final pre-commit file:

    #!/bin/sh
    . "$(dirname "$0")/_/husky.sh"
    
    echo 'Ready to commit changes!'
    
    echo 'Format'
    
    dotnet husky run --name dotnet-format-staged-files
    
    echo 'Build'
    
    dotnet build --no-restore
    
    echo 'Test'
    
    dotnet test --no-restore
    
    echo 'Completed pre-commit changes'
    

    Yes, I know that when you run the dotnet test command, you also build the solution, but I prefer having two separate steps just for clarity!

    Ah, and don’t remove the #!/bin/sh at the beginning of the script!

    How to skip Git hooks

    To trigger the hook, just run git commit -m "message". Before completing the commit, the hook will run all the commands. If one of them fails, the whole commit operation is aborted.

    There are cases when you have to skip the validation. For example, if you have integration tests that rely on an external source currently offline. In that case, some tests will fail, and you will be able to commit your code only once the external system gets working again.

    You can skip the commit validation by adding the --no-verify flag:

    git commit -m "my message" --no-verify
    

    Further readings

    Husky.NET is a port of the Husky tool we already used in a previous article as an NPM dependency. In that article, we also learned how to customize Conventional Commits using Git hooks.

    🔗 How to customize Conventional Commits in a .NET application using GitHooks | Code4IT

    As we learned, there are many more Git hooks that we can use. You can see the complete list on the Git documentation:

    🔗 Customizing Git – Git Hooks | Git docs

    This article first appeared on Code4IT 🐧

    Of course, if you want to get the best out of Husky.NET, I suggest you have a look at the official documentation:

    🔗 Husky.Net documentation

    One last thing: we installed Husky.NET using dotnet tools. If you want to learn more about this topic, I found an excellent article online that you might want to read:

    🔗 Using dotnet tools | Gustav Ehrenborg

    Wrapping up

    In this article, we learned how to create a pre-commit Git hook and validate all our changes before committing them to our Git repository.

    We also focused on the formatting of our code: how can we format only the files we have changed without impacting the whole solution?

    I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛

    Happy coding!

    🐧






  • OhMyPosh and CascadiaCode not working on PowerShell? How to fix it in Windows 10 and Windows 11 Integrated Terminal | Code4IT



    Learn how to integrate Oh My Posh, a cross-platform tool that lets you create beautiful and informative prompts for PowerShell.


    The content of the blog you are reading right now is stored in a Git repository. Every time I create an article, I create a new Git Branch to isolate the changes.

    To generate the skeleton of the articles, I use the command line (well, I generally use PowerShell); in particular, given that I’m using both Windows 10 and Windows 11 – depending on the laptop I’m working on – I use the Integrated Terminal, which allows you to define the style, the fonts, and so on of every terminal configured in the settings.

    Windows terminal with default style

    The default profile is pretty basic: no info is shown except for the current path – I want to customize the appearance.

    I want to show the status of the Git repository, including:

    • repository name
    • branch name
    • outgoing commits

    There are lots of articles that teach how to use OhMyPosh with Cascadia Code. Unfortunately, I couldn’t make them work.

    In this article, I teach you how I fixed it on my local machine. It’s a step-by-step guide I wrote while installing it on my local machine. I hope it works for you as well!

    Step 1: Create the $PROFILE file if it does not exist

    In PowerShell, you can customize the current execution by updating the $PROFILE file.

    Clearly, you first have to check if the profile file exists.

    Open the PowerShell and type:

    $PROFILE # You can also use $profile lowercase - it's the same!
    

    This command shows you the expected path of this file. The file, if it exists, is stored in that location.

    The Profile file is expected to be under a specific folder whose path can be found using the $PROFILE command

    In this case, the $Profile file should be available under the folder C:\Users\d.bellone\Documents\WindowsPowerShell. In my case, it does not exist, though!

    The Profile file is expected to be under a specific path, but it may not exist

    Therefore, you must create it manually: head to that folder and create a file named Microsoft.PowerShell_profile.ps1.

    Note: it might happen that not even the WindowsPowerShell folder exists. If it’s missing, well, create it!
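
    If you prefer staying in the terminal, you can let PowerShell create both the file and any missing parent folder; a quick sketch using the New-Item cmdlet:

    New-Item -Path $PROFILE -ItemType File -Force
    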

    Step 2: Install OhMyPosh using Winget, Scoop, or PowerShell

    To use OhMyPosh, we have to – of course – install it.

    As explained in the official documentation, we have three ways to install OhMyPosh, depending on the tool you prefer.

    If you use Winget, just run:

    winget install JanDeDobbeleer.OhMyPosh -s winget
    

    If you prefer Scoop, the command is:

    scoop install https://github.com/JanDeDobbeleer/oh-my-posh/releases/latest/download/oh-my-posh.json
    

    And, if you like working with PowerShell, execute:

    Set-ExecutionPolicy Bypass -Scope Process -Force; Invoke-Expression ((New-Object System.Net.WebClient).DownloadString('https://ohmyposh.dev/install.ps1'))
    

    I used Winget, and you can see the installation process here:

    Install OhMyPosh with Winget

    Now, to apply these changes, you have to restart the PowerShell.

    Step 3: Add OhMyPosh to the PowerShell profile

    Open the Microsoft.PowerShell_profile.ps1 file and add the following line:

    oh-my-posh init pwsh | Invoke-Expression
    

    This command is executed every time you open the PowerShell with the default profile, and it initializes OhMyPosh to have it available during the current session.
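
    As a side note, the init command also accepts a --config parameter if you later want to load a specific theme; a sketch, assuming the POSH_THEMES_PATH environment variable set by the installer:

    oh-my-posh init pwsh --config "$env:POSH_THEMES_PATH\jandedobbeleer.omp.json" | Invoke-Expression
    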

    Now, you can save and close the file.

    Hint: you can open the profile file with Notepad by running notepad $PROFILE.

    Step 4: Set the Execution Policy to RemoteSigned

    Restart the terminal. In all probability, you will see an error like this:

    “The file .ps1 is not digitally signed” error

    The error message

    The file <path>\Microsoft.PowerShell_profile.ps1 is
    not digitally signed. You cannot run this script on the current system

    means that PowerShell does not trust the script it’s trying to load.

    To see which Execution Policy is currently active, run:

    Get-ExecutionPolicy
    

    You’ll probably see that the value is AllSigned.

    To enable the execution of scripts created on your local machine, you have to set the Execution Policy value to RemoteSigned by running this command in a PowerShell instance opened in administrator mode:

    Set-ExecutionPolicy RemoteSigned
    

    Let’s see the definition of the RemoteSigned Execution policy as per SQLShack’s article:

    This is also a safe PowerShell Execution policy to set in an enterprise environment. This policy dictates that any script that was not created on the system that the script is running on, should be signed. Therefore, this will allow you to write your own script and execute it.

    So, yeah, feel free to proceed and set the new Execution policy to have your PowerShell profile loaded correctly every time you open a new PowerShell instance.

    Now, OhMyPosh can run in the current profile.

    Head to a Git repository and notice that… It’s not working!🤬 Or, well, we have the Git information, but we are missing some icons and glyphs.

    Oh My Posh is loaded correctly, but some icons are missing due to the wrong font

    Step 5: Use CaskaydiaCove, not Cascadia Code, as a font

    We still have to install the correct font with the missing icons.

    We will install it using Chocolatey, a package manager for Windows.

    To check if you have it installed, run:

    choco --version
    
    Now, to install the correct font family, open a PowerShell with administration privileges and run:

    choco install cascadia-code-nerd-font
    

    Once the installation is complete, you must tell Integrated Terminal to use the correct font by following these steps:

    1. open the Settings page (by hitting CTRL + ,)
    2. select the profile you want to update (in my case, I’ll update the default profile)
    3. open the Appearance section
    4. under Font face select CaskaydiaCove Nerd Font

    PowerShell profile settings - Font Face should be CaskaydiaCove Nerd Font

    Now close the Integrated Terminal to apply the changes.

    Open it again, navigate to a Git repository, and admire the result.

    OhMyPosh with icons and fonts loaded correctly

    Further readings

    The first time I read about OhMyPosh, it was on Scott Hanselman’s blog. I couldn’t make his solution work – and that’s the reason I wrote this article. However, in his article, he shows how he customized his own Terminal with more glyphs and icons, so you should give it a read.

    🔗 My Ultimate PowerShell prompt with Oh My Posh and the Windows Terminal | Scott Hanselman’s blog

    We customized our PowerShell profile with just one simple configuration. However, you can do a lot more. You can read Ruud’s in-depth article about PowerShell profiles.

    🔗 How to Create a PowerShell Profile – Step-by-Step | Lazyadmin

    One of the core parts of this article is that we have to use CaskaydiaCove as a font instead of the (in)famous Cascadia Code. But why?

    🔗 Why CaskaydiaCove and not Cascadia Code? | GitHub

    Finally, as I said at the beginning of this article, I use Git and Git Branches to handle the creation and management of my blog articles. That’s just the tip of the iceberg! 🏔️

    If you want to steal my (previous) workflow, have a look at the behind-the-scenes of my blogging process (note: in the meanwhile, a lot of things have changed, but these steps can still be helpful for you)

    🔗 From idea to publishing, and beyond: how I automated my blogging workflow with GitHub, PowerShell, and Azure | Code4IT

    This article first appeared on Code4IT 🐧

    Wrapping up

    In this article, we learned how to install OhMyPosh in PowerShell and overcome all the errors you (well, I) don’t see described in other articles.

    I wrote this step-by-step article alongside installing these tools on my local machine, so I’m confident the solution will work.

    Did this solution work for you? Let me know! 📨

    I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛

    Happy coding!

    🐧






  • Invisible Forces: The Making of Phantom.land’s Interactive Grid and 3D Face Particle System



    From the outset, we knew we wanted something that subverted any conventional agency website formulas. Instead,
    inspired by the unseen energy that drives creativity, connection and transformation, we arrived at the idea of
    invisible forces. Could we take the powerful yet intangible elements that shape our world—motion, emotion,
    intuition, and inspiration—and manifest them in a digital space?

    We were excited about creating something that included many custom interactions and a very experiential feel. However,
    our concern was picking a set of tools that would allow most of our developers to contribute to and maintain the site
    after launch.

    We chose to start from a Next / React base, as we often do at Phantom. React also has the advantage of being
    compatible with the excellent React Three Fiber library, which we used to seamlessly bridge the gap between our DOM
    components and the WebGL contexts used across the site. For styles, we are using our very own
    CSS components
    as well as SASS.

    For interactive behaviours and animation, we chose to use GSAP for two main reasons. Firstly, it contains a lot of
    plugins we know and love, such as SplitText, CustomEase and ScrollTrigger. Secondly, GSAP allows us to use a single
    animation framework across DOM and WebGL components.

    We could go on and on talking about the details behind every single animation and micro-interaction on the site, but
    for this piece we have chosen to focus our attention on two of the most unique components of our site: the homepage
    grid and the scrollable employee face particle carousel.

    The Homepage Grid

    It took us a very long time to get this view to perform and feel just how we wanted it to. In this article, we will focus on the interactive part. For more info on how we made things performant, head to our previous article: Welcome back to Phantomland

    Grid View

    The project’s grid view is integrated into the homepage by incorporating a primitive Three.js object into a React
    Three Fiber scene.

    //GridView.tsx
    const GridView = () => {
      return (
        <Canvas>
          ...
          <ProjectsGrid />
          <Postprocessing />
        </Canvas>
      );
    }
    
    //ProjectsGrid.tsx
    const ProjectsGrid = ({atlases, tiles}: Props) => {
      const {canvas, camera} = useThree();
      
      const grid = useMemo(() => {
        return new Grid(canvas, camera, atlases, tiles);
      }, [canvas, camera, atlases, tiles]);
    
      if(!grid) return null;
      return (
        <primitive object={grid} />
      );
    }

    We initially wanted to write all the code for the grid using React Three Fiber but realised that, due to the
    complexity of our grid component, a vanilla
    Three.js
    class would be easier to maintain.

    One of the key elements that gives our grid its iconic feel is our post-processing distortion effect. We implemented
    this feature by creating a custom shader pass within our post-processing pipeline:

    // Postprocessing.tsx
    const Postprocessing = () => {
      const {gl, scene, camera} = useThree();
      
      // Create Effect composer
      const {effectComposer, distortionShader} = useMemo(() => {
        const renderPass = new RenderPass(scene, camera);
        const distortionShader = new DistortionShader();
        const distortionPass = new ShaderPass(distortionShader);
        const outputPass = new OutputPass();
    
        const effectComposer = new EffectComposer(gl);
        effectComposer.addPass(renderPass);
        effectComposer.addPass(distortionPass);
        effectComposer.addPass(outputPass);
    
        return {effectComposer, distortionShader};
      }, []);
      
      // Update distortion intensity
      useEffect(() => {
        if (workgridState === WorkgridState.INTRO) {
          distortionShader.setDistortion(CONFIG.distortion.flat);
        } else {
          distortionShader.setDistortion(CONFIG.distortion.curved);
        }
      }, [workgridState, distortionShader]);
      
      // Render the scene through the effect composer on every frame
      useFrame(() => {
        effectComposer.render();
      }, 1);
     
      return null;
    }

    When the grid transitions in and out on the site, the distortion intensity changes to make the transition feel
    natural. This animation is done through a simple tween in our
    DistortionShader
    class:

    class DistortionShader extends ShaderMaterial {
      private distortionIntensity = 0;
    
      constructor() {
        super({
          name: 'DistortionShader',
          uniforms: {
            distortionIntensity: {value: new Vector2()},
            ...
          },
          vertexShader,
          fragmentShader,
        });
      }
    
      update() {
        // keep the distortion proportional to the viewport aspect ratio
        const ratio = window.innerWidth / window.innerHeight;
        this.uniforms[DistortionShaderUniforms.DISTORTION].value.set(
          this.distortionIntensity * ratio,
          this.distortionIntensity * ratio,
        );
      }
    
      setDistortion(value: number) {
        gsap.to(this, {
          distortionIntensity: value,
          duration: 1,
          ease: 'power2.out',
          onUpdate: () => this.update(),
        });
      }
    }

    Then the distortion is applied through our custom shader:

    // fragment.ts
    export const fragmentShader = /* glsl */ `
      uniform sampler2D tDiffuse;
      uniform vec2 distortion;
      uniform float vignetteOffset;
      uniform float vignetteDarkness;
    
      varying vec2 vUv;
      
      // convert uv range from 0 -> 1 to -1 -> 1
      vec2 getShiftedUv(vec2 uv) {
        return 2. * (uv - .5);
      }
      
      // convert uv range from -1 -> 1 to 0 -> 1
      vec2 getUnshiftedUv(vec2 shiftedUv) {
        return shiftedUv * 0.5 + 0.5;
      }
    
    
      void main() {
        vec2 shiftedUv = getShiftedUv(vUv);
        float distanceToCenter = length(shiftedUv);
        
        // Lens distortion effect
        shiftedUv *= (0.88 + distortion * dot(shiftedUv, shiftedUv));
        vec2 transformedUv = getUnshiftedUv(shiftedUv);
        
        // Vignette effect
        float vignetteIntensity = smoothstep(0.8, vignetteOffset * 0.799, (vignetteDarkness + vignetteOffset) * distanceToCenter);
        
        // Sample render texture and output fragment
        vec3 color = texture2D( tDiffuse, transformedUv ).rgb * vignetteIntensity;
        gl_FragColor = vec4(color, 1.);
      }
    `;

    We also added a vignette effect to our post-processing shader to darken the corners of the viewport, focusing the
    user’s attention toward the center of the screen.

    In order to make our home view as smooth as possible, we also spent a fair amount of time crafting the
    micro-interactions and transitions of the grid.

    Ambient mouse offset

    When the user moves their cursor around the grid, the grid moves slightly in the opposite direction, creating a very
    subtle ambient floating effect. This was simply achieved by calculating the mouse position on the grid and moving the
    grid mesh accordingly:

    getAmbientCursorOffset() {
      // Get the pointer coordinates in UV space ( 0 - 1 ) range
      const uv = this.navigation.pointerUv;
      const offset = uv.subScalar(0.5).multiplyScalar(0.2);
      return offset;
    }
    
    update() {
      ...
      // Apply cursor offset to grid position
      const cursorOffset = this.getAmbientCursorOffset();
      this.mesh.position.x += cursorOffset.x;
      this.mesh.position.y += cursorOffset.y;
    }

    Drag Zoom

    When the grid is dragged around, a zoom-out effect occurs and the camera seems to pan away from the grid. We created
    this effect by detecting when the user starts and stops dragging their cursor, then using that to trigger a GSAP
    animation with a custom ease for extra control.

    onPressStart = () => {
      this.animateCameraZ(0.5, 1);
    }
    
    onPressEnd = (isDrag: boolean) => {
      if(isDrag) {
        this.animateCameraZ(0, 1);
      }
    }
    
    animateCameraZ(distance: number, duration: number) {
      gsap.to(this.camera.position, {
        z: distance,
        duration,
        ease: CustomEase.create('cameraZoom', '.23,1,0.32,1'),
      });
    }

    Drag Movement

    Last but not least, when the user drags across the grid and releases their cursor, the grid slides through with a
    certain amount of inertia.

    drag(offset: Vector2) {
      this.dragAction = offset;
    
      // Gradually increase velocity with drag time and distance
      this.velocity.lerp(offset, 0.8);
    }
    
    // Every frame
    update() {
      // positionOffset is later used to move the grid mesh
      if(this.isDragAction) {
        // if the user is dragging their cursor, add the drag value to offset
        this.positionOffset.add(this.dragAction.clone());
      } else {
        // if the user is not dragging, add the velocity to the offset
        this.positionOffset.add(this.velocity);
      }
    
      this.dragAction.set(0, 0);
      // Attenuate velocity with time
      this.velocity.lerp(new Vector2(), 0.1);
    }

    Face Particles

    The second major component we want to highlight is our employee face carousel, which presents team members through a
    dynamic 3D particle system. Built with React Three Fiber’s
    BufferGeometry
    and custom GLSL shaders, this implementation leverages custom shader materials for lightweight performance and
    flexibility, allowing us to generate entire 3D face representations using only a 2D colour photograph and its
    corresponding depth map—no 3D models required.

    Core Concept: Depth-Driven Particle Generation

    The foundation of our face particle system lies in converting 2D imagery into volumetric 3D representations. We’ve
    kept things efficient, with each face using only two optimized 256×256 WebP images (under 15KB each).

    To capture the images, each member of the Phantom team was 3D scanned using
    RealityScan
    from Unreal Engine on iPhone, creating a 3D model of their face.

    These scans were cleaned up and then rendered from Cinema4D with a position and colour pass.

    The position pass was converted into a greyscale depth map in Photoshop, and this—along with the colour pass—was
    retouched where needed, cropped, and then exported from Photoshop to share with the dev team.

    Each face is constructed from approximately 78,400 particles (280×280 grid), where each particle’s position and
    appearance is determined by sampling data from our two source textures.

    /* generate positions attributes array */
    const POINT_AMOUNT = 280;
    
    const points = useMemo(() => {
      const length = POINT_AMOUNT * POINT_AMOUNT;
      const vPositions = new Float32Array(length * 3);
      const vIndex = new Float32Array(length * 2);
      const vRandom = new Float32Array(length * 4);
    
      for (let i = 0; i < length; i++) {
          const i2 = i * 2;
          vIndex[i2] = (i % POINT_AMOUNT) / POINT_AMOUNT;
          vIndex[i2 + 1] = i / POINT_AMOUNT / POINT_AMOUNT;
    
          const i3 = i * 3;
          const theta = Math.random() * 360;
          const phi = Math.random() * 360;
          vPositions[i3] = 1 * Math.sin(theta) * Math.cos(phi);
          vPositions[i3 + 1] = 1 * Math.sin(theta) * Math.sin(phi);
          vPositions[i3 + 2] = 1 * Math.cos(theta);
    
          const i4 = i * 4;
          vRandom.set(
            Array(4)
              .fill(0)
              .map(() => Math.random()),
            i4,
          );
      }
    
      return {vPositions, vRandom, vIndex};
    }, []);
    // React Three Fiber component structure 
    const FaceParticleSystem = ({ particlesData, currentDataIndex }) => {
      return (
        <points ref={pointsRef} position={pointsPosition}>
          <bufferGeometry>
            <bufferAttribute attach="attributes-vIndex" 
                 args={[points.vIndex, 2]} />
            <bufferAttribute attach="attributes-position"
                 args={[points.vPositions, 3]} />
            <bufferAttribute attach="attributes-vRandom"
                 args={[points.vRandom, 4]} />
          </bufferGeometry>
          
          <shaderMaterial
            blending={NormalBlending}
            transparent={true}
            fragmentShader={faceFrag}
            vertexShader={faceVert}
            uniforms={uniforms}
          />
        </points>
      );
    };

    The depth map provides normalized values (0–1) that directly translate to Z-depth positioning. A value of 0 represents
    the furthest point (background), while 1 represents the closest point (typically the nose tip).

    /* vertex shader */ 
    
    // sample depth and color data for each particle
    vec3 depthTexture1 = texture2D(depthMap1, vIndex.xy).xyz;
    
    // convert depth to Z-position
    float zDepth = (1. - depthValue.z);
    pos.z = (zDepth * 2.0 - 1.0) * zScale;

    Dynamic Particle Scaling Through Colour Analysis

    One of the key methods that brings our faces to life is utilizing colour data to influence particle scale. In our
    vertex shader, rather than using uniform particle sizes, we analyze the colour density of each pixel so that brighter,
    more colourful areas of the face (like eyes, lips, or well-lit cheeks) generate larger, more prominent particles,
    while darker areas (shadows, hair) create smaller, subtler particles. The result is a more organic, lifelike
    representation that emphasizes facial features naturally.

    /* vertex shader */ 
    
    vec3 colorTexture1 = texture2D(colorMap1, vIndex.xy).xyz;
    
    // calculate color density
    // (mainColorTexture is the colour sample; during face transitions it is the blend of
    //  colorTexture1 and colorTexture2 shown later in this article)
    float density = (mainColorTexture.x + mainColorTexture.y + mainColorTexture.z) / 3.;
    
    // map density to particle scale
    float pScale = mix(pScaleMin, pScaleMax, density);

    The calibration below demonstrates the influence of colour (contrast, brightness, etc.) on the final 3D particle formation.

    Ambient Noise Animation

    To prevent static appearances and maintain visual interest, we apply continuous noise-based animation to all
    particles. This ambient animation system uses curl noise to create subtle, flowing movement across the entire
    face structure.

    /* vertex shader */ 
    
    // primary curl noise for overall movement 
    pos += curlNoise(pos * curlFreq1 + time) * noiseScale * 0.1;
    // animation updates in React Three Fiber
    
    useFrame((state, delta) => {
      if (!materialRef.current) return;
      
      materialRef.current.uniforms.time.value = state.clock.elapsedTime * NOISE_SPEED;
      
      // update rotation based on mouse interaction
      easing.damp(pointsRef.current.rotation, 'y', state.mouse.x * 0.12 * Math.PI, 0.25, delta);
      easing.damp(pointsRef.current.rotation, 'x', -state.pointer.y * 0.05 * Math.PI, 0.25, delta);
    
    });

    Face Transition Animation

    When transitioning between different team members, we combine timeline-based interpolation with visual effects written
    in shader materials.

    GSAP-Driven Lerp Method

    The transition foundation uses GSAP timelines to animate multiple shader parameters simultaneously:

    timelineRef.current = gsap
      .timeline()
      .fromTo(uniforms.transition, {value: 0}, {value: 1.3, duration: 1.6})
      .to(uniforms.posZ, {value: particlesParams.offset_z, duration: 1.6}, 0)
      .to(uniforms.zScale, {value: particlesParams.face_scale_z, duration: 1.6}, 0);

    And the shader handles the visual blending between two face states:

    /* vertex shader */ 
    
    // smooth transition curve
    float speed = clamp(transition * mix(0.8, .9, transition), 0., 1.0); 
    speed = smoothstep(0.0, 1.0, speed); 
    
    // blend textures 
    vec3 mainColorTexture = mix(colorTexture1, colorTexture2, speed); 
    vec3 depthValue = mix(depthTexture1, depthTexture2, speed);

    To add visual interest during transitions, we further inject additional noise that’s strongest at the midpoint of the
    transition. This creates a subtle “disturbance” effect where particles temporarily deviate from their target
    positions, making transitions feel more dynamic and organic.

    /* vertex shader */ 
    
    // secondary noise movement applied for transition
    float randomZ = vRandom.y + cnoise(pos * curlFreq2 + t2) * noiseScale2;
    
    float smoothTransition = abs(sin(speed * PI)); 
    pos.x += nxScale * randomZ * 0.1 * smoothTransition; 
    pos.y += nyScale * randomZ * 0.1 * smoothTransition;
    pos.z += nzScale * randomZ * 0.1 * smoothTransition;

    Custom Depth of Field Effect

    To enhance the three-dimensional perception, we implemented a custom depth of field effect directly in our shader
    material. It calculates view-space distance for each particle and modulates both opacity and size based on proximity
    to a configurable focus plane.

    /* vertex shader - calculate view distance */
    
    vec4 viewPosition = viewMatrix * modelPosition;
    vDistance = abs(focus + viewPosition.z);
    
    // apply distance to point size for blur effect 
    gl_PointSize = pointSize * pScale * vDistance * blur * totalScale;
    /* fragment shader - calculate distance-based alpha for DOF */
    
    
    float alpha = (1.04 - clamp(vDistance * 1.5, 0.0, 1.0));
    gl_FragColor = vec4(color, alpha);

    Challenges: Unifying Face Scales

    One of the challenges we faced was achieving visual consistency across different team members’ photos. Each photograph
    was captured under slightly different conditions—varying lighting, camera distances, and facial proportions.
    Therefore, we went through each face to calibrate multiple scaling factors:

    • Depth scale calibration to ensure no nose protrudes too aggressively
    • Colour density balancing to maintain consistent particle size relationships
    • Focus plane optimization to prevent excessive blur on any individual face

    // individual face parameters requiring manual tuning
    
    particle_params: { 
      offset_z: 0,           // overall Z-position
      z_depth_scale: 0,      // depth map scaling factor
      face_size: 0,          // overall face scale 
    }

    Final Words

    Our face particle system demonstrates how simple yet careful technical implementation can create fun visual
    experiences from minimal assets. By combining lightweight WebP textures, custom shader materials, and animations,
    we’ve created a system that transforms simple 2D portraits into interactive 3D figures.

    Check out the full site.

    Curious about what we’re up to in the Phantom studio? Or have a project you think we’d be interested in? Get in touch.



    Source link

  • Top 10 Cloud Security Challenges in 2025 And How to Solve Them with Seqrite

    Top 10 Cloud Security Challenges in 2025 And How to Solve Them with Seqrite


    In today’s world, organizations are rapidly embracing cloud security to safeguard their data and operations. However, as cloud adoption grows, so do the risks. In this post, we highlight the top cloud security challenges and show how Seqrite can help you tackle them with ease.

    1.    Misconfigurations

    One of the simplest yet most dangerous mistakes is misconfiguring cloud workloads: think storage buckets left public, weak IAM settings, or missing encryption. Cybercriminals actively scan for these mistakes. A small misconfiguration can lead to significant data leakage or, in the worst case, ransomware deployment. Seqrite Endpoint Protection Cloud ensures your cloud environment adheres to best-practice security settings before threats even strike.

    2.    Shared Responsibility Confusion

    The cloud model operates on shared responsibility: providers secure infrastructure, you manage your data and configurations. Too many teams skip this second part. Inadequate control over access, authentication, and setup drives serious risks. With Seqrite’s unified dashboard for access control, IAM, and policy enforcement, you stay firmly in control without getting overwhelmed.

    3.    Expanded Attack Surface

    More cloud services, more code, more APIs, more opportunities for attacks. Whether it’s serverless functions or public API endpoints, the number of access points grows quickly. Seqrite tackles this with integrated API scanning, vulnerability assessment, and real-time threat detection. Every service, even ephemeral ones, is continuously monitored.

    4.    Unauthorized Access & Account Hijacking

    Attackers often gain entry via stolen credentials, especially in shared or multi-cloud environments. Once inside, they move laterally and hijack more resources. Seqrite’s multi-factor authentication, adaptive risk scoring, and real-time anomaly detection lock out illicit access and alert you instantly.

    5.    Insufficient Data Encryption

    Unencrypted data, whether at rest or in transit, is a gold mine for attackers. Industries with sensitive or regulated information, like healthcare or finance, simply can’t afford this. Seqrite ensures enterprise-grade encryption everywhere you store or transmit data and handles key management so that it’s secure and hassle-free.

    6.    Poor Visibility and Monitoring

    Without centralized visibility, security teams rely on manual cloud consoles and piecemeal logs. That slows response and leaves gaps. Seqrite solves this with a unified monitoring layer that aggregates logs and events across all your cloud environments. You get complete oversight and lightning-fast detection.

    7.     Regulatory Compliance Pressures

    Compliance with GDPR, HIPAA, PCI-DSS, DPDPA and other regulations is mandatory—but complex in multi-cloud environments. Seqrite Data Privacy simplifies compliance with continuous audits, policy enforcement, and detailed reports, helping you reduce audit stress and regulatory risk.

    8.    Staffing & Skills Gap

    Hiring cloud-native, security-savvy experts is tough. Many teams lack the expertise to monitor and secure dynamic cloud environments. Seqrite’s intuitive interface, automation, and policy templates remove much of the manual work, allowing lean IT teams to punch above their weight.

    9.    Multi-cloud Management Challenges

    Working across AWS, Azure, Google Cloud and maybe even private clouds? Each has its own models and configurations. This fragmentation creates blind spots and policy drift. Seqrite consolidates everything into one seamless dashboard, ensuring consistent cloud security policies across all environments.

    10.  Compliance in Hybrid & Multi-cloud Setups

    Hybrid cloud setups introduce additional risks: cross-environment data flows, networking complexities, and inconsistent controls. Seqrite supports consistent security policy application across on-premises, private clouds, and public clouds, no matter where a workload lives.

    Bring in Seqrite to keep your cloud journey safe, compliant, and hassle-free.

     



    Source link

  • What is MDM and Why Your Business Can’t Ignore It Anymore

    What is MDM and Why Your Business Can’t Ignore It Anymore


    In today’s always-connected, mobile-first world, employees are working on the go—from airports, cafes, living rooms, and everywhere in between. That’s great for flexibility and productivity—but what about security? How do you protect sensitive business data when it’s spread across dozens or hundreds of mobile devices? This is where Mobile Device Management (MDM) steps in. Let’s see what MDM is.

     

    What is MDM?

    MDM, short for Mobile Device Management, is a system that allows IT teams to monitor, manage, and secure employees’ mobile devices—whether company-issued or BYOD (Bring Your Own Device).

    It’s like a smart control panel for your organization’s phones and tablets. From pushing software updates and managing apps to enforcing security policies and wiping lost devices—MDM gives you full visibility and control, all from a central dashboard.

    MDM helps ensure that only secure, compliant, and authorized devices can access your company’s network and data.

     

    Why is MDM Important?

    As the modern workforce becomes more mobile, data security risks also rise. Devices can be lost, stolen, or compromised. Employees may install risky apps or access corporate files from unsecured networks. Without MDM, IT teams are essentially blind to these risks.

    A few common use cases of MDM:

    • A lost smartphone with access to business emails.
    • An employee downloading malware-infected apps.
    • Data breaches due to unsecured Wi-Fi use on personal devices.
    • Non-compliance with industry regulations due to lack of control.

    MDM helps mitigate all these risks while still enabling flexibility.

     

    Key Benefits of MDM Solution

    Enhanced Security

    Remotely lock, wipe, or locate lost devices. Prevent unauthorized access, enforce passcodes, and control which apps are installed.

    Centralized Management

    Manage all mobile devices, iOS and Android, from a single dashboard. Push updates, install apps, and apply policies in bulk.

    Improved Productivity

    Set devices in kiosk mode for focused app usage. Push documents, apps, and files on the go. No downtime, no waiting.

    Compliance & Monitoring

    Track usage, enforce encryption, and maintain audit trails. Ensure your devices meet industry compliance standards at all times. 

     

    Choosing the Right MDM Solution

    There are many MDM solutions out there, but the right one should go beyond basic management. It should make your life easier, offer deep control, and scale with your organization’s needs—without compromising user experience.

    Why Seqrite MDM is Built for Today’s Mobile Workforce

     Seqrite Enterprise Mobility Management (EMM) is a comprehensive MDM solution tailored for businesses that demand both security and simplicity. Here’s what sets it apart:

    1. Unified Management Console: Manage all enrolled mobile devices in one place—track location, group devices, apply custom policies, and more.
    2. AI-Driven Security: Built-in antivirus, anti-theft features, phishing protection, and real-time web monitoring powered by artificial intelligence.
    3. Virtual Fencing: Set geo, Wi-Fi, and time-based restrictions to control device access and usage, which is great for field teams and remote employees.
    4. App & Kiosk Mode Management: Push apps, lock devices into single- or multi-app kiosk mode, and publish custom apps to your enterprise app store.
    5. Remote File Transfer & Troubleshooting: Send files to one or multiple devices instantly and troubleshoot issues remotely to reduce device downtime.
    6. Automation & Reporting: Get visual dashboards, schedule regular exports, and access real-time logs and audit reports to stay ahead of compliance.

     

     Final Thoughts

    As work continues to shift beyond the boundaries of the office, MDM is no longer a luxury; it’s a necessity. Whether you’re a growing startup or a large enterprise, protecting your mobile workforce is key to maintaining both productivity and security.

    With solutions like Seqrite Enterprise Mobility Management, businesses get the best of both worlds: powerful control and seamless management, all wrapped in a user-friendly experience.



    Source link

  • Designing TrueKind: A Skincare Brand’s Journey Through Moodboards, Motion, and Meaning

    Designing TrueKind: A Skincare Brand’s Journey Through Moodboards, Motion, and Meaning


    Project Backstory

    TrueKind approached us with a clear but ambitious goal: they wanted a skincare website that stood out—not just in the Indian skincare space, but globally.

    The challenge? Most skincare websites (especially local ones) lean heavily commercial. They emphasize offers, discounts, and aggressive product pushes. But TrueKind wanted something gentler, more thoughtful, and centered on one message: honest skincare.

    From the very first conversation, I knew this would require a delicate balance. We wanted to create a site that was visually fresh and a little unconventional, but not so experimental that it alienated everyday customers.

    We set aside around 1–2 months for the design phase, allowing time for multiple iterations and careful refinement. One of the best parts of this project was the incredibly trusting, supportive client team—working with people who are genuinely open to creativity makes all the difference.

    Crafting the Visual Direction

    Every project I work on begins with listening. Before touching any design tools, I immersed myself in the client’s vision, mood, and tone.

    I created a moodboard to align with their aesthetic, making sure the images I pulled weren’t just random “nice” visuals. This is something I see many younger designers overlook: it’s not just about curating pretty pictures; it’s about curating pictures that match the brand’s energy, saturation, color language, and atmosphere.

    🌟 When building moodboards, don’t be afraid to tweak image properties. Adjust exposure, warmth, contrast, and saturation until they feel cohesive. You’re not just grabbing references—you’re crafting a controlled atmosphere.

    For the typefaces, I leaned on my go-to foundry, Pangram Pangram. Their fonts are beautifully made and (for personal projects) wonderfully accessible. For TrueKind, we selected PP Mori (for a modern, clean backbone) and Editorial Neue (to bring in an elegant, editorial touch).

    Even though the client wanted something unconventional, I knew we had to keep the animation and interaction design balanced. Too much movement can be overwhelming. So, we built the visual experience primarily around typography—letting type choices and layouts carry the creative weight.

    On Working Before AI Image Tools

    This project dates back to around 2021, before the surge of AI image generation tools. So when it came to placeholders and visual exploration, I often turned to Behance or similar platforms to source reference imagery that fit the vibe.

    Of course, for the final launch, we didn’t want any copyright issues—so we conducted a professional photoshoot in Worli, Mumbai, capturing clean, fresh product imagery. For the Awwwards showcase, we’ve swapped in AI-generated images purely for display purposes.

    Iteration and Evolution

    Here’s a personal moment of honesty: The first version I designed? I wasn’t thrilled with it.

    It lacked the polish, elegance, and depth I knew the brand deserved. But instead of settling, I went back, refined, iterated, and kept pushing. That’s something I’d tell any designer reading this:

    🌟 Don’t be afraid to walk away from your early drafts. You can feel when something’s not hitting the mark—trust that instinct, and give yourself room to improve.

    Animation & Interaction Design

    I’m a sucker for scroll-based animations. Smooth scrolling, layered reveals, subtle movement—these elements can elevate a static design a hundredfold if used thoughtfully.

    For TrueKind, I didn’t want unnecessary flash. The scroll interactions enhance the content flow without overpowering it. The text reveals, section transitions, and layered elements were designed to add just enough dynamism to keep the user engaged while still respecting the calm, honest tone of the brand.

    Bringing in Reksa: Development Insights

    At a certain point, I knew I needed help to fully do justice to the design. That’s when I reached out to Reksa—a developer I deeply admire, not just for his technical skill but for his meticulous creative eye.

    Handing over a design like this isn’t always easy. But with Reksa, it felt seamless. He understood the nuances, respected the design intention, and delivered 1000%.

    In the dev section below, Reksa will walk you through the stack, architecture, key challenges, and how he brought the design to life with care and precision.

    Tech Stack & Challenges

    Nuxt.js 3 for the frontend: This project was built with Nuxt.js 3 as the frontend framework. It’s my main tech stack and a powerful choice, especially for creative websites. I find Nuxt.js offers far more flexibility than other frameworks.

    SCSS for styling: While many developers prefer CSS frameworks, I lean toward vanilla CSS as my primary approach. SCSS is used here mainly for class scoping and maintainability, but the overall syntax remains vanilla. Writing custom CSS makes the most sense for my needs—especially in creative development, where unique layouts and their connection to animation/motion often demand full styling control.

    Vercel for hosting: It provides a simple, plug-and-play experience for hosting Nuxt.js 3 projects.

    Prismic as CMS: I use Prismic as the headless CMS. It’s my go-to for most projects—straightforward and well-suited to this project’s needs.

    GSAP for animations: For smooth motion experiences, GSAP is unmatched. Its exceptional plugins—like SplitText and DrawSVG—allow me to craft fantastic animations that elevate the design.

    Lenis for smooth scrolling: To enhance the motion and animation quality, implementing smooth scroll is a must. It ensures that animations flow beautifully in sync with the scroll timeline.
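    As a point of reference, the usual way to drive Lenis from GSAP’s ticker, so that ScrollTrigger and the smooth
    scroll stay in sync, looks roughly like the sketch below. This is the generic integration pattern, not the
    project’s actual setup code.

    import Lenis from 'lenis'; // older versions ship as '@studio-freight/lenis'
    import gsap from 'gsap';
    import { ScrollTrigger } from 'gsap/ScrollTrigger';

    gsap.registerPlugin(ScrollTrigger);

    const lenis = new Lenis();

    // keep ScrollTrigger's measurements in sync with the smooth scroll position
    lenis.on('scroll', ScrollTrigger.update);

    // let GSAP's ticker drive Lenis (Lenis expects the time in milliseconds)
    gsap.ticker.add((time) => lenis.raf(time * 1000));
    gsap.ticker.lagSmoothing(0);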

    The key challenges for this project were implementing the “floating” layout and ensuring it remained responsive across all screen sizes. Abhishek’s design was beautifully unique, though that uniqueness also posed its own set of difficulties. To bring it to life, I had to carefully apply techniques like position: absolute in CSS to achieve the right structure and layering.

    My favorite part of developing this project was the page transitions and micro-interactions.

    The page transition to the product view uses a solid color from the product background, expands it to full screen, and then switches the page seamlessly. Meanwhile, micro-interactions—like SVG draw motions, button hovers, and click animations—add small but impactful details. These make the site feel more alive and engaging for users.
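    To make that concrete, here is a heavily simplified, hypothetical sketch of such an overlay transition written
    with GSAP. The overlay element, colour value, and navigate callback are placeholders for illustration, not the
    actual project code.

    import gsap from 'gsap';

    // expand a full-screen overlay tinted with the product's background colour,
    // then switch the route once the viewport is fully covered
    function transitionToProduct(overlayEl, productColor, navigate) {
      gsap
        .timeline()
        .set(overlayEl, { backgroundColor: productColor, scaleY: 0, transformOrigin: 'bottom' })
        .to(overlayEl, { scaleY: 1, duration: 0.6, ease: 'power3.inOut' })
        .call(navigate);
    }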

    Awards & Recognition

    We’re incredibly happy that the project received such a positive response. Some of the awards and recognitions include:

    • Awwwards – Site of the Day & Developer Award
    • Awwwards – E-commerce Honors (Nominee)
    • FWA – FWA of the Day
    • CSSDA – Website of the Day
    • GSAP – Site of the Day
    • Muz.li – Picks Honor
    • Made With GSAP – Showcase Feature

    Reflections

    This project was a joy. Not just because of the outcome, but because of the process: working with thoughtful clients, collaborating with talented partners, and building something that felt true to its mission.

    There was, however, an interesting twist. While the final site looked and felt fresh and unconventional, over time, the client gradually shifted toward simpler, more familiar designs—closer to what everyday users are used to.

    And here’s a reflection for all creatives:

    🌟 Creative websites are a feast for the eyes, but they don’t always convert perfectly. As designers, we thrive on bold, experimental ideas. But businesses often need to balance creativity with practicality. And that’s okay.

    This project left a lasting impression—not just on the client, but on us as creators. It reminded me why we do this work: not just to make things look good, but to tell stories, evoke feelings, and bring meaningful ideas into the world.

    Final Thoughts

    If you’re a young creative reading this: Keep learning, keep experimenting, and keep collaborating. It’s not about chasing perfection—it’s about chasing truth in your work.

    And when you find a team that shares that vision? That’s where the magic happens.

    Thank you for reading.



    Source link

  • Understanding void(0) in JavaScript: What It Is, Why It’s Used, and How to Fix It



    Understanding void(0) in JavaScript: What It Is, Why It’s Used, and How to Fix It



    Source link

  • Lured and Compromised: Unmasking the Digital Danger of Honey Traps

    Lured and Compromised: Unmasking the Digital Danger of Honey Traps


    Behind the screen, a delicate balance of trust and deception plays out. Honey traps, once the preserve of espionage, have now insidiously spread into the digital realm, capitalizing on human emotions. What starts as a harmless-looking chat or friend request can unexpectedly spiral into blackmail, extortion, or theft. The truth is, vulnerability knows no bounds – whether you’re an ordinary citizen or a high-profile target, you could be at risk. Let’s delve into the complex world of digital honey traps, understand their destructive power, and uncover vital strategies to safeguard ourselves. Attackers do break the firewall, but an insider threat bypasses it.

    Who Gets Targeted?

    • Government officers with access to classified documents
    • Employees in IT, finance, defense, or research divisions
    • Anyone with access credentials or decision-making power

    Takeaway #1: If someone online gets close fast and wants details about your work or sends flirty messages too soon — that’s a red flag.

    Fake romantic relationships are used to manipulate officials into breaching confidentiality, exploiting emotions rather than digital systems. Attackers gain unauthorized access through clever deception, luring victims into sharing sensitive data. This sophisticated social engineering tactic preys on human vulnerabilities, making it a potent threat. It’s catfishing with a malicious intent, targeting high-stakes individuals for data extraction. Emotional manipulation is the key to this clever attack.

    Anatomy of the crime

    1. Targeting / victim profiling: Takeaway #2: Social Media is the First Door

    Scammers often target individuals in authoritative positions with access to sensitive corporate or government data. They collect personal info like marital status and job profile to identify vulnerabilities. The primary vulnerability they exploit is emotional weakness, which can lead to further digital breaches. Social media is often the starting point for gathering this information.

    2. Initiation:

    Scammers use social media platforms like Facebook, LinkedIn, and dating apps to establish initial contact with their victims. They trace the victim’s online footprint and create a connection, often shifting the conversation from public platforms to private ones like WhatsApp. As communication progresses, the tone of messages changes from professional to friendly and eventually to romantic, marking a significant escalation in the scammer’s approach.

    Takeaway #3: Verify Before You Trust

    3. Gaining the trust: Takeaway #4: Flattery is the Oldest Trap

    Scammers build trust with their victims through flattery, regular chats, and video calls, giving them unnecessary attention and care. They exchange photos, which are later used as leverage to threaten the victim if they try to expose the scammer. The scammer coerces the victim by threatening to damage their public image or spread defamatory content.

    🚨 Enterprise Alert: A sudden behavioral shift in an employee — secrecy, emotional distraction, or odd online behavior — may hint at psychological compromise.

    4. Exploitation:

    In the final stage of the scam, the scammer reveals their true intentions and asks the victim for confidential data, such as project details or passwords to encrypted workplace domains. This stolen information can pose a serious threat to national security and is often sold on the black market, leading to further exploitation and deeper security breaches.

    5. Threat of defamation: Takeaway #5: Silence Helps the Scammer

    If the victim tries to expose the scam, the scammer misuses private data like photos, chats, and recordings to threaten public defamation. This threat coerces the victim into silence, preventing them from reporting the crime due to fear of reputational damage.

    Enterprise Tip: Conduct employee awareness sessions focused on psychological manipulation and emotional engineering.

    Psychological Manipulation 

    Takeaway #6: Cybersecurity is Emotional, Not Just Technical

    • Love Bombing: intense attention and flattering messages.
    • Induction of fear: threatening to leak private images or chats unless confidential data is handed over.

    Takeaway #7: Real Love Doesn’t Ask for Passwords

    • Guilt-tripping: pushing the victim into a state of guilt with expressions such as “Don’t you trust me anymore?”

    Takeaway #8: The ‘Urgency’ Card Is a Red Flag

    • Urgency: an urgent need for money is presented to gain the victim’s sympathy.
    • Isolation: the victim is kept from contact with others so that the scammer’s identity is never exposed.

    Risk to Corporate and National Security

    Takeaway #9: Corporate Security Starts With Personal Awareness

    These scams can lead to severe consequences, including insider threats where employees leak confidential data, espionage by state-sponsored actors targeting government officials, and intellectual property loss that can compromise national security. Additionally, exposure of scandalous content can result in reputation damage, tarnishing brands and causing long-lasting harm.

    Detection:  Takeaway #10: Watch the Behavioral Shift

    Suspicious behaviors include a sudden shift from a friendly to a romantic tone, refusal to join real-time video calls, controlling the terms of communication, sharing personal life details to evoke pity, and requesting large financial support – all potential warning signs of a scam.

    Prevention

    Protect yourself by not sharing personal info, verifying profile photos via reverse image search, and refraining from sending money or explicit content. Also, be cautious with unknown links and files, and enforce zero-trust access control.

    Legal Horizon

    Honey traps can lead to serious offenses like extortion, privacy violation, and transmission of obscene material. Victims can report such cases to cybercrime cells for action.

    Proof in Action

    1. Indian Army Honey Trap Case (2023)

    A 2023 case involved an Army Jawan arrested for leaking sensitive military information to a Pakistani intelligence operative posing as a woman on Facebook. The jawan was lured through romantic conversations and later blackmailed. Such incidents highlight the threat of honey traps to national security.

    2. DRDO Scientist Arrested (2023)  

    Similarly, a senior DRDO scientist was honey-trapped by a foreign spy posing as a woman, leading to the sharing of classified defense research material. The interaction occurred via WhatsApp and social media, highlighting the risks of online espionage.

    3. Pakistan ISI Honey Traps in Indian Navy (2019–2022)

    Indian Navy personnel were arrested for being honey-trapped by ISI agents using fake female profiles on Facebook and WhatsApp. The agents gathered sensitive naval movement data through romantic exchanges.

    Conclusion

    Honey traps prey on emotions, not just systems. Stay vigilant and protect yourself from emotional manipulation. Real love doesn’t ask for passwords. Be cautious of strangers online and keep personal info private. Awareness is key to staying safe. Lock down your digital life.



    Source link

  • How to extract, create, and navigate Zip Files in C# | Code4IT

    How to extract, create, and navigate Zip Files in C# | Code4IT


    Learn how to zip and unzip compressed files with C#. Beware: it’s not as obvious as it might seem!


    When working with local files, you might need to open, create, or update Zip files.

    In this article, we will learn how to work with Zip files in C#. We will learn how to perform basic operations such as opening, extracting, and creating a Zip file.

    The main class we will use is named ZipFile, and comes from the System.IO.Compression namespace. It’s been present in C# since .NET Framework 4.5, so we can say it’s pretty stable 😉 Nevertheless, there are some tricky points that you need to know before using this class. Let’s learn!

    Using C# to list all items in a Zip file

    Once you have a Zip file, you can access the internal items without extracting the whole Zip.

    You can use the ZipFile.Open method.

    using ZipArchive archive = ZipFile.Open(zipFilePath, ZipArchiveMode.Read);
    System.Collections.ObjectModel.ReadOnlyCollection<ZipArchiveEntry> entries = archive.Entries;
    

    Notice that I specified the ZipArchiveMode. This is an Enum whose values are Read, Create, and Update.

    Using the Entries property of the ZipArchive, you can access the whole list of files stored within the Zip folder, each represented by a ZipArchiveEntry instance.

    All entries in the current Zip file

    The ZipArchiveEntry object contains several fields, like the file’s name and the full path from the root archive.

    Details of a single ZipEntry item

    There are a few key points to remember about the entries listed in the Entries collection.

    1. It is a ReadOnlyCollection<ZipArchiveEntry>: it means that even if you find a way to add or update the items in memory, the changes are not applied to the actual files;
    2. It lists all files and folders, not only those at the root level. As you can see from the image above, it lists both the files at the root level, like File.txt, and those in inner folders, such as TestZip/InnerFolder/presentation.pptx;
    3. Each file is characterized by two similar but different properties: Name is the actual file name (like presentation.pptx), while FullName contains the path from the root of the archive (e.g. TestZip/InnerFolder/presentation.pptx);
    4. It lists folders as if they were files: in the image above, you can see TestZip/InnerFolder. You can recognize them because their Name property is empty and their Length is 0;

    Folders are treated like files, but with no Size or Name

    Lastly, remember that ZipFile.Open returns an IDisposable, so you should place the operations within a using statement.
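    Putting these points together, here’s a minimal sketch (the path is just illustrative) that opens an archive,
    skips the folder entries, and prints the name, full path, and size of every file:

    using System;
    using System.IO.Compression;

    string zipFilePath = @"C:\Users\d.bellone\Desktop\TestZip.zip";

    using (ZipArchive archive = ZipFile.Open(zipFilePath, ZipArchiveMode.Read))
    {
        foreach (ZipArchiveEntry entry in archive.Entries)
        {
            // folder entries have an empty Name and a Length of 0
            if (string.IsNullOrEmpty(entry.Name))
                continue;

            Console.WriteLine($"{entry.Name} -> {entry.FullName} ({entry.Length} bytes)");
        }
    }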

    ❓❓A question for you! Why do we see an item for the TestZip/InnerFolder folder, but there is no reference to the TestZip folder? Drop a comment below 📩

    Extracting a Zip folder is easy but not obvious.

    We have only one way to do that: by calling the ZipFile.ExtractToDirectory method.

    It accepts as mandatory parameters the path of the Zip file to be extracted and the path to the destination:

    var zipPath = @"C:\Users\d.bellone\Desktop\TestZip.zip";
    var destinationPath = @"C:\Users\d.bellone\Desktop\MyDestination";
    ZipFile.ExtractToDirectory(zipPath, destinationPath);
    

    Once you run it, you will see the content of the Zip copied and extracted to the MyDestination folder.

    Note that this method creates the destination folder if it does not exist.

    This method accepts two more parameters:

    • entryNameEncoding, by which you can specify the encoding. The default value is UTF-8.
    • overwriteFiles allows you to specify whether existing files must be overwritten. The default value is false: if it’s false and a destination file already exists, this method throws a System.IO.IOException saying that the file already exists (see the example below).
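    For instance, with recent .NET versions you can use the boolean overload to replace whatever is already in the
    destination folder (same paths as above):

    // extract again, overwriting any files that already exist in the destination
    ZipFile.ExtractToDirectory(zipPath, destinationPath, overwriteFiles: true);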

    Using C# to create a Zip from a folder

    The key method here is ZipFile.CreateFromDirectory, which allows you to create Zip files in a flexible way.

    The first mandatory value is, of course, the source directory path.

    The second mandatory parameter is the destination of the resulting Zip file.

    It can be the local path to the file:

    string sourceFolderPath = @"\Desktop\myFolder";
    string destinationZipPath = @"\Desktop\destinationFile.zip";
    
    ZipFile.CreateFromDirectory(sourceFolderPath, destinationZipPath);
    

    Or it can be a Stream that you can use later for other operations:

    using (MemoryStream memStream = new MemoryStream())
    {
        string sourceFolderPath = @"\Desktop\myFolder";
        ZipFile.CreateFromDirectory(sourceFolderPath, memStream);
    
        var length = memStream.Length; // here the Stream is populated
    }
    

    You can finally add some optional parameters:

    • compressionLevel, whose values are Optimal, Fastest, NoCompression, SmallestSize.
    • includeBaseDirectory: a flag that defines if you have to copy only the first-level files or also the root folder.

    A quick comparison of the four Compression Levels

    As we just saw, we have four compression levels: Optimal, Fastest, NoCompression, and SmallestSize.

    What happens if I use the different values to zip all the photos and videos of my latest trip?

    The source folder’s size is 16.2 GB.

    Let me zip it with the four compression levels:

     private long CreateAndTrack(string sourcePath, string destinationPath, CompressionLevel compression)
     {
         Stopwatch stopwatch = Stopwatch.StartNew();
    
         ZipFile.CreateFromDirectory(
             sourceDirectoryName: sourcePath,
             destinationArchiveFileName: destinationPath,
             compressionLevel: compression,
             includeBaseDirectory: true
             );
         stopwatch.Stop();
    
         return stopwatch.ElapsedMilliseconds;
     }
    
    // in Main...
    
    var smallestTime = CreateAndTrack(sourceFolderPath,
        Path.Combine(rootFolder, "Smallest.zip"),
        CompressionLevel.SmallestSize);
    
    var noCompressionTime = CreateAndTrack(sourceFolderPath,
        Path.Combine(rootFolder, "NoCompression.zip"),
        CompressionLevel.NoCompression);
    
    var fastestTime = CreateAndTrack(sourceFolderPath,
        Path.Combine(rootFolder, "Fastest.zip"),
        CompressionLevel.Fastest);
    
    var optimalTime = CreateAndTrack(sourceFolderPath,
        Path.Combine(rootFolder, "Optimal.zip"),
        CompressionLevel.Optimal);
    

    By executing this operation, we have this table:

    Compression Type    Execution time (ms)    Execution time (s)    Size (bytes)      Size on disk (bytes)
    Optimal             483481                 483                   17,340,065,594    17,340,067,840
    Fastest             661674                 661                   16,935,519,764    17,004,888,064
    Smallest            344756                 344                   17,339,881,242    17,339,883,520
    No Compression      42521                  42                    17,497,652,162    17,497,653,248

    We can see a bunch of weird things:

    • Fastest compression generates a smaller file than Smallest compression.
    • Fastest compression is way slower than Smallest compression.
    • Optimal lies in the middle.

    This is to say: don’t trust the names; remember to benchmark the parts where you need performance, even with a test as simple as this.

    Wrapping up

    This was a quick article about one specific class in the .NET ecosystem.

    As we saw, even though the class is simple and it’s all about three methods, there are some things you should keep in mind before using this class in your code.

    I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛

    Happy coding!

    🐧





    Source link