Author: post Bina

  • Judicial Notification Phish in Colombia — .SVG Delivers AsyncRAT



    Content Overview

    • Introduction
    • Initial Vector
    • Infection Chain
    • Analysis of .SVG Attachment
    • Analysis of .HTA file
    • Analysis of .VBS file
    • Analysis of .ps1 file
    • Analysis of Downloader/Loader
      • Anti-VM Technique
      • Persistence Technique
      • Download and Loader Function
    • AsyncRAT Payload
    • File MD5’s
    • Quick Heal \ Seqrite Detections
    • MITRE Attack Tactics, Techniques, and Procedures (TTPs)

    Introduction –

There has been a significant increase in the use of SVG files in malware campaigns. These harmless-looking files can hide malicious scripts that attackers use to launch stealthy phishing attacks.

Recently, we observed one such Spanish-language attack targeting Colombian users with a judicial-notification lure. The campaign uses geographical and institutional details to make the phishing lure look legitimate and trustworthy to the targeted victim. It chains SVG, HTA, VBS, and PowerShell stages to download and decode a loader, which finally injects AsyncRAT into a legitimate Windows process to evade detection.

    Initial Vector –

The campaign begins with a cleverly crafted phishing email that impersonates a judicial notification from “Juzgado 17 Civil Municipal del Circuito de Bogotá”, the “17th Municipal Civil Court of the Bogotá Circuit”. Bogotá is the capital and largest city of Colombia, and many government institutions such as courts and ministries are based there.

    Fig-1: Phishing Email

     

The body of the email contains the following message (translated from Spanish):

    Attached is a lawsuit filed against you.

    17th Municipal Civil Court of the Bogotá Circuit

    September 11, 2025

    Sincerely,

    Judicial Notification System

     

The email mimics the style of an official judicial notice, using formal legal language and institutional naming.

     

The spam email contains an .SVG (Scalable Vector Graphics) file as an attachment. The file is named “Fiscalia General De La Nacion Juzgado Civil 17.svg”, which translates to “Attorney General’s Office Civil Court 17.svg” in English.

     

This carefully crafted email is the entry point of an infection chain that leverages social engineering and official-looking content to entice recipients into opening the attachment.

    Infection Chain –

     

    Fig-2: Infection Chain of Campaign

     

As shown in the diagram above, the infection chain begins with a Spanish-language judicial phishing lure with the subject “Demanda judicial en su contra – Juzgado 17 Civil Municipal”, carrying a seemingly harmless .SVG file.

Opening the .SVG file takes the user to a fake web page masquerading as the Attorney General’s Office, which asks the user to download an important document from the official website. Clicking anywhere on the page triggers an attacker-controlled download of an embedded .HTA file, which in turn executes and drops a Visual Basic dropper (actualiza.vbs).

The VBS calls a PowerShell downloader (veooZ.ps1), which retrieves an encoded blob as a text file (Ysemg.txt). After decoding, the blob is written to disk as classlibrary3.dll.

    classlibrary3.dll acts as a module loader. It fetches an injector component and the AsyncRAT payload, then performs an in-memory injection of AsyncRAT into MSBuild.exe. By running the RAT inside a trusted process, the attacker gains persistence and stealth.

    Analysis of .SVG Attachment –

SVGs (Scalable Vector Graphics) are image files that use XML (a text-based format) to describe shapes, lines, colors, and text. Unlike raster images (such as JPGs or PNGs), SVGs don’t store pixels; they store instructions, which makes them an easy place for attackers to hide malicious code. Moreover, these file types help attackers stay FUD (fully undetectable), as many traditional security solutions do not inspect such files for malicious code.

At the time of analysis of this campaign, the SVG file attached was detected by Quick Heal/Seqrite.

Fig-3: Very Low Detection Count for the Attached .SVG File

Upon opening the malicious .SVG file, a web page is opened in the browser.

     

    Fig-4: Lure Web Page for Attorney General’s Office

     

The page mimics a website of the Attorney General’s Office and a Citizen’s Consultation Portal. It contains additional fake fields, such as a Judicial Information System and a fake consultation registration number, to look more genuine, and it lures victims into clicking “DOWNLOAD DOCUMENT”.

Analyzing the code inside the SVG file, we can see the following elements:

     

    Fig-5: Important Elements in SVG File

     

• style=“cursor:pointer;” – shows a clickable cursor over the image.
• onclick=“openDocument()” – the key element. When the user clicks the SVG, the browser calls the JavaScript function openDocument(), whose definition looks like this:

     

    Fig-6: Code/Action of openDocument()

     

The openDocument() function:

1. reads the base64-encoded data embedded in the file,
2. decodes it into an attacker-controlled HTML blob,
3. creates a temporary URL object for that blob,
4. opens that URL in a new tab.
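
For analysis, the same blob can be extracted and decoded offline instead of letting the browser open it. Below is a minimal PowerShell sketch, assuming a local copy of the attachment and that the payload is the long base64 string literal inside the embedded script (the regex pattern and output file name are illustrative, not taken from the sample):

$svg   = Get-Content '.\Fiscalia General De La Nacion Juzgado Civil 17.svg' -Raw
$match = [regex]::Match($svg, "'([A-Za-z0-9+/=]{500,})'")   # first long base64-looking string literal
if ($match.Success) {
    $bytes = [System.Convert]::FromBase64String($match.Groups[1].Value)
    [System.IO.File]::WriteAllBytes("$PWD\stage2.html", $bytes)   # decoded next-stage HTML for static review
}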

     

In the victim’s browser, this sequence opens the next-stage HTML page:

    Fig-7: Lure Judicial Page, with Fake Progress Bar UI

The HTML page above poses as an official “Rama Judicial” document viewer. It uses a fake progress-bar UI to convince victims that a legitimate download is in progress.

On load, it decodes a Base64 blob and forces the browser to download a file named DOCUMENTO_OFICIAL_JUZGADO.HTA. HTA files can execute arbitrary Windows script code.

     

    Fig-8: Preparing .HTA File for Download

This client-side chain (Base64 → Blob → createObjectURL → forced .HTA download) is a clear staged dropper intended to deliver and run further payloads.

    Analysis of .HTA file –

The HTA file contains a lot of junk code, with a chunk of malicious script tucked in between, together with a huge base64-encoded blob. This blob is decoded and saved as actualiza.vbs, as shown in the figure below.

    Fig-9: HTA file with base64 encoded code which will drop actualiza.vbs

     

    Analysis of .VBS file –

The base64-decoded file again contains many repetitive junk lines; after cleanup, it looks like this:

    Fig-10: VBS file

     

The code writes out a PowerShell script stored in a variable named GftsOTSaty. The actual PowerShell code is made unreadable by substituting “9&” for the character “A” and applying base64 encoding; after the substitution is reversed and the data decoded, the result is written to the file veooZ.ps1, which is then executed.

    Analysis of .ps1 file –

The PowerShell script connects to a dpaste URL and downloads a plaintext file named “Ysemg.txt”.

    Fig-11: Powershell script

Ysemg.txt is a raw text file. Its contents are cleaned up and base64-decoded: as shown in Fig-11, “$$” is replaced with the letter “A” and the result is base64-decoded, which yields a .NET assembly named classlibrary3.dll. One of its methods, “MSqBIbY”, is invoked from the script, and several values are passed to it as arguments. In our case, the first argument is a base64-encoded string, as we can see in Fig-11:

    Fig-12: Method from classlibrary3.dll

The second argument in the script appears as %JkQasDfgrTg%, but when you check the other commands (see the snippets below), you can see that the path of the .vbs file is actually passed as the second argument.

    Fig-13: Code snippets from script

As seen in Fig-11, in the file path passed as the second argument, “\” is replaced with “$”; this substitution is reversed again inside the .NET file.
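
To reproduce this stage offline, the decoding performed by the script can be mirrored in a few lines of PowerShell. This is a defanged sketch, assuming you already have a local copy of Ysemg.txt (no network access; the output file name is hypothetical):

$raw     = Get-Content '.\Ysemg.txt' -Raw
$cleaned = $raw.Replace('$$', 'A')                              # undo the "$$" -> "A" substitution
$bytes   = [System.Convert]::FromBase64String($cleaned.Trim())
[System.IO.File]::WriteAllBytes("$PWD\classlibrary3_dump.dll", $bytes)

# Sanity check: a valid PE starts with the 'MZ' marker (0x4D 0x5A)
'{0:X2} {1:X2}' -f $bytes[0], $bytes[1]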

    Analysis of Downloader-Loader –

The decoded file is a .NET DLL that receives one URL through the arguments passed from the script and has another URL embedded in it. Static analysis shows it is primarily a downloader, with additional checks to make sure everything runs smoothly and, in certain scenarios, to add persistence for the malware.

The file is heavily obfuscated and uses loops of XOR and shift operations to decode the obfuscated values.

    Fig-14: File-path check

     

As mentioned earlier, the second argument is the file path of the VBS script with “\” replaced by “$”; the DLL reverses the substitution to restore the proper file path.

    Anti-VM Technique:

    Fig-15: Anti-VM technique

The code contains an anti-analysis trick that checks for VirtualBox- and VMware-related processes. It first checks whether the yktfr variable equals 4; if so, it enumerates running processes and exits when a VM-related process is found. In our case the value is 0, so this check is skipped.

     

    Persistence Technique:

It also checks whether the fourth argument is “1”; if so, it builds a PowerShell script through string concatenation and runs it via the shell (Fig-15). But in our case, as noted earlier, the value is 0.

This PowerShell script adds the VBS file to the Run registry key to maintain persistence.

    Fig-16: PS script creation to maintain persistence

     

Similarly, it drops a .lnk shortcut file into the Startup folder if the value is 3, another persistence technique widely used by attackers.
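
For quick triage of the two persistence locations described above, something like the following can be run on a suspect host (a sketch; these are the standard per-user locations, not paths taken from the sample):

# Run-key entries (look for anything pointing at a .vbs dropper)
Get-ItemProperty 'HKCU:\Software\Microsoft\Windows\CurrentVersion\Run'

# Shortcut files dropped into the user's Startup folder
Get-ChildItem "$env:APPDATA\Microsoft\Windows\Start Menu\Programs\Startup" -Filter *.lnk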

    Download and Loader Function:

    Fig-17: Encoded dpaste url

The value in text5 decodes to another (reversed) dpaste URL. After formatting it, the code calls “webClient.DownloadString(text5)” to download a text file containing base64 data mixed with asterisks, which is saved in text4. On reversing the string we can see “TVqQ”, the base64-encoded MZ marker. In the next step, the DLL replaces the asterisks with “A”, giving us a new PE file.

     

    Fig-18: Downloaded content upon reversal

This file is also a .NET DLL. In similar fashion, one more file is downloaded, but in this case the URL comes from the first argument. That file is simply reversed and base64-encoded; once decoded, it is a .NET executable stored in the variable text7. The text7 file is actually the AsyncRAT payload, which is discussed later.

     

    Fig-19: File code taking passed encoded URL and downloading another file

     

As shown in the figure below (Fig-20), the newly downloaded DLL (stored in text4) is loaded through AppDomain.CurrentDomain and one of its methods is invoked with two arguments. Inspecting the called function (Fig-21), it takes two arguments: one specifying the process in which the injection will take place, and the other containing the code to be injected.

    Fig-20: Process Injection function being called

     

So, the new DLL acts as an injector, injecting the AsyncRAT payload into MSBuild.exe.

Below is a snippet from the injector DLL. The \uFDD0 function contains all the injection-related functions:

    Fig-21: Process Injection Function from Injector Dll
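
From a detection standpoint, MSBuild.exe running outside of a build context is a useful hunting lead for this injection step. A small triage sketch (a starting point, not a definitive detection):

# List MSBuild.exe instances with their parent process and command line;
# an instance with no build-related command line or an unusual parent is worth a closer look
Get-CimInstance Win32_Process -Filter "Name = 'MSBuild.exe'" |
    Select-Object ProcessId, ParentProcessId, CommandLine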

     

    AsyncRAT Payload-

AsyncRAT is a remote access Trojan (RAT) written in C#. It provides typical RAT and data-stealing functions, such as keystroke logging, executing or injecting additional payloads, and command-and-control, whose exact capabilities depend on its embedded configuration. It is a compiled .NET binary and, in our case, the code was not obfuscated, so it could be analyzed easily. AsyncRAT’s primary purpose is to steal data and send it to the C&C server. Some notable observations from the payload we analyzed are listed below:

     

• To create persistence, it checks whether the current process is running with elevated privileges.
  • If yes, it creates a scheduled task with the command: schtasks /create /f /sc onlogon /rl highest /tn “<filename>” /tr “<fullpath>”
  • If no, it writes its path under the registry key HKCU\SOFTWARE\Microsoft\Windows\CurrentVersion\Run\
• Has anti-analysis, anti-VM, and AMSI-bypass checks.
• Checks for the presence of a mutex named “DcRatMutex_qwqdanchun” (a quick check for this is sketched after this list).
• Checks whether a webcam is available on the infected machine. If a camera is found, the malware can later use it for spying or surveillance.
• Iterates through running processes and kills process-monitoring and analysis tools, such as Taskmgr.exe, ProcessHacker.exe, etc.
• Gathers system details such as HWID, OS, user privileges, camera presence, and antivirus information.
• Establishes a connection to the C2.
• Dynamically loads and runs plugins sent from the C2 server.
• Packs the gathered data into a MessagePack object and sends it over the TLS connection (large messages are split into chunks before transmission).
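
For responders, the mutex noted above offers a quick host check. A minimal PowerShell sketch (it assumes the sample uses the default mutex name observed in this campaign):

try {
    # Try to open the mutex this sample is known to create
    $m = [System.Threading.Mutex]::OpenExisting('DcRatMutex_qwqdanchun')
    'Mutex found - host may be infected'
    $m.Dispose()
} catch [System.Threading.WaitHandleCannotBeOpenedException] {
    'Mutex not present'
}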

    File MD5’s –

    b1ed63ee45ec48b324bf126446fdc888

    817081c745aa14fcb15d76f029e80e15

    6da792b17c4bba72ca995061e040f984

    f3b56b3cfe462e4f8a32c989cd0c5a7c

    5fad0c5b6e5a758059c5a4e633424555

    fe0fc2949addeefa6506b50215329ed9

     

    Quick Heal \ Seqrite Detections –

    Trojan.InjectorCiR

    Html.Asyncrat.49974.GC

    Script.Trojan.49969.GC

    Backdoor.MsilFC.S13564499

    Trojandownloader.AgentCiR

    MITRE Attack Tactics, Techniques, and Procedures (TTPs)

    Tactics (ATT&CK ID) Techniques / Sub-technique (ID) Procedure
    Initial Access (TA0001) T1566.001 Malicious .svg attachment opened
    Execution (TA0002) T1218.005 / T1059.001 SVG drops/executes .hta → PowerShell
    Execution (TA0002) T1059.005 HTA writes & runs actualiza.vbs
    Persistence (TA0003) T1547.001 Adds Run key under HKCU\…\Run
    Persistence (TA0003) T1053.005 Creates schtasks onlogon task
    Defense Evasion (TA0005) T1027 Base64 / reversed strings / junk obfuscation
    Defense Evasion (TA0005) T1562.001 Kills security / monitoring tools
    Defense Evasion (TA0005) T1055 Injects AsyncRAT into MSBuild.exe
    Defense Evasion (TA0005) T1497 VM/sandbox WMI & process checks (exit in VMs)
Defense Evasion (TA0005) T1112 / T1070 Deletes/cleans specific registry keys
    Discovery (TA0007) T1057 Enumerates running processes
    Discovery (TA0007) T1082 / T1012 Collects system info; reads registry
    Collection (TA0009) T1125 Checks for webcam presence
    Command and Control (TA0011) T1071 / T1573 TLS-wrapped TCP using MsgPack
    C2 & Modular Capabilities (TA0011) T1105 Downloads injector and payload modules
    C2 & Modular Capabilities (TA0011) T1543 / T1609 Loads plugins from registry on demand
    Exfiltration (TA0010) T1041 Sends data over encrypted C2 (chunked)

     

     

     

    Authors –
    Prashil Moon, Kirti Kshatriya



    Source link

  • How to open the same URL on different environments with PowerShell | Code4IT


    Revise PowerShell basics with a simple script that opens a browser for each specified URL. We’re gonna cover how to declare variables, define arrays, concatenate strings and run CMD commands.


    Say that your project is already deployed on multiple environments: dev, UAT, and production; now you want to open the same page from all the environments.

    You could do it manually, by composing the URL on a notepad. Or you could create a PowerShell script that opens them for you.

    In this article, I’m going to share with you a simple script to open multiple browsers with predefined URLs. First of all, I’ll show you the completed script, then I’ll break it down to understand what’s going on and to brush up on some basic syntax for PowerShell.

    Understanding the problem: the full script

I have a website deployed on 3 environments: dev, UAT, and production, and I want to open the same page on all of them, in this case “/Image?w=600”.

    So, here’s the script that opens 3 instances of my default browser, each with the URL of one of the environments:

    $baseUrls =
    "https://am-imagegenerator-dev.azurewebsites.net",
    "https://am-imagegenerator-uat.azurewebsites.net",
    "https://am-imagegenerator-prd.azurewebsites.net";
    
    $path = "/Image?w=600";
    
    foreach($baseUrl in $baseUrls)
    {
        $fullUrl = "$($baseUrl)$($path)";
        Invoke-Expression "cmd.exe /C start $($fullUrl)"
    }
    

    Let’s analyze the script step by step to brush up on some basic notions about PowerShell.

    Variables in PowerShell

    The first thing to notice is the way to declare variables:
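
For example, from the full script above:

$path = "/Image?w=600";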

    There’s not so much to say, except that variables have no type declaration and that each variable name must start with the “$” symbol.

    Arrays in PowerShell

    Talking about arrays, we can see that there is no [] syntax:

    $baseUrls =
        "https://am-imagegenerator-dev.azurewebsites.net",
        "https://am-imagegenerator-uat.azurewebsites.net",
        "https://am-imagegenerator-prd.azurewebsites.net";
    

In fact, to declare an array you simply separate each string with a comma.

    Foreach loops in PowerShell

    Among the other loops (while, do-while, for), the foreach loop is probably the most used.

    Even here, it’s really simple:

    foreach($baseUrl in $baseUrls)
    {
    
    }
    

    As we’ve already seen before, there is no type declaration for the current item.

    Just like C#, the keyword used in the body of the loop definition is in.

    foreach (var item in collection)
    {
        // In C# we use the `var` keyword to declare the variable
    }
    

    String concatenation in PowerShell

    The $fullUrl variable is the concatenation of 2 string variables: $baseUrl and $path.

    $fullUrl = "$($baseUrl)$($path)";
    

    We can see that to declare this new string we must wrap it between "...".

    More important, every variable that must be interpolated is wrapped in a $() block.

    How to run a command with PowerShell

    The key part of this script is for sure this line:

    Invoke-Expression "cmd.exe /C start $($fullUrl)"
    

    The Invoke-Expression cmdlet evaluates and runs the specified string in your local machine.

    The command cmd.exe /C start $($fullUrl) just tells the CMD to open the link stored in the $fullUrl variable with the default browser.

    Wrapping up

    We learned how to open multiple browser instances with PowerShell. As you can understand, this was just an excuse to revise some basic concepts of PowerShell.

    I think that many of us are too focused on our main language (C#, Java, JavaScript, and so on) that we forget to learn something different that may help us with our day-to-day job.

    Happy coding!



    Source link

• a (better?) alternative to Testing Diamond and Testing Pyramid | Code4IT



    The Testing Pyramid focuses on Unit Tests; the Testing Diamond focuses on Integration Tests; and what about the Testing Vial?


Testing is crucial in any kind of application. It is also important in applications that are meant to be thrown away: with a proper testing strategy, you can ensure that the application does exactly what you expect it to do; instead of running it over and over again to fix each part, adding some specific tests will speed up the development of that throwaway project.

    The most common testing strategies are the Testing Pyramid and the Testing Diamond. They are both useful, but I think that they are not perfect.

    That’s why I came up with a new testing strategy that I called “the Testing Vial”: in this article, I’m going to introduce it and explain the general idea.

    Since it’s a new idea, I’d like to hear your honest feedback. Don’t be afraid to tell me that this is a terrible idea – let’s work on it together!

    The Testing Pyramid: the focus is on Unit Tests

    The Testing Pyramid is a testing strategy where the focus is on Unit Tests.

    Unit Tests are easy to write (well, they are often easy to write: it depends on whether your codebase is a mess!), they are fast to execute, so they provide immediate feedback.

    The testing pyramid

    So, the focus here is on technical details: if you create a class named Foo, most probably you will have its sibling class FooTests. And the same goes for each (public) method in it.

    Yes, I know: unit tests can operate across several methods of the same class, as long as it is considered a “unit”. But let’s be real: most of the time, we write tests against each single public method. And, even worse, we are overusing mocks.

    Problems with the Testing Pyramid

    The Testing Pyramid relies too much on unit tests.

    But Unit Tests are not perfect:

    1. They often rely too much on mocks: tests might not reflect the real execution of the system;
    2. They are too closely coupled with the related class and method: if you add one parameter to one single method, you most probably will have to update tens of test methods;
    3. They do not reflect the business operations: you might end up creating the strongest code ever, but missing the point of the whole business meaning. Maybe, because you focused too much on technical details and forgot to evaluate all the acceptance criteria.

    Now, suppose that you have to change something big, like

    • add OpenTelemetry support on the whole system;
    • replace SQL with MongoDB;
    • refactor a component, replacing a huge internal switch-case block with the Chain Of Responsibility pattern.

    Well, in this case, you will have to update or delete a lot of Unit Tests. And, still, you might not be sure you haven’t added regressions. This is one of the consequences of focusing too much on Unit Tests.

    The Testing Diamond: the focus is on Integration Tests

    The Testing Diamond emphasises the importance of Integration Tests.

    The Testing Diamond

    So, when using this testing strategy, you are expected to write many more Integration Tests and way fewer Unit Tests.

    In my opinion, this is a better approach to testing: this way, you can focus more on the business value and less on the technical details.

    Using this approach, you may refactor huge parts of the system without worrying too much about regressions and huge changes in tests: in fact, Integration Tests will give you a sort of safety net, ensuring that the system still works as expected.

    So, if I had to choose, I’d go with the Testing Diamond: implementations may change, while the overall application functionality will still be preserved.

    Problems with the Testing Diamond

    Depending on the size of the application and on how it is structured, Integration Tests may be time-consuming and hard to spin up.

    Maybe you have a gigantic monolith that takes minutes to start up: in this case, running Integration Tests may take literally hours.

    Also, there is a problem with data: if you are going to write data to a database (or an external resource), how can you ensure that the operation does not insert duplicate or dirty data?

    For this problem, there are several solutions, such as:

    • using Ephemeral Environments specifically to run these tests;
    • using TestContainers to create a sandbox environment;
    • replacing some specific operations (like saving data on the DB or sending HTTP requests) by using a separate, standalone service (as we learned in this article, where we customised a WebApplicationFactory).

    Those approaches may not be easy to implement, I know.

    Also, Integration Tests alone may not cover all the edge cases, making your application less robust.

    Introducing the Testing Vial: the focus is on business entities

    Did you notice? Both the Testing Pyramid and the Testing Diamond focus on the technical aspects of the tests, and not on the meaning for the business.

    I think that is a wrong approach, and that we should really shift our focus from the number of tests of a specific type (more Unit Tests or more Integration Tests?) to the organisational value they bring: that’s why I came up with the idea of the Testing Vial.

    The Testing Vial

    You can imagine tests to be organised into sealed vials.

    In each vial, you have

    • E2E tests: to at least cover the most critical flows
    • Integration tests: to cover at least all the business requirements as they are described in the Acceptance Criteria of your User Stories (or, in general, to cover all Happy Paths and the most common Unhappy Paths);
    • Unit test: to cover at least all the edge cases that are hard to reproduce with Integration tests.

    So, using the Testing Vial, you don’t have to worry about the number of tests of a specific type: you only care that, regardless of their number, tests are focused on Business concerns.

    But, ok, nothing fancy: it’s just common sense.

    To make the Testing Vial effective, there are two more parts to add.

    Architectural tests, to validate that the system design hasn’t changed

    After you have all these tests, in a variable number which depends solely on what is actually helpful for you, you also write some Architectural Tests, for example by using ArchUnit, for Java, or ArchUnit.NET for .NET applications.

    This way, other than focusing on the business value (regardless of this goal being achieved by Unit Tests or Integration Tests), you also validate that the system hasn’t changed in unexpected ways. For example, you might have added a dependency between modules, making the system more coupled and less maintainable.

    Generally speaking, Architectural Tests should be written in the initial phases of a project, so that, by running them from time to time, they can ensure that nothing has changed.

    With Architectural Tests, which act as a cap for the vial, you ensure that the tests are complete, valid, and that the architecture-wise maintainability of the system is preserved.

    But that’s not enough!

    Categories, to identify and isolate areas of your application

    All of this makes sense if you add one or more tags to your tests: these tags should identify the business entity the test is referring to. For example, in an e-shop application, you should add categories about “Product”, “Cart”, “User”, and so on. This is way easier if you already do DDD, clearly.

In C# you can categorise tests by using TestCategory if you use MSTest, Category if you use NUnit, or Trait if you use xUnit.

    [TestCategory("Cart")]
    [TestCategory("User")]
    public async Task User_Should_DoSomethingWithCart(){}
    

    Ok, but why?

    Well, categorising tests allows you to keep track of the impacts of a change more broadly. Especially at the beginning, you might notice that too many tests are marked with too many categories: this might be a sign of a poor design, and you might want to work to improve it.

    Also, by grouping by category, you can have a complete view of everything that happens in the system about that specific Entity, regardless of the type of test.

    Did you know that in Visual Studio you can group tests by Category (called Traits), so that you can see and execute all the tests related to a specific Category?

    Tests grouped by Category in Visual Studio 2022
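
If you prefer the command line, you can do the same with the dotnet test filter (a sketch; the exact property name depends on the test framework and adapter, e.g. TestCategory for MSTest/NUnit-style categories, the trait name for xUnit):

# Run only the tests tagged with the "Cart" category
dotnet test --filter "TestCategory=Cart"

# Run the tests tagged with both "Cart" and "User"
dotnet test --filter "TestCategory=Cart&TestCategory=User"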

    By using Code Coverage tools wisely – executing them in combination with tests of a specific category – you can identify all the parts of the application that are affected by such tests. This is especially true if you have many Integration Tests: just by looking at the executed methods, you can have a glimpse of all the parts touched by that test. This simple trick can also help you out with reorganising the application (maybe by moving from monolith to modular monolith).

Finally, having tests tagged gives you a catalogue of all the Entities and their dependencies. And, if you need to work on a specific activity that changes something about an Entity, you can perform better analyses to find potential, overlooked impacts.

    Further readings

    There is a lot of content about tests and testing strategies, so here are some of them.

    End-to-End Testing vs Integration Testing | Testim


    In this article I described how I prefer the Testing Diamond over the Testing Pyramid.

    Testing Pyramid vs Testing Diamond (and how they affect Code Coverage) | Code4IT

    Then, I clearly changed my mind and came up with the idea of the Testing Vial.

    Wrapping up

    With the Testing Vial approach, the shift moves from technical to business concerns: you don’t really care if you’ve written more Unit Tests or more Integration tests; you only care that you have covered everything that the business requires, and that by using Architecture Tests and Test Categories you can make sure that you are not introducing unwanted dependencies between modules, improving maintainability.

    Vials are meant to be standalone: by accessing the content of a vial, you can see everything related to it: its dependencies, its architecture, main user cases and edge cases.


    Clearly, the same test may appear in multiple vials, but that’s not a problem.

    I came up with this idea recently, so I want to hear from you what you think about it. I’m sure there are areas of improvement!

    Let me know!

    I hope you enjoyed this article! Let’s keep in touch on LinkedIn, Twitter or BlueSky! 🤜🤛

    Happy coding!

    🐧





    Source link

  • 10 underestimated tasks to do before your next virtual presentation | Code4IT


When giving a talk, the audience experience is as important as the content: people must be able to focus on what you say without being distracted by external noise. So, here are 10 tips to rock your next virtual talk.


More and more developers want to become tech speakers as well. Every day we can see dozens of meetups, live streams, and YouTube videos by developers from all over the world. But regardless of the topic and the type of talk you’re doing, there are a few tips you should keep in mind to rock the execution.

    Those tips are not about the content, but about the presentation itself. So, maybe, consider re-reading this checklist about 30 minutes before your next virtual conference.

    1- Hide desktop icons

    Many of you have lots of icons on your desktop, right? Me too. I often save on Desktop temporary files (that I always forget to move or delete) and many program icons, like Postman, Fiddler, Word, and so on.

    They are just a distraction to your audience. You should keep the desktop as clean as possible.

    You can do it in 2 ways: hide all the icons (on Windows: right-click > View > untick Show desktop icons) or just remove the ones that are not necessary.

    The second option is better if you have lots of content to show from different sources, like images, plots, demo with different tools, and so on.

    If you have everything under a single folder, you can simply hide all icons and pin that folder on Quick Access.
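
If you prefer to script it, the “Show desktop icons” toggle also lives in the registry. A sketch, assuming the usual HideIcons flag under the Explorer Advanced key (set the value back to 0 to restore the icons):

Set-ItemProperty 'HKCU:\Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced' -Name HideIcons -Value 1 -Type DWord
# Restart the shell so the change takes effect
Stop-Process -Name explorer -Force; Start-Process explorer.exe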

    2- Choose a neutral desktop background

    Again, your audience should focus on your talk, not on your desktop. So just remove funny or distracting background images.

    Even more, if you use memes or family photos as desktop background.

    A good idea is to create a custom desktop background for the event you are participating in: a simple image with the name of the talk, your name, and your social contacts.

    A messy background is cool, but distracts the audience

    3- Mute your phone

    Avoid all the possible distractions. WhatsApp notifications, calls from Call Centres, alarm clocks you forgot to turn off…

    So, just use Airplane mode.

    4- Remove useless bookmarks (or use a different browser)

    Just as desktop icons, bookmarks can distract your audience.

You don’t want to show everyone which social networks you are using, which projects you’re currently working on, and other private information about you.

    A good alternative is to use a different browser. But remember to do a rehearsal with that browser: sometimes some JavaScript and CSS functionalities are not available on every browser, so don’t take anything for granted.

    5- Close background processes

    What if you get an awkward message on Skype or Slack while you’re sharing your screen?

    So, remember to close all useless background processes: all the chats (Skype, Discord, Telegram…) and all the backup platforms (OneDrive, Dropbox, and so on).

The risk is not only unwanted notifications appearing while you share your screen: all those programs also consume network bandwidth, CPU, and memory, so shutting them down frees up resources and makes everything run smoother.

    6- Check font size and screen resolution

    You don’t know the device your audience will use. Some of them will watch you talk on a smartphone, some others on a 60″ TV.

    So, even if you’re used to small fonts and icons, make everything bigger. Start with screen resolution. If it is OK, now increase the font size for both your slides and your IDE.

    Make sure everyone can read it. If you can, during the rehearsals share your screen with a smartphone and a big TV, and find the balance.

    7- Disable dark mode

Accessibility is key, even more so for virtual events, and not everyone sees things the way you do. So, switch to light mode everything that natively supports it: IDEs, websites, tools.

    8- Check mic volume

    This is simple: if your mic volume is too low, your audience won’t hear a word from you. So, instead of screaming for one hour, just put your mic near you or increase the volume.

    9- Use ZoomIt to draw on your screen

    «Ok, now, I click on this button on the top-left corner with the Home icon».

    How many times have you heard this phrase? It’s not wrong to say so, but you can simply show it. Remember, show, don’t tell!

    For Windows, you can install a small tool, ZoomIt, that allows you to draw lines, arrows, and shapes on your screen.

    You can read more on this page by Microsoft, where you can find the download file, some shortcuts, and more info.

    So, download it, try out some shortcuts (eg: R, G, B to use a red, green, or blue pen, and Hold Ctrl + Shift to draw an arrow) and use it to help your audience see what you’re indicating with your mouse.

    With ZoomIt you can draw lines and rectangles on your screen

    10- Have a backup in case of network failures

Your internet connection goes down during the live session. First reaction: shock. But then, you remember you have everything under control: you can use your smartphone as a hotspot and use that connection to move on with your talk. So, always have a plan B.

    And what if the site you’re showing for your demos goes down? Say that you’re explaining what are Azure Functions, and suddenly the Azure Dashboard becomes unavailable. How to prevent this situation?

    You can’t. But you can have a backup plan: save screenshots and screencasts, and show them if you cannot access the original sites.

    Wrapping up

    We’ve seen that there are lots of things to do to improve the quality of your virtual talks. If you have more tips to share, share them in the comment section below or on this discussion on Twitter.

    Performing your first talks is really challenging, I know. But it’s worth a try. If you want to read more about how to be ready for it, here’s the recap of what I’ve learned after my very first public speech.





    Source link

  • 14 to 2 seconds: how I improved the performance of an endpoint by 82%


    Language details may impact application performance. In this article we’ll see some of the C# tips that brought me to improve my application. Singleton creation, StringBuilder and more!


    In this second article, I’m going to share some more tips that brought me to improve the performance of an API from 14sec to less than 3 seconds: an improvement of 82%.

    In the previous article, we’ve seen some general, language-agnostic ways to approach this kind of problem, and what you can try (and avoid) to do to achieve a similar result.

    In this article, we’re going to see some .NET-related tips that can help to improve your APIs performance.

    WarmUp your application using Postman to create Singleton dependencies

In my application, we use (of course) dependency injection. Almost all the dependencies are registered as Singleton: this means that every dependency is created at the start-up of the application and is then shared throughout the lifespan of the application.

    Pss: if you want to know the difference between Singleton, Transient, and Scoped lifetimes with real examples, check out this article!

    It makes sense, right? But have a closer look at the timing in this picture:

    Timings with initial warmup time

    The blue line is the whole HTTP call, and the black line is the API Action.

    There are almost 2 seconds of nothing! Why?

    Well, as explained in the article “Reducing initial request latency by pre-building services in a startup task in ASP.NET Core” by Andrew Lock, singletons are created during the first request, not at the real start-up of the application. And, given that all the dependencies in this application are singletons, the first 2 seconds are being used to create those instances.

    While Andrew explains how to create a Startup task to warm up the dependencies, I opted for a quick-and-dirty option: create a Warmup endpoint and call it before any call in Postman.

    [HttpGet, Route("warmup")]
    public ActionResult<string> WarmUp()
    {
        var obj = new
        {
            status = "ready"
        };
    
        return Ok(obj);
    }
    

    It is important to expose that endpoint under a controller that uses DI: as we’ve seen before, dependencies are created during the first request they’re needed; so, if you create an empty controller with only the WarmUp method, you won’t build any dependency and you’ll never see improvements. My suggestion is to place the WarmUp method under a controller that requires one of the root services: in this way, you’ll create the services and all their dependencies.

    To call the WarmUp endpoint before every request, I’ve created this simple script:

    pm.sendRequest("https://localhost:44326/api/warmup", function (err, response) {
      console.log("ok")
    })
    

    So, if you paste it in Postman, into the Pre-requests Script tab, it executes this call before the main HTTP call and warms up your application.

    Pre-request script on Postman

This tip will not speed up your application, but it gives you a more precise value for the timings.

    Improve language-specific details

    Understanding how C# works and what functionalities it offers is crucial to get well working applications.

There are plenty of articles around the Internet with nice tips and tricks to improve .NET performance; here I’ll list some of my favorite tips and why you should care about them.

    Choose the correct data type

There’s a lot you can do, like choosing the right data type: if you are storing a player’s age, is int the right choice? Remember that int.MinValue is -2147483648 and int.MaxValue is 2147483647.

    You could use byte: its range is [0,255], so it’s perfectly fine to use it.

    To have an idea of what data type to choose, here’s a short recap with the Min value, the Max value, and the number of bytes occupied by that data type:

    Data type Min value Max Value # of bytes
    byte 0 255 1
    short -32768 32767 2
    ushort 0 65535 2
    int -2147483648 2147483647 4
    uint 0 4294967295 4

    So, just by choosing the right data type, you’ll improve memory usage and then the overall performance.

    It will not bring incredible results, but it’s a good idea to think well of what you need and why you should use a particular data type.

    StringBuilder instead of string concatenation

    Strings are immutable, in C#. This means that every time you concatenate 2 strings, you are actually creating a third one that will contain the result.

    So, have a look at this snippet of code:

    string result = "<table>";
    for (int i = 0; i < 19000; i++)
    {
        result += "<tr><td>"+i+"</td><td>Number:"+i+"</td></tr>";
    }
    
    result += "</table>";
    
    Console.WriteLine(result);
    

    This loop took 2784 milliseconds.

    That’s where the StringBuilder class comes in handy: you avoid all the concatenation and store all the substrings in the StringBuilder object:

    StringBuilder result = new StringBuilder();
    
    result.Append("<table>");
    for (int i = 0; i < 19000; i++)
    {
        result.Append("<tr><td>");
        result.Append(i);
        result.Append("</td><td>Number:");
        result.Append(i);
        result.Append("</td></tr>");
    }
    
    result.Append("</table>");
    
    Console.WriteLine(result.ToString());
    

    Using StringBuilder instead of string concatenation I got the exact same result as the example above but in 58 milliseconds.

    So, just by using the StringBuilder, you can speed up that part by 98%.

    Don’t return await if it’s the only operation in that method

    Every time you mark a method as async, behind the scenes .NET creates a state machine that keeps track of the execution of each method.

So, have a look at this program where every method returns the result of another one. Pay attention to the many return await statements:

    async Task Main()
    {
        var isAvailable = await IsArticleAvailable();
        Console.WriteLine(isAvailable);
    }
    
    async Task<bool> IsArticleAvailable()
    {
        var articlePath = "/blog/clean-code-error-handling";
        return await IsPathAvailable(articlePath);
    }
    
    async Task<bool> IsPathAvailable(string articlePath)
    {
        var baseUrl = "https://www.code4it.dev/";
        return await IsResourceAvailable(baseUrl, articlePath);
    }
    
    async Task<bool> IsResourceAvailable(string baseUrl, string articlePath)
    {
        using (HttpClient client = new HttpClient() { BaseAddress = new Uri(baseUrl) })
        {
            HttpResponseMessage response = await client.GetAsync(articlePath);
            return response.IsSuccessStatusCode;
        }
    }
    

So, what did I mean by state machine?

    Here’s just a small part of the result of the decompilation of that code. It’s a looooong listing: don’t focus on the details, just have a look at the general structure:

    If you are interested in the full example, here you can find the gist with both the original and the decompiled file.

    internal static class <Program>$
    {
        private sealed class <<<Main>$>g__Main|0_0>d : IAsyncStateMachine
        {
            public int <>1__state;
    
            public AsyncTaskMethodBuilder <>t__builder;
    
            private bool <isAvailable>5__1;
    
            private bool <>s__2;
    
            private TaskAwaiter<bool> <>u__1;
    
            private void MoveNext()
            {
                int num = <>1__state;
                try
                {
                    TaskAwaiter<bool> awaiter;
                    if (num != 0)
                    {
                        awaiter = <<Main>$>g__IsArticleAvailable|0_1().GetAwaiter();
                        if (!awaiter.IsCompleted)
                        {
                            num = (<>1__state = 0);
                            <>u__1 = awaiter;
                            <<<Main>$>g__Main|0_0>d stateMachine = this;
                            <>t__builder.AwaitUnsafeOnCompleted(ref awaiter, ref stateMachine);
                            return;
                        }
                    }
                    else
                    {
                        awaiter = <>u__1;
                        <>u__1 = default(TaskAwaiter<bool>);
                        num = (<>1__state = -1);
                    }
                    <>s__2 = awaiter.GetResult();
                    <isAvailable>5__1 = <>s__2;
                    Console.WriteLine(<isAvailable>5__1);
                }
                catch (Exception exception)
                {
                    <>1__state = -2;
                    <>t__builder.SetException(exception);
                    return;
                }
                <>1__state = -2;
                <>t__builder.SetResult();
            }
    
            void IAsyncStateMachine.MoveNext()
            {
                //ILSpy generated this explicit interface implementation from .override directive in MoveNext
                this.MoveNext();
            }
    
            [DebuggerHidden]
            private void SetStateMachine(IAsyncStateMachine stateMachine)
            {
            }
    
            void IAsyncStateMachine.SetStateMachine(IAsyncStateMachine stateMachine)
            {
                //ILSpy generated this explicit interface implementation from .override directive in SetStateMachine
                this.SetStateMachine(stateMachine);
            }
        }
    
        private sealed class <<<Main>$>g__IsArticleAvailable|0_1>d : IAsyncStateMachine
        {
            public int <>1__state;
    
            public AsyncTaskMethodBuilder<bool> <>t__builder;
    
            private string <articlePath>5__1;
    
            private bool <>s__2;
    
            private TaskAwaiter<bool> <>u__1;
    
            private void MoveNext()
            {
                int num = <>1__state;
                bool result;
                try
                {
                    TaskAwaiter<bool> awaiter;
                    if (num != 0)
                    {
                        <articlePath>5__1 = "/blog/clean-code-error-handling";
                        awaiter = <<Main>$>g__IsPathAvailable|0_2(<articlePath>5__1).GetAwaiter();
                        if (!awaiter.IsCompleted)
                        {
                            num = (<>1__state = 0);
                            <>u__1 = awaiter;
                            <<<Main>$>g__IsArticleAvailable|0_1>d stateMachine = this;
                            <>t__builder.AwaitUnsafeOnCompleted(ref awaiter, ref stateMachine);
                            return;
                        }
                    }
                    else
                    {
                        awaiter = <>u__1;
                        <>u__1 = default(TaskAwaiter<bool>);
                        num = (<>1__state = -1);
                    }
                    <>s__2 = awaiter.GetResult();
                    result = <>s__2;
                }
                catch (Exception exception)
                {
                    <>1__state = -2;
                    <articlePath>5__1 = null;
                    <>t__builder.SetException(exception);
                    return;
                }
                <>1__state = -2;
                <articlePath>5__1 = null;
                <>t__builder.SetResult(result);
            }
    
            void IAsyncStateMachine.MoveNext()
            {
                //ILSpy generated this explicit interface implementation from .override directive in MoveNext
                this.MoveNext();
            }
    
            [DebuggerHidden]
            private void SetStateMachine(IAsyncStateMachine stateMachine)
            {
            }
    
            void IAsyncStateMachine.SetStateMachine(IAsyncStateMachine stateMachine)
            {
                //ILSpy generated this explicit interface implementation from .override directive in SetStateMachine
                this.SetStateMachine(stateMachine);
            }
        }
    
        [AsyncStateMachine(typeof(<<<Main>$>g__IsArticleAvailable|0_1>d))]
        [DebuggerStepThrough]
        internal static Task<bool> <<Main>$>g__IsArticleAvailable|0_1()
        {
            <<<Main>$>g__IsArticleAvailable|0_1>d stateMachine = new <<<Main>$>g__IsArticleAvailable|0_1>d();
            stateMachine.<>t__builder = AsyncTaskMethodBuilder<bool>.Create();
            stateMachine.<>1__state = -1;
            stateMachine.<>t__builder.Start(ref stateMachine);
            return stateMachine.<>t__builder.Task;
        }
    

    Every method marked as async “creates” a class that implements the IAsyncStateMachine interface and implements the MoveNext method.

    So, to improve performance, we have to get rid of lots of this stuff: we can do it by simply removing await calls when there is only one awaited method and you do nothing after calling that method.

    So, we can transform the previous snippet:

    async Task Main()
    {
        var isAvailable = await IsArticleAvailable();
        Console.WriteLine(isAvailable);
    }
    
    async Task<bool> IsArticleAvailable()
    {
        var articlePath = "/blog/clean-code-error-handling";
        return await IsPathAvailable(articlePath);
    }
    
    async Task<bool> IsPathAvailable(string articlePath)
    {
        var baseUrl = "https://www.code4it.dev/";
        return await IsResourceAvailable(baseUrl, articlePath);
    }
    
    async Task<bool> IsResourceAvailable(string baseUrl, string articlePath)
    {
        using (HttpClient client = new HttpClient() { BaseAddress = new Uri(baseUrl) })
        {
            HttpResponseMessage response = await client.GetAsync(articlePath);
            return response.IsSuccessStatusCode;
        }
    }
    

    into this one:

    async Task Main()
    {
        var isAvailable = await IsArticleAvailable();
        Console.WriteLine(isAvailable);
    }
    
    Task<bool> IsArticleAvailable()
    {
        var articlePath = "/blog/clean-code-error-handling";
        return IsPathAvailable(articlePath);
    }
    
    Task<bool> IsPathAvailable(string articlePath)
    {
        var baseUrl = "https://www.code4it.dev/";
        return IsResourceAvailable(baseUrl, articlePath);
    }
    
    async Task<bool> IsResourceAvailable(string baseUrl, string articlePath)
    {
        using (HttpClient client = new HttpClient() { BaseAddress = new Uri(baseUrl) })
        {
            HttpResponseMessage response = await client.GetAsync(articlePath);
            return response.IsSuccessStatusCode;
        }
    }
    

    Notice that I removed both async and await keywords in the IsArticleAvailable and IsPathAvailable method.

    So, as you can see in this Gist, the only state machines are the ones for the Main method and for the IsResourceAvailable method.

    As usual, the more we improve memory usage, the better our applications will work.

    Other stuff

    There’s a lot more that you can improve. Look for articles that explain the correct usage of LINQ and why you should prefer HttpClientFactory over HttpClient.

    Run operations in parallel – but pay attention to the parallelism

    Let’s recap a bit what problem I needed to solve: I needed to get some details for a list of sports matches:

    Initial sequence diagram

As you can see, I perform the same set of operations for every match. Working on them in parallel improved the final result a bit.

    Sequence diagram with parallel operations

    Honestly, I was expecting a better improvement. Parallel calculation is not the silver bullet. And you should know how to implement it.

    And I still don’t know.

After many attempts, I’ve created this class that centralizes the usage of parallel operations, so that if I find a better way to implement it, I only need to update a single class.

    Feel free to copy it or suggest improvements.

    public static class ParallelHelper
    {
        public static IEnumerable<Out> PerformInParallel<In, Out>(IEnumerable<In> items, Func<In, Out> fn, int maxDegreeOfParallelism = 10)
        {
            var options = new ParallelOptions { MaxDegreeOfParallelism = maxDegreeOfParallelism };
    
            ConcurrentBag<Out> cb = new ConcurrentBag<Out>();
    
            Parallel.ForEach(items, options, item =>
            {
                cb.Add(fn(item));
            });
            return cb.ToList();
        }
    
        public static IEnumerable<Out> PerformInParallel<In, Out>(IEnumerable<IEnumerable<In>> batches, Func<In, Out> fn, int maxDegreeOfParallelism = 10)
        {
            var options = new ParallelOptions { MaxDegreeOfParallelism = maxDegreeOfParallelism };
            ConcurrentBag<Out> cb = new ConcurrentBag<Out>();
    
            foreach (var batch in batches)
            {
                Parallel.ForEach(batch, options, item =>
                {
                    cb.Add(fn(item));
                });
            }
            return cb.ToList();
        }
    
        public static IEnumerable<Out> PerformInParallel<In, Out>(IEnumerable<IEnumerable<In>> batches, Func<IEnumerable<In>, IEnumerable<Out>> fn, int maxDegreeOfParallelism = 10)
        {
            var options = new ParallelOptions { MaxDegreeOfParallelism = maxDegreeOfParallelism };
            ConcurrentBag<Out> cb = new ConcurrentBag<Out>();
    
            Parallel.ForEach(batches, options, batch =>
                {
                    var resultValues = fn(batch).ToList();
                    foreach (var result in resultValues)
                    {
                        cb.Add(result);
                    }
                });
            return cb.ToList();
        }
    }
    

    The first method performs the operation specified in the Func on every item passed in the IEnumerable parameter: then it aggregates the result in the ConcurrentBag object (it’s a thread-safe collection) and then returns the final result.

    The other methods do a similar thing but to a list of lists: this is useful when splitting the calculation into batches and performing each of these batches in sequence.

But why the MaxDegreeOfParallelism? Well, resources are not infinite; you can’t perform the same heavy operation on 200,000 items at the same time, especially if many requests arrive simultaneously. You have to limit the number of items processed in parallel.

    Parallel execution of assets

    In the picture above you can see the parallel execution of the search for assets: every call begins at the same moment, so the final timing is a lot better than if I had performed all the operations in sequence.

    Move to .NET 5

    As reported by the official documentation, there has been a huge improvement in performance in the latest version of .NET.

    Those improvements are mainly about the usage of Garbage Collector, JIT optimization, and usage of strings and Regex-s.

    If you are interested, here’s a good article on Microsoft’s blog.

So, did it really improve my application?

    Well, no.

    As you already know, the main bottlenecks come from external dependencies (aka API calls), so there was nothing that an update of the whole framework could really impact.

    Still, just to try it, I moved my application from .NET Core 3.1 to .NET 5: the porting was incredibly easy. But, as expected, I did not get any significant improvement.

    So, since the application was a dependency of a wider system, I rolled it back to .NET Core 3.1.

    Ask, discuss, communicate

    The last tip is one of the most simple yet effective ones: talk with your colleagues, keep track of what worked and what didn’t, and communicate with other developers and managers.

    Even if a question is silly, ask. Maybe you’ll find some tip that gives you the best idea.

    Have a call with your colleagues, share your code, and let them help you: even a simple trick, a tool they suggest, or an article that solves one of your problems can be the key to success.

    Don’t expect any silver bullet: you’ll improve your application with small steps.

    Wrapping up

    We’ve seen how I managed to improve the performance of an API endpoint, going from 14 seconds down to 3.

    In this article you’ve seen some .NET-related tips to improve the performance of your applications: nothing fancy, but those little steps might help you reach the desired result.

    Of course, there is more: if you want to know how compression algorithms and hosting models affect your applications, check out this article!

    If you have more tips, feel free to share them in the comments section!

    Happy coding!



    Source link

  • NITEX: Building a Brand and Digital Platform for Fashion’s New Supply Chain

    NITEX: Building a Brand and Digital Platform for Fashion’s New Supply Chain



    NITEX is not just another fashion-tech company. Their mission is to redefine the supply chain for fashion – bringing speed, sustainability, and intelligence to a traditionally rigid process. Their platform spans the entire workflow: design, trend forecasting, material sourcing, production, and logistics. In short, they offer a seamless, end-to-end system for brands who want to move faster and smarter.

    When NITEX approached us, the challenge was clear: they needed more than a website. They needed a platform that could translate their vision into an experience that worked for multiple audiences – brands seeking services, investors looking for clarity, factories wanting partnerships, and talent exploring opportunities.

    The project took shape over several months, moving from brand definition to UX architecture, UI design, and technical development. The turning point came with the realization that a single, linear site could not balance storytelling with action. To resolve this, we developed a dual-structure model: one path for narrative and inspiration, and another for practical conversion. This idea shaped every design and technical decision moving forward.

    Crafting the Hybrid Identity

    NITEX’s identity needed to reflect a unique duality: part fashion brand, part technology company. Our approach was to build a system that could flex between editorial elegance and sharp technical clarity.

    At the heart of the identity sits the NITEX logo, an angular form created from a forward-leaning N and X. This symbol is more than a mark – it acts as a flexible frame. The hollow center creates a canvas for imagery, data, or color, visualizing collaboration and adaptability.

    This angular geometry informed much of the visual language across the site:

    • Buttons expand or tilt along the logo’s angles when hovered.
    • The progress bar in navigation and footer fills in the same diagonal form.
    • Headlines reveal themselves with angled wipes, reinforcing a consistent rhythm.

    Typography was kept bold yet minimal, with global sans-serif structures that feel equally at home in high fashion and digital environments. Imagery played an equally important role. We chose photography that conveyed motion and energy, often with candid blur or dynamic framing. To push this further, we incorporated AI-generated visuals, adding intensity and reinforcing the sense of momentum at the core of the NITEX story. The result is a brand system that feels dynamic, flexible, and scalable – capable of stretching from streetwear to luxury contexts while always staying rooted in clarity and adaptability.

    Building the Engine

    A complex brand and experience required a strong technical foundation. For this, our developers chose tools that balanced performance, flexibility, and scalability:

    • Frontend: Nuxt
    • Backend / CMS: Sanity
    • Animations & Motion: GSAP and the Web Animations API

    The heavy reliance on native CSS transitions and the Web Animations API ensured smooth performance even on low-powered devices. GSAP was used to orchestrate more complex transitions while still keeping load times and resource use efficient. A key architectural decision was to give overlays their own URLs. This meant that when users opened deep-dive layers or content modules, those states were addressable, shareable, and SEO-friendly. This approach kept the experience immersive while ensuring that content remained accessible outside the narrative scroll.

    Defining the Flow

    Several features stand out in the NITEX site for how they balance storytelling with functionality:

    • Expandable overlays: Each narrative chapter can unfold into deep-dive layers – showing case studies, workflow diagrams, or leadership perspectives without breaking the scroll.
    • Dynamic conversion flows: Forms adapt to the user’s audience type – brands, investors, talent, or factories – showing tailored fields and next steps.
    • Calendar integration: Visitors can book demos or design lab visits directly, streamlining the lead process and reinforcing immediacy.

    This mix of storytelling modules and smart conversion flows ensured that every audience had a pathway forward, whether to be inspired, informed, or engaged.

    Bringing It to Life

    NITEX’s brand identity found its fullest expression in the motion and interaction design of the site. The site opens with scroll-based storytelling, each chapter unfolding with smooth transitions. Page transitions maintain energy, using angled wipes and overlays that slide in from the side. These overlays carry their own links, allowing users to dive deep without losing orientation. The angular motion language of the logo carries through:

    • Buttons expand dynamically on hover.
    • Rectangular components tilt into angular forms.
    • The dual-image module sees the N and X frame track the viewport, dynamically revealing new perspectives.

    This creates a consistent visual rhythm, where every motion feels connected to the brand’s DNA. The imagery reinforces this, emphasizing speed and creativity through motion blur, candid composition, and AI-driven intensity. Importantly, we kept the overall experience modular and scalable. Each content block is built on a flexible grid with clear typographic hierarchy. This ensures usability while leaving room for surprise – whether it’s an animated reveal, a bold image transition, or a subtle interactive detail.

    Under the Hood

    From a structural standpoint, the site was designed to scale as NITEX grows. The codebase follows a modular approach, with reusable components that can be repurposed across sections. Sanity’s CMS allows editors to easily add new chapters, forms, or modules without breaking the system.

    The split-entry structure – narrative vs. action – was the architectural anchor. This allowed us to keep storytelling immersive without sacrificing usability for users who came with a clear transactional intent.

    Looking Back

    This project was as much about balance as it was about creativity. Balancing brand storytelling with user conversion. Balancing motion and expressiveness with speed and performance. Balancing multiple audience needs within a single coherent system.

    One of the most rewarding aspects was seeing how the dual-experience model solved what initially felt like an unsolvable challenge: how to serve users who want inspiration and those who want action without building two entirely separate sites.

    The deep-dive overlays also proved powerful, letting NITEX show rather than just tell their story. They allowed us to layer complexity while keeping the surface experience clean and intuitive.

    Looking ahead, the NITEX platform is built to evolve. Future possibilities include investor dashboards with live performance metrics, brand-specific case modules curated by industry, or interactive workflow tools aligned with NITEX’s trend-to-delivery logic. The foundation we built makes all of this possible.

    Ultimately, the NITEX project reflects the company’s own values: clarity, adaptability, and speed. For us, it was an opportunity to merge brand design, UX, UI, and development into a single seamless system – one that redefines what a fashion-tech platform can look and feel like.



    Source link

  • Clean code tips – Tests | Code4IT


    Tests are as important as production code. Well, they are even more important! So writing them well brings lots of benefits to your projects.


    Clean code principles apply not only to production code but also to tests. Indeed, a test should be even cleaner, easier to understand, and more meaningful than production code.

    In fact, tests not only prevent bugs: they also document your application! New team members should look at tests to understand how a class, a function, or a module works.

    So, every test must have a clear meaning, must have its own raison d’être, and must be written well enough to let the readers understand it without too much fuss.

    In this last article of the Clean Code Series, we’re gonna see some tips to improve your tests.

    If you are interested in more tips about Clean Code, here are the other articles:

    1. names and function arguments
    2. comments and formatting
    3. abstraction and objects
    4. error handling
    5. tests

    Why you should keep tests clean

    As I said before, tests are also meant to document your code: given a specific input or state, they help you understand what the result will be in a deterministic way.

    But, since tests are dependent on the production code, you should adapt them when the production code changes: this means that tests must be clean and flexible enough to let you update them without big issues.

    If your test suite is a mess, even the slightest update in your code will force you to spend a lot of time updating your tests: that’s why you should organize your tests with the same care as your production code.

    Good tests also have a nice side effect: they make your code more flexible. Why? Well, if you have good test coverage, and all your tests are meaningful, you will be more confident in applying changes and adding new functionalities. Otherwise, when you change your code, you will not be sure that the new code works as expected, nor that you have not introduced any regression.

    So, having a clean, thorough test suite is crucial for the life of your application.

    How to keep tests clean

    We’ve seen why we should write clean tests. But how should you write them?

    Let’s write a bad test:

    [Test]
    public void CreateTableTest()
    {
        //Arrange
        string tableContent = @"<table>
            <thead>
                <tr>
                    <th>ColA</th>
                    <th>ColB</th>
                </tr>
            </thead>
            <tbody>
                <tr>
                    <td>Text1A</td>
                    <td>Text1B</td>
                </tr>
                <tr>
                    <td>Text2A</td>
                    <td>Text2B</td>
                </tr>
            </tbody>
        </table>";
    
        var tableInfo = new TableInfo(2);
    
    
        HtmlDocument doc = new HtmlDocument();
        doc.LoadHtml(tableContent);
        var node = doc.DocumentNode.ChildNodes[0];
    
        var part = new TableInfoCreator(node);
    
        var result = part.CreateTableInfo();
    
        tableInfo.SetHeaders(new string[] { "ColA", "ColB" });
        tableInfo.AddRow(new string[] { "Text1A", "Text1B" });
        tableInfo.AddRow(new string[] { "Text2A", "Text2B" });
    
        result.Should().BeEquivalentTo(tableInfo);
    }
    

    This test proves that the CreateTableInfo method of the TableInfoCreator class correctly parses the HTML passed as input and returns a TableInfo object that contains info about headers and rows.

    This is kind of a mess, isn’t it? Let’s improve it.

    Use appropriate test names

    What does CreateTableTest do? How does it help the reader understand what’s going on?

    We need to explicitly say what the test is meant to achieve. There are many ways to do it; one of the most common is the Given-When-Then pattern: every method name should express those concepts, possibly in a consistent way.

    I always like to use the same format when naming tests: {Something}_Should_{DoSomething}_When_{Condition}. This format explicitly shows what the test verifies and why it exists.

    So, let’s change the name:

    [Test]
    public void CreateTableInfo_Should_CreateTableInfoWithCorrectHeadersAndRows_When_TableIsWellFormed()
    {
        //Arrange
        string tableContent = @"<table>
            <thead>
                <tr>
                    <th>ColA</th>
                    <th>ColB</th>
                </tr>
            </thead>
            <tbody>
                <tr>
                    <td>Text1A</td>
                    <td>Text1B</td>
                </tr>
                <tr>
                    <td>Text2A</td>
                    <td>Text2B</td>
                </tr>
            </tbody>
        </table>";
    
        var tableInfo = new TableInfo(2);
    
    
        HtmlDocument doc = new HtmlDocument();
        doc.LoadHtml(tableContent);
        HtmlNode node = doc.DocumentNode.ChildNodes[0];
    
        var part = new TableInfoCreator(node);
    
        var result = part.CreateTableInfo();
    
        tableInfo.SetHeaders(new string[] { "ColA", "ColB" });
        tableInfo.AddRow(new string[] { "Text1A", "Text1B" });
        tableInfo.AddRow(new string[] { "Text2A", "Text2B" });
    
        result.Should().BeEquivalentTo(tableInfo);
    }
    

    Now, just by reading the name of the test, we know what to expect.

    Initialization

    The next step is to refactor the tests to initialize all the stuff in a better way.

    The first step is to remove the creation of the HtmlNode seen in the previous example, and move it to an external function: this will reduce code duplication and help the reader understand the test without worrying about the HtmlNode creation details:

    [Test]
    public void CreateTableInfo_Should_CreateTableWithHeadersAndRows_When_TableIsWellFormed()
    {
        //Arrange
        string tableContent = @"<table>
            <thead>
                <tr>
                    <th>ColA</th>
                    <th>ColB</th>
                </tr>
            </thead>
            <tbody>
                <tr>
                    <td>Text1A</td>
                    <td>Text1B</td>
                </tr>
                <tr>
                    <td>Text2A</td>
                    <td>Text2B</td>
                </tr>
            </tbody>
        </table>";
    
        var tableInfo = new TableInfo(2);
    
     // HERE!
        HtmlNode node = CreateNodeElement(tableContent);
    
        var part = new TableInfoCreator(node);
    
        var result = part.CreateTableInfo();
    
        tableInfo.SetHeaders(new string[] { "ColA", "ColB" });
        tableInfo.AddRow(new string[] { "Text1A", "Text1B" });
        tableInfo.AddRow(new string[] { "Text2A", "Text2B" });
    
        result.Should().BeEquivalentTo(tableInfo);
    }
    
    
    private static HtmlNode CreateNodeElement(string content)
    {
        HtmlDocument doc = new HtmlDocument();
        doc.LoadHtml(content);
        return doc.DocumentNode.ChildNodes[0];
    }
    

    Then, depending on what you are testing, you could even extract input and output creation into different methods.

    If you extract them, you may end up with something like this:

    [Test]
    public void CreateTableInfo_Should_CreateTableWithHeadersAndRows_When_TableIsWellFormed()
    {
        var node = CreateWellFormedHtmlTable();
    
        var part = new TableInfoCreator(node);
    
        var result = part.CreateTableInfo();
    
        TableInfo tableInfo = CreateWellFormedTableInfo();
    
        result.Should().BeEquivalentTo(tableInfo);
    }
    
    private static TableInfo CreateWellFormedTableInfo()
    {
        var tableInfo = new TableInfo(2);
        tableInfo.SetHeaders(new string[] { "ColA", "ColB" });
        tableInfo.AddRow(new string[] { "Text1A", "Text1B" });
        tableInfo.AddRow(new string[] { "Text2A", "Text2B" });
        return tableInfo;
    }
    
    private HtmlNode CreateWellFormedHtmlTable()
    {
        var table = CreateWellFormedTable();
        return CreateNodeElement(table);
    }
    
    private static string CreateWellFormedTable()
        => @"<table>
            <thead>
                <tr>
                    <th>ColA</th>
                    <th>ColB</th>
                </tr>
            </thead>
            <tbody>
                <tr>
                    <td>Text1A</td>
                    <td>Text1B</td>
                </tr>
                <tr>
                    <td>Text2A</td>
                    <td>Text2B</td>
                </tr>
            </tbody>
        </table>";
    

    So, now, the general structure of the test is definitely better. But, to understand what’s going on, readers have to jump to the details of both CreateWellFormedHtmlTable and CreateWellFormedTableInfo.

    Even worse, you have to duplicate those methods for every test case. You could do a further step by joining the input and the output into a single object:

    
    public class TableTestInfo
    {
        public HtmlNode Html { get; set; }
        public TableInfo ExpectedTableInfo { get; set; }
    }
    
    private TableTestInfo CreateTestInfoForWellFormedTable() =>
    new TableTestInfo
    {
        Html = CreateWellFormedHtmlTable(),
        ExpectedTableInfo = CreateWellFormedTableInfo()
    };
    

    and then, in the test, you simplify everything in this way:

    [Test]
    public void CreateTableInfo_Should_CreateTableWithHeadersAndRows_When_TableIsWellFormed()
    {
        var testTableInfo = CreateTestInfoForWellFormedTable();
    
        var part = new TableInfoCreator(testTableInfo.Html);
    
        var result = part.CreateTableInfo();
    
        TableInfo tableInfo = testTableInfo.ExpectedTableInfo;
    
        result.Should().BeEquivalentTo(tableInfo);
    }
    

    In this way, you have all the info in a centralized place.

    But, sometimes, this is not the best way. Or, at least, in my opinion.

    In the previous example, the most important part is how a specific input gets processed. So, to help readers, I usually prefer to keep inputs and outputs listed directly in the test method.

    On the contrary, if I had to test for some properties of a class or method (for instance, test that the sorting of an array with repeated values works as expected), I’d extract the initializations outside the test methods.

    AAA: Arrange, Act, Assert

    A good way to write tests is to write them with a structured and consistent template. The most used way is the Arrange-Act-Assert pattern:

    That means that in the first part of the test you set up the objects and variables that will be used; then, you perform the operation under test; finally, you check whether the test passes by using assertions (like a simple Assert.IsTrue(condition)).

    I prefer to explicitly write comments to separate the 3 parts of each test, like this:

    [Test]
    public void CreateTableInfo_Should_CreateTableWithHeadersAndRows_When_TableIsWellFormed()
    {
        // Arrange
        var testTableInfo = CreateTestInfoForWellFormedTable();
        TableInfo expectedTableInfo = testTableInfo.ExpectedTableInfo;
    
        var part = new TableInfoCreator(testTableInfo.Html);
    
        // Act
        var actualResult = part.CreateTableInfo();
    
        // Assert
        actualResult.Should().BeEquivalentTo(expectedTableInfo);
    }
    

    Only one assertion per test (with some exceptions)

    Ideally, you may want to write tests with only a single assertion.

    Let’s take as an example a method that builds a User object using the parameters in input:

    public class User
    {
        public string FirstName { get; set; }
        public string LastName { get; set; }
        public DateTime BirthDate { get; set; }
        public Address AddressInfo { get; set; }
    }
    
    public class Address
    {
        public string Country { get; set; }
        public string City { get; set; }
    }
    
    public User BuildUser(string name, string lastName, DateTime birthdate, string country, string city)
    {
        return new User
        {
            FirstName = name,
            LastName = lastName,
            BirthDate = birthdate,
            AddressInfo = new Address
            {
                Country = country,
                City = city
            }
        };
    }
    

    Nothing fancy, right?

    So, ideally, we should write tests with a single assert (in the next examples, ignore the test names – I removed the When part!):

    [Test]
    public void BuildUser_Should_CreateUserWithCorrectName()
    {
        // Arrange
        var name = "Davide";
    
        // Act
        var user = BuildUser(name, null, DateTime.Now, null, null);
    
        // Assert
        user.FirstName.Should().Be(name);
    }
    
    [Test]
    public void BuildUser_Should_CreateUserWithCorrectLastName()
    {
        // Arrange
        var lastName = "Bellone";
    
        // Act
        var user = BuildUser(null, lastName, DateTime.Now, null, null);
    
        // Assert
        user.LastName.Should().Be(lastName);
    }
    

    … and so on. Imagine writing a test for each property: your test class will be full of small methods that only clutter the code.

    If you can group assertions in a logical way, you could write more asserts in a single test:

    [Test]
    public void BuildUser_Should_CreateUserWithCorrectPlainInfo()
    {
        // Arrange
        var name = "Davide";
        var lastName = "Bellone";
        var birthDay = new DateTime(1991, 1, 1);
    
        // Act
        var user = BuildUser(name, lastName, birthDay, null, null);
    
        // Assert
        user.FirstName.Should().Be(name);
        user.LastName.Should().Be(lastName);
        user.BirthDate.Should().Be(birthDay);
    }
    

    This is fine because the three properties (FirstName, LastName, and BirthDate) are logically on the same level and with the same meaning.

    One concept per test

    As stated before, the point is not to check exactly one property per test: the point is that each and every test must be focused on a single concept.

    By looking at the previous examples, you can notice that the AddressInfo property is built using the values passed as parameters on the BuildUser method. That makes it a good candidate for its own test.

    Another way of seeing this tip is to think in terms of properties (I mean, the mathematical properties). If you’re creating your own custom sorting, think of which properties apply to your method. For instance:

    • an empty list, when sorted, is still an empty list
    • a list with 1 item, when sorted, still has that one item
    • applying the sorting to an already sorted list does not change the order

    and so on.

    So you don’t want to test every possible input but focus on the properties of your method.
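
    As a rough sketch of how those property-focused tests could look (assuming a hypothetical MySorter.Sort method, plus NUnit and FluentAssertions as in the other examples):

    [Test]
    public void Sort_Should_ReturnEmptyList_When_InputIsEmpty()
    {
        var result = MySorter.Sort(new List<int>());
    
        result.Should().BeEmpty();
    }
    
    [Test]
    public void Sort_Should_KeepTheSingleItem_When_InputHasOneItem()
    {
        var result = MySorter.Sort(new List<int> { 42 });
    
        result.Should().ContainSingle().Which.Should().Be(42);
    }
    
    [Test]
    public void Sort_Should_NotChangeTheOrder_When_InputIsAlreadySorted()
    {
        var alreadySorted = new List<int> { 1, 2, 3 };
    
        var result = MySorter.Sort(alreadySorted);
    
        result.Should().ContainInOrder(1, 2, 3);
    }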

    In a similar way, think of a method that gives you the number of days between today and a certain date. In this case, just a single test is not enough.

    You have to test – at least – what happens if the other date:

    • is exactly today
    • is in the future
    • is in the past
    • is next year
    • is February the 29th of a leap year (to check an odd case)
    • is February the 30th (to check an invalid date)

    Each of these tests is against a single value, so you might be tempted to put everything in a single test method. But here you are running tests against different concepts, so place every one of them in a separate test method.

    Of course, in this example, you must not rely on the native way to get the current date (in C#, DateTime.Now or DateTime.UtcNow). Rather, you have to mock the current date.
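
    A minimal sketch of that idea, assuming a hand-rolled clock abstraction (the IClock interface, the DaysCalculator class, and the FakeClock test double are illustrative names, not something taken from this article):

    public interface IClock
    {
        DateTime UtcNow { get; }
    }
    
    public class DaysCalculator
    {
        private readonly IClock _clock;
    
        public DaysCalculator(IClock clock) => _clock = clock;
    
        // Number of days between "today" (as seen by the injected clock) and another date.
        public int DaysFromToday(DateTime other) => (other.Date - _clock.UtcNow.Date).Days;
    }
    
    // In the test project: a fake clock frozen at a known date.
    public class FakeClock : IClock
    {
        public DateTime UtcNow { get; set; }
    }
    
    [Test]
    public void DaysFromToday_Should_ReturnZero_When_DateIsToday()
    {
        // Arrange
        var clock = new FakeClock { UtcNow = new DateTime(2021, 2, 28) };
        var sut = new DaysCalculator(clock);
    
        // Act
        var days = sut.DaysFromToday(new DateTime(2021, 2, 28));
    
        // Assert
        days.Should().Be(0);
    }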

    FIRST tests: Fast, Independent, Repeatable, Self-validating, and Timely

    You’ll often read the word FIRST when talking about the properties of good tests. What does FIRST mean?

    It is simply an acronym. A test must be Fast, Independent, Repeatable, Self-validating, and Timely.

    Fast

    Tests should be fast. How fast? Fast enough not to discourage developers from running them. This property applies only to Unit Tests: in fact, while each test should run in less than 1 second, you may have some Integration and E2E tests that take more than 10 seconds – it depends on what you’re testing.

    Now, imagine that you have to update one class (or one method), and you have to re-run all your tests. If the whole test suite takes just a few seconds, you can run it whenever you want – some devs run all the tests every time they hit Save. If every single test takes 1 second to run, and you have 200 tests, a simple update to one class makes you lose at least 200 seconds: more than 3 minutes. Yes, I know that you can run them in parallel, but that’s not the point!

    So, keep your tests short and fast.

    Independent

    Every test method must be independent of the other tests.

    This means that the result and the execution of one method must not impact the execution of another one. Likewise, one method must not rely on the execution of another method.

    A concrete example?

    public class MyTests
    {
        string userName = "Lenny";
    
        [Test]
        public void Test1()
        {
            Assert.AreEqual("Lenny", userName);
            userName = "Carl";
    
        }
    
        [Test]
        public void Test2()
        {
            Assert.AreEqual("Carl", userName);
        }
    
    }
    

    Those tests are perfectly valid if run in sequence. But Test1 affects the execution of Test2 by setting a shared field used by the second method. What happens if you run only Test2? It will fail. The same happens if the tests are run in a different order.

    So, you can transform the previous method in this way:

    public class MyTests
    {
        string userName;
    
        [SetUp]
        public void Setup()
        {
            userName = "Boe";
        }
    
        [Test]
        public void Test1()
        {
            userName = "Lenny";
            Assert.AreEqual("Lenny", userName);
    
        }
    
        [Test]
        public void Test2()
        {
            userName = "Carl";
            Assert.AreEqual("Carl", userName);
        }
    
    }
    

    In this way, we have a default value, Boe, that gets overridden by the individual test methods – only when needed.

    Repeatable

    Every unit test must be repeatable: this means that you must be able to run it at any moment and on every machine (and always get the same result).

    So, avoid all strong dependencies on your machine (like file names, absolute paths, and so on), and everything that is not directly under your control: the current date and time, randomly generated numbers, and GUIDs.

    To work with them, there’s only one solution: abstract them and use a mocking mechanism.

    If you want to learn 3 ways to do this, check out my 3 ways to inject DateTime and test it. There I explained how to inject DateTime, but the same approaches work even for GUIDs and random numbers.
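
    For instance, a GUID generator can be abstracted in the same spirit – again, IGuidProvider is just an illustrative name, not an API taken from that article:

    public interface IGuidProvider
    {
        Guid NewGuid();
    }
    
    // Production implementation: delegates to the framework.
    public class GuidProvider : IGuidProvider
    {
        public Guid NewGuid() => Guid.NewGuid();
    }
    
    // Test double: always returns the same, predictable value.
    public class FixedGuidProvider : IGuidProvider
    {
        private readonly Guid _value;
    
        public FixedGuidProvider(Guid value) => _value = value;
    
        public Guid NewGuid() => _value;
    }

    The class under test receives an IGuidProvider instead of calling Guid.NewGuid() directly, so the test can assert against a known value.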

    Self-validating

    You must be able to see the result of a test without performing any further manual actions.

    So, don’t write your test results on an external file or source, and don’t put breakpoints on your tests to see if they’ve passed.

    Just put meaningful assertions and let your framework (and IDE) tell you the result.

    Timely

    You must write your tests when required. Usually, when using TDD, you write your tests right before your production code.

    So, this particular property applies only to devs who use TDD.

    Wrapping up

    In this article, we’ve seen that even though many developers consider tests redundant and not worthy of attention, they are first-class citizens of our applications.

    Paying enough attention to tests brings us a lot of advantages:

    • tests document our code, thus helping onboard new developers
    • they help us deploy a new version of our product with confidence, without worrying about regressions
    • they give us confidence that our code is free of known bugs (well, you’ll always have a few bugs; it’s just that you haven’t discovered them yet)
    • code becomes more flexible and can be extended without too many worries

    So, write meaningful tests, and write them well.

    Quality over quantity, always!

    Happy coding!



    Source link

  • Generating Your Website from Scratch for Remixing and Exploration

    Generating Your Website from Scratch for Remixing and Exploration



    Codrops’ “design” has been long overdue for a refresh. I’ve had ideas for a new look floating around for ages, but actually making time to bring them to life has been tough. It’s the classic shoemaker’s shoes problem: I spend my days answering emails, editing articles and (mostly) managing Codrops and the amazing contributions from the community, while the site itself quietly gathers dust 😂

    Still, the thought of reimagining Codrops has been sitting in the back of my mind. I’d already been eyeing Anima as a tool that could make the process faster, so I reached out to their team. They were kind enough to support us with this review (thank you so much!) and it’s a true win-win: I get to finally test my idea for Codrops, and you get a good look at how the tool holds up in practice 🤜🤛

    So, Anima is a platform made to bridge the gap between design and development. It allows you to take an existing website, either one of your own projects or something live on the web, and bring it into a workspace where the layout and elements can be inspected, edited, and reworked. From there, you can export the result as clean, production-ready code in React, HTML/CSS, or Tailwind. In practice, this means you can quickly prototype new directions, remix existing layouts, or test ideas without starting completely from scratch.

    Obviously, you should not use this to copy other people’s work, but rather to prototype your own ideas and remix your projects!

    Let me take you along on a little experiment I ran with it.

    Getting started

    Screenshot of Anima Playground interface

    Anima Link to Code was introduced in July this year and promises to take any design or web page and transform it into live editable code. You can generate, preview, and export production ready code in React, TypeScript, Tailwind CSS, or plain HTML and CSS. That means you can start with a familiar environment, test an idea, and immediately see how it holds up in real code rather than staying stuck in the design stage. It also means you can poke around, break things, and try different directions without manually rebuilding the scaffolding each time. That kind of speed is what usually makes or breaks whether I stick with an experiment or abandon it halfway through.

    To begin, I decided to use the Codrops homepage as my guinea pig. I have always wondered how it would feel reimagined as a bento style grid. Normally, if I wanted to try that, I would either spend hours rewriting markup and CSS by hand or rely on an AI prompt that would often spiral into unrelated layouts and syntax errors. It would already be a great help if I could visualize my idea and play with it a bit!

    After pasting in the Codrops URL, this is what came out. A React project was generated in seconds.

    Generated Codrops homepage project

    The first impression was surprisingly positive. The homepage looked recognizable and the layout did not completely collapse. Yes, there was a small glitch where the Webzibition box background was not sized correctly, but overall it was close enough that I felt comfortable moving on. That is already more than I can say for many auto generation tools where the output is so mangled that you do not even know where to start.

    Experimenting with a bento grid

    Now for the fun part. I typed a simple prompt that said, “Make a bento grid of all these items.” Almost immediately I hit an error. My usual instinct in this situation is to give up since vibe coding often collapses the moment an error shows up, and then it becomes a spiral of debugging someone else’s half generated mess. But let’s try this instead of quitting right away 🙂 The fix worked and I got a quirky but functioning bento grid layout:

    First attempt at bento grid

    The result was not exactly what I had in mind. Some elements felt off balance and the spacing was not ideal. Still, I had something on screen to iterate on, which is already a win compared to starting from scratch. So I pushed further. Could I bring the Creative Hub and Webzibition modules into this grid? A natural language prompt like “Place the Creative Hub box into the bento style container of the articles” felt like a good test.

    And yes, it actually worked. The Creative Hub box slipped into the grid container:

    Creative Hub moved into container

    The layout was starting to look cramped, so I tried another prompt. I asked Anima to also move the Webzibition box into the same container and to make it span full width. The generation was quick with barely a pause, and suddenly the page turned into this:

    Webzibition added to full width

    This really showed me what it’s good at: iteration is fast. You don’t have to stop, rethink the grid, or rewrite CSS by hand. You just throw an idea in, see what comes back, and keep moving. It feels more like sketching in a notebook than carefully planning a layout. For prototyping, that rhythm is exactly what I want. Really into this type of layout for Codrops!

    Looking under the hood

    Visuals are only half the story. The bigger question is what kind of code Anima actually produces. I opened the generated React and Tailwind output, fully expecting a sea of meaningless divs and tangled class names.

    To my surprise, the code was clean. Semantic elements were present, the structure was logical, and everything was just readable. There was no obvious divitis, and the markup did not feel like something I would want to burn and rewrite from scratch. It even got me thinking about how much simpler maintaining Codrops might be if it were a lean React app with Tailwind instead of living inside the layers of WordPress 😂

    There is also a Chrome extension called Web to Code, which lets you capture any page you are browsing and instantly get editable code. With this, inner pages like dashboards, login screens, or even private areas of a site you are working on can be pulled into a sandbox and played with directly.

    Anima Web to Code Chrome extension

    Pros and cons

    • Pros: Fast iteration, surprisingly clean code, easy setup, beginner-friendly, genuinely fun to experiment with.
    • Cons: Occasional glitches, exported code still needs cleanup, limited customization, not fully production-ready.

    Final thoughts

    Anima is not magic and it is not perfect. It will not replace deliberate coding, and it should not. But as a tool for quick prototyping, remixing existing designs, or exploring how a site might feel with a new structure, it is genuinely fun and surprisingly capable. The real highlight for me is the speed of iteration: you try an idea, see the result instantly, and either refine it or move on. That rhythm is addictive for creative developers who like to sketch in code rather than commit to heavy rebuilds from scratch.

    Verdict: Anima shines as a playground for experimentation and learning. If you’re a designer or developer who enjoys fast iteration, you’ll likely find it inspiring. If you need production-ready results for client work, you’ll still want to polish the output or stick with more mature frameworks. But for curiosity, prototyping, and a spark of creative joy, Anima is worth your time and you might be surprised at how much fun it is to remix the web this way.



    Source link

  • how to view Code Coverage report on Azure DevOps | Code4IT


    Code coverage is a good indicator of the health of your projects. We’ll see how to show Cobertura reports associated with your builds on Azure DevOps and how to display the progress on a Dashboard.


    Code coverage is a good indicator of the health of your project: the more your project is covered by tests, the lesser are the probabilities that you have easy-to-find bugs in it.

    Even though 100% code coverage is a good result, it is not enough: you have to check whether your tests are meaningful and bring value to the project; it really doesn’t make any sense to cover every line of your production code with tests that are valid only for the happy path; you also have to cover the edge cases!

    But, even if it’s not enough on its own, having an idea of the code coverage of your project is a good practice: it helps you understand where you should write more tests and, eventually, helps you remove some bugs.

    In a previous article, we’ve seen how to use Coverlet and Cobertura to view the code coverage report on Visual Studio (of course, for .NET projects).

    In this article, we’re gonna see how to show that report on Azure DevOps: by using a specific command (or, even better, a set of flags) on your YAML pipeline definition, we are going to display that report for every build we run on Azure DevOps. This simple addition will help you see the status of a specific build and, if it’s the case, update the code to add more tests.

    Then, in the second part of this article, we’re gonna see how to view the coverage history on your Azure DevOps dashboard, by using a plugin called Code Coverage Protector.

    But first, let’s start with the YAML pipelines!

    Coverlet – the NuGet package for code coverage

    As already explained in my previous article, the very first thing to do to add code coverage calculation is to install a NuGet package called Coverlet. This package must be installed in every test project in your Solution.

    So, running a simple dotnet add package coverlet.msbuild on your test projects is enough!

    Create YAML tasks to add code coverage

    Once we have Coverlet installed, it’s time to add the code coverage evaluation to the CI pipeline.

    We need to add two steps to our YAML file: one for collecting the code coverage on test projects, and one for actually publishing it.

    Run tests and collect code coverage results

    Since we are working with .NET Core applications, we need to use a DotNetCoreCLI@2 task to run dotnet test. But we need to specify some attributes: in the arguments field, add /p:CollectCoverage=true to tell the task to collect code coverage results, and /p:CoverletOutputFormat=cobertura to specify which kind of code coverage format we want to receive as output.

    The task will have this form:

    - task: DotNetCoreCLI@2
      displayName: "Run tests"
      inputs:
        command: "test"
        projects: "**/*[Tt]est*/*.csproj"
        publishTestResults: true
        arguments: "--configuration $(buildConfiguration) /p:CollectCoverage=true /p:CoverletOutputFormat=cobertura"
    

    You can see the code coverage preview directly in the log panel of the executing build. The ASCII table shows the code coverage percentage for each module, broken down into the lines, branches, and methods covered by tests.

    Logging dotnet test

    Another interesting thing to notice is that this task generates two files: a trx file, which contains the test results (which tests passed, which ones failed, and other info), and a coverage.cobertura.xml file, which we will use in the next step to publish the coverage results.

    dotnet test generated files

    Publish code coverage results

    Now that we have the coverage.cobertura.xml file, the last thing to do is to publish it.

    Create a task of type PublishCodeCoverageResults@1, specify that the result format is Cobertura, and then specify the location of the file to be published.

    - task: PublishCodeCoverageResults@1
      displayName: "Publish code coverage results"
      inputs:
        codeCoverageTool: "Cobertura"
        summaryFileLocation: "**/*coverage.cobertura.xml"
    

    Final result

    Now that we know which tasks to add, we can write the most basic version of a build pipeline:

    trigger:
      - master
    
    pool:
      vmImage: "windows-latest"
    
    variables:
      solution: "**/*.sln"
      buildPlatform: "Any CPU"
      buildConfiguration: "Release"
    
    steps:
      - task: DotNetCoreCLI@2
        displayName: "Build"
        inputs:
          command: "build"
      - task: DotNetCoreCLI@2
        displayName: "Run tests"
        inputs:
          command: "test"
          projects: "**/*[Tt]est*/*.csproj"
          publishTestResults: true
          arguments: "--configuration $(buildConfiguration) /p:CollectCoverage=true /p:CoverletOutputFormat=cobertura"
      - task: PublishCodeCoverageResults@1
        displayName: "Publish code coverage results"
        inputs:
          codeCoverageTool: "Cobertura"
          summaryFileLocation: "**/*coverage.cobertura.xml"
    

    So, here, we simply build the solution, run the tests and publish both test and code coverage results.

    Where can we see the results?

    If we go to the build execution details, we can see the tests and coverage results under the Tests and coverage section.

    Build summary panel

    By clicking on the Code Coverage tab, we can jump to the full report, where we can see how many lines and branches we have covered.

    Test coverage report

    Then, when we click on a class (in this case, CodeCoverage.MyArray), we can navigate to the class details to see which lines have been covered by tests.

    Test coverage details on the MyArray class

    Code Coverage Protector: an Azure DevOps plugin

    Now what? We should keep track of the code coverage percentage over time. But opening every build execution to check the progress is not a good idea, is it? We need another way to see the progress.

    A really useful plugin to manage this use case is Code Coverage Protector, developed by Dave Smits: among other things, it allows you to display the status of code coverage directly on your Azure DevOps Dashboards.

    To install it, head to the plugin page on the marketplace and click Get it free.

    Code Coverage Protector plugin

    Once you have installed it, you can add one or more of its widgets to your project’s Dashboard, define which Build pipeline it must refer to, select which metric must be taken into consideration (line, branch, class, and so on), and set up a few other options (like the size of the widget).

    Code Coverage Protector widget on Azure Dashboard

    So, now, with just one look you can see the progress of your project.

    Wrapping up

    In this article, we’ve seen how to publish code coverage reports for .NET applications on Azure DevOps. We’ve used Cobertura and Coverlet to generate the reports, some YAML configurations to show them in the related build panel, and Code Coverage Protector to show the progress in your Azure DevOps dashboard.

    If you want to take it one step further, you could use Code Coverage Protector as a build step to make your builds fail if the current code coverage percentage is lower than in previous builds.

    Happy coding!





    Source link

  • [ITA] Azure DevOps: build and release pipelines to deploy with confidence


    About the author

    Davide Bellone is a Principal Backend Developer with more than 10 years of professional experience with Microsoft platforms and frameworks.

    He loves learning new things and sharing these learnings with others: that’s why he writes on this blog and is involved as a speaker at tech conferences.

    He’s a Microsoft MVP 🏆, conference speaker (here’s his Sessionize Profile) and content creator on LinkedIn.



    Source link