بلاگ

  • How to run integration tests for .NET API | Code4IT


    Integration tests are useful to check if multiple components fit together well. How can you test your APIs? And how can you mock dependencies?


    You already write Unit tests, right?

    But sometimes you want to run deeper tests, exercising not just a small part of your code but the whole execution path.

    In this article, I’m going to explain how to run integration tests on your APIs without relying on external tools like Postman: all the tests will be defined within the same solution next to your unit tests.

    Just as a reminder: integration tests are used to check that multiple parts of your system work correctly together; this includes networks, database access, file system and so on. The correctness of single components (meaning single classes and methods) is tested via unit tests.

    Time to write some code!

    API setup

    I’ve created a simple API with .NET Core 3.1.

    It’s really simple: there’s only one endpoint, /api/pokemon/{pokemonName} which, given a Pokémon name, returns its number and its types.

    [HttpGet]
    [Route("{pokemonName}")]
    public async Task<ActionResult<PokemonViewModel>> Get(string pokemonName)
    {
        var fullInfo = await _pokemonService.GetPokemonInfo(pokemonName);
        if (fullInfo != null)
        {
            return new PokemonViewModel
            {
                Name = fullInfo.Name,
                Number = fullInfo.Order,
                Types = fullInfo.Types.Select(x => x.Type.Name).ToArray()
            };
        }
        else
            return NotFound();
    }
    

    The _pokemonService variable is populated in the constructor via Dependency Injection, and its type is IPokemonService.

    The IPokemonService interface is implemented by the PokemonService class, which reads data from an external API, https://pokeapi.co/api, parses the result, and returns it as a complex structure. The API controller calls the GetPokemonInfo method and maps some of the fields to the view model.
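    The interface itself isn’t shown here; a minimal sketch, assuming a hypothetical PokemonFullInfo model matching the properties used by the controller above, could be:

    public interface IPokemonService
    {
        // Returns the full Pokémon details, or null when nothing is found
        Task<PokemonFullInfo> GetPokemonInfo(string pokemonName);
    }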

    Lastly, let’s not forget to define the dependencies in the Startup class:

    public void ConfigureServices(IServiceCollection services)
    {
        services.AddControllers();
        services.AddScoped<IPokemonService, PokemonService>();
    }
    

    That’s it! Nothing cumbersome, right?

    TIP: do you know the difference between Scoped and Transient when talking about DI? Check out this article!

    In-memory test server

    Time for some tests!

    Create a new test project within the same solution; you can use your favorite testing framework: there’s no difference between MSTest, NUnit, xUnit, or anything else.

    When you’re done, you need to install the Microsoft.AspNetCore.Mvc.Testing package via NuGet or via the CLI.

    The purpose of my tests is to spin up an instance of my API in memory, call it, and check the result of the whole process.

    First of all, you need to instantiate a new HttpClient:

    var factory = new WebApplicationFactory<APIIntegrationTestsExample.Program>();
    
    var client = factory.CreateClient();
    

    The variable factory creates a TestServer whose starting point is defined in the APIIntegrationTestsExample.Program class: this is exactly the same class used by the real project, scaffolded by default when .NET creates a new project. This ensures that we are using all the real dependencies and configurations.

    Finally, I’ve created the HTTP client that can be used to interact with my API; you can do it in the simplest way you can imagine:

    HttpResponseMessage sutHttpResponse = await client.GetAsync("/api/pokemon/magmar");
    

    Here I have called the endpoint exposed by my in-memory server and stored the result in the sutHttpResponse variable. Now I can check whatever I want, from status code to content:

    string stringContent = await sutHttpResponse.Content.ReadAsStringAsync();
    
    // System.Text.Json
    var sutObjectResult = JsonSerializer.Deserialize<PokemonViewModel>(stringContent, new JsonSerializerOptions
    {
        PropertyNameCaseInsensitive = true
    });
    
    Assert.AreEqual("Magmar", sutObjectResult.Name, true);
    
    Assert.IsTrue(sutHttpResponse.IsSuccessStatusCode);
    

    Mocking dependencies

    There might be cases where you want to mock a dependency to avoid connections with external resources, like a database or an external API. In this case, I want to replace the PokemonService class with a mock. This can be a concrete class defined in your test library, or a mock created with tools like Moq or NSubstitute.
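    As a minimal sketch, assuming the hypothetical PokemonFullInfo and PokemonTypeInfo models implied by the controller code, such a concrete stub might look like this:

    public class MockPokemonService : IPokemonService
    {
        // Returns canned data instead of calling the real external API
        public Task<PokemonFullInfo> GetPokemonInfo(string pokemonName)
        {
            var fullInfo = new PokemonFullInfo
            {
                Name = pokemonName,
                Order = 126,
                Types = new List<PokemonTypeInfo>()
            };

            return Task.FromResult(fullInfo);
        }
    }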

    Once you have defined the mock class, which I called MockPokemonService, you can replace the creation of the HttpClient object:

    var client = factory.WithWebHostBuilder(builder =>
    {
        // Microsoft.AspNetCore.TestHost;
        builder.ConfigureTestServices(services =>
        {
            services.AddScoped<IPokemonService, MockPokemonService>();
        });
    }).CreateClient();
    

    In this way, you can customize the client by adding additional services, thanks to the ConfigureTestServices method defined in the Microsoft.AspNetCore.TestHost namespace. Notice that you must use ConfigureTestServices, not ConfigureServices!

    This overrides only the specified dependency, leaving all the others untouched.

    I used this method in a project to remove the dependency on an external API that returned a very complicated JSON file, replacing the remote data with an in-memory copy read from a JSON manifest file stored as a static resource.

    Conclusion

    You should care about writing not only unit tests but also the other types: integration tests, end-to-end tests, and so on.

    Here we’ve seen how to instantiate a TestServer to call our APIs and check the results.

    An idea can be to instantiate the HttpClient in the constructor or in the test setup, and then write tests against different inputs, as in the sketch below.
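    For reference, here is a minimal MSTest sketch of that setup (the test class name and the unknown-Pokémon scenario are illustrative; it assumes the service returns null for unknown names, as the controller code suggests):

    [TestClass]
    public class PokemonApiIntegrationTests
    {
        private readonly HttpClient _client;

        public PokemonApiIntegrationTests()
        {
            // One in-memory server and client shared by all tests in this class
            var factory = new WebApplicationFactory<APIIntegrationTestsExample.Program>();
            _client = factory.CreateClient();
        }

        [TestMethod]
        public async Task Get_UnknownPokemon_ReturnsNotFound()
        {
            HttpResponseMessage response = await _client.GetAsync("/api/pokemon/not-a-pokemon");

            // The controller returns NotFound() when the service yields no data
            Assert.AreEqual(HttpStatusCode.NotFound, response.StatusCode);
        }
    }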

    Have you ever used it?

    On the Microsoft documentation you’ll find a complete description of Integration tests with C#.

    If you want to see this example, head to my GitHub repo!

    Happy coding!




  • Silent Lynx APT Targets Dushanbe with Espionage Campaign



    • Introduction
    • Timeline
    • Key Targets
      • Industries Affected
      • Geographical Focus
    • Infection Chain
    • Initial Findings
    • Technical Analysis
      • Campaign – I
        • The LNK Way
        • Malicious SILENT LOADER
        • Malicious LAPLAS Implant – TCP & TLS
        • Malicious .NET Implant – SilentSweeper
      • Campaign – II
        • Malicious .NET Implant – SilentSweeper
        • VBScript
        • Malicious PowerShell Script
    • Hunting and Infrastructure
    • Attribution
    • Early Remediations
    • Conclusion
    • SEQRITE Protection
    • IOCs
    • MITRE ATT&CK

    Authors: Subhajeet Singha, Priya Patel, Sathwik Ram Prakki.

    Introduction

    Seqrite Labs’ APT Team was the first to assign the name Silent Lynx to this threat group. Both before and after that, multiple researchers identified related campaigns and referred to the group by various names, including YoroTrooper, Sturgeon Phisher, Cavalry Werewolf, ShadowSilk, and others. Since we were the first to uncover and track these campaigns under that name, we have continued to refer to the group as Silent Lynx to maintain consistency and avoid the confusion caused by multiple overlapping aliases.

    As reported by multiple research vendors and by us, Silent Lynx is well known for orchestrating spear-phishing campaigns and for posing as government officials to target government employees. Using custom-made and sometimes readily available offensive tooling from open-source projects, the group has mostly focused on Central Asian think-tanks and governments, the Russian government, and some Southeast Asian nations.

    In this blog, we’ll explore how we identified the same group making sloppy changes in how it deploys stagers, and committing small OPSEC blunders that led us to identify campaigns against entities tied to the Azerbaijan-Russia relationship, carried out with fake RAR archives. The group has also been targeting entities tied to China-Central Asia relations with a malicious .NET implant. We believe the group’s sole purpose is espionage, carried out in a hasty manner that leaves behind many blunders, which in turn led this research to multiple findings. We will also look at the infrastructure behind the multiple campaigns and implants uncovered during this research.

    Last but not least is the theme of this research: “The roads lead to Dushanbe”. The reason for this choice will become clear as the theme is gradually incorporated into the later parts of the blog.

    Timeline

    Key Targets

    Silent Lynx has been targeting multiple sectors across various nations, and in this research we will focus on the ones with a close geographic relation to events in Astana, Dushanbe and Baku. The first campaign, which we tracked from early June to September, targeted Chinese and Central Asian governmental think-tanks using the theme of a summit held in Astana, the capital of Kazakhstan. From mid-September to October, we discovered another campaign carried out by the same threat group, abusing e-mails from Kyrgyzstan-based governmental entities to target various entities in Russia; this campaign was also discovered by threat researchers at BI.ZONE.

    Following similar footprints, we uncovered that the threat actor also targeted entities involved in Azerbaijan-Russia diplomacy, using the theme of a summit held in Dushanbe along with specific keywords such as Strategic Co-operation. The industries targeted are as follows:

    Industries Affected

    • Government Think-tanks & Diplomats.
    • Mining Industry.
    • Transport & Communication Industry.

    Geographical Focus

    • Tajikistan
    • Azerbaijan
    • Russia
    • China
    • Other Central Asian nations (see our previous research on the discovery of Silent Lynx)

    Infection Chain

    Initial Findings

    We at the SEQRITE APT-Team have been meticulously tracking Silent Lynx since November 2024. Initially, we discovered that the group had been targeting multiple important entities across Kyrgyzstan, Turkmenistan and Uzbekistan, with the sole motive of espionage against critical sectors such as national banks and railway projects. Our findings were presented at Virus Bulletin 2025.

    As mentioned in our collaboration with VirusTotal, our research relies on key pivot points to hunt this threat group, for example its obsession with Base64-encoded PowerShell implants and loaders that abuse the powershell.exe binary.

    Using similar pivoting logic, we discovered in September a campaign that we believe was originally orchestrated in June.

    Later, using the same logic, we found another campaign following a similar modus operandi with slight changes in how the stagers are deployed. Since it was found in October, we believe it was also orchestrated in October, built around a specific theme.

    Further hunting and pivoting confirmed that these two campaigns, despite small differences in how the final-stage payloads are deployed, were launched by the same threat group, Silent Lynx.

    In the next sections, we will focus on the technical details and other interesting findings of this research.

    Technical Analysis

    During our research on this threat group, under the code name Operation Peek-A-Baku, we uncovered multiple sets of campaigns. To present our findings clearly, we have divided the analysis into two sections.

    The first section details the various methods used by the threat actor to deploy the final-stage reverse shell, which primarily targeted entities involved in Russia–Azerbaijan relations. The second section focuses on campaigns aimed at China–Central Asia relations. It is important to note that the technical analysis is not organized chronologically and does not reflect the exact sequence of events within the overall campaign.

    Campaign – I

    Let us start by analyzing the campaign that targeted entities involved in the Russia-Azerbaijan diplomatic relationship.

    October 2025

    Initially, in the first half of October 2025 (the second week, to be precise), our team found a malicious RAR archive named План развитие стратегического сотрудничества.pdf.rar, which translates to Plan for the Development of Strategic Cooperation. Notably, the filename is grammatically incorrect in Russian, suggesting that it was likely created by a non-native speaker or generated automatically using one of the many translators available on the web.

    We believe this campaign specifically targeted diplomatic entities that were involved in or associated with the organization and coordination of the Russia-Azerbaijan meeting held in Dushanbe, Tajikistan, in October 2025. Given the timing and the politically charged context surrounding the summit, which focused on restoring and strengthening strategic cooperation between the two nations, we strongly suspect that the threat actor sought to gather intelligence on diplomatic communications linked to this high-level engagement.

    Now, let us look into the technical arsenal of the set of malicious payloads used.

    The LNK Way

    We hunted the suspicious RAR file План развитие стратегического сотрудничества.pdf.rar and, upon opening it, found that it contained only a malicious LNK with a similar name.

    Looking into the contents of the LNK file, we figured out that it abuses the powershell.exe binary to download and execute a malicious PowerShell file, 1.ps1, from a GitHub repository known as GoBuster777. Interestingly, we also found a suspicious file path, which we will leverage to pivot and hunt further campaigns in a later section of this research blog.

    Upon downloading the 1.ps1 file, we found another similarity with previous Silent Lynx campaigns: the use of a Base64-encoded malicious blob executed via PowerShell.

    Upon decoding the Base64 blob, we determined that it is a simple TCP-based reverse shell that connects to 206.189.11.142:443. The payload opens a socket, establishes a stream, and enters a persistent read-execute-return loop: it reads text commands from the remote operator, executes them locally via Invoke-Expression, converts enumerable results to strings, and writes the output back over the same connection. Finally, we discovered that the threat actor also deployed the open-source tunneler Ligolo-ng alongside the PowerShell-based reverse shell, which overall gave them the ability to execute arbitrary commands on the victim machine.

    Malicious SILENT LOADER Implant

    We identified a second implant linked to the same campaign. The file, named silent_loader.exe, was uploaded from a similar location (Azerbaijan).

    Given this artifact and the multiple aliases previously used for the group, we assess the actor may have favored the “Silent” naming motif. That preference could explain the implant’s name Silent Loader and supports our continuing use of the Silent Lynx label for tracking.

    Technically, the implant turns out to be extremely simple in nature and closely resembles the C++-based loader we discovered in the very first campaign. It uses iex to download the malicious 1.ps1 file, which we saw in the previous section.

    Finally, it builds the full command line to download and execute the malicious PowerShell script and passes it as an argument to the CreateProcessW API. This spawns a new PowerShell process that again connects back to the C2 framework.

    One of the most interesting aspects of Silent Loader is that it closely matches the initial loader we discovered between November 2024 and January 2025. The only key difference is that, instead of embedding the Base64-encoded blob inside the loader binary, the threat actor has made the lazy move of downloading the content from GitHub.

    Malicious LAPLAS Implant – TCP & TLS

    We also uncovered another malicious implant used by this threat group in this campaign, which we track under the name Laplas. It is written in C++ and uses a TCP-based network stack for communication.

    In the initial part of this implant, we saw that it tries to connect to the malicious command-and-control server on port 443. The reverse shell is invoked like this:

    • ./laplas.exe <c2 address> <port number>

    If the arguments are not provided at startup, the code falls back to the C2 address and port number hardcoded inside the binary, which are passed to a function, sub_F710D0, that is basically a connector function.

    Looking into the connector, it initially sleeps for 5 seconds, performs some buffer-based operations, and then connects to the C2 server.

    Another function then performs an XOR-decoding operation on a hardcoded string, which decodes to cmd.exe; this value is passed as a parameter to the CreateProcess API.

    There are a few interesting aspects of the implant. One is that it contains a bunch of garbage code that serves no purpose in the implant’s actual workings. Another is that the C2 server returns echo <some garbled characters> every time the implant connects to it.

    Finally, upon receiving the exit message from the threat actor, the implant releases all its resources and exits gracefully. We have also identified another version of the same LAPLAS implant, a TLS-based reverse shell, which performs nearly identical tasks with small differences in the command-and-control infrastructure and in some functionality and artefacts.

    It should be noted that the TLS-based variant was not used in the Russia-Azerbaijan campaign. We found it via multiple pivots and, due to its slightly unique technical aspects, this section has been added under the technical analysis part.

    The first interesting point is the string Phenyx2022, which we believe is just a vanity signature the developer wanted to flaunt when the implant is executed.

    In this part, the implant sends the message HELLO, Press Enter. to the operator; once the handshake is done, it uses Windows pipe objects for I/O and for the other simple aspects of its operation.

    As mentioned, the TLS-based variant communicates with a different C2 address.

    Once it receives the message shexit from the operator, it exits gracefully with a Goodbye message.

    Malicious .NET Implant – SilentSweeper

    We also identified another .NET implant used in this campaign, and we track it under the name of SilentSweeper.

    An interesting part of this implant is that it takes multiple arguments, one of which, -extract, is responsible for extracting a malicious PowerShell script and writing it to a file. The script is embedded inside the Resources section of the binary.

    Besides extracting the PowerShell script to a file, the implant provides other options, such as -?, which prints a help listing of the implant’s options, and -debug, which supports debugging of the PowerShell script.

    As mentioned, the implant loads a file named qw.ps1 from its Resources, reads its content, and then executes the PowerShell script.

    Upon decoding the Base64 blob, we figured out that it downloads the malicious 1.ps1 file, the reverse shell we analyzed in the first section of this research. In the next section, we will look into the other campaign.

    Campaign – II

    Let us start by analyzing the campaign that targeted entities involved in the China-Central Asia diplomatic relationship.

    Initially, in the second week of September 2025, our team found a malicious RAR archive named China-Central Asia SummitProject.rar. We believe this campaign targeted diplomats, individuals and other entities involved in organizing, decision-making, deal-signing and other coordination around the China-Central Asia Summit held in Astana, Kazakhstan, in June 2025.

    Based on the previously uncovered campaigns and those identified by various other threat research vendors, this threat group has been observed targeting the Transport and Communication sector, including railways and other critical infrastructure domains.

    Therefore, analysis of the group’s historical behavior and targeted sectors indicates that the threat actors likely sought to gather intelligence on transportation- and communication-based initiatives. These projects appear to be tied to the strategic framework established at the China-Central Asia Summit in 2025.

    Now, let us look into the technical arsenal of the set of malicious payloads used.

    Malicious .NET Implant – SilentSweeper

    Upon hunting the suspicious RAR file China-Central Asia SummitProject.rar, we observed that it contained a malicious executable with a similar name.

    Having analyzed the SilentSweeper implant in the previous section, we will now focus on the malicious PowerShell script, known in this campaign as TM3.ps1.

    Upon decoding the Base64 blob, we found a PowerShell script that downloads two helper scripts (a VBScript and a PowerShell script) from a remote host and then creates a scheduled task called WindowsUpdate. The task is set to run every six minutes (/sc minute /mo 6) and is triggered once immediately on creation; the downloaded files are written to the current user’s temp folder (such as C:\Users\<user>\AppData\Local\Temp\WindowsUpdateService.ps1 and …\WindowsUpdateService.vbs). In the next sections, we will look into the VBScript and the PowerShell script.

    VBScript

    Looking into the VBScript, it becomes quite evident that its sole purpose is to execute the later stage, which is the PowerShell file.

    Malicious PowerShell Script

    Next, looking into the file WindowsUpdateService.ps1, we saw that it contains an encoded blob which, upon decoding, turns out to resemble exactly the final-stage reverse-shell payloads we analyzed earlier in this blog. It is also worth noting that, across all the campaigns, we have seen the attacker leverage the open-source tool Ligolo-ng.

    In the next section, we will focus on the hunting and infrastructural artefacts.

    Hunting and Infrastructure

    During our analysis of both campaigns, we identified multiple artefacts that are valuable pivot points for further investigation, such as LNK metadata, infrastructural pivots and other unattributed campaigns. Let us dive into those parts.

    Pivoting via LNK Metadata

    Multiple pieces of LNK metadata led us to other, currently unattributed campaigns.

    Initially, while looking into the malicious LNK file, we found an interesting working directory in its metadata: C:\Users\GoBus\OneDrive\Рабочий стол (Рабочий стол translates to Desktop).

    Pivoting further on that artefact, we hunted down a set of 11 shortcut (.LNK) files containing the same metadata. Interestingly, we also found a malicious RAR file, belonging to what we believe is a currently unattributed campaign, that performs its malicious tasks the LNK way and contains an LNK file with similar metadata.

    Next, we will look into the pivots over infrastructural artefacts, which led to a greater number of unattributed campaigns.

    Pivoting via Infrastructural Artefacts

    As we saw, the malicious PowerShell reverse shell was hosted on GitHub. Pivoting further on that artefact led us to another campaign, involving a file named resume.rar, that uses exactly the same techniques and is currently unattributed in terms of the targeted sector.

    Next, looking into the various payloads, we followed multiple pivots from the GitHub repository to other infrastructure. We landed on another set of malicious host addresses and, upon further pivoting, discovered another campaign that used a malicious ZIP file named WindowsUpdateService.zip serving a malicious PowerShell script.

    We also uncovered another campaign connected to this malicious infrastructure, as well as a binary linked to the campaigns that were serving these malicious files.

    We also saw multiple executables connecting to these malicious artefacts and performing various tasks. Now, let us look into the infrastructural details in the next section.

    Host / IP Address    ASN        Location
    62.113.66.137        AS 60490   Russia
    206.189.11.142       AS 14061   Netherlands
    62.113.66.7          AS 60490   Russia
    37.18.27.27          AS 48096   Russia

    Attribution

    Attribution is the toughest part of threat research: assigning a strict direction in terms of victimology and the many other dimensions of a campaign can be a dilemma in a lot of cases. The uncertainty can, however, be limited to a certain degree by closely monitoring a threat group, especially its TTPs and its espionage-driven interest in certain geographies and infrastructural projects. Keeping these artefacts in mind, we attribute these campaigns to Silent Lynx with high confidence; some of the reasons are as follows.

    Arsenal-oriented Attribution

    • Since we began tracking this threat group, we have observed that the operators are heavily obsessed with Base64 encoding and go-to reverse shells in C++, PowerShell, Golang & .NET. We believe the group or its operators have been following our research and decided to store the Base64-encoded blob on GitHub instead of hardcoding it into the C++ binary; as we saw in the technical analysis section, the two implants are heavily similar in terms of codebase.
    • In previous campaigns, we saw the threat group target government entities of Turkmenistan with a malicious ZIP file containing the C++ loader in the first half of 2025, and we saw exactly the same behavior when it targeted diplomatic and other important entities involved in China-Central Asia relations. This shows that the group used the same TTPs in both campaigns: dropping the payload on disk, without any decoy material.
    • The initial spear-phishing archives and the payload files share the same name, something we have seen across most of this group’s campaigns against Central Asian targets. We believe the group is too sloppy to change this, which creates a unique pattern that threat hunters can use to build pattern-based detections for this group.
    • In both campaigns, we also found that this group is heavily attached to Golang-based tunneling tools: in the first campaign they deployed RESOCKS, while in these campaigns they switched to Ligolo-ng. Both tools share many technical similarities, such as support for encrypted tunnels, proxy chaining, and cross-platform compatibility.

    Victimology

    • In our initial research, we saw that this threat actor primarily targets multiple Central Asian nations and their critical infrastructure, such as governmental entities, the banking sector, and entities involved in cross-country infrastructural projects in the same geographic zone. In this research, we identified the same pattern in both campaigns, including a common infrastructural pivot that links the campaign targeting Russia-Azerbaijan relations with the one targeting China-Central Asia relations, which we consider an OPSEC blunder by the threat group.
    • We believe the threat group is primarily interested in events at Dushanbe, from the meeting of the Russian and Azerbaijani heads of state to projects such as the China-Tajikistan Highway and the Beijing-Dushanbe flight connection, which aim to boost business and other exchanges. This leads us to attribute these campaigns, in terms of victimology and targeted sectors, with medium confidence.

    Early Remediations

    This year we have seen Silent Lynx target events of interest in the Central Asian geosphere, especially summits that involve a large amount of infrastructural deal-making and many other diplomatic decisions and improvements.

    We believe this group may also be keeping track of the India-Central Asia Secretaries’ meeting in October. For now this is mere speculation, offered as an early remediation for the entities involved in that meeting: we have not seen any such campaign at the time of publishing, so this section should be treated as an advisory.

    Conclusion

    We conclude that Silent Lynx, which the SEQRITE APT-Team named and has been researching for over a year, has been involved in multiple campaigns targeting countries that have initiated diplomatic and infrastructural developments, as well as other critical sectors, with multiple Central Asian nations. The group has also been targeting Russia- and China-based entities, is currently very active, and, while making minimal changes to its arsenal, may target entities involved in similar dialogue-oriented meetings. We expect Silent Lynx to continue leveraging dual-layer scripts and GitHub-hosted payloads for low-cost persistence.

    SEQRITE Protection

    • MalgentCiR.
    • trojan.50055.GC
    • boxter.50066.SL
    • Trojan.50056.GC

    IOCs

    Hash (SHA-256) Malware Type
    ef627bad812c25a665e886044217371f9e817770b892f65cff5877b02458374e RAR File
    5b58133de33e818e082a5661d151326bce5eeddea0ef4d860024c1dbb9f94639 RAR File
    5bae9c364ee4f89af83e1c7d3d6ee93e7f2ea7bd72f9da47d78a88ab5cfbd5d4 RAR File
    72a36e1da800b5acec485ba8fa603cd2713de4ecc78498fcb5d306fc3e448c7b LNK File
    5e3533df6aa40e86063dd0c9d1cd235f4523d8a67d864aa958403d7b3273eaaf LNK File
    b58f672e7fe22b3a41b507211480c660003823f814d58c04334ca9b7cdd01f92 LNK File
    ae51aef21ea4b422ef0c7eb025356e45d1ce405d66afbb3f6479d10d0600bcfd PowerShell
    0bce0e213690120afc94b53390d93a8874562de5ddcc5511c7b9b9d95cf8a15d PowerShell
    821f1ee371482bfa9b5ff1aff33705ed16e0147a9375d7a9969974c43b9e16e8 PowerShell
    262f9c63c46a0c20d1feecbd0cad75dcb8f731aa5982fef47d2a87217ecda45b EXE
    123901fa1f91f68dacd9ec972e2137be7e1586f69e419fc12d82ab362ace0ba9 EXE
    6cb54ec004ff8b311e73ef8a8f69b8dd043b7b84c5499f4c6d79d462cea941d8 EXE
    97969978799100c7be211b9bf8a152bbd826ba6cb55377284537b381a4814216 EXE
    9de8bbc961ff450332f40935b739d6d546f4b2abf45aec713e86b37b0799526d EXE
    b5a4f459bdff7947f27474840062cfce14ee2b1a0ef84da100679bc4aa2fcf77 EXE
    ffda4f894ca784ce34386c52b18d61c399eb2fc8c9af721933a5de1a8fff9e1b EXE
    2c8efe6eb9f02bf003d489e846111ef3c6cab32168e6f02af7396e93938118dd .NET executable
    1531f13142fc0ebfb7b406d99a02ec6441fc9e40725fe2d2ac11119780995cd3 .NET executable
    67cf0e32ad30a594442be87a99882fa4ac86494994eee23bdd21337adb804d3f .NET executable
    036a60aa2c62c8a9be89a2060e4300476aef1af2fd4d3dd8cac1bb286c520959 .NET executable
    32035c9d3b81ad72913f8db42038fcf6d95b51d4d84208067fe22cf6323f133c .NET executable
    a639a9043334dcd95e7cd239f8816851517ebb3850c6066a4f64ac39281242a3 .NET executable
    a83a8eb3b522c4517b8512f7f4e9335485fd5684b8653cde7f3b9b65c432fa81 .NET executable
    26aca51d555a0ea6d80715d8c6a9f49fea158dee11631735e16ea75c443a5802 .NET executable
    303f03ae338fddfe77c6afab496ea5c3593d7831571ce697e2253d4b6ca8a69a .NET executable
    40d4d7b0bc47b1d30167dd7fc9bd6bd34d99b8e0ae2c4537f94716e58e7a5aeb VBA
    b0ac155b99bc5cf17ecfd8d3c26037456bc59643344a3a30a92e2c71c4c6ce8d VBA
    b87712a6eea5310319043414eabe69462e12738d4f460e66a59c3acb5f30e32e ZIP

     

    Host/IP addresses
    updates-check-microsoft[.]ddns[.]net
    catalog-update-update-microsoft[.]serveftp[.]com
    hxxp://206.189.11[.]142/
    62[.]113[.]66[.]137
    62[.]113[.]66[.]7
    37[.]18[.]27[.]27

     

    MITRE ATT&CK

    Tactic               Technique ID   Technique Name
    Initial Access       T1566.001      Phishing: Spearphishing Attachment
    Execution            T1204.001      User Execution: Malicious Link
                         T1204.002      User Execution: Malicious File
                         T1106          Native API (CreateProcess / CreateProcessW)
                         T1059.001      Command and Scripting Interpreter: PowerShell
    Persistence          T1053.005      Scheduled Task/Job: Scheduled Task
    Defense Evasion      T1027          Obfuscated Files or Information
                         T1036          Masquerading
    Command & Control    T1071          Application Layer Protocol (HTTPS / web protocols)
                         T1095          Non-Application Layer Protocol (raw TCP / custom C2)
                         T1090          Proxy (tunneling via Ligolo-ng)
    Exfiltration         T1041          Exfiltration Over C2 Channel





  • Why the DPDP Act is a Game-Changer for India’s BFSI Industry



    India’s Banking, Financial Services, and Insurance (BFSI) industry stands at the intersection of innovation and risk. From UPI and digital wallets to AI-based lending and predictive underwriting, digital transformation is no longer a differentiator — it’s the operating model of the future.

    In 2024, India’s fintech market was valued at approximately US$110 billion. By 2029, that figure is expected to soar to US$420 billion, reflecting an annual growth rate of 31%. With digital payments projected to exceed US$3.1 trillion by 2028, and over 9,000 fintechs already driving financial digitization, the new currency of the BFSI sector isn’t capital — it’s data.

    Amid this transformation, the Digital Personal Data Protection (DPDP) Act, 2023 has emerged as a pivotal framework — not just a compliance mandate but a structural shift that will redefine trust, transparency, and data governance across the financial ecosystem.

    Trust: The New Competitive Advantage

    In an era where customer relationships are increasingly digital, trust has become the ultimate differentiator. The DPDP Act strengthens this foundation by restoring control to the individual — or as the law defines, the Data Principal.

    Under the Act, customers gain the right to access, correct, and even request deletion of their data. For BFSI players, this means transparency is no longer optional — it’s strategic.

    • India’s average data breach cost in 2023 stood at US$2.18 million.
    • Customer skepticism around data handling is rising.
    • The Act mandates informed, granular consent, ensuring customers know how and why their data is collected or shared.

    Financial institutions that proactively embed these principles can transform compliance into a brand advantage, positioning themselves as trustworthy custodians of data in an increasingly skeptical market.

    Cybersecurity: From Vulnerability to Core Capability

    BFSI remains the most targeted industry for cyberattacks in India — and the numbers are stark.

    • Between January and October 2023, the sector faced 1.3 million cyberattacks — roughly 4,400 per day.
    • Phishing incidents grew by 175% in H1 2024, crossing 135,000 cases in six months.
    • Over 1.1 million video KYC sessions occur daily, with spoofing rates as high as 86%.

    The DPDP Act directly addresses these realities. Its security provisions mandate:

    • Strong encryption and access controls
    • Periodic security audits
    • Data minimization, ensuring institutions store only what’s necessary

    For CISOs and security leaders, this alignment between regulatory expectations and operational resilience represents an opportunity to elevate cybersecurity from a compliance task to a strategic defense layer.

    Regulatory Harmony: A Unified Compliance Ecosystem

    BFSI entities operate under multiple regulators — RBI, SEBI, and IRDAI, each with its distinct compliance landscape. The DPDP Act offers a unifying framework that complements existing sectoral regulations, creating clarity and consistency across overlapping requirements.

    And the stakes are significant:

    • The DPDP Act empowers the Data Protection Board to impose penalties up to ₹250 crore.
    • In 2024, the RBI levied ₹56 crore in fines across 304 compliance cases — many tied to data protection and cybersecurity lapses.

    The message is clear: compliance can no longer be reactive. Non-compliance is not only costly but reputationally irreversible.

    Empowering the Customer Experience

    Traditional blanket consent forms are becoming obsolete. Under the DPDP Act, consent must be explicit, informed, and revocable.

    To meet these standards, BFSI organizations must implement:

    • Consent management systems with intuitive, multilingual interfaces
    • Real-time audit trails for traceability and accountability
    • Customer-centric communication that reinforces transparency

    Beyond compliance, these measures build deeper customer confidence — a competitive advantage that distinguishes data-responsible brands from the rest.

    Innovation and Privacy: Coexistence, Not Compromise

    Contrary to popular belief, the DPDP Act doesn’t constrain innovation — it enables it responsibly.

    By allowing the use of anonymized or pseudonymized data for purposes such as:

    • Fraud detection
    • Risk assessment and modeling
    • Product design and personalization

    the law ensures that BFSI players can continue to harness the power of AI, machine learning, and analytics without compromising privacy. Even cross-border data transfers are permitted, provided robust safeguards are in place.

    This balance between innovation and compliance positions India’s BFSI ecosystem as a global benchmark in ethical data innovation.

    Key Imperatives for BFSI Leaders

    To align with the DPDP Act, BFSI organizations must prioritize:

    • Comprehensive consent frameworks
    • Enterprise-grade security controls (encryption, MFA, continuous monitoring)
    • Breach response and reporting protocols
    • Data lifecycle management – retention, anonymization, secure disposal
    • Third-party and vendor compliance oversight
    • Appointment of a Data Protection Officer (DPO) for accountability

    However, this transformation goes beyond checklists. It’s about embedding privacy into the organizational culture, ensuring that every process, product, and partnership is built on the principle of “privacy by design.”

    Building the DPDP Roadmap

    Forward-looking financial institutions are already operationalizing compliance through structured roadmaps:

    1. Data Mapping – Understanding where and how data flows across the enterprise.
    2. Governance Alignment – Synchronizing internal policies with RBI, SEBI, and IRDAI frameworks.
    3. Technology Investments – Deploying consent management tools, governance platforms, and advanced cybersecurity solutions.
    4. Employee Training – Creating awareness across all business units.
    5. Continuous Monitoring – Shifting from annual audits to real-time compliance tracking.

    Conclusion: Turning Compliance into Competitive Edge

    Between 2019 and 2023, India’s BFSI cybersecurity investments tripled — from US$518 million to US$1.7 billion. The DPDP Act builds on this momentum, not as a disruptor, but as an accelerator of secure digital transformation.

    Institutions that embrace this regulation early will stand apart — as leaders in trust, resilience, and responsible innovation.

    The DPDP Act is not the end of compliance — it’s the foundation of a privacy-first future for India’s financial ecosystem. The question isn’t whether BFSI organizations will comply, but how effectively they’ll leverage compliance to lead.

    Stay ahead of India’s evolving privacy landscape with Seqrite’s DPDP Act Compliance Services — a comprehensive framework to help BFSI institutions safeguard data, ensure regulatory alignment, and build customer trust.
    Turn compliance into a competitive advantage with Seqrite’s end-to-end data protection, governance, and security expertise.




  • Microsoft Struggles with Power Constraints for New AI Hardware




  • Clean code tips – comments and formatting | Code4IT


    Are all comments bad? When are they necessary? Why is formatting so important? Writing clean code is not only about the code that gets executed, but also about everything around it.


    This is the second part of my series of tips about clean code. We’ll talk about comments: why many of them are useless or even dangerous, why some are necessary, and how to improve the ones you write. We’ll also have a look at why formatting is so important and why we can’t afford to write messy code.

    Here’s the list (in progress)

    1. names and function arguments
    2. comments and formatting
    3. abstraction and objects
    4. error handling
    5. tests

    Often you see comments that explain what a method or a class does.

    /// <summary>
    /// Returns the max number of an array
    /// </summary>
    /// <param name="numbers">array of numbers</param>
    /// <returns>Max number in the array</returns>
    public int GetMaxNumber(int[] numbers)
    {
        // return max;
        return numbers.Max();
    }
    

    What’s the point of this comment? Nothing: it doesn’t add more info about the method meaning. Even worse, it clutters the codebase and makes the overall method harder to read.

    Luckily, sometimes comments are helpful, or even necessary; rare cases, but they exist. Let’s see when.

    Show intention and meaning

    Sometimes the external library you’re using is not well documented, or you are writing an algorithm that needs some explanations. Put a comment to explain what you are doing and why.

    Another example is when you are using regular expressions: the meaning can be really hard to grasp, so using a comment to explain what you are doing is the best thing to do:

    public bool CheckIfStringIsValid(string password)
    {
        // 2 to 7 lowercase chars followed by 3 or 4 digits (1-9)
        // Valid:   kejix173
        //          aoe193
        // Invalid: a92881
        // The ^ and $ anchors make the whole string match, not just a substring
        Regex regex = new Regex(@"^[a-z]{2,7}[1-9]{3,4}$");
        var hasMatch = regex.IsMatch(password);
        return hasMatch;
    }
    

    Reason for default values

    Some methods have default values, and you’d better explain why you chose that value:

    public string FindPerfectAnimal(List<Criterion> criteria)
    {
        string perfectAnimal = ElaborateCriteria(criteria);
        // We use "no-preferences" so we can easily perform queries on the DB and show it on the UI
        return string.IsNullOrEmpty(perfectAnimal) ? "no-preferences" : perfectAnimal;
    }
    

    What if you didn’t add that comment? Maybe someone will be tempted to change that value, thus breaking both the UI and the DB.

    TODO marks

    Yes, you can add TODO comments to your code. Just don’t use them as an excuse for not fixing bugs, refactoring your code, or renaming functions and variables with better names.

    public void Register(string username, string password)
    {
        // TODO: add validation on password strength
        dbRepository.RegisterUser(username, password);
    }
    

    Some IDEs have a TODO window that recognizes those comments: so, yes, it’s a common practice!

    Highlight the importance of some code

    Some method calls seem redundant, but they actually make the difference. A good practice is to highlight those parts and explain why they are so important.

    public string GetImagePath(string resourceId)
    {
        var item = dbRepository.GetItem(resourceId);
    
        // The source returns image paths with trailing whitespaces. We must remove them.
        return item.ImagePath.Trim();
    }
    

    Most of the time, comments should be avoided. They can confuse the developer, fall out of sync with the latest version of the code, or simply make the code harder to read. Let’s see some of the bad uses of comments.

    They explain what the code does

    If your code is hard to read, why spend time writing comments that explain what the code does instead of writing better code, with better names and easier-to-read functions?

    They don’t add anything important not already written in the code

    // sum two numbers and return the result
    public int Add(int a, int b)
    {
    	// calculate the sum and return it to the caller
    	return a + b;
    }
    

    What’s the meaning of these comments? Absolutely nothing. They just add lines of code to be read.

    They lie

    It may happen that you write your comments with the best intentions, but you don’t choose the best words for your comments, and they may involuntarily lie.

    Have a look at this snippet.

    // counts how many odd numbers are in the list
    public int CountOddNumbers(IEnumerable<int> values)
    {
        return values.Where(v => v % 2 != 0).Count();
    }
    

    Where are the lies? First of all, the numbers are not in the list, but in an IEnumerable. Yes, a List is an IEnumerable, but here that word can be misinterpreted. Second, what happens if the input value is null? Does this method return null, zero, or does it throw an exception? You have to check the internal code to see what’s really going on.

    They are not updated

    Maybe you’ve written the perfect comment that explains what your API does.

    But suddenly, someone adds a cache layer to your code and doesn’t update the documentation.

    So you’ll end up with wrong comments that are simply outdated.

    They indicate the end of a block

    What do you think of this snippet?

    public int CountPalindromes(IEnumerable<string> values)
    {
    	int count = 0;
    	foreach (var element in values)
    	{
    		if (!string.IsNullOrWhiteSpace(element))
    		{
    			var sb = new StringBuilder();
    			var reversedChars = element.Reverse();
    			foreach (var ch in reversedChars)
    			{
    				sb.Append(ch);
    			}
    
    			if (element.Equals(sb.ToString(), StringComparison.CurrentCultureIgnoreCase))
    				count++;
    
    		} // end if
    	} // end foreach
    	return count;
    } // end CountPalindromes
    

    If the code is complex enough to require end CountPalindromes, end foreach and end if comments, isn’t it better to refactor it into shorter methods?

    public int CountPalindromes(IEnumerable<string> values)
    {
    	return values
    	.Where(v => !string.IsNullOrWhiteSpace(v))
    	.Where(v => v.Equals(ReverseString(v), StringComparison.CurrentCultureIgnoreCase)).Count();
    }
    
    public string ReverseString(string originalString)
    {
    	return new string(originalString.Reverse().ToArray());
    }
    

    Better, isn’t it?

    Both bad and good

    There are comments that are both good and bad, it depends on how you structure them.

    Take for example documentation for APIs.

    /// <summary>
    /// Returns a page of items
    /// </summary>
    /// <param name="pageNumber">Page Number</param>
    /// <param name="pageSize">Page size</param>
    /// <returns>A list of items</returns>
    [HttpGet]
    public async Task<List<Item>> GetPage(int pageNumber, int pageSize)
    {
    	// do something
    }
    

    Useless comment, isn’t it? It doesn’t add anything that you could have guessed by looking at the parameters and the function name.

    Can we use every value for pageNumber and for pageSize? What happens if there are no items to be returned? Does it return a particular status code or does it return an empty list?

    /// <summary>
    /// Returns a page of items
    /// </summary>
    /// <param name="pageNumber">Number of the page to be fetched. This index is 0-based. It must be greater than or equal to zero.</param>
    /// <param name="pageSize">Maximum number of items to be retrieved. It must be greater than or equal to zero.</param>
    /// <returns>A list of up to <paramref name="pageSize"/> items. Empty result if no more items are available</returns>
    [HttpGet]
    [Route("getpage")]
    [ProducesResponseType(200)]
    [ProducesResponseType(400)]
    [ProducesResponseType(500)]
    public async Task<List<Item>> GetPage(int pageNumber, int pageSize)
    {
    	if (pageNumber < 0)
    		throw new ArgumentException($"{nameof(pageNumber)} cannot be less than zero");
    	if (pageSize < 0)
    		throw new ArgumentException($"{nameof(pageSize)} cannot be less than zero");
    
    	// do something
    }
    

    Now all these questions are addressed. Still not perfect, though. But you get the idea.

    Why spend time on code formatting?

    Why bother writing well-formatted code? Do I really need to spend time formatting it? Who cares! All the code gets transformed into bits anyway, so why care about tabs and spacing, line length, and so on?

    Right?

    No.

    Here’s a great quote from the Clean Code book:

    The functionality you create today has a good chance of changing in the next release, but the readability of your code will have a profound effect on all the changes that will ever be made.

    How to structure classes

    Think of a class as if it were a news article. Would you prefer all the info mixed up, or clear, structured content?
    So a good idea is to have all the general info at the top of the file, and to order the functions so that the more you scroll down, the more you get into the details of what’s going on.

    This helps readers understand what the class does in a general way just by looking at the top of it. If they are interested, they can scroll down and read the details.

    So a good way to structure your code can be

    1. public properties
    2. constructor
    3. public functions
    4. private functions

    Some programmers prefer other structures, like

    1. public functions
    2. private functions
    3. constructor
    4. public properties

    For me, the second option is odd. But it’s not wrong. Whichever you prefer, remember to be consistent across your codebase. A sketch of the first ordering follows.
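    Here’s a minimal C# sketch of the first ordering (the class and its members are invented for the example):

    public class ArticleFeed
    {
        // 1. public properties: the general info, right at the top
        public string Title { get; set; }

        // 2. constructor
        public ArticleFeed(string title)
        {
            Title = title;
        }

        // 3. public functions: the high-level story
        public string GetHeadline()
        {
            return Capitalize(Title);
        }

        // 4. private functions: the low-level details, at the bottom
        private static string Capitalize(string text)
        {
            return string.IsNullOrEmpty(text)
                ? text
                : char.ToUpper(text[0]) + text.Substring(1);
        }
    }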

    Conclusion

    We’ve seen some aspects that are often considered secondary: comments and formatting. They are part of the codebase, and you should take care of them.

    In general, when you’re writing code and comments, stop for a second and think “is this part readable? Is it meaningful? Can I improve it?”.

    Don’t forget that you’re doing it not only for others but even for your future self.

    So, for now…

    Happy coding!






  • YouTube AI Removes Tech Channel with 350,000 Subscribers Without Review




  • Electric Vehicle Sales Hit a Wall as Federal Tax Credits Vanish




  • Space Mirror Project Raises Alarm Among Scientists Over Solar Redirection




  • Understanding Swagger integration in .NET Core | Code4IT


    Swagger is a tool that exposes the documentation of your APIs and helps you collaborate with other teams. We’ll see how to integrate it with .NET Core 3, how to add XML comments, and how to document the expected status codes.


    When I write APIs with .NET Core, one of the very first things I do is add Swagger.
    This helps me test my code and helps other developers integrate the APIs I’m exposing.

    In this article, we’re going to add Swagger to an API built with .NET Core 3, and we’re going to learn how to show the description of the methods and how to show which status codes you could expect from our APIs.

    I think you already know what Swagger is, but let’s have a recap.

    What is Swagger

    Swagger is a suite of products that helps developers document and test APIs. If you are interested in the source code of their C# repository, you can find it on GitHub.

    I’m quite sure you’ve already seen a page like this:

    Typical Swagger UI

    That’s a typical UI generated with Swagger: it allows you to interact with the APIs and view the endpoint definitions, expressed in the OpenAPI format, a format created by the Swagger team that became the de facto standard for API definitions.

    How to integrate Swagger in .NET Core

    Let’s integrate it with a .NET Core 3 project. The steps are the same even if you have a .NET Core 2 application since the latest Swagger version (5.5.0) is compatible with any ASP.NET Core version greater than 2.0.

    Project setup

    I’ve created a simple .NET Core 3 API project, and exposed a single controller:

    [Route("api/[controller]")]
    [ApiController]
    public class MarvelMoviesController : ControllerBase
    {
        // GET: api/<MarvelMoviesController>
        [HttpGet]
        public IEnumerable<Movie> Get()
        {
            return movies;
        }
    
        // GET api/<MarvelMoviesController>/5
        [HttpGet("{id}")]
        public Movie Get(int id)
        {
            return movies.FirstOrDefault(x => x.Id == id);
        }
    
        // POST api/<MarvelMoviesController>
        [HttpPost]
        public void Post([FromBody] Movie value)
        {
            movies.Add(value);
        }
    
        // DELETE api/<MarvelMoviesController>/5
        [HttpDelete("{id}")]
        public void Delete(int id)
        {
            movies.RemoveAll(m => m.Id == id);
        }
    }
    

    Nothing odd, right? Notice that I’ve exposed endpoints reachable with different HTTP verbs: GET, POST and DELETE.

    Adding Swagger dependencies

    To use Swagger you have to install it from NuGet. You can run dotnet add package Swashbuckle.AspNetCore to include it in your project.

    In fact, that dependency installs three other packages, which you can also see in the NuGet Package Manager screen in Visual Studio:

    • Swashbuckle.AspNetCore.SwaggerGen analyses the project endpoints and generates the OpenAPI documents
    • Swashbuckle.AspNetCore.Swagger exposes those documents
    • Swashbuckle.AspNetCore.SwaggerUI creates the UI you see when running the project

    Remember to get version 5.5.0 (for example, with dotnet add package Swashbuckle.AspNetCore --version 5.5.0)!

    Include Swagger in the project

    As you know, one of the core parts of every .NET Core API project is the Startup class. Here you must add Swagger to the middleware pipeline and declare that it will be used to provide the UI.

    In the ConfigureServices method we must add the Swagger generator and define some metadata about the OpenApi file to be generated:

    services.AddSwaggerGen(c =>
    {
        c.SwaggerDoc("v1-foo", new OpenApiInfo { Title = "My API", Version = "v1-bar" });
    });
    

    This statement must be added after any services.AddControllers() or services.AddMvc() calls.
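
    Putting it together, the ConfigureServices method might look like this (a minimal sketch):

    public void ConfigureServices(IServiceCollection services)
    {
        services.AddControllers();
    
        // Register the Swagger generator and the metadata of the OpenAPI document
        services.AddSwaggerGen(c =>
        {
            c.SwaggerDoc("v1-foo", new OpenApiInfo { Title = "My API", Version = "v1-bar" });
        });
    }
    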

    Let’s dig a little deeper into the OpenApiInfo class: first of all, remember to include the Microsoft.OpenApi.Models namespace! Then notice that this object exposes more fields that can be specified for your OpenAPI document, like a description, contact info, and so on:

    //REQUIRED. The title of the application.
    public string Title { get; set; }
    
    //A short description of the application.
    public string Description { get; set; }
    
    //REQUIRED. The version of the OpenAPI document.
    public string Version { get; set; }
    
    //A URL to the Terms of Service for the API. MUST be in the format of a URL.
    public Uri TermsOfService { get; set; }
    
    //The contact information for the exposed API.
    public OpenApiContact Contact { get; set; }
    
    //The license information for the exposed API.
    public OpenApiLicense License { get; set; }
    

    The only required fields are Title and Version.
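
    As an example, a more complete metadata block might look like this (the values here are purely illustrative):

    c.SwaggerDoc("v1-foo", new OpenApiInfo
    {
        Title = "My API",
        Version = "v1-bar",
        Description = "APIs for listing Marvel movies",
        Contact = new OpenApiContact { Name = "Davide", Url = new Uri("https://www.code4it.dev") }
    });
    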

    Now it’s time to add Swagger as a middleware: in the Configure method, add this line:
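
    app.UseSwagger();
    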

    This single line exposes the OpenAPI file under /swagger/v1-foo/swagger.json, as you can see in the image below.

    OpenAPI definition for our API

    One last step: again in the Configure method, add the Swagger UI:

    app.UseSwaggerUI(c =>
    {
        c.SwaggerEndpoint("/swagger/v1-foo/swagger.json", "My API V1");
    });
    

    With these instructions, you define where to find the swagger.json file and the title of the page.
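
    Putting the middleware together, the Configure method ends up looking something like this (a minimal sketch; your pipeline will likely contain additional middleware):

    public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
    {
        app.UseRouting();
    
        // Exposes the generated OpenAPI document
        app.UseSwagger();
    
        // Serves the interactive documentation page
        app.UseSwaggerUI(c =>
        {
            c.SwaggerEndpoint("/swagger/v1-foo/swagger.json", "My API V1");
        });
    
        app.UseEndpoints(endpoints => endpoints.MapControllers());
    }
    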

    Run the project

    The basic setup is done!

    Now you can run the project, navigate to /swagger/index.html, and see your wonderful page!

    Swagger UI for Marvel movies API

    The UI is pretty clear: at the very top of the page we have the API metadata, like title, version, and licensing; in the middle we have the exposed endpoints, which can also be invoked to try the API; at the bottom of the page we have the schemas, that is, the definitions of the objects that the endpoints accept and return.

    Finally, have a look at how the different parameters we’ve set in the configurations appear in the UI:

    Cross reference between Swagger configuration and the UI result

    Set Swagger as startup page

    When you run your project you may want to have the Swagger page displayed as soon as possible, without typing anything in the address bar.

    To do so, you must update the launchSettings.json file, available under the Properties folder. This file may be hidden: to edit it, click on Show All Files in Visual Studio or search for it in the Solution Explorer.

    Here, for each of your profiles, set swagger/index.html as the value of the launchUrl field.

    LaunchSettings.json configuration
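
    For reference, a trimmed-down profile could look like this (the profile name and URLs are illustrative):

    {
      "profiles": {
        "SwaggerIntegrationV3": {
          "commandName": "Project",
          "launchBrowser": true,
          "launchUrl": "swagger/index.html",
          "applicationUrl": "https://localhost:5001;http://localhost:5000"
        }
      }
    }
    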

    Adding documentation to your endpoints

    Once you have created your endpoints and exposed the Swagger UI, the best thing to do is to add some detailed documentation to help other developers know what every endpoint does.

    PSST! You know that there’s a blurred line between good and bad comments, and endpoint documentation is exactly on that line? If you wanna know more…

    /// <summary>
    /// Returns a movie given its ID
    /// </summary>
    /// <param name="id">ID of the movie to be found</param>
    /// <returns>The related movie if found. Null otherwise</returns>
    // GET api/<MarvelMoviesController>/5
    [HttpGet("{id}")]
    public Movie Get(int id)
    {
        return movies.FirstOrDefault(x => x.Id == id);
    }
    

    Let’s run the project and… nothing happens! Why?

    SwaggerGen can “see” only the executable code, not the comments. So we need to generate a different file and include it in the building of our OpenAPI file.

    In Visual Studio, open the Properties view of your API project, head to the Build tab, and select the XML documentation file under the Output section.

    By clicking on that checkbox, Visual Studio will populate the textbox with the absolute path of the generated file. Remember to replace it with a relative path, or simply the file name: otherwise, when you share the repository with your colleagues, the build will reference the path on your PC, not on theirs.

    Project-level flag that enables XML documentation
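
    If you prefer to edit the .csproj file directly, the same result can be obtained with the GenerateDocumentationFile property (a minimal sketch for an SDK-style project; the NoWarn entry, which silences warnings for undocumented members, is optional):

    <PropertyGroup>
      <GenerateDocumentationFile>true</GenerateDocumentationFile>
      <NoWarn>$(NoWarn);1591</NoWarn>
    </PropertyGroup>
    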

    So, every time you build your project, you’ll create or update that XML file, which contains the documentation metadata for your endpoints.

    I have added comments only to a single endpoint, so the generated XML is this one:

    <?xml version="1.0"?>
    <doc>
        <assembly>
            <name>SwaggerIntegrationV3</name>
        </assembly>
        <members>
            <member name="M:SwaggerIntegrationV3.Controllers.MarvelMoviesController.Get(System.Int32)">
                <summary>
                Returns a movie given its ID
                </summary>
                <param name="id">ID of the movie to be found</param>
                <returns>The related movie if found. Null otherwise</returns>
            </member>
        </members>
    </doc>
    

    Now it’s time to use this file in combination with Swagger. In the ConfigureServices method, update the Swagger integration like this:

    services.AddSwaggerGen(c =>
    {
        c.SwaggerDoc("v1-foo", new OpenApiInfo { Title = "My API", Version = "v1-beta" });
        var xmlFile = $"{Assembly.GetExecutingAssembly().GetName().Name}.xml";
        var xmlPath = Path.Combine(AppContext.BaseDirectory, xmlFile);
        c.IncludeXmlComments(xmlPath);
    });
    

    The new lines use reflection to build the path of the XML file we’ve just generated (remember to import the System.Reflection and System.IO namespaces); finally, we can see the comments in our UI.

    XML comments visible on the UI

    How to display returned status codes

    There are lots of status codes that the developers who integrate your API need to handle. Why not help them by adding some info about the returned status codes?

    It’s really simple: you just have to add some ProducesResponseType attributes to your endpoints.

    /// <summary>
    /// Returns a movie given its ID
    /// </summary>
    /// <param name="id">ID of the movie to be found</param>
    /// <returns>The related movie if found. Null otherwise</returns>
    // GET api/<MarvelMoviesController>/5
    [HttpGet("{id}")]
    [ProducesResponseType(StatusCodes.Status200OK)]
    [ProducesResponseType(StatusCodes.Status500InternalServerError)]
    public Movie Get(int id)
    {
        return movies.FirstOrDefault(x => x.Id == id);
    }
    

    Then you’ll be able to see the status codes in the UI.

    Status codes displayed on the UI

    Wrapping up

    We’ve seen what Swagger is, how it can help you and other developers interact with your APIs, and how to configure it.

    Usually, this integration is one of the very first things I do when I create a new API project.

    If you want to see the full example, head to this repository.

    Happy coding!



