
  • Setting the Stage: Inside the Process of Bringing Christian Fleming’s Work to Life in Print, Web, and 3D



    When two creatives collaborate, the design process becomes a shared stage — each bringing their own strengths, perspectives, and instincts. This project united designer/art director Artem Shcherban and 3D/motion designer Andrew Moskvin to help New York–based scenographer and costume designer Christian Fleming completely reimagine how his work is presented.

    What began as a portfolio refresh evolved into a cohesive visual system: a rigorously minimal print catalog, a single-page website concept, and a cinematic 3D visualization. Together, Artem and Andrew shaped an experience that distilled Christian’s theatrical sensibility into clear, atmospheric design across both physical and digital formats.

    From here, Artem picks up the story, walking us through how he approached the portfolio’s structure, the visual rules it would live by, and the thinking that shaped both its print and on-screen presence.

    Starting the Design Conversation

    Christian Fleming is a prominent designer and director based in New York City who works with theaters around the world creating visual spaces for performances. He approached me with a challenge: to update and rethink his portfolio so it would be easy to send out to theater directors and curators, specifically in print format.

    Christian had a pretty clear understanding of what he wanted to show and how it should look: rigid Scandinavian minimalism, extreme clarity of composition, a minimum of elements and a presentation that would be understandable to absolutely anyone – regardless of age, profession or context.

    It was important to create a system that would:

    • be updated regularly (approximately every 3 weeks),
    • adapt to new projects,
    • and at the same time remain visually and semantically stable.

    There also needed to be an “About Christian” section in the structure, but this too had to fit within a strict framework of visual language.

    Designing a Flexible Visual System

    I started by carefully analyzing how Christian works. His primary language is visual. He thinks in images, light, texture and composition. So it was important to retain a sense of air and rhythm, but build a clear modular structure that he could confidently work with on his own.

    We came up with a simple adaptive system:

    • it easily adapts to images of different formats,
    • scalable for everything from PDFs to presentations,
    • and can be used both digitally and offline.

    In the first stages, we tried several structures. However, Christian still felt that something was missing in the layout – the visuals and the logic were in conflict. We discussed openly which designs he wanted to show and which he didn’t. Some works had earned international reviews and carried real weight, but could not be shown in full detail.

    The solution was to divide them into two meaningful blocks:

    “Selected Projects”, with full presentation, and “Archival Projects”, with a focus on awards, reviews, and context. This approach preserved both structure and tone. The layout became balanced – and Christian responded to it immediately.

    After settling on the structure and understanding how it would work, we began creating the design itself and populating it with content. It was important from the start to train Christian to add content on his own, as there were many projects and they change quite often.

    One of the key advantages of our work is versatility. Not only could the final file be emailed, but it could also be used as a print publication. This gave Christian the opportunity to hand out physical copies at meetings, premieres and professional events where tactility and attention to detail are important.

    Christian liked the first result, both in the way the system was laid out and the way I approached the task. Then I suggested: let’s update the website as well.

    Translating the Portfolio to a Single-Page Site

    This phase proved to be the most interesting, and the most challenging.

    Although the website looks simple, it took almost 3 months to build. From the very beginning, Christian and I tried to understand why he needed to update the site and how it should work together with the already established portfolio system.

    The main challenge was to show the visual side of his projects. Not just text or logos, but the atmosphere, the light, the costumes, the feeling of the scene.

    One of the restrictions Christian set was that the site be as concise as possible – without a large number of pages (ideally just one) and without unnecessary transitions. It had to be simple, clear and intuitive, yet user-friendly and quite informative. This was a real challenge, given the amount of content that needed to be published.

    Designing with Stage Logic

    One of the key constraints that started the work on the site was Christian’s wish: no multiple pages. Everything had to be compact, coherent, clear and yet rich. This posed a special challenge. It was necessary to accommodate a fairly large amount of information without overloading the perception.

    I proposed a solution built on a theatrical metaphor: as in a stage blackout, the screen darkens and a new space appears. Each project becomes its own scene, with the user as a spectator — never leaving their seat, never clicking through menus. Navigation flows in smooth, seamless transitions, keeping attention focused and the emotional rhythm intact.

    Christian liked the idea, but it immediately raised a new challenge: how to fit everything important on one screen:

    • a short text about him,
    • social media links and a resume,
    • the job title and description,
    • and, if necessary, reviews.

    At the same time, the main visual content – photos and videos – had to remain in the center of attention and not overlap with the interface.

    Solving the Composition Puzzle

    We explored several layouts — from centered titles and multi-level disclosures to diagonal structures and thumbnail navigation. Some looked promising, but they lacked the sense of theatrical rhythm we wanted. The layouts felt crowded, with too much design and not enough air.

    The breakthrough came when we shifted focus from pure visuals to structural logic. We reduced each project view to four key elements: minimal information about Christian, the production title with the director’s name, a review (when available), and a button to select the project. Giving each element its own space created a layout that was both clear and flexible, without overloading the screen.

    Refining Through Iteration

    As with the book, the site went through several iterations:

    • In the first prototype, the central layout quickly proved unworkable – long play titles and director names didn’t fit on the screen, especially in the mobile version. We were losing scalability and not using all the available space.
    • In the second version, we moved the information blocks upwards – this gave us a logical hierarchy and allowed us not to burden the center of the screen. The visual focus remained on the photos, and the text did not interfere with the perception of the scenography.
    • In the third round, the idea of “titles” appeared – a clear typographic structure where titles are distinguished only by weight, without changing the typeface. This was in keeping with the overall minimalist aesthetic, and Christian specifically mentioned that he didn’t want to use more than one font or style unless necessary.

    We also decided to stylistically separate the reviews from the main description. We italicized them and put them just below. This made it clear what belonged to the author and what was a response to the author’s work.

    Bringing Theatrical Flow to Navigation

    The last open issue was navigation between projects. I proposed two scenarios:

    1. Navigating with arrows, as if the viewer were leafing through the play scene by scene.
    2. A clickable menu with a list of works for those who want to go directly.

    Christian was concerned: wouldn’t users lose their bearings if they didn’t see the list all the time? We discussed this and concluded that most visitors don’t come to the site to “look for a specific work”. They come to feel the atmosphere and “experience” his theater. So the basic scenario is sequential browsing, like moving through a play. The menu is available, but stays out of the way – it should not break the sense of involvement.

    What We Learned About Theatrical Design

    We didn’t build just a website. We built an experience. It is not a digital storefront, but a space that reflects the way Christian works. He is an artist who thinks in the rhythm of the stage, and it was essential not to break that rhythm.

    The result is a place where the viewer isn’t distracted; they inhabit it. Navigation, structure, and interface quietly support this experience. Much of that comes from Christian’s clear and thoughtful feedback, which shaped the process at every step. This project is a reminder that even work which appears simple is defined by countless small decisions, each influencing not only how it functions but also the mood it creates from the very beginning.

    Extending the Design from Screen to Print

    Once the site was complete, a new question emerged: how should this work be presented in the most meaningful way?

    The digital format was only part of the answer. We also envisioned a printed edition — something that could be mailed or handed over in person as a physical object. In the theater world, where visual presence and tactility carry as much weight as the idea itself, this felt essential.

    We developed a set of layouts, but bringing the catalog to life as intended proved slow. Christian’s schedule with his theater work left little time to finalize the print production. We needed an alternative that could convey not only the design but also the atmosphere and weight of the finished book.

    Turning the Book into a Cinematic Object

    At this stage, 3D and motion designer Andrew Moskvin joined the project. We shared the brief with him — not just to present the catalog, but to embed it within the theatrical aesthetic, preserving the play of light, texture, air, and mood that defined the website.

    Andrew was immediately enthusiastic. After a quick call, he dove into the process. I assembled all the pages of the print version we had, and together we discussed storyboards, perspectives, atmosphere, possible scenes, and materials that could deepen the experience. The goal was more than simply showing the layout — we wanted cinematic shots where every fold of fabric and every spot of light served a single dramaturgy.

    The result exceeded expectations. Andrew didn’t just recreate the printed version; he brought it to life. His work was subtle and precise, with a deep respect for context. He captured not only the mood but also the intent behind each spread, giving the book weight, materiality, and presence — the kind we imagined holding in our hands and leafing through in person.

    Andrew will share his development process below.

    Breaking Down the 3D Process

    The Concept

    At the very start, I wanted my work to blend fluidly with the ideas that were already in place. Christian Fleming is a scenographer and costume designer, so the visual system needed to reflect his world. Since the project was deeply rooted in the theatrical aesthetic, my 3D work had to naturally blend into that atmosphere. Artem’s direction played a key role in shaping the unique look envisioned by Christian Fleming — rich with stage-like presence, bold compositions, and intentional use of space. My task was to ensure that the 3D elements not only supported this world, but also felt like an organic extension of it — capturing the same mood, lighting nuances, and visual rhythm that define a theatrical setting.

    The Tools

    For the entire 3D pipeline, I worked in:

    1. Cinema 4D for modeling and scene setup
    2. Redshift for rendering 
    3. After Effects for compositing 
    4. Photoshop for color correcting static images

    Modeling the Book

    The book was modeled entirely from scratch. Artem and I discussed the form and proportions, and after several iterations, we finalized the design direction. I focused on the small details that bring realism: the curvature of the hardcover spine, beveled edges, the separation between the cover and pages, and the layered structure of the paper block. I also modeled the cloth texture wrapping the spine, giving the book a tactile, fabric-like look. The geometry was built to hold up in close-up shots and fit the theatrical lighting.

    Lighting with a Theatrical Eye

    Lighting was one of the most important parts of this process. I wanted the scenes to feel theatrical — as if the objects were placed on a stage under carefully controlled spotlights. Using a combination of area lights and spotlights in Redshift, I shaped the lighting to create soft gradients and shadows on the surfaces. The setup was designed to emphasize the geometry without flattening it, always preserving depth and direction. A subtle backlight highlight played a key role in defining the edges and enhancing the overall form.

    I think I spent more time on lighting than on modeling, since lighting has always been more experimental for me — even in product scenes.

    One small but impactful trick I always use is setting up a separate HDRI map just for reflections. I disable its contribution to diffuse lighting by setting the diffuse value to 0, while keeping reflections at 1. This allows the reflections to pop more without affecting the overall lighting of the scene. It’s a simple setup, but it gives you way more control over how materials respond — especially in stylized or highly art-directed environments.

    Building the Materials

    When I was creating the materials, I noticed that Artem had used a checkerboard texture for the cover. So I thought — why not take that idea further and implement it directly into the material? I added a subtle bump using a checker texture on the sides and front part of the book.

    I also experimented quite a bit with displacement. Initially, I had the idea to make the title metallic, but it felt too predictable. So instead, I went with a white title featuring embossed details, while keeping the checker bump texture underneath.

    This actually ties back to the modeling process — for the displacement to work properly, the geometry had to be evenly dense and ready for subdivision. 

    I created a mask in Photoshop and applied a procedural Gaussian blur using a Smart Object. Without the blur, the displacement looked harsh and unrefined — even a slight blur made a noticeable difference.

    The main challenge with using white, as always, was avoiding blown-out highlights. I had to carefully balance the lighting and tweak the material settings to make the title clean and visible without overexposing it.

    One of the more unusual challenges in this project was animating the page slide and making the pages differ. I didn’t want the pages to feel too repetitive, but I also didn’t want to create dozens of individual materials for each page. To strike a balance, I created two different materials for two pages and distributed them randomly inside the Cloner. It was a bit of a workaround — mostly due to limitations of the Shader Switch node — but it worked well enough to create the illusion of variety without significantly increasing the complexity of the setup.

    There’s a really useful node in Redshift called Color User Data — especially when working with the MoGraph system to trigger object index values. One of the strangest (and probably least intuitive) things I did in this setup was using a Change Range node to remap those index values properly according to the number of textures I had. With that in place, I built a system that used an index to mix between all the textures inside a Shader Switch node. This allowed me to get true variation across the pages without manually assigning materials to each one.

    You might’ve noticed that the pages look a bit too bright for a real-world scenario — and that was actually a deliberate choice. I often use a trick that helps me art-direct material brightness independently of the scene’s lighting. The key node here is Color Correct Node.

    Inside it, there’s a parameter called Level. If you set it higher than 1, it increases the overall brightness of the texture output — without affecting shadows or highlights too aggressively. This also works in reverse: if your texture has areas that are too bright (like pure white), lowering the Level value below 1 will tone it down without needing to modify the source texture.

    It’s a simple trick, but incredibly useful when you want fine control over how materials react in stylized or theatrical lighting setups.

    The red cloth material I used throughout the scene is another interesting part of the project. I wanted it to have a strong tactile feel — something that looks thick, textured, and physically present. To achieve that, I relied heavily on geometry. I used a Redshift Object Tag with Subdivision (under the Geometry tab) enabled to add more detail where it was needed. This helped the cloth catch light properly and hold up in close-up shots.

    For the translucent look, I originally experimented with Subsurface Scattering, but it didn’t give me the control I wanted. So instead, I used an Opacity setup driven by a Ramp and Change Range nodes. That gave me just enough falloff and variation to fake the look of light passing through thinner areas of the fabric — and in the end, it worked surprisingly well.

    Animating the Pages

    This was by far the most experimental part of the project for me. The amount of improvisation — and the complete lack of confidence in what the next frame would be — made the process both fun and flexible.

    What you’re about to see might look a bit chaotic, so let me quickly walk you through how it all started.

    The simulation started with a subject — in our case, a page. It had to have the proper form, and by that I mean the right topology. Specifically, it needed to consist only of horizontal segments; otherwise, it would bend unevenly under the forces present in the scene. (And yes, I did try versions with even polygons — it got messy.)

    I set up all the pages in a Cloner so I could easily adjust any parameters I needed, and added a bit of randomness using a Random Effector.

    In the video, you can see a plane on the side that connects to the pages — that was actually the first idea I had when thinking about how to run the simulation. The plane has a Connect tag that links all the pages to it, so when it rotates, they all follow along.

    I won’t go into all the force settings — most of them were experimental, and animations like this always require a bit of creative adjustment.

    The main force was wind. The pages would slide from the plane with the Connect tag alone, but I needed to give them an extra push from underneath — that’s where wind came in handy.

    I also used a Field Force to move the pages mid-air, from the center outward to the other side.

    Probably the most important part was how I triggered the “Mix Animation.” I used a Vertex Map tag on the Cloner to paint a map using a Field, which then drove the Mix Animation parameter in the Cloth tag. This setup made the pages activate one by one, creating a natural, finger-like sliding motion, as seen in the video.

    Postprocessing

    I didn’t go too heavy on post-processing, but there’s one plugin I have to mention — Deep Glow. It gives amazing results. By tweaking the threshold, you can make it react only to the brightest areas, which creates a super clean, glowing effect.

    The Final Theatrical Ecosystem

    In the end, Christian was delighted with the outcome. Together we had built more than a portfolio — we had created a cohesive theatrical ecosystem. It moved fluidly from digital performance to printed object, from live stage to interface, and from emotion to technology.

    The experience is pared back to its essence: no superfluous effects, no unnecessary clicks, nothing to pull focus. What remains is what matters most — the work itself, framed in a way that stays quietly behind the scenes yet comes fully alive in the viewer’s hands and on their screen.




  • How to solve InvalidOperationException for constructors using HttpClientFactory in C#



    A suitable constructor for type ‘X’ could not be located. What a strange error message! Luckily it’s easy to solve.


    Just a second! 🫷
    If you are here, it means that you are a software developer.
    So, you know that storage, networking, and domain management have a cost .

    If you want to support this blog, please ensure that you have disabled the adblocker for this site.
    I configured Google AdSense to show as few ADS as possible – I don’t want to bother you with lots of ads, but I still need to add some to pay for the resources for my site.

    Thank you for your understanding.
    Davide

    A few days ago I was preparing the demo for a new article. The demo included a class with an IHttpClientFactory service injected into the constructor. Nothing more.

    Then, running the application (well, actually, executing the code), this error popped out:

    System.InvalidOperationException: A suitable constructor for type ‘X’ could not be located. Ensure the type is concrete and all parameters of a public constructor are either registered as services or passed as arguments. Also ensure no extraneous arguments are provided.

    How to solve it? It’s easy. But first, let me show you what I did in the wrong version.

    Setting up the wrong example

    For this example, I created an elementary project.
    It’s a .NET 7 API project, with only one controller, GenderController, which calls another service defined in the IGenderizeService interface.

    public interface IGenderizeService
    {
        Task<GenderProbability> GetGenderProbabiliy(string name);
    }
    

    IGenderizeService is implemented by a class, GenderizeService, which is the one that fails to load and, therefore, causes the exception to be thrown. The class calls an external endpoint, parses the result, and then returns it to the caller:

    public class GenderizeService : IGenderizeService
    {
        private readonly IHttpClientFactory _httpClientFactory;
    
        public GenderizeService(IHttpClientFactory httpClientFactory)
        {
            _httpClientFactory = httpClientFactory;
        }
    
        public async Task<GenderProbability> GetGenderProbabiliy(string name)
        {
            var httpClient = _httpClientFactory.CreateClient();
    
            var response = await httpClient.GetAsync($"?name={name}");
    
            var result = await response.Content.ReadFromJsonAsync<GenderProbability>();
    
            return result;
        }
    }
    

    Finally, I’ve defined the services in the Program class, and then I’ve specified which is the base URL for the HttpClient instance generated in the GenderizeService class:

    // some code
    
    builder.Services.AddScoped<IGenderizeService, GenderizeService>();
    
    builder.Services.AddHttpClient<IGenderizeService, GenderizeService>(
        client => client.BaseAddress = new Uri("https://api.genderize.io/")
        );
    
    var app = builder.Build();
    
    // some more code
    

    That’s it! Can you spot the error?

    2 ways to solve the error

    The error was quite simple, but it took me a while to spot:

    In the constructor I was injecting an IHttpClientFactory:

    public GenderizeService(IHttpClientFactory httpClientFactory)
    

    while in the host definition I was declaring an HttpClient for a specific class:

    builder.Services.AddHttpClient<IGenderizeService, GenderizeService>
    

    Apparently, even though we had specified how to create an HttpClient for that specific class, we could not resolve the class by injecting an IHttpClientFactory: the typed-client registration expects the constructor to accept the HttpClient itself.

    So, here are 2 ways to solve it.

    Use named HttpClient in HttpClientFactory

    Named HttpClients are a helpful way to define a specific HttpClient and use it across different services.

    It’s as simple as assigning a name to an HttpClient instance and then using the same name when you need that specific client.

    So, define it when registering services in the Program class:

    builder.Services.AddHttpClient("genderize",
                client => client.BaseAddress = new Uri("https://api.genderize.io/")
            );
    

    and retrieve it using CreateClient:

    public GenderizeService(IHttpClientFactory httpClientFactory)
    {
        _httpClientFactory = httpClientFactory;
    }
    
    public async Task<GenderProbability> GetGenderProbabiliy(string name)
    {
        var httpClient = _httpClientFactory.CreateClient("genderize");
    
        var response = await httpClient.GetAsync($"?name={name}");
    
        var result = await response.Content.ReadFromJsonAsync<GenderProbability>();
    
        return result;
    }
    

    💡 Quick tip: define the HttpClient names in a constant field shared across the whole system!
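A minimal sketch of that tip (the constant name `GenderizeClientName` is illustrative, not from the original post; in a real project it would be a public const in a shared static class):

```csharp
using System;

// Keep the client name in a single shared constant so the registration
// in the Program class and every CreateClient call always agree.
const string GenderizeClientName = "genderize";

// Registration (illustrative, mirrors the snippet above):
//   builder.Services.AddHttpClient(GenderizeClientName,
//       client => client.BaseAddress = new Uri("https://api.genderize.io/"));
//
// Consumption (illustrative):
//   var httpClient = _httpClientFactory.CreateClient(GenderizeClientName);

Console.WriteLine(GenderizeClientName);
```

This way, a typo in the name shows up as a compile error instead of a client silently falling back to default settings.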

    Inject HttpClient instead of IHttpClientFactory

    The other way is by injecting an HttpClient instance instead of an IHttpClientFactory.

    So we can keep the previous registration in the Program class:

    builder.Services.AddHttpClient<IGenderizeService, GenderizeService>(
                client => client.BaseAddress = new Uri("https://api.genderize.io/")
            );
    

    and, instead of injecting an IHttpClientFactory, we can directly inject an HttpClient instance:

    public class GenderizeService : IGenderizeService
    {
        private readonly HttpClient _httpClient;
    
        public GenderizeService(HttpClient httpClient)
        {
            _httpClient = httpClient;
        }
    
        public async Task<GenderProbability> GetGenderProbabiliy(string name)
        {
            //var httpClient = _httpClientFactory.CreateClient("genderize");
    
            var response = await _httpClient.GetAsync($"?name={name}");
    
            var result = await response.Content.ReadFromJsonAsync<GenderProbability>();
    
            return result;
        }
    }
    

    We no longer need to call _httpClientFactory.CreateClient because the injected instance of HttpClient is already customized with the settings we’ve defined at Startup.

    Further readings

    I’ve briefly talked about HttpClientFactory in one article of my C# tips series:

    🔗 C# Tip: use IHttpClientFactory to generate HttpClient instance | Code4IT

    And, more in detail, I’ve also talked about one way to mock HttpClientFactory instances in unit tests using Moq:

    🔗 How to test HttpClientFactory with Moq | Code4IT

    Finally, why do we need to use HttpClientFactories instead of HttpClients?

    🔗 Use IHttpClientFactory to implement resilient HTTP requests | Microsoft Docs

    This article first appeared on Code4IT

    Wrapping up

    Yes, it was that easy!

    We received the error message

    A suitable constructor for type ‘X’ could not be located.

    because we were mixing two ways to customize and use HttpClient instances.

    But we’ve only opened Pandora’s box: we will come back to this topic soon!

    For now, Happy coding!

    🐧




  • SelectMany in LINQ | Code4IT




    There’s one LINQ method that I always struggle to understand: SelectMany.

    It’s actually a pretty simple method, but somehow it doesn’t stick in my head.

    In simple words, SelectMany works on collections whose items can themselves be used, in whatever way, to retrieve other items.

    Let’s see an example using the dear old for loop, and then we will replace it with SelectMany.

    For this example, I’ve created a simple record type that represents an office. Each office has one or more phone numbers.

    record Office(string Place, string[] PhoneNumbers);
    

    Now, our company has a list of offices.

    List<Office> myCompanyOffices = new List<Office>{
        new Office("Turin", new string[]{"011-1234567", "011-2345678", "011-34567890"}),
        new Office("Rome", new string[]{"031-4567", "031-5678", "031-890"}),
        new Office("Dublin", new string[]{"555-031-4567", "555-031-5678", "555-031-890"}),
    };
    

    How can we retrieve the list of all phone numbers?

    Iterating with a FOR-EACH loop

    The most obvious way is to iterate over the collection with a for or a foreach loop.

    List<string> allPhoneNumbers = new List<string>();
    
    foreach (var office in myCompanyOffices)
    {
        allPhoneNumbers.AddRange(office.PhoneNumbers);
    }
    

    Nothing fancy: we use AddRange instead of Add, just to avoid another inner loop.

    Using SelectMany

    You can do the same thing in a single line using LINQ’s SelectMany.

    List<string> allPhoneNumbers = myCompanyOffices.SelectMany(b => b.PhoneNumbers).ToList();
    

    This method aggregates all the PhoneNumbers elements in an IEnumerable<string> instance (but then we need to call ToList to convert it).

    Of course, always check that the PhoneNumbers list is not null, otherwise enumerating a null collection will throw a NullReferenceException.

    The simplest way is by using the ?? operator:

    allPhoneNumbers = myCompanyOffices.SelectMany(b => b.PhoneNumbers ?? Enumerable.Empty<string>()).ToList();
    
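SelectMany also has a two-argument overload that keeps the parent element available, which is handy when you need to know which office each number belongs to. A minimal sketch (the dictionary and variable names are illustrative, not from the post):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

var offices = new Dictionary<string, string[]>
{
    ["Turin"] = new[] { "011-1234567", "011-2345678" },
    ["Rome"]  = new[] { "031-4567" },
};

// The second lambda receives both the parent element and each child item,
// so every phone number can be labelled with its office.
List<string> labelled = offices
    .SelectMany(o => o.Value ?? Enumerable.Empty<string>(),
                (o, phone) => $"{o.Key}: {phone}")
    .ToList();

foreach (var entry in labelled)
    Console.WriteLine(entry); // e.g. "Turin: 011-1234567"
```

The null-coalescing guard from above still applies here, since the collection selector must never return null.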

    Wrapping up

    Easy, right? I don’t have anything more to add!

    Happy coding!

    🐧




  • How to Choose the Top XDR Vendor for Your Cybersecurity Future



    Cyberattacks aren’t slowing down—they’re getting bolder and smarter. From phishing scams to ransomware outbreaks, the number of incidents has doubled or even tripled year over year. In today’s hybrid, multi-vendor IT landscape, protecting your organization’s digital assets requires choosing the top XDR vendor that can see and stop threats across every possible entry point.

    Over the last five years, XDR (Extended Detection and Response) has emerged as one of the most promising cybersecurity innovations. Leading IT analysts agree: XDR solutions will play a central role in the future of cyber defense. But not all XDR platforms are created equal. Success depends on how well an XDR vendor integrates Endpoint Protection Platforms (EPP) and Endpoint Detection and Response (EDR) to detect, analyze, and neutralize threats in real time.

    This guide will explain what makes a great XDR vendor and how Seqrite XDR compares to industry benchmarks. It also includes a practical checklist for confidently evaluating your next security investment.

    Why Choosing the Right XDR Vendor Matters

    Your XDR platform isn’t just another security tool; it’s the nerve center of your threat detection and response strategy. The best solutions act as a central brain, collecting security telemetry from:

    • Endpoints
    • Networks
    • Firewalls
    • Email
    • Identity systems
    • DNS

    They don’t just collect this data; they correlate it intelligently, filter out the noise, and give your security team actionable insights to respond faster.

    According to industry reports, over 80% of IT and cybersecurity professionals are increasing budgets for threat detection and response. If you choose the wrong vendor, you risk fragmented visibility, alert fatigue, and missed attacks.

    Key Capabilities Every Top XDR Vendor Should Offer

    When shortlisting top XDR vendors, here’s what to look for:

    1. Advanced Threat Detection – Identify sophisticated, multi-layer attack patterns that bypass traditional tools.
    2. Risk-Based Prioritization – Assign scores (1–1000) so you know which threats truly matter.
    3. Unified Visibility – A centralized console to eliminate security silos.
    4. Integration Flexibility – Native and third-party integrations to protect existing investments.
    5. Automation & Orchestration – Automate repetitive workflows to respond in seconds, not hours.
    6. MITRE ATT&CK Mapping – Know exactly which attacker tactics and techniques you can detect.

    Remember, it’s the integration of EPP and EDR that makes or breaks an XDR solution’s effectiveness.

    Your Unified Detection & Response Checklist

    Use this checklist to compare vendors on a like-for-like basis:

    • Full telemetry coverage: Endpoints, networks, firewalls, email, identity, and DNS.
    • Native integration strength: Smooth backend-to-frontend integration for consistent coverage.
    • Real-time threat correlation: Remove false positives, detect real attacks faster.
    • Proactive security posture: Shift from reactive to predictive threat hunting.
    • MITRE ATT&CK alignment: Validate protection capabilities against industry-recognized standards.

    Why Automation Is the Game-Changer

    The top XDR vendors go beyond detection: they optimize your entire security operation. Automated playbooks can instantly execute containment actions when a threat is detected. Intelligent alert grouping cuts down on noise, preventing analyst burnout.

    Automation isn’t just about speed; it’s about cost savings. A report by IBM Security shows that organizations with full automation save over ₹31 crore annually and detect/respond to breaches much faster than those relying on manual processes.

    The Seqrite XDR Advantage

    Seqrite XDR combines advanced detection, rich telemetry, and AI-driven automation into a single, unified platform. It offers:

    • Seamless integration with Seqrite Endpoint Protection (EPP) and Seqrite Endpoint Detection & Response (EDR), as well as third-party telemetry sources.
    • MITRE ATT&CK-aligned visibility to stay ahead of attackers.
    • Automated playbooks to slash response times and reduce manual workload.
    • Unified console for complete visibility across your IT ecosystem.
    • GenAI-powered SIA (Seqrite Intelligent Assistant) – Your AI-Powered Virtual Security Analyst. SIA offers predefined prompts and conversational access to incident and alert data, streamlining investigations and making it faster for analysts to understand, prioritize, and respond to threats.

    In a market crowded with XDR solutions, Seqrite delivers a future-ready, AI-augmented platform designed for today’s threats and tomorrow’s unknowns.

    If you’re evaluating your next security investment, start with a vendor who understands the evolving threat landscape and backs it up with a platform built for speed, intelligence, and resilience.



    Source link

  • 5 tricks every C# dev should know about LINQPad | Code4IT

    5 tricks every C# dev should know about LINQPad | Code4IT


    LINQPad is one of the tools I use daily. But still, I haven’t used it at its full power. And you?

    Table of Contents

    Just a second! 🫷
    If you are here, it means that you are a software developer.
    So, you know that storage, networking, and domain management have a cost.

    If you want to support this blog, please ensure that you have disabled the adblocker for this site.
    I configured Google AdSense to show as few ADS as possible – I don’t want to bother you with lots of ads, but I still need to add some to pay for the resources for my site.

    Thank you for your understanding.
    Davide

    LINQPad is one of my best friends: I use it daily, and it helps me A LOT when I need to run some throwaway code.

    There are many other tools out there, but I think that LINQPad (well, the full version!) is one of the best tools on the market.

    But still, many C# developers use just a few of its functionalities! In this article, I will show you my top 5 functionalities you should know.

    Advanced Dump()

    As many of you already know, to print stuff on the console you don’t have to call Console.WriteLine(something): you can simply call something.Dump();

    void Main()
    {
        var user = new User(1, "Davide", "DavideB");
        user.Dump();
    }
    

    Basic usage of Dump()

    You can simplify it by avoiding calling the Dump operation in a separate step: Dump can print the content and return it at the same time:

    var user = new User(1, "Davide", "DavideB").Dump();
    

    Dump() can both print and return a value

    For sure, this simple trick makes your code easier to read!

    Ok, what if you have too many Dump calls and you don’t know which operation prints which log? Lucky for us, the Dump method accepts a string as a Title: that text will be displayed in the output panel.

    var user = new User(1, "Davide", "DavideB").Dump("My User content");
    

    You can now see the “My User content” header right above the log of the user:

    Dump() with title

    Dump containers

    We can do a step further and introduce Dump containers.

    Dump Containers are some sort of sink for your logs (we’ve already talked about sinks, do you remember?). Once you’ve instantiated a DumpContainer object, you can perform some operations such as AppendContent to append some content at the end of the logs, ClearContent to clear the content (obviously!), and Dump to display the content of the Container in the Results panel.

    DumpContainer dc = new DumpContainer();
    
    dc.Content = "Hey!";
    dc.AppendContent("There");
    
    dc.Dump();
    

    Note: you don’t need to place the Dump() instruction at the end of the script: you can put it at the beginning and you’ll see the content as soon as it gets added. Otherwise, you will build the internal list of content and display it only at the end.

    So, this is perfectly valid:

    DumpContainer dc = new DumpContainer();
    dc.Dump();
    
    
    dc.Content = "Hey!";
    dc.AppendContent("There");
    

    Simple usage of Dump container

    You can even explicitly set the content of the Container: setting it will replace everything else.

    Here you can see what happens when we override the content:

    Replace log content with DumpContainer

    Why should we even care? 🤔

    My dear friend, it’s easy! Because we can create more Containers to log different things!

    Take this example: we want to loop over a list of items and use one Container to display the item itself, and another Container to list what happens when we perform some operations on each item. Yeeees, I know, it’s hard to understand in this way: let me show you an example!

    DumpContainer dc1 = new DumpContainer();
    DumpContainer dc2 = new DumpContainer();
    
    dc1.Dump();
    dc2.Dump();
    
    var users = new List<User> {
        new User(1, "Davide", "DavideB"),
        new User(2, "Dav", "Davi Alt"),
        new User(3, "Bellone", "Bellone 3"),
    };
    
    foreach (var element in users)
    {
        dc1.AppendContent(element);
        dc2.AppendContent(element.name.ToUpper());
    }
    

    Here we’re using two different containers, each of them lives its own life.

    Using multiple containers

    In this example I used AppendContent, but of course, you can replace the full content of a Container to analyze one item at a time.

    I can hear you: there’s another question in your mind:

    How can we differentiate those containers?

    You can use the Style property of the DumpContainer class to style the output, using CSS-like properties:

    DumpContainer dc2 = new DumpContainer();
    dc2.Style = "color:red; font-weight: bold";
    

    Now all the content stored in the dc2 container will be printed in red:

    Styling DumpContainer with CSS rules

    Great stuff 🤩

    Read text from input

    Incredibly useful, but often overlooked, is the ability to provide inputs to our scripts.

    To do that, you can rely on the Util.ReadLine method already included in LINQPad:

    string myContent = Util.ReadLine();
    

    When running the application, you will see a black box at the bottom of the window that allows you to write (or paste) some text. That text will then be assigned to the myContent variable.

    Using input in LINQPad

    There’s a nice overload that allows you to add a title to the text box, letting you know which step you’re on:

    Input boxes can have a title

    Paste as escaped string

    This is one of my favorite functionalities: I often have to assign text that contains quotes, copied from somewhere else, to a string variable. I used to lose time escaping those values manually (well, using other tools that are still slower than this one).

    Take this JSON:

    {
      "name": "davide",
      "gender": "male",
      "probability": 0.99,
      "count": 82957
    }
    

    Assigning it manually to a string becomes a mess. Lucky for us, we can copy it, go back to LINQPad, right-click, choose “Paste as escaped string” (or, if you prefer, use Alt+Shift+V) and have it already escaped and ready to be used:

    Escaped string in LINQPad

    That operation will generate this string:

    string content = "{\n\t\"name\": \"davide\",\n\t\"gender\": \"male\",\n\t\"probability\": 0.99,\n\t\"count\": 82957\n}";
    
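
    As a quick sanity check (plain .NET here, nothing LINQPad-specific), the escaped string parses straight back into the original JSON:

    ```csharp
    using System;
    using System.Text.Json;

    // The escaped string produced by LINQPad's "Paste as escaped string"
    string content = "{\n\t\"name\": \"davide\",\n\t\"gender\": \"male\",\n\t\"probability\": 0.99,\n\t\"count\": 82957\n}";

    // Parse it to prove the escaping round-trips correctly
    using JsonDocument doc = JsonDocument.Parse(content);
    string name = doc.RootElement.GetProperty("name").GetString();
    double probability = doc.RootElement.GetProperty("probability").GetDouble();

    Console.WriteLine($"{name} - {probability}");
    // prints: davide - 0.99
    ```

    The same trick works for any text you paste, quotes and tabs included.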

    Not bad, is it? 😎

    xUnit test support

    Another nice functionality that you can use to toy with classes or methods you don’t know is the xUnit test support.

    By clicking Query > Add XUnit Test Support, you can add xUnit to your query and write (and run, obviously) unit tests.

    All those tests are placed in a region named Tests:

    and can be run either by pressing Alt+Shift+T or by calling RunTests() in the Main method.

    After running the tests you will see a report with the list of the tests that passed and the details of the tests that failed:

    xUnit test result

    This article first appeared on Code4IT

    Wrapping up

    We’ve seen 5 amazing tricks to get the best out of LINQPad. In my opinion, every C# developer who uses this tool should know these tricks: they can really boost your productivity.

    Did you already know all of them? Which are your favorites? Drop a message in the comments section or on Twitter 📧

    Happy coding!

    🐧





    Source link

  • Building a Blended Material Shader in WebGL with Solid.js

    Building a Blended Material Shader in WebGL with Solid.js



    Blackbird was a fun, experimental site that I used as a way to get familiar with WebGL inside of Solid.js. It went through the story of how the SR-71 was built in super technical detail. The wireframe effect covered here helped visualize the technology beneath the surface of the SR-71 while keeping the polished metal exterior visible, matching the site’s aesthetic.

    Here is how the effect looks on the Blackbird site:

    In this tutorial, we’ll rebuild that effect from scratch: rendering a model twice, once as a solid and once as a wireframe, then blending the two together in a shader for a smooth, animated transition. The end result is a flexible technique you can use for technical reveals, holograms, or any moment where you want to show both the structure and the surface of a 3D object.

    There are three things at work here: material properties, render targets, and a black-to-white shader gradient. Let’s get into it!

    But First, a Little About Solid.js

    Solid.js isn’t a framework name you hear often; I’ve switched my personal work to it for the ridiculously minimal developer experience and because JSX remains the greatest thing since sliced bread. You absolutely don’t need to use the Solid.js part of this demo: you could strip it out and use vanilla JS all the same. But who knows, you may enjoy it 🙂

    Intrigued? Check out Solid.js.

    Why I Switched

    TLDR: Full-stack JSX without all of the opinions of Next and Nuxt, plus it’s like 8kb gzipped, wild.

    The technical version: it’s written in JSX, but doesn’t use a virtual DOM, so a “reactive” value (think useState()) doesn’t re-render an entire component, just one DOM node. It also runs isomorphically, so "use client" is a thing of the past.

    Setting Up Our Scene

    We don’t need anything wild for the effect: a Mesh, Camera, Renderer, and Scene will do. I use a base Stage class (for theatrical-ish naming) to control when things get initialized.

    A Global Object for Tracking Window Dimensions

    window.innerWidth and window.innerHeight trigger document reflow when you use them (more about document reflow here). So I keep them in one object, only updating it when necessary and reading from the object instead of from window, which avoids the reflow. Notice these are all set to 0 rather than to actual values by default: window evaluates as undefined when using SSR, so we want to wait to set them until our app is mounted, the GL class is initialized, and window is defined, to avoid everybody’s favorite error: Cannot read properties of undefined (reading ‘window’).

    // src/gl/viewport.js
    
    export const viewport = {
      width: 0,
      height: 0,
      devicePixelRatio: 1,
      aspectRatio: 0,
    };
    
    export const resizeViewport = () => {
      viewport.width = window.innerWidth;
      viewport.height = window.innerHeight;
    
      viewport.aspectRatio = viewport.width / viewport.height;
    
      viewport.devicePixelRatio = Math.min(window.devicePixelRatio, 2);
    };

    A Basic Three.js Scene, Renderer, and Camera

    Before we can render anything, we need a small framework to handle our scene setup, rendering loop, and resizing logic. Instead of scattering this across multiple files, we’ll wrap it in a Stage class that initializes the camera, renderer, and scene in one place. This makes it easier to keep our WebGL lifecycle organized, especially once we start adding more complex objects and effects.

    // src/gl/stage.js
    
    import { WebGLRenderer, Scene, PerspectiveCamera } from 'three';
    import { viewport, resizeViewport } from './viewport';
    
    class Stage {
      init(element) {
        resizeViewport() // Set the initial viewport dimensions, helps to avoid using window inside of viewport.js for SSR-friendliness
        
        this.camera = new PerspectiveCamera(45, viewport.aspectRatio, 0.1, 1000);
        this.camera.position.set(0, 0, 2); // back the camera up 2 units so it isn't on top of the meshes we make later, you won't see them otherwise.
    
        this.renderer = new WebGLRenderer();
        this.renderer.setSize(viewport.width, viewport.height);
        element.appendChild(this.renderer.domElement); // attach the renderer to the dom so our canvas shows up
    
        this.renderer.setPixelRatio(viewport.devicePixelRatio); // Renders higher pixel ratios for screens that require it.
    
        this.scene = new Scene();
      }
    
      render() {
        this.renderer.render(this.scene, this.camera);
        requestAnimationFrame(this.render.bind(this));
    // All of the scenes child classes with a render method will have it called automatically
        this.scene.children.forEach((child) => {
          if (child.render && typeof child.render === 'function') {
            child.render();
          }
        });
      }
    
      resize() {
        this.renderer.setSize(viewport.width, viewport.height);
        this.camera.aspect = viewport.aspectRatio;
        this.camera.updateProjectionMatrix();
    
    // All of the scenes child classes with a resize method will have it called automatically
        this.scene.children.forEach((child) => {
          if (child.resize && typeof child.resize === 'function') {
            child.resize();
          }
        });
      }
    }
    
    export default new Stage();

    And a Fancy Mesh to Go With It

    With our stage ready, we can give it something interesting to render. A torus knot is perfect for this: it has plenty of curves and detail to show off both the wireframe and solid passes. We’ll start with a simple MeshNormalMaterial in wireframe mode so we can clearly see its structure before moving on to the blended shader version.

    // src/gl/torus.js
    
    import { Mesh, MeshNormalMaterial, TorusKnotGeometry } from 'three';
    
    export default class Torus extends Mesh {
      constructor() {
        super();
    
        this.geometry = new TorusKnotGeometry(1, 0.285, 300, 26);
        this.material = new MeshNormalMaterial({
          wireframe: true,
        });
    
        this.position.set(0, 0, -8); // Back up the mesh from the camera so it's visible
      }
    }

    A quick note on lights

    For simplicity we’re using MeshNormalMaterial so we don’t have to mess with lights. The original effect on Blackbird had six lights, waaay too many. The GPU on my M1 Max was choked to 30fps trying to render the complex models and realtime six-point lighting. But reducing this to just 2 lights (which visually looked identical) ran at 120fps no problem. Three.js isn’t like Blender where you can plop in 14 lights and torture your beefy computer with the render for 12 hours while you sleep. The lights in WebGL have consequences 🫠

    Now, the Solid JSX Components to House It All

    // src/components/GlCanvas.tsx
    
    import { onMount, onCleanup } from 'solid-js';
    import Stage from '~/gl/stage';
    
    export default function GlCanvas() {
    // let is used instead of refs, these aren't reactive
      let el;
      let gl;
      let observer;
    
      onMount(() => {
        if(!el) return
        gl = Stage;
    
        gl.init(el);
        gl.render();
    
    
        observer = new ResizeObserver((entry) => gl.resize());
        observer.observe(el); // use ResizeObserver instead of the window resize event. 
        // It is debounced AND fires once when initialized, no need to call resize() onMount
      });
    
      onCleanup(() => {
        if (observer) {
          observer.disconnect();
        }
      });
    
    
      return (
        <div
          ref={el}
          style={{
            position: 'fixed',
            inset: 0,
            height: '100lvh',
            width: '100vw',
          }}
          
        />
      );
    }

    let is used to declare a ref; there is no formal useRef() function in Solid. Signals are the only reactive primitive. Read more on refs in Solid.

    Then slap that component into app.tsx:

    // src/app.tsx
    
    import { Router } from '@solidjs/router';
    import { FileRoutes } from '@solidjs/start/router';
    import { Suspense } from 'solid-js';
    import GlCanvas from './components/GlCanvas';
    
    export default function App() {
      return (
        <Router
          root={(props) => (
            <Suspense>
              {props.children}
              <GlCanvas />
            </Suspense>
          )}
        >
          <FileRoutes />
        </Router>
      );
    }

    Each 3D piece I use is tied to a specific element on the page (usually for timeline and scrolling), so I create an individual component to control each class. This helps me keep organized when I have 5 or 6 WebGL moments on one page.

    // src/components/WireframeDemo.tsx
    
    import { createEffect, createSignal, onMount } from 'solid-js'
    import Stage from '~/gl/stage';
    import Torus from '~/gl/torus';
    
    export default function WireframeDemo() {
      let el;
      const [element, setElement] = createSignal(null);
      const [actor, setActor] = createSignal(null);
    
      createEffect(() => {
        setElement(el);
        if (!element()) return;
    
        setActor(new Torus()); // Stage is initialized when the page initially mounts, 
        // so it's not available until the next tick. 
        // A signal forces this update to the next tick, 
        // after Stage is available.
    
        Stage.scene.add(actor());
      });
    
      return <div ref={el} />;
    }

    createEffect() instead of onMount(): this automatically tracks dependencies (element and actor in this case) and fires the function when they change, no more useEffect() with dependency arrays 🙃. Read more on createEffect in Solid.

    Then a minimal route to put the component on:

    // src/routes/index.tsx
    
    import WireframeDemo from '~/components/WireframeDemo';
    
    export default function Home() {
      return (
        <main>
          <WireframeDemo />
        </main>
      );
    }
    Diagram showing the folder structure of a code project

    Now you’ll see this:

    Rainbow torus knot

    Switching a Material to Wireframe

    I loved wireframe styling for the Blackbird site! It fit the prototype feel of the story: fully textured models felt too clean, while wireframes are a bit “dirtier” and unpolished. You can wireframe just about any material in Three.js with this:

    // /gl/torus.js
    
      this.material.wireframe = true
      this.material.needsUpdate = true;
    Rainbow torus knot changing from wireframe to solid colors

    But we want to do this dynamically on only part of our model, not on the entire thing.

    Enter render targets.

    The Fun Part: Render Targets

    Render Targets are a super deep topic, but they boil down to this: whatever you see on screen is a frame your GPU has rendered. In WebGL you can export that frame and re-use it as a texture on another mesh; you are creating a “target” for your rendered output, a render target.

    Since we’re going to need two of these targets, we can make a single class and re-use it.

    // src/gl/render-target.js
    
    import { WebGLRenderTarget } from 'three';
    import { viewport } from './viewport';
    
    export default class RenderTarget extends WebGLRenderTarget {
      constructor() {
        super();
    
        this.width = viewport.width * viewport.devicePixelRatio;
        this.height = viewport.height * viewport.devicePixelRatio;
      }
    
      resize() {
        const w = viewport.width * viewport.devicePixelRatio;
        const h = viewport.height * viewport.devicePixelRatio;
    
        this.setSize(w, h)
      }
    }

    This is just an output for a texture, nothing more.

    Now we can make the class that will consume these outputs. It’s a lot of classes, I know, but splitting up individual units like this helps me keep track of where stuff happens. 800 line spaghetti mega-classes are the stuff of nightmares when debugging WebGL.

    // src/gl/targeted-torus.js
    
    import {
      Mesh,
      MeshNormalMaterial,
      PerspectiveCamera,
      PlaneGeometry,
    } from 'three';
    import Torus from './torus';
    import { viewport } from './viewport';
    import RenderTarget from './render-target';
    import Stage from './stage';
    
    export default class TargetedTorus extends Mesh {
      targetSolid = new RenderTarget();
      targetWireframe = new RenderTarget();
    
      scene = new Torus(); // The shape we created earlier
      camera = new PerspectiveCamera(45, viewport.aspectRatio, 0.1, 1000);
      
      constructor() {
        super();
    
        this.geometry = new PlaneGeometry(1, 1);
        this.material = new MeshNormalMaterial();
      }
    
      resize() {
        this.targetSolid.resize();
        this.targetWireframe.resize();
    
        this.camera.aspect = viewport.aspectRatio;
        this.camera.updateProjectionMatrix();
      }
    }

    Now, switch our WireframeDemo.tsx component to use the TargetedTorus class, instead of Torus:

    // src/components/WireframeDemo.tsx 
    
    import { createEffect, createSignal, onMount } from 'solid-js';
    import Stage from '~/gl/stage';
    import TargetedTorus from '~/gl/targeted-torus';
    
    export default function WireframeDemo() {
      let el;
      const [element, setElement] = createSignal(null);
      const [actor, setActor] = createSignal(null);
    
      createEffect(() => {
        setElement(el);
        if (!element()) return;
    
        setActor(new TargetedTorus()); // << change me
    
        Stage.scene.add(actor());
      });
    
      return <div ref={el} data-gl="wireframe" />;
    }

    “Now all I see is a blue square, Nathan. It feels like we’re going backwards, show me the cool shape again”.

    Shhhhh, It’s by design I swear!

    From MeshNormalMaterial to ShaderMaterial

    We can now take our Torus rendered output and smack it onto the blue plane as a texture using ShaderMaterial. MeshNormalMaterial doesn’t let us use a texture, and we’ll need shaders soon anyway. Inside of targeted-torus.js remove the MeshNormalMaterial and switch this in:

    // src/gl/targeted-torus.js
    
    this.material = new ShaderMaterial({
      vertexShader: `
        varying vec2 v_uv;
    
        void main() {
          gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
          v_uv = uv;
        }
      `,
      fragmentShader: `
        varying vec2 v_uv;
        varying vec3 v_position;
    
        void main() {
          gl_FragColor = vec4(0.67, 0.08, 0.86, 1.0);
        }
      `,
    });

    Now we have a much prettier purple plane with the help of two shaders:

    • Vertex shaders manipulate the vertex locations of our mesh; we aren’t going to touch this one further
    • Fragment shaders assign the colors and properties to each pixel of our material; this one tells every pixel to be purple

    Using the Render Target Texture

    To show our Torus instead of that purple color, we can feed the fragment shader an image texture via uniforms:

    // src/gl/targeted-torus.js
    
    this.material = new ShaderMaterial({
      vertexShader: `
        varying vec2 v_uv;
    
        void main() {
          gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
          v_uv = uv;
        }
      `,
      fragmentShader: `
        varying vec2 v_uv;
        varying vec3 v_position;
    
        // declare 2 uniforms
        uniform sampler2D u_texture_solid;
        uniform sampler2D u_texture_wireframe;
    
        void main() {
          // declare 2 images
          vec4 wireframe_texture = texture2D(u_texture_wireframe, v_uv);
          vec4 solid_texture = texture2D(u_texture_solid, v_uv);
    
          // set the color to that of the image
          gl_FragColor = solid_texture;
        }
      `,
      uniforms: {
        u_texture_solid: { value: this.targetSolid.texture },
        u_texture_wireframe: { value: this.targetWireframe.texture },
      },
    });

    And add a render method to our TargetedTorus class (this is called automatically by the Stage class):

    // src/gl/targeted-torus.js
    
    render() {
      this.material.uniforms.u_texture_solid.value = this.targetSolid.texture;
    
      Stage.renderer.render(this.scene, this.camera);
      Stage.renderer.setRenderTarget(this.targetSolid);
      Stage.renderer.clear();
      Stage.renderer.setRenderTarget(null);
    }

    THE TORUS IS BACK. We’ve passed our image texture into the shader and it’s outputting our original render.

    Mixing Wireframe and Solid Materials with Shaders

    Shaders were black magic to me before this project. It was my first time using them in production, and I’m used to frontend, where you think in boxes. Shaders are coordinates from 0 to 1, which I find far harder to reason about. But I’d used Photoshop and After Effects with layers plenty of times, and these applications do a lot of the same work shaders can: GPU computing. This made it far easier: I started by picturing or drawing what I wanted, thinking about how I might do it in Photoshop, then asking myself how I could do it with shaders. Going from Photoshop or AE to shaders is far less mentally taxing when you don’t have a deep foundation in shaders.

    Populating Both Render Targets

    At the moment, we are only saving data to the targetSolid render target. We will update our render loop so that our shader has both targetSolid and targetWireframe available simultaneously.

    // src/gl/targeted-torus.js
    
    render() {
      // Render wireframe version to wireframe render target
      this.scene.material.wireframe = true;
      Stage.renderer.setRenderTarget(this.targetWireframe);
      Stage.renderer.render(this.scene, this.camera);
      this.material.uniforms.u_texture_wireframe.value = this.targetWireframe.texture;
    
      // Render solid version to solid render target
      this.scene.material.wireframe = false;
      Stage.renderer.setRenderTarget(this.targetSolid);
      Stage.renderer.render(this.scene, this.camera);
      this.material.uniforms.u_texture_solid.value = this.targetSolid.texture;
    
      // Reset render target
      Stage.renderer.setRenderTarget(null);
    }

    With this, you end up with a flow that under the hood looks like this:

    Diagram with red lines describing data being passed around

    Fading Between Two Textures

    Our fragment shader will get a little update, 2 additions:

    • smoothstep creates a smooth ramp between 2 values. UVs only go from 0 to 1, so in this case we use .15 and .65 as the limits (they make the effect more obvious than 0 and 1 would). Then we use the x value of the UVs to define which value gets fed into smoothstep.
    • vec4 mixed = mix(wireframe_texture, solid_texture, blend); mix does exactly what it says: it mixes 2 values together at a ratio determined by blend, with .5 being a perfectly even split.
    // src/gl/targeted-torus.js
    
    fragmentShader: `
      varying vec2 v_uv;
      varying vec3 v_position;
    
      // declare 2 uniforms
      uniform sampler2D u_texture_solid;
      uniform sampler2D u_texture_wireframe;
    
      void main() {
        // declare 2 images
        vec4 wireframe_texture = texture2D(u_texture_wireframe, v_uv);
        vec4 solid_texture = texture2D(u_texture_solid, v_uv);
    
        float blend = smoothstep(0.15, 0.65, v_uv.x);
        vec4 mixed = mix(wireframe_texture, solid_texture, blend);        
    
        gl_FragColor = mixed;
      }
    `,
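
    If the smoothstep and mix math feels abstract, the same functions can be sketched in plain JavaScript (these are just illustrations of the GLSL built-ins, not code the demo needs):

    ```javascript
    // clamp t to [0, 1], then apply the 3t² - 2t³ ease curve, like GLSL smoothstep
    function smoothstep(edge0, edge1, x) {
      const t = Math.min(Math.max((x - edge0) / (edge1 - edge0), 0), 1);
      return t * t * (3 - 2 * t);
    }

    // linear interpolation between a and b, like GLSL mix
    function mix(a, b, t) {
      return a * (1 - t) + b * t;
    }

    console.log(smoothstep(0.15, 0.65, 0.1)); // 0   -> fully wireframe
    console.log(smoothstep(0.15, 0.65, 0.4)); // 0.5 -> even blend
    console.log(smoothstep(0.15, 0.65, 0.9)); // 1   -> fully solid
    ```

    In the fragment shader, v_uv.x plays the role of x, so pixels on the left of the plane show the wireframe texture and pixels on the right show the solid one.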

    And boom, MIXED:

    Rainbow torus knot with wireframe texture

    Let’s be honest with ourselves: this looks exquisitely boring being static, so we can spice it up with a little magic from GSAP.

    // src/gl/torus.js
    
    import {
      Mesh,
      MeshNormalMaterial,
      TorusKnotGeometry,
    } from 'three';
    import gsap from 'gsap';
    
    export default class Torus extends Mesh {
      constructor() {
        super();
    
        this.geometry = new TorusKnotGeometry(1, 0.285, 300, 26);
        this.material = new MeshNormalMaterial();
    
        this.position.set(0, 0, -8);
    
        // add me!
        gsap.to(this.rotation, {
          y: 540 * (Math.PI / 180), // needs to be in radians, not degrees
          ease: 'power3.inOut',
          duration: 4,
          repeat: -1,
          yoyo: true,
        });
      }
    }

    Thank You!

    Congratulations, you’ve officially spent a measurable portion of your day blending two materials together. It was worth it though, wasn’t it? At the very least, I hope this saved you some of the mental gymnastics of orchestrating a pair of render targets.

    Have questions? Hit me up on Twitter!



    Source link

  • F.I.R.S.T. acronym for better unit tests | Code4IT

    F.I.R.S.T. acronym for better unit tests | Code4IT


    Good unit tests have some properties in common: they are Fast, Independent, Repeatable, Self-validating, and Thorough. In a word: FIRST!


    Just a second! 🫷
    If you are here, it means that you are a software developer.
    So, you know that storage, networking, and domain management have a cost.

    If you want to support this blog, please ensure that you have disabled the adblocker for this site.
    I configured Google AdSense to show as few ADS as possible – I don’t want to bother you with lots of ads, but I still need to add some to pay for the resources for my site.

    Thank you for your understanding.
    Davide

    FIRST is an acronym that you should always remember if you want to write clean and extensible tests.

    This acronym tells us that Unit Tests should be Fast, Independent, Repeatable, Self-validating, and Thorough.

    Fast

    You should not create tests that require a long time for setup and start-up: ideally, you should be able to run the whole test suite in under a minute.

    If your unit tests take too long to run, something is wrong; there are several possible causes:

    1. You’re trying to access remote sources (such as real APIs, Databases, and so on): you should mock those dependencies to make tests faster and to avoid accessing real resources. If you need real data, consider creating integration/e2e tests instead.
    2. Your system under test is too complex to build: too many dependencies? DIT (Depth of Inheritance Tree) value too high?
    3. The method under test does too many things. You should consider splitting it into separate, independent methods, and let the caller orchestrate the method invocations as necessary.
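As a language-agnostic sketch of point 1 (Python here for brevity, with a hypothetical UserService standing in for your system under test): mocking the remote dependency keeps the test in-memory and fast.

```python
from unittest.mock import Mock

class UserService:
    # Hypothetical system under test: depends on a repository
    # that would normally hit a real database.
    def __init__(self, repo):
        self.repo = repo

    def active_user_names(self):
        return [u["name"] for u in self.repo.get_users() if u["active"]]

# The test injects a mock instead of a real repository,
# so no network or database is touched.
repo = Mock()
repo.get_users.return_value = [
    {"name": "Ada", "active": True},
    {"name": "Bob", "active": False},
]
assert UserService(repo).active_user_names() == ["Ada"]
```

The same query against a real database might take seconds; against the mock it is effectively instantaneous.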

    Independent (or Isolated)

    Test methods should be independent of one another.

    Avoid doing something like this:

    MyObject myObj = null;
    
    [Fact]
    void Test1()
    {
        myObj = new MyObject();
        Assert.True(string.IsNullOrEmpty(myObj.MyProperty));
    
    }
    
    [Fact]
    void Test2()
    {
    
        myObj.MyProperty = "ciao";
        Assert.Equal("oaic", Reverse(myObj.MyProperty));
    
    }
    

    Here, to have Test2 working correctly, Test1 must run before it, otherwise myObj would be null. There’s a dependency between Test1 and Test2.

    How to avoid it? Create new instances for every test, be it with some custom factory methods or in the setup phase. And remember to reset the mocks as well.
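The same idea in a quick Python sketch (with a hypothetical MyObject mirroring the C# example): each test builds its own instance, so the tests pass in any order.

```python
class MyObject:
    # Hypothetical class mirroring the C# example above
    def __init__(self):
        self.my_property = ""

def make_obj():
    # Shared factory: every test gets a fresh, independent instance
    return MyObject()

def test1():
    obj = make_obj()
    assert obj.my_property == ""

def test2():
    obj = make_obj()
    obj.my_property = "ciao"
    assert obj.my_property[::-1] == "oaic"

# No shared state: running them in any order works.
test2()
test1()
```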

    Repeatable

    Unit Tests should be repeatable. This means that wherever and whenever you run them, they should behave correctly.

    So you should remove any dependency on the file system, current date, and so on.

    Take this test as an example:

    [Fact]
    void TestDate_DoNotDoIt()
    {
    
        DateTime d = DateTime.UtcNow;
        string dateAsString = d.ToString("yyyy-MM-dd");
    
        Assert.Equal("2022-07-19", dateAsString);
    }
    

    This test is strictly bound to the current date. So, if I run this test again in a month, it will fail.

    We should instead remove that dependency and use dummy values or mocks.

    [Fact]
    void TestDate_DoIt()
    {
    
        DateTime d = new DateTime(2022,7,19);
        string dateAsString = d.ToString("yyyy-MM-dd");
    
        Assert.Equal("2022-07-19", dateAsString);
    }
    

    There are many ways to inject DateTime (and other similar dependencies) with .NET. I’ve listed some of them in this article: “3 ways to inject DateTime and test it”.
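The injection idea, sketched in Python (with a hypothetical report_date function; the actual C# options are in the linked article): the code under test accepts the clock as a parameter instead of reading the system time directly.

```python
from datetime import datetime, timezone

def report_date(now=None):
    # Accept the clock as a parameter; fall back to real time in production.
    current = now if now is not None else datetime.now(timezone.utc)
    return current.strftime("%Y-%m-%d")

# The test pins the date, so it is repeatable forever:
assert report_date(datetime(2022, 7, 19)) == "2022-07-19"
```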

    Self-validating

    Self-validating means that a test should perform operations and programmatically check for the result.

    For instance, if you’re testing that you’ve written something on a file, the test itself is in charge of checking that it worked correctly. No manual operations should be done.

    Also, tests should provide explicit feedback: a test either passes or fails; no in-between.

    Thorough

    Unit Tests should be thorough in that they must validate both the happy paths and the failing paths.

    So you should test your functions with valid inputs and with invalid inputs.

    You should also validate what happens if an exception is thrown while executing the path: are you handling errors correctly?

    Have a look at this class, with a single, simple, method:

    public class ItemsService
    {
    
        readonly IItemsRepository _itemsRepo;
    
        public ItemsService(IItemsRepository itemsRepo)
        {
            _itemsRepo = itemsRepo;
        }
    
        public IEnumerable<Item> GetItemsByCategory(string category, int maxItems)
        {
    
            var allItems = _itemsRepo.GetItems();
    
            return allItems
                    .Where(i => i.Category == category)
                    .Take(maxItems);
        }
    }
    

    Which tests should you write for GetItemsByCategory?

    I can think of these:

    • what if category is null or empty?
    • what if maxItems is less than 0?
    • what if allItems is null?
    • what if one of the items inside allItems is null?
    • what if _itemsRepo.GetItems() throws an exception?
    • what if _itemsRepo is null?

    As you can see, even for a trivial method like this you should write a lot of tests, to ensure that you haven’t missed anything.
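A sketch of that checklist as code (Python, with a hypothetical get_items_by_category mirroring the C# method and adding explicit guards): each edge case becomes its own assertion.

```python
def get_items_by_category(items, category, max_items):
    # Hypothetical port of GetItemsByCategory, with explicit guard clauses
    if items is None:
        raise ValueError("items must not be None")
    if max_items < 0:
        raise ValueError("max_items must be non-negative")
    matching = [i for i in items if i is not None and i.get("category") == category]
    return matching[:max_items]

# Happy path
assert get_items_by_category([{"category": "book"}], "book", 5) == [{"category": "book"}]
# Empty category simply matches nothing
assert get_items_by_category([{"category": "book"}], "", 5) == []
# None entries inside the list are skipped
assert get_items_by_category([None, {"category": "book"}], "book", 5) == [{"category": "book"}]
# Failing paths raise explicit errors
try:
    get_items_by_category(None, "book", 5)
    assert False, "expected a ValueError"
except ValueError:
    pass
```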

    Conclusion

    F.I.R.S.T. is a good way to remember the properties of a good unit test suite.

    Always try to stick to it, and remember that tests should be written even better than production code.

    Happy coding!

    🐧



    Source link

  • How to propagate HTTP Headers (and Correlation IDs) using HttpClients in C#

    How to propagate HTTP Headers (and Correlation IDs) using HttpClients in C#


    Propagating HTTP Headers can be useful, especially when dealing with Correlation IDs. It’s time to customize our HttpClients!


    Imagine this: you have a system made up of different applications that communicate via HTTP. There’s some sort of entry point, exposed to the clients, that orchestrates the calls to the other applications. How do you correlate those requests?

    A good idea is to use a Correlation ID: one common approach for HTTP-based systems is passing a value to the “public” endpoint using HTTP headers; that value will be passed to all the other systems involved in that operation to say that “hey, these incoming requests in the internal systems happened because of THAT SPECIFIC request in the public endpoint”. Of course, it’s more complex than this, but you get the idea.
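The core idea, stripped of any framework (a minimal Python sketch with hypothetical dict-based requests, not the .NET solution discussed below): reuse the caller's correlation id when present, mint one at the entry point otherwise, and attach it to every outgoing call.

```python
import uuid

CORRELATION_HEADER = "my-correlation-id"

def handle_incoming(incoming_headers):
    # Reuse the caller's correlation id, or start a new one at the entry point.
    correlation_id = incoming_headers.get(CORRELATION_HEADER) or str(uuid.uuid4())
    outgoing_headers = {CORRELATION_HEADER: correlation_id}
    # ...every internal HTTP call would carry outgoing_headers...
    return outgoing_headers

# The internal systems see the same id the public endpoint received:
assert handle_incoming({"my-correlation-id": "123"})["my-correlation-id"] == "123"
```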

    Now. How can we propagate an HTTP Header in .NET? I found this solution on GitHub, provided by no less than David Fowler. In this article, I’m gonna dissect his code to see how he built this solution.

    Important update: there’s a NuGet package that implements these functionalities: Microsoft.AspNetCore.HeaderPropagation. Consider this article as an excuse to understand what happens behind the scenes of an HTTP call, and use it to learn how to customize and extend those functionalities. Here’s how to integrate that package.

    Just interested in the C# methods?

    As I said, I’m not reinventing anything new: the source code I’m using for this article is available on GitHub (see link above), but still, I’ll paste the code here, for simplicity.

    First of all, we have two extension methods that add some custom functionalities to the IServiceCollection.

    public static class HeaderPropagationExtensions
    {
        public static IServiceCollection AddHeaderPropagation(this IServiceCollection services, Action<HeaderPropagationOptions> configure)
        {
            services.AddHttpContextAccessor();
            services.ConfigureAll(configure);
            services.TryAddEnumerable(ServiceDescriptor.Singleton<IHttpMessageHandlerBuilderFilter, HeaderPropagationMessageHandlerBuilderFilter>());
            return services;
        }
    
        public static IHttpClientBuilder AddHeaderPropagation(this IHttpClientBuilder builder, Action<HeaderPropagationOptions> configure)
        {
            builder.Services.AddHttpContextAccessor();
            builder.Services.Configure(builder.Name, configure);
            builder.AddHttpMessageHandler((sp) =>
            {
                var options = sp.GetRequiredService<IOptionsMonitor<HeaderPropagationOptions>>();
                var contextAccessor = sp.GetRequiredService<IHttpContextAccessor>();
    
                return new HeaderPropagationMessageHandler(options.Get(builder.Name), contextAccessor);
            });
    
            return builder;
        }
    }
    

    Then we have a Filter that will be used to customize how the HttpClients must be built.

    internal class HeaderPropagationMessageHandlerBuilderFilter : IHttpMessageHandlerBuilderFilter
    {
        private readonly HeaderPropagationOptions _options;
        private readonly IHttpContextAccessor _contextAccessor;
    
        public HeaderPropagationMessageHandlerBuilderFilter(IOptions<HeaderPropagationOptions> options, IHttpContextAccessor contextAccessor)
        {
            _options = options.Value;
            _contextAccessor = contextAccessor;
        }
    
        public Action<HttpMessageHandlerBuilder> Configure(Action<HttpMessageHandlerBuilder> next)
        {
            return builder =>
            {
                builder.AdditionalHandlers.Add(new HeaderPropagationMessageHandler(_options, _contextAccessor));
                next(builder);
            };
        }
    }
    

    Next, a simple class that holds the headers we want to propagate:

    public class HeaderPropagationOptions
    {
        public IList<string> HeaderNames { get; set; } = new List<string>();
    }
    

    And, lastly, the handler that actually propagates the headers:

    public class HeaderPropagationMessageHandler : DelegatingHandler
    {
        private readonly HeaderPropagationOptions _options;
        private readonly IHttpContextAccessor _contextAccessor;
    
        public HeaderPropagationMessageHandler(HeaderPropagationOptions options, IHttpContextAccessor contextAccessor)
        {
            _options = options;
            _contextAccessor = contextAccessor;
        }
    
        protected override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, System.Threading.CancellationToken cancellationToken)
        {
            if (_contextAccessor.HttpContext != null)
            {
                foreach (var headerName in _options.HeaderNames)
                {
                    // Get the incoming header value
                    var headerValue = _contextAccessor.HttpContext.Request.Headers[headerName];
                    if (StringValues.IsNullOrEmpty(headerValue))
                    {
                        continue;
                    }
    
                    request.Headers.TryAddWithoutValidation(headerName, (string[])headerValue);
                }
            }
    
            return base.SendAsync(request, cancellationToken);
        }
    }
    

    Ok, and how can we use all of this?

    It’s quite easy: if you want to propagate the my-correlation-id header for all the HttpClients created in your application, you just have to add this line to your Startup method.

    builder.Services.AddHeaderPropagation(options => options.HeaderNames.Add("my-correlation-id"));
    

    Time to study this code!

    How to “enrich” HTTP requests using DelegatingHandler

    Let’s start with the HeaderPropagationMessageHandler class:

    public class HeaderPropagationMessageHandler : DelegatingHandler
    {
        private readonly HeaderPropagationOptions _options;
        private readonly IHttpContextAccessor _contextAccessor;
    
        public HeaderPropagationMessageHandler(HeaderPropagationOptions options, IHttpContextAccessor contextAccessor)
        {
            _options = options;
            _contextAccessor = contextAccessor;
        }
    
        protected override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, System.Threading.CancellationToken cancellationToken)
        {
            if (_contextAccessor.HttpContext != null)
            {
                foreach (var headerName in _options.HeaderNames)
                {
                    // Get the incoming header value
                    var headerValue = _contextAccessor.HttpContext.Request.Headers[headerName];
                    if (StringValues.IsNullOrEmpty(headerValue))
                    {
                        continue;
                    }
    
                    request.Headers.TryAddWithoutValidation(headerName, (string[])headerValue);
                }
            }
    
            return base.SendAsync(request, cancellationToken);
        }
    }
    

    This class lies in the middle of the HTTP Request pipeline. It can extend the functionalities of HTTP Clients because it inherits from System.Net.Http.DelegatingHandler.

    If you recall from a previous article, the SendAsync method is the real core of any HTTP call performed using .NET’s HttpClients, and here we’re enriching that method by propagating some HTTP headers.

     protected override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, System.Threading.CancellationToken cancellationToken)
    {
        if (_contextAccessor.HttpContext != null)
        {
            foreach (var headerName in _options.HeaderNames)
            {
                // Get the incoming header value
                var headerValue = _contextAccessor.HttpContext.Request.Headers[headerName];
                if (StringValues.IsNullOrEmpty(headerValue))
                {
                    continue;
                }
    
                request.Headers.TryAddWithoutValidation(headerName, (string[])headerValue);
            }
        }
    
        return base.SendAsync(request, cancellationToken);
    }
    

    By using _contextAccessor we can access the current HTTP Context. From there, we retrieve the current HTTP headers, check if one of them must be propagated (by looking up _options.HeaderNames), and finally, we add the header to the outgoing HTTP call by using TryAddWithoutValidation.

    HTTP Headers are “cloned” and propagated

    Notice that we’ve used `TryAddWithoutValidation` instead of `Add`: in this way, we can use whichever HTTP header key we want without worrying about invalid names (such as the ones with a new line in it). Invalid header names will simply be ignored, as opposed to the Add method that will throw an exception.
    Finally, we continue with the HTTP call by executing `base.SendAsync`, passing the `HttpRequestMessage` object now enriched with additional headers.
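Conceptually, a DelegatingHandler is just a link in a chain that can touch the request before handing it to the next link. A minimal Python sketch of that chain (all names hypothetical, requests modeled as plain dicts):

```python
class Handler:
    # Base link: forwards the request to the next handler in the chain
    def __init__(self, inner=None):
        self.inner = inner

    def send(self, request):
        return self.inner.send(request) if self.inner else request

class HeaderPropagationHandler(Handler):
    # Enriches the outgoing request with headers from the incoming context
    def __init__(self, context_headers, names, inner=None):
        super().__init__(inner)
        self.context_headers = context_headers
        self.names = names

    def send(self, request):
        for name in self.names:
            value = self.context_headers.get(name)
            if value:  # skip missing/empty headers, like the C# version does
                request.setdefault("headers", {})[name] = value
        return super().send(request)  # continue down the chain

incoming = {"my-correlation-id": "123"}
pipeline = HeaderPropagationHandler(incoming, ["my-correlation-id"])
sent = pipeline.send({"url": "/items", "headers": {}})
assert sent["headers"]["my-correlation-id"] == "123"
```

The real pipeline works the same way: each registered handler wraps the next one, and base.SendAsync is the "continue down the chain" step.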

    Using HttpMessageHandlerBuilder to configure how HttpClients must be built

    The Microsoft.Extensions.Http.IHttpMessageHandlerBuilderFilter interface allows you to apply some custom configurations to the HttpMessageHandlerBuilder right before the HttpMessageHandler object is built.

    internal class HeaderPropagationMessageHandlerBuilderFilter : IHttpMessageHandlerBuilderFilter
    {
        private readonly HeaderPropagationOptions _options;
        private readonly IHttpContextAccessor _contextAccessor;
    
        public HeaderPropagationMessageHandlerBuilderFilter(IOptions<HeaderPropagationOptions> options, IHttpContextAccessor contextAccessor)
        {
            _options = options.Value;
            _contextAccessor = contextAccessor;
        }
    
        public Action<HttpMessageHandlerBuilder> Configure(Action<HttpMessageHandlerBuilder> next)
        {
            return builder =>
            {
                builder.AdditionalHandlers.Add(new HeaderPropagationMessageHandler(_options, _contextAccessor));
                next(builder);
            };
        }
    }
    

    The Configure method allows you to customize how the HttpMessageHandler will be built: we are adding a new instance of the HeaderPropagationMessageHandler class we’ve seen before to the current HttpMessageHandlerBuilder’s AdditionalHandlers collection. All the handlers registered in the list will then be used to build the HttpMessageHandler object we’ll use to send and receive requests.


    By having a look at the definition of HttpMessageHandlerBuilder you can grasp a bit of what happens when we’re creating HttpClients in .NET.

    namespace Microsoft.Extensions.Http
    {
        public abstract class HttpMessageHandlerBuilder
        {
            protected HttpMessageHandlerBuilder();
    
            public abstract IList<DelegatingHandler> AdditionalHandlers { get; }
    
            public abstract string Name { get; set; }
    
            public abstract HttpMessageHandler PrimaryHandler { get; set; }
    
            public virtual IServiceProvider Services { get; }
    
            protected internal static HttpMessageHandler CreateHandlerPipeline(HttpMessageHandler primaryHandler, IEnumerable<DelegatingHandler> additionalHandlers);
    
            public abstract HttpMessageHandler Build();
        }
    
    }
    

    Ah, and remember the wise words you can read in the docs of that class:

    The Microsoft.Extensions.Http.HttpMessageHandlerBuilder is registered in the service collection as a transient service.

    Nice 😎

    Share the behavior with all the HTTP Clients in the .NET application

    Now that we’ve defined the custom behavior of HTTP clients, we need to integrate it into our .NET application.

    public static IServiceCollection AddHeaderPropagation(this IServiceCollection services, Action<HeaderPropagationOptions> configure)
    {
        services.AddHttpContextAccessor();
        services.ConfigureAll(configure);
        services.TryAddEnumerable(ServiceDescriptor.Singleton<IHttpMessageHandlerBuilderFilter, HeaderPropagationMessageHandlerBuilderFilter>());
        return services;
    }
    

    Here, we’re gonna extend the IServiceCollection with those functionalities. First, we call AddHttpContextAccessor, which allows us to access the current HTTP Context (the one we’ve used in the HeaderPropagationMessageHandler class).

    Then, services.ConfigureAll(configure) registers a HeaderPropagationOptions instance that will be used by HeaderPropagationMessageHandlerBuilderFilter. Without that line, we wouldn’t be able to specify the names of the headers to be propagated.

    Finally, we have this line:

    services.TryAddEnumerable(ServiceDescriptor.Singleton<IHttpMessageHandlerBuilderFilter, HeaderPropagationMessageHandlerBuilderFilter>());
    

    Honestly, I haven’t understood it thoroughly: I thought that it allows us to use more than one class implementing IHttpMessageHandlerBuilderFilter, but apparently if we create a sibling class and add them both using Add, everything works the same. If you know what this line means, drop a comment below! 👇

    Wherever you access the ServiceCollection object (may it be in the Startup or in the Program class), you can propagate HTTP headers for every HttpClient by using

    builder.Services.AddHeaderPropagation(options =>
        options.HeaderNames.Add("my-correlation-id")
    );
    

    Yes, AddHeaderPropagation is the method we’ve seen in the previous paragraph!

    Seeing it in action

    Now we have all the pieces in place.

    It’s time to run it 😎

    To fully understand it, I strongly suggest forking this repository I’ve created and running it locally, placing some breakpoints here and there.

    As a recap: in the Program class, I’ve added these lines to create a named HttpClient specifying its BaseAddress property. Then I’ve added the HeaderPropagation as we’ve seen before.

    builder.Services.AddHttpClient("items")
                        .ConfigureHttpClient(c => c.BaseAddress = new Uri("https://en5xof8r16a6h.x.pipedream.net/"));
    
    builder.Services.AddHeaderPropagation(options =>
        options.HeaderNames.Add("my-correlation-id")
    );
    

    There’s also a simple Controller that acts as an entry point and that, using an HttpClient, sends data to another endpoint (the one defined in the previous snippet).

    [HttpPost]
    public async Task<IActionResult> PostAsync([FromQuery] string value)
    {
        var item = new Item(value);
    
        var httpClient = _httpClientFactory.CreateClient("items");
        await httpClient.PostAsJsonAsync("/", item);
        return NoContent();
    }
    

    What happens at start-up time

    When a .NET application starts up, the Main method in the Program class acts as an entry point and registers all the dependencies and configurations required.

    We will then call builder.Services.AddHeaderPropagation, which is the method present in the HeaderPropagationExtensions class.

    All the configurations are then set, but no actual operations are being executed.

    The application then starts normally, waiting for incoming requests.

    What happens at runtime

    Now, when we call the PostAsync method by passing an HTTP header such as my-correlation-id:123, things get interesting.

    The first operation is

    var httpClient = _httpClientFactory.CreateClient("items");
    

    While creating the HttpClient, the engine calls all the registered IHttpMessageHandlerBuilderFilter instances and invokes their Configure methods. So, you’ll see the execution move to HeaderPropagationMessageHandlerBuilderFilter’s Configure.

    public Action<HttpMessageHandlerBuilder> Configure(Action<HttpMessageHandlerBuilder> next)
    {
        return builder =>
        {
            builder.AdditionalHandlers.Add(new HeaderPropagationMessageHandler(_options, _contextAccessor));
            next(builder);
        };
    }
    

    Of course, you’re also executing the HeaderPropagationMessageHandler constructor.

    The HttpClient is now ready: when we call httpClient.PostAsJsonAsync("/", item) we’re also executing all the registered DelegatingHandler instances, such as our HeaderPropagationMessageHandler. In particular, we’re executing the SendAsync method and adding the required HTTP Headers to the outgoing HTTP calls.

    We will then see the same HTTP Header on the destination endpoint.

    We did it!

    Propagating CorrelationId to a specific HttpClient

    You can also specify which headers need to be propagated on single HTTP Clients:

    public static IHttpClientBuilder AddHeaderPropagation(this IHttpClientBuilder builder, Action<HeaderPropagationOptions> configure)
    {
        builder.Services.AddHttpContextAccessor();
        builder.Services.Configure(builder.Name, configure);
    
        builder.AddHttpMessageHandler((sp) =>
        {
            var options = sp.GetRequiredService<IOptionsMonitor<HeaderPropagationOptions>>();
            var contextAccessor = sp.GetRequiredService<IHttpContextAccessor>();
    
            return new HeaderPropagationMessageHandler(options.Get(builder.Name), contextAccessor);
        });
    
        return builder;
    }
    

    This works similarly, but registers the handler only for a specific HttpClient.

    For instance, you can have 2 distinct HttpClients that will each propagate only a specific set of HTTP Headers:

    builder.Services.AddHttpClient("items")
            .AddHeaderPropagation(options => options.HeaderNames.Add("my-correlation-id"));
    
    builder.Services.AddHttpClient("customers")
            .AddHeaderPropagation(options => options.HeaderNames.Add("another-correlation-id"));
    

    Further readings

    Finally, some additional resources if you want to read more.

    For sure, you should check out (and star⭐) David Fowler’s code:

    🔗 Original code | GitHub

    If you’re not sure what extension methods are (and you cannot answer this question: How does inheritance work with extension methods?), then you can have a look at this article:

    🔗 How you can create extension methods in C# | Code4IT

    We heavily rely on HttpClient and HttpClientFactory. How can you test them? Well, by mocking the SendAsync method!

    🔗 How to test HttpClientFactory with Moq | Code4IT

    We’ve seen the role that HttpMessageHandlerBuilder plays when building HttpClients. You can explore that class starting from the documentation.

    🔗 HttpMessageHandlerBuilder Class | Microsoft Docs

    We’ve already seen how to inject and use HttpContext in our applications:

    🔗 How to access the HttpContext in .NET API

    Finally, the repository that you can fork to toy with it:

    🔗 PropagateCorrelationIdOnHttpClients | GitHub

    This article first appeared on Code4IT

    Conclusion

    What a ride!

    We’ve seen how to add functionalities to HttpClients and to HTTP messages. All integrated into the .NET pipeline!

    We’ve learned how to propagate generic HTTP Headers. Of course, you can choose any custom HttpHeader and promote one of them as CorrelationId.

    Again, I invite you to download the code and toy with it – it’s incredibly interesting 😎

    Happy coding!

    🐧



    Source link

  • RBI Emphasizes Adopting Zero Trust Approaches for Banking Institutions

    RBI Emphasizes Adopting Zero Trust Approaches for Banking Institutions


    In a significant move to bolster cybersecurity in India’s financial ecosystem, the Reserve Bank of India (RBI) has underscored the urgent need for regulated entities—especially banks—to adopt Zero Trust approaches as part of a broader strategy to curb cyber fraud. In its latest Financial Stability Report (June 2025), RBI highlighted Zero Trust as a foundational pillar for risk-based supervision, AI-aware defenses, and proactive cyber risk management.

    The directive comes amid growing concerns about the digital attack surface, vendor lock-in risks, and the systemic threats posed by overreliance on a few IT infrastructure providers. RBI has clarified that traditional perimeter-based security is no longer enough, and financial institutions must transition to continuous verification models where no user or device is inherently trusted.

    What is Zero Trust?

    Zero Trust is a modern security framework built on the principle: “Never trust, always verify.”

    Unlike legacy models that grant broad access to anyone inside the network, Zero Trust requires every user, device, and application to be verified continuously, regardless of location—inside or outside the organization’s perimeter.

    Key principles of Zero Trust include:

    • Least-privilege access: Users only get access to what they need—nothing more.
    • Micro-segmentation: Breaking down networks and applications into smaller zones to isolate threats.
    • Continuous verification: Access is granted based on multiple dynamic factors, including identity, device posture, location, time, and behavior.
    • Assume breach: Security models assume threats are already inside the network and act accordingly.

    In short, Zero Trust ensures that access is never implicit, and every request is assessed with context and caution.

    Seqrite ZTNA: Zero Trust in Action for Indian Banking

    To help banks and financial institutions meet RBI’s Zero Trust directive, Seqrite ZTNA (Zero Trust Network Access) offers a modern, scalable, and India-ready solution that aligns seamlessly with RBI’s vision.

    Key Capabilities of Seqrite ZTNA

    • Granular access control
      It allows access only to specific applications based on role, user identity, device health, and risk level, eliminating broad network exposure.
    • Continuous risk-based verification
      Each access request is evaluated in real time using contextual signals like location, device posture, login time, and behavior.
    • No VPN dependency
      Removes the risks of traditional VPNs that grant excessive access. Seqrite ZTNA gives just-in-time access to authorized resources.
    • Built-in analytics and audit readiness
      Detailed logs of every session help organizations meet RBI’s incident reporting and risk-based supervision requirements.
    • Easy integration with identity systems
      Works seamlessly with Azure AD, Google Workspace, and other Identity Providers to enforce secure authentication.
    • Supports hybrid and remote workforces
      Agent-based or agent-less deployment suits internal employees, third-party vendors, and remote users.

    How Seqrite ZTNA Supports RBI’s Zero Trust Mandate

    RBI’s recommendations aren’t just about better firewalls but about shifting the cybersecurity posture entirely. Seqrite ZTNA helps financial institutions adopt this shift with:

    • Risk-Based Supervision Alignment: policies can be tailored based on user risk, job function, device posture, or geography, enabling the graded monitoring RBI emphasizes, with intelligent access decisions based on risk level.
    • CART and AI-Aware Defenses: behavior analytics and real-time monitoring help institutions detect anomalies and conduct Continuous Assessment-Based Red Teaming (CART) simulations.
    • Uniform Incident Reporting: Seqrite’s detailed session logs and access histories simplify compliance with RBI’s call for standardized incident reporting frameworks.
    • Vendor Lock-In Mitigation: unlike global cloud-only vendors, Seqrite ZTNA is designed with data sovereignty and local compliance in mind, offering full control to Indian enterprises.

    Sample Use Case: A Mid-Sized Regional Bank

    Challenge: The bank must secure access to its core banking applications for remote employees and third-party vendors without relying on VPNs.

    With Seqrite ZTNA:

    • Users access only assigned applications, not the entire network.
    • Device posture is verified before every session.
    • Behavior is monitored continuously to detect anomalies.
    • Detailed logs assist compliance with RBI audits.
    • Risk-based policies automatically adjust based on context (e.g., denying access from unknown locations or outdated devices).

    Result: A Zero Trust-aligned access model with reduced attack surface, better visibility, and continuous compliance readiness.

    Conclusion: Future-Proofing Banking Security with Zero Trust

    RBI’s directive isn’t just another compliance checklist: it’s a wake-up call. As India’s financial institutions expand digitally, adopting Zero Trust is essential for staying resilient, secure, and compliant.

    Seqrite ZTNA empowers banks to implement Zero Trust in a practical, scalable way aligned with national cybersecurity priorities. With granular access control, continuous monitoring, and compliance-ready visibility, Seqrite ZTNA is the right step forward in securing India’s digital financial infrastructure.



    Source link

  • use MiniProfiler instead of Stopwatch to profile code performance | Code4IT

    use MiniProfiler instead of Stopwatch to profile code performance | Code4IT



    Do you need to tune up the performance of your code? You can create some StopWatch objects and store the execution times or rely on external libraries like MiniProfiler.

    Note: of course, we’re just talking about time duration, and not about memory usage!

    How to profile code using Stopwatch

    A Stopwatch object acts as a (guess what?) stopwatch.

    You can manually make it start and stop, and keep track of the elapsed time:

    using System;
    using System.Diagnostics;
    
    // Start timing immediately
    Stopwatch sw = Stopwatch.StartNew();
    DoSomeOperations(100);
    var with100 = sw.ElapsedMilliseconds;
    
    // Reset the elapsed time and start timing again
    sw.Restart();
    DoSomeOperations(2000);
    var with2000 = sw.ElapsedMilliseconds;
    
    sw.Stop();
    
    Console.WriteLine($"With 100: {with100}ms");
    Console.WriteLine($"With 2000: {with2000}ms");
    

    It’s useful, but you have to do it manually. There’s a better choice.
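    If you want to stay with plain Stopwatch, you can hide some of that manual bookkeeping behind a small disposable wrapper that stops the watch and prints the elapsed time when the using block ends. This is a minimal sketch; the TimedScope class is hypothetical, not a BCL type:

    ```csharp
    using System;
    using System.Diagnostics;
    using System.Threading;

    // Usage: the elapsed time is printed automatically at the end of the block.
    using (new TimedScope("With 100"))
    {
        Thread.Sleep(100); // stand-in for DoSomeOperations(100)
    }

    // Hypothetical helper: a disposable scope that stops its Stopwatch
    // and prints the elapsed time when disposed.
    sealed class TimedScope : IDisposable
    {
        private readonly string _name;
        private readonly Stopwatch _sw = Stopwatch.StartNew();

        public TimedScope(string name) => _name = name;

        public void Dispose()
        {
            _sw.Stop();
            Console.WriteLine($"{_name}: {_sw.ElapsedMilliseconds}ms");
        }
    }
    ```

    This is essentially what MiniProfiler's Step does for you, with the added benefit of a single aggregated report.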

    How to profile code using MiniProfiler

    A good alternative is MiniProfiler: you can create a MiniProfiler object that holds all the info related to the current code execution. You then can add some Steps, which can have a name, and even nest them.

    Finally, you can print the result using RenderPlainText.

    using System;
    using StackExchange.Profiling; // from the MiniProfiler NuGet package
    
    MiniProfiler profiler = MiniProfiler.StartNew();
    
    // Each Step is timed from the start of the using block to its end
    using (profiler.Step("With 100"))
    {
        DoSomeOperations(100);
    }
    
    using (profiler.Step("With 2000"))
    {
        DoSomeOperations(2000);
    }
    
    Console.WriteLine(profiler.RenderPlainText());
    

    You no longer need to manually start and stop any Stopwatch instance.
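    Since steps can be nested, the report can mirror the call structure of your code. A minimal sketch, assuming the MiniProfiler NuGet package is referenced and reusing DoSomeOperations as the same placeholder workload as in the snippets above:

    ```csharp
    using System;
    using StackExchange.Profiling; // from the MiniProfiler NuGet package

    var profiler = MiniProfiler.StartNew("Nested demo");

    using (profiler.Step("Outer"))
    {
        using (profiler.Step("Inner A"))
            DoSomeOperations(100);

        using (profiler.Step("Inner B"))
            DoSomeOperations(2000);
    }

    // Nested steps are indented under their parent in the plain-text report.
    Console.WriteLine(profiler.RenderPlainText());

    // Placeholder workload, standing in for the article's DoSomeOperations.
    static void DoSomeOperations(int count) { /* simulated work */ }
    ```

    The timings of "Inner A" and "Inner B" roll up into "Outer", so you can see both the total cost and its breakdown in one report.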

    You can even use inline steps to profile a method execution and store its return value:

    var value = profiler.Inline(() => MethodThatReturnsSomething(12), "Get something");
    

    Here I decided to print the result on the Console. You can even create HTML reports, which are quite useful when profiling websites. You can read more here, where I experimented with MiniProfiler in a .NET API project.

    Here’s an example of what you can get:

    MiniProfiler API report

    Further readings

    We’ve actually already talked about MiniProfiler in an in-depth article you can find here:

    🔗 Profiling .NET code with MiniProfiler | Code4IT

    Oddly, it’s almost more detailed than the official documentation, which you can find here:

    🔗 MiniProfiler for .NET | MiniProfiler

    Happy coding!

    🐧



    Source link