A suitable constructor for type ‘X’ could not be located. What a strange error message! Luckily it’s easy to solve.
A few days ago I was preparing the demo for a new article. The demo included a class with an IHttpClientFactory service injected into the constructor. Nothing more.
Then, running the application (well, actually, executing the code), this error popped out:
System.InvalidOperationException: A suitable constructor for type ‘X’ could not be located. Ensure the type is concrete and all parameters of a public constructor are either registered as services or passed as arguments. Also ensure no extraneous arguments are provided.
How to solve it? It’s easy. But first, let me show you what I did in the wrong version.
Setting up the wrong example
For this example, I created an elementary project.
It’s a .NET 7 API project, with only one controller, GenderController, which calls another service defined in the IGenderizeService interface.
IGenderizeService is implemented by a class, GenderizeService, which is the one that fails to load and, therefore, causes the exception to be thrown. The class calls an external endpoint, parses the result, and then returns it to the caller:
public class GenderizeService : IGenderizeService
{
    private readonly IHttpClientFactory _httpClientFactory;

    public GenderizeService(IHttpClientFactory httpClientFactory)
    {
        _httpClientFactory = httpClientFactory;
    }

    public async Task<GenderProbability> GetGenderProbabiliy(string name)
    {
        var httpClient = _httpClientFactory.CreateClient();

        var response = await httpClient.GetAsync($"?name={name}");

        var result = await response.Content.ReadFromJsonAsync<GenderProbability>();

        return result;
    }
}
Finally, I’ve defined the services in the Program class and specified the base URL for the HttpClient instance generated in the GenderizeService class:
// some code

builder.Services.AddScoped<IGenderizeService, GenderizeService>();
builder.Services.AddHttpClient<IGenderizeService, GenderizeService>(
client => client.BaseAddress = new Uri("https://api.genderize.io/")
);
var app = builder.Build();
// some more code
That’s it! Can you spot the error?
2 ways to solve the error
The error was quite simple, but it took me a while to spot:
In the constructor I was injecting an IHttpClientFactory:
public GenderizeService(IHttpClientFactory httpClientFactory)
while in the host definition I was registering a typed HttpClient bound to that specific class: builder.Services.AddHttpClient<IGenderizeService, GenderizeService>(...). A typed-client registration builds GenderizeService by passing an HttpClient instance directly to its constructor; since the only constructor accepts an IHttpClientFactory, that HttpClient argument is "extraneous", and the DI container gives up with the error above.
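The first way to solve the error is to embrace the typed client and inject HttpClient directly. A minimal sketch of the updated GenderizeService (the registration in Program stays the same):

public class GenderizeService : IGenderizeService
{
    private readonly HttpClient _httpClient;

    // the typed client registered with AddHttpClient<IGenderizeService, GenderizeService>
    // is injected here, already configured with the BaseAddress defined in Program
    public GenderizeService(HttpClient httpClient)
    {
        _httpClient = httpClient;
    }

    public async Task<GenderProbability> GetGenderProbabiliy(string name)
    {
        var response = await _httpClient.GetAsync($"?name={name}");

        var result = await response.Content.ReadFromJsonAsync<GenderProbability>();

        return result;
    }
}

With this approach, you can also drop the AddScoped registration, since AddHttpClient<TClient, TImplementation> already registers the service for you.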
We no longer need to call _httpClientFactory.CreateClient because the injected instance of HttpClient is already customized with the settings we’ve defined at Startup.
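The second way is to keep injecting IHttpClientFactory and register a named client instead of a typed one. A sketch, assuming a client name of "genderize" (any name works, as long as the registration and the CreateClient call use the same one):

// in Program: register the service and a named client with the base address
builder.Services.AddScoped<IGenderizeService, GenderizeService>();
builder.Services.AddHttpClient("genderize",
    client => client.BaseAddress = new Uri("https://api.genderize.io/")
);

// in GenderizeService: keep the IHttpClientFactory constructor and resolve the named client
var httpClient = _httpClientFactory.CreateClient("genderize");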
Further readings
I’ve briefly talked about HttpClientFactory in one article of my C# tips series:
Cyberattacks aren’t slowing down—they’re getting bolder and smarter. From phishing scams to ransomware outbreaks, the number of incidents has doubled or even tripled year over year. In today’s hybrid, multi-vendor IT landscape, protecting your organization’s digital assets requires choosing the top XDR vendor that can see and stop threats across every possible entry point.
Over the last five years, XDR (Extended Detection and Response) has emerged as one of the most promising cybersecurity innovations. Leading IT analysts agree: XDR solutions will play a central role in the future of cyber defense. But not all XDR platforms are created equal. Success depends on how well an XDR vendor integrates Endpoint Protection Platforms (EPP) and Endpoint Detection and Response (EDR) to detect, analyze, and neutralize threats in real time.
This guide will explain what makes a great XDR vendor and how Seqrite XDR compares to industry benchmarks. It also includes a practical checklist for confidently evaluating your next security investment.
Why Choosing the Right XDR Vendor Matters
Your XDR platform isn’t just another security tool; it’s the nerve center of your threat detection and response strategy. The best solutions act as a central brain, collecting security telemetry from:
Endpoints
Networks
Firewalls
Email
Identity systems
DNS
They don’t just collect this data: they correlate it intelligently, filter out the noise, and give your security team actionable insights to respond faster.
According to industry reports, over 80% of IT and cybersecurity professionals are increasing budgets for threat detection and response. If you choose the wrong vendor, you risk fragmented visibility, alert fatigue, and missed attacks.
Key Capabilities Every Top XDR Vendor Should Offer
When shortlisting top XDR vendors, here’s what to look for:
Advanced Threat Detection – Identify sophisticated, multi-layer attack patterns that bypass traditional tools.
Risk-Based Prioritization – Assign scores (1–1000) so you know which threats truly matter.
Unified Visibility – A centralized console to eliminate security silos.
Integration Flexibility – Native and third-party integrations to protect existing investments.
Automation & Orchestration – Automate repetitive workflows to respond in seconds, not hours.
MITRE ATT&CK Mapping – Know exactly which attacker tactics and techniques you can detect.
Remember, it’s the integration of EPP and EDR that makes or breaks an XDR solution’s effectiveness.
Your Unified Detection & Response Checklist
Use this checklist to compare vendors on a like-for-like basis:
Full telemetry coverage: Endpoints, networks, firewalls, email, identity, and DNS.
Native integration strength: Smooth backend-to-frontend integration for consistent coverage.
Real-time threat correlation: Remove false positives, detect real attacks faster.
Proactive security posture: Shift from reactive to predictive threat hunting.
MITRE ATT&CK alignment: Validate protection capabilities against industry-recognized standards.
Why Automation Is the Game-Changer
The top XDR vendors go beyond detection: they optimize your entire security operation. Automated playbooks can instantly execute containment actions when a threat is detected. Intelligent alert grouping cuts down on noise, preventing analyst burnout.
Automation isn’t just about speed; it’s about cost savings. A report by IBM Security shows that organizations with full automation save over ₹31 crore annually and detect/respond to breaches much faster than those relying on manual processes.
The Seqrite XDR Advantage
Seqrite XDR combines advanced detection, rich telemetry, and AI-driven automation into a single, unified platform. It offers:
Seamless integration with Seqrite Endpoint Protection (EPP), Seqrite Endpoint Detection & Response (EDR), and third-party telemetry sources.
MITRE ATT&CK-aligned visibility to stay ahead of attackers.
Automated playbooks to slash response times and reduce manual workload.
Unified console for complete visibility across your IT ecosystem.
GenAI-powered SIA (Seqrite Intelligent Assistant) – Your AI-Powered Virtual Security Analyst. SIA offers predefined prompts and conversational access to incident and alert data, streamlining investigations and making it faster for analysts to understand, prioritize, and respond to threats.
In a market crowded with XDR solutions, Seqrite delivers a future-ready, AI-augmented platform designed for today’s threats and tomorrow’s unknowns.
If you’re evaluating your next security investment, start with a vendor who understands the evolving threat landscape and backs it up with a platform built for speed, intelligence, and resilience.
Good unit tests have some properties in common: they are Fast, Independent, Repeatable, Self-validating, and Thorough. In a word: FIRST!
FIRST is an acronym that you should always remember if you want to write clean and extensible tests.
This acronym tells us that Unit Tests should be Fast, Independent, Repeatable, Self-validating, and Thorough.
Fast
You should not create tests that require a long time for setup and start-up: ideally, you should be able to run the whole test suite in under a minute.
If your unit tests take too long to run, something is probably wrong; there are many possibilities:
You’re trying to access remote sources (such as real APIs, Databases, and so on): you should mock those dependencies to make tests faster and to avoid accessing real resources. If you need real data, consider creating integration/e2e tests instead.
Your system under test is too complex to build: too many dependencies? DIT (Depth of Inheritance Tree) value too high?
The method under test does too many things. You should consider splitting it into separate, independent methods, and let the caller orchestrate the method invocations as necessary.
Independent (or Isolated)
Test methods should be independent of one another.
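Picture a sketch like this, where two tests share a static field (MyObject and its Name property are just illustrative):

private static MyObject myObj;

[Fact]
void Test1()
{
    myObj = new MyObject { Name = "Davide" };
    Assert.NotNull(myObj);
}

[Fact]
void Test2()
{
    // this only works if Test1 has already run and initialized myObj
    Assert.Equal("Davide", myObj.Name);
}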
Here, to have Test2 working correctly, Test1 must run before it, otherwise myObj would be null. There’s a dependency between Test1 and Test2.
How to avoid it? Create new instances for every test, whether through custom factory methods or in the setup phase, and remember to reset the mocks as well.
Repeatable
Unit Tests should be repeatable. This means that wherever and whenever you run them, they should behave correctly.
So you should remove any dependency on the file system, current date, and so on.
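As an example, consider a sketch like this, where the test formats DateTime.Now and compares it against a hard-coded date:

[Fact]
void TestDate_DoIt()
{
    DateTime d = DateTime.Now;
    string dateAsString = d.ToString("yyyy-MM-dd");

    // passes only when run on 2022-07-19
    Assert.Equal("2022-07-19", dateAsString);
}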
This test is strictly bound to the current date: if I run it again in a month, it will fail.
We should instead remove that dependency and use dummy values or mocks.
[Fact]
void TestDate_DoIt()
{
    DateTime d = new DateTime(2022, 7, 19);
    string dateAsString = d.ToString("yyyy-MM-dd");

    Assert.Equal("2022-07-19", dateAsString);
}
There are many ways to inject DateTime (and other similar dependencies) with .NET. I’ve listed some of them in this article: “3 ways to inject DateTime and test it”.
Self-validating
Self-validating means that a test should perform operations and programmatically check for the result.
For instance, if you’re testing that you’ve written something on a file, the test itself is in charge of checking that it worked correctly. No manual operations should be done.
Also, tests should provide explicit feedback: a test either passes or fails; no in-between.
Thorough
Unit Tests should be thorough in that they must validate both the happy paths and the failing paths.
So you should test your functions with valid inputs and with invalid inputs.
You should also validate what happens if an exception is thrown while executing the path: are you handling errors correctly?
Have a look at this class, with a single, simple, method:
public class ItemsService
{
    readonly IItemsRepository _itemsRepo;

    public ItemsService(IItemsRepository itemsRepo)
    {
        _itemsRepo = itemsRepo;
    }

    public IEnumerable<Item> GetItemsByCategory(string category, int maxItems)
    {
        var allItems = _itemsRepo.GetItems();

        return allItems
            .Where(i => i.Category == category)
            .Take(maxItems);
    }
}
Which tests should you write for GetItemsByCategory?
I can think of these:
what if category is null or empty?
what if maxItems is less than 0?
what if allItems is null?
what if one of the items inside allItems is null?
what if _itemsRepo.GetItems() throws an exception?
what if _itemsRepo is null?
As you can see, even for a trivial method like this you should write a lot of tests, to ensure that you haven’t missed anything.
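For instance, a couple of those cases could look like this sketch, which assumes Item exposes a settable Category property and uses a hand-rolled stub instead of a mocking library:

public class StubItemsRepository : IItemsRepository
{
    private readonly IEnumerable<Item> _items;
    public StubItemsRepository(IEnumerable<Item> items) => _items = items;
    public IEnumerable<Item> GetItems() => _items;
}

[Fact]
void GetItemsByCategory_WithMatchingItems_ReturnsAtMostMaxItems()
{
    var items = new List<Item>
    {
        new Item { Category = "books" },
        new Item { Category = "books" },
        new Item { Category = "games" }
    };
    var sut = new ItemsService(new StubItemsRepository(items));

    var result = sut.GetItemsByCategory("books", 1);

    Assert.Single(result);
}

[Fact]
void GetItemsByCategory_WhenRepositoryReturnsNull_Throws()
{
    var sut = new ItemsService(new StubItemsRepository(null));

    // Where() rejects a null source, so this documents the current (unhandled) behavior
    Assert.Throws<ArgumentNullException>(() => sut.GetItemsByCategory("books", 5).ToList());
}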
Conclusion
F.I.R.S.T. is a good way to remember the properties of a good unit test suite.
In a significant move to bolster cybersecurity in India’s financial ecosystem, the Reserve Bank of India (RBI) has underscored the urgent need for regulated entities—especially banks—to adopt Zero Trust approaches as part of a broader strategy to curb cyber fraud. In its latest Financial Stability Report (June 2025), RBI highlighted Zero Trust as a foundational pillar for risk-based supervision, AI-aware defenses, and proactive cyber risk management.
The directive comes amid growing concerns about the digital attack surface, vendor lock-in risks, and the systemic threats posed by overreliance on a few IT infrastructure providers. RBI has clarified that traditional perimeter-based security is no longer enough, and financial institutions must transition to continuous verification models where no user or device is inherently trusted.
What is Zero Trust?
Zero Trust is a modern security framework built on the principle: “Never trust, always verify.”
Unlike legacy models that grant broad access to anyone inside the network, Zero Trust requires every user, device, and application to be verified continuously, regardless of location—inside or outside the organization’s perimeter.
Key principles of Zero Trust include:
Least-privilege access: Users only get access to what they need—nothing more.
Micro-segmentation: Breaking down networks and applications into smaller zones to isolate threats.
Continuous verification: Access is granted based on multiple dynamic factors, including identity, device posture, location, time, and behavior.
Assume breach: Security models assume threats are already inside the network and act accordingly.
In short, Zero Trust ensures that access is never implicit, and every request is assessed with context and caution.
Seqrite ZTNA: Zero Trust in Action for Indian Banking
To help banks and financial institutions meet RBI’s Zero Trust directive, Seqrite ZTNA (Zero Trust Network Access) offers a modern, scalable, and India-ready solution that aligns seamlessly with RBI’s vision.
Key Capabilities of Seqrite ZTNA
Granular access control: allows access only to specific applications based on role, user identity, device health, and risk level, eliminating broad network exposure.
Continuous risk-based verification: each access request is evaluated in real time using contextual signals like location, device posture, login time, and behavior.
No VPN dependency: removes the risks of traditional VPNs that grant excessive access. Seqrite ZTNA gives just-in-time access to authorized resources.
Built-in analytics and audit readiness: detailed logs of every session help organizations meet RBI’s incident reporting and risk-based supervision requirements.
Easy integration with identity systems: works seamlessly with Azure AD, Google Workspace, and other Identity Providers to enforce secure authentication.
Supports hybrid and remote workforces: agent-based or agent-less deployment suits internal employees, third-party vendors, and remote users.
How Seqrite ZTNA Supports RBI’s Zero Trust Mandate
RBI’s recommendations aren’t just about better firewalls but about shifting the cybersecurity posture entirely. Seqrite ZTNA helps financial institutions adopt this shift with:
Risk-Based Supervision Alignment
Policies can be tailored based on user risk, job function, device posture, or geography.
Enables graded monitoring, as RBI emphasizes, with intelligent access decisions based on risk level.
CART and AI-Aware Defenses
Behavior analytics and real-time monitoring help institutions detect anomalies and conduct Continuous Assessment-Based Red Teaming (CART) simulations.
Uniform Incident Reporting
Seqrite’s detailed session logs and access histories simplify compliance with RBI’s call for standardized incident reporting frameworks.
Vendor Lock-In Mitigation
Unlike global cloud-only vendors, Seqrite ZTNA is designed with data sovereignty and local compliance in mind, offering full control to Indian enterprises.
Sample Use Case: A Mid-Sized Regional Bank
Challenge: The bank must secure access to its core banking applications for remote employees and third-party vendors without relying on VPNs.
With Seqrite ZTNA:
Users access only assigned applications, not the entire network.
Device posture is verified before every session.
Behavior is monitored continuously to detect anomalies.
Detailed logs assist compliance with RBI audits.
Risk-based policies automatically adjust based on context (e.g., denying access from unknown locations or outdated devices).
Result: A Zero Trust-aligned access model with reduced attack surface, better visibility, and continuous compliance readiness.
Conclusion: Future-Proofing Banking Security with Zero Trust
RBI’s directive isn’t just another compliance checklist; it’s a wake-up call. As India’s financial institutions expand digitally, adopting Zero Trust is essential for staying resilient, secure, and compliant.
Seqrite ZTNA empowers banks to implement Zero Trust in a practical, scalable way aligned with national cybersecurity priorities. With granular access control, continuous monitoring, and compliance-ready visibility, Seqrite ZTNA is the right step forward in securing India’s digital financial infrastructure.
In the ever-evolving cybersecurity landscape, attackers constantly seek new ways to bypass traditional defences. One of the latest and most insidious methods involves using Scalable Vector Graphics (SVG)—a file format typically associated with clean, scalable images for websites and applications. But beneath their seemingly harmless appearance, SVGs can harbour threatening scripts capable of executing sophisticated phishing attacks.
This blog explores how SVGs are weaponized, why they often evade detection, and what organizations can do to protect themselves.
SVGs: More Than Just Images
SVG files differ fundamentally from standard image formats like JPEG or PNG. Instead of storing pixel data, SVGs use XML-based code to define vector paths, shapes, and text. This makes them ideal for responsive design, as they scale without losing quality. However, this same structure allows SVGs to contain embedded JavaScript, which can execute when the file is opened in a browser—something that happens by default on many Windows systems.
Delivery
Email Attachments: Sent via spear-phishing emails with convincing subject lines and sender impersonation.
Cloud Storage Links: Shared through Dropbox, Google Drive, OneDrive, etc., often bypassing email filters.
Fig. 1: Attack chain of SVG campaign
The image illustrates the SVG phishing attack chain in four distinct stages: it begins with an email containing a seemingly harmless SVG attachment, which, when opened, triggers JavaScript execution in the browser, ultimately redirecting the user to a phishing site designed to steal credentials.
How the attack works:
When a target opens an SVG attachment from an email, the file typically launches in their default web browser (unless a specific application is set to handle SVG files), allowing any embedded scripts to execute immediately.
Fig. 2: Phishing email of SVG campaign
Attackers commonly send phishing emails with deceptive subject lines like “Reminder for your Scheduled Event 7212025.msg” or “Meeting-Reminder-7152025.msg”, paired with innocuous-looking attachments named “Upcoming Meeting.svg” or “Your-to-do-List.svg” to avoid raising suspicion. Once opened, the embedded JavaScript within the SVG file silently redirects the victim to a phishing site that closely mimics trusted services like Microsoft 365 or Google Workspace, as shown in the figures below.
Fig. 3: Malicious SVG code
In the analyzed SVG sample, the attacker embeds a <script> tag within the SVG, using a CDATA section to hide malicious logic. The code includes a long hex-encoded string (Y) and a short XOR key (q), which decodes into a JavaScript payload when processed. This decoded payload is then executed using window.location = 'javascript:' + v;, effectively redirecting the victim to a phishing site upon opening the file. An unused email address variable (g.rume@mse-filterpressen.de) is likely a decoy or part of targeted delivery.
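For analysis purposes, the decoding scheme described above boils down to something like this C# sketch (hexBlob and key are hypothetical inputs; hex-decode the blob, then XOR it with the short repeating key):

static string DecodePayload(string hexBlob, string key)
{
    // hex string -> raw bytes
    var bytes = new byte[hexBlob.Length / 2];
    for (int i = 0; i < bytes.Length; i++)
        bytes[i] = Convert.ToByte(hexBlob.Substring(i * 2, 2), 16);

    // XOR each byte with the repeating key to recover the JavaScript payload
    var chars = new char[bytes.Length];
    for (int i = 0; i < bytes.Length; i++)
        chars[i] = (char)(bytes[i] ^ key[i % key.Length]);

    return new string(chars);
}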
Upon decryption, we found the command-and-control (C2) phishing link:
hxxps://hju[.]yxfbynit[.]es/koRfAEHVFeQZ!bM9
Fig. 4: Cloudflare CAPTCHA gate
The link directs to a phishing site protected by a Cloudflare CAPTCHA gate. After you check the box to verify you’re human, you’re redirected to a malicious page controlled by the attackers.
Fig. 5: Office 365 login form
This page embeds a genuine-looking Office 365 login form, allowing the phishing group to capture and validate your email and password credentials simultaneously.
Conclusion: Staying Ahead of SVG-Based Threats
As attackers continue to innovate, organizations must recognize the hidden risks in seemingly benign file formats like SVG. Security teams should:
Implement deep content inspection for SVG files.
Disable automatic browser rendering of SVGs from untrusted sources.
Educate employees about the risks of opening unfamiliar attachments.
Monitor for unusual redirects and script activity in email and web traffic.
SVGs may be powerful tools for developers, but in the wrong hands, they can become potent weapons for cybercriminals. Awareness and proactive defense are key to staying ahead of this emerging threat.
“Move fast and break things” has graduated from a startup mantra to an industry-wide gospel. We’re told to ship now and ask questions later, to launch minimum viable products and iterate indefinitely. But in the race to be first, we risk forgetting what it means to be good. What if the relentless pursuit of ‘now’ comes with higher reputational consequences than we realise?
I have worked for a lot of other businesses before. Contract, Industry, Agency, you name it… over the last 17 years I’ve seen the decisions that get made, many of them mistakes, from junior level through to senior leadership. Often I would find myself wondering, ‘is this how it has to be?’.
Businesses I worked for would cut corners everywhere, and I don’t mean slightly under-deliver to preserve margin, I mean a perpetual ethos of poor performance was not just accepted, but cultivated through indifference and a lack of accountability.
Motivated by this behaviour, I wanted to start something with integrity, something a bit more human, something where value is determined by quality-delivered, not just cash-extracted.
Although I am introverted by nature, and generally more interested in craft than networking – I’ve been fortunate enough to build partnerships with some of the largest companies and brands in the world.
The projects we work on are usually for brands with a substantial audience, which require a holistic approach to design and development. We are particularly proud of our work in the entertainment sector, which we recently decided was a logical niche for us.
Our Ethos
Our guiding philosophy is simple:
Designed with purpose, built to perform.
In the entertainment space, a digital touchpoint is more than just a website or an app, it’s a gateway to an experience. It has to handle crushing traffic spikes for ticket or merchandise drops, convey the energy of an event (usually using highly visual, large content formats like video/audio), be just as performant on mobile as it is on desktop, and function flawlessly under pressure.
In this context, creativity without a clear purpose is just noise. A beautiful design that collapses under load isn’t just a failure; it’s a broken promise to thousands of fans. This is why we are laser-focused on creativity and performance being complementary forces, rather than adversaries.
To design with purpose is to understand that every choice must be anchored in strategy. It means we don’t just ask “what does it look like?” but “what is it for?”. A critical part of our ethos involves avoiding common industry pitfalls.
I don’t know how loud this needs to be for people to hear me, but you should never build platform first.
If you’re advising clients that they need a WordPress website because that’s the only tool you know, you’re doing something wrong. The same is true of any solution that you deliver.
There is a right way and 17 wrong ways to do everything.
This is why we build for performance by treating speed, stability, and scalability as core features, not afterthoughts. It’s about architecting systems that are as resilient as they are beautiful. Working with the correct tech stack on every project is important. The user experience is only as good as the infrastructure that supports it.
That said, experiential design is an incredibly important craft, and at the front edge of this are libraries like GSAP, Lenis, and of course WebGL/Three.js. Over the last few years, we’ve been increasing the amount of these features across our work, thankfully to much delight.
liquidGL
Recently we launched a library you might like to try called liquidGL, an attempt to bring Apple’s new Liquid Glass aesthetic to the web. It’s a lot trickier in the browser, and there are still some things to work out in BETA, but it’s available now on GitHub and of course, it’s open source.
particlesGL
In addition to liquidGL, we recently launched particlesGL, a library for creating truly unique particle effects in the browser, complete with 6 core demos and support for all media formats including 3D models, video, audio, images and text. Available on GitHub and free for personal use.
glitchGL
Following on from particlesGL is glitchGL, a library for creating pixelation, CRT and glitch effects in the browser. With more than 30 custom properties and a configurable global interaction system, which can be applied to multiple elements. Also available on GitHub and free for personal use.
We post mainly on LinkedIn, so if you’re interested in libraries like these, give us a follow so you don’t miss new releases and updates.
The result is a suite of market-specific solutions that consistently deliver: web, game development, mobile app, and e-commerce; all made possible because we know the culture and the owners, not just the brief. This is why I would encourage other creatives to niche down into an industry they understand, and to see their clients as partners rather than targets – you might think this cynicism is rare but I can assure you it is not.
Quality relationships take time, but they’re the foundation of quality work.
OFFLIMITS
Sometimes the best choices you make on a project are the ones that no one sees.
For OFFLIMITS Festival, the UAE’s first open format music festival featuring Ed Sheeran, Kaiser Chiefs, OneRepublic, and more, one of the most critical aspects was the ability to serve large content formats performantly, at scale.
Whilst Webflow was the right platform for the core requirements, we decided to forgo several of Webflow’s own features, including their forms setup and asset handling. We opted to use Cloudflare R2 to serve videos and audio, giving us granular control over caching policies and delivery. One of many hidden changes which were invisible to users, but critical to performance. Taking time for proper decisions, even boring ones, is what separates experiences that deliver from those that merely look nice.
PRIMAL™
PRIMAL™ started as a sample pack library focused on raw high quality sounds. When they wanted to expand into audio plugins, we spent eighteen months developing custom audio plugins and architecting a comprehensive ecosystem from scratch, because comprehensive solutions create lasting value.
The result is something we’re particularly proud of, with automatic account creation, login, subscription creation, and license generation happening from a single click. This may sound simple on the surface, but it required months of careful planning and development across JUCE/C++, Stripe, Clerk, React, Cloudflare, and Mailchimp.
More information on this repositioning will be available late 2025.
The Integrated Pipeline
Our philosophy of Quality Over Speed only works if your team is structured to support it. Common approaches separate concerns like design and development. In large teams this is seen as somewhat essential: a project moves along a conveyor belt, handed off from one silo to the next.
Having a holistic approach allows you to create deeply connected digital ecosystems.
When the same team that designs the brand identity also builds the mobile app and architects the backend, you get a level of coherence that simply isn’t possible otherwise. This leads to better outcomes: lower operational costs for our clients, less patchwork for us, higher conversion rates, and a superior customer experience that feels seamless and intentional.
Final Thoughts
Choosing craft over haste is not an indulgence, it’s a strategic decision we make every day.
It’s not that we are perfect, we’re not. It’s that we’d rather aim for perfection and miss, than fail to even try and settle for ‘good enough’. In a digital landscape saturated with forgettable experiences, perfectionism is what cuts through the noise.
It’s what turns a user into a fan and a brand into a legacy.
Our work has been fortunate enough to win awards, but the real validation comes from seeing our clients thrive on the back of the extra care and attention to detail that goes into a Quality Over Speed mindset. By building platforms that are purposeful, performant, and deeply integrated, we deliver lasting value.
The goal isn’t just to launch something, it’s to launch something right.
In today’s hyper-connected world, cyberattacks are no longer just a technical issue; they are a serious business risk. From ransomware shutting down operations to data breaches costing millions, the threat landscape is constantly evolving. According to IBM’s 2024 Cost of a Data Breach Report, the global average cost of a data breach has reached 4.45 million dollars, marking a 15 percent increase over the past three years. As a result, more organizations are turning to EDR cybersecurity solutions.
EDR offers real-time monitoring, threat detection, and rapid incident response to protect endpoints like desktops and laptops from malicious activity. These capabilities are critical for minimizing the impact of attacks and maintaining operational resilience. Below are the top benefits of implementing EDR cybersecurity in your organization.
Top EDR Cybersecurity Benefits
1. Improved Visibility and Threat Awareness
In a modern enterprise, visibility across all endpoints is crucial. EDR offers a comprehensive lens into every device, user activity, and system process within your network.
Continuous Endpoint Monitoring
EDR agents installed on endpoints continuously collect data related to file access, process execution, login attempts, and more. This enables 24/7 monitoring of activity across desktops and mobile devices, regardless of location.
Behavioral Analytics
EDR solutions use machine learning to understand normal behavior across systems and users. When anomalies occur—like unusual login patterns or unexpected file transfers—they are flagged for investigation.
2. Faster Threat Response and Containment
In cybersecurity, response speed is critical. Delayed action can lead to data loss, system compromise, and reputational damage.
Real-Time Containment
EDR solutions enable security teams to isolate infected endpoints instantly, preventing malware from spreading laterally through the network. Even if the endpoint is rebooted or disconnected, containment policies remain active.
Automated Response Workflows
EDR systems support predefined rules for automatic responses such as:
Killing malicious processes
Quarantining suspicious files
Blocking communication with known malicious IPs
Disconnecting compromised endpoints from the network
Protection for Offline Devices
Remote endpoints or those operating without an internet connection remain protected. Security policies continue to function, ensuring consistent enforcement even in disconnected environments.
According to IDC’s 2024 report on endpoint security, companies with automated EDR solutions reduced their average incident containment time by 60 percent.
3. Regulatory Compliance and Reporting
Compliance is no longer optional—especially for organizations in healthcare, finance, government, and other regulated sectors. EDR tools help meet these requirements.
Support for Compliance Standards
EDR solutions help organizations meet GDPR, HIPAA, PCI-DSS, and the Indian DPDP Act by:
Enforcing data encryption
Applying strict access controls
Maintaining audit logs of all system and user activities
Enabling rapid response and documentation of security incidents
Simplified Audit Readiness
Automated report generation and log retention ensure that organizations can quickly present compliance evidence during audits.
Proactive Compliance Monitoring
EDR platforms identify areas of non-compliance and provide recommendations to fix them before regulatory issues arise.
HIPAA, for instance, requires logs to be retained for at least six years. EDR solutions ensure this requirement is met with minimal manual intervention.
4. Cost Efficiency and Operational Gains
Strong cybersecurity is not just about prevention; it is also about operational and financial efficiency. EDR helps reduce the total cost of ownership of security infrastructure.
Lower Incident Management Costs
According to Deloitte India’s Cybersecurity Report 2024, companies using EDR reported an average financial loss of 42 million rupees per attack. In contrast, companies without EDR reported average losses of 253 million rupees.
Reduced Business Disruption
EDR solutions enable security teams to isolate only affected endpoints rather than taking entire systems offline. This minimizes downtime and maintains business continuity.
More Efficient Security Teams
Security analysts often spend hours manually investigating each alert. EDR platforms automate much of this work by providing instant analysis, root cause identification, and guided response steps. This frees up time for more strategic tasks like threat hunting and policy improvement.
The Ponemon Institute’s 2024 report notes that organizations using EDR reduced average investigation time per incident by 30 percent.
5. Protection Against Advanced and Evolving Threats
Cyberthreats are evolving rapidly, and many now bypass traditional defenses. EDR solutions are built to detect and respond to these sophisticated attacks.
Detection of Unknown Threats
Unlike traditional antivirus software, EDR uses heuristic and behavioral analysis to identify zero-day attacks and malware that do not yet have known signatures.
Defense Against Advanced Persistent Threats (APTs)
EDR systems correlate seemingly minor events, such as login anomalies, privilege escalations, and file modifications, into a single threat narrative that identifies stealthy attacks.
Integration with Threat Intelligence
EDR platforms often incorporate global and local threat feeds, helping organizations respond to emerging threats faster and more effectively.
Verizon’s 2024 Data Breach Investigations Report found that 70 percent of successful breaches involved endpoints, highlighting the need for more advanced protection mechanisms like EDR.
Why Choose Seqrite EDR
Seqrite EDR cybersecurity is designed to meet the needs of today’s complex and fast-paced enterprise environments. It provides centralized control, powerful analytics, and advanced response automation all in a user-friendly package.
Unified dashboard for complete endpoint visibility
Seamless integration with existing IT infrastructure
Resilient protection for remote and offline devices
Scalability for growing enterprise needs
Seqrite EDR is especially well-suited for industries such as finance, healthcare, manufacturing, and government, where both threat risk and compliance pressure are high.
Conclusion
EDR cybersecurity solutions have become a strategic necessity for organizations of all sizes. They offer comprehensive protection by detecting, analyzing, and responding to threats across all endpoints in real time. More importantly, they help reduce incident costs, improve compliance, and empower security teams with automation and insight.
Seqrite Endpoint Detection and Response provides a powerful, cost-effective way to future-proof your organization’s cybersecurity. By adopting Seqrite EDR, you can strengthen your cyber defenses, reduce operational risk, and ensure compliance with evolving regulations.
To learn more, visit www.seqrite.com and explore how Seqrite EDR can support your business in the age of intelligent cyber threats.
Integration Tests are incredibly useful: a few Integration Tests are often more useful than lots of Unit Tests. Let’s learn some advanced capabilities of WebApplicationFactory.
In a previous article, we learned a quick way to create Integration Tests for ASP.NET API by using WebApplicationFactory. That was a nice introductory article. But now we will delve into more complex topics and examples.
In my opinion, a few Integration Tests and just the necessary number of Unit tests are better than hundreds of Unit Tests and no Integration Tests at all. In general, the Testing Diamond should be preferred over the Testing Pyramid (well, in most cases).
In this article, we are going to create advanced Integration Tests by defining custom application settings, customizing dependencies to be used only during tests, defining custom logging, and performing complex operations in our tests.
For the sake of this article, I created a sample API application that exposes one single endpoint whose purpose is to retrieve some info about the URL passed in the query string. For example,
GET /SocialPostLink?uri=https%3A%2F%2Ftwitter.com%2FBelloneDavide%2Fstatus%2F1682305491785973760
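The endpoint returns a payload shaped roughly like this (a sketch; the values are illustrative):

{
  "instanceName": "Real",
  "info": {
    "socialNetworkName": "Twitter",
    "sourceUrl": "https://twitter.com/BelloneDavide/status/1682305491785973760",
    "username": "BelloneDavide",
    "id": "1682305491785973760"
  }
}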
For completeness, instanceName is a value coming from the appsettings.json file, while info is an object that holds some info about the social post URL passed as input.
Internally, the code is using the Chain of Responsibility pattern: there is a handler that “knows” if it can handle a specific URL; if so, it just elaborates the input; otherwise, it calls the next handler.
There is also a Factory that builds the chain, and finally, a Service that instantiates the Factory and then resolves the dependencies.
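Just to give an idea of the shape (the names here are illustrative, not necessarily the article's actual types), a handler in such a chain could look like this:

public abstract class SocialLinkHandler
{
    private SocialLinkHandler _next;

    public SocialLinkHandler SetNext(SocialLinkHandler next)
    {
        _next = next;
        return next;
    }

    // parse the URL if this handler recognizes it, otherwise pass it along the chain
    public LinkInfo GetLinkInfo(Uri uri) =>
        CanHandle(uri) ? Parse(uri) : _next?.GetLinkInfo(uri);

    protected abstract bool CanHandle(Uri uri);
    protected abstract LinkInfo Parse(Uri uri);
}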
As you can see, this solution can become complex. We could run lots of Unit Tests to validate that the Chain of Responsibility works as expected. We could even write a Unit Test suite for the Factory.
But, at the end of the day, we don’t really care about the internal structure of the project: as long as it works as expected, we could even use a huge switch block (clearly, with all the consequences of this choice). So, let’s write some Integration Tests.
How to create a custom WebApplicationFactory in .NET
When creating Integration Tests for .NET APIs you have to instantiate a new instance of WebApplicationFactory, a class coming from the Microsoft.AspNetCore.Mvc.Testing NuGet Package.
Since we are going to define it once and reuse it across all the tests, let’s create a new class that extends WebApplicationFactory, and add some custom behavior to it.
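A minimal skeleton could look like this (the class name is the one used later in the tests; the behavior gets filled in step by step):

public class IntegrationTestWebApplicationFactory : WebApplicationFactory<Program>
{
    // custom configuration, services, and logging will be added here
}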
Let’s focus on the Program class: as you can see, the WebApplicationFactory class requires an entry point. Generally speaking, it’s the Program class of our application.
If you hover on WebApplicationFactory<Program> and hit CTRL+. on Visual Studio, the autocomplete proposes two alternatives: one is the Program class defined in your APIs, while the other one is the Program class defined in Microsoft.VisualStudio.TestPlatform.TestHost. Choose the one for your API application! The WebApplicationFactory class will then instantiate your API following the instructions defined in your Program class, thus resolving all the dependencies and configurations as if you were running your application locally.
What to do if you don’t have the Program class? If you use top-level statements, there is no explicit Program class, so you cannot reference it directly. The workaround is to create a new, empty partial class named Program: this way, you have a class name that can be used to reference the API definition:
public partial class Program { }
Here you can override some definitions of the WebHost to be created by calling ConfigureWebHost:
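Inside the custom factory, the override is just an empty hook for now; a sketch (we will enrich it in the next sections):

protected override void ConfigureWebHost(IWebHostBuilder builder)
{
    // configuration, service, and logging customizations for the in-memory host go here
}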
How to use WebApplicationFactory in your NUnit tests
It’s time to start working on some real Integration Tests!
As we said before, we have only one HTTP endpoint, defined like this:
private readonly ISocialLinkParser _parser;
private readonly ILogger<SocialPostLinkController> _logger;
private readonly IConfiguration _config;
public SocialPostLinkController(ISocialLinkParser parser, ILogger<SocialPostLinkController> logger, IConfiguration config)
{
_parser = parser;
_logger = logger;
_config = config;
}
[HttpGet]
public IActionResult Get([FromQuery] string uri)
{
_logger.LogInformation("Received uri {Uri}", uri);
if (Uri.TryCreate(uri, new UriCreationOptions { }, out Uri _uri))
{
var linkInfo = _parser.GetLinkInfo(_uri);
_logger.LogInformation("Uri {Uri} is of type {Type}", uri, linkInfo.SocialNetworkName);
var instance = new Instance
{
InstanceName = _config.GetValue<string>("InstanceName"),
Info = linkInfo
};
return Ok(instance);
}
else {
_logger.LogWarning("Uri {Uri} is not a valid Uri", uri);
return BadRequest();
}
}
We have 2 flows to validate:
If the input URI is valid, the HTTP Status code should be 200;
If the input URI is invalid, the HTTP Status code should be 400;
We could simply write Unit Tests for this purpose, but let me write Integration Tests instead.
First of all, we have to create a test class and create a new instance of IntegrationTestWebApplicationFactory. Then, we will create a new HttpClient every time a test is run that will automatically include all the services and configurations defined in the API application.
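A sketch of that setup with NUnit (the test class name is illustrative; the factory is the IntegrationTestWebApplicationFactory defined earlier):

public class ApiIntegrationTests : IDisposable
{
    private readonly IntegrationTestWebApplicationFactory _factory;
    private HttpClient _client;

    public ApiIntegrationTests()
    {
        _factory = new IntegrationTestWebApplicationFactory();
    }

    [SetUp]
    public void Setup()
    {
        // a fresh HttpClient for every test, pointing to the in-memory API
        _client = _factory.CreateClient();
    }

    public void Dispose() => _factory.Dispose();
}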
As you can see, the test class implements IDisposable so that we can call Dispose() on the IntegrationTestWebApplicationFactory instance.
From now on, we can use the _client instance to work with the in-memory instance of the API.
One of the best parts of it is that, since it’s an in-memory instance, we can even debug our API application. When you create a test and put a breakpoint in the production code, you can hit it and see the actual values as if you were running the application in a browser.
Now that we have the instance of HttpClient, we can create two tests to ensure that the two cases we defined before are valid. If the input string is a valid URI, return 200:
[Test]
public async Task Should_ReturnHttp200_When_UrlIsValid()
{
string inputUrl = "https://twitter.com/BelloneDavide/status/1682305491785973760";
var result = await _client.GetAsync($"SocialPostLink?uri={inputUrl}");
Assert.That(result.StatusCode, Is.EqualTo(HttpStatusCode.OK));
}
Otherwise, return Bad Request:
[Test]
public async Task Should_ReturnBadRequest_When_UrlIsNotValid()
{
string inputUrl = "invalid-url";
var result = await _client.GetAsync($"/SocialPostLink?uri={inputUrl}");
Assert.That(result.StatusCode, Is.EqualTo(HttpStatusCode.BadRequest));
}
How to create test-specific configurations using InMemoryCollection
WebApplicationFactory is highly configurable thanks to the ConfigureWebHost method. For instance, you can customize the settings injected into your services.
Usually, you want to rely on the exact same configurations defined in your appsettings.json file to ensure that the system behaves correctly with the “real” configurations.
For example, I defined the key “InstanceName” in the appsettings.json file; its value is “Real”, and it is used to create the returned Instance object. We can validate that the value is read from that source thanks to this test:
[Test]
public async Task Should_ReadInstanceNameFromSettings()
{
string inputUrl = "https://twitter.com/BelloneDavide/status/1682305491785973760";
var result = await _client.GetFromJsonAsync<Instance>($"/SocialPostLink?uri={inputUrl}");
Assert.That(result.InstanceName, Is.EqualTo("Real"));
}
But some other times you might want to override a specific configuration key.
The ConfigureAppConfiguration method allows you to customize how you manage Configurations by adding or removing sources.
If you want to add some configurations specific to the WebApplicationFactory, you can use AddInMemoryCollection, a method that allows you to add configurations in a key-value format:
protected override void ConfigureWebHost(IWebHostBuilder builder)
{
builder.ConfigureAppConfiguration((host, configurationBuilder) =>
{
configurationBuilder.AddInMemoryCollection(
new List<KeyValuePair<string, string?>>
{
new KeyValuePair<string, string?>("InstanceName", "FromTests")
});
});
}
Even if you had the InstanceName configured in your appsettings.json file, the value is now overridden and set to FromTests.
You can validate this change by simply replacing the expected value in the previous test:
[Test]
public async Task Should_ReadInstanceNameFromSettings()
{
string inputUrl = "https://twitter.com/BelloneDavide/status/1682305491785973760";
var result = await _client.GetFromJsonAsync<Instance>($"/SocialPostLink?uri={inputUrl}");
Assert.That(result.InstanceName, Is.EqualTo("FromTests"));
}
If you also want to discard all the other existing configuration sources, you can call configurationBuilder.Sources.Clear() before AddInMemoryCollection and remove all the other existing configurations.
How to set up custom dependencies for your tests
Maybe you don’t want to resolve all the existing dependencies, but just a subset of them. For example, you might not want to call external APIs with a limited number of free API calls to avoid paying for the test-related calls. You can then rely on Stub classes that simulate the dependency by giving you full control of the behavior.
We want to replace an existing class with a Stub one: we are going to create a stub class that will be used instead of SocialLinkParser:
public class StubSocialLinkParser : ISocialLinkParser
{
    public LinkInfo GetLinkInfo(Uri postUri) => new LinkInfo
    {
        SocialNetworkName = "test from stub",
        Id = "test id",
        SourceUrl = postUri,
        Username = "test username"
    };
}
We can then customize Dependency Injection to use StubSocialLinkParser in place of SocialLinkParser by specifying the dependency within the ConfigureTestServices method:
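A sketch of that registration, placed in the ConfigureWebHost override of the custom factory (ConfigureTestServices lives in the Microsoft.AspNetCore.TestHost namespace):

protected override void ConfigureWebHost(IWebHostBuilder builder)
{
    builder.ConfigureTestServices(services =>
    {
        // test registrations run after the application's own, so this one wins
        services.AddScoped<ISocialLinkParser, StubSocialLinkParser>();
    });
}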
Finally, we can create a method to validate this change:
[Test]
public async Task Should_UseStubName()
{
string inputUrl = "https://twitter.com/BelloneDavide/status/1682305491785973760";
var result = await _client.GetFromJsonAsync<Instance>($"/SocialPostLink?uri={inputUrl}");
Assert.That(result.Info.SocialNetworkName, Is.EqualTo("test from stub"));
}
How to create Integration Tests on specific resolved dependencies
Now we are going to test that the SocialLinkParser does its job, regardless of the internal implementation. Right now we have used the Chain of Responsibility pattern, and we rely on the ISocialLinksFactory interface to create the correct sequence of handlers. But we don’t know in the future how we will define the code: maybe we will replace it all with a huge if-else sequence – the most important part is that the code works, regardless of the internal implementation.
We can proceed in two ways: writing tests on the interface or writing tests on the concrete class.
For the sake of this article, we are going to run tests on the SocialLinkParser class. Not the interface, but the concrete class. The first step is to add the class to the DI engine in the Program class:
builder.Services.AddScoped<SocialLinkParser>();
Now we can create a test to validate that it is working:
[Test]
public async Task Should_ResolveDependency()
{
using (var _scope = _factory.Services.CreateScope())
{
var service = _scope.ServiceProvider.GetRequiredService<SocialLinkParser>();
Assert.That(service, Is.Not.Null);
Assert.That(service, Is.AssignableTo<SocialLinkParser>());
}
}
As you can see, we are creating an IServiceScope by calling _factory.Services.CreateScope(). Since we have to discard this scope after the test run, we have to place it within a using block. Then, we can create a new instance of SocialLinkParser by calling _scope.ServiceProvider.GetRequiredService<SocialLinkParser>() and create all the tests we want on the concrete implementation of the class.
The benefit of this approach is that you have all the internal dependencies already resolved, without relying on mocks. You can then ensure that everything, from that point on, works as you expect.
Here I created the scope within a using block. There is another approach that I prefer: create the scope instance in the SetUp method, and call Dispose() on it in the TearDown phase:
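A sketch of that approach (NUnit), assuming _factory is the custom WebApplicationFactory instance:

private IServiceScope _scope;

[SetUp]
public void Setup()
{
    _scope = _factory.Services.CreateScope();
}

[TearDown]
public void TearDown()
{
    _scope?.Dispose();
}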
Sometimes you just want to see the logs generated by your application to help you debug an issue (yes, you can simply debug the application!). But, unless properly configured, the application logs will not be available to you.
But you can easily add logs to the console by adding the Console sink in your ConfigureTestServices method:
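A sketch of that customization, assuming the default Microsoft.Extensions.Logging console and debug providers:

builder.ConfigureTestServices(services =>
{
    services.AddLogging(logging => logging.AddConsole().AddDebug());
});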
Now you will be able to see all the logs you generated in the Output panel of Visual Studio by selecting the Tests source:
Beware that you are still reading the configurations for logging from the appsettings file! If you have specified in your project to log directly to a sink (such as DataDog or SEQ), your tests will send those logs to the specified sinks. Therefore, you should get rid of all the other logging sources by calling ClearProviders():
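For example, something like this keeps only the console output (again inside ConfigureTestServices):

services.AddLogging(logging =>
{
    // drop every provider configured by the application (file sinks, DataDog, SEQ, ...)
    logging.ClearProviders();
    logging.AddConsole();
});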
As I often say, a few Integration Tests are often more useful than a ton of Unit Tests. Focusing on Integration Tests instead of Unit Tests has the benefit of ensuring that the system behaves correctly regardless of the internal implementation.
In this article, I used the Chain of Responsibility pattern, so Unit Tests would be tightly coupled to the Handlers. If we decided to move to another pattern, we would have to delete all the existing tests and rewrite everything from scratch.
Therefore, in my opinion, the Testing Diamond is often more efficient than the Testing Pyramid, as I explained here: