Virtual Private Networks (VPNs) have been the go-to solution for securing remote access to banking systems for decades. They created encrypted tunnels for employees, vendors, and auditors to connect with core banking applications. But as cyber threats become more sophisticated, regulatory bodies tighten their grip, and branch operations spread into rural areas, it becomes increasingly clear that VPNs are no longer sufficient for regional and cooperative banks in India.
The Cybersecurity Reality for Banks
The numbers speak for themselves:
In just 10 months of 2023, Indian banks faced 13 lakh cyberattacks, averaging 4,400 daily.
Over the last five years, banks reported 248 successful data breaches.
In the first half of 2025 alone, the RBI imposed ₹15.63 crore in penalties on cooperative banks for compliance failures, many linked to weak cybersecurity practices.
The most concerning factor is that most of these incidents were linked to unauthorized access. With their flat network access model, traditional VPNs make banks highly vulnerable when even one compromised credential slips into the wrong hands.
Why VPNs Are No Longer Enough
Over-Privileged Access
VPNs were built to provide broad network connectivity. Once logged in, users often gain excessive access to applications and systems beyond their role. This “all-or-nothing” model increases the risk of insider threats and lateral movement by attackers.
Lack of Granularity
Banks require strict control over who accesses what. VPNs cannot enforce role-based or context-aware access controls. For example, an external auditor should only be able to view specific reports, not navigate through the entire network.
Operational Complexity
VPN infrastructure is cumbersome to deploy and maintain across hundreds of branches. The overhead of managing configurations, licenses, and updates adds strain to already stretched IT teams in regional banks.
Poor Fit for Hybrid and Remote Work
Banking operations are no longer confined to branch premises. Remote staff, vendors, and regulators need secure but seamless access. VPNs slow down connectivity, especially in rural low-bandwidth areas, hampering productivity.
Audit and Compliance Gaps
VPNs don’t inherently provide audit logs, geo-restriction policies, or continuous verification, making compliance audits more painful and penalties more likely.
The Rise of Zero Trust Network Access (ZTNA)
Zero Trust Network Access (ZTNA) addresses the shortcomings of VPNs by adopting a “never trust, always verify” mindset. Every user, device, and context is continuously authenticated before and during access. Instead of broad tunnels, ZTNA grants access only to the specific application or service a user is authorized for—nothing more.
For regional and cooperative banks, this shift is a game-changer:
Least-Privilege Access ensures employees, vendors, and auditors only see what their roles permit.
Built-in Audit Trails support RBI inspections without manual effort.
Agentless Options allow quick deployment across diverse user groups.
Resilience in Low-Bandwidth Environments ensures rural branches stay secure without connectivity struggles.
Seqrite ZTNA: Tailored for Banks
Unlike generic ZTNA solutions, Seqrite ZTNA has been designed with India’s banking landscape in mind. It supports various applications, including core banking systems, RDP, SSH, ERP, and CRM, while seamlessly integrating with existing IT infrastructure.
Key differentiators include:
Support for Thick Clients, such as core banking and ERP systems, which is critical for cooperative banks.
Out-of-the-Box SaaS Support for modern banking applications.
Centralized Policy Control to simplify access across branches, vendors, and staff.
In fact, a cooperative bank in Western Maharashtra replaced its legacy VPN with Seqrite ZTNA and immediately reduced its security risks. By implementing granular, identity-based access policies, the bank achieved secure branch connectivity, simplified audits, and stronger resilience against unauthorized access.
The Way Forward
The RBI has already stated that cybersecurity resilience will depend on zero-trust approaches. Cooperative and regional banks that continue to rely on legacy VPNs are exposing themselves to cyber risks, regulatory penalties, and operational inefficiencies.
By moving from VPNs to ZTNA, banks can protect their sensitive data, secure their branches and remote workforce, and stay one step ahead of attackers—all while ensuring compliance.
Legacy VPNs are relics of the past. The future of secure banking access is Zero Trust.
Secure your bank’s core systems with Seqrite ZTNA, which is built for India’s cooperative and regional banks to replace risky VPNs with identity-based, least-privilege access. Stay compliant, simplify audits, and secure every branch with Zero Trust.
Average teams aim for 100% Code Coverage just to hit the number. Great teams don’t. Why?
Code Coverage is a valuable metric in software development, especially when it comes to testing. It provides insights into how much of your codebase is exercised by your test suite.
However, we must recognize that Code Coverage alone should not be the ultimate goal of your testing strategy. It has some known limitations, and 100% Code Coverage does not guarantee that your code is bug-free.
In this article, we’ll explore why Code Coverage matters, its limitations, and how to balance achieving high coverage and effective testing. We’ll use C# to demonstrate when Code Coverage works well and how you can cheat on the result.
What Is Code Coverage?
Code Coverage measures the percentage of code lines, branches, or statements executed during testing. It helps answer questions like:
How much of my code is tested?
Are there any untested paths or dead code?
Which parts of the application need additional test coverage?
In C#, tools like Cobertura, dotCover, and Visual Studio’s built-in coverage analysis provide Code Coverage reports.
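As a concrete starting point, here is one common way to generate such a report from the command line. This is a sketch that assumes your test project references the coverlet.collector NuGet package (the default in recent .NET test templates) and that you have installed the separate ReportGenerator global tool:

```shell
# Produce a Cobertura-format XML coverage report under TestResults/
dotnet test --collect:"XPlat Code Coverage"

# Turn the XML into a browsable HTML report (requires the
# dotnet-reportgenerator-globaltool to be installed)
reportgenerator -reports:"**/coverage.cobertura.xml" -targetdir:"coveragereport"
```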
You may be tempted to think that the higher the coverage, the better the quality of your tests. However, we will soon demonstrate why this assumption is misleading.
Why Code Coverage Matters
Clearly, if you write valuable tests, Code Coverage is a great ally.
A high value of Code Coverage helps you with:
Risk mitigation: High Code Coverage reduces the risk of undiscovered defects. If a piece of code isn’t covered, defects in it can go unnoticed.
Preventing regressions: code is destined to evolve over time. If you ensure that most of your code is covered by tests, whenever you add more code you will discover which parts of the existing system are impacted by your changes. If you update the production code and no test fails, it might be a bad sign: you probably need to cover the code you are modifying with more tests.
Quality assurance: Code Coverage ensures that critical parts of your application are tested thoroughly. Good tests focus on the functional aspects of the code (what) rather than on the technical aspects (how). A good test suite is a safety net against regressions.
Guidance for Testing Efforts: Code Coverage highlights areas that need more attention. It guides developers in writing additional tests where necessary.
The Limitations of Code Coverage
While Code Coverage is valuable, it has limitations:
False Sense of Security: Achieving 100% coverage doesn’t guarantee bug-free software. It’s possible to have well-covered code that still contains subtle defects. This is especially true when mocking dependencies.
Focus on Lines, Not Behavior: Code Coverage doesn’t consider the quality of tests. It doesn’t guarantee that the tests cover all possible scenarios.
Ignored Edge Cases: Some code paths (exception handling, rare conditions) are complex to cover. High coverage doesn’t necessarily mean thorough testing.
3 Practical reasons why Code Coverage percentage can be misleading
For the sake of this article, I’ve created a dummy .NET API project with the typical three layers: controller, service, and repository.
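The sample project itself is not reproduced here, so the following is a minimal sketch of what the repository layer and a valueless test might look like. Only the WeatherForecastRepository name comes from the original; the WeatherForecast record, the method, and the test are my assumptions:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using Xunit;

// Hypothetical model and repository, sketched to match the article's
// three-layer sample project.
public record WeatherForecast(DateTime Date, int TemperatureC);

public class WeatherForecastRepository
{
    public IEnumerable<WeatherForecast> GetForecasts() =>
        Enumerable.Range(1, 5)
            .Select(i => new WeatherForecast(
                DateTime.Now.AddDays(i),
                Random.Shared.Next(-20, 55)));
}

public class WeatherForecastRepositoryTests
{
    // This test executes every line of the repository, so the class
    // reports 100% coverage, yet it verifies no real behavior.
    [Fact]
    public void GetForecasts_DoesNotThrow()
    {
        var result = new WeatherForecastRepository().GetForecasts().ToList();
        Assert.NotNull(result); // always true: nothing meaningful is checked
    }
}
```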
Here we are: we have reached 53% of total Code Coverage by adding one single test, which does not provide any value!
As you can see, in fact, the WeatherForecastRepository has now reached 100% Code Coverage.
Great job! Or is it?
You can cheat by excluding parts of the code
In C# there is a handy attribute that you can apply to methods and classes: ExcludeFromCodeCoverage.
While this attribute can be useful for classes that you cannot test, it can be used to inflate the Code Coverage percentage by applying it to classes and methods you don’t want to test (maybe because you are lazy?).
We can, in fact, add that attribute to every single class like this:
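A sketch of what that looks like (the service class name is illustrative; the attribute itself is the real System.Diagnostics.CodeAnalysis.ExcludeFromCodeCoverage):

```csharp
using System.Diagnostics.CodeAnalysis;

// The attribute removes this class from the coverage denominator:
// the overall percentage rises without a single new test being written.
[ExcludeFromCodeCoverage]
public class WeatherForecastService
{
    // ... unchanged production code ...
}
```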
You can then add the same attribute to all the other classes – even the Program class! – to reach 100% Code Coverage without writing lots of tests.
Note: to reach 100% I had to exclude everything but the tests on the Repository: otherwise, if I had exactly zero methods under test, the final Code Coverage would’ve been 0.
As we saw, high Code Coverage is not enough. It’s a good starting point, but it must not be the final goal.
We can, indeed, focus our efforts in different areas:
Test Quality: Prioritize writing meaningful tests over chasing high coverage. Focus on edge cases, boundary values, and scenarios that matter to users.
Mutation Testing: Instead of just measuring coverage, consider mutation testing. It introduces artificial defects and checks if tests catch them.
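To make the idea concrete, here is a sketch of a surviving mutant. The IsEligible method and its test are invented for illustration; Stryker.NET is one real mutation-testing tool for .NET:

```csharp
using Xunit;

public static class Eligibility
{
    // Production code under test.
    public static bool IsEligible(int age) => age >= 18;
}

public class EligibilityTests
{
    [Fact]
    public void IsEligible_ReturnsTrue_ForAdult()
    {
        Assert.True(Eligibility.IsEligible(30)); // covers 100% of IsEligible
    }
}

// A mutation tool would change ">=" to ">" and rerun the suite.
// This test still passes, so the mutant survives, revealing that the
// boundary case IsEligible(18) is never exercised despite full coverage.
```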
Finally, my suggestion is to focus on integration tests rather than on unit tests: this testing strategy is called Testing Diamond.
Further readings
To generate Code Coverage reports, I used Coverlet, as I explained in this article (which refers to Visual Studio 2019, but the steps are still valid with newer versions).
In my opinion, we should not focus all our efforts on Unit Tests. On the contrary, we should write more Integration Tests to ensure that the functionality, as a whole, works correctly.
This way of defining tests is called Testing Diamond, and I explained it here:
Code Coverage is a useful metric but should not be the end goal. Aim for a balance: maintain good coverage while ensuring effective testing. Remember that quality matters more than mere numbers. Happy testing! 🚀
I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛