Tag: why

  • Why Data Principal Rights Management Is the Heart of Modern Privacy Compliance|Seqrite

    Why Data Principal Rights Management Is the Heart of Modern Privacy Compliance|Seqrite


    As data privacy laws evolve globally—from the GDPR to India’s Digital Personal Data Protection Act (DPDPA)—one common theme emerges: empowering individuals with control over their data. This shift places data principal rights at the center of privacy compliance.

    Respecting these rights isn’t just a legal obligation for organizations; it’s a business imperative. Efficiently operationalizing and fulfilling data principal rights is now a cornerstone of modern privacy programs.

    Understanding Data Principal Rights

    Data principal rights refer to the entitlements granted to individuals regarding their data. Under laws like the DPDPA and GDPR, these typically include:

    • Right to Access: Individuals can request a copy of the personal data held about them.
    • Right to Correction: They can demand corrections to inaccurate or outdated data.
    • Right to Erasure (Right to Be Forgotten): They can request deletion of their data under specific circumstances.
    • Right to Data Portability: They can request their data in a machine-readable format.
    • Right to Withdraw Consent: They can withdraw previously given consent for data processing.
    • Right to Grievance Redressal: They can lodge complaints if their rights are not respected.

    While these rights sound straightforward, fulfilling them at scale is anything but simple, especially when data is scattered across cloud platforms, internal systems, and third-party applications.

    Why Data Principal Rights Management is Critical

    1. Regulatory Compliance and Avoidance of Penalties

    Non-compliance can result in substantial fines, regulatory scrutiny, and reputational harm. For instance, DPDPA empowers the Data Protection Board of India to impose heavy penalties for failure to honor data principal rights on time.

    2. Customer Trust and Transparency

    Respecting user rights builds transparency and demonstrates that your organization values privacy. This can increase customer loyalty and strengthen brand reputation in privacy-conscious markets.

    3. Operational Readiness and Risk Reduction

    Organizations risk delays, errors, and missed deadlines when rights requests are handled manually. An automated and structured rights management process reduces legal risk and improves operational agility.

    4. Auditability and Accountability

    Every action taken to fulfill a rights request must be logged and documented. This is essential for proving compliance during audits or investigations.

    The Role of Data Discovery in Rights Fulfilment

    To respond to any data principal request, you must first know where the relevant personal data resides. This is where Data Discovery plays a crucial supporting role.

    A robust data discovery framework enables organizations to:

    • Identify all systems and repositories that store personal data.
    • Correlate data to specific individuals or identifiers.
    • Retrieve, correct, delete, or port data accurately and quickly.

    Without comprehensive data visibility, any data principal rights management program will fail, resulting in delays, partial responses, or non-compliance.

    Key Challenges in Rights Management

    Despite its importance, many organizations struggle with implementing effective data principal rights management due to:

    • Fragmented data environments: Personal data is often stored in silos, making it challenging to aggregate and act upon.
    • Manual workflows: Fulfilling rights requests often involves slow, error-prone manual processes.
    • Authentication complexities: Verifying the identity of the data principal securely is essential to prevent abuse of rights.
    • Lack of audit trails: Without automated tracking, it’s hard to demonstrate compliance.

    Building a Scalable Data Principal Rights Management Framework

    To overcome these challenges, organizations must invest in technologies and workflows that automate and streamline the lifecycle of rights requests. A mature data principal rights management framework should include:

    • Centralized request intake: A portal or dashboard where individuals can easily submit rights requests.
    • Automated data mapping: Leveraging data discovery tools to locate relevant personal data quickly.
    • Workflow automation: Routing requests to appropriate teams with built-in deadlines and escalation paths.
    • Verification and consent tracking: Ensuring that only verified individuals can initiate requests, and keeping a record of their consent history.
    • Comprehensive logging: Maintaining a tamper-proof audit trail of all actions to fulfill requests.
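
    To make these building blocks more concrete, here is a minimal C# sketch of what a rights-request record with a built-in audit trail could look like. Every type, field, status, and deadline below is an illustrative assumption, not a description of any specific product or statute.

    using System;
    using System.Collections.Generic;
    
    // Illustrative sketch only: names, statuses, and the 30-day deadline are assumptions.
    public enum RightsRequestType { Access, Correction, Erasure, Portability, WithdrawConsent, Grievance }
    
    public enum RequestStatus { Received, IdentityVerified, InProgress, Fulfilled, Rejected }
    
    public record AuditEntry(DateTimeOffset Timestamp, string Actor, string Action);
    
    public class RightsRequest
    {
        public Guid Id { get; } = Guid.NewGuid();
        public string DataPrincipalId { get; }
        public RightsRequestType Type { get; }
        public RequestStatus Status { get; private set; } = RequestStatus.Received;
        public DateTimeOffset ReceivedAt { get; } = DateTimeOffset.UtcNow;
    
        // Placeholder deadline: actual statutory timelines differ by regulation.
        public DateTimeOffset DueBy => ReceivedAt.AddDays(30);
    
        private readonly List<AuditEntry> _auditTrail = new();
        public IReadOnlyList<AuditEntry> AuditTrail => _auditTrail;
    
        public RightsRequest(string dataPrincipalId, RightsRequestType type)
        {
            DataPrincipalId = dataPrincipalId;
            Type = type;
        }
    
        // Every state change is logged, so the request history stays audit-ready.
        public void Advance(RequestStatus newStatus, string actor)
        {
            _auditTrail.Add(new AuditEntry(DateTimeOffset.UtcNow, actor, $"Status changed from {Status} to {newStatus}"));
            Status = newStatus;
        }
    }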

    The Future of Privacy Lies in Empowerment

    As data privacy regulations mature, the focus shifts from mere protection to empowerment. Data principals are no longer passive subjects but active stakeholders in how their data is handled. Organizations that embed data principal rights management into their core data governance strategy will not only stay compliant but also gain a competitive edge in building customer trust.

    Empower Your Privacy Program with Seqrite

    Seqrite’s Data Privacy Suite is purpose-built to help enterprises manage data principal rights confidently. From automated request intake and identity verification to real-time data discovery and audit-ready logs, Seqrite empowers you to comply faster, smarter, and at scale.




  • What is MDM and Why Your Business Can’t Ignore It Anymore

    What is MDM and Why Your Business Can’t Ignore It Anymore


    In today’s always-connected, mobile-first world, employees are working on the go—from airports, cafes, living rooms, and everywhere in between. That’s great for flexibility and productivity—but what about security? How do you protect sensitive business data when it’s spread across dozens or hundreds of mobile devices? This is where Mobile Device Management (MDM) steps in. Let’s look at what MDM is.

     

    What is MDM?

    MDM, short for Mobile Device Management, is a system that allows IT teams to monitor, manage, and secure employees’ mobile devices—whether company-issued or BYOD (Bring Your Own Device).

    It’s like a smart control panel for your organization’s phones and tablets. From pushing software updates and managing apps to enforcing security policies and wiping lost devices—MDM gives you full visibility and control, all from a central dashboard.

    MDM helps ensure that only secure, compliant, and authorized devices can access your company’s network and data.

     

    Why is MDM Important?

    As the modern workforce becomes more mobile, data security risks also rise. Devices can be lost, stolen, or compromised. Employees may install risky apps or access corporate files from unsecured networks. Without MDM, IT teams are essentially blind to these risks.

    A few common use cases of MDM:

    • A lost smartphone with access to business emails.
    • An employee downloading malware-infected apps.
    • Data breaches due to unsecured Wi-Fi use on personal devices.
    • Non-compliance with industry regulations due to lack of control.

    MDM helps mitigate all these risks while still enabling flexibility.

     

    Key Benefits of an MDM Solution

    Enhanced Security

    Remotely lock, wipe, or locate lost devices. Prevent unauthorized access, enforce passcodes, and control which apps are installed.

    Centralized Management

    Manage all mobile devices—iOS and Android—from a single dashboard. Push updates, install apps, and apply policies in bulk.

    Improved Productivity

    Set devices in kiosk mode for focused app usage. Push documents, apps, and files on the go. No downtime, no waiting.

    Compliance & Monitoring

    Track usage, enforce encryption, and maintain audit trails. Ensure your devices meet industry compliance standards at all times. 

     

    Choosing the Right MDM Solution

    There are many MDM solutions out there, but the right one should go beyond basic management. It should make your life easier, offer deep control, and scale with your organization’s needs—without compromising user experience.

    Why Seqrite MDM is Built for Today’s Mobile Workforce

     Seqrite Enterprise Mobility Management (EMM) is a comprehensive MDM solution tailored for businesses that demand both security and simplicity. Here’s what sets it apart:

    1. Unified Management Console: Manage all enrolled mobile devices in one place—track location, group devices, apply custom policies, and more.
    2. AI-Driven Security: Built-in antivirus, anti-theft features, phishing protection, and real-time web monitoring powered by artificial intelligence.
    3. Virtual Fencing: Set geo, Wi-Fi, and time-based restrictions to control device access and usage—great for field teams and remote employees.
    4. App & Kiosk Mode Management: Push apps, lock devices into single- or multi-app kiosk mode, and publish custom apps to your enterprise app store.
    5. Remote File Transfer & Troubleshooting: Send files to one or multiple devices instantly and troubleshoot issues remotely to reduce device downtime.
    6. Automation & Reporting: Get visual dashboards, schedule regular exports, and access real-time logs and audit reports to stay ahead of compliance.

     

     Final Thoughts

    As work continues to shift beyond the boundaries of the office, MDM is no longer a luxury; it’s a necessity. Whether you’re a growing startup or a large enterprise, protecting your mobile workforce is key to maintaining both productivity and security.

    With solutions like Seqrite Enterprise Mobility Management, businesses get the best of both worlds: powerful control and seamless management, all wrapped in a user-friendly experience.




  • Understanding void(0) in JavaScript: What It Is, Why It’s Used, and How to Fix It



    Understanding void(0) in JavaScript: What It Is, Why It’s Used, and How to Fix It




  • Why Zero‑Trust Access Is the Future of Secure Networking

    Why Zero‑Trust Access Is the Future of Secure Networking


    ZTNA vs VPN is a critical comparison in today’s hyperconnected world, where remote workforces, cloud-driven data flows, and ever-evolving threats make securing enterprise network access more complex than ever. Traditional tools like Virtual Private Networks (VPNs), which once stood as the gold standard of secure connectivity, are now showing their age. Enter Zero Trust Network Access (ZTNA): a modern, identity-centric approach that is rapidly replacing VPNs in forward-thinking organizations.

    The Rise and Fall of VPNs

    VPNs have long been trusted to provide secure remote access by creating an encrypted “tunnel” between the user and the corporate network. While VPNs are still widely used, they operate on a fundamental assumption: once a user is inside the network, they can be trusted. This “castle-and-moat” model may have worked in the past, but in today’s threat landscape, it creates glaring vulnerabilities:

    • Over-privileged access: VPNs often grant users broad network access, increasing the risk of lateral movement by malicious actors.
    • Lack of visibility: VPNs provide limited user activity monitoring once access is granted.
    • Poor scalability: As remote workforces grow, VPNs become performance bottlenecks, especially under heavy loads.
    • Susceptibility to credential theft: VPNs rely heavily on usernames and passwords, which can be stolen or reused in credential stuffing attacks.

    What is Zero Trust Network Access (ZTNA)?

    ZTNA redefines secure access by flipping the trust model. It’s based on the principle of “never trust, always verify.” Instead of granting blanket access to the entire network, ZTNA enforces granular, identity-based access controls. Access is provided only after the user, device, and context are continuously verified.

    ZTNA architecture typically operates through a broker that evaluates user identity, device posture, and other contextual factors before granting access to a specific application, not the entire network. This minimizes exposure and helps prevent the lateral movement of threats.
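
    To illustrate the idea (and only the idea), the sketch below models a ZTNA-style policy decision as a C# function over identity, device posture, and context. All names and rules here are assumptions made for explanation; they do not describe Seqrite ZTNA or any other product’s actual API.

    using System;
    
    // Toy model of a ZTNA broker decision: access is evaluated per application,
    // never granted to "the whole network". Every rule below is an illustrative assumption.
    public record AccessContext(
        string UserId,
        bool MfaPassed,            // identity verification result
        bool DevicePostureHealthy, // e.g. disk encrypted, OS patched
        bool FromTrustedLocation,  // contextual / risk signal
        string RequestedApp);
    
    public static class ZtnaBrokerSketch
    {
        public static bool IsAccessGranted(AccessContext ctx, string[] appsAllowedForUser)
        {
            if (!ctx.MfaPassed) return false;             // never trust credentials alone
            if (!ctx.DevicePostureHealthy) return false;  // unhealthy devices are denied
            if (!ctx.FromTrustedLocation) return false;   // context can tighten or relax policy
    
            // Grant access only to the specific application the user is entitled to.
            return Array.Exists(appsAllowedForUser, app => app == ctx.RequestedApp);
        }
    }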

    ZTNA vs VPN: The Key Differences

    Why ZTNA is the Future

    1. Security for the Cloud Era: ZTNA is designed for modern environments—cloud, hybrid, or multi-cloud. It secures access across on-prem and SaaS apps without the complexity of legacy infrastructure.
    2. Adaptive Access Controls: Access isn’t just based on credentials. ZTNA assesses user behavior, device health, location, and risk level in real time to dynamically permit or deny access.
    3. Enhanced User Experience: Unlike VPNs that slow down application performance, ZTNA delivers faster, direct-to-app connectivity, reducing latency and improving productivity.
    4. Minimized Attack Surface: Because ZTNA only exposes what’s necessary and hides the rest, the enterprise’s digital footprint becomes nearly invisible to attackers.
    5. Better Compliance & Visibility: With robust logging, analytics, and policy enforcement, ZTNA helps organizations meet compliance standards and gain detailed insights into access behaviors.

     Transitioning from VPN to ZTNA

    While the ZTNA vs VPN debate continues to be a key consideration for IT leaders, it’s clear that Zero Trust offers a more future-ready approach. Although VPNs still serve specific legacy use cases, organizations aiming to modernize should begin the transition from VPN to ZTNA now. Adopting a phased, hybrid model enables businesses to secure critical applications with ZTNA while still leveraging VPN access for systems that require it.

    The key is to evaluate access needs, identify high-risk entry points, and prioritize business-critical applications for ZTNA implementation. Over time, enterprises can reduce their dependency on VPNs and shift toward a more resilient, Zero Trust architecture.

    Ready to Take the First Step Toward Zero Trust?

    Explore how Seqrite ZTNA enables secure, seamless, and scalable access for the modern workforce. Make the shift from outdated VPNs to a future-ready security model today.




  • Rethinking Design: Why Privacy Shouldn’t Be an Afterthought

    Rethinking Design: Why Privacy Shouldn’t Be an Afterthought


    As organizations continue to embrace digital transformation, how we think about personal data has changed fundamentally. Data is no longer just a by-product of business processes; it is often the product itself. This shift brings a pressing responsibility: privacy cannot be treated as an after-the-fact fix. It must be part of the architecture from the outset.

    This is the thinking behind Privacy by Design. This concept is gaining renewed attention not just because regulators endorse it but also because it is increasingly seen as a marker of digital maturity.

    So, what is Privacy by Design?

    At a basic level, Privacy by Design (often abbreviated as PbD) means designing systems, products, and processes with privacy built into them from the start. It’s not a tool or a checklist; it’s a way of thinking.

    Rather than waiting until the end of the development cycle to address privacy risks, teams proactively factor privacy into the design, architecture, and decision-making stages. This means asking the right questions early:

    • Do we need to collect this data?
    • How will it be stored, shared, and eventually deleted?
    • Are there less invasive ways to achieve the same business goal?

    This mindset goes beyond technology. It is as much about product strategy and organizational alignment as it is about encryption or access controls.

    Why It’s Becoming Non-Negotiable

    The global regulatory environment is a key driver here. GDPR, for instance, formalized this approach in Article 25, which explicitly calls for “data protection by design and by default.” However, the need for privacy by design is not just about staying compliant.

    Customers today are more aware than ever of how their data is used. Organizations that respect that reality – minimizing collection, improving transparency, and offering control – tend to earn more trust. And in a landscape where trust is hard to gain and easy to lose, that’s a competitive advantage.

    Moreover, designing with privacy in mind from an engineering perspective reduces technical debt. Fixing privacy issues after launch usually means expensive rework and rushed patches. Building it right from day one leads to better outcomes.

    Turning Principles into Practice

    For many teams, the challenge is not agreeing with the idea but knowing how to apply it. Here’s what implementation often looks like in practice:

    1. Product & Engineering Collaboration

    Product teams define what data is needed and why. Engineering teams determine how it’s collected, stored, and protected. Early conversations between both help identify red flags and trade-offs before anything goes live.

    2. Embedding Privacy into Architecture

    This includes designing data flows with limitations, such as separating identifiers, encrypting sensitive attributes at rest, and ensuring role-based access to personal data. These aren’t just compliance tasks; they are innovative design practices that also improve security posture.

    3. Privacy as a Default Setting

    Instead of asking users to configure privacy settings after onboarding, PbD insists on secure defaults. If a feature collects data, users should have to opt in, not find a buried toggle to opt out (a minimal sketch of this idea follows this list).

    4. Periodic Reviews, Not Just One-Time Checks

    Privacy by Design isn’t a one-and-done activity. As systems evolve and new features roll out, periodic reviews help ensure that decisions made early on still hold up in practice.

    5. Cross-Functional Awareness

    Not every developer needs to be a privacy expert, but everyone in the development lifecycle—from analysts to QA—should be familiar with core privacy principles. A shared vocabulary goes a long way toward spotting and resolving issues early.
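
    To make the “privacy as a default setting” point tangible, here is a minimal C# sketch in which every optional data-collection flag starts disabled and can only be turned on through an explicit opt-in call. The class and setting names are hypothetical, not taken from any real product.

    using System;
    
    // Hypothetical sketch of privacy-by-default settings:
    // nothing optional is collected until the user explicitly opts in.
    public class FeaturePrivacySettings
    {
        public bool ShareUsageAnalytics { get; private set; } = false;          // off by default
        public bool PersonalizedRecommendations { get; private set; } = false;  // off by default
        public bool CrashReports { get; private set; } = false;                 // off by default
    
        // The only way to enable collection is an explicit opt-in,
        // which can also be recorded for consent tracking.
        public void OptIn(string settingName)
        {
            switch (settingName)
            {
                case nameof(ShareUsageAnalytics): ShareUsageAnalytics = true; break;
                case nameof(PersonalizedRecommendations): PersonalizedRecommendations = true; break;
                case nameof(CrashReports): CrashReports = true; break;
                default: throw new ArgumentException($"Unknown setting: {settingName}");
            }
        }
    }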

    Going Beyond Compliance

    A common mistake is to treat Privacy by Design as a box to tick. However, the organizations that do it well tend to treat it differently.

    They don’t ask, “What’s the minimum we need to do to comply?” Instead, they ask, “How do we build responsibly?”

    They don’t design features and then layer privacy on top. They build privacy into the feature.

    They don’t stop at policies. They create workflows and tooling that enforce those policies consistently.

    This mindset fosters resilience, reduces risk, and, over time, becomes part of the organization’s culture. In this mindset, product ideas are evaluated not only for feasibility and market fit, but also for ethical and privacy alignment.

    Final Thoughts

    Privacy by Design is about intent. When teams build with privacy in mind, they send a message that the organization values the people behind the data.

    This approach is very much expected in an era where privacy concerns are at the centre of digital discourse. For those leading security, compliance, or product teams, the real opportunity lies in making privacy not just a requirement but a differentiator.

    Seqrite brings Privacy by Design to life with automated tools for data discovery, classification, and protection—right from the start. Our solutions embed privacy into every layer of your IT infrastructure, ensuring compliance and building trust. Explore how Seqrite can simplify your privacy journey.

     




  • Why reaching 100% Code Coverage must NOT be your testing goal (with examples in C#) | Code4IT

    Why reaching 100% Code Coverage must NOT be your testing goal (with examples in C#) | Code4IT


    Average teams aim at 100% Code Coverage just to reach the number. Great teams don’t. Why?


    Code Coverage is a valuable metric in software development, especially when it comes to testing. It provides insights into how much of your codebase is exercised by your test suite.

    However, we must recognize that Code Coverage alone should not be the ultimate goal of your testing strategy. It has some known limitations, and 100% Code Coverage does not guarantee that your code is bug-free.

    In this article, we’ll explore why Code Coverage matters, its limitations, and how to balance achieving high coverage and effective testing. We’ll use C# to demonstrate when Code Coverage works well and how you can cheat on the result.

    What Is Code Coverage?

    Code Coverage measures the percentage of code lines, branches, or statements executed during testing. It helps answer questions like:

    • How much of my code is tested?
    • Are there any untested paths or dead code?
    • Which parts of the application need additional test coverage?

    In C#, tools like Cobertura, dotCover, and Visual Studio’s built-in coverage analysis provide Code Coverage reports.

    You may be tempted to think that the higher the coverage, the better the quality of your tests. However, we will soon demonstrate why this assumption is misleading.

    Why Code Coverage Matters

    Clearly, if you write valuable tests, Code Coverage is a great ally.

    A high value of Code Coverage helps you with:

    1. Risk mitigation: High Code Coverage reduces the risk of undiscovered defects. If a piece of code isn’t covered, any defects in it are likely to go unnoticed.
    2. Preventing regressions: code is destined to evolve over time. If you ensure that most of your code is covered by tests, whenever you add more code you will discover which parts of the existing system are impacted by your changes. If you update the production code and no test fails, it might be a bad sign: you probably need to cover the code you are modifying with enough tests.
    3. Quality assurance: Code Coverage ensures that critical parts of your application are tested thoroughly. Good tests focus on the functional aspects of the code (what) rather than on the technical aspects (how). A good test suite is a safety net against regressions.
    4. Guidance for Testing Efforts: Code Coverage highlights areas that need more attention. It guides developers in writing additional tests where necessary.

    The Limitations of Code Coverage

    While Code Coverage is valuable, it has limitations:

    1. False Sense of Security: Achieving 100% coverage doesn’t guarantee bug-free software. It’s possible to have well-covered code that still contains subtle defects. This is especially true when mocking dependencies.
    2. Focus on Lines, Not Behavior: Code Coverage doesn’t consider the quality of tests. It doesn’t guarantee that the tests cover all possible scenarios.
    3. Ignored Edge Cases: Some code paths (exception handling, rare conditions) are complex to cover. High coverage doesn’t necessarily mean thorough testing.

    3 Practical reasons why Code Coverage percentage can be misleading

    For the sake of this article, I’ve created a dummy .NET API project with the typical three layers: controller, service, and repository.

    It contains a Controller with two endpoints:

    [ApiController]
    [Route("[controller]")]
    public class UniversalWeatherForecastController : ControllerBase
    {
        private readonly IWeatherService _weatherService;
    
        public UniversalWeatherForecastController(IWeatherService weatherService)
        {
            _weatherService = weatherService;
        }
    
        [HttpGet]
        public IEnumerable<Weather> Get(int locationId)
        {
            var forecast = _weatherService.ForecastsByLocation(locationId);
            return forecast.ToList();
        }
    
        [HttpGet("minByPlanet")]
        public Weather GetMinByPlanet(Planet planet)
        {
            return _weatherService.MinTemperatureForPlanet(planet);
        }
    }
    

    The Controller uses the Service:

    public class WeatherService : IWeatherService
    {
        private readonly IWeatherForecastRepository _repository;
    
        public WeatherService(IWeatherForecastRepository repository)
        {
            _repository = repository;
        }
    
        public IEnumerable<Weather> ForecastsByLocation(int locationId)
        {
            ArgumentOutOfRangeException.ThrowIfLessThanOrEqual(locationId, 0);
    
            Location? searchedLocation = _repository.GetLocationById(locationId);
    
            if (searchedLocation == null)
                throw new LocationNotFoundException(locationId);
    
            return searchedLocation.WeatherForecasts;
        }
    
        public Weather MinTemperatureForPlanet(Planet planet)
        {
            var allCitiesInPlanet = _repository.GetLocationsByPlanet(planet);
            int minTemperature = int.MaxValue;
            Weather minWeather = null;
            foreach (var city in allCitiesInPlanet)
            {
                int temperature =
                    city.WeatherForecasts.MinBy(c => c.TemperatureC).TemperatureC;
    
                if (temperature < minTemperature)
                {
                    minTemperature = temperature;
                    minWeather = city.WeatherForecasts.MinBy(c => c.TemperatureC);
                }
            }
            return minWeather;
        }
    }
    

    Finally, the Service calls the Repository, omitted for brevity (it’s just a bunch of items in an in-memory List).

    I then created an NUnit test project to write the unit tests, focusing on the WeatherService (the repository is mocked with Moq):

    
    public class WeatherServiceTests
    {
        private readonly Mock<IWeatherForecastRepository> _mockRepository;
        private WeatherService _sut;
    
        public WeatherServiceTests() => _mockRepository = new Mock<IWeatherForecastRepository>();
    
        [SetUp]
        public void Setup() => _sut = new WeatherService(_mockRepository.Object);
    
        [TearDown]
        public void Teardown() =>_mockRepository.Reset();
    
        // Tests
    
    }
    

    This class covers two cases, both related to the ForecastsByLocation method of the Service.

    Case 1: when the location exists in the repository, this method must return the related info.

    [Test]
    public void ForecastByLocation_Should_ReturnForecast_When_LocationExists()
    {
        //Arrange
        var forecast = new List<Weather>
            {
                new Weather{
                    Date = DateOnly.FromDateTime(DateTime.Now.AddDays(1)),
                    Summary = "sunny",
                    TemperatureC = 30
                }
            };
    
        var location = new Location
        {
            Id = 1,
            WeatherForecasts = forecast
        };
    
        _mockRepository.Setup(r => r.GetLocationById(1)).Returns(location);
    
        //Act
        var resultForecast = _sut.ForecastsByLocation(1);
    
        //Assert
        CollectionAssert.AreEquivalent(forecast, resultForecast);
    }
    

    Case 2: when the location does not exist in the repository, the method should throw a LocationNotFoundException.

    [Test]
    public void ForecastByLocation_Should_Throw_When_LocationDoesNotExists()
    {
        //Arrange
        _mockRepository.Setup(r => r.GetLocationById(1)).Returns<Location?>(null);
    
        //Act + Assert
        Assert.Catch<LocationNotFoundException>(() => _sut.ForecastsByLocation(1));
    }
    

    We then can run the Code Coverage report and see the result:

    Initial Code Coverage

    Tests cover 16% of lines and 25% of branches, as shown in the report displayed above.

    Delving into the details of the WeatherService class, we can see that we have reached 100% Code Coverage for the ForecastsByLocation method.

    Code Coverage Details for the Service

    Can we assume that that method is bug-free? Not at all!

    Not all cases may be covered by tests

    Let’s review the method under test.

    public IEnumerable<Weather> ForecastsByLocation(int locationId)
    {
        ArgumentOutOfRangeException.ThrowIfLessThanOrEqual(locationId, 0);
    
        Location? searchedLocation = _repository.GetLocationById(locationId);
    
        if (searchedLocation == null)
            throw new LocationNotFoundException(locationId);
    
        return searchedLocation.WeatherForecasts;
    }
    

    Our tests only covered two cases:

    • the location exists;
    • the location does not exist.

    However, these tests do not cover the following cases:

    • the locationId is less than zero;
    • the locationId is exactly zero (are we sure that 0 is an invalid locationId?)
    • the _repository throws an exception (right now, that exception is not handled);
    • the location does exist, but it has no weather forecast info; is this a valid result? Or should we have thrown another custom exception?

    So, well, we have 100% Code Coverage for this method, yet we have plenty of uncovered cases.
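
    As a rough sketch of what some of those missing tests could look like (reusing the _sut and _mockRepository fields from the fixture above; these tests are my addition, not part of the original example), one could write:

    [Test]
    public void ForecastByLocation_Should_Throw_When_LocationIdIsZeroOrNegative()
    {
        // The ArgumentOutOfRangeException.ThrowIfLessThanOrEqual guard is never exercised by the existing tests.
        Assert.Catch<ArgumentOutOfRangeException>(() => _sut.ForecastsByLocation(0));
        Assert.Catch<ArgumentOutOfRangeException>(() => _sut.ForecastsByLocation(-5));
    }
    
    [Test]
    public void ForecastByLocation_Should_Propagate_When_RepositoryThrows()
    {
        // Right now the exception bubbles up unhandled; this test documents that behaviour
        // and will fail if we later decide to wrap it in a domain-specific exception.
        _mockRepository.Setup(r => r.GetLocationById(1)).Throws(new InvalidOperationException());
    
        Assert.Catch<InvalidOperationException>(() => _sut.ForecastsByLocation(1));
    }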

    You can cheat on the result by adding pointless tests

    There’s a simple way to have high Code Coverage without worrying about the quality of the tests: calling the methods and ignoring the result.

    To demonstrate it, we can create one single test method to reach 100% coverage for the Repository, without even knowing what it actually does:

    public class WeatherForecastRepositoryTests
    {
        private readonly WeatherForecastRepository _sut;
    
        public WeatherForecastRepositoryTests() =>
            _sut = new WeatherForecastRepository();
    
        [Test]
        public void TotallyUselessTest()
        {
            _ = _sut.GetLocationById(1);
            _ = _sut.GetLocationsByPlanet(Planet.Jupiter);
    
            Assert.That(1, Is.EqualTo(1));
        }
    }
    

    Here we are: we have reached 53% of total Code Coverage by adding one single test, which does not provide any value!

    We reached 53% Code Coverage without adding useful methods

    As you can see, in fact, the WeatherForecastRepository has now reached 100% Code Coverage.

    The whole class has 100% Code Coverage, even without useful tests

    Great job! Or is it?

    You can cheat by excluding parts of the code

    In C# there is a handy attribute that you can apply to methods and classes: ExcludeFromCodeCoverage.

    While this attribute can be useful for classes that you cannot test, it can be used to inflate the Code Coverage percentage by applying it to classes and methods you don’t want to test (maybe because you are lazy?).

    We can, in fact, add that attribute to every single class like this:

    
    [ApiController]
    [Route("[controller]")]
    [ExcludeFromCodeCoverage]
    public class UniversalWeatherForecastController : ControllerBase
    {
        // omitted
    }
    
    [ExcludeFromCodeCoverage]
    public class WeatherService : IWeatherService
    {
        // omitted
    }
    
    [ExcludeFromCodeCoverage]
    public class WeatherForecastRepository : IWeatherForecastRepository
    {
        // omitted
    }
    

    You can then add the same attribute to all the other classes – even the Program class! – to reach 100% Code Coverage without writing lots of tests.

    100% Code Coverage, but without any test

    Note: to reach 100% I had to exclude everything but the tests on the Repository: otherwise, if I had exactly zero methods under test, the final Code Coverage would’ve been 0.

    Beyond Code Coverage: Effective Testing Strategies

    As we saw, high Code Coverage is not enough. It’s a good starting point, but it must not be the final goal.

    We can, indeed, focus our efforts in different areas:

    1. Test Quality: Prioritize writing meaningful tests over chasing high coverage. Focus on edge cases, boundary values, and scenarios that matter to users.
    2. Exploratory Testing: Manual testing complements automated tests. Exploratory testing uncovers issues that automated tests might miss.
    3. Mutation Testing: Instead of just measuring coverage, consider mutation testing. It introduces artificial defects and checks if tests catch them.

    Finally, my suggestion is to focus on integration tests rather than on unit tests: this testing strategy is called Testing Diamond.

    Further readings

    To generate Code Coverage reports, I used Coverlet, as I explained in this article (which refers to Visual Studio 2019, but the steps are still valid with newer versions).

    🔗 How to view Code Coverage with Coverlet and Visual Studio | Code4IT

    In my opinion, we should not focus all our efforts on Unit Tests. On the contrary, we should write more Integration Tests to ensure that the functionality, as a whole, works correctly.

    This way of defining tests is called Testing Diamond, and I explained it here:

    🔗 Testing Pyramid vs Testing Diamond (and how they affect Code Coverage)

    This article first appeared on Code4IT 🐧

    Finally, I talked about Code Coverage on YouTube as a guest on the VisualStudio Toolbox channel. Check it out here!

    https://www.youtube.com/watch?v=R80G3LJ6ZWc

    Wrapping up

    Code Coverage is a useful metric but should not be the end goal. Aim for a balance: maintain good coverage while ensuring effective testing. Remember that quality matters more than mere numbers. Happy testing! 🚀

    I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛

    Happy coding!

    🐧






• IEnumerable vs ICollection, and why it matters | Code4IT

    IEnumerable vs ICollection, and why it matters | Code4IT



    Defining the best return type is crucial to creating a shared library whose behaviour is totally under your control.

    You should give the consumers of your libraries just the right amount of freedom to integrate and use the classes and structures you have defined.

    That’s why it is important to know the differences between interfaces like IEnumerable<T> and ICollection<T>: these interfaces are often used together but have totally different meanings.

    IEnumerable: loop through the items in the collection

    Suppose that IAmazingInterface is an interface you expose so that clients can interact with it without knowing the internal behaviour.

    You have defined it this way:

    public interface IAmazingInterface
    {
        IEnumerable<int> GetNumbers(int[] numbers);
    }
    

    As you can see, the GetNumbers returns an IEnumerable<int>: this means that (unless they do some particular tricks like using reflection), clients will only be able to loop through the collection of items.

    Clients don’t know that, behind the scenes, AmazingClass uses a custom class MySpecificEnumberable.

    public class AmazingClass: IAmazingInterface
    {
        public IEnumerable<int> GetNumbers(int[] numbers)
            => new MySpecificEnumberable(numbers);
    }
    

    MySpecificEnumberable is a custom class whose purpose is to store the initial values in a sorted way. It implements IEnumerable<int>, so the only operations you have to support are the two implementations of GetEnumerator() – pay attention to the returned data type!

    public class MySpecificEnumberable : IEnumerable<int>
    {
        private readonly int[] _numbers;
    
        public MySpecificEnumberable(int[] numbers)
        {
            _numbers = numbers.OrderBy(_ => _).ToArray();
        }
    
        public IEnumerator<int> GetEnumerator()
        {
            foreach (var number in _numbers)
            {
                yield return number;
            }
        }
    
        IEnumerator IEnumerable.GetEnumerator()
            => _numbers.GetEnumerator();
    }
    

    Clients will then be able to loop all the items in the collection:

    IAmazingInterface something = new AmazingClass();
    var numbers = something.GetNumbers([1, 5, 6, 9, 8, 7, 3]);
    
    foreach (var number in numbers)
    {
        Console.WriteLine(number);
    }
    

    But you cannot add or remove items from it.

    ICollection: list, add, and remove items

    As we saw, IEnumerable<T> only allows you to loop through all the elements. However, you cannot add or remove items from an IEnumerable<T>.

    To do so, you need something that implements ICollection<T>, like the following class (I haven’t implemented any of these methods: I want you to focus on the operations provided, not on the implementation details).

    class MySpecificCollection : ICollection<int>
    {
        public int Count => throw new NotImplementedException();
    
        public bool IsReadOnly => throw new NotImplementedException();
    
        public void Add(int item) => throw new NotImplementedException();
    
        public void Clear() => throw new NotImplementedException();
    
        public bool Contains(int item) => throw new NotImplementedException();
    
        public void CopyTo(int[] array, int arrayIndex) => throw new NotImplementedException();
    
        public IEnumerator<int> GetEnumerator() => throw new NotImplementedException();
    
        public bool Remove(int item) => throw new NotImplementedException();
    
        IEnumerator IEnumerable.GetEnumerator() => throw new NotImplementedException();
    }
    

    ICollection<T> is a subtype of IEnumerable<T>, so everything we said before is still valid.

    However, having a class that implements ICollection<T> gives you full control over how items can be added or removed from the collection, allowing you to define custom behaviour. For instance, you can define that the Add method adds an integer only if it’s an odd number.
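
    As a quick sketch of that idea (my example, not the article’s), an ICollection<int> that only accepts odd numbers could delegate everything to an inner List<int> and add the custom rule in Add:

    using System.Collections;
    using System.Collections.Generic;
    
    public class OddNumbersCollection : ICollection<int>
    {
        private readonly List<int> _items = new();
    
        public int Count => _items.Count;
        public bool IsReadOnly => false;
    
        // Custom behaviour: even numbers are rejected (you could also choose to throw).
        public void Add(int item)
        {
            if (item % 2 != 0)
                _items.Add(item);
        }
    
        public void Clear() => _items.Clear();
        public bool Contains(int item) => _items.Contains(item);
        public void CopyTo(int[] array, int arrayIndex) => _items.CopyTo(array, arrayIndex);
        public bool Remove(int item) => _items.Remove(item);
        public IEnumerator<int> GetEnumerator() => _items.GetEnumerator();
        IEnumerator IEnumerable.GetEnumerator() => GetEnumerator();
    }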

    Why knowing the difference actually matters

    Classes and interfaces are meant to be used. If you are like me, you work on both the creation of the class and its consumption.

    So, if an interface must return a sequence of items, you most probably use the List shortcut: define the return type of the method as List<Item>, and then use it, regardless of whether the consumer just loops through it or adds items to the sequence.

    // in the interface
    public interface ISomething
    {
        List<Item> PerformSomething(int[] numbers);
    }
    
    
    // in the consumer class
    ISomething instance = //omitted
    List<Item> myItems = instance.PerformSomething([2, 3, 4, 5]);
    

    Everything works fine, but it works because we are in control of both the definition and the consumer.

    What if you have to expose the library to something outside your control?

    You have to consider two elements:

    • consumers should not be able to tamper with your internal implementation (for example, by adding items when they are not supposed to);
    • you should be able to change the internal implementation as you wish without breaking changes.

    So, if you want your users to just enumerate the items within a collection, you may start this way:

    // in the interface
    public interface ISomething
    {
        IEnumerable<Item> PerformSomething(int[] numbers);
    }
    
    // in the implementation
    
    IEnumerable<Item> PerformSomething(int[] numbers)
    {
        return numbers.Select(x => new Item(x)).ToList();
    }
    
    // in the consumer class
    
    ISomething instance = //omitted
    IEnumerable<Item> myItems = instance.PerformSomething([2, 3, 4, 5]);
    

    Then, when the time comes, you can change the internal implementation of PerformSomething with a more custom class:

    // custom IEnumerable definition
    public class MyCustomEnumberable : IEnumerable<Item> { /*omitted*/ }
    
    // in the interface
    IEnumerable<Item> PerformSomething(int[] numbers)
    {
        MyCustomEnumberable customEnumerable = new MyCustomEnumberable();
        customEnumerable.DoSomething(numbers);
        return customEnumerable;
    }
    

    And the consumer will not notice the difference. Again, unless they try to use tricks to tamper with your code!

    This article first appeared on Code4IT 🐧

    Wrapping up

    While understanding the differences between IEnumerable and ICollection is trivial, understanding why you should care about them is not.

    IEnumerable and ICollection hierarchy

    I hope this article helped you understand that, yes, you can take the easy way and return a List everywhere, but it’s a choice that you cannot always apply to a project, and one that will probably make breaking changes more frequent in the long run.

    I hope you enjoyed this article! Let’s keep in touch on LinkedIn or Twitter! 🤜🤛

    Happy coding!

    🐧






• How to create Custom Attributes, and why they are useful | Code4IT

    How to create Custom Attributes, and why they are useful | Code4IT



    In C#, attributes are used to describe the meaning of some elements, such as classes, methods, and interfaces.

    I’m sure you’ve already used them before. Examples are:

    • the [Required] attribute when you define the properties of a model to be validated;
    • the [Test] attribute when creating Unit Tests using NUnit;
    • the [Get] and the [FromBody] attributes used to define API endpoints.

    As you can see, all the attributes do not specify the behaviour, but rather, they express the meaning of a specific element.

    In this article, we will learn how to create custom attributes in C# and some possible interesting usages of such custom attributes.

    Create a custom attribute by inheriting from System.Attribute

    Creating a custom attribute is pretty straightforward: you just need to create a class that inherits from System.Attribute.

    [AttributeUsage(AttributeTargets.Interface | AttributeTargets.Class | AttributeTargets.Method, AllowMultiple = true)]
    public class ApplicationModuleAttribute : Attribute
    {
        public Module BelongingModule { get; }
    
        public ApplicationModuleAttribute(Module belongingModule)
        {
            BelongingModule = belongingModule;
        }
    }
    
    public enum Module
    {
        Authentication,
        Catalogue,
        Cart,
        Payment
    }
    

    Ideally, the class name should end with the suffix -Attribute: in this way, you can apply the attribute using the short form [ApplicationModule] rather than the whole class name, like [ApplicationModuleAttribute]. In fact, C# attributes can be resolved by convention.

    Depending on the expected usage, a custom attribute can have one or more constructors and can expose one or more properties. In this example, I created a constructor that accepts an enum.
    I can then use this attribute by calling [ApplicationModule(Module.Cart)].

    Define where a Custom Attribute can be applied

    Have a look at the attribute applied to the class definition:

    [AttributeUsage(AttributeTargets.Interface | AttributeTargets.Class | AttributeTargets.Method, AllowMultiple = true)]
    

    This attribute tells us that the ApplicationModule can be applied to interfaces, classes, and methods.

    System.AttributeTargets is an enum that lists all the targets an attribute can be attached to. The AttributeTargets enum is defined as:

    [Flags]
    public enum AttributeTargets
    {
        Assembly = 1,
        Module = 2,
        Class = 4,
        Struct = 8,
        Enum = 16,
        Constructor = 32,
        Method = 64,
        Property = 128,
        Field = 256,
        Event = 512,
        Interface = 1024,
        Parameter = 2048,
        Delegate = 4096,
        ReturnValue = 8192,
        GenericParameter = 16384,
        All = 32767
    }
    

    Have you noticed it? It’s actually a Flagged enum, whose values are powers of 2: this trick allows us to join two or more values using the OR operator.

    There’s another property to notice: AllowMultiple. When set to true, this property tells us that it’s possible to apply more than one attribute of the same type to the same element, like this:

    [ApplicationModule(Module.Cart)]
    [ApplicationModule(Module.Catalogue)]
    public class ItemDetailsService { }
    

    Or, if you want, you can inline them:

    [ApplicationModule(Module.Cart), ApplicationModule(Module.Catalogue)]
    public class ItemDetailsService { }
    

    Practical usage of Custom Attributes

    You can use custom attributes to declare which components or business areas an element belongs to.

    In the previous example, I defined an enum that enlists all the business modules supported by my application:

    public enum Module
    {
        Authentication,
        Catalogue,
        Cart,
        Payment
    }
    

    This way, whenever I define an interface, I can explicitly tell which components it belongs to:

    [ApplicationModule(Module.Catalogue)]
    public interface IItemDetails
    {
        [ApplicationModule(Module.Catalogue)]
        string ShowItemDetails(string itemId);
    }
    
    [ApplicationModule(Module.Cart)]
    public interface IItemDiscounts
    {
        [ApplicationModule(Module.Cart)]
        bool CanHaveDiscounts(string itemId);
    }
    

    Not only that: I can have one single class implement both interfaces and mark it as related to both the Catalogue and the Cart areas.

    [ApplicationModule(Module.Cart)]
    [ApplicationModule(Module.Catalogue)]
    public class ItemDetailsService : IItemDetails, IItemDiscounts
    {
        [ApplicationModule(Module.Catalogue)]
        public string ShowItemDetails(string itemId) => throw new NotImplementedException();
    
        [ApplicationModule(Module.Cart)]
        public bool CanHaveDiscounts(string itemId) => throw new NotImplementedException();
    }
    

    Notice that I also explicitly enriched the two inner methods with the related attribute – even if it’s not necessary.

    Further readings

    As you noticed, the AttributeTargets is a Flagged Enum. Don’t you know what they are and how to define them? I’ve got you covered! I wrote two articles about Enums, and you can find info about Flagged Enums in both articles:

    🔗 5 things you should know about enums in C# | Code4IT

    and
    🔗 5 more things you should know about enums in C# | Code4IT

    This article first appeared on Code4IT 🐧

    There are some famous but not-so-obvious examples of attributes that you should know: DebuggerDisplay and InternalsVisibleTo.

    DebuggerDisplay can be useful for improving your debugging sessions.

    🔗 Simplify debugging with DebuggerDisplay attribute dotNET | Code4IT

    InternalsVisibleTo can be used to give external projects access to internal classes; for example, you can use that attribute when writing unit tests.

    🔗 Testing internal members with InternalsVisibleTo | Code4IT

    Wrapping up

    In this article, I showed you how to create custom attributes in C# to specify which modules a class or a method belongs to. This trick can be useful if you want to speed up the analysis of your repository: if you need to retrieve all the classes that are used for the Cart module (for example, because you want to move them to an external library), you can just search for Module.Cart across the repository and have a full list of elements.

    In particular, this approach can be useful for the exposed components, such as API controllers. Knowing that two or more modules use the same Controller can help you understand if a change in the API structure is necessary.

    Another good usage of this attribute is automatic documentation: you could create a tool that automatically lists all the interfaces, API endpoints, and classes grouped by the belonging module. The possibilities are infinite!
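
    As a rough sketch of such a tool (assuming the ApplicationModuleAttribute defined above; the reporting logic is my own illustration, not from the article), you could scan an assembly via reflection and group the decorated types by module:

    using System;
    using System.Linq;
    using System.Reflection;
    
    public static class ModuleReport
    {
        // Prints every type in the assembly grouped by the Module values
        // declared through [ApplicationModule(...)] attributes.
        public static void PrintTypesByModule(Assembly assembly)
        {
            var entries = assembly.GetTypes()
                .SelectMany(type => type
                    .GetCustomAttributes<ApplicationModuleAttribute>()
                    .Select(attr => (Module: attr.BelongingModule, Type: type)));
    
            foreach (var group in entries.GroupBy(e => e.Module))
            {
                Console.WriteLine($"Module: {group.Key}");
                foreach (var entry in group)
                    Console.WriteLine($" - {entry.Type.FullName}");
            }
        }
    }
    
    // Usage, e.g. from a small console tool:
    // ModuleReport.PrintTypesByModule(typeof(ItemDetailsService).Assembly);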

    I hope you enjoyed this article! Let’s keep in touch on LinkedIn, Twitter or BlueSky! 🤜🤛

    Happy coding!

    🐧




