As organizations continue to embrace digital transformation, how we think about personal data has changed fundamentally. Data is no longer just a by-product of business processes; it is often the product itself. This shift brings a pressing responsibility: privacy cannot be treated as an after-the-fact fix. It must be part of the architecture from the outset.
This is the thinking behind Privacy by Design, a concept gaining renewed attention not just because regulators endorse it, but also because it is increasingly seen as a marker of digital maturity.
So, what is Privacy by Design?
At a basic level, Privacy by Design (often abbreviated as PbD) means designing systems, products, and processes with privacy built into them from the start. It’s not a tool or a checklist; it’s a way of thinking.
Rather than waiting until the end of the development cycle to address privacy risks, teams proactively factor privacy into the design, architecture, and decision-making stages. This means asking the right questions early:
Do we need to collect this data?
How will it be stored, shared, and eventually deleted?
Are there less invasive ways to achieve the same business goal?
This mindset goes beyond technology. It is as much about product strategy and organizational alignment as it is about encryption or access controls.
Why It’s Becoming Non-Negotiable
The global regulatory environment is a key driver here. GDPR, for instance, formalized this approach in Article 25, which explicitly calls for “data protection by design and by default.” However, the need for privacy by design is not just about staying compliant.
Customers today are more aware than ever of how their data is used. Organizations that respect that reality – minimizing collection, improving transparency, and offering control – tend to earn more trust. And in a landscape where trust is hard to gain and easy to lose, that’s a competitive advantage.
Moreover, designing with privacy in mind from an engineering perspective reduces technical debt. Fixing privacy issues after launch usually means expensive rework and rushed patches. Building it right from day one leads to better outcomes.
Turning Principles into Practice
For many teams, the challenge is not agreeing with the idea but knowing how to apply it. Here’s what implementation often looks like in practice:
Product & Engineering Collaboration
Product teams define what data is needed and why. Engineering teams determine how it’s collected, stored, and protected. Early conversations between both help identify red flags and trade-offs before anything goes live.
Embedding Privacy into Architecture
This includes designing data flows with built-in limitations, such as separating identifiers from the rest of the record, encrypting sensitive attributes at rest, and enforcing role-based access to personal data. These aren’t just compliance tasks; they are sound design practices that also improve your security posture.
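As a rough illustration (the class and property names are hypothetical, and a real system would source keys from a key management service rather than code), separating identifiers and encrypting sensitive attributes can look like this at the data-model level:

using System;
using System.Security.Cryptography;
using System.Text;

// Direct identifiers live in a separate, tightly access-controlled store...
public record CustomerIdentity(Guid PseudonymId, string FullName, string Email);

// ...while operational data only carries the pseudonym
// and keeps sensitive attributes encrypted at rest.
public record CustomerProfile(Guid PseudonymId, byte[] EncryptedDateOfBirth);

public static class SensitiveData
{
    // The key must be a valid AES key (16, 24, or 32 bytes), ideally fetched from a KMS/HSM.
    public static byte[] Encrypt(string plaintext, byte[] key)
    {
        using var aes = Aes.Create();
        aes.Key = key;
        aes.GenerateIV();

        using var encryptor = aes.CreateEncryptor();
        byte[] plainBytes = Encoding.UTF8.GetBytes(plaintext);
        byte[] cipher = encryptor.TransformFinalBlock(plainBytes, 0, plainBytes.Length);

        // Prepend the IV so the value can be decrypted later.
        byte[] result = new byte[aes.IV.Length + cipher.Length];
        Buffer.BlockCopy(aes.IV, 0, result, 0, aes.IV.Length);
        Buffer.BlockCopy(cipher, 0, result, aes.IV.Length, cipher.Length);
        return result;
    }
}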
Privacy as a Default Setting
Instead of asking users to configure privacy settings after onboarding, PbD insists on secure defaults. If a feature collects data, users should have to opt in, not find a buried toggle to opt out.
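In code, this can be as simple as making the data-collecting behaviour off by default, so that only an explicit user action enables it (the setting name below is hypothetical):

public class AnalyticsSettings
{
    // Privacy by default: usage tracking stays off until the user explicitly opts in.
    public bool UsageTrackingEnabled { get; set; } = false;
}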
Periodic Reviews, Not Just One-Time Checks
Privacy by Design isn’t a one-and-done activity. As systems evolve and new features roll out, periodic reviews help ensure that decisions made early on still hold up in practice.
Cross-Functional Awareness
Not every developer needs to be a privacy expert, but everyone in the development lifecycle—from analysts to QA—should be familiar with core privacy principles. A shared vocabulary goes a long way toward spotting and resolving issues early.
Going Beyond Compliance
A common mistake is to treat Privacy by Design as a box to tick. However, the organizations that do it well tend to treat it differently.
They don’t ask, “What’s the minimum we need to do to comply?” Instead, they ask, “How do we build responsibly?”
They don’t design features and then layer privacy on top. They design privacy into the feature.
They don’t stop at policies. They create workflows and tooling that enforce those policies consistently.
This mindset fosters resilience, reduces risk, and, over time, becomes part of the organization’s culture. Product ideas are then evaluated not only for feasibility and market fit but also for ethical and privacy alignment.
Final Thoughts
Privacy by Design is about intent. When teams build with privacy in mind, they send a message that the organization values the people behind the data.
This approach is very much expected in an era where privacy concerns are at the centre of digital discourse. For those leading security, compliance, or product teams, the real opportunity lies in making privacy not just a requirement but a differentiator.
Seqrite brings Privacy by Design to life with automated tools for data discovery, classification, and protection—right from the start. Our solutions embed privacy into every layer of your IT infrastructure, ensuring compliance and building trust. Explore how Seqrite can simplify your privacy journey.
Average teams aim at 100% Code Coverage just to reach the number. Great teams don’t. Why?
Code Coverage is a valuable metric in software development, especially when it comes to testing. It provides insights into how much of your codebase is exercised by your test suite.
However, we must recognize that Code Coverage alone should not be the ultimate goal of your testing strategy. It has some known limitations, and 100% Code Coverage does not guarantee that your code is bug-free.
In this article, we’ll explore why Code Coverage matters, its limitations, and how to balance high coverage with effective testing. We’ll use C# to demonstrate when Code Coverage works well and how you can cheat on the result.
What Is Code Coverage?
Code Coverage measures the percentage of code lines, branches, or statements executed during testing. It helps answer questions like:
How much of my code is tested?
Are there any untested paths or dead code?
Which parts of the application need additional test coverage?
In C#, tools like Coverlet (which can produce Cobertura-format reports), dotCover, and Visual Studio’s built-in coverage analysis provide Code Coverage reports.
You may be tempted to think that the higher the coverage, the better the quality of your tests. However, we will soon demonstrate why this assumption is misleading.
Why Code Coverage Matters
Clearly, if you write valuable tests, Code Coverage is a great ally.
A high value of Code Coverage helps you with:
Risk mitigation: High Code Coverage reduces the risk of undiscovered defects. If a piece of code isn’t covered by any test, defects in it are more likely to go unnoticed.
Preventing regressions: code is destined to evolve over time. If you ensure that most of your code is covered by tests, whenever you add more code you will discover which parts of the existing system are impacted by your changes. If you update the production code and no test fails, it might be a bad sign: you probably need to cover the code you are modifying with enough tests.
Quality assurance: Code Coverage ensures that critical parts of your application are tested thoroughly. Good tests focus on the functional aspects of the code (what) rather than on the technical aspects (how). A good test suite is a safety net against regressions.
Guidance for Testing Efforts: Code Coverage highlights areas that need more attention. It guides developers in writing additional tests where necessary.
The Limitations of Code Coverage
While Code Coverage is valuable, it has limitations:
False Sense of Security: Achieving 100% coverage doesn’t guarantee bug-free software. It’s possible to have well-covered code that still contains subtle defects. This is especially true when mocking dependencies.
Focus on Lines, Not Behavior: Code Coverage doesn’t consider the quality of tests. It doesn’t guarantee that the tests cover all possible scenarios.
Ignored Edge Cases: Some code paths (exception handling, rare conditions) are complex to cover. High coverage doesn’t necessarily mean thorough testing.
3 Practical reasons why Code Coverage percentage can be misleading
For the sake of this article, I’ve created a dummy .NET API project with the typical three layers: controller, service, and repository.
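For illustration, a low-value test of roughly this shape is enough to bump the number: it instantiates the repository and exercises a method without really verifying any behaviour (the method name and the assertion are just examples):

using NUnit.Framework;

[Test]
public void GetForecasts_DoesNotThrow()
{
    // A test like this executes the repository code (raising coverage)
    // without verifying any meaningful behaviour.
    var repository = new WeatherForecastRepository();

    var forecasts = repository.GetForecasts();

    Assert.That(forecasts, Is.Not.Null);
}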
Here we are: we have reached 53% of total Code Coverage by adding one single test, which does not provide any value!
As you can see, in fact, the WeatherForecastRepository has now reached 100% Code Coverage.
Great job! Or is it?
You can cheat by excluding parts of the code
In C# there is a handy attribute that you can apply to methods and classes: ExcludeFromCodeCoverage.
While this attribute can be useful for classes that you cannot test, it can be used to inflate the Code Coverage percentage by applying it to classes and methods you don’t want to test (maybe because you are lazy?).
We can, in fact, add that attribute to every single class like this:
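(The class name below is just an example from the demo project; the attribute lives in the System.Diagnostics.CodeAnalysis namespace.)

using System.Diagnostics.CodeAnalysis;

[ExcludeFromCodeCoverage]
public class WeatherForecastService
{
    // None of the lines in this class are counted in the coverage report.
    public int GetTemperature() => 42;
}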
You can then add the same attribute to all the other classes – even the Program class! – to reach 100% Code Coverage without writing lots of tests.
Note: to reach 100%, I had to exclude everything except the Repository covered by the tests: otherwise, with exactly zero methods under test, the final Code Coverage would’ve been 0.
As we saw, high Code Coverage is not enough. It’s a good starting point, but it must not be the final goal.
We can, indeed, focus our efforts on different areas:
Test Quality: Prioritize writing meaningful tests over chasing high coverage. Focus on edge cases, boundary values, and scenarios that matter to users.
Mutation Testing: Instead of just measuring coverage, consider mutation testing. It introduces artificial defects into the production code and checks whether your tests catch them (see the small example after this list).
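To give a rough idea of what a mutation-testing tool does automatically, here is a hand-made example (the method and test names are invented for the purpose):

using NUnit.Framework;

// Production code under test.
public static class AgeChecker
{
    public static bool IsAdult(int age) => age >= 18;
}

// A typical mutation replaces ">=" with ">".
// If this boundary test exists, the mutant is "killed" (the test fails);
// if it doesn't, high line coverage alone would never reveal the gap.
[Test]
public void IsAdult_ReturnsTrue_ForExactlyEighteen()
{
    Assert.That(AgeChecker.IsAdult(18), Is.True);
}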
Finally, my suggestion is to focus on integration tests rather than on unit tests: this testing strategy is called Testing Diamond.
Further readings
To generate Code Coverage reports, I used Coverlet, as I explained in this article (which refers to Visual Studio 2019, but the steps are still valid with newer versions).
In my opinion, we should not focus all our efforts on Unit Tests. On the contrary, we should write more Integration Tests to ensure that the functionality, as a whole, works correctly.
This way of defining tests is called Testing Diamond, and I explained it here:
Code Coverage is a useful metric but should not be the end goal. Aim for a balance: maintain good coverage while ensuring effective testing. Remember that quality matters more than mere numbers. Happy testing! 🚀
I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛
Defining the best return type is crucial to creating a shared library whose behaviour is totally under your control.
You should give the consumers of your libraries just the right amount of freedom to integrate and use the classes and structures you have defined.
That’s why it is important to know the differences between interfaces like IEnumerable<T> and ICollection<T>: these interfaces are often used together but have totally different meanings.
IEnumerable: loop through the items in the collection
Suppose that IAmazingInterface is an interface you expose so that clients can interact with it without knowing the internal behaviour.
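For reference, a minimal definition of such an interface could look like this:

public interface IAmazingInterface
{
    IEnumerable<int> GetNumbers(int[] numbers);
}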
As you can see, the GetNumbers method returns an IEnumerable<int>: this means that (unless they resort to particular tricks like reflection) clients will only be able to loop through the collection of items.
Clients don’t know that, behind the scenes, AmazingClass uses a custom class MySpecificEnumberable.
public class AmazingClass : IAmazingInterface
{
    public IEnumerable<int> GetNumbers(int[] numbers)
        => new MySpecificEnumberable(numbers);
}
MySpecificEnumberable is a custom class whose purpose is to store the initial values in a sorted way. It implements IEnumerable<int>, so the only operations you have to support are the two implementations of GetEnumerator() – pay attention to the returned data type!
public class MySpecificEnumberable : IEnumerable<int>
{
    private readonly int[] _numbers;

    public MySpecificEnumberable(int[] numbers)
    {
        _numbers = numbers.OrderBy(_ => _).ToArray();
    }

    public IEnumerator<int> GetEnumerator()
    {
        foreach (var number in _numbers)
        {
            yield return number;
        }
    }

    IEnumerator IEnumerable.GetEnumerator()
        => _numbers.GetEnumerator();
}
Clients will then be able to loop through all the items in the collection:
IAmazingInterface something = new AmazingClass();
var numbers = something.GetNumbers([1, 5, 6, 9, 8, 7, 3]);
foreach (var number in numbers)
{
Console.WriteLine(number);
}
But you cannot add or remove items from it.
ICollection: list, add, and remove items
As we saw, IEnumerable<T> only allows you to loop through all the elements. However, you cannot add or remove items from an IEnumerable<T>.
To do so, you need something that implements ICollection<T>, like the following class (I haven’t implemented any of these methods: I want you to focus on the operations provided, not on the implementation details).
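A bare skeleton with the members that ICollection<T> requires could look like this (the class name is just an example):

public class MyCustomCollection : ICollection<int>
{
    public int Count => throw new NotImplementedException();
    public bool IsReadOnly => throw new NotImplementedException();

    public void Add(int item) => throw new NotImplementedException();
    public void Clear() => throw new NotImplementedException();
    public bool Contains(int item) => throw new NotImplementedException();
    public void CopyTo(int[] array, int arrayIndex) => throw new NotImplementedException();
    public bool Remove(int item) => throw new NotImplementedException();

    public IEnumerator<int> GetEnumerator() => throw new NotImplementedException();
    IEnumerator IEnumerable.GetEnumerator() => throw new NotImplementedException();
}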
ICollection<T> is a subtype of IEnumerable<T>, so everything we said before is still valid.
However, having a class that implements ICollection<T> gives you full control over how items can be added or removed from the collection, allowing you to define custom behaviour. For instance, you can define that the Add method adds an integer only if it’s an odd number.
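As a sketch (building on the hypothetical class above), the Add method could accept only odd numbers and silently discard the rest:

private readonly List<int> _items = new();

public void Add(int item)
{
    // Custom rule: only odd numbers are accepted.
    if (item % 2 != 0)
    {
        _items.Add(item);
    }
}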
Why knowing the difference actually matters
Classes and interfaces are meant to be used. If you are like me, you work on both the creation of the class and its consumption.
So, if an interface must return a sequence of items, you most probably take the List shortcut: define the return type of the method as List<Item> and then use it, regardless of whether the consumer only loops through it or also adds items to the sequence.
// in the interface
public interface ISomething
{
    List<Item> PerformSomething(int[] numbers);
}

// in the consumer class
ISomething instance = // omitted
List<Item> myItems = instance.PerformSomething([2, 3, 4, 5]);
Everything works fine, but it works because we are in control of both the definition and the consumer.
What if you have to expose the library to something outside your control?
You have to consider two elements:
consumers should not be able to tamper with your internal implementation (for example, by adding items when they are not supposed to);
you should be able to change the internal implementation as you wish without breaking changes.
So, if you want your users to just enumerate the items within a collection, you may start this way:
// in the interface
public interface ISomething
{
    IEnumerable<Item> PerformSomething(int[] numbers);
}

// in the implementation
IEnumerable<Item> PerformSomething(int[] numbers)
{
    return numbers.Select(x => new Item(x)).ToList();
}

// in the consumer class
ISomething instance = // omitted
IEnumerable<Item> myItems = instance.PerformSomething([2, 3, 4, 5]);
Then, when the time comes, you can change the internal implementation of PerformSomething with a more custom class:
// custom IEnumerable definition
public class MyCustomEnumberable : IEnumerable<Item> { /* omitted */ }

// in the implementation
IEnumerable<Item> PerformSomething(int[] numbers)
{
    MyCustomEnumberable customEnumerable = new MyCustomEnumberable();
    customEnumerable.DoSomething(numbers);
    return customEnumerable;
}
And the consumer will not notice the difference. Again, unless they try to use tricks to tamper with your code!
While understanding the differences between IEnumerable and ICollection is trivial, understanding why you should care about them is not.
I hope this article helped you understand that yes, you can take the easy way and return a List everywhere, but it’s a choice that you cannot always apply to a project, and one that will probably make breaking changes more frequent in the long run.
I hope you enjoyed this article! Let’s keep in touch on LinkedIn or Twitter! 🤜🤛
In C#, attributes are used to describe the meaning of some elements, such as classes, methods, and interfaces.
I’m sure you’ve already used them before. Examples are:
the [Required] attribute when you define the properties of a model to be validated;
the [Test] attribute when creating Unit Tests using NUnit;
the [HttpGet] and the [FromBody] attributes used to define API endpoints.
As you can see, these attributes do not specify behaviour; rather, they express the meaning of a specific element.
In this article, we will learn how to create custom attributes in C# and some possible interesting usages of such custom attributes.
Create a custom attribute by inheriting from System.Attribute
Creating a custom attribute is pretty straightforward: you just need to create a class that inherits from System.Attribute.
Ideally, the class name should end with the suffix -Attribute: in this way, you can use the attribute using the short form [ApplicationModule] rather than using the whole class name, like [ApplicationModuleAttribute]. In fact, C# attributes can be resolved by convention.
Depending on the expected usage, a custom attribute can have one or more constructors and can expose one or more properties. In this example, I created a constructor that accepts an enum.
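A minimal version of such an attribute could look like the following (the enum values other than Cart, and the property name, are just examples):

using System;

public enum Module
{
    Cart,
    Orders,
    Users
}

public class ApplicationModuleAttribute : Attribute
{
    public Module BelongingModule { get; }

    public ApplicationModuleAttribute(Module belongingModule)
    {
        BelongingModule = belongingModule;
    }
}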
I can then use this attribute by calling [ApplicationModule(Module.Cart)].
Define where a Custom Attribute can be applied
Have a look at the attribute applied to the class definition:
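That attribute is AttributeUsage; the specific combination of targets shown here is just an example:

[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method | AttributeTargets.Interface, AllowMultiple = true)]
public class ApplicationModuleAttribute : Attribute
{
    // constructor and properties as defined above
}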
Have you noticed it? It’s actually a Flagged enum, whose values are powers of 2: this trick allows us to join two or more values using the OR operator.
There’s another property to notice: AllowMultiple. When set to true, this property tells us that it’s possible to apply more than one attribute of the same type to the same element, like this:
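For instance (the class, methods, and the second module value are invented for the example):

[ApplicationModule(Module.Cart)]
[ApplicationModule(Module.Orders)]
public class CheckoutService
{
    [ApplicationModule(Module.Cart)]
    public void AddItemToCart(int itemId) { /* ... */ }

    [ApplicationModule(Module.Orders)]
    public void PlaceOrder(int cartId) { /* ... */ }
}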
Notice that I also explicitly enriched the two inner methods with the related attribute – even if it’s not necessary.
Further readings
As you noticed, AttributeTargets is a Flagged Enum. Don’t know what Flagged Enums are and how to define them? I’ve got you covered! I wrote two articles about Enums, and you can find info about Flagged Enums in both:
InternalsVisibleTo can be used to give external projects access to internal classes: for example, you can use that attribute when writing unit tests.
In this article, I showed you how to create custom attributes in C# to specify which modules a class or a method belongs to. This trick can be useful if you want to speed up the analysis of your repository: if you need to retrieve all the classes that are used for the Cart module (for example, because you want to move them to an external library), you can just search for Module.Cart across the repository and have a full list of elements.
In particular, this approach can be useful for the exposed components, such as API controllers. Knowing that two or more modules use the same Controller can help you understand if a change in the API structure is necessary.
Another good usage of this attribute is automatic documentation: you could create a tool that automatically enlists all the interfaces, API endpoints, and classes grouped by the belonging module. The possibilities are infinite!
I hope you enjoyed this article! Let’s keep in touch on LinkedIn, Twitter or BlueSky! 🤜🤛