In a previous article, we delved into the creation of realistic data using Bogus, an open-source library that allows you to generate data with plausible values.
Bogus contains several properties and methods that generate realistic data such as names, addresses, birthdays, and so on.
In this article, we will learn two ways to generate data with Bogus. Both produce the same result; they differ mainly in reusability and modularity. In my opinion, it’s mostly a matter of preference: neither approach is absolutely better than the other, although each one fits specific cases better.
For the sake of this article, we are going to use Bogus to generate instances of the Book class, defined like this:
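Here is a minimal sketch of it (the PublicationDate name is just a placeholder for the DateOnly property targeted by one of the rules below, and Genre is an enum with the possible book genres):

public class Book
{
    public Guid Id { get; set; }
    public string Title { get; set; }
    public Genre[] Genres { get; set; }
    public string AuthorFirstName { get; set; }
    public string AuthorLastName { get; set; }
    public int PagesCount { get; set; }

    // Placeholder name: any DateOnly property is covered by the RuleForType rule below
    public DateOnly PublicationDate { get; set; }
}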
It is possible to create a specific object that, using a Builder approach, allows you to generate one or more items of a specified type.
It all starts with the Faker<T> generic type, where T is the type you want to generate.
Once you create it, you can define the rules to be used when initializing the properties of a Book by using methods such as RuleFor and RuleForType.
public static class BogusBookGenerator
{
    public static Faker<Book> CreateFaker()
    {
        Faker<Book> bookFaker = new Faker<Book>()
            .RuleFor(b => b.Id, f => f.Random.Guid())
            .RuleFor(b => b.Title, f => f.Lorem.Text())
            .RuleFor(b => b.Genres, f => f.Random.EnumValues<Genre>())
            .RuleFor(b => b.AuthorFirstName, f => f.Person.FirstName)
            .RuleFor(b => b.AuthorLastName, f => f.Person.LastName)
            .RuleFor(nameof(Book.PagesCount), f => f.Random.Number(100, 800))
            .RuleForType(typeof(DateOnly), f => f.Date.PastDateOnly());

        return bookFaker;
    }
}
In this way, thanks to the static method, you can simply create a new instance of Faker<Book>, ask it to generate one or more books, and enjoy the result:
Faker<Book> generator = BogusBookGenerator.CreateFaker();
var books = generator.Generate(10);
Clearly, it’s not necessary for the class to be marked as static: it all depends on what you need to achieve!
Expose a subtype of Faker, specific for the data type to be generated
If you don’t want to use a method (static or not static, it doesn’t matter), you can define a subtype of Faker<Book> whose customization rules are all defined in the constructor.
public class BookGenerator : Faker<Book>
{
    public BookGenerator()
    {
        RuleFor(b => b.Id, f => f.Random.Guid());
        RuleFor(b => b.Title, f => f.Lorem.Text());
        RuleFor(b => b.Genres, f => f.Random.EnumValues<Genre>());
        RuleFor(b => b.AuthorFirstName, f => f.Person.FirstName);
        RuleFor(b => b.AuthorLastName, f => f.Person.LastName);
        RuleFor(nameof(Book.PagesCount), f => f.Random.Number(100, 800));
        RuleForType(typeof(DateOnly), f => f.Date.PastDateOnly());
    }
}
This way, you can simply create a new instance of BookGenerator and, again, call the Generate method to create new book instances.
var generator = new BookGenerator();
var books = generator.Generate(10);
Method vs Subclass: When should we use which?
As we saw, both approaches bring the same result, and their usage is almost identical.
So, which way should I use?
Use the method approach (the first one) when you need:
Simplicity: If you need to generate fake data quickly and your rules are straightforward, using a method is the easiest approach.
Ad-hoc Data Generation: Ideal for one-off or simple scenarios where you don’t need to reuse the same rules across your application.
Or use the subclass (the second approach) when you need:
Reusability: If you need to generate the same type of fake data in multiple places, defining a subclass allows you to encapsulate the rules and reuse them easily.
Complex scenarios and extensibility: Better suited for more complex data generation scenarios where you might have many rules or need to extend the functionality.
Maintainability: Easier to maintain and update the rules in one place.
Further readings
If you want to learn a bit more about Bogus and use it to populate data used by Entity Framework, I recently published an article about this topic:
I think Bogus is one of the best libraries in the .NET universe, as having realistic data can help you improve the intelligibility of the test cases you generate. Also, Bogus can be a great tool when you want to showcase demo values without accessing real data.
I hope you enjoyed this article! Let’s keep in touch on LinkedIn, Twitter or BlueSky! 🤜🤛
Non-functional requirements matter, but we often forget to validate them. You can measure them by setting up Fitness Functions.
Just creating an architecture is not enough; you should also make sure that the pieces you are building are, in the end, the ones your system actually needs.
Is your system fast enough? Is it passing all the security checks? What about testability, maintainability, and other -ilities?
Fitness Functions are components of the architecture that do not execute functional operations, but, using a set of tests and measurements, allow you to validate that the system respects all the non-functional requirements defined upfront.
Fitness Functions: because non-functional requirements matter
An architecture is made of two main categories of requirements: functional requirements and non-functional requirements.
Functional requirements are the easiest to define and to test: if one of the requirements is “a user with role Admin must be able to see all data”, then writing a suite of tests for this specific requirement is pretty straightforward.
Non-functional requirements are, for sure, as important as functional requirements, but they are often overlooked or not detailed. “The system must be fast”: ok, how fast? What do you mean by “fast”? What is an acceptable value of “fast”?
If we don’t have a clear understanding of non-functional requirements, then it’s impossible to measure them.
And once we have defined a way to measure them, how can we ensure that we are meeting our expectations? Here’s where Fitness Functions come in handy.
In fact, Fitness Functions are specific components that focus on non-functional requirements, executing some calculations and providing metrics that help architects and developers ensure that the system’s architecture aligns with business goals, technical requirements, and other quality attributes.
Why Fitness Functions are crucial for future-proof architectures
When creating an architecture, you must think of the most important -ilities for that specific case. How can you ensure that the technical choices you made meet the expectations?
By being related to specific and measurable metrics, Fitness Functions provide a way to assess the architecture’s quality and performance, reducing the reliance on subjective opinions by using objective measurements. A metric can be a simple number (e.g., “maximum number of requests per second”), a percentage value (like “percentage of code covered by tests”) or other values that are still measurable.
Knowing how the system behaves in regards to these measures allows architects to work on the continuous improvement of the system: teams can identify areas for improvement and make decisions based not on personal opinion but on actual data to enhance the system.
Having a centralized place to view the historical values of a measure helps you understand whether you have made progress or whether, as time goes by, the quality has degraded.
Still talking about the historical values of the measures, having a clear understanding of the current status of such metrics can help in identifying potential issues early in the development process, allowing teams to address them before they become critical problems.
For example, by using Fitness Functions, you can ensure that the system is able to handle a certain amount of users per second: having proper measurements, you can identify which functionalities are less performant and, in case of high traffic, may bring the whole system down.
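To make this concrete, a Fitness Function can be as simple as an automated test that measures a metric and fails when a threshold is exceeded. Here’s a minimal sketch using NUnit; the endpoint URL and the 500 ms threshold are placeholders to be replaced with your own non-functional requirement:

using System.Diagnostics;
using System.Net.Http;
using System.Threading.Tasks;
using NUnit.Framework;

public class PerformanceFitnessFunctions
{
    [Test]
    public async Task Products_endpoint_should_respond_within_threshold()
    {
        using var client = new HttpClient();
        var stopwatch = Stopwatch.StartNew();

        // Placeholder endpoint: point it to the functionality you want to keep under control
        var response = await client.GetAsync("https://localhost:5001/api/products");

        stopwatch.Stop();

        // The actual fitness function: the check fails if the endpoint is broken or too slow
        Assert.That(response.IsSuccessStatusCode, Is.True);
        Assert.That(stopwatch.ElapsedMilliseconds, Is.LessThan(500));
    }
}

Run in a CI/CD pipeline, a test like this turns “the system must be fast” into an objective, repeatable check.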
You are already using Fitness Functions, but you didn’t know
Fitness Functions sound like complex things to handle.
Even though you can create your own functions, most probably you are already using them without knowing it. Lots of tools are available out there that cover several metrics, and I’m sure you’ve already used some of them (or, at least, you’ve already heard of them).
Tools like SonarQube and NDepend use Fitness Functions to evaluate code quality based on metrics such as code complexity, duplication, and adherence to coding standards. Those metrics are calculated based on static analysis of the code, and teams can define thresholds under which a system can be at risk of losing maintainability. An example of metric related to code quality is Code Coverage: the higher, the better (even though 100% of code coverage does not guarantee your code is healthy).
Tools like JMeter or K6 help you measure system performance under various conditions: having a history of load testing results can help ensure that, as you add new functionalities to the system, the performance on some specific modules does not downgrade.
All in all, most of the Fitness Functions can be set to be part of CI/CD pipelines: for example, you can configure a CD pipeline to block the deployment of the code on a specific system if the load testing results of the new code are worse than the previous version. Or you could block a Pull Request if the code coverage percentage is getting lower.
Further readings
A good way to start experimenting with Load Testing is by running them locally. A nice open-source project is K6: you can install it on your local machine, define the load phases, and analyze the final result.
But, even if you don’t really care about load testing (maybe because your system is not expected to handle lots of users), I’m sure you still care about code quality and tests. When using .NET, you can collect code coverage reports using Cobertura. Then, if you are using Azure DevOps, you may want to stop a Pull Request if the code coverage percentage has decreased.
Sometimes, there are things that we use every day, but we don’t know how to name them: Fitness Functions are one of them – and they are the foundation of future-proof software systems.
You can create your own Fitness Functions based on whatever you can (and need to) measure: from average page load time to star-rated customer satisfaction. In conjunction with a clear dashboard, you can provide a clear view of the history of such metrics.
I hope you enjoyed this article! Let’s keep in touch on LinkedIn, Twitter or BlueSky! 🤜🤛
In C#, attributes are used to describe the meaning of some elements, such as classes, methods, and interfaces.
I’m sure you’ve already used them before. Examples are:
the [Required] attribute when you define the properties of a model to be validated;
the [Test] attribute when creating Unit Tests using NUnit;
the [HttpGet] and the [FromBody] attributes used to define API endpoints.
As you can see, these attributes do not specify behaviour; rather, they express the meaning of a specific element.
In this article, we will learn how to create custom attributes in C# and some possible interesting usages of such custom attributes.
Create a custom attribute by inheriting from System.Attribute
Creating a custom attribute is pretty straightforward: you just need to create a class that inherits from System.Attribute.
Ideally, the class name should end with the suffix -Attribute: in this way, you can use the attribute using the short form [ApplicationModule] rather than using the whole class name, like [ApplicationModuleAttribute]. In fact, C# attributes can be resolved by convention.
Depending on the expected usage, a custom attribute can have one or more constructors and can expose one or more properties. In this example, I created a constructor that accepts an enum.
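Here’s a minimal sketch of what it can look like (the BelongingModule property name and the Module values other than Cart are placeholders):

public enum Module
{
    Cart,
    Orders,
    Users // placeholder values: list the modules of your own system
}

public class ApplicationModuleAttribute : Attribute
{
    public Module BelongingModule { get; }

    public ApplicationModuleAttribute(Module belongingModule)
    {
        BelongingModule = belongingModule;
    }
}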
I can then use this attribute by calling [ApplicationModule(Module.Cart)].
Define where a Custom Attribute can be applied
Have a look at the attribute applied to the class definition:
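It is the AttributeUsage attribute which, in this example, can look like this (the chosen targets are an assumption about where you want the attribute to be usable):

[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method, AllowMultiple = true)]
public class ApplicationModuleAttribute : Attribute
{
    // ...
}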
Have you noticed the AttributeTargets parameter? It’s actually a Flagged enum, whose values are powers of 2: this trick allows us to join two or more values using the OR operator.
There’s another property to notice: AllowMultiple. When set to true, this property tells us that it’s possible to apply more than one attribute of the same type to the same element, like this:
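Here, the class and method names are placeholders:

[ApplicationModule(Module.Cart)]
[ApplicationModule(Module.Orders)]
public class ShoppingService
{
    [ApplicationModule(Module.Cart)]
    public void AddItemToCart(int itemId)
    {
        // ...
    }

    [ApplicationModule(Module.Orders)]
    public void CreateOrder()
    {
        // ...
    }
}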
Notice that I also explicitly enriched the two inner methods with the related attribute – even if it’s not necessary.
Further readings
As you noticed, AttributeTargets is a Flagged Enum. Don’t know what they are and how to define them? I’ve got you covered! I wrote two articles about Enums, and you can find info about Flagged Enums in both articles:
InternalsVisibleTo can be used to give external projects access to internal classes; for example, you can use that attribute when writing unit tests.
In this article, I showed you how to create custom attributes in C# to specify which modules a class or a method belongs to. This trick can be useful if you want to speed up the analysis of your repository: if you need to retrieve all the classes that are used for the Cart module (for example, because you want to move them to an external library), you can just search for Module.Cart across the repository and have a full list of elements.
In particular, this approach can be useful for the exposed components, such as API controllers. Knowing that two or more modules use the same Controller can help you understand if a change in the API structure is necessary.
Another good usage of this attribute is automatic documentation: you could create a tool that automatically enlists all the interfaces, API endpoints, and classes grouped by the belonging module. The possibilities are infinite!
I hope you enjoyed this article! Let’s keep in touch on LinkedIn, Twitter or BlueSky! 🤜🤛
D2 is an open-source tool to design architectural layouts using a declarative syntax. It’s a textual format, which can also be stored under source control. Let’s see how it works, how you can install it, and some practical usage tips.
When defining the architecture of a system, I believe in the adage that says that «A picture is worth a thousand words».
Proper diagramming helps in understanding how the architecture is structured, the dependencies between components, how the different components communicate, and their responsibilities.
A clear architectural diagram can also be useful for planning. Once you have a general idea of the components, you can structure the planning according to the module dependencies and the priorities.
A lack of diagramming leads to a “just words” definition: how many times have you heard people talk about modules that do not exist or do not work as they were imagining?
The whole team can benefit from having a common language: a clear diagram brings clear thoughts, helping all the stakeholders (developers, architects, managers) understand the parts that compose a system.
I tried several approaches: online WYSIWYG tools like Draw.IO, and DSLs like Structurizr and Mermaid. For different reasons, I wasn’t happy with any of them.
Then I stumbled upon D2: its rich set of elements makes it my new go-to tool for describing architectures. Let’s see how it works!
A quick guide to D2 syntax
Just like the more famous Mermaid, when using D2, you have to declare all the elements and connections as textual nodes.
You can generate diagrams online by using the Playground section available on the official website, or you can install it locally (as you will see later).
Elements: the basic components of every diagram
Elements are defined as a set of names that can be enriched with a label and other metadata.
Here’s an example of the most straightforward configurations for standalone elements.
service
user: Application User
job: {
  shape: hexagon
}
For each element, you can define its internal name (service), a label (user: Application User) and a shape (shape: hexagon).
Other than that, I love the fact that you can define elements to be displayed as multiple instances: this can be useful when a service has multiple instances of the same type, and you want to express it clearly without the need to manually create multiple elements.
You can do it by setting the multiple property to true.
apiGtw: API Gateway {
  shape: cloud
}

be: BackEnd {
  style.multiple: true
}

apiGtw -> be
Grouping: nesting elements hierarchically
You may want to group elements. You can do that by using a hierarchical structure.
In the following example, the main container represents my e-commerce application, composed of a website and a background job. The website is composed of a frontend, a backend, and a database.
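Here’s a sketch of that structure (the same elements we will connect in the next section):

ecommerce: E-commerce {
  website: User Website {
    frontend
    backend
    database: DB {
      shape: cylinder
    }
  }

  job: {
    shape: hexagon
  }
}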
As you can see from the diagram definition, elements can be nested in a hierarchical structure using the {} symbols. Of course, you can still define styles and labels to nested elements.
Connections: making elements communicate
An architectural diagram is helpful only if it can express connections between elements.
To connect two elements, you must use the --, the -> or the <- connector. You have to link their IDs, not their labels.
ecommerce: E-commerce {
  website: User Website {
    frontend
    backend
    database: DB {
      shape: cylinder
    }

    frontend -> backend
    backend -> database: retrieve records {
      style.stroke: red
    }
  }

  job: {
    shape: hexagon
  }

  job -> website.database: update records
}
The previous example contains some interesting points.
Elements within the same container can be referenced directly using their ID: frontend -> backend.
You can add labels to a connection: backend -> database: retrieve records.
You can apply styles to a connection, like choosing the arrow colour with style.stroke: red.
You can create connections between elements from different containers: job -> website.database.
When referencing items from different containers, you must always include the container ID: job -> website.database works, but job -> database doesn’t because database is not defined (so it gets created from scratch).
SQL Tables: represent the table schema
An interesting part of D2 diagrams is the possibility of adding the description of SQL tables.
Obviously, the structure cannot be validated: the actual syntax depends on the database vendor.
However, having the table schema defined in the diagram can be helpful in reasoning around the dependencies needed to complete a development.
serv: Products Service

db: Database Schema {
  direction: right
  shape: cylinder

  userTable: dbo.user {
    shape: sql_table
    Id: int {constraint: primary_key}
    FirstName: text
    LastName: text
    Birthday: datetime2
  }

  productsTable: dbo.products {
    shape: sql_table
    Id: int {constraint: primary_key}
    Owner: int {constraint: foreign_key}
    Description: text
  }

  productsTable.Owner -> userTable.Id
}

serv -> db.productsTable: Retrieve products by user id
Notice how you can also define constraints to an element, like {constraint: foreign_key}, and specify the references from one table to another.
How to install and run D2 locally
D2 is a tool written in Go.
Go is not natively present in every computer, so you have to install it. You can learn how to install it from the official page.
Once Go is ready, you can install D2 in several ways. I use Windows 11, so my preferred installation approach is to use a .msi installer, as described here.
If you are on macOS, you can use Homebrew to install it by running:
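brew install d2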
Regardless of the Operating System, you can have Go directly install D2 by running the following command:
go install oss.terrastruct.com/d2@latest
It’s even possible to install it via Docker. However, this approach is quite complex, so I prefer installing D2 directly with the other methods I explained before.
To work with D2 diagrams, you need to create a file with the .d2 extension. That file will contain the textual representation of the diagrams, following the syntax we saw before.
Once D2 is installed and the file is present in the file system (in my case, I named the file my-diagram.d2), you can use the console to generate the diagram locally – remember, I’m using Windows 11, so I need to run the exe file:
d2.exe --watch .\my-diagram.d2
Now you can open your browser, head to the localhost page displayed on the shell, and see how D2 renders the local file. Thanks to the --watch flag, you can update the file locally and see the result appear on the browser without the need to restart the application.
When the diagram is ready, you can export it as a PNG or SVG by running
d2.exe .\my-diagram.d2 my-wonderful-design.png
Create D2 Diagrams on Visual Studio Code
Another approach is to install the D2 extension on VS Code.
Thanks to this extension, you can open any D2 file and, by using the command palette, see a preview of the final result. You can also format the document to have the diagram definition tidy and well-structured.
How to install and use D2 Diagrams on Obsidian
Lastly, D2 can be easily integrated with tools like Obsidian. Among the community plugins, you can find the official D2 plugin.
As you can imagine, Go is required on your machine.
And, if necessary, you are required to explicitly set the path to the bin folder of Go. In my case, I had to set it to C:\Users\BelloneDavide\go\bin\.
To insert a D2 diagram in a note generated with Obsidian, you have to use d2 as a code fence language.
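For example, a fenced block like this one is rendered as a diagram directly inside the note (the content is just a tiny placeholder diagram):

```d2
user -> server: request
server -> db: query
```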
Practical tips for using D2
D2 is easy to use once you have a basic understanding of how to create elements and connections.
However, some tips may be useful to ease the process of creating the diagrams. Or, at least, these tips helped me write and maintain my diagrams.
Separate elements and connections definition
A good approach is to declare the application’s structure first, and then list all the connections between elements, unless the connected elements live within the same component and are not expected to change.
ecommerce: E-commerce {
  website: User Website {
    backend
    database: DB {
      shape: cylinder
    }

    backend -> database: retrieve records {
      style.stroke: red
    }
  }

  job -> website.database: update records
}
Here, the connection between backend and database is internal to the website element, so it makes sense to declare it directly within the website element.
However, the other connection between the job and the database is cross-element. In the long run, it may bring readability problems.
So, you could update it like this:
ecommerce: E-commerce {
  website: User Website {
    backend
    database: DB {
      shape: cylinder
    }

    backend -> database: retrieve records {
      style.stroke: red
    }
  }

-  job -> website.database: update records
}

+ ecommerce.job -> ecommerce.website.database: update records
This tip can be extremely useful when you have more than one element with the same name belonging to different parents.
Needless to say, since the order of the connection declarations does not affect the final rendering, write them in an organized way that best fits your needs. In general, I prefer creating sections (using comments to declare the area), and grouping connections by the outbound module.
Pick a colour theme (and customize it, if you want!)
D2 allows you to specify a theme for the diagram. There are some predefined themes (which are a set of colour palettes), each with a name and an ID.
To use a theme, you have to specify it in the vars element on top of the diagram:
vars: {
  d2-config: {
    theme-id: 103
  }
}
103 is the theme named “Earth tones”, using a brown-based palette that, when applied to the diagram, renders it like this.
However, if you have a preferred colour palette, you can use your own colours by overriding the default values:
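For example, something like this keeps the base theme but overrides one of its palette keys (the hex value is just a colour I picked for the sake of the example):

vars: {
  d2-config: {
    theme-id: 103
    theme-overrides: {
      B4: "#76d6ff"
    }
  }
}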
You can read more about themes and customizations here.
What is that B4 key overridden in the previous example? Unfortunately, I don’t know: you must try all the variables to understand how the diagram is rendered.
Choose the right layout engine
You can choose one of the three supported layout engines to render the elements in a different way (more info here).
DAGRE and ELK are open source, but quite basic. TALA is more sophisticated, but it requires a paid licence.
Here’s an example of how the same diagram is rendered using the three different engines.
You can decide which engine to use by declaring it in the layout-engine element:
vars: {
  d2-config: {
    layout-engine: tala
  }
}
Choosing the right layout engine can be beneficial because sometimes some elements are not rendered correctly: here’s a weird rendering with the DAGRE engine.
Use variables to simplify future changes
D2 allows you to define variables in a single place and have the same value repeated everywhere it’s needed.
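Here’s a sketch of how it works: the entityName variable is declared once and referenced with the ${} substitution syntax (the element names are placeholders):

vars: {
  entityName: Magazine
}

website: ${entityName} Website
api: ${entityName} API
db: ${entityName} DB

website -> api
api -> db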
If in the future you’ll have to handle not only Magazines but also other media types, you can simply replace the value of entityName in one place and have it updated all over the diagram.
D2 vs Mermaid: a comparison
D2 and Mermaid are similar but have some key differences.
They both are diagram-as-a-code tools, meaning that the definition of a diagram is expressed as a text file, thus making it available under source control.
Mermaid is already supported by many tools, like Azure DevOps wikis, GitHub pages, and so on.
On the contrary, D2 must be installed (along with the Go language).
Mermaid is quite a “closed” system: even if it allows you to define some basic styles, it’s not that flexible.
On the contrary, D2 allows you to choose a theme for the whole diagram, as well as choosing different layout engines.
Also, D2 has some functionalities that are (currently) missing on Mermaid:
Mermaid, on the contrary, allows us to define more types of diagrams: State Diagrams, Gantt, Mindmaps, and so on. Also, as we saw, it’s already supported on many platforms.
So, my (current) choice is: use D2 for architectural diagrams, and use Mermaid for everything else.
I haven’t tried D2 for Sequence Diagrams yet, so I won’t express an opinion on that.
Further readings
D2 is available online with a playground you can use to try things out in a sandboxed environment.
And, if you want, you can use icons to create better diagrams: D2 exposes a set of SVG icons that can be easily integrated into your diagrams. You can find them here:
And, of course, just the architectural diagram is not enough: you should also describe the dependencies, the constraints, the deployment strategies, and so on. Arc42 is a template that can guide you to proper system documentation:
Application Insights is a great tool for handling high volumes of logs. How can you configure an ASP.NET application to send logs to Azure Application Insights? What can I do to have Application Insights log my exceptions?
Logging is crucial for any application. However, generating logs is not enough: you must store them somewhere to access them.
Application Insights is one of the tools that allows you to store your logs in a cloud environment. It provides a UI and a query editor that allows you to drill down into the details of your logs.
In this article, we will learn how to integrate Azure Application Insights with an ASP.NET Core application and how Application Insights treats log properties such as Log Levels and exceptions.
For the sake of this article, I’m working on an API project with HTTP Controllers with only one endpoint. The same approach can be used for other types of applications.
How to retrieve the Azure Application Insights connection string
Azure Application Insights can be accessed via any browser by using the Azure Portal.
Once you have an instance ready, you can simply get the value of the connection string for that resource.
You can retrieve it in two ways.
You can get the connection string by looking at the Connection String property in the resource overview panel:
The alternative is to navigate to the Configure > Properties page and locate the Connection String field.
How to add Azure Application Insights to an ASP.NET Core application
Now that you have the connection string, you can place it in the configuration file or, in general, store it in a place that is accessible from your application.
To configure ASP.NET Core to use Application Insights, you must first install the Microsoft.Extensions.Logging.ApplicationInsights NuGet package.
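You can do it via the .NET CLI:

dotnet add package Microsoft.Extensions.Logging.ApplicationInsights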
Now you can add a new configuration to the Program class (or wherever you configure your services and the ASP.NET core pipeline):
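A minimal sketch of that configuration looks like this; the "ApplicationInsights:ConnectionString" key is just an assumption about where you stored the connection string:

builder.Logging.AddApplicationInsights(
    configureTelemetryConfiguration: config =>
        config.ConnectionString = builder.Configuration["ApplicationInsights:ConnectionString"],
    configureApplicationInsightsLoggerOptions: options =>
    {
        // Nothing to change here (yet): the defaults are fine for now
    });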
The configureApplicationInsightsLoggerOptions allows you to configure some additional properties: TrackExceptionsAsExceptionTelemetry, IncludeScopes, and FlushOnDispose. These properties are set to true by default, so you probably don’t want to change the default behaviour (except for one, which we’ll modify later).
And that’s it! You have Application Insights ready to be used.
How log levels are stored and visualized on Application Insights
I have this API endpoint that does nothing fancy: it just returns a random number.
We can use it to run experiments on how logs are treated using Application Insights.
First, let’s add some simple log messages in the Get endpoint:
[HttpGet]
public async Task<IActionResult> Get()
{
    int number = Random.Shared.Next();

    _logger.LogDebug("A debug log");
    _logger.LogTrace("A trace log");
    _logger.LogInformation("An information log");
    _logger.LogWarning("A warning log");
    _logger.LogError("An error log");
    _logger.LogCritical("A critical log");

    return Ok(number);
}
These are just plain messages. Let’s search for them in Application Insights!
You first have to run the application – duh! – and wait for a couple of minutes for the logs to be ready on Azure. So, remember not to close the application immediately: you have to give it a few seconds to send the log messages to Application Insights.
Then, you can open the logs panel and access the logs stored in the traces table.
As you can see, the messages appear in the query result.
There are three important things to notice:
in .NET, the log level is called “Log Level”, while on Application Insights it’s called “severity level”;
the log levels lower than Information are ignored by default (in fact, you cannot see them in the query result);
the Log Levels are exposed as numbers in the severityLevel column: the higher the value, the higher the log level.
So, if you want to update the query to show only the log messages that are at least Warnings, you can do something like this:
traces
| where severityLevel >= 2
| order by timestamp desc
| project timestamp, message, severityLevel
How to log exceptions on Application Insights
In the previous example, we logged errors like this:
_logger.LogError("An error log");
Fortunately, ILogger exposes an overload that accepts an exception in input and logs all the details.
Let’s try it by throwing an exception (I chose AbandonedMutexException because it’s totally nonsense in this simple context, so it’s easy to spot).
private void SomethingWithException(int number)
{
    try
    {
        _logger.LogInformation("In the Try block");
        throw new AbandonedMutexException("An exception message");
    }
    catch (Exception ex)
    {
        _logger.LogInformation("In the Catch block");
        _logger.LogError(ex, "Unable to complete the operation");
    }
    finally
    {
        _logger.LogInformation("In the Finally block");
    }
}
So, when calling it, we expect to see 4 log entries, one of which contains the details of the AbandonedMutexException exception.
Hey, where is the exception message??
It turns out that ILogger, when creating log entries like _logger.LogError("An error log");, generates objects of type TraceTelemetry. However, the overload that accepts an exception as the first parameter (_logger.LogError(ex, "Unable to complete the operation");) is internally handled as an ExceptionTelemetry object. Since it is internally a different type of Telemetry object, it gets ignored by default.
To enable logging exceptions, you have to update the way you add Application Insights to your application by setting the TrackExceptionsAsExceptionTelemetry property to false:
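Building on the registration we saw earlier (again, the configuration key is an assumption), it becomes something like this:

builder.Logging.AddApplicationInsights(
    configureTelemetryConfiguration: config =>
        config.ConnectionString = builder.Configuration["ApplicationInsights:ConnectionString"],
    configureApplicationInsightsLoggerOptions: options =>
    {
        // Stop treating exception logs as ExceptionTelemetry, so they show up in the traces table
        options.TrackExceptionsAsExceptionTelemetry = false;
    });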
It’s not the first time we have written about logging in this blog.
For example, suppose you don’t want to use Application Insights but prefer an open-source, vendor-independent log sink. In that case, my suggestion is to try Seq:
This article taught us how to set up Azure Application Insights in an ASP.NET application.
We touched on the basics, discussing log levels and error handling. In future articles, we’ll delve into some other aspects of logging, such as correlating logs, understanding scopes, and more.
I hope you enjoyed this article! Let’s keep in touch on LinkedIn, Twitter or BlueSky! 🤜🤛
Let’s dive deep into the CallerMemberName attribute and explore its usage from multiple angles. We’ll see various methods of invoking it, shedding light on how it is defined at compile time.
Method names change. And, if you are using method names in some places specifying them manually, you’ll spend a lot of time updating them.
Luckily for us, in C#, we can use an attribute named CallerMemberName.
This attribute can be applied to an optional parameter in the method signature so that, when the caller does not pass a value, the parameter is filled with the caller method’s name.
public void SayMyName([CallerMemberName] string? methodName = null) =>
    Console.WriteLine($"The method name is {methodName ?? "NULL"}!");
It’s important to note that the parameter must be optional (here, a nullable string with a null default): this way, if the caller passes a value, that value is used. Otherwise, the name of the caller method is used. Well, if the caller method has a name! 👀
Getting the caller method’s name via direct execution
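The simplest case is calling the method directly. Here’s a sketch (the wrapper method’s name is mine); passing an explicit value wins over the compiler-provided caller name:

private void DirectCallWithOverriddenName()
{
    Console.WriteLine("Direct call with overridden name:");

    // The explicit argument takes precedence over the caller's name
    SayMyName("Walter White");
}

The printed text is then: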
Direct call with overridden name:
The method name is Walter White!
It’s important to note that the compiler sets the methodName parameter only if it is not otherwise specified.
This means that if you call SayMyName(null), the value will be null – because you explicitly passed the value.
private void DirectCallWithNullName()
{
    Console.WriteLine("Direct call with null name:");
    SayMyName(null);
}
The printed text is then:
Direct call with null name:
The method name is NULL!
CallerMemberName when the method is called via an Action
Let’s see what happens when calling it via an Action:
public void CallViaAction()
{
    Console.WriteLine("Calling via Action:");

    Action<int> action = (_) => SayMyName();
    var singleElement = new List<int> { 1 };
    singleElement.ForEach(s => action(s));
}
This method prints this text:
Calling via Action:
The method name is CallViaAction!
Now, things get interesting: the CallerMemberName attribute recognizes the method’s name that contains the overall expression, not just the actual caller.
We can see that, syntactically, the caller is the ForEach method (which is a method of the List<T> class). But, in the final result, the ForEach method is ignored, as the method is actually called by the CallViaAction method.
This can be verified by accessing the compiler-generated code, for example by using Sharplab.
At compile time, since no value is passed to the SayMyName method, it gets autopopulated with the parent method name. Then, the ForEach method calls SayMyName, but the methodName is already defined at compile time.
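In essence, the lambda is lowered into something equivalent to this simplified sketch (not the exact output you’d see in Sharplab):

Action<int> action = _ => SayMyName("CallViaAction");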
Dynamic invocation and the CallerMemberName attribute
What if we try to execute the SayMyName method by accessing the root class (in this case, CallerMemberNameTests) as a dynamic type?
private void CallViaDynamicInvocation()
{
    Console.WriteLine("Calling via dynamic invocation:");

    dynamic dynamicInstance = new CallerMemberNameTests(null);
    dynamicInstance.SayMyName();
}
Oddly enough, the attribute does not work as we could have expected, but it prints NULL:
Calling via dynamic invocation:
The method name is NULL!
This happens because, at compile time, there is no reference to the caller method.
private void CallViaDynamicInvocation()
{
    Console.WriteLine("Calling via dynamic invocation:");

    object arg = new C();
    if (<>o__0.<>p__0 == null)
    {
        Type typeFromHandle = typeof(C);
        CSharpArgumentInfo[] array = new CSharpArgumentInfo[1];
        array[0] = CSharpArgumentInfo.Create(CSharpArgumentInfoFlags.None, null);
        <>o__0.<>p__0 = CallSite<Action<CallSite, object>>.Create(Microsoft.CSharp.RuntimeBinder.Binder.InvokeMember(CSharpBinderFlags.ResultDiscarded, "SayMyName", null, typeFromHandle, array));
    }
    <>o__0.<>p__0.Target(<>o__0.<>p__0, arg);
}
I have to admit that I don’t understand why this happens: if you want, drop a comment to explain to us what is going on, I’d love to learn more about it! 📩
Event handlers can get the method name
Then, we have custom events.
We define events in one place, but they are executed indirectly.
private void CallViaEventHandler()
{
    Console.WriteLine("Calling via events:");

    var eventSource = new MyEventClass();
    eventSource.MyEvent += (sender, e) => SayMyName();
    eventSource.TriggerEvent();
}

public class MyEventClass
{
    public event EventHandler MyEvent;

    public void TriggerEvent() =>
        // Raises an event which in our case calls SayMyName via subscribing lambda method
        MyEvent?.Invoke(this, EventArgs.Empty);
}
So, what will the result be? “Who” is the caller of this method?
Calling via events:
The method name is CallViaEventHandler!
Again, it all boils down to how the method is generated at compile time: even if the actual execution is performed “asynchronously” – I know, it’s not the most obvious word for this case – at compile time the method is declared by the CallViaEventHandler method.
CallerMemberName from the Class constructor
Lastly, what happens when we call it from the constructor?
public CallerMemberNameTests(IOutput output) : base(output)
{
    Console.WriteLine("Calling from the constructor");
    SayMyName();
}
We can consider constructors to be a special kind of method, but what’s in their names? What can we find?
Calling from the constructor
The method name is .ctor!
Yes, the actual method name is .ctor! Regardless of the class name, the constructor is considered to be a method with that specific internal name.
Wrapping up
In this article, we started from a “simple” topic but learned a few things about how code is compiled and the differences between runtime and compile time.