Blog

  • Build your own Static Code Analysis tool in .NET by knowing how Assembly, Type, MethodInfo, ParameterInfo work. | Code4IT



    Why buy a whole tool when you can build your own? Learn how the Type system works in .NET, and create your own minimal type analyser.


    Analysing your code is helpful to get an idea of the overall quality. At the same time, having an automated tool that identifies specific characteristics or performs some analysis for you can be useful.

    Sure, there are many fantastic tools available, but often a utility class that you can build as needed and run without setting up a complex infrastructure is enough.

    In this article, we are going to see how to navigate assemblies, classes, methods and parameters to perform some custom analysis.

    For this article, my code is structured into 3 Assemblies:

    • CommonClasses, a Class Library that contains some utility classes;
    • NetCoreScripts, a Class Library that contains the code we are going to execute;
    • ScriptsRunner, a Console Application that runs the scripts defined in the NetCoreScripts library.

    The dependencies between the modules are shown below: ScriptsRunner depends on NetCoreScripts, and NetCoreScripts depends on CommonClasses.

    Class library dependencies

    In this article, we are going to write the examples in the NetCoreScripts class library, in a class named AssemblyAnalysis.

    How to load an Assembly in C#, with different methods

    The starting point to analyse an Assembly is, well, to have an Assembly.

    So, in the Scripts Class Library (the middle one), I wrote:

    var assembly = DefineAssembly();
    

    In the DefineAssembly method we can choose the Assembly we are going to analyse.

    Load the Assembly containing a specific class

    The easiest way is to do something like this:

    private static Assembly DefineAssembly()
        => typeof(AssemblyAnalysis).Assembly;
    

    Where AssemblyAnalysis is the class that contains our scripts.

    Similarly, we can get the Assembly info for a class belonging to another Assembly, like this:

    private static Assembly DefineAssembly()
        => typeof(CommonClasses.BaseExecutable).Assembly;
    

    In short, you can access the Assembly info of whichever class you know – if you can reference it directly, of course!

    Load the current, the calling, and the executing Assembly

    The Assembly class provides you with some methods that may look similar, but give you totally different info depending on how your code is structured.

    Remember the ScriptsRunner –> NetCoreScripts –> CommonClasses sequence? To better explain how things work, let’s run the following examples in a method in the CommonClasses class library (the last one in the dependency chain).

    var executing = System.Reflection.Assembly.GetExecutingAssembly();
    var calling = System.Reflection.Assembly.GetCallingAssembly();
    var entry = System.Reflection.Assembly.GetEntryAssembly();
    

    Assembly.GetExecutingAssembly returns the Assembly that contains the code currently being executed. In this case, it’s the CommonClasses Assembly.

    Assembly.GetCallingAssembly returns the caller Assembly, so the one that references the Executing Assembly. In this case, given that the CommonClasses library is referenced only by the NetCoreScripts library, well, we are getting info about the NetCoreScripts class library.

    Assembly.GetEntryAssembly returns the info of the Assembly that is executing the whole application – so, the entry point. In our case, it’s the ScriptsRunner Console Application.

    Deciding which one to choose is crucial, especially when you are going to distribute your libraries, for example, as NuGet packages. For sure, you’ll know the Executing Assembly. Most probably, depending on how the project is structured, you’ll also know the Calling Assembly. But almost certainly you won’t know the Entry Assembly.

    Method name | Meaning | In this example…
    GetExecutingAssembly | The current Assembly | CommonClasses
    GetCallingAssembly | The caller Assembly | NetCoreScripts
    GetEntryAssembly | The top-level executor | ScriptsRunner
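    A quick way to check which Assembly is which is to print their names. This is just a sketch that reuses the executing, calling, and entry variables from the snippet above; note that GetEntryAssembly can return null when the entry point is unmanaged, hence the null-conditional operator.

    Console.WriteLine(executing.GetName().Name); // CommonClasses
    Console.WriteLine(calling.GetName().Name);   // NetCoreScripts
    Console.WriteLine(entry?.GetName().Name);    // ScriptsRunner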

    How to retrieve classes of a given .NET Assembly

    Now you have an Assembly to analyse. It’s time to load the classes belonging to your Assembly.

    You can start with assembly.GetTypes(): this method returns all the types (in the form of a Type array) belonging to the Assembly.

    For each Type you can access several properties, such as IsClass, IsPublic, IsAbstract, IsGenericType, IsEnum and so on. The full list of properties of a Type is available 🔗here.

    You may want to analyse public classes: therefore, you can do something like:

    private static List<Type> GetAllPublicTypes(Assembly assembly) => assembly
                .GetTypes()
                .Where(t => t.IsClass && t.IsPublic)
                .ToList();
    

    How to list the Methods belonging to a C# Type

    Given a Type, you can extract the info about all the available methods.

    The Type class exposes several methods that can help you find useful information, such as GetConstructors.

    In our case, we are only interested in public methods, declared in that class (and not inherited from a base class):

    private static MethodInfo[] GetPublicMethods(Type type) =>
        type.GetMethods(BindingFlags.Instance | BindingFlags.Static | BindingFlags.Public | BindingFlags.DeclaredOnly);
    

    The BindingFlags enum is a 🔗Flagged Enum: it’s an enum with special values that allow you to perform an OR operation on the values.

    Value | Description | Example
    Public | Includes public members. | public void Print()
    NonPublic | Includes non-public members (private, protected, etc.). | private void Calculate()
    Instance | Includes instance (non-static) members. | public void Save()
    Static | Includes static members. | public static void Log(string msg)
    FlattenHierarchy | Includes static members up the inheritance chain. | public static void Helper() (this method exists in the base class)
    DeclaredOnly | Only members declared in the given type, not inherited. | public void MyTypeSpecific() (this method does not exist in the base class)
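    To see how these flags combine, here is a minimal sketch (BaseClass and DerivedClass are hypothetical classes made up just for this example):

    using System.Reflection;

    // Returns only OwnMethod: members inherited from BaseClass (and from Object) are excluded
    var declaredOnly = typeof(DerivedClass)
        .GetMethods(BindingFlags.Instance | BindingFlags.Public | BindingFlags.DeclaredOnly);

    // Returns OwnMethod, InheritedMethod and StaticHelper (thanks to FlattenHierarchy),
    // plus the public instance methods inherited from Object (ToString, Equals, and so on)
    var withFlattenedHierarchy = typeof(DerivedClass)
        .GetMethods(BindingFlags.Instance | BindingFlags.Static | BindingFlags.Public | BindingFlags.FlattenHierarchy);

    public class BaseClass
    {
        public void InheritedMethod() { }
        public static void StaticHelper() { }
    }

    public class DerivedClass : BaseClass
    {
        public void OwnMethod() { }
    }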

    How to get the parameters of a MethodInfo object

    The final step is to retrieve the list of parameters from a MethodInfo object.

    This step is pretty easy: just call the GetParameters() method:

    public ParameterInfo[] GetParameters(MethodInfo method) => method.GetParameters();
    

    A ParameterInfo object contains several pieces of information, such as the name, the type and the default value of the parameter.

    Let’s consider this silly method:

    public static void RandomCity(string[] cities, string fallback = "Rome")
    { }
    

    If we have a look at its parameters, we will find the following values:

    Properties of a ParameterInfo object
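    In code, a minimal sketch of that inspection could look like this (assuming RandomCity is declared in the AssemblyAnalysis class):

    MethodInfo method = typeof(AssemblyAnalysis).GetMethod(nameof(AssemblyAnalysis.RandomCity));

    foreach (ParameterInfo parameter in method.GetParameters())
    {
        Console.WriteLine(parameter.Name);            // "cities", then "fallback"
        Console.WriteLine(parameter.ParameterType);   // System.String[], then System.String
        Console.WriteLine(parameter.HasDefaultValue); // False, then True

        if (parameter.HasDefaultValue)
            Console.WriteLine(parameter.DefaultValue); // "Rome"
    }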

    Bonus tip: Auto-properties act as Methods

    Let’s focus a bit more on the properties of a class.

    Consider this class:

    public class User
    {
      public string Name { get; set; }
    }
    

    There are no methods; only one public property.

    But hey! It turns out that properties, under the hood, are treated as methods. In fact, you can find two methods, named get_Name and set_Name, that act as an access point to the Name property.

    Automatic Getter and Setter of the Name property in C#
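    You can verify it with a quick sketch: list the methods declared on User and you will find the two accessors. They are also marked with IsSpecialName = true, which is handy if you want to exclude them from your analysis.

    var methods = typeof(User).GetMethods(BindingFlags.Instance | BindingFlags.Public | BindingFlags.DeclaredOnly);

    foreach (var method in methods)
        Console.WriteLine($"{method.Name} (IsSpecialName: {method.IsSpecialName})");
    // get_Name (IsSpecialName: True)
    // set_Name (IsSpecialName: True)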

    Further readings

    Do you remember that exceptions are, in the end, Types?

    And that, in the catch block, you can filter for exceptions of a specific type or with a specific condition?

    If not, check this article out!

    🔗 Exception handling with WHEN clause | Code4IT

    This article first appeared on Code4IT 🐧

    Wrapping up (plus the full example)

    From here, you can use all this info to build whatever you want. Personally, I used it to analyse my current project, checking how many methods accept more than N parameters as input, and which classes have the highest number of public methods.

    In short, an example of a simple code analyser can be this one:

    public void Execute()
    {
        var assembly = DefineAssembly();
        var paramsInfo = AnalyzeAssembly(assembly);
    
        AnalyzeParameters(paramsInfo);
    }
    
    private static Assembly DefineAssembly()
        => Assembly.GetExecutingAssembly();
    
    public static List<ParamsMethodInfo> AnalyzeAssembly(Assembly assembly)
    {
        List<ParamsMethodInfo> all = new List<ParamsMethodInfo>();
        var types = GetAllPublicTypes(assembly);
    
        foreach (var type in types)
        {
            var publicMethods = GetPublicMethods(type);
    
            foreach (var method in publicMethods)
            {
                var parameters = method.GetParameters();
                if (parameters.Length > 0)
                {
                    var f = parameters.First();
                }
    
                all.Add(new ParamsMethodInfo(
                    assembly.GetName().Name,
                    type.Name,
                    method
                    ));
            }
        }
        return all;
    }
    
    private static MethodInfo[] GetPublicMethods(Type type) =>
        type.GetMethods(BindingFlags.Instance | BindingFlags.Static | BindingFlags.Public | BindingFlags.DeclaredOnly);
    
    private static List<Type> GetAllPublicTypes(Assembly assembly) => assembly.GetTypes()
                .Where(t => t.IsClass && t.IsPublic)
                .ToList();
    
    public class ParamsMethodInfo(string AssemblyName, string ClassName, MethodInfo Method)
    {
        public string MethodName => Method.Name;
        public ParameterInfo[] Parameters => Method.GetParameters();
    }
    

    And then, in the AnalyzeParameters method, you can add your own logic.
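    For example, a minimal sketch of AnalyzeParameters could simply list the methods that accept more than a given number of parameters (the threshold and the printed format are, of course, up to you):

    private static void AnalyzeParameters(List<ParamsMethodInfo> paramsInfo, int maxParameters = 4)
    {
        var methodsWithTooManyParameters = paramsInfo
            .Where(m => m.Parameters.Length > maxParameters)
            .OrderByDescending(m => m.Parameters.Length);

        foreach (var method in methodsWithTooManyParameters)
            Console.WriteLine($"{method.MethodName} accepts {method.Parameters.Length} parameters");
    }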

    As you can see, you don’t need to adopt complex tools to perform operations like this: just knowing that you can access the static details of each class and method can be enough (of course, it depends on the use!).

    I hope you enjoyed this article! Let’s keep in touch on LinkedIn, Twitter or BlueSky! 🤜🤛

    Happy coding!

    🐧






  • Exploring the Process of Building a Procedural 3D Kitchen Designer with Three.js



    Back in November 2024, I shared a post on X about a tool I was building to help visualize kitchen remodels. The response from the Three.js community was overwhelmingly positive. The demo showed how procedural rendering techniques—often used in games—can be applied to real-world use cases like designing and rendering an entire kitchen in under 60 seconds.

    In this article, I’ll walk through the process and thinking behind building this kind of procedural 3D kitchen design tool using vanilla Three.js and TypeScript—from drawing walls and defining cabinet segments to auto-generating full kitchen layouts. Along the way, I’ll share key technical choices, lessons learned, and ideas for where this could evolve next.

    You can try out an interactive demo of the latest version here: https://kitchen-designer-demo.vercel.app/. (Tip: Press the “/” key to toggle between 2D and 3D views.)

    Designing Room Layouts with Walls

    Example of user drawing a simple room shape using the built-in wall module.

    To initiate our project, we begin with the wall drawing module. At a high level, this is akin to Figma’s pen tool, where the user can add one line segment at a time until a closed—or open-ended—polygon is complete on an infinite 2D canvas. In our build, each line segment represents a single wall as a 2D plane from coordinate A to coordinate B, while the complete polygon outlines the perimeter envelope of a room.

    1. We begin by capturing the [X, Z] coordinates (with Y oriented upwards) of the user’s initial click on the infinite floor plane. This 2D point is obtained via Three.js’s built-in raycaster for intersection detection, establishing Point A.
    2. As the user hovers the cursor over a new spot on the floor, we apply the same intersection logic to determine a temporary Point B. During this movement, a preview line segment appears, connecting the fixed Point A to the dynamic Point B for visual feedback.
    3. Upon the user’s second click to confirm Point B, we append the line segment (defined by Points A and B) to an array of segments. The former Point B instantly becomes the new Point A, allowing us to continue the drawing process with additional line segments.

    Here is a simplified code snippet demonstrating a basic 2D pen-draw tool using Three.js:

    import * as THREE from 'three';
    
    const scene = new THREE.Scene();
    const camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 1000);
    camera.position.set(0, 5, 10); // Position camera above the floor looking down
    camera.lookAt(0, 0, 0);
    
    const renderer = new THREE.WebGLRenderer();
    renderer.setSize(window.innerWidth, window.innerHeight);
    document.body.appendChild(renderer.domElement);
    
    // Create an infinite floor plane for raycasting
    const floorGeometry = new THREE.PlaneGeometry(100, 100);
    const floorMaterial = new THREE.MeshBasicMaterial({ color: 0xcccccc, side: THREE.DoubleSide });
    const floor = new THREE.Mesh(floorGeometry, floorMaterial);
    floor.rotation.x = -Math.PI / 2; // Lay flat on XZ plane
    scene.add(floor);
    
    const raycaster = new THREE.Raycaster();
    const mouse = new THREE.Vector2();
    let points: THREE.Vector3[] = []; // i.e. wall endpoints
    let tempLine: THREE.Line | null = null;
    const walls: THREE.Line[] = [];
    
    function getFloorIntersection(event: MouseEvent): THREE.Vector3 | null {
      mouse.x = (event.clientX / window.innerWidth) * 2 - 1;
      mouse.y = -(event.clientY / window.innerHeight) * 2 + 1;
      raycaster.setFromCamera(mouse, camera);
      const intersects = raycaster.intersectObject(floor);
      if (intersects.length > 0) {
        // Round to simplify coordinates (optional for cleaner drawing)
        const point = intersects[0].point;
        point.x = Math.round(point.x);
        point.z = Math.round(point.z);
        point.y = 0; // Ensure on floor plane
        return point;
      }
      return null;
    }
    
    // Update temporary line preview
    function onMouseMove(event: MouseEvent) {
      const point = getFloorIntersection(event);
      if (point && points.length > 0) {
        // Remove old temp line if exists
        if (tempLine) {
          scene.remove(tempLine);
          tempLine = null;
        }
        // Create new temp line from last point to current hover
        const geometry = new THREE.BufferGeometry().setFromPoints([points[points.length - 1], point]);
        const material = new THREE.LineBasicMaterial({ color: 0x0000ff }); // Blue for temp
        tempLine = new THREE.Line(geometry, material);
        scene.add(tempLine);
      }
    }
    
    // Add a new point and draw permanent wall segment
    function onMouseDown(event: MouseEvent) {
      if (event.button !== 0) return; // Left click only
      const point = getFloorIntersection(event);
      if (point) {
        points.push(point);
        if (points.length > 1) {
          // Draw permanent wall line from previous to current point
          const geometry = new THREE.BufferGeometry().setFromPoints([points[points.length - 2], points[points.length - 1]]);
          const material = new THREE.LineBasicMaterial({ color: 0xff0000 }); // Red for permanent
          const wall = new THREE.Line(geometry, material);
          scene.add(wall);
          walls.push(wall);
        }
        // Remove temp line after click
        if (tempLine) {
          scene.remove(tempLine);
          tempLine = null;
        }
      }
    }
    
    // Add event listeners
    window.addEventListener('mousemove', onMouseMove);
    window.addEventListener('mousedown', onMouseDown);
    
    // Animation loop
    function animate() {
      requestAnimationFrame(animate);
      renderer.render(scene, camera);
    }
    animate();

    The above code snippet is a very basic 2D pen tool, and yet this information is enough to generate an entire room instance. For reference: not only does each line segment represent a wall (2D plane), but the set of accumulated points can also be used to auto-generate the room’s floor mesh, and likewise the ceiling mesh (the inverse of the floor mesh).
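    For example, a minimal sketch of the floor generation could turn the accumulated points into a THREE.Shape and build a flat mesh from it (this assumes the user has closed the polygon; the buildFloorMesh helper is just an illustration):

    function buildFloorMesh(points: THREE.Vector3[]): THREE.Mesh {
      // Project the wall endpoints onto the XZ plane to get the 2D outline of the room
      const outline = points.map((p) => new THREE.Vector2(p.x, p.z));
      const shape = new THREE.Shape(outline);

      const geometry = new THREE.ShapeGeometry(shape);
      const material = new THREE.MeshBasicMaterial({ color: 0xdddddd, side: THREE.DoubleSide });
      const floorMesh = new THREE.Mesh(geometry, material);

      // ShapeGeometry is built on the XY plane, so rotate it down onto the floor (XZ plane)
      floorMesh.rotation.x = Math.PI / 2;
      return floorMesh;
    }

    // Usage, once the room outline is complete:
    // scene.add(buildFloorMesh(points));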

    In order to view the planes representing the walls in 3D, one can transform each THREE.Line into a custom Wall class object, which contains both a line (for orthogonal 2D “floor plan” view) and a 2D inward-facing plane (for perspective 3D “room” view). To build this class:

    class Wall extends THREE.Group {
      constructor(length: number, height: number = 96, thickness: number = 4) {
        super();
    
        // 2D line for top view, along the x-axis
        const lineGeometry = new THREE.BufferGeometry().setFromPoints([
          new THREE.Vector3(0, 0, 0),
          new THREE.Vector3(length, 0, 0),
        ]);
        const lineMaterial = new THREE.LineBasicMaterial({ color: 0xff0000 });
        const line = new THREE.Line(lineGeometry, lineMaterial);
        this.add(line);
    
        // 3D wall as a box for thickness
        const wallGeometry = new THREE.BoxGeometry(length, height, thickness);
        const wallMaterial = new THREE.MeshBasicMaterial({ color: 0xaaaaaa, side: THREE.DoubleSide });
        const wall = new THREE.Mesh(wallGeometry, wallMaterial);
        wall.position.set(length / 2, height / 2, 0);
        this.add(wall);
      }
    }

    We can now update the wall draw module to utilize this newly created Wall object:

    // Update our variables
    let tempWall: Wall | null = null;
    const walls: Wall[] = [];
    
    // Replace line creation in onMouseDown with
    if (points.length > 1) {
      const start = points[points.length - 2];
      const end = points[points.length - 1];
      const direction = end.clone().sub(start);
      const length = direction.length();
      const wall = new Wall(length);
      wall.position.copy(start);
      wall.rotation.y = Math.atan2(direction.z, direction.x); // Align along direction (assuming CCW for inward facing)
      scene.add(wall);
      walls.push(wall);
    }
    

    Upon adding the floor and ceiling meshes, we can further transform our wall module into a room generation module. To recap what we have just created: by adding walls one by one, we have given the user the ability to create full rooms with walls, floors, and ceilings—all of which can be adjusted later in the scene.

    User dragging out the wall in 3D perspective camera-view.

    Generating Cabinets with Procedural Modeling

    Our cabinet-related logic can consist of countertops, base cabinets, and wall cabinets.

    Rather than taking several minutes to add the cabinets on a case-by-case basis—for example, like with IKEA’s 3D kitchen builder—it’s possible to add all the cabinets at once via a single user action. One method to employ here is to allow the user to draw high-level cabinet line segments, in the same manner as the wall draw module.

    In this module, each cabinet segment will transform into a linear row of base and wall cabinets, along with a parametrically generated countertop mesh on top of the base cabinets. As the user creates the segments, we can automatically populate each one with pre-made 3D cabinet meshes modeled in software like Blender. Each cabinet’s width, depth, and height parameters will be fixed, while the width of the last cabinet can be dynamic to fill the remaining space. We use a cabinet filler piece mesh here—a regular plank, with its scale-X parameter stretched or compressed as needed.

    Creating the Cabinet Line Segments

    User can make a half-peninsula shape by dragging the cabinetry line segments alongside the walls, then in free-space.

    Here we will construct a dedicated cabinet module, with the aforementioned cabinet line segment logic. This process is very similar to the wall drawing mechanism, where users can draw straight lines on the floor plane using mouse clicks to define both start and end points. Unlike walls, which can be represented by simple thin lines, cabinet line segments need to account for a standard depth of 24 inches to represent the base cabinets’ footprint. These segments do not require closing-polygon logic, as they can be standalone rows or L-shapes, as is common in most kitchen layouts.

    We can further improve the user experience by incorporating snapping functionality, where the endpoints of a cabinet line segment automatically align to nearby wall endpoints or wall intersections, if within a certain threshold (e.g., 4 inches). This ensures cabinets fit snugly against walls without requiring manual precision. For simplicity, we’ll outline the snapping logic in code but focus on the core drawing functionality.

    We can start by defining the CabinetSegment class. Like the walls, this should be its own class, as we will later add the auto-populating 3D cabinet models.

    class CabinetSegment extends THREE.Group {
      public length: number;
    
      constructor(length: number, height: number = 96, depth: number = 24, color: number = 0xff0000) {
        super();
        this.length = length;
        const geometry = new THREE.BoxGeometry(length, height, depth);
        const material = new THREE.MeshBasicMaterial({ color, wireframe: true });
        const box = new THREE.Mesh(geometry, material);
        box.position.set(length / 2, height / 2, depth / 2); // Shift so depth spans 0 to depth (inward)
        this.add(box);
      }
    }

    Once we have the cabinet segment, we can use it in a manner very similar to the wall line segments:

    let cabinetPoints: THREE.Vector3[] = [];
    let tempCabinet: CabinetSegment | null = null;
    const cabinetSegments: CabinetSegment[] = [];
    const CABINET_DEPTH = 24; // everything in inches
    const CABINET_SEGMENT_HEIGHT = 96; // i.e. both wall & base cabinets -> group should extend to ceiling
    const SNAPPING_DISTANCE = 4;
    
    function getSnappedPoint(point: THREE.Vector3): THREE.Vector3 {
      // Simple snapping: check against existing wall points (wallPoints array from wall module)
      for (const wallPoint of wallPoints) {
        if (point.distanceTo(wallPoint) < SNAPPING_DISTANCE) return wallPoint;
      }
      return point;
    }
    
    // Update temporary cabinet preview
    function onMouseMoveCabinet(event: MouseEvent) {
      const point = getFloorIntersection(event);
      if (point && cabinetPoints.length > 0) {
        const snappedPoint = getSnappedPoint(point);
        if (tempCabinet) {
          scene.remove(tempCabinet);
          tempCabinet = null;
        }
        const start = cabinetPoints[cabinetPoints.length - 1];
        const direction = snappedPoint.clone().sub(start);
        const length = direction.length();
        if (length > 0) {
          tempCabinet = new CabinetSegment(length, CABINET_SEGMENT_HEIGHT, CABINET_DEPTH, 0x0000ff); // Blue for temp
          tempCabinet.position.copy(start);
          tempCabinet.rotation.y = Math.atan2(direction.z, direction.x);
          scene.add(tempCabinet);
        }
      }
    }
    
    // Add a new point and draw permanent cabinet segment
    function onMouseDownCabinet(event: MouseEvent) {
      if (event.button !== 0) return;
      const point = getFloorIntersection(event);
      if (point) {
        const snappedPoint = getSnappedPoint(point);
        cabinetPoints.push(snappedPoint);
        if (cabinetPoints.length > 1) {
          const start = cabinetPoints[cabinetPoints.length - 2];
          const end = cabinetPoints[cabinetPoints.length - 1];
          const direction = end.clone().sub(start);
          const length = direction.length();
          if (length > 0) {
            const segment = new CabinetSegment(length, CABINET_SEGMENT_HEIGHT, CABINET_DEPTH, 0xff0000); // Red for permanent
            segment.position.copy(start);
            segment.rotation.y = Math.atan2(direction.z, direction.x);
            scene.add(segment);
            cabinetSegments.push(segment);
          }
        }
        if (tempCabinet) {
          scene.remove(tempCabinet);
          tempCabinet = null;
        }
      }
    }
    
    // Add separate event listeners for cabinet mode (e.g., toggled via UI button)
    window.addEventListener('mousemove', onMouseMoveCabinet);
    window.addEventListener('mousedown', onMouseDownCabinet);

    Auto-Populating the Line Segments with Live Cabinet Models

    Here we fill 2 line-segments with 3D cabinet models (base & wall), and countertop meshes.

    Once the cabinet line segments are defined, we can procedurally populate them with detailed components. This involves dividing each segment vertically into three layers: base cabinets at the bottom, countertops in the middle, and wall cabinets above. For the base and wall cabinets, we’ll use an optimization function to divide the segment’s length into standard widths (preferring 30-inch cabinets), with any remainder filled using the filler piece mentioned above. Countertops are even simpler—they form a single continuous slab stretching the full length of the segment.

    The base cabinets are set to 24 inches deep and 34.5 inches high. Countertops add 1.5 inches in height and extend to 25.5 inches deep (including a 1.5-inch overhang). Wall cabinets start at 54 inches high (18 inches above the countertop), measure 12 inches deep, and are 30 inches tall. After generating these placeholder bounding boxes, we can replace them with preloaded 3D models from Blender using a loading function (e.g., via GLTFLoader).

    // Constants in inches
    const BASE_HEIGHT = 34.5;
    const COUNTER_HEIGHT = 1.5;
    const WALL_HEIGHT = 30;
    const WALL_START_Y = 54;
    const BASE_DEPTH = 24;
    const COUNTER_DEPTH = 25.5;
    const WALL_DEPTH = 12;
    
    const DEFAULT_MODEL_WIDTH = 30;
    
    // Filler-piece information
    const FILLER_PIECE_FALLBACK_PATH = 'models/filler_piece.glb'
    const FILLER_PIECE_WIDTH = 3;
    const FILLER_PIECE_HEIGHT = 12;
    const FILLER_PIECE_DEPTH = 24;

    To handle individual cabinets, we’ll create a simple Cabinet class that manages the placeholder and model loading.

    import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';
    
    const loader = new GLTFLoader();
    
    class Cabinet extends THREE.Group {
      constructor(width: number, height: number, depth: number, modelPath: string, color: number) {
        super();
    
        // Placeholder box
        const geometry = new THREE.BoxGeometry(width, height, depth);
        const material = new THREE.MeshBasicMaterial({ color });
        const placeholder = new THREE.Mesh(geometry, material);
        this.add(placeholder);
    
    
        // Load and replace with model async
    
        // Case: Non-standard width -> use filler piece
        if (width < DEFAULT_MODEL_WIDTH) {
          loader.load(FILLER_PIECE_FALLBACK_PATH, (gltf) => {
            const model = gltf.scene;
            model.scale.set(
              width / FILLER_PIECE_WIDTH,
              height / FILLER_PIECE_HEIGHT,
              depth / FILLER_PIECE_DEPTH,
            );
            this.add(model);
            this.remove(placeholder);
          });
        }
    
        loader.load(modelPath, (gltf) => {
          const model = gltf.scene;
          model.scale.set(width / DEFAULT_MODEL_WIDTH, 1, 1); // Scale width
          this.add(model);
          this.remove(placeholder);
        });
      }
    }

    Then, we can add a populate method to the existing CabinetSegment class:

    function splitIntoCabinets(width: number): number[] {
      const cabinets: number[] = [];
      // Preferred width
      while (width >= DEFAULT_MODEL_WIDTH) {
        cabinets.push(DEFAULT_MODEL_WIDTH);
        width -= DEFAULT_MODEL_WIDTH;
      }
      if (width > 0) {
        cabinets.push(width); // Custom empty slot
      }
      return cabinets;
    }
    
    class CabinetSegment extends THREE.Group {
      // ... (existing constructor and properties)
    
      populate() {
        // Remove placeholder line and box
        while (this.children.length > 0) {
          this.remove(this.children[0]);
        }
    
        let offset = 0;
        const widths = splitIntoCabinets(this.length);
    
        // Base cabinets
        widths.forEach((width) => {
          const baseCab = new Cabinet(width, BASE_HEIGHT, BASE_DEPTH, 'models/base_cabinet.glb', 0x8b4513);
          baseCab.position.set(offset + width / 2, BASE_HEIGHT / 2, BASE_DEPTH / 2);
          this.add(baseCab);
          offset += width;
        });
    
        // Countertop (single slab, no model)
        const counterGeometry = new THREE.BoxGeometry(this.length, COUNTER_HEIGHT, COUNTER_DEPTH);
        const counterMaterial = new THREE.MeshBasicMaterial({ color: 0xa9a9a9 });
        const counter = new THREE.Mesh(counterGeometry, counterMaterial);
        counter.position.set(this.length / 2, BASE_HEIGHT + COUNTER_HEIGHT / 2, COUNTER_DEPTH / 2);
        this.add(counter);
    
        // Wall cabinets
        offset = 0;
        widths.forEach((width) => {
          const wallCab = new Cabinet(width, WALL_HEIGHT, WALL_DEPTH, 'models/wall_cabinet.glb', 0x4b0082);
          wallCab.position.set(offset + width / 2, WALL_START_Y + WALL_HEIGHT / 2, WALL_DEPTH / 2);
          this.add(wallCab);
          offset += width;
        });
      }
    }
    
    // Call for each cabinetSegment after drawing
    cabinetSegments.forEach((segment) => segment.populate());

    Further Improvements & Optimizations

    We can further improve the scene with appliances, varying-height cabinets, crown molding, etc.

    At this point, we should have the foundational elements of room and cabinet creation logic fully in place. In order to take this project from a rudimentary segment-drawing app into the practical realm—along with dynamic cabinets, multiple realistic material options, and varying real appliance meshes—we can further enhance the user experience through several targeted refinements:

    • We can implement a detection mechanism to determine if a cabinet line segment is in contact with a wall line segment.
      • For cabinet rows that run parallel to walls, we can automatically incorporate a backsplash in the space between the wall cabinets and the countertop surface.
      • For cabinet segments not adjacent to walls, we can remove the upper wall cabinets and extend the countertop by an additional 15 inches, aligning with standard practices for kitchen islands or peninsulas.
    • We can introduce drag-and-drop functionality for appliances, each with predefined widths, allowing users to position them along the line segment. This integration will instruct our cabinet-splitting algorithm to exclude those areas from dynamic cabinet generation.
    • Additionally, we can give users more flexibility by enabling the swapping of one appliance with another, applying different textures to our 3D models, and adjusting default dimensions—such as wall cabinet depth or countertop overhang—to suit specific preferences.

    All these core components lead us to a comprehensive, interactive application that enables the rapid rendering of a complete kitchen: cabinets, countertops, and appliances, in a fully interactive, user-driven experience.

    The aim of this project is to demonstrate that complex 3D tasks can be distilled down to simple user actions. It is fully possible to take the high-dimensional complexity of 3D tooling—with seemingly limitless controls—and encode these complexities into low-dimensional, easily adjustable parameters. Whether the developer chooses to expose these parameters to the user or an LLM, the end result is that historically complicated 3D processes can become simple, and thus the entire contents of a 3D scene can be fully transformed with only a few parameters.

    If you find this type of development interesting, have any great ideas, or would love to contribute to the evolution of this product, I strongly welcome you to reach out to me via email. I firmly believe that only recently has it become possible to build home design software that is so wickedly fast and intuitive that any person—regardless of architectural merit—will be able to design their own single-family home in less than 5 minutes via a web app, while fully adhering to local zoning, architectural, and design requirements. All the infrastructure necessary to accomplish this already exists; all it takes is a team of crazy, ambitious developers looking to change the standard of architectural home design.






  • Spear Phishing Campaign Delivers VIP Keylogger via Email Attachment



    Introduction

    Earlier this year, we published a white paper detailing the VIP keylogger, a sophisticated malware strain leveraging spear-phishing and steganography to infiltrate victims’ systems. The keylogger is known for its data theft capabilities, particularly targeting web browsers and user credentials.

    In a recently identified campaign, the threat actors have once again employed spear-phishing tactics to distribute the malware. However, unlike the previous iteration, this campaign uses an AutoIt-based injector to deploy the final payload, VIP Keylogger.

    The malware is typically delivered through phishing emails containing malicious attachments or embedded links. Once executed, it installs the VIP keylogger, which is specifically designed to steal sensitive information by logging keystrokes, capturing credentials from widely used web browsers like Chrome, MS Edge, and Mozilla, and monitoring clipboard activity.

    In this campaign, the AutoIt script is utilized to deliver and execute the malicious payload. Threat actors often leverage AutoIt due to its ease of obfuscation and ability to compile scripts into executables, which evade traditional AV solutions.

    Infection chain and Process tree:

    The campaign begins with a spear-phishing email carrying a ZIP file named “payment receipt_USD 86,780.00.pdf.pdf.z.”. This archive contains a malicious executable disguised as “payment receipt_USD 86,780.00 pdf.exe”, tricking users into believing it’s a harmless document. Once executed, the executable runs an embedded AutoIt script and drops two encrypted files leucoryx and avenes into the temp folder. These files are decrypted at runtime, and the final payload, VIP Keylogger, is injected into RegSvcs.exe using process hollowing techniques, as shown in the figures below.

    Fig.: Infection chain

     

    Fig.: Process Tree

    Infiltration:

    The campaign begins with a spear-phishing email carrying a ZIP file named “payment receipt_USD 86,780.00 pdf.pdf.z.” This archive contains a malicious executable disguised as “payment receipt_USD 86,780.00 pdf.exe,” tricking users into thinking it’s a harmless document. Once executed, the embedded AutoIt script runs and drops the VIP Keylogger onto the system, as shown in the images below.

    Fig.: Email

     

    The ZIP attachment contains the executable, as shown below.

    Fig.: Attachment

    During execution, two files named leucorynx and aveness are dropped in the system’s Temp directory, as shown in the figure below.

    AutoIt Script:

     

    Fig.: AutoIt Script

     

    This AutoIt script decrypts and executes the dropped payload in memory. It first locates the encrypted file leucoryx in the temp directory, reads its content, and decrypts it using a custom XOR function (KHIXTKVLO). The decrypted data is stored in a memory structure.
    It then retrieves the pointer to the decrypted payload and uses DllCall to allocate executable memory and copy the payload into it. A second DllCall triggers execution and runs the payload in memory.

    The leucorynx file contains the key used to decode the file, as shown in the figure below.

    Fig.: leucorynx

    The malware drops a .vbs script in the Startup folder to maintain persistence. This script executes the primary payload located in the “AppData\Local” directory.
    The VB script ensures that the payload (definitiveness.exe) located in the “AppData\Local\Dunlop” directory is executed every time the user logs in, allowing it to operate silently in the background after each reboot.

    Fig.: Persistence

    The dropped file avness is loaded into memory, as shown in the figures below. Once loaded, its contents are passed to a custom decryption routine, which is responsible for unpacking or decoding the embedded payload.

    The figure below shows the decryption function, which takes the address of the encrypted payload and the XOR key as arguments.

     

    Fig.: Decryption Function

     

     

    The figure below highlights the decryption loop, where the payload is iteratively decoded. The memory dump shows the decrypted content of the payload.

    Fig.: Decryption Loop

    The decrypted payload is the .NET-based VIP Keylogger.

    Process Hollowing:

    The figure below demonstrates the use of process hollowing, where RegSvcs.exe is spawned in a suspended state using CreateProcess. This enables the malware to unmap the original code and inject its own payload into the process memory before resuming execution.

    Fig: Targeted process RegSvcs.exe

    As shown in the figures below, the decrypted payload is mapped into the address space of RegSvcs.exe. The memory dump contains strings associated with the payload.

    Fig: Injected code in RegSvcs.exe

     

    Fig: Strings related to VIP Keylogger

     

    Payload: VIP Keylogger

    Fig. Exfiltrate data through SMTP

     

    Fig. Exfiltrate data to c2

     

    The final payload delivered in this campaign is VIP Keylogger, for which we have already provided a comprehensive analysis of its functionality, capabilities, and behaviour in our technical paper on VIP Keylogger.

    IOCs:

    MD5 | Filename
    F0AD3189FE9076DDD632D304E6BEE9E8 | payment receipt_USD 86,780.00 pdf.exe
    0B0AE173FABFCE0C5FBA521D71895726 | VIP Keylogger

    Domain/IP
    hxxp[:]//51.38.247.67:8081

     

    Protection:

    Trojan.AgentCiR

    Trojan.YakbeexMSIL.ZZ4

     

    MITRE ATT&CK:

     

    Tactic | Technique ID | Name
    Obfuscation | T1027 | Obfuscated Files or Information
    Execution | T1204.002 | User Execution: Malicious File
    Execution | T1059.006 | Command and Scripting Interpreter: Python
    Screen Capture | T1113 | Screen Capture
    Gather Victim Host Information | T1592 | Collects system info
    Input Capture | T1056 | Keylogging
    Defense Evasion | T1055.002 | Process Injection: Portable Executable Injection
    Content Injection | T1659 | Injecting malicious code into systems
    Command and Control | T1071.001 | Application Layer Protocol: Web Protocols

     

     

    Author:

    Vaibhav Billade

    Rumana Siddiqui

     




  • LINQ’s Enumerable.Range to generate a sequence of consecutive numbers | Code4IT




    When you need to generate a sequence of numbers in ascending order, you can just use a while loop with a counter, or you can use Enumerable.Range.

    This method, which you can find in the System.Linq namespace, allows you to generate a sequence of numbers by passing two parameters: the start number and the count of items to generate.

    Enumerable.Range(start:10, count:4) // [10, 11, 12, 13]
    

    ⚠ Notice that the second parameter is not the last number of the sequence. Rather, it’s the length of the returned collection.

    Clearly, it also works if the start parameter is negative:

    Enumerable.Range(start:-6, count:3) // [-6, -5, -4]
    

    But it will not work if the count parameter is negative: in fact, it will throw an ArgumentOutOfRangeException:

    Enumerable.Range(start:1, count:-23) // Throws ArgumentOutOfRangeException
    // with message "Specified argument was out of the range of valid values"(Parameter 'count')
    

    ⚠ Beware of overflows: it’s not a circular sequence, so if start + count - 1 exceeds int.MaxValue you will get another ArgumentOutOfRangeException.

    Enumerable.Range(start:Int32.MaxValue, count:2) // Throws ArgumentOutOfRangeException
    

    💡 Smart tip: you can use Enumerable.Range to generate collections of other types! Just use LINQ’s Select method in conjunction with Enumerable.Range:

    Enumerable.Range(start:0, count:5)
        .Select(_ => "hey!"); // ["hey!", "hey!", "hey!", "hey!", "hey!"]
    

    Notice that this pattern is not very efficient: you first have to build a collection with N integers to then generate a collection of N strings. If you care about performance, go with a simple while loop – if you need a quick and dirty solution, this other approach works just fine.
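    By the way, if all you need is the same value repeated N times, Enumerable.Repeat expresses the intent more directly, without going through a sequence of integers first:

    Enumerable.Repeat("hey!", 5); // ["hey!", "hey!", "hey!", "hey!", "hey!"]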

    Further readings

    There are lots of ways to achieve a similar result: another interesting one is by using the yield return statement:

    🔗 C# Tip: use yield return to return one item at a time | Code4IT

    This article first appeared on Code4IT 🐧

    Wrapping up

    In this C# tip, we learned how to generate collections of numbers using LINQ.

    This is an incredibly useful LINQ method, but you have to remember that the second parameter does not indicate the last value of the collection; rather, it’s the length of the collection itself.

    I hope you enjoyed this article! Let’s keep in touch on Twitter or on LinkedIn, if you want! 🤜🤛

    Happy coding!

    🐧






  • Top Benefits for Organizations & Seqrite EDR


    In today’s hyper-connected world, cyberattacks are no longer just a technical issue; they are a serious business risk. From ransomware shutting down operations to data breaches costing millions, the threat landscape is constantly evolving. According to IBM’s 2024 Cost of a Data Breach Report, the global average cost of a data breach has reached 4.45 million dollars, marking a 15 percent increase over the past three years. As a result, more organizations are turning to EDR cybersecurity solutions.

    EDR offers real-time monitoring, threat detection, and rapid incident response to protect endpoints such as desktops and laptops from malicious activity. These capabilities are critical for minimizing the impact of attacks and maintaining operational resilience. Below are the top benefits of implementing EDR cybersecurity in your organization.

    Top EDR Cybersecurity Benefits 

    1. Improved Visibility and Threat Awareness

    In a modern enterprise, visibility across all endpoints is crucial. EDR offers a comprehensive lens into every device, user activity, and system process within your network.

    • Continuous Endpoint Monitoring

      EDR agents installed on endpoints continuously collect data related to file access, process execution, login attempts, and more. This enables 24/7 monitoring of activity across desktops and mobile devices, regardless of location.

    • Behavioral Analytics

    EDR solutions use machine learning to understand normal behavior across systems and users. When anomalies occur—like unusual login patterns or unexpected file transfers—they are flagged for investigation.

    2. Faster Threat Response and Containment

    In cybersecurity, response speed is critical. Delayed action can lead to data loss, system compromise, and reputational damage.

    • Real-Time Containment

      EDR solutions enable security teams to isolate infected endpoints instantly, preventing malware from spreading laterally through the network. Even if the endpoint is rebooted or disconnected, containment policies remain active.

    • Automated Response Workflows

      EDR systems support predefined rules for automatic responses such as:

      • Killing malicious processes
      • Quarantining suspicious files
      • Blocking communication with known malicious IPs
      • Disconnecting compromised endpoints from the network
    • Protection for Offline Devices

      Remote endpoints or those operating without an internet connection remain protected. Security policies continue to function, ensuring consistent enforcement even in disconnected environments.

    According to IDC’s 2024 report on endpoint security, companies with automated EDR solutions reduced their average incident containment time by 60 percent.

     

    3. Regulatory Compliance and Reporting

    Compliance is no longer optional—especially for organizations in healthcare, finance, government, and other regulated sectors. EDR tools help meet these requirements.

    • Support for Compliance Standards

      EDR solutions help organizations meet GDPR, HIPAA, PCI-DSS, and the Indian DPDP Act by:

      • Enforcing data encryption
      • Applying strict access controls
      • Maintaining audit logs of all system and user activities
      • Enabling rapid response and documentation of security incidents
    • Simplified Audit Readiness

      Automated report generation and log retention ensure that organizations can quickly present compliance evidence during audits.

    • Proactive Compliance Monitoring

      EDR platforms identify areas of non-compliance and provide recommendations to fix them before regulatory issues arise.

    HIPAA, for instance, requires logs to be retained for at least six years. EDR solutions ensure this requirement is met with minimal manual intervention.

    4. Cost Efficiency and Operational Gains

    Strong cybersecurity is not just about prevention; it is also about operational and financial efficiency. EDR helps reduce the total cost of ownership of security infrastructure.

    • Lower Incident Management Costs

      According to Deloitte India’s Cybersecurity Report 2024, companies using EDR reported an average financial loss of 42 million rupees per attack. In contrast, companies without EDR reported average losses of 253 million rupees.

    • Reduced Business Disruption

      EDR solutions enable security teams to isolate only affected endpoints rather than taking entire systems offline. This minimizes downtime and maintains business continuity.

    • More Efficient Security Teams

      Security analysts often spend hours manually investigating each alert. EDR platforms automate much of this work by providing instant analysis, root cause identification, and guided response steps. This frees up time for more strategic tasks like threat hunting and policy improvement.

    The Ponemon Institute’s 2024 report notes that organizations using EDR reduced average investigation time per incident by 30 percent.

    5. Protection Against Advanced and Evolving Threats

    Cyberthreats are evolving rapidly, and many now bypass traditional defenses. EDR solutions are built to detect and respond to these sophisticated attacks.

    • Detection of Unknown Threats

      Unlike traditional antivirus software, EDR uses heuristic and behavioral analysis to identify zero-day attacks and malware that do not yet have known signatures.

    • Defense Against Advanced Persistent Threats (APTs)

      EDR systems correlate seemingly minor events, such as login anomalies, privilege escalations, and file modifications, into a single threat narrative that identifies stealthy attacks.

    • Integration with Threat Intelligence

      EDR platforms often incorporate global and local threat feeds, helping organizations respond to emerging threats faster and more effectively.

    Verizon’s 2024 Data Breach Investigations Report found that 70 percent of successful breaches involved endpoints, highlighting the need for more advanced protection mechanisms like EDR.

    Why Choose Seqrite EDR

    Seqrite EDR cybersecurity is designed to meet the needs of today’s complex and fast-paced enterprise environments. It provides centralized control, powerful analytics, and advanced response automation, all in a user-friendly package.

    Highlights of Seqrite EDR Cybersecurity:

    • Powered by GoDeep.AI for deep behavioral analysis
    • Unified dashboard for complete endpoint visibility
    • Seamless integration with existing IT infrastructure
    • Resilient protection for remote and offline devices
    • Scalability for growing enterprise needs

    Seqrite EDR is especially well-suited for industries such as finance, healthcare, manufacturing, and government, where both threat risk and compliance pressure are high.

    Conclusion

    EDR cybersecurity solutions have become a strategic necessity for organizations of all sizes. They offer comprehensive protection by detecting, analyzing, and responding to threats across all endpoints in real time. More importantly, they help reduce incident costs, improve compliance, and empower security teams with automation and insight.

    Seqrite Endpoint Detection and Response provides a powerful, cost-effective way to future-proof your organization’s cybersecurity. By adopting Seqrite EDR, you can strengthen your cyber defenses, reduce operational risk, and ensure compliance with evolving regulations.

    To learn more, visit www.seqrite.com and explore how Seqrite EDR can support your business in the age of intelligent cyber threats.

     




  • do NOT use nameof to give constants a value | Code4IT



    In C#, nameof can be quite useful. But it has some drawbacks, if used the wrong way.


    As per Microsoft’s definition,

    A nameof expression produces the name of a variable, type, or member as the string constant.

    This means that you can have, for example

    void Main()
    {
        PrintItems("hello");
    }
    
    public void PrintItems(string items)
    {
        Console.WriteLine(nameof(items));
    }
    

    that will print “items”, and not “hello”: this is because we are printing the name of the variable, items, and not its runtime value.
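    A typical, legitimate use of nameof is keeping parameter names in sync with the code, for example when validating arguments:

    public void PrintItems(string items)
    {
        if (items is null)
            throw new ArgumentNullException(nameof(items)); // if you rename items, this stays in sync

        Console.WriteLine(items);
    }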

    A real example I saw in my career

    In some of the projects I’ve worked on over the years, I saw an odd approach that I highly recommend NOT to use: populating constants with the name of the constant itself:

    const string User_Table = nameof(User_Table);
    

    and then use the constant name to access stuff on external, independent systems, such as API endpoints or Databases:

    const string User_Table = nameof(User_Table);
    
    var users = db.GetAllFromTable(User_Table);
    

    The reasons behind this, in my teammates’ opinion, are that:

    1. It’s easier to write
    2. It’s more performant: we’re using constants that are filled at compile time, not at runtime
    3. You can just rename the constant if you need to access a new database table.

    I do not agree with them: the third point, especially, is pretty problematic.

    Why this approach should not be used

    We are binding the data access to the name of a constant, and not to its value.

    We could end up in big trouble: from one day to the next, the system might not be able to reach the User table anymore, because the name it relies on no longer exists.

    How is it possible? It’s a constant, it can’t change! No: it’s a constant whose value changes if the constant name changes.

    It can change for several reasons:

    1. A developer, by mistake, renames the constant. For example, from User_Table to Users_Table.
    2. An automatic tool (like a Linter) with wrong configurations updates the constants’ names: from User_Table to USER_TABLE.
    3. New team styleguides are followed blindly: if the new rule is that “constants must not contain underscores” and you apply it everywhere, you’ll end up in trouble.

    To me, those are valid reasons not to use nameof to give a value to a constant.
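    If you can, a safer alternative is to decouple the constant’s name from its value and assign the actual table name explicitly:

    // The value no longer depends on how the constant is named:
    // renaming or reformatting User_Table does not break the data access.
    const string User_Table = "User_Table";

    var users = db.GetAllFromTable(User_Table);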

    How to overcome it

    If this approach is present in your codebase and it’s too time-consuming to update it everywhere, not everything is lost.

    You must absolutely do just one thing to prevent all the issues I listed above: add tests, and test on the actual value.

    If you’re using Moq, for instance, you should test the database access we saw before as:

    // initialize and run the method
    [...]
    
    // test for the Table name
    _mockDb.Verify(db => db.GetAllFromTable("User_Table"));
    

    Notice that here you must test against the actual name of the table: if you write something like

    _mockDb.Verify(db => db.GetAllFromTable(It.IsAny<string>()));
    

    or

    _mockDb.Verify(db => db.GetAllFromTable(DbAccessClass.User_Table));
    // say that DbAccessClass is the name of the class that uses the data access shown above
    

    the whole test becomes pointless.

    Further readings

    This article lies in the middle of my C# tips 🔗 and my Clean Code tips 🔗.

    This article first appeared on Code4IT 🐧

    Wrapping up

    In this article, we’ve learned that you can assign a constant its own name as a value, using nameof, but also that you shouldn’t.

    Have you ever seen this approach? In your opinion, what are some other benefits and disadvantages of it? Drop a comment below! 📩

    I hope you enjoyed this article! Let’s keep in touch on Twitter or on LinkedIn, if you want! 🤜🤛

    Happy coding!

    🐧






  • How to customize Conventional Commits in a .NET application using GitHooks | Code4IT



    Using Conventional Commits you can define a set of rules useful for writing meaningful commit messages. Using NPM. Yes, in a dotNET application!


    Setting team conventions is a crucial step to have the project prepared to live long and prosper 🖖

    A good way to set some clarity is by enforcing rules on GIT commit messages: you can require devs to specify the reason behind code changes, so that you can understand the history and the purpose of each commit. Also, if you have well-crafted commit messages, Pull Requests become easier to understand, leading to better code.

    Conventional Commits help you set such rules, and help you level up your commit history. In this article, we will learn how to add Conventional Commits in a .NET application.

    Conventional Commits

    Conventional Commits are a set of rules that help you write commit messages using a format that has multiple purposes:

    • they help developers understand the history of a git branch;
    • they help PR reviewers focus on the Pull Request by understanding the changes proposed by the developer;
    • using automated tools, they help with versioning the application – this is useful when using Semantic Versioning;
    • they allow you to create automated Changelog files.

    So, what does an average Conventional Commit look like?

    There’s not just one way to specify such formats.

    For example, you can specify that you’ve added a new feature (feat) to your APIs and describe it shortly:

    feat(api): send an email to the customer
    

    Or you can explain that you’ve fixed a bug (using fix) and add a full description of the scope of the commit.

    fix: prevent racing condition
    
    Introduce a request id and a reference to latest request. Dismiss
    incoming responses other than from latest request.
    

    There are several types of commits that you can support, such as:

    • feat, used when you add a new feature to the application;
    • fix, when you fix a bug;
    • docs, used to add or improve documentation to the project;
    • refactor, used – well – after some refactoring;
    • test, used when adding tests or fixing broken ones.

    All of this prevents developers from writing commit messages such as “something”, “fixed bug”, or “some stuff”.

    Some Changes

    So, now, it’s time to include Conventional Commits in our .NET applications.

    What is our goal?

    For the sake of this article, I’m going to add Conventional Commits in a .NET 7 API project. The same approach works for all the other types of .NET projects: as long as you have a Solution to work with, I’ve got you covered.

    Well, actually, the following approach can be used by every project, not only those based on .NET: the reason I wrote this article is that many dotnet developers are not confident in using and configuring NPM packages, so my personal goal with this article is to give you the basics of such tools and configurations.

    For the sake of this article, I’m going to explain how to add Conventional Commits with a custom format.

    Say that you want to associate each commit to a Jira task. As you may know, Jira tasks have an ID composed of a project prefix and a numeric Id. So, for a project named FOO, you can have a task with Id FOO-123.

    The goal of this article is, then, to force developers to create Commit messages such as

    feat/FOO-123: commit short description
    

    or, if you want to add a full description of the commit,

    feat/FOO-123: commit short description
    
    Here we can have the full description of the task.
    And it can also be on multiple lines.
    

    We are going to work at Solution level; you don’t even need an IDE: just Notepad and a Terminal are fine. Before continuing, open your solution folder and a Console pointing to the same folder.

    Install NPM in your folder

    Yes, even if the main application is built with .NET, we are gonna need some NPM packages to set up our Conventional Commits.

    First things first: head to the Command Line and run npm init.

    After specifying some configurations (Package name? Licence? Author?), you will have a brand new package.json file.

    Now we can move on and add a GIT Hook.

    Husky: integrate GIT Hooks to improve commit messages

    To use Conventional Commits we have to “intercept” our GIT actions: we will need to run a specific tool right after a commit message has been written, validate it and, in case it does not follow the rules we’ve set, abort the operation.

    We will use Husky 🔗: it’s a utility package that lets us act on our commit messages and, in general, integrate our workflow with Git Hooks.

    Head to the terminal, and install Husky by running

    npm install husky --save-dev
    

    This command will add a dependency to Husky, as you can see from the new item listed in the package.json file:

    "devDependencies": {
        "husky": "^8.0.3"
    }
    

    Finally, to enable Git Hooks, we have to run

    npm pkg set scripts.prepare="husky install"
    

    and notice the new section in the package.json.

    "scripts": {
        "prepare": "husky install"
    },
    

    Even with just these simple steps, we can see a first result: if you run git commit you will see a text editor open. Here you can write your commit message.

    Git commit message editor

    Save and close the file. The commit message has been applied, as you can see by running git log --oneline.

    CommitLint: a package to validate Commit messages

    We need to install and configure CommitLint, the NPM package that does the dirty job.

    On the same terminal as before, run

    npm install --save-dev @commitlint/config-conventional @commitlint/cli
    

    to install both commitlint/config-conventional, which adds the generic functionality, and commitlint/cli, which allows us to run the scripts via the CLI.

    You will see both packages listed in your package.json file:

    "devDependencies": {
        "@commitlint/cli": "^17.4.2",
        "@commitlint/config-conventional": "^17.4.2",
        "husky": "^8.0.3"
    }
    

    Next step: scaffold the file that handles the configurations on how we want our Commit Messages to be structured.

    On the root, create a brand new file, commitlint.config.js, and paste this snippet:

    module.exports = {
      extends: ["@commitlint/config-conventional"],
    }
    

    This snippet tells Commitlint to use the default conventions, such as feat(api): send an email.

    To test the default rules without issuing any real commit, we have to install the previous packages globally, so that they can be accessed outside the scope of the git hooks:

    npm install -g @commitlint/cli @commitlint/config-conventional
    

    and, in a console, we can run

    echo 'foo: a message with wrong format' | commitlint
    

    and see the error messages

    Testing commitlint with errors

    At this point, we still don’t have CommitLint ready to validate our commit messages. In fact, if you try to commit your changes with an invalid message, you will see that the message passes the checks (because there are no checks!), and your changes get committed.

    We need to do some more steps.

    First of all, we have to create a folder named .husky that will be used by Husky to understand which commands are supported.

    Notice: you have to keep the dot at the beginning of the folder name: it’s .husky, not husky.

    Then we need to add a new file within that folder to tell Husky that it needs to run CommitLint.

    npx husky add .husky/commit-msg  'npx --no -- commitlint --edit ${1}'
    

    We’re almost ready: everything is set, but we need to activate the functionality. So you just have to run npx husky install to see it working:

    CommitLint correctly validates the commit message

    Commitlint.config.js: defining explicit rules on Git Messages

    Now, remember that we want to enforce certain rules on the commit message.

    We don’t want them to be like

    feat(api): send an email to the customer when a product is shipped
    

    but rather like

    feat/FOO-123: commit short description
    
    Here we can have the full description of the task.
    And it can also be on multiple lines.
    

    This means that we have to configure the commitlint.config.js file to override default values.

    Let’s have a look at a valid Commitlint file:

    module.exports = {
      extends: ["./node_modules/@commitlint/config-conventional"],
      parserPreset: {
        parserOpts: {
          headerPattern: /^(\w*)\/FOO-(\w*): (.*)$/,
          headerCorrespondence: ["type", "scope", "subject"],
        },
      },
      rules: {
        "type-enum": [2, "always", ["feat", "fix", "hot", "chore"]],
        "header-min-length": [2, "always", 10],
        "header-max-length": [2, "always", 50],
        "body-max-line-length": [2, "always", 72],
        "subject-case": [
          2,
          "never",
          ["sentence-case", "start-case", "pascal-case", "upper-case"],
        ],
      },
    }
    

    Time to deep dive into those sections:

    The ParserOpts section: define how CommitLint should parse text

    The first part tells the parser how to parse the header message:

    parserOpts: {
        headerPattern: /^(\w*)\/FOO-(\w*): (.*)$/,
        headerCorrespondence: ["type", "scope", "subject"],
    },
    

    It’s a regular expression, where every matching group has its correspondence in the headerCorrespondence array.

    So, in the message hello/FOO-123: my tiny message, we will have type=hello, scope=123, subject=my tiny message.
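
    If regular expressions are not your bread and butter, here’s a quick C# sketch (just an illustration, not part of the CommitLint setup) showing how that pattern splits the header into the three parts listed in headerCorrespondence:

    using System;
    using System.Text.RegularExpressions;

    class HeaderPatternDemo
    {
        static void Main()
        {
            // same pattern defined in commitlint.config.js
            // (the \/ in the JS pattern is just an escaped forward slash)
            var headerPattern = new Regex(@"^(\w*)/FOO-(\w*): (.*)$");

            var match = headerPattern.Match("hello/FOO-123: my tiny message");

            // each capture group maps, in order, to an entry of headerCorrespondence
            Console.WriteLine($"type   : {match.Groups[1].Value}"); // hello
            Console.WriteLine($"scope  : {match.Groups[2].Value}"); // 123
            Console.WriteLine($"subject: {match.Groups[3].Value}"); // my tiny message
        }
    }
    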

    Rules: define specific rules for each message section

    The rules section defines the rules to be applied to each part of the message structure.

    rules:
    {
        "type-enum": [2, "always", ["feat", "fix", "hot", "chore"]],
        "header-min-length": [2, "always", 10],
        "header-max-length": [2, "always", 50],
        "body-max-line-length": [2, "always", 72],
        "subject-case": [
            2,
            "never",
            ["sentence-case", "start-case", "pascal-case", "upper-case"],
        ],
    },
    

    The first value is a number that expresses the severity of the rule:

    • 0: the rule is disabled;
    • 1: show a warning;
    • 2: it’s an error.

    The second value defines if the rule must be applied (using always), or if it must be reversed (using never).

    The third value provides generic arguments for the related rule. For example, "header-max-length": [2, "always", 50] means that the header must always be at most 50 characters long.

    You can read more about each and every configuration on the official documentation 🔗.

    Setting the commit structure using .gitmessage

    Now that everything is set, we can test it.

    But not before helping devs with a simple trick! As you remember, when you run git commit without specifying the message, an editor appears with some hints about the structure of the commit message.

    Default commit editor

    You can set your own text with hints about the structure of the messages.

    You just need to create a file named .gitmessage and put some text in it, such as:

    # <type>/FOO-<jira-ticket-id>: <title>
    # YOU CAN WRITE WHATEVER YOU WANT HERE
    # allowed types: feat | fix | hot | chore
    # Example:
    #
    # feat/FOO-01: first commit
    #
    # No more than 50 chars. #### 50 chars is here:  #
    
    # Remember blank line between title and body.
    
    # Body: Explain *what* and *why* (not *how*)
    # Wrap at 72 chars. ################################## which is here:  #
    #
    

    Now, we have to tell Git to use that file as a template:

    git config commit.template ./.gitmessage
    

    and... TA-DAH! Here’s your message template!

    Customized message template

    Putting all together

    Finally, we have everything in place: git hooks, commit template, and template hints.

    If we run git commit, we will see the editor open with the template message we’ve defined before. Now, type A message with wrong format, save, close the editor, and you’ll see that the commit is aborted.

    Commit message with wrong format gets rejected

    Now, if you run git commit again, the editor shows up once more; type feat/FOO-123: a valid message, and you’ll see that the commit goes through.

    Further readings

    Conventional Commits is a project that lists a set of specifications for writing such good messages. You can read more here:

    🔗 Conventional Commits

    As we saw before, there are a lot of configurations that you can set for your commits. You can see the full list here:

    🔗 CommitLint rules

    This article first appeared on Code4IT 🐧

    This new kind of commit message works well with Semantic Versioning, which can be useful to publish package versions with a meaningful version number, such as 2.0.1:
    🔗 Semantic Versioning

    And, to close the loop, Semantic Versioning can be easily integrated with CI pipelines. If you use .NET APIs and want to deploy your APIs to Azure using GitHub Actions, you can start from this article and add SemVer:
    🔗 How to deploy .NET APIs on Azure using GitHub actions

    Wrapping up

    In this article, we’ve learned what Conventional Commits are, how to add them using Husky and NPM, and how to configure our folder to use such tools.

    The steps we’ve seen before work for every type of application, even those not related to dotnet.

    So, to recap everything, we have to:

    1. Install NPM: npm init;
    2. Install Husky: npm install husky --save-dev;
    3. Enable Husky: npm pkg set scripts.prepare="husky install";
    4. Install CommitLint: npm install --save-dev @commitlint/config-conventional @commitlint/cli;
    5. Create the commitlint.config.js file: module.exports = { extends: ["@commitlint/config-conventional"] };
    6. Create the Husky folder: mkdir .husky;
    7. Link Husky and CommitLint: npx husky add .husky/commit-msg 'npx --no -- commitlint --edit ${1}';
    8. Activate the whole functionality: npx husky install;

    Then, you can customize the commitlint.config.js file and, if you want, create a better .gitmessage file.

    I hope you enjoyed this article! Let’s keep in touch on Twitter or on LinkedIn, if you want! 🤜🤛

    Happy coding!

    🐧





    Source link

  • List Pattern to match a collection against a sequence of patterns | Code4IT

    List Pattern to match a collection against a sequence of patterns | Code4IT


    By using list patterns on an array or a list, you can check whether it contains the values you expect in specific positions.


    With C# 11 we have an interesting new feature: list patterns.

    You can, in fact, use the is operator to check if an array has the exact form that you expect.

    Take this method as an example.

    Introducing List Patterns

    string YeahOrError(int[] s)
    {
        if (s is [1, 2, 3]) return "YEAH";
        return "error!";
    }
    

    As you can imagine, the previous method returns YEAH if the input array is exactly [1, 2, 3]. You can, in fact, try it by running some tests:

    [Test]
    public void PatternMatchingWorks()
    {
        Assert.That(YeahOrError(new int[] { 1, 2, 3 }), Is.EqualTo("YEAH"));
        Assert.That(YeahOrError(new int[] { 1, 2, 3, 4 }), Is.EqualTo("error!"));
        Assert.That(YeahOrError(new int[] { 2, 3, 1}), Is.EqualTo("error!"));
    }
    

    As you can see, if the order is different, the check does not pass.

    List Patterns with Discard

    We can also use discard values to check whether a list contains a specific item in a specified position, ignoring all the other values:

    string YeahOrErrorWithDiscard(int[] s)
    {
        if (s is [_, 2, _]) return "YEAH";
        return "error!";
    }
    

    So, to be valid, the array must have exactly 3 elements, and the second one must be a “2”.

    [Test]
    public void PatternMatchingWorksWithDiscard()
    {
        Assert.That(YeahOrErrorWithDiscard(new int[] { 1, 2, 3 }), Is.EqualTo("YEAH"));
        Assert.That(YeahOrErrorWithDiscard(new int[] { 9, 2, 6 }), Is.EqualTo("YEAH"));
        Assert.That(YeahOrErrorWithDiscard(new int[] { 1, 6, 2, 3 }), Is.EqualTo("error!"));
        Assert.That(YeahOrErrorWithDiscard(new int[] { 6, 3, 8, 4 }), Is.EqualTo("error!"));
    }
    

    List Patterns with variable assignment

    You can also assign one or more of such values to a variable, and discard all the others:

    string SelfOrMessageWithVar(int[] s)
    {
        if (s is [_, 2, int third]) return "YEAH_" + third;
        return "error!";
    }
    

    The previous condition, s is [_, 2, int third], returns true only if the array has 3 elements, and the second one is “2”. Then, it stores the third element in a new variable, int third, and uses it to build the returned string.

    [Test]
    public void can_use_list_patterns_with_var()
    {
        Assert.That(SelfOrMessageWithVar(new int[] { 1, 2, 3 }), Is.EqualTo("YEAH_3"));
        Assert.That(SelfOrMessageWithVar(new int[] { 1, 6, 2, 3 }), Is.EqualTo("error!"));
        Assert.That(SelfOrMessageWithVar(new int[] { 6, 3, 8, 4 }), Is.EqualTo("error!"));
    }
    

    List Patterns with item constraints

    Finally, you can also specify further constraints on each value in the condition, using operators such as or, >, >=, and so on.

    string SelfOrMessageWithCondition(int[] s)
    {
        if (s is [0 or 1, > 2, int third]) return "YEAH_" + third;
        return "error!";
    }
    

    You can easily guess the meaning of the previous method. You can double-check the actual result by looking at the following tests:

    [Test]
    [DotNet7]
    public void can_use_list_patterns_with_condition()
    {
        Assert.That(SelfOrMessageWithCondition(new int[] { 0, 4, 3 }), Is.EqualTo("YEAH_3"));
        Assert.That(SelfOrMessageWithCondition(new int[] { 6, 4, 3 }), Is.EqualTo("error!"));
        Assert.That(SelfOrMessageWithCondition(new int[] { 1, 2, 3 }), Is.EqualTo("error!"));
        Assert.That(SelfOrMessageWithCondition(new int[] { 1, 6, 2, 3 }), Is.EqualTo("error!"));
        Assert.That(SelfOrMessageWithCondition(new int[] { 6, 3, 8, 4 }), Is.EqualTo("error!"));
    }
    

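    As a small extra, the same patterns can also be combined in a switch expression; here’s a minimal sketch that puts together the checks we’ve just seen:

    string Describe(int[] s) => s switch
    {
        [1, 2, 3] => "exactly 1, 2, 3",
        [0 or 1, > 2, int third] => "YEAH_" + third,
        [_, 2, _] => "three items, and the second one is 2",
        _ => "error!"
    };
    
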
    To read more about List patterns, just head to the official documentation 🔗.

    This article first appeared on Code4IT 🐧

    Wrapping up

    This is a new feature in C#. Have you ever used it in your production code?

    Or is it “just” a nice functionality that nobody uses? Drop a message below if you have a real use for it 📩

    I hope you enjoyed this article! Let’s keep in touch on Twitter or on LinkedIn, if you want! 🤜🤛

    Happy coding!

    🐧





    Source link

  • Motion Highlights #11

    Motion Highlights #11



    A fresh roundup of standout motion design and animation work from across the creative community.



    Source link

  • Initialize lists size to improve performance | Code4IT

    Initialize lists size to improve performance | Code4IT


    Lists have an inner capacity. Every time you add more items than the current Capacity, you add performance overhead. How to prevent it?


    Some collections, like List<T>, have a predefined initial size.

    Every time you add a new item to the collection, there are two scenarios:

    1. the collection has free space, allocated but not yet populated, so adding an item is immediate;
    2. the collection is already full: internally, .NET resizes the collection, so that the next time you add a new item, you fall back to scenario #1.

    Clearly, the second scenario has an impact on the overall performance. Can we prove it?

    Here’s a benchmark that you can run using BenchmarkDotNet:

    [Params(2, 100, 1000, 10000, 100_000)]
    public int Size;
    
    [Benchmark]
    public void SizeDefined()
    {
        int itemsCount = Size;
    
        List<int> set = new List<int>(itemsCount);
        foreach (var i in Enumerable.Range(0, itemsCount))
        {
            set.Add(i);
        }
    }
    
    [Benchmark]
    public void SizeNotDefined()
    {
        int itemsCount = Size;
    
        List<int> set = new List<int>();
        foreach (var i in Enumerable.Range(0, itemsCount))
        {
            set.Add(i);
        }
    }
    

    Those two methods are almost identical: the only difference is that in one method we specify the initial size of the list: new List<int>(itemsCount).

    Have a look at the result of the benchmark run with .NET 7:

    | Method         | Size   | Mean            | Error         | StdDev         | Median          | Gen0     | Gen1     | Gen2     | Allocated |
    |----------------|--------|-----------------|---------------|----------------|-----------------|----------|----------|----------|-----------|
    | SizeDefined    | 2      | 49.50 ns        | 1.039 ns      | 1.678 ns       | 49.14 ns        | 0.0248   | –        | –        | 104 B     |
    | SizeNotDefined | 2      | 63.66 ns        | 3.016 ns      | 8.507 ns       | 61.99 ns        | 0.0268   | –        | –        | 112 B     |
    | SizeDefined    | 100    | 798.44 ns       | 15.259 ns     | 32.847 ns      | 790.23 ns       | 0.1183   | –        | –        | 496 B     |
    | SizeNotDefined | 100    | 1,057.29 ns     | 42.100 ns     | 121.469 ns     | 1,056.42 ns     | 0.2918   | –        | –        | 1224 B    |
    | SizeDefined    | 1000   | 9,180.34 ns     | 496.521 ns    | 1,400.446 ns   | 8,965.82 ns     | 0.9766   | –        | –        | 4096 B    |
    | SizeNotDefined | 1000   | 9,720.66 ns     | 406.184 ns    | 1,184.857 ns   | 9,401.37 ns     | 2.0142   | –        | –        | 8464 B    |
    | SizeDefined    | 10000  | 104,645.87 ns   | 7,636.303 ns  | 22,395.954 ns  | 99,032.68 ns    | 9.5215   | 1.0986   | –        | 40096 B   |
    | SizeNotDefined | 10000  | 95,192.82 ns    | 4,341.040 ns  | 12,524.893 ns  | 92,824.50 ns    | 31.2500  | –        | –        | 131440 B  |
    | SizeDefined    | 100000 | 1,416,074.69 ns | 55,800.034 ns | 162,771.317 ns | 1,402,166.02 ns | 123.0469 | 123.0469 | 123.0469 | 400300 B  |
    | SizeNotDefined | 100000 | 1,705,672.83 ns | 67,032.839 ns | 186,860.763 ns | 1,621,602.73 ns | 285.1563 | 285.1563 | 285.1563 | 1049485 B |

    Notice that, in general, they execute in a similar amount of time; for instance, when running the same method with 100,000 items, the execution times are in the same order of magnitude: 1,416,074.69 ns vs 1,705,672.83 ns.

    The huge difference is in the allocated space: 400,300 B vs 1,049,485 B. That’s roughly 2.6 times less memory!

    Ok, it works. Next question: How can we check a List capacity?

    We’ve just learned that capacity impacts the performance of a List.

    How can you try it live? Easy: have a look at the Capacity property!

    List<int> myList = new List<int>();
    
    foreach (var element in Enumerable.Range(0,50))
    {
        myList.Add(element);
        Console.WriteLine($"Items count: {myList.Count} - List capacity: {myList.Capacity}");
    }
    

    If you run this method, you’ll see this output:

    Items count: 1 - List capacity: 4
    Items count: 2 - List capacity: 4
    Items count: 3 - List capacity: 4
    Items count: 4 - List capacity: 4
    Items count: 5 - List capacity: 8
    Items count: 6 - List capacity: 8
    Items count: 7 - List capacity: 8
    Items count: 8 - List capacity: 8
    Items count: 9 - List capacity: 16
    Items count: 10 - List capacity: 16
    Items count: 11 - List capacity: 16
    Items count: 12 - List capacity: 16
    Items count: 13 - List capacity: 16
    Items count: 14 - List capacity: 16
    Items count: 15 - List capacity: 16
    Items count: 16 - List capacity: 16
    Items count: 17 - List capacity: 32
    Items count: 18 - List capacity: 32
    Items count: 19 - List capacity: 32
    Items count: 20 - List capacity: 32
    Items count: 21 - List capacity: 32
    Items count: 22 - List capacity: 32
    Items count: 23 - List capacity: 32
    Items count: 24 - List capacity: 32
    Items count: 25 - List capacity: 32
    Items count: 26 - List capacity: 32
    Items count: 27 - List capacity: 32
    Items count: 28 - List capacity: 32
    Items count: 29 - List capacity: 32
    Items count: 30 - List capacity: 32
    Items count: 31 - List capacity: 32
    Items count: 32 - List capacity: 32
    Items count: 33 - List capacity: 64
    Items count: 34 - List capacity: 64
    Items count: 35 - List capacity: 64
    Items count: 36 - List capacity: 64
    Items count: 37 - List capacity: 64
    Items count: 38 - List capacity: 64
    Items count: 39 - List capacity: 64
    Items count: 40 - List capacity: 64
    Items count: 41 - List capacity: 64
    Items count: 42 - List capacity: 64
    Items count: 43 - List capacity: 64
    Items count: 44 - List capacity: 64
    Items count: 45 - List capacity: 64
    Items count: 46 - List capacity: 64
    Items count: 47 - List capacity: 64
    Items count: 48 - List capacity: 64
    Items count: 49 - List capacity: 64
    Items count: 50 - List capacity: 64
    

    So, as you can see, List capacity is doubled every time the current capacity is not enough.
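
    As a counter-check, here’s a quick sketch (the same loop as above, just passing the size to the constructor): the space is allocated upfront, so the capacity never needs to double.

    List<int> myList = new List<int>(50);

    foreach (var element in Enumerable.Range(0, 50))
    {
        myList.Add(element);
        // Capacity stays at 50 on every iteration: no internal resize, no extra allocations
        Console.WriteLine($"Items count: {myList.Count} - List capacity: {myList.Capacity}");
    }
    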

    Further readings

    To populate the lists in our Benchmarks we used Enumerable.Range. Do you know how it works? Have a look at this C# tip:

    🔗 C# Tip: LINQ’s Enumerable.Range to generate a sequence of consecutive numbers

    This article first appeared on Code4IT 🐧

    Wrapping up

    In this article, we’ve learned that just a minimal change can impact our application performance.

    We simply used a different constructor, but the difference is astounding. Clearly, this trick works only if you already know the final length of the list (or, at least, an estimate). The more precise, the better!
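
    For example, here’s a tiny sketch (with hypothetical Order and OrderDto types) of the typical case where you do know the final size upfront, because you are projecting an existing collection:

    record Order(int Id, string Customer);
    record OrderDto(string Customer);

    List<OrderDto> MapToDtos(List<Order> orders)
    {
        // we already know the final size: orders.Count
        var dtos = new List<OrderDto>(orders.Count);

        foreach (var order in orders)
        {
            dtos.Add(new OrderDto(order.Customer));
        }

        return dtos;
    }
    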

    I hope you enjoyed this article! Let’s keep in touch on Twitter or on LinkedIn, if you want! 🤜🤛

    Happy coding!

    🐧





    Source link