Blog

  • Developer Spotlight: Ruud Luijten | Codrops



    I’m a 32-year-old freelance developer based in Antwerp, Belgium, with a degree in Multimedia Technology and over a decade of experience crafting digital experiences. Early in my career, I took a bold step and moved to New York City to join the creative agency Your Majesty as a front-end developer — a launchpad that allowed me to work on high-profile projects for global brands like BMW and Spotify.

    After a year in the U.S., I returned home to be closer to family and friends and continued refining my skills with renowned agencies such as Build in Amsterdam, Hello Monday, Watson DG, Exo Ape, and Studio 28K. Over the years, I’ve had the privilege of collaborating with top-tier creative teams on projects for clients including Amazon, Apple, Disney+, Mammut, Sony, WeTransfer, and more.

    Today, as an independent developer, I partner with agencies around the world to deliver design and motion-driven digital experiences. Outside of work, you’ll often find me trail running, playing golf, or exploring the outdoors through my passion for landscape and nature photography.

    Featured work

    WRK Timepieces – Advanced Horology Engineering

    WRK Timepieces is renowned for its commitment to crafting luxury timepieces. Their artisans craft each piece with exceptional care, combining traditional craftsmanship with cutting-edge innovation. Utilizing premium materials such as titanium and DLC (Diamond-Like Carbon), WRK Timepieces produces watches that embody precision engineering and timeless elegance. For their website, I worked with 28K Studio and their amazing team of designers.

    I developed a product page for WRK’s latest timepiece, using immersive 3D scroll-triggered animations to reveal features like the precision-engineered movement, Flying Tourbillon, and Double-wishbone System. The fluid, interactive sequences let users explore the watch in rich detail from every angle, while a seamless scrolling flow highlights its design, materials, and craftsmanship in a premium, minimalist style true to the WRK brand.

    Built with Vue/Nuxt, SCSS, TypeScript, GSAP, Storyblok for content, and Vercel for fast, reliable hosting.

    Mammut – Digital Flagship Store

    Mammut is a Swiss premium outdoor brand known for its high-performance gear and apparel, combining innovative design with over 160 years of alpine heritage to equip adventurers for the most demanding environments.

    From concept to launch, Build in Amsterdam collaborated closely with Mammut to create their new digital flagship store, united by a shared vision for quality. We set a fresh creative direction for lifestyle and product photography, shifting toward a more emotional shopping experience.

    The mobile-first design system meets WCAG 2.1 accessibility standards, ensuring usability for all. To highlight the story behind every product, we designed editorial modules that encourage deeper exploration without overshadowing commerce.

    Built with Next.js for responsive, high-performance interfaces, the platform integrates Contentful as a headless CMS, Algolia for lightning-fast search, and a custom PIM for real-time product data.

    Exo Ape – Portfolio

    Exo Ape is a digital design studio that plants the flag at the intersection of technology, digital branding and content creation. For over a decade their team has had the privilege of working with global brands and inspiring startups to tell brand-led stories and shape digital identities.

    I’ve collaborated with the Exo Ape team on numerous projects and had the privilege of contributing to their new studio website. Their focus consistently lies in strong design and bringing sites to life through engaging motion design. Working closely with Rob Smittenaar, I developed a robust foundation using Vue/Nuxt and created a backend-for-frontend API layer that streamlined data usage within the frontend app.

    The project uses Vue/Nuxt for a dynamic frontend, SCSS for modular styling, and GSAP for smooth animations and transitions. Content is managed with Storyblok, and the site is hosted on Vercel for fast, reliable delivery.

    Columbia Pictures – 100 Years Anniversary

    Celebrating a century of cinema – In honor of Columbia Pictures’ 100th anniversary, I worked with Sony Pictures, Exo Ape and Watson DG to create a century-filled digital experience and quiz. Through this, we took visitors on a journey, not only through entertainment history, but also through a personalized path of self-discovery helping fans uncover the films and television shows that have shaped them the most.

    In order to create a uniquely individual experience, we implemented strategic backend logic to keep the journey original, dynamic, and varied for every user – no matter how many times they return. This, along with a diverse mix of visuals and questions, created a fresh and engaging experience for everyone. The assets and questions are sourced from an extensive database of films, TV shows, and actors.

    Each run through the quiz presents the visitor with eight interactive questions that gradually change based on their responses. The creative team developed a design system featuring five interactive mechanics — Circles, This or That, Slider, Rotator, and Drag and Drop — to present quiz questions in an engaging way, encouraging users to select answers through fun, dynamic interactions.

    With titles, genres, and visuals from a hundred years of film and television, we were able to tailor the experience around each answer. For example, if the quiz finds that the user leans toward horror, more questions will be horror-themed. If they lean toward 80s films, more options will tap into nostalgia.

    After diving into Columbia’s 100 years and completing the quiz, visitors were presented with a personalized shareable created by an automated asset generator. This asset — accompanied by a range of social sharing options — played an important role in maintaining engagement and enthusiasm throughout the campaign and beyond.

    The project’s frontend was built with Vue/Nuxt, SCSS, and GSAP for dynamic interfaces, modular styling, and smooth animations. The backend uses Node, Express, and Canvas to generate the shareable asset.

    General Electric – Innovation Barometer

    Every two years, General Electric surveys the world’s leading innovators to explore the future of innovation. The findings are presented in the Global Innovation Barometer, a biannual campaign created in collaboration with Little Red Robot.

    The creative team developed a futuristic visual system for the campaign, based on a modern abstract 3D design style. Photography and live-action video were combined with 3D animations and illustrations to create a rich visual experience.

    The project was built using Vue for a reactive and component-driven frontend, combined with Stylus for efficient and scalable styling. For smooth, performant animations, GSAP was integrated to handle complex transitions and scroll-triggered effects, enhancing the overall user experience.

    Extra: Photography Portfolio 2.0 sneak peek

    I’m excited to share that I’m currently collaborating with Barend Eijkenduijn and Clay Boan on the next version of my photography portfolio. This upcoming Version 2.0 will be more than just a refreshed collection of my work — it will also include a fully integrated print shop. Through this new platform, you’ll be able to browse my photographs, select your favorites, and customize your prints with a range of options, including various sizes, paper types, and framing styles. Our goal is to create a seamless and enjoyable experience, making it easier than ever to bring a piece of my photography into your home or workspace. We’re working towards an official launch by the end of this year, and I can’t wait to share it with you.

    Final Thoughts

    As a developer, I believe that curiosity and consistency are essential to growing in this field — but so is making time to explore, experiment, and create personal passion projects. Sharing that work can open unexpected doors.

    Many thanks to Codrops and Manoela for this opportunity — it’s a real honor. Codrops has long been one of my favorite resources, and it’s played a big role in my own learning journey.

    You can find more of my work on my portfolio, Twitter and LinkedIn.



    Source link

  • Docker + Python CRUD API + Excel VBA – All for beginners – Useful code


    import os, sqlite3

    from typing import List, Optional

    from fastapi import FastAPI, HTTPException

    from pydantic import BaseModel

    DB_PATH = os.getenv("DB_PATH", "/data/app.db")

    app = FastAPI(title="Minimal Todo CRUD", description="Beginner-friendly, zero frontend.")

    class TodoIn(BaseModel):
        title: str
        completed: bool = False

    class TodoUpdate(BaseModel):
        title: Optional[str] = None
        completed: Optional[bool] = None

    class TodoOut(TodoIn):
        id: int

    def row_to_todo(row) -> TodoOut:
        return TodoOut(id=row["id"], title=row["title"], completed=bool(row["completed"]))

    def get_conn():
        conn = sqlite3.connect(DB_PATH)
        conn.row_factory = sqlite3.Row
        return conn

    @app.on_event("startup")
    def init_db():
        os.makedirs(os.path.dirname(DB_PATH), exist_ok=True)
        conn = get_conn()
        conn.execute("""
            CREATE TABLE IF NOT EXISTS todos(
                id INTEGER PRIMARY KEY AUTOINCREMENT,
                title TEXT NOT NULL,
                completed INTEGER NOT NULL DEFAULT 0
            )
        """)
        conn.commit(); conn.close()

    @app.post("/todos", response_model=TodoOut, status_code=201)
    def create_todo(payload: TodoIn):
        conn = get_conn()
        cur = conn.execute(
            "INSERT INTO todos(title, completed) VALUES(?, ?)",
            (payload.title, int(payload.completed))
        )
        conn.commit()
        row = conn.execute("SELECT * FROM todos WHERE id=?", (cur.lastrowid,)).fetchone()
        conn.close()
        return row_to_todo(row)

    @app.get("/todos", response_model=List[TodoOut])
    def list_todos():
        conn = get_conn()
        rows = conn.execute("SELECT * FROM todos ORDER BY id DESC").fetchall()
        conn.close()
        return [row_to_todo(r) for r in rows]

    @app.get("/todos/{todo_id}", response_model=TodoOut)
    def get_todo(todo_id: int):
        conn = get_conn()
        row = conn.execute("SELECT * FROM todos WHERE id=?", (todo_id,)).fetchone()
        conn.close()
        if not row:
            raise HTTPException(404, "Todo not found")
        return row_to_todo(row)

    @app.patch("/todos/{todo_id}", response_model=TodoOut)
    def update_todo(todo_id: int, payload: TodoUpdate):
        data = payload.model_dump(exclude_unset=True)
        if not data:
            return get_todo(todo_id)  # nothing to change

        fields, values = [], []
        if "title" in data:
            fields.append("title=?"); values.append(data["title"])
        if "completed" in data:
            fields.append("completed=?"); values.append(int(data["completed"]))
        if not fields:
            return get_todo(todo_id)

        conn = get_conn()
        cur = conn.execute(f"UPDATE todos SET {', '.join(fields)} WHERE id=?", (*values, todo_id))
        if cur.rowcount == 0:
            conn.close(); raise HTTPException(404, "Todo not found")
        conn.commit()
        row = conn.execute("SELECT * FROM todos WHERE id=?", (todo_id,)).fetchone()
        conn.close()
        return row_to_todo(row)

    @app.delete("/todos/{todo_id}", status_code=204)
    def delete_todo(todo_id: int):
        conn = get_conn()
        cur = conn.execute("DELETE FROM todos WHERE id=?", (todo_id,))
        conn.commit(); conn.close()
        if cur.rowcount == 0:
            raise HTTPException(404, "Todo not found")
        return  # 204 No Content



    Source link

  • Exporting a Cloth Simulation from Blender to an Interactive Three.js Scene




    This tutorial walks through creating an interactive animation: starting in Blender by designing a button and simulating a cloth-like object that drops onto a surface and settles with a soft bounce.

    After baking the cloth simulation, the animation is exported and brought into a Three.js project, where it becomes an interactive scene that can be replayed on click.

    By the end, you’ll have a user-triggered animation that blends Blender’s physics simulations with Three.js rendering and interactivity.

    Let’s dive in!

    Step 1: Create a Cube and Add Subdivisions

    1. Start a New Project: Open Blender and delete the default cube (select it and press X, then confirm).
    2. Add a Cube: Press Shift + A > Mesh > Cube to create a new cube.
    3. Enter Edit Mode: Select the cube, then press Tab to switch to Edit Mode.
    4. Subdivide the Cube: Press Ctrl + R to add a loop cut, hover over the cube, and scroll your mouse wheel to increase the number of cuts.
    5. Apply Subdivision: With the cube still selected in Object Mode, go to the Modifiers panel (wrench icon), and click Add Modifier > Subdivision Surface. Set the Levels to 2 or 3 for a smoother result, then click Apply.

    Step 2: Add Cloth Physics and Adjust Settings

    1. Select the Cube: Ensure your subdivided cube is selected in Object Mode.
    2. Add Cloth Physics: Go to the Physics tab in the Properties panel. Click Cloth to enable cloth simulation.
    3. Pin the Edges (Optional): If you want parts of the cube to stay fixed (e.g., the top), switch to Edit Mode, select the vertices you want to pin, go back to the Physics tab, and under Cloth > Shape, click Pin to assign those vertices to a vertex group.
    4. Adjust Key Parameters:
      • Quality Steps: Set to 10-15 for smoother simulation (higher values increase accuracy but slow down computation).
      • Mass: Set to around 0.2-0.5 kg for a lighter, more flexible cloth.
      • Pressure: Under Cloth > Pressure, enable it and set a positive value (e.g., 2-5) to simulate inflation. This will make the cloth expand as if air is pushing it outward.
      • Stiffness: Adjust Tension and Compression (e.g., 10-15) to control how stiff or loose the cloth feels.
    5. Test the Simulation: Press the Spacebar to play the animation and see the cloth inflate. Tweak settings as needed.

    Step 3: Add a Ground Plane with a Collision

    1. Create a Ground Plane: Press Shift + A > Mesh > Plane. Scale it up by pressing S and dragging (e.g., scale it to 5-10x) so it’s large enough for the cloth to interact with.
    2. Position the Plane: Move the plane below the cube by pressing G > Z > -5 (or adjust as needed).
    3. Enable Collision: Select the plane, go to the Physics tab, and click Collision. Leave the default settings.
    4. Run the Simulation: Press the Spacebar again to see the cloth inflate and settle onto the ground plane.

    Step 4: Adjust Materials and Textures

    1. Select the Cube: In Object Mode, select the cloth (cube) object.
    2. Add a Material: Go to the Material tab, click New to create a material, and name it.
    3. Set Base Color/UV Map: In the Base Color slot, choose a fabric-like color (e.g., red or blue) or connect an image texture by clicking the yellow dot next to Base Color and selecting Image Texture. Load a texture file if you have one.
    4. Adjust Roughness and Specular: Set Roughness to 0.1-0.3 for a soft fabric look.
    5. Apply to Ground (Optional): Repeat the process for the plane, using a simple gray or textured material for contrast.

    Step 5: Export as MDD and Generate Shape Keys for Three.js

    To use the cloth animation in a Three.js project, we’ll export the physics simulation as an MDD file using the NewTek MDD plugin, then re-import it to create Shape Keys. Follow these steps:

    1. Enable the NewTek MDD Plugin:
      1. Go to Edit > Preferences > Add-ons.
      2. Search for “NewTek” or “MDD” and enable the “Import-Export: NewTek MDD format” add-on by checking the box. Close the Preferences window.
    2. Apply All Modifiers and All Transform:
      1. In Object Mode, select the cloth object.
      2. Go to the Modifiers panel (wrench icon). For each modifier (e.g., Subdivision Surface, Cloth), click the dropdown and select Apply. This “freezes” the mesh with its current shape and physics data.
      3. Ensure no unapplied deformations (e.g., scale) remain: Press Ctrl + A > All Transforms to apply location, rotation, and scale.
    3. Export as MDD:
      1. With the cloth object selected, go to File > Export > Lightwave Point Cache (.mdd).
      2. In the export settings (bottom left):
        • Set FPS (frames per second) to match your project (e.g., 24, 30, or 60).
        • Set the Start/End Frame of your animation.
      3. Choose a save location (e.g., “inflation.mdd”) and click Export MDD.
    4. Import the MDD:
      1. Go to File > Import > Lightwave Point Cache (.mdd), and load the “inflation.mdd” file.
      2. In the Physics and Modifiers panel, remove any cloth simulation-related options, as we now have shape keys.

    Step 6: Export the Cloth Simulation Object as GLB

    After importing the MDD, select the cube with the animation data.

    1. Export as glTF 2.0 (.glb/.gltf): Go to File > Export > glTF 2.0 (.glb/.gltf).
    2. Check Shape Keys and Animation
      1. Under the Data section, check Shape Keys to include the morph targets generated from the animation.
      2. Check Animations to export the animation data tied to the Shape Keys.
    3. Export: Choose a save location (e.g., “inflation.glb”) and click Export glTF 2.0. This file is now ready for use in Three.js.

    Step 7: Implement the Cloth Animation in Three.js

    In this step, we’ll use Three.js with React (via @react-three/fiber) to load and animate the cloth inflation effect from the inflation.glb file exported in Step 6. Below is the code with explanations:

    1. Set Up Imports and File Path:
      1. Import necessary libraries: THREE for core Three.js functionality, useRef, useState, useEffect from React for state and lifecycle management, and utilities from @react-three/fiber and @react-three/drei for rendering and controls.
      2. Import GLTFLoader from Three.js to load the .glb file.
      3. Define the model path: const modelPath = '/inflation.glb'; points to the exported file (adjust the path based on your project structure).
    2. Create the Model Component:
      1. Define the Model component to handle loading and animating the .glb file.
      2. Use state variables: model for the loaded 3D object, loading to track progress, and error for handling issues.
      3. Use useRef to store the AnimationMixer (mixerRef) and animation actions (actionsRef) for controlling playback.
    3. Load the Model with Animations:
      1. In a useEffect hook, instantiate GLTFLoader and load inflation.glb.
      2. On success (gltf callback):
        • Extract the scene (gltf.scene) and create an AnimationMixer to manage animations.
        • For each animation clip in gltf.animations:
          • Set duration to 6 seconds (clip.duration = 6).
          • Create an AnimationAction (mixer.clipAction(clip)).
          • Configure the action: clampWhenFinished = true stops at the last frame, loop = THREE.LoopOnce plays once, and setDuration(6) enforces the 6-second duration.
          • Reset and play the action immediately, storing it in actionsRef.current.
        • Update state with the loaded model and set loading to false.
      3. Log loading progress with the xhr callback.
      4. Handle errors in the error callback, updating error state.
      5. Clean up the mixer on component unmount.
    4. Animate the Model:
      1. Use useFrame to update the mixer each frame with mixerRef.current.update(delta), advancing the animation based on time.
      2. Add interactivity:
        • handleClick: Resets and replays all animations on click.
        • onPointerOver/onPointerOut: Changes the cursor to indicate clickability.
    5. Render the Model:
      1. Return null if still loading, an error occurs, or no model is loaded.
      2. Return a <primitive> element with the loaded model, enabling shadows and attaching event handlers.
    6. Create a Reflective Ground:
      1. Define MetalGround as a mesh with a plane geometry (args={[100, 100]}).
      2. Apply MeshReflectorMaterial with properties like metalness=0.5, roughness=0.2, and color="#202020" for a metallic, reflective look. Adjust blur, strength, and resolution as needed.
    7. Set Up the Scene:
      1. In the App component, create a <Canvas> with a camera positioned at [0, 15, 15] and a 50-degree FOV.
      2. Add a directionalLight at [0, 15, 0] with shadows enabled.
      3. Include an Environment preset (“studio”) for lighting, a Model at [0, 5, 0], ContactShadows for realism, and the MetalGround rotated and positioned below.
      4. Add OrbitControls for interactive camera movement.
    import * as THREE from 'three';
    import { useRef, useState, useEffect } from 'react';
    import { Canvas, useFrame } from '@react-three/fiber';
    import { OrbitControls, Environment, MeshReflectorMaterial, ContactShadows } from '@react-three/drei';
    import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';
    import { DRACOLoader } from 'three/examples/jsm/loaders/DRACOLoader.js';
    
    const modelPath = '/inflation.glb';
    
    function Model({ ...props }) {
      const [model, setModel] = useState<THREE.Group | null>(null);
      const [loading, setLoading] = useState(true);
      const [error, setError] = useState<unknown>(null);
      const mixerRef = useRef<THREE.AnimationMixer | null>(null);
      const actionsRef = useRef<THREE.AnimationAction[]>([]);
    
      const handleClick = () => {
        actionsRef.current.forEach((action) => {
          action.reset();
          action.play();
        });
      };
    
      const onPointerOver = () => {
        document.body.style.cursor = 'pointer';
      };
    
      const onPointerOut = () => {
        document.body.style.cursor = 'auto';
      };
    
      useEffect(() => {
        const loader = new GLTFLoader();
        const dracoLoader = new DRACOLoader();
        dracoLoader.setDecoderPath('https://www.gstatic.com/draco/v1/decoders/');
        loader.setDRACOLoader(dracoLoader);
    
        loader.load(
          modelPath,
          (gltf) => {
            const mesh = gltf.scene;
            const mixer = new THREE.AnimationMixer(mesh);
            mixerRef.current = mixer;
    
            if (gltf.animations && gltf.animations.length) {
              gltf.animations.forEach((clip) => {
                clip.duration = 6;
                const action = mixer.clipAction(clip);
                action.clampWhenFinished = true;
                action.loop = THREE.LoopOnce;
                action.setDuration(6);
                action.reset();
                action.play();
                actionsRef.current.push(action);
              });
            }
    
            setModel(mesh);
            setLoading(false);
          },
          (xhr) => {
            console.log(`Loading: ${(xhr.loaded / xhr.total) * 100}%`);
          },
          (error) => {
            console.error('An error happened loading the model:', error);
            setError(error);
            setLoading(false);
          }
        );
    
        return () => {
          if (mixerRef.current) {
            mixerRef.current.stopAllAction();
          }
        };
      }, []);
    
      useFrame((_, delta) => {
        if (mixerRef.current) {
          mixerRef.current.update(delta);
        }
      });
    
      if (loading || error || !model) {
        return null;
      }
    
      return (
        <primitive
          {...props}
          object={model}
          castShadow
          receiveShadow
          onClick={handleClick}
          onPointerOver={onPointerOver}
          onPointerOut={onPointerOut}
        />
      );
    }
    
    function MetalGround({ ...props }) {
      return (
        <mesh {...props} receiveShadow>
          <planeGeometry args={[100, 100]} />
          <MeshReflectorMaterial
            color="#151515"
            metalness={0.5}
            roughness={0.2}
            blur={[0, 0]}
            resolution={2048}
            mirror={0}
          />
        </mesh>
      );
    }
    
    export default function App() {
      return (
        <div id="content">
          <Canvas camera={{ position: [0, 35, 15], fov: 25 }}>
            <directionalLight position={[0, 15, 0]} intensity={1} shadow-mapSize={1024} />
    
            <Environment preset="studio" background={false} environmentRotation={[0, Math.PI / -2, 0]} />
            <Model position={[0, 5, 0]} />
            <ContactShadows opacity={0.5} scale={10} blur={5} far={10} resolution={512} color="#000000" />
            <MetalGround rotation-x={Math.PI / -2} position={[0, -0.01, 0]} />
    
            <OrbitControls
              enableZoom={false}
              enablePan={false}
              enableRotate={true}
              enableDamping={true}
              dampingFactor={0.05}
            />
          </Canvas>
        </div>
      );
    }

    And that’s it! Starting from a cloth simulation in Blender, we turned it into a button that drops into place and reacts with a bit of bounce inside a Three.js scene.

    This workflow shows how Blender’s physics simulations can be exported and combined with Three.js to create interactive, real-time experiences on the web.



    Source link

  • How I automated my blogging workflow with GitHub, PowerShell, and Azure | Code4IT



    After 100 articles, I’ve found some neat ways to automate my blogging workflow. I will share my experience and the tools I use from the very beginning to the very end.


    This is my 100th article 🥳 To celebrate it, I want to share with you the full process I use for writing and publishing articles.

    In this article I will share all the automation and tools I use for writing, starting from the moment an idea for an article pops up in my mind to what happens weeks after an article has been published.

    I hope to give you some ideas to speed up your publishing process. Of course, I’m open to suggestions to improve my own flow: perhaps (well, certainly), you use better tools and processes, so feel free to share them.

    Introducing my blog architecture

    To better understand what’s going on, I need a very brief overview of the architecture of my blog.

    It is written in Gatsby, a framework based on ReactJS that, in short, allows you to transform Markdown files into blog posts (it does many other things, but they are not important for the purpose of this article).

    So, my whole blog is stored in a private GitHub repository. Every time I push some changes to the master branch, a new deployment is triggered, and my changes appear on the blog within a few minutes.

    As I said, I use Gatsby. But the key point here is that my blog is stored in a GitHub repo: this means that everything you’ll read here is valid for any Headless CMS based on Git, such as Gatsby, Hugo, NextJS, and Jekyll.

    Now that you know some general aspects, it’s time to deep dive into my writing process.

    Before writing: organizing ideas with GitHub

    My central source, as you might have already understood, is GitHub.

    There, I write all my notes and keep track of the status of my articles.

    Everything is quite well organized, and with the support of some automation, I can speed up my publishing process.

    GitHub Projects to track the status of the articles

    GitHub Projects is the part of GitHub that lets you organize GitHub Issues and track their status.

    GitHub projects

    I’ve created 2 GitHub Projects: one for the main articles (like this one), and one for my C# and Clean Code Tips.

    In this way, I can use different columns and have more flexibility when handling the status of the tasks.

    GitHub issues templates

    As I said, to write my notes I use GitHub issues.

    When I add a new Issue, the first thing is to define which type of article I want to write. And, since sometimes many weeks or months pass between when I come up with the idea for an article and when I start writing it, I need to organize my ideas in a structured way.

    To do that, I use GitHub templates. When I create a new Issue, I choose which kind of article I’m going to write.

    The list of GitHub issues templates I use

    Based on the layout, I can add different info. For instance, when I want to write a new “main” article, I see this form

    Article creation form as generated by a template

    which is prepopulated with some fields:

    • Title: with a placeholder ([Article] )
    • Content: with some sections (the titles, translated from Italian, mean Topics, Links, General notes)
    • Labels: I automatically assign the Article label to the issue (you’ll see later why I do that)

    How can you create GitHub issue templates? All you need is a Markdown file under the .github/ISSUE_TEMPLATE folder with content similar to this one.

    ---
    name: New article
    about: New blog article
    title: "[Article] - "
    labels: Article
    assignees: bellons91
    ---
    
    ## Argomenti
    
    ## Link
    
    ## Appunti vari
    

    And you’re good to go!

    GitHub action to assign issues to a project

    Now I have GitHub Projects and different GitHub Issues Templates. How can I join the different parts? Well, with GitHub Actions!

    With GitHub Actions, you can automate almost everything that happens in GitHub (and outside) using YAML files.

    So, here’s mine:

    Auto-assign to project GitHub Action

    For better readability, you can find the Gist here.

    This action looks for opened and labeled issues and pull requests, and based on the value of the label it assigns the element to the correct project.

    In this way, after I choose a template, fill in the fields, and add additional labels (like C#, Docker, and so on), I can see my newly created issue directly in the Articles board. Neat 😎

    Writing

    Now it’s the time of writing!

    As I said, I’m using Gatsby, so all my articles are stored in a GitHub repository and written in Markdown.

    For every article I write, I use a separate git branch: in this way, I’m free to update the content already online (in case of a typo) without publishing my drafts.

    But, of course, I automated it! 😎

    PowerShell script to scaffold a new article

    Every article lives in its /content/posts/{year}/{folder-name}/article.md file. And they all have a cover image in a file named cover.png.

    Also, every MD file begins with a Frontmatter section, like this:

    ---
    title: "How I automated my publishing flow with Gatsby, GitHub, PowerShell and Azure"
    path: "/blog/automate-articles-creations-github-powershell-azure"
    tags: ["MainArticle"]
    featuredImage: "./cover.png"
    excerpt: "a description for 072-how-i-create-articles"
    created: 4219-11-20
    updated: 4219-11-20
    ---
    

    But, you know, I was tired of creating everything from scratch. So I wrote a PowerShell script to do everything for me.

    PowerShell script to scaffold a new article

    You can find the code in this Gist.

    This script performs several actions:

    1. Switches to the Master branch and downloads the latest updates
    2. Asks for the article slug that will be used to create the folder name
    3. Creates a new branch using the article slug as a name
    4. Creates a new folder that will contain all the files I will be using for my article (markdown content and images)
    5. Creates the article file with the Frontmatter part populated with dummy values
    6. Copies a placeholder image into this folder; this image will be the temporary cover image

    In this way, with a single command, I can scaffold a new article with all the files I need to get started.

    Ok, but how can I run a PowerShell script in a Gatsby repository?

    I added this script to the package.json file:

    "create-article": "@powershell -NoProfile -ExecutionPolicy Unrestricted -Command ./article-creator.ps1",
    

    where article-creator.ps1 is the name of the file that contains the script.

    Now I can simply run npm run create-article to have a new empty article in a new branch, already updated with everything published in the Master branch.

    Markdown preview on VS Code

    I use Visual Studio Code to write my articles: I like it because it’s quite fast and with lots of functionalities to write in Markdown (you can pick your favorites in the Extensions store).

    One of my favorites is the Preview on Side. To see the result of your Markdown on a side panel, press CTRL+SHIFT+P and select Open Preview to the Side.

    Here’s what I can see right now while I’m writing:

    Markdown preview on the side with VS Code

    Grammar check with Grammarly

    Then, it’s time for a grammar check. I use Grammarly, which helps me fix lots of errors (well, lately only a few: it means I’ve improved a lot! 😎).

    I copy the Markdown in their online editor, fix the issues, and copy it back into my repo.

    Fun fact: the online editor recognizes that you’re using Markdown and automatically checks only the actual text, ignoring all the symbols you use in Markdown (like brackets).

    Unprofessional, but fun, cover images

    One of the tasks I like the most is creating my cover images.

    I don’t use stock images; I prefer less professional but more original cover images.

    Some of the cover images for my articles

    You can see all of them here.

    Creating and scheduling PR on GitHub with Templates and Actions

    Now that my article is complete, I can set it as ready for being scheduled.

    To do that, I open a Pull Request to the Master Branch, and, again, add some kind of automation!

    I have created a PR template in an MD file, which I use to create a draft of the PR content.

    Pull Request form on GitHub

    In this way, I can define which task (so, which article) is related to this PR, using the “Closes” formula (“Closes #111174” means that I’m closing the Issue with ID 111174).

    Also, I can define when this PR will be merged on Master, using the /schedule tag.

    It works because I have integrated into my workflow a GitHub Action, merge-schedule, that reads the date from that field to understand when the PR must be merged.

    YAML of Merge Schedule action

    So, every Tuesday at 8 AM, this action runs to check if there are any PRs that can be merged. If so, the PR will be merged into master, and the CI/CD pipeline builds the site and publishes the new content.

    As usual, you can find the code of this action here.

    After the PR is merged, I also receive an email that notifies me of the action.

    After publishing

    Once a new article is online, I like to give it some visibility.

    To do that, I heavily rely on Azure Logic Apps.

    Azure Logic App for sharing on Twitter

    My blog exposes an RSS feed. And, obviously, when a new article is created, a new item appears in the feed.

    I use it to trigger an Azure Logic App to publish a message on Twitter:

    Azure Logic App workflow for publishing on Twitter

    The Logic App reads the newly published feed item and uses its metadata to create a message that will be shared on Twitter.

    If you prefer, you can use a custom Azure Function! The choice is yours!

    Cross-post reminder with Azure Logic Apps

    Similarly, I use an Azure Logic App to send myself an email reminding me to cross-post my articles to other platforms.

    Azure Logic App workflow for crosspost reminders

    I’ve added a delay so that my content lives longer, and I can repost it even after weeks or months.

    Unluckily, when I cross-post my articles, I have to do it manually. This is quite time-consuming, especially when there are lots of images: in my MD files I use relative paths, so when porting my content to different platforms I have to find the absolute URL for every image.

    And, my friends, this is everything that happens in the background of my blog!

    What I’m still missing

    I’ve put a lot of effort into my blog, and I’m incredibly proud of it!

    But still, there are a few things I’d like to improve.

    SEO Tools/analysis

    I’ve never considered SEO. Or, better, Keywords.

    I write for the sake of writing, and because I love it. And I don’t like to stuff my content with keywords just to rank better on search engines.

    I take care of everything like alt texts, well-structured sections, and everything else. But I’m not able to follow the “rules” to find the best keywords.

    Maybe I should use some SEO tools to find the best keywords for me. But I don’t want to bend to that way of creating content.

    Also, I should spend more time thinking of the correct title and section titles.

    Any idea?

    Easy upgrade of Gatsby/Migrate to other headless CMSs

    Lastly, I’d like to find another theme or platform and leave the one I’m currently using.

    Not because I don’t like it. But because many dependencies are outdated, and the theme I’m using hasn’t been updated since 2019.

    Wrapping up

    That’s it: in this article, I’ve explained everything that I do when writing a blog post.

    Feel free to take inspiration from my automation to improve your own workflow, and contact me if you have some nice improvements or ideas: I’m all ears!

    So, for now, happy coding!

    🐧



    Source link

  • Convert ExpandoObjects to IDictionary | Code4IT




    In C#, ExpandoObjects are dynamically-populated objects without a predefined shape.

    dynamic myObj = new ExpandoObject();
    myObj.Name ="Davide";
    myObj.Age = 30;
    

    Name and Age are not part of the definition of ExpandoObject: they are two fields I added without declaring their type.

    This is a dynamic object, so I can add new fields as I want. Say that I need to add my City: I can simply use
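
    myObj.City = "Turin";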

    without creating any field on the ExpandoObject class.

    Now: how can I retrieve all the values? Probably the best way is by converting the ExpandoObject into a Dictionary.

    Create a new Dictionary

    Using an IDictionary makes it easy to access the keys of the object.

    If you have an ExpandoObject that will not change, you can use it to create a new IDictionary:

    dynamic myObj = new ExpandoObject();
    myObj.Name ="Davide";
    myObj.Age = 30;
    
    
    IDictionary<string, object?> dict = new Dictionary<string, object?>(myObj);
    
    //dict.Keys: [Name, Age]
    
    myObj.City ="Turin";
    
    //dict.Keys: [Name, Age]
    

    Notice that we use the ExpandoObject to create a new IDictionary. This means that, after the Dictionary has been created, if we add a new field to the ExpandoObject, that new field will not be present in the Dictionary.

    Cast to IDictionary

    If you want to use an IDictionary to get the ExpandoObject keys, and you need to stay in sync with the ExpandoObject status, you just have to cast that object to an IDictionary

    dynamic myObj = new ExpandoObject();
    myObj.Name ="Davide";
    myObj.Age = 30;
    
    IDictionary<string, object?> dict = myObj;
    
    //dict.Keys: [Name, Age]
    
    myObj.City ="Turin";
    
    //dict.Keys: [Name, Age, City]
    

    This works because ExpandoObject implements IDictionary, so you can simply cast to IDictionary without instantiating a new object.

    Here’s the class definition:

    public sealed class ExpandoObject :
    	IDynamicMetaObjectProvider,
    	IDictionary<string, object?>,
    	ICollection<KeyValuePair<string, object?>>,
    	IEnumerable<KeyValuePair<string, object?>>,
    	IEnumerable,
    	INotifyPropertyChanged
    

    Wrapping up

    Both approaches are correct. They initially expose the same keys and values, but they behave differently when a new value is added to the ExpandoObject.

    Can you think of any pros and cons of each approach?

    Happy coding!

    🐧



    Source link

  • Closing the Compliance Gap with Data Discovery



    In today’s regulatory climate, compliance is no longer a box-ticking exercise. It is a strategic necessity. Organizations across industries are under pressure to secure sensitive data, meet privacy obligations, and avoid hefty penalties. Yet, despite all the talk about “data visibility” and “compliance readiness,” one fundamental gap remains: unseen data—the information your business holds but doesn’t know about.

    Unseen data isn’t just a blind spot—it’s a compliance time bomb waiting to trigger regulatory and reputational damage.

    The Myth: Sensitive Data Lives Only in Databases

    Many businesses operate under the dangerous assumption that sensitive information exists only in structured repositories like databases, ERP platforms, or CRM systems. While it’s true these systems hold vast amounts of personal and financial information, they’re far from the whole picture.

    Reality check: Sensitive data is often scattered across endpoints, collaboration platforms, and forgotten storage locations. Think of HR documents on a laptop, customer details in a shared folder, or financial reports in someone’s email archive. These are prime targets for breaches—and they often escape compliance audits because they live outside the “official” data sources.

    Myth vs Reality: Why Structured Data is Not the Whole Story

    Yes, structured sources like SQL databases allow centralized access control and auditing. But compliance risks aren’t limited to structured data. Unstructured and endpoint data can be far more dangerous because:

    • They are harder to track.
    • They often bypass IT policies.
    • They get replicated in multiple places without oversight.

    When organizations focus solely on structured data, they risk overlooking up to 50–70% of their sensitive information footprint.

    The Challenge Without Complete Discovery

    Without full-spectrum data discovery—covering structured, unstructured, and endpoint environments—organizations face several challenges:

    1. Compliance Gaps – Regulations like GDPR, DPDPA, HIPAA, and CCPA require knowing all locations of personal data. If data is missed, compliance reports will be incomplete.
    2. Increased Breach Risk – Cybercriminals exploit the easiest entry points, often targeting endpoints and poorly secured file shares.
    3. Inefficient Remediation – Without knowing where data lives, security teams can’t effectively remove, encrypt, or mask it.
    4. Costly Investigations – Post-breach forensics becomes slower and more expensive when data locations are unknown.

    The Importance of Discovering Data Everywhere

    A truly compliant organization knows where every piece of sensitive data resides, no matter the format or location. That means extending discovery capabilities to:

    1. Structured Data
    • Where it lives: Databases, ERP, CRM, and transactional systems.
    • Why it matters: It holds core business-critical records, such as customer PII, payment data, and medical records.
    • Risks if ignored: Non-compliance with data subject rights requests; inaccurate reporting.
    2. Unstructured Data
    • Where it lives: File servers, SharePoint, Teams, Slack, email archives, cloud storage.
    • Why it matters: Contains contracts, scanned IDs, reports, and sensitive documents in freeform formats.
    • Risks if ignored: Harder to monitor, control, and protect due to scattered storage.
    3. Endpoint Data
    • Where it lives: Laptops, desktops, mobile devices (Windows, Mac, Linux).
    • Why it matters: Employees often store working copies of sensitive files locally.
    • Risks if ignored: Theft, loss, or compromise of devices can expose critical information.

    Real-World Examples of Compliance Risks from Unseen Data

    1. Healthcare Sector: A hospital’s breach investigation revealed patient records stored on a doctor’s laptop, which was never logged into official systems. GDPR fines followed.
    2. Banking & Finance: An audit found loan application forms with customer PII on a shared drive, accessible to interns.
    3. Retail: During a PCI DSS assessment, old CSV exports containing cardholder data were discovered in an unused cloud folder.
    4. Government: Sensitive citizen records were emailed between departments, bypassing secure document transfer systems, and were later exposed in a phishing attack.

    Closing the Gap: A Proactive Approach to Data Discovery

    The only way to eliminate unseen data risks is to deploy comprehensive data discovery and classification tools that scan across servers, cloud platforms, and endpoints—automatically detecting sensitive content wherever it resides.

    This proactive approach supports regulatory compliance, improves breach resilience, reduces audit stress, and ensures that data governance policies are meaningful in practice, not just on paper.

    Bottom Line

    Compliance isn’t just about protecting data you know exists—it’s about uncovering the data you don’t. From servers to endpoints, organizations need end-to-end visibility to safeguard against unseen risks and meet today’s stringent data protection laws.

    Seqrite empowers organizations to achieve full-spectrum data visibility — from servers to endpoints — ensuring compliance and reducing risk. Learn how we can help you discover what matters most.



    Source link

  • 3 ways to check the object passed to mocks with Moq in C# | Code4IT



    In unit tests, sometimes you need to perform deep checks on the object passed to the mocked service. We will learn 3 ways to do that with Moq and C#


    When writing unit tests, you can use Mocks to simulate the usage of class dependencies.

    Even though some developers are harshly against the usage of mocks, they can be useful, especially when the mocked operation does not return any value, but still, you want to check that you’ve called a specific method with the correct values.

    In this article, we will learn 3 ways to check the values passed to the mocks when using Moq in our C# Unit Tests.

    To better explain those 3 ways, I created this method:

    public void UpdateUser(User user, Preference preference)
    {
        var userDto = new UserDto
        {
            Id = user.id,
            UserName = user.username,
            LikesBeer = preference.likesBeer,
            LikesCoke = preference.likesCoke,
            LikesPizza = preference.likesPizza,
        };
    
        _userRepository.Update(userDto);
    }
    

    UpdateUser simply accepts two objects, user and preference, combines them into a single UserDto object, and then calls the Update method of _userRepository, which is an interface injected in the class constructor.

    As you can see, we are not interested in the return value from _userRepository.Update. Rather, we are interested in checking that we are calling it with the right values.

    We can do it in 3 ways.

    Verify each property with It.Is

    The simplest, most common way is by using It.Is<T> within the Verify method.

    [Test]
    public void VerifyEachProperty()
    {
        // Arrange
        var user = new User(1, "Davide");
        var preferences = new Preference(true, true, false);
    
        UserDto expected = new UserDto
        {
            Id = 1,
            UserName = "Davide",
            LikesBeer = true,
            LikesCoke = false,
            LikesPizza = true,
        };
    
        //Act
    
        userUpdater.UpdateUser(user, preferences);
    
        //Assert
        userRepo.Verify(_ => _.Update(It.Is<UserDto>(u =>
            u.Id == expected.Id
            && u.UserName == expected.UserName
            && u.LikesPizza == expected.LikesPizza
            && u.LikesBeer == expected.LikesBeer
            && u.LikesCoke == expected.LikesCoke
        )));
    }
    

    In the example above, we used It.Is<UserDto> to check the exact item that was passed to the Update method of userRepo.

    Notice that it accepts a parameter. That parameter is of type Func<UserDto, bool>, and you can use it to define when your expectations are met.

    In this particular case, we’ve checked each and every property within that function:

    u =>
        u.Id == expected.Id
        && u.UserName == expected.UserName
        && u.LikesPizza == expected.LikesPizza
        && u.LikesBeer == expected.LikesBeer
        && u.LikesCoke == expected.LikesCoke
    

    This approach works well when you have to perform checks on only a few fields. But the more fields you add, the longer and messier that code becomes.

    Also, a problem with this approach is that if it fails, it’s hard to understand the cause of the failure, because there is no indication of which specific field did not match the expectations.

    Here’s an example of an error message:

    Expected invocation on the mock at least once, but was never performed: _ => _.Update(It.Is<UserDto>(u => (((u.Id == 1 && u.UserName == "Davidde") && u.LikesPizza == True) && u.LikesBeer == True) && u.LikesCoke == False))
    
    Performed invocations:
    
    Mock<IUserRepository:1> (_):
        IUserRepository.Update(UserDto { UserName = Davide, Id = 1, LikesPizza = True, LikesCoke = False, LikesBeer = True })
    

    Can you spot the error? And what if you were checking 15 fields instead of 5?

    Verify with external function

    Another approach is by externalizing the function.

    [Test]
    public void WithExternalFunction()
    {
        //Arrange
        var user = new User(1, "Davide");
        var preferences = new Preference(true, true, false);
    
        UserDto expected = new UserDto
        {
            Id = 1,
            UserName = "Davide",
            LikesBeer = true,
            LikesCoke = false,
            LikesPizza = true,
        };
    
        //Act
        userUpdater.UpdateUser(user, preferences);
    
        //Assert
        userRepo.Verify(_ => _.Update(It.Is<UserDto>(u => AreEqual(u, expected))));
    }
    
    private bool AreEqual(UserDto u, UserDto expected)
    {
        Assert.AreEqual(expected.UserName, u.UserName);
        Assert.AreEqual(expected.Id, u.Id);
        Assert.AreEqual(expected.LikesBeer, u.LikesBeer);
        Assert.AreEqual(expected.LikesCoke, u.LikesCoke);
        Assert.AreEqual(expected.LikesPizza, u.LikesPizza);
    
        return true;
    }
    

    Here, we are passing an external function to the It.Is<T> method.

    This approach allows us to define more explicit and comprehensive checks.

    The good parts of it are that you will gain more control over the assertions, and you will also have better error messages in case a test fails:

    Expected string length 6 but was 7. Strings differ at index 5.
    Expected: "Davide"
    But was:  "Davidde"
    

    The bad part is that you will stuff your test class with lots of different methods, and the class can easily become hard to maintain. Unluckily, we cannot use local functions.

    On the other hand, having external functions allows us to combine them into checks that can be reused across test cases.

    Intercepting the function parameters with Callback

    Lastly, we can use a hidden gem of Moq: Callbacks.

    With Callbacks, you can store in a local variable a reference to the object that was passed to the mocked method.

    [Test]
    public void CompareWithCallback()
    {
        // Arrange
    
        var user = new User(1, "Davide");
        var preferences = new Preference(true, true, false);
    
        UserDto actual = null;
        userRepo.Setup(_ => _.Update(It.IsAny<UserDto>()))
            .Callback(new InvocationAction(i => actual = (UserDto)i.Arguments[0]));
    
        UserDto expected = new UserDto
        {
            Id = 1,
            UserName = "Davide",
            LikesBeer = true,
            LikesCoke = false,
            LikesPizza = true,
        };
    
        //Act
        userUpdater.UpdateUser(user, preferences);
    
        //Assert
        Assert.IsTrue(AreEqual(expected, actual));
    }
    

    In this way, you can use it locally and run assertions directly on that object without relying on the Verify method.

    Or, if you use records, you can rely on their auto-generated equality checks to simplify the comparison, as in the example below.
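
    For instance, if UserDto were declared as a record (an assumption: the article never shows its definition), the compiler-generated value equality would reduce the whole comparison to a single assertion:

    public record UserDto
    {
        public int Id { get; init; }
        public string UserName { get; init; }
        public bool LikesBeer { get; init; }
        public bool LikesCoke { get; init; }
        public bool LikesPizza { get; init; }
    }

    // In the test, after the Callback has captured `actual`:
    Assert.AreEqual(expected, actual); // records compare property values, not references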

    Wrapping up

    In this article, we’ve explored 3 ways to perform checks on the objects passed to dependencies mocked with Moq.

    Each way has its pros and cons, and it’s up to you to choose the approach that fits you the best.

    I personally prefer the second and third approaches, as they allow me to perform better checks on the passed values.

    What about you?

    For now, happy coding!

    🐧



    Source link

  • Tests should be even more well-written than production code | Code4IT




    You surely take care of your code to make it easy to read and understand, right? RIGHT??

    Well done! 👏

    However, most developers tend to write good production code (the code actually executed by your system) but very poor test code.

    Production code is meant to be run, while tests are also meant to document your code; therefore, there must be no doubt about the meaning of and the reason behind a test.
    This also means that all the names must be explicit enough to help readers understand how and why a test should pass.

    This is a valid C# test:

    [Test]
    public void TestHtmlParser()
    {
        HtmlDocument doc = new HtmlDocument();
        doc.LoadHtml("<p>Hello</p>");
        var node = doc.DocumentNode.ChildNodes[0];
        var parser = new HtmlParser();
    
        Assert.AreEqual("Hello", parser.ParseContent(node));
    }
    

    What is the meaning of this test? We should be able to understand it just by reading the method name.

    Also, notice that here we are creating the HtmlNode object inline; if this node creation were repeated in every test method, you would see the same lines of code over and over again.

    Thus, we can refactor this test in this way:

    [Test]
    public void HtmlParser_ExtractsContent_WhenHtmlIsParagraph()
    {
        //Arrange
        string paragraphContent = "Hello";
        string htmlParagraph = $"<p>{paragraphContent}</p>";
        HtmlNode htmlNode = CreateHtmlNode(htmlParagraph);
        var htmlParser = new HtmlParser();
    
        //Act
        var parsedContent = htmlParser.ParseContent(htmlNode);
    
        //Assert
        Assert.AreEqual(paragraphContent, parsedContent);
    }
    

    This test is definitely better:

    • you can understand its meaning by reading the test name
    • the code is concise, and the node-creation boilerplate is refactored out into a helper (shown below)
    • we’ve clearly separated the three parts of the test: Arrange, Act, Assert (we’ve already talked about it here)
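    For completeness, the CreateHtmlNode helper used in the refactored test is not shown above; one possible implementation, assuming HtmlAgilityPack’s HtmlDocument as in the first snippet, is:

    // Centralizes the node-creation boilerplate shared by many tests.
    private static HtmlNode CreateHtmlNode(string html)
    {
        var doc = new HtmlDocument();
        doc.LoadHtml(html);
        return doc.DocumentNode.ChildNodes[0];
    }
    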

    Wrapping up

    Tests are still part of your project, even though they are not used directly by your customers.

    Never skip tests, and never write them in a rush. After all, when you encounter a bug, the first thing you should do is write a test to reproduce the bug, and then validate the fix using that same test.

    So, keep writing good code, for tests too!

    Happy coding!

    🐧



    Source link

  • 8 things about Records in C# you probably didn’t know | Code4IT

    8 things about Records in C# you probably didn’t know | Code4IT


    C# recently introduced Records, a new way of defining types. In this article, we will see 8 things you probably didn’t know about C# Records


    Records are a data type introduced in 2020 with C# 9 and .NET 5.

    public record Person(string Name, int Id);
    

    Records are the third way of defining data types in C#; the other two are class and struct.

    Since they’re quite a new idea in .NET, we should spend some time experimenting with them and trying to understand their possibilities and functionalities.

    In this article, we will see 8 properties of Records that you should know before using them, to get the best out of this new data type.

    1- Records are immutable

    By default, Records are immutable. This means that, once you’ve created one instance, you cannot modify any of its fields:

    var me = new Person("Davide", 1);
    me.Name = "AnotherMe"; // won't compile!
    

    This operation is not legit.

    Even the compiler complains:

    Init-only property or indexer ‘Person.Name’ can only be assigned in an object initializer, or on ’this’ or ‘base’ in an instance constructor or an ‘init’ accessor.

    2- Records implement equality

    The other main property of Records is that they implement equality out-of-the-box.

    [Test]
    public void EquivalentInstances_AreEqual()
    {
        var me = new Person("Davide", 1);
        var anotherMe = new Person("Davide", 1);
    
        Assert.That(anotherMe, Is.EqualTo(me));
        Assert.That(me, Is.Not.SameAs(anotherMe));
    }
    

    As you can see, I’ve created two instances of Person with the same fields. They are considered equal, but they are not the same instance.

    3- Records can be cloned or updated using ‘with’

    Ok, so if we need to update the field of a Record, what can we do?

    We can use the with keyword:

    [Test]
    public void WithProperty_CreatesNewInstance()
    {
        var me = new Person("Davide", 1);
        var anotherMe = me with { Id = 2 };
    
        Assert.That(anotherMe, Is.Not.EqualTo(me));
        Assert.That(me, Is.Not.SameAs(anotherMe));
    }
    

    Take a look at me with { Id = 2 }: that operation creates a clone of me and updates the Id field.

    Of course, you can use with to create a new instance identical to the original one.

    [Test]
    public void With_CreatesNewInstance()
    {
        var me = new Person("Davide", 1);
    
        var anotherMe = me with { };
    
        Assert.That(anotherMe, Is.EqualTo(me));
        Assert.That(me, Is.Not.SameAs(anotherMe));
    }
    

    4- Records can be structs and classes

    By default, Records behave like classes (they are reference types).

    public record Person(string Name, int Id);
    

    Sometimes that’s not what you want. Since C# 10 you can declare Records as Structs:

    public record struct Point(int X, int Y);
    

    Clearly, everything we’ve seen before is still valid.

    [Test]
    public void EquivalentStructsInstances_AreEqual()
    {
        var a = new Point(2, 1);
        var b = new Point(2, 1);
    
        Assert.That(b, Is.EqualTo(a));
        //Assert.That(a, Is.Not.SameAs(b));// does not compile!
    }
    

    Well, almost everything: you cannot use Is.SameAs() because structs are value types, so two instances can never be the same reference. You’ll get notified about it at compile time, with an error that says:

    The SameAs constraint always fails on value types as the actual and the expected value cannot be the same reference

    5- Records are actually not immutable

    We’ve seen that you cannot update existing Records. Well, that’s not totally correct.

    That assertion is true in the case of “simple” Records like Person:

    public record Person(string Name, int Id);
    

    But things change when we use another way of defining Records:

    public record Pair
    {
        public Pair(string key, string value)
        {
            Key = key;
            Value = value;
        }
    
        public string Key { get; set; }
        public string Value { get; set; }
    }
    

    We can explicitly declare the properties of the Record to make it look more like a plain class.

    Using this approach, we can still use the auto-equality functionality of Records:

    [Test]
    public void ComplexRecordsAreEquatable()
    {
        var a = new Pair("Capital", "Roma");
        var b = new Pair("Capital", "Roma");
    
        Assert.That(b, Is.EqualTo(a));
    }
    

    But we can update a single field without creating a brand new instance:

    [Test]
    public void ComplexRecordsAreNotImmutable()
    {
        var b = new Pair("Capital", "Roma");
        b.Value = "Torino";
    
        Assert.That(b.Value, Is.EqualTo("Torino"));
    }
    

    Also, immutability is shallow, even with the basic Record definition: if a property holds a mutable reference type, its contents can still be modified.

    The ComplexPair type below is a Record whose definition includes a list of strings.

    public record ComplexPair(string Key, string Value, List<string> Metadata);
    

    That list of strings is not immutable: you can add and remove items as you wish:

    [Test]
    public void ComplexRecordsAreNotImmutable2()
    {
        var b = new ComplexPair("Capital", "Roma", new List<string> { "City" });
        b.Metadata.Add("Another Value");
    
        Assert.That(b.Metadata.Count, Is.EqualTo(2));
    }
    

    In the example above, you can see that I added a new item to the Metadata list without creating a new object.
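    By the way, if you want an explicitly-declared Record to stay immutable, you can use init accessors instead of set (a small sketch, not part of the original examples):

    // init-only properties restore immutability for explicitly-declared Records:
    public record ImmutablePair
    {
        public string Key { get; init; }
        public string Value { get; init; }
    }
    
    // var pair = new ImmutablePair { Key = "Capital", Value = "Roma" };
    // pair.Value = "Torino"; // won't compile: init-only property
    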

    6- Records can have subtypes

    A neat feature is that we can create a hierarchy of Records in a very simple manner.

    Do you remember the Person definition?

    public record Person(string Name, int Id);
    

    Well, you can define a subtype just as you would do with plain classes:

    public record Employee(string Name, int Id, string Role) : Person(Name, Id);
    

    Of course, all the usual rules of inheritance and polymorphism still apply.

    [Test]
    public void Records_CanHaveSubtypes()
    {
        Person meEmp = new Employee("Davide", 1, "Chief");
    
        Assert.That(meEmp, Is.AssignableTo<Employee>());
        Assert.That(meEmp, Is.AssignableTo<Person>());
    }
    

    7- Records can be abstract

    …and yes, we can have Abstract Records!

    public abstract record Box(int Volume, string Material);
    

    This means that we cannot instantiate Records whose type is marked as abstract.

    var box = new Box(2, "Glass"); // cannot create it, it's abstract
    

    Instead, we need to define concrete subtypes to instantiate new objects:

    public record PlasticBox(int Volume) : Box(Volume, "Plastic");
    

    Again, all the rules we already know are still valid.

    [Test]
    public void Records_CanBeAbstract()
    {
        var plasticBox = new PlasticBox(2);
    
        Assert.That(plasticBox, Is.AssignableTo<Box>());
        Assert.That(plasticBox, Is.AssignableTo<PlasticBox>());
    }
    

    8- Records can be sealed

    Finally, Records can be marked as Sealed.

    public sealed record Point3D(int X, int Y, int Z);
    

    Marking a Record as Sealed means that we cannot declare subtypes.

    public record ColoredPoint3D(int X, int Y, int Z, string RgbColor) : Point3D(X, Y, Z); // Will not compile: Point3D is sealed!
    

    This can be useful when exposing your types to external systems.

    This article first appeared on Code4IT

    Additional resources

    As usual, a few links you might want to read to learn more about Records in C#.

    The first one is a tutorial from the Microsoft website that teaches you the basics of Records:

    🔗 Create record types | Microsoft Docs

    The second one is a splendid article by Gary Woodfine where he explores the internals of C# Records, and more:

    🔗 C# Records – The good, bad & ugly | Gary Woodfine

    Finally, if you’re interested in trivia about C# stuff we use but we rarely explore, here’s an article I wrote a while ago about GUIDs in C# – you’ll find some neat stuff in there!

    🔗 5 things you didn’t know about Guid in C# | Code4IT

    Wrapping up

    In this article, we’ve seen 8 things you probably didn’t know about Records in C#.

    Records are quite new in the .NET ecosystem, so we can expect more updates and functionalities.

    Is there anything else we should add? Or maybe something you did not expect?

    Happy coding!

    🐧



    Source link

  • C# Tip: use IHttpClientFactory to generate HttpClient instances

    C# Tip: use IHttpClientFactory to generate HttpClient instances



    The problem with HttpClient

    When you create lots of HttpClient instances, you may incur Socket Exhaustion.

    This happens because sockets are a finite resource, and they are not released exactly when you Dispose the client, but a bit later. So, when you create lots of clients, you may exhaust the available sockets.

    Even with using statements you may end up with Socket Exhaustion.

    class ResourceChecker
    {
        public async Task<bool> ResourceExists(string url)
        {
            using (HttpClient client = new HttpClient())
            {
                var response = await client.GetAsync(url);
                return response.IsSuccessStatusCode;
            }
        }
    }
    

    Actually, the real issue lies in the lifetime and disposal of the underlying HttpMessageHandler instances. With plain HttpClient objects, you have no control over them.

    Introducing HttpClientFactory

    IHttpClientFactory creates HttpClient instances for you.

    class ResourceChecker
    {
        private IHttpClientFactory _httpClientFactory;
    
        public ResourceChecker(IHttpClientFactory httpClientFactory)
        {
            _httpClientFactory = httpClientFactory;
        }
    
        public async Task<bool> ResourceExists(string url)
        {
            HttpClient client = _httpClientFactory.CreateClient();
            var response = await client.GetAsync(url);
            return response.IsSuccessStatusCode;
        }
    }
    

    The purpose of IHttpClientFactory is to solve exactly that issue: it manages and reuses the underlying HttpMessageHandler instances for you.

    An interesting feature of IHttpClientFactory is that you can define named clients with shared configuration that is applied to every HttpClient instance created under that name. For instance, you can set HTTP headers, the base address, and other properties in a single place, and have them applied everywhere.
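    For example, you can register a named client with shared defaults and then resolve it by name (a sketch: the "catalog" name and the base URL are hypothetical):

    // Registration: every client created with the name "catalog"
    // shares the same base address and default headers.
    services.AddHttpClient("catalog", client =>
    {
        client.BaseAddress = new Uri("https://api.example.com/"); // hypothetical URL
        client.DefaultRequestHeaders.Add("Accept", "application/json");
    });
    
    // Usage: resolve the pre-configured client by name.
    HttpClient catalogClient = _httpClientFactory.CreateClient("catalog");
    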

    How to add it to .NET Core APIs or Websites

    How can you use HttpClientFactory in your .NET projects?

    If you have the Startup class, you can simply add an instruction to the ConfigureServices method:

    public void ConfigureServices(IServiceCollection services)
    {
        services.AddHttpClient();
    }
    

    You can find that extension method under the Microsoft.Extensions.DependencyInjection namespace.
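    If your project uses the minimal hosting model instead of a Startup class, the registration is equivalent (a sketch assuming a standard .NET 6+ web app template):

    // Program.cs with the minimal hosting model
    var builder = WebApplication.CreateBuilder(args);
    builder.Services.AddHttpClient();
    
    var app = builder.Build();
    app.Run();
    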

    Wrapping up

    In this article, we’ve seen why you should not instantiate HttpClient objects manually and why you should use IHttpClientFactory instead.

    Happy coding!

    🐧



    Source link