Blog

  • Building a Blended Material Shader in WebGL with Solid.js

    Building a Blended Material Shader in WebGL with Solid.js



    Blackbird was a fun, experimental site that I used as a way to get familiar with WebGL inside of Solid.js. It told the story of how the SR-71 was built in highly technical detail. The wireframe effect covered here helped visualize the technology beneath the surface of the SR-71 while keeping the polished metal exterior visible, matching the site’s aesthetic.

    Here is how the effect looks on the Blackbird site:

    In this tutorial, we’ll rebuild that effect from scratch: rendering a model twice, once as a solid and once as a wireframe, then blending the two together in a shader for a smooth, animated transition. The end result is a flexible technique you can use for technical reveals, holograms, or any moment where you want to show both the structure and the surface of a 3D object.

    There are three things at work here: material properties, render targets, and a black-to-white shader gradient. Let’s get into it!

    But First, a Little About Solid.js

    Solid.js isn’t a framework name you hear often. I’ve switched my personal work to it for the ridiculously minimal developer experience, and because JSX remains the greatest thing since sliced bread. You absolutely don’t need the Solid.js part of this demo; you could strip it out and use vanilla JS all the same. But who knows, you may enjoy it 🙂

    Intrigued? Check out Solid.js.

    Why I Switched

    TLDR: Full-stack JSX without all of the opinions of Next and Nuxt, plus it’s like 8kb gzipped, wild.

    The technical version: it’s written in JSX, but doesn’t use a virtual DOM, so a “reactive” value (think useState()) doesn’t re-render an entire component, just the single DOM node that uses it. It also runs isomorphically, so "use client" is a thing of the past.

    Setting Up Our Scene

    We don’t need anything wild for the effect: a Mesh, Camera, Renderer, and Scene will do. I use a base Stage class (for theatrical-ish naming) to control when things get initialized.

    A Global Object for Tracking Window Dimensions

    Reading window.innerWidth and window.innerHeight triggers document reflow (more about document reflow here). So I keep them in one object, updating it only when necessary and reading from that object instead of window, avoiding reflow. Notice these are all set to 0 rather than actual values by default: window is undefined during SSR, so we wait to set them until our app is mounted, the GL class is initialized, and window is defined, to avoid everybody’s favorite error: Cannot read properties of undefined (reading ‘window’).

    // src/gl/viewport.js
    
    export const viewport = {
      width: 0,
      height: 0,
      devicePixelRatio: 1,
      aspectRatio: 0,
    };
    
    export const resizeViewport = () => {
      viewport.width = window.innerWidth;
      viewport.height = window.innerHeight;
    
      viewport.aspectRatio = viewport.width / viewport.height;
    
      viewport.devicePixelRatio = Math.min(window.devicePixelRatio, 2);
    };

    A Basic Three.js Scene, Renderer, and Camera

    Before we can render anything, we need a small framework to handle our scene setup, rendering loop, and resizing logic. Instead of scattering this across multiple files, we’ll wrap it in a Stage class that initializes the camera, renderer, and scene in one place. This makes it easier to keep our WebGL lifecycle organized, especially once we start adding more complex objects and effects.

    // src/gl/stage.js
    
    import { WebGLRenderer, Scene, PerspectiveCamera } from 'three';
    import { viewport, resizeViewport } from './viewport';
    
    class Stage {
      init(element) {
        resizeViewport(); // Set the initial viewport dimensions; helps avoid using window inside of viewport.js for SSR-friendliness
        
        this.camera = new PerspectiveCamera(45, viewport.aspectRatio, 0.1, 1000);
        this.camera.position.set(0, 0, 2); // back the camera up 2 units so it isn't on top of the meshes we make later; you won't see them otherwise
    
        this.renderer = new WebGLRenderer();
        this.renderer.setSize(viewport.width, viewport.height);
        element.appendChild(this.renderer.domElement); // attach the renderer to the dom so our canvas shows up
    
        this.renderer.setPixelRatio(viewport.devicePixelRatio); // Renders higher pixel ratios for screens that require it.
    
        this.scene = new Scene();
      }
    
      render() {
        this.renderer.render(this.scene, this.camera);
        requestAnimationFrame(this.render.bind(this));
        // All of the scene's child classes with a render method will have it called automatically
        this.scene.children.forEach((child) => {
          if (child.render && typeof child.render === 'function') {
            child.render();
          }
        });
      }
    
      resize() {
        this.renderer.setSize(viewport.width, viewport.height);
        this.camera.aspect = viewport.aspectRatio;
        this.camera.updateProjectionMatrix();
    
        // All of the scene's child classes with a resize method will have it called automatically
        this.scene.children.forEach((child) => {
          if (child.resize && typeof child.resize === 'function') {
            child.resize();
          }
        });
      }
    }
    
    export default new Stage();

    And a Fancy Mesh to Go With It

    With our stage ready, we can give it something interesting to render. A torus knot is perfect for this: it has plenty of curves and detail to show off both the wireframe and solid passes. We’ll start with a simple MeshNormalMaterial in wireframe mode so we can clearly see its structure before moving on to the blended shader version.

    // src/gl/torus.js
    
    import { Mesh, MeshNormalMaterial, TorusKnotGeometry } from 'three';
    
    export default class Torus extends Mesh {
      constructor() {
        super();
    
        this.geometry = new TorusKnotGeometry(1, 0.285, 300, 26);
        this.material = new MeshNormalMaterial({
          wireframe: true, // MeshNormalMaterial colors faces from their normals, so no color property is needed
        });
    
        this.position.set(0, 0, -8); // Back up the mesh from the camera so it's visible
      }
    }

    A quick note on lights

    For simplicity we’re using MeshNormalMaterial so we don’t have to mess with lights. The original effect on Blackbird had six lights, waaay too many. The GPU on my M1 Max choked at 30fps trying to render the complex models with realtime six-point lighting. But reducing this to just 2 lights (which looked visually identical) ran at 120fps no problem. Three.js isn’t like Blender, where you can plop in 14 lights and torture your beefy computer with a render for 12 hours while you sleep. The lights in WebGL have consequences 🫠

    Now, the Solid JSX Components to House It All

    // src/components/GlCanvas.tsx
    
    import { onMount, onCleanup } from 'solid-js';
    import Stage from '~/gl/stage';
    
    export default function GlCanvas() {
    // let is used instead of refs; these aren't reactive
      let el;
      let gl;
      let observer;
    
      onMount(() => {
        if(!el) return
        gl = Stage;
    
        gl.init(el);
        gl.render();
    
    
        observer = new ResizeObserver((entry) => gl.resize());
        observer.observe(el); // use ResizeObserver instead of the window resize event. 
        // It is debounced AND fires once when initialized; no need to call resize() onMount
      });
    
      onCleanup(() => {
        if (observer) {
          observer.disconnect();
        }
      });
    
    
      return (
        <div
          ref={el}
          style={{
            position: 'fixed',
            inset: 0,
            height: '100lvh',
            width: '100vw',
          }}
          
        />
      );
    }

    let is used to declare a ref; there is no formal useRef() function in Solid, since signals are the only reactive primitive. Read more on refs in Solid.

    Then slap that component into app.tsx:

    // src/app.tsx
    
    import { Router } from '@solidjs/router';
    import { FileRoutes } from '@solidjs/start/router';
    import { Suspense } from 'solid-js';
    import GlCanvas from './components/GlCanvas';
    
    export default function App() {
      return (
        <Router
          root={(props) => (
            <Suspense>
              {props.children}
              <GlCanvas />
            </Suspense>
          )}
        >
          <FileRoutes />
        </Router>
      );
    }

    Each 3D piece I use is tied to a specific element on the page (usually for timeline and scrolling), so I create an individual component to control each class. This helps me keep organized when I have 5 or 6 WebGL moments on one page.

    // src/components/WireframeDemo.tsx
    
    import { createEffect, createSignal, onMount } from 'solid-js'
    import Stage from '~/gl/stage';
    import Torus from '~/gl/torus';
    
    export default function WireframeDemo() {
      let el;
      const [element, setElement] = createSignal(null);
      const [actor, setActor] = createSignal(null);
    
      createEffect(() => {
        setElement(el);
        if (!element()) return;
    
        setActor(new Torus()); // Stage is initialized when the page initially mounts, 
        // so it's not available until the next tick. 
        // A signal forces this update to the next tick, 
        // after Stage is available.
    
        Stage.scene.add(actor());
      });
    
      return <div ref={el} />;
    }

    createEffect() instead of onMount(): createEffect automatically tracks its dependencies (element and actor, in this case) and re-runs the function when they change. No more useEffect() with dependency arrays 🙃. Read more on createEffect in Solid.

    Then a minimal route to put the component on:

    // src/routes/index.tsx
    
    import WireframeDemo from '~/components/WireframeDemo';
    
    export default function Home() {
      return (
        <main>
          <WireframeDemo />
        </main>
      );
    }
    Diagram showing the folder structure of the project

    Now you’ll see this:

    Rainbow torus knot

    Switching a Material to Wireframe

    I loved the wireframe styling for the Blackbird site! It fit the prototype feel of the story: fully textured models felt too clean, while wireframes are a bit “dirtier” and unpolished. You can wireframe just about any material in Three.js with this:

    // /gl/torus.js
    
      this.material.wireframe = true;
      this.material.needsUpdate = true;
    Rainbow torus knot changing from wireframe to solid colors

    But we want to do this dynamically on only part of our model, not on the entire thing.

    Enter render targets.

    The Fun Part: Render Targets

    Render targets are a super deep topic, but they boil down to this: whatever you see on screen is a frame rendered by your GPU. In WebGL you can capture that frame and re-use it as a texture on another mesh. You are creating a “target” for your rendered output: a render target.

    Since we’re going to need two of these targets, we can make a single class and re-use it.

    // src/gl/render-target.js
    
    import { WebGLRenderTarget } from 'three';
    import { viewport } from './viewport';
    
    export default class RenderTarget extends WebGLRenderTarget {
      constructor() {
        super();
    
        // setSize (rather than assigning width/height directly) also resizes the underlying texture
        this.setSize(
          viewport.width * viewport.devicePixelRatio,
          viewport.height * viewport.devicePixelRatio
        );
      }
    
      resize() {
        const w = viewport.width * viewport.devicePixelRatio;
        const h = viewport.height * viewport.devicePixelRatio;
    
        this.setSize(w, h);
      }
    }

    This is just an output for a texture, nothing more.

    Now we can make the class that will consume these outputs. It’s a lot of classes, I know, but splitting up individual units like this helps me keep track of where stuff happens. 800-line spaghetti mega-classes are the stuff of nightmares when debugging WebGL.

    // src/gl/targeted-torus.js
    
    import {
      Mesh,
      MeshNormalMaterial,
      PerspectiveCamera,
      PlaneGeometry,
    } from 'three';
    import Torus from './torus';
    import { viewport } from './viewport';
    import RenderTarget from './render-target';
    import Stage from './stage';
    
    export default class TargetedTorus extends Mesh {
      targetSolid = new RenderTarget();
      targetWireframe = new RenderTarget();
    
      scene = new Torus(); // The shape we created earlier
      camera = new PerspectiveCamera(45, viewport.aspectRatio, 0.1, 1000);
      
      constructor() {
        super();
    
        this.geometry = new PlaneGeometry(1, 1);
        this.material = new MeshNormalMaterial();
      }
    
      resize() {
        this.targetSolid.resize();
        this.targetWireframe.resize();
    
        this.camera.aspect = viewport.aspectRatio;
        this.camera.updateProjectionMatrix();
      }
    }

    Now, switch our WireframeDemo.tsx component to use the TargetedTorus class, instead of Torus:

    // src/components/WireframeDemo.tsx 
    
    import { createEffect, createSignal, onMount } from 'solid-js';
    import Stage from '~/gl/stage';
    import TargetedTorus from '~/gl/targeted-torus';
    
    export default function WireframeDemo() {
      let el;
      const [element, setElement] = createSignal(null);
      const [actor, setActor] = createSignal(null);
    
      createEffect(() => {
        setElement(el);
        if (!element()) return;
    
        setActor(new TargetedTorus()); // << change me
    
        Stage.scene.add(actor());
      });
    
      return <div ref={el} data-gl="wireframe" />;
    }

    “Now all I see is a blue square, Nathan. It feels like we’re going backwards, show me the cool shape again.”

    Shhhhh, It’s by design I swear!

    From MeshNormalMaterial to ShaderMaterial

    We can now take our Torus’s rendered output and smack it onto the blue plane as a texture using ShaderMaterial (remember to import ShaderMaterial from 'three'). MeshNormalMaterial doesn’t let us use a texture, and we’ll need shaders soon anyway. Inside of targeted-torus.js, remove the MeshNormalMaterial and switch this in:

    // src/gl/targeted-torus.js
    
    this.material = new ShaderMaterial({
      vertexShader: `
        varying vec2 v_uv;
    
        void main() {
          gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
          v_uv = uv;
        }
      `,
      fragmentShader: `
        varying vec2 v_uv;
        varying vec3 v_position;
    
        void main() {
          gl_FragColor = vec4(0.67, 0.08, 0.86, 1.0);
        }
      `,
    });

    Now we have a much prettier purple plane with the help of two shaders:

    • Vertex shaders manipulate the vertex positions of our geometry; we aren’t going to touch this one further
    • Fragment shaders assign the colors and properties to each pixel of our material. This shader tells every pixel to be purple

    Using the Render Target Texture

    To show our Torus instead of that purple color, we can feed the fragment shader an image texture via uniforms:

    // src/gl/targeted-torus.js
    
    this.material = new ShaderMaterial({
      vertexShader: `
        varying vec2 v_uv;
    
        void main() {
          gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
          v_uv = uv;
        }
      `,
      fragmentShader: `
        varying vec2 v_uv;
        varying vec3 v_position;
    
        // declare 2 uniforms
        uniform sampler2D u_texture_solid;
        uniform sampler2D u_texture_wireframe;
    
        void main() {
          // declare 2 images
          vec4 wireframe_texture = texture2D(u_texture_wireframe, v_uv);
          vec4 solid_texture = texture2D(u_texture_solid, v_uv);
    
          // set the color to that of the image
          gl_FragColor = solid_texture;
        }
      `,
      uniforms: {
        u_texture_solid: { value: this.targetSolid.texture },
        u_texture_wireframe: { value: this.targetWireframe.texture },
      },
    });

    And add a render method to our TargetedTorus class (this is called automatically by the Stage class):

    // src/gl/targeted-torus.js
    
    render() {
      // Draw the torus scene into the solid render target, then hand its texture to the shader
      Stage.renderer.setRenderTarget(this.targetSolid);
      Stage.renderer.render(this.scene, this.camera);
      this.material.uniforms.u_texture_solid.value = this.targetSolid.texture;
    
      // Reset the render target so the plane itself is drawn to the screen
      Stage.renderer.setRenderTarget(null);
    }

    THE TORUS IS BACK. We’ve passed our image texture into the shader and it’s outputting our original render.

    Mixing Wireframe and Solid Materials with Shaders

    Shaders were black magic to me before this project. It was my first time using them in production, and coming from frontend, where you think in boxes, shader coordinates running from 0 to 1 were far harder to reason about. But I’d used Photoshop and After Effects with layers plenty of times, and those applications do a lot of the same work shaders can: GPU computing. That made it far easier. I’d start by picturing or drawing what I wanted, think through how I’d do it in Photoshop, then ask myself how I could do it with shaders. Translating Photoshop or AE into shaders is far less mentally taxing when you don’t have a deep foundation in shaders.

    Populating Both Render Targets

    At the moment, we are only saving the solid render into the targetSolid render target. We will update our render loop so that our shader has both targetSolid and targetWireframe available simultaneously.

    // src/gl/targeted-torus.js
    
    render() {
      // Render wireframe version to wireframe render target
      this.scene.material.wireframe = true;
      Stage.renderer.setRenderTarget(this.targetWireframe);
      Stage.renderer.render(this.scene, this.camera);
      this.material.uniforms.u_texture_wireframe.value = this.targetWireframe.texture;
    
      // Render solid version to solid render target
      this.scene.material.wireframe = false;
      Stage.renderer.setRenderTarget(this.targetSolid);
      Stage.renderer.render(this.scene, this.camera);
      this.material.uniforms.u_texture_solid.value = this.targetSolid.texture;
    
      // Reset render target
      Stage.renderer.setRenderTarget(null);
    }

    With this, you end up with a flow that under the hood looks like this:

    Diagram with red lines describing data being passed around

    Fading Between Two Textures

    Our fragment shader will get a little update, 2 additions:

    • smoothstep creates a smooth ramp between 2 values. UVs only go from 0 to 1, so in this case we use .15 and .65 as the limits (they make the effect more obvious than 0 and 1 would). Then we use the x value of the UVs to decide what gets fed into smoothstep.
    • vec4 mixed = mix(wireframe_texture, solid_texture, blend); mix does exactly what it says: it mixes 2 values together at a ratio determined by blend, with 0.5 being a perfectly even split.

    // src/gl/targeted-torus.js
    
    fragmentShader: `
      varying vec2 v_uv;
      varying vec3 v_position;
    
      // declare 2 uniforms
      uniform sampler2D u_texture_solid;
      uniform sampler2D u_texture_wireframe;
    
      void main() {
        // declare 2 images
        vec4 wireframe_texture = texture2D(u_texture_wireframe, v_uv);
        vec4 solid_texture = texture2D(u_texture_solid, v_uv);
    
        float blend = smoothstep(0.15, 0.65, v_uv.x);
        vec4 mixed = mix(wireframe_texture, solid_texture, blend);        
    
        gl_FragColor = mixed;
      }
    `,

    And boom, MIXED:

    Rainbow torus knot with wireframe texture

    Let’s be honest with ourselves: this looks exquisitely boring while static, so let’s spice it up with a little magic from GSAP.

    // src/gl/torus.js
    
    import {
      Mesh,
      MeshNormalMaterial,
      TorusKnotGeometry,
    } from 'three';
    import gsap from 'gsap';
    
    export default class Torus extends Mesh {
      constructor() {
        super();
    
        this.geometry = new TorusKnotGeometry(1, 0.285, 300, 26);
        this.material = new MeshNormalMaterial();
    
        this.position.set(0, 0, -8);
    
        // add me!
        gsap.to(this.rotation, {
          y: 540 * (Math.PI / 180), // needs to be in radians, not degrees
          ease: 'power3.inOut',
          duration: 4,
          repeat: -1,
          yoyo: true,
        });
      }
    }

    Thank You!

    Congratulations, you’ve officially spent a measurable portion of your day blending two materials together. It was worth it though, wasn’t it? At the very least, I hope this saved you some of the mental gymnastics orchestrating a pair of render targets.

    Have questions? Hit me up on Twitter!



    Source link

  • F.I.R.S.T. acronym for better unit tests | Code4IT

    F.I.R.S.T. acronym for better unit tests | Code4IT


    Good unit tests have some properties in common: they are Fast, Independent, Repeatable, Self-validating, and Thorough. In a word: FIRST!

    FIRST is an acronym that you should always remember if you want to write clean and extensible tests.

    This acronym tells us that Unit Tests should be Fast, Independent, Repeatable, Self-validating, and Thorough.

    Fast

    You should not create tests that require a long time for setup and start-up: ideally, you should be able to run the whole test suite in under a minute.

    If your unit tests take too long to run, something is probably wrong; there are many possibilities:

    1. You’re trying to access remote sources (such as real APIs, databases, and so on): you should mock those dependencies to make tests faster and to avoid accessing real resources (see the sketch after this list). If you need real data, consider creating integration/e2e tests instead.
    2. Your system under test is too complex to build: too many dependencies? DIT (Depth of Inheritance Tree) value too high?
    3. The method under test does too many things. You should consider splitting it into separate, independent methods, and let the caller orchestrate the method invocations as necessary.
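
    For example, here’s a minimal sketch of point 1, with a hypothetical IWeatherApi dependency: instead of calling the real API, the test hands the service a stub, so there’s no network and no latency.

    public interface IWeatherApi
    {
        int GetTemperature(string city);
    }
    
    // A hand-rolled stub: no network call, so the test stays fast
    public class StubWeatherApi : IWeatherApi
    {
        public int GetTemperature(string city) => 21;
    }
    
    public class ForecastService
    {
        private readonly IWeatherApi _api;
    
        public ForecastService(IWeatherApi api) => _api = api;
    
        public string DescribeWeather(string city)
            => $"{city}: {_api.GetTemperature(city)} degrees";
    }
    
    [Fact]
    public void DescribeWeather_IncludesTemperatureFromApi()
    {
        var service = new ForecastService(new StubWeatherApi());
    
        Assert.Equal("Turin: 21 degrees", service.DescribeWeather("Turin"));
    }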

    Independent (or Isolated)

    Test methods should be independent of one another.

    Avoid doing something like this:

    MyObject myObj = null;
    
    [Fact]
    public void Test1()
    {
        myObj = new MyObject();
        Assert.True(string.IsNullOrEmpty(myObj.MyProperty));
    }
    
    [Fact]
    public void Test2()
    {
        myObj.MyProperty = "ciao";
        Assert.Equal("oaic", Reverse(myObj.MyProperty));
    }
    

    Here, to have Test2 working correctly, Test1 must run before it, otherwise myObj would be null. There’s a dependency between Test1 and Test2.

    How to avoid it? Create new instances for every test! Whether with some custom factory methods or in the setup phase (in xUnit, that’s the test class constructor). And remember to reset the mocks as well.
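
    In xUnit, for example, the test class is instantiated from scratch for every test method, so the constructor doubles as a per-test setup. A minimal sketch, reusing the MyObject type from above:

    public class MyObjectTests
    {
        private readonly MyObject _myObj;
    
        // xUnit builds a fresh MyObjectTests for every [Fact],
        // so each test gets its own MyObject: no shared state, no ordering issues
        public MyObjectTests()
        {
            _myObj = new MyObject();
        }
    
        [Fact]
        public void MyProperty_IsEmptyByDefault()
        {
            Assert.True(string.IsNullOrEmpty(_myObj.MyProperty));
        }
    
        [Fact]
        public void MyProperty_CanBeSet()
        {
            _myObj.MyProperty = "ciao";
            Assert.Equal("ciao", _myObj.MyProperty);
        }
    }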

    Repeatable

    Unit Tests should be repeatable. This means that wherever and whenever you run them, they should behave correctly.

    So you should remove any dependency on the file system, current date, and so on.

    Take this test as an example:

    [Fact]
    void TestDate_DoNotDoIt()
    {
    
        DateTime d = DateTime.UtcNow;
        string dateAsString = d.ToString("yyyy-MM-dd");
    
        Assert.Equal("2022-07-19", dateAsString);
    }
    

    This test is strictly bound to the current date. So, if I run this test again in a month, it will fail.

    We should instead remove that dependency and use dummy values or mock.

    [Fact]
    void TestDate_DoIt()
    {
    
        DateTime d = new DateTime(2022,7,19);
        string dateAsString = d.ToString("yyyy-MM-dd");
    
        Assert.Equal("2022-07-19", dateAsString);
    }
    

    There are many ways to inject DateTime (and other similar dependencies) with .NET. I’ve listed some of them in this article: “3 ways to inject DateTime and test it”.

    Self-validating

    Self-validating means that a test should perform operations and programmatically check for the result.

    For instance, if you’re testing that you’ve written something on a file, the test itself is in charge of checking that it worked correctly. No manual operations should be done.

    Also, tests should provide explicit feedback: a test either passes or fails; no in-between.
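
    To stick with the file example, here’s a minimal sketch (FileWriter and its Write method are illustrative, not from this article):

    [Fact]
    public void Write_PersistsContentToFile()
    {
        var path = Path.Combine(Path.GetTempPath(), Path.GetRandomFileName());
    
        try
        {
            new FileWriter().Write(path, "ciao");
    
            // the test itself verifies the outcome: no manual inspection needed
            Assert.Equal("ciao", File.ReadAllText(path));
        }
        finally
        {
            File.Delete(path);
        }
    }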

    Thorough

    Unit Tests should be thorough in that they must validate both the happy paths and the failing paths.

    So you should test your functions with valid inputs and with invalid inputs.

    You should also validate what happens if an exception is thrown while executing the path: are you handling errors correctly?

    Have a look at this class, with a single, simple, method:

    public class ItemsService
    {
    
        readonly IItemsRepository _itemsRepo;
    
        public ItemsService(IItemsRepository itemsRepo)
        {
            _itemsRepo = itemsRepo;
        }
    
        public IEnumerable<Item> GetItemsByCategory(string category, int maxItems)
        {
    
            var allItems = _itemsRepo.GetItems();
    
            return allItems
                    .Where(i => i.Category == category)
                    .Take(maxItems);
        }
    }
    

    Which tests should you write for GetItemsByCategory?

    I can think of these:

    • what if category is null or empty?
    • what if maxItems is less than 0?
    • what if allItems is null?
    • what if one of the items inside allItems is null?
    • what if _itemsRepo.GetItems() throws an exception?
    • what if _itemsRepo is null?

    As you can see, even for a trivial method like this you should write a lot of tests, to ensure that you haven’t missed anything.
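
    As a starting point, here’s how one of those cases might look with Moq (assuming Item exposes a settable Category property, as the Where clause above suggests):

    [Fact]
    public void GetItemsByCategory_ReturnsEmpty_WhenNoItemMatchesCategory()
    {
        var repoMock = new Mock<IItemsRepository>();
        repoMock.Setup(r => r.GetItems())
            .Returns(new List<Item> { new Item { Category = "books" } });
    
        var sut = new ItemsService(repoMock.Object);
    
        var result = sut.GetItemsByCategory("toys", 10);
    
        Assert.Empty(result);
    }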

    Conclusion

    F.I.R.S.T. is a good way to remember the properties of a good unit test suite.

    Always try to stick to it, and remember that tests should be written even better than production code.

    Happy coding!

    🐧



    Source link

  • How to propagate HTTP Headers (and Correlation IDs) using HttpClients in C#

    How to propagate HTTP Headers (and Correlation IDs) using HttpClients in C#


    Propagating HTTP Headers can be useful, especially when dealing with Correlation IDs. It’s time to customize our HttpClients!

    Imagine this: you have a system made up of different applications that communicate via HTTP. There’s some sort of entry point, exposed to the clients, that orchestrates the calls to the other applications. How do you correlate those requests?

    A good idea is to use a Correlation ID: one common approach for HTTP-based systems is passing a value to the “public” endpoint using HTTP headers; that value will be passed to all the other systems involved in that operation to say “hey, these incoming requests in the internal systems happened because of THAT SPECIFIC request in the public endpoint”. Of course, it’s more complex than this, but you get the idea.

    Now. How can we propagate an HTTP Header in .NET? I found this solution on GitHub, provided by no less than David Fowler. In this article, I’m gonna dissect his code to see how he built this solution.

    Important update: there’s a NuGet package that implements these functionalities: Microsoft.AspNetCore.HeaderPropagation. Consider this article as an excuse to understand what happens behind the scenes of an HTTP call, and use it to learn how to customize and extend those functionalities. Here’s how to integrate that package.

    Just interested in the C# methods?

    As I said, I’m not reinventing anything new: the source code I’m using for this article is available on GitHub (see link above), but still, I’ll paste the code here, for simplicity.

    First of all, we have two extension methods that add some custom functionalities to the IServiceCollection.

    public static class HeaderPropagationExtensions
    {
        public static IServiceCollection AddHeaderPropagation(this IServiceCollection services, Action<HeaderPropagationOptions> configure)
        {
            services.AddHttpContextAccessor();
            services.ConfigureAll(configure);
            services.TryAddEnumerable(ServiceDescriptor.Singleton<IHttpMessageHandlerBuilderFilter, HeaderPropagationMessageHandlerBuilderFilter>());
            return services;
        }
    
        public static IHttpClientBuilder AddHeaderPropagation(this IHttpClientBuilder builder, Action<HeaderPropagationOptions> configure)
        {
            builder.Services.AddHttpContextAccessor();
            builder.Services.Configure(builder.Name, configure);
            builder.AddHttpMessageHandler((sp) =>
            {
                var options = sp.GetRequiredService<IOptionsMonitor<HeaderPropagationOptions>>();
                var contextAccessor = sp.GetRequiredService<IHttpContextAccessor>();
    
                return new HeaderPropagationMessageHandler(options.Get(builder.Name), contextAccessor);
            });
    
            return builder;
        }
    }
    

    Then we have a Filter that will be used to customize how the HttpClients must be built.

    internal class HeaderPropagationMessageHandlerBuilderFilter : IHttpMessageHandlerBuilderFilter
    {
        private readonly HeaderPropagationOptions _options;
        private readonly IHttpContextAccessor _contextAccessor;
    
        public HeaderPropagationMessageHandlerBuilderFilter(IOptions<HeaderPropagationOptions> options, IHttpContextAccessor contextAccessor)
        {
            _options = options.Value;
            _contextAccessor = contextAccessor;
        }
    
        public Action<HttpMessageHandlerBuilder> Configure(Action<HttpMessageHandlerBuilder> next)
        {
            return builder =>
            {
                builder.AdditionalHandlers.Add(new HeaderPropagationMessageHandler(_options, _contextAccessor));
                next(builder);
            };
        }
    }
    

    Next, a simple class that holds the headers we want to propagate:

    public class HeaderPropagationOptions
    {
        public IList<string> HeaderNames { get; set; } = new List<string>();
    }
    

    And, lastly, the handler that actually propagates the headers:

    public class HeaderPropagationMessageHandler : DelegatingHandler
    {
        private readonly HeaderPropagationOptions _options;
        private readonly IHttpContextAccessor _contextAccessor;
    
        public HeaderPropagationMessageHandler(HeaderPropagationOptions options, IHttpContextAccessor contextAccessor)
        {
            _options = options;
            _contextAccessor = contextAccessor;
        }
    
        protected override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, System.Threading.CancellationToken cancellationToken)
        {
            if (_contextAccessor.HttpContext != null)
            {
                foreach (var headerName in _options.HeaderNames)
                {
                    // Get the incoming header value
                    var headerValue = _contextAccessor.HttpContext.Request.Headers[headerName];
                    if (StringValues.IsNullOrEmpty(headerValue))
                    {
                        continue;
                    }
    
                    request.Headers.TryAddWithoutValidation(headerName, (string[])headerValue);
                }
            }
    
            return base.SendAsync(request, cancellationToken);
        }
    }
    

    Ok, and how can we use all of this?

    It’s quite easy: if you want to propagate the my-correlation-id header for all the HttpClients created in your application, you just have to add this line to your Startup method.

    builder.Services.AddHeaderPropagation(options => options.HeaderNames.Add("my-correlation-id"));
    

    Time to study this code!

    How to “enrich” HTTP requests using DelegatingHandler

    Let’s start with the HeaderPropagationMessageHandler class:

    public class HeaderPropagationMessageHandler : DelegatingHandler
    {
        private readonly HeaderPropagationOptions _options;
        private readonly IHttpContextAccessor _contextAccessor;
    
        public HeaderPropagationMessageHandler(HeaderPropagationOptions options, IHttpContextAccessor contextAccessor)
        {
            _options = options;
            _contextAccessor = contextAccessor;
        }
    
        protected override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, System.Threading.CancellationToken cancellationToken)
        {
            if (_contextAccessor.HttpContext != null)
            {
                foreach (var headerName in _options.HeaderNames)
                {
                    // Get the incoming header value
                    var headerValue = _contextAccessor.HttpContext.Request.Headers[headerName];
                    if (StringValues.IsNullOrEmpty(headerValue))
                    {
                        continue;
                    }
    
                    request.Headers.TryAddWithoutValidation(headerName, (string[])headerValue);
                }
            }
    
            return base.SendAsync(request, cancellationToken);
        }
    }
    

    This class lies in the middle of the HTTP Request pipeline. It can extend the functionalities of HTTP Clients because it inherits from System.Net.Http.DelegatingHandler.

    If you recall from a previous article, the SendAsync method is the real core of any HTTP call performed using .NET’s HttpClients, and here we’re enriching that method by propagating some HTTP headers.

     protected override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, System.Threading.CancellationToken cancellationToken)
    {
        if (_contextAccessor.HttpContext != null)
        {
            foreach (var headerName in _options.HeaderNames)
            {
                // Get the incoming header value
                var headerValue = _contextAccessor.HttpContext.Request.Headers[headerName];
                if (StringValues.IsNullOrEmpty(headerValue))
                {
                    continue;
                }
    
                request.Headers.TryAddWithoutValidation(headerName, (string[])headerValue);
            }
        }
    
        return base.SendAsync(request, cancellationToken);
    }
    

    By using _contextAccessor we can access the current HTTP Context. From there, we retrieve the current HTTP headers, check if one of them must be propagated (by looking up _options.HeaderNames), and finally, we add the header to the outgoing HTTP call by using TryAddWithoutValidation.

    HTTP Headers are “cloned” and propagated

    Notice that we’ve used `TryAddWithoutValidation` instead of `Add`: in this way, we can use whichever HTTP header key we want without worrying about invalid names (such as the ones with a new line in it). Invalid header names will simply be ignored, as opposed to the Add method that will throw an exception.
    Finally, we continue with the HTTP call by executing `base.SendAsync`, passing the `HttpRequestMessage` object now enriched with additional headers.
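
    A quick sketch of the difference (the newline makes the header name deliberately invalid):

    var request = new HttpRequestMessage();
    
    // TryAddWithoutValidation refuses the invalid name and simply returns false
    bool added = request.Headers.TryAddWithoutValidation("invalid\nname", "123");
    Console.WriteLine(added); // false
    
    // request.Headers.Add("invalid\nname", "123"); // this, instead, throws a FormatException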

    Using HttpMessageHandlerBuilder to configure how HttpClients must be built

    The Microsoft.Extensions.Http.IHttpMessageHandlerBuilderFilter interface allows you to apply some custom configurations to the HttpMessageHandlerBuilder right before the HttpMessageHandler object is built.

    internal class HeaderPropagationMessageHandlerBuilderFilter : IHttpMessageHandlerBuilderFilter
    {
        private readonly HeaderPropagationOptions _options;
        private readonly IHttpContextAccessor _contextAccessor;
    
        public HeaderPropagationMessageHandlerBuilderFilter(IOptions<HeaderPropagationOptions> options, IHttpContextAccessor contextAccessor)
        {
            _options = options.Value;
            _contextAccessor = contextAccessor;
        }
    
        public Action<HttpMessageHandlerBuilder> Configure(Action<HttpMessageHandlerBuilder> next)
        {
            return builder =>
            {
                builder.AdditionalHandlers.Add(new HeaderPropagationMessageHandler(_options, _contextAccessor));
                next(builder);
            };
        }
    }
    

    The Configure method allows you to customize how the HttpMessageHandler will be built: we are adding a new instance of the HeaderPropagationMessageHandler class we’ve seen before to the current HttpMessageHandlerBuilder’s AdditionalHandlers collection. All the handlers registered in the list will then be used to build the HttpMessageHandler object we’ll use to send and receive requests.

    By having a look at the definition of HttpMessageHandlerBuilder you can grasp a bit of what happens when we’re creating HttpClients in .NET.

    namespace Microsoft.Extensions.Http
    {
        public abstract class HttpMessageHandlerBuilder
        {
            protected HttpMessageHandlerBuilder();
    
            public abstract IList<DelegatingHandler> AdditionalHandlers { get; }
    
            public abstract string Name { get; set; }
    
            public abstract HttpMessageHandler PrimaryHandler { get; set; }
    
            public virtual IServiceProvider Services { get; }
    
            protected internal static HttpMessageHandler CreateHandlerPipeline(HttpMessageHandler primaryHandler, IEnumerable<DelegatingHandler> additionalHandlers);
    
            public abstract HttpMessageHandler Build();
        }
    
    }
    

    Ah, and remember the wise words you can read in the docs of that class:

    The Microsoft.Extensions.Http.HttpMessageHandlerBuilder is registered in the service collection as a transient service.

    Nice 😎

    Share the behavior with all the HTTP Clients in the .NET application

    Now that we’ve defined the custom behavior of HTTP clients, we need to integrate it into our .NET application.

    public static IServiceCollection AddHeaderPropagation(this IServiceCollection services, Action<HeaderPropagationOptions> configure)
    {
        services.AddHttpContextAccessor();
        services.ConfigureAll(configure);
        services.TryAddEnumerable(ServiceDescriptor.Singleton<IHttpMessageHandlerBuilderFilter, HeaderPropagationMessageHandlerBuilderFilter>());
        return services;
    }
    

    Here, we’re gonna extend the IServiceCollection with those functionalities. First, we add AddHttpContextAccessor, which allows us to access the current HTTP Context (the one we’ve used in the HeaderPropagationMessageHandler class).

    Then, services.ConfigureAll(configure) registers a HeaderPropagationOptions instance that will be used by HeaderPropagationMessageHandlerBuilderFilter. Without that line, we wouldn’t be able to specify the names of the headers to be propagated.

    Finally, we have this line:

    services.TryAddEnumerable(ServiceDescriptor.Singleton<IHttpMessageHandlerBuilderFilter, HeaderPropagationMessageHandlerBuilderFilter>());
    

    Honestly, I haven’t understood it thoroughly: I thought it allowed us to use more than one class implementing IHttpMessageHandlerBuilderFilter, but apparently if we create a sibling class and add them both using Add, everything works the same. If you know what this line means, drop a comment below! 👇
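
    For what it’s worth, the docs suggest that TryAddEnumerable adds the descriptor only when the same service/implementation pair isn’t registered yet, so it mainly protects against accidental double registration (for example, calling AddHeaderPropagation twice). A small sketch with hypothetical IMyFilter types:

    using Microsoft.Extensions.DependencyInjection;
    using Microsoft.Extensions.DependencyInjection.Extensions;
    
    var services = new ServiceCollection();
    
    // added: no IMyFilter registrations yet
    services.TryAddEnumerable(ServiceDescriptor.Singleton<IMyFilter, FilterA>());
    
    // skipped: the same service/implementation pair is already registered
    services.TryAddEnumerable(ServiceDescriptor.Singleton<IMyFilter, FilterA>());
    
    // added: same service type, but a different implementation
    services.TryAddEnumerable(ServiceDescriptor.Singleton<IMyFilter, FilterB>());
    
    Console.WriteLine(services.Count); // 2
    
    public interface IMyFilter { }
    public class FilterA : IMyFilter { }
    public class FilterB : IMyFilter { }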

    Wherever you access the ServiceCollection object (be it in the Startup or in the Program class), you can propagate HTTP headers for every HttpClient by using

    builder.Services.AddHeaderPropagation(options =>
        options.HeaderNames.Add("my-correlation-id")
    );
    

    Yes, AddHeaderPropagation is the method we’ve seen in the previous paragraph!

    Seeing it in action

    Now we have all the pieces in place.

    It’s time to run it 😎

    To fully understand it, I strongly suggest forking this repository I’ve created and running it locally, placing some breakpoints here and there.

    As a recap: in the Program class, I’ve added these lines to create a named HttpClient specifying its BaseAddress property. Then I’ve added the HeaderPropagation as we’ve seen before.

    builder.Services.AddHttpClient("items")
                        .ConfigureHttpClient(c => c.BaseAddress = new Uri("https://en5xof8r16a6h.x.pipedream.net/"));
    
    builder.Services.AddHeaderPropagation(options =>
        options.HeaderNames.Add("my-correlation-id")
    );
    

    There’s also a simple Controller that acts as an entry point and that, using an HttpClient, sends data to another endpoint (the one defined in the previous snippet).

    [HttpPost]
    public async Task<IActionResult> PostAsync([FromQuery] string value)
    {
        var item = new Item(value);
    
        var httpClient = _httpClientFactory.CreateClient("items");
        await httpClient.PostAsJsonAsync("/", item);
        return NoContent();
    }
    

    What happens at start-up time

    When a .NET application starts up, the Main method in the Program class acts as an entry point and registers all the dependencies and configurations required.

    We will then call builder.Services.AddHeaderPropagation, which is the method present in the HeaderPropagationExtensions class.

    All the configurations are then set, but no actual operations are being executed.

    The application then starts normally, waiting for incoming requests.

    What happens at runtime

    Now, when we call the PostAsync method by passing an HTTP header such as my-correlation-id:123, things get interesting.

    The first operation is

    var httpClient = _httpClientFactory.CreateClient("items");
    

    While creating the HttpClient, the engine calls all the registered IHttpMessageHandlerBuilderFilter instances and invokes their Configure methods. So, you’ll see the execution move to HeaderPropagationMessageHandlerBuilderFilter’s Configure.

    public Action<HttpMessageHandlerBuilder> Configure(Action<HttpMessageHandlerBuilder> next)
    {
        return builder =>
        {
            builder.AdditionalHandlers.Add(new HeaderPropagationMessageHandler(_options, _contextAccessor));
            next(builder);
        };
    }
    

    Of course, you’re also executing the HeaderPropagationMessageHandler constructor.

    The HttpClient is now ready: when we call httpClient.PostAsJsonAsync("/", item) we’re also executing all the registered DelegatingHandler instances, such as our HeaderPropagationMessageHandler. In particular, we’re executing the SendAsync method and adding the required HTTP Headers to the outgoing HTTP calls.

    We will then see the same HTTP Header on the destination endpoint.
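
    For instance, the destination API could read it back with something like this (an illustrative controller action; _logger is an injected ILogger, not part of the demo repo):

    [HttpPost]
    public IActionResult Post([FromBody] Item item)
    {
        // the propagated header arrives like any other incoming header
        string correlationId = Request.Headers["my-correlation-id"].ToString();
        _logger.LogInformation("Received item with correlation id {CorrelationId}", correlationId);
    
        return NoContent();
    }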

    We did it!

    Propagating CorrelationId to a specific HttpClient

    You can also specify which headers need to be propagated on single HTTP Clients:

    public static IHttpClientBuilder AddHeaderPropagation(this IHttpClientBuilder builder, Action<HeaderPropagationOptions> configure)
    {
        builder.Services.AddHttpContextAccessor();
        builder.Services.Configure(builder.Name, configure);
    
        builder.AddHttpMessageHandler((sp) =>
        {
            var options = sp.GetRequiredService<IOptionsMonitor<HeaderPropagationOptions>>();
            var contextAccessor = sp.GetRequiredService<IHttpContextAccessor>();
    
            return new HeaderPropagationMessageHandler(options.Get(builder.Name), contextAccessor);
        });
    
        return builder;
    }
    

    This works similarly, but registers the handler only for a specific HttpClient.

    For instance, you can have 2 distinct HttpClients that will each propagate only a specific set of HTTP Headers:

    builder.Services.AddHttpClient("items")
            .AddHeaderPropagation(options => options.HeaderNames.Add("my-correlation-id"));
    
    builder.Services.AddHttpClient("customers")
            .AddHeaderPropagation(options => options.HeaderNames.Add("another-correlation-id"));
    

    Further readings

    Finally, some additional resources if you want to read more.

    For sure, you should check out (and star⭐) David Fowler’s code:

    🔗 Original code | GitHub

    If you’re not sure what extension methods are (and you cannot answer this question: how does inheritance work with extension methods?), then you can have a look at this article:

    🔗 How you can create extension methods in C# | Code4IT

    We heavily rely on HttpClient and HttpClientFactory. How can you test them? Well, by mocking the SendAsync method!

    🔗 How to test HttpClientFactory with Moq | Code4IT

    We’ve seen the role HttpMessageHandlerBuilder plays when building HttpClients. You can explore that class starting from the documentation.

    🔗 HttpMessageHandlerBuilder Class | Microsoft Docs

    We’ve already seen how to inject and use HttpContext in our applications:

    🔗 How to access the HttpContext in .NET API

    Finally, the repository that you can fork to toy with it:

    🔗 PropagateCorrelationIdOnHttpClients | GitHub

    This article first appeared on Code4IT

    Conclusion

    What a ride!

    We’ve seen how to add functionalities to HttpClients and to HTTP messages. All integrated into the .NET pipeline!

    We’ve learned how to propagate generic HTTP Headers. Of course, you can choose any custom HttpHeader and promote one of them as CorrelationId.

    Again, I invite you to download the code and toy with it – it’s incredibly interesting 😎

    Happy coding!

    🐧



    Source link

  • RBI Emphasizes Adopting Zero Trust Approaches for Banking Institutions

    RBI Emphasizes Adopting Zero Trust Approaches for Banking Institutions


    In a significant move to bolster cybersecurity in India’s financial ecosystem, the Reserve Bank of India (RBI) has underscored the urgent need for regulated entities—especially banks—to adopt Zero Trust approaches as part of a broader strategy to curb cyber fraud. In its latest Financial Stability Report (June 2025), RBI highlighted Zero Trust as a foundational pillar for risk-based supervision, AI-aware defenses, and proactive cyber risk management.

    The directive comes amid growing concerns about the digital attack surface, vendor lock-in risks, and the systemic threats posed by overreliance on a few IT infrastructure providers. RBI has clarified that traditional perimeter-based security is no longer enough, and financial institutions must transition to continuous verification models where no user or device is inherently trusted.

    What is Zero Trust?

    Zero Trust is a modern security framework built on the principle: “Never trust, always verify.”

    Unlike legacy models that grant broad access to anyone inside the network, Zero Trust requires every user, device, and application to be verified continuously, regardless of location—inside or outside the organization’s perimeter.

    Key principles of Zero Trust include:

    • Least-privilege access: Users only get access to what they need—nothing more.
    • Micro-segmentation: Breaking down networks and applications into smaller zones to isolate threats.
    • Continuous verification: Access is granted based on multiple dynamic factors, including identity, device posture, location, time, and behavior.
    • Assume breach: Security models assume threats are already inside the network and act accordingly.

    In short, Zero Trust ensures that access is never implicit, and every request is assessed with context and caution.

    Seqrite ZTNA: Zero Trust in Action for Indian Banking

    To help banks and financial institutions meet RBI’s Zero Trust directive, Seqrite ZTNA (Zero Trust Network Access) offers a modern, scalable, and India-ready solution that aligns seamlessly with RBI’s vision.

    Key Capabilities of Seqrite ZTNA

    • Granular access control
      It allows access only to specific applications based on role, user identity, device health, and risk level, eliminating broad network exposure.
    • Continuous risk-based verification
      Each access request is evaluated in real time using contextual signals like location, device posture, login time, and behavior.
    • No VPN dependency
      Removes the risks of traditional VPNs that grant excessive access. Seqrite ZTNA gives just-in-time access to authorized resources.
    • Built-in analytics and audit readiness
      Detailed logs of every session help organizations meet RBI’s incident reporting and risk-based supervision requirements.
    • Easy integration with identity systems
      Works seamlessly with Azure AD, Google Workspace, and other Identity Providers to enforce secure authentication.
    • Supports hybrid and remote workforces
      Agent-based or agent-less deployment suits internal employees, third-party vendors, and remote users.

    How Seqrite ZTNA Supports RBI’s Zero Trust Mandate

    RBI’s recommendations aren’t just about better firewalls but about shifting the cybersecurity posture entirely. Seqrite ZTNA helps financial institutions adopt this shift with:

    • Risk-Based Supervision Alignment
      • Policies can be tailored based on user risk, job function, device posture, or geography.
      • Enables graded monitoring, as RBI emphasizes, with intelligent access decisions based on risk level.
    • CART and AI-Aware Defenses
      • Behavior analytics and real-time monitoring help institutions detect anomalies and conduct Continuous Assessment-Based Red Teaming (CART) simulations.
    • Uniform Incident Reporting
      • Seqrite’s detailed session logs and access histories simplify compliance with RBI’s call for standardized incident reporting frameworks.
    • Vendor Lock-In Mitigation
      • Unlike global cloud-only vendors, Seqrite ZTNA is designed with data sovereignty and local compliance in mind, offering full control to Indian enterprises.

    Sample Use Case: A Mid-Sized Regional Bank

    Challenge: The bank must secure access to its core banking applications for remote employees and third-party vendors without relying on VPNs.

    With Seqrite ZTNA:

    • Users access only assigned applications, not the entire network.
    • Device posture is verified before every session.
    • Behavior is monitored continuously to detect anomalies.
    • Detailed logs assist compliance with RBI audits.
    • Risk-based policies automatically adjust based on context (e.g., denying access from unknown locations or outdated devices).

    Result: A Zero Trust-aligned access model with reduced attack surface, better visibility, and continuous compliance readiness.

    Conclusion: Future-Proofing Banking Security with Zero Trust

    RBI’s directive isn’t just another compliance checklist; it’s a wake-up call. As India’s financial institutions expand digitally, adopting Zero Trust is essential for staying resilient, secure, and compliant.

    Seqrite ZTNA empowers banks to implement Zero Trust in a practical, scalable way aligned with national cybersecurity priorities. With granular access control, continuous monitoring, and compliance-ready visibility, Seqrite ZTNA is the right step forward in securing India’s digital financial infrastructure.



    Source link

  • use Miniprofiler instead of Stopwatch to profile code performance | Code4IT

    use Miniprofiler instead of Stopwatch to profile code performance | Code4IT


    Do you need to tune up the performance of your code? You can create some StopWatch objects and store the execution times or rely on external libraries like MiniProfiler.

    Note: of course, we’re just talking about time duration, and not about memory usage!

    How to profile code using Stopwatch

    A Stopwatch object acts as a (guess what?) stopwatch.

    You can manually make it start and stop, and keep track of the elapsed time:

    Stopwatch sw = Stopwatch.StartNew();
    DoSomeOperations(100);
    var with100 = sw.ElapsedMilliseconds;
    
    
    sw.Restart();
    DoSomeOperations(2000);
    var with2000 = sw.ElapsedMilliseconds;
    
    sw.Stop();
    
    Console.WriteLine($"With 100: {with100}ms");
    Console.WriteLine($"With 2000: {with2000}ms");
    

    It’s useful, but you have to do it manually. There’s a better choice.

    How to profile code using MiniProfiler

    A good alternative is MiniProfiler: you can create a MiniProfiler object that holds all the info related to the current code execution. You can then add some Steps, which can have a name, and even nest them.

    Finally, you can print the result using RenderPlainText.

    MiniProfiler profiler = MiniProfiler.StartNew();
    
    using (profiler.Step("With 100"))
    {
        DoSomeOperations(100);
    }
    
    
    using (profiler.Step("With 2000"))
    {
        DoSomeOperations(2000);
    }
    
    Console.WriteLine(profiler.RenderPlainText());
    

    You won’t have to stop and start Stopwatch instances anymore.

    You can even use inline steps, to profile method execution and store its return value:

    var value = profiler.Inline(() => MethodThatReturnsSomething(12), "Get something");
    

    Here I decided to print the result on the Console. You can even create HTML reports, which are quite useful when profiling websites. You can read more here, where I experimented with MiniProfiler in a .NET API project.

    Here’s an example of what you can get:

    MiniProfiler API report

    Further readings

    We’ve actually already talked about MiniProfiler in an in-depth article you can find here:

    🔗 Profiling .NET code with MiniProfiler | Code4IT

    Which, oddly, is almost more detailed than the official documentation, which you can still find here:

    🔗 MiniProfiler for .NET | MiniProfiler

    Happy coding!

    🐧



    Source link

  • How to log Correlation IDs in .NET APIs with Serilog | Code4IT

    How to log Correlation IDs in .NET APIs with Serilog | Code4IT


    APIs often call other APIs to perform operations. If an error occurs in one of them, how can you understand the context that caused that error? You can use Correlation IDs in your logs!

    Correlation IDs are values that are passed across different systems to correlate the operations performed during a “macro” operation.

    Most of the time they are passed as HTTP Headers – of course in systems that communicate via HTTP.

    In this article, we will learn how to log those Correlation IDs using Serilog, a popular library that helps handle logs in .NET applications.

    Setting up the demo dotNET project

    This article is heavily code-oriented. So, let me first describe the demo project.

    Overview of the project

    To demonstrate how to log Correlation IDs and how to correlate logs generated by different systems, I’ve created a simple solution that handles bookings for a trip.

    The “main” project, BookingSystem, fetches data from external systems by calling some HTTP endpoints; it then manipulates the data and returns an aggregate object to the caller.

    BookingSystem depends on two projects, placed within the same solution: CarRentalSystem, which returns data about the available cars in a specified date range, and HotelsSystem, which does the same for hotels.

    So, this is the data flow:

    Operations sequence diagram

    If an error occurs in any of those systems, can we understand the full story of the failed request? No. Unless we use Correlation IDs!

    Let’s see how to add them and how to log them.

    We need to propagate HTTP Headers. We could implement it from scratch, as we’ve seen in a previous article, or we could use an existing library that does it all for us.

    Of course, let’s go with the second approach.

    For every project that will propagate HTTP headers, we have to follow these steps.

    First, we need to install Microsoft.AspNetCore.HeaderPropagation: this NuGet package allows us to add the .NET classes needed to propagate HTTP headers.

    Next, we have to update the part of the project that we use to configure our application. For .NET projects with Minimal APIs, it’s the Program class.

    Here we need to add the capability to read the HTTP Context, by using

    builder.Services.AddHttpContextAccessor();
    

    As you can imagine, this is needed because, to propagate HTTP Headers, we need to know what the incoming HTTP Headers are, and they can be read from the HttpContext object.

    Next, we need to specify, as a generic behavior, which headers must be propagated. For instance, to propagate the “my-custom-correlation-id” header, you must add

    builder.Services.AddHeaderPropagation(options => options.Headers.Add("my-custom-correlation-id"));
    

    Then, for every HttpClient that will propagate those headers, you have to add AddHeaderPropagation(), like this:

    builder.Services.AddHttpClient("cars_system", c =>
        {
            c.BaseAddress = new Uri("https://localhost:7159/");
        }).AddHeaderPropagation();
    

    Finally, one last instruction that tells the application that it needs to use the Header Propagation functionality:

    app.UseHeaderPropagation();
    

    To summarize, here’s the minimal configuration to add HTTP Header propagation in a dotNET API.

    public static void Main(string[] args)
    {
        var builder = WebApplication.CreateBuilder(args);
    
        builder.Services.AddControllers();
        builder.Services.AddHttpContextAccessor();
        builder.Services.AddHeaderPropagation(options => options.Headers.Add("my-custom-correlation-id"));
    
        builder.Services.AddHttpClient("cars_system", c =>
        {
            c.BaseAddress = new Uri("https://localhost:7159/");
        }).AddHeaderPropagation();
    
        var app = builder.Build();
        app.UseHeaderPropagation();
        app.MapControllers();
        app.Run();
    }
    

    We’re almost ready to go!

    But we’re missing the central point of this article: logging an HTTP Header as a Correlation ID!

    Initializing Serilog

    We’ve already met Serilog several times in this blog, so I won’t repeat how to install it and how to define logs the best way possible.

    We will write our logs on Seq, and we’re overriding the minimum level to skip the noise generated by .NET:

    builder.Host.UseSerilog((ctx, lc) => lc
        .WriteTo.Seq("http://localhost:5341")
        .MinimumLevel.Information()
        .MinimumLevel.Override("Microsoft", LogEventLevel.Warning)
        .MinimumLevel.Override("Microsoft.AspNetCore", LogEventLevel.Warning)
        .Enrich.FromLogContext());
    

    Since you probably know what’s going on, let me go straight to the point.

    Install Serilog Enricher for Correlation IDs

    We’re gonna use a specific library to log HTTP Headers treating them as Correlation IDs. To use it, you have to install the Serilog.Enrichers.CorrelationId package available on NuGet.

    Therefore, you can simply run

    dotnet add package Serilog.Enrichers.CorrelationId
    

    in every .NET project that will use this functionality.

    Once we have that NuGet package ready, we can add its functionality to our logger by adding this line:

    .Enrich.WithCorrelationIdHeader("my-custom-correlation-id")
    

    This simple line tells Serilog that, whenever it sees an HTTP Header named “my-custom-correlation-id”, it should log it as a Correlation ID.
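
    Putting the pieces together, the full logger configuration might look like this (a minimal sketch, reusing the Seq URL and header name from above):

    using Serilog;
    using Serilog.Events;

    builder.Host.UseSerilog((ctx, lc) => lc
        .WriteTo.Seq("http://localhost:5341")
        .MinimumLevel.Information()
        .MinimumLevel.Override("Microsoft", LogEventLevel.Warning)
        .MinimumLevel.Override("Microsoft.AspNetCore", LogEventLevel.Warning)
        // log the incoming HTTP Header as the CorrelationId property
        .Enrich.WithCorrelationIdHeader("my-custom-correlation-id")
        .Enrich.FromLogContext());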

    Run it all together

    Now we have everything in place – it’s time to run it!

    We have to run all three services at the same time (you can do it with Visual Studio, or run them separately from the command line), and we need to have Seq installed on our local machine.

    You will see 3 instances of Swagger, and each instance is running under a different port.

    Swagger pages of our systems

    Once we have all three applications up and running, we can call the /Bookings endpoint, passing it a date range and an HTTP Header with key “my-custom-correlation-id” and value “123” (or whatever we want).

    How to use HTTP Headers with Postman

    If everything worked as expected, we can open Seq and see all the logs we’ve written in our applications:

    All logs in SEQ

    Open one of them and have a look at the attributes of the logs: you will see a CorrelationId field with the value set to “123”.

    Our HTTP Header is now treated as Correlation ID

    Now, to better demonstrate how it works, call the endpoint again, but this time set “789” as my-custom-correlation-id, and specify a different date range. You should be able to see another set of logs generated by this second call.

    You can now apply filters to see which logs are related to a specific Correlation ID: open one log, click on the tick button and select “Find”.

    Filter button on SEQ

    You will then see all and only logs that were generated during the call with header my-custom-correlation-id set to “789”.

    List of all logs related to a specific Correlation ID

    Further readings

    That’s it. With just a few lines of code, you can dramatically improve your logging strategy.

    You can download and run the whole demo here:

    🔗 LogCorrelationId demo | GitHub

    To run this project you have to install both Serilog and Seq. You can do that by following this step-by-step guide:

    🔗 Logging with Serilog and Seq | Code4IT

    For this article, we’ve used the Microsoft.AspNetCore.HeaderPropagation package, which is ready to use. Are you interested in building your own solution – or, at least, learning how you can do that?

    🔗 How to propagate HTTP Headers (and Correlation IDs) using HttpClients in C# | Code4IT

    Lastly, why not use Serilog’s Scopes? And what are they? Check it out here:

    🔗 How to improve Serilog logging in .NET 6 by using Scopes | Code4IT

    Wrapping up

    This article concludes a sort of imaginary path that taught us how to use Serilog, how to correlate different logs within the same application using Scopes, and how to correlate logs from different services using Correlation IDs.

    Using these capabilities, you will be able to write logs that can help you understand the context in which a specific log occurred, thus helping you fix errors more efficiently.

    This article first appeared on Code4IT

    Happy coding!

    🐧



    Source link

  • throw exceptions instead of returning null when there is no fallback | Code4IT

    throw exceptions instead of returning null when there is no fallback | Code4IT


    In case of unmanageable error, should you return null or throw exceptions?


    When you don’t have any fallback operation to manage null values (e.g., a retry pattern), you should throw an exception instead of returning null.

    You will clean up your code and make sure that, if something cannot be fixed, it gets caught as soon as possible.

    Don’t return null or false

    Returning null hurts the readability of your code. The same goes for returning a boolean to signal whether an operation succeeded. And on top of that, you still have to catch any other exceptions.

    Take this example:

    bool SaveOnFileSystem(ApiItem item)
    {
        // save on file system
        return false;
    }
    
    ApiItem GetItemFromAPI(string apiId)
    {
        var httpResponse = GetItem(apiId);
        if (httpResponse.StatusCode == 200)
        {
            return httpResponse.Content;
        }
        else
        {
            return null;
        }
    }
    
    DbItem GetItemFromDB()
    {
        // returns the item or null
        return null;
    }
    

    If all those methods complete successfully, they return an object (DbItem, ApiItem, or true); if they fail, they return null or false.

    How can you consume those methods?

    void Main()
    {
        var itemFromDB = GetItemFromDB();
        if (itemFromDB != null)
        {
            var itemFromAPI = GetItemFromAPI(itemFromDB.ApiId);
    
            if (itemFromAPI != null)
            {
            bool successfullySaved = SaveOnFileSystem(itemFromAPI);
    
                if (successfullySaved)
                    Console.WriteLine("Saved");
            }
        }
        Console.WriteLine("Cannot save the item");
    }
    

    Note that there is nothing we can do in case something fails. So, do we really need all that nesting? We can do better!

    Throw Exceptions instead

    Let’s throw exceptions instead:

    void SaveOnFileSystem(ApiItem item)
    {
        // save on file system
        throw new FileSystemException("Cannot save item on file system");
    }
    
    
    ApiItem GetItemFromAPI(string apiId)
    {
        var httpResponse = GetItem(apiId);
        if (httpResponse.StatusCode == 200)
        {
            return httpResponse.Content;
        }
        else
        {
            throw new ApiException("Cannot download item");
        }
    }
    
    
    DbItem GetItemFromDB()
    {
        // returns the item or throws an exception
        throw new DbException("item not found");
    }
    

    Here, each method can end in one of two ways: it either completes successfully or it throws an exception whose type tells us which operation failed.

    We can then consume the methods in this way:

    void Main()
    {
        try
        {
            var itemFromDB = GetItemFromDB();
            var itemFromAPI = GetItemFromAPI(itemFromDB.ApiId);
            SaveOnFileSystem(itemFromAPI);
            Console.WriteLine("Saved");
        }
        catch(Exception ex)
        {
            Console.WriteLine("Cannot save the item");
        }
    
    }
    

    Now the reader does not have to wade through the nested operations: the flow is linear and immediate.

    Conclusion

    Remember, this way of writing code should be used only when there is nothing you can do if an operation fails. Use exceptions carefully!

    Now, a question for you: if you need more statuses as a return type of those methods (so, not only “success” and “fail”, but also some other status like “partially succeeded”), how would you transform that code?
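
    One possible direction – just a sketch, and certainly not the only answer – is to return a small result object carrying a status enum (the type and member names below are made up):

    enum SaveStatus { Success, PartiallySucceeded, Failed }

    record SaveResult(SaveStatus Status, string? Detail = null);

    SaveResult SaveOnFileSystem(ApiItem item)
    {
        // save on file system; report how far we got instead of throwing
        return new SaveResult(SaveStatus.PartiallySucceeded, "metadata saved, content missing");
    }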

    Happy coding!

    🐧



    Source link

  • The 2 secret endpoints I create in my .NET APIs | Code4IT

    The 2 secret endpoints I create in my .NET APIs | Code4IT


    In this article, I will show you two simple tricks that help me understand the deployment status of my .NET APIs


    When I create Web APIs with .NET I usually add two “secret” endpoints that I can use to double-check the status of the deployment.

    I generally expose two endpoints: one that shows me some info about the current environment, and another one that lists all the application settings defined after the deployment.

    In this article, we will see how to create those two endpoints, how to update the values when building the application, and how to hide those endpoints.

    Project setup

    For this article, I will use a simple .NET 6 API project. We will use Minimal APIs, and we will use the appsettings.json file to load the application’s configuration values.

    Since we are using Minimal APIs, you will have the endpoints defined in the Main method within the Program class.

    To expose an endpoint that accepts the GET HTTP method, you can write

    app.MapGet("/say-hello", async context =>
    {
        await context.Response.WriteAsync("Hello, everybody!");
    });
    

    That’s all you need to know about .NET Minimal APIs for the sake of this article. Let’s move to the main topics ⏩

    How to show environment info in .NET APIs

    Let’s say that your code’s behavior depends on the current Environment. Typical examples: if you’re running in production, you may want to hide some endpoints that are visible in the other environments, or you may use a different error page when an unhandled exception is thrown.

    Once the application has been deployed, how can you retrieve the info about the running environment?

    Here we go:

    app.MapGet("/env", async context =>
    {
        IWebHostEnvironment? hostEnvironment = context.RequestServices.GetRequiredService<IWebHostEnvironment>();
        var thisEnv = new
        {
            ApplicationName = hostEnvironment.ApplicationName,
            Environment = hostEnvironment.EnvironmentName,
        };
    
        var jsonSerializerOptions = new JsonSerializerOptions { WriteIndented = true };
        await context.Response.WriteAsJsonAsync(thisEnv, jsonSerializerOptions);
    });
    

    This endpoint is quite simple.

    The context variable, which is of type HttpContext, exposes some properties. Among them, the RequestServices property allows us to retrieve the services that have been injected when starting up the application. We can then use GetRequiredService to get a service by its type and store it into a variable.

    💡 GetRequiredService throws an exception if the service cannot be found. On the contrary, GetService returns null. I usually prefer GetRequiredService but, as always, it depends on how you’re using it.
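
    A quick sketch of the difference (the service types are just examples):

    // GetRequiredService: throws an InvalidOperationException if the service is not registered
    var config = context.RequestServices.GetRequiredService<IConfiguration>();

    // GetService: returns null if the service is not registered, so you must check
    IWebHostEnvironment? hostEnv = context.RequestServices.GetService<IWebHostEnvironment>();
    if (hostEnv is null)
    {
        // handle the missing service yourself
    }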

    Then, we create an anonymous object with the information we’re interested in and return it as indented JSON.

    It’s time to run it! Open a terminal, navigate to the API project folder (in my case, SecretEndpoint), and run dotnet run. The application will compile and start; you can then navigate to /env and see the default result:

    The JSON result of the env endpoint
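
    On a fresh project, the output looks roughly like this (the values depend on your assembly name and environment; here they match the SecretEndpoint project running with the default environment):

    {
      "ApplicationName": "SecretEndpoint",
      "Environment": "Development"
    }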

    How to change the Environment value

    While the ApplicationName does not change – it is the name of the running assembly, so any other value would stop your application from running – you can (and may want to) change the Environment value.

    When running the application using the command line, you can use the --environment flag to specify the Environment value.

    So, running

    dotnet run --environment MySplendidCustomEnvironment
    

    will produce this result:

    MySplendidCustomEnvironment is now set as Environment

    There’s another way to set the environment: update the launchSettings.json and run the application using Visual Studio.

    To do that, open the launchSettings.json file and update the profile you are using by specifying the Environment name. In my case, the current profile section will be something like this:

    "profiles": {
    "SecretEntpoints": {
        "commandName": "Project",
        "dotnetRunMessages": true,
        "launchBrowser": true,
        "launchUrl": "swagger",
        "applicationUrl": "https://localhost:7218;http://localhost:5218",
        "environmentVariables":
            {
                "ASPNETCORE_ENVIRONMENT": "EnvByProfile"
            }
        },
    }
    

    As you can see, the ASPNETCORE_ENVIRONMENT variable is set to EnvByProfile.

    If you run the application using Visual Studio using that profile you will see the following result:

    EnvByProfile as defined in the launchSettings file

    How to list all the configurations in .NET APIs

    In my current company, we deploy applications using CI/CD pipelines.

    This means that the final configuration values come from the sum of three sources:

    • the project’s appsettings file
    • the release pipeline
    • the deployment environment

    You can easily understand how difficult it is to debug those applications without knowing the exact values for the configurations. That’s why I came up with these endpoints.

    To print all the configurations, we’re gonna use an approach similar to the one we’ve used in the previous example.

    The endpoint will look like this:

    app.MapGet("/conf", async context =>
    {
        IConfiguration? allConfig = context.RequestServices.GetRequiredService<IConfiguration>();
    
        IEnumerable<KeyValuePair<string, string>> configKv = allConfig.AsEnumerable();
    
        var jsonSerializerOptions = new JsonSerializerOptions { WriteIndented = true };
        await context.Response.WriteAsJsonAsync(configKv, jsonSerializerOptions);
    });
    

    What’s going on? We are retrieving the IConfiguration object, which contains all the configurations loaded at startup; then, we’re listing all the configurations as key-value pairs, and finally, we’re returning the list to the client.

    As an example, here’s my current appsettings.json file:

    {
      "ApiService": {
        "BaseUrl": "something",
        "MaxPage": 5,
        "NestedValues": {
          "Skip": 10,
          "Limit": 56
        }
      },
      "MyName": "Davide"
    }
    

    When I run the application and call the /conf endpoint, I will see the following result:

    Configurations printed as key-value pairs

    Notice how the structure of the configuration values changes. The value

    {
      "ApiService": {
        "NestedValues": {
          "Limit": 56
        }
      }
    }
    

    is transformed into

    {
        "Key": "ApiService:NestedValues:Limit",
        "Value": "56"
    },
    

    That endpoint shows a lot more than you can imagine: take some time to have a look at those configurations – you’ll thank me later!

    How to change the value of a variable

    There are many ways to set the value of your variables.

    The most common one is by creating an environment-specific appsettings file that overrides some values.

    So, if your environment is called “EnvByProfile”, as we’ve defined in the previous example, the file will be named appsettings.EnvByProfile.json.
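
    For example, an appsettings.EnvByProfile.json like this (hypothetical values) would override MaxPage while leaving every other key from appsettings.json untouched:

    {
      "ApiService": {
        "MaxPage": 99
      }
    }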

    There are actually some other ways to override application variables: we will learn them in the next article, so stay tuned! 😎

    3 ways to hide your endpoints from malicious eyes

    Ok then, we have our endpoints up and running, but they are visible to anyone who correctly guesses their addresses. And you don’t want to expose such sensitive info to malicious eyes, right?

    There are at least three simple ways to hide those endpoints:

    • Use a non-guessable endpoint: an existing word, such as “housekeeper”, random letters, such as “lkfrmlvkpeo”, or a GUID, such as “E8E9F141-6458-416E-8412-BCC1B43CCB24”;
    • Specify a key on the query string: if that key is not found or has an invalid value, return a 404 Not Found result;
    • Use an HTTP header and, again, return 404 if it is not valid.

    Both query strings and HTTP headers are available in the HttpContext object injected in the route definition.
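
    As a minimal sketch of the second option (the endpoint path and key below are made up; in a real app, read the expected key from configuration, not from a hard-coded string):

    app.MapGet("/conf-lkfrmlvkpeo", async context =>
    {
        // hypothetical secret key check: anything else gets a plain 404
        if (context.Request.Query["key"] != "my-secret-value")
        {
            context.Response.StatusCode = 404;
            return;
        }

        await context.Response.WriteAsync("Only visible with the right key");
    });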

    Now it’s your turn to find an appropriate way to hide these endpoints. How would you do that? Drop a comment below 📩

    Edit 2022-10-10: I thought it was quite obvious, but apparently it is not: these endpoints expose critical information about your applications and your infrastructure, so you should not expose them unless it is strictly necessary! If you have strong authentication in place, use it to secure those endpoints. If you don’t, hide those endpoints the best you can, and show only necessary data, and not everything. Strip out sensitive content. And, as soon as you don’t need that info anymore, remove those endpoints (comment them out or generate them only if a particular flag is set at compilation time). Another possible way is by using feature flags. In the end, take that example with a grain of salt: learn that you can expose them, but keep in mind that you should not expose them.

    Further readings

    We’ve used a quite new way to build and develop APIs with .NET, called “Minimal APIs”. You can read more here:

    🔗 Minimal APIs | Microsoft Learn

    If you are not using Minimal APIs, you still might want to create such endpoints. We’ve talked about accessing the HttpContext to get info about the HTTP headers and query string. When using Controllers, accessing the HttpContext requires some more steps. Here’s an article that you may find interesting:

    🔗 How to access the HttpContext in .NET API | Code4IT

    This article first appeared on Code4IT

    Wrapping up

    In this article, we’ve seen how two endpoints can help us with understanding the status of the deployments of our applications.

    It’s a simple trick that you can consider adding to your projects.

    Do you have some utility endpoints?

    Happy coding!

    🐧



    Source link

  • Designer Spotlight: Julie Marting | Codrops

    Designer Spotlight: Julie Marting | Codrops


    Hey there. My name is Julie Marting, and I’m a Paris-based designer. Focusing on concept, interactivity, and 3D, I’ve been working on these subjects at Hervé Studio for a few years now, with occasional freelance projects when something cool comes my way.

    The types of projects I work on revolve around interactive and immersive experiences. From a landing page with interactive elements to a virtual immersive exhibition, or interactive user journeys within applications, my goal is to enhance the user experience by evoking emotions that encourage them to explore or use a service.

    Featured work

    Madbox

    Madbox is a mobile game publisher creating fun, small games with simple gameplay that anyone can enjoy. Our mission was to create a website that reflected their image: playful, innovative, full of references and surprises that delight users, and also one that would make people want to join their team. (And obviously, with a mobile first approach.)

    If you are curious, you will be intrigued by the hot-air balloon traveling through the hero section: it takes you on a test to see if you would be a good fit to join the Madbox team.

    Personal Notes

    This was the first project I worked on when I joined Hervé in 2021, and it’s still one of my favorites. We had so much fun coming up with concepts, interactions, animations, and easter eggs to add to this joyful design. It was a pleasure working with the clients, and a great collaboration with the developers, who were very proactive in making the experience as good as possible.

    View it online

    Fruitz

    Fruitz is a French dating app that uses fruits to categorize what you’re looking for: one-night stands, casual matches, serious relationships… While the service is only available through the app, the clients still wanted a playful and interactive landing page for curious visitors. That’s where our adventure began!

    To echo the tags and labels used on dating apps, we chose to explore an artistic direction centered around stickers. This also allowed us to highlight the puns that Fruitz loves to use in its communication campaigns.

    Personal Notes

    This project was a great opportunity to develop a new style for Fruitz’s communication, based on their brand guidelines but with some freedom to explore playful new visuals. It’s also always interesting to come up with a concept for a simple landing page with limited content. It has to catch the eye and delight users, without being “too much”.

    View it online

    LVMH, The Showroom

    For the Vivatech event, the LVMH group needed to create a virtual showcase of its brands’ latest innovations. On this occasion, I teamed up with Cosmic Shelter to create “The Showroom”, an immersive experience where you can discover the stories and the technological advances of the best Maisons, through an imaginary world.

    Personal Notes

    Aside from the art direction, which I really enjoyed, I found it very interesting to work as a freelancer for another digital agency. Although we share similar processes and methods, everyone works differently, so it’s always instructive to exchange ideas. Working as a freelancer on a specific part of a project (in this case, 3D art direction) and working as a designer within a studio with multiple roles on the same project are two very different experiences, both of which are incredibly enriching to explore.

    365, A Year Of Cartier

    Every year, Cartier publishes a magazine showcasing their key actions over the past 12 months. For two years in a row, they asked us at Hervé Studio to create a digital version of the magazine.

    The goal was to bring together 29 articles across 6 chapters around a central navigation system, ensuring that users wouldn’t miss anything, and especially the 6 immersive articles we developed further.

    Personal Notes

    The challenge on this project was the tight deadline in relation to the complexity of the experiments and the creative intentions we wanted to bring to them. We were a small team, so we had to stay organized and work quickly, but in the end it was a real pleasure to see all these experiments come to life.

    View it online

    Lacoste

    For the end-of-year celebrations, Lacoste wanted to promote the customization feature of their polo shirts. We were asked at Hervé Studio to design a short video highlighting this feature and its various possibilities.

    Personal notes

    This really cool project, despite its short deadline, was a great opportunity to explore the physics effects in Cinema 4D, which I wasn’t very familiar with. It was important to develop a storytelling approach around the creation of a unique polo, and I’m proud of the result we managed to achieve for this two-week project.

    Background & Career highlights

    As interactive design is a recent and niche field, I began my studies in graphic design without knowing it even existed. It was at Gobelins that I discovered and fell in love with this specialty, and I went on to enroll in a master’s degree in interactive design. The main strength of this school was the sandwich course, which allowed me to start my first job at a very young age. After working for a bespoke industrial design studio, a ready-to-wear brand, and a digital agency, I finally joined Hervé Studio over four years ago.

    We were a small team with a human spirit, which gave me the opportunity to quickly take on responsibilities for projects of various sizes.

    We’ve grown and evolved, winning over new clients and new types of projects. We were eventually invited to give a talk at the Paris Design Meetup organized by Algolia and Jitter, and later at the OFFF Vienna festival. There, we shared our experience on the following topic: “WebGL for Interactivity: From Concept to Production”. The idea was to demystify the use of this technology, highlight the possibilities it opens up for designers, and explain our workflow and collaboration with developers to move forward together on a shared project.

    Talk at the OFFF Vienna Festival with the co-founders of Hervé Studio: Romain Briaux and Vincent Jouty

    Design Philosophy

    I am convinced that in this overwhelming world, design can bring us meaning and calm. In my approach, I see it as a way to transport people into a world beyond the ordinary. Immersing users, capturing their attention, and offering them a moment of escape while conveying a message is what I aspire to.

    Injecting meaning into projects is a leitmotif. It’s obviously not enough to create something beautiful; it’s about creating something tailor-made that holds meaning for users, clients, and ourselves.

    Interactive design enables us to place the user at the center of the experience, making them an active participant rather than just a reader of information or a potential consumer. Moreover, interactive design can sometimes evoke emotions and create lasting memories. For these reasons, this specialty feels like a powerful medium for expression and exchange, because that’s what it’s all about.

    Tools & Techniques

    • A pencil and some paper: inherent to creation
    • Figma: to gather and create
    • Cinema 4D + Octane render: to let the magic happen

    But I would say the best tool is communication, within the team and with developers, to understand how we can work better together, which techniques to use, and how to achieve a smoother workflow and a stunning result.

    Inspiration

    We’re lucky to have many platforms to find inspiration today, but I would say my main source of inspiration comes from life itself, everything that crosses our path at any given moment. Staying open to what surrounds us, observing, and focusing our attention on things that catch our eye or raise questions. It can be visual (static or moving), a sensation, a feeling, a moment, an action, a concept, a sentence, an idea, anything we’re sensitive to.

    And if we get inspired by something and need to take some notes or sketch it, no matter how accurate the result is, the important thing is to catch the inspiration and explore all around. This is why I like to do some photography in my spare time, or other accessible crafts like painting on objects, nude drawing sessions, or creating little jewels out of nowhere. These activities are very invigorating and allow us to take a break from our hectic lives.

    Future goals

    My main goal is to finally start working on my portfolio. Like many designers, I’ve always postponed this moment, relying on platforms like Behance to showcase my projects. But there comes a time when it’s important to have an online presence, a professional storefront that evolves with us over time.

    Final Thoughts

    Don’t pay too much attention to negative minds. Believe in yourself, stick to what you like, explore and try without worrying about rules or expectations. Make mistakes and don’t blame yourself for them. On the contrary, failures can sometimes lead to good surprises, or at least valuable lessons. Above all, listen to yourself and find the right balance between creating and taking time to breathe. Enjoying yourself is essential.

    Contact

    Thank you very much for reading, and feel free to reach out if you’re interested in anything – I would be happy to discuss!

    Instagram
    LinkedIn
    X (Twitter)
    Website: stay tuned 😎





    Source link

  • DRY or not DRY? | Code4IT

    DRY or not DRY? | Code4IT


    DRY is a fundamental principle in software development. Should you apply it blindly?


    You’ve probably heard about the DRY principle: Don’t Repeat Yourself.

    Does it really make sense? Not always.

    When to DRY

    Yes, you should not repeat yourself if there is some logic that you can reuse. Take this simple example:

    public class PageExistingService
    {
        public async Task<string> GetHomepage()
        {
            string url = "https://www.code4it.dev/";
    
            var httpClient = new HttpClient();
            var result = await httpClient.GetAsync(url);
    
            if (result.IsSuccessStatusCode)
            {
                return await result.Content.ReadAsStringAsync();
            }
            return "";
        }
    
        public async Task<string> GetAboutMePage()
        {
            string url = "https://www.code4it.dev/about-me";
    
            var httpClient = new HttpClient();
            var result = await httpClient.GetAsync(url);
    
            if (result.IsSuccessStatusCode)
            {
                return await result.Content.ReadAsStringAsync();
            }
            return "";
        }
    }
    

    As you can see, the two methods are almost identical: the only difference is the page that gets downloaded.

    psst: that’s not the best way to use an HttpClient! Have a look at this article

    Now, what happens if an exception is thrown? You’d better add a try-catch to handle those errors. But since the logic is repeated, you would have to add the same handling to both methods.

    That’s one of the reasons you should not repeat yourself: when you need to update a shared piece of functionality, you have to do it in every place it is used.

    You can then refactor these methods in this way:

    public class PageExistingService
    {
        public Task<string> GetHomepage() => GetPage("https://www.code4it.dev/");
    
        public Task<string> GetAboutMePage() => GetPage("https://www.code4it.dev/about-me");
    
    
        private async Task<string> GetPage(string url)
        {
    
            var httpClient = new HttpClient();
            var result = await httpClient.GetAsync(url);
    
            if (result.IsSuccessStatusCode)
            {
                return await result.Content.ReadAsStringAsync();
            }
            return "";
        }
    
    }
    

    Now both GetHomepage and GetAboutMePage use the same logic defined in the GetPage method: you can then add the error handling only in one place.
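
    For instance, here’s a sketch of that single-place error handling (keeping the original contract of returning an empty string on failure):

    private async Task<string> GetPage(string url)
    {
        try
        {
            var httpClient = new HttpClient();
            var result = await httpClient.GetAsync(url);

            if (result.IsSuccessStatusCode)
            {
                return await result.Content.ReadAsStringAsync();
            }
            return "";
        }
        catch (HttpRequestException)
        {
            // both GetHomepage and GetAboutMePage now benefit from this handling
            return "";
        }
    }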

    When NOT to DRY

    This doesn’t mean that you should refactor everything without thinking about what the code actually means.

    You should not follow the DRY principle when

    • the components are not referring to the same context
    • the components are expected to evolve in different ways

    The two points are strictly related.
    A simple example is separating the ViewModels and the Database Models.

    Say that you have a CRUD application that handles Users.

    Both the View and the DB are handling Users, but in different ways and with different purposes.

    We might have a ViewModelUser class used by the view (or returned from the APIs, if you prefer)

    class ViewModelUser
    {
        public string Name { get; set; }
        public string LastName { get; set; }
        public DateTime RegistrationDate { get; set; }
    }
    

    and a DbUser class, similar to ViewModelUser, but which also handles the user Id.

    class DbUser
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public string LastName { get; set; }
        public DateTime RegistrationDate { get; set; }
    }
    

    If you blindly follow the DRY principle, you might be tempted to use only the DbUser class, maybe renaming it User, and just use the necessary fields on the View.

    Another step could be to create a base class and have both models inherit from that class:

    public abstract class User
    {
        public string Name { get; set; }
        public string LastName { get; set; }
        public DateTime RegistrationDate { get; set; }
    }
    
    class ViewModelUser : User
    {
    }
    
    class DbUser : User
    {
        public int Id { get; set; }
    }
    

    Sound familiar?

    Well, in this case, ViewModelUser and DbUser are used in different contexts and with different purposes: showing the user data on screen and saving the user on DB.

    What if, for some reason, you must update the RegistrationDate type from DateTime to string? That change will impact both the ViewModel and the DB.

    There are many other reasons this way of handling models can bring more trouble than benefits. Can you find some? Drop a comment below 📧

    The solution is quite simple: duplicate your code.

    In that way, you have the freedom to add and remove fields, add validation, expose behavior… everything that would’ve been a problem to do with the previous approach.

    Of course, you will need to map between the two data types when necessary: luckily, it’s a trivial task, and there are many libraries that can do it for you. Personally, I prefer having 100% control over those mappings, to keep the flexibility for changes and custom behavior.
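
    A manual mapping is as simple as this (a sketch based on the classes above; the method name is made up):

    static ViewModelUser ToViewModel(DbUser dbUser)
    {
        // the Id stays in the DB layer: the view doesn't need it
        return new ViewModelUser
        {
            Name = dbUser.Name,
            LastName = dbUser.LastName,
            RegistrationDate = dbUser.RegistrationDate,
        };
    }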

    Further readings

    DRY implies the idea of Duplication. But duplication is not just “having the same lines of code over and over”. There’s more:

    🔗 Clean Code Tip: Avoid subtle duplication of code and logic | Code4IT

    As I anticipated, the way I used the HttpClient is not optimal. There’s a better way:

    🔗 C# Tip: use IHttpClientFactory to generate HttpClient instances | Code4IT

    This article first appeared on Code4IT

    Wrapping up

    DRY is a principle, not a law written in stone. Don’t blindly apply it.

    Well, you should never apply anything blindly: always consider the current and the future context.

    Happy coding!
    🐧



    Source link