
  • how I automated my blogging workflow with GitHub, PowerShell, and Azure | Code4IT



    After 100 articles, I’ve found some neat ways to automate my blogging workflow. I will share my experience and the tools I use from the very beginning to the very end.


    Just a second! 🫷
    If you are here, it means that you are a software developer.
So, you know that storage, networking, and domain management have a cost.

    If you want to support this blog, please ensure that you have disabled the adblocker for this site.
I configured Google AdSense to show as few ads as possible – I don’t want to bother you with lots of ads, but I still need some to pay for the resources for my site.

    Thank you for your understanding.
    Davide

    This is my 100th article 🥳 To celebrate it, I want to share with you the full process I use for writing and publishing articles.

    In this article I will share all the automation and tools I use for writing, starting from the moment an idea for an article pops up in my mind to what happens weeks after an article has been published.

    I hope to give you some ideas to speed up your publishing process. Of course, I’m open to suggestions to improve my own flow: perhaps (well, certainly), you use better tools and processes, so feel free to share them.

    Introducing my blog architecture

    To better understand what’s going on, I need a very brief overview of the architecture of my blog.

    It is written in Gatsby, a framework based on ReactJS that, in short, allows you to transform Markdown files into blog posts (it does many other things, but they are not important for the purpose of this article).

So, all my blog is stored in a private GitHub repository. Every time I push changes to the master branch, a new deployment is triggered, and within a few minutes I can see the changes on my blog.

As I said, I use Gatsby. But the key point here is that my blog is stored in a GitHub repo: this means that everything you’ll read here is valid for any Git-based static site generator, such as Gatsby, Hugo, Next.js, and Jekyll.

    Now that you know some general aspects, it’s time to deep dive into my writing process.

    Before writing: organizing ideas with GitHub

    My central source, as you might have already understood, is GitHub.

    There, I write all my notes and keep track of the status of my articles.

    Everything is quite well organized, and with the support of some automation, I can speed up my publishing process.

GitHub Projects to track the status of the articles

GitHub Projects is the part of GitHub that lets you organize GitHub Issues and track their status.

    GitHub projects

    I’ve created 2 GitHub Projects: one for the main articles (like this one), and one for my C# and Clean Code Tips.

    In this way, I can use different columns and have more flexibility when handling the status of the tasks.

    GitHub issues templates

    As I said, to write my notes I use GitHub issues.

When I add a new Issue, the first thing is to define which type of article I want to write. And, since many weeks or months can pass between when I come up with the idea for an article and when I start writing it, I need to organize my ideas in a structured way.

    To do that, I use GitHub templates. When I create a new Issue, I choose which kind of article I’m going to write.

    The list of GitHub issues templates I use

    Based on the layout, I can add different info. For instance, when I want to write a new “main” article, I see this form

    Article creation form as generated by a template

    which is prepopulated with some fields:

    • Title: with a placeholder ([Article] )
    • Content: with some sections (the titles, translated from Italian, mean Topics, Links, General notes)
    • Labels: I automatically assign the Article label to the issue (you’ll see later why I do that)

    How can you create GitHub issue templates? All you need is a Markdown file under the .github/ISSUE_TEMPLATE folder with content similar to this one.

    ---
    name: New article
    about: New blog article
    title: "[Article] - "
    labels: Article
    assignees: bellons91
    ---
    
    ## Argomenti
    
    ## Link
    
    ## Appunti vari
    

    And you’re good to go!

GitHub Action to assign issues to a project

    Now I have GitHub Projects and different GitHub Issues Templates. How can I join the different parts? Well, with GitHub Actions!

    With GitHub Actions, you can automate almost everything that happens in GitHub (and outside) using YAML files.

    So, here’s mine:

    Auto-assign to project GitHub Action

    For better readability, you can find the Gist here.

    This action looks for opened and labeled issues and pull requests, and based on the value of the label it assigns the element to the correct project.
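The Gist itself isn’t reproduced on this page, but a workflow along these lines does the job (a sketch using the actions/add-to-project action; the project URL, secret name, and version are illustrative, and the author’s actual Gist may differ):

```yaml
# .github/workflows/assign-to-project.yml (sketch; names and versions are illustrative)
name: Assign issues to project
on:
  issues:
    types: [opened, labeled]
jobs:
  add-to-project:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/add-to-project@v1.0.2
        with:
          # Only issues carrying the Article label end up on the Articles board
          labeled: Article
          project-url: https://github.com/users/<your-username>/projects/1
          github-token: ${{ secrets.ADD_TO_PROJECT_PAT }}
```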

In this way, after I choose a template, fill in the fields, and add additional labels (like C#, Docker, and so on), I can see my newly created issue directly in the Articles board. Neat 😎

    Writing

Now it’s time to write!

    As I said, I’m using Gatsby, so all my articles are stored in a GitHub repository and written in Markdown.

    For every article I write, I use a separate git branch: in this way, I’m free to update the content already online (in case of a typo) without publishing my drafts.

    But, of course, I automated it! 😎

PowerShell script to scaffold a new article

    Every article lives in its /content/posts/{year}/{folder-name}/article.md file. And they all have a cover image in a file named cover.png.

    Also, every MD file begins with a Frontmatter section, like this:

    ---
    title: "How I automated my publishing flow with Gatsby, GitHub, PowerShell and Azure"
    path: "/blog/automate-articles-creations-github-powershell-azure"
    tags: ["MainArticle"]
    featuredImage: "./cover.png"
    excerpt: "a description for 072-how-i-create-articles"
created: 2021-11-20
updated: 2021-11-20
    ---
    

But, you know, I was tired of creating everything from scratch. So I wrote a PowerShell script to do everything for me.

    PowerShell script to scaffold a new article

    You can find the code in this Gist.

    This script performs several actions:

    1. Switches to the Master branch and downloads the latest updates
    2. Asks for the article slug that will be used to create the folder name
    3. Creates a new branch using the article slug as a name
    4. Creates a new folder that will contain all the files I will be using for my article (markdown content and images)
    5. Creates the article file with the Frontmatter part populated with dummy values
    6. Copies a placeholder image into this folder; this image will be the temporary cover image

    In this way, with a single command, I can scaffold a new article with all the files I need to get started.
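The full script is in the Gist linked above; a minimal sketch of those six steps could look like this (paths and the placeholder image name are assumptions for illustration, not the author’s exact code):

```powershell
# Sketch of the scaffolding steps (the actual Gist may differ;
# paths and the placeholder image name are illustrative).
git checkout master
git pull

$slug = Read-Host "Article slug"
git checkout -b $slug

$folder = "./content/posts/$((Get-Date).Year)/$slug"
New-Item -ItemType Directory -Path $folder | Out-Null

# Frontmatter populated with dummy values, to be filled in later
$frontmatter = @"
---
title: "TODO"
path: "/blog/$slug"
tags: ["MainArticle"]
featuredImage: "./cover.png"
excerpt: "a description for $slug"
created: $(Get-Date -Format "yyyy-MM-dd")
updated: $(Get-Date -Format "yyyy-MM-dd")
---
"@
Set-Content -Path "$folder/article.md" -Value $frontmatter

# Copy a temporary cover image into the new folder
Copy-Item "./placeholder-cover.png" -Destination "$folder/cover.png"
```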

Ok, but how can I run a PowerShell script in a Gatsby repository?

I added this script to the package.json file:

    "create-article": "@powershell -NoProfile -ExecutionPolicy Unrestricted -Command ./article-creator.ps1",
    

    where article-creator.ps1 is the name of the file that contains the script.

    Now I can simply run npm run create-article to have a new empty article in a new branch, already updated with everything published in the Master branch.

    Markdown preview on VS Code

I use Visual Studio Code to write my articles: I like it because it’s quite fast and has lots of functionality for writing in Markdown (you can pick your favorites in the Extensions store).

One of my favorites is the preview on the side. To see the result of your Markdown in a side panel, press CTRL+SHIFT+P and select Open Preview to the Side.

    Here’s what I can see right now while I’m writing:

    Markdown preview on the side with VS Code

    Grammar check with Grammarly

Then, it’s time for a grammar check. I use Grammarly, which helps me fix lots of errors (well, lately, only a few: it means I’ve improved a lot! 😎).

    I copy the Markdown in their online editor, fix the issues, and copy it back into my repo.

    Fun fact: the online editor recognizes that you’re using Markdown and automatically checks only the actual text, ignoring all the symbols you use in Markdown (like brackets).

    Unprofessional, but fun, cover images

    One of the tasks I like the most is creating my cover images.

I don’t use stock images; I prefer less professional but more original cover images.

    Some of the cover images for my articles

    You can see all of them here.

    Creating and scheduling PR on GitHub with Templates and Actions

Now that my article is complete, I can mark it as ready to be scheduled.

    To do that, I open a Pull Request to the Master Branch, and, again, add some kind of automation!

    I have created a PR template in an MD file, which I use to create a draft of the PR content.

    Pull Request form on GitHub

    In this way, I can define which task (so, which article) is related to this PR, using the “Closes” formula (“Closes #111174” means that I’m closing the Issue with ID 111174).

Also, I can define when this PR will be merged into Master, using the /schedule tag.

It works because I have integrated a GitHub Action, merge-schedule, into my workflow; it reads the date from that field to determine when the PR must be merged.

    YAML of Merge Schedule action

    So, every Tuesday at 8 AM, this action runs to check if there are any PRs that can be merged. If so, the PR will be merged into master, and the CI/CD pipeline builds the site and publishes the new content.
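Wiring that up looks roughly like this (a sketch based on the merge-schedule action’s documented usage; the cron expression and version are illustrative):

```yaml
# .github/workflows/merge-schedule.yml (sketch; cron and version are illustrative)
name: Merge schedule
on:
  pull_request:
    types: [opened, edited, synchronize]
  schedule:
    # Every Tuesday at 8 AM UTC, check for PRs whose /schedule date has passed
    - cron: '0 8 * * 2'
jobs:
  merge_schedule:
    runs-on: ubuntu-latest
    steps:
      - uses: gr2m/merge-schedule-action@v2
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```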

As usual, you can find the code of this action here.

    After the PR is merged, I also receive an email that notifies me of the action.

    After publishing

    Once a new article is online, I like to give it some visibility.

    To do that, I heavily rely on Azure Logic Apps.

    Azure Logic App for sharing on Twitter

    My blog exposes an RSS feed. And, obviously, when a new article is created, a new item appears in the feed.

    I use it to trigger an Azure Logic App to publish a message on Twitter:

    Azure Logic App workflow for publishing on Twitter

    The Logic App reads the newly published feed item and uses its metadata to create a message that will be shared on Twitter.

    If you prefer, you can use a custom Azure Function! The choice is yours!
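If you went the Azure Function route, the core of it would just be turning the feed item’s metadata into a message. A minimal sketch (the feed item shape and hashtag handling are assumptions for illustration, not the Logic App’s actual schema):

```javascript
// Sketch: build a tweet-sized message from an RSS feed item's metadata.
// The { title, link, categories } shape is an assumption, not the exact
// schema a Logic App or Azure Function would receive.
function buildTweet(feedItem) {
  const hashtags = (feedItem.categories || [])
    .map((c) => '#' + c.replace(/\s+/g, ''))
    .join(' ');

  const message = `📰 New article: ${feedItem.title}\n\n${feedItem.link}` +
    (hashtags ? `\n\n${hashtags}` : '');

  // Keep within Twitter's 280-character limit (links actually count as 23
  // characters, but we keep the check simple here).
  return message.length <= 280 ? message : message.slice(0, 277) + '...';
}
```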

    Cross-post reminder with Azure Logic Apps

Similarly, I use an Azure Logic App to send myself an email reminding me to cross-post my articles to other platforms.

    Azure Logic App workflow for crosspost reminders

    I’ve added a delay so that my content lives longer, and I can repost it even after weeks or months.

Unfortunately, I have to cross-post my articles manually. This is quite time-consuming, especially when there are lots of images: in my MD files I use relative paths, so when porting my content to different platforms I have to find the absolute URL for every image.
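A small script could take most of the pain out of that step. Here’s a sketch of the idea: rewrite relative Markdown image paths to absolute URLs before pasting the content elsewhere (the base URL and regex are illustrative; this isn’t a tool I actually use today):

```javascript
// Sketch: prefix relative Markdown image paths with the article's public URL.
// Already-absolute URLs (starting with http) are left untouched.
function absolutizeImagePaths(markdown, baseUrl) {
  return markdown.replace(
    /!\[([^\]]*)\]\((\.\/)?([^)\s]+)\)/g,
    (match, alt, _dot, path) =>
      path.startsWith('http') ? match : `![${alt}](${baseUrl}/${path})`
  );
}
```

Run over a whole article.md, this turns every `![cover](./cover.png)` into a link the target platform can resolve.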

    And, my friends, this is everything that happens in the background of my blog!

    What I’m still missing

I’ve put a lot of effort into my blog, and I’m incredibly proud of it!

    But still, there are a few things I’d like to improve.

    SEO Tools/analysis

I’ve never considered SEO. Or, rather, keywords.

    I write for the sake of writing, and because I love it. And I don’t like to stuff my content with keywords just to rank better on search engines.

I take care of things like alt texts, well-structured sections, and so on. But I’m not able to follow the “rules” to find the best keywords.

    Maybe I should use some SEO tools to find the best keywords for me. But I don’t want to bend to that way of creating content.

    Also, I should spend more time thinking of the correct title and section titles.

    Any idea?

    Easy upgrade of Gatsby/Migrate to other headless CMSs

    Lastly, I’d like to find another theme or platform and leave the one I’m currently using.

    Not because I don’t like it. But because many dependencies are outdated, and the theme I’m using hasn’t been updated since 2019.

    Wrapping up

    That’s it: in this article, I’ve explained everything that I do when writing a blog post.

    Feel free to take inspiration from my automation to improve your own workflow, and contact me if you have some nice improvements or ideas: I’m all ears!

    So, for now, happy coding!

    🐧




  • Closing the Compliance Gap with Data Discovery



    In today’s regulatory climate, compliance is no longer a box-ticking exercise. It is a strategic necessity. Organizations across industries are under pressure to secure sensitive data, meet privacy obligations, and avoid hefty penalties. Yet, despite all the talk about “data visibility” and “compliance readiness,” one fundamental gap remains: unseen data—the information your business holds but doesn’t know about.

    Unseen data isn’t just a blind spot—it’s a compliance time bomb waiting to trigger regulatory and reputational damage.

    The Myth: Sensitive Data Lives Only in Databases

    Many businesses operate under the dangerous assumption that sensitive information exists only in structured repositories like databases, ERP platforms, or CRM systems. While it’s true these systems hold vast amounts of personal and financial information, they’re far from the whole picture.

    Reality check: Sensitive data is often scattered across endpoints, collaboration platforms, and forgotten storage locations. Think of HR documents on a laptop, customer details in a shared folder, or financial reports in someone’s email archive. These are prime targets for breaches—and they often escape compliance audits because they live outside the “official” data sources.

    Myth vs Reality: Why Structured Data is Not the Whole Story

    Yes, structured sources like SQL databases allow centralized access control and auditing. But compliance risks aren’t limited to structured data. Unstructured and endpoint data can be far more dangerous because:

    • They are harder to track.
    • They often bypass IT policies.
    • They get replicated in multiple places without oversight.

    When organizations focus solely on structured data, they risk overlooking up to 50–70% of their sensitive information footprint.

    The Challenge Without Complete Discovery

    Without full-spectrum data discovery—covering structured, unstructured, and endpoint environments—organizations face several challenges:

    1. Compliance Gaps – Regulations like GDPR, DPDPA, HIPAA, and CCPA require knowing all locations of personal data. If data is missed, compliance reports will be incomplete.
    2. Increased Breach Risk – Cybercriminals exploit the easiest entry points, often targeting endpoints and poorly secured file shares.
    3. Inefficient Remediation – Without knowing where data lives, security teams can’t effectively remove, encrypt, or mask it.
    4. Costly Investigations – Post-breach forensics becomes slower and more expensive when data locations are unknown.

    The Importance of Discovering Data Everywhere

    A truly compliant organization knows where every piece of sensitive data resides, no matter the format or location. That means extending discovery capabilities to:

    1. Structured Data
    • Where it lives: Databases, ERP, CRM, and transactional systems.
    • Why it matters: It holds core business-critical records, such as customer PII, payment data, and medical records.
    • Risks if ignored: Non-compliance with data subject rights requests; inaccurate reporting.
2. Unstructured Data
    • Where it lives: File servers, SharePoint, Teams, Slack, email archives, cloud storage.
    • Why it matters: Contains contracts, scanned IDs, reports, and sensitive documents in freeform formats.
    • Risks if ignored: Harder to monitor, control, and protect due to scattered storage.
3. Endpoint Data
    • Where it lives: Laptops, desktops, mobile devices (Windows, Mac, Linux).
    • Why it matters: Employees often store working copies of sensitive files locally.
    • Risks if ignored: Theft, loss, or compromise of devices can expose critical information.

    Real-World Examples of Compliance Risks from Unseen Data

    1. Healthcare Sector: A hospital’s breach investigation revealed patient records stored on a doctor’s laptop, which was never logged into official systems. GDPR fines followed.
    2. Banking & Finance: An audit found loan application forms with customer PII on a shared drive, accessible to interns.
    3. Retail: During a PCI DSS assessment, old CSV exports containing cardholder data were discovered in an unused cloud folder.
4. Government: Sensitive citizen records were emailed between departments, bypassing secure document transfer systems, and were later exposed in a phishing attack.

    Closing the Gap: A Proactive Approach to Data Discovery

    The only way to eliminate unseen data risks is to deploy comprehensive data discovery and classification tools that scan across servers, cloud platforms, and endpoints—automatically detecting sensitive content wherever it resides.

    This proactive approach supports regulatory compliance, improves breach resilience, reduces audit stress, and ensures that data governance policies are meaningful in practice, not just on paper.

    Bottom Line

    Compliance isn’t just about protecting data you know exists—it’s about uncovering the data you don’t. From servers to endpoints, organizations need end-to-end visibility to safeguard against unseen risks and meet today’s stringent data protection laws.

    Seqrite empowers organizations to achieve full-spectrum data visibility — from servers to endpoints — ensuring compliance and reducing risk. Learn how we can help you discover what matters most.




  • 3 ways to check the object passed to mocks with Moq in C# | Code4IT



In unit tests, sometimes you need to perform deep checks on the object passed to the mocked service. We will learn 3 ways to do that with Moq and C#.


    When writing unit tests, you can use Mocks to simulate the usage of class dependencies.

Even though some developers are strongly against the usage of mocks, they can be useful, especially when the mocked operation does not return any value but you still want to check that you’ve called a specific method with the correct values.

    In this article, we will learn 3 ways to check the values passed to the mocks when using Moq in our C# Unit Tests.

    To better explain those 3 ways, I created this method:

    public void UpdateUser(User user, Preference preference)
    {
        var userDto = new UserDto
        {
            Id = user.id,
            UserName = user.username,
            LikesBeer = preference.likesBeer,
            LikesCoke = preference.likesCoke,
            LikesPizza = preference.likesPizza,
        };
    
        _userRepository.Update(userDto);
    }
    

    UpdateUser simply accepts two objects, user and preference, combines them into a single UserDto object, and then calls the Update method of _userRepository, which is an interface injected in the class constructor.

    As you can see, we are not interested in the return value from _userRepository.Update. Rather, we are interested in checking that we are calling it with the right values.

    We can do it in 3 ways.

    Verify each property with It.Is

    The simplest, most common way is by using It.Is<T> within the Verify method.

    [Test]
    public void VerifyEachProperty()
    {
        // Arrange
        var user = new User(1, "Davide");
        var preferences = new Preference(true, true, false);
    
        UserDto expected = new UserDto
        {
            Id = 1,
            UserName = "Davide",
            LikesBeer = true,
            LikesCoke = false,
            LikesPizza = true,
        };
    
        //Act
    
        userUpdater.UpdateUser(user, preferences);
    
        //Assert
        userRepo.Verify(_ => _.Update(It.Is<UserDto>(u =>
            u.Id == expected.Id
            && u.UserName == expected.UserName
            && u.LikesPizza == expected.LikesPizza
            && u.LikesBeer == expected.LikesBeer
            && u.LikesCoke == expected.LikesCoke
        )));
    }
    

    In the example above, we used It.Is<UserDto> to check the exact item that was passed to the Update method of userRepo.

Notice that it accepts a parameter. That parameter is of type Expression<Func<UserDto, bool>>, and you can use it to define when your expectations are met.

    In this particular case, we’ve checked each and every property within that function:

    u =>
        u.Id == expected.Id
        && u.UserName == expected.UserName
        && u.LikesPizza == expected.LikesPizza
        && u.LikesBeer == expected.LikesBeer
        && u.LikesCoke == expected.LikesCoke
    

    This approach works well when you have to perform checks on only a few fields. But the more fields you add, the longer and messier that code becomes.

Also, a problem with this approach is that if it fails, it becomes hard to understand the cause of the failure, because there is no indication of which specific field did not match the expectations.

    Here’s an example of an error message:

    Expected invocation on the mock at least once, but was never performed: _ => _.Update(It.Is<UserDto>(u => (((u.Id == 1 && u.UserName == "Davidde") && u.LikesPizza == True) && u.LikesBeer == True) && u.LikesCoke == False))
    
    Performed invocations:
    
    Mock<IUserRepository:1> (_):
        IUserRepository.Update(UserDto { UserName = Davide, Id = 1, LikesPizza = True, LikesCoke = False, LikesBeer = True })
    

    Can you spot the error? And what if you were checking 15 fields instead of 5?

    Verify with external function

Another approach is to externalize the function.

    [Test]
    public void WithExternalFunction()
    {
        //Arrange
        var user = new User(1, "Davide");
        var preferences = new Preference(true, true, false);
    
        UserDto expected = new UserDto
        {
            Id = 1,
            UserName = "Davide",
            LikesBeer = true,
            LikesCoke = false,
            LikesPizza = true,
        };
    
        //Act
        userUpdater.UpdateUser(user, preferences);
    
        //Assert
        userRepo.Verify(_ => _.Update(It.Is<UserDto>(u => AreEqual(u, expected))));
    }
    
    private bool AreEqual(UserDto u, UserDto expected)
    {
        Assert.AreEqual(expected.UserName, u.UserName);
        Assert.AreEqual(expected.Id, u.Id);
        Assert.AreEqual(expected.LikesBeer, u.LikesBeer);
        Assert.AreEqual(expected.LikesCoke, u.LikesCoke);
        Assert.AreEqual(expected.LikesPizza, u.LikesPizza);
    
        return true;
    }
    

    Here, we are passing an external function to the It.Is<T> method.

    This approach allows us to define more explicit and comprehensive checks.

    The good parts of it are that you will gain more control over the assertions, and you will also have better error messages in case a test fails:

    Expected string length 6 but was 7. Strings differ at index 5.
    Expected: "Davide"
    But was:  "Davidde"
    

The bad part is that you will stuff your test class with lots of different methods, and the class can easily become hard to maintain. Unfortunately, we cannot use local functions.

On the other hand, having external functions allows us to combine them when we need checks that can be reused across test cases.

    Intercepting the function parameters with Callback

    Lastly, we can use a hidden gem of Moq: Callbacks.

With Callbacks, you can store in a local variable a reference to the object that was passed to the mocked method.

    [Test]
    public void CompareWithCallback()
    {
        // Arrange
    
        var user = new User(1, "Davide");
        var preferences = new Preference(true, true, false);
    
        UserDto actual = null;
        userRepo.Setup(_ => _.Update(It.IsAny<UserDto>()))
            .Callback(new InvocationAction(i => actual = (UserDto)i.Arguments[0]));
    
        UserDto expected = new UserDto
        {
            Id = 1,
            UserName = "Davide",
            LikesBeer = true,
            LikesCoke = false,
            LikesPizza = true,
        };
    
        //Act
        userUpdater.UpdateUser(user, preferences);
    
        //Assert
        Assert.IsTrue(AreEqual(expected, actual));
    }
    

In this way, you can use it locally and run assertions directly on that object without relying on the Verify method.

    Or, if you use records, you can use the auto-equality checks to simplify the Verify method as I did in the previous example.
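To make that concrete, here’s a sketch of what record-based equality could look like. Note this assumes UserDto is rewritten as a positional record (the class in this article uses object-initializer syntax, so this is an illustrative variation, not the code from the examples above):

```csharp
// Assumption: UserDto redefined as a record, which generates value-based
// equality over all of its properties.
public record UserDto(int Id, string UserName, bool LikesBeer, bool LikesCoke, bool LikesPizza);

// With value equality, the whole object can be compared in one expression:
userRepo.Verify(_ => _.Update(It.Is<UserDto>(u => u.Equals(expected))));
```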

    Wrapping up

    In this article, we’ve explored 3 ways to perform checks on the objects passed to dependencies mocked with Moq.

    Each way has its pros and cons, and it’s up to you to choose the approach that fits you the best.

    I personally prefer the second and third approaches, as they allow me to perform better checks on the passed values.

    What about you?

    For now, happy coding!

    🐧




  • Building a Blended Material Shader in WebGL with Solid.js




Blackbird was a fun, experimental site that I used as a way to get familiar with WebGL inside of Solid.js. It went through the story of how the SR-71 was built in super technical detail. The wireframe effect covered here helped visualize the technology beneath the surface of the SR-71 while keeping the polished metal exterior visible, matching the site’s aesthetic.

Here is how the effect looks on the Blackbird site:

    In this tutorial, we’ll rebuild that effect from scratch: rendering a model twice, once as a solid and once as a wireframe, then blending the two together in a shader for a smooth, animated transition. The end result is a flexible technique you can use for technical reveals, holograms, or any moment where you want to show both the structure and the surface of a 3D object.
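In shader terms, the blend at the heart of the effect boils down to mixing two render-target textures along a gradient. A rough sketch, not the final code (uniform names here are illustrative):

```glsl
// Rough sketch: blend a solid pass and a wireframe pass along a moving gradient.
uniform sampler2D uSolidTexture;     // model rendered with its normal material
uniform sampler2D uWireframeTexture; // same model rendered as a wireframe
uniform float uProgress;             // 0.0 = all solid, 1.0 = all wireframe
varying vec2 vUv;

void main() {
  vec4 solidColor = texture2D(uSolidTexture, vUv);
  vec4 wireColor = texture2D(uWireframeTexture, vUv);

  // A black-to-white gradient across the screen drives the transition
  float gradient = smoothstep(uProgress - 0.2, uProgress + 0.2, vUv.x);
  gl_FragColor = mix(wireColor, solidColor, gradient);
}
```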

    There are three things at work here: material properties, render targets, and a black-to-white shader gradient. Let’s get into it!

    But First, a Little About Solid.js

Solid.js isn’t a framework name you hear often. I’ve switched my personal work to it for the ridiculously minimal developer experience, and because JSX remains the greatest thing since sliced bread. You absolutely don’t need to use the Solid.js part of this demo; you could strip it out and use vanilla JS all the same. But who knows, you may enjoy it 🙂

    Intrigued? Check out Solid.js.

    Why I Switched

    TLDR: Full-stack JSX without all of the opinions of Next and Nuxt, plus it’s like 8kb gzipped, wild.

The technical version: it’s written in JSX but doesn’t use a virtual DOM, so a reactive value (think useState()) doesn’t re-render an entire component, only the one DOM node it touches. It also runs isomorphically, so "use client" is a thing of the past.

    Setting Up Our Scene

    We don’t need anything wild for the effect: a Mesh, Camera, Renderer, and Scene will do. I use a base Stage class (for theatrical-ish naming) to control when things get initialized.

    A Global Object for Tracking Window Dimensions

window.innerWidth and window.innerHeight trigger document reflow when you use them (more about document reflow here). So I keep them in one object, updating it only when necessary and reading from the object instead of querying window and causing reflow. Notice these are all set to 0 rather than to actual values by default: window is evaluated as undefined when using SSR, so we want to wait to set them until our app is mounted, the GL class is initialized, and window is defined, to avoid everybody’s favorite error: Cannot read properties of undefined (reading ‘window’).

    // src/gl/viewport.js
    
    export const viewport = {
      width: 0,
      height: 0,
      devicePixelRatio: 1,
      aspectRatio: 0,
    };
    
    export const resizeViewport = () => {
      viewport.width = window.innerWidth;
      viewport.height = window.innerHeight;
    
      viewport.aspectRatio = viewport.width / viewport.height;
    
      viewport.devicePixelRatio = Math.min(window.devicePixelRatio, 2);
    };

    A Basic Three.js Scene, Renderer, and Camera

    Before we can render anything, we need a small framework to handle our scene setup, rendering loop, and resizing logic. Instead of scattering this across multiple files, we’ll wrap it in a Stage class that initializes the camera, renderer, and scene in one place. This makes it easier to keep our WebGL lifecycle organized, especially once we start adding more complex objects and effects.

    // src/gl/stage.js
    
    import { WebGLRenderer, Scene, PerspectiveCamera } from 'three';
    import { viewport, resizeViewport } from './viewport';
    
    class Stage {
      init(element) {
        resizeViewport() // Set the initial viewport dimensions, helps to avoid using window inside of viewport.js for SSR-friendliness
        
        this.camera = new PerspectiveCamera(45, viewport.aspectRatio, 0.1, 1000);
        this.camera.position.set(0, 0, 2); // back the camera up 2 units so it isn't on top of the meshes we make later, you won't see them otherwise.
    
        this.renderer = new WebGLRenderer();
        this.renderer.setSize(viewport.width, viewport.height);
        element.appendChild(this.renderer.domElement); // attach the renderer to the dom so our canvas shows up
    
        this.renderer.setPixelRatio(viewport.devicePixelRatio); // Renders higher pixel ratios for screens that require it.
    
        this.scene = new Scene();
      }
    
      render() {
        this.renderer.render(this.scene, this.camera);
        requestAnimationFrame(this.render.bind(this));
    // All of the scene's child classes with a render method will have it called automatically
        this.scene.children.forEach((child) => {
          if (child.render && typeof child.render === 'function') {
            child.render();
          }
        });
      }
    
      resize() {
        this.renderer.setSize(viewport.width, viewport.height);
        this.camera.aspect = viewport.aspectRatio;
        this.camera.updateProjectionMatrix();
    
    // All of the scene's child classes with a resize method will have it called automatically
        this.scene.children.forEach((child) => {
          if (child.resize && typeof child.resize === 'function') {
            child.resize();
          }
        });
      }
    }
    
    export default new Stage();

    And a Fancy Mesh to Go With It

    With our stage ready, we can give it something interesting to render. A torus knot is perfect for this: it has plenty of curves and detail to show off both the wireframe and solid passes. We’ll start with a simple MeshNormalMaterial in wireframe mode so we can clearly see its structure before moving on to the blended shader version.

    // src/gl/torus.js
    
    import { Mesh, MeshNormalMaterial, TorusKnotGeometry } from 'three';
    
    export default class Torus extends Mesh {
      constructor() {
        super();
    
        this.geometry = new TorusKnotGeometry(1, 0.285, 300, 26);
        this.material = new MeshNormalMaterial({
          wireframe: true,
        });
    
        this.position.set(0, 0, -8); // Back up the mesh from the camera so it's visible
      }
    }

    A quick note on lights

    For simplicity we’re using MeshNormalMaterial so we don’t have to mess with lights. The original effect on Blackbird had six lights, waaay too many. The GPU on my M1 Max was choked to 30fps trying to render the complex models and realtime six-point lighting. But reducing this to just 2 lights (which visually looked identical) ran at 120fps no problem. Three.js isn’t like Blender where you can plop in 14 lights and torture your beefy computer with the render for 12 hours while you sleep. The lights in WebGL have consequences 🫠

    Now, the Solid JSX Components to House It All

    // src/components/GlCanvas.tsx
    
    import { onMount, onCleanup } from 'solid-js';
    import Stage from '~/gl/stage';
    
    export default function GlCanvas() {
    // let is used instead of refs, these aren't reactive
      let el;
      let gl;
      let observer;
    
      onMount(() => {
        if(!el) return
        gl = Stage;
    
        gl.init(el);
        gl.render();
    
    
        observer = new ResizeObserver((entry) => gl.resize());
        observer.observe(el); // use ResizeObserver instead of the window resize event. 
        // It is debounced AND fires once when initialized, no need to call resize() onMount
      });
    
      onCleanup(() => {
        if (observer) {
          observer.disconnect();
        }
      });
    
    
      return (
        <div
          ref={el}
          style={{
            position: 'fixed',
            inset: 0,
            height: '100lvh',
            width: '100vw',
          }}
          
        />
      );
    }

    let is used to declare a ref; there is no formal useRef() function in Solid, and signals are the only reactive primitive. Read more on refs in Solid.

    Then slap that component into app.tsx:

    // src/app.tsx
    
    import { Router } from '@solidjs/router';
    import { FileRoutes } from '@solidjs/start/router';
    import { Suspense } from 'solid-js';
    import GlCanvas from './components/GlCanvas';
    
    export default function App() {
      return (
        <Router
          root={(props) => (
            <Suspense>
              {props.children}
              <GlCanvas />
            </Suspense>
          )}
        >
          <FileRoutes />
        </Router>
      );
    }

    Each 3D piece I use is tied to a specific element on the page (usually for timeline and scrolling), so I create an individual component to control each class. This helps me keep organized when I have 5 or 6 WebGL moments on one page.

    // src/components/WireframeDemo.tsx
    
    import { createEffect, createSignal, onMount } from 'solid-js'
    import Stage from '~/gl/stage';
    import Torus from '~/gl/torus';
    
    export default function WireframeDemo() {
      let el;
      const [element, setElement] = createSignal(null);
      const [actor, setActor] = createSignal(null);
    
      createEffect(() => {
        setElement(el);
        if (!element()) return;
    
        setActor(new Torus()); // Stage is initialized when the page initially mounts, 
        // so it's not available until the next tick. 
        // A signal forces this update to the next tick, 
        // after Stage is available.
    
        Stage.scene.add(actor());
      });
    
      return <div ref={el} />;
    }

    createEffect() instead of onMount(): this automatically tracks dependencies (element and actor in this case) and fires the function when they change; no more useEffect() with dependency arrays 🙃. Read more on createEffect in Solid.

    Then a minimal route to put the component on:

    // src/routes/index.tsx
    
    import WireframeDemo from '~/components/WireframeDemo';
    
    export default function Home() {
      return (
        <main>
          <WireframeDemo />
        </main>
      );
    }
    Diagram showing the folder structure of a code project

    Now you’ll see this:

    Rainbow torus knot

    Switching a Material to Wireframe

    I loved wireframe styling for the Blackbird site! It fit the prototype feel of the story: fully textured models felt too clean, while wireframes are a bit “dirtier” and unpolished. You can wireframe just about any material in Three.js with this:

    // /gl/torus.js
    
      this.material.wireframe = true;
      this.material.needsUpdate = true;
    Rainbow torus knot changing from wireframe to solid colors

    But we want to do this dynamically on only part of our model, not on the entire thing.

    Enter render targets.

    The Fun Part: Render Targets

    Render targets are a super deep topic, but they boil down to this: whatever you see on screen is a frame rendered by your GPU. In WebGL, you can capture that frame and re-use it as a texture on another mesh. You are creating a “target” for your rendered output: a render target.

    Since we’re going to need two of these targets, we can make a single class and re-use it.

    // src/gl/render-target.js
    
    import { WebGLRenderTarget } from 'three';
    import { viewport } from './viewport';
    
    export default class RenderTarget extends WebGLRenderTarget {
      constructor() {
        super();
    
        // Use setSize() so the texture and internal viewport resize along with the target
        this.setSize(
          viewport.width * viewport.devicePixelRatio,
          viewport.height * viewport.devicePixelRatio
        );
      }
    
      resize() {
        const w = viewport.width * viewport.devicePixelRatio;
        const h = viewport.height * viewport.devicePixelRatio;
    
        this.setSize(w, h)
      }
    }

    This is just an output for a texture, nothing more.
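One detail worth internalizing here is the devicePixelRatio multiplication: the target is allocated in physical pixels, while the viewport reports CSS pixels. A tiny standalone helper (my naming, not part of the project) shows the arithmetic:

```javascript
// Compute a render target's size in physical pixels from CSS-pixel
// dimensions and the display's device pixel ratio.
function targetSize(cssWidth, cssHeight, devicePixelRatio) {
  return {
    width: Math.round(cssWidth * devicePixelRatio),
    height: Math.round(cssHeight * devicePixelRatio),
  };
}

// On a 2x ("retina") display, an 800x600 CSS-pixel viewport
// needs a 1600x1200 render target to stay sharp.
const size = targetSize(800, 600, 2);
```

Skipping this multiplication is a classic source of blurry render-target output on high-DPI screens.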

    Now we can make the class that will consume these outputs. It’s a lot of classes, I know, but splitting up individual units like this helps me keep track of where stuff happens. 800 line spaghetti mega-classes are the stuff of nightmares when debugging WebGL.

    // src/gl/targeted-torus.js
    
    import {
      Mesh,
      MeshNormalMaterial,
      PerspectiveCamera,
      PlaneGeometry,
    } from 'three';
    import Torus from './torus';
    import { viewport } from './viewport';
    import RenderTarget from './render-target';
    import Stage from './stage';
    
    export default class TargetedTorus extends Mesh {
      targetSolid = new RenderTarget();
      targetWireframe = new RenderTarget();
    
      scene = new Torus(); // The shape we created earlier
      camera = new PerspectiveCamera(45, viewport.aspectRatio, 0.1, 1000);
      
      constructor() {
        super();
    
        this.geometry = new PlaneGeometry(1, 1);
        this.material = new MeshNormalMaterial();
      }
    
      resize() {
        this.targetSolid.resize();
        this.targetWireframe.resize();
    
        this.camera.aspect = viewport.aspectRatio;
        this.camera.updateProjectionMatrix();
      }
    }

    Now, switch our WireframeDemo.tsx component to use the TargetedTorus class, instead of Torus:

    // src/components/WireframeDemo.tsx 
    
    import { createEffect, createSignal, onMount } from 'solid-js';
    import Stage from '~/gl/stage';
    import TargetedTorus from '~/gl/targeted-torus';
    
    export default function WireframeDemo() {
      let el;
      const [element, setElement] = createSignal(null);
      const [actor, setActor] = createSignal(null);
    
      createEffect(() => {
        setElement(el);
        if (!element()) return;
    
        setActor(new TargetedTorus()); // << change me
    
        Stage.scene.add(actor());
      });
    
      return <div ref={el} data-gl="wireframe" />;
    }

    “Now all I see is a blue square Nathan, it feels like we’re going backwards, show me the cool shape again”.

    Shhhhh, It’s by design I swear!

    From MeshNormalMaterial to ShaderMaterial

    We can now take our Torus rendered output and smack it onto the blue plane as a texture using ShaderMaterial. MeshNormalMaterial doesn’t let us use a texture, and we’ll need shaders soon anyway. Inside of targeted-torus.js, swap the MeshNormalMaterial import for ShaderMaterial, remove the MeshNormalMaterial, and switch this in:

    // src/gl/targeted-torus.js
    
    this.material = new ShaderMaterial({
      vertexShader: `
        varying vec2 v_uv;
    
        void main() {
          gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
          v_uv = uv;
        }
      `,
      fragmentShader: `
        varying vec2 v_uv;
        varying vec3 v_position;
    
        void main() {
          gl_FragColor = vec4(0.67, 0.08, 0.86, 1.0);
        }
      `,
    });

    Now we have a much prettier purple plane with the help of two shaders:

    • Vertex shaders manipulate the vertex locations of our material; we aren’t going to touch this one further
    • Fragment shaders assign the colors and properties to each pixel of our material. This shader tells every pixel to be purple
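A handy trick while iterating on fragment shaders is to paint the UV coordinates directly as color. This drop-in replacement for the fragment shader string (a common debugging technique, not from the original project) renders the plane as a red-green gradient, making the 0-to-1 UV space visible:

```javascript
// A fragment shader that visualizes UVs:
// u maps to the red channel, v to the green channel.
const uvDebugFragmentShader = `
  varying vec2 v_uv;

  void main() {
    gl_FragColor = vec4(v_uv, 0.0, 1.0);
  }
`;
```

Black lands at UV (0, 0), yellow at (1, 1), which makes it easy to spot flipped or stretched coordinates.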

    Using the Render Target Texture

    To show our Torus instead of that purple color, we can feed the fragment shader an image texture via uniforms:

    // src/gl/targeted-torus.js
    
    this.material = new ShaderMaterial({
      vertexShader: `
        varying vec2 v_uv;
    
        void main() {
          gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
          v_uv = uv;
        }
      `,
      fragmentShader: `
        varying vec2 v_uv;
        varying vec3 v_position;
    
        // declare 2 uniforms
        uniform sampler2D u_texture_solid;
        uniform sampler2D u_texture_wireframe;
    
        void main() {
          // declare 2 images
          vec4 wireframe_texture = texture2D(u_texture_wireframe, v_uv);
          vec4 solid_texture = texture2D(u_texture_solid, v_uv);
    
          // set the color to that of the image
          gl_FragColor = solid_texture;
        }
      `,
      uniforms: {
        u_texture_solid: { value: this.targetSolid.texture },
        u_texture_wireframe: { value: this.targetWireframe.texture },
      },
    });

    And add a render method to our TargetedTorus class (this is called automatically by the Stage class):

    // src/gl/targeted-torus.js
    
    render() {
      // Draw the torus into the solid render target, then feed its texture to the shader
      Stage.renderer.setRenderTarget(this.targetSolid);
      Stage.renderer.render(this.scene, this.camera);
      Stage.renderer.setRenderTarget(null);
    
      this.material.uniforms.u_texture_solid.value = this.targetSolid.texture;
    }

    THE TORUS IS BACK. We’ve passed our image texture into the shader and it’s outputting our original render.

    Mixing Wireframe and Solid Materials with Shaders

    Shaders were black magic to me before this project. It was my first time using them in production, and I’m used to frontend work, where you think in boxes. Shaders are coordinates from 0 to 1, which I find far harder to reason about. But I’d used Photoshop and After Effects with layers plenty of times, and those applications do a lot of the same work shaders can: GPU computing. That made it far easier: I started by picturing or drawing what I wanted, thinking through how I’d do it in Photoshop, then asking myself how I could do it with shaders. Going from Photoshop or AE into shaders is far less mentally taxing when you don’t have a deep foundation in shaders.

    Populating Both Render Targets

    At the moment, we are only saving data to the targetSolid render target. We will update our render loop so that our shader has both targetSolid and targetWireframe available simultaneously.

    // src/gl/targeted-torus.js
    
    render() {
      // Render wireframe version to wireframe render target
      this.scene.material.wireframe = true;
      Stage.renderer.setRenderTarget(this.targetWireframe);
      Stage.renderer.render(this.scene, this.camera);
      this.material.uniforms.u_texture_wireframe.value = this.targetWireframe.texture;
    
      // Render solid version to solid render target
      this.scene.material.wireframe = false;
      Stage.renderer.setRenderTarget(this.targetSolid);
      Stage.renderer.render(this.scene, this.camera);
      this.material.uniforms.u_texture_solid.value = this.targetSolid.texture;
    
      // Reset render target
      Stage.renderer.setRenderTarget(null);
    }

    With this, you end up with a flow that under the hood looks like this:

    Diagram with red lines describing data being passed around

    Fading Between Two Textures

    Our fragment shader will get a little update, 2 additions:

    • smoothstep creates a smooth ramp between 2 values. UVs only go from 0 to 1, so in this case we use .15 and .65 as the limits (they make the effect more obvious than 0 and 1). Then we use the x value of the UVs to define which value gets fed into smoothstep.
    • vec4 mixed = mix(wireframe_texture, solid_texture, blend); mix does exactly what it says, mixes 2 values together at a ratio determined by blend. .5 being a perfectly even split.
    // src/gl/targeted-torus.js
    
    fragmentShader: `
      varying vec2 v_uv;
      varying vec3 v_position;
    
      // declare 2 uniforms
      uniform sampler2D u_texture_solid;
      uniform sampler2D u_texture_wireframe;
    
      void main() {
        // declare 2 images
        vec4 wireframe_texture = texture2D(u_texture_wireframe, v_uv);
        vec4 solid_texture = texture2D(u_texture_solid, v_uv);
    
        float blend = smoothstep(0.15, 0.65, v_uv.x);
        vec4 mixed = mix(wireframe_texture, solid_texture, blend);        
    
        gl_FragColor = mixed;
      }
    `,

    And boom, MIXED:

    Rainbow torus knot with wireframe texture
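If the GLSL built-ins feel opaque, here they are re-implemented in plain JavaScript for scalar values (illustration only; on the GPU this runs for every pixel):

```javascript
// clamp, smoothstep and mix, mirroring the GLSL built-ins for scalars.
const clamp = (x, min, max) => Math.min(Math.max(x, min), max);

// Hermite ramp: 0 below edge0, 1 above edge1, smooth in between.
function smoothstep(edge0, edge1, x) {
  const t = clamp((x - edge0) / (edge1 - edge0), 0, 1);
  return t * t * (3 - 2 * t);
}

// Linear interpolation: blend = 0 gives a, blend = 1 gives b.
function mix(a, b, blend) {
  return a * (1 - blend) + b * blend;
}

// At v_uv.x = 0.15 the pixel is fully wireframe, at 0.65 fully solid.
const blendLeft = smoothstep(0.15, 0.65, 0.15); // 0
const blendRight = smoothstep(0.15, 0.65, 0.65); // 1
```

Every pixel between those two x positions gets a blend somewhere between 0 and 1, which is exactly what produces the soft transition band across the plane.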

    Let’s be honest with ourselves: this looks exquisitely boring while static, so let’s spice it up with a little magic from GSAP.

    // src/gl/torus.js
    
    import {
      Mesh,
      MeshNormalMaterial,
      TorusKnotGeometry,
    } from 'three';
    import gsap from 'gsap';
    
    export default class Torus extends Mesh {
      constructor() {
        super();
    
        this.geometry = new TorusKnotGeometry(1, 0.285, 300, 26);
        this.material = new MeshNormalMaterial();
    
        this.position.set(0, 0, -8);
    
        // add me!
        gsap.to(this.rotation, {
          y: 540 * (Math.PI / 180), // needs to be in radians, not degrees
          ease: 'power3.inOut',
          duration: 4,
          repeat: -1,
          yoyo: true,
        });
      }
    }

    Thank You!

    Congratulations, you’ve officially spent a measurable portion of your day blending two materials together. It was worth it though, wasn’t it? At the very least, I hope this saved you some of the mental gymnastics of orchestrating a pair of render targets.

    Have questions? Hit me up on Twitter!



    Source link

  • How to log Correlation IDs in .NET APIs with Serilog | Code4IT

    How to log Correlation IDs in .NET APIs with Serilog | Code4IT


    APIs often call other APIs to perform operations. If an error occurs in one of them, how can you understand the context that caused that error? You can use Correlation IDs in your logs!


    Correlation IDs are values that are passed across different systems to correlate the operations performed during a “macro” operation.

    Most of the time they are passed as HTTP Headers – of course in systems that communicate via HTTP.

    In this article, we will learn how to log those Correlation IDs using Serilog, a popular library that helps handle logs in .NET applications.

    Setting up the demo dotNET project

    This article is heavily code-oriented. So, let me first describe the demo project.

    Overview of the project

    To demonstrate how to log Correlation IDs and how to correlate logs generated by different systems, I’ve created a simple solution that handles bookings for a trip.

    The “main” project, BookingSystem, fetches data from external systems by calling some HTTP endpoints; it then manipulates the data and returns an aggregate object to the caller.

    BookingSystem depends on two projects, placed within the same solution: CarRentalSystem, which returns data about the available cars in a specified date range, and HotelsSystem, which does the same for hotels.

    So, this is the data flow:

    Operations sequence diagram

    If an error occurs in any of those systems, can we understand the full story of the failed request? No. Unless we use Correlation IDs!

    Let’s see how to add them and how to log them.

    We need to propagate HTTP Headers. You could implement it from scratch, as we’ve seen in a previous article. Or we could use an existing library that does it all for us.

    Of course, let’s go with the second approach.

    For every project that will propagate HTTP headers, we have to follow these steps.

    First, we need to install Microsoft.AspNetCore.HeaderPropagation: this NuGet package allows us to add the .NET classes needed to propagate HTTP headers.

    Next, we have to update the part of the project that we use to configure our application. For .NET projects with Minimal APIs, it’s the Program class.

    Here we need to add the capability to read the HTTP Context, by using

    builder.Services.AddHttpContextAccessor();
    

    As you can imagine, this is needed because, to propagate HTTP Headers, we need to know what the incoming HTTP Headers are. And they can be read from the HttpContext object.

    Next, we need to specify, as a generic behavior, which headers must be propagated. For instance, to propagate the “my-custom-correlation-id” header, you must add

    builder.Services.AddHeaderPropagation(options => options.Headers.Add("my-custom-correlation-id"));
    

    Then, for every HttpClient that will propagate those headers, you have to add AddHeaderPropagation(), like this:

    builder.Services.AddHttpClient("cars_system", c =>
        {
            c.BaseAddress = new Uri("https://localhost:7159/");
        }).AddHeaderPropagation();
    

    Finally, one last instruction that tells the application that it needs to use the Header Propagation functionality:

    app.UseHeaderPropagation();
    

    To summarize, here’s the minimal configuration to add HTTP Header propagation in a dotNET API.

    public static void Main(string[] args)
    {
        var builder = WebApplication.CreateBuilder(args);
    
        builder.Services.AddControllers();
        builder.Services.AddHttpContextAccessor();
        builder.Services.AddHeaderPropagation(options => options.Headers.Add("my-custom-correlation-id"));
    
        builder.Services.AddHttpClient("cars_system", c =>
        {
            c.BaseAddress = new Uri("https://localhost:7159/");
        }).AddHeaderPropagation();
    
        var app = builder.Build();
        app.UseHeaderPropagation();
        app.MapControllers();
        app.Run();
    }
    

    We’re almost ready to go!

    But we’re missing the central point of this article: logging an HTTP Header as a Correlation ID!

    Initializing Serilog

    We’ve already met Serilog several times in this blog, so I won’t repeat how to install it and how to define logs the best way possible.

    We will write our logs on Seq, and we’re overriding the minimum level to skip the noise generated by .NET:

    builder.Host.UseSerilog((ctx, lc) => lc
        .WriteTo.Seq("http://localhost:5341")
        .MinimumLevel.Information()
        .MinimumLevel.Override("Microsoft", LogEventLevel.Warning)
        .MinimumLevel.Override("Microsoft.AspNetCore", LogEventLevel.Warning)
        .Enrich.FromLogContext());
    

    Since you probably know what’s going on, let me go straight to the point.

    Install Serilog Enricher for Correlation IDs

    We’re gonna use a specific library to log HTTP Headers treating them as Correlation IDs. To use it, you have to install the Serilog.Enrichers.CorrelationId package available on NuGet.

    Therefore, you can simply run

    dotnet add package Serilog.Enrichers.CorrelationId
    

    in every .NET project that will use this functionality.

    Once we have that NuGet package ready, we can add its functionality to our logger by adding this line:

    .Enrich.WithCorrelationIdHeader("my-custom-correlation-id")
    

    This simple line tells dotnet that, when we see an HTTP Header named “my-custom-correlation-id”, we should log it as a Correlation ID.

    Run it all together

    Now we have everything in place – it’s time to run it!

    We have to run all the 3 services at the same time (you can do it with Visual Studio, or you can run them separately from the command line), and we need to have Seq installed on our local machine.

    You will see 3 instances of Swagger, and each instance is running under a different port.

    Swagger pages of our systems

    Once we have all the 3 applications up and running, we can call the /Bookings endpoint passing it a date range and an HTTP Header with key “my-custom-correlation-id” and value = “123” (or whatever we want).

    How to use HTTP Headers with Postman

    If everything worked as expected, we can open Seq and see all the logs we’ve written in our applications:

    All logs in SEQ

    Open one of them and have a look at the attributes of the logs: you will see a CorrelationId field with the value set to “123”.

    Our HTTP Header is now treated as Correlation ID

    Now, to better demonstrate how it works, call the endpoint again, but this time set “789” as my-custom-correlation-id, and specify a different date range. You should be able to see another set of logs generated by this second call.

    You can now apply filters to see which logs are related to a specific Correlation ID: open one log, click on the tick button and select “Find”.

    Filter button on SEQ

    You will then see all and only logs that were generated during the call with header my-custom-correlation-id set to “789”.

    List of all logs related to a specific Correlation ID

    Further readings

    That’s it. With just a few lines of code, you can dramatically improve your logging strategy.

    You can download and run the whole demo here:

    🔗 LogCorrelationId demo | GitHub

    To run this project you have to install both Serilog and Seq. You can do that by following this step-by-step guide:

    🔗 Logging with Serilog and Seq | Code4IT

    For this article, we’ve used the Microsoft.AspNetCore.HeaderPropagation package, which is ready to use. Are you interested in building your own solution – or, at least, learning how you can do that?

    🔗 How to propagate HTTP Headers (and Correlation IDs) using HttpClients in C# | Code4IT

    Lastly, why not use Serilog’s Scopes? And what are they? Check it out here:

    🔗 How to improve Serilog logging in .NET 6 by using Scopes | Code4IT

    Wrapping up

    This article concludes a sort of imaginary path that taught us how to use Serilog, how to correlate different logs within the same application using Scopes, and how to correlate logs from different services using Correlation IDs.

    Using these capabilities, you will be able to write logs that can help you understand the context in which a specific log occurred, thus helping you fix errors more efficiently.

    This article first appeared on Code4IT

    Happy coding!

    🐧



    Source link

  • Shortest route between points in a city – with Python and OpenStreetMap – Useful code

    Shortest route between points in a city – with Python and OpenStreetMap – Useful code


    After the article for introduction to Graphs in Python, I have decided to put the graph theory into practice and start looking for the shortest routes between points in a city. Parts of the code are inspired by the book Optimization Algorithms by Alaa Khamis, other parts are mine 🙂

    The idea is to go from the monument to the church by car. The flag marks the midpoint between the two points.

    The solution uses several powerful Python libraries:

    • OSMnx to download and work with real road networks from OpenStreetMap
    • NetworkX to model the road system as a graph and calculate the shortest path using Dijkstra’s algorithm
    • Folium for interactive map visualization
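Under the hood, NetworkX computes that shortest path with Dijkstra's algorithm over the weighted road graph. A minimal JavaScript sketch of the idea on a toy road network (made-up node names and distances, not OSM data) looks like this:

```javascript
// Dijkstra's algorithm over a weighted adjacency list.
// graph: { node: { neighbor: edgeLength, ... }, ... }
function dijkstra(graph, start) {
  const dist = { [start]: 0 };
  const visited = new Set();

  while (true) {
    // Pick the unvisited node with the smallest known distance.
    let current = null;
    for (const node of Object.keys(dist)) {
      if (!visited.has(node) && (current === null || dist[node] < dist[current])) {
        current = node;
      }
    }
    if (current === null) break;
    visited.add(current);

    // Relax every edge leaving the current node.
    for (const [neighbor, length] of Object.entries(graph[current] || {})) {
      const candidate = dist[current] + length;
      if (!(neighbor in dist) || candidate < dist[neighbor]) {
        dist[neighbor] = candidate;
      }
    }
  }
  return dist; // shortest distance from start to every reachable node
}

// Toy road network, distances in meters.
const roads = {
  monument: { flag: 400, plaza: 900 },
  flag: { church: 450 },
  plaza: { church: 100 },
};
const distances = dijkstra(roads, 'monument');
```

OSMnx does the heavy lifting of building the real graph; the edge weights are road-segment lengths, and the route is recovered by remembering each node's predecessor during relaxation.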

    We start by geocoding the two landmarks to get their latitude and longitude. Then we build a drivable street network centered around the Levski Monument using ox.graph_from_address. After snapping both points to the nearest graph nodes, we compute the shortest route by distance. Finally, we visualize everything both in an interactive map and in a clean black-on-white static graph where the path is drawn in yellow.


    Nodes and edges in radius of 1000 meters around the center point


    Red and green are the nodes that are closest to the start and end points.


    The closest driving route between the two points is in blue.

    The full code is implemented in a Jupyter Notebook on GitHub and explained in the video.

    https://www.youtube.com/watch?v=kQIK2P7erAA

    GitHub link:

    Enjoy the rest of your day! 🙂



    Source link

  • How to customize Swagger UI with custom CSS in .NET 7 | Code4IT

    How to customize Swagger UI with custom CSS in .NET 7 | Code4IT


    Exposing Swagger UI is a good way to help developers consume your APIs. But don’t be boring: customize your UI with some fancy CSS


    Brace yourself, Christmas is coming! 🎅

    If you want to add a more festive look to your Swagger UI, it’s just a matter of creating a CSS file and injecting it.

    You should create a custom CSS for your Swagger endpoints, especially if you are exposing them outside your company: if your company has a recognizable color palette, using it in your Swagger pages can make your brand stand out.

    In this article, we will learn how to inject a CSS file in the Swagger UI generated using .NET Minimal APIs.

    How to add Swagger in your .NET Minimal APIs

    There are plenty of tutorials about how to add Swagger to your APIs. I wrote some too, where I explained how every configuration impacts what you see in the UI.

    That article was targeting older dotNET versions without Minimal APIs. Now everything’s easier.

    When you create your API project, Visual Studio asks you if you want to add OpenAPI support (aka Swagger). By adding it, you will have everything in place to get started with Swagger.

    Your Minimal APIs will look like this:

    public static void Main(string[] args)
    {
        var builder = WebApplication.CreateBuilder(args);
    
        builder.Services.AddEndpointsApiExplorer();
        builder.Services.AddSwaggerGen();
    
        var app = builder.Build();
    
        app.UseSwagger();
        app.UseSwaggerUI();
    
    
        app.MapGet("/weatherforecast", (HttpContext httpContext) =>
        {
            // return something
        })
        .WithName("GetWeatherForecast")
        .WithOpenApi();
    
        app.Run();
    }
    

    The key parts are builder.Services.AddEndpointsApiExplorer(), builder.Services.AddSwaggerGen(), app.UseSwagger(), app.UseSwaggerUI() and WithOpenApi(). Do you know what those methods do? If so, drop a comment below! 📩

    Now, if we run our application, we will see a UI similar to the one below.

    Basic Swagger UI

    That’s a basic UI. Quite boring, huh? Let’s add some style!

    Create the CSS file for Swagger theming

    All the static assets must be stored within the wwwroot folder. It does not exist by default, so you have to create it manually. Click on the API project, add a new folder, and name it “wwwroot”. Since it’s a special folder, by default Visual Studio will show it with a special icon (it’s a sort of blue world, similar to 🌐).

    Now you can add all the folders and static resources needed.

    I’ve created a single CSS file under /wwwroot/assets/css/xmas-style.css. Of course, name it as you wish – as long as it is within the wwwroot folder, it’s fine.

    My CSS file is quite minimal:

    body {
      background-image: url("../images/snowflakes.webp");
    }
    
    div.topbar {
      background-color: #34a65f !important;
    }
    
    h2,
    h3 {
      color: #f5624d !important;
    }
    
    .opblock-summary-get > button > span.opblock-summary-method {
      background-color: #235e6f !important;
    }
    
    .opblock-summary-post > button > span.opblock-summary-method {
      background-color: #0f8a5f !important;
    }
    
    .opblock-summary-delete > button > span.opblock-summary-method {
      background-color: #cc231e !important;
    }
    

    There are 3 main things to notice:

    1. the element selectors are taken directly from the Swagger UI – you’ll need a bit of reverse-engineering skills: just open the Browser Console and find the elements you want to update;
    2. since the Swagger UI stylesheet already defines most of these rules, you have to add the !important CSS declaration; otherwise, your code won’t affect the UI;
    3. you can add assets from other folders: I’ve added background-image: url("../images/snowflakes.webp"); to the body style. That image is, as you can imagine, under the wwwroot folder we created before.

    Just as a recap, here’s my project structure:

    Static assets files

    Of course, it’s not enough: we have to tell Swagger to take into consideration that file

    How to inject a CSS file in Swagger UI

    This part is quite simple: you have to update the UseSwaggerUI command within the Main method:

    app.UseSwaggerUI(c =>
    +   c.InjectStylesheet("/assets/css/xmas-style.css")
    );
    

    Notice how that path begins: no wwwroot, no ~, no dots. It starts with /assets.

    One last step: we have to tell dotNET to consider static files when building and running the application.
    
    You just have to add UseStaticFiles() after builder.Build():

    var app = builder.Build();
    + app.UseStaticFiles();
    
    app.UseSwagger();
    app.UseSwaggerUI(c =>
        c.InjectStylesheet("/assets/css/xmas-style.css")
    );
    

    Now we can run our APIs and admire our wonderful Xmas-style UI 🎅

    XMAS-style Swagger UI

    Further readings

    This article is part of 2022 .NET Advent, created by Dustin Moris 🐤:

    🔗 dotNET Advent Calendar 2022

    CSS is not the only part you can customize, there’s way more. Here’s an article I wrote about Swagger integration in .NET Core 3 APIs, but it’s still relevant (I hope! 😁)

    🔗 Understanding Swagger integration in .NET Core | Code4IT

    This article first appeared on Code4IT

    Wrapping up

    Theming is often not considered an important part of API development. That’s generally correct: why should I bother adding some fancy colors to APIs that are not expected to have a UI?

    That makes sense if you’re working on private APIs. For public-facing APIs, however, theming is often useful to improve brand recognition.

    You should also consider using theming when deploying APIs to different environments: maybe Blue for Development, Yellow for Staging, and Green for Production. That way, your developers can easily understand which environment they’re exploring.

    Happy coding!
    🐧





    Source link

  • Exploring the Process of Building a Procedural 3D Kitchen Designer with Three.js

    Exploring the Process of Building a Procedural 3D Kitchen Designer with Three.js


    Back in November 2024, I shared a post on X about a tool I was building to help visualize kitchen remodels. The response from the Three.js community was overwhelmingly positive. The demo showed how procedural rendering techniques—often used in games—can be applied to real-world use cases like designing and rendering an entire kitchen in under 60 seconds.

    In this article, I’ll walk through the process and thinking behind building this kind of procedural 3D kitchen design tool using vanilla Three.js and TypeScript—from drawing walls and defining cabinet segments to auto-generating full kitchen layouts. Along the way, I’ll share key technical choices, lessons learned, and ideas for where this could evolve next.

    You can try out an interactive demo of the latest version here: https://kitchen-designer-demo.vercel.app/. (Tip: Press the “/” key to toggle between 2D and 3D views.)

    Designing Room Layouts with Walls

    Example of user drawing a simple room shape using the built-in wall module.

    To initiate our project, we begin with the wall drawing module. At a high level, this is akin to Figma’s pen tool, where the user can add one line segment at a time until a closed—or open-ended—polygon is complete on an infinite 2D canvas. In our build, each line segment represents a single wall as a 2D plane from coordinate A to coordinate B, while the complete polygon outlines the perimeter envelope of a room.

    1. We begin by capturing the [X, Z] coordinates (with Y oriented upwards) of the user’s initial click on the infinite floor plane. This 2D point is obtained via Three.js’s built-in raycaster for intersection detection, establishing Point A.
    2. As the user hovers the cursor over a new spot on the floor, we apply the same intersection logic to determine a temporary Point B. During this movement, a preview line segment appears, connecting the fixed Point A to the dynamic Point B for visual feedback.
    3. Upon the user’s second click to confirm Point B, we append the line segment (defined by Points A and B) to an array of segments. The former Point B instantly becomes the new Point A, allowing us to continue the drawing process with additional line segments.

    Here is a simplified code snippet demonstrating a basic 2D pen-draw tool using Three.js:

    import * as THREE from 'three';
    
    const scene = new THREE.Scene();
    const camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 1000);
    camera.position.set(0, 5, 10); // Position camera above the floor looking down
    camera.lookAt(0, 0, 0);
    
    const renderer = new THREE.WebGLRenderer();
    renderer.setSize(window.innerWidth, window.innerHeight);
    document.body.appendChild(renderer.domElement);
    
    // Create an infinite floor plane for raycasting
    const floorGeometry = new THREE.PlaneGeometry(100, 100);
    const floorMaterial = new THREE.MeshBasicMaterial({ color: 0xcccccc, side: THREE.DoubleSide });
    const floor = new THREE.Mesh(floorGeometry, floorMaterial);
    floor.rotation.x = -Math.PI / 2; // Lay flat on XZ plane
    scene.add(floor);
    
    const raycaster = new THREE.Raycaster();
    const mouse = new THREE.Vector2();
    let points: THREE.Vector3[] = []; // i.e. wall endpoints
    let tempLine: THREE.Line | null = null;
    const walls: THREE.Line[] = [];
    
    function getFloorIntersection(event: MouseEvent): THREE.Vector3 | null {
      mouse.x = (event.clientX / window.innerWidth) * 2 - 1;
      mouse.y = -(event.clientY / window.innerHeight) * 2 + 1;
      raycaster.setFromCamera(mouse, camera);
      const intersects = raycaster.intersectObject(floor);
      if (intersects.length > 0) {
        // Round to simplify coordinates (optional for cleaner drawing)
        const point = intersects[0].point;
        point.x = Math.round(point.x);
        point.z = Math.round(point.z);
        point.y = 0; // Ensure on floor plane
        return point;
      }
      return null;
    }
    
    // Update temporary line preview
    function onMouseMove(event: MouseEvent) {
      const point = getFloorIntersection(event);
      if (point && points.length > 0) {
        // Remove old temp line if exists
        if (tempLine) {
          scene.remove(tempLine);
          tempLine = null;
        }
        // Create new temp line from last point to current hover
        const geometry = new THREE.BufferGeometry().setFromPoints([points[points.length - 1], point]);
        const material = new THREE.LineBasicMaterial({ color: 0x0000ff }); // Blue for temp
        tempLine = new THREE.Line(geometry, material);
        scene.add(tempLine);
      }
    }
    
    // Add a new point and draw permanent wall segment
    function onMouseDown(event: MouseEvent) {
      if (event.button !== 0) return; // Left click only
      const point = getFloorIntersection(event);
      if (point) {
        points.push(point);
        if (points.length > 1) {
          // Draw permanent wall line from previous to current point
          const geometry = new THREE.BufferGeometry().setFromPoints([points[points.length - 2], points[points.length - 1]]);
          const material = new THREE.LineBasicMaterial({ color: 0xff0000 }); // Red for permanent
          const wall = new THREE.Line(geometry, material);
          scene.add(wall);
          walls.push(wall);
        }
        // Remove temp line after click
        if (tempLine) {
          scene.remove(tempLine);
          tempLine = null;
        }
      }
    }
    
    // Add event listeners
    window.addEventListener('mousemove', onMouseMove);
    window.addEventListener('mousedown', onMouseDown);
    
    // Animation loop
    function animate() {
      requestAnimationFrame(animate);
      renderer.render(scene, camera);
    }
    animate();

    The above code snippet is a very basic 2D pen tool, and yet this information is enough to generate an entire room instance. For reference: not only does each line segment represent a wall (2D plane), but the set of accumulated points can also be used to auto-generate the room’s floor mesh, and likewise the ceiling mesh (the inverse of the floor mesh).
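Since the accumulated points double as the floor polygon, a small shoelace-formula helper (my own sketch, not part of the original tool) can both compute the floor area and detect the winding direction, which matters later when orienting the wall planes inward:

```typescript
// Shoelace formula over the accumulated [X, Z] wall points.
// |result| is the floor area; the sign encodes the winding direction,
// which can be used to decide whether wall planes face inward.
type Point2D = { x: number; z: number };

function signedArea(points: Point2D[]): number {
  let sum = 0;
  for (let i = 0; i < points.length; i++) {
    const a = points[i];
    const b = points[(i + 1) % points.length]; // wraps around, closing the loop
    sum += a.x * b.z - b.x * a.z;
  }
  return sum / 2;
}

// A 4x4 room: area is 16; reversing the draw order flips the sign.
const room: Point2D[] = [
  { x: 0, z: 0 },
  { x: 4, z: 0 },
  { x: 4, z: 4 },
  { x: 0, z: 4 },
];
console.log(signedArea(room)); // 16
console.log(signedArea([...room].reverse())); // -16
```

From here, one way (not shown) to build the actual floor mesh is to feed these points into THREE.Shape and triangulate it with THREE.ShapeGeometry.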

    In order to view the planes representing the walls in 3D, one can transform each THREE.Line into a custom Wall class object, which contains both a line (for orthogonal 2D “floor plan” view) and a 2D inward-facing plane (for perspective 3D “room” view). To build this class:

    class Wall extends THREE.Group {
      constructor(length: number, height: number = 96, thickness: number = 4) {
        super();
    
        // 2D line for top view, along the x-axis
        const lineGeometry = new THREE.BufferGeometry().setFromPoints([
          new THREE.Vector3(0, 0, 0),
          new THREE.Vector3(length, 0, 0),
        ]);
        const lineMaterial = new THREE.LineBasicMaterial({ color: 0xff0000 });
        const line = new THREE.Line(lineGeometry, lineMaterial);
        this.add(line);
    
        // 3D wall as a box for thickness
        const wallGeometry = new THREE.BoxGeometry(length, height, thickness);
        const wallMaterial = new THREE.MeshBasicMaterial({ color: 0xaaaaaa, side: THREE.DoubleSide });
        const wall = new THREE.Mesh(wallGeometry, wallMaterial);
        wall.position.set(length / 2, height / 2, 0);
        this.add(wall);
      }
    }

    We can now update the wall draw module to utilize this newly created Wall object:

    // Update our variables
    let tempWall: Wall | null = null;
    const walls: Wall[] = [];
    
    // Replace line creation in onMouseDown with
    if (points.length > 1) {
      const start = points[points.length - 2];
      const end = points[points.length - 1];
      const direction = end.clone().sub(start);
      const length = direction.length();
      const wall = new Wall(length);
      wall.position.copy(start);
      wall.rotation.y = Math.atan2(direction.z, direction.x); // Align along direction (assuming CCW for inward facing)
      scene.add(wall);
      walls.push(wall);
    }
    

    Upon adding the floor and ceiling meshes, we can further transform our wall module into a room generation module. To recap what we have just created: by adding walls one by one, we have given the user the ability to create full rooms with walls, floors, and ceilings—all of which can be adjusted later in the scene.

    User dragging out the wall in 3D perspective camera-view.

    Generating Cabinets with Procedural Modeling

    Our cabinet-related logic consists of countertops, base cabinets, and wall cabinets.

    Rather than taking several minutes to add the cabinets on a case-by-case basis—for example, like with IKEA’s 3D kitchen builder—it’s possible to add all the cabinets at once via a single user action. One method to employ here is to allow the user to draw high-level cabinet line segments, in the same manner as the wall draw module.

    In this module, each cabinet segment will transform into a linear row of base and wall cabinets, along with a parametrically generated countertop mesh on top of the base cabinets. As the user creates the segments, we can automatically populate each one with pre-made 3D cabinet meshes created in modeling software like Blender. Each standard cabinet’s width, depth, and height will be fixed, while the last cabinet’s width can be dynamic to fill the remaining space. We use a cabinet filler piece mesh here – a regular plank, with its scale-X parameter stretched or compressed as needed.

    Creating the Cabinet Line Segments

    User can make a half-peninsula shape by dragging the cabinetry line segments alongside the walls, then in free-space.

    Here we will construct a dedicated cabinet module, with the aforementioned cabinet line segment logic. This process is very similar to the wall drawing mechanism, where users can draw straight lines on the floor plane using mouse clicks to define both start and end points. Unlike walls, which can be represented by simple thin lines, cabinet line segments need to account for a standard depth of 24 inches to represent the base cabinets’ footprint. These segments do not require closing-polygon logic, as they can be standalone rows or L-shapes, as is common in most kitchen layouts.

    We can further improve the user experience by incorporating snapping functionality, where the endpoints of a cabinet line segment automatically align to nearby wall endpoints or wall intersections, if within a certain threshold (e.g., 4 inches). This ensures cabinets fit snugly against walls without requiring manual precision. For simplicity, we’ll outline the snapping logic in code but focus on the core drawing functionality.

    We can start by defining the CabinetSegment class. Like the walls, this should be its own class, as we will later add the auto-populating 3D cabinet models.

    class CabinetSegment extends THREE.Group {
      public length: number;
    
      constructor(length: number, height: number = 96, depth: number = 24, color: number = 0xff0000) {
        super();
        this.length = length;
        const geometry = new THREE.BoxGeometry(length, height, depth);
        const material = new THREE.MeshBasicMaterial({ color, wireframe: true });
        const box = new THREE.Mesh(geometry, material);
        box.position.set(length / 2, height / 2, depth / 2); // Shift so depth spans 0 to depth (inward)
        this.add(box);
      }
    }

    Once we have the cabinet segment, we can use it in a manner very similar to the wall line segments:

    let cabinetPoints: THREE.Vector3[] = [];
    let tempCabinet: CabinetSegment | null = null;
    const cabinetSegments: CabinetSegment[] = [];
    const CABINET_DEPTH = 24; // everything in inches
    const CABINET_SEGMENT_HEIGHT = 96; // i.e. both wall & base cabinets -> group should extend to ceiling
    const SNAPPING_DISTANCE = 4;
    
    function getSnappedPoint(point: THREE.Vector3): THREE.Vector3 {
      // Simple snapping: check against existing wall points (wallPoints array from wall module)
      for (const wallPoint of wallPoints) {
        if (point.distanceTo(wallPoint) < SNAPPING_DISTANCE) return wallPoint;
      }
      return point;
    }
    
    // Update temporary cabinet preview
    function onMouseMoveCabinet(event: MouseEvent) {
      const point = getFloorIntersection(event);
      if (point && cabinetPoints.length > 0) {
        const snappedPoint = getSnappedPoint(point);
        if (tempCabinet) {
          scene.remove(tempCabinet);
          tempCabinet = null;
        }
        const start = cabinetPoints[cabinetPoints.length - 1];
        const direction = snappedPoint.clone().sub(start);
        const length = direction.length();
        if (length > 0) {
          tempCabinet = new CabinetSegment(length, CABINET_SEGMENT_HEIGHT, CABINET_DEPTH, 0x0000ff); // Blue for temp
          tempCabinet.position.copy(start);
          tempCabinet.rotation.y = Math.atan2(direction.z, direction.x);
          scene.add(tempCabinet);
        }
      }
    }
    
    // Add a new point and draw permanent cabinet segment
    function onMouseDownCabinet(event: MouseEvent) {
      if (event.button !== 0) return;
      const point = getFloorIntersection(event);
      if (point) {
        const snappedPoint = getSnappedPoint(point);
        cabinetPoints.push(snappedPoint);
        if (cabinetPoints.length > 1) {
          const start = cabinetPoints[cabinetPoints.length - 2];
          const end = cabinetPoints[cabinetPoints.length - 1];
          const direction = end.clone().sub(start);
          const length = direction.length();
          if (length > 0) {
            const segment = new CabinetSegment(length, CABINET_SEGMENT_HEIGHT, CABINET_DEPTH, 0xff0000); // Red for permanent
            segment.position.copy(start);
            segment.rotation.y = Math.atan2(direction.z, direction.x);
            scene.add(segment);
            cabinetSegments.push(segment);
          }
        }
        if (tempCabinet) {
          scene.remove(tempCabinet);
          tempCabinet = null;
        }
      }
    }
    
    // Add separate event listeners for cabinet mode (e.g., toggled via UI button)
    window.addEventListener('mousemove', onMouseMoveCabinet);
    window.addEventListener('mousedown', onMouseDownCabinet);

    Auto-Populating the Line Segments with Live Cabinet Models

    Here we fill 2 line-segments with 3D cabinet models (base & wall), and countertop meshes.

    Once the cabinet line segments are defined, we can procedurally populate them with detailed components. This involves dividing each segment vertically into three layers: base cabinets at the bottom, countertops in the middle, and wall cabinets above. For the base and wall cabinets, we’ll use an optimization function to divide the segment’s length into standard widths (preferring 30-inch cabinets), with any remainder filled using the filler piece mentioned above. Countertops are even simpler—they form a single continuous slab stretching the full length of the segment.

    The base cabinets are set to 24 inches deep and 34.5 inches high. Countertops add 1.5 inches in height and extend to 25.5 inches deep (including a 1.5-inch overhang). Wall cabinets start at 54 inches high (18 inches above the countertop), measure 12 inches deep, and are 30 inches tall. After generating these placeholder bounding boxes, we can replace them with preloaded 3D models from Blender using a loading function (e.g., via GLTFLoader).

    // Constants in inches
    const BASE_HEIGHT = 34.5;
    const COUNTER_HEIGHT = 1.5;
    const WALL_HEIGHT = 30;
    const WALL_START_Y = 54;
    const BASE_DEPTH = 24;
    const COUNTER_DEPTH = 25.5;
    const WALL_DEPTH = 12;
    
    const DEFAULT_MODEL_WIDTH = 30;
    
    // Filler-piece information
    const FILLER_PIECE_FALLBACK_PATH = 'models/filler_piece.glb'
    const FILLER_PIECE_WIDTH = 3;
    const FILLER_PIECE_HEIGHT = 12;
    const FILLER_PIECE_DEPTH = 24;

    To handle individual cabinets, we’ll create a simple Cabinet class that manages the placeholder and model loading.

    import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';
    
    const loader = new GLTFLoader();
    
    class Cabinet extends THREE.Group {
      constructor(width: number, height: number, depth: number, modelPath: string, color: number) {
        super();
    
        // Placeholder box
        const geometry = new THREE.BoxGeometry(width, height, depth);
        const material = new THREE.MeshBasicMaterial({ color });
        const placeholder = new THREE.Mesh(geometry, material);
        this.add(placeholder);
    
    
        // Load and replace with model async
    
        // Case: Non-standard width -> use filler piece
        if (width < DEFAULT_MODEL_WIDTH) {
          loader.load(FILLER_PIECE_FALLBACK_PATH, (gltf) => {
            const model = gltf.scene;
            model.scale.set(
              width / FILLER_PIECE_WIDTH,
              height / FILLER_PIECE_HEIGHT,
              depth / FILLER_PIECE_DEPTH,
            );
            this.add(model);
            this.remove(placeholder);
          });
          return; // don't also load the standard-width model below
        }
    
        loader.load(modelPath, (gltf) => {
          const model = gltf.scene;
          model.scale.set(width / DEFAULT_MODEL_WIDTH, 1, 1); // Scale width
          this.add(model);
          this.remove(placeholder);
        });
      }
    }

    Then, we can add a populate method to the existing CabinetSegment class:

    function splitIntoCabinets(width: number): number[] {
      const cabinets = [];
      // Preferred width
      while (width >= DEFAULT_MODEL_WIDTH) {
        cabinets.push(DEFAULT_MODEL_WIDTH);
        width -= DEFAULT_MODEL_WIDTH;
      }
      if (width > 0) {
        cabinets.push(width); // Remainder width: filled with a stretched filler piece
      }
      return cabinets;
    }
    
    class CabinetSegment extends THREE.Group {
      // ... (existing constructor and properties)
    
      populate() {
        // Remove placeholder line and box
        while (this.children.length > 0) {
          this.remove(this.children[0]);
        }
    
        let offset = 0;
        const widths = splitIntoCabinets(this.length);
    
        // Base cabinets
        widths.forEach((width) => {
          const baseCab = new Cabinet(width, BASE_HEIGHT, BASE_DEPTH, 'models/base_cabinet.glb', 0x8b4513);
          baseCab.position.set(offset + width / 2, BASE_HEIGHT / 2, BASE_DEPTH / 2);
          this.add(baseCab);
          offset += width;
        });
    
        // Countertop (single slab, no model)
        const counterGeometry = new THREE.BoxGeometry(this.length, COUNTER_HEIGHT, COUNTER_DEPTH);
        const counterMaterial = new THREE.MeshBasicMaterial({ color: 0xa9a9a9 });
        const counter = new THREE.Mesh(counterGeometry, counterMaterial);
        counter.position.set(this.length / 2, BASE_HEIGHT + COUNTER_HEIGHT / 2, COUNTER_DEPTH / 2);
        this.add(counter);
    
        // Wall cabinets
        offset = 0;
        widths.forEach((width) => {
          const wallCab = new Cabinet(width, WALL_HEIGHT, WALL_DEPTH, 'models/wall_cabinet.glb', 0x4b0082);
          wallCab.position.set(offset + width / 2, WALL_START_Y + WALL_HEIGHT / 2, WALL_DEPTH / 2);
          this.add(wallCab);
          offset += width;
        });
      }
    }
    
    // Call for each cabinetSegment after drawing
    cabinetSegments.forEach((segment) => segment.populate());
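As a quick sanity check of the splitting logic, here is splitIntoCabinets in isolation (restated so the snippet runs standalone): a 75-inch run becomes two 30-inch cabinets plus a 15-inch filler slot.

```typescript
const DEFAULT_MODEL_WIDTH = 30; // preferred cabinet width, in inches

// Same greedy split as in the populate() logic above.
function splitIntoCabinets(width: number): number[] {
  const cabinets: number[] = [];
  while (width >= DEFAULT_MODEL_WIDTH) {
    cabinets.push(DEFAULT_MODEL_WIDTH);
    width -= DEFAULT_MODEL_WIDTH;
  }
  if (width > 0) {
    cabinets.push(width); // remainder becomes a filler-width slot
  }
  return cabinets;
}

console.log(splitIntoCabinets(75)); // [30, 30, 15]
console.log(splitIntoCabinets(60)); // [30, 30] – exact multiple, no filler
console.log(splitIntoCabinets(20)); // [20] – narrower than one standard cabinet
```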

    Further Improvements & Optimizations

    We can further improve the scene with appliances, varying-height cabinets, crown molding, etc.

    At this point, we should have the foundational elements of room and cabinet creation logic fully in place. In order to take this project from a rudimentary segment-drawing app into the practical realm—along with dynamic cabinets, multiple realistic material options, and varying real appliance meshes—we can further enhance the user experience through several targeted refinements:

    • We can implement a detection mechanism to determine if a cabinet line segment is in contact with a wall line segment.
      • For cabinet rows that run parallel to walls, we can automatically incorporate a backsplash in the space between the wall cabinets and the countertop surface.
      • For cabinet segments not adjacent to walls, we can remove the upper wall cabinets and extend the countertop by an additional 15 inches, aligning with standard practices for kitchen islands or peninsulas.
    • We can introduce drag-and-drop functionality for appliances, each with predefined widths, allowing users to position them along the line segment. This integration will instruct our cabinet-splitting algorithm to exclude those areas from dynamic cabinet generation.
    • Additionally, we can give users more flexibility by enabling the swapping of one appliance with another, applying different textures to our 3D models, and adjusting default dimensions—such as wall cabinet depth or countertop overhang—to suit specific preferences.
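The wall-contact detection from the first bullet can be sketched as a plain point-to-segment distance test. The helper names here are my own (hypothetical, not from the original code), reusing the same 4-inch tolerance as the snapping logic:

```typescript
type Pt = { x: number; z: number };

// Distance from point p to the segment [a, b], clamped to the segment's ends.
function distPointToSegment(p: Pt, a: Pt, b: Pt): number {
  const abx = b.x - a.x, abz = b.z - a.z;
  const apx = p.x - a.x, apz = p.z - a.z;
  const lenSq = abx * abx + abz * abz;
  // Degenerate (zero-length) segment: fall back to plain point distance
  const t = lenSq === 0 ? 0 : Math.max(0, Math.min(1, (apx * abx + apz * abz) / lenSq));
  const cx = a.x + t * abx, cz = a.z + t * abz; // closest point on the segment
  return Math.hypot(p.x - cx, p.z - cz);
}

// A cabinet run counts as "against a wall" when both of its endpoints
// lie within the tolerance of the wall segment.
function touchesWall(cabStart: Pt, cabEnd: Pt, wallA: Pt, wallB: Pt, tol = 4): boolean {
  return (
    distPointToSegment(cabStart, wallA, wallB) <= tol &&
    distPointToSegment(cabEnd, wallA, wallB) <= tol
  );
}
```

Segments that pass this check would get a backsplash; segments that fail it would lose their wall cabinets and gain the extended island countertop.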

    All these core components lead us to a comprehensive, interactive application that enables the rapid rendering of a complete kitchen: cabinets, countertops, and appliances, in a fully interactive, user-driven experience.

    The aim of this project is to demonstrate that complex 3D tasks can be distilled down to simple user actions. It is fully possible to take the high-dimensional complexity of 3D tooling—with seemingly limitless controls—and encode these complexities into low-dimensional, easily adjustable parameters. Whether the developer chooses to expose these parameters to the user or an LLM, the end result is that historically complicated 3D processes can become simple, and thus the entire contents of a 3D scene can be fully transformed with only a few parameters.

    If you find this type of development interesting, have any great ideas, or would love to contribute to the evolution of this product, I strongly welcome you to reach out to me via email. I firmly believe that only recently has it become possible to build home design software that is so wickedly fast and intuitive that any person—regardless of architectural merit—will be able to design their own single-family home in less than 5 minutes via a web app, while fully adhering to local zoning, architectural, and design requirements. All the infrastructure necessary to accomplish this already exists; all it takes is a team of crazy, ambitious developers looking to change the standard of architectural home design.





    Source link

  • 2 ways to check communication with MongoDB | Code4IT

    2 ways to check communication with MongoDB | Code4IT


    Health Checks are fundamental to keep track of the health of a system. How can we check if MongoDB is healthy?

    Table of Contents

    Just a second! 🫷
    If you are here, it means that you are a software developer.
    So, you know that storage, networking, and domain management have a cost .

    If you want to support this blog, please ensure that you have disabled the adblocker for this site.
    I configured Google AdSense to show as few ADS as possible – I don’t want to bother you with lots of ads, but I still need to add some to pay for the resources for my site.

    Thank you for your understanding.
    Davide

    In any complex system, you have to deal with external dependencies.

    More often than not, if one of the external systems (a database, another API, or an authentication provider) is down, the whole system might be affected.

    In this article, we’re going to learn what Health Checks are, how to create custom ones, and how to check whether a MongoDB instance can be reached or not.

    What are Health Checks?

    A Health Check is a special type of HTTP endpoint that allows you to understand the status of the system – well, it’s a check on the health of the whole system, including external dependencies.

    You can use it to understand whether the application itself and all of its dependencies are healthy and responding in a reasonable amount of time.

    Those endpoints are also useful for humans, but they’re even more useful for tools that monitor the application and can automatically fix some issues if they occur – for example, by restarting the application if it’s in a degraded status.

    How to add Health Checks in dotNET

    Lucky for us, .NET already comes with Health Check capabilities, so we can just follow the existing standard without reinventing the wheel.

    For the sake of this article, I created a simple .NET API application.

    Head to the Program class – or, in general, wherever you configure the application – and add this line:

    builder.Services.AddHealthChecks();
    

    and then, after var app = builder.Build();, you must add the following line to have the health checks displayed under the /healthz path.

    app.MapHealthChecks("/healthz");
    

    To sum up, the minimal structure should be:

    var builder = WebApplication.CreateBuilder(args);
    
    builder.Services.AddControllers();
    
    builder.Services.AddHealthChecks();
    
    var app = builder.Build();
    
    app.MapHealthChecks("/healthz");
    
    app.MapControllers();
    
    app.Run();
    

    So, if you run the application and navigate to /healthz, you’ll see an almost empty page with two characteristics:

    • the status code is 200;
    • the only printed result is Healthy

    Clearly, that’s not enough for us.

    How to create a custom Health Check class in .NET

    Every project has its own dependencies and requirements. We should be able to build custom Health Checks and add them to our endpoint.

    It’s just a matter of creating a new class that implements IHealthCheck, an interface that lives under the Microsoft.Extensions.Diagnostics.HealthChecks namespace.

    Then, you have to implement the method that tells us whether the system under test is healthy or degraded:

    Task<HealthCheckResult> CheckHealthAsync(HealthCheckContext context, CancellationToken cancellationToken = default);
    

    The method returns a HealthCheckResult, a struct that can have one of these values:

    • Healthy: everything is OK;
    • Degraded: the application is running, but it’s taking too long to respond;
    • Unhealthy: the application is offline, or an error occurred while performing the check.

    So, for example, we build a custom Health Check class such as:

    public class MyCustomHealthCheck : IHealthCheck
    {
        private readonly IExternalDependency _dependency;
    
        public MyCustomHealthCheck(IExternalDependency dependency)
        {
            _dependency = dependency;
        }
    
        public Task<HealthCheckResult> CheckHealthAsync(
            HealthCheckContext context, CancellationToken cancellationToken = default)
        {
            var isHealthy = _dependency.IsHealthy();
    
            if (isHealthy)
            {
                return Task.FromResult(HealthCheckResult.Healthy());
            }
            return Task.FromResult(HealthCheckResult.Unhealthy());
        }
    }
    

    And, finally, add it to the Startup class:

    builder.Services.AddHealthChecks()
        .AddCheck<MyCustomHealthCheck>("A custom name");
    

    Now, you can create a stub class that implements IExternalDependency to toy with the different result types. In fact, if we create and inject a stub class like this:

    public class StubExternalDependency : IExternalDependency
    {
        public bool IsHealthy() => false;
    }
    

    and we run the application, we can see that the final result of the application is Unhealthy.

    A question for you: why should we specify a name to health checks, such as “A custom name”? Drop a comment below 📩

    Adding a custom Health Check Provider for MongoDB

    Now we can create a custom Health Check for MongoDB.

    Of course, we will need a library to access Mongo: simply install the MongoDB.Driver NuGet package – we’ve already used this library in a previous article.

    Then, you can create a class like this:

    public class MongoCustomHealthCheck : IHealthCheck
    {
        private readonly IConfiguration _configurations;
        private readonly ILogger<MongoCustomHealthCheck> _logger;
    
        public MongoCustomHealthCheck(IConfiguration configurations, ILogger<MongoCustomHealthCheck> logger)
        {
            _configurations = configurations;
            _logger = logger;
        }
    
        public async Task<HealthCheckResult> CheckHealthAsync(
            HealthCheckContext context, CancellationToken cancellationToken = default)
        {
            try
            {
                await IsMongoHealthy();
                return HealthCheckResult.Healthy();
            }
            catch (Exception ex)
            {
                _logger.LogError(ex, "MongoDB health check failed");
                return HealthCheckResult.Unhealthy();
            }
        }
    
        private async Task IsMongoHealthy()
        {
            string connectionString = _configurations.GetConnectionString("MongoDB");
            MongoUrl url = new MongoUrl(connectionString);
    
            IMongoDatabase dbInstance = new MongoClient(url)
                .GetDatabase(url.DatabaseName)
                .WithReadPreference(new ReadPreference(ReadPreferenceMode.Secondary));
    
            _ = await dbInstance.RunCommandAsync<BsonDocument>(new BsonDocument { { "ping", 1 } });
        }
    }
    

    As you can see, it’s nothing more than a generic class with some services injected into the constructor.

    The key part is the IsMongoHealthy method: it’s here that we access the DB instance. Let’s have a closer look at it.

    How to Ping a MongoDB instance

    Here’s again the IsMongoHealthy method.

    string connectionString = _configurations.GetConnectionString("MongoDB");
    MongoUrl url = new MongoUrl(connectionString);
    
    IMongoDatabase dbInstance = new MongoClient(url)
        .GetDatabase(url.DatabaseName)
        .WithReadPreference(new ReadPreference(ReadPreferenceMode.Secondary));
    
    _ = await dbInstance.RunCommandAsync<BsonDocument>(new BsonDocument { { "ping", 1 } });
    

    Clearly, we create a reference to a specific DB instance: new MongoClient(url).GetDatabase(url.DatabaseName). Notice that we’re requiring access to the Secondary node, to avoid performing operations on the Primary node.

    Then, we send the PING command: dbInstance.RunCommandAsync<BsonDocument>(new BsonDocument { { "ping", 1 } }).

    Now what? If everything is up and running, the ping command returns an object with the ok field set to 1 (something like { "ok" : 1 }); if the command cannot be executed, it throws a System.TimeoutException.

    MongoDB Health Checks with AspNetCore.Diagnostics.HealthChecks

    If we don’t want to write such things on our own, we can rely on pre-existing libraries.

    AspNetCore.Diagnostics.HealthChecks is a library you can find on GitHub that automatically handles several types of Health Checks for .NET applications.

    Note that this library is NOT maintained or supported by Microsoft – but it’s featured in the official .NET documentation.

    This library exposes several NuGet packages for dozens of different dependencies you might want to consider in your Health Checks. For example, we have Azure.IoTHub, CosmosDb, Elasticsearch, Gremlin, SendGrid, and many more.

    Obviously, we’re gonna use the one for MongoDB. It’s quite easy.

    First, you have to install the AspNetCore.HealthChecks.MongoDb NuGet package.

    NuGet package for AspNetCore.HealthChecks.MongoDb

    Then, you have to just add a line of code to the initial setup:

    builder.Services.AddHealthChecks()
        .AddMongoDb(mongodbConnectionString: builder.Configuration.GetConnectionString("MongoDB"));
    

    That’s it! Neat and easy! 😎
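One note for completeness: the checks you register are only reachable once you map a health endpoint. A minimal Program.cs sketch looks like this (the /healthz route is just a common convention, pick any path you like):

```csharp
var builder = WebApplication.CreateBuilder(args);

builder.Services.AddHealthChecks()
    .AddMongoDb(mongodbConnectionString: builder.Configuration.GetConnectionString("MongoDB"));

var app = builder.Build();

// Returns 200 with "Healthy" when all checks pass, 503 with "Unhealthy" otherwise
app.MapHealthChecks("/healthz");

app.Run();
```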

    Why do we even want a custom provider?

    Ok, if we can just add a line of code instead of creating a brand-new class, why should we bother creating the whole custom class?

    There are some reasons to create a custom provider:

    1. You want more control over the DB access: for example, you want to ping only Secondary nodes, as we did before;
    2. You don’t just want to check if the DB is up, but also the performance of doing some specific operations, such as retrieving all the documents from a specified collection.

    But, yes, in general, you can simply use the NuGet package we used in the previous section, and you’re good to go.
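As a sketch of the second scenario, a custom IHealthCheck could time a concrete query and report Degraded when it gets too slow. The class name, the "Items" collection, and the 500 ms threshold below are all assumptions for illustration:

```csharp
public class SlowQueryHealthCheck : IHealthCheck
{
    private readonly IMongoDatabase _database;

    public SlowQueryHealthCheck(IMongoDatabase database) => _database = database;

    public async Task<HealthCheckResult> CheckHealthAsync(
        HealthCheckContext context, CancellationToken cancellationToken = default)
    {
        var stopwatch = System.Diagnostics.Stopwatch.StartNew();
        try
        {
            // a representative operation: read one document from a hypothetical collection
            var collection = _database.GetCollection<BsonDocument>("Items");
            _ = await collection.Find(FilterDefinition<BsonDocument>.Empty)
                                .Limit(1)
                                .FirstOrDefaultAsync(cancellationToken);
        }
        catch (Exception ex)
        {
            return HealthCheckResult.Unhealthy("Cannot query MongoDB", ex);
        }

        return stopwatch.ElapsedMilliseconds > 500
            ? HealthCheckResult.Degraded($"Query took {stopwatch.ElapsedMilliseconds} ms")
            : HealthCheckResult.Healthy();
    }
}
```

You would then register it with something like builder.Services.AddHealthChecks().AddCheck&lt;SlowQueryHealthCheck&gt;("mongo-query").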

    Further readings

    As usual, the best way to learn more about a topic is by reading the official documentation:

    🔗 Health checks in ASP.NET Core | Microsoft Docs

    How can you use MongoDB locally? Well, easy: with Docker!

    🔗 First steps with Docker: download and run MongoDB locally | Code4IT

    As we saw, we can perform PING operation on a MongoDB instance.

    🔗 Ping command | MongoDB

    This article first appeared on Code4IT 🐧

    Finally, here’s the link to the GitHub repo with the list of Health Checks:

    🔗 AspNetCore.Diagnostics.HealthChecks | GitHub

    and, if you want to sneak peek at the MongoDB implementation, you can read the code here:

    🔗 MongoDbHealthCheck.cs | GitHub

    Wrapping up

    In this article, we’ve learned two ways to implement Health Checks for a MongoDB connection.

    You can either use a pre-existing NuGet package, or you can write a custom one on your own. It all depends on your use cases.

    I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn, if you want! 🤜🤛

    Happy coding!

    🐧






  • How to download an online file and store it on file system with C# | Code4IT

    How to download an online file and store it on file system with C# | Code4IT


    Downloading a file from a remote resource seems an easy task: download the byte stream and copy it to a local file. Beware of edge cases!


    Downloading files from an online source and saving them on the local machine seems an easy task.

    And guess what? It is!

    In this article, we will learn how to download an online file, perform some operations on it – such as checking its file extension – and store it in a local folder. We will also learn how to deal with edge cases: what if the file does not exist? Can we overwrite existing files?

    How to download a file stream from an online resource using HttpClient

    Ok, this is easy: if you have the file URL, you can just download it using HttpClient.

    HttpClient httpClient = new HttpClient();
    Stream fileStream = await httpClient.GetStreamAsync(fileUrl);
    

    Using HttpClient can cause some trouble, especially when you have a huge computational load. As a matter of fact, using HttpClientFactory is preferred, as we’ve already explained in a previous article.
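If you want to follow that advice, a sketch of the factory-based approach looks like this (the FileDownloader class name is hypothetical; it assumes you call builder.Services.AddHttpClient() at startup):

```csharp
public class FileDownloader
{
    private readonly IHttpClientFactory _httpClientFactory;

    public FileDownloader(IHttpClientFactory httpClientFactory)
        => _httpClientFactory = httpClientFactory;

    public async Task<Stream> GetFileStream(string fileUrl)
    {
        // reuses pooled handlers instead of exhausting sockets with new HttpClient()
        HttpClient httpClient = _httpClientFactory.CreateClient();
        return await httpClient.GetStreamAsync(fileUrl);
    }
}
```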

    But, ok, it looks easy – way too easy! There are two more parts to consider.

    How to handle errors while downloading a stream of data

    You know, shit happens!

    There are at least 2 cases that stop you from downloading a file: the file does not exist or the file requires authentication to be accessed.

    In both cases, an HttpRequestException exception is thrown, with the following stack trace:

    at System.Net.Http.HttpResponseMessage.EnsureSuccessStatusCode()
    at System.Net.Http.HttpClient.GetStreamAsyncCore(HttpRequestMessage request, CancellationToken cancellationToken)
    

    As you can see, we are implicitly calling EnsureSuccessStatusCode while getting the stream of data.

    You can tell the consumer that we were not able to download the content in two ways: throw a custom exception or return Stream.Null. We will use Stream.Null for the sake of this article.

    Note: always throw custom exceptions and add context to them: this way, you’ll add more useful info to consumers and logs, and you can hide implementation details.
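For example, a context-rich custom exception could look like this (the FileDownloadException name is hypothetical, not part of the article's code):

```csharp
public class FileDownloadException : Exception
{
    public string FileUrl { get; }

    public FileDownloadException(string fileUrl, Exception innerException)
        : base($"Unable to download the file at '{fileUrl}'", innerException)
    {
        FileUrl = fileUrl;
    }
}
```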

    So, let me refactor the part that downloads the file stream and put it in a standalone method:

    async Task<Stream> GetFileStream(string fileUrl)
    {
        HttpClient httpClient = new HttpClient();
        try
        {
            Stream fileStream = await httpClient.GetStreamAsync(fileUrl);
            return fileStream;
        }
        catch (Exception)
        {
            // could not download the content (e.g. 404 or authentication required): signal it with Stream.Null
            return Stream.Null;
        }
    }
    

    so that we can use Stream.Null to check for the existence of the stream.

    How to store a file in your local machine

    Now we have our stream of data. We need to store it somewhere.

    We will need to copy our input stream to a FileStream object, placed within a using block.

    using (FileStream outputFileStream = new FileStream(path, FileMode.Create))
    {
        await fileStream.CopyToAsync(outputFileStream);
    }
    

    Possible errors and considerations

    When creating the FileStream instance, we have to pass the constructor both the full destination path, including the file name, and FileMode.Create, which tells the stream which types of operations should be supported.

    FileMode is an enum coming from the System.IO namespace, and has different values. Each value fits best for some use cases.

    public enum FileMode
    {
        CreateNew = 1,
        Create,
        Open,
        OpenOrCreate,
        Truncate,
        Append
    }
    

    Again, there are some edge cases that we have to consider:

    • the destination folder does not exist: you will get a DirectoryNotFoundException. You can easily fix it by calling Directory.CreateDirectory to generate the whole hierarchy of folders defined in the path;
    • the destination file already exists: depending on the value of FileMode, you will see different behavior. FileMode.Create overwrites the file, while FileMode.CreateNew throws an IOException if the file already exists.

    Full Example

    It’s time to put the pieces together:

    async Task DownloadAndSave(string sourceFile, string destinationFolder, string destinationFileName)
    {
        Stream fileStream = await GetFileStream(sourceFile);
    
        if (fileStream != Stream.Null)
        {
            await SaveStream(fileStream, destinationFolder, destinationFileName);
        }
    }
    
    async Task<Stream> GetFileStream(string fileUrl)
    {
        HttpClient httpClient = new HttpClient();
        try
        {
            Stream fileStream = await httpClient.GetStreamAsync(fileUrl);
            return fileStream;
        }
        catch (Exception)
        {
            // could not download the content: signal it with Stream.Null
            return Stream.Null;
        }
    }
    
    async Task SaveStream(Stream fileStream, string destinationFolder, string destinationFileName)
    {
        if (!Directory.Exists(destinationFolder))
            Directory.CreateDirectory(destinationFolder);
    
        string path = Path.Combine(destinationFolder, destinationFileName);
    
        using (FileStream outputFileStream = new FileStream(path, FileMode.CreateNew))
        {
            await fileStream.CopyToAsync(outputFileStream);
        }
    }
    

    Bonus tips: how to deal with file names and extensions

    You have the file URL, and you want to get its extension and its plain file name.

    You can use some methods from the Path class:

    string image = "https://website.com/csharptips/format-interpolated-strings/featuredimage.png";
    Path.GetExtension(image); // .png
    Path.GetFileNameWithoutExtension(image); // featuredimage
    

    But not every image has a file extension. For example, Twitter cover images have this format: https://pbs.twimg.com/profile_banners/1164441929679065088/1668758793/1080x360

    string image = "https://pbs.twimg.com/profile_banners/1164441929679065088/1668758793/1080x360";
    Path.GetExtension(image); // [empty string]
    Path.GetFileNameWithoutExtension(image); // 1080x360
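In that case, a pragmatic workaround is to fall back to a default extension. This is only a sketch (the GetFileName helper and the caller-supplied default are assumptions, not part of the original code):

```csharp
string GetFileName(string fileUrl, string defaultExtension)
{
    string name = Path.GetFileNameWithoutExtension(fileUrl);
    string extension = Path.GetExtension(fileUrl);

    // URLs like .../1080x360 carry no extension: apply the default instead
    if (string.IsNullOrEmpty(extension))
        extension = defaultExtension;

    return name + extension;
}
```

Calling GetFileName with the Twitter banner URL above and ".png" yields "1080x360.png".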
    

    Further readings

    As I said, you should not instantiate a new HttpClient() every time. You should use HttpClientFactory instead.

    If you want to know more details, I’ve got an article for you:

    🔗 C# Tip: use IHttpClientFactory to generate HttpClient instances | Code4IT

    This article first appeared on Code4IT 🐧

    Wrapping up

    This was a quick article, quite easy to understand – I hope!

    My main point here is that not everything is always as easy as it seems – you should always consider edge cases!

    I hope you enjoyed this article! Let’s keep in touch on Twitter or on LinkedIn, if you want! 🤜🤛

    Happy coding!

    🐧




