
  • like Mermaid, but better. Syntax, installation, and practical usage tips | Code4IT


    D2 is an open-source tool to design architectural layouts using a declarative syntax. It’s a textual format, which can also be stored under source control. Let’s see how it works, how you can install it, and some practical usage tips.


    Just a second! 🫷
    If you are here, it means that you are a software developer.
    So, you know that storage, networking, and domain management have a cost.

    If you want to support this blog, please ensure that you have disabled the adblocker for this site.
    I configured Google AdSense to show as few ads as possible – I don’t want to bother you with lots of ads, but I still need to add some to pay for the resources for my site.

    Thank you for your understanding.
    Davide

    When defining the architecture of a system, I believe in the adage that «A picture is worth a thousand words».

    Proper diagramming helps in understanding how the architecture is structured, the dependencies between components, how the different components communicate, and their responsibilities.

    A clear architectural diagram can also be useful for planning. Once you have a general idea of the components, you can structure the planning according to the module dependencies and the priorities.

    A lack of diagramming leads to a “just words” definition: how many times have you heard people talk about modules that do not exist or do not work as they imagined?

    The whole team can benefit from having a common language: a clear diagram brings clear thoughts, helping all the stakeholders (developers, architects, managers) understand the parts that compose a system.

    I tried several approaches: both online WYSIWYG tools like Draw.IO and DSLs like Structurizr and Mermaid. For different reasons, I wasn’t happy with any of them.

    Then I stumbled upon D2: its rich set of elements makes it my new go-to tool for describing architectures. Let’s see how it works!

    A quick guide to D2 syntax

    Just like the more famous Mermaid, when using D2, you have to declare all the elements and connections as textual nodes.

    You can generate diagrams online by using the Playground section available on the official website, or you can install it locally (as you will see later).

    Elements: the basic components of every diagram

    Elements are defined as a set of names that can be enriched with a label and other metadata.

    Here’s an example of the most straightforward configurations for standalone elements.

    service
    
    user: Application User
    
    job: {
      shape: hexagon
    }
    

    For each element, you can define its internal name (service), a label (user: Application User) and a shape (shape: hexagon).

    A simple diagram with only two unrelated elements

    Other than that, I love the fact that you can define elements to be displayed as multiple instances: this can be useful when a service has multiple instances of the same type, and you want to express it clearly without the need to manually create multiple elements.

    You can do it by setting the multiple property to true.

    apiGtw: API Gateway {
      shape: cloud
    }
    be: BackEnd {
      style.multiple: true
    }
    
    apiGtw -> be
    

    Simple diagram with multiple backends

    Grouping: nesting elements hierarchically

    You may want to group elements. You can do that by using a hierarchical structure.

    In the following example, the main container represents my e-commerce application, composed of a website and a background job. The website is composed of a frontend, a backend, and a database.

    ecommerce: E-commerce {
      website: User Website {
        frontend
        backend
        database: DB {
          shape: cylinder
        }
      }
    
      job: {
        shape: hexagon
      }
    }
    

    As you can see from the diagram definition, elements can be nested in a hierarchical structure using the {} symbols. Of course, you can still define styles and labels to nested elements.

    Diagram with nested elements

    Connections: making elements communicate

    An architectural diagram is helpful only if it can express connections between elements.

    To connect two elements, you must use the --, the -> or the <- connector. You have to link their IDs, not their labels.

    ecommerce: E-commerce {
      website: User Website {
        frontend
        backend
        database: DB {
          shape: cylinder
        }
        frontend -> backend
        backend -> database: retrieve records {
          style.stroke: red
        }
      }

      job: {
        shape: hexagon
      }
      job -> website.database: update records
    }
    

    The previous example contains some interesting points.

    • Elements within the same container can be referenced directly using their ID: frontend -> backend.
    • You can add labels to a connection: backend -> database: retrieve records.
    • You can apply styles to a connection, like choosing the arrow colour with style.stroke: red.
    • You can create connections between elements from different containers: job -> website.database.

    Connections between elements from different containers

    When referencing items from different containers, you must always include the container ID: job -> website.database works, but job -> database doesn’t, because database is not defined at that level (so D2 creates a brand-new top-level element from scratch).
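As a quick sketch (reusing the element names from the example above), the difference looks like this:

```d2
website: User Website {
  database: DB {
    shape: cylinder
  }
}
job: {
  shape: hexagon
}

# works: the fully-qualified ID reaches the nested element
job -> website.database: update records

# does NOT reach the nested DB: D2 creates a new
# top-level element named "database" instead
job -> database
```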

    SQL Tables: represent the table schema

    An interesting part of D2 diagrams is the possibility of adding the description of SQL tables.

    Obviously, the structure cannot be validated: the actual syntax depends on the database vendor.

    However, having the table schema defined in the diagram can be helpful for reasoning about the dependencies involved in a development task.

    serv: Products Service
    
    db: Database Schema {
      direction: right
      shape: cylinder
      userTable: dbo.user {
        shape: sql_table
        Id: int {constraint: primary_key}
        FirstName: text
        LastName: text
        Birthday: datetime2
      }
    
      productsTable: dbo.products {
        shape: sql_table
        Id: int {constraint: primary_key}
        Owner: int {constraint: foreign_key}
        Description: text
      }
    
      productsTable.Owner -> userTable.Id
    }
    
    serv -> db.productsTable: Retrieve products by user id
    

    Diagram with database tables

    Notice how you can also define constraints to an element, like {constraint: foreign_key}, and specify the references from one table to another.

    How to install and run D2 locally

    D2 is a tool written in Go.

    Go is not installed by default on every machine, so you have to install it first. You can learn how to install it from the official page.

    Once Go is ready, you can install D2 in several ways. I use Windows 11, so my preferred installation approach is to use a .msi installer, as described here.

    If you are on macOS, you can use Homebrew to install it by running:

    brew install d2

    Regardless of the Operating System, you can have Go directly install D2 by running the following command:

    go install oss.terrastruct.com/d2@latest
    

    It’s even possible to install it via Docker. However, this approach is quite complex, so I prefer installing D2 directly with the other methods I explained before.

    You can find more information about the several installation approaches on the GitHub page of the project.

    Use D2 via command line

    To work with D2 diagrams, you need to create a file with the .d2 extension. That file will contain the textual representation of the diagrams, following the syntax we saw before.

    Once D2 is installed and the file is present in the file system (in my case, I named the file my-diagram.d2), you can use the console to generate the diagram locally – remember, I’m using Windows 11, so I need to run the exe file:

    d2.exe --watch .\my-diagram.d2
    

    Now you can open your browser, head to the localhost page displayed on the shell, and see how D2 renders the local file. Thanks to the --watch flag, you can update the file locally and see the result appear on the browser without the need to restart the application.

    When the diagram is ready, you can export it as a PNG or SVG by running

    d2.exe .\my-diagram.d2 my-wonderful-design.png
    

    Create D2 Diagrams on Visual Studio Code

    Another approach is to install the D2 extension on VS Code.

    D2 extension on Visual Studio Code

    Thanks to this extension, you can open any D2 file and, by using the command palette, see a preview of the final result. You can also format the document to have the diagram definition tidy and well-structured.

    D2 extension command palette

    How to install and use D2 Diagrams on Obsidian

    Lastly, D2 can be easily integrated with tools like Obsidian. Among the community plugins, you can find the official D2 plugin.

    D2 plugin for Obsidian

    As you can imagine, Go is required on your machine.
    And, if necessary, you are required to explicitly set the path to the bin folder of Go. In my case, I had to set it to C:\Users\BelloneDavide\go\bin\.

    D2 plugin settings for Obsidian

    To insert a D2 diagram in a note generated with Obsidian, you have to use d2 as a code fence language.
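For example, a note containing the following block (a minimal sketch with made-up element names) will render the diagram inline:

````markdown
```d2
user -> server: request
server -> db: query
```
````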

    Practical tips for using D2

    D2 is easy to use once you have a basic understanding of how to create elements and connections.

    However, some tips may be useful to ease the process of creating the diagrams. Or, at least, these tips helped me write and maintain my diagrams.

    Separate elements and connections definition

    A good approach is to declare the application’s structure first and then list all the connections between elements, except for connections between elements that live in the same component and are not expected to change.

    ecommerce: E-commerce {
      website: User Website {
        backend
        database: DB {
          shape: cylinder
        }
    
        backend -> database: retrieve records {
          style.stroke: red
        }
      }
    
      job -> website.database: update records
    }
    

    Here, the connection between backend and database is internal to the website element, so it makes sense to declare it directly within the website element.

    However, the other connection between the job and the database is cross-element. In the long run, it may bring readability problems.

    So, you could update it like this:

    ecommerce: E-commerce {
      website: User Website {
        backend
        database: DB {
          shape: cylinder
        }

        backend -> database: retrieve records {
          style.stroke: red
        }
      }
    }

    ecommerce.job -> ecommerce.website.database: update records
    

    This tip can be extremely useful when you have more than one element with the same name belonging to different parents.

    Needless to say, since the order of the connection declarations does not affect the final rendering, write them in an organized way that best fits your needs. In general, I prefer creating sections (using comments to declare the area), and grouping connections by the outbound module.
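Here’s a minimal sketch of that organization (element names are made up for illustration):

```d2
# --- Elements ---
api: API Gateway
db: Database {
  shape: cylinder
}
worker: Background Worker

# --- Connections: api ---
api -> db: read/write

# --- Connections: worker ---
worker -> db: cleanup old records
```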

    Pick a colour theme (and customize it, if you want!)

    D2 allows you to specify a theme for the diagram. There are some predefined themes (which are a set of colour palettes), each with a name and an ID.

    To use a theme, you have to specify it in the vars element on top of the diagram:

    vars: {
      d2-config: {
        theme-id: 103
      }
    }
    

    103 is the theme named “Earth tones”, using a brown-based palette that, when applied to the diagram, renders it like this.

    Diagram using the 103 colour palette

    However, if you have a preferred colour palette, you can use your own colours by overriding the default values:

    vars: {
      d2-config: {
        # Earth tones theme
        theme-id: 103
        theme-overrides: {
          B4: "#C5E1A5"
        }
      }
    }
    

    Diagram with a colour overridden

    You can read more about themes and customizations here.

    What is that B4 key overridden in the previous example? Unfortunately, I don’t know: you have to experiment with the different variables to see how each one affects the rendered diagram.

    Choose the right layout engine

    You can choose one of the three supported layout engines to render the elements in a different way (more info here).

    DAGRE and ELK are open source, but quite basic. TALA is more sophisticated, but it requires a paid licence.

    Here’s an example of how the same diagram is rendered using the three different engines.

    A comparison between the DAGRE, ELK and TALA layout engines

    You can decide which engine to use by declaring it in the layout-engine element:

    vars: {
      d2-config: {
        layout-engine: tala
      }
    }
    

    Choosing the right layout engine can be beneficial because sometimes some elements are not rendered correctly: here’s a weird rendering with the DAGRE engine.

    DAGRE engine with a weird rendering

    Use variables to simplify future changes

    D2 allows you to define variables in a single place and have the same value repeated everywhere it’s needed.

    So, for example, instead of having

    mySystem: {
      reader: Magazine Reader
      writer: Magazine Writer
    }
    

    With the word “Magazine” repeated, you can move it to a variable, so that it can change in the future:

    vars: {
      entityName: Magazine
    }
    
    mySystem: {
      reader: ${entityName} Reader
      writer: ${entityName} Writer
    }
    

    If in the future you have to handle not only Magazines but also other media types, you can simply replace the value of entityName in one place and have it updated all over the diagram.
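For instance, switching the variable to a hypothetical new value updates every label at once:

```d2
vars: {
  entityName: Book
}

mySystem: {
  reader: ${entityName} Reader  # now rendered as "Book Reader"
  writer: ${entityName} Writer  # now rendered as "Book Writer"
}
```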

    D2 vs Mermaid: a comparison

    D2 and Mermaid are similar but have some key differences.

    They both are diagram-as-a-code tools, meaning that the definition of a diagram is expressed as a text file, thus making it available under source control.

    Mermaid is already supported by many tools, like Azure DevOps wikis, GitHub pages, and so on.
    On the contrary, D2 must be installed (along with the Go language).

    Mermaid is quite a “closed” system: even if it allows you to define some basic styles, it’s not that flexible.

    On the contrary, D2 allows you to choose a theme for the whole diagram, as well as different layout engines.
    Also, D2 has some functionalities that are (currently) missing in Mermaid, such as multiple-instance styling, SQL table shapes, and diagram variables.

    Mermaid, on the contrary, allows us to define more types of diagrams: State Diagrams, Gantt, Mindmaps, and so on. Also, as we saw, it’s already supported on many platforms.

    So, my (current) choice is: use D2 for architectural diagrams, and use Mermaid for everything else.

    I haven’t tried D2 for Sequence Diagrams yet, so I won’t express an opinion on that.

    Further readings

    D2 is available online with a playground you can use to try things out in a sandboxed environment.

    🔗 D2 Playground

    All the documentation can be found on GitHub or on the official website:

    🔗 D2 documentation

    And, if you want, you can use icons to create better diagrams: D2 exposes a set of SVG icons that can be easily integrated into your diagrams. You can find them here:

    🔗 D2 predefined icons

    This article first appeared on Code4IT 🐧

    Ok, but diagrams have to live in a context. How can you create useful and maintainable documentation for your future self?

    A good way to document your architectural choices is to define ADRs (Architecture Decision Records), as explained here:

    🔗 Tracking decision with Architecture Decision Records (ADRs) | Code4IT

    And, of course, just the architectural diagram is not enough: you should also describe the dependencies, the constraints, the deployment strategies, and so on. Arc42 is a template that can guide you to proper system documentation:

    🔗 Arc42 Documentation, for a comprehensive description of your project | Code4IT

    Wrapping up

    I hope you enjoyed this article! Let’s keep in touch on LinkedIn, Twitter or BlueSky! 🤜🤛

    Happy coding!

    🐧






  • Nite Riot: Minimalism Gets a Wild Side


    Nite Riot isn’t just one person, it’s a powerhouse team of creatives crafting
    high-voltage promos and commercials for the biggest names in music, fashion,
    and entertainment. The kind of work that makes your jaw drop and your brain
    scream, “Damn, that’s cool”! So, when we got the chance to build their digital
    playground, we knew one thing: it had to hit just as hard as their portfolio.

    It just so happened that while working on this project, I was deep into
    Louis Paquet’s Awwwards masterclass,
    Memorable UI Design For Interactive Experiences. I challenged myself to apply a whole new approach:
    “Big Idea”. It became the driving force behind everything
    that followed.

    Less Noise, More Punch

    Nite Riot’s work hits like a lightning bolt—loud, bold, and impossible to
    ignore. The website needed to do the opposite: create space for that energy to
    shine. We stripped away the noise and leaned into a minimal black-and-white
    aesthetic, relying on a dynamic grid layout and a Difference (Blend) Mode to
    do the heavy lifting.

    We went through multiple visual directions at the start: concepts were
    sketched, tested, and tossed. Only the strongest ideas made the cut, forming a
    clean yet bold design system.

    And we didn’t wait for the homepage to make a first impression. The preloader
    set the tone from the jump—not just functional, but atmospheric. It’s the
    handshake, the deep breath before the plunge. I always think of the preloader
    as the overture of the experience.

    Big Idea

    My guiding light? Difference Mode. It didn’t just influence the design; it
    became the heartbeat of the entire site.

    • Typography treatments
    • Imagery overlays
    • Video hovers
    • Case study rollovers
    • Page transitions
    • Scroll effects
    • The logo itself
    • Dark/Light theme toggling
    • Drag scroll on the Inspired page
    • Even the 404 page

    Enter the Difference Mode

    The goal wasn’t simply to layer on visual effects; instead, it was about
    crafting a rhythm. Difference Mode brought contrast and edge to every element
    it touched. Hover states, transitions, and page reveals all flowed together,
    each following the same beat.

    But the impact wasn’t confined to the visual side. On the technical front,
    Difference Mode allowed us to create a smooth dark/light theme experience
    without the need for redundant styles. With a single toggle, the entire color
    palette reversed seamlessly, ensuring both sharpness and performance.

    Design Index Page

    We experimented with multiple layouts to strike the perfect balance. The
    client didn’t want a fullscreen visual overload, but at the same time, they
    wanted a strong presence of imagery—all within a minimalist aesthetic. The
    result? A carefully structured design that offers a rich visual experience
    without overwhelming the user, maintaining the sleek and intentional feel of
    the site.

    Case Study: A Scrolling Cinematic Experience

    Case studies aren’t just pages on this site—they’re a journey. Forget the
    standard click-and-load experience; here, case studies unfold like a film
    reel, one seamless story rolling into the next.

    • On desktop, the layout moves horizontally—because why
      should scrolling always be up and down?
    • No matter if a title is one line or three, we made sure everything adapts
      beautifully.
    • We developed a multi-case format, so you’re never locked
      into just one story.
    • And the showstopper?
      The ultra-smooth case study transition. Instead of abruptly
      ending one project and making you manually start the next, we designed a
      flow where the next case subtly appears, teasing you into it. You can either
      click and dive in or keep scrolling, and like magic, you’re onto the next
      visual masterpiece.

    Inspired? You Will Be.

    Our favorite part? The Inspired page. Imagine a canvas where images float,
    shift, and react to your every move. Drag, drop, hold—boom, Difference Mode
    kicks in. It’s a playground for the restless creative mind, where inspiration
    isn’t just consumed—it’s interacted with.

    404, But Make It Rockstar

    Most 404 pages are where fun goes to die. Not here. Ours is a full-blown
    experience—an Easter egg waiting to be discovered. The kind of 404 that makes
    you want to get lost on purpose.

    Oh, and did we mention? We applied
    double Difference Mode here. Because why not?

    I accidentally duplicated a video layer that had Difference Mode on—and turns
    out, the number shapes had the same mode. That happy accident created a unique
    setup with a ton of character. And the best part? It was insanely easy to
    build in Webflow with just the native tools.

    Animation


    Every animation began in Figma. Once we nailed the tone and pacing, I moved it
    all into After Effects, tweaking easings and timings to hit that sweet spot
    between smooth and snappy.

    I leaned on three key easing patterns to shape the site’s movement:

    easeSlowStartFastEnd
    cubic-bezier(0.2, 0, 0.1, 1)
    
    easeFastStartSmoothEnd
    cubic-bezier(0.75, 0, 0, 1)
    
    easeHeadings
    cubic-bezier(0.75, 0, 0, 0.35)

    When it came to development, GSAP gave us the
    control and nuance to bring those animations to life on the web.

    Development Choices

    We didn’t have an unlimited budget, but we had a clear vision. So we chose
    tools that gave us flexibility without compromising on polish.

    We pushed Webflow and
    GSAP to their limits to bring this vision to
    life—fully custom interactions, smooth performance, and pixel-perfect
    precision across all devices. Every scroll, hover, and transition was
    carefully crafted to support the story.

    Stack Under the Hood:

    • Webflow: our base platform
    • GSAP: the animation engine
    • Barba.js: for seamless page transitions
    • Embla.js: smooth slider on
      the homepage
    • Lenis: buttery custom
      scroll experience
    • Glightbox: for
      full-screen video playback

    Barba.js and Lenis worked perfectly together for our infinite scroll effect
    and horizontal case study navigation.

    Tech Breakdown: From Stack to Story

    Global

    • Page transitions powered by barba.js +
      GSAP, including smooth element reveals on entry.

    Index Page

    • Case slider built with Embla Carousel for
      the title timeline and mobile image switching.
    • GSAP handles pagination animations and desktop image
      transitions.

    Inspired Page

    • Drag-to-explore canvas with floating images and color
      inversion using vanilla JS + CSS.

    Work Page

    • List item hover effects with highlight and background
      transitions using JS + CSS + GSAP.

    About Page

    • Section overlap animation with background scaling and
      masked reveals powered by CSS + GSAP.
    • On-scroll reveal of text and imagery using
      GSAP ScrollTrigger.
    • Hover animations on cities (desktop) +
      auto image rotation (mobile) via
      JS + GSAP.
    • Smooth scrolling handled by lenis.js.

    Case Studies

    • Horizontal scroll experience using
      lenis.js + GSAP.
    • Pagination updates animated with
      JS + GSAP.
    • Parallax effects on H1, media, and next-case blocks powered
      by GSAP.
    • Tab transitions via barba.js + GSAP.
    • Scroll-based transition to the next case using
      JS + barba.js + GSAP.
    • Back button transition animated with
      JS + barba.js + GSAP.
    • Full-screen video block with smooth entry/exit animations
      using JS + GSAP + glightbox.

    404 Page

    • Scrolling text ticker via CSS animations.
    • Cursor-following 404 block on desktop using
      JS + GSAP.
    • Chaotic digit displacement animated with
      GSAP.
    • Motion-reactive number shift on mobile/tablet via
      JS + Device Orientation API.

    Visual Optimization

    Performance mattered—especially on case study pages packed with video previews
    and hi-res imagery. Our toolchain:

    • Handbrake for compressing videos
    • Picflow for bulk image optimization
    • tinypng for WebP polishing

    Picflow let us handle massive batches of photos way faster than Squoosh ever
    could. Big time-saver.


    CMS

    We built everything in Webflow’s CMS. Super clean and fast to update. Adding a
    new case is as easy as filling out a form.

    Not Just a Portfolio. A Vibe.

    This wasn’t about making another nice-looking site. It was about building a
    space that feels alive. Every pixel, every transition, every weird, wonderful
    interaction was designed to make people feel something. Minimalism with an
    edge. Order with a dash of chaos. Just like Nite Riot.

    Oh, and speaking of hidden gems—let’s just say we have a soft spot for Easter
    eggs. If you hover over the Nite Riot logo, you might just stumble upon a
    couple of surprises. No spoilers, though. You’ll have to find them yourself.
    😉

    Click. Explore. Get Lost.

    This is not just a website. It’s an experience. A digital world built to be
    played with. So go ahead—dive in, mess around, and see what happens!

    Visit the Nite Riot Site

    Credits

    Creation Direction:
    BL/S®

    Art / Creative Director:
    Serhii Polyvanyi

    UI / UX Design:
    Vladyslav Litovka

    PM: Julia Nikitenko

    Dev Direction: V&M, Yevhenii Prykhodko






  • How to log to Azure Application Insights using ILogger in ASP.NET Core | Code4IT


    Application Insights is a great tool for handling high volumes of logs. How can you configure an ASP.NET application to send logs to Azure Application Insights? What can I do to have Application Insights log my exceptions?


    Logging is crucial for any application. However, generating logs is not enough: you must store them somewhere to access them.

    Application Insights is one of the tools that allows you to store your logs in a cloud environment. It provides a UI and a query editor that allows you to drill down into the details of your logs.

    In this article, we will learn how to integrate Azure Application Insights with an ASP.NET Core application and how Application Insights treats log properties such as Log Levels and exceptions.

    For the sake of this article, I’m working on an API project with HTTP Controllers with only one endpoint. The same approach can be used for other types of applications.

    How to retrieve the Azure Application Insights connection string

    Azure Application Insights can be accessed via any browser by using the Azure Portal.

    Once you have an instance ready, you can simply get the value of the connection string for that resource.

    You can retrieve it in two ways.

    You can get the connection string by looking at the Connection String property in the resource overview panel:

    Azure Application Insights overview panel

    The alternative is to navigate to the Configure > Properties page and locate the Connection String field.

    Azure Application Insights connection string panel

    How to add Azure Application Insights to an ASP.NET Core application

    Now that you have the connection string, you can place it in the configuration file or, in general, store it in a place that is accessible from your application.

    To configure ASP.NET Core to use Application Insights, you must first install the Microsoft.Extensions.Logging.ApplicationInsights NuGet package.

    Now you can add a new configuration to the Program class (or wherever you configure your services and the ASP.NET core pipeline):

    builder.Logging.AddApplicationInsights(
      configureTelemetryConfiguration: (config) =>
        config.ConnectionString = "InstrumentationKey=your-connection-string",
      configureApplicationInsightsLoggerOptions: (options) => { }
    );
    

    The configureApplicationInsightsLoggerOptions parameter allows you to configure some additional properties: TrackExceptionsAsExceptionTelemetry, IncludeScopes, and FlushOnDispose. These properties are set to true by default, so you probably don’t want to change the default behaviour (except for one, which we’ll modify later).
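For completeness, here is a sketch showing those options set explicitly (the values shown are the defaults, and the connection string is a placeholder):

```csharp
builder.Logging.AddApplicationInsights(
    configureTelemetryConfiguration: (config) =>
        config.ConnectionString = "InstrumentationKey=your-connection-string",
    configureApplicationInsightsLoggerOptions: (options) =>
    {
        // Defaults, spelled out only to make them visible
        options.IncludeScopes = true;
        options.TrackExceptionsAsExceptionTelemetry = true;
        options.FlushOnDispose = true;
    });
```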

    And that’s it! You have Application Insights ready to be used.

    How log levels are stored and visualized on Application Insights

    I have this API endpoint that does nothing fancy: it just returns a random number.

    [Route("api/[controller]")]
    [ApiController]
    public class MyDummyController(ILogger<MyDummyController> logger) : ControllerBase
    {
        private readonly ILogger<MyDummyController> _logger = logger;

        [HttpGet]
        public async Task<IActionResult> Get()
        {
            int number = Random.Shared.Next();
            return Ok(number);
        }
    }
    

    We can use it to run experiments on how logs are treated using Application Insights.

    First, let’s add some simple log messages in the Get endpoint:

    [HttpGet]
    public async Task<IActionResult> Get()
    {
        int number = Random.Shared.Next();
    
        _logger.LogDebug("A debug log");
        _logger.LogTrace("A trace log");
        _logger.LogInformation("An information log");
        _logger.LogWarning("A warning log");
        _logger.LogError("An error log");
        _logger.LogCritical("A critical log");
    
        return Ok(number);
    }
    

    These are just plain messages. Let’s search for them in Application Insights!

    You first have to run the application – duh! – and wait for a couple of minutes for the logs to be ready on Azure. So, remember not to close the application immediately: you have to give it a few seconds to send the log messages to Application Insights.

    Then, you can open the logs panel and access the logs stored in the traces table.

    Log levels displayed on Azure Application Insights

    As you can see, the messages appear in the query result.

    There are three important things to notice:

    • in .NET, the log level is called “Log Level”, while on Application Insights it’s called “severity level”;
    • the log levels lower than Information are ignored by default (in fact, you cannot see them in the query result);
    • the Log Levels are exposed as numbers in the severityLevel column: the higher the value, the higher the log level.
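For reference, this is how the .NET Log Levels map to severityLevel values (based on the Application Insights severity model; worth double-checking against your own traces table):

```text
Trace, Debug -> 0 (Verbose)
Information  -> 1
Warning      -> 2
Error        -> 3
Critical     -> 4
```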

    So, if you want to update the query to show only the log messages that are at least Warnings, you can do something like this:

    traces
    | where severityLevel >= 2
    | order by timestamp desc
    | project timestamp, message, severityLevel
    

    How to log exceptions on Application Insights

    In the previous example, we logged errors like this:

    _logger.LogError("An error log");
    

    Fortunately, ILogger exposes an overload that accepts an exception in input and logs all the details.

    Let’s try it by throwing an exception (I chose AbandonedMutexException because it’s completely out of place in this simple context, so it’s easy to spot).

    private void SomethingWithException(int number)
    {
        try
        {
            _logger.LogInformation("In the Try block");
    
            throw new AbandonedMutexException("An exception message");
        }
        catch (Exception ex)
        {
            _logger.LogInformation("In the Catch block");
            _logger.LogError(ex, "Unable to complete the operation");
        }
        finally
        {
            _logger.LogInformation("In the Finally block");
        }
    }
    

    So, when calling it, we expect to see 4 log entries, one of which contains the details of the AbandonedMutexException exception.

    The Exception message in Application Insights

    Hey, where is the exception message??

    It turns out that when ILogger creates log entries like _logger.LogError("An error log");, it generates objects of type TraceTelemetry. However, the overload that accepts an exception as its first parameter (_logger.LogError(ex, "Unable to complete the operation");) is handled internally as an ExceptionTelemetry object. Since it’s a different type of Telemetry object, it gets ignored by default.

    To enable logging exceptions, you have to update the way you add Application Insights to your application by setting the TrackExceptionsAsExceptionTelemetry property to false:

    builder.Logging.AddApplicationInsights(
        configureTelemetryConfiguration: (config) =>
            config.ConnectionString = connectionString,
        configureApplicationInsightsLoggerOptions: (options) =>
            options.TrackExceptionsAsExceptionTelemetry = false);
    

    This way, ExceptionTelemetry objects are treated as TraceTelemetry logs, making them available in Application Insights logs:

    The Exception log appears in Application Insights

    Then, to access the details of the exception like the message and the stack trace, you can look into the customDimensions element of the log entry:

    Details of the Exception log
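If you want to query those details rather than browse them in the portal, a simple approach is to project the whole customDimensions bag for error-level traces; the property names inside the bag vary with your setup, so inspect the output before extracting specific keys:

```kusto
traces
| where severityLevel >= 3
| order by timestamp desc
| project timestamp, message, customDimensions
```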

    Even though this change is necessary to make exception logging work, it is barely described in the official documentation.

    Further readings

    It’s not the first time we have written about logging in this blog.

    For example, suppose you don’t want to use Application Insights but prefer an open-source, vendor-independent log sink. In that case, my suggestion is to try Seq:

    🔗 Easy logging management with Seq and ILogger in ASP.NET | Code4IT

    Logging manually is nice, but you may be interested in automatically logging all the data related to incoming HTTP requests and their responses.

    🔗 HTTP Logging in ASP.NET: how to automatically log all incoming HTTP requests (and its downsides!) | Code4IT

    This article first appeared on Code4IT 🐧

    You can read the official documentation here (though I find it incomplete, and it does not show the expected results):

    🔗 Application Insights logging with .NET | Microsoft docs

    Wrapping up

    This article taught us how to set up Azure Application Insights in an ASP.NET application.
    We touched on the basics, discussing log levels and error handling. In future articles, we’ll delve into some other aspects of logging, such as correlating logs, understanding scopes, and more.

    I hope you enjoyed this article! Let’s keep in touch on LinkedIn, Twitter or BlueSky! 🤜🤛

    Happy coding!

    🐧





    Source link

  • Turning Music Into Motion: The Making of the 24/7 Artists Launch Page



    In this article, we’ll explore the behind-the-scenes process of how Waaark brought 24/7 Artists’ new product launch landing page to life. See how creative vision, design, and development came together to shape the final result.

    Brief

    24/7 Artists reached out after discovering our work on AW Portfolio. They came to us with a clear challenge: help them break through a creative deadlock and redesign their site to support an upcoming product launch—on a tight deadline.

    At Waaark, having time to think, breathe, and work at our own pace is key. We typically avoid last-minute projects, but this one felt like a puzzle worth solving. We saw a narrow but feasible path forward and accepted the challenge.

    Creative research

    We kicked off the project by exploring ways to visually represent music. After some wandering sessions on platforms like Pinterest and Behance, we narrowed our direction toward visualiser aesthetics—particularly the use of lines to suggest sound waves.

    The client also emphasised their desire to introduce depth and dimensionality into the site. We collected inspiration reflecting this concept and organised everything into a Milanote moodboard, including ideas around color, typography, layout, and impactful hero sections to set a clear creative direction.

    Given the time constraints, it was important to focus on bold, achievable visuals—techniques we had already mastered.

    Design

    Story board

    For a storytelling-focused, long-scrolling landing page like this, we replaced our typical UI wireframes with a full storyboard. This storyboard mapped out each step of the user journey, along with transitions between sections.

    Our goal was twofold: to provide the client with a clear visual direction and to start shaping the flow and pacing on our end.

    Creative Direction

    With both the moodboard and storyboard approved, we began merging them to define the site’s visual language.

    Right from the hero section, we wanted the message to be loud and clear: music meets tech. We envisioned a dark, immersive intro with circular lines evoking vinyl records or sound waves. Layered on top: a bold sans-serif headline and a ticket-style navigation bar to reinforce the music industry vibe.

    To instantly capture user attention, we imagined a mouse-trail animation where artist photos appear in an equalizer-style movement.

    To contrast the dark intro, we introduced a more colorful palette throughout the rest of the site, showcasing the diversity of music and the artists’ unique sensibilities.

    Implementation

    Tech stack

    We used our go-to stack, which the client was already familiar with: WordPress. It provided a solid foundation—easy to manage, flexible for the frontend, and scalable.

    For the front-end experience, we integrated a few select libraries:

    • GSAP for fluid, expressive animations
    • Luge to manage the overall page lifecycle
    • Lenis for smooth scrolling

    We aimed to minimise external dependencies, instead relying on native CSS 3D transformations and lightweight JS/Canvas-based animations—especially for effects mimicking depth.

    Animation

    To save time, all the animations were directly coded based on what we had envisioned and mapped out in the storyboard. Some of them worked exactly as imagined from the start, while others needed a bit of fine-tuning to integrate fully into the overall experience.

    Scroll Animations

    To keep users engaged while presenting 24/7 Artists’ vision and offering, we crafted a sequence of scroll-driven animations—alternating between smooth flows and unexpected reveals.

    Micro-Interactions

    On a product launch page, micro-interactions are key. They spark curiosity, highlight key elements, and subtly guide the user toward action.

    For the main call to action, we designed a distinctive interaction using the same equalizer-like shape seen in the photo animations. On hover, it animates like a music player—both playful and thematic.

    Tile Grid Setup
    We began by constructing a grid made of 1×1 and 2×2 tiles.

    Z-Axis Scroll Effect
    Since we weren’t using true 3D, we faked depth using scale transforms. We calculated the scale needed to have the grid’s central hole (where content would go) expand to fill the viewport. Then, we transitioned each tile from its original size and position to the final state using GSAP.

    Playing with GSAP staggered animation adds more depth to the motion.
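The core of that scale calculation can be sketched in a few lines of plain JavaScript (the function name, the example dimensions, and the commented GSAP call are illustrative, not the production code):

```javascript
// Scale needed so the grid's central hole grows to fill the viewport.
// Taking the larger of the two ratios guarantees full coverage on both axes.
function holeFillScale(holeWidth, holeHeight, viewportWidth, viewportHeight) {
  return Math.max(viewportWidth / holeWidth, viewportHeight / holeHeight);
}

// Example: a 400×300 hole in a 1920×1080 viewport.
const scale = holeFillScale(400, 300, 1920, 1080);

// Each tile could then be tweened from scale 1 to `scale`, e.g. with GSAP:
// gsap.to('.tile', { scale, stagger: 0.05, ease: 'power2.inOut' });
```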

    Simulated Cube Depth
    To simulate 3D cubes, we calculated the back-face vertices based on a smaller grid to keep the illusion of perspective. We then drew side faces accordingly, making sure to hide vertices behind the front face.

    Canvas-Based Content Reveal
    To finish the effect, we redrew the 2×2 tiles’ content in Canvas and added a cover layer that scrolls at a faster rate, revealing the content below.

    Conclusion

    The 24/7 Artists landing page was a bold and fast-paced project that pushed us to distill ideas quickly and trust our creative instincts.

    Through strong visual metaphors, smooth storytelling, and carefully crafted motion, we built a launchpad that sets the tone for the brand’s next chapter.

    This first release is just the beginning. The site was designed with scalability in mind, and additional sections and pages are already being added to support future growth and evolving needs.

    When the vision is clear and the momentum is right, great things can happen—fast.



    Source link

  • Zero Trust Network Access Use Cases



    As organizations navigate the evolving threat landscape, traditional security models like VPNs and legacy access solutions are proving insufficient. Zero Trust Network Access (ZTNA) has emerged as a modern alternative that enhances security while improving user experience. Let’s explore some key use cases where ZTNA delivers significant value.

    Leveraging ZTNA as a VPN Alternative

    Virtual Private Networks (VPNs) have long been the go-to solution for secure remote access. However, they come with inherent challenges, such as excessive trust, lateral movement risks, and performance bottlenecks. ZTNA eliminates these issues by enforcing a least privilege access model, verifying every user and device before granting access to specific applications rather than entire networks. This approach minimizes attack surfaces and reduces the risk of breaches.

    ZTNA for Remote and Hybrid Workforce

    With the rise of remote and hybrid work, employees require seamless and secure access to corporate resources from anywhere. ZTNA ensures secure, identity-based access without relying on traditional perimeter defenses. By continuously validating users and devices, ZTNA provides a better security posture while offering faster, more reliable connectivity than conventional VPNs. Cloud-native ZTNA solutions can dynamically adapt to user locations, reducing latency and enhancing productivity.

    Securing BYOD Using ZTNA

    Bring Your Own Device (BYOD) policies introduce security risks due to the varied nature of personal devices connecting to corporate networks. ZTNA secures these endpoints by enforcing device posture assessments, ensuring that only compliant devices can access sensitive applications. Unlike VPNs, which expose entire networks, ZTNA grants granular access based on identity and device trust, significantly reducing the attack surface posed by unmanaged endpoints.

    Replacing Legacy VDI

    Virtual Desktop Infrastructure (VDI) has traditionally provided secure remote access. However, VDIs can be complex to manage, require significant resources, and often introduce performance challenges. ZTNA offers a lighter, more efficient alternative by providing direct, controlled access to applications without needing a full virtual desktop environment. This improves user experience, simplifies IT operations, and reduces costs.

    Secure Access to Vendors and Partners

    Third-party vendors and partners often require access to corporate applications, but providing them with excessive permissions can lead to security vulnerabilities. Zero Trust Network Access enables secure, policy-driven access for external users without exposing internal networks. By implementing identity-based controls and continuous monitoring, organizations can ensure that external users only access what they need when they need it, reducing potential risks from supply chain attacks.

    Conclusion

    ZTNA is revolutionizing secure access by addressing the limitations of traditional VPNs and legacy security models. Whether securing remote workers, BYOD environments, or third-party access, ZTNA provides a scalable, flexible, and security-first approach. As cyber threats evolve, adopting ZTNA is a crucial step toward a Zero Trust architecture, ensuring robust protection without compromising user experience.

    Is your organization ready to embrace Zero Trust Network Access? Now is the time for a more secure, efficient, and scalable access solution. Contact us or visit our website for more information.



    Source link

  • Digital Personal Data Protection Act Guide for Healthcare Leaders



    The digital transformation of India’s healthcare sector has revolutionized patient care, diagnostics, and operational efficiency. However, this growing reliance on digital platforms has also led to an exponential increase in the collection and processing of sensitive personal data. The Digital Personal Data Protection (DPDP) Act 2023 is a critical regulatory milestone, shaping how healthcare organizations manage patient data.

    This blog explores the significance of the DPDP Act for hospitals, clinics, pharmaceutical companies, and other healthcare entities operating in India.

    Building an Ethical and Trustworthy Healthcare Environment

    Trust is the cornerstone of patient-provider relationships. The DPDP Act 2023 reinforces this trust by granting Data Principals (patients) fundamental rights over their digital health data, including access, correction, and erasure requests.

    By complying with these regulations, healthcare organizations can demonstrate a commitment to patient privacy, strengthening relationships, and enhancing healthcare outcomes.

    Strengthening Data Security in a High-Risk Sector

    The healthcare industry is a prime target for cyberattacks due to the sensitivity and value of patient data, including medical history, treatment details, and financial records. The DPDP Act mandates that healthcare providers (Data Fiduciaries) implement comprehensive security measures to protect patient information from unauthorized access, disclosure, and breaches. This includes adopting technical and organizational safeguards to ensure data confidentiality, integrity, and availability.

    Ensuring Regulatory Compliance and Avoiding Penalties

    With strict compliance requirements, the Digital Personal Data Protection Act provides a robust legal framework for data protection in healthcare. Failure to comply can result in financial penalties of up to ₹250 crore for serious violations. By aligning data processing practices with regulatory requirements, healthcare entities can avoid legal risks, safeguard their reputation, and uphold ethical standards.

    Promoting Patient Empowerment and Data Control

    The DPDP Act empowers patients with greater control over their health data. Healthcare providers must establish transparent mechanisms for data collection and obtain explicit, informed, and unambiguous patient consent. Patients also have the right to know how their data is used, who has access, and for what purposes, reinforcing trust and accountability within the healthcare ecosystem.

    Facilitating Innovation and Research with Safeguards

    While prioritizing data privacy, the Digital Personal Data Protection Act also enables responsible data utilization for medical research, public health initiatives, and technological advancements. The Act provides pathways for the ethical use of anonymized or pseudonymized data, ensuring continued innovation while protecting patient rights. Healthcare organizations can leverage data analytics to improve treatment protocols and patient outcomes, provided they adhere to principles of data minimization and purpose limitation.

    Key Obligations for Healthcare Providers under the DPDP Act

    Healthcare organizations must comply with several critical obligations under the DPDP Act 2023:

    • Obtaining Valid Consent: Secure explicit patient consent for collecting and processing personal data for specified purposes.
    • Implementing Security Safeguards: To prevent breaches, deploy advanced security measures, such as encryption, access controls, and regular security audits.
    • Data Breach Notification: Promptly report data breaches to the Data Protection Board of India and affected patients.
    • Data Retention Limitations: Retain patient data only as long as necessary and ensure secure disposal once the purpose is fulfilled.
    • Addressing Patient Rights: Establish mechanisms for patients to access, correct, and erase their personal data while addressing privacy-related concerns.
    • Potential Appointment of a Data Protection Officer (DPO): Organizations processing large volumes of sensitive data may be required to appoint a DPO to oversee compliance efforts.

    Navigating the Path to DPDP Compliance in Healthcare

    A strategic approach is essential for healthcare providers to implement the DPDP Act effectively. This includes:

    • Conducting a comprehensive data mapping exercise to understand how patient data is collected, stored, and shared.
    • Updating privacy policies and internal procedures to align with the Act’s compliance requirements.
    • Training employees on data protection best practices to ensure organization-wide compliance.
    • Investing in advanced data security technologies and establishing robust consent management and incident response mechanisms.

    A Commitment to Data Privacy in Healthcare

    The Digital Personal Data Protection Act 2023 is a transformative regulation for the healthcare industry in India. By embracing its principles, healthcare organizations can ensure compliance, strengthen patient trust, and build a secure, ethical, and innovation-driven ecosystem.

    Seqrite offers cutting-edge security solutions to help healthcare providers protect patient data and seamlessly comply with the DPDP Act.

     



    Source link

  • Developer Spotlight: Andrea Biason | Codrops



    Hi Codrops community! My name is Andrea Biason, and I’m a Creative Frontend Developer currently living in Brescia. I spend my days at Adoratorio Studio, where we create award-winning interactive and immersive experiences by combining strategy, design, and technology.

    Today, I’d like to present four diverse projects that showcase our approach and vision of what a web experience can be.

    The Blue Desert is an R&D project developed to explore new ways of conveying information typically confined to PDFs, such as Global Impact Reports and corporate press releases.

    We strongly believe in the power of storytelling to communicate data that might otherwise be overlooked — yet is truly future-defining — and we wanted to share that with the world.

    The experience presents two narratives:

    • A sci-fi overarching narrative that follows a wanderer’s journey through his devastated world, forever altered by a catastrophic desertification phenomenon.
    • A data-driven narrative revealed through pins scattered across the experience, comparing the Climate Change Goals set by COP21 with the current progress — and shortcomings — we’ve made as a species.

    While creating The Blue Desert, we aimed to maintain an immersive, sand-inspired aesthetic that complemented the narrative and setting — from the color palette to the font choices and design elements. This visual continuity is evident in everything from the smallest icons to the most noticeable transitions, and especially in the brushed, soft, and warm style of the 3D models that define the entire experience.

    The smooth camera movements guiding the user through the various scenes were studied in meticulous detail and often refined throughout the development process.

    We believe that attention to the smallest details is what makes or breaks an immersive experience — and there are many we’re particularly proud of. Each scene offers a unique interaction, such as the rapid flow of a waterfall, the blooming of flowers on tree crowns, or the rejuvenation of the blue desert when touched by the “Komai” blobs.

    When it comes to overall navigation, we’re always struck by the profound, AI-generated voice-over — carefully refined by human hands — paired with the original soundtrack and a rich variety of sound effects tailored to each scene.

    One final element we particularly love is the moment you break through the clouds at the start of the experience and arrive at the wanderer’s campsite. It immediately sets a deeper level of immersion from the very beginning.

    Challenges

    The wanderer’s aphorism at the beginning of the experience reflects the project’s learning journey: “The more we know, the more we realize we don’t know.”

    And that’s exactly what we found ourselves grappling with throughout the creation of this unique digital experience.

    The main challenges we faced included:

    1. 3D challenges: Coordinating multiple scenes in a continuous flow, ensuring smooth navigation, and optimizing exported assets. We also had to manage the project’s large scale — particularly handling cameras, backgrounds, and high-quality textures positioned close to the point of view.
    2. Development challenges: Maintaining a consistent frame rate for users by implementing a dual optimization process for both desktop and mobile, along with leveraging Three.js’ frustum culling. The 3D and development teams worked closely together, experimenting with and implementing InstancedMesh, with a particular focus on Shader Materials and Vertex Shaders.

    Personal Notes

    In terms of R&D, The Blue Desert was definitely the most challenging project we engaged with in recent memory. This isn’t only true in terms of (heavily) expanding our understanding and knowledge of optimization and 3D/WebGL, but also in polishing (and in the beginning, often reworking) the art direction, camera movements, and not falling into the pitfall of “wanting every random idea to make it to the final experience.”

    Given that the project was pretty much carried out in parallel between the 3D and Dev departments, the first phase for both teams consisted of a wide variety of specific experimentations that lasted from a few days to a week, in order to fully master — and then be able to customise to our needs — a variety of technical specifics, from texturing, UV map creation, and modeling, to understanding the shortcomings of optimisation tools like glTF-Transform, as well as specific tests on shaders and animations. The biggest lesson and achievement was definitely structuring (not without hardships) a smoother, more natural collaboration and hand-over process between the two teams.

    Another unexpected outcome, given the unusual pairing of content and approach, was the audiences this experience gathered interest from: initially (and expectedly) from the design community, who enjoyed the story as well as the imaginative world it takes place in, the curated 3D models, and the original themes we produced for the experience. Secondly (and a bit unexpectedly), from many brands that took notice and reached out, finding the approach fresh and engaging as a way to highlight years of efforts in achieving higher sustainability standards.

    Tech Stack

    Vue.js, Three.js, GSAP, Howler.js, Blender

    Ariostea is a brand of Iris Ceramica Group, one of the world’s largest ceramics producers and innovators. What initially started as a simple request for a new corporate website quickly evolved into a complete overhaul of their brand, positioning, and digital presentation.

    The first step in the project was completely revisiting the former sitemap (which consisted of more than 40 first-level pages — a mess we had never seen before), reducing it to eight main pages.

    What makes us particularly proud of this work is the thorough implementation of the brand design elements — most notably the smooth-cornered, rectangular shape of Ariostea’s slabs, which became a cornerstone of the design system — as well as the integration of the client’s PIM to automatically source and update product data, textures, 3D models, and the catalogue.

    While corporate websites generally tend not to stand out as much as WebGL experiences or interactive experiments, we find great satisfaction in crafting well-polished, strategically sound, and functional platforms that — like in Ariostea’s case — will stand the test of time and may serve as a foundation for bolder experiments to follow, as we’ll soon see with ICG Galleries.

    Challenges

    Approaching the design and development for Ariostea, we knew we would face three main challenges. Let’s delve into them one by one:

    The sheer scale of the project

    As mentioned above, the gargantuan amount of information presented on the website proved to be the first challenge. While a careful restructuring streamlined the UI and header — going from 40-something elements (I’m still in disbelief typing this out) to around 8 — the information itself wasn’t going to be deleted. Having longer, richer pages therefore meant creating a more diverse array of modules working in unison, featuring a variety of image and text carousels to better organize the content consumption.

    This was only the first step, as the richest section of the website was (who would’ve thought) the product catalogue — featuring more than 1,500 unique products.

    Our main objective for the “Collections” page was to avoid overwhelming the user with such a diverse selection, which was also arbitrarily divided from a business — not user — perspective. We therefore created a progressive filter system, clearly displayed at the bottom-center, with intuitive options tailored to user needs such as appearance, application, and material size.

    Finding a distinctive element / module / page that would take the project a step further

    While integrating the brand guidelines into the design system as seamlessly as possible, we knew we wanted to design a single page that would set Ariostea apart — something that would act as a memorable visual for all users. That could only be the product page.

    The design we chose to pursue presents an initial, card-like view featuring a texture preview, main data such as description and formats, and a sticky component showcasing the product name, collection, contact CTA, and the option to add the product to a wishlist.

    The texture preview was integrated with a savvy, highly optimized WebGL, generatively sourcing from the client’s PIM and presenting an authentic slab, just like those seen in their showrooms.

    Working in tandem with the brand’s IT to fail-proof the PIM connection

    Given our extensive expertise in custom development and WebGL, as well as modular, consistent productions for corporate websites, the biggest challenge we faced was definitely coordinating and working in tandem with Ariostea’s technical team to smoothly integrate their PIM — a challenge that wasn’t accomplished without a wide range of creative solutions and nerve-wracking brainstorming sessions!

    Personal Notes

    Ariostea was our first time creating custom APIs to connect with the brand’s PIM — something that proved useful later on for developing other, more experiential projects for the group. Additionally, the delicate tuning of multiple animations to achieve a cohesive experience, with a strong focus on UX and content consumption rather than flashy, explosive transitions, was a careful effort that often goes unnoticed.

    Tech Stack

    Vue.js + Nuxt.js, Three.js, GSAP, Blender, WordPress

    Iris Ceramica Group is one of the world leaders in ceramic production, with an ever-growing number of stunning showrooms around the world.

    ICG Galleries are designed to showcase the breadth of the group’s products, brands, and innovations — simulating living environments, presenting new collaborations, and hosting talks and events.

    Formerly represented only on the main company website, the objective of the immersive experience was to present ICG products in a contextualized setting, while also showcasing — through 3D and animations — the wide array of innovations and technologies developed by the company for ceramic surfaces. This idealized version of the Galleries was something the company had long been hoping to achieve digitally.

    Based on three floors that highlight the group’s Values, Products, and Creations, the experience begins with a slider displaying the three isometric floor plans, allowing the user to start their journey freely from whichever floor they find most enticing.

    While isometric views have been used in similar experiences before, for this project we were committed, first, to achieving a distinctive, warm, and refined art direction that embodied the group’s positioning and products — and second, to implementing them directly in the WebGL scene, rather than simply using images.

    Each of the floors presents a custom-designed (and engineered!) space, beautifully adorned with surfaces and interior design elements from ICG — a process that, due to its level of detail, required painstaking shoulder-to-shoulder collaboration with the client. Animations were also created for each of the pins, ranging from simpler ones — like a selection of slabs being highlighted in the Material Library — to more advanced ones, such as the Caveau revealing itself behind the Vault, or a listening room coming to life on the ground floor to showcase the “Hypertouch” technology.

    Given the numerous artistic and design collaborations the Group engages in, we also added a “bonus” floor — the Pop-Up Window — where a new setting or collaboration can be showcased every few months. We like this touch, as it allows the experience to evolve over time, not only in its more information-oriented pages.

    The experience also includes two editorial pages: one showcasing every ICG location around the world, and the other highlighting the events organized at those locations.

    Challenges

    When proposing this project to the client, we knew it was something they hadn’t considered (or even thought possible!) before that moment. We also knew we were asking them to take a leap of faith, to some extent.

    Being launched for the 2025 Milan Design Week (yes, you’re the first ones seeing this!), we’re incredibly proud to show how — through strategy, creative alignment, and close collaboration with the client — truly experiential and visually stunning results can be achieved, even in the B2B space.

    Personal Notes

    The final product we achieved with ICG Galleries is the result of many learnings from previous experiences — particularly in terms of WebGL experimentation — as well as a stable, proven tech stack and a well-oiled collaboration between the Dev and 3D teams.

    Technically speaking, we’re very proud of the smoothness of the camera scrolling, as well as the synchronization of the pins with the 3D environment and its connected HTML overlays.

    What definitely pushed us was adding a variety of animations to every interaction — from the technologies to the living spaces — creating an experience that didn’t feel like a static image, but rather a living, breathing space.

    Tech Stack

    Vue.js + Nuxt.js (SSR), Three.js, GSAP, Blender, WordPress

    Emerging from the politically charged atmosphere of Bologna in the late ’70s, the Intrusion Project serves as a living tribute to Radio Alice — a pioneering force in the free radio movement that sparked a cultural revolution.

    A project born from passion, it features seven audio-reactive shaders designed to amplify the powerful tracks of six genre-defining Italian underground artists, who reinterpreted the recording of the free radio’s final 23 minutes before it was broken into and shut down by police in 1977.

    While designing the Radio Alice experience, we had two certainties. Design-wise, we wanted to reverse our usual approach by inviting members of our creative team who are typically less tied to digital to lead the design — intentionally foregoing recognized (and self-imposed) UI/UX standards in favor of a bolder, grittier experience.

    Conceptually, we wanted to create an incisive moment that spoke directly to the gut of users, but we were also dead set on not letting the design and visuals overshadow the strong, generational narrative the project conveyed.

    The design employs a brutalist aesthetic that emphasizes raw, unpolished elements — mirroring the disruptive nature of the radio station itself — while bringing the archival sounds of Radio Alice to life in a visually dynamic environment.

    By creating a space where users can interact with and connect to the past, the site aims to embed the rebellious spirit of Radio Alice into our collective consciousness. The goal is to leave a trace as indelible as the events of 1977 themselves, encouraging new generations to absorb, reflect, and find inspiration in the courage of those voices.

    Challenges

    The first task for the project was defining a cohesive and scalable art direction for the seven audio-reactive shaders, tackled by our Interaction Designer in collaboration with the creative team. Once we had defined the visuals, the next challenge was transforming the source audio into texture via FFT (Fast Fourier Transform), an algorithm that breaks sound down into its individual frequencies, allowing our visuals to be influenced by a variety of inputs instead of just the combined track.

    As always, when working on the web, optimization was also a key focus — aiming to create the lightest possible shaders that could perform well across all kinds of devices.

    Taking a step back from the technical elements of the website, Radio Alice was one of the first opportunities for our Development team to implement shaders they hadn’t directly developed themselves, leading to a smoother collaboration with our Interaction Designer.

    Personal Notes

    We are particularly proud of the audio-reactive shaders. These not only create a mesmerizing visual experience synced to historical broadcasts but also embody Radio Alice’s chaos and creativity.

    A detail we’d like to highlight is the ability to switch the experience’s color palette between three versions. The reason behind this choice is actually quite simple — and a bit childish: while the iconic red is how we showcase the project and probably the most authentic representation of Radio Alice, the black-and-white and white-on-black versions were just too stunning to leave out!

    Tech Stack

    Vue.js + Nuxt.js, OGL, GSAP, Web Audio Beat Detector

    More about me

    My academic background is anything but related to programming or design. I graduated from a classical high school and chose a university path that combined the worlds of communication, design, and development.

    About 12 years ago, I began my journey at Studio Idee Materia, an independent creative agency in a small town in the province of Venice; my last project there, Bertani, earned my first SOTD (Site of the Day) on Awwwards.

    For the past four years, my path has focused more on frontend development, since I joined Adoratorio Studio, an independent creative studio known for its drive toward innovation and the creation of deeply branded digital experiences.

    Here, together with Andrea, Daniele, and the entire team, we share daily challenges and a philosophy that we try to convey and bring into the projects we develop.

    Our Philosophy

    It’s a philosophy based on obsessive attention to detail, the desire to always experiment with something new in each project, setting medium- to long-term goals, knowing that nothing is magically achieved overnight, and finally, being aware that the way we implement or approach a problem isn’t always the best one — but it’s the one that suits us best.

    Our Stack

    Depending on the needs of the project at hand, we’ve developed different stacks:

    • The most commonly used stack is based on Vue.js + Nuxt.js + WordPress as the backend. Nuxt is used in both of its variants: SSR mode on a Node.js server, and generate mode deployed via CI/CD on Firebase servers. This stack is used for all corporate projects — especially in generate mode — to meet SEO requirements.
    • For smaller and/or more experimental projects, or those without SEO concerns, we sometimes use a simpler stack that relies on Vite.js.
    • For projects involving 3D, we’ve developed two stacks as well: one using Three.js, and a lighter one using OGL — used when we only need to write shaders without relying on 3D models.

    Looking forward

    As for our current experimentations, we’re still refining our skills in Three.js — particularly in optimization, shader development, and workflow integration with the 3D team.

    Another topic we deeply care about is accessibility, as we believe our projects — however flashy — should be accessible to everyone, without compromise.

    Last but not least, and I felt this needed to be addressed: in such a fast-paced, ever-evolving digital world, we’re continuously finding ways to use AI as a tool to support our creative coding and ideas — not as a jack-of-all-trades meant to replace us.

    Final thoughts

    If you’re a young (or new) developer, my tip is: don’t be afraid to experiment. Today more than ever, technology evolves rapidly and offers infinite possibilities to create new and exciting things. Don’t limit yourself to what you already know — go out and explore uncharted territory.

    When you experiment and try new things, sometimes it ends in failure.

    But a few, precious times, it ends in something wonderful.



    Source link

  • An In-Depth Look at CallerMemberName (and some Compile-Time trivia) | Code4IT

    An In-Depth Look at CallerMemberName (and some Compile-Time trivia) | Code4IT


    Let’s dive deep into the CallerMemberName attribute and explore its usage from multiple angles. We’ll see various methods of invoking it, shedding light on how it is defined at compile time.

    Table of Contents

    Just a second! 🫷
    If you are here, it means that you are a software developer.
    So, you know that storage, networking, and domain management have a cost .

    If you want to support this blog, please ensure that you have disabled the adblocker for this site.
    I configured Google AdSense to show as few ADS as possible – I don’t want to bother you with lots of ads, but I still need to add some to pay for the resources for my site.

    Thank you for your understanding.
    Davide

    Method names change. And if you reference method names manually in several places, you’ll spend a lot of time updating them.

    Luckily for us, in C#, we can use an attribute named CallerMemberName.

    This attribute can be applied to an optional parameter in the method signature so that, unless the caller passes a value explicitly, the compiler fills it with the name of the calling method.

    public void SayMyName([CallerMemberName] string? methodName = null) =>
     Console.WriteLine($"The method name is {methodName ?? "NULL"}!");
    

    It’s important to note that the parameter must be optional (here, a nullable string defaulting to null): if the caller passes a value, that value is used; otherwise, the compiler injects the name of the caller method. Well, if the caller method has a name! 👀

    Getting the caller method’s name via direct execution

    The easiest example is the direct call:

    private void DirectCall()
    {
      Console.WriteLine("Direct call:");
      SayMyName();
    }
    

    Here, the method prints:

    Direct call:
    The method name is DirectCall!
    

    In fact, we are not specifying the value of the methodName parameter in the SayMyName method, so it defaults to the caller’s name: DirectCall.

    CallerMemberName when using explicit parameter name

    As we already said, we can specify the value:

    private void DirectCallWithOverriddenName()
    {
      Console.WriteLine("Direct call with overridden name:");
      SayMyName("Walter White");
    }
    

    Prints:

    Direct call with overridden name:
    The method name is Walter White!
    

    It’s important to note that the compiler sets the methodName parameter only if it is not otherwise specified.

    This means that if you call SayMyName(null), the value will be null – because you explicitly passed the value.

    private void DirectCallWithNullName()
    {
      Console.WriteLine("Direct call with null name:");
      SayMyName(null);
    }
    

    The printed text is then:

    Direct call with null name:
    The method name is NULL!
    

    CallerMemberName when the method is called via an Action

    Let’s see what happens when calling it via an Action:

    public void CallViaAction()
    {
      Console.WriteLine("Calling via Action:");
    
      Action<int> action = (_) => SayMyName();
      var singleElement = new List<int> { 1 };
      singleElement.ForEach(s => action(s));
    }
    

    This method prints this text:

    Calling via Action:
    The method name is CallViaAction!
    

    Now, things get interesting: the CallerMemberName attribute resolves to the name of the method that contains the overall expression, not the immediate syntactic caller.

    We can see that, syntactically, the caller is the ForEach method (which is a method of the List<T> class). But, in the final result, the ForEach method is ignored, as the method is actually called by the CallViaAction method.

    This can be verified by accessing the compiler-generated code, for example by using Sharplab.

    Compiled code of Action with pre-set method name

    At compile time, since no value is passed to the SayMyName method, the parameter is auto-populated with the enclosing method’s name. Then, the ForEach method calls SayMyName, but methodName is already defined at compile time.
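    The lowered code for the Action case can be sketched roughly like this (a simplified sketch to show where the name is baked in, not the exact Sharplab output – the real compiler also lifts the lambda into a generated method):

    ```csharp
    // Sketch: the caller's name is inserted at the call site at compile time,
    // before the delegate is ever handed to ForEach.
    private void CallViaAction()
    {
      Console.WriteLine("Calling via Action:");

      Action<int> action = (_) => SayMyName("CallViaAction"); // name baked in by the compiler
      var singleElement = new List<int> { 1 };
      singleElement.ForEach(s => action(s));
    }
    ```

    Since the string is already part of the delegate body, whoever ends up invoking the delegate later has no influence on it.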

    Lambda executions and the CallerMemberName attribute

    The same behaviour occurs when using lambdas:

    private void CallViaLambda()
    {
      Console.WriteLine("Calling via lambda expression:");
    
      void lambdaCall() => SayMyName();
      lambdaCall();
    }
    

    The final result prints out the name of the caller method.

    Calling via lambda expression:
    The method name is CallViaLambda!
    

    Again, the magic happens at compile time:

    Compiled code for a lambda expression

    The lambda is compiled into this form:

    [CompilerGenerated]
    private void <CallViaLambda>g__lambdaCall|0_0()
    {
      SayMyName("CallViaLambda");
    }
    

    This makes the parent method’s name available in the compiled code.

    CallerMemberName when invoked from a Dynamic type

    What if we try to execute the SayMyName method by accessing the root class (in this case, CallerMemberNameTests) as a dynamic type?

    private void CallViaDynamicInvocation()
    {
      Console.WriteLine("Calling via dynamic invocation:");
    
      dynamic dynamicInstance = new CallerMemberNameTests(null);
      dynamicInstance.SayMyName();
    }
    

    Oddly enough, the attribute does not work as one might have expected; instead, it prints NULL:

    Calling via dynamic invocation:
    The method name is NULL!
    

    This happens because, at compile time, there is no reference to the caller method.

    private void CallViaDynamicInvocation()
    {
      Console.WriteLine("Calling via dynamic invocation:");
      
      object arg = new C();
      if (<>o__0.<>p__0 == null)
      {
        Type typeFromHandle = typeof(C);
        CSharpArgumentInfo[] array = new CSharpArgumentInfo[1];
        array[0] = CSharpArgumentInfo.Create(CSharpArgumentInfoFlags.None, null);
        <>o__0.<>p__0 = CallSite<Action<CallSite, object>>.Create(Microsoft.CSharp.RuntimeBinder.Binder.InvokeMember(CSharpBinderFlags.ResultDiscarded, "SayMyName", null, typeFromHandle, array));
      }
      <>o__0.<>p__0.Target(<>o__0.<>p__0, arg);
    }
    

    I have to admit that I don’t understand why this happens: if you know, drop a comment explaining what’s going on – I’d love to learn more about it! 📩

    Event handlers can get the method name

    Then, we have custom events.

    We define events in one place, but they are executed indirectly.

    private void CallViaEventHandler()
    {
      Console.WriteLine("Calling via events:");
      var eventSource = new MyEventClass();
      eventSource.MyEvent += (sender, e) => SayMyName();
      eventSource.TriggerEvent();
    }
    
    public class MyEventClass
    {
      public event EventHandler MyEvent;
      public void TriggerEvent() =>
      // Raises an event which in our case calls SayMyName via subscribing lambda method
      MyEvent?.Invoke(this, EventArgs.Empty);
    }
    

    So, what will the result be? “Who” is the caller of this method?

    Calling via events:
    The method name is CallViaEventHandler!
    

    Again, it all boils down to how the method is generated at compile time: even if the actual execution is performed “asynchronously” – I know, it’s not the most obvious word for this case – at compile time the method is declared by the CallViaEventHandler method.

    CallerMemberName from the Class constructor

    Lastly, what happens when we call it from the constructor?

    public CallerMemberNameTests(IOutput output) : base(output)
    {
     Console.WriteLine("Calling from the constructor");
     SayMyName();
    }
    

    We can consider constructors to be a special kind of method, but what’s in their names? What can we find?

    Calling from the constructor
    The method name is .ctor!
    

    Yes, the actual method name is .ctor! Regardless of the class name, the constructor is considered to be a method with that specific internal name.

    Wrapping up

    In this article, we started from a “simple” topic but learned a few things about how code is compiled and the differences between runtime and compile time.

    As always, things are not as easy as they appear!

    This article first appeared on Code4IT 🐧

    I hope you enjoyed this article! Let’s keep in touch on LinkedIn, Twitter or BlueSky! 🤜🤛

    Happy coding!

    🐧





    Source link

  • How To Convert A List To A String In Python (With Examples)



    How To Convert A List To A String In Python (With Examples)
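    The usual technique the title refers to is Python’s str.join(), with a str() mapping step for lists that contain non-string items; a minimal sketch:

    ```python
    # Join a list of strings into one string with a chosen separator.
    words = ["convert", "a", "list", "to", "a", "string"]
    sentence = " ".join(words)
    print(sentence)  # convert a list to a string

    # For mixed-type lists, convert each item to str first.
    mixed = ["item", 42, 3.5]
    joined = ", ".join(str(x) for x in mixed)
    print(joined)  # item, 42, 3.5
    ```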



    Source link

  • JavaScript Location.reload() Explained (With Examples)

    JavaScript Location.reload() Explained (With Examples)


    In modern web development, there are times when a page needs to refresh itself without the user pressing a button. Whether you are responding to updated content, clearing form inputs, or forcing a session reset, JavaScript provides a simple method for this task: location.reload().

    This built-in method belongs to the window.location object and allows developers to programmatically reload the current web page. It is a concise and effective way to refresh a page under controlled conditions, without relying on user interaction.

    What Is JavaScript location.reload()?

    The location.reload() method refreshes the page it is called on. In essence, it behaves the same way a user would if they clicked the browser’s reload button. However, because it is called with JavaScript, the action can be triggered automatically or in response to specific events. 

    Here is the most basic usage:

    location.reload();

    This line of code tells the browser to reload the current page. It does not require any parameters by default and typically loads the page from the browser’s cache. Note that you can use our free resources (namely, online code editors) to follow along with this discussion.

    Forcing a Hard Reload

    Sometimes a regular reload is not enough, especially when you want to ensure that the browser fetches the latest version of the file from the server instead of using the cached copy. You can force a hard reload by passing true as a parameter:

    location.reload(true);

    However, it is important to note that modern browsers have deprecated this parameter in many cases. Instead, they treat all reloads the same. If you need to fully bypass the cache, server-side headers or a versioned URL might be a more reliable approach.
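    The versioned-URL idea can be sketched with a small helper (the helper name is ours, for illustration); in a browser you would pair it with location.replace():

    ```javascript
    // Sketch: instead of the deprecated reload(true), append a changing query
    // parameter so the browser treats the URL as new and bypasses its cache.
    function withCacheBuster(url, stamp = Date.now()) {
      // Append with '&' if the URL already has a query string, '?' otherwise.
      const separator = url.includes('?') ? '&' : '?';
      return `${url}${separator}v=${stamp}`;
    }

    // Browser-only usage:
    // location.replace(withCacheBuster(location.pathname));
    ```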

    And let’s talk syntax:

    So what about the false parameter? That reloads the page using the browser cache. Note that false is also the default: if you run reload() without a parameter, you’re actually running location.reload(false). This is covered in the Mozilla developer docs.

    So when do you use location.reload(true)? One common situation is when the page shows outdated information. A hard reload can also bypass caching issues on the client side.

    Common Use Cases

    The location.reload() method is used across a wide range of situations. Here are a few specific scenarios where it’s especially useful:

    1. Reload after a form submission:

    document.getElementById("myForm").onsubmit = function() {
        setTimeout(function() {
            location.reload();
        }, 1000);
    };

    This use case helps clear form inputs or reset the page state after the form has been processed. You can test this in an online JavaScript editor: no download required, just enter the code and click run to see the result immediately.

    2. Refresh after receiving new data:

    In web applications that rely on live data, such as dashboards or status monitors, developers might use location.reload() to ensure the page displays the most current information after an update.
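    A minimal sketch of that pattern, polling a hypothetical /version endpoint (the endpoint name and response shape are assumptions for illustration); only the decision helper is meant as reusable code:

    ```javascript
    // Pure decision helper: reload only when a previously known version is
    // replaced by a different one (the first poll just records the baseline).
    function shouldReload(knownVersion, fetchedVersion) {
      return knownVersion !== null && fetchedVersion !== knownVersion;
    }

    let knownVersion = null;

    async function checkForUpdate() {
      const res = await fetch('/version');   // assumed endpoint
      const { version } = await res.json();  // assumed shape: { version: "abc123" }
      if (shouldReload(knownVersion, version)) {
        location.reload();                   // browser-only
      }
      knownVersion = version;
    }

    // Poll every 60 seconds (browser context):
    // setInterval(checkForUpdate, 60000);
    ```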

    3. Making a manual refresh button:

    <button onclick="location.reload();">Refresh Page</button>

    This is a simple way to give users control over when to reload, particularly in apps that fetch new content periodically.

    4. Reload a Page Without Keeping the Current Page in Session History

    This is another common use. It looks like this.

    window.location.replace(window.location.href);

    Basically, if a user presses the back button after they hit reload, they might be taken back to a page that no longer reflects the current application logic. The window.location.replace() method navigates to a new URL, often the same one, and replaces the current page in the session history.

    This effectively reloads the page without leaving a trace in the user’s history stack. It is particularly useful for login redirects, post-submission screens, or any scenario where you want to reset the page without allowing users to revisit the previous state using the back button.

    Limitations and Best Practices

    While location.reload() is useful, it should be used thoughtfully. Frequent or automatic reloads can frustrate users, especially if they disrupt input or navigation. In modern development, reloading an entire page is sometimes considered a heavy-handed approach.

    For dynamic updates, using JavaScript to update only part of the page, through DOM manipulation or asynchronous fetch requests, is often more efficient and user-friendly.

    Also, keep in mind that reloading clears unsaved user input and resets page state. It can also cause data to be resubmitted if the page was loaded through a form POST, which may trigger browser warnings or duplicate actions. If you’re looking for a job, make sure to brush up on this and any other common JavaScript interview questions.

    Smarter Alternatives to Reloading the Page

    While location.reload() is simple and effective, it is often more efficient to update only part of a page rather than reloading the entire thing. Reloading can interrupt the user experience, clear form inputs, and lead to unnecessary data usage. In many cases, developers turn to asynchronous techniques that allow content to be refreshed behind the scenes.

    AJAX, which stands for Asynchronous JavaScript and XML, was one of the earliest ways to perform background data transfers without refreshing the page. It allows a web page to send or receive data from a server and update only the necessary parts of the interface. Although the term AJAX often brings to mind older syntax and XML data formats, the concept remains vital and is now commonly used with JSON and modern JavaScript methods.

    One of the most popular modern approaches is the Fetch API. Introduced as a cleaner and more flexible alternative to XMLHttpRequest, the Fetch API uses promises to handle asynchronous requests. It allows developers to retrieve or send data from a server and then apply those updates directly to the page using the Document Object Model, or DOM.

    Here is a simple example:

    fetch('/api/data')
      .then(response => response.json())
      .then(data => {
        document.getElementById('content').textContent = data.message;
      });

    This example retrieves data from the server and updates only a single element on the page. It is fast, efficient, and keeps the user interface responsive.

    By using AJAX or the Fetch API, developers can create a more fluid and interactive experience. These tools allow for partial updates, background syncing, and real-time features without forcing users to wait for an entire page to reload. In a world where performance and responsiveness matter more than ever, these alternatives offer a more refined approach to managing content updates on the web.

    Conclusion

    The location.reload() method in JavaScript is a straightforward way to refresh the current web page. Whether used for resetting the interface or updating content, it offers a quick and accessible solution for common front-end challenges. But like all tools in web development, it should be used with an understanding of its impact on user experience.

    Before reaching for a full page reload, consider whether updating the page’s content directly might serve your users better. When applied appropriately, location.reload() can be a useful addition to your JavaScript toolkit.

    Want to put this into action? Add it to a JavaScript project and test it out.

     





    Source link