Blog

  • How Hackers Use Vector Graphics for Phishing Attacks

    How Hackers Use Vector Graphics for Phishing Attacks


    Introduction

In the ever-evolving cybersecurity landscape, attackers constantly seek new ways to bypass traditional defences. One of the latest and most insidious methods involves using Scalable Vector Graphics (SVG)—a file format typically associated with clean, scalable images for websites and applications. But beneath their seemingly harmless appearance, SVGs can harbour malicious scripts capable of executing sophisticated phishing attacks.

    This blog explores how SVGs are weaponized, why they often evade detection, and what organizations can do to protect themselves.

    SVGs: More Than Just Images

    SVG files differ fundamentally from standard image formats like JPEG or PNG. Instead of storing pixel data, SVGs use XML-based code to define vector paths, shapes, and text. This makes them ideal for responsive design, as they scale without losing quality. However, this same structure allows SVGs to contain embedded JavaScript, which can execute when the file is opened in a browser—something that happens by default on many Windows systems.
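
    To make this concrete, here is a minimal, harmless sketch of what such a file can look like: a perfectly valid SVG that draws a circle and also runs a script the moment a browser renders it (the alert stands in for an attacker's payload).

    <svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">
      <circle cx="50" cy="50" r="40" fill="#4a90d9" />
      <script><![CDATA[
        // executes as soon as the file is opened in a browser
        alert("script running inside an SVG");
      ]]></script>
    </svg>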

Delivery

    • Email Attachments: Sent via spear-phishing emails with convincing subject lines and sender impersonation.
    • Cloud Storage Links: Shared through Dropbox, Google Drive, OneDrive, etc., often bypassing email filters.

Fig 1: Attack chain of SVG campaign

    The image illustrates the SVG phishing attack chain in four distinct stages: it begins with an email containing a seemingly harmless SVG attachment, which, when opened, triggers JavaScript execution in the browser, ultimately redirecting the user to a phishing site designed to steal credentials.

    How the attack works:

When a target opens an SVG attachment from a phishing email, the file typically launches in their default web browser—unless they have a specific application set to handle SVG files—allowing any embedded scripts to execute immediately.

Fig 2: Phishing email of SVG campaign

Attackers commonly send phishing emails with deceptive subject lines like “Reminder for your Scheduled Event 7212025.msg” or “Meeting-Reminder-7152025.msg”, paired with innocuous-looking attachments named “Upcoming Meeting.svg” or “Your-to-do-List.svg” to avoid raising suspicion. Once opened, the embedded JavaScript within the SVG file silently redirects the victim to a phishing site that closely mimics trusted services like Microsoft 365 or Google Workspace, as shown in Fig. 3.

Fig 3: Malicious SVG code

In the analyzed SVG sample, the attacker embeds a <script> tag within the SVG, using a CDATA section to hide malicious logic. The code includes a long hex-encoded string (Y) and a short XOR key (q), which decodes into a JavaScript payload when processed. This decoded payload is then executed using window.location = 'javascript:' + v;, effectively redirecting the victim to a phishing site upon opening the file. An unused email address variable (g.rume@mse-filterpressen.de) is likely a decoy or part of targeted delivery.
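
    The decoding scheme described here is a classic repeating-key XOR over a hex blob. A minimal Python sketch of the deobfuscation logic an analyst would apply (with placeholder values, not the strings from the real sample):

    Y = "0a5f1c1947515a1a"  # placeholder for the long hex-encoded string
    q = "k3y"               # placeholder for the short XOR key

    data = bytes.fromhex(Y)
    key = q.encode()
    payload = bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

    print(payload.decode(errors="replace"))  # prints the recovered JavaScript: alert(1)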

Upon decryption, we found the command-and-control (C2) phishing link:

    hxxps://hju[.]yxfbynit[.]es/koRfAEHVFeQZ!bM9

Fig 4: Cloudflare CAPTCHA gate

The link directs to a phishing site protected by a Cloudflare CAPTCHA gate. After you check the box to verify you’re human, you’re redirected to a malicious page controlled by the attackers.

Fig 5: Office 365 login form

    This page embeds a genuine-looking Office 365 login form, allowing the phishing group to capture and validate your email and password credentials simultaneously.

    Conclusion: Staying Ahead of SVG-Based Threats

    As attackers continue to innovate, organizations must recognize the hidden risks in seemingly benign file formats like SVG. Security teams should:

• Implement deep content inspection for SVG files (a starting point is sketched after this list).
    • Disable automatic browser rendering of SVGs from untrusted sources.
    • Educate employees about the risks of opening unfamiliar attachments.
    • Monitor for unusual redirects and script activity in email and web traffic.
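
    For the first recommendation, a content inspector only needs to parse the SVG's XML and flag active content. A minimal Python sketch of that idea (illustrative, not a production-grade scanner):

    import sys
    import xml.etree.ElementTree as ET

    def scan_svg(path):
        findings = []
        for _, elem in ET.iterparse(path):
            tag = elem.tag.rsplit("}", 1)[-1].lower()  # drop the XML namespace
            if tag == "script":
                findings.append("embedded <script> element")
            for attr, value in elem.attrib.items():
                local = attr.rsplit("}", 1)[-1].lower()
                if local.startswith("on"):  # onload, onclick, ...
                    findings.append(f"{local} event handler on <{tag}>")
                if "javascript:" in value.lower():
                    findings.append(f"javascript: URI on <{tag}>")
        return findings

    if __name__ == "__main__":
        hits = scan_svg(sys.argv[1])
        print("\n".join(hits) if hits else "no obvious active content")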

    SVGs may be powerful tools for developers, but in the wrong hands, they can become potent weapons for cybercriminals. Awareness and proactive defense are key to staying ahead of this emerging threat.

    IOCs

    c78a99a4e6c04ae3c8d49c8351818090

    f68e333c9310af3503942e066f8c9ed1

    2ecce89fa1e5de9f94d038744fc34219

    6b51979ffae37fa27f0ed13e2bbcf37e

    4aea855cde4c963016ed36566ae113b7

    84ca41529259a2cea825403363074538

     

    Authors:

    Soumen Burma

    Rumana Siddiqui



    Source link

• 3 (and more) ways to set configuration values in .NET | Code4IT

    3 (and more) ways to set configuration values in .NET | Code4IT


    Every application relies on some configurations. Many devs set them up using only the appsettings file. But there’s more!


    Needless to say, almost every application needs to deal with some configurations. There are tons of use cases, and you already have some of them in mind, don’t you?

If you’re working with .NET, you’ve probably already used the appsettings.json file. It’s a good starting point, but it may not be enough in the case of complex applications (and complex deployments).

    In this article, we will learn some ways to set configurations in a .NET API application. We will use the appsettings file, of course, and some other ways such as the dotnet CLI. Let’s go! 🚀

    Project setup

    First things first: let’s set up the demo project.

    I have created a simple .NET 6 API application using Minimal APIs. This is my whole application (yes, less than 50 lines!)

    using Microsoft.Extensions.Options;
    
    namespace HowToSetConfigurations
    {
        public class Program
        {
            public static void Main(string[] args)
            {
                WebApplicationBuilder builder = WebApplication.CreateBuilder(args);
    
                builder.Services.Configure<MyRootConfig>(
                    builder.Configuration.GetSection("RootConfig")
                );
    
                builder.Services.Configure<JsonOptions>(o =>
                {
                    o.SerializerOptions.WriteIndented = true;
                });
    
                WebApplication app = builder.Build();
    
                app.MapGet("/config", (IOptionsSnapshot<MyRootConfig> options) =>
                {
                    MyRootConfig config = options.Value;
                    return config;
                });
    
                app.Run();
            }
        }
    
        public class MyRootConfig
        {
            public MyNestedConfig Nested { get; set; }
            public string MyName { get; set; }
        }
    
        public class MyNestedConfig
        {
            public int Skip { get; set; }
            public int Limit { get; set; }
        }
    }
    

    Nothing else! 🤩

In short, I scaffold the WebApplicationBuilder, configure it to map the settings section named RootConfig to my class of type MyRootConfig, and then run the application.

    I then expose a single endpoint, /config, which returns the current configurations, wrapped within an IOptionsSnapshot<MyRootConfig> object.

    Where is the source of the application’s configurations?

    As stated on the Microsoft docs website, here 🔗, the WebApplicationBuilder

    Loads app configuration in the following order from:
    appsettings.json.
    appsettings.{Environment}.json.
    User secrets when the app runs in the Development environment using the entry assembly.
    Environment variables.
    Command-line arguments.

    So, yeah, we have several possible sources, and the order does matter.

    Let’s see a bunch of them.

Define settings within the appsettings.json file

The most common way is by using the appsettings.json file. Here, in a structured and hierarchical way, you can define all the settings used as a baseline for your application.

    A typical example is this one:

    {
      "Logging": {
        "LogLevel": {
          "Default": "Information",
          "Microsoft.AspNetCore": "Warning"
        }
      },
      "AllowedHosts": "*",
      "RootConfig": {
        "MyName": "Davide",
        "Nested": {
          "Skip": 2,
          "Limit": 3
        }
      }
    }
    

    With this file, all the fields within the RootConfig element will be mapped to the MyRootConfig class at startup. That object can then be returned using the /config endpoint.

    Running the application (using Visual Studio or the dotnet CLI) you will be able to call that endpoint and see the expected result.

    Configuration results from plain Appsettings file

    Use environment-specific appsettings.json

    Now, you probably know that you can use other appsettings files with a name such as appsettings.Development.json.

    appsettings.Development file

    With that file, you can override specific configurations using the same structure, but ignoring all the configs that don’t need to be changed.

    Let’s update the Limit field defined in the “base” appsettings. You don’t need to recreate the whole structure just for one key; you can use this JSON instead:

    {
      "RootConfig": {
        "Nested": {
          "Limit": 9
        }
      }
    }
    

    Now, if we run the application using VS we will see this result:

    The key defined in the appsettings.Development.json file is replaced in the final result

Ok, but what made .NET understand that I wanted to use that file? It’s a matter of Environment variables and Launch profiles.

    How to define profiles within the launchSettings.json file

    Within the Properties folder in your project, you can see a launchSettings.json file. As you might expect, that file describes how you can launch the application.

    launchSettings file location in the solution

    Here we have some Launch profiles, and each of them specifies an ASPNETCORE_ENVIRONMENT variable. By default, its value is set to Development.

    "profiles": {
        "HowToSetConfigurations": {
          "commandName": "Project",
          "dotnetRunMessages": true,
          "launchBrowser": true,
          "launchUrl": "config",
          "applicationUrl": "https://localhost:7280;http://localhost:5280",
          "environmentVariables": {
            "ASPNETCORE_ENVIRONMENT": "Development"
          }
        },
    }
    

    Now, recall that the environment-specific appsettings file name is defined as appsettings.{Environment}.json. Therefore, by running your application with Visual Studio using the HowToSetConfigurations launch profile, you’re gonna replace that {Environment} with Development, thus using the appsettings.Development.json.

It goes without saying that you can use every value you prefer – such as Staging, MyCustomEnvironmentName, and so on.

    How to define the current Environment with the CLI

    If you are using the dotnet CLI you can set that environment variable as

    dotnet run --ASPNETCORE_ENVIRONMENT=Development
    

    or, in a simpler way, you can use

    dotnet run --environment Development
    

    and get the same result.

    How do nested configurations get resolved?

    As we’ve seen in a previous article, even if we are using configurations defined in a hierarchical structure, in the end, they are transformed into key-value pairs.

    The Limit key as defined here:

    {
      "RootConfig": {
        "Nested": {
          "Limit": 9
        }
      }
    }
    

    is transformed into

    {
        "Key": "RootConfig:Nested:Limit",
        "Value": "9"
    },
    

    with the : separator. We will use this info shortly.
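
    The same flattened keys can also be used to read values directly from an IConfiguration instance. As a quick sketch, a hypothetical /limit endpoint added to our demo app could fetch that single value like this:

    app.MapGet("/limit", (IConfiguration configuration) =>
    {
        // the flattened key, with the ":" separator, exactly as shown above
        int limit = configuration.GetValue<int>("RootConfig:Nested:Limit");
        return limit;
    });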

    Define configurations in the launchSettings file

    As we’ve seen before, each profile defined in the launchSettings file describes a list of environment variables:

    "environmentVariables": {
      "ASPNETCORE_ENVIRONMENT": "Development"
    }
    

    This means that we can also define our configurations here, and have them loaded when using this specific profile.

    From these configurations

    "RootConfig": {
        "MyName": "Davide",
        "Nested": {
          "Skip": 2,
          "Limit": 3
        }
      }
    

    I want to update the MyName field.

    I can then update the current profile as such:

    "environmentVariables": {
      "ASPNETCORE_ENVIRONMENT": "Development",
      "RootConfig:MyName": "Mr Bellone"
    }
    

    so that, when I run the application using that profile, I will get this result:

    The RootConfig:MyName is replaced, its value is taken from the launchSettings file

    Have you noticed the key RootConfig:MyName? 😉

🔎 Notice that now we have both MyName = Mr Bellone, as defined in the launchSettings file, and Limit = 9, since we’re still using the appsettings.Development.json file (because of that "ASPNETCORE_ENVIRONMENT": "Development").

    How to define the current profile with the CLI

    Clearly, we can use the dotnet CLI to load the whole environment profile. We just need to specify it using the --launch-profile flag:

    dotnet run --launch-profile=HowToSetConfigurations
    

    Define application settings using the dotnet CLI

    Lastly, we can specify config values directly using the CLI.

    It’s just a matter of specifying the key-value pairs as such:

    dotnet run --RootConfig:Nested:Skip=55
    

    And – TAH-DAH! – you will see this result:

    JSON result with the key specified on the CLI

    ❓ A question for you! Notice that, even though I specified only the Skip value, both Limit and MyName have the value defined before. Do you know why it happens? Drop a message below if you know the answer! 📩

    Further readings

    As always, there’s more!

If you want to know more about how .NET APIs load and start, you should have a look at this page:

    🔗 ASP.NET Core Web Host | Microsoft Docs

    Ok, now you know different approaches for setting configurations.
    How do you know the exact values that are set in your application?

    🔗 The 2 secret endpoints I create in my .NET APIs | Code4IT

    This article first appeared on Code4IT

    Wrapping up

    Ok then, in this article we’ve seen different approaches you can use to define configurations in your .NET API projects.

    Knowing what you can do with the CLI can be helpful especially when using CI/CD, in case you need to run the application using specific keys.

    Do you know any other ways to define configs?

    Happy coding!

    🐧



    Source link

  • Building Aether 1: Sound Without Boundaries

    Building Aether 1: Sound Without Boundaries



    Aether 1 began as an internal experiment at OFF+BRAND: Could we craft a product‑launch site so immersive that visitors would feel the sound?

    The earbuds themselves are fictional, but every pixel of the experience is real – an end‑to‑end sandbox where our brand, 3D, and engineering teams pushed WebGL, AI‑assisted tooling, and narrative design far beyond a typical product page.

    This technical case study is the living playbook of that exploration. Inside you’ll find:

    • 3D creation workflow – how we sculpted, animated, and optimised the earphones and their charging case.
    • Interactive WebGL architecture – the particle flow‑fields, infinite scroll, audio‑reactive shaders, and custom controllers that make the site feel alive.
    • Performance tricks – GPU‑friendly materials, faux depth‑of‑field, selective bloom, and other tactics that kept the project running at 60 FPS on mobile hardware.
    • Tool stack & takeaways – what worked, what didn’t, and why every lesson here can translate to your own projects.

    Whether you’re a developer, designer, or producer, the next sections unpack the decisions, experiments, and hard‑won optimizations that helped us prove that “sound without boundaries” can exist on the web.

    1. 3D Creation Workflow

    By Celia Lopez

    3D creation of the headphone and case

    For the headphone shape, we needed to create one from scratch. To help ourselves quickly sketch out the ideas we had in mind, we used Midjourney. Thanks to references from the internet and the help of AI, we agreed on an artistic direction.

    Size reference and headphone creation

    To ensure the size matched a real-life reference, we used Apple headphones and iterated until we found something interesting. We used Figma to present all the iterations to the team, exporting three images – front, side, and back – each time to help them better visualize the object.

    Same for the case.

    Storyboard

    For the storyboard, we first sketched our ideas and tried to match each specific scene with a 3D visualization. 

    We iterated for a while before finalizing the still frames for each part. Some parts were too tricky to represent in 3D, so we adjusted the workflow accordingly.

    Motion

    So that everyone agrees on the flow, look, and feel, we created a full-motion version of it.

    Unwrapping and renaming

    To prepare the scene for a developer, we needed to spend some time unwrapping the UVs, cleaning the file, and renaming the elements. We used C4D exclusively for unwrapping since the shapes weren’t too complex. It’s also very important to rename all parts and organize the file so the developer can easily recognize which object is which. (In the example below, we show the technique – not the full workflow or a perfect unwrap.)

    Fluid flow baked

    Almost all the animations were baked from C4D to Blender and exported as .glb files.

    Timing

    We decided to start with an infinite scroll and a looped experience. When the user releases the scroll, seven anchors subtly and automatically guide the progression. To make it easier for the developer to divide the baked animation, we used specific timing for each step — 200 keyframes between each anchor.

    AO baking

    Because the headphones were rotating, we couldn’t bake the lighting. We only baked the Ambient Occlusion shadows to enhance realism. For that, after unwrapping the objects, we combined all the different parts of the headphones into a single object, applied a single texture with the Ambient Occlusion, and baked it in Redshift. Same for the case.

    Normal map baked

    For the Play‑Stade touchpad only, we needed a normal map, so we exported it. However, since the AO was already baked, the UVs had to remain the same.

    Camera path and target

    In order to ensure a smooth flow during the web experience, it was crucial to use a single camera. However, since we have different focal points, we needed two separate circular paths with different centers and sizes, along with a null object to serve as a target reference throughout the flow.

    2. WebGL Features and Interactive Architecture

    By Adrian Gubrica

    GPGPU particles

    Particles are a great way to add an extra layer of detail to 3D scenes, as was the case with Aether 1. To complement the calming motion of the audio waves, a flow‑field simulation was used — a technique known for producing believable and natural movement in particle systems. With the right settings, the resulting motion can also be incredibly relaxing to watch.

    To calculate the flow fields, noise algorithms — specifically Simplex4D — were used. Since these can be highly performance-intensive on the CPU, a GPGPU technique (essentially the WebGL equivalent of a compute shader) was implemented to run the simulation efficiently on the GPU. The results were stored and updated across two textures, enabling smooth and high-performance motion.
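
    As a rough sketch of such a setup, here is what the core of it might look like with Three.js's GPUComputationRenderer addon. Everything below is illustrative: renderer and particleMaterial are assumed to exist elsewhere, and the snoise stand-in should be swapped for a real Simplex4D implementation (e.g. Ashima's webgl-noise).

    import { GPUComputationRenderer } from "three/addons/misc/GPUComputationRenderer.js";

    const SIZE = 256; // 256 x 256 texture = ~65k particles

    const positionShader = /* glsl */ `
      uniform float uTime;

      // cheap hash stand-in; replace with a real simplex-noise snoise(vec4)
      float snoise(vec4 v) {
        return fract(sin(dot(v, vec4(12.9898, 78.233, 45.164, 94.673))) * 43758.5453) * 2.0 - 1.0;
      }

      void main() {
        vec2 uv = gl_FragCoord.xy / resolution.xy;
        vec4 pos = texture2D(texturePosition, uv);

        // three noise lookups build the flow-field velocity vector
        vec3 flow = vec3(
          snoise(vec4(pos.xyz * 0.5, uTime)),
          snoise(vec4(pos.xyz * 0.5 + 31.7, uTime)),
          snoise(vec4(pos.xyz * 0.5 + 74.2, uTime))
        );

        pos.xyz += flow * 0.01; // advect the particle along the field
        gl_FragColor = pos;
      }
    `;

    const gpgpu = new GPUComputationRenderer(SIZE, SIZE, renderer);
    const posTexture = gpgpu.createTexture(); // fill .image.data with start positions
    const posVariable = gpgpu.addVariable("texturePosition", positionShader, posTexture);
    gpgpu.setVariableDependencies(posVariable, [posVariable]); // ping-pong between two textures
    posVariable.material.uniforms.uTime = { value: 0 };
    gpgpu.init();

    // each frame: advance the simulation and feed the result to the particle material
    function update(time) {
      posVariable.material.uniforms.uTime.value = time;
      gpgpu.compute();
      particleMaterial.uniforms.uPositions.value =
        gpgpu.getCurrentRenderTarget(posVariable).texture;
    }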

    Smooth scene transitions

    To create a seamless transition between scenes, I developed a custom controller to manage when each scene should or shouldn’t render. I also implemented a manual way of controlling their scroll state, allowing me, for example, to display the last position of a scene without physically scrolling there. By combining this with a custom transition function that primarily uses GSAP to animate values, I was able to create both forward and backward animations to the target scene.

    It is important to note that all scenes and transitions are displayed within a “post‑processing scene,” which consists of an orthographic camera and a full‑screen plane. In the fragment shader, I merge all the renders together.

    This transition technique became especially tricky when transitioning at the end of each scroll in the main scene to create an infinite loop. To achieve this, I created two instances of the main scene (A and B) and swapped between them whenever a transition occurred.

    Custom scroll controller for infinite scrolling

    As mentioned earlier, the main scene features an infinite loop at both the start and end of the scroll, which triggers a transition back to the beginning or end of the scene. This behavior is enhanced with some resistance during the backward movement and other subtle effects. Achieving this required careful manual tweaking of the Lenis library.

    My initial idea was to use Lenis’ infinite: true property, which at first seemed like a quick solution – especially for returning to the starting scroll position. However, this approach required manually listening to the scroll velocity and predicting whether the scroll would pass a certain threshold to stop it at the right moment and trigger the transition. While possible, it quickly proved unreliable, often leading to unpredictable behavior like broken scroll states, unintended transitions, or a confused browser scroll history.

    Because of these issues, I decided to remove the infinite: true property and handle the scroll transitions manually. By combining Lenis.scrollTo(), Lenis.stop(), and Lenis.start(), I was able to recreate the same looping effect at the end of each scroll with greater control and reliability. An added benefit was being able to retain Lenis’s default easing at the beginning and end of the scroll, which contributed a smooth and polished feel.
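
    A stripped-down version of that manual loop might look like the following sketch, where playTransition is a hypothetical GSAP-based scene transition and the thresholds are illustrative:

    import Lenis from "lenis";

    const lenis = new Lenis();
    let transitioning = false;

    lenis.on("scroll", ({ scroll, limit, velocity }) => {
      if (transitioning) return;
      if (scroll >= limit) loop(0);                      // hit the end: loop back to the start
      else if (scroll <= 0 && velocity < 0) loop(limit); // hit the start: loop to the end
    });

    function loop(target) {
      transitioning = true;
      lenis.stop(); // freeze user input while the scenes swap
      playTransition(() => {
        lenis.scrollTo(target, { immediate: true });
        lenis.start();
        transitioning = false;
      });
    }

    function raf(time) {
      lenis.raf(time);
      requestAnimationFrame(raf);
    }
    requestAnimationFrame(raf);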

    Cursor with fluid simulation pass

    Fluid simulation triggered by mouse or touch movement has become a major trend on immersive websites in recent years. But beyond just being trendy, it consistently enhances the visual appeal and adds a satisfying layer of interactivity to the user experience.

    In my implementation, I used the fluid simulation as a blue overlay that follows the pointer movement. It also served as a mask for the Fresnel pass (explained in more detail below) and was used to create a dynamic displacement and RGB shift effect in the final render.

    Because fluid simulations can be performance‑intensive – requiring multiple passes to calculate realistic behavior – I downscaled it to just 7.5 percent of the screen resolution. This optimization still produced a visually compelling effect while maintaining smooth overall performance.

    Fresnel pass on the earphones

    In the first half of the main scene’s scroll progression, users can see the inner parts of the earphones when hovering over them, adding a nice interactive touch to the scene. I achieved this effect by using the fluid simulation pass as a mask on the earphones’ material.

    However, implementing this wasn’t straightforward at first, since the earphones and the fluid simulation use different coordinate systems. My initial idea was to create a separate render pass for the earphones and apply the fluid mask in that specific pass. But this approach would have been costly and introduced unnecessary complexity to the post‑processing pipeline.

    After some experimentation, I realized I could use the camera’s view position as a kind of screen‑space UV projection onto the material. This allowed me to accurately sample the fluid texture directly in the earphones’ material – exactly what I needed to make the effect work without additional rendering overhead.
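
    In GLSL terms, the trick reduces to a couple of lines in the earphones' fragment shader; the uniform names here are illustrative:

    // project the fragment into screen space and sample the fluid pass there
    vec2 screenUV = gl_FragCoord.xy / uResolution;     // uResolution = canvas size in pixels
    float mask = texture2D(uFluidTexture, screenUV).r; // fluid sim render target

    // reveal the inner parts only where the fluid trail is present
    vec3 color = mix(baseColor, innerColor, mask);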

    Audio reactivity

    Since the project is a presentation of earphones, some scene parameters needed to become audio‑reactive. I used one of the background audio’s frequency channels – the one that produced the most noticeable “jumps,” as the rest of the track had a very stable tone – which served as the input to drive various effects. This included modifying the pace and shape of the wave animations, influencing the strength of the particles’ flow field, and shaping the touchpad’s visualizer.

    The background audio itself was also processed using the Web Audio API, specifically a low‑pass filter. This filter was triggered when the user hovered over the earphones in the first section of the main scene, as well as during the scene transitions at the start and end. The low‑pass effect helped amplify the impact of the animations, creating a subtle sensation of time slowing down.
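
    Both behaviours map onto a handful of standard Web Audio API nodes. A simplified wiring, with illustrative values, might be:

    const ctx = new AudioContext();
    const source = ctx.createMediaElementSource(document.querySelector("audio"));

    const filter = ctx.createBiquadFilter();
    filter.type = "lowpass";
    filter.frequency.value = 20000; // fully open by default

    const analyser = ctx.createAnalyser();
    analyser.fftSize = 256;
    const bins = new Uint8Array(analyser.frequencyBinCount);

    source.connect(filter).connect(analyser).connect(ctx.destination);

    // on hover / scene transition: sweep the cutoff down for the "time slowing down" feel
    function muffle(enabled) {
      filter.frequency.linearRampToValueAtTime(enabled ? 400 : 20000, ctx.currentTime + 0.4);
    }

    // per frame: read the one "jumpy" frequency bin and drive the shader uniforms with it
    function tick() {
      analyser.getByteFrequencyData(bins);
      const energy = bins[4] / 255; // bin index 4 is illustrative
      // e.g. waveMaterial.uniforms.uAudio.value = energy;
      requestAnimationFrame(tick);
    }
    tick();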

    Animation and empties

    Most of the animations were baked directly into the .glb file and controlled via the scroll progress using THREE.js’s AnimationMixer. This included the camera movement as well as the earphone animations.

    This workflow proved to be highly effective when collaborating with another 3D artist, as it gave them control over multiple aspects of the experience – such as timing, motion, and transitions – while allowing me to focus solely on the real‑time interactions and logic.

    Speaking of real‑time actions, I extended the scene by adding multiple empties, animating their position and scale values to act as drivers for various interactive events – such as triggering interactive points or adjusting input strength during scroll. This approach made it easy to fine‑tune these events directly in Blender’s timeline and align them precisely with other baked animations.

    3. Optimization Techniques

    Visual expectations were set very high for this project, making it clear from the start that performance optimization would be a major challenge. Because of this, I closely monitored performance metrics throughout development, constantly looking for opportunities to save resources wherever possible. This often led to unexpected yet effective solutions to problems that initially seemed too demanding or impractical for our goals. Some of these optimizations have already been mentioned – such as using GPGPU techniques for particle simulation and significantly reducing the resolution of the cursor’s fluid simulation. However, there were several other key optimizations that played a crucial role in maintaining solid performance:

    Artificial depth of field

One of them was the depth of field used during the close‑up view of the headphones. Depth of field is usually implemented as a post‑processing layer that uses some kind of convolution to simulate progressive blurring of the rendered scene. From the beginning I treated it as a nice‑to‑have in case we were left with some spare FPS, not as a realistic option.

However, after implementing the particle simulation, which used a smoothstep function in the particles’ fragment shader to draw the blue circles, I wondered whether simply modifying its values might be enough to make them look blurred. After a few little tweaks, the particles became blurry.

    The only problem left was that the blur was not progressive like in a real camera, meaning it was not getting blurry according to the focus point of the camera. So I decided to try the camera’s view position to get some kind of depth value, which surprisingly did the job well.

    I applied the same smoothstep technique to the rotating tube in the background, but now without the progressive effect since it was almost at a constant distance most of the time.

    Voilà. Depth of field for almost free (not perfect, but does the job well).

    Artificial bloom

    Bloom was also part of the post‑processing stack – typically a costly effect due to the additional render pass it requires. This becomes even more demanding when using selective bloom, which I needed to make the core of the earphones glow. In that case, the render pass is effectively doubled to isolate and blend only specific elements.

    To work around this performance hit, I replaced the bloom effect with a simple plane using a pre‑generated bloom texture that matched the shape of the earphone core. The plane was set to always face the camera (a billboard technique), creating the illusion of bloom without the computational overhead.

    Surprisingly, this approach worked very well. With a bit of fine‑tuning – especially adjusting the depth write settings – I was even able to avoid visible overlaps with nearby geometry, maintaining a clean and convincing look.
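
    In Three.js, a Sprite gives you the billboard behaviour for free, so the fake bloom can be as small as this sketch (texture, position, and scale are illustrative):

    const fakeBloom = new THREE.Sprite(
      new THREE.SpriteMaterial({
        map: bloomTexture, // pre-generated glow texture matching the core's shape
        transparent: true,
        blending: THREE.AdditiveBlending,
        depthWrite: false, // avoids hard intersections with nearby geometry
      })
    );
    fakeBloom.position.copy(earphoneCore.position);
    fakeBloom.scale.setScalar(0.25);
    scene.add(fakeBloom);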

    Custom performant glass material

    A major part of the earphones’ visual appeal came from the glossy surface on the back. However, achieving realistic reflections in WebGL is always challenging – and often expensive – especially when using double‑sided materials.

    To tackle this, I used a strategy I often rely on: combining a MeshStandardMaterial for the base physical lighting model with a glass matcap texture, injected via the onBeforeCompile callback. This setup provided a good balance between realism and performance.

    To enhance the effect further, I added Fresnel lighting on the edges and introduced a slight opacity, which together helped create a convincing glass‑like surface. The final result closely matched the visual concept provided for the project – without the heavy cost of real‑time reflections or more complex materials.

    Simplified raycasting

    Raycasting on high‑polygon meshes can be slow and inefficient. To optimise this, I used invisible low‑poly proxy meshes for the points of interest – such as the earphone shapes and their interactive areas.

    This approach significantly reduced the performance cost of raycasting while giving me much more flexibility. I could freely adjust the size and position of the raycastable zones without affecting the visual mesh, allowing me to fine‑tune the interactions for the best possible user experience.

    Mobile performance

    Thanks to the optimisation techniques mentioned above, the experience maintains a solid 60 FPS – even on older devices like the iPhone SE (2020).

4. Tool Stack

• Three.js: For a project of this scale, Three.js was the clear choice. Its built‑in materials, loaders, and utilities made it ideal for building highly interactive WebGL scenes. It was especially useful when setting up the GPGPU particle simulation, which is supported via a dedicated addon provided by the Three.js ecosystem.
• lil‑gui: Commonly used alongside Three.js, it was instrumental in creating a debug environment during development. It also allowed designers to interactively tweak and fine‑tune various parameters of the experience without needing to dive into the code.
    • GSAP: Most linear animations were handled with GSAP and its timeline system. It proved particularly useful when manually syncing animations to the scroll progress provided by Lenis, offering precise control over timing and transitions.
    • Lenis: As mentioned earlier, Lenis provided a smooth and reliable foundation for scroll behavior. Its syncTouch parameter helped manage DOM shifting on mobile devices, which can be a common challenge in scroll‑based experiences.

    5. Results and Takeaways

    Aether 1 successfully demonstrated how brand narrative, advanced WebGL interactions, and rigorous 3D workflows can blend into a single, performant, and emotionally engaging web experience. 

By baking key animations, using empties for event triggers, and leaning on tools like Three.js, GSAP, and Lenis, the team was able to iterate quickly without sacrificing polish. Meanwhile, the 3D pipeline – from Midjourney concept sketches through C4D unwrapping and Blender export – ensured the visual fidelity stayed aligned with the brand vision.

    Most importantly, every technique outlined here is transferable. Whether you are considering audio‑reactive visuals, infinite scroll adventures, or simply trying to squeeze extra frames per second out of a heavy scene, the solutions documented above show that thoughtful planning and a willingness to experiment can push WebGL far beyond typical product‑page expectations.

    6. Author Contributions

    General – Ross Anderson
    3D – Celia Lopez
    WebGL – Adrian Gubrica

    7. Site credits

    Art Direction – Ross Anderson
    Design – Gilles Tossoukpe
    3D – Celia Lopez
    WebGL – Adrian Gubrica
    AI Integration – Federico Valla
    Motion – Jason Kearley
    Front End / Webflow – Youness Benammou



    Source link

• use the @ prefix when a name is reserved | Code4IT

    use the @ prefix when a name is reserved | Code4IT



    You already know it: using meaningful names for variables, methods, and classes allows you to write more readable and maintainable code.

    It may happen that a good name for your business entity matches one of the reserved keywords in C#.

    What to do, now?

    There are tons of reserved keywords in C#. Some of these are

    • int
    • interface
    • else
    • null
    • short
    • event
    • params

    Some of these names may be a good fit for describing your domain objects or your variables.

    Talking about variables, have a look at this example:

    var eventList = GetFootballEvents();
    
    foreach(var event in eventList)
    {
        // do something
    }
    

    That snippet will not work, since event is a reserved keyword.

    You can solve this issue in 3 ways.

    You can use a synonym, such as action:

    var eventList = GetFootballEvents();
    
    foreach(var action in eventList)
    {
        // do something
    }
    

    But, you know, it doesn’t fully match the original meaning.

    You can use the my prefix, like this:

    var eventList = GetFootballEvents();
    
    foreach(var myEvent in eventList)
    {
        // do something
    }
    

    But… does it make sense? Is it really your event?

    The third way is by using the @ prefix:

    var eventList = GetFootballEvents();
    
    foreach(var @event in eventList)
    {
        // do something
    }
    

    That way, the code is still readable (even though, I admit, that @ is a bit weird to see around the code).

Of course, the same works for every keyword, like @int, @class, @public, and so on.
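
    A few quick examples that all compile, since the @ is only an escape character and never becomes part of the identifier itself:

    int @int = 42;
    string @class = "Football";
    List<string> @public = new();

    Console.WriteLine($"{@class}: {@int}"); // prints "Football: 42"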

    Further readings

    If you are interested in a list of reserved keywords in C#, have a look at this article:

    🔗 C# Keywords (Reserved, Contextual) | Tutlane

    This article first appeared on Code4IT

    Wrapping up

    It’s a tiny tip, but it can help you write better code.

    Happy coding!

    🐧



    Source link

• How to deploy .NET APIs on Azure using GitHub actions | Code4IT

    How to deploy .NET APIs on Azure using GitHub actions | Code4IT


    Building APIs with .NET is easy. Deploying them on Azure is easy too, with GitHub Actions!


    With Continuous Delivery (CD), you can deploy your code in a fast-paced and stable way.

    To deploy applications, you’ll need workflows that run and automate the process. In that way, you don’t have to perform repetitive tasks and the whole process becomes less error-prone.

    In this article, we will learn how to implement CD pipelines using GitHub Actions. In particular, we will focus on the case of a .NET API application that will be deployed on Azure.

    Create a .NET API project

Since the focus of this article is on the deployment part, we won’t create complex APIs. Just a simple Hello World is enough.

    To do that, we’re gonna use dotnet Minimal API – a way to create APIs without scaffolding lots of files and configurations.

Our API, the BooksAPI, has a single endpoint: /, the root, which simply returns “Hello World!”.

    All our code is stored in the Program file:

    var builder = WebApplication.CreateBuilder(args);
    
    var app = builder.Build();
    
    app.UseHttpsRedirection();
    
    app.MapGet("/", () => "Hello World!");
    
    app.Run();
    

    Nothing fancy: run the application locally, and navigate to the root. You will see the Hello World message.

    Lastly, put your code on GitHub: initialize a repository and publish it on GitHub – it can either be a public or a private repository.

    Create an App Service on Azure

    Now, to deploy an application, we need to define its destination. We’re going to deploy it on Azure, so you need an Azure account before moving on.

    Open the Azure Portal, navigate to the App Service section, and create a new one.

    Configure it as you wish, and then proceed until you have it up and running.

    Once everything is done, you should have something like this:

    Azure App Service overview

    Now the application is ready to be used: we now need to deploy our code here.

    Generate the GitHub Action YAML file for deploying .NET APIs on Azure

    It’s time to create our Continuous Delivery pipeline.

    Luckily, GitHub already provides lots of templates for GitHub Actions. We will need one specific for our .NET APIs.

    On GitHub, navigate to your repository, head to the Actions menu, and select New workflow.

    New Workflow button on GitHub

    You will see several predefined actions that allow you to do stuff with your repository. We are now interested in the one called “Deploy a .NET Core app to an Azure Web App”:

    Template for deploying the .NET Application on Azure

Clicking on “Configure” you will see a template. Read the instructions carefully, as they will guide you to the correct configuration of the GitHub action.

    In particular, you will have to update the environment variables specified in this section:

    env:
      AZURE_WEBAPP_NAME: your-app-name # set this to the name of your Azure Web App
      AZURE_WEBAPP_PACKAGE_PATH: "." # set this to the path to your web app project, defaults to the repository root
      DOTNET_VERSION: "5" # set this to the .NET Core version to use
    

    Clearly, AZURE_WEBAPP_NAME must match the name you’ve defined on Azure, while DOTNET_VERSION must match the version you’re using to create your dotnet APIs.

    For my specific project, I’ve replaced that section with

    env:
      AZURE_WEBAPP_NAME: BooksAPI<myName> # set this to the name of your Azure Web App
      AZURE_WEBAPP_PACKAGE_PATH: "." # set this to the path to your web app project, defaults to the repository root
      DOTNET_VERSION: "6.0" # set this to the .NET Core version to use
    

🟧 DOTNET_VERSION also requires the minor version of dotnet. Setting 6 will not work: you need to specify 6.0. 🟧

    Now you can save your YAML file in your repository: it will be saved under ./.github/workflows.

    So, as a reference, here’s the full YAML file I’m using to deploy my APIs:

    name: Build and deploy ASP.Net Core app to an Azure Web App
    
    env:
      AZURE_WEBAPP_NAME: BooksAPI<myName>
      AZURE_WEBAPP_PACKAGE_PATH: "."
      DOTNET_VERSION: "6.0"
    
    on:
      push:
        branches: ["master"]
      workflow_dispatch:
    
    permissions:
      contents: read
    
    jobs:
      build:
        runs-on: ubuntu-latest
    
        steps:
          - uses: actions/checkout@v3
    
          - name: Set up .NET Core
            uses: actions/setup-dotnet@v2
            with:
              dotnet-version: ${{ env.DOTNET_VERSION }}
    
          - name: Set up dependency caching for faster builds
            uses: actions/cache@v3
            with:
              path: ~/.nuget/packages
              key: ${{ runner.os }}-nuget-${{ hashFiles('**/packages.lock.json') }}
              restore-keys: |
                            ${{ runner.os }}-nuget-
    
          - name: Build with dotnet
            run: dotnet build --configuration Release
    
          - name: dotnet publish
            run: dotnet publish -c Release -o ${{env.DOTNET_ROOT}}/myapp
    
          - name: Upload artifact for deployment job
            uses: actions/upload-artifact@v3
            with:
              name: .net-app
              path: ${{env.DOTNET_ROOT}}/myapp
    
      deploy:
        permissions:
          contents: none
        runs-on: ubuntu-latest
        needs: build
        environment:
          name: "Development"
          url: ${{ steps.deploy-to-webapp.outputs.webapp-url }}
    
        steps:
          - name: Download artifact from build job
            uses: actions/download-artifact@v3
            with:
              name: .net-app
    
          - name: Deploy to Azure Web App
            id: deploy-to-webapp
            uses: azure/webapps-deploy@v2
            with:
              app-name: ${{ env.AZURE_WEBAPP_NAME }}
              publish-profile: ${{ secrets.AZURE_WEBAPP_PUBLISH_PROFILE }}
              package: ${{ env.AZURE_WEBAPP_PACKAGE_PATH }}
    

    As you can see, we have 2 distinct steps: build and deploy.

    In the build phase, we check out our code, restore the NuGet dependencies, build the project, pack it and store the final result as an artifact.

    In the deploy step, we retrieve the newly created artifact and publish it on Azure.

    Store the Publish profile as GitHub Secret

    As you can see in the instructions of the workflow file, you have to

    Create a secret in your repository named AZURE_WEBAPP_PUBLISH_PROFILE, paste the publish profile contents as the value of the secret.

    That Create a secret in your repository named AZURE_WEBAPP_PUBLISH_PROFILE statement was not clear to me: I thought you had to create that key within your .NET project. Turns out you can create secrets related to repositories on GitHub (so, it’s language-agnostic).

    A Publish profile is a file that contains information and settings used to deploy applications to Azure. It’s nothing but an XML file that lists the possible ways to deploy your application, such as FTP, Web Deploy, Zip Deploy, and so on.

    We have to get our publish profile and save it into GitHub secrets.

    To retrieve the Publish profile, head to the Azure App Service page and click Get publish profile to download the file.

    Get Publish Profile button on Azure Portal
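
    If you prefer the terminal, the Azure CLI can fetch the same publish profile XML; something like this should work (resource names are placeholders):

    az webapp deployment list-publishing-profiles \
      --name <your-app-name> \
      --resource-group <your-resource-group> \
      --xml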

Now, get back to GitHub and head to Settings > Security > Secrets > Actions.

    Here you can create a new secret related to your repository.

    Create a new one, name it AZURE_WEBAPP_PUBLISH_PROFILE, and paste the content of the Publish profile file you’ve just downloaded.

    You will then see something like this:

    GitHub secret for Publish profile

    Notice that the secret name must be AZURE_WEBAPP_PUBLISH_PROFILE. That constraint is set because we are accessing the Publish profile by key:

    - name: Deploy to Azure Web App
        id: deploy-to-webapp
        uses: azure/webapps-deploy@v2
        with:
            app-name: ${{ env.AZURE_WEBAPP_NAME }}
            publish-profile: ${{ secrets.AZURE_WEBAPP_PUBLISH_PROFILE }}
            package: ${{ env.AZURE_WEBAPP_PACKAGE_PATH }}
    

    In particular, notice the publish-profile: ${{ secrets.AZURE_WEBAPP_PUBLISH_PROFILE }} part.

    Clearly, the two names must match: nothing stops you from changing the name of the secret in both the YAML file and the GitHub Secret page.

    Final result

    It’s time to see the final result.

    Update the application code (I’ve slightly modified the Hello world message), and push your changes to GitHub.

    Under the Actions tab, you will see your CD pipeline run.

    CD workflow run

    Once it’s completed, you can head to your application root and see the final result.

    Final result of the API

    Further readings

    Automating repetitive tasks allows you to perform more actions with fewer errors. Generally speaking, the more stuff you can automate, the better.

    My own blog heavily relies on automation: scaffolding content, tracking ideas, and publishing online…

    If you want to peek at what I do, here are my little secrets:

    🔗 From idea to publishing, and beyond: how I automated my blogging workflow with GitHub, PowerShell, and Azure | Code4IT

    In this article, we’ve only built and deployed our application. We can do more: run tests and keep track of code coverage. If you want to learn how you can do it using Azure DevOps, here we go:

    🔗 Cobertura, YAML, and Code Coverage Protector: how to view Code Coverage report on Azure DevOps | Code4IT

    This article first appeared on Code4IT 🐧

    Wrapping up

    I have to admit that I struggled a lot in setting up the CD pipeline. I was using the one proposed by default on Visual Studio – but it didn’t work.

Using the template found on GitHub worked almost instantly – I just had to figure out what they meant by repository secrets.

    Now we have everything in place. Since the workflow is stored in a text file within my repository, if I have to create and deploy a new API project I can simply do that by copying that file and fixing the references.

    Nice and easy, right? 😉

    Happy coding!

    🐧



    Source link

  • Wish You Were Here – Win a Free Ticket to Penpot Fest 2025!

    Wish You Were Here – Win a Free Ticket to Penpot Fest 2025!


    What if your dream design tool understood dev handoff pain? Or your dev team actually loved the design system?

    If you’ve ever thought, “I wish design and development worked better together,” you’re not alone — and you’re exactly who Penpot Fest 2025 is for.

    This October, the world’s friendliest open-source design & code event returns to Madrid — and you could be going for free.

    Penpot Fest, 2025, Madrid

    Why Penpot Fest?

    Happening from October 8–10, 2025, Penpot Fest is where designers, developers, and open-source enthusiasts gather to explore one big idea:
    Better, together.

    Over three days, you’ll dive into:

    • 8 thought-provoking keynotes
    • 1 lively panel discussion
    • 3 hands-on workshops
    • Full meals, drinks, swag, and a welcome party
    • A breathtaking venue and space to connect, collaborate, and be inspired

    With confirmed speakers like Glòria Langreo (GitHub), Francesco Siddi (Blender), and Laura Kalbag (Penpot), you’ll be learning from some of the brightest minds in the design-dev world.

    And this year, we’re kicking it off with something extra special…

    The Contest: “Wish You Were Here”

    We’re giving away a free ticket to Penpot Fest 2025, and entering is as easy as sharing a thought.

    Here’s the idea:
    We want to hear your “I wish…” — your vision, your frustration, your funny or heartfelt take on the future of design tools, team workflows, or dev collaboration.

    It can be:

    • “I wish design tools spoke dev.”
    • “I wish handoff wasn’t a hand grenade.”
    • “I wish design files didn’t feel like final bosses.”

    Serious or silly — it’s all valid.

    How to Enter

    1. Post your “I wish…” message on one of the following networks: X (Twitter), LinkedIn, Instagram, Bluesky, Mastodon, or Facebook
    2. Include the hashtag #WishYouWereHerePenpot
    3. Tag @PenpotApp so we can find your entry!

    Get creative: write it, design it, animate it, sing it — whatever helps your wish stand out.

    Key Dates

    • Contest opens: August 4, 2025
    • Last day to enter: September 4, 2025

    Why This Matters

    This campaign isn’t just about scoring a free ticket (though that’s awesome). It’s about surfacing what our community really needs — and giving space for those wishes to be heard.

    Penpot is built by people who listen. Who believe collaboration between design and code should be open, joyful, and seamless. This is your chance to share what you want from that future — and maybe even help shape it.

    Ready to Join Us in Madrid?

    We want to hear your voice. Your “I wish…” could make someone laugh, inspire a toolmaker, or land you in Madrid this fall with the Penpot crew.

    So what are you waiting for?

    Post your “I wish…” with #WishYouWereHerePenpot and tag @PenpotApp by September 4th for a chance to win a free ticket to Penpot Fest 2025!

    Wish you were here — and maybe you will be. ❤️





    Source link

• Methods should have a coherent level of abstraction | Code4IT

    Methods should have a coherent level of abstraction | Code4IT



    Mixed levels of abstraction make the code harder to understand.

At first sight, the reader should be able to understand what the code does without worrying about the details of the operations.

    Take this code snippet as an example:

    public void PrintPriceWithDiscountForProduct(string productId)
    {
        var product = sqlRepository.FindProduct(productId);
        var withDiscount = product.Price * 0.9;
        Console.WriteLine("The final price is " + withDiscount);
    }
    

    We are mixing multiple levels of operations. In the same method, we are

    • integrating with an external service
    • performing algebraic operations
    • concatenating strings
    • printing using .NET Console class

    Some operations have a high level of abstraction (call an external service, I don’t care how) while others are very low-level (calculate the price discount using the formula ProductPrice*0.9).

    Here the readers lose focus on the overall meaning of the method because they’re distracted by the actual implementation.

When I’m talking about abstraction, I mean how high-level an operation is: the more we stay away from bitwise and mathematical operations, the more abstract our code is.

    Cleaner code should let the reader understand what’s going on without the need of understanding the details: if they’re interested in the details, they can just read the internals of the methods.

    We can improve the previous method by splitting it into smaller methods:

    public void PrintPriceWithDiscountForProduct(string productId)
    {
        var product = GetProduct(productId);
        var withDiscount = CalculateDiscountedPrice(product);
        PrintPrice(withDiscount);
    }
    
    private Product GetProduct(string productId)
    {
        return sqlRepository.FindProduct(productId);
    }
    
    private double CalculateDiscountedPrice(Product product)
    {
        return product.Price * 0.9;
    }
    
    private void PrintPrice(double price)
    {
        Console.WriteLine("The final price is " + price);
    }
    

    Here you can see the different levels of abstraction: the operations within PrintPriceWithDiscountForProduct have a coherent level of abstraction: they just tell you what the steps performed in this method are; all the methods describe an operation at a high level, without expressing the internal operations.

    Yes, now the code is much longer. But we have gained some interesting advantages:

    • readers can focus on the “what” before getting to the “how”;
    • we have more reusable code (we can reuse GetProduct, CalculateDiscountedPrice, and PrintPrice in other methods);
    • if an exception is thrown, we can easily understand where it happened, because we have more information on the stack trace.

    You can read more about the latest point here:

    🔗 Clean code tip: small functions bring smarter exceptions | Code4IT

    This article first appeared on Code4IT 🐧

    Happy coding!

    🐧



    Source link

  • Shortest route between points in a city – with Python and OpenStreetMap – Useful code

    Shortest route between points in a city – with Python and OpenStreetMap – Useful code


After the article introducing Graphs in Python, I have decided to put graph theory into practice and start looking for the shortest route between points in a city. Parts of the code are inspired by the book Optimization Algorithms by Alaa Khamis; other parts are mine 🙂

The idea is to go from the monument to the church by car. The flag marks the midpoint between the two points.

    The solution uses several powerful Python libraries:

    • OSMnx to download and work with real road networks from OpenStreetMap
    • NetworkX to model the road system as a graph and calculate the shortest path using Dijkstra’s algorithm
    • Folium for interactive map visualization

    We start by geocoding the two landmarks to get their latitude and longitude. Then we build a drivable street network centered around the Levski Monument using ox.graph_from_address. After snapping both points to the nearest graph nodes, we compute the shortest route by distance. Finally, we visualize everything both in an interactive map and in a clean black-on-white static graph where the path is drawn in yellow.
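
    Condensed to its core, the workflow looks roughly like the sketch below. The start landmark and the 1000 m radius mirror the description above; the destination string is a placeholder, and the full version lives in the notebook.

    import networkx as nx
    import osmnx as ox

    start = "Levski Monument, Sofia, Bulgaria"
    end = "Sveta Sofia Church, Sofia, Bulgaria"  # placeholder landmark

    # geocode both landmarks to (lat, lng)
    start_lat, start_lng = ox.geocode(start)
    end_lat, end_lng = ox.geocode(end)

    # drivable street network within 1000 m of the monument
    G = ox.graph_from_address(start, dist=1000, network_type="drive")

    # snap both points to the nearest graph nodes (note: X = longitude, Y = latitude)
    orig = ox.distance.nearest_nodes(G, X=start_lng, Y=start_lat)
    dest = ox.distance.nearest_nodes(G, X=end_lng, Y=end_lat)

    # Dijkstra's algorithm, weighted by edge length in meters
    route = nx.shortest_path(G, orig, dest, weight="length")

    # static black-on-white plot with the route drawn in yellow
    fig, ax = ox.plot_graph_route(
        G, route, route_color="y", bgcolor="w", node_color="k", edge_color="k"
    )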


    Nodes and edges in radius of 1000 meters around the center point


    Red and green are the nodes, that are the closest to the start and end points.


    The closest driving route between the two points is in blue.

    The full code is implemented in a Jupyter Notebook in GitHub and explained in the video.

    https://www.youtube.com/watch?v=kQIK2P7erAA

    GitHub link:

    Enjoy the rest of your day! 🙂



    Source link

• How to create an API Gateway using Azure API Management | Code4IT

    How to create an API Gateway using Azure API Management | Code4IT


    In a microservices architecture, an API Gateway hides your real endpoints. We will create one using Azure API Management


    If you’re building an application that exposes several services you might not want to expose them on different hosts. Consumers will have a hard time configuring their application with all the different hostnames, and you will be forced to maintain the same URLs even if you need to move to other platforms or, for instance, you want to transform a REST endpoint into an Azure Function.

    In this case, you should mask the real endpoints beneath a facade: maybe… an API Gateway? 🙂

    In this article, we will learn how to configure Azure API Management (from now on: APIM) service to create an API Gateway and “hide” our real services.

    Demo: publish .NET API services and locate the OpenAPI definition

    For the sake of this article, we will work with 2 API services: BooksService and VideosService.

    They are both .NET 6 APIs, deployed on Azure using GitHub Actions (using the steps I described in a previous article).

Both services expose their Swagger pages and a bunch of endpoints that we’re going to hide behind Azure APIM.

    Swagger pages

    How to create Azure API Management (APIM) Service from Azure Portal

    Now, we want to hide their real endpoints. The clients will then only know about the existence of the API Gateway, and not of the two separate API services:

    An API Gateway hides origin endpoints to clients

It’s time to create our APIM resource. 👷‍♂️

    Head to the Azure Portal, and create a new API Management instance. I suggest reading the short overview of the functionalities provided by Azure API Management services as listed in the screenshot below.

    API Management description on Azure Portal

    The wizard will ask you for some info, such as the resource name, the region, and an email used to send communications (honestly speaking, I still haven’t figured out why they’re asking for your email).

    Fill in all the fields, pick your preferred pricing tier (mine is Developer: it doesn’t have an SLA and is quite cheap), and then proceed with the service creation.

    After several minutes (it took 50 minutes – fifty!💢 – to scaffold my instance), you will have your instance ready to be used.

    API management dashboard

    We are now ready to add our APIs and expose them to our clients.

    How to add APIs to Azure API Management using Swagger definition (OpenAPI)

    As we’ve seen in a previous article, Swagger creates a JSON file that describes the operations available in your APIs, as well as the object structures accepted as input and returned as output.
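
    As a rough illustration (this is not the actual file generated by the demo services), a swagger.json excerpt for such an API might look like this:

    {
      "openapi": "3.0.1",
      "info": { "title": "BooksAPI", "version": "v1" },
      "paths": {
        "/books": {
          "get": {
            "summary": "Returns the list of books",
            "responses": { "200": { "description": "Success" } }
          }
        }
      }
    }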

Let me use the Books API as an example: once that API project is deployed on the cloud (it’s not mandatory to use Azure: it will work the same with other cloud vendors), you will see the Swagger UI and the related JSON definition.

    Swagger UI for BooksAPI

We have 3 endpoints, /, /echo, and /books; those endpoints are described in the swagger.json file linked on the Swagger page. Keep that link handy: we will use it soon.

    Finally, we can add our Books APIs to our Azure Management API Service! Head to the resource on Azure, locate the APIs menu item on the left panel, and create a new API definition using OpenAPI (which is the standard used by Swagger to create its UI).

    Import API from OpenAPI specification

    You will see a form that allows you to create new resources from OpenAPI specifications.

Paste the link to the swagger.json file you located before, populate the required fields and, if you want, add a prefix to identify these endpoints: I chose MyBooks.

    Wizard to import APIs from OpenAPI

    You will then see your APIs appear in the panel shown below. It is composed of different parts:

    • The list of services exposed. In the screenshot below, BooksAPI, Echo API, and VideosAPI;
    • The list of endpoints exposed for each service: here, BooksAPI exposes endpoints at /, /echo, and /books;
    • A list of policies that are applied to the inbound requests before hitting the real endpoint;
    • The real endpoint used when calling the facade exposed by APIM;
    • A list of policies applied to the outbound requests after the origin has processed the requests.

    API detail panel

    For now, we will ignore both Inbound and Outbound processing, as they will be the topic of a future article.

    Consuming APIs exposed on the API Gateway

    We’re ready to go! Head back to the Azure API Management service dashboard and locate the URL of the API Gateway under Custom domains > Gateway URL.

    Where to find the Gateway URL

    This will be the root URL that our clients will use.

We can then access the Books API and the Videos API both on the Origin and on the Gateway (we’re doing it just to demonstrate that things are working; clients will only use the APIs exposed by the API Gateway).

    The Videos API maintains the exact same structure, mapping the endpoints as they are defined in Origin.

    Videos API on Origin and on API Gateway

By contrast, to access the Books APIs we have to use the /mybooks path (because we defined it a few steps ago, when we imported the BooksAPI from its OpenAPI definition: it’s the API URL suffix field), as shown below:

    Books API on Origin and on API Gateway
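
    To make the difference concrete, here is a hedged C# sketch of a client calling the same endpoint both on the origin and through the gateway. The hostnames are hypothetical, and depending on your configuration APIM may also require an Ocp-Apim-Subscription-Key header (omitted here for brevity):

    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    public class GatewayDemo
    {
        public static async Task Main()
        {
            using var client = new HttpClient();

            // Calling the origin directly -- what clients should NOT do
            var fromOrigin = await client.GetStringAsync(
                "https://books-service.azurewebsites.net/books");

            // Calling the same endpoint through the API Gateway,
            // using the MyBooks suffix configured during the import
            var fromGateway = await client.GetStringAsync(
                "https://my-apim-instance.azure-api.net/mybooks/books");

            Console.WriteLine(fromOrigin == fromGateway
                ? "Origin and gateway return the same payload"
                : "Responses differ");
        }
    }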

    Further readings

    As usual, a bunch of interesting readings 📚

In this article, we’ve only scratched the surface of Azure API Management. There’s way more – and you can read about it on the Microsoft Docs website:

    🔗 What is Azure API Management? | Microsoft docs

To integrate Azure APIM, we used two simple .NET 6 Web APIs deployed on Azure. If you wanna know how to set up GitHub Actions to build and deploy .NET APIs, I recently published an article on that topic.

    🔗 How to deploy .NET APIs on Azure using GitHub actions | Code4IT

Lastly, since we’ve talked about Swagger, here’s an article where I dissected how you can integrate Swagger in .NET Core applications:

    🔗 Understanding Swagger integration in .NET Core | Code4IT

    This article first appeared on Code4IT 🐧

    Wrapping up

This can be just the beginning of a long journey: APIM lets you highly customize your API Gateway by defining API access by user role, creating API documentation with custom templates and themes, and much more.

    We will come back to this topic soon.

    Happy coding!

    🐧



    Source link

• Raise synchronous events using Timer (and not a While loop) | Code4IT

    Raise synchronous events using Timer (and not a While loop) | Code4IT



There may be times when you need to run a specific task at regular intervals, such as polling an endpoint to look for updates or refreshing a Refresh Token.

If you need infinite processing, you can take one of two roads: the obvious one or the better one.

    For instance, you can use an infinite loop and put a Sleep command to delay the execution of the next task:

while (true)
{
    // Block the thread for 2 seconds before running the next iteration
    Thread.Sleep(2000);
    Console.WriteLine("Hello, Davide!");
}
    

    There’s nothing wrong with it – but we can do better.

    Introducing System.Timers.Timer

    The System.Timers namespace exposes a cool object that you can use to achieve that result: Timer.

You define the timer, attach one or more handlers to its Elapsed event, and then start it:

using System;
using System.Timers;

void Main()
{
    // Raise the Elapsed event every 2000 milliseconds
    System.Timers.Timer timer = new System.Timers.Timer(2000);

    // Multiple handlers can subscribe to the same Elapsed event
    timer.Elapsed += AlertMe;
    timer.Elapsed += AlertMe2;

    timer.Start();

    // Keep the program alive; in a console app, Main would otherwise
    // return before the timer ever fires
    Console.ReadLine();
}

void AlertMe(object sender, ElapsedEventArgs e)
{
    Console.WriteLine("Ciao Davide!");
}

void AlertMe2(object sender, ElapsedEventArgs e)
{
    Console.WriteLine("Hello Davide!");
}
    

The constructor accepts an interval as input (a double value representing the number of milliseconds between ticks), whose default value is 100.

This class implements IDisposable: if you’re using it as a dependency of another component that must be disposed, don’t forget to call Dispose on the Timer as well.

Note: use this only for synchronous tasks; there are other kinds of Timers for asynchronous operations, such as PeriodicTimer, which can also be stopped by canceling a CancellationToken. A minimal sketch is below.
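
    Here is a minimal sketch of that asynchronous alternative, assuming .NET 6 or later; the 10-second cancellation window is arbitrary, just for the demo:

    using System;
    using System.Threading;
    using System.Threading.Tasks;

    // Stop the loop automatically after 10 seconds (arbitrary, for the demo)
    using var cts = new CancellationTokenSource(TimeSpan.FromSeconds(10));
    using var timer = new PeriodicTimer(TimeSpan.FromSeconds(2));

    try
    {
        // WaitForNextTickAsync completes on every tick and throws
        // OperationCanceledException when the token is canceled
        while (await timer.WaitForNextTickAsync(cts.Token))
        {
            Console.WriteLine("Hello, Davide!");
        }
    }
    catch (OperationCanceledException)
    {
        Console.WriteLine("Timer stopped.");
    }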

    This article first appeared on Code4IT 🐧

    Happy coding!

    🐧



    Source link