Author: post Bina

  • How to extract, create, and navigate Zip Files in C# | Code4IT



    Learn how to zip and unzip compressed files with C#. Beware: it’s not as obvious as it might seem!



    When working with local files, you might need to open, create, or update Zip files.

    In this article, we will learn how to work with Zip files in C#. We will learn how to perform basic operations such as opening, extracting, and creating a Zip file.

    The main class we will use is named ZipFile, and it comes from the System.IO.Compression namespace. It has been available since .NET Framework 4.5, so we can say it’s pretty stable 😉 Nevertheless, there are some tricky points that you need to know before using this class. Let’s learn!
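
    All the snippets in this article assume that the namespace is already imported (and, on the old .NET Framework, that the assembly containing ZipFile is referenced):

    using System.IO.Compression;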

    Using C# to list all items in a Zip file

    Once you have a Zip file, you can access the internal items without extracting the whole Zip.

    You can use the ZipFile.Open method.

    using ZipArchive archive = ZipFile.Open(zipFilePath, ZipArchiveMode.Read);
    System.Collections.ObjectModel.ReadOnlyCollection<ZipArchiveEntry> entries = archive.Entries;
    

    Notice that I specified the ZipArchiveMode. This is an Enum whose values are Read, Create, and Update.

    Using the Entries property of the ZipArchive, you can access the whole list of files stored within the Zip folder, each represented by a ZipArchiveEntry instance.

    All entries in the current Zip file

    The ZipArchiveEntry object exposes several properties, like the file’s name and the full path from the root of the archive.

    Details of a single ZipEntry item

    There are a few key points to remember about the entries exposed by the ZipArchive.

    1. It is a ReadOnlyCollection<ZipArchiveEntry>: it means that even if you find a way to add or update the items in memory, the changes are not applied to the actual files;
    2. It lists all files and folders, not only those at the root level. As you can see from the image above, it lists both the files at the root level, like File.txt, and those in inner folders, such as TestZip/InnerFolder/presentation.pptx;
    3. Each file is characterized by two similar but different properties: Name is the actual file name (like presentation.pptx), while FullName contains the path from the root of the archive (e.g. TestZip/InnerFolder/presentation.pptx);
    4. It lists folders as if they were files: in the image above, you can see TestZip/InnerFolder. You can recognize them because their Name property is empty and their Length is 0;
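
    To see these points in practice, here is a minimal sketch (reusing the zipFilePath variable from the snippet above) that walks the entries and tells files and folder entries apart:

    using ZipArchive archive = ZipFile.Open(zipFilePath, ZipArchiveMode.Read);

    foreach (ZipArchiveEntry entry in archive.Entries)
    {
        // Folder entries have an empty Name and a Length of 0
        bool isFolder = string.IsNullOrEmpty(entry.Name) && entry.Length == 0;

        Console.WriteLine(isFolder
            ? $"Folder: {entry.FullName}"
            : $"File: {entry.Name} (full path: {entry.FullName}, {entry.Length} bytes)");
    }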

    Folders are treated like files, but with no Size or Name

    Lastly, remember that ZipFile.Open returns a ZipArchive, which implements IDisposable, so you should place the operations within a using statement.

    ❓❓A question for you! Why do we see an item for the TestZip/InnerFolder folder, but there is no reference to the TestZip folder? Drop a comment below 📩

    Using C# to extract a Zip file

    Extracting a Zip folder is easy, but not obvious.

    We have only one way to do that: by calling the ZipFile.ExtractToDirectory method.

    It accepts as mandatory parameters the path of the Zip file to be extracted and the path to the destination:

    var zipPath = @"C:\Users\d.bellone\Desktop\TestZip.zip";
    var destinationPath = @"C:\Users\d.bellone\Desktop\MyDestination";
    ZipFile.ExtractToDirectory(zipPath, destinationPath);
    

    Once you run it, you will see the content of the Zip copied and extracted to the MyDestination folder.

    Note that this method creates the destination folder if it does not exist.

    This method accepts two more parameters:

    • entryNameEncoding, by which you can specify the encoding. The default value is UTF-8.
    • overwriteFiles allows you to specify whether it must overwrite existing files. The default value is false. If set to false and the destination files already exist, this method throws a System.IO.IOException saying that the file already exists.
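
    As a rough sketch, this is how the two optional parameters can be combined (recent .NET versions expose an overload that takes both the entry-name encoding and the overwrite flag; the paths are the same placeholders used above):

    var zipPath = @"C:\Users\d.bellone\Desktop\TestZip.zip";
    var destinationPath = @"C:\Users\d.bellone\Desktop\MyDestination";

    // Extract using UTF-8 entry names and overwrite files that already exist in the destination
    ZipFile.ExtractToDirectory(
        sourceArchiveFileName: zipPath,
        destinationDirectoryName: destinationPath,
        entryNameEncoding: System.Text.Encoding.UTF8,
        overwriteFiles: true);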

    Using C# to create a Zip from a folder

    The key method here is ZipFile.CreateFromDirectory, which allows you to create Zip files in a flexible way.

    The first mandatory value is, of course, the source directory path.

    The second mandatory parameter is the destination of the resulting Zip file.

    It can be the local path to the file:

    string sourceFolderPath = @"\Desktop\myFolder";
    string destinationZipPath = @"\Desktop\destinationFile.zip";
    
    ZipFile.CreateFromDirectory(sourceFolderPath, destinationZipPath);
    

    Or it can be a Stream that you can use later for other operations:

    using (MemoryStream memStream = new MemoryStream())
    {
        string sourceFolderPath = @"\Desktop\myFolder";
        ZipFile.CreateFromDirectory(sourceFolderPath, memStream);
    
        var length = memStream.Length; // here the Stream is populated
    }
    

    You can finally add some optional parameters:

    • compressionLevel, whose values are Optimal, Fastest, NoCompression, SmallestSize.
    • includeBaseDirectory: a flag that defines whether the archive should contain only the folder’s content or also the root folder itself.

    A quick comparison of the four Compression Levels

    As we just saw, we have four compression levels: Optimal, Fastest, NoCompression, and SmallestSize.

    What happens if I use the different values to zip all the photos and videos of my latest trip?

    The source folder’s size is 16.2 GB.

    Let me zip it with the four compression levels:

     private long CreateAndTrack(string sourcePath, string destinationPath, CompressionLevel compression)
     {
         Stopwatch stopwatch = Stopwatch.StartNew();
    
         ZipFile.CreateFromDirectory(
             sourceDirectoryName: sourcePath,
             destinationArchiveFileName: destinationPath,
             compressionLevel: compression,
             includeBaseDirectory: true
             );
         stopwatch.Stop();
    
         return stopwatch.ElapsedMilliseconds;
     }
    
    // in Main...
    
    var smallestTime = CreateAndTrack(sourceFolderPath,
        Path.Combine(rootFolder, "Smallest.zip"),
        CompressionLevel.SmallestSize);
    
    var noCompressionTime = CreateAndTrack(sourceFolderPath,
        Path.Combine(rootFolder, "NoCompression.zip"),
        CompressionLevel.NoCompression);
    
    var fastestTime = CreateAndTrack(sourceFolderPath,
        Path.Combine(rootFolder, "Fastest.zip"),
        CompressionLevel.Fastest);
    
    var optimalTime = CreateAndTrack(sourceFolderPath,
        Path.Combine(rootFolder, "Optimal.zip"),
        CompressionLevel.Optimal);
    

    By executing this operation, we have this table:

    Compression Type   Execution time (ms)   Execution time (s)   Size (bytes)       Size on disk (bytes)
    Optimal            483481                483                  17,340,065,594     17,340,067,840
    Fastest            661674                661                  16,935,519,764     17,004,888,064
    Smallest           344756                344                  17,339,881,242     17,339,883,520
    No Compression     42521                 42                   17,497,652,162     17,497,653,248

    We can see a bunch of weird things:

    • Fastest compression generates a smaller file than Smallest compression.
    • Fastest compression is way slower than Smallest compression.
    • Optimal lies in the middle.

    This is to say: don’t trust the names; remember to benchmark the parts where you need performance, even with a test as simple as this.

    Wrapping up

    This was a quick article about one specific class in the .NET ecosystem.

    As we saw, even though the class is simple and it’s all about three methods, there are some things you should keep in mind before using this class in your code.

    I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛

    Happy coding!

    🐧





    Source link

  • Friday the 13th Sale (75% OFF!) 👻



    At Browserling and Online Tools we love sales.

    We just created a new automated Friday the 13th Sale.

    Now on all Fridays the 13th, we show a 75% discount offer to all users who visit our site.

    This year it runs on Jun 13th, next year on Feb 13th, etc.

    How did we find the dates of all Fridays the 13th?

    We used our Friday the 13th Finder tool!

    Buy a Sub Now!

    What Is Browserling?

    Browserling is an online service that lets you test how other websites look and work in different web browsers, like Chrome, Firefox, or Safari, without needing to install them. It runs real browsers on real machines and streams them to your screen, kind of like remote desktop but focused on browsers. This helps web developers and regular users check for bugs, suspicious links, and weird stuff that happens in certain browsers. You just go to Browserling, pick a browser and version, and then enter the site you want to test. It’s quick, easy, and works from your browser with no downloads or installs.

    What Are Online Tools?

    Online Tools is an online service that offers free, browser-based productivity tools for everyday tasks like editing text, converting files, editing images, working with code, and way more. It’s an all-in-one Digital Swiss Army Knife with 1500+ utilities, so you can find the exact tool you need without installing anything. Just open the site, use what you need, and get things done fast.

    Who Uses Browserling and Online Tools?

    Browserling and Online Tools are used by millions of regular internet users, developers, designers, students, and even Fortune 100 companies. Browserling is handy for testing websites in different browsers without having to install them. Online Tools are used for simple tasks like resizing or converting images, or even fixing small file problems quickly without downloading any apps.

    Buy a subscription now and see you next time!



    Source link

  • [ENG] .NET 5, 6, 7, and 8 for busy developers | .NET Community Austria






    Source link

  • Developer Spotlight: Robin Payot | Codrops



    Hey, I’m Robin, a Creative Developer since 2015, based in Paris and a former HETIC student.

    I’ve worked at agencies like 84.Paris and Upperquad, and I’ve also freelanced with many others, picking up a few web awards along the way. I created Wind Waker.js and started a YouTube channel where I teach WebGL tutorials.

    What really excites me about development is having an idea in mind and being able to see it come to life visually, tweaking it again and again until I find the right solution to achieve the result I want.

    Projects I’m Proud Of

    Wind Waker JS

    When I was a kid, I was a huge fan of a GameCube video game called Zelda: The Wind Waker. It was a vibrant, colorful game where you sailed a boat to explore the world, with a really cool pirate vibe! I wanted to challenge myself, so I decided to try recreating it in Three.js to see how far I could go.

    Luckily for me, a brilliant creative coder named Nathan Gordon had already written an article back in 2016 about recreating the game’s water. That gave me a solid foundation to start from.

    After a lot of effort, I managed to create something I was really proud of, including six islands with LOD (Level of Detail) logic, dynamic day/night and weather cycles, fake physics with objects floating on water, a mini-game similar to Temple Run, and a treasure hunt where you search for the Triforce.

    I faced many challenges along the way, and if you’re curious about how I tackled them, I made two videos explaining everything.

    The project received a lot of positive feedback, and I’m truly grateful I got the chance to pay tribute to this incredible Nintendo game.

    McDonald’s Switzerland – The Golden Slide Game

    Last December, I had the opportunity to create a mobile video game for McDonald’s Switzerland with the Swipe Back team.

    The 3D designer provided us with some really fun, toon-style assets, which made the game look both playful and professional—especially exciting for me, as it was my first time working on a real game project.

    I worked alongside David Ronai, just the two of us as developers, and it was quite a challenge! The game featured weekly quests, unlockable cosmetics, real-world rewards for top players, and a full server-side backend (which David handled).

    David also had this wild idea: to build the game using TSL, a new language in the Three.js ecosystem that automatically converts your JS shaders to WebGPU. I learned it during the project and used it to create the 3D game. At the time, documentation was sparse and the tech was very fresh, but it promised much better performance than WebGL. Despite the challenge, we made it work, and the result was amazing—WebGPU ran incredibly smoothly on Android.

    With all the 3D assets we had, we needed to optimize carefully. One of the key techniques we used was Batched Mesh, combining all obstacles into a single mesh, which didn’t require TSL but helped a lot with performance.

    The website is no longer available since it was part of a Christmas event, but I captured a video of the project that you can check out here.

    Issey Miyake – Le sel d’Issey

    Last year, I worked on a 3D project where users could create their own salt crystal using different ingredients, all as part of a campaign for a new Issey Miyake perfume. It was a really fun experience, and the main technical challenge was achieving a beautiful refraction shader effect.

    I handled the front-end development alone and used React Three Fiber for the first time, a WebGL framework based on Three.js that lets you build 3D scenes using React-style components.

    The library was super helpful for setting things up quickly. As I got deeper into the project, however, I ran into a few minor issues, but I managed to solve them with some custom code. I’d definitely recommend React Three Fiber if you already know a lot about WebGL/Three.js and enjoy working in the React ecosystem.

    This project was awarded Site of the Day (SOTD) on FWA.

    Portfolio 2021

    I’ve included my portfolio as the final case study. Even though it’s an older project and not always up to date, it still means a lot to me.

    I started working on it during a break right after the pandemic. I had a very vague idea at first, so I began designing and programming at the same time. It was a curious way of working because I was never quite sure how it would turn out. With lots of back and forth, trial and error, and restarts, I really enjoyed that creative, spontaneous process—and I’d definitely recommend it if you’re working on a personal project!

    This project received a Site of the Day (SOTD) award on both Awwwards and FWA.

    About me

    I’m a Creative Web Developer with 10 years of experience, based in Paris.

    I studied at a French school called HETIC, where I learned a wide range of web-related skills including design, project management, marketing, and programming. In 2015, I had the chance to do a six-month internship at UNIT9. This is where I discovered WebGL for the first time, and I immediately fell in love with it.

    My very first project involved building a VR version of a horror movie on the web using Three.js, and I found it absolutely fascinating.

    After that, I worked at several agencies: first at 84.Paris in France, then for a year and a half at Upperquad in San Francisco. At these agencies, I learned a lot from other developers about creative development, clean code architecture, and fine-tuning animations. I contributed to multiple award-winning websites (Awwwards, FWA), and in 2021, I finally decided to start freelancing.

    I won my first award solo with my portfolio, and since then I’ve worked with clients around the world, occasionally winning more awards along the way.

    Eventually, I decided it was my turn to share knowledge, so I created a YouTube channel where I teach how to build WebGL effects. I’ve also been part of the FWA jury since 2018, and I had the opportunity to publish Creating a Risograph Grain Light Effect in Three.js and Creating a Bulge Distortion Effect with WebGL on Codrops.

    Philosophy & Workflow

    As a front-end developer, I’ve always enjoyed pushing the limits of web animation. I love experimenting with different effects and sharing them with the team to inspire new ideas. I don’t have a specific workflow, because I work with many agencies all over the world and always have to adapt to new frameworks, workflows, and structures. So I wouldn’t recommend any specific workflow—just try different ones and pick the one that fits best for your project!

    Current learning & challenges

    Currently, I’m learning TSL, a Three.js-based approach that compiles your Three.js code to WebGPU (with a WebGL fallback) for even better performance! For my current and future challenges, I would love to create a 3D web development course!

    Final Thoughts

    Thank you Codrops for inviting me, I’ve always been a fan of the amazing web animation tutorials.

    If you have a project in mind, don’t give up on it! Try to find some free time to at least give it a shot. Stay creative!



    Source link

  • Use TestCase to run similar unit tests with NUnit | Code4IT




    In my opinion, Unit tests should be well structured and written even better than production code.

    In fact, Unit Tests act as a first level of documentation of what your code does and, if written properly, can be the key to fixing bugs quickly and without adding regressions.

    One way to improve readability is by grouping similar tests that only differ by the initial input but whose behaviour is the same.

    Let’s use a dummy example: some tests on a simple Calculator class that only performs sums on int values.

    public static class Calculator
    {
        public static int Sum(int first, int second) => first + second;
    }
    

    One way to create tests is by creating one test for each possible combination of values:

    public class SumTests
    {
    
        [Test]
        public void SumPositiveNumbers()
        {
            var result = Calculator.Sum(1, 5);
            Assert.That(result, Is.EqualTo(6));
        }
    
        [Test]
        public void SumNegativeNumbers()
        {
            var result = Calculator.Sum(-1, -5);
            Assert.That(result, Is.EqualTo(-6));
        }
    
        [Test]
        public void SumWithZero()
        {
            var result = Calculator.Sum(1, 0);
            Assert.That(result, Is.EqualTo(1));
        }
    }
    

    However, it’s not a good idea: you’ll end up with lots of nearly identical tests (DRY, remember?) that add little to no value to the test suite. Also, this approach forces you to add a new test method for every new kind of test that pops into your mind.

    When possible, we should generalize it. With NUnit, we can use the TestCase attribute to specify the list of parameters passed in input to our test method, including the expected result.

    We can then simplify the whole test class by creating only one method that accepts the different cases in input and runs tests on those values.

    [Test]
    [TestCase(1, 5, 6)]
    [TestCase(-1, -5, -6)]
    [TestCase(1, 0, 1)]
    public void SumWorksCorrectly(int first, int second, int expected)
    {
        var result = Calculator.Sum(first, second);
        Assert.That(result, Is.EqualTo(expected));
    }
    

    By using TestCase, you can cover different cases by simply adding a new case without creating new methods.

    Clearly, don’t abuse it: use it only to group methods with similar behaviour – and don’t add if statements in the test method!

    There is a more advanced way to create a TestCase in NUnit, named TestCaseSource – but we will talk about it in a future C# tip 😉
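
    Just to give you a taste of it, here is a minimal sketch (the SumCases field name is mine; the source member must be static):

    private static readonly object[] SumCases =
    {
        new object[] { 1, 5, 6 },
        new object[] { -1, -5, -6 },
        new object[] { 1, 0, 1 },
    };

    [TestCaseSource(nameof(SumCases))]
    public void Sum_WorksCorrectly_WithTestCaseSource(int first, int second, int expected)
    {
        var result = Calculator.Sum(first, second);
        Assert.That(result, Is.EqualTo(expected));
    }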

    Further readings

    If you are using NUnit, I suggest you read this article about custom equality checks – you might find it handy in your code!

    🔗 C# Tip: Use custom Equality comparers in Nunit tests | Code4IT

    This article first appeared on Code4IT 🐧

    Wrapping up

    I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛

    Happy coding!

    🐧





    Source link

  • Building an Infinite Parallax Grid with GSAP and Seamless Tiling



    Hey! Jorge Toloza again, Co-Founder and Creative Director at DDS Studio. In this tutorial, we’re going to build a visually rich, infinitely scrolling grid where images move with a parallax effect based on scroll and drag interactions.

    We’ll use GSAP for buttery-smooth animations, add a sprinkle of math to achieve infinite tiling, and bring it all together with dynamic visibility animations and a staggered intro reveal.

    Let’s get started!

    Setting Up the HTML Container

    To start, we only need a single container to hold all the tiled image elements. Since we’ll be generating and positioning each tile dynamically with JavaScript, there’s no need for any static markup inside. This keeps our HTML clean and scalable as we duplicate tiles for infinite scrolling.

    <div id="images"></div>

    Basic Styling for the Grid Items

    Now that we have our container, let’s give it the foundational styles it needs to hold and animate a large set of tiles.

    We’ll use absolute positioning for each tile so we can freely place them anywhere in the grid. The outer container (#images) is set to relative so that all child .item elements are positioned correctly inside it. Each image fills its tile, and we’ll use will-change: transform to optimize animation performance.

    #images {
      width: 100%;
      height: 100%;
      display: inline-block;
      white-space: nowrap;
      position: relative;
      .item {
        position: absolute;
        top: 0;
        left: 0;
        will-change: transform;
        white-space: normal;
        .item-wrapper {
          will-change: transform;
        }
        .item-image {
          overflow: hidden;
          img {
            width: 100%;
            height: 100%;
            object-fit: cover;
            will-change: transform;
          }
        }
        small {
          width: 100%;
          display: block;
          font-size: 8rem;
          line-height: 1.25;
          margin-top: 12rem;
        }
      }
    }

    Defining Item Positions with JSON from Figma

    To control the visual layout of our grid, we’ll use design data exported directly from Figma. This gives us pixel-perfect placement while keeping layout logic separate from our code.

    I created a quick layout in Figma using rectangles to represent tile positions and dimensions. Then I exported that data into a JSON file, giving us a simple array of objects containing x, y, w, and h values for each tile.

    [
      { "x": 71, "y": 58, "w": 400, "h": 270 },
      { "x": 211, "y": 255, "w": 540, "h": 360 },
      { "x": 631, "y": 158, "w": 400, "h": 270 },
      { "x": 1191, "y": 245, "w": 260, "h": 195 },
      { "x": 351, "y": 687, "w": 260, "h": 290 },
      { "x": 751, "y": 824, "w": 205, "h": 154 },
      { "x": 911, "y": 540, "w": 260, "h": 350 },
      { "x": 1051, "y": 803, "w": 400, "h": 300 },
      { "x": 71, "y": 922, "w": 350, "h": 260 }
    ]

    Generating an Infinite Grid with JavaScript

    With the layout data defined, the next step is to dynamically generate our tile grid in the DOM and enable it to scroll infinitely in both directions.

    This involves three main steps:

    1. Compute the scaled tile dimensions based on the viewport and the original Figma layout’s aspect ratio.
    2. Duplicate the grid in both the X and Y axes so that as one tile set moves out of view, another seamlessly takes its place.
    3. Store metadata for each tile, such as its original position and a random easing value, which we’ll use to vary the parallax animation slightly for a more organic effect.

    The infinite scroll illusion is achieved by duplicating the entire tile set horizontally and vertically. This 2×2 tiling approach ensures there’s always a full set of tiles ready to slide into view as the user scrolls or drags.

    onResize() {
      // Get current viewport dimensions
      this.winW = window.innerWidth;
      this.winH = window.innerHeight;
    
      // Scale tile size to match viewport width while keeping original aspect ratio
      this.tileSize = {
        w: this.winW,
        h: this.winW * (this.originalSize.h / this.originalSize.w),
      };
    
      // Reset scroll state
      this.scroll.current = { x: 0, y: 0 };
      this.scroll.target = { x: 0, y: 0 };
      this.scroll.last = { x: 0, y: 0 };
    
      // Clear existing tiles from container
      this.$container.innerHTML = '';
    
      // Scale item positions and sizes based on new tile size
      const baseItems = this.data.map((d, i) => {
        const scaleX = this.tileSize.w / this.originalSize.w;
        const scaleY = this.tileSize.h / this.originalSize.h;
        const source = this.sources[i % this.sources.length];
        return {
          src: source.src,
          caption: source.caption,
          x: d.x * scaleX,
          y: d.y * scaleY,
          w: d.w * scaleX,
          h: d.h * scaleY,
        };
      });
    
      this.items = [];
    
      // Offsets to duplicate the grid in X and Y for seamless looping (2x2 tiling)
      const repsX = [0, this.tileSize.w];
      const repsY = [0, this.tileSize.h];
    
      baseItems.forEach((base) => {
        repsX.forEach((offsetX) => {
          repsY.forEach((offsetY) => {
            // Create item DOM structure
            const el = document.createElement('div');
            el.classList.add('item');
            el.style.width = `${base.w}px`;
    
            const wrapper = document.createElement('div');
            wrapper.classList.add('item-wrapper');
            el.appendChild(wrapper);
    
            const itemImage = document.createElement('div');
            itemImage.classList.add('item-image');
            itemImage.style.width = `${base.w}px`;
            itemImage.style.height = `${base.h}px`;
            wrapper.appendChild(itemImage);
    
            const img = new Image();
            img.src = `./img/${base.src}`;
            itemImage.appendChild(img);
    
            const caption = document.createElement('small');
            caption.innerHTML = base.caption;
    
            // Split caption into lines for staggered animation
            const split = new SplitText(caption, {
              type: 'lines',
              mask: 'lines',
              linesClass: 'line'
            });
            split.lines.forEach((line, i) => {
              line.style.transitionDelay = `${i * 0.15}s`;
              line.parentElement.style.transitionDelay = `${i * 0.15}s`;
            });
    
            wrapper.appendChild(caption);
            this.$container.appendChild(el);
    
            // Observe caption visibility for animation triggering
            this.observer.observe(caption);
    
            // Store item metadata including offset, easing, and bounding box
            this.items.push({
              el,
              container: itemImage,
              wrapper,
              img,
              x: base.x + offsetX,
              y: base.y + offsetY,
              w: base.w,
              h: base.h,
              extraX: 0,
              extraY: 0,
              rect: el.getBoundingClientRect(),
              ease: Math.random() * 0.5 + 0.5, // Random parallax easing for organic movement
            });
          });
        });
      });
    
      // Double the tile area to account for 2x2 duplication
      this.tileSize.w *= 2;
      this.tileSize.h *= 2;
    
      // Set initial scroll position slightly off-center for visual balance
      this.scroll.current.x = this.scroll.target.x = this.scroll.last.x = -this.winW * 0.1;
      this.scroll.current.y = this.scroll.target.y = this.scroll.last.y = -this.winH * 0.1;
    }
    

    Key Concepts

    • Scaling the layout ensures that your Figma-defined design adapts to any screen size without distortion.
    • 2×2 duplication ensures seamless continuity when the user scrolls in any direction.
    • Random easing values create slight variation in tile movement, making the parallax effect feel more natural.
    • extraX and extraY values will later be used to shift tiles back into view once they scroll offscreen.
    • SplitText animation is used to break each caption (<small>) into individual lines, enabling line-by-line animation.

    Adding Interactive Scroll and Drag Events

    To bring the infinite grid to life, we need to connect it to user input. This includes:

    • Scrolling with the mouse wheel or trackpad
    • Dragging with a pointer (mouse or touch)
    • Smooth motion between input updates using linear interpolation (lerp)

    Rather than instantly snapping to new positions, we interpolate between the current and target scroll values, which creates fluid, natural transitions.

    Scroll and Drag Tracking

    We capture two types of user interaction:

    1) Wheel Events
    Wheel input updates a target scroll position. We multiply the deltas by a damping factor to control sensitivity.

    onWheel(e) {
      e.preventDefault();
      const factor = 0.4;
      this.scroll.target.x -= e.deltaX * factor;
      this.scroll.target.y -= e.deltaY * factor;
    }

    2) Pointer Dragging
    On mouse or touch input, we track when the drag starts, then update scroll targets based on the pointer’s movement.

    onMouseDown(e) {
      e.preventDefault();
      this.isDragging = true;
      document.documentElement.classList.add('dragging');
      this.mouse.press.t = 1;
      this.drag.startX = e.clientX;
      this.drag.startY = e.clientY;
      this.drag.scrollX = this.scroll.target.x;
      this.drag.scrollY = this.scroll.target.y;
    }
    
    onMouseUp() {
      this.isDragging = false;
      document.documentElement.classList.remove('dragging');
      this.mouse.press.t = 0;
    }
    
    onMouseMove(e) {
      this.mouse.x.t = e.clientX / this.winW;
      this.mouse.y.t = e.clientY / this.winH;
    
      if (this.isDragging) {
        const dx = e.clientX - this.drag.startX;
        const dy = e.clientY - this.drag.startY;
        this.scroll.target.x = this.drag.scrollX + dx;
        this.scroll.target.y = this.drag.scrollY + dy;
      }
    }

    Smoothing Motion with Lerp

    In the render loop, we interpolate between the current and target scroll values using a lerp function. This creates smooth, decaying motion rather than abrupt changes.

    render() {
      // Smooth current → target
      this.scroll.current.x += (this.scroll.target.x - this.scroll.current.x) * this.scroll.ease;
      this.scroll.current.y += (this.scroll.target.y - this.scroll.current.y) * this.scroll.ease;
    
      // Calculate delta for parallax
      const dx = this.scroll.current.x - this.scroll.last.x;
      const dy = this.scroll.current.y - this.scroll.last.y;
    
      // Update each tile
      this.items.forEach(item => {
        const parX = 5 * dx * item.ease + (this.mouse.x.c - 0.5) * item.rect.width * 0.6;
        const parY = 5 * dy * item.ease + (this.mouse.y.c - 0.5) * item.rect.height * 0.6;
    
        // Infinite wrapping
        const posX = item.x + this.scroll.current.x + item.extraX + parX;
        if (posX > this.winW)  item.extraX -= this.tileSize.w;
        if (posX + item.rect.width < 0) item.extraX += this.tileSize.w;
    
        const posY = item.y + this.scroll.current.y + item.extraY + parY;
        if (posY > this.winH)  item.extraY -= this.tileSize.h;
        if (posY + item.rect.height < 0) item.extraY += this.tileSize.h;
    
        item.el.style.transform = `translate(${posX}px, ${posY}px)`;
      });
    
      this.scroll.last.x = this.scroll.current.x;
      this.scroll.last.y = this.scroll.current.y;
    
      requestAnimationFrame(this.render);
    }

    The scroll.ease value controls how fast the scroll position catches up to the target—smaller values result in slower, smoother motion.

    Animating Item Visibility with IntersectionObserver

    To enhance the visual hierarchy and focus, we’ll highlight only the tiles that are currently within the viewport. This creates a dynamic effect where captions appear and styling changes as tiles enter view.

    We’ll use the IntersectionObserver API to detect when each tile becomes visible and toggle a CSS class accordingly.

    this.observer = new IntersectionObserver(entries => {
      entries.forEach(entry => {
        entry.target.classList.toggle('visible', entry.isIntersecting);
      });
    });
    // …and after appending each wrapper:
    this.observer.observe(wrapper);

    Creating an Intro Animation with GSAP

    To finish the experience with a strong visual entry, we’ll animate all currently visible tiles from the center of the screen into their natural grid positions. This creates a polished, attention-grabbing introduction and adds a sense of depth and intentionality to the layout.

    We’ll use GSAP for this animation, utilizing gsap.set() to position elements instantly, and gsap.to() with staggered timing to animate them into place.

    Selecting Visible Tiles for Animation

    First, we filter all tile elements to include only those currently visible in the viewport. This avoids animating offscreen elements and keeps the intro lightweight and focused:

    import gsap from 'gsap';
    initIntro() {
      this.introItems = [...this.$container.querySelectorAll('.item-wrapper')].filter((item) => {
        const rect = item.getBoundingClientRect();
        return (
          rect.x > -rect.width &&
          rect.x < window.innerWidth + rect.width &&
          rect.y > -rect.height &&
          rect.y < window.innerHeight + rect.height
        );
      });
      this.introItems.forEach((item) => {
        const rect = item.getBoundingClientRect();
        const x = -rect.x + window.innerWidth * 0.5 - rect.width * 0.5;
        const y = -rect.y + window.innerHeight * 0.5 - rect.height * 0.5;
        gsap.set(item, { x, y });
      });
    }

    Animating to Final Positions

    Once the tiles are centered, we animate them outward to their natural positions using a smooth easing curve and staggered timing:

    intro() {
      gsap.to(this.introItems.reverse(), {
        duration: 2,
        ease: 'expo.inOut',
        x: 0,
        y: 0,
        stagger: 0.05,
      });
    }

    • x: 0, y: 0 restores the original position set via CSS transforms.
    • expo.inOut provides a dramatic but smooth easing curve.
    • stagger creates a cascading effect, enhancing visual rhythm.

    Wrapping Up

    What we’ve built is a scrollable, draggable image grid with a parallax effect, visibility animations, and a smooth GSAP-powered intro. It’s a flexible base you can adapt for creative galleries, interactive backgrounds, or experimental interfaces.



    Source link

  • 4 ways to create Unit Tests without Interfaces in C# | Code4IT



    C# devs have the bad habit of creating interfaces for every non-DTO class because «we need them for mocking!». Are you sure it’s the only way?



    One of the most common traits of C# developers is the excessive usage of interfaces.

    For every non-DTO class we define, we usually also create the related interface. Most of the time we don’t need it, because we rarely have multiple implementations of the same interface. Instead, we say that we need the interface to enable mocking.

    That’s true; it’s pretty straightforward to mock an interface: lots of libraries, like Moq and NSubstitute, allow you to create mocks and pass them to the class under test. What if there were another way?

    In this article, we will learn how to have complete control over a dependency while having the concrete class, and not the related interface, injected in the constructor.

    C# devs always add interfaces, just in case

    If you’re a developer like me, you’ve been taught something like this:

    One of the SOLID principles is Dependency Inversion; to achieve it, you need Dependency Injection. The best way to do that is by creating an interface, injecting it in the consumer’s constructor, and then mapping the interface and the concrete class.

    Sometimes, somebody explains that we don’t need interfaces to achieve Dependency Injection. However, there are generally two arguments proposed by those who keep using interfaces everywhere: the “in case I need to change the database” argument and, even more often, the “without interfaces, I cannot create mocks”.

    Are we sure?

    The “Just in case I need to change the database” argument

    One phrase that I often hear is:

    Injecting interfaces allows me to change the concrete implementation of a class without worrying about the caller. You know, just in case I had to change the database engine…

    Yes, that’s totally right – using interfaces, you can change the internal implementation in the blink of an eye.

    Let’s be honest: in all your career, how many times have you changed the underlying database? In my whole career, it happened just once: we tried to build a solution using Gremlin for CosmosDB, but it turned out to be too expensive – so we switched to a simpler MongoDB.

    But, all in all, it wasn’t only thanks to the interfaces that we managed to switch easily; it was because we strictly separated the classes and did not leak the models related to Gremlin into the core code. We structured the code with a sort of Hexagonal Architecture, way before this term became a trend in the tech community.

    Still, interfaces can be helpful, especially when dealing with multiple implementations of the same methods or when you want to wrap your head around the methods, inputs, and outputs exposed by a module.

    The “I need to mock” argument

    Another one I like is this:

    Interfaces are necessary for mocking dependencies! Otherwise, how can I create Unit Tests?

    Well, I used to agree with this argument. I was used to mocking interfaces by using libraries like Moq and defining the behaviour of the dependency using the Setup method.

    It’s still a valid way, but my point here is that that’s not the only one!

    One of the simplest tricks is to mark your classes as abstract. But… this means you’ll end up with every single class marked as abstract. Not the best idea.
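
    For completeness, this is roughly what that trick looks like – a purely hypothetical sketch (AbstractNumbersRepository and its subclasses do not appear anywhere else in this article):

    // The dependency is made abstract only to make its members overridable
    public abstract class AbstractNumbersRepository
    {
        public abstract IEnumerable<int> GetNumbers();
    }

    // The real implementation lives in a concrete subclass...
    public class RealNumbersRepository : AbstractNumbersRepository
    {
        public override IEnumerable<int> GetNumbers()
            => Random.Shared.GetItems(Enumerable.Range(0, 100).ToArray(), 50);
    }

    // ...while tests provide their own subclass returning canned data
    internal class FakeNumbersRepository : AbstractNumbersRepository
    {
        public override IEnumerable<int> GetNumbers() => new[] { 1, 2, 3 };
    }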

    We have other tools in our belt!

    A realistic example: Dependency Injection without interfaces

    Let’s start with a real-ish example.

    We have a NumbersRepository that just exposes one method: GetNumbers().

    public class NumbersRepository
    {
        private readonly int[] _allNumbers;
    
        public NumbersRepository()
        {
        _allNumbers = Enumerable.Range(0, 100).ToArray();
        }
    
        public IEnumerable<int> GetNumbers() => Random.Shared.GetItems(_allNumbers, 50);
    }
    

    Generally, one would be tempted to add an interface with the same name as the class, INumbersRepository, and include the GetNumbers method in the interface definition.

    We are not going to do that – the interface is not necessary, so why clutter the code with something like that?

    Now, for the consumer. We have a simple NumbersSearchService that accepts, via Dependency Injection, an instance of NumbersRepository (yes, the concrete class!) and uses it to perform a simple search:

    public class NumbersSearchService
    {
        private readonly NumbersRepository _repository;
    
        public NumbersSearchService(NumbersRepository repository)
        {
            _repository = repository;
        }
    
        public bool Contains(int number)
        {
            var numbers = _repository.GetNumbers();
            return numbers.Contains(number);
        }
    }
    

    To add these classes to your ASP.NET project, you can add them in the DI definition like this:

    builder.Services.AddSingleton<NumbersRepository>();
    builder.Services.AddSingleton<NumbersSearchService>();
    

    Without adding any interface.

    Now, how can we test this class without using the interface?

    Way 1: Use the “virtual” keyword in the dependency to create stubs

    We can create a subclass of the dependency, even if it is a concrete class, by overriding just some of its functionalities.

    For example, we can choose to mark the GetNumbers method in the NumbersRepository class as virtual, making it easily overridable from a subclass.

    public class NumbersRepository
    {
        private readonly int[] _allNumbers;
    
        public NumbersRepository()
        {
            _allNumbers = Enumerable.Range(0, 100).ToArray();
        }
    
    -    public IEnumerable<int> GetNumbers() => Random.Shared.GetItems(_allNumbers, 50);
    +    public virtual IEnumerable<int> GetNumbers() => Random.Shared.GetItems(_allNumbers, 50);
    }
    

    Yes, we can mark a method as virtual even if the class is concrete!

    Now, in our Unit Tests, we can create a subtype of NumbersRepository to have complete control of the GetNumbers method:

    internal class StubNumberRepo : NumbersRepository
    {
        private IEnumerable<int> _numbers;
    
        public void SetNumbers(params int[] numbers) => _numbers = numbers;
    
        public override IEnumerable<int> GetNumbers() => _numbers;
    }
    

    We have overridden the GetNumbers method, but to do so, we had to include a new method, SetNumbers, to define the expected result of the former method.

    We then can use it in our tests like this:

    [Test]
    public void Should_WorkWithStubRepo()
    {
        // Arrange
        var repository = new StubNumberRepo();
        repository.SetNumbers(1, 2, 3);
        var service = new NumbersSearchService(repository);
    
        // Act
        var result = service.Contains(3);
    
        // Assert
        Assert.That(result, Is.True);
    }
    

    You now have full control over the subclass. But this approach comes with a problem: if you have multiple methods marked as virtual, and you are going to use all of them in your test classes, then you will need to override every single method (to have control over them) and work out how to decide whether to use the concrete method or the stub implementation.

    For example, we can update the StubNumberRepo to let the consumer choose if we need the dummy values or the base implementation:

    internal class StubNumberRepo : NumbersRepository
    {
        private IEnumerable<int> _numbers;
        private bool _useStubNumbers;
    
        public void SetNumbers(params int[] numbers)
        {
            _numbers = numbers;
            _useStubNumbers = true;
        }
    
        public override IEnumerable<int> GetNumbers()
        {
            if (_useStubNumbers)
                return _numbers;
            return base.GetNumbers();
        }
    }
    

    With this approach, by default, we use the concrete implementation of NumbersRepository because _useStubNumbers is false. If we call the SetNumbers method, we also specify that we don’t want to use the original implementation.

    Way 2: Use the virtual keyword in the service to avoid calling the dependency

    Similar to the previous approach, we can mark some methods of the caller as virtual to allow us to change parts of our class while keeping everything else as it was.

    To achieve it, we have to refactor our Service class a little:

    public class NumbersSearchService
    {
        private readonly NumbersRepository _repository;
    
        public NumbersSearchService(NumbersRepository repository)
        {
            _repository = repository;
        }
    
        public bool Contains(int number)
        {
    -       var numbers = _repository.GetNumbers();
    +       var numbers = GetNumbers();
            return numbers.Contains(number);
        }
    
    +    public virtual IEnumerable<int> GetNumbers() => _repository.GetNumbers();
    }
    

    The key is that we moved the calls to the external references to a separate method, marking it as virtual.

    This way, we can create a stub class of the Service itself without the need to stub its dependencies:

    internal class StubNumberSearch : NumbersSearchService
    {
        private IEnumerable<int> _numbers;
        private bool _useStubNumbers;
    
        public StubNumberSearch() : base(null)
        {
        }
    
        public void SetNumbers(params int[] numbers)
        {
            _numbers = numbers.ToArray();
            _useStubNumbers = true;
        }
    
        public override IEnumerable<int> GetNumbers()
            => _useStubNumbers ? _numbers : base.GetNumbers();
    }
    

    The approach is almost identical to the one we saw before. The difference can be seen in your tests:

    [Test]
    public void Should_UseStubService()
    {
        // Arrange
        var service = new StubNumberSearch();
        service.SetNumbers(12, 15, 30);
    
        // Act
        var result = service.Contains(15);
    
        // Assert
        Assert.That(result, Is.True);
    }
    

    There is a problem with this approach: many devs (correctly) add null checks in the constructor to ensure that the dependencies are not null:

    public NumbersSearchService(NumbersRepository repository)
    {
        ArgumentNullException.ThrowIfNull(repository);
        _repository = repository;
    }
    

    While this approach makes it safe to use the NumbersSearchService reference within the class’ methods, it also stops us from creating a StubNumberSearch. Since we want to create an instance of NumbersSearchService without the burden of injecting all the dependencies, we call the base constructor passing null as a value for the dependencies. If we validate against null, the stub class becomes unusable.

    There’s a simple solution: adding a protected empty constructor:

    public NumbersSearchService(NumbersRepository repository)
    {
        ArgumentNullException.ThrowIfNull(repository);
        _repository = repository;
    }
    
    protected NumbersSearchService()
    {
    }
    

    We mark it as protected because we want only subclasses to be able to access it.
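
    With that in place, the stub no longer needs to pass null to the base class – here is a small, hypothetical revision of the StubNumberSearch shown earlier:

    internal class StubNumberSearch : NumbersSearchService
    {
        private IEnumerable<int> _numbers;
        private bool _useStubNumbers;

        // No ": base(null)" anymore: the implicit call to the protected parameterless
        // constructor means the null check in the main constructor is never executed.
        public StubNumberSearch()
        {
        }

        public void SetNumbers(params int[] numbers)
        {
            _numbers = numbers.ToArray();
            _useStubNumbers = true;
        }

        public override IEnumerable<int> GetNumbers()
            => _useStubNumbers ? _numbers : base.GetNumbers();
    }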

    Way 3: Use the “new” keyword in methods to hide the base implementation

    Similar to the virtual keyword is the new keyword, which can be applied to methods.

    We can then remove the virtual keyword from the base class and hide its implementation by marking the overriding method as new.

    public class NumbersSearchService
    {
        private readonly NumbersRepository _repository;
    
        public NumbersSearchService(NumbersRepository repository)
        {
            ArgumentNullException.ThrowIfNull(repository);
            _repository = repository;
        }
    
        public bool Contains(int number)
        {
            var numbers = _repository.GetNumbers();
            return numbers.Contains(number);
        }
    
    -    public virtual IEnumerable<int> GetNumbers() => _repository.GetNumbers();
    +    public IEnumerable<int> GetNumbers() => _repository.GetNumbers();
    }
    

    We have restored the original implementation of the Repository.

    Now, we can update the stub by adding the new keyword.

    internal class StubNumberSearch : NumbersSearchService
    {
        private IEnumerable<int> _numbers;
        private bool _useStubNumbers;
    
        public void SetNumbers(params int[] numbers)
        {
            _numbers = numbers.ToArray();
            _useStubNumbers = true;
        }
    
    -    public override IEnumerable<int> GetNumbers() => _useStubNumbers ? _numbers : base.GetNumbers();
    +    public new IEnumerable<int> GetNumbers() => _useStubNumbers ? _numbers : base.GetNumbers();
    }
    

    We haven’t actually solved any problem except for one: we can now avoid cluttering all our classes with the virtual keyword.

    A question for you! Is there any difference between using the new and the virtual keyword? When should you pick one instead of the other? Let me know in the comments section! 📩

    Way 4: Mock concrete classes by marking a method as virtual

    Sometimes, I hear developers say that mocks are the absolute evil, and you should never use them.

    Oh, come on! Don’t be so silly!

    That’s true: when using mocks, you are writing tests in an unrealistic environment. But, well, that’s exactly the point of having mocks!

    If you think about it, at school, during Science lessons, we were taught to do our scientific calculations using approximations: ignore the air resistance, ignore friction, and so on. We knew that that world did not exist, but we removed some parts to make it easier to validate our hypothesis.

    In my opinion, it’s the same for testing. Mocks are useful to have full control of a specific behaviour. Still, only relying on mocks makes your tests pretty brittle: you cannot be sure that your system is working under real conditions.

    That’s why, as I explained in a previous article, I prefer the Testing Diamond over the Testing Pyramid. In many real cases, five Integration Tests are more valuable than fifty Unit Tests.

    But still, mocks can be useful. How can we use them if we don’t have interfaces?

    Let’s start with the basic example:

    public class NumbersRepository
    {
        private readonly int[] _allNumbers;
    
        public NumbersRepository()
        {
            _allNumbers = Enumerable.Range(0, 100).ToArray();
        }
    
        public IEnumerable<int> GetNumbers() => Random.Shared.GetItems(_allNumbers, 50);
    }
    
    public class NumbersSearchService
    {
        private readonly NumbersRepository _repository;
    
        public NumbersSearchService(NumbersRepository repository)
        {
            ArgumentNullException.ThrowIfNull(repository);
            _repository = repository;
        }
    
        public bool Contains(int number)
        {
            var numbers = _repository.GetNumbers();
            return numbers.Contains(number);
        }
    }
    

    If we try to use Moq to create a mock of NumbersRepository (again, the concrete class) like this:

    [Test]
    public void Should_WorkWithMockRepo()
    {
        // Arrange
        var repository = new Moq.Mock<NumbersRepository>();
        repository.Setup(_ => _.GetNumbers()).Returns(new int[] { 1, 2, 3 });
        var service = new NumbersSearchService(repository.Object);
    
        // Act
        var result = service.Contains(3);
    
        // Assert
        Assert.That(result, Is.True);
    }
    

    It will fail with this error:

    System.NotSupportedException : Unsupported expression: _ => _.GetNumbers()
    Non-overridable members (here: NumbersRepository.GetNumbers) may not be used in setup / verification expressions.

    This error occurs because the implementation GetNumbers is fixed as defined in the NumbersRepository class and cannot be overridden.

    Unless you mark it as virtual, as we did before.

    public class NumbersRepository
    {
        private readonly int[] _allNumbers;
    
        public NumbersRepository()
        {
            _allNumbers = Enumerable.Range(0, 100).ToArray();
        }
    
    -    public IEnumerable<int> GetNumbers() => Random.Shared.GetItems(_allNumbers, 50);
    +    public virtual IEnumerable<int> GetNumbers() => Random.Shared.GetItems(_allNumbers, 50);
    }
    

    Now the test passes: we have successfully mocked a concrete class!

    Further readings

    Testing is a crucial part of any software application. I personally write Unit Tests even for throwaway software – this way, I can ensure that I’m doing the correct thing without the need for manual debugging.

    However, one part that is often underestimated is the code quality of tests. Tests should be written even better than production code. You can find more about this topic here:

    🔗 Tests should be even more well-written than production code | Code4IT

    Also, Unit Tests are not enough. You should probably write more Integration Tests than Unit Tests. This one is a testing strategy called Testing Diamond.

    🔗 Testing Pyramid vs Testing Diamond (and how they affect Code Coverage) | Code4IT

    This article first appeared on Code4IT 🐧

    Clearly, you can write Integration Tests for .NET APIs easily. In this article, I explain how to create and customize Integration Tests using NUnit:

    🔗 Advanced Integration Tests for .NET 7 API with WebApplicationFactory and NUnit | Code4IT

    Wrapping up

    In this article, we learned that it’s not necessary to create interfaces for the sake of having mocks.

    We have different other options.

    Honestly speaking, I’m still used to creating interfaces and using them with mocks.

    I find it easy to do, and this approach provides a quick way to create tests and drive the behaviour of the dependencies.

    Also, I recognize that interfaces created for the sole purpose of mocking are quite pointless: we have learned that there are other ways, and we should consider trying out these solutions.

    Still, interfaces are quite handy for two “non-technical” reasons:

    • using interfaces, you can understand at a glance which operations you can call, in a clean and concise way;
    • interfaces and mocks allow you to easily use TDD: while writing the test cases, you also define what methods you need and the expected behaviour. I know you can do that using stubs, but I find it easier with interfaces.

    I know, this is a controversial topic – I’m not saying that you should remove all your interfaces (I think it’s a matter of personal taste, somehow!), but with this article, I want to highlight that you can avoid interfaces.

    I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛

    Happy coding!

    🐧





    Source link

  • 5 Signs Your Organization Needs Zero Trust Network Access



    In today’s hyperconnected business environment, the question isn’t if your organization will face a security breach, but when. With the growing complexity of remote workforces, cloud adoption, and third-party integrations, many businesses are discovering that their traditional security tools are no longer enough to keep threats at bay.

    Enter Zero Trust Network Access (ZTNA)—a modern security model that assumes no user, device, or request is trustworthy until proven otherwise. Unlike traditional perimeter-based security, ZTNA treats every access request as suspect, granting application-level access only after strict verification of identity and context.

    But how do you know when it’s time to make the shift?

    Here are five tell-tale signs that your current access strategy may be outdated—and why Zero Trust Network Access could be the upgrade your organization needs.

     

    1. Your VPN is Always on—and Always a Risk

    Virtual Private Networks (VPNs) were built for a simpler time when employees worked from office desktops, and network boundaries were clear. Today, they’re an overworked, often insecure solution trying to fit a modern, mobile-first world.

    The problem? Once a user connects via VPN, they often gain full access to the internal network, regardless of what they need. One compromised credential can unlock the entire infrastructure.

    How ZTNA helps:

    ZTNA enforces the principle of least privilege. Instead of exposing the entire network, it grants granular, application-specific access based on identity, role, and device posture. Even if credentials are compromised, the intruder won’t get far.

     

    2. Remote Access is a Growing Operational Burden

    Managing remote access for a distributed workforce is no longer optional—it’s mission-critical. Yet many organizations rely on patchwork solutions that are not designed for scale, leading to latency, downtime, and support tickets galore.

    If your IT team constantly fights connection issues, reconfigures VPN clients, or manually provisions contractor access, it’s time to rethink your approach.

    How ZTNA helps:

    ZTNA enables seamless, cloud-delivered access without the overhead of legacy systems. Employees, contractors, and partners can connect from anywhere, without IT having to manage tunnels, gateways, or physical infrastructure. It also supports agentless access, perfect for unmanaged devices or third-party vendors.

     

    3. You Lack Visibility into What Users Do After They Log In

    Traditional access tools like VPNs authenticate users at the start of a session but offer little to no insight into what happens afterward. Did they access sensitive databases? Transfer files? Leave their session open on an unsecured device?

    This lack of visibility is a significant risk to security and compliance. It leaves gaps in auditing, limits forensic investigations, and increases your exposure to insider threats.

    How ZTNA helps:

    With ZTNA, organizations get deep session-level visibility and control. You can log every action, enforce session recording, restrict clipboard usage, and even automatically terminate sessions based on unusual behavior or policy violations. This isn’t just security, it’s accountability.

     

    4. Your Attack Surface is Expanding Beyond Your Control

    Every new SaaS app, third-party vendor, or remote endpoint is a new potential doorway into your environment. In traditional models, this means constantly updating firewall rules, managing IP allowlists, or creating segmented VPN tunnels: tasks that are reactive and hard to scale.

    How ZTNA helps:

    ZTNA eliminates network-level access. Instead of exposing apps to the public or placing them behind perimeter firewalls, ZTNA makes applications invisible, discoverable only to users who satisfy strict access policies. It drastically reduces your attack surface without limiting business agility.

     

    5. Security and User Experience Are at War

    Security policies are supposed to protect users, not frustrate them. But when authentication is slow, access requires multiple manual steps, or users are locked out due to inflexible policies, they look for shortcuts—often unsafe ones.

    This is where shadow IT thrives—and where security begins to fail.

    How ZTNA helps:

    ZTNA provides context-aware access control that strikes the right balance between security and usability. For example, a user accessing a low-risk app from a trusted device and location may pass through with minimal friction. However, someone connecting from an unknown device or foreign IP may face additional verification or be denied altogether.

    ZTNA adapts security policies based on real-time context, ensuring protection without compromising productivity.

     

    In Summary

    The traditional approach to access control is no match for today’s dynamic, perimeterless world. If you’re experiencing VPN fatigue, blind spots in user activity, growing exposure, or frustrated users, it’s time to rethink your security architecture.

    Zero Trust Network Access isn’t just another security tool—it’s a more innovative, adaptive framework for modern businesses.

    Looking to modernize your access control without compromising on security or performance? Explore Seqrite ZTNA—a future-ready ZTNA solution built for enterprises navigating today’s cybersecurity landscape.



    Source link

  • Handling exceptions with Task.WaitAll and Task.WhenAll | Code4IT

    Handling exceptions with Task.WaitAll and Task.WhenAll | Code4IT



    Asynchronous programming enables you to execute multiple operations without blocking the main thread.

    In general, we often think of the Happy Scenario, when all the operations go smoothly, but we rarely consider what to do when an error occurs.

    In this article, we will explore how Task.WaitAll and Task.WhenAll behave when an error is thrown in one of the awaited Tasks.

    Prepare the tasks to be executed

    For the sake of this article, we are going to use a silly method that returns the same number it receives in input, but throws an exception if that number is divisible by 3:

    public Task<int> Echo(int value) => Task.Factory.StartNew(
    () =>
    {
        if (value % 3 == 0)
        {
            Console.WriteLine($"[LOG] You cannot use {value}!");
            throw new Exception($"[EXCEPTION] Value cannot be {value}");
        }
        Console.WriteLine($"[LOG] {value} is a valid value!");
        return value;
    }
    );
    

    Those Console.WriteLine instructions will allow us to see what’s happening “live”.

    We prepare the collection of tasks to be awaited by using a simple Enumerable.Range:

    var tasks = Enumerable.Range(1, 11).Select(Echo);
    

    And then, we use a try-catch block with some logs to showcase what happens when we run the application.

    try
    {
    
        Console.WriteLine("START");
    
        // await all the tasks
    
        Console.WriteLine("END");
    }
    catch (Exception ex)
    {
        Console.WriteLine("The exception message is: {0}", ex.Message);
        Console.WriteLine("The exception type is: {0}", ex.GetType().FullName);
    
        if (ex.InnerException is not null)
        {
            Console.WriteLine("Inner exception: {0}", ex.InnerException.Message);
        }
    }
    finally
    {
        Console.WriteLine("FINALLY!");
    }
    

    If we run it all together, we notice that nothing really happens.

    In fact, we have only defined the collection of tasks: since the Select projection is evaluated lazily, the tasks are not even created (let alone started) until the enumeration is materialized.
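    As a minimal sketch (reusing the tasks variable defined above), it is only when the enumeration is materialized that Echo gets invoked and the tasks are actually created and started:

    // Nothing has started yet: Select is lazy, so Echo has not been invoked.
    // Materializing the enumeration creates (and immediately starts) the tasks.
    Task<int>[] startedTasks = tasks.ToArray();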

    We can, then, call WaitAll and WhenAll to see what happens when an error occurs.

    Error handling when using Task.WaitAll

    It’s time to execute the tasks stored in the tasks collection, like this:

    try
    {
        Console.WriteLine("START");
    
        // await all the tasks
        Task.WaitAll(tasks.ToArray());
    
        Console.WriteLine("END");
    }
    

    Task.WaitAll accepts an array of tasks to be awaited and does not return anything.

    The execution goes like this:

    START
    1 is a valid value!
    2 is a valid value!
    :(  You cannot use 6!
    5 is a valid value!
    :(  You cannot use 3!
    4 is a valid value!
    8 is a valid value!
    10 is a valid value!
    :(  You cannot use 9!
    7 is a valid value!
    11 is a valid value!
    The exception message is: One or more errors occurred. ([EXCEPTION] Value cannot be 3) ([EXCEPTION] Value cannot be 6) ([EXCEPTION] Value cannot be 9)
    The exception type is: System.AggregateException
    Inner exception: [EXCEPTION] Value cannot be 3
    FINALLY!
    

    There are a few things to notice:

    • the tasks are not executed in sequence: for example, 6 was printed before 4. To be precise, we only know that Console.WriteLine printed the messages in that order; the tasks themselves may have run in yet another order (as you can deduce from the order of the error messages);
    • all the tasks are executed before jumping to the catch block;
    • the exception caught in the catch block is of type System.AggregateException; we’ll come back to it later;
    • the InnerException property of the exception being caught contains the info for the first exception that was thrown.

    Error handling when using Task.WhenAll

    Let’s replace Task.WaitAll with Task.WhenAll.

    try
    {
        Console.WriteLine("START");
    
        await Task.WhenAll(tasks);
    
        Console.WriteLine("END");
    }
    

    There are two main differences to notice when comparing Task.WaitAll and Task.WhenAll:

    1. Task.WhenAll accepts any collection of tasks as input (as long as it is an IEnumerable<Task>);
    2. it returns a Task that you have to await (see the sketch below).
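    As a quick aside, here is a minimal sketch of the two signatures side by side; it assumes the Echo method defined above, an async context, and inputs that do not throw, so we can focus on the shape of the calls rather than on error handling:

    Task<int>[] safeTasks = { Echo(1), Echo(2), Echo(4) };

    // Task.WaitAll blocks the current thread and returns void.
    Task.WaitAll(safeTasks);

    // Task.WhenAll returns a Task<int[]> that you await; it carries all the results,
    // in the same order as the input tasks.
    int[] results = await Task.WhenAll(safeTasks);
    Console.WriteLine(string.Join(", ", results)); // 1, 2, 4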

    And what happens when we run the program?

    START
    2 is a valid value!
    1 is a valid value!
    4 is a valid value!
    :(  You cannot use 3!
    7 is a valid value!
    5 is a valid value!
    :(  You cannot use 6!
    8 is a valid value!
    10 is a valid value!
    11 is a valid value!
    :(  You cannot use 9!
    The exception message is: [EXCEPTION] Value cannot be 3
    The exception type is: System.Exception
    FINALLY!
    

    Again, there are a few things to notice:

    • just as before, the messages are not printed in order;
    • the exception message contains the message for the first exception thrown;
    • the exception is of type System.Exception, and not System.AggregateException as we saw before.

    This means that the first exception breaks everything, and you lose the info about the other exceptions that were thrown.

    📩 but now, a question for you: we learned that, when using Task.WhenAll, only the first exception gets caught by the catch block. What happens to the other exceptions? How can we retrieve them? Drop a message in the comments below ⬇️
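    (If you want to check your idea, here is one possible approach as a minimal sketch: keep a reference to the Task returned by Task.WhenAll, because its Exception property is an AggregateException that still contains every failure, not only the first one rethrown by await.)

    Task whenAllTask = Task.WhenAll(tasks);
    try
    {
        await whenAllTask;
    }
    catch
    {
        // whenAllTask.Exception is an AggregateException holding ALL the thrown exceptions.
        if (whenAllTask.Exception is AggregateException aggregate)
        {
            foreach (var inner in aggregate.InnerExceptions)
            {
                Console.WriteLine(inner.Message);
            }
        }
    }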

    Comparing Task.WaitAll and Task.WhenAll

    Task.WaitAll and Task.WhenAll are similar but not identical.

    Task.WaitAll should be used when you are in a synchronous context and need to block the current thread until all tasks are complete. This is common in simple old-style console applications or scenarios where asynchronous programming is not required. However, it is not recommended in UI or modern ASP.NET applications because it can cause deadlocks or freeze the UI.

    Task.WhenAll is preferred in modern C# code, especially in asynchronous methods (where you can use async Task). It allows you to await the completion of multiple tasks without blocking the calling thread, making it suitable for environments where responsiveness is important. It also enables easier composition of continuations and better exception handling.
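    As a small illustration of that composition (a sketch that reuses the tasks collection defined above), you can attach a continuation to the single combined task:

    // Compose a continuation on the Task returned by Task.WhenAll.
    await Task.WhenAll(tasks)
        .ContinueWith(t =>
        {
            // t.Status tells us whether every task ran to completion or some of them faulted.
            Console.WriteLine($"All tasks finished with status: {t.Status}");
        });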

    Let’s wrap it up in a table:

    | Feature | Task.WaitAll | Task.WhenAll |
    | --- | --- | --- |
    | Return Type | void | Task or Task<TResult[]> |
    | Blocking/Non-blocking | Blocking (waits synchronously) | Non-blocking (returns a Task) |
    | Exception Handling | Throws AggregateException immediately | Exceptions observed when awaited |
    | Usage Context | Synchronous code (e.g., console apps) | Asynchronous code (e.g., async methods) |
    | Continuation | Not possible (since it blocks) | Possible (use .ContinueWith or await) |
    | Deadlock Risk | Higher in UI contexts | Lower (if properly awaited) |

    Bonus tip: get the best out of AggregateException

    We can expand a bit on the AggregateException type.

    That specific type of exception acts as a container for all the exceptions thrown when using Task.WaitAll.

    It exposes a property named InnerExceptions: a read-only collection that contains all the exceptions that were thrown, and that you can simply iterate over.

    A common example is this:

    if (ex is AggregateException aggEx)
    {
        Console.WriteLine("There are {0} exceptions in the aggregate exception.", aggEx.InnerExceptions.Count);
        foreach (var innerEx in aggEx.InnerExceptions)
        {
            Console.WriteLine("Inner exception: {0}", innerEx.Message);
        }
    }
    
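    As a related tip, AggregateException also exposes a Flatten() method that collapses nested aggregates (for example, an AggregateException wrapping another one) into a single level, so you can iterate all the inner exceptions uniformly; a minimal sketch:

    if (ex is AggregateException aggEx)
    {
        // Flatten() merges nested AggregateExceptions into one flat InnerExceptions list.
        foreach (var innerEx in aggEx.Flatten().InnerExceptions)
        {
            Console.WriteLine("Inner exception: {0}", innerEx.Message);
        }
    }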

    Further readings

    This article is all about handling the unhappy path.

    If you want to learn more about Task.WaitAll and Task.WhenAll, I’d suggest you read the following two articles that I find totally interesting and well-written:

    🔗 Understanding Task.WaitAll and Task.WhenAll in C# | Muhammad Umair

    and

    🔗 Understanding WaitAll and WhenAll in .NET | Prasad Raveendran

    This article first appeared on Code4IT 🐧

    But, if you don’t know what asynchronous programming is and how to use TAP in C#, I’d suggest you start from the basics with this article:

    🔗 First steps with asynchronous programming in C# | Code4IT

    Wrapping up

    I hope you enjoyed this article! Let’s keep in touch on LinkedIn, Twitter or BlueSky! 🤜🤛

    Happy coding!

    🐧





    Source link

  • Critical Security Flaws in eMagicOne Store Manager for WooCommerce

    Critical Security Flaws in eMagicOne Store Manager for WooCommerce


    The eMagicOne Store Manager for WooCommerce plugin for WordPress is used to simplify and improve store management by providing functionality not found in the standard WooCommerce admin interface.

    Two serious flaws, CVE-2025-5058 and CVE-2025-4603, were found in the eMagicOne Store Manager for WooCommerce WordPress plugin, each carrying a critical CVSS score above 9. An attacker can achieve remote code execution only in specific situations, such as default configurations where the default 1/1 credentials are still in place, or when the attacker obtains legitimate credentials.

    Affected Versions:

    • eMagicOne Store Manager for WooCommerce, all versions <= 1.2.5

    Vulnerability Details:

    1. CVE-2025-5058:

    The plugin’s remote management protocol endpoint (?connector=bridge), which handles file uploads, is vulnerable. The root cause is improper file type validation in the set_image task. The authentication mechanism relies on a session key system and the default credentials (login=1, password=1).

    Session Key Acquisition:

    Sending a POST request to the bridge endpoint with the hash and a task (such as get_version) yields a session key.

    Fig.1 Session Key Acquisition

     

    Arbitrary file upload:

    With a valid session key, an attacker can use the set_image task and abuse its parameters to upload an arbitrary file of their choosing.

    Fig.2 File Upload

     Real-world Consequences:

    This flaw gives attackers the opportunity to upload any file to the server of the compromised site, which could result in remote code execution. When the default credentials are left unchanged, unauthenticated attackers can exploit it, which makes the impact very serious. Successful exploitation could lead to a full server compromise, giving attackers access to private data, the ability to run malicious code, or a foothold for further compromise.

    2. CVE-2025-4603:

                 The delete_file() function of the eMagicOne Store Manager for WooCommerce plugin for WordPress lacks sufficient file path validation, making it susceptible to arbitrary file deletion. This enables unauthorized attackers to remove any file from the server, which can easily result in remote code execution if the correct file (like wp-config.php) is removed. Unauthenticated attackers can take advantage of this in default installations.

    The remote management protocol endpoint (?connector=bridge) of the plugin, which manages file deletion activities, is the source of the vulnerability. The session key system and default credentials (login=1, password=1) are used by the authentication mechanism. The default authentication hash, md5(‘1’. ‘1’), is computed as follows: c4ca4238a0b923820dcc509a6f75849b. An attacker can use the delete_file task to remove arbitrary files from the WordPress root or any accessible directory after gaining a session key.

     

    Session Key Acquisition:

    Sending a POST request to the bridge endpoint with the hash and a task (such as get_version) yields a session key.

    Fig.3 Session Key Acquisition

     

    Arbitrary file deletion:

                An attacker can use the delete_file task to delete a file if they have a valid session key.

     

    Fig.4 File Delete

    Real-world Consequences:

                If this vulnerability is successfully exploited, important server files like wp-config.php may be deleted, potentially disrupting the website and allowing remote code execution. The availability and integrity of the WordPress installation are seriously threatened by the ability to remove arbitrary files.

     

    Countermeasures for both CVEs:

    • Immediately change the authentication credentials from the default values.
    • Update the plugin to a version later than 1.2.5 as soon as a patch is available.
    • Implement strict file upload validation for CVE-2025-5058.
    • Review and restrict server-side file upload permissions for CVE-2025-5058.

     

    Conclusion:

    CVE-2025-5058 and CVE-2025-4603 demonstrate how default configurations can become a vector for unintended exposure. By leveraging improper file handling and insufficient file path validation, an attacker can compromise a site and achieve remote code execution. Unauthenticated attackers can take advantage of default credentials if they are left unmodified, which can cause significant harm.

     

     

     

     

     



    Source link