Author: post Bina

  • How to run PostgreSQL locally with Docker | Code4IT



    PostgreSQL is a famous relational database. In this article, we will learn how to run it locally using Docker.


    Just a second! 🫷
    If you are here, it means that you are a software developer.
    So, you know that storage, networking, and domain management have a cost.

    If you want to support this blog, please ensure that you have disabled the adblocker for this site.
    I configured Google AdSense to show as few ADS as possible – I don’t want to bother you with lots of ads, but I still need to add some to pay for the resources for my site.

    Thank you for your understanding.
    Davide

    PostgreSQL is a relational database characterized by being open source and by a growing community supporting the project.

    There are several ways to store a Postgres database online so that you can use it to store data for your live applications. But, for local development, you might want to spin up a Postgres database on your local machine.

    In this article, we will learn how to run PostgreSQL on a Docker container for local development.

    Pull Postgres Docker Image

    As you may know, Docker allows you to download images of almost everything you want in order to run them locally (or wherever you want) without installing too much stuff.

    The best way to check the available versions is to head to DockerHub and search for postgres.

    Postgres image on DockerHub

    Here you’ll find a description of the image, all the documentation related to the installation parameters, and more.

    If you have Docker already installed, just open a terminal and run

    docker pull postgres

    to download the latest image of PostgreSQL.

    Docker pull result

    Run the Docker Container

    Now that we have the image in our local environment, we can spin up a container and specify some parameters.

    Below, you can see the full command.

    docker run
        --name myPostgresDb
        -p 5455:5432
        -e POSTGRES_USER=postgresUser
        -e POSTGRES_PASSWORD=postgresPW
        -e POSTGRES_DB=postgresDB
        -d
        postgres
    

    Time to explain each and every part! 🔎

    docker run is the command used to create and run a new container based on an already downloaded image.

    --name myPostgresDb is the name we assign to the container that we are creating.

    -p 5455:5432 is the port mapping. Postgres natively exposes port 5432, and we have to map that port (which lives within Docker) to a local port. In this case, the local port 5455 maps to Docker’s port 5432.

    -e POSTGRES_USER=postgresUser, -e POSTGRES_PASSWORD=postgresPW, and -e POSTGRES_DB=postgresDB set some environment variables. Of course, we’re defining the username and password of the admin user, as well as the name of the database.

    -d indicates that the container runs in detached mode. This means that the container runs as a background process.

    postgres is the name of the image we are using to create the container.
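    If you prefer a declarative setup, the same container can also be described with Docker Compose. This is a sketch equivalent to the docker run command above (the service name db is arbitrary, and Compose itself is not covered in this article):

```yaml
services:
  db:
    image: postgres
    container_name: myPostgresDb
    ports:
      - "5455:5432" # local port 5455 -> container port 5432
    environment:
      POSTGRES_USER: postgresUser
      POSTGRES_PASSWORD: postgresPW
      POSTGRES_DB: postgresDB
```

    You can then start it with docker compose up -d and stop it with docker compose down.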

    As a result, you will see the newly created container on the CLI (running docker ps) or view it using some UI tool like Docker Desktop:

    Containers running on Docker Desktop

    If you forgot which environment variables you’ve defined for that container, you can retrieve them using Docker Desktop or by running docker exec myPostgresDb env, as shown below:

    List all environment variables associated to a Container

    Note: environment variables may change with newer image versions. Always refer to the official docs, specifically to the documentation related to the image version you are consuming.

    Now that we have Postgres up and running, we can work with it.

    You can work with the DB using the console, or, if you prefer, using a UI.

    I prefer the second approach (yes, I know, it’s not as cool as using the terminal, but it works), so I downloaded pgAdmin.

    There, you can connect to the server by using the environment variables you’ve defined when running docker run. Remember that the hostname is simply localhost, and the port is the local one you mapped (5455 in our example).

    Connect to Postgres by using pgAdmin
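    If you’d rather connect from code than from a UI, the parameters are the same ones passed to docker run. As a sketch, here is a small hypothetical helper (not tied to any particular driver) that assembles them into a standard Postgres connection URL:

```typescript
// Hypothetical helper: build a Postgres connection URL from the
// parameters we passed to `docker run` above.
function buildPostgresUrl(opts: {
  user: string;
  password: string;
  host: string;
  port: number;
  database: string;
}): string {
  const { user, password, host, port, database } = opts;
  // encodeURIComponent keeps special characters in credentials URL-safe
  return `postgres://${encodeURIComponent(user)}:${encodeURIComponent(
    password
  )}@${host}:${port}/${database}`;
}

// Note the HOST port (5455), not the container's internal port (5432)
const url = buildPostgresUrl({
  user: "postgresUser",
  password: "postgresPW",
  host: "localhost",
  port: 5455,
  database: "postgresDB",
});
console.log(url); // → postgres://postgresUser:postgresPW@localhost:5455/postgresDB
```

    Most Postgres clients and ORMs accept a URL in this shape, so the same string works wherever you point your tooling at this container.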

    And we’ve finished! 🥳 Now you can work with a local instance of Postgres, and shut it down and remove it when you don’t need it anymore.

    Additional resources

    I’ve already introduced Docker in another article, where I explained how to run MongoDB locally:

    🔗 First steps with Docker | Code4IT

    As usual, the best resource is the official website:

    🔗 PostgreSQL image | DockerHub

    Finally, a special mention to Francesco Ciulla, who taught me how to run Postgres with Docker while I taught him how to query it with C#. Yes, mutual help! 👏

    🔗 Francesco Ciulla’s blog

    Wrapping up

    In this article, we’ve seen how to download and install a PostgreSQL database on our local environment by using Docker.

    It’s just a matter of running a few commands and paying attention to the parameters passed as input.

    In a future article, we will learn how to perform CRUD operations on a PostgreSQL database using C#.

    For now, happy coding!

    🐧



    Source link

  • Between Strategy and Story: Thierry Chopain’s Creative Path



    Hello, I’m Thierry Chopain, a freelance interactive art director, co-founder of type8 studio, and a UX/UI design instructor at SUP de PUB (Lyon).

    Based near Saint-Étienne, I cultivate a balance between creative ambition and local grounding, between high-level design and a more human pace of life. I work remotely with a close-knit team spread between Lyon, Montpellier, and Paris, where we design custom projects that blend strategy, brand identity, and digital experience.

    My approach is deeply collaborative. I believe in lasting relationships built on trust, mutual listening, and the value of each perspective. Beyond aesthetics, my role is to bring clarity, meaning, and visual consistency to every project. Alongside my design practice, I teach at SUP de PUB, where I support students not only in mastering UX/UI concepts, but also in shaping their path as independent designers. Sharing what I’ve learned on the ground (the wins, the struggles, and the lessons) is a mission that matters deeply to me.

    My day-to-day life is a mix of slow living and agility. This hybrid rhythm allows me to stay true to my values while continuing to grow in a demanding and inspiring industry. I collaborate with a trusted network of creatives, including Jeremy Fagis, Marine Ferrari, Thomas Aufresne, Jordan Thiervoz, Alexandre Avram, Benoit Drigny, and Olivier Marmillon, to enrich every project with a shared, high-level creative vision.

    Featured Projects

    OVA INVESTMENT

    OVA is an investment fund built around a strong promise: to invest disruptively in the most valuable assets of our time. Type8 studio partnered with DEPARTMENT Maison de Création and Paul Barbin to design a fully reimagined website that lives up to its bold vision and distinctive positioning. Site structure, visual direction, tone of voice, and user experience were all redefined to reflect the strategic precision, elegance, and forward-thinking nature of the fund.

    The goal of this project: Position OVA as a benchmark combining financial performance, innovation, and rarity, through refined design, a seamless interface, and custom development, in order to strengthen its credibility with a discerning audience and strategic partners.

    Discover the website

    Hocus Pocus Studio

    Hocus Pocus is a Lyon-based animation studio specializing in the creation of CGI and visual effects for the television, cinema, and video game industries. The studio offers high-quality services with ever-higher technical and artistic standards. I worked on this project in collaboration with the Lyon-based studio AKARU, which specializes in tailored and meticulously crafted projects.

    Instagram post HP

    The goal of this project: Develop a coherent and professional digital brand image that highlights visual effects, while boosting visibility and online presence to attract and inspire trust in customers.

    Discover the website

    21 TSI

    21 TSI isn’t your typical sports holding company. Overseeing a portfolio of brands in the sports equipment space, the team set out to break from the mold of the standard corporate website. Instead, they envisioned a digital experience that would reflect their DNA, where innovation, design, and technology converge into a rich, immersive journey. We collaborated with DEPARTMENT Maison de Création and Paul Barbin to create something truly unique.

    The goal of this project: A website that embodies the DNA of 21TSI: innovation, technology, minimalism. An immersive and aesthetic experience, a clean design, and an approach that explores new ways of engaging with sport through AI.

    Discover the website

    Teria

    TERIA is a system that provides real-time centimeter-level positioning. It is an innovative tool for localization and georeferencing. We set out to create an intuitive and innovative experience that perfectly reflects Teria’s precision and forward-thinking vision. A major part of the work focused on a clean, minimalist design that allows for smooth navigation, making space to highlight the incredible work of Alexandre Avram, showcasing the products through Spline and 3D motion design.

    The goal of this project: Develop a clear and professional digital brand that reflects the brand’s identity and values, showcases product innovation, and boosts visibility to build trust and attract customers.

    Discover the website

    Creating visual identities for musical artists

    In a dense and ever-evolving music scene, standing out requires more than just great sound; it also takes a strong and cohesive visual presence. Whether it’s the cinematic intensity of Lecomte de Brégeot or the raw emotion of Élimane, my approach remains the same: to craft a visual universe that extends and enhances the essence of each artist, regardless of the medium.

    AFFICHE POST SQ
    Visual recap – Cover design for “Sequences” (Lecomte de Brégeot)
    Élimane – Weaver of Sounds, Sculptor of Emotions.

    A Defining Moment in My Career

    A turning point in my journey was the transition from working as an independent designer to founding a structured creative studio, type8 Studio. For more than ten years, I worked solo or within informal networks, juggling projects, constantly adapting, and learning how to shape my own freedom. That period gave me a lot—not only in terms of experience, but also in understanding what I truly wanted… and what I no longer wanted.

    Creating a studio was never a predefined goal. It came together progressively, through encounters, shared values, and the growing need to give form to something more collective and sustainable. Type8 was born from this shared intention: bringing together skills and creative ambitions while preserving individual freedom.

    This change was not a rupture but a natural evolution. I didn’t abandon my three identities—independent designer, studio art director, and educator. On the contrary, I integrated them into a more fluid and conscious ecosystem. Today, I can choose the most relevant role depending on the project: sometimes the studio takes the lead, sometimes it’s the freelance spirit that fits best, and at other times, it’s the educator in me who comes forward.

    This hybrid model, which some might see as unstable, is for me a tailor-made balance, deeply aligned with how I envision work: adaptive, intentional, and guided by respect for the project’s purpose and values.

    My Design Philosophy

    I see design as a tool serving meaning, people, and impact, beyond mere aesthetics. It’s about creating connection, clarity, and relevance between intention and users. This approach was shaped through my collaboration with my wife, an expert in digital accessibility, who raised my awareness of inclusion and of real user needs that are often overlooked.

    Today, I bring ethics, care, and respect into every project, focusing on accessible design and core human values: kindness, clarity, usefulness, and respecting user constraints. I prioritize human collaboration, tailoring each solution to the client’s context and values, even if it means going against trends. My design blends strategic thinking, creativity, and personal commitment to create enriching and socially valuable experiences.

    Tools and Techniques

    • Figma: To design, create, and gather ideas collaboratively.
    • Jitter: For crafting smooth and engaging motion designs.
    • Loom: To exchange feedback efficiently with clients.

    Tools evolve, but they’re just means to an end. What really matters is your ability to think and create. If you’re a good designer, you’ll know how to adapt, no matter the tool.

    My Inspirations

    My imagination was shaped somewhere between a game screen and a sketchbook. Among all my influences, narrative video games hold a special place. Titles like “The Last of Us” have had a deep impact on me, not just for their striking art direction, but for their ability to tell a story in an immersive, emotional, and sensory way. What inspires me in these universes isn’t just the gameplay, but how they create atmosphere, build meaningful moments, and evoke emotion without words. Motion design, sound, typography, lighting: all of it is composed like a language. And that’s exactly how I approach interactive design: orchestrating visual and experiential elements to convey a message, an intention, or a feeling.

    But my inspirations go beyond the digital world. They lie at the intersection of street art, furniture design, and sneakers. My personal environment also plays a crucial role in fueling my creativity. Living in a small village close to nature, surrounded by calm and serenity, gives me the mental space I need to create. It’s often in these quiet moments (a walk through the woods, a shared silence, the way light plays on a path) that my strongest ideas emerge.

    INSPIRATIONS

    I’m a creative who exists at the crossroads: between storytelling and interaction, between city and nature, between aesthetics and purpose. That’s where my work finds its balance.

    Final Thoughts

    For me, design has always been more than a craft: it’s a way to connect ideas, people, and emotions. Every project is an opportunity to tell a story, to create something that feels both meaningful and timeless. Stay curious, stay human, and don’t be afraid to push boundaries. Because the most memorable work is born when passion meets purpose.

    Contact

    Thanks for taking the time to read this article.

    If you’re a brand, studio, or institution looking for a strong and distinctive digital identity, I’d be happy to talk, whether it’s about a project, a potential collaboration, or just sharing a few ideas.





    Source link

  • Avoid mental mappings | Code4IT




    Every name must be meaningful and clear. If names are not obvious, other developers (or your future self) may misinterpret what you meant.

    Avoid using mental mapping to abbreviate names, unless the abbreviation is obvious or common.

    Names should not be based on mental mapping; even worse is mental mapping without context.

    Bad mental mappings

    Take this bad example:

    public void RenderWOSpace()
    

    What is a WOSpace? Without context, readers won’t understand its meaning. Ok, some people use WO as an abbreviation of without.

    So, a better name is, of course:

    public void RenderWithoutSpace()
    

    Acceptable mappings

    Some abbreviations are quite obvious and are totally fine to be used.

    For instance, standard abbreviations, like km for kilometer.

    public int DistanceInKm()
    

    or variables used, for instance, in a loop:

    for (int i = 0; i < 100; i++) { }
    

    or in lambdas:

    int[] collection = new int[] { 2, 3, 5, 8 };
    collection.Where(c => c < 5);
    

    It all depends on the scope: the narrower the scope, the more meaningless (don’t get me wrong!) the variable name can be.

    An edge case

    Sometimes, a common (almost obvious) abbreviation can have multiple meanings. What does DB mean? Database? Decibel? It all depends on the context!

    So, a _dbConnection obviously refers to the database. But defaultDb: is it the default decibel value or the default database?

    This article first appeared on Code4IT

    Conclusion

    As usual, clarity is the key to good code: a name, be it for a class, a module, or a variable, should be explicit and obvious to everyone.

    So, always use meaningful names!

    Happy coding!

    🐧



    Source link

  • From Zero to MCP: Simplifying AI Integrations with xmcp




    The AI ecosystem is evolving rapidly, and Anthropic’s release of the Model Context Protocol on November 25th, 2024 has certainly shaped how LLMs connect with data. No more building custom integrations for every data source: MCP provides one protocol to connect them all. But here’s the challenge: building MCP servers from scratch can be complex.

    TL;DR: What is MCP?

    Think of MCP like a USB-C port for AI applications. Just as USB-C provides a standardized way to connect devices to various peripherals, MCP provides a standardized way to connect AI models to different data sources, tools, and services. It’s an open protocol that enables AI applications to safely and efficiently access external context – whether that’s your company’s database, file systems, APIs, or custom business logic.

    Source: https://modelcontextprotocol.io/docs/getting-started/intro

    In practice, this means you can hook LLMs into the things you already work with every day. To name a few examples, you could query databases to visualize trends, pull and resolve issues from GitHub, fetch or update content in a CMS, and so on. Beyond development, the same applies to broader workflows: customer support agents can look up and resolve tickets, enterprise search can fetch and read content scattered across wikis and docs, operations can monitor infrastructure or control devices.

    But there’s more to it, and that’s when you really unlock the power of MCP. It’s not just about single tasks, but about rethinking entire workflows. Suddenly, we’re reshaping the way we interact with products and even with our own computers: instead of adapting ourselves to the limitations of software, we can shape the experience around our own needs.

    That’s where xmcp comes in: a TypeScript framework designed with DX in mind, for developers who want to build and ship MCP servers without the usual friction. It removes the complexity and gets you up and running in a matter of minutes.

    A little backstory

    xmcp was born out of necessity at Basement Studio, where we needed to build internal tools for our development processes. As we dove deeper into the protocol, we quickly discovered how fragmented the tooling landscape was and how much time we were spending on setup, configuration, and deployment rather than actually building the tools our team needed.

    That’s when we decided to consolidate everything we’d learned into a framework. The philosophy was simple: developers shouldn’t have to become experts just to build AI tools. The focus should be on creating valuable functionality, not wrestling with boilerplate code and all sorts of complexities.

    Key features & capabilities

    xmcp shines in its simplicity. With just one command, you can scaffold a complete MCP server:

    npx create-xmcp-app@latest

    The framework automatically discovers and registers tools. No extra setup needed.

    All you need is tools/

    xmcp abstracts the original tool syntax from the TypeScript SDK and follows a separation-of-concerns principle, with a simple three-export structure:

    • Implementation: The actual tool logic.
    • Schema: Define input parameters using Zod schemas with automatic validation.
    • Metadata: Specify tool identity and behavior hints for AI models.
    // src/tools/greet.ts
    import { z } from "zod";
    import { type InferSchema } from "xmcp";
    
    // Define the schema for tool parameters
    export const schema = {
      name: z.string().describe("The name of the user to greet"),
    };
    
    // Define tool metadata
    export const metadata = {
      name: "greet",
      description: "Greet the user",
      annotations: {
        title: "Greet the user",
        readOnlyHint: true,
        destructiveHint: false,
        idempotentHint: true,
      },
    };
    
    // Tool implementation
    export default async function greet({ name }: InferSchema<typeof schema>) {
      return `Hello, ${name}!`;
    }

    Transport Options

    • HTTP: Perfect for server deployments, enabling tools that fetch data from databases or external APIs
    • STDIO: Ideal for local operations, allowing LLMs to perform tasks directly on your machine

    You can tweak the configuration to your needs by modifying the xmcp.config.ts file in the root directory. Among the options you can find the transport type, CORS setup, experimental features, tools directory, and even the webpack config. Learn more about this file here.

    const config: XmcpConfig = {
      http: {
        port: 3000,
        // The endpoint where the MCP server will be available
        endpoint: "/my-custom-endpoint",
        bodySizeLimit: 10 * 1024 * 1024,
        cors: {
          origin: "*",
          methods: ["GET", "POST"],
          allowedHeaders: ["Content-Type"],
          credentials: true,
          exposedHeaders: ["Content-Type"],
          maxAge: 600,
        },
      },
    
      webpack: (config) => {
        // Add raw loader for images to get them as base64
        config.module?.rules?.push({
          test: /\.(png|jpe?g|gif|svg|webp)$/i,
          type: "asset/inline",
        });
    
        return config;
      },
    };
    

    Built-in Middleware & Authentication

    For HTTP servers, xmcp provides native solutions for adding authentication (JWT, API Key, OAuth). You can also extend your application with custom middleware, which can even be an array of middlewares.

    import { type Middleware } from 'xmcp';
    
    const middleware: Middleware = async (req, res, next) => {
      // Custom processing
      next();
    };
    
    export default middleware;
    

    Integrations

    While you can bootstrap an application from scratch, xmcp can also work on top of your existing Next.js or Express project. To get started, run the following command:

    npx init-xmcp@latest

    on your initialized application, and you are good to go! You’ll find a tools directory with the same discovery capabilities. If you’re using Next.js, the handler is set up automatically. If you’re using Express, you’ll have to configure it manually.

    From zero to prod

    Let’s see this in action by building and deploying an MCP server. We’ll create a Linear integration that fetches issues from your backlog and calculates completion rates, perfect for generating project analytics and visualizations.

    For this walkthrough, we’ll use Cursor as our MCP client to interact with the server.

    Setting up the project

    The fastest way to get started is by deploying the xmcp template directly from Vercel. This automatically initializes the project and creates an HTTP server deployment in one click.

    Alternative setup: If you prefer a different platform or transport method, scaffold locally with npx create-xmcp-app@latest

    Once deployed, you’ll see this project structure:

    Building our main tool

    Our tool will accept three parameters: team name, start date, and end date. It’ll then calculate the completion rate for issues within that timeframe.

    Head to the tools directory, create a file called get-completion-rate.ts and export the three main elements that construct the syntax:

    import { z } from "zod";
    import { type InferSchema, type ToolMetadata } from "xmcp";
    
    export const schema = {
      team: z
        .string()
        .min(1, "Team name is required")
        .describe("The team to get completion rate for"),
      startDate: z
        .string()
        .min(1, "Start date is required")
        .describe("Start date for the analysis period (YYYY-MM-DD)"),
      endDate: z
        .string()
        .min(1, "End date is required")
        .describe("End date for the analysis period (YYYY-MM-DD)"),
    };
    
    export const metadata: ToolMetadata = {
      name: "get-completion-rate",
      description: "Get completion rate analytics for a specific team over a date range",
    };
    
    export default async function getCompletionRate({
      team,
      startDate,
      endDate,
    }: InferSchema<typeof schema>) {
    // tool implementation we'll cover in the next step
    };

    Our basic structure is set. We now have to add the client functionality to actually communicate with Linear and get the data we need.

    We’ll be using Linear’s personal API key, so we’ll need to instantiate the client using @linear/sdk. We’ll focus on the tool implementation now:

    export default async function getCompletionRate({
      team,
      startDate,
      endDate,
    }: InferSchema<typeof schema>) {
    
        const linear = new LinearClient({
            apiKey: // our api key
        });
    
    };

    Instead of hardcoding API keys, we’ll use the native headers utilities to accept the Linear API key securely from each request:

    export default async function getCompletionRate({
      team,
      startDate,
      endDate,
    }: InferSchema<typeof schema>) {
    
        // API Key from headers
        const apiKey = headers()["linear-api-key"] as string;
    
        if (!apiKey) {
            return "No linear-api-key header provided";
        }
    
        const linear = new LinearClient({
            apiKey: apiKey,
        });
        
        // rest of the implementation
    }

    This approach allows multiple users to connect with their own credentials. Your MCP configuration will look like:

    "xmcp-local": {
      "url": "http://127.0.0.1:3001/mcp",
      "headers": {
        "linear-api-key": "your api key"
      }
    }

    Moving forward with the implementation, this is what our complete tool file will look like:

    import { z } from "zod";
    import { type InferSchema, type ToolMetadata } from "xmcp";
    import { headers } from "xmcp/dist/runtime/headers";
    import { LinearClient } from "@linear/sdk";
    
    export const schema = {
      team: z
        .string()
        .min(1, "Team name is required")
        .describe("The team to get completion rate for"),
      startDate: z
        .string()
        .min(1, "Start date is required")
        .describe("Start date for the analysis period (YYYY-MM-DD)"),
      endDate: z
        .string()
        .min(1, "End date is required")
        .describe("End date for the analysis period (YYYY-MM-DD)"),
    };
    
    export const metadata: ToolMetadata = {
      name: "get-completion-rate",
      description: "Get completion rate analytics for a specific team over a date range",
    };
    
    export default async function getCompletionRate({
      team,
      startDate,
      endDate,
    }: InferSchema<typeof schema>) {
    
        // API Key from headers
        const apiKey = headers()["linear-api-key"] as string;
    
        if (!apiKey) {
            return "No linear-api-key header provided";
        }
    
        const linear = new LinearClient({
            apiKey: apiKey,
        });
    
        // Get the team by name
        const teams = await linear.teams();
        const targetTeam = teams.nodes.find(t => t.name.toLowerCase().includes(team.toLowerCase()));
    
        if (!targetTeam) {
            return `Team "${team}" not found`
        }
    
        // Get issues created in the date range for the team
        const createdIssues = await linear.issues({
            filter: {
                team: { id: { eq: targetTeam.id } },
                createdAt: {
                    gte: startDate,
                    lte: endDate,
                },
            },
        });
    
        // Get issues completed in the date range for the team (for reporting purposes)
        const completedIssues = await linear.issues({
            filter: {
                team: { id: { eq: targetTeam.id } },
                completedAt: {
                    gte: startDate,
                    lte: endDate,
                },
            },
        });
    
        // Calculate completion rate: percentage of created issues that were completed
        const totalCreated = createdIssues.nodes.length;
        const createdAndCompleted = createdIssues.nodes.filter(issue => 
            issue.completedAt !== undefined && 
            issue.completedAt >= new Date(startDate) && 
            issue.completedAt <= new Date(endDate)
        ).length;
        const completionRate = totalCreated > 0 ? (createdAndCompleted / totalCreated * 100).toFixed(1) : "0.0";
    
        // Structure data for the response
        const analytics = {
            team: targetTeam.name,
            period: `${startDate} to ${endDate}`,
            totalCreated,
            totalCompletedFromCreated: createdAndCompleted,
            completionRate: `${completionRate}%`,
            createdIssues: createdIssues.nodes.map(issue => ({
                title: issue.title,
                createdAt: issue.createdAt,
                priority: issue.priority,
            completed: issue.completedAt !== undefined,
                completedAt: issue.completedAt,
            })),
            allCompletedInPeriod: completedIssues.nodes.map(issue => ({
                title: issue.title,
                completedAt: issue.completedAt,
                priority: issue.priority,
            })),
        };
    
        return JSON.stringify(analytics, null, 2);
    }
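    The completion-rate arithmetic is the heart of the tool, so it’s worth seeing in isolation. This hypothetical helper (not part of the xmcp or Linear APIs) mirrors the calculation in the tool above:

```typescript
// Percentage of created issues that were also completed in the period,
// mirroring the calculation inside getCompletionRate. Guards against
// dividing by zero when no issues were created.
function completionRate(
  totalCreated: number,
  createdAndCompleted: number
): string {
  return totalCreated > 0
    ? ((createdAndCompleted / totalCreated) * 100).toFixed(1)
    : "0.0";
}

console.log(completionRate(8, 5)); // → "62.5"
console.log(completionRate(0, 0)); // → "0.0"
```

    Returning the value as a pre-formatted string keeps the tool’s JSON response easy for the model to read verbatim, at the cost of making further numeric processing on the client side slightly less convenient.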

    Let’s test it out!

    Start your development server by running pnpm dev (or the equivalent for the package manager you’ve set up).

    The server will automatically restart whenever you make changes to your tools, giving you instant feedback during development. Then, head to Cursor Settings → Tools & Integrations and toggle the server on. You should see it’s discovering one tool file, which is our only file in the directory.

    Let’s now use the tool by asking Cursor: “Get the completion rate of the xmcp project between August 1st 2025 and August 20th 2025”.

    Let’s try using this tool in a more comprehensive way: we want to understand the project’s completion rate across three separate months (June, July, and August) and visualize the trend. So we will ask Cursor to retrieve the information for these months and generate a trend chart and a monthly issue overview:

    Once we’re happy with the implementation, we’ll push our changes and deploy a new version of our server.

    Pro tip: use Vercel’s branch deployments to test new tools safely before merging to production.

    Next steps

    Nice! We’ve built the foundation, but there’s so much more you can do with it.

    • Expand your MCP toolkit with a complete workflow automation. Take this MCP server as a starting point and add tools that generate weekly sprint reports and automatically save them to Notion, or build integrations that connect multiple project management platforms.
• Strengthen the application by adding authentication. You can use the OAuth native provider to add Linear’s authentication instead of using API Keys, or use the Better Auth integration to handle custom authentication paths that fit your organization’s security requirements.
    • For production workloads, you may need to add custom middlewares, like rate limiting, request logging, and error tracking. This can be easily set up by creating a middleware.ts file in the source directory. You can learn more about middlewares here.

    Final thoughts

    The best part of what you’ve built here is that xmcp handled all the protocol complexity for you. You didn’t have to learn the intricacies of the Model Context Protocol specification or figure out transport layers: you just focused on solving your actual business problem. That’s exactly how it should be.

    Looking ahead, xmcp’s roadmap includes full MCP specification compliance, bringing support for resources, prompts and elicitation. More importantly, the framework is evolving to bridge the gap between prototype and production, with enterprise-grade features for authentication, monitoring, and scalability.

    If you wish to learn more about the framework, visit xmcp.dev, read the documentation and check out the examples!



    Source link

  • CRUD operations on PostgreSQL using C# and Npgsql | Code4IT

    CRUD operations on PostgreSQL using C# and Npgsql | Code4IT


    Once we have a Postgres instance running, we can perform operations on it. We will use Npgsql to query a Postgres instance with C#

    Table of Contents

    Just a second! 🫷
    If you are here, it means that you are a software developer.
    So, you know that storage, networking, and domain management have a cost .

    If you want to support this blog, please ensure that you have disabled the adblocker for this site.
    I configured Google AdSense to show as few ADS as possible – I don’t want to bother you with lots of ads, but I still need to add some to pay for the resources for my site.

    Thank you for your understanding.
    Davide

    PostgreSQL is one of the most famous relational databases. It has tons of features, and it is open source.

    In a previous article, we’ve seen how to run an instance of Postgres by using Docker.

    In this article, we will learn how to perform CRUD operations in C# by using Npgsql.

    Introducing the project

    To query a Postgres database, I’ve created a simple .NET API application with CRUD operations.

    We will operate on a single table that stores info for my board game collection. Of course, we will Create, Read, Update and Delete items from the DB (otherwise it would not be an article about CRUD operations 😅).

    Before we start writing code, we need to install Npgsql, a NuGet package that acts as a data provider for PostgreSQL.

    NpgSql Nuget Package

    Open the connection

    Once we have created the application, we can instantiate and open a connection against our database.

    private NpgsqlConnection connection;
    
    public NpgsqlBoardGameRepository()
    {
        connection = new NpgsqlConnection(CONNECTION_STRING);
        connection.Open();
    }
    

    We simply create an NpgsqlConnection object and keep a reference to it. We will use that reference to perform queries against our DB.

    Connection string

    The only parameter we can pass as input to the NpgsqlConnection constructor is the connection string.

    You must compose it by specifying the host address, the port, the database name we are connecting to, and the credentials of the user that is querying the DB.

    private const string CONNECTION_STRING = "Host=localhost:5455;" +
        "Username=postgresUser;" +
        "Password=postgresPW;" +
        "Database=postgresDB";
    

    If you instantiate Postgres using Docker following the steps I described in a previous article, most of the connection string configurations we use here match the Environment variables we’ve defined before.

    CRUD operations

    Now that everything is in place, it’s time to operate on our DB!

    We are working on a table, Games, whose name is stored in a constant:

    private const string TABLE_NAME = "Games";
    

    The Games table consists of several fields:

    Field name      | Field type
    id              | INTEGER PK
    Name            | VARCHAR NOT NULL
    MinPlayers      | SMALLINT NOT NULL
    MaxPlayers      | SMALLINT
    AverageDuration | SMALLINT

    This table is mapped to the BoardGame class:

    public class BoardGame
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public int MinPlayers { get; set; }
        public int MaxPlayers { get; set; }
        public int AverageDuration { get; set; }
    }
    

    To double-check the results, you can use a UI tool to access the Database. For instance, if you use pgAdmin, you can find the list of databases running on a host.

    Database listing on pgAdmin

    And, if you want to see the content of a particular table, you can select it under Schemas>public>Tables>tablename, and then select View>AllRows

    How to view table rows on pgAdmin

    Create

    First things first, we have to insert some data in our DB.

    public async Task Add(BoardGame game)
    {
        string commandText = $"INSERT INTO {TABLE_NAME} (id, Name, MinPlayers, MaxPlayers, AverageDuration) VALUES (@id, @name, @minPl, @maxPl, @avgDur)";
        await using (var cmd = new NpgsqlCommand(commandText, connection))
        {
            cmd.Parameters.AddWithValue("id", game.Id);
            cmd.Parameters.AddWithValue("name", game.Name);
            cmd.Parameters.AddWithValue("minPl", game.MinPlayers);
            cmd.Parameters.AddWithValue("maxPl", game.MaxPlayers);
            cmd.Parameters.AddWithValue("avgDur", game.AverageDuration);
    
            await cmd.ExecuteNonQueryAsync();
        }
    }
    

    The commandText string contains the full command to be issued. In this case, it’s a simple INSERT statement.

    We use the commandText string to create an NpgsqlCommand object, specifying the query and the connection on which to run it. Note that the command must be disposed after use: wrap it in a using block.

    Then, we add the parameters to the query. AddWithValue accepts two arguments: the name of the placeholder, matching the one defined in the query but without the @ symbol (in the query we use @minPl, while as a parameter we use minPl), and the value to assign to it.

    Never, ever, build the query by concatenating the input parameters as a string: doing so exposes you to SQL Injection attacks.

    Finally, we can execute the query asynchronously with ExecuteNonQueryAsync.

    Read

    Now that we have some games stored in our table, we can retrieve those items:

    public async Task<BoardGame> Get(int id)
    {
        string commandText = $"SELECT * FROM {TABLE_NAME} WHERE ID = @id";
        await using (NpgsqlCommand cmd = new NpgsqlCommand(commandText, connection))
        {
            cmd.Parameters.AddWithValue("id", id);
    
            await using (NpgsqlDataReader reader = await cmd.ExecuteReaderAsync())
                while (await reader.ReadAsync())
                {
                    BoardGame game = ReadBoardGame(reader);
                    return game;
                }
        }
        return null;
    }
    

    Again, we define the query as a text, use it to create a NpgsqlCommand, specify the parameters’ values, and then we execute the query.

    The ExecuteReaderAsync method returns an NpgsqlDataReader object that we can use to fetch the data. We advance the reader to the next row with reader.ReadAsync(), and then we convert the current row with ReadBoardGame(reader) in this way:

    private static BoardGame ReadBoardGame(NpgsqlDataReader reader)
    {
        int? id = reader["id"] as int?;
        string name = reader["name"] as string;
        short? minPlayers = reader["minplayers"] as Int16?;
        short? maxPlayers = reader["maxplayers"] as Int16?;
        short? averageDuration = reader["averageduration"] as Int16?;
    
        BoardGame game = new BoardGame
        {
            Id = id.Value,
            Name = name,
            MinPlayers = minPlayers.Value,
            MaxPlayers = maxPlayers.Value,
            AverageDuration = averageDuration.Value
        };
        return game;
    }
    

    This method reads the value of each column (for instance, reader["averageduration"]), converts it to the proper data type, and then builds and returns a BoardGame object.

    Update

    Updating items is similar to inserting a new item.

    public async Task Update(int id, BoardGame game)
    {
        var commandText = $@"UPDATE {TABLE_NAME}
                    SET Name = @name, MinPlayers = @minPl, MaxPlayers = @maxPl, AverageDuration = @avgDur
                    WHERE id = @id";
    
        await using (var cmd = new NpgsqlCommand(commandText, connection))
        {
            cmd.Parameters.AddWithValue("id", game.Id);
            cmd.Parameters.AddWithValue("name", game.Name);
            cmd.Parameters.AddWithValue("minPl", game.MinPlayers);
            cmd.Parameters.AddWithValue("maxPl", game.MaxPlayers);
            cmd.Parameters.AddWithValue("avgDur", game.AverageDuration);
    
            await cmd.ExecuteNonQueryAsync();
        }
    }
    

    Of course, the query is different, but the general structure is the same: create the query, create the Command, add parameters, and execute the query with ExecuteNonQueryAsync.

    Delete

    Just for completeness, here’s how to delete an item by specifying its id.

    public async Task Delete(int id)
    {
        string commandText = $"DELETE FROM {TABLE_NAME} WHERE ID=(@p)";
        await using (var cmd = new NpgsqlCommand(commandText, connection))
        {
            cmd.Parameters.AddWithValue("p", id);
            await cmd.ExecuteNonQueryAsync();
        }
    }
    

    Always the same story, so I have nothing to add.

    ExecuteNonQueryAsync vs ExecuteReaderAsync

    As you’ve seen, some operations use ExecuteNonQueryAsync, while some others use ExecuteReaderAsync. Why?

    ExecuteNonQuery and ExecuteNonQueryAsync execute commands against a connection. Those methods do not return data from the database, but only the number of rows affected. They are used to perform INSERT, UPDATE, and DELETE operations.

    On the contrary, ExecuteReader and ExecuteReaderAsync are used to perform queries on the database and return a DbDataReader object, which is a read-only stream of rows retrieved from the data source. They are used in conjunction with SELECT queries.

    Bonus 1: Create the table if not already existing

    Of course, you can also create tables programmatically.

    public async Task CreateTableIfNotExists()
    {
        var sql = $"CREATE TABLE if not exists {TABLE_NAME}" +
            $"(" +
            $"id serial PRIMARY KEY, " +
            $"Name VARCHAR (200) NOT NULL, " +
            $"MinPlayers SMALLINT NOT NULL, " +
            $"MaxPlayers SMALLINT, " +
            $"AverageDuration SMALLINT" +
            $")";
    
        using var cmd = new NpgsqlCommand(sql, connection);
    
        await cmd.ExecuteNonQueryAsync();
    }
    

    Again, nothing fancy: create the command text, create a NpgsqlCommand object, and execute the command.

    Bonus 2: Check the database version

    To check if the database is up and running, and your credentials are correct (those set in the connection string), you might want to retrieve the DB version.

    You can do it in two ways.

    With the following method, you query for the version directly on the database.

    public async Task<string> GetVersion()
    {
        var sql = "SELECT version()";
    
        using var cmd = new NpgsqlCommand(sql, connection);
    
        var versionFromQuery = (await cmd.ExecuteScalarAsync()).ToString();
    
        return versionFromQuery;
    }
    

    This method returns lots of info that directly depend on the database instance. In my case, I see PostgreSQL 14.1 (Debian 14.1-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit.

    The other way is to use PostgreSqlVersion.

    public Version GetVersion()
    {
        Version versionFromConnection = connection.PostgreSqlVersion;
    
        return versionFromConnection;
    }
    

    PostgreSqlVersion returns a Version object containing some fields like Major, Minor, Revision, and more.

    PostgresVersion from connection info

    You can call the ToString method of that object to get a value like “14.1”.

    Additional readings

    In a previous article, we’ve seen how to download and run a PostgreSQL instance on your local machine using Docker.

    🔗 How to run PostgreSQL locally with Docker | Code4IT

    To query PostgreSQL with C#, we used the Npgsql NuGet package. So, you might want to read the official documentation.

    🔗 Npgsql documentation | Npgsql

    In particular, an important part to consider is the mapping between C# and SQL data types:

    🔗 PostgreSQL to C# type mapping | Npgsql

    When talking about parameters to be passed to the query, I mentioned the SQL Injection vulnerability. Here you can read more about it.

    🔗 SQL Injection | Imperva

    Finally, here you can find the repository used for this article.

    🔗 Repository used for this article | GitHub

    This article first appeared on Code4IT 🐧

    Wrapping up

    In this article, we’ve learned how to perform simple operations on a PostgreSQL database to retrieve and update the content of a table.

    This is the most basic way to perform those operations. You explicitly write the queries and issue them without much stuff in between.

    In future articles, we will see some other ways to perform the same operations in C#, but using other tools and packages. Maybe Entity Framework? Maybe Dapper? Stay tuned!

    Happy coding!

    🐧



    Source link

  • Interactive Video Projection Mapping with Three.js

    Interactive Video Projection Mapping with Three.js



    Projection mapping has long fascinated audiences in the physical world, turning buildings, sculptures, and entire cityscapes into moving canvases. What if you could recreate that same sense of spectacle directly inside the browser?

    With WebGL and Three.js, you can project video not onto walls or monuments but onto dynamic 3D grids made of hundreds of cubes, each one carrying a fragment of the video like a digital mosaic. Many will surely recognize this effect from Rogier de Boevé’s portfolio, which gained wide attention for showcasing exactly this approach.

    In this tutorial we’ll explore how to simulate video projection mapping in a purely digital environment, from building a grid of cubes, to UV-mapping video textures, to applying masks that determine which cubes appear. The demo for this tutorial is inspired by Rogier’s work, which he breaks down beautifully in his case study for anyone interested in the concept behind it.

    The result is a mesmerizing effect that feels both sculptural and cinematic, perfect for interactive installations, portfolio showcases, or simply as a playground to push your creative coding skills further.

    What is Video Projection Mapping in the Real World?

    When describing video projection mapping, it’s easiest to think of huge buildings lit up with animations during festivals, or art installations where a moving image is “painted” onto sculptures.

    Here are some examples of real-world video projections:

    Bringing it to our 3D World

    In 3D graphics, we can do something similar: instead of shining a physical projector, we map a video texture onto objects in a scene.

    Therefore, let’s build a grid of cubes using a mask image that will determine which cubes are visible. A video texture is UV-mapped so each cube shows the exact video fragment that corresponds to its grid cell—together they reconstruct the video, but only where the mask is dark.

    Prerequisites:

    • Three.js r155+
    • A small, high-contrast mask image (e.g. a heart silhouette).
    • A video URL with CORS enabled.

    Our Boilerplate and Starting Point

    Here is a basic starter setup, i.e. the minimum amount of code and structure you need to get a scene rendering in the browser, without worrying about the specific creative content yet.

    export default class Models {
    	constructor(gl_app) {
            ...
            this.createGrid()
        }
    
        createGrid() {
            const geometry = new THREE.BoxGeometry( 1, 1, 1 );
            this.material = new THREE.MeshStandardMaterial( { color: 0xff0000 } );
            const cube = new THREE.Mesh( geometry, this.material );
            this.group.add( cube );
            this.is_ready = true
        }
        
        ...
    }

    The result is a spinning red cube:

    Creating the Grid

    We build a centered grid of cubes (10×10 by default). Every cube has the same size and material, and the grid spacing and overall scale are configurable.

    export default class Models {
    	constructor(gl_app) {
            ...
    
    		this.gridSize = 10;
            this.spacing = 0.75;
            this.createGrid()
        }
    
        createGrid() {
            this.material = new THREE.MeshStandardMaterial( { color: 0xff0000 } );
            
            // Grid parameters
            for (let x = 0; x < this.gridSize; x++) {
                for (let y = 0; y < this.gridSize; y++) {
                    const geometry = new THREE.BoxGeometry(0.5, 0.5, 0.5);
                    const mesh = new THREE.Mesh(geometry, this.material);
                    mesh.position.x = (x - (this.gridSize - 1) / 2) * this.spacing;
                    mesh.position.y = (y - (this.gridSize - 1) / 2) * this.spacing;
                    mesh.position.z = 0;
    
                    this.group.add(mesh);
                }
            }
            this.group.scale.setScalar(0.5)
            ...
        }   
        ...
    }

    Key parameters

    • spacing: world-space distance between cube centers. Increase for larger gaps, decrease to pack tighter.
    • gridSize: how many cells per side. A 10×10 grid ⇒ 100 cubes.
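
    The centering formula used for mesh.position can be checked in isolation. Here is a minimal sketch (cellPosition is a throwaway name, not part of the class):

```javascript
// Cell index -> world position: (x - (gridSize - 1) / 2) * spacing
// shifts indices 0..gridSize-1 so the whole grid is centered on the origin.
function cellPosition(x, y, gridSize, spacing) {
  return {
    x: (x - (gridSize - 1) / 2) * spacing,
    y: (y - (gridSize - 1) / 2) * spacing,
  };
}

// For a 10x10 grid with spacing 0.75, the first and last cells mirror each other:
console.log(cellPosition(0, 0, 10, 0.75)); // { x: -3.375, y: -3.375 }
console.log(cellPosition(9, 9, 10, 0.75)); // { x: 3.375, y: 3.375 }
```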

    Creating the Video Texture

    This function creates a video texture in Three.js so you can use a playing HTML <video> as the texture on 3D objects.

    • Creates an HTML <video> element entirely in JavaScript (not added to the DOM).
    • We’ll feed this element to Three.js to use its frames as a texture.
    • loop = true → restarts automatically when it reaches the end.
    • muted = true → most browsers block autoplay for unmuted videos, so muting ensures it plays without user interaction.
    • .play() → starts playback.
    • ⚠️ Some browsers still need a click/touch before autoplay works — you can add a fallback listener if needed.
    export default class Models {
    	constructor(gl_app) {
            ...
            this.createGrid()
        }
    
        createVideoTexture() {
    		this.video = document.createElement('video')
    		this.video.src = 'https://commondatastorage.googleapis.com/gtv-videos-bucket/sample/BigBuckBunny.mp4'
    		this.video.crossOrigin = 'anonymous'
    		this.video.loop = true
    		this.video.muted = true
    		this.video.play()
    
    		// Create video texture
    		this.videoTexture = new THREE.VideoTexture(this.video)
    		this.videoTexture.minFilter = THREE.LinearFilter
    		this.videoTexture.magFilter = THREE.LinearFilter
    		this.videoTexture.colorSpace = THREE.SRGBColorSpace
    		this.videoTexture.wrapS = THREE.ClampToEdgeWrap
    		this.videoTexture.wrapT = THREE.ClampToEdgeWrap
    
    		// Create material with video texture
    		this.material = new THREE.MeshBasicMaterial({ 
    			map: this.videoTexture,
    			side: THREE.FrontSide
    		})
        }
    
        createGrid() {
            this.createVideoTexture()
            ...
        }
        ...
    }

    This is the video we are using: Big Buck Bunny (without CORS)

    All the meshes have the same texture applied:

    Attributing Projection to the Grid

    We will be turning the video into a texture atlas split into a gridSize × gridSize lattice.
    Each cube in the grid gets its own little UV window (sub-rectangle) of the video so, together, all cubes reconstruct the full frame.

    Why per-cube geometry? Because the UVs must be unique per cube, we create a new BoxGeometry for each one. If all cubes shared one geometry, they’d also share the same UVs and show the same part of the video.

    export default class Models {
    	constructor(gl_app) {
            ...
            this.createGrid()
        }
    
        createGrid() {
            ...
    		// Grid parameters
            for (let x = 0; x < this.gridSize; x++) {
                for (let y = 0; y < this.gridSize; y++) {
                    
                    const geometry = new THREE.BoxGeometry(0.5, 0.5, 0.5);
                    
    				// Create individual geometry for each box to have unique UV mapping
    				// Calculate UV coordinates for this specific box
    				const uvX = x / this.gridSize
    				const uvY = y / this.gridSize // Remove the flip to match correct orientation
    				const uvWidth = 1 / this.gridSize
    				const uvHeight = 1 / this.gridSize
    				
    				// Get the UV attribute
    				const uvAttribute = geometry.attributes.uv
    				const uvArray = uvAttribute.array
    				
    				// Map each face of the box to show the same portion of video
    				// We'll focus on the front face (face 4) for the main projection
    				for (let i = 0; i < uvArray.length; i += 2) {
    					// Map all faces to the same UV region for consistency
    					uvArray[i] = uvX + (uvArray[i] * uvWidth)     // U coordinate
    					uvArray[i + 1] = uvY + (uvArray[i + 1] * uvHeight) // V coordinate
    				}
    				
    				// Mark the attribute as needing update
    				uvAttribute.needsUpdate = true
                    ...
                }
            }
            ...
        }
        ...
    }

    The UV window for cell (x, y)
    For a grid of size N = gridSize:

    • UV origin of this cell:
      – uvX = x / N
      – uvY = y / N
    • UV size of each cell:
      – uvWidth = 1 / N
      – uvHeight = 1 / N

    Result: every face of the box now samples the same sub-region of the video. The comments mention focusing on the front face, but this approach maps all faces to that region for consistency.
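
    The same arithmetic can be sketched as two small pure functions (uvWindow and remapUV are illustrative names, not part of the class; a 4×4 grid is used here so the numbers stay exact):

```javascript
// The UV window for grid cell (x, y) in an N x N lattice.
function uvWindow(x, y, gridSize) {
  return {
    uvX: x / gridSize,
    uvY: y / gridSize,
    uvWidth: 1 / gridSize,
    uvHeight: 1 / gridSize,
  };
}

// Remap one UV pair (u, v in [0, 1]) into that window,
// exactly as the loop over uvArray does in the snippet above.
function remapUV(u, v, win) {
  return [win.uvX + u * win.uvWidth, win.uvY + v * win.uvHeight];
}

const win = uvWindow(1, 2, 4);
console.log(remapUV(0, 0, win)); // [0.25, 0.5]  -> bottom-left corner of the cell
console.log(remapUV(1, 1, win)); // [0.5, 0.75]  -> top-right corner of the cell
```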

    Creating Mask

    We need to create a canvas using a mask that determines which cubes are visible in the grid.

    • Black (dark) pixels → cube is created.
    • White (light) pixels → cube is skipped.

    To do this, we need to:

    1. Load the mask image.
    2. Scale it down to match our grid size.
    3. Read its pixel color data.
    4. Pass that data into the grid-building step.
    export default class Models {
    	constructor(gl_app) {
            ...
    		this.createMask()
        }
    
    	createMask() {
            // Create a canvas to read mask pixel data
            const canvas = document.createElement('canvas')
            const ctx = canvas.getContext('2d')
    
            const maskImage = new Image()
            maskImage.crossOrigin = 'anonymous'
            maskImage.onload = () => {
                // Get original image dimensions to preserve aspect ratio
                const originalWidth = maskImage.width
                const originalHeight = maskImage.height
                const aspectRatio = originalWidth / originalHeight
    
                // Calculate grid dimensions based on aspect ratio
                this.gridWidth = 0
                this.gridHeight = 0
                if (aspectRatio > 1) {
                    // Image is wider than tall
                    this.gridWidth = this.gridSize
                    this.gridHeight = Math.round(this.gridSize / aspectRatio)
                } else {
                    // Image is taller than wide or square
                    this.gridHeight = this.gridSize
                    this.gridWidth = Math.round(this.gridSize * aspectRatio)
                }
    
                canvas.width = this.gridWidth
                canvas.height = this.gridHeight
                ctx.drawImage(maskImage, 0, 0, this.gridWidth, this.gridHeight)
    
                const imageData = ctx.getImageData(0, 0, this.gridWidth, this.gridHeight)
                this.data = imageData.data
    			this.createGrid()
    		}
    
            maskImage.src = '../images/heart.jpg'
    	}
        ...
    }

    Match mask resolution to grid

    • We don’t want to stretch the mask — this keeps it proportional to the grid.
    • gridWidth and gridHeight are how many mask pixels we’ll sample horizontally and vertically.
    • This matches the logical cube grid, so each cube can correspond to one pixel in the mask.
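
    The aspect-ratio fit can be sketched as a pure function (maskGridDimensions is a hypothetical name; the real code stores the results on this):

```javascript
// Scale the mask's sampling grid so its longer side equals gridSize,
// preserving the image's aspect ratio.
function maskGridDimensions(width, height, gridSize) {
  const aspectRatio = width / height;
  if (aspectRatio > 1) {
    // Image is wider than tall
    return { gridWidth: gridSize, gridHeight: Math.round(gridSize / aspectRatio) };
  }
  // Image is taller than wide, or square
  return { gridWidth: Math.round(gridSize * aspectRatio), gridHeight: gridSize };
}

console.log(maskGridDimensions(200, 100, 10)); // { gridWidth: 10, gridHeight: 5 }
console.log(maskGridDimensions(100, 200, 10)); // { gridWidth: 5, gridHeight: 10 }
console.log(maskGridDimensions(128, 128, 10)); // { gridWidth: 10, gridHeight: 10 }
```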

    Applying the Mask to the Grid

    Let’s combine mask-based filtering with custom UV mapping to decide where in the grid boxes should appear, and how each box maps to a section of the projected video.
    Here’s the concept step by step:

    • Loops through every potential (x, y) position in a virtual grid.
    • At each grid cell, it will decide whether to place a box and, if so, how to texture it.
    • flippedY: Flips the Y-axis because image coordinates start from the top-left, while the grid’s origin starts from the bottom-left.
    • pixelIndex: Locates the pixel in the this.data array.
    • Each pixel stores 4 values: red, green, blue, alpha.
    • Extracts the R, G, and B values for that mask pixel.
    • Brightness is calculated as the average of R, G, B.
    • If the pixel is dark enough (brightness < 128), a cube will be created.
    • White pixels are ignored → those positions stay empty.
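
    The sampling logic described in those bullets can be sketched as a standalone helper (isCubeVisible is an illustrative name; the real code inlines this inside createGrid):

```javascript
// Given the RGBA pixel data of the scaled-down mask, decide whether
// the cube at grid position (x, y) should exist.
function isCubeVisible(data, x, y, gridWidth, gridHeight, threshold = 128) {
  // Flip Y because image rows run top-to-bottom, grid rows bottom-to-top.
  const flippedY = gridHeight - 1 - y;
  const pixelIndex = (flippedY * gridWidth + x) * 4; // 4 bytes per pixel: RGBA
  const brightness = (data[pixelIndex] + data[pixelIndex + 1] + data[pixelIndex + 2]) / 3;
  return brightness < threshold; // dark pixel -> cube is created
}

// 2x2 mask, rows stored top-to-bottom: top row [black, white], bottom row [white, black].
const maskData = [
  0, 0, 0, 255,        255, 255, 255, 255, // top row
  255, 255, 255, 255,  0, 0, 0, 255,       // bottom row
];

console.log(isCubeVisible(maskData, 0, 1, 2, 2)); // true  (grid top-left -> image top row, black)
console.log(isCubeVisible(maskData, 0, 0, 2, 2)); // false (grid bottom-left -> image bottom row, white)
```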
    export default class Models {
    	constructor(gl_app) {
            ...
    		this.createMask()
        }
    
    	createMask() {
            ...
    	}
    
        createGrid() {
            ...
            for (let x = 0; x < this.gridSize; x++) {
                for (let y = 0; y < this.gridSize; y++) {
                    
                    const geometry = new THREE.BoxGeometry(0.5, 0.5, 0.5);
    
                    // Get pixel color from mask (sample at grid position)
                    // Flip Y coordinate to match image orientation
                    const flippedY = this.gridHeight - 1 - y
                    const pixelIndex = (flippedY * this.gridWidth + x) * 4
                    const r = this.data[pixelIndex]
                    const g = this.data[pixelIndex + 1]
                    const b = this.data[pixelIndex + 2]
    
                    // Calculate brightness (0 = black, 255 = white)
                    const brightness = (r + g + b) / 3
    
                    // Only create box if pixel is dark (black shows, white hides)
                    if (brightness < 128) { // Threshold for black vs white
    
                        // Create individual geometry for each box to have unique UV mapping
                        // Calculate UV coordinates for this specific box
                        const uvX = x / this.gridSize
                        const uvY = y / this.gridSize // Remove the flip to match correct orientation
                        const uvWidth = 1 / this.gridSize
                        const uvHeight = 1 / this.gridSize
                        
                        // Get the UV attribute
                        const uvAttribute = geometry.attributes.uv
                        const uvArray = uvAttribute.array
                        
                        // Map each face of the box to show the same portion of video
                        // We'll focus on the front face (face 4) for the main projection
                        for (let i = 0; i < uvArray.length; i += 2) {
                            // Map all faces to the same UV region for consistency
                            uvArray[i] = uvX + (uvArray[i] * uvWidth)     // U coordinate
                            uvArray[i + 1] = uvY + (uvArray[i + 1] * uvHeight) // V coordinate
                        }
                        
                        // Mark the attribute as needing update
                        uvAttribute.needsUpdate = true
                        
                        const mesh = new THREE.Mesh(geometry, this.material);
    
                        mesh.position.x = (x - (this.gridSize - 1) / 2) * this.spacing;
                        mesh.position.y = (y - (this.gridSize - 1) / 2) * this.spacing;
                        mesh.position.z = 0;
    
                        this.group.add(mesh);
                    }
                }
            }
            ...
        }
        ...
    }

    Further steps

    • UV mapping is the process of mapping 2D video pixels onto 3D geometry.
    • Each cube gets its own unique UV coordinates corresponding to its position in the grid.
    • uvWidth and uvHeight are how much of the video texture each cube covers.
    • Modifies the cube’s uv attribute so all faces display the exact same portion of the video.

    Here is the result with the mask applied:

    Adding Some Depth and Motion to the Grid

    Adding subtle motion along the Z-axis brings the otherwise static grid to life, making the projection feel more dynamic and dimensional.

    update() {
        if (this.is_ready) {
            this.group.children.forEach((model, index) => {
                model.position.z = Math.sin(Date.now() * 0.005 + index * 0.1) * 0.6
            })
        }
    }
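
    That one-liner does all the work: each cube’s z depends on the current time plus a per-cube phase offset, which is what turns independent oscillations into a ripple across the grid. A sketch of the same formula as a pure function (waveZ is an illustrative name):

```javascript
// z offset for one cube: time drives the oscillation, index * 0.1
// shifts the phase per cube, and 0.6 caps the amplitude.
function waveZ(timeMs, index) {
  return Math.sin(timeMs * 0.005 + index * 0.1) * 0.6;
}

// At a fixed instant, neighbouring cubes sit at slightly different depths,
// and every cube stays within the +/-0.6 band:
const t = 1000;
const depths = [0, 1, 2, 3].map(i => waveZ(t, i));
console.log(depths.every(z => Math.abs(z) <= 0.6)); // true
```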

    It’s Time for Multiple Grids

    Up until now we’ve been working with a single mask and a single video, but the real fun begins when we start layering multiple projections together. By combining different mask images with their own video sources, we can create a collection of independent grids that coexist in the same scene. Each grid can carry its own identity and motion, opening the door to richer compositions, transitions, and storytelling effects.

    1. A Playlist of Masks and Videos

    export default class Models {
    	constructor(gl_app) {
            ...
            this.grids_config = [
                {
                    id: 'heart',
                    mask: `heart.jpg`,
                    video: `fruits_trail_squared-transcode.mp4`
                },
                {
                    id: 'codrops',
                    mask: `codrops.jpg`,
                    video: `KinectCube_1350-transcode.mp4`
                },
                {
                    id: 'smile',
                    mask: `smile.jpg`,
                    video: `infinte-grid_squared-transcode.mp4`
                },
            ]
            this.grids = []
            this.grids_config.forEach((config, index) => this.createMask(config, index))
        }
    ...
    }

    Instead of one mask and one video, we now have a list of mask-video pairs.

    Each object defines:

    • id → name/id for each grid.
    • mask → the black/white image that controls which cubes appear.
    • video → the texture that will be mapped onto those cubes.

    This allows you to have multiple different projections in the same scene.

    2. Looping Over All Grids

    Once we have our playlist of mask–video pairs defined, the next step is to go through each item and prepare it for rendering.

    For every configuration in the list we call createMask(config, index), which takes care of loading the mask image, reading its pixels, and then passing the data along to build the corresponding grid.

    At the same time, we keep track of all the grids by storing them in a this.grids array, so later on we can animate them, show or hide them, and switch between them interactively.

    3. createMask(config, index)

    createMask(config, index) {
        ...
        maskImage.onload = () => {
            ...
            this.createGrid(config, index)
        }
        maskImage.src = `../images/${config.mask}`
    }
    • Loads the mask image for the current grid.
    • When the image is loaded, runs the mask pixel-reading logic (as explained before) and then calls createGrid() with the same config and index.
    • The mask determines which cubes are visible for this specific grid.

    4. createVideoTexture(config, index)

    createVideoTexture(config, index) {
        this.video = document.createElement('video')
        this.video.src = `../videos/${config.video}`
        ...
    }
    • Creates a <video> element using the specific video file for this grid.
    • The video is then converted to a THREE.VideoTexture and assigned as the material for the cubes in this grid.
    • Each grid can have its own independent video playing. Note that this.video is overwritten on each call; if you later need to pause or swap a specific grid’s video, store a per-grid reference (for example on the grid’s group) instead.

    5. createGrid(config, index)

    createGrid(config, index) {
            this.createVideoTexture(config, index)
            const grid_group = new THREE.Group()
            this.group.add(grid_group)
    
            for (let x = 0; x < this.gridSize; x++) {
                for (let y = 0; y < this.gridSize; y++) {
                        ...
                        grid_group.add(mesh);
                }
            }
            grid_group.name = config.id
            this.grids.push(grid_group);
            grid_group.position.z = - 2 * index 
            ...
        }
    • Creates a new THREE.Group for this grid so all its cubes can be moved together.
    • This keeps each mask/video projection isolated.
    • grid_group.name: Assigns config.id as the group’s name, so the grid can be found by its ID later.
    • this.grids.push(grid_group): Stores this grid in an array so you can control it later (e.g., show/hide, animate, change videos).
    • grid_group.position.z: Offsets each grid further back in Z-space so they don’t overlap visually.
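The bookkeeping described above can be sketched in isolation. This is a standalone snippet with plain objects standing in for `THREE.Group` (only `name` and `position.z` matter here):

```javascript
// Each grid group gets its config id as a name and is pushed back in Z.
const grids_config = [{ id: 'heart' }, { id: 'codrops' }, { id: 'smile' }]

const grids = grids_config.map((config, index) => ({
  name: config.id,             // lets us find the grid later
  position: { z: -2 * index }  // offset each grid further back
}))

// Later lookups work exactly like
// this.grids.find(item => item.name === this.current):
const current = grids.find(item => item.name === 'codrops')
```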

    And here is the result for the multiple grids:

    And finally: Interaction & Animations

    Let’s start by creating a simple UI with some buttons on our HTML:

    <ul class="btns">
    	<li class="btns__item">
    		<button class="active" data-id="heart">
    			...
    		</button>
    	</li>
    	<li class="btns__item">
    		<button data-id="codrops">
    			...
    		</button>
    	</li>
    	<li class="btns__item">
    		<button data-id="smile">
    			...
    		</button>
    	</li>
    </ul>

    We’ll also add a data-current="heart" attribute to our canvas element; it will be used to change the canvas background-color depending on which button was clicked.

    <canvas id="sketch" data-current="heart"></canvas>

    Let’s now create some colors for each grid using CSS:

    [data-current="heart"] {
    	background-color: #e19800;
    }
    
    [data-current="codrops"] {
    	background-color: #00a00b;
    }
    
    [data-current="smile"] {
    	background-color: #b90000;
    }

    Time to wire up the interactions:

    createGrid(config, index) {
        ...
        this.initInteractions()
    }

    1. this.initInteractions()

    initInteractions() {
        this.current = 'heart'
        this.old = null
        this.is_animating = false
        this.duration = 1
    
        this.DOM = {
            $btns: document.querySelectorAll('.btns__item button'),
            $canvas: document.querySelector('canvas')
        }
        this.grids.forEach(grid => {
            if(grid.name != this.current) {
                grid.children.forEach(mesh => mesh.scale.setScalar(0))
            }
        })
        this.bindEvents()
    }
    • this.current → The currently active grid ID. Starts as "heart" so the "heart" grid will be visible by default.
    • this.old → Used to store the previous grid ID when switching between grids.
    • this.is_animating → Boolean flag to prevent triggering a new transition while one is still running.
    • this.duration → How long the animation takes (in seconds).
    • $btns → Selects all the buttons inside .btns__item. Each button corresponds to one grid you can switch to.
    • $canvas → Selects the main <canvas> element where the Three.js scene is rendered.

    The loop then goes through all the grids in the scene:

    • If a grid is not the current one (grid.name != this.current), all of its cubes are scaled to 0 so they are invisible at the start.
    • This means only the "heart" grid is visible when the scene first loads.
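The initial visibility pass can be sketched with plain objects standing in for the groups and meshes (only `name` and a scalar `scale` are modeled here):

```javascript
// Hide every cube of the non-active grids, mirroring
// mesh.scale.setScalar(0) in initInteractions().
const current = 'heart'
const grids = [
  { name: 'heart',   children: [{ scale: 1 }, { scale: 1 }] },
  { name: 'codrops', children: [{ scale: 1 }, { scale: 1 }] },
  { name: 'smile',   children: [{ scale: 1 }] }
]

grids.forEach(grid => {
  if (grid.name != current) {
    grid.children.forEach(mesh => { mesh.scale = 0 })
  }
})
```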

    2. bindEvents()

    bindEvents() {
        this.DOM.$btns.forEach(($btn, index) => {
            $btn.addEventListener('click', () => {
                if (this.is_animating) return
                this.is_animating = true
                this.DOM.$btns.forEach(($otherBtn, btnIndex) => {
                    btnIndex === index ? $otherBtn.classList.add('active') : $otherBtn.classList.remove('active')
                })
                this.old = this.current
                this.current = `${$btn.dataset.id}`
                this.revealGrid()
                this.hideGrid()
            })
        })
    }

    This bindEvents() method wires up the UI buttons so that clicking one will trigger switching between grids in the 3D scene.

    • For each button, attach a click event handler.
    • If an animation is already running, do nothing — this prevents starting multiple transitions at the same time.
    • Sets is_animating to true so no other clicks are processed until the current switch finishes.

    Loops through all buttons again:

    • If this is the clicked button → add the active CSS class (highlight it).
    • Otherwise → remove the active class (unhighlight).
    • this.old → keeps track of which grid was visible before the click.
    • this.current → updates to the new grid’s ID based on the button’s data-id attribute.
      • Example: if the button has data-id="heart", this.current becomes "heart".

    Calls two separate methods:

    • revealGrid() → makes the newly selected grid appear (by scaling its cubes from 0 to full size).
    • hideGrid() → hides the previous grid (by scaling its cubes back down to 0).
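The state bookkeeping of the click handler can be isolated as a pure function (`nextState` is a hypothetical helper for illustration, not part of the tutorial class):

```javascript
// Pure sketch of what a button click does to the slider state.
function nextState(state, clickedId) {
  // Guard: ignore clicks while a transition is running
  if (state.is_animating) return state
  return {
    is_animating: true,   // block further clicks until hideGrid() finishes
    old: state.current,   // the grid that hideGrid() will remove
    current: clickedId    // the grid that revealGrid() will show
  }
}

const initial = { is_animating: false, old: null, current: 'heart' }
const afterClick = nextState(initial, 'codrops')
const ignored = nextState(afterClick, 'smile') // still animating, unchanged
```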

    3. revealGrid() & hideGrid()

    revealGrid() {
        // Filter the current grid based on this.current value
        const grid = this.grids.find(item => item.name === this.current);
        
        this.DOM.$canvas.dataset.current = `${this.current}` 
        const tl = gsap.timeline({ delay: this.duration * 0.25, defaults: { ease: 'power1.out', duration: this.duration } })
        grid.children.forEach((child, index) => {
            tl
                .to(child.scale, { x: 1, y: 1, z: 1, ease: 'power3.inOut' }, index * 0.001)
                .to(child.position, { z: 0 }, '<')
        })
    }
    
    hideGrid() {
        // Filter the current grid based on this.old value
        const grid = this.grids.find(item => item.name === this.old);
        const tl = gsap.timeline({
            defaults: { ease: 'power1.out', duration: this.duration },
            onComplete: () => { this.is_animating = false }
        })
        grid.children.forEach((child, index) => {
            tl
                .to(child.scale, { x: 0, y: 0, z: 0, ease: 'power3.inOut' }, index * 0.001)
                .to(child.position, {
                    z: 6, onComplete: () => {
                        gsap.set(child.scale, { x: 0, y: 0, z: 0 })
                        gsap.set(child.position, { z: - 6 })
                    }
                }, '<')
        })
    }
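Note the third argument `index * 0.001` in both timelines: it is GSAP's position parameter, placing each cube's tween slightly later than the previous one. The resulting ripple timing can be checked with a little arithmetic (assuming the 1-second `this.duration` and a fully dark 10×10 mask):

```javascript
// Each cube's tween starts index * 0.001 seconds into the timeline,
// so the cubes ripple in rather than popping all at once.
const duration = 1     // this.duration
const step = 0.001     // per-cube offset used as the position parameter
const cubeCount = 100  // e.g. every cell of a 10x10 mask is dark

const lastStart = (cubeCount - 1) * step  // when the last cube begins
const totalTime = lastStart + duration    // when the last tween ends
```

With 100 cubes the whole reveal takes barely longer than a single tween (about 1.1 s); increase `step` for a more pronounced cascade.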

    And that is it! A fully animated and interactive video projection slider, made of hundreds of small cubes (meshes).

    ⚠️ Performance considerations

    The approach used in this tutorial is the simplest and most digestible way to apply the projection concept. However, it can create too many draw calls: 100–1,000 cubes are usually fine, but tens of thousands can be slow. If you need a denser grid or many more meshes, consider THREE.InstancedMesh and custom shaders.
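As a rough sketch of the cost model (hypothetical numbers; real draw-call counts also depend on materials and the renderer):

```javascript
// One THREE.Mesh per cube costs roughly one draw call per visible cube;
// a single InstancedMesh collapses them into one call.
function drawCalls(visibleCubes, useInstancing) {
  return useInstancing ? 1 : visibleCubes
}

const perMesh = drawCalls(10000, false)  // 10,000 calls: likely slow
const instanced = drawCalls(10000, true) // a single call
```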

    Going further

    This is a fully functional and versatile concept, so it opens up many possibilities.
    It can be applied in some really cool ways: scrollable storytelling, exhibition simulations, intro animations, portfolio showcases, and more.

    Here are some links for you to get inspired:

    Final Words

    I hope you’ve enjoyed this tutorial. Give it a try in your own projects, or just explore the possibilities by changing the grid parameters, masks, and videos.

    And speaking of the videos: the ones used in this example are screen recordings of the Creative Code lessons on my web animations platform vwlab.io, where you can learn how to create more interactions and animations like this one.

    Come join us, you will be more than welcome! ☺️❤️



    Source link

                    }
                }, '<')
        })
    }

    And that is it! A fully animated, interactive video projection slider, made with hundreds of small cubes (meshes).

    ⚠️ Performance considerations

    The approach used in this tutorial is the simplest and most digestible way to apply the projection concept; however, it can generate too many draw calls: 100–1,000 cubes might be fine, but tens of thousands can be slow. If you need a more detailed grid or more meshes on it, consider InstancedMesh and shaders.
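    As a direction for that optimization, here is a hedged sketch of drawing the whole cube grid as a single three.js InstancedMesh, collapsing thousands of draw calls into one. THREE.InstancedMesh, setMatrixAt and instanceMatrix.needsUpdate are real three.js APIs, but the grid size and variable names here are illustrative assumptions, not code from this tutorial:

    ```javascript
    import * as THREE from 'three';

    // Illustrative sketch: one InstancedMesh replaces thousands of
    // individual cube meshes, so the whole grid is a single draw call.
    const COLS = 100, ROWS = 100; // hypothetical grid size
    const geometry = new THREE.BoxGeometry(1, 1, 1);
    const material = new THREE.MeshBasicMaterial();
    const grid = new THREE.InstancedMesh(geometry, material, COLS * ROWS);

    // a throwaway Object3D is the usual way to compose per-instance matrices
    const dummy = new THREE.Object3D();
    let i = 0;
    for (let x = 0; x < COLS; x++) {
      for (let y = 0; y < ROWS; y++) {
        dummy.position.set(x, y, 0);
        dummy.updateMatrix();
        grid.setMatrixAt(i++, dummy.matrix); // per-instance transform
      }
    }
    grid.instanceMatrix.needsUpdate = true;
    // scene.add(grid)
    ```

    Note the trade-off: with instancing you can no longer tween mesh.scale per cube directly; the reveal/hide animations would instead write per-instance matrices via setMatrixAt on each animation tick.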

    Going further

    This is a fully functional and versatile concept, so it opens up many possibilities.
    It can be applied in some really cool ways: scrollable storytelling, exhibition simulations, intro animations, portfolio showcases, and more.

    Here are some links for you to get inspired:

    Final Words

    I hope you’ve enjoyed this tutorial. Give it a try in your own projects, or just explore the possibilities by changing the grid parameters, masks, and videos.

    And speaking of the videos: the ones used in this example are screen recordings of the Creative Code lessons from my web animations platform vwlab.io, where you can learn how to create more interactions and animations like this one.

    Come join us, you will be more than welcome! ☺️❤️



    Source link

  • NoisyBear Targets Kazakhstan Oil & Gas

    NoisyBear Targets Kazakhstan Oil & Gas


    Contents

    • Introduction
    • Key Targets
      • Industries Affected.
      • Geographical Focus.
    • Infection Chain.
    • Initial Findings
      • Looking into the malicious email.
      • Looking into the decoy-document.
    • Technical Analysis
      • Stage 0 – Malicious ZIP & LNK files.
      • Stage 1 – Malicious BATCH scripts.
      • Stage 2 – Malicious DOWNSHELL loaders.
      • Stage 3 – Malicious DLL implant.
    • Infrastructure and Hunting.
    • Attribution
    • Conclusion
    • Seqrite Protection.
    • IOCs
    • MITRE ATT&CK.

    Authors: Subhajeet Singha & Sathwik Ram Prakki

    Introduction

    Seqrite Labs APT-Team has been tracking a previously unknown threat group since April 2025, which we track under the name NoisyBear. This group has targeted entities in Central Asia, specifically the oil and gas (energy) sector of Kazakhstan. The campaign targeted employees of KazMunaiGas (KMG): the threat entity delivered a fake document attributed to the KMG IT department, mimicking official internal communication and leveraging themes such as policy updates, internal certification procedures, and salary adjustments.

    In this blog, we will explore the in-depth technical details of the campaign that we encountered during our analysis. We will examine its various stages: the infection starts with a phishing email carrying a ZIP attachment, which contains a malicious LNK downloader along with a decoy. The LNK downloads a malicious BATCH script, which leads to PowerShell loaders we dubbed DOWNSHELL, which in turn reflectively load a malicious DLL implant. We will also look into the infrastructure behind the entire campaign.

    Key Targets

    Industries Affected.

    • Energy Sector [Oil and Gas]

    Geographical Focus.

    Infection Chain

    Initial Findings

    We have been tracking this threat actor since April 2025, and we observed that it launched a campaign against KazMunaiGas employees in May 2025 using spear-phishing. A compromised business email was used to deliver a malicious ZIP file, which contained a decoy along with a malicious initial-infection shortcut (.LNK) file named График зарплат.lnk, which translates to Salary Schedule.lnk. The sample first surfaced on VirusTotal in the first half of May 2025.

    Now, let us look into the malicious email and decoy file.

    Looking into the malicious email.

    Looking into the email’s sender, we found that the threat actor used the compromised business email of an individual working in the Finance Department of KazMunaiGas. Using this account and the urgent subject line URGENT! Review the updated salary schedule, they emailed the employees of KMG.

    Looking at the contents of the email, it became clear that the message was crafted to look like an internal HR communication about salary-related decisions. The message asks recipients to review updated information on several topics, such as work schedules, salaries, and incentive-related policies. The threat actor also instructs the KMG targets to look for a file named График.zip (Schedule.zip) and then open a file named График зарплат (Salary Schedule), which is the shortcut (LNK) file that, once executed, downloads further stagers.

    Last but not least, the email asks recipients to complete the instructions by 15th May 2025, adding a sense of urgency. Now, let us go ahead and analyze the decoy file.

    Looking into the decoy-document.

    The decoy document carries the official logo of the targeted entity, i.e., KazMunaiGas, along with instructions in both Russian and Kazakh that walk employees through a series of simple steps: open the Downloads folder in the browser, extract a ZIP archive named KazMunayGaz_Viewer.zip, and run a file called KazMunayGaz_Viewer. Although the file name does not match, we believe this is the exact file dropped by the malicious email. The decoy also tells users to wait for a console window to appear and specifically advises them not to close or interact with it, to limit suspicion on the targets’ end. Finally, it is signed by an IT-Support team in the salutation to make it look completely legitimate.

    Technical Analysis.

    We have divided the technical analysis into four parts: first we look into the malicious ZIP containing the LNK file, which downloads the malicious BATCH script; then we move on to the script-based loaders, followed by the malicious DLL.

    Stage 0 – Malicious ZIP & LNK Files.

    Looking into the ZIP file, we found three files: the first is the decoy document we saw earlier; the second is README.txt, which repeats the instructions so that nothing seems suspicious; and the last is the malicious LNK file.

    Now, looking into the malicious shortcut (.LNK) file, named График зарплат, we found that it uses the powershell.exe LOLBIN to implement downloader behavior.

    It downloads a malicious batch script named 123.bat from a remote server, hxxps[://]77[.]239[.]125[.]41[:]8443, stores it under the path C:\Users\Public, and then executes it from that path using the Start-Process cmdlet.

    While hunting for similar LNK files, we found another LNK belonging to the same campaign that looks slightly different.

    This malicious LNK file uses a small operand trick, concatenating string literals, to avoid static signature detection. It likewise downloads a batch script from the same remote server, saves it to the Public folder, and executes it via the cmdlet.

    In the next section, we will examine the malicious BATCH scripts.

    Stage 1 – Malicious BATCH Scripts.

    Looking into one of the BATCH scripts, i.e., it.bat, we can see that it downloads the PowerShell loaders we have dubbed DOWNSHELL, named support.ps1 and a.ps1, from a remote server; once they are downloaded, it sleeps for a total of 11 seconds.

    The second batch script, i.e., the 123.bat file, does the same: it downloads the PowerShell loaders, followed by a sleep of 10 seconds.

    In the next section, we will move ahead to understanding the working of the DOWNSHELL loaders written in PowerShell.

    Stage 2 – Malicious DOWNSHELL Loaders.

    In this section we will look into the set of malicious PowerShell scripts we have dubbed DOWNSHELL. The first, support.ps1, is responsible for impairing defenses on the target machine, while the second performs the loader function.

    Looking into the code, we figured out that the script obfuscates the target namespace by building “System.Management.Automation” via string concatenation, then enumerates all loaded .NET assemblies in the current AppDomain and filters for the one whose FullName matches that namespace.

    Then, using reflection, it resolves the internal type System.Management.Automation.AmsiUtils and retrieves its private static field amsiInitFailed. Flipping this flag convinces PowerShell that AMSI has failed to initialize, so the other malicious script belonging to the DOWNSHELL family is not scanned and executes without interruption. Now, let us look into the second PowerShell script.

    The first part of the code looks like a copy from the famous red-team emulation tool PowerSploit. The function LookupFunc dynamically retrieves the memory address of any exported function from a specified DLL without using traditional DllImport or Add-Type calls. It does this by locating the Microsoft.Win32.UnsafeNativeMethods type within the already-loaded System.dll assembly, then extracting and invoking the hidden .NET wrappers for GetModuleHandle and GetProcAddress. By first resolving the base address of the target module ($moduleName) and then passing it along with the target function name ($functionName), it returns a raw function pointer to that API.

    In the second part of the code, the function getDelegateType creates a custom .NET delegate type on the fly, entirely in memory. It takes the parameter types and return type, builds a new delegate class with those, and gives it an Invoke method so it can be used like a normal function. This lets the script wrap the raw function pointers (from LookupFunc) into something PowerShell can call directly, making it easy to run WinAPI functions without importing them in the usual way. The script then queries the process ID of the explorer.exe process and stores it in a variable.

    The latter part of the script contains a byte array holding Meterpreter reverse_tcp shellcode, which it injects into the target process, explorer.exe, using the classic CreateRemoteThread injection technique (OpenProcess, VirtualAllocEx, WriteProcessMemory and CreateRemoteThread), followed by the message Injected! Check your listener!.

    An interesting part of this script is a commented-out section that performs reflective DLL injection into a remote process, notepad in this case, using PowerSploit hosted on the remote server; a Meterpreter-based DLL is downloaded and used. Another slightly interesting detail is the presence of comments in Russian. In the next section, we will examine the DLL.

    Stage 3 – Malicious DLL Implant.

    We first examined the DLL implant in a PE-analysis tool, which confirmed that the implant, a shellcode loader, is a 64-bit binary.

    Moving on to the code, we saw that the implant uses a semaphore as a gatekeeper to ensure only one copy of itself runs at a time; in this case the implant uses a named object, Local\doSZQmSnP12lu4Pb5FRD. When it starts, it tries to create this semaphore; if it already exists, another instance is active. To double-check, it uses WaitForSingleObject on the semaphore and then looks for a specific named event. If the event exists, it knows another instance has already completed its setup. If it doesn’t, it creates the event itself.

    Depending on the outcome of that instance check, the next step is to spawn a rundll32.exe process in a suspended state.

    After creating the process in a suspended state, the implant performs classic thread-context hijacking: it calls GetThreadContext on the primary thread, uses VirtualAllocEx to reserve RWX memory in the target, WriteProcessMemory to drop the shellcode, updates the thread’s RIP to point to that buffer via SetThreadContext, and finally calls ResumeThread so execution continues at the injected shellcode. In this case, the shellcode is simply a reverse shell.

    Infrastructure & Hunting.

    Looking into the infrastructure the threat entity had been using, we found a few interesting details.

    Tool-Arsenal

    Along with the tools we saw being used by the threat actor, we found that more open-source red-team tools had been hosted by the actor for further use.

    Pivoting

    Using a similar fingerprint, we hunted down related infrastructure belonging to the same threat actor.

    One of the most interesting parts is that both pieces of infrastructure are hosted with a sanctioned hosting firm known as Aeza Group LLC.

    Another interesting find is a number of suspicious web applications hosted there, related to wellness, fitness, and health assistance for Russian individuals.

    Attribution.

    Attribution is a very important metric when describing a threat entity. It involves analyzing and correlating various domains, including Tactics, Techniques and Procedures (TTPs), operational mistakes that could lead to attribution, and the rotation and re-use of similar infrastructure artefacts.

    In our ongoing tracking of NoisyBear, we have collected several artefacts, such as the languages present inside the tooling, the use of sanctioned web-hosting services, and behavioral overlaps with Russian threat entities that have previously targeted similar Central Asian nations. Based on these, we assess that the threat actor is possibly of Russian origin.

    Conclusion.

    We have found that a threat entity, dubbed NoisyBear, is targeting the Kazakh energy sector using company-specific lures while relying heavily on PowerShell and open-source post-exploitation tools such as Metasploit, hosted with a sanctioned web-hosting provider. We can also conclude that the threat actor has been active since April 2025.

    SEQRITE Protection.

    TBD.

    IOCs

    File-Type SHA-256
    Outlook 5168a1e22ee969db7cea0d3e9eb64db4a0c648eee43da8bacf4c7126f58f0386
    ZIP 021b3d53fe113d014a9700488e31a6fb5e16cb02227de5309f6f93affa4515a6
    ZIP f5e7dc5149c453b98d05b73cad7ac1c42b381f72b6f7203546c789f4e750eb26
    LNK a40e7eb0cb176d2278c4ab02c4657f9034573ac83cee4cde38096028f243119c
    LNK 26f009351f4c645ad4df3c1708f74ae2e5f8d22f3b0bbb4568347a2a72651bee
    Batch Script d48aeb6afcc5a3834b3e4ca9e0672b61f9d945dd41046c9aaf782382a6044f97
    Batch Script 1eecfc1c607be3891e955846c7da70b0109db9f9fdf01de45916d3727bff96e0
    PowerShell da98b0cbcd784879ba38503946898d747ade08ace1d4f38d0fb966703e078bbf
    PowerShell 6d6006eb2baa75712bfe867bf5e4f09288a7d860a4623a4176338993b9ddfb4b
    PowerShell fb0f7c35a58a02473f26aabea4f682e2e483db84b606db2eca36aa6c7e7d9cf8
    DLL 1bfe65acbb9e509f80efcfe04b23daf31381e8b95a98112b81c9a080bdd65a2d
    Domains/IPs
    77[.]239[.]125[.]41
    wellfitplan[.]ru
    178[.]159[.]94[.]8

    MITRE ATT&CK

    Tactic               Technique ID   Name
    Reconnaissance       T1589.002      Gather Victim Identity Information: Email Addresses
    Initial Access       T1204.002      User Execution: Malicious File
                         T1078.002      Valid Accounts: Domain Accounts
    Execution            T1059.001      Command and Scripting Interpreter: PowerShell
                         T1059.00
    Defense Evasion      T1562          Impair Defenses
                         T1027.007      Dynamic API Resolution
                         T1027.013      Encrypted/Encoded File
                         T1055.003      Thread Execution Hijacking
                         T1620          Reflective Code Loading
                         T1218.011      System Binary Proxy Execution: Rundll32
    Command and Control  T1105          Ingress Tool Transfer
    Exfiltration         T1567.002      Exfiltration to Cloud Storage

     



    Source link

  • Exception handling with WHEN clause | Code4IT

    Exception handling with WHEN clause | Code4IT



    From C# 6 on, you can use the when keyword to specify a condition before handling an exception.

    Consider this – pretty useless, I have to admit – type of exception:

    public class RandomException : System.Exception
    {
        public int Value { get; }
        public RandomException()
        {
            Value = (new Random()).Next();
        }
    }
    

    This exception type contains a Value property which is populated with a random value when the exception is thrown.

    What if you want to print a different message depending on whether the Value property is odd or even?

    You can do it this way:

    try
    {
        throw new RandomException();
    }
    catch (RandomException re)
    {
        if(re.Value % 2 == 0)
            Console.WriteLine("Exception with even value");
        else
            Console.WriteLine("Exception with odd value");
    }
    

    But, well, you should keep your catch blocks as simple as possible.

    That’s where the when keyword comes in handy.

    CSharp when clause

    You can use it to create two distinct catch blocks, each of which handles its case in the cleanest way possible.

    try
    {
        throw new RandomException();
    }
    catch (RandomException re) when (re.Value % 2 == 0)
    {
        Console.WriteLine("Exception with even value");
    }
    catch (RandomException re)
    {
        Console.WriteLine("Exception with odd value");
    }
    

    You must use the when keyword in conjunction with a condition, which can also reference the current instance of the exception being caught. Here, the condition references the Value property of the RandomException instance.

    A real usage: HTTP response errors

    Ok, that example with the random exception is a bit… useless?

    Let’s see a real example: handling different HTTP status codes in case of failing HTTP calls.

    In the following snippet, I call an endpoint that returns a specified status code (506, in my case).

    try
    {
        var endpoint = "https://mock.codes/506";
        var httpClient = new HttpClient();
        var response = await httpClient.GetAsync(endpoint);
        response.EnsureSuccessStatusCode();
    }
    catch (HttpRequestException ex) when (ex.StatusCode == (HttpStatusCode)506)
    {
        Console.WriteLine("Handle 506: Variant also negotiates");
    }
    catch (HttpRequestException ex)
    {
        Console.WriteLine("Handle another status code");
    }
    

    If the response is not a success, the response.EnsureSuccessStatusCode() throws an exception of type HttpRequestException. The thrown exception contains some info about the returned status code, which we can use to route the exception handling to the correct catch block using when (ex.StatusCode == (HttpStatusCode)506).

    Quite interesting, uh? 😉

    This article first appeared on Code4IT

    To read more, you can head to the official documentation, even though there’s not so much.

    Happy coding!

    🐧



    Source link

  • 7 Must-Know GSAP Animation Tips for Creative Developers

    7 Must-Know GSAP Animation Tips for Creative Developers


    Today we’re going to go over some of my favorite GSAP techniques that can bring you great results with just a little code.

    Although the GSAP documentation is among the best, I find that developers often overlook some of GSAP’s greatest features or perhaps struggle with finding their practical application. 

    The techniques presented here will be helpful to GSAP beginners and seasoned pros. It is recommended that you understand the basics of loading GSAP and working with tweens, timelines and SplitText. My free beginner’s course GSAP Express will guide you through everything you need for a firm foundation.

    If you prefer a video version of this tutorial, you can watch it here:

    https://www.youtube.com/watch?v=EKjYspj9MaM

    Tip 1: SplitText Masking

    GSAP’s SplitText just went through a major overhaul. It has 14 new features and weighs in at roughly 7kb.

    SplitText allows you to split HTML text into characters, lines, and words. It has powerful features to support screen-readers, responsive layouts, nested elements, foreign characters, emoji and more.

    My favorite feature is its built-in support for masking (available in SplitText version 3.13+).

    Prior to this version of SplitText you would have to manually nest your animated text in parent divs that have overflow set to hidden or clip in the css.

    SplitText now does this for you by creating “wrapper divs” around the elements that we apply masking to.

    Basic Implementation

    The code below will split the h1 tag into chars and also apply a mask effect, which means the characters will not be visible when they are outside their bounding box.

    const split = SplitText.create("h1", {
    	type:"chars",
    	mask:"chars"
    })

    Demo: Split Text Masking (Basic)

    See the Pen
    Codrops Tip 1: Split Text Masking – Basic by Snorkl.tv (@snorkltv)
    on CodePen.

    This simple implementation works great and is totally fine.

    However, if you inspect the DOM you will see that 2 new <div> elements are created for each character:

    • an outer div with overflow:clip
    • an inner div with text 

    With 17 characters to split this creates 34 divs as shown in the simplified DOM structure below

    <h1>SplitText Masking
    	<div> <!-- char wrapper with overflow:clip -->
    		<div>S</div>
    	</div>
    	<div> <!-- char wrapper with overflow:clip -->
    		<div>p</div>
    	</div>
    	<div> <!-- char wrapper with overflow:clip -->
    		<div>l</div>
    	</div>
    	<div> <!-- char wrapper with overflow:clip -->
    		<div>i</div>
    	</div>
    	<div> <!-- char wrapper with overflow:clip -->
    		<div>t</div>
    	</div>	
    	...
    </h1>

    The More Efficient Approach

    If you want to minimize the amount of DOM elements created you can split your text into characters and lines. Then you can just set the masking on the lines element like so:

    const split = SplitText.create("h1", {
    	type:"chars, lines",
    	mask:"lines"
    })

    Demo: Split Text Masking (Better with chars and lines)

    See the Pen
    Codrops Tip 1: Split Text Masking – Better with chars and lines by Snorkl.tv (@snorkltv)
    on CodePen.

    Now if you inspect the DOM you will see that there are:

    • 1 line wrapper div with overflow:clip
    • 1 line div
    • 1 div per character 

    With 17 characters to split this creates only 19 divs in total:

    <h1>SplitText Masking
    	<div> <!-- line wrapper with overflow:clip -->
    		<div> <!-- line -->
    			<div>S</div>
    			<div>p</div>
    			<div>l</div>
    			<div>i</div>
    			<div>t</div>
    			...
    		</div> 
    	</div> 
    </h1>

    Tip 2: Setting the Stagger Direction

    From my experience 99% of stagger animations go from left to right. Perhaps that’s just because it’s the standard flow of written text.

    However, GSAP makes it super simple to add some animation pizzazz to your staggers.

    To change the direction from which staggered animations start, you need to use the object syntax for the stagger value.

    Normal Stagger

    Typically the stagger value is a single number which specifies the amount of time between the start of each target element’s animation.

    gsap.to(targets, {x:100, stagger:0.2}) // 0.2 seconds between the start of each animation

    Stagger Object

    By using the stagger object we can specify multiple parameters to fine-tune our staggers such as each, amount, from, ease, grid and repeat. See the GSAP Stagger Docs for more details.
    Our focus today will be on the from property which allows us to specify from which direction our staggers should start.

    gsap.to(targets, {x:100,
       stagger: {
         each:0.2, // amount of time between the start of each animation
         from:"center" // animate from center of the targets array
       }
    })

    The from property in the stagger object can be any one of these string values:

    • “start” (default)
    • “center”
    • “end”
    • “edges”
    • “random”
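    To build intuition for what from does, here is a rough, hypothetical sketch (not GSAP’s internal code) of how per-target delays could be derived for "start", "center" and "end"; "edges" and "random" are left out for simplicity:

    ```javascript
    // Rough sketch (not GSAP's actual implementation) of how the delay
    // for each target could be computed from the stagger's `from` option.
    function staggerDelays(count, each, from) {
      let origin;
      if (from === "start") origin = 0;
      else if (from === "end") origin = count - 1;
      else origin = (count - 1) / 2; // "center"

      // each target's delay grows with its distance from the origin index
      return Array.from({ length: count }, (_, i) => Math.abs(i - origin) * each);
    }

    console.log(staggerDelays(5, 0.5, "start"));  // [0, 0.5, 1, 1.5, 2]
    console.log(staggerDelays(5, 0.5, "center")); // [1, 0.5, 0, 0.5, 1]
    console.log(staggerDelays(5, 0.5, "end"));    // [2, 1.5, 1, 0.5, 0]
    ```

    With from:"center", the two targets equidistant from the middle start at the same time, which is exactly the symmetric effect you see in the demos below.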

    Demo: Stagger Direction Timeline

    In this demo the characters animate in from center and then out from the edges.

    See the Pen
    Codrops Tip 2: Stagger Direction Timeline by Snorkl.tv (@snorkltv)
    on CodePen.

    Demo: Stagger Direction Visualizer

    See the Pen
    Codrops Tip 2: Stagger Direction Visualizer by Snorkl.tv (@snorkltv)
    on CodePen.

    Tip 3: Wrapping Array Values

    The gsap.utils.wrap() function allows you to pull values from an array and apply them to multiple targets. This is great for allowing elements to animate in from opposite directions (like a zipper), assigning a set of colors to multiple objects and many more creative applications.

    Setting Colors From an Array

    I love using gsap.utils.wrap() with a set() to instantly manipulate a group of elements.

    // split the header
    const split = SplitText.create("h1", {
    	type:"chars"
    })
    
    //create an array of colors
    const colors = ["lime", "yellow", "pink", "skyblue"]
    
    // set each character to a color from the colors array
    gsap.set(split.chars, {color:gsap.utils.wrap(colors)})

    When the last color in the array (skyblue) is chosen GSAP will wrap back to the beginning of the array and apply lime to the next element.
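    Conceptually, the array form of gsap.utils.wrap() is just modulo indexing. Here is a minimal sketch of the idea (GSAP’s real utility also handles ranges and other inputs):

    ```javascript
    // Minimal model of gsap.utils.wrap([...]) for arrays: an index past
    // the end of the array wraps back to the beginning via modulo.
    const wrap = (arr) => (index) => arr[index % arr.length];

    const pick = wrap(["lime", "yellow", "pink", "skyblue"]);
    console.log(pick(3)); // "skyblue" (last color)
    console.log(pick(4)); // "lime"    (wrapped back to the start)
    ```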

    Animating from Alternating Directions

    In the code below each target will animate in from alternating y values of -50 and 50. 

    Notice that you can define the array directly inside of the wrap() function.

    const tween = gsap.from(split.chars, {
    	y:gsap.utils.wrap([-50, 50]),
    	opacity:0,
    	stagger:0.1
    }) 

    Demo: Basic Wrap

    See the Pen
    Codrops Tip 3: Basic Wrap by Snorkl.tv (@snorkltv)
    on CodePen.

    Demo: Fancy Wrap

    In the demo below there is a timeline that creates a sequence of animations that combine stagger direction and wrap. Isn’t it amazing what GSAP allows you to do with just a few simple shapes and a few lines of code?

    See the Pen
    Codrops Tip 3: Fancy Wrap by Snorkl.tv (@snorkltv)
    on CodePen.

    As you watch the animation be sure to go through the GSAP code to see which tween is running each effect. 

    I strongly recommend editing the animation values and experimenting.

    Tip 4: Easy Randomization with the “random()” String Function

    GSAP has its own random utility function gsap.utils.random() that lets you tap into convenient randomization features anywhere in your JavaScript code.

    // generate a random number between 0 and 450
    const randomNumber = gsap.utils.random(0, 450)

    To randomize values in animations we can use the random string shortcut which saves us some typing.

    //animate each target to a random x value between 0 and 450
    gsap.to(targets, {x:"random(0, 450)"})
    
    //the third parameter sets the value to snap to
    gsap.to(targets, {x:"random(0, 450, 50)"}) // random number will be an increment of 50
    
    //pick a random value from an array for each target
    gsap.to(targets, {fill:"random([pink, yellow, orange, salmon])"})
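    To make the snap parameter concrete, here is a rough model of random-with-snap. This is an assumption about the behavior, not GSAP’s source:

    ```javascript
    // Hypothetical model of "random(min, max, snap)": pick a value in
    // the range, then round it to the nearest multiple of the increment.
    function randomSnapped(min, max, snap) {
      const value = min + Math.random() * (max - min);
      return Math.round(value / snap) * snap;
    }

    // every result lands on one of 0, 50, 100, ..., 450
    console.log(randomSnapped(0, 450, 50));
    ```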

    Demo: Random String

    See the Pen
    Codrops Tip 4: Random String by Snorkl.tv (@snorkltv)
    on CodePen.

    TIP 5: repeatRefresh:true

    This next tip appears to be pure magic as it allows our animations to produce new results each time they repeat.

    GSAP internally stores the start and end values of an animation the first time it runs. This is a performance optimization so that each time it repeats there is no additional work to do. By default repeating tweens always produce the exact same results (which is a good thing).

    When dealing with dynamic or function-based values such as those generated with the random string syntax “random(0, 100)” we can tell GSAP to record new values on repeat by setting repeatRefresh:true

    You can set repeatRefresh:true in the config object of a single tween OR on a timeline.

    //use on a tween
    gsap.to(target, {x:"random(50, 100)", repeat:10, repeatRefresh:true})
    
    //use on a timeline
    const tl = gsap.timeline({repeat:10, repeatRefresh:true})

    Demo: repeatRefresh Particles

    The demo below contains a single timeline with repeatRefresh:true.

    Each time it repeats the circles get assigned a new random scale and a new random x destination.

    Be sure to study the JS code in the demo. Feel free to fork it and modify the values.

    See the Pen
    Codrops Tip 5: repeatRefresh Particles by Snorkl.tv (@snorkltv)
    on CodePen.

    TIP 6: Tween The TimeScale() of an Animation

    GSAP animations have getter / setter values that allow you to get and set properties of an animation.

    Common Getter / Setter methods:

    • paused() gets or sets the paused state
    • duration() gets or sets the duration
    • reversed() gets or sets the reversed state
    • progress() gets or sets the progress
    • timeScale() gets or sets the timeScale

    Getter Setter Methods in Usage

    animation.paused(true) // sets the paused state to true
    console.log(animation.paused()) // gets the paused state
    console.log(!animation.paused()) // gets the inverse of the paused state
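    The pattern behind these methods is a single function that acts as a getter when called with no arguments and a setter when given one. A simplified sketch of that pattern (mine, not GSAP source code):

    ```javascript
    // Simplified sketch of the hybrid getter/setter pattern
    function makeAnimation() {
      let _paused = false;
      return {
        paused(value) {
          if (value === undefined) return _paused; // no argument: act as a getter
          _paused = value;                         // argument given: act as a setter
          return this;                             // return the animation for chaining
        },
      };
    }

    const animation = makeAnimation();
    animation.paused(true);
    // animation.paused() now returns true
    ```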

    See it in Action

    In the demo from the previous tip there is code that toggles the paused state of the particle effect.

    //click to pause
    document.addEventListener("click", function(){
    	tl.paused(!tl.paused()) 
    })

    This code means “every time the document is clicked the timeline’s paused state will change to the inverse (or opposite) of what it currently is”.

    If the animation is paused, it will become “unpaused” and vice-versa.

    This works great, but I’d like to show you a trick for making it less abrupt and smoothing it out.

    Tweening Numeric Getter/Setter Values

    We can’t tween the paused() state as it is either true or false.

    Where things get interesting is that we can tween numeric getter / setter properties of animations like progress() and timeScale().

    timeScale() represents a factor of an animation’s playback speed.

    • timeScale(1): playback at normal speed
    • timeScale(0.5): playback at half speed
    • timeScale(2): playback at double speed
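    Put another way, the effective playback time is the animation’s duration divided by its timeScale. A quick sketch of the arithmetic:

    ```javascript
    // Effective playback time = duration / timeScale
    function effectivePlaybackTime(duration, timeScale) {
      return duration / timeScale;
    }

    // A 5-second tween at timeScale 0.5 takes 10 seconds; at timeScale 2, only 2.5.
    ```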

    Setting timeScale()

    //create an animation with a duration of 5 seconds
    const animation = gsap.to(box, {x:500, duration:5})
    
    //playback at half-speed making it take 10 seconds to play
    animation.timeScale(0.5)

    Tweening timeScale()

    const animation = gsap.to(box, {x:500, duration:5}) // create a basic tween
    
    // Over the course of 1 second reduce the timeScale of the animation to 0.5
    gsap.to(animation, {timeScale:0.5, duration:1})

    Dynamically Tweening timeScale() for smooth pause and un-pause

    Instead of abruptly changing the paused state of the animation as the particle demo above does, we are now going to tween the timeScale() for a MUCH smoother effect.

    Demo: Particles with timeScale() Tween

    See the Pen
    Codrops Tip 6: Particles with timeScale() Tween by Snorkl.tv (@snorkltv)
    on CodePen.

    Click anywhere in the demo above to see the particles smoothly slow down and speed up on each click.

    The code in the demo basically says “if the animation is currently playing then we will slow it down or else we will speed it up”. Every time a click happens, the isPlaying value toggles between true and false so that it can be updated for the next click.
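    That toggle logic can be sketched like this (variable names are mine; the demo’s actual values and durations may differ):

    ```javascript
    // Sketch of the click handler's decision: slow down if playing, speed up if not.
    let isPlaying = true;

    function nextTimeScale() {
      // playing → tween timeScale toward 0; stopped → back toward 1
      const target = isPlaying ? 0 : 1;
      isPlaying = !isPlaying; // flip the flag for the next click
      return target;
    }

    // In the demo, each click would feed this into something like:
    // gsap.to(tl, {timeScale: nextTimeScale(), duration: 1});
    ```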

    Tip 7: GSDevTools Markers and Animation IDs

    Most of the demos in this article have used GSDevTools to help us control our animations. When building animations I just love being able to scrub at my own pace and study the sequencing of all the moving parts.

    However, there is more to this powerful tool than just scrubbing, playing and pausing.

    Markers

    The in and out markers allow us to loop ANY section of an animation. As an added bonus, GSDevTools remembers the previous position of the markers so that each time we reload our animation it will start and end at the same time.

    This makes it very easy to loop a particular section and study it.

    Image from GSDevTools Docs

    Markers are a huge advantage when building animations longer than 3 seconds.

    To explore, open The Fancy Wrap() demo in a new window, move the markers and reload.

    Important: The markers are only available on screens wider than 600px. On small screens the UI is minimized to only show basic controls.

    Setting IDs for the Animation Menu

    The animation menu allows us to navigate to different sections of our animation based on an animation id. When dealing with long-form animations this feature is an absolute life saver.

    Since GSAP’s syntax makes creating complex sequences a breeze, it is not uncommon to find yourself working on animations that are beyond 10, 20 or even 60 seconds!

    To set an animation id:

    const tl = gsap.timeline({id:"fancy"})
    
    //Add the animation to GSDevTools based on variable reference
    GSDevTools.create({animation:tl})
    
    //OR add the animation to GSDevTools based on id
    GSDevTools.create({animation:"fancy"})

    With the code above the name “fancy” will display in GSDevTools.

    Although you can use the id with a single timeline, this feature is most helpful when working with nested timelines as discussed below.

    Demo: GSAP for Everyone

    See the Pen
    Codrops Tip 7: Markers and Animation Menu by Snorkl.tv (@snorkltv)
    on CodePen.

    This demo is 26 seconds long and has 7 child timelines. Study the code to see how each timeline has a unique id that is displayed in the animation menu.

    Use the animation menu to navigate to and explore each section.

    Important: The animation menu is only available on screens wider than 600px.

    Hopefully you can see how useful markers and animation ids can be when working with these long-form, hand-coded animations!

    Want to Learn More About GSAP?

    I’m here to help. 

    I’ve spent nearly 5 years archiving everything I know about GSAP in video format spanning 5 courses and nearly 300 lessons at creativeCodingClub.com.

    I spent many years “back in the day” using GreenSock’s ActionScript tools as a Flash developer, and this experience led to me being hired at GreenSock when they switched to JavaScript. My time at GreenSock had me creating countless demos, videos and learning resources.

    Spending years answering literally thousands of questions in the support forums has left me with a unique ability to help developers of all skill levels avoid common pitfalls and get the most out of this powerful animation library.

    It’s my mission to help developers from all over the world discover the joy of animating with code through affordable, world-class training.

    Visit Creative Coding Club to learn more.


