PostgreSQL is a famous relational database. In this article, we will learn how to run it locally using Docker.
PostgreSQL is an open-source relational database with a growing community supporting the project.
There are several ways to host a Postgres database online so that you can use it to store data for your live applications. For local development, though, you might want to spin up a Postgres database on your local machine.
In this article, we will learn how to run PostgreSQL on a Docker container for local development.
Pull Postgres Docker Image
As you may know, Docker allows you to download images of almost everything you want in order to run them locally (or wherever you want) without installing too much stuff.
The best way to check the available versions is to head to DockerHub and search for postgres.
Here you’ll find a description of the image, all the documentation related to the installation parameters, and more.
If you have Docker already installed, just open a terminal and run
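docker pull postgres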
to download the latest image of PostgreSQL.
Run the Docker Container
Now that we have the image in our local environment, we can spin up a container and specify some parameters.
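Putting together the parameters explained below, the full command looks like this:

docker run --name myPostgresDb -p 5455:5432 -e POSTGRES_USER=postgresUser -e POSTGRES_PASSWORD=postgresPW -e POSTGRES_DB=postgresDB -d postgres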
docker run is the command used to create and run a new container based on an already downloaded image.
--name myPostgresDb is the name we assign to the container that we are creating.
-p 5455:5432 is the port mapping. Postgres natively exposes the port 5432, and we have to map that port (that lives within Docker) to a local port. In this case, the local 5455 port maps to Docker’s 5432 port.
-e POSTGRES_USER=postgresUser, -e POSTGRES_PASSWORD=postgresPW, and -e POSTGRES_DB=postgresDB set some environment variables. Of course, we’re defining the username and password of the admin user, as well as the name of the database.
-d indicates that the container runs in detached mode, meaning it runs as a background process.
postgres is the name of the image we are using to create the container.
As a result, you will see the newly created container on the CLI (running docker ps) or view it using some UI tool like Docker Desktop:
If you forgot which environment variables you’ve defined for that container, you can retrieve them using Docker Desktop or by running docker exec myPostgresDb env, as shown below:
Note: environment variables may change with newer image versions. Always refer to the official docs, specifically to the documentation related to the image version you are consuming.
Navigate the DB with pgAdmin
Now that we have Postgres up and running, we can work with it.
You can work with the DB using the console, or, if you prefer, using a UI.
I prefer the second approach (yes, I know, it’s not as cool as using the terminal, but it works), so I downloaded pgAdmin.
There, you can connect to the server by using the environment variables you’ve defined when running docker run. Remember that the hostname is simply localhost.
And we’ve finished! 🥳 Now you can work with a local instance of Postgres and shut it down or remove it when you don’t need it anymore.
Additional resources
I’ve already introduced Docker in another article, where I explained how to run MongoDB locally:
Finally, a special mention to Francesco Ciulla, who taught me how to run Postgres with Docker while I taught him how to query it with C#. Yes, mutual help! 👏
Hello, I’m Thierry Chopain, a freelance interactive art director, co-founder of type8 studio and a UX/UI design instructor at SUP de PUB (Lyon).
Based near Saint-Étienne, I cultivate a balance between creative ambition and local grounding, between high-level design and a more human pace of life. I work remotely with a close-knit team spread between Lyon, Montpellier, and Paris, where we design custom projects that blend strategy, brand identity, and digital experience.
My approach is deeply collaborative. I believe in lasting relationships built on trust, mutual listening, and the value of each perspective. Beyond aesthetics, my role is to bring clarity, meaning, and visual consistency to every project. Alongside my design practice, I teach at SUP de PUB, where I support students not only in mastering UX/UI concepts, but also in shaping their path as independent designers. Sharing what I’ve learned on the ground, the wins, the struggles, and the lessons, is a mission that matters deeply to me.
My day-to-day life is a mix of slow living and agility. This hybrid rhythm allows me to stay true to my values while continuing to grow in a demanding and inspiring industry. I collaborate with a trusted network of creatives including Jeremy Fagis, Marine Ferrari, Thomas Aufresne, Jordan Thiervoz, Alexandre Avram, Benoit Drigny, and Olivier Marmillon to enrich every project with a shared, high-level creative vision.
It’s an investment fund built around a strong promise: to invest disruptively in the most valuable assets of our time. Type8 studio partnered with DEPARTMENT Maison de Création and Paul Barbin to design a fully reimagined website that lives up to its bold vision and distinctive positioning. Site structure, visual direction, tone of voice, and user experience were all redefined to reflect the strategic precision, elegance, and forward-thinking nature of the fund.
The goal of this project: Position OVA as a benchmark combining financial performance, innovation, and rarity, through refined design, a seamless interface, and custom development, in order to strengthen its credibility with a discerning audience and strategic partners.
Hocus Pocus is a Lyon-based animation studio specializing in the creation of CGI and visual effects for the television, cinema, and video game industries. The studio offers top-quality services with an ever-higher technical and artistic level of requirement. I worked on this project in collaboration with the Lyon-based studio AKARU, which specializes in tailored and meticulously crafted projects.
The goal of this project: Develop a coherent and professional digital brand image that highlights visual effects, while boosting visibility and online presence to attract and inspire trust in customers.
21 TSI isn’t your typical sports holding company. Overseeing a portfolio of brands in the sports equipment space, the team set out to break from the mold of the standard corporate website. Instead, they envisioned a digital experience that would reflect their DNA, where innovation, design, and technology converge into a rich, immersive journey. We collaborated with DEPARTMENT Maison de Création and Paul Barbin to create something truly unique.
The goal of this project: A website that embodies the DNA of 21TSI: innovation, technology, minimalism. An immersive and aesthetic experience, a clean design, and an approach that explores new ways of engaging with sport through AI.
TERIA is a system that provides real-time centimeter-level positioning. It is an innovative tool for localization and georeferencing. We set out to create an intuitive and innovative experience that perfectly reflects Teria’s precision and forward-thinking vision. A major part of the work focused on a clean, minimalist design that allows for smooth navigation, making space to highlight the incredible work of Alexandre Avram, showcasing the products through Spline and 3D motion design.
The goal of this project: Develop a clear and professional digital brand that reflects the brand’s identity and values, showcases product innovation, and boosts visibility to build trust and attract customers.
In a dense and ever-evolving music scene, standing out requires more than just great sound; it also takes a strong and cohesive visual presence. Whether it’s the cinematic intensity of Lecomte de Brégeot or the raw emotion of Élimane, my approach remains the same: to craft a visual universe that extends and enhances the essence of each artist, regardless of the medium.
Lecomte de Brégeot – French electronic music producer
Visual recap – Cover design for “Sequences” (Lecomte de Brégeot)
Case study – Cover design for “Fragment” (Lecomte de Brégeot)
Élimane – Weaver of Sounds, Sculptor of Emotions.
I design visual identities, websites, and digital assets that combine bold aesthetics with clarity. The goal is to give each artist a unique space where their music, vision, and personality can fully come to life, both visually and emotionally.
A Defining Moment in My Career
A turning point in my journey was the transition from working as an independent designer to founding a structured creative studio, type8 Studio. For more than ten years, I worked solo or within informal networks, juggling projects, constantly adapting, and learning how to shape my own freedom. That period gave me a lot—not only in terms of experience, but also in understanding what I truly wanted… and what I no longer wanted.
Creating a studio was never a predefined goal. It came together progressively, through encounters, shared values, and the growing need to give form to something more collective and sustainable. Type8 was born from this shared intention: bringing together skills and creative ambitions while preserving individual freedom.
This change was not a rupture but a natural evolution. I didn’t abandon my three identities—independent designer, studio art director, and educator. On the contrary, I integrated them into a more fluid and conscious ecosystem. Today, I can choose the most relevant role depending on the project: sometimes the studio takes the lead, sometimes it’s the freelance spirit that fits best, and at other times, it’s the educator in me who comes forward.
This hybrid model, which some might see as unstable, is for me a tailor-made balance, deeply aligned with how I envision work: adaptive, intentional, and guided by respect for the project’s purpose and values.
My Design Philosophy
I see design as a tool serving meaning, people, and impact beyond mere aesthetics. It’s about creating connection, clarity, and relevance between intention and users. This approach was shaped through my collaboration with my wife, an expert in digital accessibility, who raised my awareness of inclusion and of real user needs that are often overlooked.
Today, I bring ethics, care, and respect into every project, focusing on accessible design and core human values: kindness, clarity, usefulness, and respecting user constraints. I prioritize human collaboration, tailoring each solution to the client’s context and values, even if it means going against trends. My design blends strategic thinking, creativity, and personal commitment to create enriching and socially valuable experiences.
Tools and Techniques
Figma: To design, create, and gather ideas collaboratively.
Jitter: For crafting smooth and engaging motion designs.
Loom: To exchange feedback efficiently with clients.
Tools evolve, but they’re just means to an end. What really matters is your ability to think and create. If you’re a good designer, you’ll know how to adapt, no matter the tool.
My Inspirations
My imagination was shaped somewhere between a game screen and a sketchbook. Among all my influences, narrative video games hold a special place. Titles like “The Last of Us” have had a deep impact on me, not just for their striking art direction, but for their ability to tell a story in an immersive, emotional, and sensory way. What inspires me in these universes isn’t just the gameplay, but how they create atmosphere, build meaningful moments, and evoke emotion without words. Motion design, sound, typography, lighting: all of it is composed like a language. And that’s exactly how I approach interactive design: orchestrating visual and experiential elements to convey a message, an intention, or a feeling.
But my inspirations go beyond the digital world. They lie at the intersection of street art, furniture design, and sneakers. My personal environment also plays a crucial role in fueling my creativity. Living in a small village close to nature, surrounded by calm and serenity, gives me the mental space I need to create. It’s often in these quiet moments (a walk through the woods, a shared silence, the way light plays on a path) that my strongest ideas emerge.
I’m a creative who exists at the crossroads: between storytelling and interaction, between city and nature, between aesthetics and purpose. That’s where my work finds its balance.
Final Thoughts
For me, design has always been more than a craft: it’s a way to connect ideas, people, and emotions. Every project is an opportunity to tell a story, to create something that feels both meaningful and timeless. Stay curious, stay human, and don’t be afraid to push boundaries. Because the most memorable work is born when passion meets purpose.
“Turning ideas into timeless experiences.”
Contact
Thanks for taking the time to read this article.
If you’re a brand, studio, or institution looking for a strong and distinctive digital identity, I’d be happy to talk, whether it’s about a project, a potential collaboration, or just sharing a few ideas.
Every name must be meaningful and clear. If names are not obvious, other developers (or your future self) may misinterpret what you meant.
Avoid using mental mapping to abbreviate names, unless the abbreviation is obvious or common.
Names should not be based on mental mapping, and even less so without context.
Bad mental mappings
Take this bad example:
public void RenderWOSpace()
What is a WOSpace? Without context, readers won’t understand its meaning. Ok, some people use WO as an abbreviation of without.
So, a better name is, of course:
public void RenderWithoutSpace()
Acceptable mappings
Some abbreviations are quite obvious and are totally fine to be used.
For instance, standard abbreviations, like km for kilometer.
The AI ecosystem is evolving rapidly, and Anthropic’s release of the Model Context Protocol on November 25th, 2024 has certainly shaped how LLMs connect with data. No more building custom integrations for every data source: MCP provides one protocol to connect them all. But here’s the challenge: building MCP servers from scratch can be complex.
TL;DR: What is MCP?
Think of MCP like a USB-C port for AI applications. Just as USB-C provides a standardized way to connect devices to various peripherals, MCP provides a standardized way to connect AI models to different data sources, tools, and services. It’s an open protocol that enables AI applications to safely and efficiently access external context – whether that’s your company’s database, file systems, APIs, or custom business logic.
In practice, this means you can hook LLMs into the things you already work with every day. To name a few examples, you could query databases to visualize trends, pull and resolve issues from GitHub, fetch or update content in a CMS, and so on. Beyond development, the same applies to broader workflows: customer support agents can look up and resolve tickets, enterprise search can fetch and read content scattered across wikis and docs, operations can monitor infrastructure or control devices.
But there’s more to it, and that’s when you really unlock the power of MCP. It’s not just about single tasks, but rethinking entire workflows. Suddenly, we’re shaping our way to interact with products and even our own computers: instead of adapting ourselves to the limitations of software, we can shape the experience around our own needs.
That’s where xmcp comes in: a TypeScript framework designed with DX in mind, for developers who want to build and ship MCP servers without the usual friction. It removes the complexity and gets you up and running in a matter of minutes.
A little backstory
xmcp was born out of necessity at Basement Studio, where we needed to build internal tools for our development processes. As we dove deeper into the protocol, we quickly discovered how fragmented the tooling landscape was and how much time we were spending on setup, configuration, and deployment rather than actually building the tools our team needed.
That’s when we decided to consolidate everything we’d learned into a framework. The philosophy was simple: developers shouldn’t have to become experts just to build AI tools. The focus should be on creating valuable functionality, not wrestling with boilerplate code and all sorts of complexities.
Key features & capabilities
xmcp shines in its simplicity. With just one command, you can scaffold a complete MCP server:
npx create-xmcp-app@latest
The framework automatically discovers and registers tools. No extra setup needed.
All you need is tools/
xmcp abstracts away the original tool syntax from the TypeScript SDK and follows a separation-of-concerns principle, with a simple three-export structure:
Implementation: The actual tool logic.
Schema: Define input parameters using Zod schemas with automatic validation
Metadata: Specify tool identity and behavior hints for AI models
// src/tools/greet.ts
import { z } from "zod";
import { type InferSchema } from "xmcp";
// Define the schema for tool parameters
export const schema = {
name: z.string().describe("The name of the user to greet"),
};
// Define tool metadata
export const metadata = {
name: "greet",
description: "Greet the user",
annotations: {
title: "Greet the user",
readOnlyHint: true,
destructiveHint: false,
idempotentHint: true,
},
};
// Tool implementation
export default async function greet({ name }: InferSchema<typeof schema>) {
return `Hello, ${name}!`;
}
Transport Options
HTTP: Perfect for server deployments, enabling tools that fetch data from databases or external APIs
STDIO: Ideal for local operations, allowing LLMs to perform tasks directly on your machine
You can tweak the configuration to your needs by modifying the xmcp.config.ts file in the root directory. Among the options you can find the transport type, CORS setup, experimental features, tools directory, and even the webpack config. Learn more about this file here.
const config: XmcpConfig = {
http: {
port: 3000,
// The endpoint where the MCP server will be available
endpoint: "/my-custom-endpoint",
bodySizeLimit: 10 * 1024 * 1024,
cors: {
origin: "*",
methods: ["GET", "POST"],
allowedHeaders: ["Content-Type"],
credentials: true,
exposedHeaders: ["Content-Type"],
maxAge: 600,
},
},
webpack: (config) => {
// Add raw loader for images to get them as base64
config.module?.rules?.push({
test: /\.(png|jpe?g|gif|svg|webp)$/i,
type: "asset/inline",
});
return config;
},
};
Built-in Middleware & Authentication
For HTTP servers, xmcp provides native solutions for adding authentication (JWT, API Key, OAuth). You can always extend your application by adding custom middleware, which can even be an array.
While you can bootstrap an application from scratch, xmcp can also work on top of your existing Next.js or Express project. To get started, run the following command:
npx init-xmcp@latest
on your initialized application, and you are good to go! You’ll find a tools directory with the same discovery capabilities. If you’re using Next.js the handler is set up automatically. If you’re using Express, you’ll have to configure it manually.
From zero to prod
Let’s see this in action by building and deploying an MCP server. We’ll create a Linear integration that fetches issues from your backlog and calculates completion rates, perfect for generating project analytics and visualizations.
For this walkthrough, we’ll use Cursor as our MCP client to interact with the server.
Setting up the project
The fastest way to get started is by deploying the xmcp template directly from Vercel. This automatically initializes the project and creates an HTTP server deployment in one click.
Alternative setup: If you prefer a different platform or transport method, scaffold locally with npx create-xmcp-app@latest
Once deployed, you’ll see this project structure:
Building our main tool
Our tool will accept three parameters: team name, start date, and end date. It’ll then calculate the completion rate for issues within that timeframe.
Head to the tools directory, create a file called get-completion-rate.ts and export the three main elements that construct the syntax:
import { z } from "zod";
import { type InferSchema, type ToolMetadata } from "xmcp";
export const schema = {
team: z
.string()
.min(1, "Team name is required")
.describe("The team to get completion rate for"),
startDate: z
.string()
.min(1, "Start date is required")
.describe("Start date for the analysis period (YYYY-MM-DD)"),
endDate: z
.string()
.min(1, "End date is required")
.describe("End date for the analysis period (YYYY-MM-DD)"),
};
export const metadata: ToolMetadata = {
name: "get-completion-rate",
description: "Get completion rate analytics for a specific team over a date range",
};
export default async function getCompletionRate({
team,
startDate,
endDate,
}: InferSchema<typeof schema>) {
// tool implementation we'll cover in the next step
};
Our basic structure is set. We now have to add the client functionality to actually communicate with Linear and get the data we need.
We’ll be using Linear’s personal API Key, so we’ll need to instantiate the client using @linear/sdk . We’ll focus on the tool implementation now:
export default async function getCompletionRate({
team,
startDate,
endDate,
}: InferSchema<typeof schema>) {
const linear = new LinearClient({
apiKey: // our api key
});
};
Instead of hardcoding API keys, we’ll use the native headers utilities to accept the Linear API key securely from each request:
export default async function getCompletionRate({
team,
startDate,
endDate,
}: InferSchema<typeof schema>) {
// API Key from headers
const apiKey = headers()["linear-api-key"] as string;
if (!apiKey) {
return "No linear-api-key header provided";
}
const linear = new LinearClient({
apiKey: apiKey,
});
// rest of the implementation
}
This approach allows multiple users to connect with their own credentials. Your MCP configuration will look like:
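Here’s a sketch of what that configuration might look like in Cursor’s mcp.json, assuming an HTTP deployment and that your client supports per-server headers (the server name and URL are placeholders; only the linear-api-key header name comes from the tool code above):

{
  "mcpServers": {
    "linear-analytics": {
      "url": "https://your-deployment.vercel.app/mcp",
      "headers": {
        "linear-api-key": "lin_api_xxxxxxxx"
      }
    }
  }
}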
Moving forward with the implementation, this is what our complete tool file will look like:
import { z } from "zod";
import { type InferSchema, type ToolMetadata } from "xmcp";
import { headers } from "xmcp/dist/runtime/headers";
import { LinearClient } from "@linear/sdk";
export const schema = {
team: z
.string()
.min(1, "Team name is required")
.describe("The team to get completion rate for"),
startDate: z
.string()
.min(1, "Start date is required")
.describe("Start date for the analysis period (YYYY-MM-DD)"),
endDate: z
.string()
.min(1, "End date is required")
.describe("End date for the analysis period (YYYY-MM-DD)"),
};
export const metadata: ToolMetadata = {
name: "get-completion-rate",
description: "Get completion rate analytics for a specific team over a date range",
};
export default async function getCompletionRate({
team,
startDate,
endDate,
}: InferSchema<typeof schema>) {
// API Key from headers
const apiKey = headers()["linear-api-key"] as string;
if (!apiKey) {
return "No linear-api-key header provided";
}
const linear = new LinearClient({
apiKey: apiKey,
});
// Get the team by name
const teams = await linear.teams();
const targetTeam = teams.nodes.find(t => t.name.toLowerCase().includes(team.toLowerCase()));
if (!targetTeam) {
return `Team "${team}" not found`
}
// Get issues created in the date range for the team
const createdIssues = await linear.issues({
filter: {
team: { id: { eq: targetTeam.id } },
createdAt: {
gte: startDate,
lte: endDate,
},
},
});
// Get issues completed in the date range for the team (for reporting purposes)
const completedIssues = await linear.issues({
filter: {
team: { id: { eq: targetTeam.id } },
completedAt: {
gte: startDate,
lte: endDate,
},
},
});
// Calculate completion rate: percentage of created issues that were completed
const totalCreated = createdIssues.nodes.length;
const createdAndCompleted = createdIssues.nodes.filter(issue =>
issue.completedAt !== undefined &&
issue.completedAt >= new Date(startDate) &&
issue.completedAt <= new Date(endDate)
).length;
const completionRate = totalCreated > 0 ? (createdAndCompleted / totalCreated * 100).toFixed(1) : "0.0";
// Structure data for the response
const analytics = {
team: targetTeam.name,
period: `${startDate} to ${endDate}`,
totalCreated,
totalCompletedFromCreated: createdAndCompleted,
completionRate: `${completionRate}%`,
createdIssues: createdIssues.nodes.map(issue => ({
title: issue.title,
createdAt: issue.createdAt,
priority: issue.priority,
completed: issue.completedAt !== null,
completedAt: issue.completedAt,
})),
allCompletedInPeriod: completedIssues.nodes.map(issue => ({
title: issue.title,
completedAt: issue.completedAt,
priority: issue.priority,
})),
};
return JSON.stringify(analytics, null, 2);
}
Let’s test it out!
Start your development server by running pnpm dev (or the package manager you’ve set up)
The server will automatically restart whenever you make changes to your tools, giving you instant feedback during development. Then, head to Cursor Settings → Tools & Integrations and toggle the server on. You should see it’s discovering one tool file, which is our only file in the directory.
Let’s now use the tool by querying to “Get the completion rate of the xmcp project between August 1st 2025 and August 20th 2025”.
Let’s try using this tool in a more comprehensive way: we want to understand the project’s completion rate across three separate months, June, July, and August, and visualize the trend. So we will ask Cursor to retrieve the information for these months and generate a trend chart and a monthly issue overview:
Once we’re happy with the implementation, we’ll push our changes and deploy a new version of our server.
Pro tip: use Vercel’s branch deployments to test new tools safely before merging to production.
Next steps
Nice! We’ve built the foundation, but there’s so much more you can do with it.
Expand your MCP toolkit with a complete workflow automation. Take this MCP server as a starting point and add tools that generate weekly sprint reports and automatically save them to Notion, or build integrations that connect multiple project management platforms.
Level up the application by adding authentication. You can use the native OAuth provider to add Linear’s authentication instead of using API keys, or use the Better Auth integration to handle custom authentication paths that fit your organization’s security requirements.
For production workloads, you may need to add custom middlewares, like rate limiting, request logging, and error tracking. This can be easily set up by creating a middleware.ts file in the source directory. You can learn more about middlewares here.
Final thoughts
The best part of what you’ve built here is that xmcp handled all the protocol complexity for you. You didn’t have to learn the intricacies of the Model Context Protocol specification or figure out transport layers: you just focused on solving your actual business problem. That’s exactly how it should be.
Looking ahead, xmcp’s roadmap includes full MCP specification compliance, bringing support for resources, prompts and elicitation. More importantly, the framework is evolving to bridge the gap between prototype and production, with enterprise-grade features for authentication, monitoring, and scalability.
Once we have a Postgres instance running, we can perform operations on it. We will use Npgsql to query a Postgres instance with C#.
PostgreSQL is one of the most famous relational databases. It has tons of features, and it is open source.
In a previous article, we’ve seen how to run an instance of Postgres by using Docker.
In this article, we will learn how to perform CRUD operations in C# by using Npgsql.
Introducing the project
To query a Postgres database, I’ve created a simple .NET API application with CRUD operations.
We will operate on a single table that stores info for my board game collection. Of course, we will Create, Read, Update and Delete items from the DB (otherwise it would not be an article about CRUD operations 😅).
Before we start writing code, we need to install Npgsql, a NuGet package that acts as a data provider for PostgreSQL.
Open the connection
Once we have created the application, we can instantiate and open a connection against our database.
private NpgsqlConnection connection;
public NpgsqlBoardGameRepository()
{
connection = new NpgsqlConnection(CONNECTION_STRING);
connection.Open();
}
We simply create a NpgsqlConnection object, and we keep a reference to it. We will use that reference to perform queries against our DB.
Connection string
The only parameter we can pass as input to the NpgsqlConnection constructor is the connection string.
You must compose it by specifying the host address, the port, the database name we are connecting to, and the credentials of the user that is querying the DB.
If you instantiate Postgres using Docker following the steps I described in a previous article, most of the connection string configurations we use here match the Environment variables we’ve defined before.
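As a minimal sketch, assuming the Docker container from that article (local port 5455, user postgresUser, password postgresPW, database postgresDB), the constant could look like this:

private const string CONNECTION_STRING = "Host=localhost;Port=5455;Username=postgresUser;Password=postgresPW;Database=postgresDB";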
CRUD operations
Now that everything is in place, it’s time to operate on our DB!
We are working on a table, Games, whose name is stored in a constant:
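private const string TABLE_NAME = "Games";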
To double-check the results, you can use a UI tool to access the Database. For instance, if you use pgAdmin, you can find the list of databases running on a host.
And, if you want to see the content of a particular table, you can select it under Schemas > public > Tables > tablename, and then select View > All Rows.
Create
First things first, we have to insert some data in our DB.
The commandText string contains the full command to be issued. In this case, it’s a simple INSERT statement.
We use the commandText string to create a NpgsqlCommand object by specifying the query and the connection where we will perform that query. Note that the command must be disposed of after its use: wrap it in a using block.
Then, we add the parameters to the query. AddWithValue accepts two parameters: the first is the name of the key, matching the name defined in the query but without the @ symbol (in the query we use @minPl, as a parameter we use minPl); the second is the value to assign to it.
Never, ever create the query by concatenating the input params into the string: that would expose you to SQL Injection attacks.
Finally, we can execute the query asynchronously with ExecuteNonQueryAsync.
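The original snippet is not shown here, so below is a minimal sketch of what the insert could look like, reusing the table columns and parameter names that appear in the Update method later in the article (the method name Add is illustrative):

public async Task Add(BoardGame game)
{
    // Full INSERT command; values are passed as parameters, never concatenated
    string commandText = $@"INSERT INTO {TABLE_NAME} (id, Name, MinPlayers, MaxPlayers, AverageDuration)
        VALUES (@id, @name, @minPl, @maxPl, @avgDur)";

    await using (var cmd = new NpgsqlCommand(commandText, connection))
    {
        cmd.Parameters.AddWithValue("id", game.Id);
        cmd.Parameters.AddWithValue("name", game.Name);
        cmd.Parameters.AddWithValue("minPl", game.MinPlayers);
        cmd.Parameters.AddWithValue("maxPl", game.MaxPlayers);
        cmd.Parameters.AddWithValue("avgDur", game.AverageDuration);
        await cmd.ExecuteNonQueryAsync();
    }
}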
Read
Now that we have some games stored in our table, we can retrieve those items:
public async Task<BoardGame> Get(int id)
{
string commandText = $"SELECT * FROM {TABLE_NAME} WHERE ID = @id";
await using (NpgsqlCommand cmd = new NpgsqlCommand(commandText, connection))
{
cmd.Parameters.AddWithValue("id", id);
await using (NpgsqlDataReader reader = await cmd.ExecuteReaderAsync())
while (await reader.ReadAsync())
{
BoardGame game = ReadBoardGame(reader);
return game;
}
}
return null;
}
Again, we define the query as a text, use it to create a NpgsqlCommand, specify the parameters’ values, and then we execute the query.
The ExecuteReaderAsync method returns a NpgsqlDataReader object that we can use to fetch the data. We update the position of the stream with reader.ReadAsync(), and then we convert the current data with ReadBoardGame(reader) in this way:
private static BoardGame ReadBoardGame(NpgsqlDataReader reader)
{
int? id = reader["id"] as int?;
string name = reader["name"] as string;
short? minPlayers = reader["minplayers"] as Int16?;
short? maxPlayers = reader["maxplayers"] as Int16?;
short? averageDuration = reader["averageduration"] as Int16?;
BoardGame game = new BoardGame
{
Id = id.Value,
Name = name,
MinPlayers = minPlayers.Value,
MaxPlayers = maxPlayers.Value,
AverageDuration = averageDuration.Value
};
return game;
}
This method simply reads the data associated with each column (for instance, reader["averageduration"]), converts each value to its data type, and then builds and returns a BoardGame object.
Update
Updating items is similar to inserting a new item.
public async Task Update(int id, BoardGame game)
{
var commandText = $@"UPDATE {TABLE_NAME}
SET Name = @name, MinPlayers = @minPl, MaxPlayers = @maxPl, AverageDuration = @avgDur
WHERE id = @id";
await using (var cmd = new NpgsqlCommand(commandText, connection))
{
cmd.Parameters.AddWithValue("id", game.Id);
cmd.Parameters.AddWithValue("name", game.Name);
cmd.Parameters.AddWithValue("minPl", game.MinPlayers);
cmd.Parameters.AddWithValue("maxPl", game.MaxPlayers);
cmd.Parameters.AddWithValue("avgDur", game.AverageDuration);
await cmd.ExecuteNonQueryAsync();
}
}
Of course, the query is different, but the general structure is the same: create the query, create the Command, add parameters, and execute the query with ExecuteNonQueryAsync.
Delete
Just for completeness, here’s how to delete an item by specifying its id.
public async Task Delete(int id)
{
string commandText = $"DELETE FROM {TABLE_NAME} WHERE ID=(@p)";
await using (var cmd = new NpgsqlCommand(commandText, connection))
{
cmd.Parameters.AddWithValue("p", id);
await cmd.ExecuteNonQueryAsync();
}
}
Always the same story, so I have nothing to add.
ExecuteNonQueryAsync vs ExecuteReaderAsync
As you’ve seen, some operations use ExecuteNonQueryAsync, while some others use ExecuteReaderAsync. Why?
ExecuteNonQuery and ExecuteNonQueryAsync execute commands against a connection. Those methods do not return data from the database, but only the number of rows affected. They are used to perform INSERT, UPDATE, and DELETE operations.
On the contrary, ExecuteReader and ExecuteReaderAsync are used to perform queries on the database and return a DbDataReader object, which is a read-only stream of rows retrieved from the data source. They are used in conjunction with SELECT queries.
Bonus 1: Create the table if not already existing
Of course, you can also create tables programmatically.
public async Task CreateTableIfNotExists()
{
var sql = $"CREATE TABLE if not exists {TABLE_NAME}" +
$"(" +
$"id serial PRIMARY KEY, " +
$"Name VARCHAR (200) NOT NULL, " +
$"MinPlayers SMALLINT NOT NULL, " +
$"MaxPlayers SMALLINT, " +
$"AverageDuration SMALLINT" +
$")";
using var cmd = new NpgsqlCommand(sql, connection);
await cmd.ExecuteNonQueryAsync();
}
Again, nothing fancy: create the command text, create a NpgsqlCommand object, and execute the command.
Bonus 2: Check the database version
To check if the database is up and running, and your credentials are correct (those set in the connection string), you might want to retrieve the DB version.
You can do it in 2 ways.
With the following method, you query for the version directly on the database.
public async Task<string> GetVersion()
{
var sql = "SELECT version()";
using var cmd = new NpgsqlCommand(sql, connection);
var versionFromQuery = (await cmd.ExecuteScalarAsync()).ToString();
return versionFromQuery;
}
This method returns lots of info that directly depend on the database instance. In my case, I see PostgreSQL 14.1 (Debian 14.1-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit.
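The other way is presumably to read the version from the connection object itself: Npgsql exposes a PostgreSqlVersion property on NpgsqlConnection, so a minimal sketch could look like this:

public string GetVersionFromConnection()
{
    // Version parsed from what the server reports for this open connection
    return connection.PostgreSqlVersion.ToString();
}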
In this article, we’ve learned how to perform simple operations on a PostgreSQL database to retrieve and update the content of a table.
This is the most basic way to perform those operations. You explicitly write the queries and issue them without much stuff in between.
In future articles, we will see some other ways to perform the same operations in C#, but using other tools and packages. Maybe Entity Framework? Maybe Dapper? Stay tuned!
Projection mapping has long fascinated audiences in the physical world, turning buildings, sculptures, and entire cityscapes into moving canvases. What if you could recreate that same sense of spectacle directly inside the browser?
With WebGL and Three.js, you can project video not onto walls or monuments but onto dynamic 3D grids made of hundreds of cubes, each one carrying a fragment of the video like a digital mosaic. Many will surely recognize this effect from Rogier de Boevé’s portfolio, which gained wide attention for showcasing exactly this approach.
In this tutorial we’ll explore how to simulate video projection mapping in a purely digital environment, from building a grid of cubes, to UV-mapping video textures, to applying masks that determine which cubes appear. The demo for this tutorial is inspired by Rogier’s work, which he breaks down beautifully in his case study for anyone interested in the concept behind it.
The result is a mesmerizing effect that feels both sculptural and cinematic, perfect for interactive installations, portfolio showcases, or simply as a playground to push your creative coding skills further.
What is Video Projection Mapping in the Real World?
When describing video projection mapping, it’s easiest to think of huge buildings lit up with animations during festivals, or art installations where a moving image is “painted” onto sculptures.
Here are some examples of real-world video projections:
Bringing it to our 3D World
In 3D graphics, we can do something similar: instead of shining a physical projector, we map a video texture onto objects in a scene.
Therefore, let’s build a grid of cubes using a mask image that will determine which cubes are visible. A video texture is UV-mapped so each cube shows the exact video fragment that corresponds to its grid cell—together they reconstruct the video, but only where the mask is dark.
Prerequisites:
Three.js r155+
A small, high-contrast mask image (e.g. a heart silhouette).
A video URL with CORS enabled.
Our Boilerplate and Starting Point
Here is a basic starter setup, i.e. the minimum amount of code and structure you need to get a scene rendering in the browser, without worrying about the specific creative content yet.
This is the video we are using: Big Buck Bunny (without CORS)
All the meshes have the same texture applied:
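The material setup isn’t included here, but a minimal sketch of a shared video texture could look like the following, assuming a video element created in code (the URL is a placeholder; the texture properties follow the Three.js r155+ listed in the prerequisites):

// One video element drives a single texture shared by every cube's material
const video = document.createElement('video')
video.src = 'https://example.com/big-buck-bunny.mp4' // placeholder URL
video.crossOrigin = 'anonymous'
video.loop = true
video.muted = true
video.play()

const videoTexture = new THREE.VideoTexture(video)
videoTexture.colorSpace = THREE.SRGBColorSpace

// Every mesh uses this material; the per-cube UVs decide which part of the frame it shows
this.material = new THREE.MeshBasicMaterial({ map: videoTexture })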
Attributing Projection to the Grid
We will be turning the video into a texture atlas split into a gridSize × gridSize lattice. Each cube in the grid gets its own little UV window (sub-rectangle) of the video so, together, all cubes reconstruct the full frame.
Why per-cube geometry? Because the UVs must be unique per cube, we create a new BoxGeometry for each one. If all cubes shared one geometry, they’d also share the same UVs and show the same part of the video.
export default class Models {
constructor(gl_app) {
...
this.createGrid()
}
createGrid() {
...
// Grid parameters
for (let x = 0; x < this.gridSize; x++) {
for (let y = 0; y < this.gridSize; y++) {
const geometry = new THREE.BoxGeometry(0.5, 0.5, 0.5);
// Create individual geometry for each box to have unique UV mapping
// Calculate UV coordinates for this specific box
const uvX = x / this.gridSize
const uvY = y / this.gridSize // Remove the flip to match correct orientation
const uvWidth = 1 / this.gridSize
const uvHeight = 1 / this.gridSize
// Get the UV attribute
const uvAttribute = geometry.attributes.uv
const uvArray = uvAttribute.array
// Map each face of the box to show the same portion of video
// We'll focus on the front face (face 4) for the main projection
for (let i = 0; i < uvArray.length; i += 2) {
// Map all faces to the same UV region for consistency
uvArray[i] = uvX + (uvArray[i] * uvWidth) // U coordinate
uvArray[i + 1] = uvY + (uvArray[i + 1] * uvHeight) // V coordinate
}
// Mark the attribute as needing update
uvAttribute.needsUpdate = true
...
}
}
...
}
...
}
The UV window for cell (x, y), for a grid of size N = gridSize:
UV origin of this cell: uvX = x / N, uvY = y / N
UV size of each cell: uvWidth = 1 / N, uvHeight = 1 / N
Result: every face of the box now samples the same sub-region of the video (and we noted “focus on the front face”; this approach maps all faces to that region for consistency).
We need to create a canvas using a mask that determines which cubes are visible in the grid.
Black (dark) pixels → cube is created.
White (light) pixels → cube is skipped.
To do this, we need to:
Load the mask image.
Scale it down to match our grid size.
Read its pixel color data.
Pass that data into the grid-building step.
export default class Models {
constructor(gl_app) {
...
this.createMask()
}
createMask() {
// Create a canvas to read mask pixel data
const canvas = document.createElement('canvas')
const ctx = canvas.getContext('2d')
const maskImage = new Image()
maskImage.crossOrigin = 'anonymous'
maskImage.onload = () => {
// Get original image dimensions to preserve aspect ratio
const originalWidth = maskImage.width
const originalHeight = maskImage.height
const aspectRatio = originalWidth / originalHeight
// Calculate grid dimensions based on aspect ratio
// gridWidth and gridHeight will be computed below from the mask's aspect ratio
if (aspectRatio > 1) {
// Image is wider than tall
this.gridWidth = this.gridSize
this.gridHeight = Math.round(this.gridSize / aspectRatio)
} else {
// Image is taller than wide or square
this.gridHeight = this.gridSize
this.gridWidth = Math.round(this.gridSize * aspectRatio)
}
canvas.width = this.gridWidth
canvas.height = this.gridHeight
ctx.drawImage(maskImage, 0, 0, this.gridWidth, this.gridHeight)
const imageData = ctx.getImageData(0, 0, this.gridWidth, this.gridHeight)
this.data = imageData.data
this.createGrid()
}
maskImage.src = '../images/heart.jpg'
}
...
}
Match mask resolution to grid
We don’t want to stretch the mask — this keeps it proportional to the grid.
gridWidth and gridHeight are how many mask pixels we’ll sample horizontally and vertically.
This matches the logical cube grid, so each cube can correspond to one pixel in the mask.
Applying the Mask to the Grid
Let’s combine mask-based filtering with custom UV mapping to decide where boxes should appear in the grid, and how each box maps to a section of the projected video. Here’s the concept, step by step:
Loops through every potential (x, y) position in a virtual grid.
At each grid cell, it will decide whether to place a box and, if so, how to texture it.
flippedY: Flips the Y-axis because image coordinates start from the top-left, while the grid’s origin starts from the bottom-left.
pixelIndex: Locates the pixel in the this.data array.
Each pixel stores 4 values: red, green, blue, alpha.
Extracts the R, G, and B values for that mask pixel.
Brightness is calculated as the average of R, G, B.
If the pixel is dark enough (brightness < 128), a cube will be created.
White pixels are ignored → those positions stay empty.
export default class Models {
constructor(gl_app) {
...
this.createMask()
}
createMask() {
...
}
createGrid() {
...
for (let x = 0; x < this.gridSize; x++) {
for (let y = 0; y < this.gridSize; y++) {
const geometry = new THREE.BoxGeometry(0.5, 0.5, 0.5);
// Get pixel color from mask (sample at grid position)
// Flip Y coordinate to match image orientation
const flippedY = this.gridHeight - 1 - y
const pixelIndex = (flippedY * this.gridWidth + x) * 4
const r = this.data[pixelIndex]
const g = this.data[pixelIndex + 1]
const b = this.data[pixelIndex + 2]
// Calculate brightness (0 = black, 255 = white)
const brightness = (r + g + b) / 3
// Only create box if pixel is dark (black shows, white hides)
if (brightness < 128) { // Threshold for black vs white
// Create individual geometry for each box to have unique UV mapping
// Calculate UV coordinates for this specific box
const uvX = x / this.gridSize
const uvY = y / this.gridSize // Remove the flip to match correct orientation
const uvWidth = 1 / this.gridSize
const uvHeight = 1 / this.gridSize
// Get the UV attribute
const uvAttribute = geometry.attributes.uv
const uvArray = uvAttribute.array
// Map each face of the box to show the same portion of video
// We'll focus on the front face (face 4) for the main projection
for (let i = 0; i < uvArray.length; i += 2) {
// Map all faces to the same UV region for consistency
uvArray[i] = uvX + (uvArray[i] * uvWidth) // U coordinate
uvArray[i + 1] = uvY + (uvArray[i + 1] * uvHeight) // V coordinate
}
// Mark the attribute as needing update
uvAttribute.needsUpdate = true
const mesh = new THREE.Mesh(geometry, this.material);
mesh.position.x = (x - (this.gridSize - 1) / 2) * this.spacing;
mesh.position.y = (y - (this.gridSize - 1) / 2) * this.spacing;
mesh.position.z = 0;
this.group.add(mesh);
}
}
}
...
}
...
}
Further steps
UV mapping is the process of mapping 2D video pixels onto 3D geometry.
Each cube gets its own unique UV coordinates corresponding to its position in the grid.
uvWidth and uvHeight are how much of the video texture each cube covers.
Modifies the cube’s uv attribute so all faces display the exact same portion of the video.
Up until now we’ve been working with a single mask and a single video, but the real fun begins when we start layering multiple projections together. By combining different mask images with their own video sources, we can create a collection of independent grids that coexist in the same scene. Each grid can carry its own identity and motion, opening the door to richer compositions, transitions, and storytelling effects.
Instead of one mask and one video, we now have a list of mask-video pairs.
Each object defines:
id → name/id for each grid.
mask → the black/white image that controls which cubes appear.
video → the texture that will be mapped onto those cubes.
This allows you to have multiple different projections in the same scene.
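As a sketch, such a list could look like this (the heart mask path comes from the earlier snippet; the second entry and the video paths are placeholders):

this.gridConfigs = [
  { id: 'heart', mask: '../images/heart.jpg', video: '../videos/clip-1.mp4' },
  { id: 'star', mask: '../images/star.jpg', video: '../videos/clip-2.mp4' },
]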
2. Looping Over All Grids
Once we have our playlist of mask–video pairs defined, the next step is to go through each item and prepare it for rendering.
For every configuration in the list we call createMask(config, index), which takes care of loading the mask image, reading its pixels, and then passing the data along to build the corresponding grid.
At the same time, we keep track of all the grids by storing them in a this.grids array, so later on we can animate them, show or hide them, and switch between them interactively.
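In code, that loop might look roughly like this, assuming the gridConfigs list sketched above (each grid group gets pushed into this.grids once it has been built):

this.grids = []
this.gridConfigs.forEach((config, index) => {
  // Loads the mask, reads its pixels, and builds the corresponding grid
  this.createMask(config, index)
})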
We’ll also add a data-current="heart" attribute to our canvas element; it will be needed to change its background-color depending on which button was clicked.
And that is it! A full animated and interactive Video Projection Slider, made with hundreds of small cubes (meshes).
⚠️ Performance considerations
The approach used in this tutorial is the simplest and most digestible way to apply the projection concept. However, it can create a lot of draw calls: 100–1,000 cubes might be fine; tens of thousands can be slow. If you need a more detailed grid or more meshes in it, consider InstancedMesh and shaders.
Going further
This is a fully functional and versatile concept, so it opens up many possibilities. It can be applied in some really cool ways, like scrollable storytelling, exhibition simulations, intro animations, portfolio showcases, and more.
Here are some links for you to get inspired:
Final Words
I hope you’ve enjoyed this tutorial. Give it a try in your own projects, or just explore the possibilities by changing the grid parameters, masks, and videos.
And speaking of videos, those used in this example are screen recordings of the Creative Code lessons contained in my Web Animations platform vwlab.io, where you can learn how to create more interactions and animations like this one.
Projection mapping has long fascinated audiences in the physical world, turning buildings, sculptures, and entire cityscapes into moving canvases. What if you could recreate that same sense of spectacle directly inside the browser?
With WebGL and Three.js, you can project video not onto walls or monuments but onto dynamic 3D grids made of hundreds of cubes, each one carrying a fragment of the video like a digital mosaic. Many will surely recognize this effect from Rogier de Boevé’s portfolio, which gained wide attention for showcasing exactly this approach.
In this tutorial we’ll explore how to simulate video projection mapping in a purely digital environment, from building a grid of cubes, to UV-mapping video textures, to applying masks that determine which cubes appear. The demo for this tutorial is inspired by Rogier’s work, which he breaks down beautifully in his case study for anyone interested in the concept behind it.
The result is a mesmerizing effect that feels both sculptural and cinematic, perfect for interactive installations, portfolio showcases, or simply as a playground to push your creative coding skills further.
What is Video Projection Mapping in the Real World?
When describing video projection mapping, it’s easiest to think of huge buildings lit up with animations during festivals, or art installations where a moving image is “painted” onto sculptures.
Here are some examples of real-world video projections:
Bringing it to our 3D World
In 3D graphics, we can do something similar: instead of shining a physical projector, we map a video texture onto objects in a scene.
Therefore, let’s build a grid of cubes using a mask image that will determine which cubes are visible. A video texture is UV-mapped so each cube shows the exact video fragment that corresponds to its grid cell—together they reconstruct the video, but only where the mask is dark.
Prerequesites:
Three.js r155+
A small, high-contrast mask image (e.g. a heart silhouette).
A video URL with CORS enabled.
Our Boilerplate and Starting Point
Here is a basic starter setup, i.e. the minimum amount of code and structure you need to get a scene rendering in the browser, without worrying about the specific creative content yet.
This is the video we are using: Big Buck Bunny (without CORS)
All the meshes have the same texture applied:
Attributing Projection to the Grid
We will be turning the video into a texture atlas split into a gridSize × gridSize lattice. Each cube in the grid gets its own little UV window (sub-rectangle) of the video so, together, all cubes reconstruct the full frame.
Why per-cube geometry? Because we can create a new BoxGeometry for each cube since the UVs must be unique per cube. If all cubes shared one geometry, they’d also share the same UVs and show the same part of the video.
export default class Models {
constructor(gl_app) {
...
this.createGrid()
}
createGrid() {
...
// Grid parameters
for (let x = 0; x < this.gridSize; x++) {
for (let y = 0; y < this.gridSize; y++) {
const geometry = new THREE.BoxGeometry(0.5, 0.5, 0.5);
// Create individual geometry for each box to have unique UV mapping
// Calculate UV coordinates for this specific box
const uvX = x / this.gridSize
const uvY = y / this.gridSize // Remove the flip to match correct orientation
const uvWidth = 1 / this.gridSize
const uvHeight = 1 / this.gridSize
// Get the UV attribute
const uvAttribute = geometry.attributes.uv
const uvArray = uvAttribute.array
// Map each face of the box to show the same portion of video
// We'll focus on the front face (face 4) for the main projection
for (let i = 0; i < uvArray.length; i += 2) {
// Map all faces to the same UV region for consistency
uvArray[i] = uvX + (uvArray[i] * uvWidth) // U coordinate
uvArray[i + 1] = uvY + (uvArray[i + 1] * uvHeight) // V coordinate
}
// Mark the attribute as needing update
uvAttribute.needsUpdate = true
...
}
}
...
}
...
}
The UV window for cell (x, y) For a grid of size N = gridSize:
UV origin of this cell: – uvX = x / N – uvY = y / N
UV size of each cell: – uvWidth = 1 / N – uvHeight = 1 / N
Result: every face of the box now samples the same sub-region of the video (and we noted “focus on the front face”; this approach maps all faces to that region for consistency).
We need to create a canvas using a mask that determines which cubes are visible in the grid.
Black (dark) pixels → cube is created.
White (light) pixels → cube is skipped.
To do this, we need to:
Load the mask image.
Scale it down to match our grid size.
Read its pixel color data.
Pass that data into the grid-building step.
export default class Models {
constructor(gl_app) {
...
this.createMask()
}
createMask() {
// Create a canvas to read mask pixel data
const canvas = document.createElement('canvas')
const ctx = canvas.getContext('2d')
const maskImage = new Image()
maskImage.crossOrigin = 'anonymous'
maskImage.onload = () => {
// Get original image dimensions to preserve aspect ratio
const originalWidth = maskImage.width
const originalHeight = maskImage.height
const aspectRatio = originalWidth / originalHeight
// Calculate grid dimensions based on aspect ratio
this.gridWidth
this.gridHeight
if (aspectRatio > 1) {
// Image is wider than tall
this.gridWidth = this.gridSize
this.gridHeight = Math.round(this.gridSize / aspectRatio)
} else {
// Image is taller than wide or square
this.gridHeight = this.gridSize
this.gridWidth = Math.round(this.gridSize * aspectRatio)
}
canvas.width = this.gridWidth
canvas.height = this.gridHeight
ctx.drawImage(maskImage, 0, 0, this.gridWidth, this.gridHeight)
const imageData = ctx.getImageData(0, 0, this.gridWidth, this.gridHeight)
this.data = imageData.data
this.createGrid()
}
maskImage.src = '../images/heart.jpg'
}
...
}
Match mask resolution to grid
We don’t want to stretch the mask — this keeps it proportional to the grid.
gridWidth and gridHeight are how many mask pixels we’ll sample horizontally and vertically.
This matches the logical cube grid, so each cube can correspond to one pixel in the mask.
Applying the Mask to the Grid
Let’s combines mask-based filtering with custom UV mapping to decide where in the grid boxes should appear, and how each box maps to a section of the projected video. Here’s the concept step by step:
Loops through every potential (x, y) position in a virtual grid.
At each grid cell, it will decide whether to place a box and, if so, how to texture it.
flippedY: Flips the Y-axis because image coordinates start from the top-left, while the grid’s origin starts from the bottom-left.
pixelIndex: Locates the pixel in the this.data array.
Each pixel stores 4 values: red, green, blue, alpha.
Extracts the R, G, and B values for that mask pixel.
Brightness is calculated as the average of R, G, B.
If the pixel is dark enough (brightness < 128), a cube will be created.
White pixels are ignored → those positions stay empty.
export default class Models {
constructor(gl_app) {
...
this.createMask()
}
createMask() {
...
}
createGrid() {
...
for (let x = 0; x < this.gridSize; x++) {
for (let y = 0; y < this.gridSize; y++) {
const geometry = new THREE.BoxGeometry(0.5, 0.5, 0.5);
// Get pixel color from mask (sample at grid position)
// Flip Y coordinate to match image orientation
const flippedY = this.gridHeight - 1 - y
const pixelIndex = (flippedY * this.gridWidth + x) * 4
const r = this.data[pixelIndex]
const g = this.data[pixelIndex + 1]
const b = this.data[pixelIndex + 2]
// Calculate brightness (0 = black, 255 = white)
const brightness = (r + g + b) / 3
// Only create box if pixel is dark (black shows, white hides)
if (brightness < 128) { // Threshold for black vs white
// Create individual geometry for each box to have unique UV mapping
// Calculate UV coordinates for this specific box
const uvX = x / this.gridSize
const uvY = y / this.gridSize // Remove the flip to match correct orientation
const uvWidth = 1 / this.gridSize
const uvHeight = 1 / this.gridSize
// Get the UV attribute
const uvAttribute = geometry.attributes.uv
const uvArray = uvAttribute.array
// Map each face of the box to show the same portion of video
// We'll focus on the front face (face 4) for the main projection
for (let i = 0; i < uvArray.length; i += 2) {
// Map all faces to the same UV region for consistency
uvArray[i] = uvX + (uvArray[i] * uvWidth) // U coordinate
uvArray[i + 1] = uvY + (uvArray[i + 1] * uvHeight) // V coordinate
}
// Mark the attribute as needing update
uvAttribute.needsUpdate = true
const mesh = new THREE.Mesh(geometry, this.material);
mesh.position.x = (x - (this.gridSize - 1) / 2) * this.spacing;
mesh.position.y = (y - (this.gridSize - 1) / 2) * this.spacing;
mesh.position.z = 0;
this.group.add(mesh);
}
}
}
...
}
...
}
Further steps
UV mapping is the process of mapping 2D video pixels onto 3D geometry.
Each cube gets its own unique UV coordinates corresponding to its position in the grid.
uvWidth and uvHeight are how much of the video texture each cube covers.
Modifies the cube’s uv attribute so all faces display the exact same portion of the video.
Up until now we’ve been working with a single mask and a single video, but the real fun begins when we start layering multiple projections together. By combining different mask images with their own video sources, we can create a collection of independent grids that coexist in the same scene. Each grid can carry its own identity and motion, opening the door to richer compositions, transitions, and storytelling effects.
Instead of one mask and one video, we now have a list of mask–video pairs (see the sketch after this list).
Each object defines:
id → name/id for each grid.
mask → the black/white image that controls which cubes appear.
video → the texture that will be mapped onto those cubes.
This allows you to have multiple different projections in the same scene.
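As a rough illustration, such a list could be a plain array of objects; apart from heart.jpg (used earlier), the file names below are placeholders you would swap for your own assets:
// hypothetical configuration list: each entry pairs a mask with a video source
this.configs = [
{ id: 'heart', mask: '../images/heart.jpg', video: '../videos/heart.mp4' }, // placeholder video path
{ id: 'logo', mask: '../images/logo.jpg', video: '../videos/logo.mp4' }, // placeholder assets
{ id: 'text', mask: '../images/text.jpg', video: '../videos/text.mp4' } // placeholder assets
]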
2. Looping Over All Grids
Once we have our playlist of mask–video pairs defined, the next step is to go through each item and prepare it for rendering.
For every configuration in the list we call createMask(config, index), which takes care of loading the mask image, reading its pixels, and then passing the data along to build the corresponding grid.
At the same time, we keep track of all the grids by storing them in a this.grids array, so later on we can animate them, show or hide them, and switch between them interactively.
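A minimal sketch of that loop, assuming the configuration array from the previous snippet is stored in this.configs:
this.grids = [] // keeps a reference to every grid so we can animate and toggle them later
this.configs.forEach((config, index) => {
// createMask loads the mask image, reads its pixels and builds the matching cube grid
this.createMask(config, index)
})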
We’ll also add a data-current="heart" attribute to our canvas element; it will be used to change the canvas background-color depending on which button was clicked.
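For example, assuming each button carries a data-id matching a grid id (hypothetical markup), the attribute could be updated on click like this:
const canvas = document.querySelector('canvas') // starts with data-current="heart"
document.querySelectorAll('button[data-id]').forEach((button) => {
button.addEventListener('click', () => {
// CSS such as canvas[data-current="heart"] { background-color: ... } can then style the canvas per grid
canvas.dataset.current = button.dataset.id
})
})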
And that is it! A fully animated and interactive Video Projection Slider, made of hundreds of small cubes (meshes).
⚠️ Performance considerations
The approach used in this tutorial is the simplest and most digestible way to apply the projection concept; however, it can create too many draw calls: 100–1,000 cubes are usually fine, but tens of thousands can be slow. If you need a more detailed grid or more meshes in it, consider InstancedMesh and shaders.
Going further
This is a fully functional and versatile concept, so it opens up many possibilities. It can be applied in some really cool ways, such as scrollable storytelling, exhibition simulations, intro animations, and portfolio showcases.
Here are some links for you to get inspired:
Final Words
I hope you’ve enjoyed this tutorial. Give it a try in your own projects, or just explore the possibilities by changing the grid parameters, masks, and videos.
Speaking of the videos, the ones used in this example are screen recordings of the Creative Code lessons on my web animations platform vwlab.io, where you can learn how to create more interactions and animations like this one.
Seqrite Labs APT-Team has been tracking a previously unknown threat group since April 2025, which we track under the name Noisy Bear. This threat group has targeted entities in Central Asia, in particular the oil and gas / energy sector of Kazakhstan. The campaign is aimed at employees of KazMunaiGas (KMG), where the threat entity delivered a fake document related to the KMG IT department, mimicking official internal communication and leveraging themes such as policy updates, internal certification procedures, and salary adjustments.
In this blog, we will explore the in-depth technical details of the campaign that we encountered during our analysis. We will examine its various stages: the infection starts with a phishing email carrying a ZIP attachment, which contains a malicious LNK downloader along with a decoy; the LNK downloads a malicious BATCH script, leading to PowerShell loaders we have dubbed DOWNSHELL, which reflectively load a malicious DLL implant. We will also look into the infrastructure covering the entire campaign.
Key Targets
Industries Affected: Energy Sector [Oil and Gas]
Geographical Focus: Kazakhstan
Infection Chain
Initial Findings
We have been tracking this threat actor since April 2025 and observed that it launched a campaign against KazMunaiGas employees in May 2025 using a spear-phishing-oriented method. A compromised business email was used to deliver a malicious ZIP file, which contained a decoy along with a malicious shortcut (.LNK) file named График зарплат.lnk, which translates to Salary Schedule.lnk. The sample first surfaced on VirusTotal in the first half of May 2025.
Now, let us look into the malicious email and decoy file.
Looking into the malicious email.
Looking into the email’s sender, we found that the threat actor used the compromised business email of an individual working in the Finance Department of KazMunaiGas. Using that account and the urgent subject line URGENT! Review the updated salary schedule, they emailed the employees of KMG.
Looking at the contents of the email, it became clear that the message was crafted to look like an internal HR communication about salary-related decisions. The message asks recipients to review updated information on work schedules, salaries, and incentive-related policies and decisions. The TA also instructs the KMG targets to look for a file named График.zip, which translates to Schedule.zip, and then to open a file named График зарплат, which translates to Salary Schedule; this is the shortcut (LNK) file that, once executed, downloads the further stagers.
Last but not least, the email asks recipients to complete the instructions by 15th May 2025, reinforcing the sense of urgency. Now, let us go ahead and analyze the decoy file.
Looking into the decoy-document.
Looking into the decoy document, we can see that it carries the official logo of the targeted entity, i.e., KazMunaiGas, along with instructions in both Russian and Kazakh. These walk the employees through a series of simple steps: open the Downloads folder in the browser, extract a ZIP archive named KazMunayGaz_Viewer.zip, and run a file called KazMunayGaz_Viewer. Although the file name does not match, we believe this is the exact file dropped from the malicious email. The decoy also tells users to wait for a console window to appear and specifically advises them not to close or interact with it, to limit suspicion on the targets’ end. Last but not least, it mentions the IT-Support team in the salutation to make the document look completely legitimate.
Technical Analysis.
We have divided the technical analysis into four parts: first the malicious ZIP containing the LNK file, then the malicious BATCH script it downloads, followed by the script-based loaders and, finally, the malicious DLL.
Stage 0 – Malicious ZIP & LNK Files.
Looking into the ZIP file, we found three files: the decoy document we saw earlier, a README.txt that repeats the instructions so that nothing seems suspicious, and the malicious LNK file.
Upon looking into the malicious shortcut (.LNK) file, named График зарплат, we found that it uses the powershell.exe LOLBIN to carry out downloader behavior.
It downloads a malicious batch script known as 123.bat from a remote server, hxxps[://]77[.]239[.]125[.]41[:]8443. Once downloaded, the batch script is stored under C:\Users\Public and then executed from that path using the Start-Process cmdlet.
Hunting for similar LNK files, we found another LNK that belongs to the same campaign but looks slightly different.
This malicious LNK file uses a small operand trick to avoid static signature detection, concatenating string literals before downloading a batch script from the same remote server, saving it to the Public folder, and executing it via the same cmdlet.
In the next section, we will examine the malicious BATCH scripts.
Stage 1 – Malicious BATCH Scripts.
Looking into one of the BATCH scripts, i.e., it.bat, we can see that it downloads the PowerShell loaders we have dubbed DOWNSHELL, named support.ps1 and a.ps1, from a remote server; once they are downloaded, it sleeps for a total of 11 seconds.
The second batch script, i.e., the 123.bat file, does the same, downloading the PowerShell loaders and then sleeping for 10 seconds.
In the next section, we will move ahead to understanding the working of the DOWNSHELL loaders written in PowerShell.
Stage 2 – Malicious DOWNSHELL Loaders.
In this section we will look into the set of malicious PowerShell scripts we have dubbed DOWNSHELL. The first PowerShell file, support.ps1, is responsible for impairing defenses on the target machine, while the latter performs the loader function.
Looking into the code, we figured out that the script obfuscates the target namespace by building “System.Management.Automation” via string concatenation, then enumerates all loaded .NET assemblies in the current AppDomain and filters for the one whose FullName matches that namespace.
Then, using reflection, it resolves the internal type System.Management.Automation.AmsiUtils and retrieves the private static field amsiInitFailed; flipping this flag convinces PowerShell that AMSI has failed to initialize, so the other malicious script belonging to the DOWNSHELL family does not get scanned and executes without interruption. Now, let us look into the second PowerShell script.
Looking into the first part of the code, it appears to be copied from the well-known red-team emulation tool PowerSploit. The function LookupFunc dynamically retrieves the memory address of any exported function from a specified DLL without using traditional DllImport or Add-Type calls. It does this by locating the Microsoft.Win32.UnsafeNativeMethods type within the already-loaded System.dll assembly, then extracting and invoking the hidden .NET wrappers for GetModuleHandle and GetProcAddress. By first resolving the base address of the target module ($moduleName) and then passing it along with the target function name ($functionName), it returns a raw function pointer to the required API.
Looking into the second part of the code, the function getDelegateType creates a custom .NET delegate on the fly, entirely in memory. It takes the parameter types and a return type, builds a new delegate class with those, and gives it an Invoke method so it can be used like a normal function. This lets the script wrap the raw function pointers (from LookupFunc) into something PowerShell can call directly, making it easy to run WinAPI functions without importing them in the usual way. The script then queries the process ID of the explorer.exe process and stores it in a variable.
The latter part of the script contains a byte array holding the Meterpreter reverse_tcp shellcode and uses the classic CreateRemoteThread injection technique, calling OpenProcess, VirtualAllocEx, WriteProcessMemory, and CreateRemoteThread to inject the shellcode into the target process, explorer.exe, followed by the message Injected! Check your listener!.
An interesting part of this script is a commented-out section that performs reflective DLL injection into a remote process, notepad in this case, using PowerSploit hosted on the remote server; the downloaded DLL is Meterpreter-based. Another slightly interesting detail is the comments written in Russian. In the next section, we will examine the DLL.
Stage 3 – Malicious DLL Implant.
We first examined the DLL implant in a PE-analysis tool, which confirmed that the DLL implant, or shellcode loader, is a 64-bit binary.
Moving on to the code, we saw that the implant uses a semaphore as a sort of gatekeeper to make sure only one copy of itself runs at a time; in this case the implant uses a named object, Local\doSZQmSnP12lu4Pb5FRD. When it starts, it tries to create this semaphore; if it already exists, another instance is active. To double-check, it calls WaitForSingleObject on the semaphore and then looks for a specific named event. If the event exists, it knows another instance has already completed its setup. If it doesn’t, it creates the event itself.
Depending on the outcome of that single-instance check, the next step is to spawn a rundll32.exe process in a suspended state.
After creating the process in a suspended state, the implant performs classic thread-context hijacking: it calls GetThreadContext on the primary thread, uses VirtualAllocEx to reserve RWX memory in the target, WriteProcessMemory to drop the shellcode, updates the thread’s RIP to point to that buffer via SetThreadContext, and finally calls ResumeThread so execution continues at the injected shellcode. In this case, the shellcode basically is a reverse shell.
Infrastructure & Hunting.
Upon looking into the infrastructure the threat entity had been using, we found a few interesting details about it.
Tool-Arsenal
Along with the tools we saw being used by the threat actor, we also found more open-source red-team-oriented tools hosted by the threat actor for further use.
Pivoting
Using a similar fingerprint, we hunted for and found additional infrastructure belonging to the same threat actor.
One of the most interesting aspects is that both pieces of infrastructure are hosted with a sanctioned hosting firm known as Aeza Group LLC.
Another interesting finding is a number of suspicious web applications being hosted, related to wellness, fitness, and health assistance for Russian individuals.
Attribution.
Attribution is a very important metric when describing a threat entity. It involves analyzing and correlating various domains, including Tactics, Techniques and Procedures (TTPs), operational mistakes that could lead to attribution, rotation and re-use of similar infrastructure artefacts, and much more.
In our ongoing tracking of Noisy Bear, we have gathered a number of artefacts, such as the languages present inside the tooling, the use of sanctioned web-hosting services, and behavioral overlaps with Russian threat entities that have previously targeted similar Central Asian nations. Based on these, we assess that the threat actor is possibly of Russian origin.
Conclusion.
We have found that a threat entity, dubbed Noisy Bear, is targeting the Kazakh energy sector using company-specific lures while relying heavily on PowerShell and open-source post-exploitation tools such as Metasploit, hosting them on a sanctioned web-hosting provider. We can also conclude that the threat actor has been active since April 2025.
From C# 6 on, you can use the when keyword to specify a condition before handling an exception.
Consider this – pretty useless, I have to admit – type of exception:
public class RandomException : System.Exception
{
public int Value { get; }
public RandomException()
{
Value = (new Random()).Next();
}
}
This exception type contains a Value property which is populated with a random value when the exception is thrown.
What if you want to print a different message depending on whether the Value property is odd or even?
You can do it this way:
try{
throw new RandomException();
}
catch (RandomException re)
{
if(re.Value % 2 == 0)
Console.WriteLine("Exception with even value");
else Console.WriteLine("Exception with odd value");
}
But, well, you should keep your catch blocks as simple as possible.
That’s where the when keyword comes in handy.
CSharp when clause
You can use it to create two distinct catch blocks, each one of them handles their case in the cleanest way possible.
try{
throw new RandomException();
}
catch (RandomException re) when (re.Value % 2 == 0)
{
Console.WriteLine("Exception with even value");
}
catch (RandomException re)
{
Console.WriteLine("Exception with odd value");
}
You must use the when keyword in conjunction with a condition, which can also reference the current instance of the exception being caught. In fact, the condition references the Value property of the RandomException instance.
A real usage: HTTP response errors
Ok, that example with the random exception is a bit… useless?
Let’s see a real example: handling different HTTP status codes in case of failing HTTP calls.
In the following snippet, I call an endpoint that returns a specified status code (506, in my case).
try{
var endpoint = "https://mock.codes/506";
var httpClient = new HttpClient();
var response = await httpClient.GetAsync(endpoint);
response.EnsureSuccessStatusCode();
}
catch (HttpRequestException ex) when (ex.StatusCode == (HttpStatusCode)506)
{
Console.WriteLine("Handle 506: Variant also negotiates");
}
catch (HttpRequestException ex)
{
Console.WriteLine("Handle another status code");
}
If the response is not a success, the response.EnsureSuccessStatusCode() throws an exception of type HttpRequestException. The thrown exception contains some info about the returned status code, which we can use to route the exception handling to the correct catch block using when (ex.StatusCode == (HttpStatusCode)506).
Today we’re going to go over some of my favorite GSAP techniques that can bring you great results with just a little code.
Although the GSAP documentation is among the best, I find that developers often overlook some of GSAP’s greatest features or perhaps struggle with finding their practical application.
The techniques presented here will be helpful to GSAP beginners and seasoned pros. It is recommended that you understand the basics of loading GSAP and working with tweens, timelines and SplitText. My free beginner’s course GSAP Express will guide you through everything you need for a firm foundation.
If you prefer a video version of this tutorial, you can watch it here:
GSAP’s SplitText just went through a major overhaul. It has 14 new features and weighs in at roughly 7kb.
SplitText allows you to split HTML text into characters, lines, and words. It has powerful features to support screen-readers, responsive layouts, nested elements, foreign characters, emoji and more.
My favorite feature is its built-in support for masking (available in SplitText version 3.13+).
Prior to this version of SplitText you would have to manually nest your animated text in parent divs that have overflow set to hidden or clip in the css.
SplitText now does this for you by creating “wrapper divs” around the elements that we apply masking to.
Basic Implementation
The code below will split the h1 tag into chars and also apply a mask effect, which means the characters will not be visible when they are outside their bounding box.
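Since the live code sits in the embedded demo, here is a minimal sketch of that setup (assuming SplitText 3.13+; the selector and animation values are illustrative, not the exact demo code):
// split the h1 into characters; mask:"chars" wraps each char in a clipping div
const split = SplitText.create("h1", {
type:"chars",
mask:"chars"
})
// characters slide up into view and stay hidden outside their bounding box
gsap.from(split.chars, {
yPercent:100,
stagger:0.05
})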
See the Pen
Codrops Tip 1: Split Text Masking – Basic by Snorkl.tv (@snorkltv)
on CodePen.
This simple implementation works great and is totally fine.
However, if you inspect the DOM you will see that 2 new <div> elements are created for each character:
an outer div with overflow:clip
an inner div with text
With 17 characters to split this creates 34 divs as shown in the simplified DOM structure below
<h1>SplitText Masking
<div> <!-- char wrapper with overflow:clip -->
<div>S</div>
</div>
<div> <!-- char wrapper with overflow:clip -->
<div>p</div>
</div>
<div> <!-- char wrapper with overflow:clip -->
<div>l</div>
</div>
<div> <!-- char wrapper with overflow:clip -->
<div>i</div>
</div>
<div> <!-- char wrapper with overflow:clip -->
<div>t</div>
</div>
...
</h1>
The More Efficient Approach
If you want to minimize the amount of DOM elements created you can split your text into characters and lines. Then you can just set the masking on the lines element like so:
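A sketch of that variation (again assuming SplitText 3.13+; values are illustrative rather than the exact demo code):
// split into chars AND lines, but only apply the clipping mask to the lines
const split = SplitText.create("h1", {
type:"chars, lines",
mask:"lines"
})
// animate the characters; the single masked line wrapper clips them
gsap.from(split.chars, {
yPercent:100,
stagger:0.05
})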
Demo: Split Text Masking (Better with chars and lines)
See the Pen
Codrops Tip 1: Split Text Masking – Better with chars and lines by Snorkl.tv (@snorkltv)
on CodePen.
Now if you inspect the DOM you will see that there is
1 line wrapper div with overflow:clip
1 line div
1 div per character
With 17 characters to split, this creates only 19 divs in total:
<h1>SplitText Masking
<div> <!-- line wrapper with overflow:clip -->
<div> <!-- line -->
<div>S</div>
<div>p</div>
<div>l</div>
<div>i</div>
<div>t</div>
...
</div>
</div>
</h1>
Tip 2: Setting the Stagger Direction
From my experience 99% of stagger animations go from left to right. Perhaps that’s just because it’s the standard flow of written text.
However, GSAP makes it super simple to add some animation pizzazz to your staggers.
To change the direction from which staggered animations start you need to use the object-syntax for the stagger value
Normal Stagger
Typically the stagger value is a single number which specifies the amount of time between the start of each target element’s animation.
gsap.to(targets, {x:100, stagger:0.2}) // 0.2 seconds between the start of each animation
Stagger Object
By using the stagger object we can specify multiple parameters to fine-tune our staggers such as each, amount, from, ease, grid and repeat. See the GSAP Stagger Docs for more details. Our focus today will be on the from property which allows us to specify from which direction our staggers should start.
gsap.to(targets, {x:100,
stagger: {
each:0.2, // amount of time between the start of each animation
from:"center" // animate from center of the targets array
}
})
The from property in the stagger object can be any one of these string values
“start” (default)
“center”
“end”
“edges”
“random”
Demo: Stagger Direction Timeline
In this demo the characters animate in from center and then out from the edges.
See the Pen
Codrops Tip 2: Stagger Direction Timeline by Snorkl.tv (@snorkltv)
on CodePen.
Demo: Stagger Direction Visualizer
See the Pen
Codrops Tip 2: Stagger Direction Visualizer by Snorkl.tv (@snorkltv)
on CodePen.
Tip 3: Wrapping Array Values
The gsap.utils.wrap() function allows you to pull values from an array and apply them to multiple targets. This is great for allowing elements to animate in from opposite directions (like a zipper), assigning a set of colors to multiple objects and many more creative applications.
Setting Colors From an Array
I love using gsap.utils.wrap() with a set() to instantly manipulate a group of elements.
// split the header
const split = SplitText.create("h1", {
type:"chars"
})
//create an array of colors
const colors = ["lime", "yellow", "pink", "skyblue"]
// set each character to a color from the colors array
gsap.set(split.chars, {color:gsap.utils.wrap(colors)})
When the last color in the array (skyblue) is chosen GSAP will wrap back to the beginning of the array and apply lime to the next element.
Animating from Alternating Directions
In the code below each target will animate in from alternating y values of -50 and 50.
Notice that you can define the array directly inside of the wrap() function.
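A sketch of that idea (the .box selector is illustrative):
// odd/even targets start from y:-50 and y:50 respectively, like a zipper
gsap.from(".box", {
y:gsap.utils.wrap([-50, 50]),
opacity:0,
stagger:0.1
})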
See the Pen
Codrops Tip 3: Basic Wrap by Snorkl.tv (@snorkltv)
on CodePen.
Demo: Fancy Wrap
In the demo below there is a timeline that creates a sequence of animations that combine stagger direction and wrap. Isn’t it amazing what GSAP allows you to do with just a few simple shapes and a few lines of code?
See the Pen
Codrops Tip 3: Fancy Wrap by Snorkl.tv (@snorkltv)
on CodePen.
As you watch the animation be sure to go through the GSAP code to see which tween is running each effect.
I strongly recommend editing the animation values and experimenting.
Tip 4: Easy Randomization with the “random()” String Function
GSAP has its own random utility function gsap.utils.random() that lets you tap into convenient randomization features anywhere in your JavaScript code.
// generate a random number between 0 and 450
const randomNumber = gsap.utils.random(0, 450)
To randomize values in animations we can use the random string shortcut which saves us some typing.
//animate each target to a random x value between 0 and 450
gsap.to(targets, {x:"random(0, 450)"})
//the third parameter sets the value to snap to
gsap.to(targets, {x:"random(0, 450, 50)"}) // random number will be an increment of 50
//pick a random value from an array for each target
gsap.to(targets, {fill:"random([pink, yellow, orange, salmon])"})
Demo: Random String
See the Pen
Codrops Tip 4: Random String by Snorkl.tv (@snorkltv)
on CodePen.
TIP 5: repeatRefresh:true
This next tip appears to be pure magic as it allows our animations to produce new results each time they repeat.
GSAP internally stores the start and end values of an animation the first time it runs. This is a performance optimization so that each time it repeats there is no additional work to do. By default repeating tweens always produce the exact same results (which is a good thing).
When dealing with dynamic or function-based values such as those generated with the random string syntax “random(0, 100)” we can tell GSAP to record new values on repeat by setting repeatRefresh:true.
You can set repeatRefresh:true in the config object of a single tween OR on a timeline.
//use on a tween
gsap.to(target, {x:"random(50, 100)", repeat:10, repeatRefresh:true})
//use on a timeline
const tl = gsap.timeline({repeat:10, repeatRefresh:true})
Demo: repeatRefresh Particles
The demo below contains a single timeline with repeatRefresh:true.
Each time it repeats the circles get assigned a new random scale and a new random x destination.
Be sure to study the JS code in the demo. Feel free to fork it and modify the values.
See the Pen
Codrops Tip 5: repeatRefresh Particles by Snorkl.tv (@snorkltv)
on CodePen.
TIP 6: Tween The TimeScale() of an Animation
GSAP animations have getter / setter values that allow you to get and set properties of an animation.
Common Getter / Setter methods:
paused() gets or sets the paused state
duration() gets or sets the duration
reversed() gets or sets the reversed state
progress() gets or sets the progress
timeScale() gets or sets the timeScale
Getter Setter Methods in Usage
animation.paused(true) // sets the paused state to true
console.log(animation.paused()) // gets the paused state
console.log(!animation.paused()) // gets the inverse of the paused state
See it in Action
In the demo from the previous tip there is code that toggles the paused state of the particle effect.
//click to pause
document.addEventListener("click", function(){
tl.paused(!tl.paused())
})
This code means “every time the document is clicked the timeline’s paused state will change to the inverse (or opposite) of what it currently is”.
If the animation is paused, it will become “unpaused” and vice-versa.
This works great, but I’d like to show you a trick for making it less abrupt and smoothing it out.
Tweening Numeric Getter/Setter Values
We can’t tween the paused() state as it is either true or false.
Where things get interesting is that we can tween numeric getter / setter properties of animations like progress() and timeScale().
timeScale() represents a factor of an animation’s playback speed.
timeScale(1): playback at normal speed
timeScale(0.5): playback at half speed
timeScale(2): playback at double speed
Setting timeScale()
//create an animation with a duration of 5 seconds
const animation = gsap.to(box, {x:500, duration:5})
//playback at half-speed making it take 10 seconds to play
animation.timeScale(0.5)
Tweening timeScale()
const animation = gsap.to(box, {x:500, duration:5}) // create a basic tween
// Over the course of 1 second reduce the timeScale of the animation to 0.5
gsap.to(animation, {timeScale:0.5, duration:1})
Dynamically Tweening timeScale() for smooth pause and un-pause
Instead of abruptly changing the paused state of animation as the particle demo above does we are now going to tween the timeScale() for a MUCH smoother effect.
Demo: Particles with timeScale() Tween
See the Pen
Codrops Tip 6: Particles with timeScale() Tween by Snorkl.tv (@snorkltv)
on CodePen.
Click anywhere in the demo above to see the particles smoothly slow down and speed up on each click.
The code below basically says “if the animation is currently playing then we will slow it down or else we will speed it up”. Every time a click happens the isPlaying value toggles between true and false so that it can be updated for the next click.
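A sketch of that pattern (assuming tl is the particle timeline from the demo; the duration is illustrative):
let isPlaying = true
document.addEventListener("click", function() {
if (isPlaying) {
// smoothly slow the timeline to a stop over 1 second
gsap.to(tl, {timeScale:0, duration:1})
} else {
// smoothly ramp it back up to full speed
gsap.to(tl, {timeScale:1, duration:1})
}
// remember the new state for the next click
isPlaying = !isPlaying
})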
Tip 7: GSDevTools Markers and Animation IDs
Most of the demos in this article have used GSDevTools to help us control our animations. When building animations I just love being able to scrub at my own pace and study the sequencing of all the moving parts.
However, there is more to this powerful tool than just scrubbing, playing and pausing.
Markers
The in and out markers allow us to loop ANY section of an animation. As an added bonus GSDevTools remembers the previous position of the markers so that each time we reload our animation it will start and end at the same time.
This makes it very easy to loop a particular section and study it.
Important: The markers are only available on screens wider than 600px. On small screens the UI is minimized to only show basic controls.
Setting IDs for the Animation Menu
The animation menu allows us to navigate to different sections of our animation based on an animation id. When dealing with long-form animations this feature is an absolute life saver.
Since GSAP’s syntax makes creating complex sequences a breeze, it is not uncommon to find yourself working on animations that run beyond 10, 20 or even 60 seconds!
To set an animation id:
const tl = gsap.timeline({id:"fancy"})
//Add the animation to GSDevTools based on variable reference
GSDevTools.create({animation:tl})
//OR add the animation to GSDevTools based on its id
GSDevTools.create({animation:"fancy"})
With the code above the name “fancy” will display in GSDevTools.
Although you can use the id with a single timeline, this feature is most helpful when working with nested timelines as discussed below.
Demo: GSAP for Everyone
See the Pen
Codrops Tip 7: Markers and Animation Menu by Snorkl.tv (@snorkltv)
on CodePen.
This demo is 26 seconds long and has 7 child timelines. Study the code to see how each timeline has a unique id that is displayed in the animation menu.
Use the animation menu to navigate to and explore each section.
Important: The animation menu is only available on screens wider than 600px.
Hopefully you can see how useful markers and animation ids can be when working with these long-form, hand-coded animations!
Want to Learn More About GSAP?
I’m here to help.
I’ve spent nearly 5 years archiving everything I know about GSAP in video format spanning 5 courses and nearly 300 lessons at creativeCodingClub.com.
I spent many years “back in the day” using GreenSock’s ActionScript tools as a Flash developer, and this experience led to me being hired at GreenSock when they switched to JavaScript. My time at GreenSock had me creating countless demos, videos and learning resources.
Spending years answering literally thousands of questions in the support forums has left me with a unique ability to help developers of all skill levels avoid common pitfalls and get the most out of this powerful animation library.
It’s my mission to help developers from all over the world discover the joy of animating with code through affordable, world-class training.