Yesterday Online PNG Tools smashed through 6.53M Google clicks and today it’s smashed through 6.54M Google clicks! That’s 10,000 new clicks in a single day – the smash train keeps on rollin’!
What Are Online PNG Tools?
Online PNG Tools offers a collection of easy-to-use web apps that help you work with PNG images right in your browser. It’s like a Swiss Army Knife for anything PNG-related. On this site, you can create transparent PNGs, edit icons, clean up logos, crop stamps, change colors of signatures, and customize stickers – there’s a tool for it all. The best part is that you don’t need to install anything or be a graphic designer. All tools are made for regular people who just want to get stuff done with their images. No sign-ups, no downloads – just quick and easy PNG editing tools.
Who Created Online PNG Tools?
Online PNG Tools was created by me and my team at Browserling. We build simple, browser-based tools that anyone can use without needing to download or install anything. Along with PNG tools, we also work on cross-browser testing to help developers make sure their websites work great in all web browsers. Our mission is to make online tools that are fast, easy to use, and helpful for everyday tasks like editing icons, logos, and signatures.
Who Uses Online PNG Tools?
Online PNG Tools and Browserling are used by everyone – from casual users to professionals and even Fortune 100 companies. Casual users often use them to make memes, edit profile pictures, or remove backgrounds. Professionals use them to clean up logos, design icons, or prepare images for websites and apps.
I hadn’t planned on creating a fashion interface. I just needed a reboot. At the time, I was leading art direction at the studio, juggling multiple projects, and emotionally, I was simply exhausted. I joined an Awwwards Masterclass to rediscover the joy of playing with design. I wanted to learn Webflow. I wanted to explore GSAP. But more than that, I wanted to create something unapologetically weird and beautiful.
That seed grew into DICH™, Design Independent Creative House. What started as a design playground became a statement.
We began with light. Not darkness. No glitch, no noise. Just a breath of fresh air.
Designing the Unfuturistic Future
We made a conscious decision: no dark mode. No glitch filters. Most futuristic UIs feel cold. We wanted warmth, softness, a vision of the future that is poetic, not synthetic.
Each section had its own visual temperature. Soft gradients, air, pastel dust. Typography was crucial. The T-12 font had those strange numeric ligatures that felt alien but elegant. Video, color, typography — all speaking the same language.
Each color palette tells a chapter of DICH — sunlit, strange, alive.
We built moodboards, UX pillars, and rhythm plans. That process, taught in the Masterclass, changed how we approached layout. It wasn’t about grids. It was about flow.
Building the Entry Ritual (Preloader)
The preloader wasn’t just an aesthetic flex. It solved three key problems:
Our media-heavy site needed time to load
Browsers block autoplaying audio without user interaction
We wanted to introduce mood and rhythm before the scroll even began
It was animated in After Effects and exported to Lottie, then embedded into Webflow and animated using GSAP.
The Enter button also triggered sound. It was our “permission point” for browser playback.
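A minimal sketch of that "permission point" pattern. The element ids (#preloader, #enter-btn, #bg-audio) are hypothetical placeholders, not the actual DICH markup, and it assumes GSAP is loaded globally:

```javascript
// Hypothetical ids — adjust to your own markup. The Enter click is the user
// gesture that lets the browser start audio, and it also fades out the
// preloader with GSAP.
function setupEnterRitual() {
  if (typeof document === 'undefined') return; // allow loading outside a browser
  const btn = document.querySelector('#enter-btn');
  const audio = document.querySelector('#bg-audio');
  const preloader = document.querySelector('#preloader');
  if (!btn) return;

  btn.addEventListener('click', () => {
    if (audio) audio.play().catch(() => {}); // play() can still reject; fail silently
    if (window.gsap && preloader) {
      window.gsap.to(preloader, {
        autoAlpha: 0,
        duration: 1,
        onComplete: () => preloader.remove(),
      });
    } else if (preloader) {
      preloader.remove();
    }
  }, { once: true }); // the ritual runs a single time
}
```

The same click handler is a natural place to kick off any intro timeline, since everything downstream of it is guaranteed to have user-interaction permission.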
Live coordinate HUD under the cursor — perfectly suited to our site’s theme, so we decided to include it.
document.addEventListener('DOMContentLoaded', function () {
  if (window.innerWidth <= 768) return;

  const xCoord = document.getElementById('x-coordinate');
  const yCoord = document.getElementById('y-coordinate');

  let mouseX = 0;
  let mouseY = 0;
  let lastX = -1;
  let lastY = -1;
  let ticking = false;

  function formatNumber(num) {
    return num.toString().padStart(4, '0');
  }

  function updateCoordinates() {
    if (mouseX !== lastX || mouseY !== lastY) {
      xCoord.textContent = formatNumber(mouseX % 10000);
      yCoord.textContent = formatNumber(mouseY % 10000);
      lastX = mouseX;
      lastY = mouseY;
    }
    ticking = false;
  }

  document.addEventListener('mousemove', (event) => {
    mouseX = event.clientX;
    mouseY = event.clientY;
    if (!ticking) {
      ticking = true;
      requestAnimationFrame(updateCoordinates);
    }
  });
});
Stones That Scroll
We placed a 3D stone (modeled in Blender) into Spline, gave it orbital motion, and connected it to scroll using Webflow Interactions.
It felt like motion with gravity — guided, yet organic.
Pixel Tracer
With coordinate tracking already in place, we easily applied it to our section and later enhanced it with a pixel tracer inspired by Jean Mazouni’s displacement effect.
Unicorn Everywhere
The cursor wasn’t just a pointer, it became a vibe.
We used Unicorn Studio to create custom cursor trails and animations that followed the user like echoes of intent. Three variations in total:
One for the landing screen — minimal, hypnotic.
One for the project case study — denser, electric.
One for transitions — barely-there glimmer, like a memory.
Each version added tension and curiosity. It wasn’t flashy for the sake of it — it gave rhythm to hovering, a pulse to the interaction. Suddenly, the cursor wasn’t just a tool. It was part of the interface’s voice.
Footer Letters with Physics
Our footer was a personal moment. We wanted the word “DICH” to be hidden inside animated lines and revealed on hover using canvas and brightness sampling.
This one took the longest. We tried Perlin noise, sine curves, and springs, but none worked as we’d hoped or produced results that were sufficiently readable — until we found an old Domestika course that showed getImageData() logic.
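A rough sketch of that getImageData() idea (the helper names and canvas parameters are mine): draw the word to an offscreen canvas, then sample per-pixel brightness to build a mask that decides which points along the animated lines should reveal the letters.

```javascript
// Standard luminance weighting for a pixel's brightness.
function brightness(r, g, b) {
  return 0.299 * r + 0.587 * g + 0.114 * b;
}

// Render `word` to an offscreen canvas and return a 0/1 mask: 1 where a
// glyph pixel is bright enough to light up the line, 0 elsewhere.
// Returns null outside a browser environment.
function sampleWordMask(word, width, height) {
  if (typeof document === 'undefined') return null;
  const off = document.createElement('canvas');
  off.width = width;
  off.height = height;
  const ctx = off.getContext('2d');

  ctx.fillStyle = '#000';
  ctx.fillRect(0, 0, width, height);
  ctx.fillStyle = '#fff';
  ctx.font = `bold ${Math.floor(height * 0.8)}px sans-serif`;
  ctx.textAlign = 'center';
  ctx.textBaseline = 'middle';
  ctx.fillText(word, width / 2, height / 2);

  const { data } = ctx.getImageData(0, 0, width, height);
  const mask = new Uint8Array(width * height);
  for (let i = 0; i < mask.length; i++) {
    mask[i] = brightness(data[i * 4], data[i * 4 + 1], data[i * 4 + 2]) > 128 ? 1 : 0;
  }
  return mask; // the line-drawing loop looks up mask[y * width + x] on hover
}
```

The springs and noise still drive the line motion; the mask only gates where the hover reveal is allowed to appear.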
Every visual block was lazy-loaded using IntersectionObserver
Safari compatibility issues — reworked unsupported animations for Safari and added fallbacks for AVIF images (even lighter than WebP) to maximize optimization.
Heavy sections only rendered after the preloader finished
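The lazy-loading part can be sketched like this (a minimal version under assumed conventions: media elements carry a data-src attribute inside .lazy-block containers — not the actual DICH class names):

```javascript
// Observe each block and swap data-src -> src only when it nears the
// viewport; rootMargin starts the load slightly before it becomes visible.
function lazyLoadBlocks(selector = '.lazy-block', rootMargin = '200px') {
  if (typeof document === 'undefined' || !('IntersectionObserver' in window)) return 0;
  const blocks = document.querySelectorAll(selector);
  const io = new IntersectionObserver((entries, observer) => {
    for (const entry of entries) {
      if (!entry.isIntersecting) continue;
      const el = entry.target;
      el.querySelectorAll('[data-src]').forEach((media) => {
        media.src = media.dataset.src; // swap in the real asset
        media.removeAttribute('data-src');
      });
      observer.unobserve(el); // load once, then stop watching
    }
  }, { rootMargin });
  blocks.forEach((b) => io.observe(b));
  return blocks.length;
}
```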
We treated the error page like a runway — not a dead-end, but an invitation. Instead of a sad emoji or a bland “page not found,” you get a full-screen glitch-dream: warped typography, soft scans, and a single message that flickers like a memory.
Technically, it was simple — a standalone Webflow page. But visually, it extended the DICH world: same typographic tension, same surreal softness. We even debated adding background audio, but silence won — it made the page feel like a moment suspended in time.
What We Learned
File formats matter more than you think
Glitches aren’t as magical as thoughtful motion
GSAP is our best friend
Webflow is powerful when paired with code
You don’t need a big plan to make something that matters
Closing
I almost gave up. More than once. But every time the team cracked a bug, designed a transition, or made a visual more strange — it reminded me why we build.
DICH™ was a challenge, a love letter, and a reset. And now it’s yours to explore.
I’ve always been interested in finding simple ways to bring more depth into web interfaces, not just through visuals, but through interaction and space.
In this demo, I explored how flat UI cards can become interactive 3D scenes using GLB models, Three.js, and Webflow. Each card starts as a basic layout but reveals a small, self-contained environment built with real-time rendering and subtle motion.
It’s a lightweight approach to adding spatial storytelling to familiar components, using tools many designers already work with.
Welcome to My Creative World
I’m always drawn to visuals that mix the futuristic with the familiar — space-inspired forms, minimal layouts, and everyday elements seen from a different angle.
Most of my projects start this way: by reimagining ordinary ideas through a more immersive or atmospheric lens.
It All Started with a Moodboard
This one began with a simple inspiration board:
From that board, I picked a few of my favorite visuals and ran them through an AI tool that converts images into GLB 3D models.
The results were surprisingly good! Abstract, textured, and full of character.
The Concept: Flat to Deep
When I saw the output from the AI-generated GLB models, I started thinking about how we perceive depth in UI design, not just visually, but interactively.
That led to a simple idea: what if flat cards could reveal a hidden spatial layer? Not through animation alone, but through actual 3D geometry, lighting, and camera movement.
I designed three UI cards, each styled with minimal HTML and CSS in Webflow. On interaction, they load a unique GLB model into a Three.js scene directly within the card container. Each model is lit, framed, and animated to create the feeling of a self-contained 3D space.
Building the Web Experience
The layout was built in Webflow using a simple flexbox structure with three cards inside a wrapper. Each card contains a div that serves as the mounting point for a 3D object.
The GLB models are rendered using Three.js, which is integrated into the project with custom JavaScript. Each scene is initialized and handled separately, giving each card its own interactive 3D space while keeping the layout lightweight and modular.
Scene Design with Blender
Each GLB model was prepared in Blender, where I added a surrounding sphere to create a sense of depth and atmosphere. This simple shape helps simulate background contrast and encloses the object in a self-contained space.
Lighting played an important role; especially with reflective materials like glass or metal. Highlights and soft shadows were used to create that subtle, futuristic glow.
The result is that each 3D model feels like it lives inside its own ambient environment, even when rendered in a small card.
Bringing It Together with Three.js
Once the models were exported from Blender as .glb files, I used Three.js to render them inside each card. Each card container acts as its own 3D scene, initialized through a custom JavaScript function.
The setup involves creating a basic scene with a perspective camera, ambient and directional lighting, and a WebGL renderer. I used GLTFLoader to load each .glb file and OrbitControls to enable subtle rotation. Zooming and panning are disabled to keep the interaction focused and controlled.
Each model is loaded into a separate container, making it modular and easy to manage. The camera is offset slightly for a more dynamic starting view, and the background is kept dark to help the lighting pop.
Here’s the full JavaScript used to load and render the models:
// Import required libraries
import * as THREE from 'three';
import { OrbitControls } from 'three/addons/controls/OrbitControls.js';
import { GLTFLoader } from 'three/addons/loaders/GLTFLoader.js';
import gsap from 'gsap';

/**
 * This function initializes a Three.js scene inside a given container
 * and loads a .glb model into it.
 */
function createScene(containerSelector, glbPath) {
  const container = document.querySelector(containerSelector);

  // 1. Create a scene
  const scene = new THREE.Scene();
  scene.background = new THREE.Color(0x202020); // dark background

  // 2. Set up the camera with perspective
  const camera = new THREE.PerspectiveCamera(
    45, // Field of view
    container.clientWidth / container.clientHeight, // Aspect ratio
    0.1, // Near clipping plane
    100 // Far clipping plane
  );
  camera.position.set(2, 0, 0); // Offset to the side for better viewing

  // 3. Create a renderer and append it to the container
  const renderer = new THREE.WebGLRenderer({ antialias: true });
  renderer.setSize(container.clientWidth, container.clientHeight);
  container.appendChild(renderer.domElement);

  // 4. Add lighting
  const light = new THREE.DirectionalLight(0xffffff, 4);
  light.position.set(30, -10, 20);
  scene.add(light);

  const ambientLight = new THREE.AmbientLight(0x404040); // soft light
  scene.add(ambientLight);

  // 5. Set up OrbitControls to allow rotation
  const controls = new OrbitControls(camera, renderer.domElement);
  controls.enableZoom = false; // no zooming
  controls.enablePan = false; // no dragging
  controls.minPolarAngle = Math.PI / 2; // lock vertical angle
  controls.maxPolarAngle = Math.PI / 2;
  controls.enableDamping = true; // smooth movement

  // 6. Load the GLB model
  const loader = new GLTFLoader();
  loader.load(
    glbPath,
    (gltf) => {
      scene.add(gltf.scene); // Add model to the scene
    },
    (xhr) => {
      console.log(`${containerSelector}: ${(xhr.loaded / xhr.total) * 100}% loaded`);
    },
    (error) => {
      console.error(`Error loading ${glbPath}`, error);
    }
  );

  // 7. Make it responsive
  window.addEventListener("resize", () => {
    camera.aspect = container.clientWidth / container.clientHeight;
    camera.updateProjectionMatrix();
    renderer.setSize(container.clientWidth, container.clientHeight);
  });

  // 8. Animate the scene
  function animate() {
    requestAnimationFrame(animate);
    controls.update(); // updates rotation smoothly
    renderer.render(scene, camera);
  }
  animate(); // start the animation loop
}

// 9. Initialize scenes for each card (replace with your URLs)
createScene(".div", "https://yourdomain.com/models/yourmodel.glb");
createScene(".div2", "https://yourdomain.com/models/yourmodel2.glb");
createScene(".div3", "https://yourdomain.com/models/yourmodel3.glb");
This script is added via a <script type="module"> tag, either in the Webflow page settings or as an embedded code block. Each call to createScene() initializes a new card, linking it to its corresponding .glb file.
How This Works in Practice
In Webflow, create three containers with the classes .div, .div2, and .div3. Each one will act as a canvas for a different 3D scene.
Embed the JavaScript module shown above by placing it just before the closing </body> tag in your Webflow project, or by using an Embed block with <script type="module">.
Once the page loads, each container initializes its own Three.js scene and loads the corresponding GLB model. The result: flat UI cards become interactive, scrollable 3D objects — all directly inside Webflow.
This approach is lightweight, clean, and performance-conscious, while still giving you the flexibility to work with real 3D content.
Important Note for Webflow Users
This setup works in Webflow, but only if you structure it correctly.
To make it work, you’ll need to:
Host your Three.js code externally using a bundler like Vite, Parcel, or Webpack
Or bundle the JavaScript manually and embed it as a <script type="module"> in your exported site
Keep in mind: Webflow’s Designer does not support ES module imports (import) directly. Pasting the code into an Embed block won’t work unless it’s already built and hosted elsewhere.
You’ll need to export your Webflow project or host the script externally, then link it via your project settings.
Final Thoughts
Thanks for following along with this project. What started as a simple moodboard turned into a small experiment in mixing UI design with real-time 3D.
Taking flat cards and turning them into interactive scenes was a fun way to explore how much depth you can add with just a few tools: Webflow, Three.js, and GLB models.
If this gave you an idea or made you want to try something similar, that’s what matters most. Keep experimenting, keep learning, and keep building.
This project primarily serves as a technical demo and learning material. It began when I decided to start learning Blender. I followed a few tutorials, then decided to do a small project using it—so I chose to create the Canon F-1 camera!
After that, I decided to export the project to Three.js to add some cool post-processing shader effects. I wanted to create a sketch effect similar to what I had seen in some repair guides.
After spending a few hours experimenting with it, I decided to integrate it into a fully functional website featuring some cool shaders and 3D effects!
In this article, I’m going to walk through some of the key features of the site and provide a technical breakdown, assuming you already have a basic or beginner-level understanding of Three.js and shaders.
1. The Edge Detection Shader
Three.js includes a built-in edge detection shader called SobelOperatorShader. Basically, it detects edges based on color contrast—it draws a line between two areas with a strong enough difference in color.
To make my effect work the way I want, I need to assign a unique color to each area I want to highlight on my model. This way, Three.js will draw a line around those areas.
Here’s my model with all the materials applied:
This way, Three.js can accurately detect each area I want to highlight!
As you can see, the lines are not all the same intensity—some are white, while others are light gray. This is because, by default, line intensity depends on contrast: edges with lower contrast appear with lighter lines. To fix this, I manually modified the post-processing shader to make all lines fully white, regardless of contrast.
What I’m doing here is moving all the edge detection logic into the Sobel function. Then, I pass the tDiffuse texture—which is the composer’s render—to this function.
This way, I can modify the output of the edge detection shader before passing it back to the composer:
float G = sobel(t, texel);
G = G > 0.001 ? 1.0 : 0.0;
G represents the intensity of the edge detection. It’s a single value because the lines are monochrome. G ranges from 0 to 1, where 0 means full black (no edge detected) and 1 means full white (strong contrast detected).
As mentioned earlier, this value depends on the contrast. What I’m doing in the second line is forcing G to be 1 if it’s above a certain threshold (I chose 0.001, but you could pick a smaller value if you want).
This way I can get all the edges to have the same intensity.
Here’s how I’m applying the custom fragment shader to the Sobel Operator shader pass:
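One way this wiring can look (a sketch, not the author's exact code — the constant and helper names here are mine, and it assumes the usual three/addons setup with ShaderPass, SobelOperatorShader, and an EffectComposer):

```javascript
// The thresholding step, expressed as a plain function for clarity:
const edgeIntensity = (G) => (G > 0.001 ? 1.0 : 0.0);

// Custom fragment shader: the sobel() body is elided; only the threshold
// that forces every detected edge to full white is shown.
const CUSTOM_SOBEL_FRAGMENT = /* glsl */ `
  uniform sampler2D tDiffuse;
  uniform vec2 resolution;
  varying vec2 vUv;

  float sobel(sampler2D t, vec2 texel) {
    /* ...edge detection logic moved into this function... */
    return 0.0; // placeholder
  }

  void main() {
    vec2 texel = 1.0 / resolution;
    float G = sobel(tDiffuse, texel);
    G = G > 0.001 ? 1.0 : 0.0; // uniform line intensity
    gl_FragColor = vec4(vec3(G), 1.0);
  }
`;

// ShaderPass clones the shader's uniforms internally, so spreading the
// built-in shader and replacing only the fragment shader is safe:
function makeSobelPass(ShaderPass, SobelOperatorShader) {
  return new ShaderPass({ ...SobelOperatorShader, fragmentShader: CUSTOM_SOBEL_FRAGMENT });
}
// usage: composer.addPass(makeSobelPass(ShaderPass, SobelOperatorShader));
```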
Next, let’s take a look at the lens parts section.
This is mainly achieved using a Three.js utility called RenderTarget.
A render target is a buffer where the GPU draws pixels for a scene being rendered off-screen. It’s commonly used in effects like post-processing, where the rendered image is processed before being displayed on the screen.
Basically, this allows me to render my scene twice per frame: once with only the highlighted mesh, and once without it.
In the onSelectMesh method, I set the value of this.selectedMeshName to the name of the mesh group that contains the target mesh from the Raycaster (I’m using names to refer to groups of meshes).
This way, in my render loop, I can create two distinct renders:
One render (renderTargetA) with all the meshes except the hovered mesh
Another render (renderTargetB) with only the hovered mesh
As you can see, I’m sending both renders as texture uniforms to the effectSobel shader. The post-processing shader then “merges” these two renders into a single output.
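The two-pass render loop can be sketched like this (a sketch under assumptions: the uniform names uTextureA/uTextureB and the function shape are mine; `targets.a`/`targets.b` would be two THREE.WebGLRenderTarget instances sized to the canvas):

```javascript
// Render the scene twice per frame: once hiding the hovered mesh group,
// once showing only it, then hand both textures to the post-processing pass.
function renderHighlight(renderer, scene, camera, targets, selectedName, effectSobel) {
  const setVisible = (predicate) => {
    scene.traverse((obj) => {
      if (obj.isMesh) obj.visible = predicate(obj);
    });
  };

  // Pass 1 (renderTargetA): everything except the hovered group
  setVisible((obj) => (obj.parent ? obj.parent.name !== selectedName : true));
  renderer.setRenderTarget(targets.a);
  renderer.render(scene, camera);

  // Pass 2 (renderTargetB): only the hovered group
  setVisible((obj) => (obj.parent ? obj.parent.name === selectedName : false));
  renderer.setRenderTarget(targets.b);
  renderer.render(scene, camera);

  // Restore visibility and return to on-screen rendering
  setVisible(() => true);
  renderer.setRenderTarget(null);

  // Feed both off-screen renders to the post-processing shader as uniforms
  effectSobel.uniforms.uTextureA.value = targets.a.texture;
  effectSobel.uniforms.uTextureB.value = targets.b.texture;
}
```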
At this point, we have two renders of the scene, and the post-processing shader needs to decide which one to display. Initially, I thought of simply combining them by adding the two textures together, but that didn’t produce the correct result:
What I needed was a way to hide the pixels of one render when they are “covered” by pixels from another render.
To achieve this, I used the distance of each vertex from the camera. This meant I had to go through all the meshes in the model and modify their materials. However, since the mesh colors are important for the edge detection effect, I couldn’t change their colors.
Instead, I used the alpha channel of each individual vertex to set the distance from the camera.
First, the luminance function is a built-in Three.js shader utility imported from the <common> module. It’s recommended to use this function with the Sobel effect to improve edge detection results.
The uColor value represents the initial color of the mesh.
The dist value calculates the distance between the vertex position (passed from the vertex shader via a varying) and the camera, using the built-in cameraPosition variable in Three.js shaders.
Finally, I pass this distance through the alpha channel. Since the alpha value can’t exceed 1, I use a normalized version of the distance.
And here is the updated logic for the postprocessing shader:
Now that the alpha channel of the textures contains the distance to the camera, I can simply compare them and display the render that has the closer vertices to the camera.
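The comparison can be sketched as follows (uniform names are my assumptions, not the author's exact shader):

```javascript
// The per-pixel choice, mirrored in plain JS for clarity:
// pixel.a holds the normalized distance to the camera; smaller = closer.
function pickCloser(pixelA, pixelB) {
  return pixelA.a <= pixelB.a ? pixelA : pixelB;
}

// The same idea as a fragment shader for the composer pass:
const MERGE_FRAGMENT = /* glsl */ `
  uniform sampler2D uTextureA; // scene without the hovered mesh
  uniform sampler2D uTextureB; // the hovered mesh only
  varying vec2 vUv;

  void main() {
    vec4 a = texture2D(uTextureA, vUv);
    vec4 b = texture2D(uTextureB, vUv);
    // alpha encodes normalized camera distance: keep the closer pixel
    gl_FragColor = a.a <= b.a ? a : b;
  }
`;
```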
3. The Film Roll Effect
Next is the film roll component that moves and twists on scroll.
This effect is achieved using only shaders; the component is a single plane with a shader material.
All the data is sent to the shader through uniforms:
export default class Film {
  constructor() {
    // ...code
  }

  createGeometry() {
    this.geometry = new THREE.PlaneGeometry(60, 2, 100, 10)
  }

  createMaterial() {
    this.material = new THREE.ShaderMaterial({
      vertexShader,
      fragmentShader,
      side: THREE.DoubleSide,
      transparent: true,
      depthWrite: false,
      blending: THREE.CustomBlending,
      blendEquation: THREE.MaxEquation,
      blendSrc: THREE.SrcAlphaFactor,
      blendDst: THREE.OneMinusSrcAlphaFactor,
      uniforms: {
        uPlaneWidth: new THREE.Uniform(this.geometry.parameters.width),
        uRadius: new THREE.Uniform(2),
        uXZfreq: new THREE.Uniform(3.525),
        uYfreq: new THREE.Uniform(2.155),
        uOffset: new THREE.Uniform(0),
        uAlphaMap: new THREE.Uniform(
          window.preloader.loadTexture(
            "./alpha-map.jpg",
            "film-alpha-map",
            (texture) => {
              texture.wrapS = THREE.RepeatWrapping
              const { width, height } = texture.image
              this.material.uniforms.uAlphaMapResolution.value =
                new THREE.Vector2(width, height)
            }
          )
        ),
        uImages: new THREE.Uniform(
          window.preloader.loadTexture(
            "/film-texture.png",
            "film-image-texture",
            (tex) => {
              tex.wrapS = THREE.RepeatWrapping
            }
          )
        ),
        uRepeatFactor: new THREE.Uniform(this.repeatFactor),
        uImagesCount: new THREE.Uniform(this.images.length * this.repeatFactor),
        uAlphaMapResolution: new THREE.Uniform(new THREE.Vector2()),
        uFilmColor: new THREE.Uniform(window.colors.orange1),
      },
    })
  }

  createMesh() {
    this.mesh = new THREE.Mesh(this.geometry, this.material)
    this.scene.add(this.mesh)
  }
}
The main vertex shader uniforms are:
uRadius is the radius of the cylinder shape
uXZfreq is the frequency of the twists on the (X,Z) plane
uYfreq is a cylinder height factor
uOffset is the vertical offset of the roll when you scroll up and down
As you can see, these uniforms modify the initial position attribute to bend the plane into a cylinder. The modified X, Y, and Z components all use uOffset in their frequency terms; this uniform is linked to a ScrollTrigger timeline, which produces the twist-on-scroll effect.
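The mapping can be sketched in plain JS, mirroring what the vertex shader does per vertex (a rough sketch only — the exact terms and frequency combinations in the real shader are the author's; this version is my assumption):

```javascript
// Bend a flat plane's (x, y) into a twisted cylinder. uOffset shifts the
// angle, so animating it on scroll rotates/twists the whole roll.
function rollPosition(x, y, uRadius, uXZfreq, uYfreq, uOffset, uPlaneWidth) {
  const angle = (x / uPlaneWidth) * uXZfreq * Math.PI * 2.0 + uOffset;
  return {
    x: uRadius * Math.cos(angle),          // wrap around the cylinder (X,Z plane)
    y: y * uYfreq + Math.sin(angle) * 0.5, // height factor plus a gentle twist
    z: uRadius * Math.sin(angle),
  };
}
```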
That’s it for the most part! Don’t feel frustrated if you don’t understand everything right away—I often got stuck for days on certain parts and didn’t know every technical detail before I started building.
I learned so much from this project, and I hope you’ll find it just as useful!
Thank you for reading, and thanks to Codrops for featuring me again!
A simple way to improve efficiency is knowing your IDE shortcuts. Let’s learn how to create custom ones to generate code automatically.
One of the best tricks to boost productivity is knowing your tools.
I’m pretty sure you’ve already used some predefined snippets in Visual Studio. For example, when you type ctor and hit Tab twice, VS automatically creates an empty constructor for the current class.
In this article, we will learn how to create custom snippets: in particular, we will design a snippet that automatically creates a C# Unit Test method with some placeholders and predefined Arrange-Act-Assert blocks.
Snippet Designer: a Visual Studio 2022 extension to add a UI to your placeholders
Snippets are defined in XML-like files with .snippet extension. But we all know that working with XMLs can be cumbersome, especially if you don’t have a clear idea of the expected structure.
Therefore, even if not strictly necessary, I suggest installing a VS2022 extension called Snippet Designer 2022.
This extension, developed by Matthew Manela, can be found on GitHub, where you can view the source code.
This extension gives you a UI to customize the snippet instead of manually editing the XML nodes. It allows you to customize the snippet, the related metadata, and even the placeholders.
Create a basic snippet in VS2022 using a .snippet file
As we saw, snippets are defined in a simple XML.
In order to have your snippets immediately available in Visual Studio, I suggest you create those files in a specific VS2022 folder under the path \Documents\Visual Studio 2022\Code Snippets\Visual C#\My Code Snippets\.
So, create an empty file, change its extension to .snippet, and save it to that location.
Now, you can open Visual Studio (it’s not necessary to open a project, but I’d recommend you to do so). Then, head to File > Open, and open the file you saved under the My Code Snippets directory.
Thanks to Snippet Designer, you will be able to see a nice UI instead of plain XML content.
Have a look at how I filled in the various parts to create a snippet that generates a variable named x, assigns it a value, and then calls x++;
Have a look at the main parts:
the body, which contains the snippet to be generated;
the top layer, where we specified:
the Snippet name: Int100; it’s the display name of the shortcut
the code language: C#;
the shortcut: int100; it’s the string you’ll type in that allows you to generate the expected snippet;
the bottom table, which contains the placeholders used in the snippet; more on this later;
the properties tab, on the sidebar: here is where you specify some additional metadata, such as:
Author, Description, and Help Url of the snippet, in case you want to export it;
the kind of snippet: possible values are MethodBody, MethodDecl and TypeDecl. However, this value is supported only in Visual Basic.
Now, hit save and be ready to import it!
Just for completeness, here’s the resulting XML:
<?xml version="1.0" encoding="utf-8"?>
<CodeSnippets xmlns="http://schemas.microsoft.com/VisualStudio/2005/CodeSnippet">
  <CodeSnippet Format="1.0.0">
    <Header>
      <SnippetTypes>
        <SnippetType>Expansion</SnippetType>
      </SnippetTypes>
      <Title>Int100</Title>
      <Author></Author>
      <Description></Description>
      <HelpUrl></HelpUrl>
      <Shortcut>int100</Shortcut>
    </Header>
    <Snippet>
      <Code Kind="method decl" Language="csharp" Delimiter="$"><![CDATA[int x = 100;
x++;]]></Code>
    </Snippet>
  </CodeSnippet>
</CodeSnippets>
Notice that the actual content of the snippet is defined in the CDATA block.
Import the snippet in Visual Studio
It’s time to import the snippet. Open the Tools menu item and click on Code Snippets Manager.
From here, you can import a snippet by clicking the Import… button. Given that we’ve already saved our snippet in the correct folder, we’ll find it under the My Code Snippets folder.
Now it’s ready! Open a C# class, and start typing int100. You’ll see our snippet in the autocomplete list.
By hitting Tab twice, you’ll see the snippet’s content being generated.
How to use placeholders when defining snippets in Visual Studio
Wouldn’t it be nice to define customizable parts of your snippets?
Let’s see a real example: I want to create a snippet to create the structure of a Unit Tests method with these characteristics:
it already contains the AAA (Arrange, Act, Assert) sections;
the method name should follow the pattern “SOMETHING should DO STUFF when CONDITION”. I want to be able to replace the different parts of the method name by using placeholders.
You can define placeholders using the $ symbol. You will then see the placeholders in the table at the bottom of the UI. In this example, the placeholders are $TestMethod$, $DoSomething$, and $Condition$. I also added a description to explain the purpose of each placeholder better.
The XML looks like this:
<?xml version="1.0" encoding="utf-8"?>
<CodeSnippets xmlns="http://schemas.microsoft.com/VisualStudio/2005/CodeSnippet">
  <CodeSnippet Format="1.0.0">
    <Header>
      <SnippetTypes>
        <SnippetType>Expansion</SnippetType>
      </SnippetTypes>
      <Title>Test Sync</Title>
      <Author>Davide Bellone</Author>
      <Description>Scaffold the AAA structure for synchronous NUnit tests</Description>
      <HelpUrl></HelpUrl>
      <Shortcut>testsync</Shortcut>
    </Header>
    <Snippet>
      <Declarations>
        <Literal Editable="true">
          <ID>TestMethod</ID>
          <ToolTip>Name of the method to be tested</ToolTip>
          <Default>TestMethod</Default>
          <Function></Function>
        </Literal>
        <Literal Editable="true">
          <ID>DoSomething</ID>
          <ToolTip>Expected behavior or result</ToolTip>
          <Default>DoSomething</Default>
          <Function></Function>
        </Literal>
        <Literal Editable="true">
          <ID>Condition</ID>
          <ToolTip>Initial conditions</ToolTip>
          <Default>Condition</Default>
          <Function></Function>
        </Literal>
      </Declarations>
      <Code Language="csharp" Delimiter="$" Kind="method decl"><![CDATA[[Test]
public void $TestMethod$_Should_$DoSomething$_When_$Condition$()
{
// Arrange
// Act
// Assert
}]]></Code>
    </Snippet>
  </CodeSnippet>
</CodeSnippets>
Now, import it as we already did before.
Then, head to your code, start typing testsync, and you’ll see the snippet come to life. The placeholders we defined are highlighted; fill in each one and hit Tab to move to the next.
Bonus: how to view all the snippets defined in VS
If you want to learn more about your IDE and the available snippets, you can have a look at the Snippet Explorer table.
You can find it under View > Tools > Snippet Explorer.
Here, you can see all the snippets, their shortcuts, and the content of each snippet. You can also see the placeholders highlighted in green.
It’s always an excellent place to learn more about Visual Studio.
Further readings
As always, you can read more on Microsoft Docs. It’s a valuable resource, although I find it difficult to follow.
There are some tips that may improve both the code quality and the developer productivity.
If you want to enforce some structures or rules, add such snippets to your repository; when somebody joins your team, teach them how to import those snippets.
I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛
Hi, my alias is “MisterPrada,” and I’d like to share a short story about how I came to the decision to become a creative developer.
Background
I’m just about to turn 30, and over the years I’ve come to many realizations that I’d like to share as echoes of my journey. I’ve been consciously programming for about 14 years, and I’ve been using Windows since childhood—battling the infamous “blue screen of death.”
From a young age, I knew who I wanted to be—a programmer. In my childhood, nothing was more exciting than a computer. However, my academic skills weren’t strong enough to get into university easily. I was never particularly gifted in any subject; my grades were average or worse.
Somehow, I managed to get accepted into a university for an engineering program related to programming. I tried hard, but nothing worked—I ended up copying others just to pass exams. After some time, I realized it was time to get serious. I had no special talents, no head start—just the need for hard work. I wrote my first function, my first loop over a two-dimensional array, my first structure, my first doubly linked list—and I realized I liked it. I really, really liked the fact that I was starting to make progress.
I didn’t stop copying completely, but I began writing my own programs. We studied C++, C#, Assembly, databases, and lots of things I couldn’t yet apply in real life. So I bought a book on PHP, JS, and MySQL and realized I could build websites using WordPress and other popular CMS platforms at the time like Joomla, Drupal, etc. And you know what? That made money—and it was insanely cool. I just took on any work I could find. Since I had spent all of university copying code, I found it really easy to understand and adapt other people’s code.
Years passed, and I was building simple websites—tweaking templates downloaded from torrents, grabbing CSS styles from random websites, and so on. Something like these:
Eventually, I realized that my growth had stalled and I needed to act fast. I started reading various books, trying to improve my skills and learn new, trending technologies. This mostly broadened my technical horizons—I understood more, copied more, and tried harder to boost my self-esteem.
At one point, I felt confident, thinking I was pretty good and could handle anything. But then something happened during the final year of university. A classmate told me he had gone for an interview at a major company, and they asked him to implement a binary tree. I was shocked—I had no idea what a binary tree was, how to build one, or why I was even supposed to know it.
Honestly, it hit me hard. I started questioning everything—was I even a real programmer? Maybe I was third, fourth, or even fifth-rate at best, especially with my modest PHP/JS skill set…
No matter how tough things got, I never felt like this wasn’t for me. I never thought of quitting or doing something else. I just accepted that I wasn’t the best, not the smartest, and unlikely to be in Steve Jobs’ dream dev team. And you know what? Something strange happened.
One day, while playing my favorite game, World of Warcraft, I decided I wanted to become a cheater. And it wasn’t just a casual thought or curiosity—it became a full-blown obsession. I was just a regular programmer with average web development knowledge, yet I decided to write a cheat, dive into hacking, and understand how it all worked.
For a whole year, I obsessively studied the C++ source code of the game—despite not really using C++ at all. I explored how the server worked, dug into Assembly, network traffic, data packets, and hex code. I read books on cybersecurity and anything even remotely related. It felt like an endless world of discovery. I could spend months trying to understand things that didn’t make sense to me at first—occasionally achieving small victories, but victories nonetheless.
I started building a toolkit: IDA Pro, xDbg, and even something as simple as https://hexed.it/, which let me quickly modify binary files.
After achieving real success—writing my first memory manipulation programs for protected software—I realized that what really makes a difference is a mix of luck, hard work, and a genuine passion for what you’re doing. And I had all of those things.
That became a kind of guiding principle for my further development. Sure, I’m not the most talented or naturally gifted, but I began to understand that even without full knowledge, with persistence and effort, you can achieve goals that seem impossible at first—or even at second or third glance.
Getting to Work
I got a job at an outsourcing company, and honestly, I felt confident thanks to my freelance commercial experience. At work, I handled whatever tasks the client needed—it didn’t matter whether I already knew how to do it or not. My goals were simple: learn more and earn money. What did I work on? Pretty much everything, except I always thought of myself as more of a logic guy, and frontend wasn’t really my thing. It was easier for me to deploy and configure a server than to write 10 lines of CSS.
So I focused mostly on backend logic, building systems, and I’d often hand off frontend tasks to others. Still, I was always afraid of losing touch with those skills, so I made an effort to study Vue, React, Angular, and various frontend libraries—just to understand the logic behind it.
I read a lot of books, mostly on JavaScript, DevOps, and hacking. At work, I grew horizontally, gaining experience based on the clients’ needs. In my personal time, I was deeply interested in hacking and reverse engineering—not because of any grand ambition, but simply because I loved it. I saw myself in it, because I was good at it. I definitely had some luck—I could click randomly through code and somehow land on exactly what I needed. It’s comforting to know that not everything is hopeless.
Years went by, and as backend developers and DevOps engineers, we often felt invisible. Over time, the huge amount of backend code I wrote stopped bringing the same satisfaction. There were more systems, more interfaces, and less recognition—because no one really sees what you do behind the scenes. So why not switch to frontend? Well, I just hate CSS. And building simple landing pages or generic websites with nothing unique? That’s just not interesting. I need something bold and impressive—something that grabs me the way watching *Dune* does. Game development? Too complex, and I never had the desire to make games.
But then, at work, I was given a task to create a WebAR experience for a client. It required at least some basic 3D knowledge, which I didn’t have. So I dove in blindly and started building the app using 8thWall. That’s when I discovered A-Frame, which was super easy and incredibly fun—seeing results so different from anything I had done before. When A-Frame became limiting, I started using Three.js directly on commercial projects. I had zero understanding of vector math, zero 3D modeling experience (like in Blender), but I still managed to build something. Some things worked, some didn’t—but in the end, the client was happy.
WebAR experience
After creating dozens of such projects and nearly a hundred backend projects, I eventually grew tired of both. Out of boredom, I started reading books on Linux Bash, Kubernetes, WebAssembly, Security, and code quality—good and bad.
All of this only expanded my technical perspective. I didn’t become a hero or some programming guru, but I felt like I was standing alone at the summit of my own mountain. There was this strange emptiness—an aimless desire to keep learning, and yet I kept doing it day after day. Some topics I studied still haven’t revealed their meaning to me, while others only made sense years later, or proved useful when I passed that knowledge on to others.
Over the years, I became a team lead—not because I was naturally suited for it, but because there was simply no one else. I took on responsibility, began teaching others what to do, even though I wasn’t always sure what was right or wrong—I just shared my logic and experience.
Alongside trends, I had to learn CI/CD and Docker to solve tasks more efficiently—tasks that used to be handled differently. And you know what? I really learned something from this period: that most tools are quite similar, and you don’t need to master all of them to solve real business problems. In my mind, they became just that—tools.
All you need is to read the documentation, run a few basic examples, and you’re good to go. I’m simply not one of those people who wants to stick to one technology for life and squeeze value out of it forever. That’s not me. For over 5 years, I built 70–80 websites using just WordPress and Laravel—covering everything from custom themes and templating systems to multisites and even deep dives into the WordPress core. I worked with some truly awful legacy code that I wouldn’t wish on anyone.
Eventually, I decided to move on. The developers I worked with came and went, and that cycle never ended—it’s still ongoing to this day. Then came my “day X.” I was given a project I couldn’t turn down. It involved GLSL shaders. I had to create a WebAR scene with a glass beverage placed on a table. The challenge was that it was a glass cup, and around version 130 of Three.js, this couldn’t be done using a simple material. The client provided ready-made code written in Three.js with custom shaders. I looked at it and saw nothing but math—math I couldn’t understand. It was way too complex. The developer who created it had written a shader for glass, water, ice, and other elements. My task was to integrate this scene into WebAR. I was lucky enough to get a call with the developer who built it, and I asked what seemed like a straightforward question at the time:
(Me) – How did you manage to create such effects using pure math? Can you actually visualize it all in your head? (Shader Developer) – Yeah, it looks complicated, but if you start writing shaders, borrowing small snippets from elsewhere and understanding how different effects work, eventually you start to look at that mathematical code and visualize those parts in your head.
His answer blew me away. I realized—this guy is brilliant. And I honestly hadn’t seen anyone cooler. I barely understood anything about what he’d done—it was all incredibly hard to grasp. Back then, I didn’t have ChatGPT or anything like it to help. I started searching for books on the subject, but there were barely any. It was like this secret world where everyone knew everything but never shared. And if they did, it was in dry, unreadable math-heavy documentation that someone like me just couldn’t digest. At that point, I thought maybe I was simply too weak to write anything like that, and I went back to what I was doing before.
The Beginning of the Creative Developer Journey
About a year later, I came across this website, which struck me with its minimalistic and stylish design—totally my vibe. Without hesitation, I bought the course by Bruno Simon, not even digging into the details. If he said he’d teach shaders, I was all in. My obsession was so intense that I completed the course in just two weeks, diving into every single detail. Thanks to my background, most of the lessons were just a pleasant refresher—but the shader sections truly changed my life.
So, I finished the course. What now? I didn’t yet have real-world projects that matched the new skills I had gained, so I decided to just start coding and releasing my own work. I spent a long time thinking about what my first project should be. Being a huge fan of the Naruto universe, I chose to dedicate my first creative project to my favorite character—Itachi.
I already had some very basic skills in Blender, and of course, there was no way I could create a model like that myself. Luckily, I stumbled upon one on Sketchfab and managed to download it (haha). I built the project almost the way I envisioned it, though I lacked the experience for some finer details. Still, I did everything I could at the time. God rays were already available in the Three.js examples, so creating a project like that was pretty straightforward. And man, it was so cool—the feeling of being able to build something immersive was just amazing.
Next, I decided to create something in honor of my all-time favorite game, which I’ve been playing for over 15 years—World of Warcraft.
In this project, the real challenge for me was linking the portal shader to sound, as well as creating particle motion along Bézier curves. But by this point, I already had ChatGPT—and my capabilities skyrocketed. This is my favorite non-commercial project. Still, copying and modifying something isn’t the same as creating it from scratch.
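The particle motion mentioned above can be sketched in plain JavaScript. This is a hypothetical standalone helper, not the project’s actual code: it evaluates the standard cubic Bézier polynomial so a particle’s position can be driven by a parameter t going from 0 to 1.

```javascript
// Minimal sketch: move a particle along a cubic Bézier curve.
// p0..p3 are {x, y, z} control points; t runs from 0 (start) to 1 (end).
function cubicBezier(p0, p1, p2, p3, t) {
  const u = 1 - t;
  // B(t) = u³·p0 + 3u²t·p1 + 3ut²·p2 + t³·p3
  const blend = (a, b, c, d) =>
    u * u * u * a + 3 * u * u * t * b + 3 * u * t * t * c + t * t * t * d;
  return {
    x: blend(p0.x, p1.x, p2.x, p3.x),
    y: blend(p0.y, p1.y, p2.y, p3.y),
    z: blend(p0.z, p1.z, p2.z, p3.z),
  };
}
```

In a render loop you would advance t each frame (optionally eased, or driven by the audio analyser as in the portal project) and copy the result into the particle’s position attribute.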
The shaders I used here were pieced together from different sources—I borrowed some from Bruno Simon’s projects, and in other cases, I reverse-engineered other projects just to figure out what I could replicate instead of truly engaging my own thinking. It was like always taking the path of least resistance. Ironically, reverse engineering a Webpack-compiled site often takes more time than simply understanding the problem yourself. But that was my default mode—copy, modify, move on.
For this particular project, it wasn’t a big deal, but I’ve had projects in the past that got flagged for copyright issues. I knew everything lived on the frontend and could be broken down and analyzed bit by bit—especially shaders. You might not know this, but in Safari on a MacBook, you can use developer tools to view all the shaders used on a site and even modify them in real time. Naturally, I used every trick I knew to reach my goals.
That shader developer’s comment—about being able to read math and visualize it—kept echoing in my mind. After Bruno’s course, I started to believe he might have been right. I was beginning to understand fragments of shader code, even if not all of it. I ended up watching every single video on the YouTube channel “The Art Of Code”.
After watching those videos, I started to notice my growth in writing shaders. I began to see, understand, and even visualize what I was writing. So I decided to create a fragment shader based on my own experience:
Along my shader-writing journey, I came across someone everyone in the shader world knows—Inigo Quilez. Man, what an absolute legend. There’s this overwhelming feeling that you’ll never reach his level. His understanding of mathematics and computer graphics is just on another planet compared to mine. For a long time, that thought really got to me—20 years ago, he was creating things I still can’t do today, despite programming for so long. But looking back, I realized something: some of the people I once admired, I’ve actually surpassed in some ways—not because I aimed to, but simply by moving forward every day. And I came to believe that if I keep going, maybe I’ll reach my own peak—one where my ideas can be truly useful to others.
So here I am, moving forward, and creating what I believe is a beautiful shader of the aurora.
I realized that I could now create shaders based on models made in Blender—and do it with a full understanding of what’s going on. I was finally capable of building something entirely on my own.
So what’s next? I dove back into Three.js and began trying to apply everything I had learned to create something new. You can find a list of those projects here.
Public Projects
I bought and completed all the courses by Simon Dev. By then, the shader course wasn’t anything groundbreaking for me anymore, but the math course was something I really needed. I wanted to deepen my understanding of how to apply math in practice. I also played through this game, which demonstrates how vector math works—highly recommended for anyone struggling with the concept. It really opened my eyes to things I hadn’t understood before.
I became obsessed with making sure I didn’t miss anything shared by the people who helped shape my knowledge. I watched 100% of the videos on his YouTube channel and those of other creators who were important to me in this field. And to this day, I keep learning, studying other developers’ techniques, and growing in the field of computer graphics.
Interesting Projects
I really enjoy working with particles—and I also love motion blur. I came up with an approach where each particle blurs in the direction of its movement based on its velocity. I left some empty space on the plane where the particle is drawn so the blur effect wouldn’t get cut off.
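The idea behind that directional blur can be sketched on the CPU side. This is a hypothetical illustration (the real effect lives in a shader): it normalizes a particle’s velocity and generates trailing sample positions along that direction, with the trail length scaling with speed.

```javascript
// Sketch: trailing sample positions for velocity-directional particle blur.
// position and velocity are {x, y}; returns `count` points smeared behind
// the particle along its direction of motion.
function blurSamples(position, velocity, count = 4, strength = 0.5) {
  const speed = Math.hypot(velocity.x, velocity.y);
  if (speed === 0) {
    // No motion: all samples collapse onto the particle itself.
    return Array.from({ length: count }, () => ({ ...position }));
  }
  const dir = { x: velocity.x / speed, y: velocity.y / speed };
  const len = speed * strength; // faster particles get a longer smear
  return Array.from({ length: count }, (_, i) => ({
    x: position.x - dir.x * len * (i / (count - 1)),
    y: position.y - dir.y * len * (i / (count - 1)),
  }));
}
```

In the fragment shader the same idea becomes accumulating texture taps along the velocity direction, which is why the quad needs padding so the smear isn’t clipped at the plane’s edge.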
Using particles and distance-based blur effects in commercial projects.
After watching Dune, I decided to play around with sound.
I really enjoy playing with light sources.
Or even creating custom light sources using TSL.
I consider this project my most underrated one. I’m a huge fan of the Predator and Alien universes. I did borrow the plasma shader from CodePen, but honestly, that’s not the most important detail here. At the time I made this project, Three.js had just introduced a new material property called AlphaHash, which allowed me to create an awesome laser effect. It really looks great. Maybe no one notices such small details, but for me, it was an achievement to come up with that solution right as the new version of Three.js was released. That’s where my luck comes in—I had no idea how I’d implement the laser at the start of the project and thought, “Oh well, I’ll figure something out.” And luckily, the engine developers delivered exactly what I needed just in time.
You may have already noticed that I don’t build full frontend solutions with lots of interfaces and traditional layout work—that just doesn’t interest me, so I don’t do it. In commercial development, I focus on solving niche problems—problems other developers won’t spend hours watching videos to figure out. I create concepts that later get integrated into projects. You might have already seen some 3D scenes or visual effects I’ve built—without even knowing it. A lot of development happens through two, three, or even four layers of hands. That’s why, sometimes, creating something for Coca-Cola is more realistic than making a simple online store for a local business.
And what have I learned from this journey?
Never give up. Be like Naruto—better to fail 100 times than never try at all.
I’m not a saint of a developer—I forget things just like you, I use ChatGPT, I get lazy, and sometimes, in trying to do more than I’m capable of, I give in to the temptation of borrowing code. And yes, that has sometimes ended badly for me.
I assure you, even top developers—the ones who seem untouchably brilliant—also borrow or adapt code. I’ve reverse-engineered projects and clearly seen others use code they didn’t write, even while they rake in thousands of views and win awwwards. Meanwhile, the original authors stay invisible. That’s why I now try to focus more on creating things that are truly mine, to grow the ability to create rather than just consume. And to you, I say—do whatever helps you get better. The takeaway for me is this: share what you’ve made today, because tomorrow it might be irrelevant. And believe me, if someone really wants what you’ve built, they’ll take it anyway—and you won’t even know.
Even if your job makes you build projects that don’t excite you, don’t assume it’s someone else’s job to teach you. You have to sit down, start learning on your own, and work toward what truly inspires you.
Don’t be afraid to forget things—remembering something isn’t the same as learning it from scratch, especially with ChatGPT around.
See new technologies as tools to reach your goals. Don’t fear them—use everything, including AI, as long as it helps you move forward. Making mistakes is the most normal thing that can happen to you.
Nothing is impossible—it’s just a matter of time you personally need to spend to understand something that currently feels incomprehensible.
When using ChatGPT, think critically and read what it outputs. Don’t blindly copy and paste code—I’ve done that, and it cost me a lot of time. If I had just thought it through, I could’ve solved it in five minutes.
If new technologies seem absurd to you, maybe you’re starting to age—or refusing to accept change. Try to shake yourself up and think critically. If you don’t do it, someone else will—and they’ll leave you behind.
Hard work and determination beat talent (Inigo Quilez is still out of reach for now), but the price is your time.
In the pursuit of your own achievements, don’t forget about your family, loved ones, and friends—otherwise your 30s will fly by even faster than mine did.
The more techniques you learn in digital art, the more you’ll want to understand math and physics—and many things you once found boring may suddenly gain new meaning and purpose.
Ideas that you create yourself may become more valuable to you than everything you’ve ever studied.
Programming books are often so huge that you don’t even want to buy them—but you don’t have to read them cover to cover. Learn to filter information. Don’t worry about skipping something—if you miss it, GPT can explain it later. So feel free to skip the chapters you don’t need right now or won’t retain anyway.
In the past, it was important to know what a certain technology could do and how to use it by memory or with references. Today, it’s enough to simply know what’s possible—documentation and ChatGPT can help you figure out the rest. Don’t memorize things that will be irrelevant or replaced by new tech in a few days.
Start gradually learning TSL—the node-based system will make it easier to create materials designed by artists in Blender. (Year 2025)
Don’t be afraid to dig into the core to read or even modify something. The people who build the tools you use are just people too, and they write readable code. Take Three.js, for example—when you dive into the material declarations, the hierarchy becomes much clearer, something that wasn’t obvious to me when I first started learning Three.js. Or with TSL—even though the documentation is still weak, looking at function declarations often reveals helpful comments that make it easier to understand how to use different features.
To be honest, I didn’t really want to write about myself—but Manoela pushed me, so I decided to help. And you know, helping people often comes back around as luck 🍀—and that always comes in handy later!
Alright, I won’t bore you any longer—just take a look at my cat ♥️