Ponpon Mania is an animated comic featuring Ponpon, a megalomaniac sheep dreaming of becoming a DJ. We wanted to explore storytelling beyond traditional comics by combining playful interactions, smooth GSAP-powered motion, and dynamic visuals. The goal was to create a comic that feels alive, where readers engage directly with Ponpon’s world while following the narrative. The project evolved over several months, moving from early sketches to interactive prototypes.
About us
We are Justine Soulié (Art Director & Illustrator) and Patrick Heng (Creative Developer), a creative duo passionate about storytelling through visuals and interaction. Justine brings expertise in illustration, art direction, and design, while Patrick focuses on creative development and interactive experiences. Together, we explore ways to make stories more playful, immersive, and engaging.
Art Direction
Our visual direction emphasizes clean layouts, bold colors, and playful details. From the start, we wanted the comic to feel vibrant and approachable while using design to support the story. On the homepage, we aimed to create a simple, welcoming scene that immediately draws the user in, offering many interactive elements to explore and encouraging engagement from the very first moment.
Homepage animations
The comic is mostly black and white, providing a simple and striking visual base. Color appears selectively, especially when Ponpon dreams of being a DJ and is fully immersed in his imagined world, highlighting these key moments and guiding the reader’s attention. Scroll-triggered animations naturally direct focus, while hover effects and clickable elements invite exploration without interrupting the narrative flow.
Chapter navigation
To reinforce Ponpon’s connection to music, we designed the navigation to resemble a music player. Readers move through chapters as if they were albums, with each panel functioning like a song. This structure reflects Ponpon’s DJ aspirations, making the reading experience intuitive, dynamic, and closely tied to the story.
Chapters menu
Technical Approach
Our main goal was to reduce technical friction so we could dedicate our energy to refining the artistic direction, motion design, and animation of the website.
We used WebGL because it gave us full creative freedom over rendering. Even though the comic has a mostly 2D look, we wanted the flexibility to add depth and apply shader-based effects.
Starting from Justine’s illustrator files, every layer and visual element from each panel was exported as an individual image. These assets were then packed into optimized texture atlases using Free TexturePacker.
Atlas example
Once exported, the images were further compressed into GPU-friendly formats to reduce memory usage. Using the data generated by the packer, we reconstructed each scene in WebGL by generating planes at the correct size. Finally, everything was placed in a 3D scene where we applied the necessary shaders and animations to achieve the desired visual effects.
Debug view
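To give an idea of that reconstruction step, here is a rough sketch of what it can look like. The atlas JSON shape follows the usual TexturePacker output, but the helper names (createPlane, worldScale) are assumptions for illustration, not the project's actual code.

// For each packed frame, create a plane sized from its pixel dimensions
// and remap its UVs so it only samples its own region of the atlas texture.
atlasData.frames.forEach((frame) => {
  const { x, y, w, h } = frame.frame;

  const plane = createPlane({
    width: w * worldScale, // convert pixels to world units
    height: h * worldScale,
    texture: atlasTexture,
    uvOffset: [x / atlasData.meta.size.w, y / atlasData.meta.size.h],
    uvScale: [w / atlasData.meta.size.w, h / atlasData.meta.size.h],
  });

  scene.add(plane);
});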
Tech Stack & Tools
Design
Adobe Photoshop & Illustrator – illustration and asset preparation
Free TexturePacker – for creating optimized texture atlases from exported assets
Tweakpane – GUI tool for real-time debugging and fine-tuning parameters
Animating using GSAP
GSAP makes it easy to animate both DOM elements and WebGL objects with a unified syntax. Its timeline system brought structure to complex sequences, while combining it with ScrollTrigger streamlined scroll-based animations. We also used SplitText to handle text animations.
Home page
For the homepage, we wanted the very first thing users see to feel playful and full of life. It introduces the three main characters, all animated, and sets the tone for the rest of the experience. Every element reacts subtly to the mouse: the Ponpon mask deforms slightly, balloons collide softly, and clouds drift away in gentle repulsion. These micro-interactions make the scene feel tangible and invite visitors to explore the world of Ponpon Mania with curiosity and delight. We used a GSAP timeline to choreograph the intro animation, allowing us to trigger each element in sequence for a smooth and cohesive reveal.
// Simple repulsion we used for the clouds in our render function
// (basePos is the cloud's resting position; position and velocity persist between frames)
const dx = basePos.x - mouse.x;
const dy = basePos.y - mouse.y;
const dist = Math.sqrt(dx * dx + dy * dy);
// Repel the cloud if the mouse is near
const radius = 2; // interaction radius
const strength = 1.5; // repulsion force
const repulsion = Math.max(0, 1 - dist / radius) * strength;
// Apply the repulsion with smooth spring motion
const targetX = basePos.x + dx * repulsion;
const targetY = basePos.y - Math.abs(dy * repulsion) / 2;
velocity.x += (targetX - position.x) * springStrength * deltaTime;
velocity.y += (targetY - position.y) * springStrength * deltaTime;
position.x += velocity.x;
position.y += velocity.y;
Chapter Selection
For the chapter selection, we wanted something simple yet evocative of Ponpon's musical universe. Each chapter is presented as an album cover, inviting users to browse through them as if flipping through a record collection. We aimed for smooth, intuitive navigation: users can drag, scroll, or click to explore, and each chapter snaps into place for an easy and satisfying selection experience.
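Here is a minimal sketch of how that snapping can be wired up with GSAP; the slider object, chapterWidth and onSnap callback are assumptions used for illustration, not the actual implementation.

// Tween the slider to the nearest chapter once the user stops dragging or scrolling
function snapToNearestChapter(slider, chapterWidth, chapterCount, onSnap) {
  const index = Math.max(0, Math.min(chapterCount - 1, Math.round(slider.x / chapterWidth)));

  gsap.to(slider, {
    x: index * chapterWidth,
    duration: 0.6,
    ease: "power3.out",
    onComplete: () => onSnap(index), // the snapped chapter becomes the current selection
  });
}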
Panel Animation
For the panel animations, we wanted each panel to feel alive, bringing Justine’s illustrations to life through motion. We spent a lot of time refining every detail so that each scene feels expressive and unique. Using GSAP timelines made it easy to structure and synchronize the different animations, keeping them flexible and reusable. Here’s an example of a GSAP timeline animating a panel, showing how sequences can be chained together smoothly.
// Animate ponpons in sequence with GSAP timelines
const timeline = gsap.timeline({ repeat: -1, repeatDelay: 0.7 });
const uFlash = { value: 0 };
const flashTimeline = gsap.timeline({ paused: true });
function togglePonponGroup(index) {
ponponsGroups.forEach((g, i) => (g.mesh.visible = i === index));
}
function triggerFlash() {
const flashes = Math.floor(Math.random() * 2) + 1; // 1–2 flashes
const duration = 0.4 / flashes;
flashTimeline.clear();
for (let i = 0; i < flashes; i++) {
flashTimeline
.set(uFlash, { value: 0.6 }, i * duration) // bright flash
.to(uFlash, { value: 0, duration: duration * 0.9 }, i * duration + duration * 0.1); // fade out
}
flashTimeline.play();
}
ponponMeshes.forEach((ponpon, i) => {
timeline.fromTo(
ponpon.position,
{ y: ponpon.initialY - 0.2 }, // start slightly below
{
y: ponpon.initialY, // bounce up
duration: 1,
ease: "elastic.out",
onStart: () => {
togglePonponGroup(i); // show active group
triggerFlash(); // trigger flash
}
},
i * 1.6 // stagger delay between ponpons
);
});
About Page
On the About page, GSAP ScrollTrigger tracks the scroll progress of each section. These values drive the WebGL scenes, controlling rendering, transitions, and camera movement. This ensures the visuals stay perfectly synchronized with the user’s scrolling.
const sectionUniform = { progress: { value: 0 } };
// create a ScrollTrigger for one section
const sectionTrigger = ScrollTrigger.create({
trigger: ".about-section",
start: "top bottom",
end: "bottom top",
onUpdate: (self) => {
sectionUniform.progress.value = self.progress; // update uniform
}
});
// update scene each frame using trigger values
function updateScene() {
const progress = sectionTrigger.progress;
const velocity = sectionTrigger.getVelocity();
// drive camera movement with scroll progress
camera.position.y = map(progress, 0.75, 1, -0.4, 3.4);
camera.position.z =
5 + map(progress, 0, 0.3, -4, 0) +
map(progress, 0.75, 1, 0, 2) + velocity * 0.01;
// subtle velocity feedback on ponpon and camera
ponpon.position.y = ponpon.initialY + velocity * 0.01;
}
Thanks to the SplitText plugin, we can animate each section title line by line as it comes into view while scrolling.
// Split the text into lines for staggered animation
const split = new SplitText(titleDomElement, { type: "lines" });
const lines = split.lines;
// Create a timeline for the text animation
const tl = gsap.timeline({ paused: true });
tl.from(lines, {
x: "100%",
skewX: () => Math.random() * 50 - 25,
rotation: 5,
opacity: 0,
duration: 1,
stagger: 0.06,
ease: "elastic.out(0.7, 0.7)"
});
// Trigger the timeline when scrolling the section into view
ScrollTrigger.create({
trigger: ".about-section",
start: "top 60%",
end: "bottom top",
onEnter: () => tl.play(),
onLeaveBack: () => tl.reverse()
});
Page transitions
For the page transitions, we wanted them to add a sense of playfulness to the experience while keeping navigation snappy and fluid. Each transition was designed to fit the mood of the page, so rather than using a single generic effect, we built variations that keep the journey fresh.
Technically, the transitions blend two WebGL scenes together using a custom shader, where the previous and next pages are rendered and mixed in real time. The animation of the blend is driven by GSAP tweens, which lets us precisely control the timing and progress of the shader for smooth, responsive transitions.
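As an illustration, the blend can be driven by tweening a single shader uniform; the uniform and callback names below are assumptions, not the actual project code.

// uProgress is read by the transition shader: 0 shows the previous page, 1 the next one
const transitionUniforms = { uProgress: { value: 0 } };

function transitionToNextPage(onComplete) {
  gsap.fromTo(
    transitionUniforms.uProgress,
    { value: 0 },
    {
      value: 1,
      duration: 1.2,
      ease: "power2.inOut",
      onComplete, // swap the rendered scenes once the blend is done
    }
  );
}

In the fragment shader, the two rendered pages are then simply mixed using that progress value.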
Designing Playful Experiences
Ponpon Mania pushed us to think beyond traditional storytelling. It was a joy to work on the narrative and micro-interactions that add playfulness and energy to the comic.
Looking ahead, we plan to create new chapters, expand Ponpon’s story, and introduce small games and interactive experiences within the universe we’ve built. We’re excited to keep exploring Ponpon’s world and share more surprises with readers along the way.
Thank you for reading! We hope you enjoyed discovering the creative journey behind Ponpon Mania and the techniques we used to bring Ponpon’s world to life.
If you want to follow Ponpon, check us out on TikTok or Instagram.
It’s not a good practice to return the ID of a newly created item in the HTTP Response Body. What to do? You can return it in the HTTP Response Headers, with CreatedAtAction and CreatedAtRoute.
Even though many devs (including me!) often forget about it, REST is not a synonym for HTTP API: it is an architectural style based on the central idea of a resource.
So, when you see an HTTP request like GET http://api.example.com/games/123, you may correctly think that you are getting the details of the game with ID 123. You are asking for the resource with ID 123.
But what happens when you create a new resource? You perform a POST, insert a new item… and then? How can you know the ID of the newly created resource – if the ID is created automatically – and use it to access the details of the new item?
Get item detail
For .NET APIs, all the endpoints are exposed inside a Controller, which is a class that derives from ControllerBase:
[ApiController]
[Route("[controller]")]
public class GameBoardController : ControllerBase
{
    // all the actions here!
}
So, to define a GET endpoint, we have to create an Action and specify the HTTP verb associated by using [HttpGet].
[HttpGet]
[Route("{id}")]
public IActionResult GetDetail(Guid id)
{
    var game = Games.FirstOrDefault(_ => _.Id.Equals(id));

    if (game is not null)
    {
        return Ok(game);
    }
    else
    {
        return NotFound();
    }
}
This endpoint is pretty straightforward: if the game with the specified ID exists, the method returns it; otherwise, the method returns a NotFoundResult object that corresponds to a 404 HTTP Status Code.
Notice the [Route("{id}")] attribute: it means that the ASP.NET engine, when parsing the incoming HTTP requests, searches for an Action with the required HTTP method and a route that matches the required path. Then, when it finds the Action, it maps the route parameters ({id}) to the parameters of the C# method (Guid id).
Hey! In this section I inserted some not-so-correct info: I mean, it is generally right, but not precise. Can you spot it? Drop a comment 😉
What to do when POST-ing a resource?
Of course, you also need to create new resources: that’s where the HTTP POST verb comes in handy.
Suppose a simple data flow: you create a new object, you insert it in the database, and it is the database itself that assigns to the object an ID.
Then, you need to use the newly created object. How to proceed?
You could return the ID in the HTTP Response Body. But we are using a POST verb, so you should not return data – POST is meant to insert data, not return values.
Otherwise, you can perform a query to find an item with the exact fields you’ve just inserted. For example
POST /item {title:"foo", description: "bar"}
GET /items?title=foo&description=bar
Neither of those approaches is a good idea, uh?
We have a third possibility: return the resource location in the HTTP Response Header.
How to return it? We have 2 ways: returning a CreatedAtActionResult or a CreatedAtRouteResult.
Using CreatedAtAction
With CreatedAtAction you can specify the name of the Action (or, better, the name of the method that implements that action) as a parameter.
ps: for the sake of simplicity, the new ID is generated directly into the method – no DBs in sight!
[HttpPost]
public IActionResult Create(GameBoard game)
{
    var newGameId = Guid.NewGuid();

    var gameBoard = new GameBoardEntity
    {
        Title = game.Title,
        Duration = game.Duration,
        Id = newGameId
    };

    Games.Add(gameBoard);

    return CreatedAtAction(nameof(GetDetail), new { id = newGameId }, game);
}
What are the second and third parameters?
We can see new { id = newGameId }, which indicates the route parameters defined in the GET endpoint (remember the [Route("{id}")] attribute?) and assigns a value to each parameter.
The last parameter is the newly created item – or any object you want to return in that field.
Using CreatedAtRoute
Similar to the previous method, we have CreatedAtRoute. As you may guess from the name, it does not refer to a specific Action by its name; instead, it refers to the Route.
[HttpPost]
public IActionResult Create(GameBoard game)
{
    var newGameId = Guid.NewGuid();

    var gameBoard = new GameBoardEntity
    {
        Title = game.Title,
        Duration = game.Duration,
        Id = newGameId
    };

    Games.Add(gameBoard);

    return CreatedAtRoute("EndpointName", new { id = newGameId }, game);
}
To give a Route a name, we need to set the Name property on the Route attribute:
[HttpGet]
- [Route("{id}")]
+ [Route("{id}", Name = "EndpointName")]
public IActionResult GetDetail(Guid id)
That’s it! Easy Peasy!
Needless to say, when we perform a GET at the URL specified in the Location header, we get the details of the item we’ve just created.
What about Routes and Query Strings?
We can use the same technique to get the details of an item by retrieving it using a query string parameter instead of a route parameter:
[HttpGet]
- [Route("{id}")]
- public IActionResult GetDetail(Guid id)
+ [Route("details")]
+ public IActionResult GetDetail([FromQuery] Guid id)
{
This means that the corresponding path is /GameBoard/details?id=123.
And, without modifying the Create methods we’ve seen before, we can let ASP.NET resolve the routing and create the URL for us:
And, surprise surprise, there’s more!
We can mix route parameters with query string parameters, and the Location header will hold the right value for the path.
Let’s update the GetDetail method: now the resource ID is included in the route, and a new parameter – title – is passed in query string to filter for the name of the game:
[HttpGet]
[Route("details/{id}")]
public IActionResult GetDetail([FromRoute] Guid id, [FromQuery] string title)
{
    var game = Games.FirstOrDefault(_ =>
        _.Id.Equals(id) && _.Title.Equals(title, StringComparison.OrdinalIgnoreCase)
    );
This means that we need to pass a new field in the object passed to the CreatedAtRoute and CreatedAtAction methods:
- return CreatedAtRoute("EndpointName", new { id = newGameId }, game);
+ return CreatedAtRoute("EndpointName", new { id = newGameId, title = game.Title }, game);
See the title field?
When creating a new item, we can see the correct path in the Response Header:
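For instance, assuming the API runs locally and uses the mixed route above, the relevant part of the response would look roughly like this (the GUID is whatever ID was just generated, and foo is the posted title):

HTTP/1.1 201 Created
Location: https://localhost:5001/GameBoard/details/3fa85f64-5717-4562-b3fc-2c963f66afa6?title=foo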
Wrapping up
We’ve seen how to manage the creation of an item when developing a REST API: depending on the way you define routes, you can use CreatedAtRoute or CreatedAtAction.
Remember that REST APIs are based on the idea of manipulating resources: every HTTP verb has its own meaning, and you should always consider it when developing an endpoint. Is it a GET? We should not change the status of a resource. Is it a POST? We should not return the resource itself – but we can return a reference to it.
At the beginning of 2025, I finally decided to build myself a new portfolio. I still pretty much liked the one I made back in 2021, but I felt the need to put to good use all the cool stuff I’ve learned these past couple years working with WebGPU. And, besides, half of the projects featured in my case studies had been put offline anyway, so it was about time.
I didn’t really know where I was going at this point, except that:
It would, of course, feature multiple procedurally generated WebGPU scenes. I already had a few concepts to explore in mind, like particles or boids simulation.
I wanted to take care of the design myself. It may seem weird, especially since I was very happy with what Gilles came up with when designing my last portfolio, and also because I do suck at design. But this would give me more freedom, and I’ve also always liked building things from scratch on my own.
Last but not least, it had to be fun!
1. The journey
The (tough) design and content process
Don’t do this!
At first, I had no idea what to do design wise. Fonts, colors: there are so many things that could go wrong.
I started with simple light and dark colors, kept the fonts Gilles had chosen for my previous portfolio and started to copy/paste its old text content. It didn’t feel that great, and it wasn’t fun for sure.
The very first design iterations… Still a long way to go!
I definitely needed colors. I could have wasted a few hours (or days) choosing the right pairing, but instead I decided this could be the right opportunity to use this random color palette generator utility I’d coded a few years ago. I cleaned the code a bit, created a repo, published it to npm and added it to my project. I also slightly changed the tone of the copywriting, and that led me to something still not that great, but a bit more fun.
Slowly getting there
I let it sit for a while and started working on other parts of the site, such as integrating the CMS or experimenting with the WebGPU scenes. It’s only after a long iteration process that I finally settled on this kind of old school video games retro vibe mixed with a more cheerful, cartoonish aesthetic, almost Candy Crush-esque. Impactful headings, popping animations, banded gradients… you name it.
Of course, I never went as far as creating a Figma project (I did select a few reference images as a moodboard though) and just tested a ton of stuff directly with code until I felt it wasn’t that bad anymore. All in all, it was a very long and painful process, and I guess every designer would agree at this point: don’t do this!
A few images from my final moodboard – all credits go to their respective authors.
Do you actually read portfolios content?
Another painful point was to settle on the actual content and overall structure of the site. Do I need detailed case studies pages? Do I need pages at all? Will the users even read all those long blocks of text I will struggle to write?
In the end, I chose to drop the case studies pages. I had a couple of reasons to do so:
Oftentimes the project ends up being taken offline for various reasons, and you end up showcasing something the user cannot visit anymore. This is exactly what happened on my previous portfolio.
Most of the client work I’ve been doing those past years has been for agencies, and I’m not always allowed to publicly share them. I have no problem with that, but it slightly reduced the number of projects I could highlight.
From there on, it was a quick decision to just go with a single landing page. I’d put direct links to the projects I could highlight and small videos of all the other projects or personal works I could feature. On top of that, I’d add a few “about” sections mixed with my WebGPU scenes, and that’d be the gist of it.
Speaking of the WebGPU scenes, I really wanted them to be meaningful, not just a technical demonstration of what I could do. But we’ll get to that later.
The final UX twist
After a few months, I felt like I was entering the final stage of development. The page structure was mostly done, all my various sections were there and I was working on the final animation and micro-interaction tweaks.
So I took a step back, and looked back at my initial expectations. I had my WebGPU scenes showcasing my various technical skills. I had handled the design myself, and it wasn’t that bad. But were the flashy colors and animations enough to make it a really fun experience overall?
I think you already know the answer. Something was missing. Except for the random color palette switcher, the UX basically consisted of scroll-driven animations. Most of the 3D scenes interactions were rudimentary. I needed an idea.
The design already had this video game cheerful look. So… What if I turned my whole portfolio into a game? Once again, I started writing down my ideas:
The user would need to interact with the different UI elements to unlock the theme switcher and color palette generator buttons.
Each WebGPU scene could serve as a way to unlock the following content, acting as a very basic “puzzle” game.
Keep track of the user’s overall progress.
Allow the user to skip the whole game process if they want to.
This means most users wouldn’t ever make it to the footer, or use this random palette generator tool I’ve struggled to implement. This might very well be the riskiest, stupidest decision I’ve made so far. But it would give my portfolio this unique and fun touch I was looking for in the first place, so I went all in.
Of course, it goes without saying it implied a major refactoring of the whole code and I needed to come up with original interaction ideas for the WebGPU scenes, but I like to think it was worth it.
Gamification mechanisms: unlocking content and rewarding message
Are you one of the few that unlocked the color palette generator button?
2. Technical study
Now that you know all the whys, let’s have a look at the hows!
Tech stack
I’ve decided to try Sanity Studio as I’ve never worked with it before, and since I knew it would be a relatively small project, it’d be a perfect fit to start using it. Even though I felt like I just scratched its surface, I liked the overall developer experience it provided. On the other hand, I already had a good experience working with Nuxt 3, so this was an easy choice.
No need to mention why I chose GSAP and Lenis — everyone knows those are great tools to deliver smooth animated websites.
Of course, the WebGPU scenes had to be done with gpu-curtains, the 3D engine I spent so much time working on these past two years. It was a great way to test it in a real-life scenario and gave me the opportunity to fix a few bugs or add a couple features along the way.
And since I wanted the whole process to be as transparent as possible, I’ve published the whole source code as a monorepo on GitHub.
Animations
I won’t go too deep into how I handled the various animations, simply because I’ve essentially used CSS and a bit of GSAP here and there, mostly for canvas animations, SplitText effects or the videos carousel using ScrollTrigger observer.
The basic scenes
There are a lot of components on the website that needed to draw something onto a <canvas> and react to the theme and/or color palette changes.
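They all share a small base class; the sketch below gives a stripped-down idea of what it can look like, with names and details being assumptions rather than the actual code.

// Minimal 2D canvas scene base class (sketch): holds the canvas, reacts to palette
// changes via setColors() and keeps a progress value the scenes can use.
export class Scene {
  canvas: HTMLCanvasElement;
  ctx: CanvasRenderingContext2D;
  colors: string[];
  isReducedMotion: boolean;
  progress: number;

  constructor({ container, colors, isReducedMotion = false }: { container: HTMLCanvasElement; colors: string[]; isReducedMotion?: boolean }) {
    this.canvas = container;
    this.ctx = container.getContext("2d") as CanvasRenderingContext2D;
    this.colors = colors;
    this.isReducedMotion = isReducedMotion;
    this.progress = 0;
  }

  setColors(colors: string[]) {
    // called whenever the theme or the color palette changes
    this.colors = colors;
  }

  onProgress(progress: number) {
    // remnant of the scroll-driven version, still used by a few scenes
    this.progress = progress;
  }

  render() {
    // each scene implements its own 2D canvas drawing here
  }
}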
Since switching theme from light to dark (or vice versa) also updates the color palette by tweaking the HSV value component of the colors a bit, I’ve just put a setColors() method in there to handle these changes.
The progress handling here is actually a remnant of when the WebGPU scenes’ animations were mostly scroll-driven (before I introduced the game mechanisms), but since a few scenes still used it, I kept it in there.
All the 2D canvas scenes extend that class, including the WebGPU fallback scenes, the theme switcher button or the dynamic favicon generator (did you notice that?).
The WebGPU scenes
One of the very cool features introduced by WebGPU is that you can render to multiple <canvas> elements using only one WebGPU device. I used this to build 4 different scenes (we’ll take a closer look at each of them below), that all extend a WebGPUScene.ts class:
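import { QualityManager } from "./QualityManager";

// Simplified sketch of the shared base class – the real WebGPUScene differs in its details.
export class WebGPUScene {
  gpuCurtains: any; // the shared gpu-curtains instance (one WebGPU device for all scenes)
  container: HTMLElement;
  qualityManager: QualityManager;
  quality: number;

  constructor({ gpuCurtains, container, targetFPS = 60 }: { gpuCurtains: any; container: HTMLElement; targetFPS?: number }) {
    this.gpuCurtains = gpuCurtains;
    this.container = container;

    this.qualityManager = new QualityManager({
      label: "WebGPU scene quality",
      targetFPS,
      onQualityChange: (newQuality) => this.onQualityChange(newQuality),
    });
    this.quality = this.qualityManager.quality.current;
  }

  onQualityChange(newQuality: number) {
    // each scene overrides this to adapt pixel ratio, shadows, post processing…
    this.quality = newQuality;
  }

  render() {
    this.qualityManager.update();
  }

  destroy() {
    // scenes override this to dispose of their GPU resources
  }
}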
In the real version, this class also handles the creation of a Tweakpane GUI folder (useful for debugging or tweaking values), but for the sake of clarity I removed the related code here.
As you can see, each of these scenes closely monitors its own performance using a custom QualityManager class. We’ll talk about that later, in the performance section.
Okay, now that we have the basic architecture in mind, let’s break down each of the WebGPU scenes!
Since WebGPU is not fully supported yet, I’ve created fallback versions using the 2D canvas API and the Scene class we’ve seen above for each of the following scenes.
Hero scene
The scenes featured in the portfolio somehow respect a kind of complexity order, meaning the more you advance in the portfolio, the more technically involved the scenes become.
In that way, the hero scene is by far the simplest technically speaking, but it had to look particularly striking and engaging to immediately capture the user’s attention. It was conceived as some sort of mobile puzzle game splash screen.
Let’s go!
It’s made of a basic, single fullscreen quad. The idea here is to first rotate its UV components each frame, map them to polar coordinates and use that to create colored triangle segments.
// Center UVs at (0.5, 0.5)
var centeredUV = uv - vec2f(0.5);
// Apply rotation using a 2D rotation matrix
let angleOffset = params.time * params.speed; // Rotation angle in radians
let cosA = cos(angleOffset);
let sinA = sin(angleOffset);
// Rotate the centered UVs
centeredUV = vec2<f32>(
cosA * centeredUV.x - sinA * centeredUV.y,
sinA * centeredUV.x + cosA * centeredUV.y
);
// Convert to polar coordinates
let angle = atan2(centeredUV.y, centeredUV.x); // Angle in radians
let radius = length(centeredUV);
// Map angle to triangle index
let totalSegments = params.numTriangles * f32(params.nbColors) * params.fillColorRatio;
let normalizedAngle = (angle + PI) / (2.0 * PI); // Normalize to [0,1]
let triIndex = floor(normalizedAngle * totalSegments); // Get triangle index
// Compute fractional part for blending
let segmentFraction = fract(normalizedAngle * totalSegments); // Value in [0,1] within segment
let isEmpty = (i32(triIndex) % i32(params.fillColorRatio)) == i32(params.fillColorRatio - 1.0);
let colorIndex = i32(triIndex / params.fillColorRatio) % params.nbColors; // Use half as many color indices
let color = select(vec4(params.colors[colorIndex], 1.0), vec4f(0.0), isEmpty);
There’s actually a wavy noise applied to the UV beforehand using concentric circles, but you get the idea.
Interestingly enough, the most difficult part was to achieve the rounded rectangle entering animation while preserving the correct aspect ratio. This was done using this function:
fn roundedRectSDF(uv: vec2f, resolution: vec2f, radiusPx: f32) -> f32 {
let aspect = resolution.x / resolution.y;
// Convert pixel values to normalized UV space
let marginUV = vec2f(radiusPx) / resolution;
let radiusUV = vec2f(radiusPx) / resolution;
// Adjust radius X for aspect ratio
let radius = vec2f(radiusUV.x * aspect, radiusUV.y);
// Center UV around (0,0) and apply scale (progress)
var p = uv * 2.0 - 1.0; // [0,1] → [-1,1]
p.x *= aspect; // fix aspect
p /= max(0.0001, params.showProgress); // apply scaling
p = abs(p);
// Half size of the rounded rect
let halfSize = vec2f(1.0) - marginUV * 2.0 - radiusUV * 2.0;
let halfSizeScaled = vec2f(halfSize.x * aspect, halfSize.y);
let d = p - halfSizeScaled;
let outside = max(d, vec2f(0.0));
let dist = length(outside) + min(max(d.x, d.y), 0.0) - radius.x * 2.0;
return dist;
}
Highlighted videos slider scene
Next up is the highlighted videos slider. The original idea came from an old WebGL prototype I had built a few years ago and never used.
Do you spot any similarities?
The idea is to displace the planes’ vertices to wrap them around a cylinder.
var position: vec3f = attributes.position;
// curve
let angle: f32 = 1.0 / curve.nbItems;
let cosAngle = cos(position.x * PI * angle);
let sinAngle = sin(position.x * PI * angle);
position.z = cosAngle * curve.itemWidth;
position.x = sinAngle;
I obviously used this for the year titles, whereas the videos and trail effects behind them are distorted using a post-processing pass.
While this was originally tied to the vertical scroll values (and I really liked the feeling it produced), I had to update its behavior when I switched to the whole gamification idea, making it a horizontal carousel.
Going at the speed of light!
Thanks to gpu-curtains DOM to WebGPU syncing capabilities, it was relatively easy to set up the videos grid prototype using the Plane class.
The trail effect is done using a compute shader writing to a storage texture. The compute shader only runs when necessary, which means when the slider is moving. I’m sure it could have been done in a thousand different ways, but it was a good excuse to play with compute shaders and storage textures. Here’s a simplified sketch of the kind of compute shader involved:
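// NOTE: minimal sketch of the idea, not the actual shader from the site.
// Each frame: fade the previous trail texture, then stamp a soft disc under
// the pointer. Binding layout, struct and names are assumptions.
struct TrailParams {
  mouse: vec2f,       // pointer position in UV space
  radius: f32,        // brush radius in UV space
  dissipation: f32    // how much of the previous frame survives (e.g. 0.95)
};

@group(0) @binding(0) var previousTrail: texture_2d<f32>;
@group(0) @binding(1) var trailTexture: texture_storage_2d<rgba8unorm, write>;
@group(0) @binding(2) var<uniform> params: TrailParams;

@compute @workgroup_size(8, 8) fn main(
  @builtin(global_invocation_id) id: vec3<u32>
) {
  let dims = textureDimensions(previousTrail);
  if (id.x >= dims.x || id.y >= dims.y) {
    return;
  }

  let uv = (vec2f(f32(id.x), f32(id.y)) + 0.5) / vec2f(f32(dims.x), f32(dims.y));

  // fade the previous frame
  var color = textureLoad(previousTrail, vec2<i32>(id.xy), 0) * params.dissipation;

  // stamp a soft disc under the pointer
  let brush = 1.0 - smoothstep(0.0, params.radius, distance(uv, params.mouse));
  color = max(color, vec4f(brush));

  textureStore(trailTexture, vec2<i32>(id.xy), color);
}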
I thought I was done here, but while running production build tests I stumbled upon an issue. Unfortunately, preloading all those videos to use as WebGPU textures resulted in a huge initial payload and also significantly affected the CPU load. To mitigate that, I implemented sequential video preloading, waiting for each video to have enough data before loading the next one. This greatly improved the initial load time and reduced the CPU overhead.
Sequential videos loading waterfall
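A stripped-down version of that sequential preloading is sketched below; the helper name and the choice of the canplaythrough event are assumptions, and the real implementation may differ.

// Load the slider videos one after another: wait until a video has buffered
// enough to play through before requesting the next one.
async function preloadVideosSequentially(urls) {
  const videos = [];

  for (const url of urls) {
    const video = document.createElement("video");
    video.muted = true;
    video.playsInline = true;
    video.preload = "auto";
    video.src = url;

    await new Promise((resolve) => {
      video.addEventListener("canplaythrough", resolve, { once: true });
      video.load();
    });

    videos.push(video);
  }

  return videos; // ready to be uploaded as WebGPU textures
}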
Invoices scene
The third WebGPU scene was initially supposed to be my own take on 3D boids simulations, using instancing and a compute shader. After a bit of work, I had a bunch of instances following my mouse, but the end result was not living up to my expectations. The spheres were sometimes overlapping each other, or disappearing behind the edges of the screen. I kept improving it, adding self-collision, edge detection and attraction/repulsion mechanisms until I was happy enough with the result.
I like to call it the “invoices” scene, because the sphere instances actually represent all the invoices I issued during my freelance career, scaled based on their amounts. Since I’m using Google Sheets to handle most of my accounting, I’ve made a little script that gathers all my invoice amounts in a single, separate private sheet each time I update my accounting sheets. I then fetch and parse that sheet to create the instances. It was a fun little side exercise and turns this scene into an ironically meaningful experiment: each time you click and hold, you kind of help me collect my money.
Give me my money!
The compute shader uses a buffer ping-pong technique: you start with two identically filled buffers (e.g. packed raw data) then at each compute dispatch call, you read the data from the first buffer and update the second one accordingly. Once done, you swap the two buffers before the next call and repeat the process. If you’re familiar with WebGL, this is often done with textures. WebGPU and compute shaders allow us to do so with buffers, which is way more powerful. Here is the complete compute shader code:
struct ParticleB {
position: vec4f,
velocity: vec4f,
rotation: vec4f,
angularVelocity: vec4f,
data: vec4f
};
struct ParticleA {
position: vec4f,
velocity: vec4f,
rotation: vec4f,
angularVelocity: vec4f,
data: vec4f
};
struct SimParams {
deltaT: f32,
mousePosition: vec3f,
mouseAttraction: f32,
spheresRepulsion: f32,
boxReboundFactor: f32,
boxPlanes: array<vec4f, 6>
};
@group(0) @binding(0) var<uniform> params: SimParams;
@group(0) @binding(1) var<storage, read> particlesA: array<ParticleA>;
@group(0) @binding(2) var<storage, read_write> particlesB: array<ParticleB>;
fn constrainToFrustum(pos: vec3<f32>, ptr_velocity: ptr<function, vec3<f32>>, radius: f32) -> vec3<f32> {
var correctedPos = pos;
for (var i = 0u; i < 6u; i++) { // Loop through 6 frustum planes
let plane = params.boxPlanes[i];
let dist = dot(plane.xyz, correctedPos) + plane.w;
if (dist < radius) { // If inside the plane boundary (radius = 1)
// Move the point inside the frustum
let correction = plane.xyz * (-dist + radius); // Push inside the frustum
// Apply the position correction
correctedPos += correction;
// Reflect velocity with damping
let normal = plane.xyz;
let velocityAlongNormal = dot(*(ptr_velocity), normal);
if (velocityAlongNormal < 0.0) { // Ensure we only reflect if moving towards the plane
*(ptr_velocity) -= (1.0 + params.boxReboundFactor) * velocityAlongNormal * normal;
}
}
}
return correctedPos;
}
fn quaternionFromAngularVelocity(omega: vec3f, dt: f32) -> vec4f {
let theta = length(omega) * dt;
if (theta < 1e-5) {
return vec4(0.0, 0.0, 0.0, 1.0);
}
let axis = normalize(omega);
let halfTheta = 0.5 * theta;
let sinHalf = sin(halfTheta);
return vec4(axis * sinHalf, cos(halfTheta));
}
fn quaternionMul(a: vec4f, b: vec4f) -> vec4f {
return vec4(
a.w * b.xyz + b.w * a.xyz + cross(a.xyz, b.xyz),
a.w * b.w - dot(a.xyz, b.xyz)
);
}
fn integrateQuaternion(q: vec4f, angularVel: vec3f, dt: f32) -> vec4f {
let omega = vec4(angularVel, 0.0);
let dq = 0.5 * quaternionMul(q, omega);
return normalize(q + dq * dt);
}
@compute @workgroup_size(64) fn main(
@builtin(global_invocation_id) GlobalInvocationID: vec3<u32>
) {
var index = GlobalInvocationID.x;
var vPos = particlesA[index].position.xyz;
var vVel = particlesA[index].velocity.xyz;
var collision = particlesA[index].velocity.w;
var vQuat = particlesA[index].rotation;
var angularVelocity = particlesA[index].angularVelocity.xyz;
var vData = particlesA[index].data;
let sphereRadius = vData.x;
var newCollision = vData.y;
collision += (newCollision - collision) * 0.2;
collision = smoothstep(0.0, 1.0, collision);
newCollision = max(0.0, newCollision - 0.0325);
let mousePosition: vec3f = params.mousePosition;
let minDistance: f32 = sphereRadius; // Minimum allowed distance between spheres
// Compute attraction towards sphere 0
var directionToCenter = mousePosition - vPos;
let distanceToCenter = length(directionToCenter);
// Slow down when close to the attractor
var dampingFactor = smoothstep(0.0, minDistance, distanceToCenter);
if (distanceToCenter > minDistance && params.mouseAttraction > 0.0) { // Only attract if outside the minimum distance
vVel += normalize(directionToCenter) * params.mouseAttraction * dampingFactor;
vVel *= 0.95;
}
// Collision Handling: Packing spheres instead of pushing them away
var particlesArrayLength = arrayLength(&particlesA);
for (var i = 0u; i < particlesArrayLength; i++) {
if (i == index) {
continue;
}
let otherPos = particlesA[i].position.xyz;
let otherRadius = particlesA[i].data.x;
let collisionMinDist = sphereRadius + otherRadius;
let toOther = otherPos - vPos;
let dist = length(toOther);
if (dist < collisionMinDist) {
let pushDir = normalize(toOther);
let overlap = collisionMinDist - dist;
let pushStrength = otherRadius / sphereRadius; // radius
// Push away proportionally to overlap
vVel -= pushDir * (overlap * params.spheresRepulsion) * pushStrength;
newCollision = min(1.0, pushStrength * 1.5);
let r = normalize(cross(pushDir, vVel));
angularVelocity += r * length(vVel) * 0.1 * pushStrength;
}
}
let projectedVelocity = dot(vVel, directionToCenter); // Velocity component towards mouse
let mainSphereRadius = 1.0;
if(distanceToCenter <= (mainSphereRadius + minDistance)) {
let pushDir = normalize(directionToCenter);
let overlap = (mainSphereRadius + minDistance) - distanceToCenter;
// Push away proportionally to overlap
vVel -= pushDir * (overlap * params.spheresRepulsion) * (2.0 + params.mouseAttraction);
newCollision = 1.0;
if(params.mouseAttraction > 0.0) {
vPos -= pushDir * overlap;
}
let r = normalize(cross(pushDir, vVel));
angularVelocity += r * length(vVel) * 0.05;
}
vPos = constrainToFrustum(vPos, &vVel, sphereRadius);
// Apply velocity update
vPos += vVel * params.deltaT;
angularVelocity *= 0.98;
let updatedQuat = integrateQuaternion(vQuat, angularVelocity, params.deltaT);
// Write back
particlesB[index].position = vec4(vPos, 0.0);
particlesB[index].velocity = vec4(vVel, collision);
particlesB[index].data = vec4(vData.x, newCollision, vData.z, vData.w);
particlesB[index].rotation = updatedQuat;
particlesB[index].angularVelocity = vec4(angularVelocity, 1.0);
}
One of my main inspirations for this scene was this awesome demo by Patrick Schroen. I spent a lot of time looking for the right rendering tricks to use and finally settled on volumetric lighting. The implementation is quite similar to what Maxime Heckel explained in this excellent breakdown article. Funnily enough, I was already deep into my own implementation when he released that piece, and I owe him the idea of using a blue noise texture.
Volumetric lighting debugging
As a side note, during the development phase this was the first scene that required an actual user interaction and it played a pivotal role in my decision to turn my folio into a game.
Open source scene
For the last scene, I wanted to experiment a bit more with particles and curl noise because I’ve always liked how organic and beautiful it can get. I had already published an article using these concepts, so I had to come up with something different. Jaume Sanchez’ Polygon Shredder definitely was a major inspiration here.
Since this experiment was part of my open source commitment section, I had the idea to use my GitHub statistics as a data source for the particles. Each statistic (number of commits, followers, issues closed and so on) is assigned to a color and turned into a bunch of particles. You can even toggle them on and off using the filters in the information pop-up. Once again, this changed a rather technical demo into something more meaningful.
Curl noise and particles are always a good match
While working on the portfolio, I was also exploring new rendering techniques with gpu-curtains such as planar reflections. Traditionally used for mirror effects or floor reflections, it consists of rendering a part of your scene a second time but from a different camera angle and projecting it onto a plane. Having nailed this, I thought it would be a perfect match there and added it to the scene.
Last but not least, and as a reminder of the retro video games vibe, I wanted to add a pixelated mouse trail post-processing effect. I soon realized it would be way too much though, and ended up showing it only when the user is actually drawing a line, making it more subtle.
Using the filters can actually help you unlock features!
Performance and accessibility
On such highly interactive and immersive pages, performance is key. Here are a few tricks I’ve used to try to maintain the most fluid experience across all devices.
Dynamic imports
I’ve used Nuxt dynamically imported components and lazy hydration for almost every non-critical component of the page. In the same way, all WebGPU scenes are dynamically loaded only if WebGPU is supported. This significantly decreased the initial page load time.
// pseudo code
import type { WebGPUHeroScene } from "~/scenes/hero/WebGPUHeroScene";
import { CanvasHeroScene } from "~/scenes/hero/CanvasHeroScene";
let scene: WebGPUHeroScene | CanvasHeroScene | null;
const canvas = useTemplateRef("canvas");
const { colors } = usePaletteGenerator();
onMounted(async () => {
const { $gpuCurtains, $hasWebGPU, $isReducedMotion } = useNuxtApp();
if ($hasWebGPU && canvas.value) {
const { WebGPUHeroScene } = await import("~/scenes/hero/WebGPUHeroScene");
scene = new WebGPUHeroScene({
gpuCurtains: $gpuCurtains,
container: canvas.value,
colors: colors.value,
});
} else if (canvas.value) {
scene = new CanvasHeroScene({
container: canvas.value,
isReducedMotion: $isReducedMotion,
colors: colors.value,
});
}
});
I’m not particularly fond of Lighthouse reports, but as you can see the test result is quite good (note that it’s running without WebGPU, though).
PageSpeed Insights report
Monitoring WebGPU performance in real time
I’ve briefly mentioned it earlier, but each WebGPU scene actually monitors its own performance by keeping track of its FPS rate in real time. To do so, I’ve written two separate classes: FPSWatcher, which records the average FPS over a given period of time, and QualityManager, which uses an FPSWatcher to set a current quality rating on a 0 to 10 scale based on the average FPS.
It’s very basic: I just record the elapsed time between two render calls, put that into an array and run a callback every updateDelay milliseconds with the latest FPS average value. It is then used by the QualityManager class, that does all the heavy lifting to assign an accurate current quality score:
import type { FPSWatcherParams } from "./FPSWatcher";
import FPSWatcher from "./FPSWatcher";
export interface QualityManagerParams {
label?: string;
updateDelay?: FPSWatcherParams["updateDelay"];
targetFPS?: number;
onQualityChange?: (newQuality: number) => void;
}
export class QualityManager {
label: string;
fpsWatcher: FPSWatcher;
targetFPS: number;
#lastFPS: number | null;
#active: boolean;
onQualityChange: (newQuality: number) => void;
quality: {
current: number;
min: number;
max: number;
};
constructor({
label = "Quality manager",
updateDelay = 1000,
targetFPS = 60,
onQualityChange = (newQuality) => {},
}: QualityManagerParams = {}) {
this.label = label;
this.onQualityChange = onQualityChange;
this.quality = {
min: 0,
max: 10,
current: 7,
};
this.#active = true;
this.targetFPS = targetFPS;
this.#lastFPS = null;
this.fpsWatcher = new FPSWatcher({
updateDelay,
onWatch: (averageFPS) => this.onFPSWatcherUpdate(averageFPS),
});
}
get active() {
return this.#active;
}
set active(value: boolean) {
if (!this.active && value) {
this.fpsWatcher.restart();
}
this.#active = value;
}
onFPSWatcherUpdate(averageFPS = 0) {
const lastFpsRatio = this.#lastFPS
? Math.round(averageFPS / this.#lastFPS)
: 1;
const fpsRatio = (averageFPS + lastFpsRatio) / this.targetFPS;
// if fps ratio is over 0.95, we should increase
// else we decrease
const boostedFpsRatio = fpsRatio / 0.95;
// smooth change multiplier avoid huge changes in quality
// except if we've seen a big change from last FPS values
const smoothChangeMultiplier = 0.5 * lastFpsRatio;
// quality difference that should be applied (number with 2 decimals)
const qualityDiff =
Math.round((boostedFpsRatio - 1) * 100) * 0.1 * smoothChangeMultiplier;
if (Math.abs(qualityDiff) > 0.25) {
const newQuality = Math.min(
Math.max(
this.quality.current + Math.round(qualityDiff),
this.quality.min
),
this.quality.max
);
this.setCurrentQuality(newQuality);
}
this.#lastFPS = averageFPS;
}
setCurrentQuality(newQuality: number) {
this.quality.current = newQuality;
this.onQualityChange(this.quality.current);
}
update() {
if (this.active) {
this.fpsWatcher.update();
}
}
}
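For reference, a minimal FPSWatcher matching the way QualityManager uses it could look like the sketch below; this is an assumption-based reconstruction, not necessarily the exact class from the repo.

export interface FPSWatcherParams {
  updateDelay?: number; // how often (in ms) the average FPS is reported
  onWatch?: (averageFPS: number) => void;
}

export default class FPSWatcher {
  updateDelay: number;
  onWatch: (averageFPS: number) => void;
  frames: number[];
  lastTime: number;
  elapsedTime: number;

  constructor({ updateDelay = 1000, onWatch = () => {} }: FPSWatcherParams = {}) {
    this.updateDelay = updateDelay;
    this.onWatch = onWatch;
    this.frames = [];
    this.lastTime = performance.now();
    this.elapsedTime = 0;
  }

  restart() {
    this.frames = [];
    this.elapsedTime = 0;
    this.lastTime = performance.now();
  }

  update() {
    const now = performance.now();
    const delta = now - this.lastTime;
    this.lastTime = now;
    this.elapsedTime += delta;
    this.frames.push(delta);

    if (this.elapsedTime >= this.updateDelay) {
      // average frame time over the period, converted to frames per second
      const averageDelta = this.frames.reduce((a, b) => a + b, 0) / this.frames.length;
      this.onWatch(1000 / averageDelta);
      this.restart();
    }
  }
}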
The most difficult part here is to smoothly handle the quality changes to avoid huge drops or gains in quality. You also don’t want to fall into a loop where, for example:
The average FPS are poor, so you degrade your current quality.
You detect a quality loss and therefore decide to switch off an important feature, such as shadow mapping.
Removing the shadow mapping gives you an FPS boost, and after the expected delay the current quality is upgraded.
You detect a quality gain, decide to re-enable shadow mapping and soon enough, you’re back to step 1.
Typically, the quality rating is used to update things such as the current pixel ratio of the scene, frame buffers resolutions, number of shadow maps PCF samples, volumetric raymarching steps and so on. In worst case scenarios, it can even disable shadow mapping or post processing effects.
Accessibility
Finally, the site had to respect at least the basic accessibility standards. I’m not an accessibility expert and I may have made a few mistakes here and there, but the key points are that the HTML is semantically correct, it is possible to navigate using the keyboard and the prefers-reduced-motion preference is respected. I achieved that by disabling the gamification concept entirely for these users, removing every CSS and JavaScript animation, and making the scenes fall back to their 2D canvas versions, without being animated at all.
Conclusion
Well, it was a long journey, wasn’t it?
Working on my portfolio these past 6 months has been a truly demanding task, technically but also emotionally. I’m still having a lot of self doubts about the overall design, key UX choices or level of creativity. I also do think that it kind of honestly sums up who I am as a developer but also as a person. In the end, it’s probably what matters most.
I hope that you’ve learnt a few things reading this case study, whether it be about technical stuff or my own creative process. Thank you all, and remember: stay fun!
Doing a tech talk is easy. Doing a good talk is harder. We’re going to see some tips to improve the delivery of your conferences.
I love to deliver tech talks: they help me improve both my technical and communication skills.
Hey! If you’re starting to do tech talks, don’t miss my article Thoughts after my very first public speech where I explained what I did right and what I did wrong at my very first tech talk. Learn from my errors, and avoid them!💪
On one hand, teaching stuff requires technical preparation: you need to know what you’re talking about, and you need to know it pretty well. Even more, you need to know some advanced stuff to give the audience something they will remember – if everything is obvious, what will they remember from your talk?
On the other hand, tech talks require good communication skills: your job is to deliver a message to your audience, and you can do it only if your intent is clear and you avoid talking about useless (or misleading) stuff.
But, in the end, only having good content is not enough: you need to shape the talk in a way that stimulates the attention of the public and does not bore them.
note: I still have a lot of room for improvement, so I still have to work on myself to improve my talks!
1- Tell what topics your talk covers
Why should someone attend your talk?
This is a simple question, but it must be clear to you way before submitting your talk to CFPs. Usually, the best reason to attend is the content of the conference (unless you attend a conference only for the free pizza and swag!).
You should always make the topic of your talk explicit.
Where, and when?
In the title: the title should express what you’re going to say. «Azure DevOps: an intro to build and release pipelines» is better than «Let’s work with Azure DevOps!». Yes, it’s less fancy, but you are making the scope clear (build and release pipelines), the tool (Azure DevOps), and the difficulty of your talk (it’s an intro, not a talk that targets experts)
In the description of your talk: when submitting to CFPs, when sharing it on social media, and everywhere else you can add some text to describe your talk, you should add some more details. For instance, «In this session, we’re gonna see how to build and release .NET Core projects with Azure DevOps pipelines, how to use PR builds, how to manage variable substitution with Variable Groups…». This will help the reader decide whether or not to attend your session.
At the beginning of your talk: this is for people who forgot to read the session description. Repeat the points you’re gonna cover at the beginning of your talk, right after the title and the slide about who you are. In this way, attendees can leave if they find out that the topic is not what they were expecting from the title. They don’t lose time on anything that’s not interesting for them, and you don’t lose your focus watching their bored faces.
2- Divide the talks into smaller blocks
Think of your own experience: are you able to keep the focus on a 1-hour long talk? Or do you get distracted after 10 minutes, start wandering with the mind, and so on?
Well, that’s normal. Generally, people have a short attention span. This means that you cannot talk for 60 minutes about the same topic: your audience will get bored soon.
So, you should split your talk into several smaller blocks. A good idea is to separate the sub-topics into 5 or 10 minute slots, to help people understand the precise topic of a block and, if needed, pay less attention to that specific block (maybe because that’s a topic they already know, so not focusing 100% is fine).
3- Wake up the audience with simple questions
Sometimes the easiest way to regain the attention of the attendees is to ask them some simple questions: «Can you see my screen?», «Has any of you already used this tool?».
It’s easy to reply to these questions, even without thinking too much about the answer.
These kinds of questions will wake up the audience and let them focus on what you’re saying for a bit longer.
Needless to say, avoid asking those questions too many times, and don’t always repeat the same question.
4- Choose the right slide layout
Many monitors and screens are now in 16:9. So remember to adapt the slide layout to that format.
In the image below, we can see how the slide layout impacts the overall look: slides with a 4:3 layout are too small for current devices, and they just look… ugly!
Slides in 16:9 feel more natural for many screen layouts.
It’s a simple trick to remember, but it may have a great impact on your delivery.
5- Don’t move hands and body if it’s not necessary
Moving your body too much drives the attention away from the content of your talk. Avoid fidgeting, and avoid moving your hands and head too much.
Remember that every movement of your body should have a meaning. Use your movements to drive attention to a specific topic, or to imitate and explain some details.
For instance, use your hands to simulate how some modules communicate with each other.
6- Think about how the audience sees your screen
When preparing your presentation, you are used to thinking of how you see the screen: you have your monitor size and resolution, and you can adjust your content based on that info.
But you don’t know how the audience will see your screen.
If you are doing an in-person talk, pay attention to the screens the audience sees: is the resolution fine? Do you have to increase the font size? Is it fine both for folks in the front row and those in the back?
On the other hand, when doing an online talk, you don’t know which device your audience will use: PC, tablet, smart TV, smartphone?
This means that you can’t rely on the mouse cursor to point at a specific part of your monitor (eg: some text, a button, a menu item) as your audience may not see it.
A good idea is to use a tool like ZoomIt: it allows you to zoom in on a part of your screen and to draw lines on a virtual layer.
So, instead of saying «now click this button – hey, can you see my cursor?», use ZoomIt to zoom in on that button or, even better, to draw a rectangle or an arrow to highlight it.
7- Pin the presentation folder to Quick Access in File Explorer
As we’ve already discussed in my article 10 underestimated tasks to do before your next virtual presentation, you should hide all the desktop icons – they tend to distract the audience. This also implies that even the folder you use to store the presentation assets has to be hidden.
But now… Damn, you’ve just closed the folder with all the conference assets! Now you have to find it again and navigate through your personal folders.
If you use Windows, luckily you can simply right-click on your folder, click Pin to Quick access
and have it displayed in the Quick Access panel of any File Explorer window you open.
In this way, you can easily reach any folder with just one click.
So your “main” folder will not be visible on your desktop, but you can still open it via the Quick Access panel.
8- Stress when a topic is important
You have created the presentation. You know why you built it, and what the important parts are. Does your audience know what is important to remember?
If you are talking for one hour, you are giving the public a lot of information. Some are trivia, some are niche details, and some are the key points of a topic.
So, make it clear what is important to remember and what is just a “good-to-know”.
For instance, when talking about clean code, stress why it is important to follow a certain rule if it can be a game-changer. «Use consistent names when classes have similar meaning» and «Choose whether using tabs or spaces, and use them for all your files» are both valid tips, but the first one has a different weight compared to the latter one.
Again, spend more time on the important stuff, and explicitly tell the audience that that part is important (and why).
9- Use the slide space in the best way possible
Let’s talk about the size of the slides’ font: keep it consistent or adapt it to the text and space in the slide?
I thought that keeping it consistent was a good idea – somehow it hurts my brain seeing different sizes in different slides.
But then I realized that there are some exceptions: for example, when a slide contains only a few words or a few points in a bullet list. In that case, you should occupy the space in a better way, to avoid all the emptiness around your text.
Here we have 2 slides with the same font:
The first one is fine, the second one is too empty.
Let’s adjust the font of the second slide:
It’s a bit better. Not excellent, but at least the audience can read it. The text is a bit bigger, but you’ll hardly notice it.
10- Turn off all the notifications
It’s simple: if you are sharing your screen, you don’t want your audience to see those weird messages you receive on Discord or the spam emails on Outlook.
So, turn off all the notifications. Of course, unless you are demonstrating how to integrate your stuff with platforms like Slack, Teams et cetera.
11- Use the slides as a reference, not as a teleprompter
Avoid full sentences in your slides. Nobody’s gonna read them – even more so if the audience is not paying attention!
So, prefer putting just a few words instead of full, long sentences: you should not read your slides as if they were a teleprompter, and you should help your audience get back on track if they lose their focus.
Two bullet points like “Keep track of your progress” and “Fix weakness” are better than a single phrase like “Remember to use some tool to keep track of the progress of your project, so that you can find the weak points and fix them”.
– of course, unless it is a quote: you should write the full text if it is important.
12- “End” is the word
We’re nearly at the end of this session.
A simple yet powerful statement that can wake up your audience.
When you’ve lost your focus, you get triggered by some words, like end. You unconsciously remember that you are at that conference for some reason, and you have to focus to get the most from the last minutes of the conference.
So, try triggering the subconscious of your audience with some words like ending.
13- Recap what you’ve explained
Finally, you’re at the end of your talk.
What should the audience remember from it?
Spend some time recapping what you’ve seen, what the key points of your talk are, and what you’d like the others to remember.
It is a good way to help the audience focus again and think of questions to bring to your attention.
Wrapping up
In this article, I’ve summarized some of the things I’ve worked on to improve my tech talks.
There is still a lot to do, and a lot to learn. But I hope that those simple tricks will help other newbies like me to improve their talks.
Knowing the Assembly Version of the API you’ve deployed on an environment may be helpful for many reasons. We’re gonna see why, how to retrieve it, and how to expose it with Endpoint Routing (bye-bye Controllers and Actions!)
Sometimes it can be useful to show the version of the running Assembly in one .NET Core API endpoint: for example, when you want to know which version of your code is running in an environment, or to expose a simple endpoint that acts as a “minimal” health check.
In this article, we’re going to see how to retrieve the assembly version at runtime using C#, then we will expose it under the root endpoint of a .NET Core API without creating an API Controller, and lastly we’ll see how to set the Assembly version with Visual Studio.
How to get Assembly version
To get the Assembly version, everything we need is this snippet:
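// (the same three lines appear again in the full endpoint example below)
Assembly assembly = Assembly.GetEntryAssembly();
AssemblyInformationalVersionAttribute versionAttribute = assembly.GetCustomAttribute<AssemblyInformationalVersionAttribute>();
string assemblyVersion = versionAttribute.InformationalVersion;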
The first step is to get the info about the running assembly:
Assembly assembly = Assembly.GetEntryAssembly();
The Assembly class is part of the System.Reflection namespace, so you have to declare the corresponding using statement.
The AssemblyInformationalVersionAttribute attribute comes from the same namespace, and contains some info for the assembly manifest. You can get that info with the second line of the snippet:
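AssemblyInformationalVersionAttribute versionAttribute = assembly.GetCustomAttribute<AssemblyInformationalVersionAttribute>();
string assemblyVersion = versionAttribute.InformationalVersion;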
Next, we need to expose that value using .NET Core API.
Since we’re exposing only that value, we might not want to create a new Controller with a single Action: in this case, endpoint routing is the best choice!
In the Startup.cs file, under the Configure method, we can define how the HTTP request pipeline is configured.
By default, for ASP.NET Core APIs, you’ll see a section that allows the engine to map the Controllers to the endpoints:
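// the default mapping generated by the ASP.NET Core Web API template
app.UseEndpoints(endpoints =>
{
    endpoints.MapControllers();
});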
In this section, we can configure some other endpoints.
The easiest way is to map a single path to an endpoint and specify the returned content. We can do it by using the MapGet method, which accepts a string for the path pattern and an async Delegate for the execution:
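// a minimal example: map GET / to a fixed message
endpoints.MapGet("/", async context =>
{
    await context.Response.WriteAsync("Hi there");
});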
In this way, we will receive the message Hi there every time we call the root of our API (because of the first parameter, /), and it happens only when we use the GET HTTP Verb, because of the MapGet method.
Putting it all together
Now that we have all in place, we can join the two parts and return the Assembly version on the root of our API.
You could just return the string as it is returned from the versionAttribute.InformationalVersion property we’ve seen before. Or you could wrap it into an object.
If you don’t want to specify a class for it, you can use an ExpandoObject instance and create new properties on the fly. Then, you have to serialize it into a string, and return it in the HTTP Response:
endpoints.MapGet("/", async context =>
{
    // get assembly version
    Assembly assembly = Assembly.GetEntryAssembly();
    AssemblyInformationalVersionAttribute versionAttribute = assembly.GetCustomAttribute<AssemblyInformationalVersionAttribute>();
    string assemblyVersion = versionAttribute.InformationalVersion;

    // create the dynamic object
    dynamic result = new ExpandoObject();
    result.version = assemblyVersion;

    // serialize the object
    string versionAsText = JsonSerializer.Serialize(result);

    // return it as a string
    await context.Response.WriteAsync(versionAsText);
});
That’s it!
Of course, if you want only the version as a string without the dynamic object, you can simplify the MapGet method in this way:
endpoints.MapGet("/", async context =>
{
var version = Assembly.GetEntryAssembly().GetCustomAttribute<AssemblyInformationalVersionAttribute>().InformationalVersion;
await context.Response.WriteAsync(version);
});
But, for this example, let’s stay with the full object.
Let’s try it: update Assembly version and retrieve it from API
After tidying up the code, the UseEndpoints section will have this form:
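// a sketch of how the section could look once the logic is wrapped in an extension method
app.UseEndpoints(endpoints =>
{
    endpoints.MapControllers();
    endpoints.WithAssemblyVersionOnRoot();
});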
WithAssemblyVersionOnRoot is an extension method I created to wrap that logic and make the UseEndpoints method cleaner. If you want to learn how to create extension methods with C#, and what are some gotchas, head to this article!
To see the result, open Visual Studio, select the API project and press Alt + Enter to open the Project properties. Here, under the Package tab, define the version in the Package version section.
In this screen, you can set the value of the package that will be built.
To double-check that the version is correct, head to the bin folder and locate the exe related to your project: right-click on it, go to properties and to the details tab. Here you can see the details of that exe:
Noticed the Product version? That’s exactly what we’ve set up on Visual Studio.
So, now it’s time to run the application.
Get back to Visual Studio, run the application, and navigate to the root of the API.
Finally, we can enjoy the result!
Quite simple, isn’t it?
Wrapping up
In this article, we’ve seen how to expose the version of the running assembly on a specific route of a .NET Core API.
This is useful to help you understand which version is currently running in an environment without accessing the CD pipelines to see which version has been deployed.
Also, you can use this information as a kind of health check, since the data exposed are static and do not depend on any input or database status: the simplest match for getting info about the readiness of your application.
What other info would you add to the exposed object? Let me know in the comment section 👇
One of the fundamentals of clean code is: use meaningful names.
But choosing meaningful names takes time!
Time spent finding good names is time saved trying to figure out what you meant.
How to approach it? Good names do not come on the first try!
My suggestion is: at first, write down the code as it comes.
public static string Get(string input)
{
    char[] arr = new char[input.Length];
    int i = input.Length - 1;
    foreach (var e in input)
    {
        arr[i] = e;
        i--;
    }

    return new String(arr);
}
And then, when you have almost everything clear, choose better names for your methods, variables, and parameters.
Probably, you’ll never reach perfection. Sad, but true.
You might want to add some tests to your code, right? RIGHT??
A good moment to choose better names is while writing tests: at that moment your tests act as clients of your production code, so if you find that the name of a method does not fully represent its meaning, or that the parameter names are misleading, this is a good moment to improve them.
And don’t forget about private variables and methods!
So, what is “a good name”?
A good name should express:
its meaning (what does the method do?)
its scope (for items in loops, even var i = 0 is acceptable, if the scope is small)
what it represents (originalString is, of course, the original string)
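For instance, the reversal snippet above could be rewritten with more meaningful names (the ones below are just one possible choice):

public static string Reverse(string originalString)
{
    char[] reversedChars = new char[originalString.Length];
    int lastCharIndex = originalString.Length - 1;
    foreach (var currentChar in originalString)
    {
        reversedChars[lastCharIndex] = currentChar;
        lastCharIndex--;
    }

    return new String(reversedChars);
}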
👉 Let’s discuss it on Twitter or on the comment section below!
Azure Service Bus is a message broker generally used for sharing messages between applications. In this article, we’re gonna see an introduction to Azure Service Bus, and how to work with it using .NET and C#.
Azure Service Bus is a message broker that allows you to implement queues and pub-sub topics. It is incredibly common to use queues to manage the communication between microservices: it is a simple way to send messages between applications without binding them tightly.
In this introduction, we’re going to learn the basics of Azure Service Bus: what it is, how to create a Bus and a Queue, how to send and receive messages on the Bus with C#, and more.
This is the first part of a series about Azure Service Bus. We will see:
Azure Service Bus is a complex structure that allows you to send content through a queue.
As you may already know, a queue is… well, a queue! First in, first out!
This means that the messages will be delivered in the same order as they were sent.
Why is using a queue becoming more and more common for scalable applications?
Let’s consider this use case: you are developing a microservices-based application. With the common approach, communication occurs via HTTP: this means that
if the receiver is unreachable, the HTTP message is lost (unless you add some kind of retry policy)
if you have to scale out, you will need to add a traffic manager/load balancer to manage which instance must process the HTTP Request
On the contrary, by using a queue,
if the receiver is down, the message stays in the queue until the receiver becomes available again
if you have to scale out, nothing changes, because the first instance that receives the message removes it from the queue, so you will not have multiple receivers that process the same message.
How to create an Azure Service Bus instance
It is really simple to create a new Service Bus on Azure!
Just open Portal Azure, head to the Service Bus section, and start creating a new resource.
You will be prompted to choose which subscription will be linked to this new resource, and what will be the name of that resource.
Lastly, you will have to choose which will be the pricing tier to apply.
There are 3 pricing tiers available:
Basic: its price depends on how many messages you send. At the time of writing, with the Basic tier you pay $0.05 for every million messages sent.
Standard: Similar to the Basic tier, but allows you to have both Queues and Topics. You’ll see the difference between Queue and Topics in the next article
Premium: zone-redundant, with both Queues and Topics; of course, quite expensive
So now, you can create the resource and see it directly on the browser.
Policies and Connection Strings
The first thing to do to connect to the Azure Service Bus is to create a Policy that allows you to perform specific operations on the Bus.
By default, under the Shared access policies tab you’ll see a policy called RootManageSharedAccessKey: this is the default Policy that allows you to send and receive messages on the Bus.
To get the connection string, click on that Policy and head to Primary Connection String:
A connection string for the Service Bus looks like this:
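Endpoint=sb://<HostName>.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=<your-secret-key>
(placeholder values are shown here instead of a real host name and key)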
The first part represents the Host name: this is the value you’ve set in the creation wizard, and the one you can see on the Overview tab:
Then, you’ll see the SharedAccessKeyName field, which contains the name of the policy to use (in this case, RootManageSharedAccessKey).
Then, we have the secret Key. If you select the Primary Connection String you will use the Primary Key; same if you use the Secondary Connection String.
Keep that connection string handy, we’re gonna use it in a moment!
Adding a queue
Now that we have created the general infrastructure, we need to create a Queue. This is the core of the bus – all the messages pass through a queue.
To create one, on the Azure site head to Entities > Queues and create a new queue.
You will be prompted to add different values, but for now, we are only interested in defining its name.
Write the name of the queue and click Create.
Once you’ve created your queue (for this example, I’ve named it PizzaOrders), you’ll be able to see it in the Queues list and see its details.
You can even define one or more policies for that specific queue just as we did before: you’ll be able to generate a connection string similar to the one we’ve already analyzed, with the only difference that, here, you will see a new field in the connection string, EntityPath, whose value is the name of the related queue.
So, a full connection string will have this form:
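Endpoint=sb://<HostName>.servicebus.windows.net/;SharedAccessKeyName=<PolicyName>;SharedAccessKey=<your-secret-key>;EntityPath=PizzaOrders
(again with placeholder values; EntityPath contains the name of the queue, PizzaOrders in this example)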
ServiceBusExplorer – an OSS UI for accessing Azure Service Bus
How can you see what happens inside the Service Bus?
You have two options: use the Service Bus Explorer tool directly on Azure:
Or use an external tool.
I honestly prefer to use ServiceBusExplorer, a project that you can download from Chocolatey: this open-source tool lets you see what is happening inside Azure Service Bus. Just insert your connection string and… voilà! You’re ready to go!
With this tool, you can see the status of all the queues, as well as send, read, and delete messages.
If you want to save a connection, you have to open that tool as Administrator, otherwise, you won’t have enough rights to save it.
How to send and receive messages with .NET 5
To test it, we’re gonna create a simple project that manages pizza orders.
A .NET 5 API application receives a list of pizzas to be ordered, then it creates a new message for every pizza received and sends them into the PizzaOrders queue.
With another application, we’re gonna receive the order of every single pizza by reading it from the same queue.
For both applications, you’ll need to install the Azure.Messaging.ServiceBus NuGet package.
How to send messages on Azure Service Bus
The API application that receives pizza orders from the clients is very simple: just a controller with a single action.
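Before sending anything, we need a ServiceBusClient and a ServiceBusSender pointing to our queue. A minimal sketch (ConnectionString and QueueName are placeholders for your own values):

// create the client from the connection string we saved earlier
ServiceBusClient client = new ServiceBusClient(ConnectionString);

// create a sender for the PizzaOrders queue
ServiceBusSender sender = client.CreateSender(QueueName);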
Lastly, for every pizza order, we convert the object into a string and we send it as a message in the queue.
// Serialize as JSON string
string jsonEntity = JsonSerializer.Serialize(order);

// Create Bus Message
ServiceBusMessage serializedContents = new ServiceBusMessage(jsonEntity);

// Send the message on the Bus
await sender.SendMessageAsync(serializedContents);
Hey! Never used async, await, and Task? If you want a short (but quite thorough) introduction to asynchronous programming, head to this article!
And that’s it! Now the message is available on the PizzaOrders queue and can be received by any client subscribed to it.
Here I serialized the PizzaOrder into a JSON string. This is not mandatory: you can send messages in whichever format you want: JSON, XML, plain text, BinaryData… It’s up to you!
Once we have the messages on the Bus, we need to read them.
To demonstrate how to read messages from a queue using C#, I have created a simple Console App, named PizzaChef. The first thing to do, of course, is to install the Azure.Messaging.ServiceBus NuGet package.
As usual, we need a ServiceBusClient object to access the resources on Azure Service Bus. Just as we did before, create a new Client in this way:
ServiceBusClient serviceBusClient = new ServiceBusClient(ConnectionString);
This time, instead of using a ServiceBusSender, we need to create a ServiceBusProcessor object which, of course, will process all the messages coming from the Queue. Since receiving a message on the queue is an asynchronous operation, we need to register an Event Handler both for when we receive the message and when we receive an error:
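// a sketch: create a processor for the queue and register both handlers
// (QueueName is a placeholder; for this example it would be PizzaOrders)
ServiceBusProcessor processor = serviceBusClient.CreateProcessor(QueueName, new ServiceBusProcessorOptions());
processor.ProcessMessageAsync += PizzaItemMessageHandler;
processor.ProcessErrorAsync += PizzaItemErrorHandler;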
Note: in this article I’ll implement only the PizzaItemMessageHandler method. The PizzaItemErrorHandler, however, must be at least declared, even if empty: you will get an exception if you forget about it. Anyways, we’ll implement it in the last article of this series, the one about error handling.
To read the content received in the PizzaItemMessageHandler method, you must simply access the Message.Body property of the args parameter:
string body = args.Message.Body.ToString();
And, from here, you can do whatever you want with the body of the message. For instance, deserialize it into an object. Of course, you can reuse the PizzaOrder class we used before, or create a new class with more properties but, still, compatible with the content of the message.
public class ProcessedPizzaOrder
{
    public string Name { get; set; }
    public string[] Toppings { get; set; }

    public override string ToString()
    {
        if (Toppings?.Any() == true)
            return $"Pizza {Name} with some toppings: {string.Join(',', Toppings)}";
        else
            return $"Pizza {Name} without toppings";
    }
}
Lastly, we need to mark the message as complete.
await args.CompleteMessageAsync(args.Message);
Now we can see the full example of the PizzaItemMessageHandler implementation:
private async Task PizzaItemMessageHandler(ProcessMessageEventArgs args)
{
    try
    {
        string body = args.Message.Body.ToString();
        Console.WriteLine("Received " + body);

        var processedPizza = JsonSerializer.Deserialize<ProcessedPizzaOrder>(body);
        Console.WriteLine($"Processing {processedPizza}");

        // complete the message: the message is deleted from the queue
        await args.CompleteMessageAsync(args.Message);
    }
    catch (System.Exception ex)
    {
        // handle exception
    }
}
Does it work? NO.
We forgot to start processing the incoming messages. It’s simple: in the Main method, right after the declaration of the ServiceBusProcessor object, we need to call StartProcessingAsync to start processing and, similarly, StopProcessingAsync to end it.
Here’s the full example of the Main method: pay attention to the calls to Start and Stop processing.
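// a sketch of the Main method: ConnectionString and QueueName are placeholders,
// and the two handlers above are assumed to be declared static (or adapt accordingly)
static async Task Main(string[] args)
{
    ServiceBusClient serviceBusClient = new ServiceBusClient(ConnectionString);

    ServiceBusProcessor processor = serviceBusClient.CreateProcessor(QueueName, new ServiceBusProcessorOptions());
    processor.ProcessMessageAsync += PizzaItemMessageHandler;
    processor.ProcessErrorAsync += PizzaItemErrorHandler;

    // start listening: without this call, no message is ever processed
    await processor.StartProcessingAsync();

    Console.WriteLine("Waiting for pizza orders. Press any key to stop.");
    Console.ReadKey();

    // stop processing before shutting the application down
    await processor.StopProcessingAsync();
}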
While the call to StartProcessingAsync is mandatory (otherwise, how would you receive messages?), the call to StopProcessingAsync, in a console application, can be skipped, since we are destroying the application. At least, I think so. I still haven’t found anything that says whether to call or skip it. If you know anything, please contact me on Twitter or, even better, here in the comments section – so that we can keep the conversation going.
Wrapping up
This is part of what I’ve learned from my first approach with Azure Service Bus, and the use of Queues in microservice architectures.
Is there anything else I should say? Have you ever used queues in your applications? As usual, feel free to drop a comment in the section below, or to contact me on Twitter.
In the next article, we’re gonna explore another topic about Azure Service Bus, called… Topic! We will learn how to use them and what the difference is between a Queue and a Topic.
Since 2020, I have been working as a frontend engineer in Japan.
While my main role is in engineering, I also have a strong interest in design, and I have created numerous personal projects that reflect this passion.
I began studying web development independently in 2020, and after working as a freelancer for about four years, I am now employed at a production company (as of 2025). Recently, I’ve also developed an interest in graphic design and 3D modeling. I hope to continue expanding my knowledge across a wide range of fields—not limited to the web.
My strengths lie in creating interactive experiences, particularly in the area of animation. When it comes to design, I excel at producing visuals that are both aesthetically appealing and highly functional.
This website showcases the photographs I have taken so far. I aimed to create a soft, layered visual experience where each image gently overlaps the others.
One of the most distinctive features of the site is the view-mode toggle on the main page. It offers two modes: a refined grid layout and a slider layout. The transition animation between these modes is something I’m particularly proud of, as I believe it turned out beautifully.
I paid close attention to implementing animations that complement—rather than overpower—the photographs, which are the true focus of the site. Thanks to this balanced approach, the project has received recognition and multiple awards from various design platforms. It’s a piece of work that highlights my strengths in design.
This is the website I released as Portfolio Version 7. It became a site defined by bold parallax effects and smooth transitions. There were several challenges in this project, one of which was creating visuals using fluid expressions and generative AI.
The flower image on the homepage was generated with AI, and I explored how to present this image as beautifully as possible. That exploration led me to fluid expression. By blending monochrome and color visuals that appear and disappear within the fluid, I was able to capture both fragility and beauty.
This site embodies many of my creative experiments and challenges.
This site is a commemorative project celebrating the creation of 100 personal works. The name Layrid is a coined word combining layer and grid, reflecting the idea of organizing accumulated works within a structured grid.
Key features include a smooth opening animation, seamless navigation between subpages, and dynamic transitions of the top images. By incorporating bold WebGL-driven transitions within an experience that maintains continuity, I was able to create a striking contrast between stillness and motion.
This is my latest portfolio as of September 2025. For this site, I focused on achieving both clarity and functionality. On the homepage, you can view several of my latest projects at a glance, presented with unique open/close animations and organized for easy exploration.
In addition, the site includes an archive of personal projects as well as a Playground that showcases graphics created purely for fun. Through this site, you can explore the wide range of works I continue to produce and gain a deeper understanding of who I am.
I’m especially proud of the details, such as the subtle click sounds and the color scheme that shifts depending on the time of day.
What I value
Enjoying my work
This doesn’t mean playing around or having fun in a casual sense. We believe that great websites are born when creators approach each project with passion and genuine enjoyment. That’s why we value making the process enjoyable for ourselves first—it forms the foundation for meaningful, high-quality work.
Ensuring smooth communication
Clear and smooth communication allows us to align on direction and create with a strong shared understanding. We make it a point to engage thoughtfully and carefully with both clients and team members, ensuring we move forward without misunderstandings.
Continuing to learn every day
This industry evolves at a rapid pace, with trends constantly shifting. That’s why I make it a priority to keep learning and updating my knowledge every day, never standing still.
Final Thoughts
First of all, I would like to express my heartfelt gratitude to Codrops for giving me this opportunity. Although I’m a developer, I’m also eager to further explore the field of design. I believe that having a well-rounded skill set is essential when working in the web industry. I will continue to grow by strengthening my passions while also improving in areas where I still lack.
I actively share various websites and insights on social media, so please feel free to check them out.
Ransomware is one of the most disruptive cyber threats, encrypting critical organizational data and demanding ransom payments for restoration. While early campaigns relied on mass phishing or opportunistic malware distribution, modern ransomware operations have evolved into highly sophisticated, targeted attacks. Today’s adversaries not only infect machines but also move laterally across networks, harvest credentials, neutralize defences, and maintain persistent control—all while remaining stealthy and evading detection.
Disclaimer: The Remote Access Tools discussed in this blog are legitimate software products designed to support IT administration and remote support. This article highlights how adversaries may misuse them in ransomware campaigns if they are misconfigured, poorly managed, or left unmonitored. It does not suggest that the tools themselves are inherently vulnerable or malicious.
A key enabler of these attacks is the exploitation of legitimate Remote Access Tools (RATs) such as AnyDesk, UltraViewer, AppAnywhere, RustDesk, CloneDesk, Splashtop, and TightVNC. Originally designed for IT administration and remote support, many of these tools offer free or freely available versions, which attackers often abuse because they are easy to deploy, widely trusted, and frequently whitelisted in enterprise environments. These tools provide:
Unattended access: Connect without user interaction
Organizations often whitelist Remote Access Tools and trust their digital signatures, which attackers exploit to bypass security controls and persist stealthily. Understanding how Remote Access Tools are abused is critical for building effective defences against modern ransomware threats.
The Ransomware Kill Chain: A Step-by-Step Breakdown
The ransomware kill-chain outlines each stage of an attack, from initial access to final impact. When attackers leverage legitimate Remote Access Tools, they gain stealth, persistence, and control, making detection and mitigation more challenging.
Stage 1: Initial Access – Credential Compromise
Attackers gain legitimate access using stolen or brute-forced credentials, bypassing defences while appearing as trusted users. Targeting administrator accounts provides maximum control and enables later stages like Remote Access Tool deployment and lateral movement.
Common Attack Pathways:
Brute-force attacks against RDP/SMB endpoints
Credential reuse from leaks or past breaches
Targeting administrator accounts for maximum privileges
Detection Indicators:
Windows Event IDs 4625 → 4624 (multiple failed logins immediately followed by success)
RDP logon type 10 at unusual hours
Logins from unexpected geolocations.
Stage 2: Remote Tool Abuse – Hijacking vs. Silent Installation
After gaining access, attackers focus on Remote Access Tool deployment for stealthy persistence. They can either hijack an existing Remote Access Tool to avoid detection or perform a silent installation using signed installers with minimal footprint. Silent installation often leverages known command-line flags, vendor documentation, or reverse-engineering to find deployment parameters.
Method 1: Hijacking Existing Remote Access Tools
Enumerate installed Remote Access Tools via WMI, registry, or PowerShell.
Add attacker credentials or modify access configurations.
Avoids creating new files or processes, reducing detection risk.
Method 2: Silent Installation of Remote Access Tools
Deploy lightweight, signed installers without user interaction.
Stage 3: Persistence & Privilege Escalation
Attackers leverage registry run keys (HKCU\Software\Microsoft\Windows\CurrentVersion\Run), hidden scheduled tasks, and configuration file modifications to maintain persistence. Privilege escalation is achieved using tools like PowerRun or TrustedInstaller, allowing Remote Access Tools to run with SYSTEM privileges and bypass user-level restrictions.
Mechanisms:
Registry Run Keys: HKCU\Software\Microsoft\Windows\CurrentVersion\Run
Scheduled Tasks: Hidden tasks to auto-restart Remote Access Tools
Configuration Files: Modify config.toml (RustDesk) for unattended access
Privilege Escalation: Launch Remote Access Tool as SYSTEM using PowerRun or TrustedInstaller
Stage 4: Defense Evasion & Anti-Forensics
Using Remote Access Tools, attackers can interactively stop Antivirus services, manipulate group policies, and add Remote Access Tool directories to exclusion lists. Critical logs are cleared, and file-shredding tools are used to remove forensic evidence, making post-incident investigation difficult.
Techniques:
Stop Antivirus services: sc stop <service> or net stop <service>
Policy manipulation: Add Remote Access Tool directories to exclusions.
Log clearing: Adversaries often use the following command lines as part of Anti-Forensics to clear event logs: wevtutil cl Security, wevtutil cl System, wevtutil cl Application
File shredding: Remove forensic artifacts
Stage 5: Payload Deployment & Execution
Attackers stop Antivirus services, modify security policies, disable recovery mechanisms, clear event logs, and shred sensitive files to evade detection and hinder forensic investigations. They may also tamper with backup solutions, disable shadow copies, and use Living-off-the-Land Binaries (LOLBins) like rundll32 or PowerShell to blend malicious actions with legitimate processes. These actions ensure minimal visibility for defenders and create a safe environment for ransomware execution.
Mechanism:
Ransomware is delivered through Remote Access Tool channels, often disguised as trusted updates or administrative actions, and executed within existing remote sessions to bypass user suspicion and security monitoring.
Stage 6: Lateral Expansion
Lateral movement is facilitated through credential reuse, Remote Access Tool propagation, or exploiting enterprise Remote Access Tool deployments.
Mechanisms:
Credential reuse across endpoints
Enterprise Remote Access Tool exploitation for mass deployment
Indicators:
Multiple endpoints reporting new Remote Access Tool connections
Unauthorized scheduled tasks or registry modifications across machines
Stage 7: Impact – Encryption & Lockout
Ransomware payload execution triggers data encryption, account lockouts, and Remote Access Tool credential changes to block administrative remediation. Campaigns such as LockBit and Black Basta variants demonstrate this final stage in live attacks.
Outcome:
Encrypt files on target systems
Lock accounts or change Remote Access Tool credentials to prevent remediation
Real-World Campaign Examples
Below are commonly abused Remote Access Tools leveraged by adversaries in ransomware campaigns for persistence, deployment, and lateral movement.
Understanding the tactics, techniques, and procedures (TTPs) used by adversaries is crucial to defending against Remote Access Tool-driven ransomware campaigns. By mapping these activities to the MITRE ATT&CK framework, security teams can visualize how attackers gain access, deploy tools, maintain persistence, escalate privileges, and eventually deliver impactful payloads. The table below highlights the key stages of attack, the techniques leveraged, and the commonly abused remote access tools aligned to each step.
| Stages | Technique | MITRE ATT&CK Sub-Technique ID | Observations |
| --- | --- | --- | --- |
| Initial Access | Brute Force | T1110.001 | Targeting RDP/SMB endpoints to gain initial access |
| Tool Deployment | Ingress Tool Transfer | T1105 | Remote access utilities transferred for execution |
| Execution | Remote Services | T1021.001 | Remote sessions used to execute payloads |
| Persistence | Registry Run Keys | T1547.001 | Registry keys created/modified for tool persistence |
| Privilege Escalation | Abuse Elevation Control Mechanism | T1548.002 | Elevation of privileges observed to run tools with SYSTEM rights |
| Defense Evasion | Impair Defenses | T1562.001 | Security services disabled, logs cleared |
| Lateral Movement | Remote Services | T1021.001 | Remote services abused to move across endpoints |
| Impact | Data Encrypted for Impact | T1486 | Tools leveraged to deploy ransomware and encrypt data |
Emerging Trends & Future Threats
As ransomware operators evolve, new tactics are emerging that expand beyond traditional on-premise exploitation. These trends highlight how attackers are combining automation, cloud abuse, and RaaS ecosystems to maximize the scale and stealth of their operations.
AI-driven Remote Access Tool deployment: Automated decision-making for payloads
RaaS integration: Remote Access Tools embedded in ransomware-as-a-service offerings for enterprise campaigns
Multi-stage attacks: Initial Remote Access Tool compromise followed by secondary payloads (data exfiltration, cryptojacking, lateral ransomware)
How Quick Heal / Seqrite Protect Against These Activities.
Ransomware actors may try to weaponize trusted tools, but Quick Heal and Seqrite are built with multiple layers of defence to stop them in their tracks. By combining real-time monitoring, self-protection, and advanced behavioural detection, these solutions ensure that attackers can’t easily disable security or slip past unnoticed.
Virus Protection: Actively detects and neutralizes trojanized installers or hidden payloads before they can execute.
Antivirus Self Protection: Prevents attackers from forcefully terminating or uninstalling security services.
Behaviour-Based Detection: Monitors for abnormal activities linked to ransomware, such as mass file changes or suspicious process launches.
Ransomware Protection: Blocks unauthorized encryption attempts in real time, cutting off the attack before data is locked.
Application Control: Restricts the use of unauthorized remote tools, ensuring only trusted applications are allowed to run.
Security Best Practices & Recommendations
Defending against ransomware isn’t just about having the right tools — it’s also about using them wisely and building strong day-to-day habits. Here are some practical steps every organization can take to stay ahead of attackers:
Restrict Remote Access Tool Usage: Only keep the remote tools you really need and remove the rest. The fewer entry points, the safer your systems are.
Enforce Multi-Factor Authentication (MFA): Even if attackers steal a password, MFA makes it much harder for them to log in.
Limit Administrative Rights: Don’t hand out admin privileges unless absolutely necessary. Less privilege means less damage if an account is compromised.
Audit & Monitor Logs Continuously: Keep a close watch on your logs — unusual logins, silent installs, or strange setup commands can be early warning signs.
Regular Updates & Patching: Stay on top of updates for both your operating systems and security tools so attackers can’t exploit old flaws.
User Awareness Training: People are the first line of defence. Training staff to spot phishing emails or suspicious remote support activity can stop attacks before they even start.
Conclusion:
Legitimate IT tools can easily become hidden attack vectors when mismanaged, and Remote Access Tool abuse is now a critical enabler of next-generation ransomware. To counter this risk, enterprises need a layered approach that combines governance, monitoring, and rapid response.
Quick Heal and Seqrite play a central role in this defence strategy, providing strong Antivirus protection, behavioural detection, and Anti-Ransomware protection. When paired with strict governance and incident response, organizations can stay ahead of attackers.
Key measures include:
Remote Access Tool governance and whitelisting
Multi-layered Antivirus protections powered by Quick Heal / Seqrite
Behavioural detection and outbound filtering
Rapid incident response for containment and recovery
By adopting this multi-layered defence strategy, organizations can proactively detect, contain, and mitigate Remote Access Tool–based ransomware campaigns—turning trusted tools from potential threats into controlled, manageable assets.
Do you need the index of the current item in a foreach loop with C#? Here you’ll see two approaches.
Sometimes, when looping over a collection of elements in C#, you need not only the item itself but also its position in the collection.
How to get the index of the current element in a foreach loop?
The easiest way is to store and update the index in a separate variable
List<string> myFriends = new List<string> { "Emma", "Rupert", "Daniel", "Maggie", "Alan" };

int index = 0;
foreach (var friend in myFriends)
{
    Console.WriteLine($"Friend {index}: {friend}");
    index++;
}
This works fine, nothing to add.
But, if you want something a little more elegant and compact, you can use the Select method from LINQ:
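With the Select overload that also exposes the element index, the loop could look something like this:

foreach (var friend in myFriends.Select((name, index) => (name, index)))
{
    Console.WriteLine($"Friend {friend.index}: {friend.name}");
}

I prefer this approach for two reasons: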
there is a tight bond between the current item in the loop and the index
I find it cleaner and easier to read
Or… You can just replace it with a simple for loop!
What about performance?
I’ve run a simple benchmark (see here), and it turned out that for lists with fewer than 1000 items the first solution is faster, while for lists with 10000 items using LINQ is way faster than using an external index.
| Size (#items) | With simple index (ms) | With LINQ (ms) |
| --- | --- | --- |
| 100 | 96 | 128 |
| 1000 | 1225 | 1017 |
| 10000 | 5523 | 786 |
This happens with .NET 5.
Update 2021-06-09: the previous benchmark was wrong!!😐
The times listed in the previous table were misleading: I calculated those durations using a StopWatch and calling it in different methods.
But, when performing a more precise benchmark using Benchmark.NET, the results are totally different.
With .NET Core 3.1.14 I get the following results:
| Method | array | Mean | Error |
| --- | --- | --- | --- |
| WithIndex | Int32[10000] | 269,386.4 ns | 6,168.76 ns |
| WithLinq | Int32[10000] | 396,421.3 ns | 7,778.64 ns |
| WithIndex | Int32[1000] | 25,438.3 ns | 504.03 ns |
| WithLinq | Int32[1000] | 39,981.3 ns | 1,578.48 ns |
| WithIndex | Int32[100] | 2,440.8 ns | 48.34 ns |
| WithLinq | Int32[100] | 3,687.7 ns | 73.60 ns |
| WithIndex | Int32[10] | 185.6 ns | 3.52 ns |
| WithLinq | Int32[10] | 369.5 ns | 9.51 ns |
While with .NET 5 I get these results:
| Method | array | Mean | Error |
| --- | --- | --- | --- |
| WithIndex | Int32[10000] | 134,431.02 ns | 2,181.244 ns |
| WithLinq | Int32[10000] | 273,691.68 ns | 5,334.833 ns |
| WithIndex | Int32[1000] | 12,961.69 ns | 233.351 ns |
| WithLinq | Int32[1000] | 26,023.63 ns | 495.341 ns |
| WithIndex | Int32[100] | 1,088.25 ns | 21.485 ns |
| WithLinq | Int32[100] | 2,299.12 ns | 21.901 ns |
| WithIndex | Int32[10] | 48.01 ns | 0.748 ns |
| WithLinq | Int32[10] | 228.66 ns | 4.531 ns |
As you can see, using LINQ is actually slower than using a simple index. While in .NET Core 3 the results were quite similar, with .NET 5 there was a huge improvement in both cases, but now using a simple index is two times faster than using LINQ.
SORRY FOR THAT MISLEADING INFO! Thank you, Ben, for pointing it out in the comments section! 🙏
Below you can see the code I used for this benchmark. If you want to get started with BenchmarkDotNet, look at the documentation or at my article Enum.HasFlag performance with BenchmarkDotNet.
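Here is a sketch of how such a benchmark class could look with BenchmarkDotNet (the class name, the argument source, and the method bodies are illustrative assumptions; only the WithIndex and WithLinq method names come from the result tables above):

using System.Collections.Generic;
using System.Linq;
using BenchmarkDotNet.Attributes;

public class ForeachIndexBenchmark
{
    // the arrays passed to each benchmark method (10, 100, 1000, 10000 items)
    public IEnumerable<int[]> Arrays()
    {
        yield return Enumerable.Range(0, 10).ToArray();
        yield return Enumerable.Range(0, 100).ToArray();
        yield return Enumerable.Range(0, 1000).ToArray();
        yield return Enumerable.Range(0, 10000).ToArray();
    }

    [Benchmark]
    [ArgumentsSource(nameof(Arrays))]
    public void WithIndex(int[] array)
    {
        int index = 0;
        foreach (var item in array)
        {
            var line = $"Item {index}: {item}";
            index++;
        }
    }

    [Benchmark]
    [ArgumentsSource(nameof(Arrays))]
    public void WithLinq(int[] array)
    {
        foreach (var item in array.Select((value, index) => (value, index)))
        {
            var line = $"Item {item.index}: {item.value}";
        }
    }
}

// run it with BenchmarkRunner.Run<ForeachIndexBenchmark>(); from BenchmarkDotNet.Running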
We’ve discovered that there are many ways to keep an index tightly bound to the current item. If you care about performance, go for the simplest approach (a for loop, or a foreach with a manual index). If you want more concise code, go for LINQ.
Anything else to add?
👉 Let’s discuss it on Twitter or on the comment section below!