You’ve probably seen this kind of scroll effect before, even if it doesn’t have a name yet. (Honestly, we need a dictionary for all these weird and wonderful web interactions. If you’ve got a talent for naming things…do it. Seriously. The internet is waiting.)
Imagine a grid of images. As you scroll, the columns don’t move uniformly; instead, the center columns react faster, while those on the edges trail behind slightly. It feels soft, elastic, and physical, almost like scrolling with weight.
You can see this amazing effect on sites like yzavoku.com (and I’m sure there’s a lot more!).
So what better excuse to use the now-free GSAP ScrollSmoother? We can recreate it easily, with great performance and full control. Let’s have a look!
What We’re Building
We’ll take a CSS grid-based layout and add some magic:
Inertia-based scrolling using ScrollSmoother
Per-column lag, calculated dynamically based on distance from the center
A layout that adapts to column changes
HTML Structure
Let’s set up the markup with figures in a grid:
<div class="grid">
<figure class="grid__item">
<div class="grid__item-img" style="background-image: url(assets/1.webp)"></div>
<figcaption class="grid__item-caption">Zorith - L91</figcaption>
</figure>
<!-- Repeat for more items -->
</div>
Inside the grid, we have many .grid__item figures, each with a background image and a label. These will be dynamically grouped into columns by JavaScript, based on how many columns CSS defines.
In our JavaScript, we’ll then change the DOM structure by inserting .grid__column wrappers around groups of items, one per column, so we can control their motion individually. Why are we doing this? It’s a bit lighter to move whole columns rather than each individual item.
The grouping step collects your grid items into arrays, one for each visual column, using the actual number of columns calculated from the CSS.
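A minimal version of that grouping logic, assuming the markup above, might look like this (the helper names are assumptions):

// Read the actual number of columns from the computed grid template
const getColumnCount = () => {
  const styles = window.getComputedStyle(grid);
  return styles.getPropertyValue('grid-template-columns').split(' ').length;
};

// Distribute the flat list of items into one array per visual column
const groupItemsByColumn = () => {
  const items = Array.from(grid.querySelectorAll('.grid__item'));
  const numColumns = getColumnCount();
  const columns = Array.from({ length: numColumns }, () => []);
  items.forEach((item, i) => columns[i % numColumns].push(item));
  return { columns, numColumns };
};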
3. Create Column Wrappers and Assign Lag
const buildGrid = (columns, numColumns) => {
  const fragment = document.createDocumentFragment(); // Efficient DOM batch insertion
  const mid = (numColumns - 1) / 2; // Center index (can be fractional)
  const columnContainers = [];

  // Loop over each column
  columns.forEach((column, i) => {
    const distance = Math.abs(i - mid); // Distance from center column
    const lag = baseLag + distance * lagScale; // Lag based on distance from center

    const columnContainer = document.createElement('div'); // New column wrapper
    columnContainer.className = 'grid__column';

    // Append items to column container
    column.forEach((item) => columnContainer.appendChild(item));

    fragment.appendChild(columnContainer); // Add to fragment
    columnContainers.push({ element: columnContainer, lag }); // Save for lag effect setup
  });

  grid.appendChild(fragment); // Add all columns to DOM at once
  return columnContainers;
};
The lag value increases the further a column is from the center, creating that elastic “catch up” feel during scroll.
4. Apply Lag Effects to Each Column
const applyLagEffects = (columnContainers) => {
  columnContainers.forEach(({ element, lag }) => {
    smoother.effects(element, { speed: 1, lag }); // Apply individual lag per column
  });
};
ScrollSmoother handles all the heavy lifting; we just pass the desired lag value per column.
5. Handle Layout on Resize
// Rebuild the layout only if the number of columns has changed on window resize
window.addEventListener('resize', () => {
  const newColumnCount = getColumnCount();
  if (newColumnCount !== currentColumnCount) {
    init();
  }
});
This ensures our layout stays correct across breakpoints and column count changes (handled via CSS).
Now, there are lots of ways to build upon this and add more jazz!
For example, you could:
add scroll-triggered opacity or scale animations
use scroll velocity to control effects (see demo 2 and the sketch after this list)
adapt this pattern for horizontal scroll layouts
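For instance, a common GSAP pattern skews elements based on scroll velocity. This sketch is not taken from the demos, so treat the selector and the numbers as assumptions (and it expects ScrollTrigger to already be registered):

// Skew the columns slightly based on how fast the page is scrolling
const proxy = { skew: 0 };
const skewSetter = gsap.quickSetter('.grid__column', 'skewY', 'deg');

ScrollTrigger.create({
  onUpdate: (self) => {
    const skew = self.getVelocity() / -500; // damp the raw velocity
    if (Math.abs(skew) > Math.abs(proxy.skew)) {
      proxy.skew = skew;
      gsap.to(proxy, {
        skew: 0,
        duration: 0.8,
        ease: 'power3',
        overwrite: true,
        onUpdate: () => skewSetter(proxy.skew),
      });
    }
  },
});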
Exploring Variations
Once you have the core concept in place, there are four demo variations you can explore. Each one shows how different lag values and scroll-based interactions can influence the experience.
You can adjust which columns respond faster, or play with subtle scaling and transforms based on scroll velocity. Even small changes can shift the rhythm and tone of the layout in interesting ways. And don’t forget: changing the look of the grid itself, like the image ratio or gaps, will give this a whole different feel!
Now it’s your turn. Tweak it, break it, rebuild it, and make something cool.
I really hope you enjoy this effect! Thanks for stopping by 🙂
Creating a third-person character controller involves more than just moving an object around a 3D scene. Realistic movement, grounded physics, responsive jumping, and animation blending are essential for a polished feel. This article explores how these elements can be assembled — not through traditional manual coding, but via AI-assisted development using Bolt.new, a browser-based AI-assisted development tool that generates web code from natural language prompts, backed by Claude 3.7 Sonnet and Claude 3.5 Sonnet LLMs. It provides a lightweight environment where developers can focus on describing functionality rather than writing boilerplate.
For this character controller, Bolt handled tasks like setting up physics, integrating animations, and managing input systems, making it easier to test ideas and iterate quickly without switching between tools or writing everything from scratch.
If you’re curious to learn more, check out this article on Codrops, which also explores the platform’s capabilities and showcases another real-world project built entirely with AI.
The final project is powered by React Three Fiber, Three.js, and Rapier, and showcases how a designer or developer can create complex, interactive 3D experiences by guiding AI — focusing on behavior and structure rather than syntax.
Step 1: Setting Up Physics with a Capsule and Ground
The character controller begins with a simple setup: a capsule collider for the player and a ground plane to interact with. Rapier, a fast and lightweight physics engine built in WebAssembly, handles gravity, rigid body dynamics, and collisions. This forms the foundation for player movement and world interaction.
The capsule shape was chosen for its stability when sliding across surfaces and climbing over small obstacles — a common pattern in real-time games.
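As a rough sketch of the idea with @react-three/rapier (component props, sizes, and positions here are assumptions, not the project’s actual values):

import { RigidBody, CapsuleCollider } from '@react-three/rapier';

function PlayerAndGround() {
  return (
    <>
      {/* Player: a dynamic body with a capsule collider, rotations locked so it stays upright */}
      <RigidBody colliders={false} enabledRotations={[false, false, false]} position={[0, 2, 0]}>
        <CapsuleCollider args={[0.5, 0.35]} /> {/* [halfHeight, radius] */}
      </RigidBody>

      {/* Ground: a fixed body; the physics provider auto-generates a collider from the mesh */}
      <RigidBody type="fixed">
        <mesh position={[0, -0.25, 0]}>
          <boxGeometry args={[50, 0.5, 50]} />
          <meshStandardMaterial />
        </mesh>
      </RigidBody>
    </>
  );
}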
Step 2: Real-Time Tuning with a GUI
To enable rapid iteration and balance gameplay feel, a visual GUI was introduced (using Leva.js). This panel exposes parameters such as:
Player movement speed
Jump force
Gravity scale
Follow camera offset
Debug toggles
By integrating this directly into the experience, developers can tune the controller live without needing to edit or recompile code, speeding up testing and design decisions.
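A rough idea of such a panel with Leva, where the parameter names and default values are assumptions rather than the project’s real ones:

import { useControls } from 'leva';

// Returns live-editable values; each key becomes a field in the Leva panel
function usePlayerSettings() {
  return useControls('Player', {
    moveSpeed: 5,
    jumpForce: 8,
    gravityScale: 1,
    cameraOffset: { x: 0, y: 4, z: -8 },
    debug: false,
  });
}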
Step 3: Ground Detection with Raycasting
A raycast is used to detect whether the player is grounded. This simple yet effective check prevents the character from jumping mid-air or triggering multiple jumps in sequence.
The logic is executed on every frame, casting a ray downward from the base of the capsule collider. When contact is confirmed, the jump input is enabled. This technique also allows smooth transitions between grounded and falling states in the animation system.
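A minimal sketch of that check, assuming the Rapier world and module are available (for example via useRapier() in @react-three/rapier); the ray length and names are assumptions:

// Cast a short ray straight down from the base of the capsule
const checkGrounded = (world, rapier, capsulePosition, capsuleHalfHeight) => {
  const origin = {
    x: capsulePosition.x,
    y: capsulePosition.y - capsuleHalfHeight,
    z: capsulePosition.z,
  };
  const ray = new rapier.Ray(origin, { x: 0, y: -1, z: 0 });
  const hit = world.castRay(ray, 0.2, true); // small max distance just below the feet
  return hit !== null;
};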
Step 4: Integrating a Rigged Character with Animation States
The visual character uses a rigged GLB model via Mixamo, with three key animations: Idle, Run, and Fall. These are integrated as follows:
The GLB character is attached as a child of the capsule collider
The animation state switches dynamically based on velocity and grounded status
Transitions are handled via animation blending for a natural feel
This setup keeps the visuals in sync with physics, while preserving modular control over the physical capsule.
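The switching logic itself can stay very small. A hypothetical helper (the names and the speed threshold are assumptions) could look like this:

// Pick an animation state from the physics data gathered each frame
function getAnimationState(isGrounded, horizontalSpeed) {
  if (!isGrounded) return 'Fall';
  return horizontalSpeed > 0.1 ? 'Run' : 'Idle';
}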
Step 5: World Building and Asset Integration
The environment was arranged in Blender, then exported as a single .glb file and imported into the bolt.new project scene. This approach allows for efficient scene composition while keeping asset management simple.
For web, using .glb keeps geometry and textures bundled together. To maintain performance, it’s recommended to keep textures at 1024×1024 resolution or other square power-of-two sizes (e.g. 256, 512, 2048). This ensures optimal GPU memory usage and faster load times across devices.
Special thanks to KayLousberg for the low-poly 3D kit used for prototyping.
Step 6: Cross-Platform Input Support
The controller was designed to work seamlessly across desktop, mobile, and gamepad platforms — all built using AI-generated logic through Bolt.
Gamepad support was added using the Gamepad API, allowing players to plug in a controller and play with analog input.
On desktop, the controller uses standard keyboard input (WASD or arrow keys) and mouse movement for camera control.
On mobile, AI-generated code enabled an on-screen joystick and jump button, making the game fully touch-compatible.
All input types control the same physics-driven character, ensuring consistent behavior across devices — whether you’re playing on a laptop, touchscreen, or game controller.
This cross-platform support was implemented entirely through natural language prompts, showcasing how AI can translate high-level intent into working input systems.
The Role of AI in the Workflow
What makes this controller unique isn’t the mechanics — it’s the process. Every system was generated by AI through descriptive prompts, allowing the developer to work more like a creative director than a traditional engineer.
AI handled the boilerplate, the physics setup, the animation switching logic — all based on clear creative goals. This opens new doors for prototyping and interactive design, where iteration speed matters more than syntax.
This character controller demo includes:
Capsule collider with physics
Grounded detection via raycast
State-driven animation blending
GUI controls for tuning
Environment interaction with static/dynamic objects
Cross-Platform Input Support
It’s a strong starting point for creating browser-based games, interactive experiences, or prototyping new ideas — all with the help of AI.
Check out the full game built using this setup as a base: 🎮 Demo Game
My (design) partner, Gaetan Ferhah, likes to send me his design and motion experiments throughout the week. It’s always fun to see what he’s working on, and it often sparks ideas for my own projects. One day, he sent over a quick concept for making a product grid feel a bit more creative and interactive. 💬 The idea for this tutorial came from that message.
We’ll explore a “grid to preview” hover interaction that transforms product cards into a full preview. As with many animations and interactions, there are usually several ways to approach the implementation—ranging in complexity. It can feel intimidating (or almost impossible) to recreate a designer’s vision from scratch. But I’m a huge fan of simplifying wherever possible and leaning on optical illusions (✨ fake it ’til you make it ✨).
All images are generated with DALL-E (sad, I know.. I wanted everything to be real too 💔)
For this tutorial, I knew I wanted to keep things straightforward and recreate the effect of puzzle pieces shifting into place using a combination of clip-path animation and an image overlay.
Let’s break it down in a few steps:
Layout and Overlay (HTML, CSS): Set up the initial layout and carefully match the position of the preview overlay to the grid.
Build JavaScript Structure (JavaScript): Create some classes to keep us organised and add some interactivity (event listeners).
Clip-Path Creation and Animation (CSS, JS, GSAP): Add and animate the clip-path, including some calculations on resize—this forms a key part of the puzzle effect.
Moving Product Cards (JS, GSAP): Set up animations to move the product cards towards each other on hover.
Preview Image Scaling (JS, GSAP): Slightly scale down the preview overlay in response to the inward movement of the other elements.
Adding Images (HTML, JS, GSAP): Enough with the solid colours, let’s add some images and a gallery animation.
Debouncing Events (JS): Debounce the mouse-enter event to prevent excessive triggering and reduce jitter.
Final Tweaks: Cross the t’s and dot the i’s—small clean-ups and improvements.
Layout and Overlay
At the foundation of every good tutorial is a solid HTML structure. In this step, we’ll create two key elements: the product grid and the overlay for the preview cards. Since both need a similar layout, we’ll place them inside the same container (.products).
Our grid will consist of 8 products (4 columns by 2 rows) with a gutter of 5vw. To keep things simple, I’m only adding the corresponding li elements for the products, but not yet adding any other elements. In the HTML, you’ll notice there are two preview containers: one for the left side and one for the right. If you want to see the preview overlays right away, head to the CodePen and set the opacity of .product-preview to 1.
Why I Opted for Two Containers
At first, I planned to use just one preview container and move it to the opposite side of the hovered card by updating the grid-column-start. That approach worked fine—until I got to testing.
When I hovered over a product card on the left and quickly switched to one on the right, I realised the problem: with only one container, I also had just one timeline controlling everything inside it. That made it basically impossible to manage the “in/out” transition between sides smoothly.
So, I decided to go with two containers—one for the left side and one for the right. This way, I could animate both sides independently and avoid timeline conflicts when switching between them.
See the Pen Untitled by Gwen Bogaert (@gwen-bo) on CodePen.
JavaScript Set-up
In this step, we’ll add some classes to keep things structured before adding our event listeners and initiating our timelines. To keep things organised, let’s split it into two classes: ProductGrid and ProductPreview.
ProductGrid will be fairly basic, responsible for handling the split between left and right, and managing top-level event listeners (such as mouseenter and mouseleave on the product cards, and a general resize).
ProductPreview is where the magic happens. ✨ This is where we’ll control everything that happens once a mouse event is triggered (enter or leave). To pass the ‘active’ product, we’ll define a setProduct method, which, in later steps, will act as the starting point for controlling our GSAP animation(s).
Splitting Products (Left – Right)
In the ProductGrid class, we will split all the products into left and right groups. We have 8 products arranged in 4 columns and 2 rows, so each row contains 4 items. We split the product cards into left and right groups based on their column position.
this.ui.products.filter((_, i) => i % 4 === 2 || i % 4 === 3)
The logic relies on the modulo or remainder operator. The line above groups the product cards on the right. We use the index (i) to check if it’s in the 3rd (i % 4 === 2) or 4th (i % 4 === 3) position of the row (remember, indexing starts at 0). The remaining products (with i % 4 === 0 or i % 4 === 1) will be grouped on the left.
Now that we know which products belong to the left and right sides, we will initiate a ProductPreview for both sides and pass along the products array. This will allow us to define productPreviewRight and productPreviewLeft.
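Put together, that split and setup could look roughly like this (assuming this.ui.products holds the product elements):

// Right side: 3rd and 4th column of each row
const rightProducts = this.ui.products.filter((_, i) => i % 4 === 2 || i % 4 === 3);
// Left side: 1st and 2nd column of each row
const leftProducts = this.ui.products.filter((_, i) => i % 4 === 0 || i % 4 === 1);

this.productPreviewRight = new ProductPreview(rightProducts);
this.productPreviewLeft = new ProductPreview(leftProducts);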
To finalize this step, we will define event listeners. For each product, we’ll listen for mouseenter and mouseleave events, and either set or unset the active product (both internally and in the corresponding ProductPreview class). Additionally, we’ll add a resize event listener, which is currently unused but will be set up for future use.
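The listeners themselves can be as simple as this sketch (handler and helper names are assumptions based on what appears later in the tutorial):

this.ui.products.forEach((product) => {
  product.addEventListener('mouseenter', () =>
    this.productMouseEnter(product, this.getProductSide(product))
  );
  product.addEventListener('mouseleave', () => this.productMouseLeave());
});

// Currently unused, but registered for later steps
window.addEventListener('resize', () => this.onResize());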
This is where we’re at so far (only changes in JavaScript):
See the Pen Tutorial – step 2 (JavaScript structure) by Gwen Bogaert (@gwen-bo) on CodePen.
Clip-path
At the base of our effect lies the clip-path property and the ability to animate it with GSAP. If you’re not familiar with using clip-path to clip content, I highly recommend this article by Sara Soueidan.
Even though I’ve used clip-path in many of my projects, I often struggle to remember exactly how to define the shape I’m looking for. As before, I’ve once again turned to the wonderful tool Clippy, to get a head start on defining (or exploring) clip-path shapes. For me, it helps demystify which value influences which part of the shape.
Let’s start with the cross (from Clippy) and modify the points to create a more mathematical-looking cross (✚) instead of the religious version (✟).
Feel free to experiment with some of the values, and soon you’ll notice that with small adjustments, we can get much closer to the desired shape! For example, by stretching the horizontal arms completely to the sides (set to 10% and 90% before) and shifting everything more equally towards the center (with a 10% difference from the center — so either 40% or 60%).
And bada bing, bada boom! This clip-path almost immediately creates the illusion that our single preview container is split into four parts — exactly the effect we want to achieve! Now, let’s move on to animating the clip-path to get one step closer to our final result:
Animating Clip-paths
The concept of animating clip-paths is relatively simple, but there are a few key things to keep in mind to ensure a smooth transition. One important consideration is that it’s best to define an equal number of points for both the start and end shapes.
The idea is fairly straightforward: we begin with the clipped parts hidden, and by the end of the animation, we want the clip-path to disappear, revealing the entire preview container (by making the arms of the cross so thin that they’re barely visible or not visible at all). This can be achieved easily with a fromTo animation in GSAP (though it’s also supported in CSS animations).
The Catch
You might think, “That’s it, we’re done!” — but alas, there’s a catch when it comes to using this as our puzzle effect. To make it look realistic, we need to ensure that the shape of the cross aligns with the underlying product grid. And that’s where a bit of JavaScript comes in!
We need to factor in the gutter of our grid (5vw) to calculate the width of the arms of our cross shape. It could’ve been as simple as adding or subtracting (half!) of the gutter to/from the 50%, but… there’s a catch in the catch!
We’re not working with a square, but with a rectangle. Since our values are percentages, subtracting 2.5vw (half of the gutter) from the center wouldn’t give us equal-sized arms. This is because there would still be a difference between the x and y dimensions, even when using the same percentage value. So, let’s take a look at how to fix that:
On each resize, we get the width and height of the preview container (which spans 4 product cards — 2 columns and 2 rows). We then calculate what percentage 5vw would be, relative to both the width and height.
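In sketch form, that calculation might look like this (the container reference and property names are assumptions):

onResize() {
  const { width, height } = this.ui.container.getBoundingClientRect();
  const gutterPx = window.innerWidth * 0.05; // 5vw expressed in pixels

  // The same 5vw gutter, as a percentage of each dimension of the preview container
  this.gutterXPercent = (gutterPx / width) * 100;
  this.gutterYPercent = (gutterPx / height) * 100;
}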
To conclude this step, we would have something like:
See the Pen Tutorial – step 3 (clip path) by Gwen Bogaert (@gwen-bo) on CodePen.
Moving Product Cards
Another step in the puzzle effect is moving the visible product cards together so they appear to form one piece. This step is fairly simple — we already know how much they need to move (again, gutter divided by 2 = 2.5vw). The only thing we need to figure out is whether a card needs to move up, down, left, or right. And that’s where GSAP comes to the rescue!
We need to define both the vertical (y) and horizontal (x) movement for each element based on its index in the list. Since we only have 4 items, and they need to move inward, we can check whether the index is odd or even to determine the desired value for the horizontal movement. For vertical movement, we can decide whether it should move to the top or bottom depending on the position (top or bottom).
In GSAP, many properties (like x, y, scale, etc.) can accept a function instead of a fixed value. When you pass a function, GSAP calls it for each target element individually.
Horizontal (x): cards with an even index (0, 2) get shifted right by 2.5vw, the other (two) move to the left. Vertical (y): cards with an index lower than 2 (0,1) are located at the top, so need to move down, the other (two) move up.
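A sketch of that tween, using function-based values (the cards reference, duration, and ease are assumptions):

gsap.to(this.ui.cards, {
  // Even indexes sit on the left, so they move right; odd indexes move left
  x: (index) => (index % 2 === 0 ? '2.5vw' : '-2.5vw'),
  // The first two cards sit on top, so they move down; the last two move up
  y: (index) => (index < 2 ? '2.5vw' : '-2.5vw'),
  duration: 0.6,
  ease: 'power2.inOut',
});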
See the Pen Tutorial – step 3 (clip path) by Gwen Bogaert (@gwen-bo) on CodePen.
Preview Image (Scaling)
Cool, we’re slowly getting there! We have our clip-path animating in and out on hover, and the cards are moving inward as well. However, you might notice that the cards and the image no longer have an exact overlap once the cards have been moved. To fix that and make everything more seamless, we’ll apply a slight scale to the preview container.
This is where a bit of extra calculation comes in, because we want it to scale relative to the gutter. So we take into account the height and width of the container.
This calculation determines a scale factor to shrink our preview container inward, matching the cards coming together. First, the rectangle’s width/height (in pixels) is converted into viewport width units (vw) by dividing it by the pixel value of 1vw. Next, the shrink amount (5vw) is subtracted from that width/height. Finally, the result is divided by the original width in vw to calculate the scale factor (which will be slightly below 1). Since we’re working with a rectangle, the scale factor for the x and y axes will be slightly different.
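Translated into a sketch (the container reference is an assumption):

const rect = this.ui.container.getBoundingClientRect();
const vw = window.innerWidth / 100; // pixel value of 1vw

const widthInVw = rect.width / vw; // container width expressed in vw
const heightInVw = rect.height / vw; // container height expressed in vw

// Shrink each dimension by the 5vw gutter, relative to its own size
const scaleX = (widthInVw - 5) / widthInVw;
const scaleY = (heightInVw - 5) / heightInVw;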
In the codepen below, you’ll see the puzzle effect coming along nicely on each container. Pink are the product cards (not moving), red and blue are the preview containers.
See the Pen Tutorial – step 4 (moving cards) by Gwen Bogaert (@gwen-bo) on CodePen.
Adding Pictures
Let’s make our grid a little more fun to look at!
In this step, we’re going to add the product images to our grid, and the product preview images inside the preview container. Once that’s done, we’ll start our image gallery on hover.
The HTML changes are relatively simple. We’ll add an image to each product li element and… not do anything with it. We’ll just leave the image as is.
The rest of the magic will happen inside the preview container. Each container will hold the preview images of the products from the other side (those that will be visible). So, the left container will contain the images of the 4 products on the right, and the right container will contain the images of the 4 products on the left. Here’s an example of one of these:
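A simplified sketch of such a container (class modifiers, attribute names, and file names here are assumptions):

<div class="product-preview product-preview--left">
  <img src="assets/product-5.webp" data-id="5" alt="" />
  <img src="assets/product-6.webp" data-id="6" alt="" />
  <img src="assets/product-7.webp" data-id="7" alt="" />
  <img src="assets/product-8.webp" data-id="8" alt="" />
</div>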
Once that’s done, we can initialise by querying those images in the constructor of the ProductPreview, sorting them by their dataset.id. This will allow us to easily access the images later via the data-index attribute that each product has. To sum up, at the end of our animate-in timeline, we can call startPreviewGallery, which will handle our gallery effect.
startPreviewGallery(id) {
  const images = this.ui.previewImagesPerID[id]
  const timeline = gsap.timeline({ repeat: -1 })

  // first image is already visible (do not hide)
  gsap.set([...images].slice(1), { opacity: 0 })

  images.forEach((image) => {
    timeline
      .set(images, { opacity: 0 }) // Hide all images
      .set(image, { opacity: 1 }) // Show only this one
      .to(image, { duration: 0, opacity: 1 }, '+=0.5')
  })

  this.galleryTimeline = timeline
}
Debouncing
One thing I’d like to do is debounce hover effects, especially if they are more complex or take longer to complete. To achieve this, we’ll use a simple (and vanilla) JavaScript approach with setTimeout. Each time a hover event is triggered, we’ll set a very short timer that acts as a debouncer, preventing the effect from firing if someone is just “passing by” on their way to the product card on the other side of the grid.
I ended up using a 100ms “cooldown” before triggering the animation, which helped reduce unnecessary animation starts and minimise jitter when interacting with the cards.
productMouseEnter(product, preview) {
  // If another timer (aka hover) was running, cancel it
  if (this.hoverDelay) {
    clearTimeout(this.hoverDelay)
    this.hoverDelay = null
  }

  // Start a new timer
  this.hoverDelay = setTimeout(() => {
    this.activeProduct = product
    preview.setProduct(product)
    this.hoverDelay = null // clear reference
  }, 100)
}

productMouseLeave() {
  // If user leaves before debounce completes
  if (this.hoverDelay) {
    clearTimeout(this.hoverDelay)
    this.hoverDelay = null
  }

  if (this.activeProduct) {
    const preview = this.getProductSide(this.activeProduct)
    preview.setProduct(null)
    this.activeProduct = null
  }
}
Final Tweaks
I can’t believe we’re almost there! Next up, it’s time to piece everything together and add some small tweaks, like experimenting with easings, etc. The final timeline I ended up with (which plays or reverses depending on mouseenter or mouseleave) is:
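Pulling the previous steps together, a timeline along these lines is one possible shape for it (property names, values, and eases are assumptions, not the exact final code):

buildTimeline(product) {
  const id = product.dataset.index;

  return gsap
    .timeline({
      defaults: { duration: 0.7, ease: 'power2.inOut' },
      onComplete: () => this.startPreviewGallery(id), // start the gallery at the end
    })
    // Reveal the preview by thinning the cross-shaped clip-path
    .fromTo(this.ui.container, { clipPath: this.clipPathFrom }, { clipPath: this.clipPathTo }, 0)
    // Move the visible cards inward by half the gutter
    .to(this.ui.cards, {
      x: (i) => (i % 2 === 0 ? '2.5vw' : '-2.5vw'),
      y: (i) => (i < 2 ? '2.5vw' : '-2.5vw'),
    }, 0)
    // Shrink the preview so it keeps overlapping the moved cards
    .to(this.ui.container, { scaleX: this.scaleX, scaleY: this.scaleY }, 0);
}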
While this interaction may look cool and visually engaging, it’s important to be mindful of usability and accessibility. In its current form, this effect relies quite heavily on motion and hover interactions, which may not be ideal for all users. Here are a few things to consider if you’re planning to implement a similar effect:
Motion sensitivity: Be sure to respect the user’s prefers-reduced-motion setting. You can easily check this with a media query and provide a simplified or static alternative for users who prefer minimal motion.
Keyboard navigation: Since this interaction is hover-based, it’s not currently accessible via keyboard. If you’d like to make it more inclusive, consider adding support for focus events and ensuring that all interactive elements can be reached and triggered using a keyboard.
Think of this as a playful, exploratory layer — not a foundation. Use it thoughtfully, and prioritise accessibility where it counts. 💛
Acknowledgements
I am aware that this tutorial assumes an ideal scenario of only 8 products, because what happens if you have more? I didn’t test it out myself, but the important part is that the preview containers feel like an exact overlay of the product grid. If more cards are present, you could try ‘mapping’ the coordinates of the preview container to the 8 products that are completely in view. Or… go crazy with your own approach if you had another idea. That’s the beauty of it: there are always many approaches that lead to the same (visual) outcome. 🪄
Thank you so much for following along! A big thanks to Codrops for giving me the opportunity to contribute. I’m excited to see what you’ll create when inspired by this tutorial! If you have any questions, feel free to drop me a line!
The quality of a project can be measured by having a look at how the code is written. SonarQube can help you by running static code analysis and letting you spot the pain points. Let’s learn how to install and run it locally with Docker.
Code quality is important, and having the right tool can be terribly beneficial for an application’s long-term success.
Although maintainability problems often come from module separation and cannot be solved by making a single class cleaner, a tool like SonarQube can pave the way to a cleaner codebase.
In this article, we will learn how to download and install SonarQube Community using Docker. We will see how to configure it and run your very first code analysis on a .NET-based application.
Scaffold a dummy ASP.NET Core API project
To try it out, you need, of course, a repository to analyse.
In this article, I will set up SonarQube to analyse a tiny, dummy ASP.NET Core API project. You are probably already familiar with this API project: it’s the default one created by Visual Studio – the one with the Weather Forecast.
I chose to use Controllers instead of Minimal APIs so that we could analyse some more code.
Have a look at the code: you will notice that the default implementation of the WeatherForecastController injects an instance of ILogger, stores it, and then never references it in other places. This sounds like a good maintainability issue that SonarQube should be able to identify.
To better locate which files SonarQube is creating, I decided to put this project under source control, but only locally. This way, when we run the SonarQube analysis, we will be able to see the files created and modified by SonarQube.
Clearly, the first step is to have SonarQube installed on your machine.
I’m going to install SonarQube Community Build. It contains almost all the functionalities of SonarQube, and it’s available for free (of course, to have additional functionalities, you have to pick the proper pricing tier).
SonarQube Community Build can be installed via Docker: this way, SonarQube can run in a containerised environment, regardless of your Operating System.
To do that, you can run the following command:
docker run --name sonarqube-community -p 9001:9000 sonarqube:community
This Docker command downloads the latest version of the sonarqube:community Docker Image, and runs it locally, making it available at localhost:9001.
As briefly explained in an old article, the -p 9001:9000 part of the CLI command means that you are exposing the port 9000 of the “inner” container to the world via the port 9001 of the host.
Once the command has finished downloading all the dependencies and loading all the resources, you will be able to access SonarQube on localhost:9001.
You will be asked to log in: the default username is admin, and the password is (again) admin.
After the first login, you will be asked to change your password.
Create a SonarQube Project
It’s time to link SonarQube to your repository.
To do that, you have to create a so-called Project. Ideally, you may want to integrate SonarQube into your CI pipeline, but having it run locally is fine for trying it out.
So, on the Projects page, you can create a new project. Click on “Create a local project” and follow the wizard.
First, create a new Project by defining the Display name (in my case, code4it-sonarqube-local) and the project key (code4it-sonarqube-local-project-key). The Project Key is used in the command line to execute the code analysis using the rules defined in this project.
Also, you have to specify the name of the branch that you will be using as a baseline: generally, it’s either “main” or “master”, but it can be anything.
Follow the wizard, choosing some configurations (I suggest you start with the default values), and you’ll end up with a Project ready to be initialised.
Then, you will have to generate a token to run the analysis (I know, it feels like there are too many similar steps. But bear with me; we’re almost ready to run the analysis).
By hitting the “generate” button you’ll see a new token like this: sqp_fd71f97760c84539b579713f18a07c790432cfe8. Remember to store it somewhere, as you’re going to be using it later.
The last step is to make sure that you have sonarscanner available as a .NET Core Global Tool in your machine.
Just open a terminal as an administrator and run:
dotnet tool install --global dotnet-sonarscanner
Run the SonarQube analysis on your local repository
Finally, we are ready to run the first analysis of the code!
I suggest you commit all your changes so that you’ll see the files generated by SonarQube.
Open a Terminal, navigate to the root of the Solution, and follow these steps.
Prepare the SonarQube analysis
You first have to instruct SonarQube on the configurations to be used for the current analysis.
The command to run is something like this:
dotnet sonarscanner begin /k:"<your key here>" /d:sonar.host.url="<your-host-root-url>" /d:sonar.token="<your-project-token>"
For my specific execution context, using the values you can see in this article, I have to run the command with the following parameters:
dotnet sonarscanner begin /k:"code4it-sonarqube-local-project-key" /d:sonar.host.url="http://localhost:9001" /d:sonar.token="sqp_fd71f97760c84539b579713f18a07c790432cfe8"
The flags represent the configurations of SonarQube:
/k is the Project Key, as defined before: it contains the rules to be used;
/d:sonar.host.url is the url that will receive the result of the analysis, allowing SonarQube to aggregate the issues and display them on a UI;
/d:sonar.token is the Token you created before.
After the command completes, you’ll see that SonarQube created some files to prepare the code analysis. These files contain all the rules under code analysis and their related severity.
From now on, SonarQube will be able to run the analysis and understand how to treat each issue.
Build the solution
Now you have to build the whole solution, running:
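dotnet build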
You can, of course, choose to run the command specifying the solution file to build.
Even if it seems trivial, this step is crucial for SonarQube: in fact, it generates some new metadata files that list all the files that have to be taken into account when running the analysis, as well as the path to the output folder:
Run the actual SonarQube analysis
Finally, it’s time to run the actual analysis.
Again, head to the root of the application, and on a terminal run the following command:
dotnet sonarscanner end /d:sonar.token="<your-token>"
In my case, the full command is
dotnet sonarscanner end /d:sonar.token="sqp_fd71f97760c84539b579713f18a07c790432cfe8"
Depending on the size of the project, it will take different amounts of time. For this simple project, it took 7 seconds. For a huge project I worked on, it took almost 2 hours.
Also, the run time depends on the amount of new code to be analyzed: the very first run is the slowest one, and then all the subsequent analyses will focus on the latest code. In fact, most of the issues are stored in a cache.
No new files are created, as the result is directly sent to the SonarQube server.
Navigate the SonarQube report via UI
The result is now available at localhost!
Open a browser, open the website at the port you defined before, and get ready to navigate the status of the static analysis.
As I was expecting, the project passed the so-called Quality Gates – the minimum level set to consider a project “good”.
Yet, as you can see under the “Issues” tab, there are actually two issues. For example, there’s a suggested improvement that says to remove the _logger field because it is not used:
Of course, in a more complex project, you’ll find more issues, with different severity.
In this article, I assumed you know the basics of Docker. If not, or if you want to brush up your knowledge about the basics of Docker, here’s an article for you.
Average teams aim at 100% Code Coverage just to reach the number. Great teams don’t. Why?
Code Coverage is a valuable metric in software development, especially when it comes to testing. It provides insights into how much of your codebase is exercised by your test suite.
However, we must recognize that Code Coverage alone should not be the ultimate goal of your testing strategy. It has some known limitations, and 100% Code Coverage does not guarantee your code to be bug-free.
In this article, we’ll explore why Code Coverage matters, its limitations, and how to balance achieving high coverage and effective testing. We’ll use C# to demonstrate when Code Coverage works well and how you can cheat on the result.
What Is Code Coverage?
Code Coverage measures the percentage of code lines, branches, or statements executed during testing. It helps answer questions like:
How much of my code is tested?
Are there any untested paths or dead code?
Which parts of the application need additional test coverage?
In C#, tools like Cobertura, dotCover, and Visual Studio’s built-in coverage analysis provide Code Coverage reports.
You may be tempted to think that the higher the coverage, the better the quality of your tests. However, we will soon demonstrate why this assumption is misleading.
Why Code Coverage Matters
Clearly, if you write valuable tests, Code Coverage is a great ally.
A high value of Code Coverage helps you with:
Risk mitigation: High Code Coverage reduces the risk of undiscovered defects. If a piece of code isn’t covered, it will likely contain bugs.
Preventing regressions: code is destined to evolve over time. If you ensure that most of your code is covered by tests, whenever you’ll add some more code you will discover which parts of the existing system are impacted by your changes. If you update the production code and no test fails, it might be a bad sign: you probably need to cover the code you are modifying with enough tests.
Quality assurance: Code Coverage ensures that critical parts of your application are tested thoroughly. Good tests focus on the functional aspects of the code (what) rather than on the technical aspects (how). A good test suite is a safety net against regressions.
Guidance for Testing Efforts: Code Coverage highlights areas that need more attention. It guides developers in writing additional tests where necessary.
The Limitations of Code Coverage
While Code Coverage is valuable, it has limitations:
False Sense of Security: Achieving 100% coverage doesn’t guarantee bug-free software. It’s possible to have well-covered code that still contains subtle defects. This is especially true when mocking dependencies.
Focus on Lines, Not Behavior: Code Coverage doesn’t consider the quality of tests. It doesn’t guarantee that the tests cover all possible scenarios.
Ignored Edge Cases: Some code paths (exception handling, rare conditions) are complex to cover. High coverage doesn’t necessarily mean thorough testing.
3 Practical reasons why Code Coverage percentage can be misleading
For the sake of this article, I’ve created a dummy .NET API project with the typical three layers: controller, service, and repository.
Here we are: we have reached 53% of total Code Coverage by adding one single test, which does not provide any value!
As you can see, in fact, the WeatherForecastRepository has now reached 100% Code Coverage.
Great job! Or is it?
You can cheat by excluding parts of the code
In C# there is a handy attribute that you can apply to methods and classes: ExcludeFromCodeCoverage.
While this attribute can be useful for classes that you cannot test, it can be used to inflate the Code Coverage percentage by applying it to classes and methods you don’t want to test (maybe because you are lazy?).
We can, in fact, start by applying that attribute to a single class.
You can then add the same attribute to all the other classes – even the Program class! – to reach 100% Code Coverage without writing lots of tests.
Note: to reach 100% I had to exclude everything but the tests on the Repository: otherwise, if I had exactly zero methods under tests, the final Code Coverage would’ve been 0.
As we saw, high Code Coverage is not enough. It’s a good starting point, but it must not be the final goal.
We can, indeed, focus our efforts in different areas:
Test Quality: Prioritize writing meaningful tests over chasing high coverage. Focus on edge cases, boundary values, and scenarios that matter to users.
Mutation Testing: Instead of just measuring coverage, consider mutation testing. It introduces artificial defects and checks if tests catch them.
Finally, my suggestion is to focus on integration tests rather than on unit tests: this testing strategy is called Testing Diamond.
Further readings
To generate Code Coverage reports, I used Coverlet, as I explained in this article (which refers to Visual Studio 2019, but the steps are still valid with newer versions).
In my opinion, we should not focus all our efforts on Unit Tests. On the contrary, we should write more Integration Tests to ensure that the functionality, as a whole, works correctly.
This way of defining tests is called Testing Diamond, and I explained it here:
Code Coverage is a useful metric but should not be the end goal. Aim for a balance: maintain good coverage while ensuring effective testing. Remember that quality matters more than mere numbers. Happy testing! 🚀
I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛
Can your system withstand heavy loads? You can answer this question by running Load Tests. Maybe, using K6 as a free tool.
Understanding how your system reacts to incoming network traffic is crucial to determining whether it’s stable, able to meet the expected SLO, and if the underlying infrastructure and architecture are fine.
How can we simulate many incoming requests? How can we harvest the results of our API calls?
In this article, we will learn how to use K6 to run load tests and display the final result locally in Windows 11.
This article will be the foundation of future content, in which I’ll explore more topics related to load testing, performance tips, and more.
What is Load Testing?
Load testing simulates real-world usage conditions to ensure the software can handle high traffic without compromising performance or user experience.
The importance of load testing lies in its ability to identify bottlenecks and weak points in the system that could lead to slow response times, errors, or crashes when under stress.
By conducting load testing, developers can make necessary optimizations and improvements, ensuring the software is robust, reliable, and scalable. It’s an essential step in delivering a quality product that meets user expectations and maintains business continuity during peak usage times. If you think of it, a system unable to handle the incoming traffic may entirely or partially fail, leading to user dissatisfaction, loss of revenue, and damage to the company’s reputation.
Ideally, you should plan to have automatic load tests in place in your Continuous Delivery pipelines, or, at least, ensure that you run Load tests in your production environment now and then. You then want to compare the test results with the previous ones to ensure that you haven’t introduced bottlenecks in the last releases.
The demo project
For the sake of this article, I created a simple .NET API project: it exposes just one endpoint, /randombook, which returns info about a random book stored in an in-memory Entity Framework DB context.
There are some details that I want to highlight before moving on with the demo.
As you can see, I added a random delay to simulate a random RTT (round-trip time) for accessing the database:
var delayMs = Random.Shared.Next(10, 10000);
// omit
await Task.Delay(delayMs);
I then added a thread-safe counter to keep track of the active operations. I increase the value when the request begins, and decrease it when the request completes. The log message is defined in the lock section to avoid concurrency issues.
Of course, it’s not a perfect solution: it just fits my need for this article.
Install and configure K6 on Windows 11
With K6, you can run the Load Tests by defining the endpoint to call, the number of requests per minute, and some other configurations.
It’s a free tool, and you can install it using Winget:
winget install k6 --source winget
You can ensure that you have installed it correctly by opening a Bash (and not a PowerShell) and executing the following command.
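k6 --version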
Note: You can actually use PowerShell, but you have to modify some system keys to make K6 recognizable as a command.
The --version prints the version installed and the id of the latest GIT commit belonging to the installed package. For example, you will see k6.exe v0.50.0 (commit/f18209a5e3, go1.21.8, windows/amd64).
Now, we can initialize the tool. Open a Bash and run the following command:
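k6 new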
This command generates a script.js file, which you will need to configure in order to set up the Load Testing configurations.
Here’s the scaffolded file (I removed the comments that refer to parts we are not going to cover in this article):
import http from "k6/http"
import { sleep } from "k6"

export const options = {
  // A number specifying the number of VUs to run concurrently.
  vus: 10,
  // A string specifying the total duration of the test run.
  duration: "30s",
}

export default function () {
  http.get("https://test.k6.io")
  sleep(1)
}
Let’s analyze the main parts:
vus: 10: VUs are the Virtual Users: they simulate the incoming requests that can be executed concurrently.
duration: '30s': this value represents the duration of the whole test run;
http.get('https://test.k6.io');: it’s the main function. We are going to call the specified endpoint and keep track of the responses, metrics, timings, and so on;
sleep(1): it’s the sleep time between each iteration.
To run it, you need to call:
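k6 run script.js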
Understanding Virtual Users (VUs) in K6
VUs, Iterations, Sleep time… how do they work together?
I updated the script.js file to clarify how K6 works, and how it affects the API calls.
We are saying “Run the load testing for 30 seconds. I want only ONE execution to exist at a time. After each execution, sleep for 1 second”.
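The relevant part of the updated options would be something like this:

export const options = {
  vus: 1, // only one virtual user at a time
  duration: "30s", // total duration of the test run
}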
Make sure to run the API project, and then run k6 run script.js.
Let’s see what happens:
K6 starts, and immediately calls the API.
On the API, we can see the first incoming call. The API sleeps for 1 second, and then starts sending other requests.
By having a look at the logs printed from the application, we can see that we had no more than one concurrent request:
From the result screen, we can see that we have run our application for 30 seconds (plus another 30 seconds for graceful-stop) and that the max number of VUs was set to 1.
Here, you can find the same results as plain text, making it easier to follow.
Now, let me run the same script but update the VUs. We are going to run this configuration:
export const options = {
  vus: 3,
  duration: "30s",
}
The result is similar, but this time we had performed 16 requests instead of 6. That’s because, as you can see, there were up to 3 concurrent users accessing our APIs.
The final duration was still 30 seconds. However, we managed to accept 3x users without having impacts on the performance, and without returning errors.
Customize Load Testing properties
We have just covered the surface of what K6 can do. Of course, there are many resources in the official K6 documentation, so I won’t repeat everything here.
There are some parts, though, that I want to showcase here (so that you can deep dive into the ones you need).
HTTP verbs
In the previous examples, we used the get HTTP method. As you can imagine, there are other methods that you can use.
Each HTTP method has a corresponding Javascript function. For example, we have
get() for the GET method
post() for the POST method
put() for the PUT method
del() for the DELETE method.
Stages
You can create stages to define the different parts of the execution:
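The stages configuration sketched below mirrors the description that follows (the exact numbers are an assumption):

export const options = {
  stages: [
    { duration: "30s", target: 20 }, // ramp up to 20 VUs
    { duration: "90s", target: 10 }, // scale down to 10 VUs
    { duration: "20s", target: 0 }, // slowly shut down the remaining calls
  ],
}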
With the previous example, I defined three stages:
the first one lasts 30 seconds, and brings the load to 20 VUs;
then, during the next 90 seconds, the number of VUs decreases to 10;
finally, in the last 20 seconds, it slowly shuts down the remaining calls.
As you can see from the result, the total duration was 2m20s (which corresponds to the sum of the stages), and the max amount of requests was 20 (the number defined in the first stage).
Scenarios
Scenarios allow you to define the details of requests iteration.
We always use a scenario, even if we don’t create one: in fact, we use the default scenario that gives us a predetermined time for the gracefulStop value, set to 30 seconds.
We can define custom scenarios to tweak the different parameters used to define how the test should act.
A scenario is nothing but a JSON element where you define arguments like duration, VUs, and so on.
By defining a scenario, you can also decide to run tests on the same endpoint but using different behaviours: you can create a scenario for a gradual growth of users, one for an immediate peak, and so on.
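As a hedged sketch, a custom scenario could look something like this (the scenario name, executor, and values are assumptions):

export const options = {
  scenarios: {
    gradual_growth: {
      executor: "ramping-vus", // gradually change the number of VUs
      startVUs: 0,
      stages: [
        { duration: "30s", target: 20 },
        { duration: "1m", target: 0 },
      ],
      gracefulStop: "10s", // override the default 30s graceful stop
    },
  },
}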
A glimpse to the final report
Now, we can focus on the meaning of the data returned by the tool.
Let’s use again the image we saw after running the script with the complex stages:
We can see lots of values whose names are mostly self-explaining.
We can see, for example, data_received and data_sent, which tell you the size of the data sent and received.
We have information about the duration and response of HTTP requests (http_req_duration, http_req_sending, http_reqs), as well as information about the several phases of an HTTP connection, like http_req_tls_handshaking.
We finally have information about the configurations set in K6, such as iterations, vus, and vus_max.
You can see the average value, the min and max, and some percentiles for most of the values.
Wrapping up
K6 is a nice tool for getting started with load testing.
You can see more examples in the official documentation. I suggest taking some time to explore all the possibilities provided by K6.
As I said before, this is just the beginning: in future articles, we will use K6 to understand how some technical choices impact the performance of the whole application.
I hope you enjoyed this article! Let’s keep in touch on LinkedIn or Twitter! 🤜🤛
You don’t need a physical database to experiment with ORMs. You can use an in-memory DB and seed the database with realistic data generated with Bogus.
Sometimes, you want to experiment with some features or create a demo project, but you don’t want to instantiate a real database instance.
Also, you might want to use some realistic data – not just “test1”, 123, and so on. These values are easy to set but not very practical when demonstrating functionalities.
In this article, we’re going to solve this problem by using Bogus and Entity Framework: you will learn how to generate realistic data and how to store them in an in-memory database.
Bogus, a C# library for generating realistic data
Bogus is a popular library for generating realistic data for your tests. It allows you to choose the category of dummy data that best suits your needs.
It all starts by installing Bogus via NuGet by running Install-Package Bogus.
From here, you can define the so-called Fakers, whose purpose is to generate dummy instances of your classes by auto-populating their fields.
Let’s see a simple example. We have this POCO class named Book:
Note: for the sake of simplicity, I used a dumb approach: author’s first and last name are part of the Book info itself, and the Genres property is treated as an array of enums and not as a flagged enum.
From here, we can start creating our Faker by specifying the referenced type:
Faker<Book> bookFaker = new Faker<Book>();
We can add one or more RuleFor methods to create rules used to generate each property.
The simplest approach is to use the overload where the first parameter is a Function pointing to the property to be populated, and the second is a Function that calls the methods provided by Bogus to create dummy data.
Think of it as this pseudocode:
faker.RuleFor(sm => sm.SomeProperty, f => f.SomeKindOfGenerator.GenerateSomething());
Another approach is to pass as the first argument the name of the property like this:
Now that we’ve learned how to generate a Faker, we can refactor the code to make it easier to read:
private List<Book> GenerateBooks(int count)
{
Faker<Book> bookFaker = new Faker<Book>()
.RuleFor(b => b.Id, f => f.Random.Guid())
.RuleFor(b => b.Title, f => f.Lorem.Text())
.RuleFor(b => b.Genres, f => f.Random.EnumValues<Genre>())
.RuleFor(b => b.AuthorFirstName, f => f.Person.FirstName)
.RuleFor(b => b.AuthorLastName, f => f.Person.LastName)
.RuleFor(nameof(Book.PagesCount), f => f.Random.Number(100, 800))
.RuleForType(typeof(DateOnly), f => f.Date.PastDateOnly());
return bookFaker.Generate(count);
}
If we run it, we can see it generates the following items:
Seeding InMemory Entity Framework with dummy data
Entity Framework is among the most famous ORMs in the .NET ecosystem. Even though it supports many integrations, sometimes you just want to store your items in memory without relying on any specific database implementation.
Using Entity Framework InMemory provider
To add this in-memory provider, you must install the Microsoft.EntityFrameworkCore.InMemory NuGet Package.
Now you can add a new DbContext – which is a sort of container of all the types you store in your database – ensuring that the class inherits from DbContext.
Notice that we first create the items and then, using modelBuilder.Entity<Book>().HasData(booksFromBogus), we set the newly generated items as content for the Books DbSet.
Consume dummy data generated with EF Core
To wrap up, here’s the complete implementation of the DbContext:
We are now ready to instantiate the DbContext, ensure that the Database has been created and seeded with the correct data, and perform the operations needed.
using var dbContext = new BooksDbContext();
dbContext.Database.EnsureCreated();
var allBooks = await dbContext.Books.ToListAsync();
var thrillerBooks = dbContext.Books
.Where(b => b.Genres.Contains(Genre.Thriller))
.ToList();
Further readings
In this blog, we’ve already discussed the Entity Framework. In particular, we used it to perform CRUD operations on a PostgreSQL database.
I suggest you explore the potentialities of Bogus: there are a lot of functionalities that I didn’t cover in this article, and they may make your tests and experiments meaningful and easier to understand.
Bogus is a great library for creating unit and integration tests. However, I find it useful to generate dummy data for several purposes, like creating a stub of a service, populating a UI with realistic data, or trying out other tools and functionalities.
I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛
In Postman, you can define scripts to be executed before the beginning of a request. Can we use them to work with endpoints using Cookie Authentication?
Nowadays, it’s rare to find services that use Cookie Authentication, yet they still exist. How can we configure Cookie Authentication with Postman? How can we centralize the definition using pre-request scripts?
I had to answer these questions when I had to integrate a third-party system that was using Cookie Authentication. Instead of generating a new token manually, I decided to centralize the Cookie creation in a single place, making it automatically available to every subsequent request.
In order to generate the token, I had to send a request to the Authentication endpoint, sending a JSON payload with data coming from Postman’s variables.
In this article, I’ll recap what I learned, teach you some basics of creating pre-request scripts with Postman, and provide a full example of how I used it to centralize the generation and usage of a cookie for a whole Postman collection.
Introducing Postman’s pre-request scripts
As you probably know, Postman allows you to create scripts that are executed before and after an HTTP call.
These scripts are written in JavaScript and can use some objects and methods that come out of the box with Postman.
You can create such scripts for a single request or the whole collection. In the second case, you write the script once so that it becomes available for all the requests stored within that collection.
The operations defined in the Scripts section of the collection are then executed before (or after) every request in the collection.
Here, you can either use the standard JavaScript code—like the dear old console.log— or the pm object to reference the context in which the script will be executed.
For example, you can print the value of a Postman variable by using:
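console.log(pm.variables.get("myVariableName")) // "myVariableName" is just a placeholder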
How to send a POST request with JSON body in Postman pre-request scripts
How can we issue a POST request in the pre-request script, specifying a JSON body?
Postman’s pm object, along with some other methods, exposes the sendRequest function. Its first parameter is the “description” of the request; its second parameter is the callback to execute after the request is completed.
pm.sendRequest(request, (errorResponse, successfulResponse) => {
// do something here
})
You have to carefully craft the request, by specifying the HTTP method, the body, and the content type:
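A request description along these lines could work; the URL, variable names, and payload fields here are assumptions:

const request = {
  url: pm.variables.get("authUrl"),
  method: "POST",
  header: { "Content-Type": "application/json" },
  body: {
    mode: "raw",
    raw: JSON.stringify({
      username: pm.variables.get("username"),
      password: pm.variables.get("password"),
    }),
    options: {
      raw: { language: "json" },
    },
  },
}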
Pay particular attention to the options node: it tells Postman how to treat the body content and what the content type is. Because I was missing this node, I spent too many minutes trying to figure out why this call was badly formed.
options: {
  raw: {
    language: "json"
  }
}
Now, the result of the operation is used to execute the callback function. Generally, you want it to be structured like this:
pm.sendRequest(request, (err, response) => {
  if (err) {
    // handle error
  }

  if (response) {
    // handle success
  }
})
Storing Cookies in Postman (using a Jar)
You have received the response with the token, and you have parsed the response to retrieve the value. Now what?
You cannot store Cookies directly as if they were simple variables. Instead, you must store Cookies in a Jar.
Postman allows you to programmatically operate with cookies only by accessing them via a Jar (yup, pun intended!), that can be initialized like this:
const jar = pm.cookies.jar()
From here, you can add, remove or retrieve cookies by working with the jar object.
To add a new cookie, you must use the set() method of the jar object, specifying the domain the cookie belongs to, its name, its value, and the callback to execute when the operation completes.
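For example (the domain, cookie name, and value here are placeholders):

jar.set("add-your-domain-here.com", "AuthCookie", token, (error) => {
  if (error) {
    console.log(`An error occurred: ${error}`)
  } else {
    console.log("Cookie stored in the jar")
  }
})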
You can try it now: execute a request, have a look at the console logs, and…
We’ve received a strange error:
An error occurred: Error: CookieStore: programmatic access to “add-your-domain-here.com” is denied
Wait, what? What does “programmatic access to X is denied” mean, and how can we solve this error?
For security reasons, you cannot handle cookies via code without letting Postman know that you explicitly want to operate on the specified domain. To overcome this limitation, you need to whitelist the domain associated with the cookie so that Postman will accept that the operation you’re trying to achieve via code is legit.
To enable a domain for cookies operations, you first have to navigate to the headers section of any request under the collection and click the Cookies button.
From here, select Domains Allowlist:
Finally, add your domain to the list of the allowed ones.
Now Postman knows that if you try to set a cookie via code, it’s because you actively want it, allowing you to add your cookies to the jar.
If you open again the Cookie section (see above), you will be able to see the current values for the cookies associated with the domain:
Further readings
Clearly, we’ve just scratched the surface of what you can do with pre-request scripts in Postman. To learn more, have a look at the official documentation:
In this article, we learned what pre-request scripts are, how to execute a POST request passing a JSON object as a body, and how to programmatically add a Cookie in Postman by operating on the Jar object.
For clarity, here’s the complete code I used in my pre-request script.
Notice that to parse the response from the authentication endpoint I used the .json() method, which allows me to access the internal values using the property name, as in jresponse["Token"].
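Putting everything together, a pre-request script along these lines is one possible shape for it (the endpoint URL, variable names, cookie name, and domain are assumptions):

const authRequest = {
  url: pm.variables.get("authUrl"),
  method: "POST",
  header: { "Content-Type": "application/json" },
  body: {
    mode: "raw",
    raw: JSON.stringify({
      username: pm.variables.get("username"),
      password: pm.variables.get("password"),
    }),
    options: { raw: { language: "json" } },
  },
}

pm.sendRequest(authRequest, (err, response) => {
  if (err) {
    console.log(`An error occurred: ${err}`)
    return
  }

  const jresponse = response.json() // parse the JSON body
  const token = jresponse["Token"]

  const jar = pm.cookies.jar()
  jar.set("add-your-domain-here.com", "AuthCookie", token, (error) => {
    if (error) {
      console.log(`An error occurred: ${error}`)
    }
  })
})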
I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛