In today’s world, organizations are rapidly embracing cloud security to safeguard their data and operations. However, as cloud adoption grows, so do the risks. In this post, we highlight the top cloud security challenges and show how Seqrite can help you tackle them with ease.
1. Misconfigurations
One of the simplest yet most dangerous mistakes is misconfiguring cloud workloads: think storage buckets left public, weak IAM settings, or missing encryption. Cybercriminals actively scan for these mistakes. A small misconfiguration can lead to significant data leakage or, in the worst case, ransomware deployment. Seqrite Endpoint Protection Cloud ensures your cloud environment adheres to best-practice security settings before threats even strike.
2. Shared Responsibility Confusion
The cloud model operates on shared responsibility: providers secure infrastructure, you manage your data and configurations. Too many teams skip this second part. Inadequate control over access, authentication, and setup drives serious risks. With Seqrite’s unified dashboard for access control, IAM, and policy enforcement, you stay firmly in control without getting overwhelmed.
3. Expanded Attack Surface
More cloud services, more code, more APIs: more opportunities for attack. Whether it’s serverless functions or public API endpoints, the number of access points grows quickly. Seqrite tackles this with integrated API scanning, vulnerability assessment, and real-time threat detection. Every service, even ephemeral ones, is continuously monitored.
4. Unauthorized Access & Account Hijacking
Attackers often gain entry via stolen credentials, especially in shared or multi-cloud environments. Once inside, they move laterally and hijack more resources. Seqrite’s multi-factor authentication, adaptive risk scoring, and real-time anomaly detection lock out illicit access and alert you instantly.
5. Insufficient Data Encryption
Unencrypted data, whether at rest or in transit, is a gold mine for attackers. Industries with sensitive or regulated information, like healthcare or finance, simply can’t afford this. Seqrite ensures enterprise-grade encryption everywhere you store or transmit data and handles key management so that it’s secure and hassle-free.
6. Poor Visibility and Monitoring
Without centralized visibility, security teams rely on manual cloud consoles and piecemeal logs. That slows response and leaves gaps. Seqrite solves this with a unified monitoring layer that aggregates logs and events across all your cloud environments. You get complete oversight and lightning-fast detection.
7. Regulatory Compliance Pressures
Compliance with GDPR, HIPAA, PCI-DSS, DPDPA and other regulations is mandatory—but complex in multi-cloud environments. Seqrite Data Privacy simplifies compliance with continuous audits, policy enforcement, and detailed reports, helping you reduce audit stress and regulatory risk.
8. Staffing & Skills Gap
Hiring cloud-native, security-savvy experts is tough. Many teams lack the expertise to monitor and secure dynamic cloud environments. Seqrite’s intuitive interface, automation, and policy templates remove much of the manual work, allowing lean IT teams to punch above their weight.
9. Multi-cloud Management Challenges
Working across AWS, Azure, Google Cloud and maybe even private clouds? Each has its own models and configurations. This fragmentation creates blind spots and policy drift. Seqrite consolidates everything into one seamless dashboard, ensuring consistent cloud security policies across all environments.
10. Compliance in Hybrid & Multi-cloud Setups
Hybrid cloud setups introduce additional risks: cross-environment data flows, networking complexities, and inconsistent controls. Seqrite supports consistent security policy application across on-premises, private clouds, and public clouds, no matter where a workload lives.
Bring in Seqrite to keep your cloud journey safe, compliant, and hassle-free.
In my opinion, Unit tests should be well structured and written even better than production code.
In fact, Unit Tests act as a first level of documentation of what your code does and, if written properly, can be the key to fixing bugs quickly and without adding regressions.
One way to improve readability is by grouping similar tests that only differ by the initial input but whose behaviour is the same.
Let’s use a dummy example: some tests on a simple Calculator class that only performs sums on int values.
public static class Calculator
{
    public static int Sum(int first, int second) => first + second;
}
One way to create tests is by creating one test for each possible combination of values:
public class SumTests
{
    [Test]
    public void SumPositiveNumbers()
    {
        var result = Calculator.Sum(1, 5);
        Assert.That(result, Is.EqualTo(6));
    }

    [Test]
    public void SumNegativeNumbers()
    {
        var result = Calculator.Sum(-1, -5);
        Assert.That(result, Is.EqualTo(-6));
    }

    [Test]
    public void SumWithZero()
    {
        var result = Calculator.Sum(1, 0);
        Assert.That(result, Is.EqualTo(1));
    }
}
However, it’s not a good idea: you’ll end up with lots of nearly identical tests (DRY, remember?) that add little to no value to the test suite. Also, this approach forces you to add a new test method for every new kind of test that pops into your mind.
When possible, we should generalize it. With NUnit, we can use the TestCase attribute to specify the list of parameters passed in input to our test method, including the expected result.
We can then simplify the whole test class by creating only one method that accepts the different cases in input and runs tests on those values.
[Test]
[TestCase(1, 5, 6)]
[TestCase(-1, -5, -6)]
[TestCase(1, 0, 1)]
public void SumWorksCorrectly(int first, int second, int expected)
{
    var result = Calculator.Sum(first, second);
    Assert.That(result, Is.EqualTo(expected));
}
By using TestCase, you can cover different cases by simply adding a new case without creating new methods.
Clearly, don’t abuse it: use it only to group methods with similar behaviour – and don’t add if statements in the test method!
There is a more advanced way to create a TestCase in NUnit, named TestCaseSource – but we will talk about it in a future C# tip 😉
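As a quick preview, here is a minimal sketch of what it looks like (the source collection and its values are just illustrative):

public class SumTestsWithSource
{
    private static readonly object[] SumCases =
    {
        new object[] { 1, 5, 6 },
        new object[] { -1, -5, -6 },
        new object[] { 1, 0, 1 },
    };

    [TestCaseSource(nameof(SumCases))]
    public void SumWorksCorrectly(int first, int second, int expected)
    {
        var result = Calculator.Sum(first, second);
        Assert.That(result, Is.EqualTo(expected));
    }
}

The advantage over TestCase is that the cases can be computed at runtime or shared across multiple test classes.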
Further readings
If you are using NUnit, I suggest you read this article about custom equality checks – you might find it handy in your code!
Hey! Jorge Toloza again, Co-Founder and Creative Director at DDS Studio. In this tutorial, we’re going to build a visually rich, infinitely scrolling grid where images move with a parallax effect based on scroll and drag interactions.
We’ll use GSAP for buttery-smooth animations, add a sprinkle of math to achieve infinite tiling, and bring it all together with dynamic visibility animations and a staggered intro reveal.
Let’s get started!
Setting Up the HTML Container
To start, we only need a single container to hold all the tiled image elements. Since we’ll be generating and positioning each tile dynamically with JavaScript, there’s no need for any static markup inside. This keeps our HTML clean and scalable as we duplicate tiles for infinite scrolling.
<div id="images"></div>
Basic Styling for the Grid Items
Now that we have our container, let’s give it the foundational styles it needs to hold and animate a large set of tiles.
We’ll use absolute positioning for each tile so we can freely place them anywhere in the grid. The outer container (#images) is set to relative so that all child .item elements are positioned correctly inside it. Each image fills its tile, and we’ll use will-change: transform to optimize animation performance.
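In practice, the base CSS looks something like this (a sketch: the exact sizes and selectors may differ from the final demo):

#images {
  position: relative;
  width: 100vw;
  height: 100vh;
  overflow: hidden;
}

.item {
  position: absolute;
  top: 0;
  left: 0;
  will-change: transform;
}

.item-image img {
  width: 100%;
  height: 100%;
  object-fit: cover;
}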
To control the visual layout of our grid, we’ll use design data exported directly from Figma. This gives us pixel-perfect placement while keeping layout logic separate from our code.
I created a quick layout in Figma using rectangles to represent tile positions and dimensions. Then I exported that data into a JSON file, giving us a simple array of objects containing x, y, w, and h values for each tile.
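The exported file is essentially an array like this (the numbers below are placeholders, not the actual layout values):

[
  { "x": 0,   "y": 0,  "w": 320, "h": 420 },
  { "x": 360, "y": 80, "w": 280, "h": 360 },
  { "x": 680, "y": 20, "w": 300, "h": 400 }
]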
With the layout data defined, the next step is to dynamically generate our tile grid in the DOM and enable it to scroll infinitely in both directions.
This involves three main steps:
Compute the scaled tile dimensions based on the viewport and the original Figma layout’s aspect ratio.
Duplicate the grid in both the X and Y axes so that as one tile set moves out of view, another seamlessly takes its place.
Store metadata for each tile, such as its original position and a random easing value, which we’ll use to vary the parallax animation slightly for a more organic effect.
The infinite scroll illusion is achieved by duplicating the entire tile set horizontally and vertically. This 2×2 tiling approach ensures there’s always a full set of tiles ready to slide into view as the user scrolls or drags.
onResize() {
// Get current viewport dimensions
this.winW = window.innerWidth;
this.winH = window.innerHeight;
// Scale tile size to match viewport width while keeping original aspect ratio
this.tileSize = {
w: this.winW,
h: this.winW * (this.originalSize.h / this.originalSize.w),
};
// Reset scroll state
this.scroll.current = { x: 0, y: 0 };
this.scroll.target = { x: 0, y: 0 };
this.scroll.last = { x: 0, y: 0 };
// Clear existing tiles from container
this.$container.innerHTML = '';
// Scale item positions and sizes based on new tile size
const baseItems = this.data.map((d, i) => {
const scaleX = this.tileSize.w / this.originalSize.w;
const scaleY = this.tileSize.h / this.originalSize.h;
const source = this.sources[i % this.sources.length];
return {
src: source.src,
caption: source.caption,
x: d.x * scaleX,
y: d.y * scaleY,
w: d.w * scaleX,
h: d.h * scaleY,
};
});
this.items = [];
// Offsets to duplicate the grid in X and Y for seamless looping (2x2 tiling)
const repsX = [0, this.tileSize.w];
const repsY = [0, this.tileSize.h];
baseItems.forEach((base) => {
repsX.forEach((offsetX) => {
repsY.forEach((offsetY) => {
// Create item DOM structure
const el = document.createElement('div');
el.classList.add('item');
el.style.width = `${base.w}px`;
const wrapper = document.createElement('div');
wrapper.classList.add('item-wrapper');
el.appendChild(wrapper);
const itemImage = document.createElement('div');
itemImage.classList.add('item-image');
itemImage.style.width = `${base.w}px`;
itemImage.style.height = `${base.h}px`;
wrapper.appendChild(itemImage);
const img = new Image();
img.src = `./img/${base.src}`;
itemImage.appendChild(img);
const caption = document.createElement('small');
caption.innerHTML = base.caption;
// Split caption into lines for staggered animation
const split = new SplitText(caption, {
type: 'lines',
mask: 'lines',
linesClass: 'line'
});
split.lines.forEach((line, i) => {
line.style.transitionDelay = `${i * 0.15}s`;
line.parentElement.style.transitionDelay = `${i * 0.15}s`;
});
wrapper.appendChild(caption);
this.$container.appendChild(el);
// Observe caption visibility for animation triggering
this.observer.observe(caption);
// Store item metadata including offset, easing, and bounding box
this.items.push({
el,
container: itemImage,
wrapper,
img,
x: base.x + offsetX,
y: base.y + offsetY,
w: base.w,
h: base.h,
extraX: 0,
extraY: 0,
rect: el.getBoundingClientRect(),
ease: Math.random() * 0.5 + 0.5, // Random parallax easing for organic movement
});
});
});
});
// Double the tile area to account for 2x2 duplication
this.tileSize.w *= 2;
this.tileSize.h *= 2;
// Set initial scroll position slightly off-center for visual balance
this.scroll.current.x = this.scroll.target.x = this.scroll.last.x = -this.winW * 0.1;
this.scroll.current.y = this.scroll.target.y = this.scroll.last.y = -this.winH * 0.1;
}
Key Concepts
Scaling the layout ensures that your Figma-defined design adapts to any screen size without distortion.
2×2 duplication ensures seamless continuity when the user scrolls in any direction.
Random easing values create slight variation in tile movement, making the parallax effect feel more natural.
extraX and extraY values will later be used to shift tiles back into view once they scroll offscreen.
SplitText animation is used to break each caption (<small>) into individual lines, enabling line-by-line animation.
Adding Interactive Scroll and Drag Events
To bring the infinite grid to life, we need to connect it to user input. This includes:
Scrolling with the mouse wheel or trackpad
Dragging with a pointer (mouse or touch)
Smooth motion between input updates using linear interpolation (lerp)
Rather than instantly snapping to new positions, we interpolate between the current and target scroll values, which creates fluid, natural transitions.
Scroll and Drag Tracking
We capture two types of user interaction:
1) Wheel Events: wheel input updates a target scroll position. We multiply the deltas by a damping factor to control sensitivity.
In the render loop, we interpolate between the current and target scroll values using a lerp function. This creates smooth, decaying motion rather than abrupt changes.
The scroll.ease value controls how fast the scroll position catches up to the target—smaller values result in slower, smoother motion.
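Put together, the wheel handler and the render-loop interpolation look roughly like this (a sketch: the damping factor and the scroll.ease value are illustrative):

onWheel(e) {
  // Damp the raw wheel deltas to control sensitivity
  const factor = 0.4;
  this.scroll.target.x -= e.deltaX * factor;
  this.scroll.target.y -= e.deltaY * factor;
}

lerp(a, b, t) {
  return a + (b - a) * t;
}

render() {
  // Ease the current scroll position toward the target each frame
  this.scroll.current.x = this.lerp(this.scroll.current.x, this.scroll.target.x, this.scroll.ease);
  this.scroll.current.y = this.lerp(this.scroll.current.y, this.scroll.target.y, this.scroll.ease);

  // ...position the tiles based on this.scroll.current here...

  this.scroll.last.x = this.scroll.current.x;
  this.scroll.last.y = this.scroll.current.y;

  requestAnimationFrame(() => this.render());
}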
Animating Item Visibility with IntersectionObserver
To enhance the visual hierarchy and focus, we’ll highlight only the tiles that are currently within the viewport. This creates a dynamic effect where captions appear and styling changes as tiles enter view.
We’ll use the IntersectionObserver API to detect when each tile becomes visible and toggle a CSS class accordingly.
this.observer = new IntersectionObserver(entries => {
entries.forEach(entry => {
entry.target.classList.toggle('visible', entry.isIntersecting);
});
});
// …and after appending each wrapper:
this.observer.observe(wrapper);
Creating an Intro Animation with GSAP
To finish the experience with a strong visual entry, we’ll animate all currently visible tiles from the center of the screen into their natural grid positions. This creates a polished, attention-grabbing introduction and adds a sense of depth and intentionality to the layout.
We’ll use GSAP for this animation, utilizing gsap.set() to position elements instantly, and gsap.to() with staggered timing to animate them into place.
Selecting Visible Tiles for Animation
First, we filter all tile elements to include only those currently visible in the viewport. This avoids animating offscreen elements and keeps the intro lightweight and focused:
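Here is a sketch of how that can be done (the duration and stagger amount are illustrative values):

// Collect only the wrappers currently inside the viewport
const introItems = [...this.$container.querySelectorAll('.item-wrapper')].filter((item) => {
  const rect = item.getBoundingClientRect();
  return (
    rect.right > 0 && rect.left < window.innerWidth &&
    rect.bottom > 0 && rect.top < window.innerHeight
  );
});

// Start every visible tile from the center of the screen...
introItems.forEach((item) => {
  const rect = item.getBoundingClientRect();
  const x = -rect.x + window.innerWidth * 0.5 - rect.width * 0.5;
  const y = -rect.y + window.innerHeight * 0.5 - rect.height * 0.5;
  gsap.set(item, { x, y });
});

// ...then stagger them back to their natural grid positions
gsap.to(introItems, {
  x: 0,
  y: 0,
  duration: 2,
  ease: 'expo.inOut',
  stagger: 0.05,
});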
x: 0, y: 0 restores the original position set via CSS transforms.
expo.inOut provides a dramatic but smooth easing curve.
stagger creates a cascading effect, enhancing visual rhythm
Wrapping Up
What we’ve built is a scrollable, draggable image grid with a parallax effect, visibility animations, and a smooth GSAP-powered intro. It’s a flexible base you can adapt for creative galleries, interactive backgrounds, or experimental interfaces.
Asynchronous programming enables you to execute multiple operations without blocking the main thread.
In general, we often think of the Happy Scenario, when all the operations go smoothly, but we rarely consider what to do when an error occurs.
In this article, we will explore how Task.WaitAll and Task.WhenAll behave when an error is thrown in one of the awaited Tasks.
Prepare the tasks to be executed
For the sake of this article, we are going to use a silly method that returns the same number passed in input but throws an exception in case the input number can be divided by 3:
public Task<int> Echo(int value) => Task.Factory.StartNew(
    () =>
    {
        if (value % 3 == 0)
        {
            Console.WriteLine($"[LOG] You cannot use {value}!");
            throw new Exception($"[EXCEPTION] Value cannot be {value}");
        }
        Console.WriteLine($"[LOG] {value} is a valid value!");
        return value;
    }
);
Those Console.WriteLine instructions will allow us to see what’s happening “live”.
We prepare the collection of tasks to be awaited by using a simple Enumerable.Range
var tasks = Enumerable.Range(1, 11).Select(Echo);
And then, we use a try-catch block with some logs to showcase what happens when we run the application.
try
{
    Console.WriteLine("START");
    // await all the tasks
    Console.WriteLine("END");
}
catch (Exception ex)
{
Console.WriteLine("The exception message is: {0}", ex.Message);
Console.WriteLine("The exception type is: {0}", ex.GetType().FullName);
if (ex.InnerException is not null)
{
Console.WriteLine("Inner exception: {0}", ex.InnerException.Message);
}
}
finally
{
    Console.WriteLine("FINALLY!");
}
If we run it all together, we can notice that nothing really happened:
In fact, we have only defined the collection of tasks (which does not actually exist yet, since the result is stored in a lazily-evaluated enumeration).
We can, then, call WaitAll and WhenAll to see what happens when an error occurs.
Error handling when using Task.WaitAll
It’s time to execute the tasks stored in the tasks collection, like this:
try
{
    Console.WriteLine("START");
    Task.WaitAll(tasks.ToArray()); // await all the tasks
    Console.WriteLine("END");
}
Task.WaitAll accepts an array of tasks to be awaited and does not return anything.
The execution goes like this:
START
1 is a valid value!
2 is a valid value!
:( You cannot use 6!
5 is a valid value!
:( You cannot use 3!
4 is a valid value!
8 is a valid value!
10 is a valid value!
:( You cannot use 9!
7 is a valid value!
11 is a valid value!
The exception message is: One or more errors occurred. ([EXCEPTION] Value cannot be 3) ([EXCEPTION] Value cannot be 6) ([EXCEPTION] Value cannot be 9)
The exception type is: System.AggregateException
Inner exception: [EXCEPTION] Value cannot be 3
FINALLY!
There are a few things to notice:
the tasks are not executed in sequence: for example, 6 was printed before 4. Well, to be honest, we can say that Console.WriteLine printed the messages in that sequence, but maybe the tasks were executed in a different order (as you can deduce from the order of the error messages);
all the tasks are executed before jumping to the catch block;
the exception caught in the catch block is of type System.AggregateException; we’ll come back to it later;
the InnerException property of the exception being caught contains the info for the first exception that was thrown.
Error handling when using Task.WhenAll
There are two main differences to notice when comparing Task.WaitAll and Task.WhenAll:
Task.WhenAll accepts any kind of collection as input (as long as it is an IEnumerable);
it returns a Task that you have to await.
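For reference, the awaiting part becomes something like this (a sketch reusing the same try/catch/finally structure shown before):

try
{
    Console.WriteLine("START");
    await Task.WhenAll(tasks); // await all the tasks
    Console.WriteLine("END");
}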
And what happens when we run the program?
START
2 is a valid value!
1 is a valid value!
4 is a valid value!
:( You cannot use 3!
7 is a valid value!
5 is a valid value!
:( You cannot use 6!
8 is a valid value!
10 is a valid value!
11 is a valid value!
:( You cannot use 9!
The exception message is: [EXCEPTION] Value cannot be 3
The exception type is: System.Exception
FINALLY!
Again, there are a few things to notice:
just as before, the messages are not printed in order;
the exception message contains the message for the first exception thrown;
the exception is of type System.Exception, and not System.AggregateException as we saw before.
This means that the first exception breaks everything, and you lose the info about the other exceptions that were thrown.
📩 but now, a question for you: we learned that, when using Task.WhenAll, only the first exception gets caught by the catch block. What happens to the other exceptions? How can we retrieve them? Drop a message in the comment below ⬇️
Comparing Task.WaitAll and Task.WhenAll
Task.WaitAll and Task.WhenAll are similar but not identical.
Task.WaitAll should be used when you are in a synchronous context and need to block the current thread until all tasks are complete. This is common in simple old-style console applications or scenarios where asynchronous programming is not required. However, it is not recommended in UI or modern ASP.NET applications because it can cause deadlocks or freeze the UI.
Task.WhenAll is preferred in modern C# code, especially in asynchronous methods (where you can use async Task). It allows you to await the completion of multiple tasks without blocking the calling thread, making it suitable for environments where responsiveness is important. It also enables easier composition of continuations and better exception handling.
Let’s wrap it up in a table:
Feature | Task.WaitAll | Task.WhenAll
--- | --- | ---
Return Type | void | Task or Task<TResult[]>
Blocking/Non-blocking | Blocking (waits synchronously) | Non-blocking (returns a Task)
Exception Handling | Throws AggregateException immediately | Exceptions observed when awaited
Usage Context | Synchronous code (e.g., console apps) | Asynchronous code (e.g., async methods)
Continuation | Not possible (since it blocks) | Possible (use .ContinueWith or await)
Deadlock Risk | Higher in UI contexts | Lower (if properly awaited)
Bonus tip: get the best out of AggregateException
We can expand a bit on the AggregateException type.
That specific type of exception acts as a container for all the exceptions thrown when using Task.WaitAll.
It exposes a property named InnerExceptions that contains all the exceptions thrown, so you can enumerate and inspect each of them.
A common example is this:
if (ex is AggregateException aggEx)
{
Console.WriteLine("There are {0} exceptions in the aggregate exception.", aggEx.InnerExceptions.Count);
foreach (var innerEx in aggEx.InnerExceptions)
{
Console.WriteLine("Inner exception: {0}", innerEx.Message);
}
}
Further readings
This article is all about handling the unhappy path.
If you want to learn more about Task.WaitAll and Task.WhenAll, I’d suggest you read the following two articles that I find totally interesting and well-written:
Small changes sometimes make a huge difference. Learn these 6 tips to improve the performance of your application just by handling strings correctly.
Sometimes, just a minor change makes a huge difference. Maybe you won’t notice it when performing the same operation a few times. Still, the improvement is significant when repeating the operation thousands of times.
In this article, we will learn six simple tricks to improve the performance of your application when dealing with strings.
Note: this article is part of C# Advent Calendar 2023, organized by Matthew D. Groves: it’s maybe the only Christmas tradition I like (yes, I’m kind of a Grinch 😂).
Benchmark structure, with dependencies
Before jumping to the benchmarks, I want to spend a few words on the tools I used for this article.
The project is a .NET 8 class library running on a laptop with an i5 processor.
Running benchmarks with BenchmarkDotNet
I’m using BenchmarkDotNet to create benchmarks for my code. BenchmarkDotNet is a library that runs your methods several times, captures some metrics, and generates a report of the executions. If you follow my blog, you might know I’ve used it several times – for example, in my old article “Enum.HasFlag performance with BenchmarkDotNet”.
All the benchmarks I created follow the same structure:
the class is marked with the [MemoryDiagnoser] attribute: the benchmark will retrieve info for both time and memory usage;
there is a property named Size with the attribute [Params]: this attribute lists the possible values for the Size property;
there is a method marked as [IterationSetup]: this method runs before every single execution, takes the value from the Size property, and initializes the AllStrings array;
the methods that are part of the benchmark are marked with the [Benchmark] attribute.
Generating strings with Bogus
I relied on Bogus to create dummy values. This NuGet library allows you to generate realistic values for your objects with a great level of customization.
The string array generation strategy is shared across all the benchmarks, so I moved it to a static method:
Here I have a default set of predefined values ([string.Empty, " ", "\n \t", null]), which can be expanded with the values coming from the additionalStrings array. These values are then placed in random positions of the array.
In most cases, though, the value of the string is defined by Bogus.
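The generator itself is not shown here, but based on the description above it might look something like this (a sketch: the method shape matches the StringArrayGenerator.Generate calls used in the benchmarks, while the specific Bogus calls are assumptions):

using System.Linq;
using Bogus;

public static class StringArrayGenerator
{
    private static readonly string[] PredefinedValues = { string.Empty, "   ", "\n  \t", null };

    public static string[] Generate(int size, params string[] additionalStrings)
    {
        string[] specialValues = PredefinedValues.Concat(additionalStrings).ToArray();
        var faker = new Faker();

        string[] result = new string[size];
        for (int i = 0; i < size; i++)
        {
            // Most values are realistic words generated by Bogus
            result[i] = faker.Lorem.Word();
        }

        // The special values are then placed at random positions of the array
        foreach (string special in specialValues)
        {
            result[faker.Random.Int(0, size - 1)] = special;
        }

        return result;
    }
}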
Generating plots with chartbenchmark.net
To generate the plots you will see in this article, I relied on chartbenchmark.net, a fantastic tool that transforms the console output generated by BenchmarkDotNet into a dynamic, customizable plot. This tool, created by Carlos Villegas, is available on GitHub, and it surely deserves a star!
Please note that all the plots in this article have a Log10 scale: this scale allows me to show you the performance values of all the executions in the same plot. If I used the Linear scale, you would be able to see only the biggest values.
We are ready. It’s time to run some benchmarks!
Tip #1: StringBuilder is (almost always) better than String Concatenation
Let’s start with a simple trick: if you need to concatenate strings, using a StringBuilder is generally more efficient than concatenating them with the + operator.
Whenever you concatenate strings with the + sign, you create a new instance of a string. This operation takes some time and allocates memory for every operation.
On the contrary, using a StringBuilder object, you can append the strings to an internal buffer and generate the final string only once, at the end, with a single ToString() call.
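The two benchmark methods are not reported here, but they boil down to something like this (a sketch following the benchmark structure described earlier; it requires using System.Text for StringBuilder):

[Benchmark(Baseline = true)]
public void WithStringBuilder()
{
    StringBuilder sb = new StringBuilder();
    foreach (string s in AllStrings)
    {
        sb.Append(s);
    }
    _ = sb.ToString();
}

[Benchmark]
public void WithConcatenation()
{
    string result = string.Empty;
    foreach (string s in AllStrings)
    {
        result += s; // allocates a brand new string at every iteration
    }
}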
Here’s the result table:
Method | Size | Mean | Error | StdDev | Median | Ratio | RatioSD | Allocated | Alloc Ratio
--- | --- | --- | --- | --- | --- | --- | --- | --- | ---
WithStringBuilder | 4 | 4.891 us | 0.5568 us | 1.607 us | 4.750 us | 1.00 | 0.00 | 1016 B | 1.00
WithConcatenation | 4 | 3.130 us | 0.4517 us | 1.318 us | 2.800 us | 0.72 | 0.39 | 776 B | 0.76
WithStringBuilder | 100 | 7.649 us | 0.6596 us | 1.924 us | 7.650 us | 1.00 | 0.00 | 4376 B | 1.00
WithConcatenation | 100 | 13.804 us | 1.1970 us | 3.473 us | 13.800 us | 1.96 | 0.82 | 51192 B | 11.70
WithStringBuilder | 10000 | 113.091 us | 4.2106 us | 12.081 us | 111.000 us | 1.00 | 0.00 | 217200 B | 1.00
WithConcatenation | 10000 | 74,512.259 us | 2,111.4213 us | 6,058.064 us | 72,593.050 us | 666.43 | 91.44 | 466990336 B | 2,150.05
WithStringBuilder | 100000 | 1,037.523 us | 37.1009 us | 108.225 us | 1,012.350 us | 1.00 | 0.00 | 2052376 B | 1.00
WithConcatenation | 100000 | 7,469,344.914 us | 69,720.9843 us | 61,805.837 us | 7,465,779.900 us | 7,335.08 | 787.44 | 46925872520 B | 22,864.17
Let’s see it as a plot.
Beware of the scale in the diagram!: it’s a Log10 scale, so you’d better have a look at the value displayed on the Y-axis.
As you can see, there is a considerable performance improvement.
There are some remarkable points:
When there are just a few strings to concatenate, the + operator is more performant, both on timing and allocated memory;
When you need to concatenate 100000 strings, the concatenation is ~7000 times slower than the string builder.
In conclusion, use the StringBuilder to concatenate more than 5 or 6 strings. Use the string concatenation for smaller operations.
Edit 2024-01-08: it turns out that string.Concat has an overload that accepts an array of strings. string.Concat(string[]) is actually faster than using the StringBuilder. Read more in this article by Robin Choffardet.
Tip #2: EndsWith(string) vs EndsWith(char): pick the right overload
One simple improvement can be made if you use StartsWith or EndsWith, passing a single character.
There are two similar overloads: one that accepts a string, and one that accepts a char.
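The two benchmark methods differ only in the overload they call (a sketch; the character to search for is arbitrary):

[Benchmark]
public void EndsWithString()
{
    foreach (string s in AllStrings)
    {
        _ = s?.EndsWith("e");
    }
}

[Benchmark]
public void EndsWithChar()
{
    foreach (string s in AllStrings)
    {
        _ = s?.EndsWith('e');
    }
}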
Again, let’s generate the plot using the Log10 scale:
They appear to be almost identical, but look closely: based on this benchmark, with 10.000 items, using EndsWith(string) is 10x slower than EndsWith(char).
Also, the duration ratio on the 1.000.000-items array is ~3.5. At first, I thought there was an error in the benchmark, but when I reran it, the ratio did not change.
It looks like you have the best improvement ratio when the array has ~10.000 items.
Tip #3: IsNullOrEmpty vs IsNullOrWhitespace vs IsNullOrEmpty + Trim
As you might know, string.IsNullOrWhiteSpace performs stricter checks than string.IsNullOrEmpty.
To demonstrate it, I have created three benchmarks: one for string.IsNullOrEmpty, one for string.IsNullOrWhiteSpace, and another one that lays in between: it first calls Trim() on the string, and then calls string.IsNullOrEmpty.
As you can see from the Log10 table, the results are pretty similar:
On average, StringIsNullOrWhitespace is ~2 times slower than StringIsNullOrEmpty.
So, what should we do? Here’s my two cents:
For all the data coming from the outside (passed as input to your system, received from an API call, read from the database), use string.IsNullOrWhiteSpace: this way you can ensure that you are not receiving unexpected data;
If you read data from an external API, customize your JSON deserializer to convert whitespace strings as empty values;
Needless to say, choose the proper method depending on the use case. If a string like “\n \n \t” is a valid value for you, use string.IsNullOrEmpty.
Tip #4: ToUpper vs ToUpperInvariant vs ToLower vs ToLowerInvariant: they look similar, but they are not
Even though they look similar, there is a difference in terms of performance between these four methods.
[MemoryDiagnoser]
public class ToUpperVsToLower()
{
    [Params(100, 1000, 10_000, 100_000, 1_000_000)]
    public int Size;

    public string[] AllStrings { get; set; }

    [IterationSetup]
    public void Setup()
    {
        AllStrings = StringArrayGenerator.Generate(Size);
    }

    [Benchmark]
    public void WithToUpper()
    {
        foreach (string s in AllStrings)
        {
            _ = s?.ToUpper();
        }
    }

    [Benchmark]
    public void WithToUpperInvariant()
    {
        foreach (string s in AllStrings)
        {
            _ = s?.ToUpperInvariant();
        }
    }

    [Benchmark]
    public void WithToLower()
    {
        foreach (string s in AllStrings)
        {
            _ = s?.ToLower();
        }
    }

    [Benchmark]
    public void WithToLowerInvariant()
    {
        foreach (string s in AllStrings)
        {
            _ = s?.ToLowerInvariant();
        }
    }
}
What will this benchmark generate?
Method | Size | Mean | Error | StdDev | Median | P95 | Ratio
--- | --- | --- | --- | --- | --- | --- | ---
WithToUpper | 100 | 9.153 us | 0.9720 us | 2.789 us | 8.200 us | 14.980 us | 1.57
WithToUpperInvariant | 100 | 6.572 us | 0.5650 us | 1.639 us | 6.200 us | 9.400 us | 1.14
WithToLower | 100 | 6.881 us | 0.5076 us | 1.489 us | 7.100 us | 9.220 us | 1.19
WithToLowerInvariant | 100 | 6.143 us | 0.5212 us | 1.529 us | 6.100 us | 8.400 us | 1.00
WithToUpper | 1000 | 69.776 us | 9.5416 us | 27.833 us | 68.650 us | 108.815 us | 2.60
WithToUpperInvariant | 1000 | 51.284 us | 7.7945 us | 22.860 us | 38.700 us | 89.290 us | 1.85
WithToLower | 1000 | 49.520 us | 5.6085 us | 16.449 us | 48.100 us | 79.110 us | 1.85
WithToLowerInvariant | 1000 | 27.000 us | 0.7370 us | 2.103 us | 26.850 us | 30.375 us | 1.00
WithToUpper | 10000 | 241.221 us | 4.0480 us | 3.588 us | 240.900 us | 246.560 us | 1.68
WithToUpperInvariant | 10000 | 339.370 us | 42.4036 us | 125.028 us | 381.950 us | 594.760 us | 1.48
WithToLower | 10000 | 246.861 us | 15.7924 us | 45.565 us | 257.250 us | 302.875 us | 1.12
WithToLowerInvariant | 10000 | 143.529 us | 2.1542 us | 1.910 us | 143.500 us | 146.105 us | 1.00
WithToUpper | 100000 | 2,165.838 us | 84.7013 us | 223.137 us | 2,118.900 us | 2,875.800 us | 1.66
WithToUpperInvariant | 100000 | 1,885.329 us | 36.8408 us | 63.548 us | 1,894.500 us | 1,967.020 us | 1.41
WithToLower | 100000 | 1,478.696 us | 23.7192 us | 50.547 us | 1,472.100 us | 1,571.330 us | 1.10
WithToLowerInvariant | 100000 | 1,335.950 us | 18.2716 us | 35.203 us | 1,330.100 us | 1,404.175 us | 1.00
WithToUpper | 1000000 | 20,936.247 us | 414.7538 us | 1,163.014 us | 20,905.150 us | 22,928.350 us | 1.64
WithToUpperInvariant | 1000000 | 19,056.983 us | 368.7473 us | 287.894 us | 19,085.400 us | 19,422.880 us | 1.41
WithToLower | 1000000 | 14,266.714 us | 204.2906 us | 181.098 us | 14,236.500 us | 14,593.035 us | 1.06
WithToLowerInvariant | 1000000 | 13,464.127 us | 266.7547 us | 327.599 us | 13,511.450 us | 13,926.495 us | 1.00
Let’s see it as the usual Log10 plot:
We can notice a few points:
The ToUpper family is generally slower than the ToLower family;
The Invariant family is faster than the non-Invariant one; we will see more below;
So, if you have to normalize strings using the same casing, ToLowerInvariant is the best choice.
Tip #5: OrdinalIgnoreCase vs InvariantCultureIgnoreCase: logically (almost) equivalent, but with different performance
Comparing strings is trivial: the string.Compare method is all you need.
There are several modes to compare strings: you can specify the comparison rules by setting the comparisonType parameter, which accepts a StringComparison value.
As you can see, there’s a HUGE difference between Ordinal and Invariant.
When dealing with 100.000 items, StringComparison.InvariantCultureIgnoreCase is 12 times slower than StringComparison.OrdinalIgnoreCase!
Why? Also, why should we use one instead of the other?
Have a look at this code snippet:
var s1 = "Aa";
var s2 = "A" + newstring('\u0000', 3) + "a";
string.Equals(s1, s2, StringComparison.InvariantCultureIgnoreCase); //Truestring.Equals(s1, s2, StringComparison.OrdinalIgnoreCase); //False
As you can see, s1 and s2 represent equivalent, but not equal, strings. We can then deduce that OrdinalIgnoreCase checks for the exact values of the characters, while InvariantCultureIgnoreCase checks the string’s “meaning”.
So, in most cases, you might want to use OrdinalIgnoreCase (as always, it depends on your use case!)
Tip #6: Newtonsoft vs System.Text.Json: it’s a matter of memory allocation, not time
For the last benchmark, I created the exact same model used as an example in the official documentation.
This benchmark aims to see which JSON serialization library is faster: Newtonsoft or System.Text.Json?
As you might know, the .NET team has added lots of performance improvements to the JSON Serialization functionalities, and you can really see the difference!
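The two benchmark methods simply serialize the same collection with each library (a sketch; the Forecasts collection of WeatherForecast items mirrors the docs example mentioned above and is an assumption here):

[Benchmark(Baseline = true)]
public void WithJson()
{
    foreach (WeatherForecast forecast in Forecasts)
    {
        _ = System.Text.Json.JsonSerializer.Serialize(forecast);
    }
}

[Benchmark]
public void WithNewtonsoft()
{
    foreach (WeatherForecast forecast in Forecasts)
    {
        _ = Newtonsoft.Json.JsonConvert.SerializeObject(forecast);
    }
}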
Method | Size | Mean | Error | StdDev | Median | Ratio | RatioSD | Gen0 | Gen1 | Allocated | Alloc Ratio
--- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | ---
WithJson | 100 | 2.063 ms | 0.1409 ms | 0.3927 ms | 1.924 ms | 1.00 | 0.00 | – | – | 292.87 KB | 1.00
WithNewtonsoft | 100 | 4.452 ms | 0.1185 ms | 0.3243 ms | 4.391 ms | 2.21 | 0.39 | – | – | 882.71 KB | 3.01
WithJson | 10000 | 44.237 ms | 0.8787 ms | 1.3936 ms | 43.873 ms | 1.00 | 0.00 | 4000.0000 | 1000.0000 | 29374.98 KB | 1.00
WithNewtonsoft | 10000 | 78.661 ms | 1.3542 ms | 2.6090 ms | 78.865 ms | 1.77 | 0.08 | 14000.0000 | 1000.0000 | 88440.99 KB | 3.01
WithJson | 1000000 | 4,233.583 ms | 82.5804 ms | 113.0369 ms | 4,202.359 ms | 1.00 | 0.00 | 484000.0000 | 1000.0000 | 2965741.56 KB | 1.00
WithNewtonsoft | 1000000 | 5,260.680 ms | 101.6941 ms | 108.8116 ms | 5,219.955 ms | 1.24 | 0.04 | 1448000.0000 | 1000.0000 | 8872031.8 KB | 2.99
As you can see, Newtonsoft is 2x slower than System.Text.Json, and it allocates 3x the memory compared with the other library.
So, well, if you don’t use library-specific functionalities, I suggest you replace Newtonsoft with System.Text.Json.
Wrapping up
In this article, we learned that even tiny changes can make a difference in the long run.
Let’s recap some:
Using StringBuilder is generally WAY faster than using string concatenation unless you need to concatenate 2 to 4 strings;
Sometimes, the difference is not about execution time but memory usage;
EndsWith and StartsWith perform better if you look for a char instead of a string. If you think of it, it totally makes sense!
More often than not, string.IsNullOrWhiteSpace performs better checks than string.IsNullOrEmpty; however, there is a huge difference in terms of performance, so you should pick the correct method depending on the usage;
ToUpper and ToLower look similar; however, ToLower is quite a bit faster than ToUpper;
Ordinal and Invariant comparison return the same value for almost every input; but Ordinal is faster than Invariant;
Newtonsoft performs similarly to System.Text.Json, but it allocates way more memory.
My suggestion is always the same: take your time to explore the possibilities! Toy with your code, try to break it, benchmark it. You’ll find interesting takes!
I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛
Fragment shaders allow us to create smooth, organic visuals that are difficult to achieve with standard polygon-based rendering in WebGL. One powerful example is the metaball effect, where multiple objects blend and deform seamlessly. This can be implemented using a technique called ray marching, directly within a fragment shader.
In this tutorial, we’ll walk you through how to create droplet-like, bubble spheres using Three.js and GLSL—an effect that responds interactively to your mouse movements. But first, take a look at the demo video below to see the final result in action.
Overview
Let’s take a look at the overall structure of the demo and review the steps we’ll follow to build it.
We arrange spheres along the mouse trail to create a stretchy, elastic motion.
Let’s get started!
1. Setup
We render a single fullscreen plane that covers the entire viewport.
// Output.ts
const planeGeometry = new THREE.PlaneGeometry(2.0, 2.0);
const planeMaterial = new THREE.RawShaderMaterial({
vertexShader: base_vert,
fragmentShader: output_frag,
uniforms: this.uniforms,
});
const plane = new THREE.Mesh(planeGeometry, planeMaterial);
this.scene.add(plane);
We define a uniform variable named uResolution to pass the canvas size to the shader, where Common.width and Common.height represent the width and height of the canvas in pixels. This uniform will be used to normalize coordinates based on the screen resolution.
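For reference, the uniforms object passed to the material above might be defined like this (a sketch; uTime is included because it is used later for the noise animation):

// Output.ts (sketch)
this.uniforms = {
  uResolution: {
    value: new THREE.Vector2(Common.width, Common.height),
  },
  uTime: { value: 0.0 },
};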
The vertex shader receives the position attribute.
Since the xy components of position originally range from -1 to 1, we convert them to a range from 0 to 1 and output them as a texture coordinate called vTexCoord. This is passed to the fragment shader and used to calculate colors or effects based on the position on the screen.
The fragment shader receives the interpolated texture coordinate vTexCoord and the uniform variable uResolution representing the canvas size. Here, we temporarily use vTexCoord to output color for testing.
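A minimal version of the two shaders described above could look like this (a sketch; since we use RawShaderMaterial, the attributes, uniforms, and precision are declared manually):

// base.vert (sketch)
attribute vec3 position;
varying vec2 vTexCoord;

void main() {
  // Remap xy from [-1, 1] to [0, 1] and pass it to the fragment shader
  vTexCoord = position.xy * 0.5 + 0.5;
  gl_Position = vec4(position, 1.0);
}

// output.frag (sketch)
precision highp float;
uniform vec2 uResolution;
varying vec2 vTexCoord;

void main() {
  // Temporary test output: visualize the texture coordinates as color
  gl_FragColor = vec4(vTexCoord, 0.0, 1.0);
}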
Now we’re all set to start drawing in the fragment shader! Next, let’s move on to actually rendering the spheres.
2. Ray Marching
2.1. What is Ray Marching?
As mentioned at the beginning, we will use a method called ray marching to render spheres. Ray marching proceeds in the following steps:
Define the scene
Set the camera (viewing) direction
Cast rays
Evaluate the distance from the current ray position to the nearest object in the scene.
Move the ray forward by that distance
Check for a hit
For example, let’s consider a scene with three spheres. These spheres are expressed using SDFs (Signed Distance Functions), which will be explained in detail later.
First, we determine the camera direction. Once the direction is set, we cast a ray in that direction.
Next, we evaluate the distance to all objects from the current ray position, and take the minimum of these distances.
After obtaining this distance, we move the ray forward by that amount.
We repeat this process until either the ray gets close enough to an object—closer than a small threshold—or the maximum number of steps is reached. If the distance is below the threshold, we consider it a “hit” and shade the corresponding pixel.
For example, in the figure above, a hit is detected on the 8th ray marching step.
If the maximum number of steps were set to 7, the 7th step would not have hit anything yet. But since the limit is reached, the loop ends and no hit is detected.
Therefore, nothing would be rendered at that position. If parts of an object appear to be missing in the final image, it may be due to an insufficient number of steps. However, be aware that increasing the step count will also increase the computational load.
To better understand this process, try running this demo to see how it works in practice.
2.2. Signed Distance Function
In the previous section, we briefly mentioned the SDF (Signed Distance Function). Let’s take a moment to understand what it is.
An SDF is a function that returns the distance from a point to a particular shape. The key characteristic is that it returns a positive or negative value depending on whether the point is outside or inside the shape.
For example, here is the distance function for a sphere:
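In GLSL, the standard formulation (matching the sdSphere function used later) is:

float sdSphere(vec3 p, float s) {
  // Distance from point p to the surface of a sphere of radius s centered at the origin
  return length(p) - s;
}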
Here, p is a vector representing the position relative to the origin, and s is the radius of the sphere.
This function calculates how far the point p is from the surface of a sphere centered at the origin with radius s.
If the result is positive, the point is outside the sphere.
If negative, it is inside the sphere.
If the result is zero, the point is on the surface—this is considered a hit point (in practice, we detect a hit when the distance is less than a small threshold).
In this demo, we use a sphere’s distance function, but many other shapes have their own distance functions as well.
After that, inside the map function, two spheres are defined and their distances calculated using sdSphere. The variable d is initially set to a large value and updated with the min function to keep track of the shortest distance to the surface.
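A sketch of such a map function (the sphere centers and radii here are placeholders, not the demo’s actual values):

float map(vec3 p) {
  float d = 10000.0; // start with a very large distance
  d = min(d, sdSphere(p - vec3(-0.3, 0.0, 0.0), 0.25));
  d = min(d, sdSphere(p - vec3( 0.3, 0.0, 0.0), 0.25));
  return d;
}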
Then we run a ray marching loop, which updates the ray position by computing the distance to the nearest object at each step. The loop ends either after a fixed number of iterations or when the distance becomes smaller than a threshold (dist < EPS):
for ( int i = 0; i < ITR; ++ i ) {
dist = map(ray);
ray += rayDirection * dist;
if ( dist < EPS ) break ;
}
Finally, we determine the output color. We use black as the default color (background), and render a white pixel only if a hit is detected:
vec3 color = vec3(0.0);
if ( dist < EPS ) {
color = vec3(1.0);
}
We’ve successfully rendered two overlapping spheres using ray marching!
2.4. Normals
Although we successfully rendered spheres in the previous section, the scene still looks flat and lacks depth. This is because we haven’t applied any shading or visual effects that respond to surface orientation.
While we won’t implement full shading in this demo, we’ll still compute surface normals, as they’re essential for adding surface detail and other visual effects.
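The usual way to do this is to sample the distance field at small offsets around the hit point (a sketch; the offset is a tiny constant such as 0.0001):

vec3 generateNormal(vec3 p) {
  float e = 0.0001; // small offset for the central difference
  return normalize(vec3(
    map(p + vec3(e, 0.0, 0.0)) - map(p - vec3(e, 0.0, 0.0)),
    map(p + vec3(0.0, e, 0.0)) - map(p - vec3(0.0, e, 0.0)),
    map(p + vec3(0.0, 0.0, e)) - map(p - vec3(0.0, 0.0, e))
  ));
}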
At first glance, this may seem hard to understand. Put simply, this computes the gradient of the distance function, which corresponds to the normal vector.
If you’ve studied vector calculus, this might be easy to understand. For many others, though, it may seem a bit difficult.
However, for those who are interested in how it works, we’ll now walk through the explanation in more detail.
The gradient of a scalar function 𝑓(𝑥,𝑦,𝑧) is simply a vector composed of its partial derivatives. It points in the direction of the greatest rate of increase of the function:
∇𝑓 = (∂𝑓/∂𝑥, ∂𝑓/∂𝑦, ∂𝑓/∂𝑧)
To compute this gradient numerically, we can use the central difference method. For example, for the 𝑥 component:
∂𝑓/∂𝑥 ≈ ( 𝑓(𝑥+𝜀, 𝑦, 𝑧) − 𝑓(𝑥−𝜀, 𝑦, 𝑧) ) / 2𝜀
We apply the same idea for the 𝑦 and 𝑧 components. Note: The factor 2𝜀 is omitted in the code since we normalize the result using normalize().
Next, let us consider a signed distance function 𝑓(𝑥,𝑦,𝑧), which returns the shortest distance from any point in space to the surface of an object. By definition, 𝑓(𝑥,𝑦,𝑧)=0 on the surface of the object.
Assume that 𝑓 is smooth (i.e., differentiable) in the region of interest. When the point (𝑥,𝑦,𝑧) undergoes a small displacement Δ𝒓=(Δ𝑥,Δ𝑦,Δ𝑧), the change in the function value Δ𝑓 can be approximated using the first-order Taylor expansion:
Δ𝑓 ≈ ∇𝑓 · Δ𝒓
Here,∇𝑓 is the gradient vector of 𝑓, and Δ𝒓 is an arbitrary small displacement vector.
Now, since 𝑓=0 on the surface and remains constant as we move along the surface (i.e., tangentially), the function value does not change, so Δ𝑓=0. Therefore:
∇𝑓 · Δ𝒓 = 0
This means that the gradient vector is perpendicular to any tangent vector Δ𝒓 on the surface. In other words, the gradient vector ∇𝑓 points in the direction of the surface normal.
Thus, the gradient of a signed distance function gives the surface normal direction at any point on the surface.
2.5. Visualizing Normals with Color
To verify that the surface normals are being calculated correctly, we can visualize them using color.
if ( dist < EPS ) {
vec3 normal = generateNormal(ray);
color = normal;
}
Note that within the if block, ray refers to a point on the surface of the object. So by passing ray to generateNormal, we can obtain the surface normal at the point of intersection.
When we render the scene, you’ll notice that the surface of the sphere is shaded in red, green, and blue based on the orientation of the normal vectors. This is because we’re mapping the 𝑥, 𝑦, and 𝑧 components of the normal vector to the RGB color channels respectively.
This is a common and intuitive way to debug normal vectors visually, helping us ensure they are computed correctly.
When combining two spheres with the standard min() function, a hard edge forms where the shapes intersect, resulting in an unnatural boundary. To avoid this, we can use a blending function called smoothMin, which softens the transition by merging the distance values smoothly.
This function creates a smooth, continuous connection between shapes—producing a metaball-like effect where the forms appear to merge organically.
The parameter k controls the smoothness of the blend. A higher k value results in a sharper transition (closer to min()), while a lower k produces smoother, more gradual merging.
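One common formulation is the exponential smooth minimum, which behaves exactly as described above: a larger k gets closer to a hard min(), while a smaller k blends more gradually (a sketch, not necessarily the demo’s exact variant):

float smoothMin(float d1, float d2, float k) {
  float h = exp(-k * d1) + exp(-k * d2);
  return -log(h) / k;
}

Inside map, the min() calls are then replaced with smoothMin(d, sdSphere(...), k).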
For more details, please refer to the following two articles:
So far, we’ve covered how to calculate normals and how to smoothly blend objects.
Next, let’s tune the surface appearance to make things feel more realistic.
In this demo, we’re aiming to create droplet-like metaballs. So how can we achieve that kind of look? The key idea here is to use noise to distort the surface.
To create the droplet-like texture, we’re using value noise. If you’re unfamiliar with these noise techniques, the following articles provide helpful explanations:
3D value noise is generated by interpolating random values placed at the eight vertices of a cube. The process involves three stages of linear interpolation:
Bottom face interpolation: First, we interpolate between the four corner values on the bottom face of the cube
Top face interpolation: Similarly, we interpolate between the four corner values on the top face
Final z-axis interpolation: Finally, we interpolate between the results from the bottom and top faces along the z-axis
This triple interpolation process is called trilinear interpolation.
The following code demonstrates the trilinear interpolation process for 3D value noise:
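Here is a sketch of that interpolation (the hash function used to generate the per-vertex random values is a common one-liner and may differ from the demo’s):

float hash(vec3 p) {
  p = fract(p * 0.3183099 + 0.1);
  p *= 17.0;
  return fract(p.x * p.y * p.z * (p.x + p.y + p.z));
}

float valueNoise(vec3 p) {
  vec3 i = floor(p);
  vec3 f = fract(p);
  vec3 u = f * f * (3.0 - 2.0 * f); // smooth fade curve

  // 1) Interpolate the four corners of the bottom face
  float b00 = mix(hash(i + vec3(0.0, 0.0, 0.0)), hash(i + vec3(1.0, 0.0, 0.0)), u.x);
  float b10 = mix(hash(i + vec3(0.0, 1.0, 0.0)), hash(i + vec3(1.0, 1.0, 0.0)), u.x);
  float bottom = mix(b00, b10, u.y);

  // 2) Interpolate the four corners of the top face
  float t00 = mix(hash(i + vec3(0.0, 0.0, 1.0)), hash(i + vec3(1.0, 0.0, 1.0)), u.x);
  float t10 = mix(hash(i + vec3(0.0, 1.0, 1.0)), hash(i + vec3(1.0, 1.0, 1.0)), u.x);
  float top = mix(t00, t10, u.y);

  // 3) Interpolate between the two faces along the z-axis
  return mix(bottom, top, u.z);
}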
By sampling this noise using the reflection vector as coordinates, we can create a realistic water droplet-like texture. Note that we are using the surface normal obtained earlier to compute this reflection vector. To add time-based variation, we generate noise at positions offset by uTime:
It’s starting to look quite like a water droplet! However, it still appears a bit murky. To improve this, let’s add the following post-processing step:
You’ve probably seen this kind of scroll effect before, even if it doesn’t have a name yet. (Honestly, we need a dictionary for all these weird and wonderful web interactions. If you’ve got a talent for naming things…do it. Seriously. The internet is waiting.)
Imagine a grid of images. As you scroll, the columns don’t move uniformly but instead, the center columns react faster, while those on the edges trail behind slightly. It feels soft, elastic, and physical, almost like scrolling with weight, or elasticity.
You can see this amazing effect on sites like yzavoku.com (and I’m sure there’s a lot more!).
So what better excuse to use the now-free GSAP ScrollSmoother? We can recreate it easily, with great performance and full control. Let’s have a look!
What We’re Building
We’ll take a CSS grid-based layout and add some magic:
Inertia-based scrolling using ScrollSmoother
Per-column lag, calculated dynamically based on distance from the center
A layout that adapts to column changes
HTML Structure
Let’s set up the markup with figures in a grid:
<div class="grid">
<figure class="grid__item">
<div class="grid__item-img" style="background-image: url(assets/1.webp)"></div>
<figcaption class="grid__item-caption">Zorith - L91</figcaption>
</figure>
<!-- Repeat for more items -->
</div>
Inside the grid, we have many .grid__item figures, each with a background image and a label. These will be dynamically grouped into columns by JavaScript, based on how many columns CSS defines.
In our JavaScript, we’ll then change the DOM structure by inserting .grid__column wrappers around groups of items, one per column, so we can control their motion individually. Why are we doing this? It’s a bit lighter to move whole columns rather than each individual item.
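Here is a sketch of how the column count can be read from CSS and the items grouped (function names are illustrative):

// Read how many columns the CSS grid currently defines
const getColumnCount = () => {
  const styles = getComputedStyle(grid);
  return styles.gridTemplateColumns.split(' ').length;
};

// Distribute the grid items into one array per visual column
const groupItemsByColumn = (numColumns) => {
  const items = Array.from(grid.querySelectorAll('.grid__item'));
  const columns = Array.from({ length: numColumns }, () => []);
  items.forEach((item, i) => columns[i % numColumns].push(item));
  return columns;
};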
This method groups your grid items into arrays, one for each visual column, using the actual number of columns calculated from the CSS.
3. Create Column Wrappers and Assign Lag
const buildGrid = (columns, numColumns) => {
const fragment = document.createDocumentFragment(); // Efficient DOM batch insertion
const mid = (numColumns - 1) / 2; // Center index (can be fractional)
const columnContainers = [];
// Loop over each column
columns.forEach((column, i) => {
const distance = Math.abs(i - mid); // Distance from center column
const lag = baseLag + distance * lagScale; // Lag based on distance from center
const columnContainer = document.createElement('div'); // New column wrapper
columnContainer.className = 'grid__column';
// Append items to column container
column.forEach((item) => columnContainer.appendChild(item));
fragment.appendChild(columnContainer); // Add to fragment
columnContainers.push({ element: columnContainer, lag }); // Save for lag effect setup
});
grid.appendChild(fragment); // Add all columns to DOM at once
return columnContainers;
};
The lag value increases the further a column is from the center, creating that elastic “catch up” feel during scroll.
4. Apply Lag Effects to Each Column
const applyLagEffects = (columnContainers) => {
columnContainers.forEach(({ element, lag }) => {
smoother.effects(element, { speed: 1, lag }); // Apply individual lag per column
});
};
ScrollSmoother handles all the heavy lifting, we just pass the desired lag.
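For completeness, the smoother instance used above can be created during initialization, roughly like this (a sketch; the wrapper/content selectors and the smooth value are assumptions):

// Assumes GSAP, ScrollTrigger and ScrollSmoother are loaded
gsap.registerPlugin(ScrollTrigger, ScrollSmoother);

const smoother = ScrollSmoother.create({
  wrapper: '#smooth-wrapper',
  content: '#smooth-content',
  smooth: 1,             // seconds it takes to "catch up" to the native scroll
  normalizeScroll: true, // keeps scrolling consistent across devices
});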
5. Handle Layout on Resize
// Rebuild the layout only if the number of columns has changed on window resize
window.addEventListener('resize', () => {
const newColumnCount = getColumnCount();
if (newColumnCount !== currentColumnCount) {
init();
}
});
This ensures our layout stays correct across breakpoints and column count changes (handled via CSS).
Now, there are lots of ways to build upon this and add more jazz!
For example, you could:
add scroll-triggered opacity or scale animations
use scroll velocity to control effects (see demo 2)
adapt this pattern for horizontal scroll layouts
Exploring Variations
Once you have the core concept in place, there are four demo variations you can explore. Each one shows how different lag values and scroll-based interactions can influence the experience.
You can adjust which columns respond faster, or play with subtle scaling and transforms based on scroll velocity. Even small changes can shift the rhythm and tone of the layout in interesting ways. And don’t forget: changing the look of the grid itself, like the image ratio or gaps, will give this a whole different feel!
Now it’s your turn. Tweak it, break it, rebuild it, and make something cool.
I really hope you enjoy this effect! Thanks for stopping by 🙂