
Top 6 Performance Tips when dealing with strings in C# 12 and .NET 8 | Code4IT


    Small changes sometimes make a huge difference. Learn these 6 tips to improve the performance of your application just by handling strings correctly.


    Sometimes, just a minor change makes a huge difference. Maybe you won’t notice it when performing the same operation a few times. Still, the improvement is significant when repeating the operation thousands of times.

    In this article, we will learn six simple tricks to improve the performance of your application when dealing with strings.

    Note: this article is part of C# Advent Calendar 2023, organized by Matthew D. Groves: it’s maybe the only Christmas tradition I like (yes, I’m kind of a Grinch 😂).

    Benchmark structure, with dependencies

    Before jumping to the benchmarks, I want to spend a few words on the tools I used for this article.

    The project is a .NET 8 class library running on a laptop with an i5 processor.

    Running benchmarks with BenchmarkDotNet

    I’m using BenchmarkDotNet to create benchmarks for my code. BenchmarkDotNet is a library that runs your methods several times, captures some metrics, and generates a report of the executions. If you follow my blog, you might know I’ve used it several times – for example, in my old article “Enum.HasFlag performance with BenchmarkDotNet”.

    All the benchmarks I created follow the same structure:

    [MemoryDiagnoser]
    public class BenchmarkName()
    {
        [Params(1, 5, 10)] // clearly, I won't use these values
        public int Size;
    
        public string[] AllStrings { get; set; }
    
        [IterationSetup]
        public void Setup()
        {
            AllStrings = StringArrayGenerator.Generate(Size, "hello!", "HELLO!");
        }
    
        [Benchmark(Baseline=true)]
        public void FirstMethod()
        {
            //omitted
        }
    
        [Benchmark]
        public void SecondMethod()
        {
            //omitted
        }
    }
    

    In short:

    • the class is marked with the [MemoryDiagnoser] attribute: the benchmark will retrieve info for both time and memory usage;
    • there is a property named Size with the attribute [Params]: this attribute lists the possible values for the Size property;
    • there is a method marked as [IterationSetup]: this method runs before every single execution, takes the value from the Size property, and initializes the AllStrings array;
    • the methods that are part of the benchmark are marked with the [Benchmark] attribute.

    Generating strings with Bogus

    I relied on Bogus to create dummy values. This NuGet library allows you to generate realistic values for your objects with a great level of customization.

    The string array generation strategy is shared across all the benchmarks, so I moved it to a static method:

     public static class StringArrayGenerator
     {
         public static string[] Generate(int size, params string[] additionalStrings)
         {
             string[] array = new string[size];
             Faker faker = new Faker();
    
             List<string> fixedValues = [
                 string.Empty,
                 "   ",
                 "\n  \t",
                 null
             ];
    
             if (additionalStrings != null)
                 fixedValues.AddRange(additionalStrings);
    
             for (int i = 0; i < array.Length; i++)
             {
                 if (Random.Shared.Next() % 4 == 0)
                 {
                     array[i] = Random.Shared.GetItems<string>(fixedValues.ToArray(), 1).First();
                 }
                 else
                 {
                     array[i] = faker.Lorem.Word();
                 }
             }
    
             return array;
         }
     }
    

    Here I have a default set of predefined values ([string.Empty, "   ", "\n  \t", null]), which can be expanded with the values coming from the additionalStrings array. These values are then placed in random positions of the array.

    In most cases, though, the value of the string is defined by Bogus.

    Generating plots with chartbenchmark.net

    To generate the plots you will see in this article, I relied on chartbenchmark.net, a fantastic tool that transforms the output generated by BenchmarkDotNet on the console into a dynamic, customizable plot. This tool, created by Carlos Villegas, is available on GitHub, and it surely deserves a star!

    Please note that all the plots in this article have a Log10 scale: this scale allows me to show you the performance values of all the executions in the same plot. If I used the Linear scale, you would be able to see only the biggest values.

    We are ready. It’s time to run some benchmarks!

    Tip #1: StringBuilder is (almost always) better than String Concatenation

    Let’s start with a simple trick: if you need to concatenate strings, using a StringBuilder is generally more efficient than concatenating them with the + operator.

    [MemoryDiagnoser]
    public class StringBuilderVsConcatenation()
    {
        [Params(4, 100, 10_000, 100_000)]
        public int Size;
    
        public string[] AllStrings { get; set; }
    
        [IterationSetup]
        public void Setup()
        {
            AllStrings = StringArrayGenerator.Generate(Size, "hello!", "HELLO!");
        }
    
        [Benchmark]
        public void WithStringBuilder()
        {
            StringBuilder sb = new StringBuilder();
    
            foreach (string s in AllStrings)
            {
                sb.Append(s);
            }
    
            var finalString = sb.ToString();
        }
    
        [Benchmark]
        public void WithConcatenation()
        {
            string finalString = "";
            foreach (string s in AllStrings)
            {
                finalString += s;
            }
        }
    }
    

    Whenever you concatenate strings with the + sign, you create a new instance of a string. This operation takes some time and allocates memory for every operation.

    On the contrary, a StringBuilder appends the strings to an internal buffer and generates the final string only once, when you call ToString().

    Here’s the result table:

    Method Size Mean Error StdDev Median Ratio RatioSD Allocated Alloc Ratio
    WithStringBuilder 4 4.891 us 0.5568 us 1.607 us 4.750 us 1.00 0.00 1016 B 1.00
    WithConcatenation 4 3.130 us 0.4517 us 1.318 us 2.800 us 0.72 0.39 776 B 0.76
    WithStringBuilder 100 7.649 us 0.6596 us 1.924 us 7.650 us 1.00 0.00 4376 B 1.00
    WithConcatenation 100 13.804 us 1.1970 us 3.473 us 13.800 us 1.96 0.82 51192 B 11.70
    WithStringBuilder 10000 113.091 us 4.2106 us 12.081 us 111.000 us 1.00 0.00 217200 B 1.00
    WithConcatenation 10000 74,512.259 us 2,111.4213 us 6,058.064 us 72,593.050 us 666.43 91.44 466990336 B 2,150.05
    WithStringBuilder 100000 1,037.523 us 37.1009 us 108.225 us 1,012.350 us 1.00 0.00 2052376 B 1.00
    WithConcatenation 100000 7,469,344.914 us 69,720.9843 us 61,805.837 us 7,465,779.900 us 7,335.08 787.44 46925872520 B 22,864.17

    Let’s see it as a plot.

    Beware of the scale in the diagram: it’s a Log10 scale, so you’d better have a close look at the values displayed on the Y-axis.

    StringBuilder vs string concatenation in C#: performance benchmark

    As you can see, there is a considerable performance improvement.

    There are some remarkable points:

    1. When there are just a few strings to concatenate, the + operator is more performant, both in execution time and allocated memory;
    2. When you need to concatenate 100.000 strings, the concatenation is ~7.000 times slower than the StringBuilder.

    In conclusion, use the StringBuilder to concatenate more than 5 or 6 strings. Use the string concatenation for smaller operations.

    Edit 2024-01-08: it turns out that string.Concat has an overload that accepts an array of strings. string.Concat(string[]) is actually faster than using the StringBuilder. Read more in this article by Robin Choffardet.
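    A minimal sketch of that overload (the performance claim comes from Robin Choffardet’s article linked above):

```csharp
string[] parts = { "Hello", ", ", "world", "!" };

// string.Concat(string[]) computes the final length upfront
// and allocates the result string only once.
string result = string.Concat(parts); // "Hello, world!"
```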

    Tip #2: EndsWith(string) vs EndsWith(char): pick the right overload

    One simple improvement can be made if you use StartsWith or EndsWith, passing a single character.

    There are two similar overloads: one that accepts a string, and one that accepts a char.

    [MemoryDiagnoser]
    public class EndsWithStringVsChar()
    {
        [Params(100, 1000, 10_000, 100_000, 1_000_000)]
        public int Size;
    
        public string[] AllStrings { get; set; }
    
        [IterationSetup]
        public void Setup()
        {
            AllStrings = StringArrayGenerator.Generate(Size);
        }
    
        [Benchmark(Baseline = true)]
        public void EndsWithChar()
        {
        foreach (string s in AllStrings)
        {
            _ = s?.EndsWith('e');
        }
    }
    
        [Benchmark]
        public void EndsWithString()
        {
        foreach (string s in AllStrings)
        {
            _ = s?.EndsWith("e");
        }
    }
    }
    

    We have the following results:

    Method Size Mean Error StdDev Median Ratio
    EndsWithChar 100 2.189 us 0.2334 us 0.6771 us 2.150 us 1.00
    EndsWithString 100 5.228 us 0.4495 us 1.2970 us 5.050 us 2.56
    EndsWithChar 1000 12.796 us 1.2006 us 3.4831 us 12.200 us 1.00
    EndsWithString 1000 30.434 us 1.8783 us 5.4492 us 29.250 us 2.52
    EndsWithChar 10000 25.462 us 2.0451 us 5.9658 us 23.950 us 1.00
    EndsWithString 10000 251.483 us 18.8300 us 55.2252 us 262.300 us 10.48
    EndsWithChar 100000 209.776 us 18.7782 us 54.1793 us 199.900 us 1.00
    EndsWithString 100000 826.090 us 44.4127 us 118.5465 us 781.650 us 4.14
    EndsWithChar 1000000 2,199.463 us 74.4067 us 217.0480 us 2,190.600 us 1.00
    EndsWithString 1000000 7,506.450 us 190.7587 us 562.4562 us 7,356.250 us 3.45

    Again, let’s generate the plot using the Log10 scale:

    EndsWith(char) vs EndsWith(string) in C# performance benchmark

    They appear to be almost identical, but look closely: based on this benchmark, when we have 10.000 items, using EndsWith(string) is 10x slower than EndsWith(char).

    Here, too, the duration ratio on the 1.000.000-items array is ~3.5. At first, I thought there was an error in the benchmark, but the ratio did not change when I reran it.

    It looks like you have the best improvement ratio when the array has ~10.000 items.

    Tip #3: IsNullOrEmpty vs IsNullOrWhitespace vs IsNullOrEmpty + Trim

    As you might know, string.IsNullOrWhiteSpace covers more cases than string.IsNullOrEmpty.

    (If you didn’t know, have a look at this quick explanation of the cases covered by these methods).
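    As a quick refresher, here’s how the two methods treat some sample inputs:

```csharp
// string.IsNullOrEmpty only checks for null or "".
Console.WriteLine(string.IsNullOrEmpty(""));            // True
Console.WriteLine(string.IsNullOrEmpty("   "));         // False
Console.WriteLine(string.IsNullOrEmpty(null));          // True

// string.IsNullOrWhiteSpace also covers whitespace-only strings.
Console.WriteLine(string.IsNullOrWhiteSpace("   "));    // True
Console.WriteLine(string.IsNullOrWhiteSpace("\n  \t")); // True
Console.WriteLine(string.IsNullOrWhiteSpace("hello"));  // False
```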

    Does it affect performance?

    To demonstrate it, I have created three benchmarks: one for string.IsNullOrEmpty, one for string.IsNullOrWhiteSpace, and another one that lies in between: it first calls Trim() on the string and then calls string.IsNullOrEmpty.

    [MemoryDiagnoser]
    public class StringEmptyBenchmark
    {
        [Params(100, 1000, 10_000, 100_000, 1_000_000)]
        public int Size;
    
        public string[] AllStrings { get; set; }
    
        [IterationSetup]
        public void Setup()
        {
            AllStrings = StringArrayGenerator.Generate(Size);
        }
    
        [Benchmark(Baseline = true)]
        public void StringIsNullOrEmpty()
        {
            foreach (string s in AllStrings)
            {
                _ = string.IsNullOrEmpty(s);
            }
        }
    
        [Benchmark]
        public void StringIsNullOrEmptyWithTrim()
        {
            foreach (string s in AllStrings)
            {
                _ = string.IsNullOrEmpty(s?.Trim());
            }
        }
    
        [Benchmark]
        public void StringIsNullOrWhitespace()
        {
            foreach (string s in AllStrings)
            {
                _ = string.IsNullOrWhiteSpace(s);
            }
        }
    }
    

    We have the following values:

    Method Size Mean Error StdDev Ratio
    StringIsNullOrEmpty 100 1.723 us 0.2302 us 0.6715 us 1.00
    StringIsNullOrEmptyWithTrim 100 2.394 us 0.3525 us 1.0282 us 1.67
    StringIsNullOrWhitespace 100 2.017 us 0.2289 us 0.6604 us 1.45
    StringIsNullOrEmpty 1000 10.885 us 1.3980 us 4.0781 us 1.00
    StringIsNullOrEmptyWithTrim 1000 20.450 us 1.9966 us 5.8240 us 2.13
    StringIsNullOrWhitespace 1000 13.160 us 1.0851 us 3.1482 us 1.34
    StringIsNullOrEmpty 10000 18.717 us 1.1252 us 3.2464 us 1.00
    StringIsNullOrEmptyWithTrim 10000 52.786 us 1.2208 us 3.5222 us 2.90
    StringIsNullOrWhitespace 10000 46.602 us 1.2363 us 3.4668 us 2.54
    StringIsNullOrEmpty 100000 168.232 us 12.6948 us 36.0129 us 1.00
    StringIsNullOrEmptyWithTrim 100000 439.744 us 9.3648 us 25.3182 us 2.71
    StringIsNullOrWhitespace 100000 394.310 us 7.8976 us 20.5270 us 2.42
    StringIsNullOrEmpty 1000000 2,074.234 us 64.3964 us 186.8257 us 1.00
    StringIsNullOrEmptyWithTrim 1000000 4,691.103 us 112.2382 us 327.4040 us 2.28
    StringIsNullOrWhitespace 1000000 4,198.809 us 83.6526 us 161.1702 us 2.04

    As you can see from the Log10 plot, the results are pretty similar:

    string.IsNullOrEmpty vs string.IsNullOrWhiteSpace vs Trim in C#: performance benchmark

    On average, StringIsNullOrWhitespace is ~2 times slower than StringIsNullOrEmpty.

    So, what should we do? Here’s my two cents:

    1. For all the data coming from the outside (passed as input to your system, received from an API call, read from the database), use string.IsNullOrWhiteSpace: this way, you can ensure that you are not receiving unexpected data;
    2. If you read data from an external API, customize your JSON deserializer to convert whitespace strings to empty values;
    3. Needless to say, choose the proper method depending on the use case. If a string like “\n \n \t” is a valid value for you, use string.IsNullOrEmpty.
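    Point 2 above could be sketched with a custom System.Text.Json converter. This is a minimal example under my own assumptions: the class name WhitespaceToEmptyConverter is mine, and you may want different normalization rules.

```csharp
using System;
using System.Text.Json;
using System.Text.Json.Serialization;

// Normalizes whitespace-only strings to string.Empty during deserialization.
public class WhitespaceToEmptyConverter : JsonConverter<string>
{
    public override string Read(ref Utf8JsonReader reader, Type typeToConvert, JsonSerializerOptions options)
    {
        string? value = reader.GetString();
        return string.IsNullOrWhiteSpace(value) ? string.Empty : value!;
    }

    public override void Write(Utf8JsonWriter writer, string value, JsonSerializerOptions options)
        => writer.WriteStringValue(value);
}
```

    You can then register it with JsonSerializerOptions.Converters.Add(new WhitespaceToEmptyConverter()) before deserializing.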

    Tip #4: ToUpper vs ToUpperInvariant vs ToLower vs ToLowerInvariant: they look similar, but they are not

    Even though they look similar, there is a difference in terms of performance between these four methods.

    [MemoryDiagnoser]
    public class ToUpperVsToLower()
    {
        [Params(100, 1000, 10_000, 100_000, 1_000_000)]
        public int Size;
    
        public string[] AllStrings { get; set; }
    
        [IterationSetup]
        public void Setup()
        {
            AllStrings = StringArrayGenerator.Generate(Size);
        }
    
        [Benchmark]
        public void WithToUpper()
        {
            foreach (string s in AllStrings)
            {
                _ = s?.ToUpper();
            }
        }
    
        [Benchmark]
        public void WithToUpperInvariant()
        {
            foreach (string s in AllStrings)
            {
                _ = s?.ToUpperInvariant();
            }
        }
    
        [Benchmark]
        public void WithToLower()
        {
            foreach (string s in AllStrings)
            {
                _ = s?.ToLower();
            }
        }
    
        [Benchmark]
        public void WithToLowerInvariant()
        {
            foreach (string s in AllStrings)
            {
                _ = s?.ToLowerInvariant();
            }
        }
    }
    

    What will this benchmark generate?

    Method Size Mean Error StdDev Median P95 Ratio
    WithToUpper 100 9.153 us 0.9720 us 2.789 us 8.200 us 14.980 us 1.57
    WithToUpperInvariant 100 6.572 us 0.5650 us 1.639 us 6.200 us 9.400 us 1.14
    WithToLower 100 6.881 us 0.5076 us 1.489 us 7.100 us 9.220 us 1.19
    WithToLowerInvariant 100 6.143 us 0.5212 us 1.529 us 6.100 us 8.400 us 1.00
    WithToUpper 1000 69.776 us 9.5416 us 27.833 us 68.650 us 108.815 us 2.60
    WithToUpperInvariant 1000 51.284 us 7.7945 us 22.860 us 38.700 us 89.290 us 1.85
    WithToLower 1000 49.520 us 5.6085 us 16.449 us 48.100 us 79.110 us 1.85
    WithToLowerInvariant 1000 27.000 us 0.7370 us 2.103 us 26.850 us 30.375 us 1.00
    WithToUpper 10000 241.221 us 4.0480 us 3.588 us 240.900 us 246.560 us 1.68
    WithToUpperInvariant 10000 339.370 us 42.4036 us 125.028 us 381.950 us 594.760 us 1.48
    WithToLower 10000 246.861 us 15.7924 us 45.565 us 257.250 us 302.875 us 1.12
    WithToLowerInvariant 10000 143.529 us 2.1542 us 1.910 us 143.500 us 146.105 us 1.00
    WithToUpper 100000 2,165.838 us 84.7013 us 223.137 us 2,118.900 us 2,875.800 us 1.66
    WithToUpperInvariant 100000 1,885.329 us 36.8408 us 63.548 us 1,894.500 us 1,967.020 us 1.41
    WithToLower 100000 1,478.696 us 23.7192 us 50.547 us 1,472.100 us 1,571.330 us 1.10
    WithToLowerInvariant 100000 1,335.950 us 18.2716 us 35.203 us 1,330.100 us 1,404.175 us 1.00
    WithToUpper 1000000 20,936.247 us 414.7538 us 1,163.014 us 20,905.150 us 22,928.350 us 1.64
    WithToUpperInvariant 1000000 19,056.983 us 368.7473 us 287.894 us 19,085.400 us 19,422.880 us 1.41
    WithToLower 1000000 14,266.714 us 204.2906 us 181.098 us 14,236.500 us 14,593.035 us 1.06
    WithToLowerInvariant 1000000 13,464.127 us 266.7547 us 327.599 us 13,511.450 us 13,926.495 us 1.00

    Let’s see it as the usual Log10 plot:

    ToUpper vs ToLower comparison in C#: performance benchmark

    We can notice a few points:

    1. The ToUpper family is generally slower than the ToLower family;
    2. The Invariant family is faster than the non-Invariant one; we will see more below;

    So, if you have to normalize strings using the same casing, ToLowerInvariant is the best choice.
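    The Invariant variants are not only about speed: they also avoid culture-specific surprises. A classic example (not from the benchmark above) is the Turkish “i”:

```csharp
using System.Globalization;

var turkish = new CultureInfo("tr-TR");

// Under tr-TR, 'i' uppercases to 'İ' (dotted capital I), not 'I'.
Console.WriteLine("i".ToUpper(turkish));   // İ
Console.WriteLine("i".ToUpperInvariant()); // I
```

    So, besides performance, the Invariant methods give you predictable results regardless of the current thread’s culture.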

    Tip #5: OrdinalIgnoreCase vs InvariantCultureIgnoreCase: logically (almost) equivalent, but with different performance

    Comparing strings is trivial: the string.Equals and string.Compare methods are all you need.

    There are several modes to compare strings: you can specify the comparison rules by setting the comparisonType parameter, which accepts a StringComparison value.

    [MemoryDiagnoser]
    public class StringCompareOrdinalVsInvariant()
    {
        [Params(100, 1000, 10_000, 100_000, 1_000_000)]
        public int Size;
    
        public string[] AllStrings { get; set; }
    
        [IterationSetup]
        public void Setup()
        {
            AllStrings = StringArrayGenerator.Generate(Size, "hello!", "HELLO!");
        }
    
        [Benchmark(Baseline = true)]
        public void WithOrdinalIgnoreCase()
        {
            foreach (string s in AllStrings)
            {
                _ = string.Equals(s, "hello!", StringComparison.OrdinalIgnoreCase);
            }
        }
    
        [Benchmark]
        public void WithInvariantCultureIgnoreCase()
        {
            foreach (string s in AllStrings)
            {
                _ = string.Equals(s, "hello!", StringComparison.InvariantCultureIgnoreCase);
            }
        }
    }
    

    Let’s see the results:

    Method Size Mean Error StdDev Ratio
    WithOrdinalIgnoreCase 100 2.380 us 0.2856 us 0.8420 us 1.00
    WithInvariantCultureIgnoreCase 100 7.974 us 0.7817 us 2.3049 us 3.68
    WithOrdinalIgnoreCase 1000 11.316 us 0.9170 us 2.6603 us 1.00
    WithInvariantCultureIgnoreCase 1000 35.265 us 1.5455 us 4.4591 us 3.26
    WithOrdinalIgnoreCase 10000 20.262 us 1.1801 us 3.3668 us 1.00
    WithInvariantCultureIgnoreCase 10000 225.892 us 4.4945 us 12.5289 us 11.41
    WithOrdinalIgnoreCase 100000 148.270 us 11.3234 us 32.8514 us 1.00
    WithInvariantCultureIgnoreCase 100000 1,811.144 us 35.9101 us 64.7533 us 12.62
    WithOrdinalIgnoreCase 1000000 2,050.894 us 59.5966 us 173.8460 us 1.00
    WithInvariantCultureIgnoreCase 1000000 18,138.063 us 360.1967 us 986.0327 us 8.87

    As you can see, there’s a HUGE difference between Ordinal and Invariant.

    When dealing with 100.000 items, StringComparison.InvariantCultureIgnoreCase is 12 times slower than StringComparison.OrdinalIgnoreCase!

    Ordinal vs InvariantCulture comparison in C#: performance benchmark

    Why? Also, why should we use one instead of the other?

    Have a look at this code snippet:

    var s1 = "Aa";
    var s2 = "A" + new string('\u0000', 3) + "a";
    
    string.Equals(s1, s2, StringComparison.InvariantCultureIgnoreCase); //True
    string.Equals(s1, s2, StringComparison.OrdinalIgnoreCase); //False
    

    As you can see, s1 and s2 represent equivalent, but not equal, strings. We can then deduce that OrdinalIgnoreCase checks for the exact values of the characters, while InvariantCultureIgnoreCase checks the string’s “meaning”.

    So, in most cases, you might want to use OrdinalIgnoreCase (as always, it depends on your use case!).

    Tip #6: Newtonsoft vs System.Text.Json: it’s a matter of memory allocation, not time

    For the last benchmark, I created the exact same model used as an example in the official documentation.

    This benchmark aims to see which JSON serialization library is faster: Newtonsoft or System.Text.Json?

    [MemoryDiagnoser]
    public class JsonSerializerComparison
    {
        [Params(100, 10_000, 1_000_000)]
        public int Size;
        List<User?> Users { get; set; }
    
        [IterationSetup]
        public void Setup()
        {
            Users = UsersCreator.GenerateUsers(Size);
        }
    
        [Benchmark(Baseline = true)]
        public void WithJson()
        {
            foreach (User? user in Users)
            {
                var asString = System.Text.Json.JsonSerializer.Serialize(user);
    
                _ = System.Text.Json.JsonSerializer.Deserialize<User?>(asString);
            }
        }
    
        [Benchmark]
        public void WithNewtonsoft()
        {
            foreach (User? user in Users)
            {
                string asString = Newtonsoft.Json.JsonConvert.SerializeObject(user);
                _ = Newtonsoft.Json.JsonConvert.DeserializeObject<User?>(asString);
            }
        }
    }
    

    As you might know, the .NET team has added lots of performance improvements to the JSON Serialization functionalities, and you can really see the difference!

    Method Size Mean Error StdDev Median Ratio RatioSD Gen0 Gen1 Allocated Alloc Ratio
    WithJson 100 2.063 ms 0.1409 ms 0.3927 ms 1.924 ms 1.00 0.00 292.87 KB 1.00
    WithNewtonsoft 100 4.452 ms 0.1185 ms 0.3243 ms 4.391 ms 2.21 0.39 882.71 KB 3.01
    WithJson 10000 44.237 ms 0.8787 ms 1.3936 ms 43.873 ms 1.00 0.00 4000.0000 1000.0000 29374.98 KB 1.00
    WithNewtonsoft 10000 78.661 ms 1.3542 ms 2.6090 ms 78.865 ms 1.77 0.08 14000.0000 1000.0000 88440.99 KB 3.01
    WithJson 1000000 4,233.583 ms 82.5804 ms 113.0369 ms 4,202.359 ms 1.00 0.00 484000.0000 1000.0000 2965741.56 KB 1.00
    WithNewtonsoft 1000000 5,260.680 ms 101.6941 ms 108.8116 ms 5,219.955 ms 1.24 0.04 1448000.0000 1000.0000 8872031.8 KB 2.99

    As you can see, Newtonsoft is 2x slower than System.Text.Json, and it allocates about 3x the memory.

    So, well, if you don’t use library-specific functionalities, I suggest you replace Newtonsoft with System.Text.Json.

    Wrapping up

    In this article, we learned that even tiny changes can make a difference in the long run.

    Let’s recap some:

    1. Using StringBuilder is generally WAY faster than using string concatenation unless you need to concatenate 2 to 4 strings;
    2. Sometimes, the difference is not about execution time but memory usage;
    3. EndsWith and StartsWith perform better if you look for a char instead of a string. If you think of it, it totally makes sense!
    4. More often than not, string.IsNullOrWhiteSpace performs more thorough checks than string.IsNullOrEmpty; however, it is also slower, so you should pick the correct method depending on the usage;
    5. ToUpper and ToLower look similar; however, ToLower is noticeably faster than ToUpper;
    6. Ordinal and Invariant comparisons return the same value for almost every input, but Ordinal is faster than Invariant;
    7. Newtonsoft performs similarly to System.Text.Json, but it allocates way more memory.

    This article first appeared on Code4IT 🐧

    My suggestion is always the same: take your time to explore the possibilities! Toy with your code, try to break it, benchmark it. You’ll find interesting takes!

    I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛

    Happy coding!


How to Create Interactive, Droplet-like Metaballs with Three.js and GLSL


    Fragment shaders allow us to create smooth, organic visuals that are difficult to achieve with standard polygon-based rendering in WebGL. One powerful example is the metaball effect, where multiple objects blend and deform seamlessly. This can be implemented using a technique called ray marching, directly within a fragment shader.

    In this tutorial, we’ll walk you through how to create droplet-like, bubble spheres using Three.js and GLSL—an effect that responds interactively to your mouse movements. But first, take a look at the demo video below to see the final result in action.

    Overview

    Let’s take a look at the overall structure of the demo and review the steps we’ll follow to build it.

    1. Setting Up the Fullscreen Plane

    We create a fullscreen plane that covers the entire viewport.

    2. Rendering Spheres with Ray Marching

    We’ll render spheres using ray marching in the fragment shader.

    3. From Spheres to Metaballs

    We blend multiple spheres smoothly to create a metaball effect.

    4. Adding Noise for a Droplet-like Appearance

    By adding noise to the surface, we create a realistic droplet-like texture.

    5. Simulating Stretchy Droplets with Mouse Movement

    We arrange spheres along the mouse trail to create a stretchy, elastic motion.

    Let’s get started!

    1. Setup

    We render a single fullscreen plane that covers the entire viewport.

    // Output.ts
    
    const planeGeometry = new THREE.PlaneGeometry(2.0, 2.0);
    const planeMaterial = new THREE.RawShaderMaterial({
        vertexShader: base_vert,
        fragmentShader: output_frag,
        uniforms: this.uniforms,
    });
    const plane = new THREE.Mesh(planeGeometry, planeMaterial);
    this.scene.add(plane);

    We define a uniform variable named uResolution to pass the canvas size to the shader, where Common.width and Common.height represent the width and height of the canvas in pixels. This uniform will be used to normalize coordinates based on the screen resolution.

    // Output.ts
    
    this.uniforms = {
        uResolution: {
            value: new THREE.Vector2(Common.width, Common.height),
        },
    };

    When using RawShaderMaterial, you need to provide your own shaders. Therefore, we prepare both a vertex shader and a fragment shader.

    // base.vert
    
    attribute vec3 position;
    varying vec2 vTexCoord;
    
    void main() {
        vTexCoord = position.xy * 0.5 + 0.5;
        gl_Position = vec4(position, 1.0);
    }

    The vertex shader receives the position attribute.

    Since the xy components of position originally range from -1 to 1, we convert them to a range from 0 to 1 and output them as a texture coordinate called vTexCoord. This is passed to the fragment shader and used to calculate colors or effects based on the position on the screen.

    // output.frag
    
    precision mediump float;
    
    uniform vec2 uResolution;
    varying vec2 vTexCoord;
    
    void main() {
        gl_FragColor = vec4(vTexCoord, 1.0, 1.0);
    }

    The fragment shader receives the interpolated texture coordinate vTexCoord and the uniform variable uResolution representing the canvas size. Here, we temporarily use vTexCoord to output color for testing.

    Now we’re all set to start drawing in the fragment shader!
    Next, let’s move on to actually rendering the spheres.

    2. Ray Marching

    2.1. What is Ray Marching?

    As mentioned at the beginning, we will use a method called ray marching to render spheres. Ray marching proceeds in the following steps:

    1. Define the scene
    2. Set the camera (viewing) direction
    3. Cast rays
    4. Evaluate the distance from the current ray position to the nearest object in the scene.
    5. Move the ray forward by that distance
    6. Check for a hit

    For example, let’s consider a scene with three spheres. These spheres are expressed using SDFs (Signed Distance Functions), which will be explained in detail later.

    First, we determine the camera direction. Once the direction is set, we cast a ray in that direction.

    Next, we evaluate the distance to all objects from the current ray position, and take the minimum of these distances.

    After obtaining this distance, we move the ray forward by that amount.

    We repeat this process until either the ray gets close enough to an object—closer than a small threshold—or the maximum number of steps is reached.
    If the distance is below the threshold, we consider it a “hit” and shade the corresponding pixel.

    For example, in the figure above, a hit is detected on the 8th ray marching step.

    If the maximum number of steps were set to 7, the 7th step would not have hit anything yet. But since the limit is reached, the loop ends and no hit is detected.

    Therefore, nothing would be rendered at that position. If parts of an object appear to be missing in the final image, it may be due to an insufficient number of steps. However, be aware that increasing the step count will also increase the computational load.

    To better understand this process, try running this demo to see how it works in practice.

    2.2. Signed Distance Function

    In the previous section, we briefly mentioned the SDF (Signed Distance Function).
    Let’s take a moment to understand what it is.

    An SDF is a function that returns the distance from a point to a particular shape. The key characteristic is that it returns a positive or negative value depending on whether the point is outside or inside the shape.

    For example, here is the distance function for a sphere:

    float sdSphere(vec3 p, float s)
    {
        return length(p) - s;
    }

    Here, p is a vector representing the position relative to the origin, and s is the radius of the sphere.

    This function calculates how far the point p is from the surface of a sphere centered at the origin with radius s.

    • If the result is positive, the point is outside the sphere.
    • If negative, it is inside the sphere.
    • If the result is zero, the point is on the surface—this is considered a hit point (in practice, we detect a hit when the distance is less than a small threshold).

    In this demo, we use a sphere’s distance function, but many other shapes have their own distance functions as well.

    If you’re interested, here’s a great article on distance functions.

    2.3. Rendering Spheres

    Let’s try rendering spheres.
    In this demo, we’ll render two slightly overlapping spheres.

    // output.frag
    
    precision mediump float;
    
    const float EPS = 1e-4;
    const int ITR = 16;
    
    uniform vec2 uResolution;
    
    varying vec2 vTexCoord;
    
    // Camera Params
    vec3 origin = vec3(0.0, 0.0, 1.0);
    vec3 lookAt = vec3(0.0, 0.0, 0.0);
    vec3 cDir = normalize(lookAt - origin);
    vec3 cUp = vec3(0.0, 1.0, 0.0);
    vec3 cSide = cross(cDir, cUp);
    
    vec3 translate(vec3 p, vec3 t) {
        return p - t;
    }
    
    float sdSphere(vec3 p, float s)
    {
        return length(p) - s;
    }
    
    float map(vec3 p) {
        float radius = 0.5;
        float d = 1e5;
    
        float sphere0 = sdSphere(translate(p, vec3(0.4, 0.0, 0.0)), radius);
        float sphere1 = sdSphere(translate(p, vec3(-0.4, 0.0, 0.0)), radius);
        d = min(sphere0, sphere1);
    
        return d;
    }
    
    void main() {
        vec2 p = (gl_FragCoord.xy * 2.0 - uResolution) / min(uResolution.x, uResolution.y);
    
        // Orthographic Camera
        vec3 ray = origin + cSide * p.x + cUp * p.y;
        vec3 rayDirection = cDir;
    
        float dist = 0.0;
    
        for (int i = 0; i < ITR; ++i) {
            dist = map(ray);
            ray += rayDirection * dist;
            if (dist < EPS) break;
        }
    
        vec3 color = vec3(0.0);
    
        if (dist < EPS) {
            color = vec3(1.0, 1.0, 1.0);
        }
    
        gl_FragColor = vec4(color, 1.0);
    }

    First, we normalize the screen coordinates:

    vec2 p = (gl_FragCoord.xy * 2.0 - uResolution) / min(uResolution.x, uResolution.y);

    Next, we set up the camera. This demo uses an orthographic camera (parallel projection):

    // Camera Params
    vec3 origin = vec3(0.0, 0.0, 1.0);
    vec3 lookAt = vec3(0.0, 0.0, 0.0);
    vec3 cDir = normalize(lookAt - origin);
    vec3 cUp = vec3(0.0, 1.0, 0.0);
    vec3 cSide = cross(cDir, cUp);
    
    // Orthographic Camera
    vec3 ray = origin + cSide * p.x + cUp * p.y;
    vec3 rayDirection = cDir;

    After that, inside the map function, two spheres are defined and their distances calculated using sdSphere. The variable d is initially set to a large value and updated with the min function to keep track of the shortest distance to the surface.

    float map(vec3 p) {
        float radius = 0.5;
        float d = 1e5;
    
        float sphere0 = sdSphere(translate(p, vec3(0.4, 0.0, 0.0)), radius);
        float sphere1 = sdSphere(translate(p, vec3(-0.4, 0.0, 0.0)), radius);
        d = min(sphere0, sphere1);
    
        return d;
    }

    Then we run a ray marching loop, which updates the ray position by computing the distance to the nearest object at each step. The loop ends either after a fixed number of iterations or when the distance becomes smaller than a threshold (dist < EPS):

    for (int i = 0; i < ITR; ++i) {
        dist = map(ray);
        ray += rayDirection * dist;
        if (dist < EPS) break;
    }

    Finally, we determine the output color. We use black as the default color (background), and render a white pixel only if a hit is detected:

    vec3 color = vec3(0.0);
    
    if (dist < EPS) {
        color = vec3(1.0);
    }

    We’ve successfully rendered two overlapping spheres using ray marching!
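    The whole loop is small enough to reason about outside the shader. Here is a minimal Python sketch of the same sphere-tracing procedure, reusing the shader's EPS, ITR, and radius values, that reports whether a given ray hits the sphere:

```python
import math

EPS = 1e-4
ITR = 16

def sd_sphere(p, s):
    return math.hypot(*p) - s

def march(origin, direction, radius=0.5):
    # advance the ray by the distance to the nearest surface at each step
    ray = list(origin)
    dist = 0.0
    for _ in range(ITR):
        dist = sd_sphere(ray, radius)
        ray = [ray[i] + direction[i] * dist for i in range(3)]
        if dist < EPS:
            break
    return dist < EPS

assert march((0.0, 0.0, 1.0), (0.0, 0.0, -1.0)) is True    # ray through the center: hit
assert march((2.0, 0.0, 1.0), (0.0, 0.0, -1.0)) is False   # ray passing far to the side: miss
```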

    2.4. Normals

    Although we successfully rendered spheres in the previous section, the scene still looks flat and lacks depth. This is because we haven’t applied any shading or visual effects that respond to surface orientation.

    While we won’t implement full shading in this demo, we’ll still compute surface normals, as they’re essential for adding surface detail and other visual effects.

    Let’s look at the code first:

    vec3 generateNormal(vec3 p) {
        return normalize(vec3(
                map(p + vec3(EPS, 0.0, 0.0)) - map(p + vec3(-EPS, 0.0, 0.0)),
                map(p + vec3(0.0, EPS, 0.0)) - map(p + vec3(0.0, -EPS, 0.0)),
                map(p + vec3(0.0, 0.0, EPS)) - map(p + vec3(0.0, 0.0, -EPS))
            ));
    }

    At first glance, this may seem hard to understand. Put simply, this computes the gradient of the distance function, which corresponds to the normal vector.

    If you’ve studied vector calculus, this might be easy to understand. For many others, though, it may seem a bit difficult.

    That’s totally fine—a full understanding of the details isn’t necessary to use the result. If you just want to move on, feel free to skip ahead to the section where we debug normals by visualizing them with color.

    However, for those who are interested in how it works, we’ll now walk through the explanation in more detail.

    The gradient of a scalar function 𝑓(𝑥,𝑦,𝑧) is simply a vector composed of its partial derivatives. It points in the direction of the greatest rate of increase of the function:

    ∇𝑓 = (∂𝑓/∂𝑥, ∂𝑓/∂𝑦, ∂𝑓/∂𝑧)

    To compute this gradient numerically, we can use the central difference method. For example, for the 𝑥 component:

    ∂𝑓/∂𝑥 ≈ (𝑓(𝑥 + 𝜀, 𝑦, 𝑧) − 𝑓(𝑥 − 𝜀, 𝑦, 𝑧)) / (2𝜀)

    We apply the same idea for the 𝑦 and 𝑧 components.
    Note: The factor 2𝜀 is omitted in the code since we normalize the result using normalize().

    Next, let us consider a signed distance function 𝑓(𝑥,𝑦,𝑧), which returns the shortest distance from any point in space to the surface of an object. By definition, 𝑓(𝑥,𝑦,𝑧)=0 on the surface of the object.

    Assume that 𝑓 is smooth (i.e., differentiable) in the region of interest. When the point (𝑥,𝑦,𝑧) undergoes a small displacement Δ𝒓 = (Δ𝑥,Δ𝑦,Δ𝑧), the change in the function value Δ𝑓 can be approximated using the first-order Taylor expansion:

    Δ𝑓 ≈ ∇𝑓 · Δ𝒓

    Here, ∇𝑓 is the gradient vector of 𝑓, and Δ𝒓 is an arbitrary small displacement vector.

    Now, since 𝑓 = 0 on the surface and remains constant as we move along the surface (i.e., tangentially), the function value does not change, so Δ𝑓 = 0. Therefore:

    ∇𝑓 · Δ𝒓 = 0

    This means that the gradient vector is perpendicular to any tangent vector Δ𝒓 on the surface. In other words, the gradient vector ∇𝑓 points in the direction of the surface normal.

    Thus, the gradient of a signed distance function gives the surface normal direction at any point on the surface.
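    As a sanity check, the central-difference normal can be evaluated in plain Python. For a sphere centered at the origin, the analytic normal at a surface point is simply the normalized position, and the numerical gradient should agree closely (this mirrors generateNormal above):

```python
import math

EPS = 1e-4

def sd_sphere(p, s=0.5):
    return math.hypot(*p) - s

def generate_normal(p):
    # gradient of the SDF via central differences along each axis, then normalized
    grad = []
    for axis in range(3):
        offset = [0.0, 0.0, 0.0]
        offset[axis] = EPS
        plus = sd_sphere([p[i] + offset[i] for i in range(3)])
        minus = sd_sphere([p[i] - offset[i] for i in range(3)])
        grad.append(plus - minus)
    length = math.hypot(*grad)
    return [g / length for g in grad]

# point on the radius-0.5 sphere; the expected normal is (0.6, 0.8, 0.0)
n = generate_normal([0.3, 0.4, 0.0])
assert abs(n[0] - 0.6) < 1e-3 and abs(n[1] - 0.8) < 1e-3 and abs(n[2]) < 1e-3
```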

    2.5. Visualizing Normals with Color

    To verify that the surface normals are being calculated correctly, we can visualize them using color.

    if (dist < EPS) {
        vec3 normal = generateNormal(ray);
        color = normal;
    }

    Note that within the if block, ray refers to a point on the surface of the object. So by passing ray to generateNormal, we can obtain the surface normal at the point of intersection.

    When we render the scene, you’ll notice that the surface of the sphere is shaded in red, green, and blue based on the orientation of the normal vectors. This is because we’re mapping the 𝑥, 𝑦, and 𝑧 components of the normal vector to the RGB color channels respectively.

    This is a common and intuitive way to debug normal vectors visually, helping us ensure they are computed correctly.

    3. Blending Shapes with smoothMin

    When combining two spheres with the standard min() function, a hard edge forms where the shapes intersect, resulting in an unnatural boundary.
    To avoid this, we can use a blending function called smoothMin, which softens the transition by merging the distance values smoothly.

    // added
    float smoothMin(float d1, float d2, float k) {
        float h = exp(-k * d1) + exp(-k * d2);
        return -log(h) / k;
    }
    
    float map(vec3 p) {
        float radius = 0.5;
        float k = 7.; // added: smoothing factor for metaball effect
        float d = 1e5;
    
        float sphere0 = sdSphere(translate(p, vec3(.4, 0.0, 0.0)), radius);
        float sphere1 = sdSphere(translate(p, vec3(-.4, 0.0, 0.0)), radius);
        d = smoothMin(d, sphere0, k); // modified: blend with smoothing
        d = smoothMin(d, sphere1, k); // modified
    
        return d;
    }

    This function creates a smooth, continuous connection between shapes—producing a metaball-like effect where the forms appear to merge organically.

    The parameter k controls the smoothness of the blend. A higher k value results in a sharper transition (closer to min()), while a lower k produces smoother, more gradual merging.
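    The behavior of this exponential smooth minimum is easy to probe in isolation. Here is a direct Python transcription of the same formula (the helper name is ours, the math is the shader's):

```python
import math

def smooth_min(d1, d2, k):
    # exponential smooth minimum: blends two distances instead of picking one
    h = math.exp(-k * d1) + math.exp(-k * d2)
    return -math.log(h) / k

# where both surfaces are equally close, the blended distance dips below either
# input, which is what makes the shapes bulge together
assert smooth_min(0.3, 0.3, 7.0) < 0.3

# with a large k the result approaches the plain min()
assert abs(smooth_min(0.3, 0.6, 60.0) - 0.3) < 1e-4
```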

    For more details, please refer to the following two articles:

    1. wgld.org | GLSL: オブジェクト同士を補間して結合する (blending and merging objects)
    2. Inigo Quilez :: computer graphics, mathematics, shaders, fractals, demoscene and more

    4. Adding Noise for a Droplet-like Appearance

    So far, we’ve covered how to calculate normals and how to smoothly blend objects.

    Next, let’s tune the surface appearance to make things feel more realistic.

    In this demo, we’re aiming to create droplet-like metaballs. So how can we achieve that kind of look? The key idea here is to use noise to distort the surface.

    Let’s jump right into the code:

    // output.frag
    
    uniform float uTime;
    
    // ...
    
    float rnd3D(vec3 p) {
        return fract(sin(dot(p, vec3(12.9898, 78.233, 37.719))) * 43758.5453123);
    }
    
    float noise3D(vec3 p) {
        vec3 i = floor(p);
        vec3 f = fract(p);
    
        float a000 = rnd3D(i); // (0,0,0)
        float a100 = rnd3D(i + vec3(1.0, 0.0, 0.0)); // (1,0,0)
        float a010 = rnd3D(i + vec3(0.0, 1.0, 0.0)); // (0,1,0)
        float a110 = rnd3D(i + vec3(1.0, 1.0, 0.0)); // (1,1,0)
        float a001 = rnd3D(i + vec3(0.0, 0.0, 1.0)); // (0,0,1)
        float a101 = rnd3D(i + vec3(1.0, 0.0, 1.0)); // (1,0,1)
        float a011 = rnd3D(i + vec3(0.0, 1.0, 1.0)); // (0,1,1)
        float a111 = rnd3D(i + vec3(1.0, 1.0, 1.0)); // (1,1,1)
    
        vec3 u = f * f * (3.0 - 2.0 * f);
        // vec3 u = f*f*f*(f*(f*6.0-15.0)+10.0);
    
        float k0 = a000;
        float k1 = a100 - a000;
        float k2 = a010 - a000;
        float k3 = a001 - a000;
        float k4 = a000 - a100 - a010 + a110;
        float k5 = a000 - a010 - a001 + a011;
        float k6 = a000 - a100 - a001 + a101;
        float k7 = -a000 + a100 + a010 - a110 + a001 - a101 - a011 + a111;
    
        return k0 + k1 * u.x + k2 * u.y + k3 * u.z + k4 * u.x * u.y + k5 * u.y * u.z + k6 * u.z * u.x + k7 * u.x * u.y * u.z;
    }
    
    vec3 dropletColor(vec3 normal, vec3 rayDir) {
        vec3 reflectDir = reflect(rayDir, normal);
    
        float noisePosTime = noise3D(reflectDir * 2.0 + uTime);
        float noiseNegTime = noise3D(reflectDir * 2.0 - uTime);
    
        vec3 _color0 = vec3(0.1765, 0.1255, 0.2275) * noisePosTime;
        vec3 _color1 = vec3(0.4118, 0.4118, 0.4157) * noiseNegTime;
    
        float intensity = 2.3;
        vec3 color = (_color0 + _color1) * intensity;
    
        return color;
    }
    
    // ...
    
    void main() {
    	// ...
    
    	if (dist < EPS) {
    		vec3 normal = generateNormal(ray);
    		color = dropletColor(normal, rayDirection);
    	}
    
    	gl_FragColor = vec4(color, 1.0);
    }

    To create the droplet-like texture, we’re using value noise. If you’re unfamiliar with these noise techniques, the following articles provide helpful explanations:

    3D value noise is generated by interpolating random values placed at the eight vertices of a cube. The process involves three stages of linear interpolation:

    1. Bottom face interpolation: First, we interpolate between the four corner values on the bottom face of the cube
    2. Top face interpolation: Similarly, we interpolate between the four corner values on the top face
    3. Final z-axis interpolation: Finally, we interpolate between the results from the bottom and top faces along the z-axis

    This triple interpolation process is called trilinear interpolation.

    The following code demonstrates the trilinear interpolation process for 3D value noise:

    float n = mix(
    	mix( mix( a000, a100, u.x ), mix( a010, a110, u.x ), u.y ),
    	mix( mix( a001, a101, u.x ), mix( a011, a111, u.x ), u.y ),
    	u.z
    );

    The nested mix() functions above can be converted into an explicit polynomial form for better performance:

    float k0 = a000;
    float k1 = a100 - a000;
    float k2 = a010 - a000;
    float k3 = a001 - a000;
    float k4 = a000 - a100 - a010 + a110;
    float k5 = a000 - a010 - a001 + a011;
    float k6 = a000 - a100 - a001 + a101;
    float k7 = -a000 + a100 + a010 - a110 + a001 - a101 - a011 + a111;
    
    float n = k0 + k1 * u.x + k2 * u.y + k3 * u.z + k4 * u.x * u.y + k5 * u.y * u.z + k6 * u.z * u.x + k7 * u.x * u.y * u.z;
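    A quick numeric check confirms the two forms are identical. This Python sketch evaluates both the nested mix() version and the expanded polynomial on the same corner values:

```python
def mix(a, b, t):
    # GLSL-style linear interpolation
    return a * (1.0 - t) + b * t

def trilinear(c, u):
    # nested mix() form: bottom face, top face, then along z
    (a000, a100, a010, a110, a001, a101, a011, a111) = c
    return mix(
        mix(mix(a000, a100, u[0]), mix(a010, a110, u[0]), u[1]),
        mix(mix(a001, a101, u[0]), mix(a011, a111, u[0]), u[1]),
        u[2],
    )

def polynomial(c, u):
    # expanded polynomial form from the shader
    (a000, a100, a010, a110, a001, a101, a011, a111) = c
    k0 = a000
    k1 = a100 - a000
    k2 = a010 - a000
    k3 = a001 - a000
    k4 = a000 - a100 - a010 + a110
    k5 = a000 - a010 - a001 + a011
    k6 = a000 - a100 - a001 + a101
    k7 = -a000 + a100 + a010 - a110 + a001 - a101 - a011 + a111
    x, y, z = u
    return k0 + k1*x + k2*y + k3*z + k4*x*y + k5*y*z + k6*z*x + k7*x*y*z

corners = (0.1, 0.9, 0.3, 0.7, 0.2, 0.5, 0.8, 0.4)
u = (0.25, 0.5, 0.75)
assert abs(trilinear(corners, u) - polynomial(corners, u)) < 1e-12
```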

    By sampling this noise using the reflection vector as coordinates, we can create a realistic water droplet-like texture. Note that we are using the surface normal obtained earlier to compute this reflection vector. To add time-based variation, we generate noise at positions offset by uTime:

    vec3 reflectDir = reflect(rayDir, normal);
    
    float noisePosTime = noise3D(reflectDir * 2.0 + uTime);
    float noiseNegTime = noise3D(reflectDir * 2.0 - uTime);
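    reflect() is the standard GLSL built-in computing I − 2·dot(N, I)·N for a unit normal N. A minimal Python equivalent, just to illustrate what coordinates feed the noise lookup:

```python
def reflect(incident, normal):
    # GLSL reflect(): I - 2 * dot(N, I) * N, assuming N is unit length
    d = sum(i * n for i, n in zip(incident, normal))
    return [i - 2.0 * d * n for i, n in zip(incident, normal)]

# a ray heading straight down bounces straight up off an upward-facing surface
assert reflect([0.0, -1.0, 0.0], [0.0, 1.0, 0.0]) == [0.0, 1.0, 0.0]
```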

    Finally, we blend two noise-influenced colors and scale the result:

    vec3 _color0 = vec3(0.1765, 0.1255, 0.2275) * noisePosTime;
    vec3 _color1 = vec3(0.4118, 0.4118, 0.4157) * noiseNegTime;
    
    float intensity = 2.3;
    vec3 color = (_color0 + _color1) * intensity;

    It’s starting to look quite like a water droplet! However, it still appears a bit murky.
    To improve this, let’s add the following post-processing step:

    // output.frag
    
    if (dist < EPS) {
    	vec3 normal = generateNormal(ray);
    	color = dropletColor(normal, rayDirection);
    }
    
    vec3 finalColor = pow(color, vec3(7.0)); // added
    
    gl_FragColor = vec4(finalColor, 1.0); // modified

    Using pow(), darker regions are suppressed, allowing the highlights to pop and creating a more glass-like, translucent surface.
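    The effect of pow() on the color is easy to quantify: channels in [0, 1] shrink when raised to the 7th power, but dark channels shrink far more. A tiny Python check:

```python
def contrast(color, exponent=7.0):
    # per-channel pow(): suppresses dark values, keeps highlights relatively intact
    return [channel ** exponent for channel in color]

bright, middle, dark = contrast([0.9, 0.5, 0.1])
assert bright > 0.45   # 0.9 ** 7 is about 0.478: highlights survive
assert dark < 1e-6     # 0.1 ** 7 is 1e-7: murky lows are crushed toward black
```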

    5. Simulating Stretchy Droplets with Mouse Movement

    Finally, let’s make the droplet stretch and follow the mouse movement, giving it a soft and elastic feel.

    We’ll achieve this by placing multiple spheres along the mouse trail.

    // Output.ts
    
    constructor() {
    	// ...
    	this.trailLength = 15;
    	this.pointerTrail = Array.from({ length: this.trailLength }, () => new THREE.Vector2(0, 0));
    	
    	this.uniforms = {
    	    uTime: { value: Common.time },
    	    uResolution: {
    	        value: new THREE.Vector2(Common.width, Common.height),
    	    },
    	    uPointerTrail: { value: this.pointerTrail },
    	};
    }
    
    // ...
    
    /**
     * # rAF update
     */
    update() {
      this.updatePointerTrail();
      this.render();
    }
    
    /**
     * # Update the pointer trail
     */
    updatePointerTrail() {
      for (let i = this.trailLength - 1; i > 0; i--) {
         this.pointerTrail[i].copy(this.pointerTrail[i - 1]);
      }
      this.pointerTrail[0].copy(Pointer.coords);
    }
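    The trail update is a simple shift: each frame the newest pointer position enters at index 0 and the oldest falls off the end. The same behavior can be sketched in Python with a bounded deque (hypothetical names; the TypeScript above does the shift with an explicit copy loop):

```python
from collections import deque

TRAIL_LENGTH = 15
pointer_trail = deque([(0.0, 0.0)] * TRAIL_LENGTH, maxlen=TRAIL_LENGTH)

def update_pointer_trail(coords):
    # newest position goes to the front; maxlen drops the oldest automatically
    pointer_trail.appendleft(coords)

update_pointer_trail((0.5, 0.5))
update_pointer_trail((0.6, 0.4))
assert pointer_trail[0] == (0.6, 0.4)
assert pointer_trail[1] == (0.5, 0.5)
assert len(pointer_trail) == TRAIL_LENGTH
```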
    // output.frag
    
    const int TRAIL_LENGTH = 15; // added
    uniform vec2 uPointerTrail[TRAIL_LENGTH]; // added
    
    // ...
    
    // modified
    float map(vec3 p) {
        float baseRadius = 8e-3;
        float radius = baseRadius * float(TRAIL_LENGTH);
        float k = 7.;
        float d = 1e5;
    
        for (int i = 0; i < TRAIL_LENGTH; i++) {
            float fi = float(i);
            vec2 pointerTrail = uPointerTrail[i] * uResolution / min(uResolution.x, uResolution.y);
    
            float sphere = sdSphere(
                    translate(p, vec3(pointerTrail, .0)),
                    radius - baseRadius * fi
                );
    
            d = smoothMin(d, sphere, k);
        }
    
        float sphere = sdSphere(translate(p, vec3(1.0, -0.25, 0.0)), 0.55);
        d = smoothMin(d, sphere, k);
    
        return d;
    }

    Conclusion

    In this tutorial, we explored how to create a dynamic, droplet-like effect using ray marching and shading techniques. Here’s what we covered:

    1. Used ray marching to render spheres in 3D space.
    2. Applied smoothMin to blend the spheres into seamless metaballs.
    3. Added surface noise to give the spheres a more organic appearance.
    4. Simulated stretchy motion by arranging spheres along the mouse trail.

    By combining these techniques, we achieved a soft, fluid visual that responds to user interaction.

    Thanks for following along—I hope you find these techniques useful in your own projects!



    Source link

  • How Seqrite Endpoint Protection Blocks Bots, Scripts, and Malware


    In today’s hyper-connected digital world, the cybersecurity landscape is shifting dramatically. Gone are the days when cyberattacks primarily relied on human intervention. We’re now facing a new breed of silent, swift adversaries: non-human threats. These automated entities—bots, malicious scripts, and sophisticated malware—are designed to operate at machine speed, exploiting vulnerabilities, bypassing traditional defenses, and often remaining undetected until significant damage has occurred. So, how do you defend against something you can’t see, something that moves faster than human reaction? The answer lies in intelligent, automated endpoint security. Enter Seqrite Endpoint Protection (EPP), your robust shield against these invisible invaders. Available for both cloud-based and on-premise deployments, Seqrite EPP is engineered with cutting-edge technologies specifically designed to identify and neutralize these stealthy, non-human threats.

    Understanding the Enigma: What Exactly Are Non-Human Cyber Threats?

    When we talk about “non-human cyber threats,” we’re referring to automated programs and code snippets that launch attacks without requiring direct human interaction. These include:

    • Bots: Automated programs designed to perform repetitive tasks at scale. Think credential stuffing attacks where bots try thousands of username/password combinations, or Distributed Denial of Service (DDoS) attacks that flood a server with traffic.
    • Malicious Scripts: These are pieces of automated code, often hidden within legitimate-looking files or web pages, designed to exploit system weaknesses, exfiltrate sensitive data, or spread malware across your network.
    • Exploit Kits: These are sophisticated toolkits that automatically scan systems for unpatched vulnerabilities and then deploy exploits to gain unauthorized access or deliver payloads like ransomware.

    The key characteristic of these threats is their autonomy and speed. They operate under the radar, making traditional, reactive security measures largely ineffective. This is precisely why proactive, automated detection and prevention mechanisms are absolutely critical for modern businesses.

    Seqrite Endpoint Protection: Your Multi-Layered Defense Against Automation

    Seqrite’s EPP doesn’t just offer a single line of defense; it deploys a comprehensive, multi-layered security framework. This framework is specifically engineered to detect and block automation-driven threats using a powerful combination of intelligent rule-based systems, behavioral analysis, and advanced AI-powered capabilities.

    Let’s dive into the key features that make Seqrite EPP a formidable opponent against non-human threats:

    1. Advanced Device Control: Many non-human threats, especially scripts and certain types of malware, are delivered via external devices like USB drives. Seqrite’s Advanced Device Control enforces strict usage policies, allowing you to define what devices can connect to your endpoints and how they can be used. By controlling storage, network, and wireless interfaces, you effectively close off a major entry point for automated attacks.
    2. Application Control with Zero Trust: Imagine only allowing approved applications and scripts to run on your systems. That’s the power of Seqrite’s Application Control. By implementing a Zero Trust model, it blocks unknown or unapproved applications and scripts from executing. Through meticulous allowlisting and blocklisting, only trusted applications can operate, making it incredibly effective against stealthy automation tools that attempt to execute malicious code.
    3. Behavior-Based Detection (GoDeep.AI): This is where Seqrite truly shines. Leveraging cutting-edge AI and machine learning, GoDeep.AI continuously monitors endpoint activity to identify abnormal and suspicious behaviors that indicate a non-human threat. This includes detecting:
      • Repetitive access patterns: A hallmark of bots attempting to brute-force accounts or scan for vulnerabilities.
      • Scripted encryption behavior: Instantly flags the tell-tale signs of ransomware encrypting files.
      • Silent data exfiltration attempts: Catches automated processes trying to siphon off sensitive information. The system doesn’t just detect; it actively stops suspicious activity in its tracks before it can cause any harm.
    4. Intrusion Detection & Prevention System (IDS/IPS): Seqrite’s integrated IDS/IPS actively monitors network traffic for known exploit patterns and anomalous behavior. This robust system is crucial for blocking automation-based threats that attempt to infiltrate your network through known vulnerabilities or launch network-based attacks like port scanning.
    5. File Sandboxing: When a suspicious file or script enters your environment, Seqrite doesn’t let it run directly on your system. Instead, it’s whisked away to a secure, isolated virtual sandbox environment for deep analysis. Here, the file is allowed to execute and its behavior is meticulously observed. If it exhibits any malicious traits—like attempting to mimic user behavior, access restricted resources, or encrypt files—it’s immediately flagged and stopped, preventing any potential damage to your actual endpoints.
    6. Web Protection & Phishing Control: Many non-human threats, particularly bots and sophisticated malware, rely on communication with remote command-and-control (C2) servers. Seqrite’s Web Protection proactively blocks:
      • Access to known malicious domains.
      • Phishing sites designed to steal credentials.
      • Unauthorized web access that could lead to malware downloads.
      • Crucially, it cuts off botnet callbacks, effectively severing the communication lines between bots and their command centers, rendering them inert.

    Enhancing Your Defense: Essential Supporting Features

    Beyond its core capabilities, Seqrite Endpoint Protection is bolstered by a suite of supporting features that further strengthen your organization’s resilience against non-human threats and beyond:

    • Patch Management: Automatically identifies and fixes software vulnerabilities that bots and scripts often exploit to gain entry. Proactive patching is key to prevention.
    • Firewall: Provides a critical layer of defense by filtering unauthorized network traffic and blocking communication with known botnet IP addresses.
    • Data Loss Prevention (DLP): Prevents automated data theft by monitoring and controlling data in transit, ensuring sensitive information doesn’t leave your network without authorization.
    • Centralized Log Management: Offers a unified view of security events, allowing for rapid detection and auditing of unusual or suspicious behaviors across all endpoints.
    • Disk Encryption Management: Safeguards your data by encrypting entire disks, stopping automated decryption attempts even if data is stolen, and protecting against ransomware.


    The Future of Endpoint Security: Why Non-Human Threat Detection is Non-Negotiable

    As we move deeper into 2025 and beyond, cyber threats are becoming increasingly automated, sophisticated, and often, AI-driven. Relying on traditional, signature-based security solutions is no longer enough to match the speed, stealth, and evolving tactics of automation-based attacks.

    Seqrite Endpoint Protection is built for this future. It leverages intelligent automation to effectively combat automation—blocking bots, malicious scripts, advanced ransomware, and other non-human threats before they can execute and wreak havoc on your systems and data.

    Final Takeaway: Don’t Let Invisible Threats Compromise Your Business

    In a world where cyberattacks are increasingly executed by machines, your defense must be equally advanced. With its comprehensive suite of features—including cutting-edge device and application control, AI-driven behavioral detection (GoDeep.AI), robust network-level protection, and secure sandboxing—Seqrite Endpoint Protection ensures your endpoints remain locked down and secure.

    Whether your organization operates with a cloud-first strategy or relies on a traditional on-premise infrastructure, Seqrite provides the adaptable and powerful security solutions you need.

    Ready to Fortify Your Defenses?

    It’s time to upgrade your endpoint security and protect your organization from both human-initiated and the ever-growing wave of non-human cyber threats.

    Explore how Seqrite can secure your business today. Request a Free Trial or Schedule a Demo.




    Source link

  • [ENG] MVPbuzzChat with Davide Bellone


    About the author

    Davide Bellone is a Principal Backend Developer with more than 10 years of professional experience with Microsoft platforms and frameworks.

    He loves learning new things and sharing these learnings with others: that’s why he writes on this blog and is involved as speaker at tech conferences.

    He’s a Microsoft MVP 🏆, conference speaker (here’s his Sessionize Profile) and content creator on LinkedIn.



    Source link

  • Motion Highlights #9 | Codrops



    The New Collective

    🎨✨💻 Stay ahead of the curve with handpicked, high-quality frontend development and design news, picked freshly every single day. No fluff, no filler—just the most relevant insights, inspiring reads, and updates to keep you in the know.

    Prefer a weekly digest in your inbox? No problem, we got you covered. Just subscribe here.



    Source link

  • How to kill a process running on a local port in Windows | Code4IT



    Now you can’t run your application because another process already uses the port. How can you find that process? How to kill it?


    Just a second! 🫷
    If you are here, it means that you are a software developer.
    So, you know that storage, networking, and domain management have a cost .

    If you want to support this blog, please ensure that you have disabled the adblocker for this site.
    I configured Google AdSense to show as few ADS as possible – I don’t want to bother you with lots of ads, but I still need to add some to pay for the resources for my site.

    Thank you for your understanding.
    Davide

    Sometimes, when trying to run your ASP.NET application, there’s something stopping you.

    Have you ever found a message like this?

    Failed to bind to address https://127.0.0.1:7261: address already in use.

    You can try over and over again, you can also restart the application, but the port still appears to be used by another process.

    How can you find the process that is running on a local port? How can you kill it to free up the port and, eventually, be able to run your application?

    In this article, we will learn how to find the blocking port in Windows 10 and Windows 11, and then we will learn how to kill that process given its PID.

    How to find the process running on a port on Windows 11 using PowerShell

    Let’s see how to identify the process that is running on port 7261.

    Open a PowerShell instance and run the netstat command:

    netstat -noa -p TCP

    NETSTAT is a command that shows information about the active TCP/IP network connections. It accepts several options. In this case, we will use:

    • -n: Displays addresses and port numbers in numerical form.
    • -o: Displays the owning process ID associated with each connection.
    • -a: Displays all connections and listening ports.
    • -p: Filters for a specific protocol (TCP or UDP).

    Netstat command to show all active TCP connections

    Notice that the last column lists the PID (Process ID) bound to each connection.

    From here, we can use the findstr command to get only the rows with a specific string (the searched port number).

    netstat -noa -p TCP | findstr 7261
    

    Netstat info filtered by string

    Now, by looking at the last column, we can identify the Process ID: 19160.

    How to kill a process given its PID on Windows or PowerShell

    Now that we have the Process ID (PID), we can open the Task Manager, paste the PID value in the topmost textbox, and find the related application.

    In our case, it was an instance of Visual Studio running an API application. We can now kill the process by hitting End Task.

    Using Task Manager on Windows11 to find the process with specified ID

    If you prefer working with PowerShell, you can find the details of the related process by using the Get-Process command:

    Process info found using PowerShell

    Then, you can use the taskkill command, specifying the PID with the /PID flag and adding the /F flag to force the process to terminate:

    taskkill /PID 19160 /F

    We have killed the process related to the running application. Visual Studio itself keeps working, of course.

    Further readings

    Hey, what are these fancy colours on the PowerShell?

    It’s a customization I added to show the current folder and the info about the associated GIT repository. It’s incredibly useful while developing and navigating the file system with PowerShell.

    🔗 OhMyPosh and CascadiaCode not working on PowerShell? How to fix it in Windows 10 and Windows 11 Integrated Terminal

    This article first appeared on Code4IT 🐧

    Wrapping up

    As you can imagine, this article exists because I often forget how to find the process that stops my development.

    It’s always nice to delve into these topics to learn more about what you can do with PowerShell and which flags are available for a command.

    I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛

    Happy coding!

    🐧





    Source link

  • C# Tip: ObservableCollection – a data type to intercept changes to the collection | Code4IT




    Imagine you need a way to raise events whenever an item is added or removed from a collection.

    Instead of building a new class from scratch, you can use ObservableCollection<T> to store items, raise events, and act when the internal state of the collection changes.

    In this article, we will learn how to use ObservableCollection<T>, an out-of-the-box collection available in .NET.

    Introducing the ObservableCollection type

    ObservableCollection<T> is a generic collection coming from the System.Collections.ObjectModel namespace.

    It allows the most common operations, such as Add(T item) and Remove(T item), as you can expect from most of the collections in .NET.

    Moreover, it implements two interfaces:

    • INotifyCollectionChanged can be used to raise events when the internal collection is changed.
    • INotifyPropertyChanged can be used to raise events when one of the properties of the collection changes.

    Let’s see a simple example of the usage:

    var collection = new ObservableCollection<string>();
    
    collection.Add("Mario");
    collection.Add("Luigi");
    collection.Add("Peach");
    collection.Add("Bowser");
    
    collection.Remove("Luigi");
    
    collection.Add("Waluigi");
    
    _ = collection.Contains("Peach");
    
    collection.Move(1, 2);
    

    As you can see, we can do all the basic operations: add, remove, move an item to another position (with the Move method), and check if the collection contains a specific value.

    You can simplify the initialization by passing a collection in the constructor:

     var collection = new ObservableCollection<string>(new string[] { "Mario", "Luigi", "Peach" });
    
     collection.Add("Bowser");
    
     collection.Remove("Luigi");
    
     collection.Add("Waluigi");
    
     _ = collection.Contains("Peach");
    
     collection.Move(1, 2);
    

    How to intercept changes to the underlying collection

    As we said, this data type implements INotifyCollectionChanged. Thanks to this interface, we can add event handlers to the CollectionChanged event and see what happens.

    var collection = new ObservableCollection<string>(new string[] { "Mario", "Luigi", "Peach" });
    collection.CollectionChanged += WhenCollectionChanges;
    
    Console.WriteLine("Adding Bowser...");
    collection.Add("Bowser");
    Console.WriteLine("");
    
    
    Console.WriteLine("Removing Luigi...");
    collection.Remove("Luigi");
    Console.WriteLine("");
    
    Console.WriteLine("Adding Waluigi...");
    collection.Add("Waluigi");
    Console.WriteLine("");
    
    Console.WriteLine("Searching for Peach...");
    var containsPeach = collection.Contains("Peach");
    Console.WriteLine("");
    
    Console.WriteLine("Swapping items...");
    collection.Move(1, 2);
    

    The WhenCollectionChanges method accepts a NotifyCollectionChangedEventArgs that gives you info about the intercepted changes:

    private void WhenCollectionChanges(object? sender, NotifyCollectionChangedEventArgs e)
    {
        var allItems = ((IEnumerable<object>)sender)?.Cast<string>().ToArray() ?? new string[] { "<empty>" };
        Console.WriteLine($"> Currently, the collection is {string.Join(',', allItems)}");
    
        Console.WriteLine($"> The operation is {e.Action}");
    
        var previousItems = e.OldItems?.Cast<string>()?.ToArray() ?? new string[] { "<empty>" };
        Console.WriteLine($"> Before the operation it was {string.Join(',', previousItems)}");
    
    
        var currentItems = e.NewItems?.Cast<string>()?.ToArray() ?? new string[] { "<empty>" };
        Console.WriteLine($"> Now, it is {string.Join(',', currentItems)}");
    }
    

    Every time an operation occurs, we write some logs.

    The result is:

    Adding Bowser...
    > Currently, the collection is Mario,Luigi,Peach,Bowser
    > The operation is Add
    > Before the operation it was <empty>
    > Now, it is Bowser
    
    Removing Luigi...
    > Currently, the collection is Mario,Peach,Bowser
    > The operation is Remove
    > Before the operation it was Luigi
    > Now, it is <empty>
    
    Adding Waluigi...
    > Currently, the collection is Mario,Peach,Bowser,Waluigi
    > The operation is Add
    > Before the operation it was <empty>
    > Now, it is Waluigi
    
    Searching for Peach...
    
    Swapping items...
    > Currently, the collection is Mario,Bowser,Peach,Waluigi
    > The operation is Move
    > Before the operation it was Peach
    > Now, it is Peach
    

    Notice a few points:

    • the sender property holds the current items in the collection. It’s an object?, so you have to cast it to another type to use it.
    • the NotifyCollectionChangedEventArgs has different meanings depending on the operation:
      • when adding a value, OldItems is null and NewItems contains the items added during the operation;
      • when removing an item, OldItems contains the value just removed, and NewItems is null.
      • when swapping two items, both OldItems and NewItems contain the item you are moving.
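
    These cases can also be handled explicitly by branching on the Action property of the event args. Below is a minimal sketch of such a handler (the method name and the log messages are ours, not from the article):

    ```csharp
    using System;
    using System.Collections.Specialized;
    using System.Linq;

    // A hypothetical handler that branches on the kind of change.
    private void DescribeChange(object? sender, NotifyCollectionChangedEventArgs e)
    {
        switch (e.Action)
        {
            case NotifyCollectionChangedAction.Add:
                // OldItems is null here, so we only read NewItems.
                Console.WriteLine($"Added: {string.Join(',', e.NewItems!.Cast<string>())}");
                break;
            case NotifyCollectionChangedAction.Remove:
                // NewItems is null here, so we only read OldItems.
                Console.WriteLine($"Removed: {string.Join(',', e.OldItems!.Cast<string>())}");
                break;
            case NotifyCollectionChangedAction.Move:
                // Both OldItems and NewItems contain the moved item;
                // the indexes tell you where it came from and where it went.
                Console.WriteLine($"Moved {e.OldItems![0]} from index {e.OldStartingIndex} to index {e.NewStartingIndex}");
                break;
        }
    }
    ```

    The OldStartingIndex and NewStartingIndex properties are especially handy for the Move case, where OldItems and NewItems alone would not tell you what actually changed.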

    How to intercept when a collection property has changed

    To execute events when a property changes, we need to add a delegate to the PropertyChanged event. However, it’s not available directly on the ObservableCollection type: you first have to cast it to an INotifyPropertyChanged:

    var collection = new ObservableCollection<string>(new string[] { "Mario", "Luigi", "Peach" });
    (collection as INotifyPropertyChanged).PropertyChanged += WhenPropertyChanges;
    
    Console.WriteLine("Adding Bowser...");
    collection.Add("Bowser");
    Console.WriteLine("");
    
    
    Console.WriteLine("Removing Luigi...");
    collection.Remove("Luigi");
    Console.WriteLine("");
    
    Console.WriteLine("Adding Waluigi...");
    collection.Add("Waluigi");
    Console.WriteLine("");
    
    Console.WriteLine("Searching for Peach...");
    var containsPeach = collection.Contains("Peach");
    Console.WriteLine("");
    
    Console.WriteLine("Swapping items...");
    collection.Move(1, 2);
    

    We can now specify the WhenPropertyChanges method as follows:

    private void WhenPropertyChanges(object? sender, PropertyChangedEventArgs e)
    {
        var allItems = ((IEnumerable<object>)sender)?.Cast<string>().ToArray() ?? new string[] { "<empty>" };
        Console.WriteLine($"> Currently, the collection is {string.Join(',', allItems)}");
        Console.WriteLine($"> Property {e.PropertyName} has changed");
    }
    

    As you can see, we have again the sender parameter that contains the collection of items.

    Then, we have a parameter of type PropertyChangedEventArgs that we can use to get the name of the property that has changed, using the PropertyName property.

    Let’s run it.

    Adding Bowser...
    > Currently, the collection is Mario,Luigi,Peach,Bowser
    > Property Count has changed
    > Currently, the collection is Mario,Luigi,Peach,Bowser
    > Property Item[] has changed
    
    Removing Luigi...
    > Currently, the collection is Mario,Peach,Bowser
    > Property Count has changed
    > Currently, the collection is Mario,Peach,Bowser
    > Property Item[] has changed
    
    Adding Waluigi...
    > Currently, the collection is Mario,Peach,Bowser,Waluigi
    > Property Count has changed
    > Currently, the collection is Mario,Peach,Bowser,Waluigi
    > Property Item[] has changed
    
    Searching for Peach...
    
    Swapping items...
    > Currently, the collection is Mario,Bowser,Peach,Waluigi
    > Property Item[] has changed
    

    As you can see, for every add/remove operation, two events are raised: one saying that the Count has changed, and one saying that the internal Item[] has changed.

    However, notice what happens in the Swapping section: since swapping only changes the order of the items, the Count property does not change.

    This article first appeared on Code4IT 🐧

    Final words

    As you probably noticed, events are fired only after the collection has been initialized: the items passed in the constructor are considered the initial state, and only the subsequent operations that mutate the state can raise events.

    Also, notice that events are fired only if the reference to the value changes. If the collection holds more complex classes, like:

    public class User
    {
        public string Name { get; set; }
    }
    

    No event is fired if you change the value of the Name property of an object already part of the collection:

    var me = new User { Name = "Davide" };
    var collection = new ObservableCollection<User>(new User[] { me });
    
    collection.CollectionChanged += WhenCollectionChanges;
    (collection as INotifyPropertyChanged).PropertyChanged += WhenPropertyChanges;
    
    me.Name = "Updated"; // It does not fire any event!
    
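    If you do need notifications when an item's own properties change, a common workaround (not something ObservableCollection<T> provides out of the box) is to have the item implement INotifyPropertyChanged and subscribe to each element yourself. Here is a minimal sketch, with type and member names of our own choosing:

    ```csharp
    using System;
    using System.Collections.ObjectModel;
    using System.ComponentModel;

    // A hypothetical item type that raises PropertyChanged itself.
    public class ObservableUser : INotifyPropertyChanged
    {
        private string _name = "";
        public event PropertyChangedEventHandler? PropertyChanged;

        public string Name
        {
            get => _name;
            set
            {
                if (_name == value) return;
                _name = value;
                PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(nameof(Name)));
            }
        }
    }

    // Usage: subscribe to each item, not only to the collection.
    var me = new ObservableUser { Name = "Davide" };
    var collection = new ObservableCollection<ObservableUser>(new[] { me });
    me.PropertyChanged += (s, e) => Console.WriteLine($"Item property {e.PropertyName} has changed");

    me.Name = "Updated"; // now this does raise an event
    ```

    In real applications you would also subscribe/unsubscribe in a CollectionChanged handler as items are added and removed, so that every element in the collection is covered.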

    Notice that ObservableCollection<T> is not thread-safe! You can find an interesting article by Gérald Barré (aka Meziantou) where he explains a thread-safe version of ObservableCollection<T> he created. Check it out!

    As always, I suggest exploring the language and toying with the parameters, properties, data types, etc.

    You’ll find lots of exciting things that may come in handy.

    I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛

    Happy coding!

    🐧





    Source link

  • Happy June Sale! 🎁

    Happy June Sale! 🎁


    At Browserling and Online Tools we love sales.

    We just created a new automated sale called Happy June Sale.

    Now, on the first day of each June, we show a 50% discount offer to all users who visit our site. BOOM SHAKA LAKA!

    Buy a Sub Now!

    What Is Browserling?

    Browserling is an online service that lets you test how other websites look and work in different web browsers, like Chrome, Firefox, or Safari, without needing to install them. It runs real browsers on real machines and streams them to your screen, kind of like remote desktop but focused on browsers. This helps web developers and regular users check for bugs, suspicious links, and weird stuff that happens in certain browsers. You just go to Browserling, pick a browser and version, and then enter the site you want to test. It’s quick, easy, and works from your browser with no downloads or installs.

    What Are Online Tools?

    Online Tools is an online service that offers free, browser-based productivity tools for everyday tasks like editing text, converting files, editing images, working with code, and way more. It’s an all-in-one Digital Swiss Army Knife with 1500+ utilities, so you can find the exact tool you need without installing anything. Just open the site, use what you need, and get things done fast.

    Who Uses Browserling and Online Tools?

    Browserling and Online Tools are used by millions of regular internet users, developers, designers, students, and even Fortune 100 companies. Browserling is handy for testing websites in different browsers without having to install them. Online Tools are used for simple tasks like resizing or converting images, or even fixing small file problems quickly without downloading any apps.

    Buy a subscription now and see you next time!



    Source link

  • Try It On: A Playful Drag-and-Drop Styling UI

    Try It On: A Playful Drag-and-Drop Styling UI


    I recently helped my friends with their brand, www.laughwithtic.com, and wanted to create something distinctive for their pre-launch. My design drew inspiration from classic dress-up games, focusing on a playful, interactive element. Initially, we featured a rat character as the main model: users could simply drag and drop a selection of t-shirts onto the rat. This approach was effective and added a fresh element to the site.

    Evolving the Design: From Rat to Human

    A few weeks later, I saw a video by @samdape on X, showcasing a similar UI layout, but enhanced with a real human character at an angle. This immediately inspired me to redesign our pre-launch experience, transitioning to a human model in that dynamic pose.

    To further enhance the interaction, I integrated several subtle refinements. A slight shadow behind the character adds depth. When a T-shirt is dragged, it subtly skews and shakes, making the interaction feel more tactile. Perhaps the most engaging detail is how the model raises her hand as you drag a t-shirt nearby, signaling readiness for the change. These small touches contribute to an experience that feels immersive and unexpected. This entire system is built with vanilla JS, HTML, and CSS, operating on the simple principle of changing PNG images based on drag-and-drop collisions.

    The Tech Behind the Interaction

    The core of this experience is a vanilla JavaScript-driven drag-and-drop mechanism, designed to allow users to visually try different t-shirts on a central model.

    Here’s a breakdown of its key phases:

    • Initiation: When a user clicks or touches a t-shirt, it becomes the active element. Its zIndex is raised, and a grabbed CSS class is applied for immediate visual feedback.
    • Dragging: The active t-shirt’s position continuously updates to follow the cursor.
      • Skewing Effect: Horizontal dragging applies CSS classes that subtly skew the t-shirt, adding a dynamic feel. These classes are removed if movement pauses.
      • Model Readiness: The system constantly checks for collision with the model. If the t-shirt hovers over the model, the model’s image changes to a “ready” version (e.g., raising a hand), providing clear feedback.
    • Dropping: Upon release, collision with the model is checked.
      • On Model: If dropped on the model, the model’s image updates to wear the new t-shirt. The dragged t-shirt then resets to its original layout position.
      • Off Model: If dropped elsewhere, the t-shirt animates back to its initial position. The model reverts to its default state if it was in a “ready” pose.
    • Image Preloading: All t-shirt and model images (including hover states) are preloaded on page load using a dedicated function, ensuring smooth visual transitions without flickers.
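
    The collision checks that drive the "Model Readiness" and "Dropping" phases above can be sketched as a simple axis-aligned rectangle overlap test. The function and object names below are ours (the real site presumably derives the rectangles from getBoundingClientRect()):

    ```javascript
    // Two axis-aligned rectangles {x, y, width, height} overlap exactly when
    // neither one lies entirely to one side of the other.
    function rectsOverlap(a, b) {
      return (
        a.x < b.x + b.width &&
        a.x + a.width > b.x &&
        a.y < b.y + b.height &&
        a.y + a.height > b.y
      );
    }

    // During dragging, the t-shirt's rectangle is tested against the model's;
    // on overlap the model swaps to its "ready" pose, and on drop the t-shirt
    // is either applied or animated back. Illustrative coordinates:
    const model = { x: 200, y: 100, width: 300, height: 500 };
    const shirt = { x: 350, y: 250, width: 120, height: 120 };
    console.log(rectsOverlap(model, shirt)); // true: show the "ready" pose
    ```

    Running this check on every pointer-move event is cheap enough that no spatial indexing is needed for a handful of draggable items.
    
    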

    This combination of event handling, CSS for nuanced visual effects, and dynamic image swapping creates an engaging and interactive try-on experience. You can check out the full website at www.laughwithtic.com.

    I hope you find the interaction both fun and inspiring!

    Check out the GitHub repo here.



    Source link

  • Chinese Telecom Targeted by VELETRIX & VShell Malware

    Chinese Telecom Targeted by VELETRIX & VShell Malware


    Contents

    • Introduction
    • Initial Findings
    • Infection Chain.
    • Technical Analysis
      • Stage 0 – Malicious ZIP File.
      • Stage 1 – Malicious VELETRIX implant.
      • Stage 2 – Malicious V-Shell implant.
    • Hunting and Infrastructure.
    • Attribution
    • Conclusion
    • Seqrite Protection.
    • IOCs
    • MITRE ATT&CK.

    Authors: Subhajeet Singha and Sathwik Ram Prakki

    Introduction

    Seqrite Labs APT-Team has recently found a campaign targeting the Chinese telecom industry, specifically China Mobile Tietong Co., Ltd., a well-known subsidiary of China Mobile, one of the major telecom companies in China. The malware ecosystem involved in this campaign is based on the VELETRIX malware and VShell, a well-known adversary-simulation tool widely adopted by China-based threat actors to target various Western entities in the wild.

    In this blog, we will explore the technical sophistication of the campaign we encountered during our analysis. We will examine its various stages, starting with a deep dive into the initial infection, moving on to the implants used, and ending with an overall view of the campaign.

    Initial Findings

    Recently, on the 13th of May, our team found a malicious ZIP file that surfaced on various sources such as VirusTotal. The ZIP file was used as the initial source of infection and contains multiple EXEs and DLLs. The same file was also spotted by other threat researchers the very same day.

    The ZIP contains an interesting executable named 2025 China Mobile Tietong Co., Ltd. Internal Training Program is about to launch, please register as soon as possible.exe, which loads a handful of interesting DLLs such as drstat.dll. We then decided to look into how these files work.

    Infection Chain

    Technical Analysis

    We will break down the analysis into three parts: the malicious ZIP attachment, the VELETRIX implant, and a brief look at the VShell implant.

    Stage 0 – Malicious ZIP File.

    Initially, we found a malicious ZIP file named 附件.zip, i.e., attachment.zip. We then looked into its contents.

    We found a set of interesting EXE, DLL, and XML files. Most of them were legitimate Microsoft-signed binaries, while some carried a code-signing certificate from Shenzhen Thunder Networking Technologies Ltd. One interesting DLL, drstat.dll, is often associated with the Wondershare Repairit software.

    Upon checking the official Wondershare Repairit website, we confirmed that the executable drstat.exe had been renamed and packaged three times under different names:

    • China Mobile Limited’s 2025 internal training program is about to begin. Please register as soon as possible.
    • Uninstall.
    • Registration-link.

    Next, we verified whether Wondershare does sign the actual binary that is officially available from its website.

    Finally, we confirmed that the threat entity used the same file that is available for download from Wondershare’s official website. Given this code-signing maneuver, and after analyzing the malicious DLL, we can confirm that the threat actor used DLL sideloading against the target to launch the implant, which we have decided to term VELETRIX.

    Before diving into the next section, we note that the other code-signing certificate found in this archive, issued to ‘Shenzhen Thunder Networking Technologies Ltd’, has frequently been associated in various reports and discussions with malicious executables abused by Chinese-origin threat entities.

    Stage 1 – Malicious VELETRIX Implant.

    Initially, looking into the implant, we gathered some basic information: it is a 64-bit binary, and it exposes a few interesting export functions. Next, we will focus on the code analysis of this malicious implant.

    Among all the exports, we found dr_data_stop to be the one containing the interesting malicious code.

    Initially, the implant starts with a little anti-analysis trick: a combination of the Sleep and Beep Windows APIs running inside a do-while loop that delays execution for ~10 seconds and plays a beep noise to evade automated sandbox analysis. The loop sleeps for 1 second and beeps, repeating 10 times; this mechanism delays the analyst’s work and can confuse automated sandboxes.

    This technique leverages NtDelayExecution at the system level – Beep internally calls NtDelayExecution, which accepts a “DelayInterval” parameter specifying milliseconds to delay. When executed, NtDelayExecution pauses the calling thread, which can cause sandbox timeouts or loss of debugger control, making it a simple yet effective anti-sandbox technique. The Beep API is particularly clever because it serves dual purposes: creating execution delays through its internal NtDelayExecution calls while also generating audio artifacts that may trigger different behavior in analysis environments or alert researchers to active code execution.

    Then it loads kernel32.dll using LoadLibraryA and, once the DLL is loaded, uses GetProcAddress to resolve an interesting set of APIs: VirtualAllocExNuma, VirtualProtect, and EnumCalendarInfoA.

    Similarly, it loads ADVAPI32.dll and, using the same technique, resolves SystemFunction036, HeapAlloc, and HeapFree.

    Finally, ntdll.dll is loaded, and an interesting Windows API is resolved: RtlIpv4StringToAddressA.

    Next, this malicious loader uses a technique called IPFuscation, in which the malicious shellcode is encoded as a list of IPv4 address strings.

    Further, a while loop, together with the RtlIpv4StringToAddressA API, is used to decode the obfuscated shellcode: each ASCII IP string is converted back to its binary form, and the resulting bytes are later executed as shellcode.

    Once the shellcode is recovered in binary form, the VirtualAllocExNuma API is used to allocate a fresh memory block in the current process with only Read & Write permissions.

    Once the memory is allocated, the encoded blob recovered via the IPFuscation step is further decoded with a simple XOR operation and copied into the allocated memory.
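
    The decoding described above can be reproduced outside the implant. Below is a rough Python sketch of the same IPFuscation-plus-XOR recovery an analyst might script; the IP strings and XOR key here are purely illustrative, not taken from the actual sample:

    ```python
    import ipaddress

    def deobfuscate(ip_strings, xor_key):
        # 1) IPFuscation decode: each IPv4 string packs 4 shellcode bytes,
        #    mirroring what RtlIpv4StringToAddressA does inside the loader.
        raw = b"".join(ipaddress.IPv4Address(s).packed for s in ip_strings)
        # 2) Single-byte XOR pass, as applied before the blob is copied
        #    into the freshly allocated RW memory.
        return bytes(b ^ xor_key for b in raw)

    # Illustrative input only -- not the real sample's data.
    ips = ["252.72.131.228", "240.232.200.0"]
    shellcode = deobfuscate(ips, 0x00)
    print(shellcode.hex())  # fc4883e4f0e8c800
    ```

    Each octet contributes one byte, so a blob of N bytes needs ceil(N/4) IP strings; this is what makes the shellcode blend in as an innocuous-looking list of addresses.
    
    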

    Then, it uses VirtualProtect to change the memory protection of the allocated memory to Execute-Read-Write.

    Finally, it uses a slightly innovative technique to execute the shellcode via a callback function, using the EnumCalendarInfoA API. This technique leverages the fact that EnumCalendarInfoA expects a callback function pointer as a parameter: the malware passes its shellcode address as this callback, causing Windows to unknowingly execute the malicious code when the API calls what it assumes is a legitimate calendar-enumeration function. In this case, the shellcode is a Windows implant of the VShell OST framework.

    We can conclude that the VELETRIX implant performs code injection via a callback mechanism. In the next section, we will take a brief look at the well-known VShell implant and its workings.

    Stage 2 – Malicious VShell Implant.

    VShell is a well-known cross-platform OST framework written in Golang. It was initially developed by a researcher and later mysteriously taken down, as mentioned in multiple research blogs by researchers who have tracked campaigns such as UNC5174; it has since been widely used by threat actors originating from the Chinese geosphere.

    As mentioned in the previous section, VELETRIX loads this Windows implant into memory. Looking inside the file, we found that the dropped implant goes by the name tcp_windows_amd64.dll. As this framework is well researched, we will only look at the key artefacts and give a basic overview of the implant.

    Looking into the implant, we found multiple functionalities such as connect, send, and receive, which are used to interact with the operator. All these functions rely on multiple Windows APIs from the WinSock library.

    Analyzing further, we uncovered the command-and-control server along with an important config value, i.e., the salt, which is qwe123qwe. In the next section, we will look into hunting and infrastructural artefacts.

    Hunting and Infrastructure.

    Upon looking into the previous implants, we hunted and found some interesting artefacts.

    Based on the analysis and extraction of the salt used in this campaign, we found a total of 44 implants using the exact same salt, qwe123qwe. As VShell is a cross-platform tool, we found multiple EXEs, ELFs, and DLLs, both signed and unsigned.

    We also found a few samples whose C2s are located in multiple regions, such as the US and Hong Kong. A few of the 44 implants using the same salt correlate with the APT group Earth Lamia, which has targeted Indian entities in a few cases. While hunting, we also found that many similar implants have multiple overlaps with UNC5174’s campaign abusing the ScreenConnect CVE-2024-1709 vulnerability, as reported by researchers.

    Looking into the infrastructural overlaps, the same indicator has been attributed to the cluster of a China-nexus state-sponsored threat actor that has been abusing CVE-2025-31324 to target SAP NetWeaver Visual Composer.

    We also found that on the same infrastructure, a login-based webpage has been hosted which is related to the Asset Lighthouse System — an open-source asset discovery and reconnaissance platform developed by Tophant Competence Center (TCC). It is primarily used for mapping external attack surfaces by identifying exposed IPs, domains, ports, and web services. Therefore, we decided to pivot using these artefacts and found few interesting overlaps.

    Post-pivoting, we discovered multiple malicious webservers with similar port configurations (such as running ASL over port 5003) that had hosted Cobalt Strike and SuperShell, known go-to implants of UNC5174 aka Uteus. We also uncovered multiple webservers with similar port configurations related to Earth Lamia.

    Last but not least, we also saw that the command-and-control server has been hosting Cobalt Strike for use against targets, making it the second post-exploitation framework used by this threat entity.

    Attribution.

    Through analysis of implant usage and overlapping infrastructure patterns, we identified the threat actor leveraging VELETRIX, a relatively new loader designed to execute VShell in memory. Although VShell was initially released as an open-source project and later taken down by its original developer, it has since been widely abused by China-aligned threat groups.

    Further threat hunting revealed similar behavioral patterns that align with known activity from UNC5174 (Uteus) and Earth Lamia, as recently documented by researchers. The current infrastructure associated with this actor exhibits consistent use of tools such as SuperShell, Cobalt Strike, VShell, and the Asset Lighthouse System—an open-source platform for asset discovery and reconnaissance. These tools have previously been attributed to various China-based APT clusters and observed actively deployed in-the-wild (ITW).

    Given the technical and infrastructural overlaps, we assess with high confidence that this threat actor belongs to a China-nexus cluster.

    Conclusion.

    After carefully researching this campaign, which we have termed Operation DRAGONCLONE, we found that the China-nexus threat entity has been using DLL sideloading against the Wondershare Repairit software to load the VELETRIX DLL implant, which in turn uses techniques such as anti-sandboxing, IPFuscation, and callback-based execution to run the VShell malware. The activity has multiple overlaps with UNC5174 and Earth Lamia, and the campaign has been active since March 2025.

    Seqrite Protection.

    IOCs

    SHA-256 Filenames
    40450b4212481492d2213d109a0cd0f42de8e813de42d53360da7efac7249df4 \附件.zip
    ac6e0ee1328cfb1b6ca0541e4dfe7ba6398ea79a300c4019253bd908ab6a3dc0 drstat.dll
    645f9f81eb83e52bbbd0726e5bf418f8235dd81ba01b6a945f8d6a31bf406992 drstat.exe
    ba4f9b324809876f906f3cb9b90f8af2f97487167beead549a8cddfd9a7c2fdc tcp_windows_amd64.dll
    bb6ab67ddbb74e7afb82bb063744a91f3fecf5fd0f453a179c0776727f6870c7 mscoree.dll
    2206cc6bd9d15cf898f175ab845b3deb4b8627102b74e1accefe7a3ff0017112 tcp_windows_amd64.exe
    a0f4ee6ea58a8896d2914176d2bfbdb9e16b700f52d2df1f77fe6ce663c1426a memfd:a(deleted)

     

     

    IP/Domains

    IP
    62.234.24.38
    47.115.51.44
    47.123.7.206

    MITRE ATT&CK

    Tactic Technique ID Technique Name Sub-technique ID Sub-Technique Name
    Reconnaissance T1595 Active Scanning T1595.002 Vulnerability Scanning
    Reconnaissance T1588 Obtain Capabilities T1588.002 Tool
    Initial Access T1566 Phishing T1566.001 Spear phishing Attachment
    Execution T1204 User Execution T1204.002 Malicious File.
    Persistence
    Defense Evasion T1140 Deobfuscate/Decode Files or Information
    Defense Evasion T1574 Hijack Execution Flow T1574.001 DLL
    Defense Evasion T1027 Obfuscated Files or Information T1027.007 Dynamic API Resolution
    Defense Evasion T1027 Obfuscated Files or Information T1027.013 Encrypted/Encoded File
    Defense Evasion T1055 Process Injection
    Defense Evasion T1497 Virtualization/Sandbox Evasion T1497.003 Time Based Evasion
    Discovery T1046 Network Service Discovery

     



    Source link