
  • How to extract, create, and navigate Zip Files in C# | Code4IT



    Learn how to zip and unzip compressed files with C#. Beware: it’s not as obvious as it might seem!


    When working with local files, you might need to open, create, or update Zip files.

    In this article, we will learn how to work with Zip files in C#. We will learn how to perform basic operations such as opening, extracting, and creating a Zip file.

The main class we will use is named ZipFile, and comes from the System.IO.Compression namespace. It has been available since .NET Framework 4.5, so we can say it’s pretty stable 😉 Nevertheless, there are some tricky points that you need to know before using this class. Let’s learn!

    Using C# to list all items in a Zip file

    Once you have a Zip file, you can access the internal items without extracting the whole Zip.

    You can use the ZipFile.Open method.

    using ZipArchive archive = ZipFile.Open(zipFilePath, ZipArchiveMode.Read);
    System.Collections.ObjectModel.ReadOnlyCollection<ZipArchiveEntry> entries = archive.Entries;
    

    Notice that I specified the ZipArchiveMode. This is an Enum whose values are Read, Create, and Update.
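While the rest of this article uses Read mode, the same method covers updates too. Here is a minimal sketch, reusing the zipFilePath variable from above, of adding a new entry to an existing Zip in Update mode:

    using (ZipArchive archive = ZipFile.Open(zipFilePath, ZipArchiveMode.Update))
    {
        // CreateEntry adds a new, empty entry that we can write to through its Stream
        ZipArchiveEntry entry = archive.CreateEntry("notes.txt");
        using StreamWriter writer = new StreamWriter(entry.Open());
        writer.WriteLine("A new file, added to the existing archive");
    }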

    Using the Entries property of the ZipArchive, you can access the whole list of files stored within the Zip folder, each represented by a ZipArchiveEntry instance.

    All entries in the current Zip file

The ZipArchiveEntry object contains several fields, like the file’s name and the full path from the root of the archive.

    Details of a single ZipEntry item

There are a few key points to remember about the entries listed in the Entries collection.

    1. It is a ReadOnlyCollection<ZipArchiveEntry>: it means that even if you find a way to add or update the items in memory, the changes are not applied to the actual files;
    2. It lists all files and folders, not only those at the root level. As you can see from the image above, it lists both the files at the root level, like File.txt, and those in inner folders, such as TestZip/InnerFolder/presentation.pptx;
    3. Each file is characterized by two similar but different properties: Name is the actual file name (like presentation.pptx), while FullName contains the path from the root of the archive (e.g. TestZip/InnerFolder/presentation.pptx);
    4. It lists folders as if they were files: in the image above, you can see TestZip/InnerFolder. You can recognize them because their Name property is empty and their Length is 0;

    Folders are treated like files, but with no Size or Name
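Putting these points together, here is a small sketch (reusing the archive opened in Read mode above) that lists every entry and tells files and folders apart:

    foreach (ZipArchiveEntry entry in archive.Entries)
    {
        // Folder entries have an empty Name and a Length of 0
        bool isFolder = string.IsNullOrEmpty(entry.Name);
        Console.WriteLine($"{entry.FullName} -> {(isFolder ? "folder" : $"file, {entry.Length} bytes")}");
    }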

Lastly, remember that ZipFile.Open returns a ZipArchive, which implements IDisposable, so you should place the operations within a using statement.

    ❓❓A question for you! Why do we see an item for the TestZip/InnerFolder folder, but there is no reference to the TestZip folder? Drop a comment below 📩

Using C# to extract a Zip file

Extracting a Zip file is easy but not obvious.

    We have only one way to do that: by calling the ZipFile.ExtractToDirectory method.

    It accepts as mandatory parameters the path of the Zip file to be extracted and the path to the destination:

    var zipPath = @"C:\Users\d.bellone\Desktop\TestZip.zip";
    var destinationPath = @"C:\Users\d.bellone\Desktop\MyDestination";
    ZipFile.ExtractToDirectory(zipPath, destinationPath);
    

    Once you run it, you will see the content of the Zip copied and extracted to the MyDestination folder.

    Note that this method creates the destination folder if it does not exist.

    This method accepts two more parameters:

• entryNameEncoding, by which you can specify the encoding used for entry names. The default is UTF-8.
    • overwriteFiles allows you to specify whether it must overwrite existing files. The default value is false. If set to false and the destination files already exist, this method throws a System.IO.IOException saying that the file already exists.
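For example, to extract over an existing destination without hitting that exception, you can set the flag explicitly (a sketch reusing the paths above):

    ZipFile.ExtractToDirectory(zipPath, destinationPath, overwriteFiles: true);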

    Using C# to create a Zip from a folder

    The key method here is ZipFile.CreateFromDirectory, which allows you to create Zip files in a flexible way.

    The first mandatory value is, of course, the source directory path.

    The second mandatory parameter is the destination of the resulting Zip file.

    It can be the local path to the file:

    string sourceFolderPath = @"\Desktop\myFolder";
    string destinationZipPath = @"\Desktop\destinationFile.zip";
    
    ZipFile.CreateFromDirectory(sourceFolderPath, destinationZipPath);
    

    Or it can be a Stream that you can use later for other operations:

    using (MemoryStream memStream = new MemoryStream())
    {
        string sourceFolderPath = @"\Desktop\myFolder";
        ZipFile.CreateFromDirectory(sourceFolderPath, memStream);
    
    var length = memStream.Length; // here the Stream is populated
    }
    

    You can finally add some optional parameters:

• compressionLevel, whose values are Optimal, Fastest, NoCompression, and SmallestSize.
• includeBaseDirectory: a flag that defines whether the resulting archive should include the source folder itself as its root entry, or only the folder’s contents.

    A quick comparison of the four Compression Levels

    As we just saw, we have four compression levels: Optimal, Fastest, NoCompression, and SmallestSize.

    What happens if I use the different values to zip all the photos and videos of my latest trip?

    The source folder’s size is 16.2 GB.

    Let me zip it with the four compression levels:

     private long CreateAndTrack(string sourcePath, string destinationPath, CompressionLevel compression)
     {
         Stopwatch stopwatch = Stopwatch.StartNew();
    
         ZipFile.CreateFromDirectory(
             sourceDirectoryName: sourcePath,
             destinationArchiveFileName: destinationPath,
             compressionLevel: compression,
             includeBaseDirectory: true
             );
         stopwatch.Stop();
    
         return stopwatch.ElapsedMilliseconds;
     }
    
    // in Main...
    
    var smallestTime = CreateAndTrack(sourceFolderPath,
        Path.Combine(rootFolder, "Smallest.zip"),
        CompressionLevel.SmallestSize);
    
    var noCompressionTime = CreateAndTrack(sourceFolderPath,
        Path.Combine(rootFolder, "NoCompression.zip"),
        CompressionLevel.NoCompression);
    
    var fastestTime = CreateAndTrack(sourceFolderPath,
        Path.Combine(rootFolder, "Fastest.zip"),
        CompressionLevel.Fastest);
    
    var optimalTime = CreateAndTrack(sourceFolderPath,
        Path.Combine(rootFolder, "Optimal.zip"),
        CompressionLevel.Optimal);
    

By executing these operations, we get the following results:

Compression Type   Execution time (ms)   Execution time (s)   Size (bytes)       Size on disk (bytes)
Optimal            483481                483                  17,340,065,594     17,340,067,840
Fastest            661674                661                  16,935,519,764     17,004,888,064
Smallest           344756                344                  17,339,881,242     17,339,883,520
No Compression     42521                 42                   17,497,652,162     17,497,653,248

    We can see a bunch of weird things:

    • Fastest compression generates a smaller file than Smallest compression.
    • Fastest compression is way slower than Smallest compression.
    • Optimal lies in the middle.

    This is to say: don’t trust the names; remember to benchmark the parts where you need performance, even with a test as simple as this.

    Wrapping up

    This was a quick article about one specific class in the .NET ecosystem.

    As we saw, even though the class is simple and it’s all about three methods, there are some things you should keep in mind before using this class in your code.

    I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛

    Happy coding!

    🐧






• 4 ways to create Unit Tests without Interfaces in C# | Code4IT



    C# devs have the bad habit of creating interfaces for every non-DTO class because «we need them for mocking!». Are you sure it’s the only way?


    One of the most common traits of C# developers is the excessive usage of interfaces.

For every non-DTO class we define, we usually also create the related interface. Most of the time we don’t need it, because we will never have more than one implementation of that interface. Instead, we say that we need an interface to enable mocking.

    That’s true; it’s pretty straightforward to mock an interface: lots of libraries, like Moq and NSubstitute, allow you to create mocks and pass them to the class under test. What if there were another way?

    In this article, we will learn how to have complete control over a dependency while having the concrete class, and not the related interface, injected in the constructor.

    C# devs always add interfaces, just in case

    If you’re a developer like me, you’ve been taught something like this:

    One of the SOLID principles is Dependency Inversion; to achieve it, you need Dependency Injection. The best way to do that is by creating an interface, injecting it in the consumer’s constructor, and then mapping the interface and the concrete class.

    Sometimes, somebody explains that we don’t need interfaces to achieve Dependency Injection. However, there are generally two arguments proposed by those who keep using interfaces everywhere: the “in case I need to change the database” argument and, even more often, the “without interfaces, I cannot create mocks”.

    Are we sure?

    The “Just in case I need to change the database” argument

    One phrase that I often hear is:

    Injecting interfaces allows me to change the concrete implementation of a class without worrying about the caller. You know, just in case I had to change the database engine…

Yes, that’s totally right – using interfaces, you can change the internal implementation in the blink of an eye.

    Let’s be honest: in all your career, how many times have you changed the underlying database? In my whole career, it happened just once: we tried to build a solution using Gremlin for CosmosDB, but it turned out to be too expensive – so we switched to a simpler MongoDB.

    But, all in all, it wasn’t only thanks to the interfaces that we managed to switch easily; it was because we strictly separated the classes and did not leak the models related to Gremlin into the core code. We structured the code with a sort of Hexagonal Architecture, way before this term became a trend in the tech community.

    Still, interfaces can be helpful, especially when dealing with multiple implementations of the same methods or when you want to wrap your head around the methods, inputs, and outputs exposed by a module.

    The “I need to mock” argument

    Another one I like is this:

    Interfaces are necessary for mocking dependencies! Otherwise, how can I create Unit Tests?

Well, I used to agree with this argument. I was used to mocking interfaces by using libraries like Moq and defining the behaviour of the dependency using the Setup method.
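For reference, the classic interface-based flow looks something like this (a sketch with hypothetical names):

    public interface INumbersRepository
    {
        IEnumerable<int> GetNumbers();
    }

    // In a test:
    var mock = new Moq.Mock<INumbersRepository>();
    mock.Setup(r => r.GetNumbers()).Returns(new[] { 1, 2, 3 });
    INumbersRepository repository = mock.Object;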

It’s still a valid way, but my point here is that it’s not the only one!

    One of the simplest tricks is to mark your classes as abstract. But… this means you’ll end up with every single class marked as abstract. Not the best idea.
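For completeness, here is what that trick looks like (again, a sketch with hypothetical names). Abstract members are overridable by definition, so Moq can set them up directly:

    public abstract class NumbersProviderBase
    {
        public abstract IEnumerable<int> GetNumbers();
    }

    // In a test:
    var mock = new Moq.Mock<NumbersProviderBase>();
    mock.Setup(p => p.GetNumbers()).Returns(new[] { 4, 5, 6 });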

    We have other tools in our belt!

    A realistic example: Dependency Injection without interfaces

    Let’s start with a real-ish example.

    We have a NumbersRepository that just exposes one method: GetNumbers().

    public class NumbersRepository
    {
        private readonly int[] _allNumbers;
    
        public NumbersRepository()
        {
        _allNumbers = Enumerable.Range(0, 100).ToArray();
        }
    
        public IEnumerable<int> GetNumbers() => Random.Shared.GetItems(_allNumbers, 50);
    }
    

    Generally, one would be tempted to add an interface with the same name as the class, INumbersRepository, and include the GetNumbers method in the interface definition.

    We are not going to do that – the interface is not necessary, so why clutter the code with something like that?

    Now, for the consumer. We have a simple NumbersSearchService that accepts, via Dependency Injection, an instance of NumbersRepository (yes, the concrete class!) and uses it to perform a simple search:

    public class NumbersSearchService
    {
        private readonly NumbersRepository _repository;
    
        public NumbersSearchService(NumbersRepository repository)
        {
            _repository = repository;
        }
    
        public bool Contains(int number)
        {
            var numbers = _repository.GetNumbers();
            return numbers.Contains(number);
        }
    }
    

    To add these classes to your ASP.NET project, you can add them in the DI definition like this:

    builder.Services.AddSingleton<NumbersRepository>();
    builder.Services.AddSingleton<NumbersSearchService>();
    

    Without adding any interface.

    Now, how can we test this class without using the interface?

    Way 1: Use the “virtual” keyword in the dependency to create stubs

    We can create a subclass of the dependency, even if it is a concrete class, by overriding just some of its functionalities.

    For example, we can choose to mark the GetNumbers method in the NumbersRepository class as virtual, making it easily overridable from a subclass.

    public class NumbersRepository
    {
        private readonly int[] _allNumbers;
    
        public NumbersRepository()
        {
            _allNumbers = Enumerable.Range(0, 100).ToArray();
        }
    
    -    public IEnumerable<int> GetNumbers() => Random.Shared.GetItems(_allNumbers, 50);
    +    public virtual IEnumerable<int> GetNumbers() => Random.Shared.GetItems(_allNumbers, 50);
    }
    

    Yes, we can mark a method as virtual even if the class is concrete!

    Now, in our Unit Tests, we can create a subtype of NumbersRepository to have complete control of the GetNumbers method:

    internal class StubNumberRepo : NumbersRepository
    {
        private IEnumerable<int> _numbers;
    
        public void SetNumbers(params int[] numbers) => _numbers = numbers;
    
        public override IEnumerable<int> GetNumbers() => _numbers;
    }
    

    We have overridden the GetNumbers method, but to do so, we had to include a new method, SetNumbers, to define the expected result of the former method.

We can then use it in our tests like this:

    [Test]
    public void Should_WorkWithStubRepo()
    {
        // Arrange
        var repository = new StubNumberRepo();
        repository.SetNumbers(1, 2, 3);
        var service = new NumbersSearchService(repository);
    
        // Act
        var result = service.Contains(3);
    
        // Assert
        Assert.That(result, Is.True);
    }
    

You now have full control over the subclass. But this approach comes with a problem: if you have multiple methods marked as virtual, and you are going to use all of them in your test classes, then you will need to override every single method (to have control over them) and work out how to decide whether to use the concrete method or the stub implementation.

    For example, we can update the StubNumberRepo to let the consumer choose if we need the dummy values or the base implementation:

    internal class StubNumberRepo : NumbersRepository
    {
        private IEnumerable<int> _numbers;
        private bool _useStubNumbers;
    
        public void SetNumbers(params int[] numbers)
        {
            _numbers = numbers;
            _useStubNumbers = true;
        }
    
        public override IEnumerable<int> GetNumbers()
        {
            if (_useStubNumbers)
                return _numbers;
            return base.GetNumbers();
        }
    }
    

    With this approach, by default, we use the concrete implementation of NumbersRepository because _useStubNumbers is false. If we call the SetNumbers method, we also specify that we don’t want to use the original implementation.

    Way 2: Use the virtual keyword in the service to avoid calling the dependency

    Similar to the previous approach, we can mark some methods of the caller as virtual to allow us to change parts of our class while keeping everything else as it was.

To achieve it, we have to refactor our Service class a little:

    public class NumbersSearchService
    {
        private readonly NumbersRepository _repository;
    
        public NumbersSearchService(NumbersRepository repository)
        {
            _repository = repository;
        }
    
        public bool Contains(int number)
        {
    -       var numbers = _repository.GetNumbers();
    +       var numbers = GetNumbers();
            return numbers.Contains(number);
        }
    
    +    public virtual IEnumerable<int> GetNumbers() => _repository.GetNumbers();
    }
    

    The key is that we moved the calls to the external references to a separate method, marking it as virtual.

    This way, we can create a stub class of the Service itself without the need to stub its dependencies:

    internal class StubNumberSearch : NumbersSearchService
    {
        private IEnumerable<int> _numbers;
        private bool _useStubNumbers;
    
        public StubNumberSearch() : base(null)
        {
        }
    
        public void SetNumbers(params int[] numbers)
        {
            _numbers = numbers.ToArray();
            _useStubNumbers = true;
        }
    
        public override IEnumerable<int> GetNumbers()
            => _useStubNumbers ? _numbers : base.GetNumbers();
    }
    

    The approach is almost identical to the one we saw before. The difference can be seen in your tests:

    [Test]
    public void Should_UseStubService()
    {
        // Arrange
        var service = new StubNumberSearch();
        service.SetNumbers(12, 15, 30);
    
        // Act
        var result = service.Contains(15);
    
        // Assert
        Assert.That(result, Is.True);
    }
    

    There is a problem with this approach: many devs (correctly) add null checks in the constructor to ensure that the dependencies are not null:

    public NumbersSearchService(NumbersRepository repository)
    {
        ArgumentNullException.ThrowIfNull(repository);
        _repository = repository;
    }
    

While this approach makes it safe to use the repository reference within the class’s methods, it also stops us from creating a StubNumberSearch. Since we want to create an instance of NumbersSearchService without the burden of injecting all the dependencies, we call the base constructor passing null as a value for the dependencies. If we validate against null, the stub class becomes unusable.

    There’s a simple solution: adding a protected empty constructor:

    public NumbersSearchService(NumbersRepository repository)
    {
        ArgumentNullException.ThrowIfNull(repository);
        _repository = repository;
    }
    
    protected NumbersSearchService()
    {
    }
    

We mark it as protected so that only subclasses can access it.
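With this constructor in place, the stub no longer needs to pass null to the base class. A minimal sketch of the updated stub:

    internal class StubNumberSearch : NumbersSearchService
    {
        // Implicitly calls the protected parameterless constructor,
        // so no null dependency is needed
        public StubNumberSearch() : base()
        {
        }

        // ... SetNumbers and GetNumbers as before
    }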

    Way 3: Use the “new” keyword in methods to hide the base implementation

    Similar to the virtual keyword is the new keyword, which can be applied to methods.

We can then remove the virtual keyword from the base class and hide its implementation by marking the method in the subclass with the new keyword.

    public class NumbersSearchService
    {
        private readonly NumbersRepository _repository;
    
        public NumbersSearchService(NumbersRepository repository)
        {
            ArgumentNullException.ThrowIfNull(repository);
            _repository = repository;
        }
    
        public bool Contains(int number)
        {
            var numbers = _repository.GetNumbers();
            return numbers.Contains(number);
        }
    
    -    public virtual IEnumerable<int> GetNumbers() => _repository.GetNumbers();
    +    public IEnumerable<int> GetNumbers() => _repository.GetNumbers();
    }
    

We have restored the original, non-virtual implementation of the GetNumbers method.

    Now, we can update the stub by adding the new keyword.

    internal class StubNumberSearch : NumbersSearchService
    {
        private IEnumerable<int> _numbers;
        private bool _useStubNumbers;
    
        public void SetNumbers(params int[] numbers)
        {
            _numbers = numbers.ToArray();
            _useStubNumbers = true;
        }
    
    -    public override IEnumerable<int> GetNumbers() => _useStubNumbers ? _numbers : base.GetNumbers();
    +    public new IEnumerable<int> GetNumbers() => _useStubNumbers ? _numbers : base.GetNumbers();
    }
    

    We haven’t actually solved any problem except for one: we can now avoid cluttering all our classes with the virtual keyword.

A question for you! Is there any difference between using the new and the virtual keyword? When should you pick one over the other? Let me know in the comments section! 📩

    Way 4: Mock concrete classes by marking a method as virtual

    Sometimes, I hear developers say that mocks are the absolute evil, and you should never use them.

    Oh, come on! Don’t be so silly!

It’s true that when using mocks you are writing tests against an unrealistic environment. But, well, that’s exactly the point of having mocks!

    If you think about it, at school, during Science lessons, we were taught to do our scientific calculations using approximations: ignore the air resistance, ignore friction, and so on. We knew that that world did not exist, but we removed some parts to make it easier to validate our hypothesis.

    In my opinion, it’s the same for testing. Mocks are useful to have full control of a specific behaviour. Still, only relying on mocks makes your tests pretty brittle: you cannot be sure that your system is working under real conditions.

    That’s why, as I explained in a previous article, I prefer the Testing Diamond over the Testing Pyramid. In many real cases, five Integration Tests are more valuable than fifty Unit Tests.

    But still, mocks can be useful. How can we use them if we don’t have interfaces?

    Let’s start with the basic example:

    public class NumbersRepository
    {
        private readonly int[] _allNumbers;
    
        public NumbersRepository()
        {
            _allNumbers = Enumerable.Range(0, 100).ToArray();
        }
    
        public IEnumerable<int> GetNumbers() => Random.Shared.GetItems(_allNumbers, 50);
    }
    
    public class NumbersSearchService
    {
        private readonly NumbersRepository _repository;
    
        public NumbersSearchService(NumbersRepository repository)
        {
            ArgumentNullException.ThrowIfNull(repository);
            _repository = repository;
        }
    
        public bool Contains(int number)
        {
            var numbers = _repository.GetNumbers();
            return numbers.Contains(number);
        }
    }
    

    If we try to use Moq to create a mock of NumbersRepository (again, the concrete class) like this:

    [Test]
    public void Should_WorkWithMockRepo()
    {
        // Arrange
        var repository = new Moq.Mock<NumbersRepository>();
        repository.Setup(_ => _.GetNumbers()).Returns(new int[] { 1, 2, 3 });
        var service = new NumbersSearchService(repository.Object);
    
        // Act
        var result = service.Contains(3);
    
        // Assert
        Assert.That(result, Is.True);
    }
    

    It will fail with this error:

    System.NotSupportedException : Unsupported expression: _ => _.GetNumbers()
    Non-overridable members (here: NumbersRepository.GetNumbers) may not be used in setup / verification expressions.

This error occurs because the implementation of GetNumbers is fixed as defined in the NumbersRepository class and cannot be overridden.

    Unless you mark it as virtual, as we did before.

    public class NumbersRepository
    {
        private readonly int[] _allNumbers;
    
        public NumbersRepository()
        {
            _allNumbers = Enumerable.Range(0, 100).ToArray();
        }
    
    -    public IEnumerable<int> GetNumbers() => Random.Shared.GetItems(_allNumbers, 50);
    +    public virtual IEnumerable<int> GetNumbers() => Random.Shared.GetItems(_allNumbers, 50);
    }
    

    Now the test passes: we have successfully mocked a concrete class!

    Further readings

    Testing is a crucial part of any software application. I personally write Unit Tests even for throwaway software – this way, I can ensure that I’m doing the correct thing without the need for manual debugging.

    However, one part that is often underestimated is the code quality of tests. Tests should be written even better than production code. You can find more about this topic here:

    🔗 Tests should be even more well-written than production code | Code4IT

Also, Unit Tests are not enough. You should probably write more Integration Tests than Unit Tests. This testing strategy is called the Testing Diamond.

    🔗 Testing Pyramid vs Testing Diamond (and how they affect Code Coverage) | Code4IT

    This article first appeared on Code4IT 🐧

Of course, you can also write Integration Tests for .NET APIs with little effort. In this article, I explain how to create and customize Integration Tests using NUnit:

    🔗 Advanced Integration Tests for .NET 7 API with WebApplicationFactory and NUnit | Code4IT

    Wrapping up

    In this article, we learned that it’s not necessary to create interfaces for the sake of having mocks.

We have several other options.

    Honestly speaking, I’m still used to creating interfaces and using them with mocks.

    I find it easy to do, and this approach provides a quick way to create tests and drive the behaviour of the dependencies.

    Also, I recognize that interfaces created for the sole purpose of mocking are quite pointless: we have learned that there are other ways, and we should consider trying out these solutions.

    Still, interfaces are quite handy for two “non-technical” reasons:

• using interfaces, you can understand at a glance which operations you can call, in a clean and concise way;
    • interfaces and mocks allow you to easily use TDD: while writing the test cases, you also define what methods you need and the expected behaviour. I know you can do that using stubs, but I find it easier with interfaces.

    I know, this is a controversial topic – I’m not saying that you should remove all your interfaces (I think it’s a matter of personal taste, somehow!), but with this article, I want to highlight that you can avoid interfaces.

    I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛

    Happy coding!

    🐧






  • How to Create Interactive, Droplet-like Metaballs with Three.js and GLSL



    Fragment shaders allow us to create smooth, organic visuals that are difficult to achieve with standard polygon-based rendering in WebGL. One powerful example is the metaball effect, where multiple objects blend and deform seamlessly. This can be implemented using a technique called ray marching, directly within a fragment shader.

In this tutorial, we’ll walk you through how to create droplet-like bubble spheres using Three.js and GLSL—an effect that responds interactively to your mouse movements. But first, take a look at the demo video below to see the final result in action.

    Overview

    Let’s take a look at the overall structure of the demo and review the steps we’ll follow to build it.

    1. Setting Up the Fullscreen Plane

    We create a fullscreen plane that covers the entire viewport.

    2. Rendering Spheres with Ray Marching

    We’ll render spheres using ray marching in the fragment shader.

    3. From Spheres to Metaballs

    We blend multiple spheres smoothly to create a metaball effect.

    4. Adding Noise for a Droplet-like Appearance

    By adding noise to the surface, we create a realistic droplet-like texture.

    5. Simulating Stretchy Droplets with Mouse Movement

    We arrange spheres along the mouse trail to create a stretchy, elastic motion.

    Let’s get started!

    1. Setup

    We render a single fullscreen plane that covers the entire viewport.

    // Output.ts
    
    const planeGeometry = new THREE.PlaneGeometry(2.0, 2.0);
    const planeMaterial = new THREE.RawShaderMaterial({
        vertexShader: base_vert,
        fragmentShader: output_frag,
        uniforms: this.uniforms,
    });
    const plane = new THREE.Mesh(planeGeometry, planeMaterial);
    this.scene.add(plane);

    We define a uniform variable named uResolution to pass the canvas size to the shader, where Common.width and Common.height represent the width and height of the canvas in pixels. This uniform will be used to normalize coordinates based on the screen resolution.

    // Output.ts
    
    this.uniforms = {
        uResolution: {
            value: new THREE.Vector2(Common.width, Common.height),
        },
    };

    When using RawShaderMaterial, you need to provide your own shaders. Therefore, we prepare both a vertex shader and a fragment shader.

    // base.vert
    
    attribute vec3 position;
    varying vec2 vTexCoord;
    
    void main() {
        vTexCoord = position.xy * 0.5 + 0.5;
        gl_Position = vec4(position, 1.0);
    }

    The vertex shader receives the position attribute.

    Since the xy components of position originally range from -1 to 1, we convert them to a range from 0 to 1 and output them as a texture coordinate called vTexCoord. This is passed to the fragment shader and used to calculate colors or effects based on the position on the screen.

    // output.frag
    
    precision mediump float;
    
    uniform vec2 uResolution;
    varying vec2 vTexCoord;
    
    void main() {
        gl_FragColor = vec4(vTexCoord, 1.0, 1.0);
    }

    The fragment shader receives the interpolated texture coordinate vTexCoord and the uniform variable uResolution representing the canvas size. Here, we temporarily use vTexCoord to output color for testing.

    Now we’re all set to start drawing in the fragment shader!
    Next, let’s move on to actually rendering the spheres.

    2. Ray Marching

    2.1. What is Ray Marching?

    As mentioned at the beginning, we will use a method called ray marching to render spheres. Ray marching proceeds in the following steps:

    1. Define the scene
    2. Set the camera (viewing) direction
    3. Cast rays
    4. Evaluate the distance from the current ray position to the nearest object in the scene.
    5. Move the ray forward by that distance
    6. Check for a hit

    For example, let’s consider a scene with three spheres. These spheres are expressed using SDFs (Signed Distance Functions), which will be explained in detail later.

    First, we determine the camera direction. Once the direction is set, we cast a ray in that direction.

    Next, we evaluate the distance to all objects from the current ray position, and take the minimum of these distances.

    After obtaining this distance, we move the ray forward by that amount.

    We repeat this process until either the ray gets close enough to an object—closer than a small threshold—or the maximum number of steps is reached.
    If the distance is below the threshold, we consider it a “hit” and shade the corresponding pixel.

    For example, in the figure above, a hit is detected on the 8th ray marching step.

    If the maximum number of steps were set to 7, the 7th step would not have hit anything yet. But since the limit is reached, the loop ends and no hit is detected.

    Therefore, nothing would be rendered at that position. If parts of an object appear to be missing in the final image, it may be due to an insufficient number of steps. However, be aware that increasing the step count will also increase the computational load.

    To better understand this process, try running this demo to see how it works in practice.

    2.2. Signed Distance Function

    In the previous section, we briefly mentioned the SDF (Signed Distance Function).
    Let’s take a moment to understand what it is.

    An SDF is a function that returns the distance from a point to a particular shape. The key characteristic is that it returns a positive or negative value depending on whether the point is outside or inside the shape.

    For example, here is the distance function for a sphere:

    float sdSphere(vec3 p, float s)
    {
        return length(p) - s;
    }

    Here, p is a vector representing the position relative to the origin, and s is the radius of the sphere.

    This function calculates how far the point p is from the surface of a sphere centered at the origin with radius s.

    • If the result is positive, the point is outside the sphere.
    • If negative, it is inside the sphere.
    • If the result is zero, the point is on the surface—this is considered a hit point (in practice, we detect a hit when the distance is less than a small threshold).

    In this demo, we use a sphere’s distance function, but many other shapes have their own distance functions as well.

    If you’re interested, here’s a great article on distance functions.

    2.3. Rendering Spheres

    Let’s try rendering spheres.
    In this demo, we’ll render two slightly overlapping spheres.

    // output.frag
    
    precision mediump float;
    
    const float EPS = 1e-4;
    const int ITR = 16;
    
    uniform vec2 uResolution;
    
    varying vec2 vTexCoord;
    
    // Camera Params
    vec3 origin = vec3(0.0, 0.0, 1.0);
    vec3 lookAt = vec3(0.0, 0.0, 0.0);
    vec3 cDir = normalize(lookAt - origin);
    vec3 cUp = vec3(0.0, 1.0, 0.0);
    vec3 cSide = cross(cDir, cUp);
    
    vec3 translate(vec3 p, vec3 t) {
        return p - t;
    }
    
    float sdSphere(vec3 p, float s)
    {
        return length(p) - s;
    }
    
    float map(vec3 p) {
        float radius = 0.5;
        float d = 1e5;
    
        float sphere0 = sdSphere(translate(p, vec3(0.4, 0.0, 0.0)), radius);
        float sphere1 = sdSphere(translate(p, vec3(-0.4, 0.0, 0.0)), radius);
        d = min(sphere0, sphere1);
    
        return d;
    }
    
    void main() {
        vec2 p = (gl_FragCoord.xy * 2.0 - uResolution) / min(uResolution.x, uResolution.y);
    
        // Orthographic Camera
        vec3 ray = origin + cSide * p.x + cUp * p.y;
        vec3 rayDirection = cDir;
    
        float dist = 0.0;
    
        for (int i = 0; i < ITR; ++i) {
            dist = map(ray);
            ray += rayDirection * dist;
            if (dist < EPS) break;
        }
    
        vec3 color = vec3(0.0);
    
        if (dist < EPS) {
            color = vec3(1.0, 1.0, 1.0);
        }
    
        gl_FragColor = vec4(color, 1.0);
    }

    First, we normalize the screen coordinates:

    vec2 p = (gl_FragCoord.xy * 2.0 - uResolution) / min(uResolution.x, uResolution.y);

    Next, we set up the camera. This demo uses an orthographic camera (parallel projection):

    // Camera Params
    vec3 origin = vec3(0.0, 0.0, 1.0);
    vec3 lookAt = vec3(0.0, 0.0, 0.0);
    vec3 cDir = normalize(lookAt - origin);
    vec3 cUp = vec3(0.0, 1.0, 0.0);
    vec3 cSide = cross(cDir, cUp);
    
    // Orthographic Camera
    vec3 ray = origin + cSide * p.x + cUp * p.y;
    vec3 rayDirection = cDir;

    After that, inside the map function, two spheres are defined and their distances calculated using sdSphere. The variable d is initially set to a large value and updated with the min function to keep track of the shortest distance to the surface.

    float map(vec3 p) {
        float radius = 0.5;
        float d = 1e5;
    
        float sphere0 = sdSphere(translate(p, vec3(0.4, 0.0, 0.0)), radius);
        float sphere1 = sdSphere(translate(p, vec3(-0.4, 0.0, 0.0)), radius);
        d = min(sphere0, sphere1);
    
        return d;
    }

    Then we run a ray marching loop, which updates the ray position by computing the distance to the nearest object at each step. The loop ends either after a fixed number of iterations or when the distance becomes smaller than a threshold (dist < EPS):

    for ( int i = 0; i < ITR; ++ i ) {
    	dist = map(ray);
    	ray += rayDirection * dist;
    	if ( dist < EPS ) break ;
    }

    Finally, we determine the output color. We use black as the default color (background), and render a white pixel only if a hit is detected:

    vec3 color = vec3(0.0);
    
    if ( dist < EPS ) {
    	color = vec3(1.0);
    }

    We’ve successfully rendered two overlapping spheres using ray marching!

    2.4. Normals

    Although we successfully rendered spheres in the previous section, the scene still looks flat and lacks depth. This is because we haven’t applied any shading or visual effects that respond to surface orientation.

    While we won’t implement full shading in this demo, we’ll still compute surface normals, as they’re essential for adding surface detail and other visual effects.

    Let’s look at the code first:

    vec3 generateNormal(vec3 p) {
        return normalize(vec3(
                map(p + vec3(EPS, 0.0, 0.0)) - map(p + vec3(-EPS, 0.0, 0.0)),
                map(p + vec3(0.0, EPS, 0.0)) - map(p + vec3(0.0, -EPS, 0.0)),
                map(p + vec3(0.0, 0.0, EPS)) - map(p + vec3(0.0, 0.0, -EPS))
            ));
    }

    At first glance, this may seem hard to understand. Put simply, this computes the gradient of the distance function, which corresponds to the normal vector.

    If you’ve studied vector calculus, this might be easy to understand. For many others, though, it may seem a bit difficult.

    That’s totally fine—a full understanding of the details isn’t necessary to use the result. If you just want to move on, feel free to skip ahead to the section where we debug normals by visualizing them with color.

    However, for those who are interested in how it works, we’ll now walk through the explanation in more detail.

The gradient of a scalar function 𝑓(𝑥,𝑦,𝑧) is simply a vector composed of its partial derivatives. It points in the direction of the greatest rate of increase of the function:

∇𝑓 = ( ∂𝑓/∂𝑥, ∂𝑓/∂𝑦, ∂𝑓/∂𝑧 )

To compute this gradient numerically, we can use the central difference method. For example, for the 𝑥 component:

∂𝑓/∂𝑥 ≈ ( 𝑓(𝑥+𝜀, 𝑦, 𝑧) − 𝑓(𝑥−𝜀, 𝑦, 𝑧) ) / 2𝜀

We apply the same idea for the 𝑦 and 𝑧 components.
Note: The factor 2𝜀 is omitted in the code since we normalize the result using normalize().

    Next, let us consider a signed distance function 𝑓(𝑥,𝑦,𝑧), which returns the shortest distance from any point in space to the surface of an object. By definition, 𝑓(𝑥,𝑦,𝑧)=0 on the surface of the object.

Assume that 𝑓 is smooth (i.e., differentiable) in the region of interest. When the point (𝑥,𝑦,𝑧) undergoes a small displacement Δ𝒓 = (Δ𝑥, Δ𝑦, Δ𝑧), the change in the function value Δ𝑓 can be approximated using the first-order Taylor expansion:

Δ𝑓 ≈ ∇𝑓 · Δ𝒓

Here, ∇𝑓 is the gradient vector of 𝑓, and Δ𝒓 is an arbitrary small displacement vector.

Now, since 𝑓 = 0 on the surface and remains constant as we move along the surface (i.e., tangentially), the function value does not change, so Δ𝑓 = 0. Therefore:

∇𝑓 · Δ𝒓 = 0

This means that the gradient vector is perpendicular to any tangent vector Δ𝒓 on the surface. In other words, the gradient vector ∇𝑓 points in the direction of the surface normal.

    Thus, the gradient of a signed distance function gives the surface normal direction at any point on the surface.

    2.5. Visualizing Normals with Color

    To verify that the surface normals are being calculated correctly, we can visualize them using color.

    if ( dist < EPS ) {
    	vec3 normal = generateNormal(ray);
    	color = normal;
    }

    Note that within the if block, ray refers to a point on the surface of the object. So by passing ray to generateNormal, we can obtain the surface normal at the point of intersection.

    When we render the scene, you’ll notice that the surface of the sphere is shaded in red, green, and blue based on the orientation of the normal vectors. This is because we’re mapping the 𝑥, 𝑦, and 𝑧 components of the normal vector to the RGB color channels respectively.

    This is a common and intuitive way to debug normal vectors visually, helping us ensure they are computed correctly.

3. From Spheres to Metaballs

When combining two spheres with the standard min() function, a hard edge forms where the shapes intersect, resulting in an unnatural boundary.
To avoid this, we can use a blending function called smoothMin, which softens the transition by merging the distance values smoothly.

    // added
    float smoothMin(float d1, float d2, float k) {
        float h = exp(-k * d1) + exp(-k * d2);
        return -log(h) / k;
    }
    
    float map(vec3 p) {
        float radius = 0.5;
        float k = 7.; // added: smoothing factor for metaball effect
        float d = 1e5;
    
        float sphere0 = sdSphere(translate(p, vec3(.4, 0.0, 0.0)), radius);
        float sphere1 = sdSphere(translate(p, vec3(-.4, 0.0, 0.0)), radius);
        d = smoothMin(d, sphere0, k); // modified: blend with smoothing
        d = smoothMin(d, sphere1, k); // modified
    
        return d;
    }

    This function creates a smooth, continuous connection between shapes—producing a metaball-like effect where the forms appear to merge organically.

    The parameter k controls the smoothness of the blend. A higher k value results in a sharper transition (closer to min()), while a lower k produces smoother, more gradual merging.

    For more details, please refer to the following two articles:

1. wgld.org | GLSL: オブジェクト同士を補間して結合する (interpolating and blending objects together; in Japanese)
    2. Inigo Quilez :: computer graphics, mathematics, shaders, fractals, demoscene and more

    4. Adding Noise for a Droplet-like Appearance

    So far, we’ve covered how to calculate normals and how to smoothly blend objects.

    Next, let’s tune the surface appearance to make things feel more realistic.

    In this demo, we’re aiming to create droplet-like metaballs. So how can we achieve that kind of look? The key idea here is to use noise to distort the surface.

    Let’s jump right into the code:

    // output.frag
    
    uniform float uTime;
    
    // ...
    
    float rnd3D(vec3 p) {
        return fract(sin(dot(p, vec3(12.9898, 78.233, 37.719))) * 43758.5453123);
    }
    
    float noise3D(vec3 p) {
        vec3 i = floor(p);
        vec3 f = fract(p);
    
        float a000 = rnd3D(i); // (0,0,0)
        float a100 = rnd3D(i + vec3(1.0, 0.0, 0.0)); // (1,0,0)
        float a010 = rnd3D(i + vec3(0.0, 1.0, 0.0)); // (0,1,0)
        float a110 = rnd3D(i + vec3(1.0, 1.0, 0.0)); // (1,1,0)
        float a001 = rnd3D(i + vec3(0.0, 0.0, 1.0)); // (0,0,1)
        float a101 = rnd3D(i + vec3(1.0, 0.0, 1.0)); // (1,0,1)
        float a011 = rnd3D(i + vec3(0.0, 1.0, 1.0)); // (0,1,1)
        float a111 = rnd3D(i + vec3(1.0, 1.0, 1.0)); // (1,1,1)
    
        vec3 u = f * f * (3.0 - 2.0 * f);
        // vec3 u = f*f*f*(f*(f*6.0-15.0)+10.0);
    
        float k0 = a000;
        float k1 = a100 - a000;
        float k2 = a010 - a000;
        float k3 = a001 - a000;
        float k4 = a000 - a100 - a010 + a110;
        float k5 = a000 - a010 - a001 + a011;
        float k6 = a000 - a100 - a001 + a101;
        float k7 = -a000 + a100 + a010 - a110 + a001 - a101 - a011 + a111;
    
        return k0 + k1 * u.x + k2 * u.y + k3 *u.z + k4 * u.x * u.y + k5 * u.y * u.z + k6 * u.z * u.x + k7 * u.x * u.y * u.z;
    }
    
    vec3 dropletColor(vec3 normal, vec3 rayDir) {
        vec3 reflectDir = reflect(rayDir, normal);
    
        float noisePosTime = noise3D(reflectDir * 2.0 + uTime);
        float noiseNegTime = noise3D(reflectDir * 2.0 - uTime);
    
        vec3 _color0 = vec3(0.1765, 0.1255, 0.2275) * noisePosTime;
        vec3 _color1 = vec3(0.4118, 0.4118, 0.4157) * noiseNegTime;
    
        float intensity = 2.3;
        vec3 color = (_color0 + _color1) * intensity;
    
        return color;
    }
    
    // ...
    
    void main() {
    	// ...
    
    	if ( dist < EPS ) {
    		vec3 normal = generateNormal(ray);
    		color = dropletColor(normal, rayDirection);
    	}
    	
    	 gl_FragColor = vec4(color, 1.0);
    }

To create the droplet-like texture, we’re using value noise. If you’re unfamiliar with this technique, the breakdown below should help.

    3D value noise is generated by interpolating random values placed at the eight vertices of a cube. The process involves three stages of linear interpolation:

    1. Bottom face interpolation: First, we interpolate between the four corner values on the bottom face of the cube
    2. Top face interpolation: Similarly, we interpolate between the four corner values on the top face
    3. Final z-axis interpolation: Finally, we interpolate between the results from the bottom and top faces along the z-axis

    This triple interpolation process is called trilinear interpolation.

    The following code demonstrates the trilinear interpolation process for 3D value noise:

    float n = mix(
    	mix( mix( a000, a100, u.x ), mix( a010, a110, u.x ), u.y ),
    	mix( mix( a001, a101, u.x ), mix( a011, a111, u.x ), u.y ),
    	u.z
    );

    The nested mix() functions above can be converted into an explicit polynomial form for better performance:

    float k0 = a000;
    float k1 = a100 - a000;
    float k2 = a010 - a000;
    float k3 = a001 - a000;
    float k4 = a000 - a100 - a010 + a110;
    float k5 = a000 - a010 - a001 + a011;
    float k6 = a000 - a100 - a001 + a101;
    float k7 = -a000 + a100 + a010 - a110 + a001 - a101 - a011 + a111;
    
    float n = k0 + k1 * u.x + k2 * u.y + k3 *u.z + k4 * u.x * u.y + k5 * u.y * u.z + k6 * u.z * u.x + k7 * u.x * u.y * u.z;

    By sampling this noise using the reflection vector as coordinates, we can create a realistic water droplet-like texture. Note that we are using the surface normal obtained earlier to compute this reflection vector. To add time-based variation, we generate noise at positions offset by uTime:

    vec3 reflectDir = reflect(rayDir, normal);
    
    float noisePosTime = noise3D(reflectDir * 2.0 + uTime);
    float noiseNegTime = noise3D(reflectDir * 2.0 - uTime);

    Finally, we blend two noise-influenced colors and scale the result:

    vec3 _color0 = vec3(0.1765, 0.1255, 0.2275) * noisePosTime;
    vec3 _color1 = vec3(0.4118, 0.4118, 0.4157) * noiseNegTime;
    
    float intensity = 2.3;
    vec3 color = (_color0 + _color1) * intensity;

    It’s starting to look quite like a water droplet! However, it still appears a bit murky.
    To improve this, let’s add the following post-processing step:

    // output.frag
    
    if ( dist < EPS ) {
    	vec3 normal = generateNormal(ray);
    	color = dropletColor(normal, rayDirection);
    }
    
    vec3 finalColor = pow(color, vec3(7.0)); // added
    
    gl_FragColor = vec4(finalColor, 1.0); // modified

    Using pow(), darker regions are suppressed, allowing the highlights to pop and creating a more glass-like, translucent surface.

    5. Simulating Stretchy Droplets with Mouse Movement

    Finally, let’s make the droplet stretch and follow the mouse movement, giving it a soft and elastic feel.

    We’ll achieve this by placing multiple spheres along the mouse trail.

    // Output.ts
    
    constructor() {
    	// ...
    	this.trailLength = 15;
    	this.pointerTrail = Array.from({ length: this.trailLength }, () => new THREE.Vector2(0, 0));
    	
    	this.uniforms = {
    	    uTime: { value: Common.time },
    	    uResolution: {
    	        value: new THREE.Vector2(Common.width, Common.height),
    	    },
    	    uPointerTrail: { value: this.pointerTrail },
    	};
    }
    
    // ...
    
    /**
     * # rAF update
     */
    update() {
      this.updatePointerTrail();
      this.render();
    }
    
    /**
     * # Update the pointer trail
     */
    updatePointerTrail() {
      for (let i = this.trailLength - 1; i > 0; i--) {
         this.pointerTrail[i].copy(this.pointerTrail[i - 1]);
      }
      this.pointerTrail[0].copy(Pointer.coords);
    }

// output.frag
    
    const int TRAIL_LENGTH = 15; // added
    uniform vec2 uPointerTrail[TRAIL_LENGTH]; // added
    
    // ...
    
    // modified
    float map(vec3 p) {
        float baseRadius = 8e-3;
        float radius = baseRadius * float(TRAIL_LENGTH);
        float k = 7.;
        float d = 1e5;
    
        for (int i = 0; i < TRAIL_LENGTH; i++) {
            float fi = float(i);
            vec2 pointerTrail = uPointerTrail[i] * uResolution / min(uResolution.x, uResolution.y);
    
            float sphere = sdSphere(
                    translate(p, vec3(pointerTrail, .0)),
                    radius - baseRadius * fi
                );
    
            d = smoothMin(d, sphere, k);
        }
    
        float sphere = sdSphere(translate(p, vec3(1.0, -0.25, 0.0)), 0.55);
        d = smoothMin(d, sphere, k);
    
        return d;
    }

    Conclusion

    In this tutorial, we explored how to create a dynamic, droplet-like effect using ray marching and shading techniques. Here’s what we covered:

    1. Used ray marching to render spheres in 3D space.
    2. Applied smoothMin to blend the spheres into seamless metaballs.
    3. Added surface noise to give the spheres a more organic appearance.
    4. Simulated stretchy motion by arranging spheres along the mouse trail.

    By combining these techniques, we achieved a soft, fluid visual that responds to user interaction.

    Thanks for following along—I hope you find these techniques useful in your own projects!




  • How to Create Responsive and SEO-friendly WebGL Text




    Rendering text in WebGL opens up a whole new world of unique visual effects and creative possibilities that are often
    impossible to achieve with traditional HTML, CSS and JavaScript alone. By the end of this tutorial, we’ll have created
    WebGL-rendered text elements that perfectly mimic the underlying HTML structure. Since we’ll be taking an HTML-first
    approach, it will be responsive, SEO-friendly and accessible. We’ll be doing every step manually so you’ll gain a
    solid understanding of the principles behind merging HTML and WebGL, and text-specific stuff like how to translate CSS
    styles into the 3D world.

    We’ll be creating the below demo:

    We’ll start off with a standard styled HTML setup. Then we’ll recreate the text elements we have inside a 3D world.
    From there, we’ll position, scale and make the text responsive with the 3D space. Next, we’ll replicate the “mask
    reveal effect” in WebGL. And finally, we’ll apply some scroll-driven post processing effects to the scene.

    Below are the core steps we’ll follow to achieve the final result:

1. Create the text as an HTML element and style it regularly using CSS
    2. Create a 3D world and recreate the text element within it
    3. Merge the 3D and 2D world, so that we can style the 3D text by using our viewport’s dimensions
    4. Sync the key properties like position, size and font — from the HTML element to the WebGL text element
    5. Hide the original HTML element
    6. Be left with only the 3D text, fully styled and positioned in sync with the hidden HTML structure
    7. Apply animations and post-processing to enhance our 3D scene

    Necessities and Prerequisites

We’ll be using the Three.js library to create the 3D world, so you should already be familiar with its basics. For the creation of text meshes, we’ll be using the troika-three-text library, but you don’t have to be familiar with the library beforehand. If you’ve used HTML, CSS and JavaScript, and know the basics of Three.js, you’re good to go.

    Let’s get started.

    1. Creating the Regular HTML and Making it Responsive

Before diving into the WebGL and Three.js implementation, we first need to create the HTML structure that we’ll later mimic in the 3D world. I’ve set up a very simple page with some quick responsive content — you can find the setup content in the demo repository under index.html and styles.css.

HTML:

    <div class="content">
      <div class="container">
        <section class="section__heading">
          <h3 data-animation="webgl-text" class="text__2">THREE.JS</h3>
          <h2 data-animation="webgl-text" class="text__1">
            RESPONSIVE AND ACCESSIBLE TEXT
          </h2>
        </section>
        <section class="section__main__content">
          <p data-animation="webgl-text" class="text__2">
            THIS TEXT IS STYLED TO LOOK LIKE A TYPICAL BLOCK OF TEXT ON A STANDARD
            WEBSITE. BUT UNDER THE SURFACE, IT'S BEING RENDERED WITH WEBGL INSTEAD
            OF TRADITIONAL HTML.
          </p>
          <p data-animation="webgl-text" class="text__2">
            THIS OPENS THE DOOR TO CUSTOM SHADER EFFECTS AND INTERACTIONS THAT GO
            BEYOND WHAT'S POSSIBLE WITH TRADITIONAL HTML.
          </p>
          <p data-animation="webgl-text" class="text__2">
        WE KEEP THE UNDERLYING HTML STRUCTURE PRESENT IN THE DOM. RATHER THAN
            CREATING MESHES DIRECTLY IN THREE.JS, THE SCENE IS BUILT BY READING FROM
            THE EXISTING HTML CONTENT. THIS WAY, SCREEN READERS, SEARCH ENGINES, AND
            OTHER TOOLS CAN STILL INTERPRET THE PAGE AS EXPECTED.
          </p>
        </section>
        <section class="section__footer">
          <p data-animation="webgl-text" class="text__3">
            NOW GO CRAZY WITH THE SHADERS :)
          </p>
        </section>
      </div>
    </div>
    

    styles.css

    :root {
      --clr-text: #fdcdf9;
      --clr-selection: rgba(255, 156, 245, 0.3);
      --clr-background: #212720;
    }
    
    @font-face {
      font-family: "Humane";
      src: url("/fonts/Humane-Black.ttf") format("truetype");
      font-weight: 900;
      font-style: normal;
      font-display: swap;
    }
    
    @font-face {
      font-family: "Humane";
      src: url("/fonts/Humane-Bold.ttf") format("truetype");
      font-weight: 700;
      font-style: normal;
      font-display: swap;
    }
    
    @font-face {
      font-family: "Humane";
      src: url("/fonts/Humane-ExtraBold.ttf") format("truetype");
      font-weight: 800;
      font-style: normal;
      font-display: swap;
    }
    
    @font-face {
      font-family: "Humane";
      src: url("/fonts/Humane-ExtraLight.ttf") format("truetype");
      font-weight: 200;
      font-style: normal;
      font-display: swap;
    }
    
    @font-face {
      font-family: "Humane";
      src: url("/fonts/Humane-Light.ttf") format("truetype");
      font-weight: 300;
      font-style: normal;
      font-display: swap;
    }
    
    @font-face {
      font-family: "Humane";
      src: url("/fonts/Humane-Medium.ttf") format("truetype");
      font-weight: 500;
      font-style: normal;
      font-display: swap;
    }
    
    @font-face {
      font-family: "Humane";
      src: url("/fonts/Humane-Regular.ttf") format("truetype");
      font-weight: 400;
      font-style: normal;
      font-display: swap;
    }
    
    @font-face {
      font-family: "Humane";
      src: url("/fonts/Humane-SemiBold.ttf") format("truetype");
      font-weight: 600;
      font-style: normal;
      font-display: swap;
    }
    
    @font-face {
      font-family: "Humane";
      src: url("/fonts/Humane-Thin.ttf") format("truetype");
      font-weight: 100;
      font-style: normal;
      font-display: swap;
    }
    
    body {
      background: var(--clr-background);
    }
    
    canvas {
      position: fixed;
      top: 0;
      left: 0;
      width: 100vw;
      height: 100vh;
      pointer-events: none;
    }
    
    ::selection {
      background-color: var(--clr-selection);
      color: var(--clr-text);
    }
    
    ::-moz-selection {
      background-color: var(--clr-selection);
      color: var(--clr-text);
    }
    
    .text__1,
    .text__2,
    .text__3 {
      color: var(--clr-text);
      text-align: center;
      margin-block-start: 0;
      margin-block-end: 0;
    }
    
    .content {
      width: 100%;
      font-family: Humane;
      font-size: 0.825vw;
    
      @media (max-width: 768px) {
        font-size: 2vw;
      }
    }
    .container {
      display: flex;
      flex-direction: column;
      align-items: center;
    
      width: 70em;
      gap: 17.6em;
      padding: 6em 0;
    
      @media (max-width: 768px) {
        width: 100%;
      }
    }
    
    .container section {
      display: flex;
      flex-direction: column;
      align-items: center;
      height: auto;
    }
    
    .section__main__content {
      gap: 5.6em;
    }
    
    .text__1 {
      font-size: 19.4em;
      font-weight: 700;
      max-width: 45em;
    
      @media (max-width: 768px) {
        font-size: 13.979em;
      }
    }
    
    .text__2 {
      font-size: 4.9em;
      max-width: 7.6em;
      letter-spacing: 0.01em;
    }
    
    .text__3 {
      font-size: 13.979em;
      max-width: 2.4em;
    }
    

    A Few Key Notes about the Setup

    • The <canvas> element is fixed in place behind the main content and sized to cover the entire screen at all
      times.
    • All text elements intended for WebGL rendering are marked with data-animation="webgl-text" for clarity and easy
      selection when we begin scripting.

    The purpose of this setup is to function as the “placeholder” that we can mimic in our 3D implementation. So, it’s
    important to position and style your text at this stage to ensure it matches the final sizing and positioning you
    want to achieve. Text formatting properties like font-size, letter-spacing, and line-height are the ones to focus
    on, because we’ll later read these computed styles directly from the DOM during the WebGL phase. Color is optional
    here, as we can handle text coloring later with shaders inside WebGL.

    That’s it for the HTML and CSS setup! It’s all we need for the foundation to move onto our JavaScript and WebGL
    implementation.

    2. Initial 3D World Setup

    Let’s move onto the JavaScript and WebGL implementation. I’ll be using TypeScript, but you can easily follow along
    with vanilla JavaScript if you prefer. I’m assuming you’re already familiar with the basics of Three.js, so I’ll focus
    on explaining the high-level setup rather than covering every detail.

    Below is the starter TypeScript and Three.js base that I’ll be using for this demo.

    // main.ts
    
    import Commons from "./classes/Commons";
    import * as THREE from "three";
    
    /**
     * Main entry-point.
     * Creates Commons and Scenes
     * Starts the update loop
     * Eventually creates Postprocessing and Texts.
     */
    class App {
      private commons!: Commons;
    
      scene!: THREE.Scene;
    
      constructor() {
        document.addEventListener("DOMContentLoaded", async () => {
          await document.fonts.ready; // Important to wait for fonts to load when animating any texts.
    
          this.commons = Commons.getInstance();
          this.commons.init();
    
          this.createScene();
          
          this.addEventListeners();
    
          this.update();
        });
      }
    
      private createScene() {
        this.scene = new THREE.Scene();
      }
    
      /**
       * The main loop handler of the App
       * The update function to be called on each frame of the browser.
       * Calls update on all other parts of the app
       */
      private update() {
        this.commons.update();
    
        this.commons.renderer.render(this.scene, this.commons.camera);
    
        window.requestAnimationFrame(this.update.bind(this));
      }
    
      private addEventListeners() {
        window.addEventListener("resize", this.onResize.bind(this));
      }
    
      private onResize() {
        this.commons.onResize();
      }
    }
    
    export default new App();
    
    // Commons.ts
    
    import { PerspectiveCamera, WebGLRenderer, Clock } from "three";
    
    import Lenis from "lenis";
    
    export interface Screen {
      width: number;
      height: number;
      aspect: number;
    }
    
    export interface Sizes {
      screen: Screen;
      pixelRatio: number;
    }
    
    /**
     * Singleton class for Common stuff.
     * Camera
     * Renderer
     * Lenis
     * Time
     */
    export default class Commons {
      private constructor() {}
      
      private static instance: Commons;
    
      lenis!: Lenis;
      camera!: PerspectiveCamera;
      renderer!: WebGLRenderer;
    
      private time: Clock = new Clock();
      elapsedTime!: number;
    
      sizes: Sizes = {
        screen: {
          width: window.innerWidth,
          height: window.innerHeight,
          aspect: window.innerWidth / window.innerHeight,
        },
        pixelRatio: this.getPixelRatio(),
      };
    
      private distanceFromCamera: number = 1000;
    
      /**
       * Function to be called to either create Commons Singleton instance, or to return existing one.
       * TODO AFTER: Call the instance's init() function.
       * @returns Commons Singleton Instance.
       */
      static getInstance() {
        if (this.instance) return this.instance;
    
        this.instance = new Commons();
        return this.instance;
      }
    
      /**
       * Initializes all-things Commons. To be called after instance is set.
       */
      init() {
        this.createLenis();
        this.createCamera();
        this.createRenderer();
      }
    
      /**
       * Creating Lenis instance.
       * Sets autoRaf to true so we don't have to manually update Lenis on every frame.
       */
      private createLenis() {
        this.lenis = new Lenis({ autoRaf: true, duration: 2 });
      }
    
      private createCamera() {
        this.camera = new PerspectiveCamera(
          70,
          this.sizes.screen.aspect,
          200,
          2000
        );
        this.camera.position.z = this.distanceFromCamera;
        this.camera.updateProjectionMatrix();
      }
    
      /**
       * createRenderer(): Creates the common WebGLRenderer to be used.
       */
      private createRenderer() {
        this.renderer = new WebGLRenderer({
          alpha: true, // Sets scene background to transparent, so our body background defines the background color
        });
    
        this.renderer.setSize(this.sizes.screen.width, this.sizes.screen.height);
    
        this.renderer.setPixelRatio(this.sizes.pixelRatio);
    
        // Creating canvas element and appending to body element.
        document.body.appendChild(this.renderer.domElement); 
      }
    
      /**
       * Single source of truth to get pixelRatio.
       */
      getPixelRatio() {
        return Math.min(window.devicePixelRatio, 2);
      }
    
      /**
       * Resize handler function is called from the entry-point (main.ts)
       * Updates the Common screen dimensions.
       * Updates the renderer.
       * Updates the camera.
       */
      onResize() {
        this.sizes.screen = {
          width: window.innerWidth,
          height: window.innerHeight,
          aspect: window.innerWidth / window.innerHeight,
        };
        this.sizes.pixelRatio = this.getPixelRatio();
    
        this.renderer.setSize(this.sizes.screen.width, this.sizes.screen.height);
        this.renderer.setPixelRatio(this.sizes.pixelRatio);
    
        this.onResizeCamera();
      }
    
      /**
       * Handler function that is called from onResize handler.
       * Updates the perspective camera with the new adjusted screen dimensions
       */
      private onResizeCamera() {
        this.camera.aspect = this.sizes.screen.aspect;
        this.camera.updateProjectionMatrix();
      }
    
      /**
       * Update function to be called from entry-point (main.ts)
       */
      update() {
        this.elapsedTime = this.time.getElapsedTime();
      }
    }
    

    A Note About Smooth Scroll

    When syncing HTML and WebGL worlds, you should use a custom scroll. This is because the native scroll in browsers
    updates the scroll position at irregular intervals and thus does not guarantee frame-perfect updates with our
    requestAnimationFrame loop and our WebGL world, causing jittery and unsynchronized movement.

    By integrating a custom scroll (Lenis in this case), we ensure our scroll updates perfectly match the frame updates of
    our WebGL world.
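
    For reference, here are the Lenis values we’ll read from the instance later in this tutorial:

    // The Lenis properties this tutorial reads in later sections:
    const lenis = Commons.getInstance().lenis;

    lenis.actualScroll;   // Raw scroll position, used when caching element bounds (section 5).
    lenis.animatedScroll; // Smoothed scroll position, used every frame for mesh positioning (section 5).
    lenis.velocity;       // Scroll velocity, used to drive the post-processing distortion (section 7).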

    Right now we are seeing an empty 3D world, continuously being rendered.

    We’re only missing one thing to get something visible in our 3D world: the creation of the text elements. So let’s
    move onto creating our WebGLText class next.

    3. Creating WebGLText Class and Text Meshes

    For the creation of the text meshes, we’ll be using the troika-three-text library.

    npm i troika-three-text

    We’ll now create a reusable WebGLText class. This will handle turning each HTML element into a 3D text mesh,
    using Troika and our Three.js scene.

    Here’s the basic setup:

    // WebGLText.ts
    
    import Commons from "./Commons";
    import * as THREE from "three";
    
    // @ts-ignore
    import { Text } from "troika-three-text";
    
    interface Props {
      scene: THREE.Scene;
      element: HTMLElement;
    }
    
    export default class WebGLText {
      commons: Commons;
    
      scene: THREE.Scene;
      element: HTMLElement;
    
      computedStyle: CSSStyleDeclaration;
      font!: string; // Path to our .ttf font file.
      bounds!: DOMRect;
      color!: THREE.Color;
      material!: THREE.ShaderMaterial;
      mesh!: Text;
    
      // We assign the correct font based on our element's font weight from here
      weightToFontMap: Record<string, string> = {
        "900": "/fonts/Humane-Black.ttf",
        "800": "/fonts/Humane-ExtraBold.ttf",
        "700": "/fonts/Humane-Bold.ttf",
        "600": "/fonts/Humane-SemiBold.ttf",
        "500": "/fonts/Humane-Medium.ttf",
        "400": "/fonts/Humane-Regular.ttf",
        "300": "/fonts/Humane-Light.ttf",
        "200": "/fonts/Humane-ExtraLight.ttf",
        "100": "/fonts/Humane-Thin.ttf",
      };
      
      private y: number = 0; // Scroll-adjusted bounds.top
      
      private isVisible: boolean = false;
    
      constructor({ scene, element }: Props) {
        this.commons = Commons.getInstance();
    
        this.scene = scene;
        this.element = element;
    
        this.computedStyle = window.getComputedStyle(this.element); // Saving initial computed style.
      }
    }
    

    We have access to the Text class from Troika, which allows us to create text meshes and style them using familiar
    properties like fontSize, letterSpacing, and font. I’ll cover everything you need to style your text responsively
    in this tutorial, but I encourage you to take a look at the full documentation and its possibilities here.

    Troika doesn’t ship with TypeScript definitions out of the box, so if you’re using TS, you can quickly get around this
    by creating a type declaration file in the root of your project. It’s not pretty, but it gets the job done while
    keeping TypeScript happy.

    // troika.d.ts
    
    declare module "troika-three-text" {
      const value: any;
      export default value;
    }

    Let’s start by creating new methods called createFont(), createColor() and createMesh().

    createFont(): Selects the appropriate .ttf file based on the DOM element’s font-weight. If a match isn’t found, we
    fall back to the regular weight. Adjust the mapping to match your own font files and multiple font families if
    needed.

    // WebGLText.ts 
    
    private createFont() {
        this.font =
          this.weightToFontMap[this.computedStyle.fontWeight] ||
          "/fonts/Humane-Regular.ttf";
    }

    createColor(): Converts the computed CSS color into a THREE.Color instance:

    // WebGLText.ts 
    
    private createColor() {
        this.color = new THREE.Color(this.computedStyle.color);
    }

    createMesh(): Instantiates the text mesh and sets some basic properties. It copies the element’s inner text onto
    the mesh and adds the mesh to our Three.js scene. We anchor the text from the left-center to match typical HTML
    layout expectations.

    // WebGLText.ts 
    private createMesh() {
      this.mesh = new Text();
    
      this.mesh.text = this.element.innerText; // Copying HTML content over to the mesh
      this.mesh.font = this.font;
    
      // Anchor the text to the left-center (instead of center-center)
      this.mesh.anchorX = "0%";
      this.mesh.anchorY = "50%";
    
      this.mesh.color = this.color;
    
      this.scene.add(this.mesh);
    }

    ⚠️ When copying text contents over to the Mesh, avoid using innerHTML or textContent and use innerText instead as it
    gives the most layout-accurate and consistent results.

    setStaticValues(): Let’s also create a small setStaticValues() method, which will set the critical properties of
    our text mesh based on the computedStyle.

    For now, it sets values like the font size from the computed CSS. We’ll expand it as we sync more styles down the
    line.
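
    A minimal first version might look like this (a sketch: just the font size for now; we’ll flesh it out with
    alignment, spacing, and wrapping in section 5):

    // WebGLText.ts

    private setStaticValues() {
      const { fontSize } = this.computedStyle;

      // The computed font size comes back as a pixel string (e.g. "49px"),
      // so we parse it into a number before handing it to Troika.
      this.mesh.fontSize = parseFloat(fontSize);
    }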

    We want to call all these methods in the constructor like this:

    // WebGLText.ts 
     constructor({ scene, element }: Props) {
      this.commons = Commons.getInstance();
    
      this.scene = scene;
      this.element = element;
    
      this.computedStyle = window.getComputedStyle(this.element); // Saving initial computed style.
    
      this.createFont();
      this.createColor();
      this.createMesh();
      this.setStaticValues();
    }

    Instantiating Text Elements from DOM

    Finally, let’s update our App class (main.ts), and hook this all up by scanning for DOM elements with a
    data-animation="webgl-text" attribute — creating a WebGLText instance for each one:

    // main.ts
    
    texts!: Array<WebGLText>;
    
    // ...
    
    private createWebGLTexts() {
      const texts = document.querySelectorAll('[data-animation="webgl-text"]');
    
      if (texts.length) {
        this.texts = Array.from(texts).map((el) => {
          const newEl = new WebGLText({
            element: el as HTMLElement,
            scene: this.scene,
          });
    
          return newEl;
        });
      }
    }
    

    Make sure to call this method in the constructor on initialization. This will populate our scene with styled text
    meshes based on our DOM content.
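
    Here’s the constructor with the new call wired in (a sketch; it matches the final version we’ll reach in section
    7, minus the post-processing step):

    // main.ts

    constructor() {
      document.addEventListener("DOMContentLoaded", async () => {
        await document.fonts.ready; // Wait for fonts before measuring and building texts.

        this.commons = Commons.getInstance();
        this.commons.init();

        this.createScene();
        this.createWebGLTexts(); // Creating our WebGL texts from the DOM.

        this.addEventListeners();

        this.update();
      });
    }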

    That’s all we need to have our text meshes visible. It’s not the prettiest sight to behold, but at least we’ve got
    everything working:

    Next Challenge: Screen vs. 3D Space Mismatch

    Even though we copy the font size directly from the DOM, the scale looks different in 3D. That’s because
    WebGL units don’t map 1:1 with screen pixels
    , and they operate in different coordinate systems. This mismatch will become even more obvious if we start
    positioning and animating elements.

    To get true visual parity between our DOM elements and WebGL text, we need to bridge the gap between screen space and
    3D space. Let’s tackle that next so our text sizes and positions actually match with what we see in the browser.

    4. Syncing Dimensions

    The major problem when syncing HTML and WebGL dimensions is that the two aren’t pixel-perfect out of the box. This
    is because the DOM and WebGL don’t "speak the same units" by default.

    • Web browsers work in screen pixels.
    • WebGL uses arbitrary units.

    Our goal is simple:

    💡 Make one unit in the WebGL scene equal one pixel on the screen.

    To achieve this, we’ll adjust the camera’s field of view (FOV) so that the visible area through the camera exactly
    matches the dimensions of the browser window in pixels.

    So, we’ll create a syncDimensions() function under our Commons class, which calculates our camera’s field of view
    such that 1 unit in the WebGL scene corresponds to 1 pixel on the screen — at a given distance from the camera.

     // Commons.ts 
    /**
      * Helper function that is called upon creation and resize
      * Updates the camera's fov according to the new dimensions such that the window's pixels match with that of WebGL scene
      */
    private syncDimensions() {
      this.camera.fov =
        2 *
        Math.atan(this.sizes.screen.height / 2 / this.distanceFromCamera) *
        (180 / Math.PI);
    }

    This function will be called once when we create the camera, and every time that the screen is resized.

    
    //Commons.ts
    
    private createCamera() {
      this.camera = new PerspectiveCamera(
        70,
        this.sizes.screen.aspect,
        200,
        2000
      );
      this.camera.position.z = this.distanceFromCamera;
      this.syncDimensions(); // Syncing dimensions
      this.camera.updateProjectionMatrix();
    }
    
    // ...
    
    private onResizeCamera() {
      this.syncDimensions(); // Syncing dimensions
    
      this.camera.aspect = this.sizes.screen.aspect;
      this.camera.updateProjectionMatrix();
    }

    Let’s break down what’s actually going on here using the image below:

    We know:

    • The height of the screen
    • The distance from camera (Z)
    • The FOV of the camera is the vertical angle (fov y in the image)

    So our main goal is to set the camera’s vertical angle of view according to our screen height.

    Because the Z distance from the camera and half of the screen height form a right triangle, we can solve for the
    angle using some basic trigonometry, and compute the FOV using the inverse tangent (atan) of this triangle.

    Step-by-step Breakdown of the Formula

    this.sizes.screen.height / 2

    → This gives us half the screen’s pixel height — the opposite side of our triangle.

    this.distanceFromCamera

    → This is the adjacent side of the triangle — the distance from the camera to the 3D scene.

    Math.atan(opposite / adjacent)

    → Calculates half of the vertical FOV (in radians).

    *2

    → Since atan only gives half of the angle, we multiply it by 2 to get the full FOV.

    * (180 / Math.PI)

    → Converts the angle from radians to degrees (Three.js expects degrees for PerspectiveCamera’s fov)

    So the final formula comes down to:

    this.camera.fov =
      2 *
      Math.atan(this.sizes.screen.height / 2 / this.distanceFromCamera) *
      (180 / Math.PI);

    That’s all we need to sync our dimensions, and this setup ensures that 1 unit in WebGL = 1 pixel on screen.
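
    As a quick sanity check, here’s the formula evaluated with assumed numbers (a 1080px-tall window and our
    distanceFromCamera of 1000):

    // Assumed values, purely for illustration.
    const screenHeight = 1080;
    const distanceFromCamera = 1000;

    const fov =
      2 * Math.atan(screenHeight / 2 / distanceFromCamera) * (180 / Math.PI);

    console.log(fov); // ≈ 56.8 degrees: at z = 1000 the camera sees exactly 1080 units vertically.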

    Let’s move back to the text implementation.

    5. Setting Text Properties and Positioning

    Now that we’ve synced the WebGL scene to our screen’s pixel dimensions, we can start mapping HTML styles to our 3D
    text.

    If everything’s wired up correctly, you should see that the font size of the WebGL-rendered text matches the size of
    the underlying HTML, although the positioning is still off.

    Let’s sync more styling properties and positioning.

    Before we can position the 3D text, we need to get the DOM element’s position and size. We’ll create a new method
    in the WebGLText class called createBounds(), and use the browser’s built-in getBoundingClientRect() method:

    // WebGLText.ts
    
    private createBounds() {
      this.bounds = this.element.getBoundingClientRect();
      this.y = this.bounds.top + this.commons.lenis.actualScroll;
    }

    And call this in the constructor:

      // WebGLText.ts
    
    constructor({ scene, element }: Props) {
      this.commons = Commons.getInstance();
    
      this.scene = scene;
      this.element = element;
    
      this.computedStyle = window.getComputedStyle(this.element); // Saving initial computed style.
    
      this.createFont();
      this.createColor();
      this.createBounds(); // Creating bounds
      this.createMesh();
      this.setStaticValues();
    }

    Next, we’ll pull important typographic properties from the DOM (computed style) and pass them to the 3D mesh so
    that it behaves like our native HTML text. (Again, you can see the full documentation and possible properties of
    Troika here.) Below I’ve included the most important ones.

      // WebGLText.ts 
    
    private setStaticValues() {
      const { fontSize, letterSpacing, lineHeight, whiteSpace, textAlign } =
        this.computedStyle;
    
      const fontSizeNum = window.parseFloat(fontSize);
    
      this.mesh.fontSize = fontSizeNum;
    
      this.mesh.textAlign = textAlign;
    
      // Troika defines letter spacing in em's, so we convert to them
      this.mesh.letterSpacing = parseFloat(letterSpacing) / fontSizeNum;
    
      // Same with line height
      this.mesh.lineHeight = parseFloat(lineHeight) / fontSizeNum;
    
      // Important to define maxWidth for the mesh, so that our text doesn't overflow
      this.mesh.maxWidth = this.bounds.width;
    
      // Match whiteSpace behavior (e.g., 'pre', 'nowrap')
      this.mesh.whiteSpace = whiteSpace;
    }

    Troika accepts some of these properties in local em units, so we convert pixels to ems by dividing the pixel value
    by the font size. For example, a computed letter-spacing of 0.49px at a 49px font size becomes 0.01em.

    Also, it’s important to set a maximum width (in pixels) to constrain the mesh’s layout — this prevents text from
    overflowing and ensures proper text wrapping.

    And finally, let’s create an update() function, to be called on each frame, that consistently positions our mesh
    according to the underlying DOM position.

    This is what it looks like:

    //WebGLText.ts
    
    update() {
      this.mesh.position.y =
        -this.y +
        this.commons.lenis.animatedScroll +
        this.commons.sizes.screen.height / 2 -
        this.bounds.height / 2;
    
      this.mesh.position.x =
        this.bounds.left - this.commons.sizes.screen.width / 2;
    }

    Breakdown:

    • -this.y shifts the mesh down by the element’s absolute page offset (WebGL’s Y axis points up, so DOM offsets are
      negated).
    • lenis.animatedScroll re-applies the live animated scroll position.
    • Together, they give the element’s current position relative to the viewport.

    Since our WebGL coordinate system is centered in the middle of the screen (Y = 0 is center), we also:

    • Add half the screen height (to convert from the DOM’s top-left origin to WebGL’s center origin)
    • Subtract half the text height to vertically center the text
    • Subtract half the screen width on the X axis, for the same origin conversion (see the worked example below)
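
    Here’s that math with assumed numbers, purely for illustration: a 1000 × 800 screen, an element at
    bounds.left = 100 and bounds.top = 150 (so y = 150 at zero scroll), with a height of 50.

    // Assumed values for the worked example.
    const screen = { width: 1000, height: 800 };
    const y = 150; // Scroll-adjusted bounds.top
    const boundsHeight = 50;
    const boundsLeft = 100;
    const animatedScroll = 0;

    const meshY = -y + animatedScroll + screen.height / 2 - boundsHeight / 2; // -150 + 0 + 400 - 25 = 225
    const meshX = boundsLeft - screen.width / 2; // 100 - 500 = -400

    console.log(meshY, meshX); // 225 -400, both measured from the screen's center.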

    Now, we call this update function for each of the text instances in our entry-file:

      // main.ts
    
    private update() {
      this.commons.update();
    
      this.commons.renderer.render(this.scene, this.commons.camera);
    
    
      if (this.texts) {
        this.texts.forEach((el) => el.update());
      }
    
      window.requestAnimationFrame(this.update.bind(this));
    }

    And now, the texts will perfectly follow their DOM counterparts, even as the user scrolls.

    Let’s finalize our base text class implementation before diving into effects:

    Resizing

    We need to ensure that our WebGL text updates correctly on window resize events. This means recreating the
    computedStyle, bounds, and static values whenever the window size changes.

    Here’s the resize event handler:

     // WebGLText.ts
    
    onResize() {
      this.computedStyle = window.getComputedStyle(this.element);
      this.createBounds();
      this.setStaticValues();
    }

    And, call it in the entry-point for each of the text instances:

      // main.ts
    
    private onResize() {
      this.commons.onResize();
    
      // Resizing texts
      if (this.texts) {
        this.texts.forEach((el) => el.onResize());
      }
    }

    Once everything is working responsively and perfectly synced with the DOM, we can finally hide the original HTML
    text by setting it transparent — but we’ll keep it in place so it’s still selectable and accessible to the user.

    // WebGLText.ts
    
    this.createFont();
    this.createColor();
    this.createBounds();
    this.createMesh();
    this.setStaticValues();
    
    this.element.style.color = "transparent"; // Hide DOM element

    We should now have our perfectly responsive text meshes, and the user only sees the rendered WebGL text, while the DOM
    element remains fully intact for accessibility.

    Let’s add some effects!

    6. Adding a Custom Shader and Replicating Mask Reveal Animations

    Troika also lets us use custom shader materials for meshes, giving us the flexibility to create complex effects beyond
    just setting colors.

    Let’s set up our initial custom shaders:

    Fragment Shader:

    // text.frag
    
    uniform vec3 uColor;
    varying vec2 vUv;
    
    void main() {
      gl_FragColor = vec4(uColor, 1.0); // Applying our custom color.
    }

    The fragment shader defines the color of the text using the uColor uniform.

    Vertex Shader:

    // text.vert
    varying vec2 vUv;
    
    void main() {
      vUv = uv;
      gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    }

    The vertex shader passes the texture coordinates (uv) to the fragment shader for the text rendering.

    Shader File Imports using Vite

    To handle shader files more easily, we can use the vite-plugin-glsl plugin together with Vite to import shader
    files like .frag and .vert directly in code:

    npm i vite-plugin-glsl -D
    // vite.config.ts
    
    import { defineConfig } from "vite";
    import glsl from "vite-plugin-glsl";
    
    export default defineConfig({
      plugins: [
        glsl({
          include: [
            "**/*.glsl",
            "**/*.wgsl",
            "**/*.vert",
            "**/*.frag",
            "**/*.vs",
            "**/*.fs",
          ],
          warnDuplicatedImports: true,
          defaultExtension: "glsl",
          watch: true,
          root: "/",
        }),
      ],
    });
    

    If you’re using TypeScript, you also need to declare the modules for shader files so TypeScript can understand how to
    import them:

    // shaders.d.ts
    
    declare module "*.frag" {
      const value: string;
      export default value;
    }
    
    declare module "*.vert" {
      const value: string;
      export default value;
    }
    
    declare module "*.glsl" {
      const value: string;
      export default value;
    }

    Creating Custom Shader Materials

    Let’s now create our custom ShaderMaterial and apply it to our mesh:

    // WebGLText.ts
    
    // Importing shaders
    import fragmentShader from "../../shaders/text/text.frag";
    import vertexShader from "../../shaders/text/text.vert";
    
    //...
    
    this.createFont();
    this.createColor();
    this.createBounds();
    this.createMaterial(); // Creating material
    this.createMesh();
    this.setStaticValues();
    
    //...
    
    private createMaterial() {
      this.material = new THREE.ShaderMaterial({
        fragmentShader,
        vertexShader,
        uniforms: {
          uColor: new THREE.Uniform(this.color), // Passing our color to the shader
        },
      });
    }

    In the createMaterial() method, we define the ShaderMaterial using the imported shaders and pass in the uColor
    uniform, which allows us to dynamically control the color of the text based on our DOM element.

    And now, instead of setting the color directly on the default mesh material, we apply our new custom material:

      // WebGLText.ts
    
    private createMesh() {
      this.mesh = new Text();
    
      this.mesh.text = this.element.innerText; // Always use innerText (not innerHTML or textContent).
      this.mesh.font = this.font;
    
      this.mesh.anchorX = "0%";
      this.mesh.anchorY = "50%";
    
      this.mesh.material = this.material; //Using custom material instead of color
    }

    At this point, we are using our custom shader material, but so far, nothing in our output has changed. Let’s now
    set up show and hide animations using our custom shader, and replicate the mask reveal effect.

    Setting up Reveal Animations

    We’ll create an animation that uses a progress uniform (uProgress) to control the visibility and reveal progress of
    the text. The animation will be controlled using the motion library.

    First, we must install motion and import its animate and inView functions into our WebGLText class.

    npm i motion
    // WebGLText.ts
    
    import { inView, animate } from "motion";

    Now, let’s configure our class so that when the text steps into view, the show() function is called, and when it
    steps out of view, the hide() function is called. These methods also control the current visibility variable
    this.isVisible, and they animate the uProgress uniform between 0 and 1.

    For this, we must also set up an addEventListeners() function:

     // WebGLText.ts
    
    /**
      * Inits visibility tracking using motion's inView function.
      * Show is called when the element steps into view, and hide is called when the element steps out of view
      */
    private addEventListeners() {
      inView(this.element, () => {
        this.show();
    
        return () => this.hide();
      });
    }
    
    show() {
      this.isVisible = true;
    
      animate(
        this.material.uniforms.uProgress,
        { value: 1 },
        { duration: 1.8, ease: [0.25, 1, 0.5, 1] }
      );
    }
    
    hide() {
      animate(
        this.material.uniforms.uProgress,
        { value: 0 },
        { duration: 1.8, onComplete: () => (this.isVisible = false) }
      );
    }

    Just make sure to call addEventListeners() in your constructor after setting up the class.
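
    For reference, the end of the constructor would now look something like this (a sketch; the listener only needs to
    be registered after the material and mesh exist):

    // WebGLText.ts

    constructor({ scene, element }: Props) {
      this.commons = Commons.getInstance();

      this.scene = scene;
      this.element = element;

      this.computedStyle = window.getComputedStyle(this.element);

      this.createFont();
      this.createColor();
      this.createBounds();
      this.createMaterial();
      this.createMesh();
      this.setStaticValues();

      this.addEventListeners(); // Start visibility tracking once everything is in place.
    }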

    Updating the Shader Material for Animation

    We’ll also add two additional uniform variables in our material for the animations:

    • uProgress
      : Controls the reveal progress (from 0 to 1).
    • uHeight
      : Used by the vertex shader to calculate vertical position offset.

    Updated createMaterial() method:

     // WebGLText.ts
    
    private createMaterial() {
      this.material = new THREE.ShaderMaterial({
        fragmentShader,
        vertexShader,
        uniforms: {
          uProgress: new THREE.Uniform(0),
          uHeight: new THREE.Uniform(this.bounds.height),
          uColor: new THREE.Uniform(this.color),
        },
      });
    }

    Since uHeight depends on the bounds, we also want to update the uniform variable upon resizing:

      // WebGLText.ts
    
    onResize() {
      this.computedStyle = window.getComputedStyle(this.element);
      this.createBounds();
      this.setStaticValues();
      this.material.uniforms.uHeight.value = this.bounds.height;
    }

    We now have the text class instance automatically calling show() and hide(), and animating the uProgress according to
    the visibility of our underlying DOM-element.

    For performance, you might want to update the update() method to only calculate a new position when the mesh is
    visible:

    update() {
      if (this.isVisible) {
        this.mesh.position.y =
          -this.y +
          this.commons.lenis.animatedScroll +
          this.commons.sizes.screen.height / 2 -
          this.bounds.height / 2;
    
        this.mesh.position.x =
          this.bounds.left - this.commons.sizes.screen.width / 2;
      }
    }

    Mask Reveal Theory and Shader Implementation

    Creating a mask reveal effect with custom shaders in WebGL is surprisingly simple when we break it down into two
    separate movements: one happening in the fragment shader and the other in the vertex shader. You might’ve seen
    this effect in WebGL on the site of Zajno, for example.

    Instead of overcomplicating the concept with complex masks or thinking about “lifting it up” behind a window (as we do
    in traditional HTML), we can think of it as two distinct actions that work together.

    1. Fragment Shader
      : We clip the text vertically, revealing it gradually from top to bottom.
    2. Vertex Shader
      : We translate the text’s position from the bottom to the top by its height.

    Together these two movements create the illusion of the text lifting itself up from behind a mask.

    Let’s update our fragment shader code:

    //text.frag
    
    uniform float uProgress; // Our progress value between 0 and 1
    
    uniform vec3 uColor;
    varying vec2 vUv;
    
    void main() {
      // Calculate the reveal threshold (top-to-bottom reveal)
      float reveal = 1.0 - vUv.y;
      
      // Discard fragments above the reveal threshold based on progress
      if (reveal > uProgress) discard;
    
      // Apply the color to the visible parts of the text
      gl_FragColor = vec4(uColor, 1.0);
    }
    
    • When uProgress is 0, the mesh is fully clipped out, and nothing is visible.
    • When uProgress increases towards 1, the mesh reveals itself from top to bottom.

    For the vertex shader, we can simply pass the new uniform called uHeight, which holds the height of our DOM
    element (this.bounds.height), and translate the output vertically according to it and uProgress.

    //text.vert
    
    uniform float uProgress;
    uniform float uHeight; // Total height of the mesh passed in from JS
    
    varying vec2 vUv;
    
    void main() {
      vUv = uv;
      
      vec3 transformedPosition = position;
    
      // Push the mesh upward as it reveals
      transformedPosition.y -= uHeight * (1.0 - uProgress);
      
      gl_Position = projectionMatrix * modelViewMatrix * vec4(transformedPosition, 1.0);
    }

    • uHeight: Total height of the DOM element (and mesh), passed in from JS.
    • When uProgress is 0, the mesh is fully pushed down.
    • As uProgress reaches 1, it returns to its natural position.

    Now, we should have a beautiful scene animating on scroll, where the texts reveal themselves as they scroll into
    view, just as they would in regular HTML.

    To spice things up, let’s add some scroll-velocity-based post-processing effects to our scene as the final step!

    7. Adding Post-processing

    Now that we’ve built our animated WebGL text with custom shaders and scroll-triggered reveals, we can push the
    visuals further with post-processing.

    Post-processing allows us to apply full-screen visual effects after the scene has been rendered. This is done by
    passing the final image through a series of custom shader passes.

    So, in this final section, we’ll:

    • Set up a PostProcessing class using Three.js’s EffectComposer
    • Add a custom RGB shift and wave distortion effect
    • Drive the distortion strength dynamically using the scroll velocity from our Lenis custom scroll instance

    Creating a PostProcessing class with EffectComposer

    Let’s create a PostProcessing class that will be initialized from our entry-point, and which will handle
    everything regarding post-processing using Three.js’s EffectComposer. You can read more about the EffectComposer
    class in Three.js’s documentation. We’ll also create new fragment and vertex shaders for the PostProcessing class
    to use.

    // PostProcessing.ts
    
    import {
      EffectComposer,
      RenderPass,
      ShaderPass,
    } from "three/examples/jsm/Addons.js";
    import Commons from "./Commons";
    import * as THREE from "three";
    
    // Importing postprocessing shaders
    import fragmentShader from "../../shaders/postprocessing/postprocessing.frag";
    import vertexShader from "../../shaders/postprocessing/postprocessing.vert";
    
    interface Props {
      scene: THREE.Scene;
    }
    
    export default class PostProcessing {
      // Scene and utility references
      private commons: Commons;
      private scene: THREE.Scene;
    
      private composer!: EffectComposer;
    
      private renderPass!: RenderPass;
      private shiftPass!: ShaderPass;
    
      constructor({ scene }: Props) {
        this.commons = Commons.getInstance();
    
        this.scene = scene;
    
        this.createComposer();
        this.createPasses();
      }
    
      private createComposer() {
        this.composer = new EffectComposer(this.commons.renderer);
        this.composer.setPixelRatio(this.commons.sizes.pixelRatio);
        this.composer.setSize(
          this.commons.sizes.screen.width,
          this.commons.sizes.screen.height
        );
      }
    
      private createPasses() {
        // Creating Render Pass (final output) first.
        this.renderPass = new RenderPass(this.scene, this.commons.camera);
        this.composer.addPass(this.renderPass);
    
        // Creating Post-processing shader for wave and RGB-shift effect.
        const shiftShader = {
          uniforms: {
            tDiffuse: { value: null },      // Default input from previous pass
            uVelocity: { value: 0 },        // Scroll velocity input
            uTime: { value: 0 },            // Elapsed time for animated distortion
          },
          vertexShader,
          fragmentShader,
        };
    
        this.shiftPass = new ShaderPass(shiftShader);
        this.composer.addPass(this.shiftPass);
      }
    
      /**
       * Resize handler for EffectComposer, called from entry-point.
       */
      onResize() {
        this.composer.setPixelRatio(this.commons.sizes.pixelRatio);
        this.composer.setSize(
          this.commons.sizes.screen.width,
          this.commons.sizes.screen.height
        );
      }
    
      update() {
        this.shiftPass.uniforms.uTime.value = this.commons.elapsedTime;
    
        this.composer.render();
      }
    }
    

    Since we don’t have our postprocessing shaders created yet, make sure you create placeholder postprocessing.frag and
    postprocessing.vert shaders so the imports don’t fail.

    Example placeholders below:

    //postprocessing.frag
    
    uniform sampler2D tDiffuse;
    uniform float uVelocity;
    uniform float uTime;
    
    varying vec2 vUv;
    
    void main() {
        gl_FragColor = texture2D(tDiffuse, vUv);
    }
    
    //postprocessing.vert
    varying vec2 vUv;
    
    void main() {
        vUv = uv;
            
        gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    }

    Breakdown of the PostProcessing class

    Constructor: Initializes the class by storing the provided scene, grabbing the shared Commons instance, and then
    calling createComposer() and createPasses().

    createComposer(): Sets up the EffectComposer with the correct pixel ratio and canvas size:

    • EffectComposer wraps the WebGL renderer and allows chaining of multiple render passes.
    • Sized according to the current viewport dimensions and pixel ratio.

    createPasses(): This method sets up all rendering passes applied to the scene.

    • RenderPass: The first pass, which simply renders the scene with the main camera as usual.
    • ShaderPass (shiftPass): A custom full-screen shader pass that we’ll create, and which will produce the RGB shift
      and wavy distortion effects.

    update(): Method called on every frame. It updates the uTime uniform so we can animate effects over time, and
    renders the final post-processed image using composer.render().

    Initializing Post-processing

    To wire the post-processing system into our existing app, we update our main.ts:

      //main.ts
    private postProcessing!: PostProcessing;
    
    //....
    
    constructor() {
      document.addEventListener("DOMContentLoaded", async () => {
        await document.fonts.ready;
    
        this.commons = Commons.getInstance();
        this.commons.init();
    
        this.createScene();
        this.createWebGLTexts();
        this.createPostProcessing(); // Creating post-processing
        this.addEventListeners();
    
        this.update();
      });
    }
    
    // ...
    
    private createPostProcessing() {
      this.postProcessing = new PostProcessing({ scene: this.scene });
    }
    
    // ...
    
    private update() {
      this.commons.update();
      
      if (this.texts) {
        this.texts.forEach((el) => el.update());
      }
      
      // Don't need line below as we're rendering everything using EffectComposer.
      // this.commons.renderer.render(this.scene, this.commons.camera);
      
      this.postProcessing.update(); // Post-processing class handles rendering of output from now on
    
      
      window.requestAnimationFrame(this.update.bind(this));
    }
    
    
    private onResize() {
      this.commons.onResize();
    
      if (this.texts) {
        this.texts.forEach((el) => el.onResize());
      }
    
      this.postProcessing.onResize(); // Resize post-processing
    }

    So, in the new update() function, instead of rendering directly from there, we now hand off rendering
    responsibility to the PostProcessing class.

    Creating Post-processing Shader and Wiring Scroll Velocity

    We now want to modify the PostProcessing class further, so that the post-processing fragment shader receives the
    current scroll velocity from Lenis.

    For this, I’m adding two new properties, lerpedVelocity and lerpFactor, which control the smoothed-out velocity.
    The raw velocity values from Lenis can be spiky and sudden, especially with fast scrolling or scroll jumps. If we
    pass that raw value directly into a shader, it can cause a really jittery output.

    private lerpedVelocity = 0; // Smoothed scroll velocity for post-processing.
    private lerpFactor = 0.05; // Controls how quickly lerpedVelocity follows the real velocity
    
    // ...
    
    update() {
        this.shiftPass.uniforms.uTime.value = this.commons.elapsedTime;
    
      // Reading current velocity from the lenis instance.
      const targetVelocity = this.commons.lenis.velocity;
    
      // We use the lerped velocity as the actual velocity for the shader, just for a smoother experience.
      this.lerpedVelocity +=
        (targetVelocity - this.lerpedVelocity) * this.lerpFactor;
    
      this.shiftPass.uniforms.uVelocity.value = this.lerpedVelocity;
    
      this.composer.render();
    }

    Post-processing Shaders

    For the vertex shader, we can keep everything default: we simply pass the texture coordinates through to the
    fragment shader.

    //postprocessing.vert
    
    varying vec2 vUv;
    
    void main() {
      vUv = uv;
            
      gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    }

    And for the fragment shader:

    //postprocessing.frag
    
    uniform sampler2D tDiffuse;
    uniform float uVelocity;
    uniform float uTime;
    
    varying vec2 vUv;
    
    void main() {
      vec2 uv = vUv;
      
      // Calculating wave distortion based on velocity
      float waveAmplitude = uVelocity * 0.0009;
      float waveFrequency = 4.0 + uVelocity * 0.01;
      
      // Applying wave distortion to the UV coordinates
      vec2 waveUv = uv;
      waveUv.x += sin(uv.y * waveFrequency + uTime) * waveAmplitude;
      waveUv.y += sin(uv.x * waveFrequency * 5. + uTime * 0.8) * waveAmplitude;
      
      // Applying the RGB shift to the wave-distorted coordinates
      float r = texture2D(tDiffuse, vec2(waveUv.x, waveUv.y + uVelocity * 0.0005)).r;
      vec2 gb = texture2D(tDiffuse, waveUv).gb;
    
      gl_FragColor = vec4(r, gb, 1.0); // Shifted red, original green and blue, fully opaque.
    }

    Breakdown

    // Calculating wave distortion based on velocity
    float waveAmplitude = uVelocity * 0.0009;
    float waveFrequency = 4.0 + uVelocity * 0.01;

    Wave amplitude controls how strongly the wave effect distorts the screen according to our scroll velocity.

    Wave frequency controls how frequently the waves occur.

    Next, we distort the UV-coordinates using sin functions and the uTime uniform:

    // Applying wave distortion to the UV coordinates
    vec2 waveUv = uv;
    waveUv.x += sin(uv.y * waveFrequency + uTime) * waveAmplitude;
    waveUv.y += sin(uv.x * waveFrequency * 5. + uTime * 0.8) * waveAmplitude;

    The red channel is offset slightly based on the velocity, creating the RGB shift effect.

    // Applying the RGB shift to the wave-distorted coordinates
    float r = texture2D(tDiffuse, vec2(waveUv.x, waveUv.y + uVelocity * 0.0005)).r;
    vec2 gb = texture2D(tDiffuse, waveUv).gb;
    
    gl_FragColor = vec4(r, gb, 1.0);

    This will create a subtle color separation in the final image that shifts according to our scroll velocity.

    Finally, we combine the shifted red channel with the original green and blue channels into a fully opaque output
    color.

    8. Final Result

    And there you have it! We’ve created a responsive text scene with scroll-triggered mask reveal animations and
    wavy, RGB-shifted post-processing.

    This setup provides a solid, modular foundation for building more advanced WebGL text effects. If you’re curious to explore further, consider adding particles, fluid simulations, audio reactivity, or more complex materials and shaders. If you’re interested in breakdowns of any of these, feel free to reach out on X.

    Thanks so much for following along 🙌




  • How to create custom snippets in Visual Studio 2022 | Code4IT

    How to create custom snippets in Visual Studio 2022 | Code4IT


    A simple way to improve efficiency is knowing your IDE shortcuts. Let’s learn how to create custom ones to generate code automatically.


    One of the best tricks to boost productivity is knowing your tools.

    I’m pretty sure you’ve already used some predefined snippets in Visual Studio. For example, when you type ctor and hit Tab twice, VS automatically creates an empty constructor for the current class.

    In this article, we will learn how to create custom snippets: in particular, we will design a snippet that automatically creates a C# Unit Test method with some placeholders and predefined Arrange-Act-Assert blocks.

    Snippet Designer: a Visual Studio 2022 extension to add a UI to your placeholders

    Snippets are defined in XML-like files with .snippet extension. But we all know that working with XMLs can be cumbersome, especially if you don’t have a clear idea of the expected structure.

    Therefore, even if not strictly necessary, I suggest installing a VS2022 extension called Snippet Designer 2022.

    Snippet Designer 2022 in VS2022

    This extension, developed by Matthew Manela, can be found on GitHub, where you can view the source code.

    This extension gives you a UI to customize the snippet instead of manually editing the XML nodes. It allows you to customize the snippet, the related metadata, and even the placeholders.

    Create a basic snippet in VS2022 using a .snippet file

    As we saw, snippets are defined in a simple XML.

    In order to have your snippets immediately available in Visual Studio, I suggest you create those files in a specific VS2022 folder under the path \Documents\Visual Studio 2022\Code Snippets\Visual C#\My Code Snippets\.

    So, create an empty file, change its extension to .snippet, and save it to that location.

    Save snippet file under the My Code Snippets folder in VS2022

    Now, you can open Visual Studio (it’s not necessary to open a project, but I’d recommend you to do so). Then, head to File > Open, and open the file you saved under the My Code Snippets directory.

    Thanks to Snippet Designer, you will be able to see a nice UI instead of plain XML content.

    Have a look at how I filled in the various parts to create a snippet that generates a variable named x, assigns it
    a value, and then increments it with x++;

    Simple snippet, with related metadata and annotations

    Have a look at the main parts:

    • the body, which contains the snippet to be generated;
    • the top layer, where we specified:
      • the Snippet name: Int100; it’s the display name of the shortcut
      • the code language: C#;
      • the shortcut: int100; it’s the string you’ll type in that allows you to generate the expected snippet;
    • the bottom table, which contains the placeholders used in the snippet; more on this later;
    • the properties tab, on the sidebar: here is where you specify some additional metadata, such as:
      • Author, Description, and Help Url of the snippet, in case you want to export it;
      • the kind of snippet: possible values are MethodBody, MethodDecl and TypeDecl. However, this value is supported only in Visual Basic.

    Now, hit save and be ready to import it!

    Just for completeness, here’s the resulting XML:

    <?xml version="1.0" encoding="utf-8"?>
    <CodeSnippets xmlns="http://schemas.microsoft.com/VisualStudio/2005/CodeSnippet">
      <CodeSnippet Format="1.0.0">
        <Header>
          <SnippetTypes>
            <SnippetType>Expansion</SnippetType>
          </SnippetTypes>
          <Title>Int100</Title>
          <Author>
          </Author>
          <Description>
          </Description>
          <HelpUrl>
          </HelpUrl>
          <Shortcut>int100</Shortcut>
        </Header>
        <Snippet>
          <Code Kind="method decl" Language="csharp" Delimiter="$"><![CDATA[int x = 100;
    x++;]]></Code>
        </Snippet>
      </CodeSnippet>
    </CodeSnippets>
    

    Notice that the actual content of the snippet is defined in the CDATA block.

    Import the snippet in Visual Studio

    It’s time to import the snippet. Open the Tools menu item and click on Code Snippets Manager.

    Code Snippets Manager menu item, under Tools

    From here, you can import a snippet by clicking the Import… button. Given that we’ve already saved our snippet in the correct folder, we’ll find it under the My Code Snippets folder.

    Code Snippets Manager tool

    Now it’s ready! Open a C# class, and start typing int100. You’ll see our snippet in the autocomplete list.

    Int100 snippet is now visible in Visual Studio

    By hitting Tab twice, you’ll see the snippet’s content being generated.

    How to use placeholders when defining snippets in Visual Studio

    Wouldn’t it be nice to have the possibility to define customizable parts of your snippets?

    Let’s see a real example: I want to create a snippet to create the structure of a Unit Tests method with these characteristics:

    • it already contains the AAA (Arrange, Act, Assert) sections;
    • the method name should follow the pattern “SOMETHING should DO STUFF when CONDITION”. I want to be able to replace the different parts of the method name by using placeholders.

    You can define placeholders using the $ symbol. You will then see the placeholders in the table at the bottom of the UI. In this example, the placeholders are $TestMethod$, $DoSomething$, and $Condition$. I also added a description to explain the purpose of each placeholder better.

    TestSync snippet definition and metadata

    The XML looks like this:

    <?xml version="1.0" encoding="utf-8"?>
    <CodeSnippets xmlns="http://schemas.microsoft.com/VisualStudio/2005/CodeSnippet">
      <CodeSnippet Format="1.0.0">
        <Header>
          <SnippetTypes>
            <SnippetType>Expansion</SnippetType>
          </SnippetTypes>
          <Title>Test Sync</Title>
          <Author>Davide Bellone</Author>
          <Description>Scaffold the AAA structure for synchronous NUnit tests</Description>
          <HelpUrl>
          </HelpUrl>
          <Shortcut>testsync</Shortcut>
        </Header>
        <Snippet>
          <Declarations>
            <Literal Editable="true">
              <ID>TestMethod</ID>
              <ToolTip>Name of the method to be tested</ToolTip>
              <Default>TestMethod</Default>
              <Function>
              </Function>
            </Literal>
            <Literal Editable="true">
              <ID>DoSomething</ID>
              <ToolTip>Expected behavior or result</ToolTip>
              <Default>DoSomething</Default>
              <Function>
              </Function>
            </Literal>
            <Literal Editable="true">
              <ID>Condition</ID>
              <ToolTip>Initial conditions</ToolTip>
              <Default>Condition</Default>
              <Function>
              </Function>
            </Literal>
          </Declarations>
          <Code Language="csharp" Delimiter="$" Kind="method decl"><![CDATA[[Test]
    public void $TestMethod$_Should_$DoSomething$_When_$Condition$()
    {
        // Arrange
    
        // Act
    
        // Assert
    
    }]]></Code>
        </Snippet>
      </CodeSnippet>
    </CodeSnippets>
    

    Now, import it as we already did before.

    Then, head to your code, start typing testsync, and you’ll see the snippet come to life. The placeholders we
    defined are highlighted; you can fill in each one, hit Tab, and move to the next.

    Test sync snippet usage

    Bonus: how to view all the snippets defined in VS

    If you want to learn more about your IDE and the available snippets, you can have a look at the Snippet Explorer table.

    You can find it under View > Tools > Snippet Explorer.

    Snippet Explorer menu item

    Here, you can see all the snippets, their shortcuts, and the content of each snippet. You can also see the placeholders highlighted in green.

    List of snippets available in Snippet Explorer

    It’s always an excellent place to learn more about Visual Studio.

    Further readings

    As always, you can read more on Microsoft Docs. It’s a valuable resource, although I find it difficult to follow.

    🔗 Create a code snippet in Visual Studio | Microsoft docs

    I prefer working with the UI. If you want to have a look at the repo of the extension we used in this article, here’s the link:

    🔗 SnippetDesigner extension | GitHub

    This article first appeared on Code4IT 🐧

    Wrapping up

    There are some tips that may improve both the code quality and the developer productivity.

    If you want to enforce some structures or rules, add such snippets to your repository; when somebody joins your
    team, teach them how to import those snippets.

    I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛

    Happy coding!

    🐧






  • Create PDF in PHP Using FPDF.

    Create PDF in PHP Using FPDF.


    In this post, I will explain how to create a PDF file in PHP using the FPDF library. FPDF is an open-source PHP
    library for generating PDFs, and one of the best server-side options. It has rich features, from adding pages to
    creating grids and more.

    Example:

    <?php
    require('fpdf/fpdf.php');
    $pdf = new FPDF(); 
    $pdf->AddPage();
    $pdf->SetFont('Arial','B',16);
    $pdf->Cell(80,10,'Hello World From FPDF!');
    $pdf->Output('test.pdf','I'); // Send to browser and display
    ?>

    Output: the generated PDF opens in the browser, displaying "Hello World From FPDF!".

  • How to create Custom Attributes, and why they are useful | Code4IT

    How to create Custom Attributes, and why they are useful | Code4IT



    In C#, attributes are used to describe the meaning of some elements, such as classes, methods, and interfaces.

    I’m sure you’ve already used them before. Examples are:

    • the [Required] attribute when you define the properties of a model to be validated;
    • the [Test] attribute when creating Unit Tests using NUnit;
    • the [HttpGet] and the [FromBody] attributes used to define API endpoints.

    As you can see, these attributes do not specify behaviour; rather, they express the meaning of a specific element.

    In this article, we will learn how to create custom attributes in C# and some possible interesting usages of such custom attributes.

    Create a custom attribute by inheriting from System.Attribute

    Creating a custom attribute is pretty straightforward: you just need to create a class that inherits from System.Attribute.

    [AttributeUsage(AttributeTargets.Interface | AttributeTargets.Class | AttributeTargets.Method, AllowMultiple = true)]
    public class ApplicationModuleAttribute : Attribute
    {
        public Module BelongingModule { get; }

        public ApplicationModuleAttribute(Module belongingModule)
        {
            BelongingModule = belongingModule;
        }
    }

    public enum Module
    {
        Authentication,
        Catalogue,
        Cart,
        Payment
    }

    Ideally, the class name should end with the suffix -Attribute: this way, you can apply the attribute using the short form [ApplicationModule] rather than the whole class name, [ApplicationModuleAttribute]. In fact, C# resolves attribute names by this convention.

    Depending on the expected usage, a custom attribute can have one or more constructors and can expose one or more properties. In this example, I created a constructor that accepts an enum.
    I can then use this attribute by calling [ApplicationModule(Module.Cart)].

    Define where a Custom Attribute can be applied

    Have a look at the attribute applied to the class definition:

    [AttributeUsage(AttributeTargets.Interface | AttributeTargets.Class | AttributeTargets.Method, AllowMultiple = true)]
    

    This attribute tells us that ApplicationModule can be applied to interfaces, classes, and methods.

    System.AttributeTargets is an enum that lists all the elements to which an attribute can be attached. The AttributeTargets enum is defined as:

    [Flags]
    public enum AttributeTargets
    {
        Assembly = 1,
        Module = 2,
        Class = 4,
        Struct = 8,
        Enum = 16,
        Constructor = 32,
        Method = 64,
        Property = 128,
        Field = 256,
        Event = 512,
        Interface = 1024,
        Parameter = 2048,
        Delegate = 4096,
        ReturnValue = 8192,
        GenericParameter = 16384,
        All = 32767
    }
    

    Have you noticed? It’s actually a flags enum, whose values are powers of 2: this trick allows us to combine two or more values using the bitwise OR operator.
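    To see the trick in action, here’s a quick sketch (not from the original article) that combines two targets and inspects the result:

    using System;

    // Class = 4 and Method = 64: OR-ing them sets both bits, giving 68.
    AttributeTargets targets = AttributeTargets.Class | AttributeTargets.Method;

    Console.WriteLine((int)targets);                              // 68
    Console.WriteLine(targets.HasFlag(AttributeTargets.Class));   // True
    Console.WriteLine(targets.HasFlag(AttributeTargets.Enum));    // False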

    There’s another property to notice: AllowMultiple. When set to true, this property tells us that it’s possible to apply more than one attribute of the same type to the same element, like this:

    [ApplicationModule(Module.Cart)]
    [ApplicationModule(Module.Catalogue)]
    public class ItemDetailsService { }
    

    Or, if you want, you can inline them:

    [ApplicationModule(Module.Cart), ApplicationModule(Module.Catalogue)]
    public class ItemDetailsService { }
    

    Practical usage of Custom Attributes

    You can use custom attributes to declare which components or business areas an element belongs to.

    In the previous example, I defined an enum that lists all the business modules supported by my application:

    public enum Module
    {
        Authentication,
        Catalogue,
        Cart,
        Payment
    }
    

    This way, whenever I define an interface, I can explicitly tell which components it belongs to:

    [ApplicationModule(Module.Catalogue)]
    public interface IItemDetails
    {
        [ApplicationModule(Module.Catalogue)]
        string ShowItemDetails(string itemId);
    }
    
    [ApplicationModule(Module.Cart)]
    public interface IItemDiscounts
    {
        [ApplicationModule(Module.Cart)]
        bool CanHaveDiscounts(string itemId);
    }
    

    Not only that: I can have one single class implement both interfaces and mark it as related to both the Catalogue and the Cart areas.

    [ApplicationModule(Module.Cart)]
    [ApplicationModule(Module.Catalogue)]
    public class ItemDetailsService : IItemDetails, IItemDiscounts
    {
        [ApplicationModule(Module.Catalogue)]
        public string ShowItemDetails(string itemId) => throw new NotImplementedException();
    
        [ApplicationModule(Module.Cart)]
        public bool CanHaveDiscounts(string itemId) => throw new NotImplementedException();
    }
    

    Notice that I also explicitly enriched the two inner methods with the related attribute, even though it’s not strictly necessary.
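    These attributes are pure metadata, but you can also read them at runtime via reflection. Here’s a minimal sketch (the ModuleExplorer helper is my own, not part of the article) that finds every type marked with a given module:

    using System;
    using System.Linq;
    using System.Reflection;

    public static class ModuleExplorer
    {
        // Returns all the types in the assembly decorated with
        // [ApplicationModule] for the requested module.
        public static Type[] GetTypesForModule(Assembly assembly, Module module) =>
            assembly.GetTypes()
                .Where(type => type
                    .GetCustomAttributes<ApplicationModuleAttribute>()
                    .Any(attr => attr.BelongingModule == module))
                .ToArray();
    }

    For example, ModuleExplorer.GetTypesForModule(typeof(ItemDetailsService).Assembly, Module.Cart) returns ItemDetailsService (and the IItemDiscounts interface), since both are marked with [ApplicationModule(Module.Cart)].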

    Further readings

    As you noticed, AttributeTargets is a flags enum. Don’t know what flags enums are and how to define them? I’ve got you covered! I wrote two articles about enums, and you can find info about flags enums in both:

    🔗 5 things you should know about enums in C# | Code4IT

    and
    🔗 5 more things you should know about enums in C# | Code4IT

    This article first appeared on Code4IT 🐧

    There are some famous but not-so-obvious examples of attributes that you should know: DebuggerDisplay and InternalsVisibleTo.

    DebuggerDisplay can be useful for improving your debugging sessions.

    🔗 Simplify debugging with DebuggerDisplay attribute dotNET | Code4IT

    InternalsVisibleTo can be used to give external projects access to internal classes; for example, you can use that attribute when writing unit tests.

    🔗 Testing internal members with InternalsVisibleTo | Code4IT

    Wrapping up

    In this article, I showed you how to create custom attributes in C# to specify which modules a class or a method belongs to. This trick can be useful if you want to speed up the analysis of your repository: if you need to retrieve all the classes that are used for the Cart module (for example, because you want to move them to an external library), you can just search for Module.Cart across the repository and have a full list of elements.

    In particular, this approach can be useful for the exposed components, such as API controllers. Knowing that two or more modules use the same Controller can help you understand if a change in the API structure is necessary.

    Another good usage of this attribute is automatic documentation: you could create a tool that automatically lists all the interfaces, API endpoints, and classes, grouped by the module they belong to. The possibilities are infinite!
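    As a sketch of what such a tool could look like (again, my own assumption, not code from the article), you can group every decorated type by its module with a simple LINQ query:

    using System;
    using System.Linq;
    using System.Reflection;

    // Group every type decorated with [ApplicationModule] by its module
    // and print a tiny markdown-like report.
    Assembly assembly = typeof(ItemDetailsService).Assembly;

    var byModule = assembly.GetTypes()
        .SelectMany(type => type.GetCustomAttributes<ApplicationModuleAttribute>()
            .Select(attr => (attr.BelongingModule, type.Name)))
        .GroupBy(entry => entry.BelongingModule, entry => entry.Name);

    foreach (var group in byModule)
    {
        Console.WriteLine($"## {group.Key}");
        foreach (var typeName in group.Distinct())
            Console.WriteLine($"- {typeName}");
    }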

    I hope you enjoyed this article! Let’s keep in touch on LinkedIn, Twitter or BlueSky! 🤜🤛

    Happy coding!

    🐧





    Source link

  • Using WSL and Let’s Encrypt to create Azure App Service SSL Wildcard Certificates

    Using WSL and Let’s Encrypt to create Azure App Service SSL Wildcard Certificates



    There are many Let’s Encrypt automatic tools for Azure, but I also wanted to see if I could use certbot in WSL to generate a wildcard certificate for the Azure Friday website and then upload the resulting certificates to Azure App Service.

    Azure App Service ultimately needs a specific format, .PFX, that includes the full certificate path and all intermediates.

    Per the docs, App Service private certificates must meet the following requirements:

    • Exported as a password-protected PFX file, encrypted using triple DES.
    • Contains a private key at least 2048 bits long.
    • Contains all intermediate certificates and the root certificate in the certificate chain.

    If you have a PFX that doesn’t meet all these requirements, you can have Windows re-encrypt the file.
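    If you want to double-check that a PFX meets those requirements before uploading it, here’s a minimal C# sketch (my own addition, not from the original post; the file name and password are the placeholders used in the commands below) that loads the bundle and prints each certificate’s subject and RSA key size:

    using System;
    using System.Security.Cryptography.X509Certificates;

    // Load the password-protected PFX, including any intermediate certs.
    var certs = new X509Certificate2Collection();
    certs.Import("AzureFriday2023.pfx", "PASSWORDHERE", X509KeyStorageFlags.Exportable);

    foreach (X509Certificate2 cert in certs)
    {
        Console.WriteLine($"{cert.Subject} (expires {cert.NotAfter:d})");

        using var rsa = cert.GetRSAPrivateKey();
        if (rsa is not null)
            Console.WriteLine($"  RSA key size: {rsa.KeySize} bits (App Service needs >= 2048)");
    }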

    I use WSL and certbot to create the cert, then I import/export in Windows and upload the resulting PFX.

    Within WSL, install certbot:

    sudo apt update
    sudo apt install python3 python3-venv libaugeas0
    sudo python3 -m venv /opt/certbot/
    sudo /opt/certbot/bin/pip install --upgrade pip
    sudo /opt/certbot/bin/pip install certbot

    Then I generate the cert. You’ll get a nice text UI from certbot and update your DNS as a verification challenge. Note the line continuations in the commands below; adjust them so your email, domains, subdomains, and paths are all correct.

    sudo certbot certonly --manual --preferred-challenges=dns --email YOUR@EMAIL.COM \
        --server https://acme-v02.api.letsencrypt.org/directory \
        --agree-tos --manual-public-ip-logging-ok -d "azurefriday.com" -d "*.azurefriday.com"

    sudo openssl pkcs12 -export -out AzureFriday2023.pfx \
        -inkey /etc/letsencrypt/live/azurefriday.com/privkey.pem \
        -in /etc/letsencrypt/live/azurefriday.com/fullchain.pem

    I then copy the resulting file to my desktop (check your desktop path) so it’s now in the Windows world.

    sudo cp AzureFriday2023.pfx /mnt/c/Users/Scott/OneDrive/Desktop
    

    Now, from Windows, import the PFX, note the thumbprint, and export that cert.

    Import-PfxCertificate -FilePath "AzureFriday2023.pfx" -CertStoreLocation Cert:\LocalMachine\My `
        -Password (ConvertTo-SecureString -String 'PASSWORDHERE' -AsPlainText -Force) -Exportable

    Export-PfxCertificate -Cert Microsoft.PowerShell.Security\Certificate::LocalMachine\My\597THISISTHETHUMBNAILCF1157B8CEBB7CA1 `
        -FilePath 'AzureFriday2023-fixed.pfx' -Password (ConvertTo-SecureString -String 'PASSWORDHERE' -AsPlainText -Force)

    Then upload the cert to the Certificates section of your App Service, under Bring Your Own Cert.

    Custom Domains in Azure App Service

    Then under Custom Domains, click Update Binding and select the new cert (with the latest expiration date).


    The next step is to make this even more automatic, or to pick a more automated solution; but for now, I’ll worry about it again in September, and it solved my expensive wildcard certificate issue.














    Source link