Blog

  • Deconstructing the 35mm Website: A Look at the Process and Technical Details



    The Idea Behind the Project

    This project primarily serves as a technical demo and learning material. It began when I decided to start learning Blender. I followed a few tutorials, then decided to do a small project using it—so I chose to create the Canon F-1 camera!

    After that, I decided to export the project to Three.js to add some cool post-processing shader effects. I wanted to create a sketch effect similar to what I had seen in some repair guides.

    After spending a few hours experimenting with it, I decided to integrate it into a fully functional website featuring some cool shaders and 3D effects!

    In this article, I’m going to walk through some of the key features of the site and provide a technical breakdown, assuming you already have a basic or beginner-level understanding of Three.js and shaders.

    1. The Edge Detection Shader

    Three.js includes a built-in edge detection shader called SobelOperatorShader. Basically, it detects edges based on color contrast—it draws a line between two areas with a strong enough difference in color.

    To make my effect work the way I want, I need to assign a unique color to each area I want to highlight on my model. This way, Three.js will draw a line around those areas.
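
    For illustration, here's a minimal sketch of that material assignment (the part names and colors are hypothetical, and model stands for the loaded camera model):

    import * as THREE from "three"

    // Hypothetical mapping from part name to a flat, unique color
    const partColors = {
      body: 0xff4438,
      lens: 0x3866ff,
      shutterDial: 0x38ff66,
    }

    model.traverse((child) => {
      if (child.isMesh) {
        // Flat, unlit colors: the Sobel pass only needs color contrast, not shading
        child.material = new THREE.MeshBasicMaterial({
          color: partColors[child.name] ?? 0xffffff,
        })
      }
    })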

    Here’s my model with all the materials applied:

    This way, Three.js can accurately detect each area I want to highlight!

    As you can see, the lines are not all the same intensity—some are white, while others are light gray. This is because, by default, line intensity depends on contrast: edges with lower contrast appear with lighter lines. To fix this, I manually modified the post-processing shader to make all lines fully white, regardless of contrast.

    The shader can be found in:

    node_modules/three/examples/jsm/shaders/SobelOperatorShader.js

    I copied the contents of the fragment shader into a separate file so I could freely modify it.

    uniform sampler2D tDiffuse;
    uniform vec2 resolution;
    varying vec2 vUv;
    
    float sobel(sampler2D tDiffuse,vec2 texel)
    {
        // kernel definition (in glsl matrices are filled in column-major order)
    
        const mat3 Gx = mat3( -1, -2, -1, 0, 0, 0, 1, 2, 1 ); // x direction kernel
        const mat3 Gy = mat3( -1, 0, 1, -2, 0, 2, -1, 0, 1 ); // y direction kernel
    
        // fetch the 3x3 neighbourhood of a fragment
    
        // first column
    
        float tx0y0 = texture2D( tDiffuse, vUv + texel * vec2( -1, -1 ) ).r;
        float tx0y1 = texture2D( tDiffuse, vUv + texel * vec2( -1,  0 ) ).r;
        float tx0y2 = texture2D( tDiffuse, vUv + texel * vec2( -1,  1 ) ).r;
    
        // second column
    
        float tx1y0 = texture2D( tDiffuse, vUv + texel * vec2(  0, -1 ) ).r;
        float tx1y1 = texture2D( tDiffuse, vUv + texel * vec2(  0,  0 ) ).r;
        float tx1y2 = texture2D( tDiffuse, vUv + texel * vec2(  0,  1 ) ).r;
    
        // third column
    
        float tx2y0 = texture2D( tDiffuse, vUv + texel * vec2(  1, -1 ) ).r;
        float tx2y1 = texture2D( tDiffuse, vUv + texel * vec2(  1,  0 ) ).r;
        float tx2y2 = texture2D( tDiffuse, vUv + texel * vec2(  1,  1 ) ).r;
    
        // gradient value in x direction
    
        float valueGx = Gx[0][0] * tx0y0 + Gx[1][0] * tx1y0 + Gx[2][0] * tx2y0 +
            Gx[0][1] * tx0y1 + Gx[1][1] * tx1y1 + Gx[2][1] * tx2y1 +
            Gx[0][2] * tx0y2 + Gx[1][2] * tx1y2 + Gx[2][2] * tx2y2;
    
        // gradient value in y direction
    
        float valueGy = Gy[0][0] * tx0y0 + Gy[1][0] * tx1y0 + Gy[2][0] * tx2y0 +
            Gy[0][1] * tx0y1 + Gy[1][1] * tx1y1 + Gy[2][1] * tx2y1 +
            Gy[0][2] * tx0y2 + Gy[1][2] * tx1y2 + Gy[2][2] * tx2y2;
    
        // magnitude of the total gradient
    
        float G = sqrt( ( valueGx * valueGx ) + ( valueGy * valueGy ) );
    
        return G;
    }
    
    
    void main() {
    
        vec2 texel = vec2( 1.0 / resolution.x, 1.0 / resolution.y );
        
        // tDiffuse is the composer's render; pass the sampler itself to sobel()
        float G = sobel( tDiffuse, texel );
        G = G > 0.001 ? 1. : 0.;
            
        gl_FragColor = vec4(vec3(G),1.0);
    
        #include <colorspace_fragment>
    }

    What I’m doing here is moving all the edge detection logic into the Sobel function. Then, I pass the tDiffuse texture—which is the composer’s render—to this function.

    This way, I can modify the output of the edge detection shader before passing it back to the composer:

    float G = sobel( tDiffuse, texel );
    G = G > 0.001 ? 1. : 0.;

    G represents the intensity of the edge detection. It’s a single value because the lines are monochrome. G ranges from 0 to 1, where 0 means full black (no edge detected) and 1 means full white (strong contrast detected).

    As mentioned earlier, this value depends on the contrast. What I’m doing in the second line is forcing G to be 1 if it’s above a certain threshold (I chose 0.001, but you could pick a smaller value if you want).

    This way I can get all the edges to have the same intensity.

    Here’s how I’m applying the custom fragment shader to the Sobel Operator shader pass:

    import { SobelOperatorShader } from "three/addons/shaders/SobelOperatorShader.js"
    import { ShaderPass } from "three/addons/postprocessing/ShaderPass.js"
    // the custom fragment shader from the previous section (the path is illustrative)
    import sobelFragment from "./shaders/sobelFragment.glsl"
    
    
    export default class CannonF1 {
        constructor() {
            //....code
        }
    
        setupPostprocessing()
        {
    
            SobelOperatorShader.fragmentShader = sobelFragment
    
            this.effectSobel = new ShaderPass(SobelOperatorShader)
            this.effectSobel.uniforms["resolution"].value.x =
            window.innerWidth * Math.min(window.devicePixelRatio, 2)
            this.effectSobel.uniforms["resolution"].value.y =
            window.innerHeight * Math.min(window.devicePixelRatio, 2)
    
            // this.composer is an EffectComposer with a RenderPass added beforehand
            this.composer.addPass(this.effectSobel)
        }
    }

    2. The Mesh Highlight on Hover Effect

    Next, let’s take a look at the lens parts section.

    This is mainly achieved using a Three.js feature called a render target (WebGLRenderTarget).

    A render target is a buffer where the GPU draws pixels for a scene being rendered off-screen. It’s commonly used in effects like post-processing, where the rendered image is processed before being displayed on the screen.

    Basically, this allows me to render my scene twice per frame: once with only the highlighted mesh, and once without it.

    First, I set up the render targets:

    /* 
      ....Code 
    */
    
    createRenderTargets() {
        const sizes = {
          width:
            window.innerWidth * Math.ceil(Math.min(2, window.devicePixelRatio)),
          height:
            window.innerHeight * Math.ceil(Math.min(2, window.devicePixelRatio)),
        }
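        // rtParams (the WebGLRenderTarget options object) is defined elsewhere
        // in the class and omitted from this excerpt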
    
        this.renderTargetA = new THREE.WebGLRenderTarget(
          sizes.width,
          sizes.height,
          rtParams
        )
    
        this.renderTargetB = new THREE.WebGLRenderTarget(
          sizes.width,
          sizes.height,
          rtParams
        )
      }
    
    /* 
      ...Code 
    */

    Then, using the Three.js Raycaster, I can retrieve the uuid of the mesh being hovered over:

    onMouseMove(event: MouseEvent) {
        this.mouse.x = (event.clientX / window.innerWidth) * 2 - 1
        this.mouse.y = -(event.clientY / window.innerHeight) * 2 + 1
    
        this.raycaster.setFromCamera(this.mouse, this.camera)
        const intersects = this.raycaster.intersectObjects(this.scene.children)
        const target = intersects[0]
    
        if (target && "material" in target.object) {
          const targetMesh = intersects[0].object as THREE.Mesh
          this.cannonF1?.onSelectMesh(targetMesh.uuid)
        } else {
          this.cannonF1?.onSelectMesh()
        }
      }

    In the onSelectMesh method, I set the value of this.selectedMeshName to the name of the mesh group that contains the target mesh from the Raycaster (I’m using names to refer to groups of meshes).
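
    Here's a minimal sketch of what that method might look like (my own reconstruction; mesheUuidToName is the same uuid-to-group-name lookup used in the render loop below):

    onSelectMesh(uuid?: string) {
      if (uuid && this.mesheUuidToName[uuid]) {
        // Store the name of the mesh group that contains the hovered mesh
        this.selectedMeshName = this.mesheUuidToName[uuid]
      } else {
        // Nothing hovered: clear the selection
        this.selectedMeshName = null
      }
    }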

    This way, in my render loop, I can create two distinct renders:

    • One render (renderTargetA) with all the meshes except the hovered mesh
    • Another render (renderTargetB) with only the hovered mesh

    render() {
        // Render renderTargetA
        this.modelChildren.forEach((mesh) => {
          if (this.mesheUuidToName[mesh.uuid] === this.selectedMeshName) {
            mesh.visible = false
          } else {
            mesh.visible = true
          }
        })
    
        this.renderer.setRenderTarget(this.renderTargetA)
        this.renderer.render(this.scene, this.camera)
    
        // Render renderTargetB
        this.modelChildren.forEach((mesh) => {
          if (this.mesheUuidToName[mesh.uuid] === this.selectedMeshName) {
            mesh.visible = true
          } else {
            mesh.visible = false
          }
        })
        if (this.targetedMesh) {
          this.targetedMesh.children.forEach((child) => {
            child.visible = true
          })
        }
    
        this.renderer.setRenderTarget(this.renderTargetB)
        this.renderer.render(this.scene, this.camera)
    
        this.modelChildren.forEach((mesh) => {
          mesh.visible = false
        })    
    
        this.effectSobel.uniforms.tDiffuse1.value = this.renderTargetA.texture
        this.effectSobel.uniforms.tDiffuse2.value = this.renderTargetB.texture
    
        this.renderer.setRenderTarget(null)
      }

    This is what the renderTargetA render looks like:

    …and renderTargetB:

    As you can see, I’m sending both renders as texture uniforms to the effectSobel shader. The post-processing shader then “merges” these two renders into a single output.

    At this point, we have two renders of the scene, and the post-processing shader needs to decide which one to display. Initially, I thought of simply combining them by adding the two textures together, but that didn’t produce the correct result:

    What I needed was a way to hide the pixels of one render when they are “covered” by pixels from another render.

    To achieve this, I used the distance of each vertex from the camera. This meant I had to go through all the meshes in the model and modify their materials. However, since the mesh colors are important for the edge detection effect, I couldn’t change their colors.

    Instead, I used the alpha channel of each individual vertex to set the distance from the camera.
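
    The fragment shader needs the vertex position, so the vertex shader simply forwards it. Here's a minimal sketch (my own reconstruction, assuming the distance is measured in world space):

    varying vec3 vPosition;

    void main()
    {
        vec4 worldPosition = modelMatrix * vec4(position, 1.0);

        // forward the world-space position so the fragment shader
        // can measure its distance from the camera
        vPosition = worldPosition.xyz;

        gl_Position = projectionMatrix * viewMatrix * worldPosition;
    }

    And here is the fragment shader that writes that distance into the alpha channel: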

    #include <common>
    
    varying vec3 vPosition;
    uniform vec3 uColor;
    
    float normalizeRange(float value, float oldMin, float oldMax, float newMin, float newMax) {
        float normalized = (value - oldMin) / (oldMax - oldMin);
        
        return newMin + (newMax - newMin) * normalized;
    }
    
    void main()
    {
        float dist = distance(vPosition,cameraPosition);
    
        float l = luminance( uColor );
    
        gl_FragColor=vec4(vec3(l),normalizeRange(dist,0.,20.,0.,1.));
    
        #include <colorspace_fragment>
    }

    Here’s an explanation of this shader:

    • First, the luminance function is a built-in Three.js shader utility, pulled in via the #include <common> chunk. It's recommended to use it with the Sobel effect to improve edge detection results.
    • The uColor value represents the initial color of the mesh.
    • The dist value calculates the distance between the vertex position (passed from the vertex shader via a varying) and the camera, using the built-in cameraPosition variable in Three.js shaders.
    • Finally, I pass this distance through the alpha channel. Since the alpha value can’t exceed 1, I use a normalized version of the distance.

    And here is the updated logic for the post-processing shader:

    uniform sampler2D tDiffuse;
    uniform sampler2D tDiffuse1;
    uniform sampler2D tDiffuse2;
    uniform vec2 resolution;
    varying vec2 vUv;
    
    float sobel(sampler2D tDiffuse,vec2 texel)
    {
        //sobel operator
    }
    
    
    void main() {
    
        vec2 texel = vec2( 1.0 / resolution.x, 1.0 / resolution.y );
        
        vec4 t = texture2D(tDiffuse,vUv);
    
        vec4 t1 = texture2D(tDiffuse1,vUv);
        vec4 t2 = texture2D(tDiffuse2,vUv);     
    
        if(t1.a==0.)
        {
            t1.a = 1.;
        }
        if(t2.a==0.)
        {
            t2.a = 1.;
        }
    
    
        float G = sobel(tDiffuse1,texel);
        G= G > 0.001 ? 1. : 0.;
        float Gs = sobel(tDiffuse2,texel);
        Gs = Gs > 0.001 ? 1. : 0.;
        
        vec4 s1 = vec4(vec3(G),1.);
        
        vec4 s2 = vec4(vec3(Gs),1.);    
        
        vec4 sobelTexture = vec4(vec3(0.),1.);
    
    
        if(t1.a>t2.a)
        {
            sobelTexture = s2;       
        }    
        else{
            sobelTexture = s1;
        }    
    
            
        gl_FragColor = sobelTexture;
    
        #include <colorspace_fragment>
    }

    Now that the alpha channel of each texture contains the distance to the camera, I can simply compare the two values and display the render whose vertices are closer to the camera.

    3. The Film Roll Effect

    Next is the film roll component that moves and twists on scroll.

    This effect is achieved using only shaders: the component is a single plane with a shader material.

    All the data is sent to the shader through uniforms:

    export default class Film {  
      constructor() {
        //...code
      }
    
      createGeometry() {
        this.geometry = new THREE.PlaneGeometry(
          60,
          2,
          100,
          10
        )
      }
    
      createMaterial() {
        this.material = new THREE.ShaderMaterial({
          vertexShader,
          fragmentShader,
          side: THREE.DoubleSide,
          transparent: true,
          depthWrite: false,
          blending: THREE.CustomBlending,
          blendEquation: THREE.MaxEquation,
          blendSrc: THREE.SrcAlphaFactor,
          blendDst: THREE.OneMinusSrcAlphaFactor,
          uniforms: {
            uPlaneWidth: new THREE.Uniform(this.geometry.parameters.width),
            uRadius: new THREE.Uniform(2),
            uXZfreq: new THREE.Uniform(3.525),
            uYfreq: new THREE.Uniform(2.155),
            uOffset: new THREE.Uniform(0),
            uAlphaMap: new THREE.Uniform(
              window.preloader.loadTexture(
                "./alpha-map.jpg",
                "film-alpha-map",
                (texture) => {
                  texture.wrapS = THREE.RepeatWrapping
                  const { width, height } = texture.image
                  this.material.uniforms.uAlphaMapResolution.value =
                    new THREE.Vector2(width, height)
                }
              )
            ),
            //uImages: new THREE.Uniform(new THREE.Vector4()),
            uImages: new THREE.Uniform(
              window.preloader.loadTexture(
                "/film-texture.png",
                "film-image-texture",
                (tex) => {
                  tex.wrapS = THREE.RepeatWrapping
                }
              )
            ),
            uRepeatFactor: new THREE.Uniform(this.repeatFactor),
            uImagesCount: new THREE.Uniform(this.images.length * this.repeatFactor),
            uAlphaMapResolution: new THREE.Uniform(new THREE.Vector2()),
            uFilmColor: new THREE.Uniform(window.colors.orange1),
          },
        })
      }  
    
      createMesh() {
        this.mesh = new THREE.Mesh(this.geometry, this.material)
        this.scene.add(this.mesh)
      }
    }

    The main vertex shader uniforms are:

    • uRadius is the radius of the cylinder shape
    • uXZfreq is the frequency of the twists on the (X,Z) plane
    • uYfreq is a cylinder height factor
    • uOffset is the vertical offset of the roll when you scroll up and down

    Here is how they are used in the vertex shader:

    #define PI 3.14159265359
    
    uniform float uPlaneWidth;
    uniform float uXZfreq;
    uniform float uYfreq;
    varying vec2 vUv;
    uniform float uOffset;
    varying vec3 vPosition;
    uniform float uRadius;
    
    void main()
    {
        vec3 np = position;
        float theta = -(PI*np.x)/(uPlaneWidth*0.5);
    
    
        np.x=cos(uXZfreq*theta+uOffset)*uRadius;
        np.y+=theta*uYfreq;
        np.z=sin(uXZfreq*theta+uOffset)*uRadius;
        
        vec4 modelPosition = modelMatrix * vec4(np, 1.0);
    
        
        vec4 viewPosition = viewMatrix * modelPosition;
        vec4 projectedPosition = projectionMatrix * viewPosition;
        gl_Position = projectedPosition;    
    
    
        vUv=uv;
        vPosition=np;
    }

    As you can see, they are used to modify the initial position attribute to give the plane the shape of a cylinder. The modified position's X and Z components use uOffset in their phase; this uniform is linked to a ScrollTrigger timeline that produces the twist-on-scroll effect.

    import gsap from "gsap"
    import { ScrollTrigger } from "gsap/ScrollTrigger"

    // ScrollTrigger must be registered once before use
    gsap.registerPlugin(ScrollTrigger)

    const tl = gsap.timeline({
      scrollTrigger: {
        trigger: this.section,
        start: "top bottom",
        end: "bottom top",
        scrub: true,
        invalidateOnRefresh: true,        
      },
    })    
    
    tl.to(
      this.material.uniforms.uOffset,
      {
        value: 10,
        duration: 1,
      },
      0
    )

    Conclusion

    That’s it for the most part! Don’t feel frustrated if you don’t understand everything right away—I often got stuck for days on certain parts and didn’t know every technical detail before I started building.

    I learned so much from this project, and I hope you’ll find it just as useful!

    Thank you for reading, and thanks to Codrops for featuring me again!



    Source link

  • How to create custom snippets in Visual Studio 2022 | Code4IT



    A simple way to improve efficiency is knowing your IDE shortcuts. Let’s learn how to create custom ones to generate code automatically.


    One of the best tricks to boost productivity is knowing your tools.

    I’m pretty sure you’ve already used some predefined snippets in Visual Studio. For example, when you type ctor and hit Tab twice, VS automatically creates an empty constructor for the current class.

    In this article, we will learn how to create custom snippets: in particular, we will design a snippet that automatically creates a C# Unit Test method with some placeholders and predefined Arrange-Act-Assert blocks.

    Snippet Designer: a Visual Studio 2022 extension to add a UI to your placeholders

    Snippets are defined in XML-like files with .snippet extension. But we all know that working with XMLs can be cumbersome, especially if you don’t have a clear idea of the expected structure.

    Therefore, even if not strictly necessary, I suggest installing a VS2022 extension called Snippet Designer 2022.

    Snippet Designer 2022 in VS2022

    This extension, developed by Matthew Manela, can be found on GitHub, where you can view the source code.

    This extension gives you a UI to customize the snippet instead of manually editing the XML nodes. It allows you to customize the snippet, the related metadata, and even the placeholders.

    Create a basic snippet in VS2022 using a .snippet file

    As we saw, snippets are defined in a simple XML.

    In order to have your snippets immediately available in Visual Studio, I suggest you create those files in a specific VS2022 folder under the path \Documents\Visual Studio 2022\Code Snippets\Visual C#\My Code Snippets\.

    So, create an empty file, change its extension to .snippet, and save it to that location.

    Save snippet file under the My Code Snippets folder in VS2022

    Now, you can open Visual Studio (it's not necessary to open a project, but I'd recommend you do so). Then, head to File > Open, and open the file you saved under the My Code Snippets directory.

    Thanks to Snippet Designer, you will be able to see a nice UI instead of plain XML content.

    Have a look at how I filled in the several parts to create a snippet that generates a variable named x, assigns it a value, and then calls x++;

    Simple snippet, with related metadata and annotations

    Have a look at the main parts:

    • the body, which contains the snippet to be generated;
    • the top layer, where we specified:
      • the Snippet name: Int100; it’s the display name of the shortcut
      • the code language: C#;
      • the shortcut: int100; it’s the string you’ll type in that allows you to generate the expected snippet;
    • the bottom table, which contains the placeholders used in the snippet; more on this later;
    • the properties tab, on the sidebar: here is where you specify some additional metadata, such as:
      • Author, Description, and Help Url of the snippet, in case you want to export it;
      • the kind of snippet: possible values are MethodBody, MethodDecl and TypeDecl. However, this value is supported only in Visual Basic.

    Now, hit save and be ready to import it!

    Just for completeness, here’s the resulting XML:

    <?xml version="1.0" encoding="utf-8"?>
    <CodeSnippets xmlns="http://schemas.microsoft.com/VisualStudio/2005/CodeSnippet">
      <CodeSnippet Format="1.0.0">
        <Header>
          <SnippetTypes>
            <SnippetType>Expansion</SnippetType>
          </SnippetTypes>
          <Title>Int100</Title>
          <Author>
          </Author>
          <Description>
          </Description>
          <HelpUrl>
          </HelpUrl>
          <Shortcut>int100</Shortcut>
        </Header>
        <Snippet>
          <Code Kind="method decl" Language="csharp" Delimiter="$"><![CDATA[int x = 100;
    x++;]]></Code>
        </Snippet>
      </CodeSnippet>
    </CodeSnippets>
    

    Notice that the actual content of the snippet is defined in the CDATA block.

    Import the snippet in Visual Studio

    It’s time to import the snippet. Open the Tools menu item and click on Code Snippets Manager.

    Code Snippets Manager menu item, under Tools

    From here, you can import a snippet by clicking the Import… button. Given that we’ve already saved our snippet in the correct folder, we’ll find it under the My Code Snippets folder.

    Code Snippets Manager tool

    Now it’s ready! Open a C# class, and start typing int100. You’ll see our snippet in the autocomplete list.

    Int100 snippet is now visible in Visual Studio

    By hitting Tab twice, you’ll see the snippet’s content being generated.
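
    For reference, here's what gets generated; it's exactly the body defined in the CDATA block:

    int x = 100;
    x++;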

    How to use placeholders when defining snippets in Visual Studio

    Wouldn’t it be nice to have the possibility to define customizable parts of your snippets?

    Let’s see a real example: I want to create a snippet to create the structure of a Unit Tests method with these characteristics:

    • it already contains the AAA (Arrange, Act, Assert) sections;
    • the method name should follow the pattern “SOMETHING should DO STUFF when CONDITION”. I want to be able to replace the different parts of the method name by using placeholders.

    You can define placeholders using the $ symbol. You will then see the placeholders in the table at the bottom of the UI. In this example, the placeholders are $TestMethod$, $DoSomething$, and $Condition$. I also added a description to explain the purpose of each placeholder better.

    TestSync snippet definition and metadata

    The XML looks like this:

    <?xml version="1.0" encoding="utf-8"?>
    <CodeSnippets xmlns="http://schemas.microsoft.com/VisualStudio/2005/CodeSnippet">
      <CodeSnippet Format="1.0.0">
        <Header>
          <SnippetTypes>
            <SnippetType>Expansion</SnippetType>
          </SnippetTypes>
          <Title>Test Sync</Title>
          <Author>Davide Bellone</Author>
          <Description>Scaffold the AAA structure for synchronous NUnit tests</Description>
          <HelpUrl>
          </HelpUrl>
          <Shortcut>testsync</Shortcut>
        </Header>
        <Snippet>
          <Declarations>
            <Literal Editable="true">
              <ID>TestMethod</ID>
              <ToolTip>Name of the method to be tested</ToolTip>
              <Default>TestMethod</Default>
              <Function>
              </Function>
            </Literal>
            <Literal Editable="true">
              <ID>DoSomething</ID>
              <ToolTip>Expected behavior or result</ToolTip>
              <Default>DoSomething</Default>
              <Function>
              </Function>
            </Literal>
            <Literal Editable="true">
              <ID>Condition</ID>
              <ToolTip>Initial conditions</ToolTip>
              <Default>Condition</Default>
              <Function>
              </Function>
            </Literal>
          </Declarations>
          <Code Language="csharp" Delimiter="$" Kind="method decl"><![CDATA[[Test]
    public void $TestMethod$_Should_$DoSomething$_When_$Condition$()
    {
        // Arrange
    
        // Act
    
        // Assert
    
    }]]></Code>
        </Snippet>
      </CodeSnippet>
    </CodeSnippets>
    

    Now, import it as we already did before.

    Then, head to your code, start typing testsync, and you'll see the snippet come to life. The placeholders we defined are highlighted. You can then fill in these placeholders, hit Tab, and move to the next one.
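
    With the default placeholder values, the generated method looks like this:

    [Test]
    public void TestMethod_Should_DoSomething_When_Condition()
    {
        // Arrange

        // Act

        // Assert

    }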

    Test sync snippet usage

    Bonus: how to view all the snippets defined in VS

    If you want to learn more about your IDE and the available snippets, you can have a look at the Snippet Explorer table.

    You can find it under View > Tools > Snippet Explorer.

    Snippet Explorer menu item

    Here, you can see all the snippets, their shortcuts, and the content of each snippet. You can also see the placeholders highlighted in green.

    List of snippets available in Snippet Explorer

    It’s always an excellent place to learn more about Visual Studio.

    Further readings

    As always, you can read more on Microsoft Docs. It’s a valuable resource, although I find it difficult to follow.

    🔗 Create a code snippet in Visual Studio | Microsoft docs

    I prefer working with the UI. If you want to have a look at the repo of the extension we used in this article, here’s the link:

    🔗 SnippetDesigner extension | GitHub

    This article first appeared on Code4IT 🐧

    Wrapping up

    There are some tips that may improve both the code quality and the developer productivity.

    If you want to enforce some structures or rules, add such snippets in your repository; when somebody joins your team, teach them how to import those snippets.

    I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛

    Happy coding!

    🐧





    Source link


  • Developer Spotlight: MisterPrada | Codrops



    Background

    I’m just about to turn 30, and over the years I’ve come to many realizations that I’d like to share as echoes of my journey. I’ve been consciously programming for about 14 years, and I’ve been using Windows since childhood—battling the infamous “blue screen of death.”

    From a young age, I knew who I wanted to be—a programmer. In my childhood, nothing was more exciting than a computer. However, my academic skills weren’t strong enough to get into university easily. I was never particularly gifted in any subject; my grades were average or worse.

    Somehow, I managed to get accepted into a university for an engineering program related to programming. I tried hard, but nothing worked—I ended up copying others just to pass exams. After some time, I realized it was time to get serious. I had no special talents, no head start—just the need for hard work. I wrote my first function, my first loop over a two-dimensional array, my first structure, my first doubly linked list—and I realized I liked it. I really, really liked the fact that I was starting to make progress.

    I didn’t stop copying completely, but I began writing my own programs. We studied C++, C#, Assembly, databases, and lots of things I couldn’t yet apply in real life. So I bought a book on PHP, JS, and MySQL and realized I could build websites using WordPress and other popular CMS platforms at the time like Joomla, Drupal, etc. And you know what? That made money—and it was insanely cool. I just took on any work I could find. Since I had spent all of university copying code, I found it really easy to understand and adapt other people’s code.

    Years passed, and I was building simple websites—tweaking templates downloaded from torrents, grabbing CSS styles from random websites, and so on. Something like these:

    Eventually, I realized that my growth had stalled and I needed to act fast. I started reading various books, trying to improve my skills and learn new, trending technologies. This mostly broadened my technical horizons—I understood more, copied more, and tried harder to boost my self-esteem.

    At one point, I felt confident, thinking I was pretty good and could handle anything. But then something happened during the final year of university. A classmate told me he had gone for an interview at a major company, and they asked him to implement a binary tree. I was shocked—I had no idea what a binary tree was, how to build one, or why I was even supposed to know it.

    Honestly, it hit me hard. I started questioning everything—was I even a real programmer? Maybe I was third, fourth, or even fifth-rate at best, especially with my modest PHP/JS skill set…

    No matter how tough things got, I never felt like this wasn’t for me. I never thought of quitting or doing something else. I just accepted that I wasn’t the best, not the smartest, and unlikely to be in Steve Jobs’ dream dev team. And you know what? Something strange happened.

    One day, while playing my favorite game, World of Warcraft, I decided I wanted to become a cheater. And it wasn’t just a casual thought or curiosity—it became a full-blown obsession. I was just a regular programmer with average web development knowledge, yet I decided to write a cheat, dive into hacking, and understand how it all worked.

    For a whole year, I obsessively studied the C++ source code of the game—despite not really using C++ at all. I explored how the server worked, dug into Assembly, network traffic, data packets, and hex code. I read books on cybersecurity and anything even remotely related. It felt like an endless world of discovery. I could spend months trying to understand things that didn’t make sense to me at first—occasionally achieving small victories, but victories nonetheless.

    I started building a toolkit with programs like IDA Pro, xDbg, and even something as simple as https://hexed.it/, which let me quickly modify binary files.

    After achieving real success—writing my first memory manipulation programs for protected software—I realized that what really makes a difference is a mix of luck, hard work, and a genuine passion for what you’re doing. And I had all of those things.

    That became a kind of guiding principle for my further development. Sure, I’m not the most talented or naturally gifted, but I began to understand that even without full knowledge, with persistence and effort, you can achieve goals that seem impossible at first—or even at second or third glance.

    Getting to Work

    I got a job at an outsourcing company, and honestly, I felt confident thanks to my freelance commercial experience. At work, I handled whatever tasks the client needed—it didn’t matter whether I already knew how to do it or not. My goals were simple: learn more and earn money. What did I work on? Pretty much everything, except I always thought of myself as more of a logic guy, and frontend wasn’t really my thing. It was easier for me to deploy and configure a server than to write 10 lines of CSS.

    So I focused mostly on backend logic, building systems, and I’d often hand off frontend tasks to others. Still, I was always afraid of losing touch with those skills, so I made an effort to study Vue, React, Angular, and various frontend libraries—just to understand the logic behind it.

    I read a lot of books, mostly on JavaScript, DevOps, and hacking. At work, I grew horizontally, gaining experience based on the clients’ needs. In my personal time, I was deeply interested in hacking and reverse engineering—not because of any grand ambition, but simply because I loved it. I saw myself in it, because I was good at it. I definitely had some luck—I could click randomly through code and somehow land on exactly what I needed. It’s comforting to know that not everything is hopeless.

    Years went by, and as backend developers and DevOps engineers, we often felt invisible. Over time, the huge amount of backend code I wrote stopped bringing the same satisfaction. There were more systems, more interfaces, and less recognition—because no one really sees what you do behind the scenes. So why not switch to frontend? Well, I just hate CSS. And building simple landing pages or generic websites with nothing unique? That’s just not interesting. I need something bold and impressive—something that grabs me the way watching Dune does. Game development? Too complex, and I never had the desire to make games.

    But then, at work, I was given a task to create a WebAR experience for a client. It required at least some basic 3D knowledge, which I didn’t have. So I dove in blindly and started building the app using 8thWall. That’s when I discovered A-Frame, which was super easy and incredibly fun—seeing results so different from anything I had done before. When A-Frame became limiting, I started using Three.js directly on commercial projects. I had zero understanding of vector math, zero 3D modeling experience (like in Blender), but I still managed to build something. Some things worked, some didn’t—but in the end, the client was happy.

    After creating dozens of such projects and nearly a hundred backend projects, I eventually grew tired of both. Out of boredom, I started reading books on Linux Bash, Kubernetes, WebAssembly, Security, and code quality—good and bad.

    All of this only expanded my technical perspective. I didn’t become a hero or some programming guru, but I felt like I was standing alone at the summit of my own mountain. There was this strange emptiness—an aimless desire to keep learning, and yet I kept doing it day after day. Some topics I studied still haven’t revealed their meaning to me, while others only made sense years later, or proved useful when I passed that knowledge on to others.

    Over the years, I became a team lead—not because I was naturally suited for it, but because there was simply no one else. I took on responsibility, began teaching others what to do, even though I wasn’t always sure what was right or wrong—I just shared my logic and experience.

    Alongside trends, I had to learn CI/CD and Docker to solve tasks more efficiently—tasks that used to be handled differently. And you know what? I really learned something from this period: that most tools are quite similar, and you don’t need to master all of them to solve real business problems. In my mind, they became just that—tools.

    All you need is to read the documentation, run a few basic examples, and you’re good to go. I’m simply not one of those people who wants to stick to one technology for life and squeeze value out of it forever. That’s not me. For over 5 years, I built 70–80 websites using just WordPress and Laravel—covering everything from custom themes and templating systems to multisites and even deep dives into the WordPress core. I worked with some truly awful legacy code that I wouldn’t wish on anyone.

    Eventually, I decided to move on. The developers I worked with came and went, and that cycle never ended—it’s still ongoing to this day. Then came my “day X.” I was given a project I couldn’t turn down. It involved GLSL shaders. I had to create a WebAR scene with a glass beverage placed on a table. The challenge was that it was a glass cup, and around version 130 of Three.js, this couldn’t be done using a simple material. The client provided ready-made code written in Three.js with custom shaders. I looked at it and saw nothing but math—math I couldn’t understand. It was way too complex. The developer who created it had written a shader for glass, water, ice, and other elements. My task was to integrate this scene into WebAR. I was lucky enough to get a call with the developer who built it, and I asked what seemed like a straightforward question at the time:

    (Me) How did you manage to create such effects using pure math? Can you actually visualize it all in your head?

    (Shader Developer) Yeah, it looks complicated, but if you start writing shaders, borrowing small snippets from elsewhere and understanding how different effects work, eventually you start to look at that mathematical code and visualize those parts in your head.

    His answer blew me away. I realized—this guy is brilliant. And I honestly hadn’t seen anyone cooler. I barely understood anything about what he’d done—it was all incredibly hard to grasp. Back then, I didn’t have ChatGPT or anything like it to help. I started searching for books on the subject, but there were barely any. It was like this secret world where everyone knew everything but never shared. And if they did, it was in dry, unreadable math-heavy documentation that someone like me just couldn’t digest. At that point, I thought maybe I was simply too weak to write anything like that, and I went back to what I was doing before.

    The Beginning of the Creative Developer Journey

    About a year later, I came across this website, which struck me with its minimalistic and stylish design—totally my vibe. Without hesitation, I bought the course by Bruno Simon, not even digging into the details. If he said he’d teach shaders, I was all in. My obsession was so intense that I completed the course in just two weeks, diving into every single detail. Thanks to my background, most of the lessons were just a pleasant refresher—but the shader sections truly changed my life.

    So, I finished the course. What now? I didn’t yet have real-world projects that matched the new skills I had gained, so I decided to just start coding and releasing my own work. I spent a long time thinking about what my first project should be. Being a huge fan of the Naruto universe, I chose to dedicate my first creative project to my favorite character—Itachi.

    I already had some very basic skills in Blender, and of course, there was no way I could create a model like that myself. Luckily, I stumbled upon one on Sketchfab and managed to download it (haha). I built the project almost the way I envisioned it, though I lacked the experience for some finer details. Still, I did everything I could at the time. God rays were already available in the Three.js examples, so creating a project like that was pretty straightforward. And man, it was so cool—the feeling of being able to build something immersive was just amazing.

    Next, I decided to create something in honor of my all-time favorite game, which I’ve been playing for over 15 years—World of Warcraft.

    In this project, the real challenge for me was linking the portal shader to sound, as well as creating particle motion along Bézier curves. But by this point, I already had ChatGPT—and my capabilities skyrocketed. This is my favorite non-commercial project. Still, copying and modifying something isn’t the same as creating it from scratch.

    The shaders I used here were pieced together from different sources—I borrowed some from Bruno Simon’s projects, and in other cases, I reverse-engineered other projects just to figure out what I could replicate instead of truly engaging my own thinking. It was like always taking the path of least resistance. Ironically, reverse engineering a Webpack-compiled site often takes more time than simply understanding the problem yourself. But that was my default mode—copy, modify, move on.

    For this particular project, it wasn’t a big deal, but I’ve had projects in the past that got flagged for copyright issues. I knew everything lived on the frontend and could be broken down and analyzed bit by bit—especially shaders. You might not know this, but in Safari on a MacBook, you can use developer tools to view all the shaders used on a site and even modify them in real time. Naturally, I used every trick I knew to reach my goals.

    That shader developer’s comment—about being able to read math and visualize it—kept echoing in my mind. After Bruno’s course, I started to believe he might have been right. I was beginning to understand fragments of shader code, even if not all of it. I ended up watching every single video on the YouTube channel “The Art Of Code”.

    After watching those videos, I started to notice my growth in writing shaders. I began to see, understand, and even visualize what I was writing. So I decided to create a fragment shader based on my own experience:

    Along my shader-writing journey, I came across someone everyone in the shader world knows—Inigo Quilez. Man, what an absolute legend. There’s this overwhelming feeling that you’ll never reach his level. His understanding of mathematics and computer graphics is just on another planet compared to mine. For a long time, that thought really got to me—20 years ago, he was creating things I still can’t do today, despite programming for so long. But looking back, I realized something: some of the people I once admired, I’ve actually surpassed in some ways—not because I aimed to, but simply by moving forward every day. And I came to believe that if I keep going, maybe I’ll reach my own peak—one where my ideas can be truly useful to others.

    So here I am, moving forward, and creating what I believe is a beautiful shader of the aurora.

    I realized that I could now create shaders based on models made in Blender—and do it with a full understanding of what’s going on. I was finally capable of building something entirely on my own.

    Just in case, I’ll leave my Shadertoy profile here.

    So what’s next? I dove back into Three.js and began trying to apply everything I had learned to create something new. You can find a list of those projects here.

    I bought and completed all the courses by Simon Dev. By then, the shader course wasn’t anything groundbreaking for me anymore, but the math course was something I really needed. I wanted to deepen my understanding of how to apply math in practice. I also played through this game, which demonstrates how vector math works—highly recommended for anyone struggling with the concept. It really opened my eyes to things I hadn’t understood before.

    I became obsessed with making sure I didn’t miss anything shared by the people who helped shape my knowledge. I watched 100% of the videos on his YouTube channel and those of other creators who were important to me in this field. And to this day, I keep learning, studying other developers’ techniques, and growing in the field of computer graphics.

    Interesting Projects

    I really enjoy working with particles—and I also love motion blur. I came up with an approach where each particle blurs in the direction of its movement based on its velocity. I left some empty space on the plane where the particle is drawn so the blur effect wouldn’t get cut off.
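
    I can't show the exact implementation here, but here's a rough sketch of the idea (all names are hypothetical): the sprite's fragment shader averages several samples offset along the particle's velocity, smearing the particle in its direction of motion.

    // Hypothetical GLSL sketch: smear a particle sprite along its velocity
    uniform sampler2D uSprite;
    varying vec2 vUv;
    varying vec2 vVelocity; // per-particle velocity projected onto the sprite plane

    void main()
    {
        const int SAMPLES = 8;
        vec4 color = vec4(0.0);

        for (int i = 0; i < SAMPLES; i++) {
            // Offsets from -0.5 to +0.5 along the velocity direction; the sprite
            // leaves empty space at the plane's borders so the smear isn't clipped
            float t = float(i) / float(SAMPLES - 1) - 0.5;
            color += texture2D(uSprite, vUv + vVelocity * t);
        }

        gl_FragColor = color / float(SAMPLES);
    }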

    Using particles and distance-based blur effects in commercial projects.

    After watching Dune, I decided to play around with sound.

    I really enjoy playing with light sources.

    Or even creating custom light sources using TSL.

    I consider this project my most underrated one. I’m a huge fan of the Predator and Alien universes. I did borrow the plasma shader from CodePen, but honestly, that’s not the most important detail here. At the time I made this project, Three.js had just introduced a new material property called AlphaHash, which allowed me to create an awesome laser effect. It really looks great. Maybe no one notices such small details, but for me, it was an achievement to come up with that solution right as the new version of Three.js was released. That’s where my luck comes in—I had no idea how I’d implement the laser at the start of the project and thought, “Oh well, I’ll figure something out.” And luckily, the engine developers delivered exactly what I needed just in time.

    One of my favorite projects, and it always brings me joy.

    You may have already noticed that I don’t build full frontend solutions with lots of interfaces and traditional layout work—that just doesn’t interest me, so I don’t do it. In commercial development, I focus on solving niche problems—problems other developers won’t spend hours watching videos to figure out. I create concepts that later get integrated into projects. You might have already seen some 3D scenes or visual effects I’ve built—without even knowing it. A lot of development happens through two, three, or even four layers of hands. That’s why, sometimes, creating something for Coca-Cola is more realistic than making a simple online store for a local business.

    And what have I learned from this journey?

    • Never give up. Be like Naruto—better to fail 100 times than never try at all.
    • I’m not a saint of a developer—I forget things just like you, I use ChatGPT, I get lazy, and sometimes, in trying to do more than I’m capable of, I give in to the temptation of borrowing code. And yes, that has sometimes ended badly for me.
    • I assure you, even top developers—the ones who seem untouchably brilliant—also borrow or adapt code. I’ve reverse-engineered projects and clearly seen others use code they didn’t write, even while they rake in thousands of views and win awwwards. Meanwhile, the original authors stay invisible. That’s why I now try to focus more on creating things that are truly mine, to grow the ability to create rather than just consume. And to you, I say—do whatever helps you get better. The takeaway for me is this: share what you’ve made today, because tomorrow it might be irrelevant. And believe me, if someone really wants what you’ve built, they’ll take it anyway—and you won’t even know.
    • Even if your job makes you build projects that don’t excite you, don’t assume it’s someone else’s job to teach you. You have to sit down, start learning on your own, and work toward what truly inspires you.
    • Don’t be afraid to forget things—remembering something isn’t the same as learning it from scratch, especially with ChatGPT around.
    • See new technologies as tools to reach your goals. Don’t fear them—use everything, including AI, as long as it helps you move forward. Making mistakes is the most normal thing that can happen to you.
    • Nothing is impossible—it’s just a matter of time you personally need to spend to understand something that currently feels incomprehensible.
    • When using ChatGPT, think critically and read what it outputs. Don’t blindly copy and paste code—I’ve done that, and it cost me a lot of time. If I had just thought it through, I could’ve solved it in five minutes.
    • If new technologies seem absurd to you, maybe you’re starting to age—or refusing to accept change. Try to shake yourself up and think critically. If you don’t do it, someone else will—and they’ll leave you behind.
    • Hard work and determination beat talent (Inigo Quilez is still out of reach for now), but the price is your time.
    • In the pursuit of your own achievements, don’t forget about your family, loved ones, and friends—otherwise your 30s will fly by even faster than mine did.
    • The more techniques you learn in digital art, the more you’ll want to understand math and physics—and many things you once found boring may suddenly gain new meaning and purpose.
    • Ideas that you create yourself may become more valuable to you than everything you’ve ever studied.
    • Programming books are often so huge that you don’t even want to buy them—but you don’t have to read them cover to cover. Learn to filter information. Don’t worry about skipping something—if you miss it, GPT can explain it later. So feel free to skip the chapters you don’t need right now or won’t retain anyway.
    • In the past, it was important to know what a certain technology could do and how to use it by memory or with references. Today, it’s enough to simply know what’s possible—documentation and ChatGPT can help you figure out the rest. Don’t memorize things that will be irrelevant or replaced by new tech in a few days.
    • Start gradually learning TSL—the node-based system will make it easier to create materials designed by artists in Blender. (Year 2025)
    • Don’t be afraid to dig into the core to read or even modify something. The people who build the tools you use are just people too, and they write readable code. Take Three.js, for example—when you dive into the material declarations, the hierarchy becomes much clearer, something that wasn’t obvious to me when I first started learning Three.js. Or with TSL—even though the documentation is still weak, looking at function declarations often reveals helpful comments that make it easier to understand how to use different features.

    To be honest, I didn’t really want to write about myself—but Manoela pushed me, so I decided to help. And you know, helping people often comes back around as luck 🍀—and that always comes in handy later!

    Alright, I won’t bore you any longer—just take a look at my cat ♥️



    Source link

  • Is Random.GetItems the best way to get random items in C# 12? | Code4IT



    You have a collection of items. You want to retrieve N elements randomly. Which alternatives do we have?


    One of the most common operations when dealing with collections of items is to retrieve a subset of these elements taken randomly.

    Before .NET 8, the most common way to retrieve random items was to order the collection using a random value and then take the first N items of the now sorted collection.

    From .NET 8 on, we have a new method in the Random class: GetItems.

    So, should we use this method or stick to the previous version? Are there other alternatives?

    For the sake of this article, I created a simple record type, CustomRecord, which just contains two properties.

    public record CustomRecord(int Id, string Name);
    

    I then stored a collection of such elements in an array. This article’s final goal is to find the best way to retrieve a random subset of such items. Spoiler alert: it all depends on your definition of best!

    Method #1: get random items with Random.GetItems

    Starting from .NET 8, released in 2023, we now have a new method belonging to the Random class: GetItems.

    There are three overloads:

    public T[] GetItems<T>(T[] choices, int length);
    public T[] GetItems<T>(ReadOnlySpan<T> choices, int length);
    public void GetItems<T>(ReadOnlySpan<T> choices, Span<T> destination);
    

    We will focus on the first overload, which accepts an array of items (choices) in input and returns an array of size length.

    We can use it as such:

    CustomRecord[] randomItems = Random.Shared.GetItems(Items, TotalItemsToBeRetrieved);
    

    Simple, neat, efficient. Or is it?

    Method #2: get the first N items from a shuffled copy of the initial array

    Another approach is to shuffle the whole initial array using Random.Shuffle, which takes an array as input and shuffles its items in place.

    Random.Shared.Shuffle(Items);
    CustomRecord[] randomItems = Items.Take(TotalItemsToBeRetrieved).ToArray();
    

    If you need to preserve the initial order of the items, you should create a copy of the initial array and shuffle only the copy. You can do this by using this syntax:

    CustomRecord[] copy = [.. Items];
    

    If you just need some random items and don’t care about the initial array, you can shuffle it without making a copy.

    Once we’ve shuffled the array, we can pick the first N items to get a subset of random elements.
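
    Putting the two steps together (this is the copy-preserving variant, matching the WithShuffle benchmark below):

    CustomRecord[] copy = [.. Items];
    Random.Shared.Shuffle(copy);
    CustomRecord[] randomItems = copy.Take(TotalItemsToBeRetrieved).ToArray();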

    Method #3: order by Guid, then take N elements

    Before .NET 8, one of the most used approaches was to order the whole collection by a random value, usually a newly generated Guid, and then take the first N items.

    var randomItems = Items
        .OrderBy(_ => Guid.NewGuid()) // THIS!
        .Take(TotalItemsToBeRetrieved)
        .ToArray();
    

    This approach works fine but has the disadvantage that it instantiates a new Guid value for every item in the collection, which is an expensive memory-wise operation.

    Method #4: order by Number, then take N elements

    Another approach was to generate a random number used as a discriminator to order the collection; then, again, we used to get the first N items.

    var randomItems = Items
        .OrderBy(_ => Random.Shared.Next()) // THIS!
        .Take(TotalItemsToBeRetrieved)
        .ToArray();
    

    This approach is slightly better because generating a random integer is way faster than generating a new Guid.

    Benchmarks of the different operations

    It’s time to compare the approaches.

    I used BenchmarkDotNet to generate the reports and ChartBenchmark to represent the results visually.

    Let’s see how I structured the benchmark.

    [MemoryDiagnoser]
    public class RandomItemsBenchmark
    {
        [Params(100, 10_000, 1_000_000)]
        public int Size;
    
        private CustomRecord[] Items;
        private int TotalItemsToBeRetrieved;
        private CustomRecord[] Copy;
    
        [IterationSetup]
        public void Setup()
        {
            var ids = Enumerable.Range(0, Size).ToArray();
            Items = ids.Select(i => new CustomRecord(i, $"Name {i}")).ToArray();
            Copy = [.. Items];
    
            TotalItemsToBeRetrieved = Random.Shared.Next(Size);
        }
    
        [Benchmark(Baseline = true)]
        public void WithRandomGetItems()
        {
            CustomRecord[] randomItems = Random.Shared.GetItems(Items, TotalItemsToBeRetrieved);
            _ = randomItems.Length;
        }
    
        [Benchmark]
        public void WithRandomGuid()
        {
            CustomRecord[] randomItems = Items
                .OrderBy(_ => Guid.NewGuid())
                .Take(TotalItemsToBeRetrieved)
                .ToArray();
    
            _ = randomItems.Length;
        }
    
        [Benchmark]
        public void WithRandomNumber()
        {
            CustomRecord[] randomItems = Items
                .OrderBy(_ => Random.Shared.Next())
                .Take(TotalItemsToBeRetrieved)
                .ToArray();
    
            _ = randomItems.Length;
        }
    
        [Benchmark]
        public void WithShuffle()
        {
            CustomRecord[] copy = [.. Items];
    
            Random.Shared.Shuffle(copy);
            CustomRecord[] randomItems = copy.Take(TotalItemsToBeRetrieved).ToArray();
    
            _ = randomItems.Length;
        }
    
        [Benchmark]
        public void WithShuffleNoCopy()
        {
            Random.Shared.Shuffle(Copy);
            CustomRecord[] randomItems = Copy.Take(TotalItemsToBeRetrieved).ToArray();
    
            _ = randomItems.Length;
        }
    }
    

    We are going to run the benchmarks on arrays with different sizes. We will start with a smaller array with 100 items and move to a bigger one with one million items.

    We generate the initial array of CustomRecord instances for every iteration and store it in the Items field. Then, we randomly choose the number of items to get from the Items array and store it in the TotalItemsToBeRetrieved field.

    We also generate a copy of the initial array at every iteration; this way, we can run Random.Shared.Shuffle without modifying the original array.

    Finally, we define the body of the benchmarks using the implementations we saw before.

    Notice: I marked the benchmark for the GetItems method as a baseline, using [Benchmark(Baseline = true)]. This way, we can easily see the results ratio for the other methods compared to this specific method.

    When we run the benchmark, we can see this final result (for simplicity, I removed the Error, StdDev, and Median columns):

    Method              Size       Mean            Ratio  Allocated   Alloc Ratio
    WithRandomGetItems  100        6.442 us        1.00   424 B       1.00
    WithRandomGuid      100        39.481 us       6.64   3576 B      8.43
    WithRandomNumber    100        22.219 us       3.67   2256 B      5.32
    WithShuffle         100        7.038 us        1.16   1464 B      3.45
    WithShuffleNoCopy   100        4.254 us        0.73   624 B       1.47
    WithRandomGetItems  10000      58.401 us       1.00   5152 B      1.00
    WithRandomGuid      10000      2,369.693 us    65.73  305072 B    59.21
    WithRandomNumber    10000      1,828.325 us    56.47  217680 B    42.25
    WithShuffle         10000      180.978 us      4.74   84312 B     16.36
    WithShuffleNoCopy   10000      156.607 us      4.41   3472 B      0.67
    WithRandomGetItems  1000000    15,069.781 us   1.00   4391616 B   1.00
    WithRandomGuid      1000000    319,088.446 us  42.79  29434720 B  6.70
    WithRandomNumber    1000000    166,111.193 us  22.90  21512408 B  4.90
    WithShuffle         1000000    48,533.527 us   6.44   11575304 B  2.64
    WithShuffleNoCopy   1000000    37,166.068 us   4.57   6881080 B   1.57

    By looking at the numbers, we can notice that:

    • GetItems is the most performant method, both in execution time and in memory allocation;
    • ordering by Guid.NewGuid is the worst approach: it’s roughly 7 to 66 times slower than GetItems, and it allocates up to ~59x the memory;
    • sorting by a random number is somewhat better, but still about 4 to 56 times slower than GetItems, and it allocates several times more memory;
    • shuffling the array in place and taking the first N elements is on par with GetItems for small arrays and 4-6x slower for large ones; if you also have to preserve the original array, you pay extra allocations to create the cloned copy.

    Here’s the chart with the performance values. Notice that, for better readability, I used a Log10 scale.

    Results comparison for all executions

    If we move our focus to the array with one million items, we can better understand the impact of choosing one approach instead of the other. Notice that here I used a linear scale, since the values are within the same order of magnitude.

    The purple line represents the memory allocation in bytes.

    Results comparison for one-million-items array

    So, should we use GetItems all over the place? Well, no! Let me tell you why.

    The problem with Random.GetItems: repeated elements

    There’s a huge caveat with the GetItems method: it picks items with replacement, so the result can contain duplicates. If you need to get N items without duplicates, GetItems is not the right choice.

    Here’s how you can demonstrate it.

    First, create an array of 100 distinct items. Then, using Random.Shared.GetItems, retrieve 100 items.

    The final array will always contain 100 entries, but some of them are very likely to be duplicates.

    using System.Text; // StringBuilder lives here; it's not part of the default implicit usings

    int[] source = Enumerable.Range(0, 100).ToArray();

    StringBuilder sb = new StringBuilder();

    for (int i = 1; i <= 200; i++)
    {
        // GetItems samples with replacement, so the 100 returned items can repeat;
        // the HashSet keeps only the distinct values.
        HashSet<int> ints = Random.Shared.GetItems(source, 100).ToHashSet();
        sb.AppendLine($"run-{i}, {ints.Count}");
    }

    var finalCsv = sb.ToString();
    

    To check the number of distinct elements, I put the resulting array in a HashSet<int>. Since the source array has exactly 100 items, the size of the HashSet also gives us the percentage of unique values.

    If the HashSet size is exactly 100, it means that GetItems retrieved each element from the original array exactly once.

    For simplicity, I formatted the result in CSV format so that I could generate plots with it.

    Unique values percentage returned by GetItems

    As you can see, on average we get about 65% unique items and 35% duplicates, close to the theoretical expectation of 1 - (1 - 1/100)^100 ≈ 63.4% unique values when sampling 100 times with replacement.

    Further readings

    I used the Enumerable.Range method to generate the initial items.

    I wrote an article explaining how to use it, what to watch out for when using it, and more.

    🔗 LINQ’s Enumerable.Range to generate a sequence of consecutive numbers | Code4IT

    This article first appeared on Code4IT 🐧

    Wrapping up

    You should not blindly replace your current way of getting random items from an array with Random.GetItems. Well, unless you are okay with having duplicates.

    If you need unique values, you should rely on other methods, such as Random.Shared.Shuffle.

    All in all, always remember to validate your assumptions by running experiments on the methods you are not sure you can trust!

    I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛

    Happy coding!

    🐧





    Source link

  • No Visuals, No Time, No Problem: Launching OXI Instruments / ONE MKII in 2 Weeks

    No Visuals, No Time, No Problem: Launching OXI Instruments / ONE MKII in 2 Weeks


    Two weeks. No 3D Visuals. No panic.
    We built the OXI ONE MKII website using nothing but structure and type. All to meet the deadline for the product launch and its debut in Berlin.

    The Challenge

    Creating a website for the launch of a new flagship product is already a high-stakes task; doing it in under 14 days, with no finished renders, raises the bar even higher. When OXI Instruments approached us, the ONE MKII was entering its final development stage. The product was set to premiere in Berlin, and the website had to be live by that time: no extensions, no room for delay. At the same time, there was no finalized imagery, no video, and no product renders ready for use.

    We had to:

    • Build a bold, functional website without relying on visual assets
    • Reflect the character and philosophy of the ONE MKII — modular, live, expressive
    • Craft a structure that would be clear to musicians and intuitive across devices
    • Work in parallel with the OXI team, adjusting to changes and updates in real time

    This wasn’t just about speed. It was about designing clarity under pressure, with a strict editorial mindset, where every word, margin, and interaction had to work harder than usual. These are the kinds of things you’d never guess as an outside observer or a potential customer. But constraints like these are truly a test of resilience.

    The Approach

    If you’ve seen other websites we’ve launched with various teams, you’ll notice they often include 3D graphics or other rich visual layers. This project, however, was a rare exception.

    It was crucial to make the right call early on and to hit expectations spot-on during the concept stage. A couple of wrong turns wouldn’t be fatal, but too many missteps could easily lead to missing the deadline and delivering an underwhelming result.

    We focused on typography, photography, and rhythm. Fortunately, we were able to shape the art direction for the photos in parallel with the design process. Big thanks to Candace Janee (OXI project manager) who coordinated between me, the photographers, and everyone involved to quickly arrange compositions, lighting setups, and other details for the shoot.

    Another layer of complexity was planning the broader interface and future platform in tandem with this launch. While we were only releasing two core pages at this stage, we knew the site would eventually evolve into a full eCommerce platform. Every design choice had to consider the long game, from homepage and support pages to product detail layouts and checkout flows. That also meant thinking ahead about how systems like Webflow, WordPress, WooCommerce, and email automation would integrate down the line.

    Typography

    With no graphics to lean on, typography had to carry more weight than usual, not just in terms of legibility, but in how it communicates tone, energy, and brand attitude. We opted for a bold, editorial rhythm. Headlines drive momentum across the layout, while smaller supporting text helps guide the eye without clutter.

    We selected both typefaces from the same designer, Wei Huang, a type designer from Australia: Work Sans for headlines and body copy, and Fragment Mono for supporting labels and detailed descriptions. The two fonts complement each other well and are completely free to use, which allowed us to rely on Google Fonts without worrying about file formats or load sizes.

    CMS System

    Even though we were only launching two pages initially, the CMS was built with a full content ecosystem in mind. Product specs, updates, videos, and future campaigns all had a place in the structure. Instead of hardcoding static blocks, we built flexible content types that could evolve alongside the product line.

    The idea was simple: avoid rework later. The CMS wasn’t just a backend; it was the foundation of a scalable platform. Whether we were thinking of Webflow’s CMS collections or potential integrations with WordPress and WooCommerce, the goal was to create a system that was clean, extensible, and future-ready.

    Sketches. Early explorations.

    I really enjoy the concept phase. It’s the moment where different directions emerge and key patterns begin to form, whether it’s alignment, a unique sense of ornamentation, asymmetry, or something else entirely. This stage is where the visual language starts to take shape.

    Here’s a look at some of the early concepts we explored. The OXI website could’ve turned out very differently.

    We settled on a dark version of the design partly due to the founder’s preference, and partly because the brand’s core colors (which were off-limits for changes) worked well with it. Additionally, cutting out the device from photos made it easier to integrate visuals into the layout and mask any imperfections.

    Rhythm & Layout

    When planning the rhythm and layout, it’s important not to go overboard with creativity. As designers, we often want to add that “wow” factor, but sometimes the business just doesn’t need it.

    The target audience, people in the music world, already get their visual overload during performances by their favorite artists. But when they’re shopping for a new device, they’re not looking for spectacle. They want to see the product. The details. The specs. Everything that matters.

    All of it needs to be delivered clearly and accessibly. We chose the simplest approach: alternating between center-aligned and left-aligned sections, giving us the flexibility to structure the layout intuitively. Photography helps break up the technical content, and icons quickly draw attention to key features. People don’t read, they scan. We designed with that in mind.

    A few shots highlighting some of my favorite sections.

    Result

    The results were genuinely rewarding. The team felt a boost in motivation, and the brand’s audience and fans immediately noticed the shift, highlighting how the update pushed OXI in a more professional direction.

    According to my information, the pre-orders for the device sold out in less than a week. It’s always a great feeling when you’re proud of the outcome, the team is happy, and the audience responds positively. That’s what matters most.

    Looking Ahead / Part Two

    This was just the beginning. The second part of the project (a full eCommerce experience) is currently in the works. The core will expand, but the principles will remain the same.

    I hope you’ll find the full relaunch of OXI Instruments just as exciting. Stay tuned for updates.





    Source link

  • [ENG] Improving Your Code Coverage | Microsoft Visual Studio YouTube channel



    [ENG] Improving Your Code Coverage | Microsoft Visual Studio YouTube channel



    Source link

  • The Quick Guide to Dijkstra's Algorithm



    The Quick Guide to Dijkstra's Algorithm



    Source link

  • Building a Physics-Based Character Controller with the Help of AI

    Building a Physics-Based Character Controller with the Help of AI


    Creating a third-person character controller involves more than just moving an object around a 3D scene. Realistic movement, grounded physics, responsive jumping, and animation blending are essential for a polished feel. This article explores how these elements can be assembled — not through traditional manual coding, but through AI-assisted development using Bolt.new, a browser-based tool that generates web code from natural language prompts, backed by the Claude 3.7 Sonnet and Claude 3.5 Sonnet LLMs. It provides a lightweight environment where developers can focus on describing functionality rather than writing boilerplate.

    For this character controller, Bolt handled tasks like setting up physics, integrating animations, and managing input systems, making it easier to test ideas and iterate quickly without switching between tools or writing everything from scratch.

    If you’re curious to learn more, check out this article on Codrops, which also explores the platform’s capabilities and showcases another real-world project built entirely with AI.

    The final project is powered by React Three Fiber, Three.js, and Rapier, and showcases how a designer or developer can create complex, interactive 3D experiences by guiding AI — focusing on behavior and structure rather than syntax.

    Step 1: Setting Up Physics with a Capsule and Ground

    The character controller begins with a simple setup: a capsule collider for the player and a ground plane to interact with. Rapier, a fast and lightweight physics engine built in WebAssembly, handles gravity, rigid body dynamics, and collisions. This forms the foundation for player movement and world interaction.

    The capsule shape was chosen for its stability when sliding across surfaces and climbing over small obstacles — a common pattern in real-time games.
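    To make this concrete, here is a minimal sketch of that setup using @react-three/fiber and @react-three/rapier. The component names, collider dimensions, and gravity value are illustrative assumptions, not the article’s actual code:

    import { Canvas } from '@react-three/fiber'
    import { Physics, RigidBody, CapsuleCollider } from '@react-three/rapier'

    function Player() {
      return (
        // Locking rotations keeps the capsule upright while it slides over obstacles
        <RigidBody colliders={false} lockRotations position={[0, 2, 0]}>
          <CapsuleCollider args={[0.5, 0.35]} /> {/* [halfHeight, radius] */}
        </RigidBody>
      )
    }

    export function Scene() {
      return (
        <Canvas>
          <Physics gravity={[0, -9.81, 0]}>
            <Player />
            {/* A fixed rigid body acts as the static ground plane */}
            <RigidBody type="fixed">
              <mesh rotation={[-Math.PI / 2, 0, 0]}>
                <planeGeometry args={[50, 50]} />
                <meshStandardMaterial />
              </mesh>
            </RigidBody>
          </Physics>
        </Canvas>
      )
    }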

    Step 2: Real-Time Tuning with a GUI

    To enable rapid iteration and balance gameplay feel, a visual GUI was introduced (using Leva.js). This panel exposes parameters such as:

    • Player movement speed
    • Jump force
    • Gravity scale
    • Follow camera offset
    • Debug toggles

    By integrating this directly into the experience, developers can tune the controller live without needing to edit or recompile code, speeding up testing and design decisions.
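    A hypothetical version of such a panel, sketched with Leva’s useControls hook; the parameter names, ranges, and defaults below are assumptions for illustration:

    import { useControls } from 'leva'

    export function useTuningControls() {
      // Each entry shows up as a live slider/toggle in the Leva panel
      return useControls('Character', {
        moveSpeed: { value: 5, min: 0, max: 20, step: 0.1 },
        jumpForce: { value: 8, min: 0, max: 30, step: 0.5 },
        gravityScale: { value: 1, min: 0, max: 5, step: 0.1 },
        cameraOffset: { value: [0, 3, -6] }, // follow camera offset (x, y, z)
        debug: false, // e.g. show collider wireframes
      })
    }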

    Step 3: Ground Detection with Raycasting

    A raycast is used to detect whether the player is grounded. This simple yet effective check prevents the character from jumping mid-air or triggering multiple jumps in sequence.

    The logic is executed on every frame, casting a ray downward from the base of the capsule collider. When contact is confirmed, the jump input is enabled. This technique also allows smooth transitions between grounded and falling states in the animation system.
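    As a rough sketch of that per-frame check, assuming @react-three/rapier; the ray length is an assumption tied to the capsule dimensions, and a real setup would also filter out the player’s own collider:

    import { useRef, type RefObject } from 'react'
    import { useFrame } from '@react-three/fiber'
    import { useRapier, type RapierRigidBody } from '@react-three/rapier'

    export function useGrounded(bodyRef: RefObject<RapierRigidBody>) {
      const { world, rapier } = useRapier()
      const grounded = useRef(false)

      useFrame(() => {
        const body = bodyRef.current
        if (!body) return
        // Cast a short ray straight down from the capsule's position
        const ray = new rapier.Ray(body.translation(), { x: 0, y: -1, z: 0 })
        // In practice you'd exclude the player's own collider (e.g. via collision groups)
        const hit = world.castRay(ray, 1.1, true)
        grounded.current = hit !== null
      })

      return grounded
    }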

    Step 4: Integrating a Rigged Character with Animation States

    The visual character uses a rigged GLB model via Mixamo, with three key animations: Idle, Run, and Fall. These are integrated as follows:

    • The GLB character is attached as a child of the capsule collider
    • The animation state switches dynamically based on velocity and grounded status
    • Transitions are handled via animation blending for a natural feel

    This setup keeps the visuals in sync with physics, while preserving modular control over the physical capsule.
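    A simplified sketch of that state switching, assuming @react-three/drei’s useGLTF and useAnimations; the file path, clip names, and speed threshold are assumptions:

    import { useEffect } from 'react'
    import { useGLTF, useAnimations } from '@react-three/drei'

    export function Character({ speed, grounded }: { speed: number; grounded: boolean }) {
      const { scene, animations } = useGLTF('/character.glb') // hypothetical asset path
      const { actions } = useAnimations(animations, scene)

      // Derive the clip from physics state: airborne wins, then velocity
      const state = !grounded ? 'Fall' : speed > 0.1 ? 'Run' : 'Idle'

      useEffect(() => {
        const action = actions[state]
        action?.reset().fadeIn(0.2).play() // cross-fade for smooth blending
        return () => { action?.fadeOut(0.2) }
      }, [state, actions])

      return <primitive object={scene} />
    }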

    Step 5: World Building and Asset Integration

    The environment was arranged in Blender, then exported as a single .glb file and imported into the Bolt.new project scene. This approach allows for efficient scene composition while keeping asset management simple.

    For web, using .glb keeps geometry and textures bundled together. To maintain performance, it’s recommended to keep textures at 1024×1024 resolution or other square power-of-two sizes (e.g. 256, 512, 2048). This ensures optimal GPU memory usage and faster load times across devices.
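    As an illustration, loading that single exported file and making it collidable might look like this; the file path and the choice of trimesh colliders are assumptions:

    import { useGLTF } from '@react-three/drei'
    import { RigidBody } from '@react-three/rapier'

    export function World() {
      const { scene } = useGLTF('/environment.glb') // hypothetical path to the Blender export
      return (
        // A fixed trimesh body turns the whole environment into static collision geometry
        <RigidBody type="fixed" colliders="trimesh">
          <primitive object={scene} />
        </RigidBody>
      )
    }

    useGLTF.preload('/environment.glb')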

    Special thanks to KayLousberg for the low-poly 3D kit used for prototyping.

    Step 6: Cross-Platform Input Support

    The controller was designed to work seamlessly across desktop, mobile, and gamepad platforms — all built using AI-generated logic through Bolt.

    Gamepad support was added using the Gamepad API, allowing players to plug in a controller and play with analog input.

    On desktop, the controller uses standard keyboard input (WASD or arrow keys) and mouse movement for camera control.

    On mobile, AI-generated code enabled an on-screen joystick and jump button, making the game fully touch-compatible.

    All input types control the same physics-driven character, ensuring consistent behavior across devices — whether you’re playing on a laptop, touchscreen, or game controller.

    This cross-platform support was implemented entirely through natural language prompts, showcasing how AI can translate high-level intent into working input systems.
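    For the gamepad path specifically, a minimal polling sketch using the standard Gamepad API could look like this; the deadzone value and button mapping are assumptions:

    // Poll once per frame; returns null when no gamepad is connected
    export function readGamepad() {
      const pad = navigator.getGamepads()[0]
      if (!pad) return null

      // Ignore tiny stick drift around the center
      const deadzone = 0.15
      const x = Math.abs(pad.axes[0]) > deadzone ? pad.axes[0] : 0
      const y = Math.abs(pad.axes[1]) > deadzone ? pad.axes[1] : 0

      return {
        move: { x, y },               // left stick drives movement
        jump: pad.buttons[0].pressed, // bottom face button ("A" / Cross)
      }
    }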

    The Role of AI in the Workflow

    What makes this controller unique isn’t the mechanics — it’s the process. Every system was generated by AI through descriptive prompts, allowing the developer to work more like a creative director than a traditional engineer.

    AI handled the boilerplate, the physics setup, the animation switching logic — all based on clear creative goals. This opens new doors for prototyping and interactive design, where iteration speed matters more than syntax.

    This character controller demo includes:

    • Capsule collider with physics
    • Grounded detection via raycast
    • State-driven animation blending
    • GUI controls for tuning
    • Environment interaction with static/dynamic objects
    • Cross-platform input support

    It’s a strong starting point for creating browser-based games, interactive experiences, or prototyping new ideas — all with the help of AI.

    Check out the full game built using this setup as a base: 🎮 Demo Game

    Thanks for following along — have fun building 😊



    Source link

  • Prim's Algorithm: Quick Guide with Examples



    Prim's Algorithm: Quick Guide with Examples



    Source link