Author: post Bina

  • How to Animate WebGL Shaders with GSAP: Ripples, Reveals, and Dynamic Blur Effects




    In this tutorial, we’ll explore how to bring motion and interactivity to your WebGL projects by combining GSAP with custom shaders. Working with the Dev team at Adoratorio Studio, I’ll guide you through four GPU-powered effects, from ripples that react to clicks to dynamic blurs that respond to scroll and drag.

    We’ll start by setting up a simple WebGL scene and syncing it with our HTML layout. From there, we’ll move step by step through more advanced interactions, animating shader uniforms, blending textures, and revealing images through masks, until we turn everything into a scrollable, animated carousel.

    By the end, you’ll understand how to connect GSAP timelines with shader parameters to create fluid, expressive visuals that react in real time and form the foundation for your own immersive web experiences.

    Creating the HTML structure

    As a first step, we will set up the page using HTML.

    We will create a container without specifying its dimensions, allowing it to extend beyond the page width. Then, we will set the main container’s overflow property to hidden, as the page will later be made interactive through the GSAP Draggable and ScrollTrigger plugins.

    <main>
      <section class="content">
        <div class="content__carousel">
          <div class="content__carousel-inner-static">
            <div class="content__carousel-image">
              <img src="/images/01.webp" alt="" role="presentation">
              <span>Lorem — 001</span>
            </div>
            <div class="content__carousel-image">
              <img src="/images/04.webp" alt="" role="presentation">
              <span>Ipsum — 002</span>
            </div>
            <div class="content__carousel-image">
              <img src="/images/02.webp" alt="" role="presentation">
              <span>Dolor — 003</span>
            </div>
            ...
          </div>
        </div>
      </section>
    </main>

    We’ll style all this and then move on to the next step.

    Sync between HTML and Canvas

    We can now begin integrating Three.js into our project by creating a Stage class responsible for managing all 3D engine logic. Initially, this class will set up a renderer, a scene, and a camera.

    We will pass an HTML node as the first parameter, which will act as the container for our canvas.
    Next, we will update the CSS and the main script to create a full-screen canvas that resizes responsively and renders on every GSAP frame.

    export default class Stage {
      constructor(container) {
        this.container = container;
    
        this.DOMElements = [...this.container.querySelectorAll('img')];
    
        this.renderer = new WebGLRenderer({
          powerPreference: 'high-performance',
          antialias: true,
          alpha: true,
        });
        this.renderer.setPixelRatio(Math.min(1.5, window.devicePixelRatio));
        this.renderer.setSize(window.innerWidth, window.innerHeight);
        this.renderer.domElement.classList.add('content__canvas');
    
        this.container.appendChild(this.renderer.domElement);
    
        this.scene = new Scene();
    
        const { innerWidth: width, innerHeight: height } = window;
        this.camera = new OrthographicCamera(-width / 2, width / 2, height / 2, -height / 2, -1000, 1000);
        this.camera.position.z = 10;
      }
    
      resize() {
        // Update camera props to fit the canvas size
        const { innerWidth: screenWidth, innerHeight: screenHeight } = window;
    
        this.camera.left = -screenWidth / 2;
        this.camera.right = screenWidth / 2;
        this.camera.top = screenHeight / 2;
        this.camera.bottom = -screenHeight / 2;
        this.camera.updateProjectionMatrix();
    
        // Update also planes sizes
        this.DOMElements.forEach((image, index) => {
          const { width: imageWidth, height: imageHeight } = image.getBoundingClientRect();
          this.scene.children[index].scale.set(imageWidth, imageHeight, 1);
        });
    
        // Update the render using the window sizes
        this.renderer.setSize(screenWidth, screenHeight);
      }
    
      render() {
        this.renderer.render(this.scene, this.camera);
      }
    }

    Back in our main.js file, we’ll first handle the stage’s resize event. After that, we’ll synchronize the renderer’s requestAnimationFrame (RAF) with GSAP by using gsap.ticker.add, passing the stage’s render function as the callback.

    // Update resize with the stage resize
    function resize() {
      ...
      stage.resize();
    }
    
    // Add render cycle to gsap ticker
    gsap.ticker.add(stage.render.bind(stage));
    
    <style>
    .content__canvas {
      position: absolute;
      top: 0;
      left: 0;
      width: 100vw;
      height: 100svh;
    
      z-index: 2;
      pointer-events: none;
    }
    </style>
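
    For completeness, here’s a minimal sketch of how the Stage instance itself might be created and wired up in main.js; the import path, the container selector, and the debounce helper are assumptions rather than part of the original code:

    import { gsap } from 'gsap';
    import Stage from './Stage'; // hypothetical path to the class defined above
    
    // The carousel element acts as the container that receives the canvas
    const stage = new Stage(document.querySelector('.content__carousel'));
    
    // Keep the renderer and camera in sync with the viewport
    function resize() {
      stage.resize();
    }
    window.addEventListener('resize', debounce(resize)); // debounce() is an assumed utility
    
    // Render on every GSAP tick, as shown above
    gsap.ticker.add(stage.render.bind(stage));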

    It’s now time to load all the images included in the HTML. For each image, we will create a plane and add it to the scene. To achieve this, we’ll update the class by adding two new methods:

    setUpPlanes() {
      this.DOMElements.forEach((image) => {
        this.scene.add(this.generatePlane(image));
      });
    }
    
    generatePlane(image) {
      const loader = new TextureLoader();
      const texture = loader.load(image.src);
    
      texture.colorSpace = SRGBColorSpace;
      const plane = new Mesh(
        new PlaneGeometry(1, 1),
        new MeshStandardMaterial(),
      );
    
      return plane;
    }

    We can then call setUpPlanes() within the constructor of our Stage class.
    The result should resemble the following, depending on the camera’s z-position or the planes’ placement—both of which can be adjusted to fit our specific needs.
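
    For reference, the constructor change is a one-liner added at the end of the setup shown earlier:

    constructor(container) {
      // ... renderer, scene and camera setup from before ...
    
      this.setUpPlanes();
    }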

    The next step is to position the planes precisely to correspond with the location of their associated images and update their positions on each frame. To achieve this, we will implement a utility function that converts screen space (CSS pixels) into world space, leveraging the Orthographic Camera, which is already aligned with the screen.

    const getWorldPositionFromDOM = (element, camera) => {
      const rect = element.getBoundingClientRect();
    
      const xNDC = (rect.left + rect.width / 2) / window.innerWidth * 2 - 1;
      const yNDC = -((rect.top + rect.height / 2) / window.innerHeight * 2 - 1);
    
      const xWorld = xNDC * (camera.right - camera.left) / 2;
      const yWorld = yNDC * (camera.top - camera.bottom) / 2;
    
      return new Vector3(xWorld, yWorld, 0);
    };

    render() {
      this.renderer.render(this.scene, this.camera);

      // For each plane and image, update the plane position to match the DOM element position on the page
      this.DOMElements.forEach((image, index) => {
        this.scene.children[index].position.copy(getWorldPositionFromDOM(image, this.camera));
      });
    }

    By hiding the original DOM carousel, we can now display only the images as planes within the canvas. Create a simple class extending ShaderMaterial and use it in place of MeshStandardMaterial for the planes.

    const plane = new Mesh(
      new PlaneGeometry(1, 1),
      new PlanesMaterial(),
    );
    ...
    
    import { ShaderMaterial } from 'three';
    import baseVertex from './base.vert?raw';
    import baseFragment from './base.frag?raw';
    
    export default class PlanesMaterial extends ShaderMaterial {
      constructor() {
        super({
          vertexShader: baseVertex,
          fragmentShader: baseFragment,
        });
      }
    }
    
    // base.vert
    varying vec2 vUv;
    
    void main() {
      gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
      vUv = uv;
    }
    
    // base.frag
    varying vec2 vUv;
    
    void main() {
      gl_FragColor = vec4(vUv.x, vUv.y, 0.0, 1.0);
    }

    We can then replace the shader output with texture sampling based on the UV coordinates, passing the texture to the material and shaders as a uniform.

    ...
    const plane = new Mesh(
      new PlaneGeometry(1, 1),
      new PlanesMaterial(texture),
    );
    ...
    
    export default class PlanesMaterial extends ShaderMaterial {
      constructor(texture) {
        super({
          vertexShader: baseVertex,
          fragmentShader: baseFragment,
          uniforms: {
            uTexture: { value: texture },
          },
        });
      }
    }
    
    // base.frag
    varying vec2 vUv;
    
    uniform sampler2D uTexture;
    
    void main() {
      vec4 diffuse = texture2D(uTexture, vUv);
      gl_FragColor = diffuse;
    }

    Click on the images for a ripple and coloring effect

    This step breaks down the creation of an interactive grayscale transition effect, emphasizing the relationship between JavaScript (using GSAP) and GLSL shaders.

    Step 1: Instant Color/Grayscale Toggle

    Let’s start with the simplest version: clicking the image makes it instantly switch between color and grayscale.

    The JavaScript (GSAP)

    At this stage, GSAP’s role is to act as a simple “on/off” switch, so let’s create a GSAP Observer to monitor the mouse click interaction:

    this.observer = Observer.create({
      target: document.querySelector('.content__carousel'),
      type: 'touch,pointer',
      onClick: e => this.onClick(e),
    });

    Here is how it works, step by step:

    • Click Detection: We use an Observer to detect a click on our plane.
    • State Management: A boolean flag, isBw (is Black and White), is toggled on each click.
    • Shader Update: We use gsap.set() to instantly change a uniform in our shader. We’ll call it uGrayscaleProgress.
      • If isBw is true, uGrayscaleProgress becomes 1.0.
      • If isBw is false, uGrayscaleProgress becomes 0.0.
    onClick(e) {
      // `intersection` is the Raycaster hit for the clicked plane (see Step 3)
      if (intersection) {
        const { material, userData } = intersection.object;
    
        userData.isBw = !userData.isBw;
    
        gsap.set(material.uniforms.uGrayscaleProgress, {
          value: userData.isBw ? 1.0 : 0.0
        });
      }
    }

    The Shader (GLSL)

    The fragment shader is very simple. It receives uGrayscaleProgress and uses it as a switch.

    uniform sampler2D uTexture;
    uniform float uGrayscaleProgress; // Our "switch" (0.0 or 1.0)
    varying vec2 vUv;
    
    vec3 toGrayscale(vec3 color) {
      float gray = dot(color, vec3(0.299, 0.587, 0.114));
      return vec3(gray);
    }
    
    void main() {
      vec3 originalColor = texture2D(uTexture, vUv).rgb;
      vec3 grayscaleColor = toGrayscale(originalColor);
      
       vec3 finalColor = mix(originalColor, grayscaleColor, uGrayscaleProgress);
       gl_FragColor = vec4(finalColor, 1.0);
    }

    Step 2: Animated Circular Reveal

    An instant switch is boring. Let’s make the transition a smooth, circular reveal that expands from the center.

    The JavaScript (GSAP)

    GSAP’s role now changes from a switch to an animator.
    Instead of gsap.set(), we use gsap.to() to animate uGrayscaleProgress from 0 to 1 (or 1 to 0) over a set duration. This sends a continuous stream of values (0.0, 0.01, 0.02, …) to the shader.

    gsap.to(material.uniforms.uGrayscaleProgress, {
      value: userData.isBw ? 1 : 0,
      duration: 1.5,
      ease: 'power2.inOut'
    });

    The Shader (GLSL)

    The shader now uses the animated uGrayscaleProgress to define the radius of a circle.

    void main() {
      // 1. Calculate the distance of this pixel from the plane's center.
      float dist = distance(vUv, vec2(0.5));

      // 2. Create a circular mask.
      float mask = smoothstep(uGrayscaleProgress - 0.1, uGrayscaleProgress, dist);
    
      // 3. Mix the colors based on the mask's value for each pixel.
      vec3 finalColor = mix(originalColor, grayscaleColor, mask);
      gl_FragColor = vec4(finalColor, 1.0);
    }

    How smoothstep works here: Pixels where dist is less than uGrayscaleProgress - 0.1 get a mask value of 0. Pixels where dist is greater than uGrayscaleProgress get a value of 1. In between, it’s a smooth transition, creating the soft edge.
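
    If it helps to see the math outside of GLSL, here is the same smoothstep logic written as a small JavaScript helper; this is purely illustrative and not part of the tutorial code:

    // Classic smoothstep: 0 below edge0, 1 above edge1, smooth Hermite curve in between
    function smoothstep(edge0, edge1, x) {
      const t = Math.min(Math.max((x - edge0) / (edge1 - edge0), 0), 1);
      return t * t * (3 - 2 * t);
    }
    
    // Example with uGrayscaleProgress = 0.4 and a soft edge of 0.1:
    smoothstep(0.3, 0.4, 0.25); // 0   -> pixel keeps its original color
    smoothstep(0.3, 0.4, 0.40); // 1   -> pixel is fully grayscale
    smoothstep(0.3, 0.4, 0.35); // 0.5 -> pixel sits on the soft edge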

    Step 3: Originating from the Mouse Click

    The effect is much more engaging if it starts from the exact point of the click.

    The JavaScript (GSAP)

    We need to tell the shader where the click happened.

    • Raycasting: We use a Raycaster to find the precise (u, v) texture coordinate of the click on the mesh.
    • uMouse Uniform: We add a uniform vec2 uMouse to our material.
    • GSAP Timeline: Before the animation starts, we use .set() on our GSAP timeline to update the uMouse uniform with the intersection.uv coordinates.
    if (intersection) {
      const { material, userData } = intersection.object;
    
      material.uniforms.uMouse.value = intersection.uv;
    
      gsap.to(material.uniforms.uGrayscaleProgress, {
          value: userData.isBw ? 1 : 0
      });
    }

    The Shader (GLSL)

    We simply replace the hardcoded center with our new uMouse uniform.

    ...
    uniform vec2 uMouse; // The (u,v) coordinates from the click
    ...
    
    void main() {
      // ...

      // 1. Calculate distance from the MOUSE CLICK, not the center.
      float dist = distance(vUv, uMouse);
    }

    Important Detail: To ensure the circular reveal always covers the entire plane, even when clicking in a corner, we calculate the maximum possible distance from the click point to any of the four corners (getMaxDistFromCorners) and normalize our dist value with it: dist / maxDist.

    This guarantees the animation completes fully.
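
    The GLSL helper itself isn’t shown above, but the idea is straightforward; here is an illustrative JavaScript version of the same corner-distance logic, using the function name referenced in the text:

    // Distance from a UV point (0..1 range) to the farthest of the four corners
    function getMaxDistFromCorners(mouse) {
      const corners = [
        { x: 0, y: 0 },
        { x: 1, y: 0 },
        { x: 0, y: 1 },
        { x: 1, y: 1 },
      ];
      return Math.max(...corners.map(c => Math.hypot(c.x - mouse.x, c.y - mouse.y)));
    }
    
    getMaxDistFromCorners({ x: 0.5, y: 0.5 }); // ~0.707 (click in the center)
    getMaxDistFromCorners({ x: 0, y: 0 });     // ~1.414 (a corner click needs the largest radius)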

    Step 4: Adding the Final Ripple Effect

    The last step is to add the 3D ripple effect that deforms the plane. This requires modifying the vertex shader.

    The JavaScript (GSAP)

    We need one more animated uniform to control the ripple’s lifecycle.

    1. uRippleProgress Uniform: We add a uniform float uRippleProgress.
    2. GSAP Keyframes: In the same timeline, we animate uRippleProgress from 0 to 1 and back to 0. This makes the wave rise up and then settle back down.
    gsap.timeline({ defaults: { duration: 1.5, ease: 'power3.inOut' } })
      .set(material.uniforms.uMouse, { value: intersection.uv }, 0)
      .to(material.uniforms.uGrayscaleProgress, { value: 1 }, 0)
      .to(material.uniforms.uRippleProgress, {
          keyframes: { value: [0, 1, 0] } // Rise and fall
      }, 0)

    The Shaders (GLSL)

    High-Poly Geometry: To see a smooth deformation, the PlaneGeometry in Three.js must be created with many segments (e.g., new PlaneGeometry(1, 1, 50, 50)). This gives the vertex shader more points to manipulate.

    generatePlane(image) {
      ...
      const plane = new Mesh(
        new PlaneGeometry(1, 1, 50, 50),
        new PlanesMaterial(texture),
      );
    
      return plane;
    }

    Vertex Shader: This shader now calculates the wave and moves the vertices.

    #define PI 3.141592653589793

    uniform float uRippleProgress;
    uniform float uTime; // Continuously advanced from JavaScript to drive the wave motion
    uniform vec2 uMouse;
    varying float vRipple; // Pass the ripple intensity to the fragment shader
    
    void main() {
      vec3 pos = position;
      float dist = distance(uv, uMouse);
    
      float ripple = sin(-PI * 10.0 * (dist - uTime * 0.1));
      ripple *= uRippleProgress;
    
      pos.y += ripple * 0.1;
    
      vRipple = ripple;
      gl_Position = projectionMatrix * modelViewMatrix * vec4(pos, 1.0);
    }

    Fragment Shader: We can use the ripple intensity to add a final touch, like making the wave crests brighter.

    varying float vRipple; // Received from vertex shader
    
    void main() {
      // ... (all the color and mask logic from before)
      vec3 color = mix(color1, color2, mask);
    
      // Add a highlight based on the wave's height
      color += vRipple * 2.0;
    
      gl_FragColor = vec4(color, diffuse.a);
    }

    By layering these techniques, we create a rich, interactive effect where JavaScript and GSAP act as the puppet master, telling the shaders what to do, while the shaders handle the heavy lifting of drawing it beautifully and efficiently on the GPU.

    Step 5: Reverse effect on previous tile

    As a final step, we set up a reverse animation of the current tile when a new tile is clicked. Let’s start by creating the reset animation that reverses the animation of the uniforms:

    resetMaterial(object) {
      // Reset all shader uniforms to default values
      gsap.timeline({
        defaults: { duration: 1, ease: 'power2.out' },
    
        onUpdate() {
          object.material.uniforms.uTime.value += 0.1;
        },
        onComplete() {       
          object.userData.isBw = false;
        }
      })
      .set(object.material.uniforms.uMouse, { value: { x: 0.5, y: 0.5} }, 0)
      .set(object.material.uniforms.uDirection, { value: 1.0 }, 0)
      .fromTo(object.material.uniforms.uGrayscaleProgress, { value: 1 }, { value: 0 }, 0)
      .to(object.material.uniforms.uRippleProgress, { keyframes: { value: [0, 1, 0] } }, 0);
    }

    Now, at each click, we need to set the current tile so that it’s saved in the constructor, allowing us to pass the current material to the reset animation. Let’s modify the onClick function like this and analyze it step by step:

    if (this.activeObject && intersection.object !== this.activeObject && this.activeObject.userData.isBw) {
      this.resetMaterial(this.activeObject)
      
      // Stops timeline if active
      if (this.activeObject.userData.tl?.isActive()) this.activeObject.userData.tl.kill();
      
      // Cleans timeline
      this.activeObject.userData.tl = null;
    }
    
    // Setup active object
    this.activeObject = intersection.object;

    • If this.activeObject exists (initially set to null in the constructor), we proceed to reset it to its initial black and white state
    • If there’s a current animation on the active tile, we use GSAP’s kill method to avoid conflicts and overlapping animations
    • We reset userData.tl to null (it will be assigned a new timeline value if the tile is clicked again)
    • We then set the value of this.activeObject to the object selected via the Raycaster

    In this way, we’ll have a double ripple animation: one on the clicked tile, which will be colored, and one on the previously active tile, which will be reset to its original black and white state.
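
    Putting this step together, a condensed onClick could look like the sketch below. The raycasting helper name (getIntersection) mirrors the one used later in this article, and the timeline mirrors the keyframe setup from Step 4; treat it as an outline rather than the exact source:

    onClick(e) {
      // Raycast from the click position to find the hit plane (see Step 3)
      const intersection = this.getIntersection(e);
      if (!intersection) return;
    
      // Reverse the previously active tile, if there is one
      if (this.activeObject && intersection.object !== this.activeObject && this.activeObject.userData.isBw) {
        this.resetMaterial(this.activeObject);
        if (this.activeObject.userData.tl?.isActive()) this.activeObject.userData.tl.kill();
        this.activeObject.userData.tl = null;
      }
    
      this.activeObject = intersection.object;
    
      const { material, userData } = intersection.object;
      userData.isBw = !userData.isBw;
    
      // Store the timeline so it can be killed if this tile is reset later
      userData.tl = gsap.timeline({ defaults: { duration: 1.5, ease: 'power3.inOut' } })
        .set(material.uniforms.uMouse, { value: intersection.uv }, 0)
        .to(material.uniforms.uGrayscaleProgress, { value: userData.isBw ? 1 : 0 }, 0)
        .to(material.uniforms.uRippleProgress, { keyframes: { value: [0, 1, 0] } }, 0);
    }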

    Texture reveal mask effect

    In this section, we will create an interactive effect that blends two images on a plane when the user hovers over or touches it.

    Step 1: Setting Up the Planes

    Unlike the previous examples, in this case we need different uniforms for the planes, as we are going to create a mix between a visible front texture and another texture that will be revealed through a mask that “cuts through” the first texture.

    Let’s start by modifying the index.html file, adding a data attribute to all images where we’ll specify the underlying texture:

    <img src="/images/front-texture.webp" alt="" role="presentation" data-back="/images/back-texture.webp">

    Then, inside our Stage.js, we’ll modify the generatePlane method, which is used to create the planes in WebGL. We’ll start by retrieving the second texture to load via the data attribute, and we’ll pass the plane material the parameters with both textures and the aspect ratio of the images:

    generatePlane(image) {
      const loader = new TextureLoader();
      const texture = loader.load(image.src);
      const textureBack = loader.load(image.dataset.back);
    
      texture.colorSpace = SRGBColorSpace;
      textureBack.colorSpace = SRGBColorSpace;
    
      const { width, height } = image.getBoundingClientRect();
    
      const plane = new Mesh(
        new PlaneGeometry(1, 1),
        new PlanesMaterial(texture, textureBack, height / width),
      );
    
      return plane;
    }
    

    Step 2: Material Setup

    import { ShaderMaterial, Vector2 } from 'three';
    import baseVertex from './base.vert?raw';
    import baseFragment from './base.frag?raw';
    
    export default class PlanesMaterial extends ShaderMaterial {
      constructor(texture, textureBack, imageRatio) {
        super({
          vertexShader: baseVertex,
          fragmentShader: baseFragment,
          uniforms: {
            uTexture: { value: texture },
            uTextureBack: { value: textureBack },
            uMixFactor: { value: 0.0 },
            uAspect: { value: imageRatio },
            uMouse: { value: new Vector2(0.5, 0.5) },
          },
        });
      }
    }
    

    Let’s quickly analyze the uniforms passed to the material:

    • uTexture and uTextureBack are the two textures shown on the front and through the mask
    • uMixFactor represents the blending value between the two textures inside the mask
    • uAspect is the aspect ratio of the images used to calculate a circular mask
    • uMouse represents the mouse coordinates, updated to move the mask within the plane

    Step 3: The Javascript (GSAP)

    this.observer = Observer.create({
      target: document.querySelector('.content__carousel'),
      type: 'touch,pointer',
      onMove: e => this.onMove(e),
      onHoverEnd: () => this.hoverOut(),
    });

    Quickly, let’s create a GSAP Observer to monitor the mouse movement, passing two functions:

    • onMove checks, using the Raycaster, whether a plane is being hit in order to manage the opening of the reveal mask
    • onHoverEnd is triggered when the cursor leaves the target area, so we’ll use this method to reset the reveal mask’s expansion uniform value back to 0.0

    Let’s go into more detail on the onMove function to explain how it works:

    onMove(e) {
      const normCoords = {
        x: (e.x / window.innerWidth) * 2 - 1,
        y: -(e.y / window.innerHeight) * 2 + 1,
      };
    
      this.raycaster.setFromCamera(normCoords, this.camera);
    
      const [intersection] = this.raycaster.intersectObjects(this.scene.children);
    
      if (intersection) {
        this.intersected = intersection.object;
        const { material } = intersection.object;
    
        gsap.timeline()
          .set(material.uniforms.uMouse, { value: intersection.uv }, 0)
          .to(material.uniforms.uMixFactor, { value: 1.0, duration: 3, ease: 'power3.out' }, 0);
      } else {
        this.hoverOut();
      }
    }

    In the onMove method, the first step is to normalize the mouse coordinates from -1 to 1 to allow the Raycaster to work with the correct coordinates.

    On each frame, the Raycaster is then updated to check if any object in the scene is intersected. If there is an intersection, the code saves the hit object in a variable.

    When an intersection occurs, we proceed to work on the animation of the shader uniforms.

    Specifically, we use GSAP’s set method to update the mouse position in uMouse, and then animate the uMixFactor variable from 0.0 to 1.0 to open the reveal mask and show the underlying texture.

    If the Raycaster doesn’t find any object under the pointer, the hoverOut method is called.

    hoverOut() {
        if (!this.intersected) return;
    
        // Stop any running tweens on the uMixFactor uniform
        gsap.killTweensOf(this.intersected.material.uniforms.uMixFactor);
    
        // Animate uMixFactor back to 0 smoothly
        gsap.to(this.intersected.material.uniforms.uMixFactor, { value: 0.0, duration: 0.5, ease: 'power3.out' });
    
        // Clear the intersected reference
        this.intersected = null;
      }

    This method handles closing the reveal mask once the cursor leaves the plane.

    First, we rely on the killTweensOf method to prevent conflicts or overlaps between the mask’s opening and closing animations by stopping any ongoing tween on the uMixFactor uniform.

    Then, we animate the mask’s closing by setting the uMixFactor uniform back to 0.0 and reset the variable that was tracking the currently highlighted object.

    Step 4: The Shader (GLSL)

    uniform sampler2D uTexture;
    uniform sampler2D uTextureBack;
    uniform float uMixFactor;
    uniform vec2 uMouse;
    uniform float uAspect;
    
    varying vec2 vUv;
    
    void main() {
        vec2 correctedUv = vec2(vUv.x, (vUv.y - 0.5) * uAspect + 0.5);
        vec2 correctedMouse = vec2(uMouse.x, (uMouse.y - 0.5) * uAspect + 0.5);
        
        float distance = length(correctedUv - correctedMouse);
        float influence = 1.0 - smoothstep(0.0, 0.5, distance);
    
        float finalMix = uMixFactor * influence;
    
        vec4 textureFront = texture2D(uTexture, vUv);
        vec4 textureBack = texture2D(uTextureBack, vUv);
    
        vec4 finalColor = mix(textureFront, textureBack, finalMix);
    
        gl_FragColor = finalColor;
    }

    Inside the main() function, we start by correcting the UV coordinates and the mouse position for the image’s aspect ratio. This correction is needed because we are using non-square images, so the vertical coordinate must be adjusted to keep the mask’s proportions correct and ensure it remains circular. The vUv.y and uMouse.y coordinates are therefore “scaled” vertically according to the aspect ratio.

    At this point, the distance is calculated between the current pixel (correctedUv) and the mouse position (correctedMouse). This distance is a numeric value that indicates how close or far the pixel is from the mouse center on the surface.

    We then move on to the actual creation of the mask. The influence value must vary from 1 at the cursor’s center to 0 as we move away from it. We use the smoothstep function to obtain a soft, gradual transition between these two values, so the effect fades naturally.

    The final value for the mix between the two textures, finalMix, is the product of the global factor uMixFactor (the uniform animated by GSAP) and this local influence value. So the closer a pixel is to the mouse position, the more its color is influenced by the second texture, uTextureBack.

    The last part is the actual blending: the two colors are mixed using the mix() function, which creates a linear interpolation between the two textures based on the value of finalMix. When finalMix is 0, only the front texture is visible.

    When it is 1, only the background texture is visible. Intermediate values create a gradual blend between the two textures.

    Click & Hold mask reveal effect

    This section breaks down the creation of an interactive effect that transitions an image from color to grayscale. The effect starts from the user’s click, expanding outwards with a liquid, noise-based distortion.

    Step 1: The “Move” (Hover) Effect

    In this step, we’ll create an effect where an image transitions to another as the user hovers their mouse over it. The transition will originate from the pointer’s position and expand outwards.

    The JavaScript (GSAP Observer for onMove)

    GSAP’s Observer plugin is the perfect tool for tracking pointer movements without the boilerplate of traditional event listeners.

    • Setup Observer: We create an Observer instance that targets our main container and listens for touch and pointer events. We only need the onMove and onHoverEnd callbacks.
    • onMove(e) Logic:
      When the pointer moves, we use a Raycaster to determine if it’s over one of our interactive images.
      • If an object is intersected, we store it in this.intersected.
      • We then use a GSAP Timeline to animate the shader’s uniforms.
      • uMouse: We instantly set this vec2 uniform to the pointer’s UV coordinate on the image. This tells the shader where the effect should originate.
      • uMixFactor: We animate this float uniform from 0 to 1. This uniform will control the blend between the two textures in the shader.
    • onHoverEnd() Logic:
      • When the pointer leaves the object, Observer calls this function.
      • We kill any ongoing animations on uMixFactor to prevent conflicts.
      • We animate uMixFactor back to 0, reversing the effect.

    Code Example: the “Move” effect

    This code shows how Observer is configured to handle the hover interaction.

    import { gsap } from 'gsap';
    import { Observer } from 'gsap/Observer';
    import { Raycaster } from 'three';
    
    gsap.registerPlugin(Observer);
    
    export default class Effect {
      constructor(scene, camera) {
        this.scene = scene;
        this.camera = camera;
        this.intersected = null;
        this.raycaster = new Raycaster();
    
    	// 1. Create the Observer
    	this.observer = Observer.create({
          target: document.querySelector('.content__carousel'),
          type: 'touch,pointer',
          onMove: e => this.onMove(e),
          onHoverEnd: () => this.hoverOut(), // Called when the pointer leaves the target
        });
      }
    
      hoverOut() {
        if (!this.intersected) return;
    
    	// 3. Animate the effect out
        gsap.killTweensOf(this.intersected.material.uniforms.uMixFactor);
        gsap.to(this.intersected.material.uniforms.uMixFactor, {
          value: 0.0,
          duration: 0.5,
          ease: 'power3.out'
        });
    
        this.intersected = null;
      }
    
      onMove(e) {
    	// ... (Raycaster logic to find intersection)
    	const [intersection] = this.raycaster.intersectObjects(this.scene.children);
    
        if (intersection) {
          this.intersected = intersection.object;
          const { material } = intersection.object;
    
          // 2. Animate the uniforms on hover
          gsap.timeline()
            .set(material.uniforms.uMouse, { value: intersection.uv }, 0) // Set origin point
            .to(material.uniforms.uMixFactor, { // Animate the blend
              value: 1.0,
              duration: 3,
              ease: 'power3.out'
            }, 0);
        } else {
          this.hoverOut(); // Reset if not hovering over anything
        }
      }
    }

    The Shader (GLSL)

    The fragment shader receives the uniforms animated by GSAP and uses them to draw the effect.

    • uMouse: Used to calculate the distance of each pixel from the pointer.
    • uMixFactor: Used as the interpolation value in a mix() function. As it animates from 0 to 1, the shader smoothly blends from textureFront to textureBack.
    • smoothstep(): We use this function to create a circular mask that expands from the uMouse position. The radius of this circle is controlled by uMixFactor.
    uniform sampler2D uTexture; // Front image
    uniform sampler2D uTextureBack; // Back image
    uniform float uMixFactor; // Animated by GSAP (0 to 1)
    uniform vec2 uMouse; // Set by GSAP on move
    
    // ...
    
    void main() {
      // ... (code to correct for aspect ratio)
    
      // 1. Calculate distance of the current pixel from the mouse
      float distance = length(correctedUv - correctedMouse);
    
      // 2. Create a circular mask that expands as uMixFactor increases
      float influence = 1.0 - smoothstep(0.0, 0.5, distance);
      float finalMix = uMixFactor * influence;
    
      // 3. Read colors from both textures
      vec4 textureFront = texture2D(uTexture, vUv);
      vec4 textureBack = texture2D(uTextureBack, vUv);
    
      // 4. Mix the two textures based on the animated value
      vec4 finalColor = mix(textureFront, textureBack, finalMix);
    	
      gl_FragColor = finalColor;
    }

    Step 2: The “Click & Hold” Effect

    Now, let’s build a more engaging interaction. The effect will start when the user presses down, “charge up” while they hold, and either complete or reverse when they release.

    The JavaScript (GSAP)

    Observer makes this complex interaction straightforward by providing clear callbacks for each state.

    • Setup Observer: This time, we configure Observer to use onPress, onMove, and onRelease.
    • onPress(e):
      • When the user presses down, we find the intersected object and store it in this.active.
      • We then call onActiveEnter(), which starts a GSAP timeline for the “charging” animation.
    • onActiveEnter():
      • This function defines the multi-stage animation. We use await with a GSAP tween to create a sequence.
      • First, it animates uGrayscaleProgress to a midpoint (e.g., 0.35) and holds it. This is the “hold” part of the interaction.
      • If the user continues to hold, a second tween completes the animation, transitioning uGrayscaleProgress to 1.0.
      • An onComplete callback then resets the state, preparing for the next interaction.
    • onRelease():
      • If the user releases the pointer before the animation completes, this function is called.
      • It calls onActiveLeave(), which kills the “charging” animation and animates uGrayscaleProgress back to 0, effectively reversing the effect.
    • onMove(e):
      • This is still used to continuously update the uMouse uniform, so the shader’s noise effect tracks the pointer even during the hold.
      • Crucially, if the pointer moves off the object, we call onRelease() to cancel the interaction.

    Code Example: Click & Hold

    This code demonstrates the press, hold, and release logic managed by Observer.

    import { gsap } from 'gsap';
    import { Observer } from 'gsap/Observer';
    
    // ...
    
    export default class Effect {
      constructor(scene, camera) {
    	// ...
    		
        this.active = null; // Currently active (pressed) object
    	this.raycaster = new Raycaster();
    	
    	// 1. Create the Observer for press, move, and release
    	this.observer = Observer.create({
    	  target: document.querySelector('.content__carousel'),
    	  type: 'touch,pointer',
          onPress: e => this.onPress(e),
          onMove: e => this.onMove(e),
    	  onRelease: () => this.onRelease(),
    	});
    	
    	// Continuously update uTime for the procedural effect
    	gsap.ticker.add(() => {
    	  if (this.active) {
    	    this.active.material.uniforms.uTime.value += 0.1;
    	  }
    	});
      }
    
      // 3. The "charging" animation
      async onActiveEnter() {
        gsap.killTweensOf(this.active.material.uniforms.uGrayscaleProgress);
    
        // First part of the animation (the "hold" phase)
    	await gsap.to(this.active.material.uniforms.uGrayscaleProgress, {
          value: 0.35,
          duration: 0.5,
        });
    
    	// Second part, completes after the hold
        gsap.to(this.active.material.uniforms.uGrayscaleProgress, {
          value: 1,
          duration: 0.5,
          delay: 0.12,
          ease: 'power2.in',
          onComplete: () => {/* ... reset state ... */ },
        });
      }
    
      // 4. Reverses the animation on early release
      onActiveLeave(mesh) {
        gsap.killTweensOf(mesh.material.uniforms.uGrayscaleProgress);
        gsap.to(mesh.material.uniforms.uGrayscaleProgress, {
          value: 0,
          onUpdate: () => {
            mesh.material.uniforms.uTime.value += 0.1;
          },
        });
      }
    
      // ... (getIntersection logic) ...
    	
      // 2. Handle the initial press
      onPress(e) {
        const intersection = this.getIntersection(e);
    
        if (intersection) {
          this.active = intersection.object;
          this.onActiveEnter(this.active); // Start the animation
        }
      }
    
      onRelease() {
        if (this.active) {
          const prevActive = this.active;
          this.active = null;
          this.onActiveLeave(prevActive); // Reverse the animation
        }
      }
    
      onMove(e) {
        const intersection = this.getIntersection(e); // Same raycast helper used in onPress

        if (intersection) {
    	  // 5. Keep uMouse updated while holding
    	  const { material } = intersection.object;
          gsap.set(material.uniforms.uMouse, { value: intersection.uv });
        } else {
          this.onRelease(); // Cancel if pointer leaves
        }
      }
    }
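
    The getIntersection helper is elided in the snippets above; a minimal version, assuming the Raycaster, camera, and scene already stored on the class (and the same pointer normalization shown earlier), could look like this:

    getIntersection(e) {
      // Normalize pointer coordinates to the -1..1 NDC range expected by the Raycaster
      const normCoords = {
        x: (e.x / window.innerWidth) * 2 - 1,
        y: -(e.y / window.innerHeight) * 2 + 1,
      };
    
      this.raycaster.setFromCamera(normCoords, this.camera);
    
      // Return the closest plane under the pointer, if any
      const [intersection] = this.raycaster.intersectObjects(this.scene.children);
      return intersection;
    }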

    The Shader (GLSL)

    The fragment shader for this effect is more complex. It uses the animated uniforms to create a distorted, noisy reveal.

    • uGrayscaleProgress: This is the main driver, animated by GSAP. It controls both the radius of the circular mask and the strength of a “liquid” distortion effect.
    • uTime: This is continuously updated by gsap.ticker as long as the user is pressing. It’s used to add movement to the noise, making the effect feel alive and dynamic.
    • noise() function: A standard GLSL noise function generates procedural, organic patterns. We use this to distort both the shape of the circular mask and the image texture coordinates (UVs).
    // ... (uniforms and helper functions)
    
    void main() {
      // 1. Generate a noise value that changes over time
      float noisy = (noise(vUv * 25.0 + uTime * 0.5) - 0.5) * 0.05;
    
      // 2. Create a distortion that pulses using the main progress animation
      float distortionStrength = sin(uGrayscaleProgress * PI) * 0.5;
      vec2 distortedUv = vUv + vec2(noisy) * distortionStrength;
    
      // 3. Read the texture using the distorted coordinates for a liquid effect
      vec4 diffuse = texture2D(uTexture, distortedUv);
      // ... (grayscale logic)
    	
      // 4. Calculate distance from the mouse, but add noise to it
      float dist = distance(vUv, uMouse);
      float distortedDist = dist + noisy;
    
      // 5. Create the circular mask using the distorted distance and progress
      float maxDist = getMaxDistFromCorners(uMouse);
      float mask = smoothstep(uGrayscaleProgress - 0.1, uGrayscaleProgress, distortedDist / maxDist);
    
      // 6. Mix between the original and grayscale colors
      vec3 color = mix(color1, color2, mask);
    
      gl_FragColor = vec4(color, diffuse.a);
    }

    This shader combines noise-based distortion, smooth circular masking, and real-time uniform updates to create a liquid, organic transition that radiates from the click position. As GSAP animates the shader’s progress and time values, the effect feels alive and tactile — a perfect example of how animation logic in JavaScript can drive complex visual behavior directly on the GPU.

    Dynamic blur effect carousel

    Step 1: Create the carousel

    In this final demo, we will create an additional implementation, turning the image grid into a scrollable carousel that can be navigated both by dragging and scrolling.

    First, we will implement the Draggable plugin by registering it and targeting the appropriate <div> with the desired configuration. Make sure to handle boundary constraints and update them accordingly when the window is resized.

    // Draggable relies on InertiaPlugin for the throw behavior, so both are assumed to be
    // registered beforehand, e.g. gsap.registerPlugin(Draggable, InertiaPlugin, ScrollTrigger);
    const carouselInnerRef = document.querySelector('.content__carousel-inner');
    const draggable = new Draggable(carouselInnerRef, {
      type: 'x',
      inertia: true,
      dragResistance: 0.5,
      edgeResistance: 0.5,
      throwResistance: 0.5,
      throwProps: true,
    });
    
    function resize() {
      const innerWidth = carouselInnerRef.scrollWidth;
      const viewportWidth = window.innerWidth;
      maxScroll = Math.abs(Math.min(0, viewportWidth - innerWidth));
    
      draggable.applyBounds({ minX: -maxScroll, maxX: 0 });
    }
    
    window.addEventListener('resize', debounce(resize));

    We will also link GSAP Draggable to the scroll functionality using the GSAP ScrollTrigger plugin, allowing us to synchronize both scroll and drag behavior within the same container. Let’s explore this in more detail:

    let maxScroll = Math.abs(Math.min(0, window.innerWidth - carouselInnerRef.scrollWidth));
    
    const scrollTriggerInstance = ScrollTrigger.create({
      trigger: carouselWrapper,
      start: 'top top',
      end: `+=${2.5 * maxScroll}`,
      pin: true,
      scrub: 0.05,
      anticipatePin: 1,
      invalidateOnRefresh: true,
    });
    
    ...
    
    resize() {
      ...
      scrollTriggerInstance.refresh();
    }

    Now that ScrollTrigger is configured on the same container, we can focus on synchronizing the scroll position between both plugins, starting from the ScrollTrigger instance:

    onUpdate(e) {
      const x = -maxScroll * e.progress;
    
      gsap.set(carouselInnerRef, { x });
      draggable.x = x;
      draggable.update();
    }

    We then move on to the Draggable instance, which will be updated within both its onDrag and onThrowUpdate callbacks using the scrollPos variable. This variable will serve as the final scroll position for both the window and the ScrollTrigger instance.

    onDragStart() {},
    onDrag() {
      const progress = gsap.utils.normalize(draggable.maxX, draggable.minX, draggable.x);
      scrollPos = scrollTriggerInstance.start + (scrollTriggerInstance.end - scrollTriggerInstance.start) * progress;
      window.scrollTo({ top: scrollPos, behavior: 'instant' });
    
      scrollTriggerInstance.scroll(scrollPos);
    },
    onThrowUpdate() {
      const progress = gsap.utils.normalize(draggable.maxX, draggable.minX, draggable.x);
      scrollPos = scrollTriggerInstance.start + (scrollTriggerInstance.end - scrollTriggerInstance.start) * progress;
      window.scrollTo({ top: scrollPos, behavior: 'instant' });
    },
    onThrowComplete() {
      scrollTriggerInstance.scroll(scrollPos);
    }

    Step 2: Material setup

    export default class PlanesMaterial extends ShaderMaterial {
      constructor(texture) {
        super({
          vertexShader: baseVertex,
          fragmentShader: baseFragment,
          uniforms: {
            uTexture: { value: texture },
            uBlurAmount: { value: 0 },
          },
        });
      }
    }

    Let’s quickly analyze the uniforms passed to the material:

    • uTexture is the base texture rendered on the plane
    • uBlurAmount represents the blur strength based on the distance from the window center

    Step 3: The JavaScript (GSAP)

    constructor(scene, camera) {
      ...
      this.callback = this.scrollUpdateCallback;
      this.centerX = window.innerWidth / 2
      ...
    }

    In the constructor we set up two pieces we’ll use to drive the dynamic blur effect:

    • this.callback references the function used inside ScrollTrigger’s onUpdate to refresh the blur amount
    • this.centerX represents the window center on the X axis and is updated on each window resize

    Let’s dive into the callback passed to ScrollTrigger:

    scrollUpdateCallback() {
      this.tiles.forEach(tile => {
        const worldPosition = tile.getWorldPosition(new Vector3());
        const vector = worldPosition.clone().project(this.camera);
    
        const screenX = (vector.x * 0.5 + 0.5) * window.innerWidth;
    
        const distance = Math.abs(screenX - this.centerX);
        const maxDistance = window.innerWidth / 2;
    
        const blurAmount = MathUtils.clamp(distance / maxDistance * 5, 0.0, 5.0);
    
        gsap.to(tile.material.uniforms.uBlurAmount, {
          value: Math.round(blurAmount / 2) * 2,
          duration: 1.5,
          ease: 'power3.out'
        });
      });
    }
    

    Let’s dive deeper into this:

    • vector holds each plane’s 3D world position projected into normalized device coordinates; .project(this.camera) maps it into the -1..1 range, which is then scaled to real screen pixel coordinates.
    • screenX is the plane’s resulting 2D screen-space X coordinate.
    • distance measures how far the plane is from the screen center.
    • maxDistance is the maximum possible horizontal distance from the center to the screen edge.
    • blurAmount computes blur strength based on distance from the center; it’s clamped between 0.0 and 5.0 to avoid extreme values that would harm visual quality or shader performance.
    • The uBlurAmount uniform is animated toward the computed blurAmount. Rounding to the nearest even number (Math.round(blurAmount / 2) * 2) helps avoid overly frequent tiny changes that could cause visually unstable blur.

    Step 4: The Shader (GLSL)

    uniform sampler2D uTexture;
    uniform float uBlurAmount;
    
    varying vec2 vUv;
    
    vec4 kawaseBlur(sampler2D tex, vec2 uv, float offset) {
      vec2 texelSize = vec2(1.0) / vec2(textureSize(tex, 0));
      
      vec4 color = vec4(0.0);
      
      color += texture2D(tex, uv + vec2(offset, offset) * texelSize);
      color += texture2D(tex, uv + vec2(-offset, offset) * texelSize);
      color += texture2D(tex, uv + vec2(offset, -offset) * texelSize);
      color += texture2D(tex, uv + vec2(-offset, -offset) * texelSize);
      
      return color * 0.25;
    }
    
    vec4 multiPassKawaseBlur(sampler2D tex, vec2 uv, float blurStrength) {
      vec4 baseTexture = texture2D(tex, uv);
      
      vec4 blur1 = kawaseBlur(tex, uv, 1.0 + blurStrength * 1.5);
      vec4 blur2 = kawaseBlur(tex, uv, 2.0 + blurStrength);
      vec4 blur3 = kawaseBlur(tex, uv, 3.0 + blurStrength * 2.5);
      
      float t1 = smoothstep(0.0, 3.0, blurStrength);
      float t2 = smoothstep(3.0, 7.0, blurStrength);
      
      vec4 blurredTexture = mix(blur1, blur2, t1);
      blurredTexture = mix(blurredTexture, blur3, t2);
      
      float mixFactor = smoothstep(0.0, 1.0, blurStrength);
      
      return mix(baseTexture, blurredTexture, mixFactor);
    }
    
    void main() {
      vec4 color = multiPassKawaseBlur(uTexture, vUv, uBlurAmount);
      gl_FragColor = color;
    }
    

    This GLSL fragment receives a texture (uTexture) and a dynamic value (uBlurAmount) indicating how much the plane should be blurred. Based on this value, the shader applies a multi-pass Kawase blur, an efficient technique that simulates a soft, pleasing blur while staying performant.

    Let’s examine the kawaseBlur function, which applies a light blur by sampling 4 points around the current pixel (uv), each offset positively or negatively.

    • texelSize computes the size of one pixel in UV coordinates so offsets refer to “pixel amounts” regardless of texture resolution.
    • Four samples are taken in a diagonal cross pattern around uv.
    • The four colors are averaged (multiplied by 0.25) to return a balanced result.

    This function is a light single pass. To achieve a stronger effect, we apply it multiple times.

    The multiPassKawaseBlur function does exactly that, progressively increasing blur and then blending the passes:

    vec4 blur1 = kawaseBlur(tex, uv, 1.0 + blurStrength * 1.5);
    vec4 blur2 = kawaseBlur(tex, uv, 2.0 + blurStrength);
    vec4 blur3 = kawaseBlur(tex, uv, 3.0 + blurStrength * 2.5);

    This produces a progressive, visually smooth result.

    Next, we blend the different blur levels using two separate smoothsteps:

    float t1 = smoothstep(0.0, 3.0, blurStrength);
    float t2 = smoothstep(3.0, 7.0, blurStrength);
      
    vec4 finalBlur = mix(blur1, blur2, t1);
    finalBlur = mix(finalBlur, blur3, t2);

    The first mix blends blur1 and blur2, while the second blends that result with blur3. The resulting finalBlur represents the Kawase-blurred texture, which we finally mix with the base texture passed via the uniform.

    Finally, we mix the blurred texture with the original texture based on blurStrength, using another smoothstep from 0 to 1:

    float mixFactor = smoothstep(0.0, 1.0, blurStrength);
    return mix(baseTexture, finalBlur, mixFactor);

    Final Words

    Bringing together GSAP’s animation power and the creative freedom of GLSL shaders opens up a whole new layer of interactivity for the web. By animating shader uniforms directly with GSAP, we’re able to blend smooth motion design principles with the raw flexibility of GPU rendering — crafting experiences that feel alive, fluid, and tactile.

    From simple grayscale transitions to ripple-based deformations and dynamic blur effects, every step in this tutorial demonstrates how motion and graphics can respond naturally to user input, creating interfaces that invite exploration rather than just observation.

    While these techniques push the boundaries of front-end development, they also highlight a growing trend: the convergence of design, code, and real-time rendering.

    So, take these examples, remix them, and make them your own — because the most exciting part of working with GSAP and shaders is that the canvas is quite literally infinite.



    Source link

  • [ITA] Azure DevOps: plan, build, and release projects | Global Azure Verona






    Source link

  • Ponpon Mania: How WebGL and GSAP Bring a Comic Sheep’s Dream to Life




    Ponpon Mania is an animated comic featuring Ponpon, a megalomaniac sheep dreaming of becoming a DJ. We wanted to explore storytelling beyond traditional comics by combining playful interactions, smooth GSAP-powered motion, and dynamic visuals. The goal was to create a comic that feels alive, where readers engage directly with Ponpon’s world while following the narrative. The project evolved over several months, moving from early sketches to interactive prototypes.

    About us

    We are Justine Soulié (Art Director & Illustrator) and Patrick Heng (Creative Developer), a creative duo passionate about storytelling through visuals and interaction. Justine brings expertise in illustration, art direction, and design, while Patrick focuses on creative development and interactive experiences. Together, we explore ways to make stories more playful, immersive, and engaging.

    Art Direction

    Our visual direction emphasizes clean layouts, bold colors, and playful details. From the start, we wanted the comic to feel vibrant and approachable while using design to support the story. On the homepage, we aimed to create a simple, welcoming scene that immediately draws the user in, offering many interactive elements to explore and encouraging engagement from the very first moment.

    The comic is mostly black and white, providing a simple and striking visual base. Color appears selectively, especially when Ponpon dreams of being a DJ and is fully immersed in his imagined world, highlighting these key moments and guiding the reader’s attention. Scroll-triggered animations naturally direct focus, while hover effects and clickable elements invite exploration without interrupting the narrative flow.

    To reinforce Ponpon’s connection to music, we designed the navigation to resemble a music player. Readers move through chapters as if they were albums, with each panel functioning like a song. This structure reflects Ponpon’s DJ aspirations, making the reading experience intuitive, dynamic, and closely tied to the story.

    Technical Approach

    Our main goal was to reduce technical friction so we could dedicate our energy to refining the artistic direction, motion design, and animation of the website.

    We used WebGL because it gave us full creative freedom over rendering. Even though the comic has a mostly 2D look, we wanted the flexibility to add depth and apply shader-based effects.

    Starting from Justine’s illustrator files, every layer and visual element from each panel was exported as an individual image. These assets were then packed into optimized texture atlases using Free TexturePacker.

    Atlas example

    Once exported, the images were further compressed into GPU-friendly formats to reduce memory usage. Using the data generated by the packer, we reconstructed each scene in WebGL by generating planes at the correct size. Finally, everything was placed in a 3D scene where we applied the necessary shaders and animations to achieve the desired visual effects.
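
    As a rough illustration of that reconstruction step, here is a generic sketch; the atlas JSON shape follows Free TexturePacker’s usual hash format, and createPlane is a hypothetical stand-in for the ogl plane/mesh setup actually used in the project:

    // Hypothetical helper: turn packed atlas data into correctly sized planes
    function buildSceneFromAtlas(atlasData, atlasTexture, createPlane) {
      const { w: atlasW, h: atlasH } = atlasData.meta.size;
    
      Object.entries(atlasData.frames).forEach(([name, data]) => {
        const { x, y, w, h } = data.frame;
    
        // UV sub-rectangle of this layer inside the packed texture
        const uvOffset = [x / atlasW, y / atlasH];
        const uvScale = [w / atlasW, h / atlasH];
    
        // Create a plane matching the layer's original pixel size,
        // sampling only its region of the shared atlas texture
        createPlane({ name, width: w, height: h, texture: atlasTexture, uvOffset, uvScale });
      });
    }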

    Tech Stack & Tools

    Design

    • Adobe Photoshop & Illustrator – illustration and asset preparation
    • Figma – layout and interface design

    Development

    • ogl – WebGL framework for rendering
    • Nuxt.js – frontend framework for structure and routing
    • GSAP – animation library for smooth and precise motion
    • Matter.js – physics engine used on the About page
    • Free TexturePacker – for creating optimized texture atlases from exported assets
    • Tweakpane – GUI tool for real-time debugging and fine-tuning parameters

    Animating using GSAP

    GSAP makes it easy to animate both DOM elements and WebGL objects with a unified syntax. Its timeline system brought structure to complex sequences, while combining it with ScrollTrigger streamlined scroll-based animations. We also used SplitText to handle text animations.

    Home page

    For the homepage, we wanted the very first thing users see to feel playful and full of life. It introduces the three main characters, all animated, and sets the tone for the rest of the experience. Every element reacts subtly to the mouse: the Ponpon mask deforms slightly, balloons collide softly, and clouds drift away in gentle repulsion. These micro-interactions make the scene feel tangible and invite visitors to explore the world of Ponpon Mania with curiosity and delight. We used a GSAP timeline to choreograph the intro animation, allowing us to trigger each element in sequence for a smooth and cohesive reveal.

    // Simple repulsion we used for the clouds in our render function
    const dx = baseX - mouse.x;
    const dy = baseY - mouse.y;
    const dist = Math.sqrt(dx * dx + dy * dy);
    
    // Repel the cloud if the mouse is near
    const radius = 2; // interaction radius
    const strength = 1.5; // repulsion force
    const repulsion = Math.max(0, 1 - dist / radius) * strength;
    
    // Apply the repulsion with smooth spring motion
    const targetX = basePosX + dx * repulsion;
    const targetY = basePosY - Math.abs(dy * repulsion) / 2;
    
    velocity.x += (targetX - position.x) * springStrength * deltaTime;
    velocity.y += (targetY - position.y) * springStrength * deltaTime;
    
    position.x += velocity.x;
    position.y += velocity.y;

    Chapter Selection

    For the chapter selection, we wanted something simple yet evocative of Ponpon’s musical universe. Each chapter is presented as an album cover, inviting users to browse through them as if flipping through a record collection. We aimed for smooth, intuitive navigation: users can drag, scroll, or click to explore, and each chapter snaps into place for an easy, satisfying selection experience.
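
    The exact implementation isn’t shown here, but a snapping drag like this typically takes only a few lines with Draggable and InertiaPlugin; the selector and chapter width below are assumptions:

    import { gsap } from 'gsap';
    import { Draggable } from 'gsap/Draggable';
    import { InertiaPlugin } from 'gsap/InertiaPlugin';
    
    gsap.registerPlugin(Draggable, InertiaPlugin);
    
    const chapterWidth = 420; // assumed width of one album cover, in px
    
    Draggable.create('.chapters__track', {
      type: 'x',
      inertia: true,
      // Snap the end of each throw to the nearest chapter
      snap: value => Math.round(value / chapterWidth) * chapterWidth,
    });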

    Panel Animation

    For the panel animations, we wanted each panel to feel alive, bringing Justine’s illustrations to life through motion. We spent a lot of time refining every detail so that each scene feels expressive and unique. Using GSAP timelines made it easy to structure and synchronize the different animations, keeping them flexible and reusable. Here’s an example of a GSAP timeline animating a panel, showing how sequences can be chained together smoothly.

    // Animate ponpons in sequence with GSAP timelines
    const timeline = gsap.timeline({ repeat: -1, repeatDelay: 0.7 });
    const uFlash = { value: 0 };
    const flashTimeline = gsap.timeline({ paused: true });
    
    function togglePonponGroup(index) {
      ponponsGroups.forEach((g, i) => (g.mesh.visible = i === index));
    }
    
    function triggerFlash() {
      const flashes = Math.floor(Math.random() * 2) + 1; // 1–2 flashes
      const duration = 0.4 / flashes;
    
      flashTimeline.clear();
    
      for (let i = 0; i < flashes; i++) {
        flashTimeline
          .set(uFlash, { value: 0.6 }, i * duration) // bright flash
          .to(uFlash, { value: 0, duration: duration * 0.9 }, i * duration + duration * 0.1); // fade out
      }
    
      flashTimeline.play();
    }
    
    ponponMeshes.forEach((ponpon, i) => {
      timeline.fromTo(
        ponpon.position,
        { y: ponpon.initialY - 0.2 },  // start slightly below
        {
          y: ponpon.initialY,          // bounce up
          duration: 1,
          ease: "elastic.out",
          onStart: () => {
            togglePonponGroup(i);      // show active group
            triggerFlash();            // trigger flash
          }
        },
        i * 1.6 // stagger delay between ponpons
      );
    });

    About Page

    On the About page, GSAP ScrollTrigger tracks the scroll progress of each section. These values drive the WebGL scenes, controlling rendering, transitions, and camera movement. This ensures the visuals stay perfectly synchronized with the user’s scrolling.

    const sectionUniform = { progress: { value: 0 } };
    
    // create a ScrollTrigger for one section
    const sectionTrigger = ScrollTrigger.create({
      trigger: ".about-section",
      start: "top bottom",
      end: "bottom top",
      onUpdate: (self) => {
        sectionUniform.progress.value = self.progress; // update uniform
      }
    });
    
    // update scene each frame using trigger values
    function updateScene() {
      const progress = sectionTrigger.progress;  
      const velocity = sectionTrigger.getVelocity(); 
    
      // drive camera movement with scroll progress
      camera.position.y = map(progress, 0.75, 1, -0.4, 3.4);
      camera.position.z =
        5 + map(progress, 0, 0.3, -4, 0) +
            map(progress, 0.75, 1, 0, 2) + velocity * 0.01;
    
      // subtle velocity feedback on ponpon and camera
      ponpon.position.y = ponpon.initialY + velocity * 0.01;
    }
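
    The map() helper used above isn’t part of GSAP; it’s a small utility that linearly remaps a value from one range to another. A possible (clamped) implementation could look like this. GSAP also ships gsap.utils.mapRange(), which does something similar without clamping.

    // Linearly remap `value` from [inMin, inMax] to [outMin, outMax], clamped to the output range
    function map(value, inMin, inMax, outMin, outMax) {
      const t = Math.min(Math.max((value - inMin) / (inMax - inMin), 0), 1);
      return outMin + t * (outMax - outMin);
    }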

    Thanks to the SplitText plugin, we can animate each section title line by line as it comes into view while scrolling.

    // Split the text into lines for staggered animation
    const split = new SplitText(titleDomElement, { type: "lines" });
    const lines = split.lines;
    
    // Create a timeline for the text animation
    const tl = gsap.timeline({ paused: true });
    
    tl.from(lines, {
      x: "100%",
      skewX: () => Math.random() * 50 - 25,
      rotation: 5,
      opacity: 0,
      duration: 1,
      stagger: 0.06,
      ease: "elastic.out(0.7, 0.7)"
    });
    
    // Trigger the timeline when scrolling the section into view
    ScrollTrigger.create({
      trigger: ".about-section",
      start: "top 60%",
      end: "bottom top",
      onEnter: () => tl.play(),
      onLeaveBack: () => tl.reverse()
    });

    Page transitions

    For the page transitions, we wanted them to add a sense of playfulness to the experience while keeping navigation snappy and fluid. Each transition was designed to fit the mood of the page, so rather than using a single generic effect, we built variations that keep the journey fresh.

    Technically, the transitions blend two WebGL scenes together using a custom shader, where the previous and next pages are rendered and mixed in real time. The animation of the blend is driven by GSAP tweens, which lets us precisely control the timing and progress of the shader for smooth, responsive transitions.
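
    As a rough sketch of that idea (with assumed uniform and helper names, not the actual site code), a single GSAP tween can drive the mix between the two rendered pages:

    import { gsap } from "gsap";

    // uProgress is passed to a fragment shader doing something like:
    // color = mix(previousPageColor, nextPageColor, uProgress);
    const transitionUniforms = { uProgress: { value: 0 } };

    function transitionTo(nextPage) {
      return gsap.to(transitionUniforms.uProgress, {
        value: 1,
        duration: 1.2,
        ease: "power2.inOut",
        onComplete: () => {
          setActivePage(nextPage); // hypothetical helper that swaps the two scenes
          transitionUniforms.uProgress.value = 0; // reset for the next transition
        },
      });
    }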

    Designing Playful Experiences

    Ponpon Mania pushed us to think beyond traditional storytelling. It was a joy to work on the narrative and micro-interactions that add playfulness and energy to the comic.

    Looking ahead, we plan to create new chapters, expand Ponpon’s story, and introduce small games and interactive experiences within the universe we’ve built. We’re excited to keep exploring Ponpon’s world and share more surprises with readers along the way.

    Thank you for reading! We hope you enjoyed discovering the creative journey behind Ponpon Mania and the techniques we used to bring Ponpon’s world to life.

    If you want to follow Ponpon, check us out on TikTok or Instagram.

    You can also support us on Tipeee!

    Justine Soulié & Patrick Heng





    Source link

  • Getting resource location with CreatedAtAction and CreatedAtRoute action results | Code4IT


    It’s not a good practice to return the ID of a newly created item in the HTTP Response Body. What to do? You can return it in the HTTP Response Headers, with CreatedAtAction and CreatedAtRoute.


    Even though many devs (including me!) often forget about it, REST is not a synonym for HTTP API: it is an architectural style based on the central idea of the resource.

    So, when you see an HTTP request like GET http://api.example.com/games/123, you may correctly think that you are getting the details of the game with ID 123. You are asking for the resource with ID 123.

    But what happens when you create a new resource? You perform a POST, insert a new item… and then? How can you know the ID of the newly created resource – if the ID is created automatically – and use it to access the details of the new item?

    Get item detail

    For .NET APIs, all the endpoints are exposed inside a Controller, which is a class that derives from ControllerBase:

    [ApiController]
    [Route("[controller]")]
    public class GameBoardController : ControllerBase
    {
        // all the actions here!
    }
    

    So, to define a GET endpoint, we have to create an Action and specify the associated HTTP verb by using [HttpGet].

    [HttpGet]
    [Route("{id}")]
    public IActionResult GetDetail(Guid id)
    {
        var game = Games.FirstOrDefault(_ => _.Id.Equals(id));
        if (game is not null)
        {
            return Ok(game);
        }
        else
        {
            return NotFound();
        }
    }
    

    This endpoint is pretty straightforward: if the game with the specified ID exists, the method returns it; otherwise, the method returns a NotFoundResult object that corresponds to a 404 HTTP Status Code.

    Notice the [Route("{id}")] attribute: it means that the ASP.NET engine, when parsing incoming HTTP requests, searches for an Action with the required HTTP method and a route that matches the required path. Then, when it finds the Action, it maps the route parameters ({id}) to the parameters of the C# method (Guid id).

    Hey! In this section I inserted some not-so-correct info: I mean, it is generally right, but not precise. Can you spot it? Drop a comment 😉

    What to do when POST-ing a resource?

    Of course, you also need to create new resources: that’s where the HTTP POST verb comes in handy.

    Suppose a simple data flow: you create a new object, you insert it in the database, and it is the database itself that assigns to the object an ID.

    Then, you need to use the newly created object. How to proceed?

    You could return the ID in the HTTP Response Body. But we are using a POST verb, so you should not return data – POST is meant to insert data, not return values.

    Alternatively, you could perform a query to find the item with the exact fields you’ve just inserted. For example:

    1. POST /item {title:"foo", description: "bar"}
    2. GET /items?title=foo&description=bar

    Neither of those approaches is a good idea, is it?

    We have a third possibility: return the resource location in the HTTP Response Header.

    The location attribute can be found among the HTTP Response Headers

    How to return it? We have 2 ways: returning a CreatedAtActionResult or a CreatedAtRouteResult.

    Using CreatedAtAction

    With CreatedAtAction you can specify the name of the Action (or, better, the name of the method that implements that action) as a parameter.

    ps: for the sake of simplicity, the new ID is generated directly into the method – no DBs in sight!

    [HttpPost]
    public IActionResult Create(GameBoard game)
    {
        var newGameId = Guid.NewGuid();
        var gameBoard = new GameBoardEntity
        {
            Title = game.Title,
            Duration = game.Duration,
            Id = newGameId
        };
    
        Games.Add(gameBoard);
    
        return CreatedAtAction(nameof(GetDetail), new { id = newGameId }, game);
    }
    

    What are the second and third parameters?

    The second parameter, new { id = newGameId }, holds the route parameters defined in the GET endpoint (remember the [Route("{id}")] attribute?) and assigns a value to each of them.

    The last parameter is the newly created item – or any object you want to return in that field.

    Using CreatedAtRoute

    Similar to the previous method, we have CreatedAtRoute. As you may guess from the name, it does not reference a specific Action by its name; instead, it references a Route.

    [HttpPost]
    public IActionResult Create(GameBoard game)
    {
        var newGameId = Guid.NewGuid();
        var gameBoard = new GameBoardEntity
        {
            Title = game.Title,
            Duration = game.Duration,
            Id = newGameId
        };
    
        Games.Add(gameBoard);
        return CreatedAtRoute("EndpointName", new { id = newGameId }, game);
    }
    

    To give a Route a name, we need to add a Name attribute to it:

    [HttpGet]
    - [Route("{id}")]
    + [Route("{id}", Name = "EndpointName")]
    public IActionResult GetDetail(Guid id)
    

    That’s it! Easy Peasy!

    When POST-ing an item, the Location attribute contains the direct URL of the resource itself

    Needless to say, when we perform a GET at the URL specified in the Location attribute, we get the details of the item we’ve just created.

    The URL stored in HTTP Response Headers, when called, returns the correct resource

    What about Routes and Query Strings?

    We can use the same technique to get the details of an item by retrieving it using a query string parameter instead of a route parameter:

    [HttpGet]
    - [Route("{id}")]
    - public IActionResult GetDetail(Guid id)
    + [Route("details")]
    + public IActionResult GetDetail([FromQuery] Guid id)
    {
    

    This means that the corresponding path is /GameBoard/details?id=123.

    And, without modifying the Create methods we’ve seen before, we can let ASP.NET resolve the routing and create the URL for us:

    With CreatedAt you can reference even resources accessed via Query string parameters

    And, surprise surprise, there’s more!

    We can mix route parameters with query string parameters, and the Location attribute will hold the right value for the path.

    Let’s update the GetDetail method: now the resource ID is included in the route, and a new parameter – title – is passed in the query string to filter by the name of the game:

    [HttpGet]
    [Route("details/{id}")]
    public IActionResult GetDetail([FromRoute] Guid id, [FromQuery] string title)
    {
        var game = Games.FirstOrDefault(_ =>
            _.Id.Equals(id) && _.Title.Equals(title, StringComparison.OrdinalIgnoreCase)
        );
    

    This means that we need to pass a new field in the object passed to the CreatedAtRoute and CreatedAtAction methods:

    - return CreatedAtRoute("EndpointName", new { id = newGameId }, game);
    + return CreatedAtRoute("EndpointName", new { id = newGameId, title = game.Title }, game);
    

    See the title field?

    When creating a new item, we can see the correct path in the Response Header:

    With CreatedAt you can use both Route and Query string parameters

    Wrapping up

    We’ve seen how to manage the creation of an item when developing a REST API: depending on the way you define routes, you can use CreatedAtRoute or CreatedAtAction.

    Remember that REST APIs are based on the idea of manipulating resources: every HTTP verb has its own meaning, and you should always consider it when developing an endpoint. Is it a GET? We should not change the status of a resource. Is it a POST? We should not return the resource itself – but we can return a reference to it.

    Happy coding!



    Source link

  • Self Doubt and the Quest for Fun: How I Ended up Turning my Portfolio into a Game

    Self Doubt and the Quest for Fun: How I Ended up Turning my Portfolio into a Game



    At the beginning of 2025, I finally decided to build myself a new portfolio. I still pretty much liked the one I made back in 2021, but I felt the need to put to good use all the cool stuff I’ve learned these past couple of years working with WebGPU. And, besides, half of the projects featured in my case studies had been taken offline anyway, so it was about time.

    I didn’t really know where I was going at this point, except that:

    • It would, of course, feature multiple procedurally generated WebGPU scenes. I already had a few concepts to explore in mind, like particles or boids simulation.
    • I wanted to take care of the design myself. It may seem weird, especially since I was very happy with what Gilles came up with for my last portfolio, and also because I do suck at design. But this would give me more freedom, and I’ve also always liked building things from scratch on my own.
    • Last but not least, it had to be fun!

    1. The journey

    The (tough) design and content process

    Don’t do this!

    At first, I had no idea what to do design wise. Fonts, colors: there are so many things that could go wrong.

    I started with simple light and dark colors, kept the fonts Gilles had chosen for my previous portfolio and started to copy/paste its old text content. It didn’t feel that great, and it wasn’t fun for sure.

    The very first design iterations… Still a long way to go!

    I definitely needed colors. I could have wasted a few hours (or days) choosing the right pairing, but instead I decided this was the right opportunity to use the random color palette generator utility I coded a few years ago. I cleaned the code a bit, created a repo, published it to npm and added it to my project. I also slightly changed the tone of the copywriting, and that led me to something still not that great, but a bit more fun.

    Slowly getting there

    I let it sit for a while and started working on other parts of the site, such as integrating the CMS or experimenting with the WebGPU scenes. It’s only after a long iteration process that I finally settled on this kind of old-school retro video game vibe mixed with a more cheerful, cartoonish aesthetic, almost Candy Crush-esque. Impactful headings, popping animations, banded gradients… you name it.

    Of course, I’ve never gone as far as creating a Figma project (I did select a few reference images as a moodboard though) and just tested a ton of stuff directly with code until I felt it wasn’t that bad anymore. All in all, it was a very long and painful process, and I guess every designer would agree at this point: don’t do this!

    A few images from my final moodboard – all credits go to their respective authors.

    Do you actually read portfolios content?

    Another painful point was to settle on the actual content and overall structure of the site. Do I need detailed case studies pages? Do I need pages at all? Will the users even read all those long blocks of text I will struggle to write?

    In the end, I chose to drop the case studies pages. I had a couple of reasons to do so:

    • Oftentimes the project ends up being taken offline for various reasons, and you end up showcasing something the user cannot visit anymore. This is exactly what happened on my previous portfolio.
    • Most of the client work I’ve been doing those past years has been for agencies, and I’m not always allowed to publicly share them. I have no problem with that, but it slightly reduced the number of projects I could highlight.

    From there on, it was a quick decision to just go with a single landing page. I’d put direct links to the projects I could highlight and small videos of all the other projects or personal works I could feature. On top of that, I’d add a few “about” sections mixed with my WebGPU scenes, and that’d be the gist of it.

    Speaking of the WebGPU scenes, I really wanted them to be meaningful, not just a technical demonstration of what I could do. But we’ll get to that later.

    The final UX twist

    After a few months, I felt like I was entering the final stage of development. The page structure was mostly done, all my various sections were there and I was working on the final animations and micro-interactions tweakings.

    So I took a step back, and looked back at my initial expectations. I had my WebGPU scenes showcasing my various technical skills. I had handled the design myself, and it wasn’t that bad. But were the flashy colors and animations enough to make it a really fun experience overall?

    I think you already know the answer. Something was missing.
    Except for the random color palette switcher, the UX basically consisted of scroll-driven animations. Most of the 3D scenes interactions were rudimentary. I needed an idea.

    The design already had this video game cheerful look. So… What if I turned my whole portfolio into a game?
    Once again, I started writing down my ideas:

    • The user would need to interact with the different UI elements to unlock the theme switcher and color palette generator buttons.
    • Each WebGPU scene could serve as a way to unlock the following content, acting as a very basic “puzzle” game.
    • Keep track of the user’s overall progress.
    • Allow the user to skip the whole game process if they want to.

    This means most users wouldn’t ever make it to the footer, or use this random palette generator tool I’d struggled to implement. This might very well be the riskiest, stupidest decision I’ve made so far. But it would give my portfolio the unique and fun touch I was looking for in the first place, so I went all in.

    Of course, it goes without saying it implied a major refactoring of the whole code and I needed to come up with original interaction ideas for the WebGPU scenes, but I like to think it was worth it.

    Gamification mechanisms: unlocking content and rewarding message
    Are you one of the few that unlocked the color palette generator button?

    2. Technical study

    Now that you know all the whys, let’s have a look at the hows!

    Tech stack

    I decided to try Sanity Studio since I’d never worked with it before, and as I knew this would be a relatively small project, it felt like a perfect fit to start using it. Even though I felt like I just scratched its surface, I liked the overall developer experience it provided. On the other hand, I already had a good experience working with Nuxt3, so this was an easy choice.

    No need to mention why I chose GSAP and Lenis — everyone knows those are great tools to deliver smooth animated websites.

    Of course, the WebGPU scenes had to be done with gpu-curtains, the 3D engine I spent so much time working on these past two years. It was a great way to test it in a real-life scenario and gave me the opportunity to fix a few bugs or add a couple features along the way.

    And since I wanted the whole process to be as transparent as possible, I’ve published the whole source code as a monorepo on GitHub.

    Animations

    I won’t go too deep into how I handled the various animations, simply because I’ve essentially used CSS and a bit of GSAP here and there, mostly for canvas animations, SplitText effects or the videos carousel using ScrollTrigger observer.

    The basic scenes

    There are a lot of components on the website that needed to draw something onto a <canvas> and react to the theme and/or color palette changes.

    To handle that, I created a Scene.ts class:

    import type { ColorPalette } from "@martinlaxenaire/color-palette-generator";
    
    export interface SceneParams {
      container: HTMLElement;
      progress?: number;
      palette?: ColorPalette;
      colors?: ColorModelBase[];
    }
    
    export class Scene {
      #progress: number;
      container: HTMLElement;
      colors: ColorModelBase[];
      isVisible: boolean;
    
      constructor({ container, progress = 0, colors = [] }: SceneParams) {
        this.container = container;
        this.colors = colors;
        this.#progress = progress;
        this.isVisible = true;
      }
    
      onResize() {}
    
      onRender() {}
    
      setSceneVisibility(isVisible: boolean = true) {
        this.isVisible = isVisible;
      }
    
      setColors(colors: ColorModelBase[]) {
        this.colors = colors;
      }
    
      get progress(): number {
        return this.#progress;
      }
    
      set progress(value: number) {
        this.#progress = isNaN(value) ? 0 : value;
        this.onProgress();
      }
    
      forceProgressUpdate(progress: number = 0) {
        this.progress = progress;
      }
    
      lerp(start = 0, end = 1, amount = 0.1) {
        return (1 - amount) * start + amount * end;
      }
    
      onProgress() {}
    
      destroy() {}
    }

    Since switching theme from light to dark (or vice versa) also updates the color palette by tweaking the HSV value component of the colors a bit, I’ve just put a setColors() method in there to handle these changes.

    The progress handling here is actually a remnant of when the WebGPU scenes’ animations were mostly scroll-driven (before I introduced the game mechanisms), but since a few scenes still used it, I kept it in there.

    All the 2D canvas scenes extend that class, including the WebGPU fallback scenes, the theme switcher button or the dynamic favicon generator (did you notice that?).
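
    As an illustration of that last idea, a dynamic favicon can be generated by drawing into a small offscreen canvas and swapping the icon’s data URL whenever the palette changes. This is a simplified sketch assuming CSS color strings, not the actual implementation:

    const faviconCanvas = document.createElement("canvas");
    faviconCanvas.width = faviconCanvas.height = 64;
    const ctx = faviconCanvas.getContext("2d");

    function updateFavicon(colors) {
      // draw one vertical stripe per palette color
      const stripeWidth = faviconCanvas.width / colors.length;
      colors.forEach((color, i) => {
        ctx.fillStyle = color;
        ctx.fillRect(i * stripeWidth, 0, stripeWidth, faviconCanvas.height);
      });

      // swap the favicon with the freshly drawn canvas
      let link = document.querySelector('link[rel="icon"]');
      if (!link) {
        link = document.createElement("link");
        link.rel = "icon";
        document.head.appendChild(link);
      }
      link.href = faviconCanvas.toDataURL("image/png");
    }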

    The WebGPU scenes

    One of the very cool features introduced by WebGPU is that you can render to multiple <canvas> elements using only one WebGPU device. I used this to build 4 different scenes (we’ll take a closer look at each of them below), that all extend a WebGPUScene.ts class:

    import { GPUCurtains } from "gpu-curtains";
    import type { ComputeMaterial, RenderMaterial } from "gpu-curtains";
    import { Scene } from "./Scene";
    import type { SceneParams } from "./Scene";
    import {
      QualityManager,
      type QualityManagerParams,
    } from "./utils/QualityManager";
    
    export interface WebGPUSceneParams extends SceneParams {
      gpuCurtains: GPUCurtains;
      targetFPS?: QualityManagerParams["targetFPS"];
    }
    
    export class WebGPUScene extends Scene {
      gpuCurtains: GPUCurtains;
    
      qualityManager: QualityManager;
      quality: number;
      _onVisibilityChangeHandler: () => void;
    
      constructor({
        gpuCurtains,
        container,
        progress = 0,
        colors = [],
        targetFPS = 55,
      }: WebGPUSceneParams) {
        super({ container, progress, colors });
    
        this.gpuCurtains = gpuCurtains;
    
        this._onVisibilityChangeHandler =
          this.onDocumentVisibilityChange.bind(this);
    
        this.qualityManager = new QualityManager({
          label: `${this.constructor.name} quality manager`,
          updateDelay: 2000,
          targetFPS,
          onQualityChange: (newQuality) => this.onQualityChange(newQuality),
        });
    
        this.quality = this.qualityManager.quality.current;
    
        document.addEventListener(
          "visibilitychange",
          this._onVisibilityChangeHandler
        );
      }
    
      override setSceneVisibility(isVisible: boolean = true) {
        super.setSceneVisibility(isVisible);
    
        this.qualityManager.active = isVisible;
      }
    
      onDocumentVisibilityChange() {
        this.qualityManager.active = this.isVisible && !document.hidden;
      }
    
      compileMaterialOnIdle(material: ComputeMaterial | RenderMaterial) {
        if (!this.isVisible && "requestIdleCallback" in window) {
          window.requestIdleCallback(() => {
            material.compileMaterial();
          });
        }
      }
    
      override onRender(): void {
        super.onRender();
        this.qualityManager.update();
      }
    
      onQualityChange(newQuality: number) {
        this.quality = newQuality;
      }
    
      override destroy(): void {
        super.destroy();
    
        document.removeEventListener(
          "visibilitychange",
          this._onVisibilityChangeHandler
        );
      }
    }

    In the real version, this class also handles the creation of a Tweakpane GUI folder (useful for debugging or tweaking values), but for the sake of clarity I removed the related code here.

    As you can see, each of these scenes closely monitors its own performance using a custom QualityManager class. We’ll talk about that later, in the performance section.

    Okay, now that we have the basic architecture in mind, let’s break down each of the WebGPU scenes!

    Since WebGPU is not fully supported yet, I’ve created fallback versions using the 2D canvas API and the Scene class we’ve seen above for each of the following scenes.
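
    The feature detection itself can stay very small; something along these lines (a sketch of the assumed check, not the exact project code) is enough to pick between the WebGPU scene and its 2D fallback:

    async function detectWebGPU() {
      if (!navigator.gpu) return false;
      // an adapter can still be unavailable even when navigator.gpu exists
      const adapter = await navigator.gpu.requestAdapter();
      return !!adapter;
    }

    const hasWebGPU = await detectWebGPU();
    // hasWebGPU ? instantiate the WebGPU scene : instantiate the 2D canvas fallback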

    Hero scene

    The scenes featured in the portfolio roughly follow an order of increasing complexity: the further you advance in the portfolio, the more technically involved the scenes become.

    In that way, the hero scene is by far the simplest, technically speaking, but it had to look particularly striking and engaging to immediately capture the user’s attention. It was conceived as some sort of mobile puzzle game splash screen.

    It’s made of a basic, single fullscreen quad. The idea here is to first rotate its UV components each frame, map them to polar coordinates and use that to create colored triangle segments.

    // Center UVs at (0.5, 0.5)
    var centeredUV = uv - vec2f(0.5);
    
    // Apply rotation using a 2D rotation matrix
    let angleOffset = params.time * params.speed; // Rotation angle in radians
    let cosA = cos(angleOffset);
    let sinA = sin(angleOffset);
    
    // Rotate the centered UVs
    centeredUV = vec2<f32>(
      cosA * centeredUV.x - sinA * centeredUV.y,
      sinA * centeredUV.x + cosA * centeredUV.y
    );
    
    // Convert to polar coordinates
    let angle = atan2(centeredUV.y, centeredUV.x); // Angle in radians
    let radius = length(centeredUV);
    
    // Map angle to triangle index
    let totalSegments  = params.numTriangles * f32(params.nbColors) * params.fillColorRatio;
    let normalizedAngle = (angle + PI) / (2.0 * PI); // Normalize to [0,1]
    let triIndex = floor(normalizedAngle * totalSegments); // Get triangle index
    
    // Compute fractional part for blending
    let segmentFraction = fract(normalizedAngle * totalSegments); // Value in [0,1] within segment
    
    let isEmpty = (i32(triIndex) % i32(params.fillColorRatio)) == i32(params.fillColorRatio - 1.0);
    let colorIndex = i32(triIndex / params.fillColorRatio) % params.nbColors; // Use half as many color indices
    
    let color = select(vec4(params.colors[colorIndex], 1.0), vec4f(0.0), isEmpty);

    There’s actually a wavy noise applied to the UV beforehand using concentric circles, but you get the idea.

    Interestingly enough, the most difficult part was to achieve the rounded rectangle entering animation while preserving the correct aspect ratio. This was done using this function:

    fn roundedRectSDF(uv: vec2f, resolution: vec2f, radiusPx: f32) -> f32 {
        let aspect = resolution.x / resolution.y;
    
        // Convert pixel values to normalized UV space
        let marginUV = vec2f(radiusPx) / resolution;
        let radiusUV = vec2f(radiusPx) / resolution;
    
        // Adjust radius X for aspect ratio
        let radius = vec2f(radiusUV.x * aspect, radiusUV.y);
    
        // Center UV around (0,0) and apply scale (progress)
        var p = uv * 2.0 - 1.0;       // [0,1] → [-1,1]
        p.x *= aspect;                // fix aspect
        p /= max(0.0001, params.showProgress); // apply scaling
        p = abs(p);
    
        // Half size of the rounded rect
        let halfSize = vec2f(1.0) - marginUV * 2.0 - radiusUV * 2.0;
        let halfSizeScaled = vec2f(halfSize.x * aspect, halfSize.y);
    
        let d = p - halfSizeScaled;
        let outside = max(d, vec2f(0.0));
        let dist = length(outside) + min(max(d.x, d.y), 0.0) - radius.x * 2.0;
    
        return dist;
    }

    Highlighted videos slider scene

    Next up is the highlighted videos slider. The original idea came from an old WebGL prototype I had built a few years ago and never used.

    The idea is to displace the planes’ vertices to wrap them around a cylinder.

    var position: vec3f = attributes.position;
    
    // curve
    let angle: f32 = 1.0 / curve.nbItems;
    
    let cosAngle = cos(position.x * PI * angle);
    let sinAngle = sin(position.x * PI * angle);
          
    position.z = cosAngle * curve.itemWidth;
    position.x = sinAngle;

    I obviously used this for the year titles, whereas the videos and trail effects behind them are distorted using a post-processing pass.

    While this was originally tied to the vertical scroll values (and I really liked the feeling it produced), I had to update its behavior when I switched to the whole gamification idea, making it a horizontal carousel.

    Thanks to gpu-curtains DOM to WebGPU syncing capabilities, it was relatively easy to set up the videos grid prototype using the Plane class.

    The trail effect is done using a compute shader writing to a storage texture. The compute shader only runs when necessary, which means when the slider is moving. I’m sure it could have been done in a thousand different ways, but it was a good excuse to play with compute shaders and storage textures. Here’s the compute shader involved:

    struct Rectangles {
      sizes: vec2f,
      positions: vec2f,
      colors: vec4f
    };
    
    struct Params {
      progress: f32,
      intensity: f32
    };
    
    @group(0) @binding(0) var backgroundStorageTexture: texture_storage_2d<rgba8unorm, write>;
    
    @group(1) @binding(0) var<uniform> params: Params;
    @group(1) @binding(1) var<storage, read> rectangles: array<Rectangles>;
    
    fn sdfRectangle(center: vec2f, size: vec2f) -> f32 {
        let dxy = abs(center) - size;
        return length(max(dxy, vec2(0.0))) + max(min(dxy.x, 0.0), min(dxy.y, 0.0));
    }
    
    @compute @workgroup_size(16, 16) fn main(
        @builtin(global_invocation_id) GlobalInvocationID: vec3<u32>
    ) {
        let bgTextureDimensions = vec2f(textureDimensions(backgroundStorageTexture));
    
        if(f32(GlobalInvocationID.x) <= bgTextureDimensions.x && f32(GlobalInvocationID.y) <= bgTextureDimensions.y) {
            let uv = vec2f(f32(GlobalInvocationID.x) / bgTextureDimensions.x - params.progress,
            f32(GlobalInvocationID.y) / bgTextureDimensions.y);
    
            var color = vec4f(0.0, 0.0, 0.0, 0.0); // Default to black
            let nbRectangles: u32 = arrayLength(&rectangles);
    
            for (var i: u32 = 0; i < nbRectangles; i++) {
                let rectangle = rectangles[i];
    
                let rectDist = sdfRectangle(uv - rectangle.positions, vec2(rectangle.sizes.x * params.intensity, rectangle.sizes.y));
    
                color = select(color, rectangle.colors * params.intensity, rectDist < 0.0);
            }
    
            textureStore(backgroundStorageTexture, vec2<i32>(GlobalInvocationID.xy), color);
        }
    }

    I thought I was done here, but while running production build tests I stumbled upon an issue. Unfortunately, preloading all those videos to use as WebGPU textures resulted in a huge initial payload and also significantly affected the CPU load. To mitigate that, I implemented sequential video preloading, waiting for each video to have enough data before loading the next one. This significantly improved both the initial load time and the CPU overhead.
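
    A simplified version of that sequential preloading could look like this (the selector and loading threshold are assumptions; the real implementation lives in the project repo):

    // Load videos one at a time, waiting until each one can play through
    async function preloadVideosSequentially(videos) {
      for (const video of videos) {
        await new Promise((resolve) => {
          if (video.readyState >= HTMLMediaElement.HAVE_ENOUGH_DATA) {
            resolve();
            return;
          }
          video.addEventListener("canplaythrough", resolve, { once: true });
          video.load();
        });
      }
    }

    preloadVideosSequentially([...document.querySelectorAll(".carousel video")]);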

    Invoices scene

    The third WebGPU scene was initially supposed to be my own take on 3D boids simulation, using instancing and a compute shader. After a bit of work, I had a bunch of instances following my mouse, but the end result was not living up to my expectations. The spheres were sometimes overlapping each other, or disappearing behind the edges of the screen. I kept improving it, adding self-collision, edge detection and attraction/repulsion mechanisms until I was happy enough with the result.

    I like to call it the “invoices” scene, because the sphere instances represent all the invoices I’ve issued during my freelance career, scaled based on their amounts. Since I’m using Google Sheets to handle most of my accounting, I’ve made a little script that gathers all my invoice amounts in a single, separate private sheet each time I update my accounting sheets. I then fetch and parse that sheet to create the instances. It was a fun little side exercise, and it turns this scene into an ironically meaningful experiment: each time you click and hold, you kind of help me collect my money.

    The compute shader uses a buffer ping-pong technique: you start with two identically filled buffers (e.g. packed raw data), then at each compute dispatch call you read the data from the first buffer and update the second one accordingly. Once done, you swap the two buffers before the next call and repeat the process.
    If you’re familiar with WebGL, this is often done with textures. WebGPU and compute shaders allow us to do so with buffers, which is way more powerful. Here is the complete compute shader code:

    struct ParticleB {
      position: vec4f,
      velocity: vec4f,
      rotation: vec4f,
      angularVelocity: vec4f,
      data: vec4f
    };
    
    struct ParticleA {
      position: vec4f,
      velocity: vec4f,
      rotation: vec4f,
      angularVelocity: vec4f,
      data: vec4f
    };
    
    struct SimParams {
      deltaT: f32,
      mousePosition: vec3f,
      mouseAttraction: f32,
      spheresRepulsion: f32,
      boxReboundFactor: f32,
      boxPlanes: array<vec4f, 6>
    };
    
    @group(0) @binding(0) var<uniform> params: SimParams;
    @group(0) @binding(1) var<storage, read> particlesA: array<ParticleA>;
    @group(0) @binding(2) var<storage, read_write> particlesB: array<ParticleB>;
    
    
    fn constrainToFrustum(pos: vec3<f32>, ptr_velocity: ptr<function, vec3<f32>>, radius: f32) -> vec3<f32> {
        var correctedPos = pos;
    
        for (var i = 0u; i < 6u; i++) { // Loop through 6 frustum planes
            let plane = params.boxPlanes[i];
            let dist = dot(plane.xyz, correctedPos) + plane.w;
    
            if (dist < radius) { // If inside the plane boundary (radius = 1)
                // Move the point inside the frustum
                let correction = plane.xyz * (-dist + radius); // Push inside the frustum
                
                // Apply the position correction
                correctedPos += correction;
    
                // Reflect velocity with damping
                let normal = plane.xyz;
                let velocityAlongNormal = dot(*(ptr_velocity), normal);
                
                if (velocityAlongNormal < 0.0) { // Ensure we only reflect if moving towards the plane
                    *(ptr_velocity) -= (1.0 + params.boxReboundFactor) * velocityAlongNormal * normal;
                }
            }
        }
        return correctedPos;
    }
    
    fn quaternionFromAngularVelocity(omega: vec3f, dt: f32) -> vec4f {
        let theta = length(omega) * dt;
        if (theta < 1e-5) {
            return vec4(0.0, 0.0, 0.0, 1.0);
        }
        let axis = normalize(omega);
        let halfTheta = 0.5 * theta;
        let sinHalf = sin(halfTheta);
        return vec4(axis * sinHalf, cos(halfTheta));
    }
    
    fn quaternionMul(a: vec4f, b: vec4f) -> vec4f {
        return vec4(
            a.w * b.xyz + b.w * a.xyz + cross(a.xyz, b.xyz),
            a.w * b.w - dot(a.xyz, b.xyz)
        );
    }
    
    fn integrateQuaternion(q: vec4f, angularVel: vec3f, dt: f32) -> vec4f {
        let omega = vec4(angularVel, 0.0);
        let dq = 0.5 * quaternionMul(q, omega);
        return normalize(q + dq * dt);
    }
    
    @compute @workgroup_size(64) fn main(
        @builtin(global_invocation_id) GlobalInvocationID: vec3<u32>
    ) {
        var index = GlobalInvocationID.x;
        
        var vPos = particlesA[index].position.xyz;
    
        var vVel = particlesA[index].velocity.xyz;
        var collision = particlesA[index].velocity.w;
    
        var vQuat = particlesA[index].rotation;
        var angularVelocity = particlesA[index].angularVelocity.xyz;
    
        var vData = particlesA[index].data;
    
        let sphereRadius = vData.x;
        var newCollision = vData.y;
    
        
        collision += (newCollision - collision) * 0.2;
        collision = smoothstep(0.0, 1.0, collision);
        newCollision = max(0.0, newCollision - 0.0325);
    
        let mousePosition: vec3f = params.mousePosition;
        let minDistance: f32 = sphereRadius; // Minimum allowed distance between spheres
    
        // Compute attraction towards sphere 0
        var directionToCenter = mousePosition - vPos;
        let distanceToCenter = length(directionToCenter);
    
        // Slow down when close to the attractor
        var dampingFactor = smoothstep(0.0, minDistance, distanceToCenter);
        
        if (distanceToCenter > minDistance && params.mouseAttraction > 0.0) { // Only attract if outside the minimum distance
            vVel += normalize(directionToCenter) * params.mouseAttraction * dampingFactor;
            vVel *= 0.95;
        }
        
        // Collision Handling: Packing spheres instead of pushing them away
        var particlesArrayLength = arrayLength(&particlesA);
        
        for (var i = 0u; i < particlesArrayLength; i++) {
            if (i == index) {
                continue;
            }
            
            let otherPos = particlesA[i].position.xyz;
            let otherRadius = particlesA[i].data.x;
            let collisionMinDist = sphereRadius + otherRadius;
            let toOther = otherPos - vPos;
            let dist = length(toOther);
    
            if (dist < collisionMinDist) { 
                let pushDir = normalize(toOther);
                let overlap = collisionMinDist - dist;
                let pushStrength = otherRadius / sphereRadius; // radius
                
                // Push away proportionally to overlap
                vVel -= pushDir * (overlap * params.spheresRepulsion) * pushStrength;
                newCollision = min(1.0, pushStrength * 1.5);
    
                let r = normalize(cross(pushDir, vVel));
                angularVelocity += r * length(vVel) * 0.1 * pushStrength;
            }
        }
    
        let projectedVelocity = dot(vVel, directionToCenter); // Velocity component towards mouse
    
        let mainSphereRadius = 1.0;
    
        if(distanceToCenter <= (mainSphereRadius + minDistance)) {
            let pushDir = normalize(directionToCenter);
            let overlap = (mainSphereRadius + minDistance) - distanceToCenter;
            
            // Push away proportionally to overlap
            vVel -= pushDir * (overlap * params.spheresRepulsion) * (2.0 + params.mouseAttraction);
    
            newCollision = 1.0;
    
            if(params.mouseAttraction > 0.0) {
                vPos -= pushDir * overlap;
            }
    
            let r = normalize(cross(pushDir, vVel));
            angularVelocity += r * length(vVel) * 0.05;
        }
    
        vPos = constrainToFrustum(vPos, &vVel, sphereRadius);
    
        // Apply velocity update
        vPos += vVel * params.deltaT;
    
        angularVelocity *= 0.98;
        let updatedQuat = integrateQuaternion(vQuat, angularVelocity, params.deltaT);
        
        // Write back      
        particlesB[index].position = vec4(vPos, 0.0);
        particlesB[index].velocity = vec4(vVel, collision);
        particlesB[index].data = vec4(vData.x, newCollision, vData.z, vData.w);
        particlesB[index].rotation = updatedQuat;
        particlesB[index].angularVelocity = vec4(angularVelocity, 1.0);
    }
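
    On the JavaScript side, the ping-pong boils down to alternating which bind group (and therefore which buffer is read and which is written) is used on each dispatch. Here is a generic WebGPU-flavoured sketch, independent from gpu-curtains; simulationPipeline, bindGroups and particleCount are assumed names:

    let pingPongIndex = 0;

    function dispatchSimulation(device) {
      const encoder = device.createCommandEncoder();
      const pass = encoder.beginComputePass();
      pass.setPipeline(simulationPipeline);
      // bindGroups[0] reads buffer A and writes buffer B, bindGroups[1] does the opposite
      pass.setBindGroup(0, bindGroups[pingPongIndex]);
      pass.dispatchWorkgroups(Math.ceil(particleCount / 64)); // matches @workgroup_size(64)
      pass.end();
      device.queue.submit([encoder.finish()]);

      pingPongIndex = 1 - pingPongIndex; // swap read/write roles for the next frame
    }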

    One of my main inspirations for this scene was this awesome demo by Patrick Schroen. I spent a lot of time looking for the right rendering tricks to use and finally set up my mind on volumetric lighting. The implementation is quite similar to what Maxime Heckel explained in this excellent breakdown article. Funnily enough, I was already deep into my own implementation when he released that piece, and I owe him the idea of using a blue noise texture.

    As a side note, during the development phase this was the first scene that required an actual user interaction and it played a pivotal role in my decision to turn my folio into a game.

    Open source scene

    For the last scene, I wanted to experiment a bit more with particles and curl noise because I’ve always liked how organic and beautiful it can get. I had already published an article using these concepts, so I had to come up with something different. Jaume Sanchez’ Polygon Shredder definitely was a major inspiration here.

    Since this experiment was part of my open source commitment section, I had the idea to use my GitHub statistics as a data source for the particles. Each statistic (number of commits, followers, issues closed and so on) is assigned to a color and turned into a bunch of particles. You can even toggle them on and off using the filters in the information pop-up. Once again, this changed a rather technical demo into something more meaningful.
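
    Conceptually, the mapping from statistics to particle groups is straightforward; a rough sketch with made-up numbers and assumed names (githubStats, palette) could be:

    const githubStats = { commits: 1280, followers: 410, issuesClosed: 95 }; // placeholder values

    const particleGroups = Object.entries(githubStats).map(([label, count], i) => ({
      label,
      color: palette[i % palette.length], // one palette color per statistic
      count: count * 10,                  // arbitrary particles-per-unit ratio
      visible: true,                      // toggled by the filters in the info pop-up
    }));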

    While working on the portfolio, I was also exploring new rendering techniques with gpu-curtains such as planar reflections. Traditionally used for mirror effects or floor reflections, it consists of rendering a part of your scene a second time but from a different camera angle and projecting it onto a plane. Having nailed this, I thought it would be a perfect match there and added it to the scene.

    Last but not least, and as a reminder of the retro video games vibe, I wanted to add a pixelated mouse trail post-processing effect. I soon realized it would be way too much though, and ended up showing it only when the user is actually drawing a line, making it more subtle.

    Using the filters can actually help you unlock features!

    Performance and accessibility

    On such highly interactive and immersive pages, performance is key. Here are a few tricks I’ve used to try to maintain the most fluid experience across all devices.

    Dynamic imports

    I’ve used Nuxt’s dynamically imported components and lazy hydration for almost every non-critical component of the page. In the same way, all WebGPU scenes are dynamically loaded only if WebGPU is supported. This significantly decreased the initial page load time.

    // pseudo code
    import type { WebGPUHeroScene } from "~/scenes/hero/WebGPUHeroScene";
    import { CanvasHeroScene } from "~/scenes/hero/CanvasHeroScene";
    
    let scene: WebGPUHeroScene | CanvasHeroScene | null;
    const canvas = useTemplateRef("canvas");
    
    const { colors } = usePaletteGenerator();
    
    onMounted(async () => {
      const { $gpuCurtains, $hasWebGPU, $isReducedMotion } = useNuxtApp();
    
      if ($hasWebGPU && canvas.value) {
        const { WebGPUHeroScene } = await import("~/scenes/hero/WebGPUHeroScene");
    
        scene = new WebGPUHeroScene({
          gpuCurtains: $gpuCurtains,
          container: canvas.value,
          colors: colors.value,
        });
      } else if (canvas.value) {
        scene = new CanvasHeroScene({
          container: canvas.value,
          isReducedMotion: $isReducedMotion,
          colors: colors.value,
        });
      }
    });

    I’m not particularly fond of Lighthouse reports, but as you can see, the test result is quite good (note that it’s running without WebGPU though).

    PageSpeed Insights report

    Monitoring WebGPU performance in real time

    I briefly mentioned it earlier, but each WebGPU scene actually monitors its own performance by keeping track of its FPS rate in real time. To do so, I’ve written 2 separate classes: FPSWatcher, which records the average FPS over a given period of time, and QualityManager, which uses an FPSWatcher to set a current quality rating on a 0 to 10 scale based on the average FPS.

    This is what they look like:

    export interface FPSWatcherParams {
      updateDelay?: number;
      onWatch?: (averageFPS: number) => void;
    }
    
    export default class FPSWatcher {
      updateDelay: number;
      onWatch: (averageFPS: number) => void;
    
      frames: number[];
      lastTs: number;
      elapsedTime: number;
      average: number;
    
      constructor({
        updateDelay = 1000, // ms
        onWatch = () => {}, // callback called every ${updateDelay}ms
      }: FPSWatcherParams = {}) {
        this.updateDelay = updateDelay;
        this.onWatch = onWatch;
    
        this.frames = [];
    
        this.lastTs = performance.now();
        this.elapsedTime = 0;
    
        this.average = 0;
      }
    
      restart() {
        this.frames = [];
        this.elapsedTime = 0;
        this.lastTs = performance.now();
      }
    
      update() {
        const delta = performance.now() - this.lastTs;
        this.lastTs = performance.now();
        this.elapsedTime += delta;
    
        this.frames.push(delta);
    
        if (this.elapsedTime > this.updateDelay) {
          const framesTotal = this.frames.reduce((a, b) => a + b, 0);
    
          this.average = (this.frames.length * 1000) / framesTotal;
    
          this.frames = [];
          this.elapsedTime = 0;
    
          this.onWatch(this.average);
        }
      }
    }

    It’s very basic: I just record the elapsed time between two render calls, put that into an array and run a callback every updateDelay milliseconds with the latest FPS average value.
    It is then used by the QualityManager class, that does all the heavy lifting to assign an accurate current quality score:

    import type { FPSWatcherParams } from "./FPSWatcher";
    import FPSWatcher from "./FPSWatcher";
    
    export interface QualityManagerParams {
      label?: string;
      updateDelay?: FPSWatcherParams["updateDelay"];
      targetFPS?: number;
      onQualityChange?: (newQuality: number) => void;
    }
    
    export class QualityManager {
      label: string;
      fpsWatcher: FPSWatcher;
      targetFPS: number;
      #lastFPS: number | null;
      #active: boolean;
      onQualityChange: (newQuality: number) => void;
    
      quality: {
        current: number;
        min: number;
        max: number;
      };
    
      constructor({
        label = "Quality manager",
        updateDelay = 1000,
        targetFPS = 60,
        onQualityChange = (newQuality) => {},
      }: QualityManagerParams = {}) {
        this.label = label;
        this.onQualityChange = onQualityChange;
    
        this.quality = {
          min: 0,
          max: 10,
          current: 7,
        };
    
        this.#active = true;
    
        this.targetFPS = targetFPS;
        this.#lastFPS = null;
    
        this.fpsWatcher = new FPSWatcher({
          updateDelay,
          onWatch: (averageFPS) => this.onFPSWatcherUpdate(averageFPS),
        });
      }
    
      get active() {
        return this.#active;
      }
    
      set active(value: boolean) {
        if (!this.active && value) {
          this.fpsWatcher.restart();
        }
    
        this.#active = value;
      }
    
      onFPSWatcherUpdate(averageFPS = 0) {
        const lastFpsRatio = this.#lastFPS
          ? Math.round(averageFPS / this.#lastFPS)
          : 1;
        const fpsRatio = (averageFPS + lastFpsRatio) / this.targetFPS;
    
        // if fps ratio is over 0.95, we should increase
        // else we decrease
        const boostedFpsRatio = fpsRatio / 0.95;
    
        // smooth change multiplier avoid huge changes in quality
        // except if we've seen a big change from last FPS values
        const smoothChangeMultiplier = 0.5 * lastFpsRatio;
    
        // quality difference that should be applied (number with 2 decimals)
        const qualityDiff =
          Math.round((boostedFpsRatio - 1) * 100) * 0.1 * smoothChangeMultiplier;
    
        if (Math.abs(qualityDiff) > 0.25) {
          const newQuality = Math.min(
            Math.max(
              this.quality.current + Math.round(qualityDiff),
              this.quality.min
            ),
            this.quality.max
          );
    
          this.setCurrentQuality(newQuality);
        }
    
        this.#lastFPS = averageFPS;
      }
    
      setCurrentQuality(newQuality: number) {
        this.quality.current = newQuality;
        this.onQualityChange(this.quality.current);
      }
    
      update() {
        if (this.active) {
          this.fpsWatcher.update();
        }
      }
    }
    

    The most difficult part here is handling the quality changes smoothly, to avoid huge drops or gains in quality. You also don’t want to fall into a loop where, for example:

    1. The average FPS is poor, so you degrade your current quality.
    2. You detect a quality loss and therefore decide to switch off an important feature, such as shadow mapping.
    3. Removing the shadow mapping gives you an FPS boost, and after the expected delay the current quality is upgraded.
    4. You detect a quality gain, decide to re-enable shadow mapping and, soon enough, you’re back to step 1.

    Typically, the quality rating is used to update things such as the current pixel ratio of the scene, frame buffer resolutions, the number of shadow map PCF samples, volumetric raymarching steps and so on. In worst-case scenarios, it can even disable shadow mapping or post-processing effects.
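
    To make that concrete, a helper called from onQualityChange could translate the 0–10 rating into rendering settings roughly like this. The property names and thresholds below are illustrative assumptions, not the portfolio’s actual code:

    // Illustrative only: map the 0–10 quality rating to concrete scene settings
    function applyQuality(scene, newQuality) {
      // scale the render resolution with the quality rating
      scene.pixelRatio = Math.min(window.devicePixelRatio, 0.75 + newQuality * 0.075);

      // fewer raymarching steps and PCF samples when quality drops
      scene.raymarchingSteps = 16 + newQuality * 4;
      scene.shadowPCFSamples = newQuality >= 5 ? 9 : 4;

      // worst case: drop shadow mapping and post-processing entirely
      scene.useShadows = newQuality > 2;
      scene.usePostProcessing = newQuality > 1;
    }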

    Accessibility

    Finally, the site had to respect at least the basic accessibility standards. I’m not an accessibility expert and I may have made a few mistakes here and there, but the key points are that the HTML is semantically correct, it is possible to navigate using the keyboard, and the prefers-reduced-motion preference is respected. I achieved the latter by entirely disabling the gamification concept for these users, removing every CSS and JavaScript animation, and making the scenes fall back to their 2D canvas versions, without any animation at all.
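
    The prefers-reduced-motion check itself is a standard media query; a minimal sketch of how it can gate the animated experience (the helper names are hypothetical):

    const prefersReducedMotion = window.matchMedia(
      "(prefers-reduced-motion: reduce)"
    ).matches;

    if (prefersReducedMotion) {
      disableGamification();   // hypothetical: skip the game mechanics entirely
      useStaticCanvasScenes(); // hypothetical: 2D fallbacks, without animations
    }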

    Conclusion

    Well, it was a long journey, wasn’t it?

    Working on my portfolio these past 6 months has been a truly demanding task, technically but also emotionally. I’m still having a lot of self doubts about the overall design, key UX choices or level of creativity. I also do think that it kind of honestly sums up who I am as a developer but also as a person. In the end, it’s probably what matters most.

    I hope you’ve learnt a few things reading this case study, whether about technical stuff or my own creative process. Thank you all, and remember: stay fun!



    Source link

  • 13 tips for delivering better tech talks | Code4IT


    Doing a tech talk is easy. Doing a good talk is harder. We’re going to see some tips to improve the delivery of your conferences.


    I love to deliver tech talks: they help me improve both my technical and communication skills.

    Hey! If you’re starting doing tech talks, don’t miss my article Thoughts after my very first public speech where I explained what I did right and what I did wrong at my very first tech talk. Learn from my errors, and avoid them!💪

    On one hand, teaching stuff requires technical preparations: you need to know what you’re talking about, and you need to know it pretty well. Even more, you need to know some advanced stuff to give the audience something they will remember – if everything is obvious, what will they remember from your talk?

    On the other hand, tech talks require good communication skills: your job is to deliver a message to your audience, and you can do it only if your intent is clear and you avoid talking about useless (or misleading) stuff.

    But, in the end, only having good content is not enough: you need to shape the talk in a way that stimulates the attention of the public and does not bore them.

    Note: I still have a lot of room for improvement myself, so I keep working on making my talks better!

    1- Tell the audience which topics your talk covers

    Why should someone attend your talk?

    This is a simple question, but it must be clear to you way before submitting your talk to CFPs. Usually, the best reason to attend is the content of the conference (unless you attend a conference only for the free pizza and swag!).

    You should always make clear what the topic of your talk is.

    Where, and when?

    1. In the title: the title should express what you’re going to say. «Azure DevOps: an intro to build and release pipelines» is better than «Let’s work with Azure DevOps!». Yes, it’s less fancy, but you are making clear the scope (build and release pipelines), the tool (Azure DevOps), and the difficulty of your talk (it’s an intro, not a talk that targets experts).
    2. In the description of your talk: when submitting to CFPs, when sharing it on social media, and everywhere else you can add some text to describe your talk, you should add some more details. For instance, «In this session, we’re gonna see how to build and release .NET Core projects with Azure DevOps pipelines, how to use PR builds, how to manage variable substitution with Variable Groups…». This will help the reader decide whether or not to attend your session.
    3. At the beginning of your talk: this is for people who forgot to read the session description. Repeat the points you’re gonna cover at the beginning of your talk, right after the title and the slide about who you are. This way, attendees can leave if they find out that the topic is not what they were expecting from the title. They don’t waste time on something that doesn’t interest them, and you don’t lose your focus looking at their bored faces.

    2- Divide the talks into smaller blocks

    Think of your own experience: are you able to keep your focus on a 1-hour-long talk? Or do you get distracted after 10 minutes, start letting your mind wander, and so on?

    Well, that’s normal. Generally, people have a short attention span. This means that you cannot talk for 60 minutes about the same topic: your audience will get bored soon.

    So, you should split your talk into several smaller blocks. A good idea is to separate the sub-topics into 5- or 10-minute slots, to help people understand the precise topic of a block and, if needed, pay less attention to that specific block (maybe because it’s a topic they already know, so not focusing 100% is fine).

    3- Wake up the audience with simple questions

    Sometimes the easiest way to regain the attention of the attendees is to ask them some simple questions: «Can you see my screen?», «Has any of you already used this tool?».

    It’s easy to reply to these questions, even without thinking too much about the answer.

    These kinds of questions will wake up the audience and let them focus on what you’re saying for a bit longer.

    Needless to say, avoid asking those questions too many times, and don’t always repeat the same question.

    4- Choose the right slide layout

    Many monitors and screens are now in 16:9. So remember to adapt the slide layout to that format.

    In the image below, we can see how the slide layout impacts the overall look: slides with a 4:3 layout are too small for current devices, and they just look… ugly!

    The right format impacts how the slides are viewed on different devices

    Slides in 16:9 feel more natural for many screen layouts.

    It’s a simple trick to remember, but it may have a great impact on your delivery.

    5- Don’t move hands and body if it’s not necessary

    Moving your body too much drives attention away from the content of your talk. Avoid fidgeting and moving your hands and head too much.

    Stop fidgeting!

    Remember that every movement of your body should have a meaning. Use your movements to drive attention to a specific topic, or to imitate and explain some details.
    For instance, use your hands to simulate how some modules communicate with each other.

    6- Mind how the audience sees your screen

    When preparing your presentation, you are used to thinking about how you see the screen: you know your monitor size and resolution, and you can adjust your content based on that info.

    But you don’t know how the audience will see your screen.

    If you are doing an in-person talk, pay attention to the screens the audience sees: is the resolution fine? Do you have to increase the font size? Is it fine both for folks in the front row and in the back seats?

    On the contrary, when doing an online talk, you don’t know the device your audience will use: PC, tablet, smart tv, smartphone?

    This means that you can’t rely on the mouse cursor to point at a specific part of your monitor (eg: some text, a button, a menu item) as your audience may not see it.

    Where is the cursor?

    A good idea is to use a tool like ZoomIt: it allows you to zoom in on a part of your screen and to draw lines on a virtual layer.

    So, instead of saying «now click this button – hey, can you see my cursor?», use ZoomIt to zoom in on that button or, even better, to draw a rectangle or an arrow to highlight it.

    7- Pin your presentation folder in File Explorer

    As we’ve already discussed in my article 10 underestimated tasks to do before your next virtual presentation, you should hide all the desktop icons – they tend to distract the audience. This also implies that even the folder you use to store the presentation assets has to be hidden.

    But now… Damn, you’ve just closed the folder with all the conference assets! Now you have to find it again and navigate through your personal folders.

    If you use Windows, luckily you can simply right-click on your folder and click Pin to Quick access

    Click “Pin to Quick access”

    and have it displayed in the Quick Access panel of any Explorer window you open.

    Folder displayed as Pinned Folder

    In this way, you can easily reach any folder with just one click.

    So your “main” folder will not be visible on your desktop, but you can still open it via the Quick Access panel.

    8- Stress when a topic is important

    You have created the presentation. You know why you built it and what the important parts are. But does your audience know what is important to remember?

    If you are talking for one hour, you are giving the audience a lot of information. Some of it is trivia, some are niche details, and some are the key points of a topic.

    So, make it clear what is important to remember and what is just a “good-to-know”.

    For instance, when talking about clean code, stress why it is important to follow a certain rule if it can be a game-changer. «Use consistent names when classes have similar meaning» and «Choose whether to use tabs or spaces, and use them for all your files» are both valid tips, but the first one carries a different weight compared to the second.

    Again, spend more time on the important parts, and explicitly tell the audience that that part is important (and why).

    9- Use the slide space in the best way possible

    Let’s talk about the size of the slides’ font: keep it consistent or adapt it to the text and space in the slide?

    I thought that keeping it consistent was a good idea – somehow it hurts my brain seeing different sizes in different slides.

    But then I realized that there are some exceptions: for example, when a slide contains only a few words or a few points in a bullet list. In that case, you should use the space better, to avoid all the emptiness around your text.

    Here we have 2 slides with the same font size:

    Two slides with the same font size

    The first one is fine, the second one is too empty.

    Let’s adjust the font of the second slide:

    Two slides with different font size

    It’s a bit better. Not excellent, but at least the audience can read it. The text is a bit bigger, yet the difference in size is barely noticeable.

    10- Turn off all the notifications

    It’s simple: if you are sharing your screen, you don’t want your audience to see those weird messages you receive on Discord or the spam emails on Outlook.

    So, turn off all the notifications. Of course, unless you are demonstrating how to integrate your stuff with platforms like Slack, Teams et cetera.

    11- Use the slides as a reference, not as a teleprompter

    Avoid full sentences in your slides. Nobody’s gonna read them – even more so if the audience is not paying attention!

    So, prefer putting just a few words instead of full, long sentences: you should not read your slides as if they were a teleprompter, and you should help your audience get back on track if they lose their focus.

    Two bullet points like “Keep track of your progress” and “Fix weakness” are better than a single phrase like “Remember to use some tool to keep track of the progress of your project, so that you can find the weak points and fix them”.

    Of course, this doesn’t apply to quotes: write the full text if the exact wording is important.

    12- “End” is the word

    «We’re nearly at the end of this session.»

    A simple yet powerful statement that can wake up your audience.

    When you’ve lost your focus, certain words, like end, trigger your attention. You unconsciously remember that you are at that conference for a reason, and that you have to focus to get the most out of the last minutes of the talk.

    So, try triggering the subconscious of your audience with some words like ending.

    13- Recap what you’ve explained

    Finally, you’re at the end of your talk.

    What should the audience remember from it?

    Spend some time recapping what you’ve covered, what the key points of your talk are, and what you’d like the audience to remember.

    It is a good way to help the audience focus again and think of questions to bring to your attention.

    Wrapping up

    In this article, I’ve summarized some of the things I’ve worked on to improve my tech talks.

    There is still a lot to do, and a lot to learn. But I hope that those simple tricks will help other newbies like me to improve their talks.

    If you are interested in learning from a great speaker, you should definitely watch Scott Hanselman’s “The Art of Speaking” course on Pluralsight.

    Do you have any other resources to share? The comment section is here for you!



    Source link

  • How to expose .NET Assembly Version via API endpoint routing | Code4IT


    Knowing the Assembly Version of the API you’ve deployed on an environment may be helpful for many reasons. We’re gonna see why, how to retrieve it, and how to expose it with Endpoint Routing (bye-bye Controllers and Actions!)

    Sometimes it can be useful to expose the version of the running Assembly in a .NET Core API endpoint: for example, when you want to know which version of your code is running in an environment, or to expose a simple endpoint that acts as a “minimal” health check.

    In this article, we’re going to see how to retrieve the assembly version at runtime using C#, then we will expose it under the root endpoint of a .NET Core API without creating an API Controller, and lastly we’ll see how to set the Assembly version with Visual Studio.

    How to get Assembly version

    To get the Assembly version, all we need is this snippet:

    Assembly assembly = Assembly.GetEntryAssembly();
    AssemblyInformationalVersionAttribute versionAttribute = assembly.GetCustomAttribute<AssemblyInformationalVersionAttribute>();
    string assemblyVersion = versionAttribute.InformationalVersion;
    

    Let’s break it down!

    The first step is to get the info about the running assembly:

    Assembly assembly = Assembly.GetEntryAssembly();
    

    The Assembly class is part of the System.Reflection namespace, so you have to declare the corresponding using statement.
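
    For completeness, that means adding this directive at the top of the file:

    using System.Reflection;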

    The AssemblyInformationalVersionAttribute attribute comes from the same namespace, and contains some info for the assembly manifest. You can get that info with the second line of the snippet:

    AssemblyInformationalVersionAttribute versionAttribute = assembly.GetCustomAttribute<AssemblyInformationalVersionAttribute>();
    

    Lastly, we need the string that represents the assembly version:

    string assemblyVersion = versionAttribute.InformationalVersion;
    

    If you want to read more about Assembly versioning in .NET, just head to the official documentation.

    How to expose an endpoint with Endpoint Routing

    Next, we need to expose that value using .NET Core API.

    Since we’re exposing only that value, we might not want to create a new Controller with a single Action: in this case, endpoint routing is the best choice!

    In the Startup.cs file, under the Configure method, we can define how the HTTP request pipeline is configured.

    By default, for ASP.NET Core APIs, you’ll see a section that allows the engine to map the Controllers to the endpoints:

    app.UseEndpoints(endpoints =>
    {
        endpoints.MapControllers();
    });
    

    In this section, we can configure some other endpoints.

    The easiest way is to map a single path to an endpoint and specify the returned content. We can do it by using the MapGet method, which accepts a string for the path pattern and an async Delegate for the execution:

    app.UseEndpoints(endpoints =>
    {
        endpoints.MapGet("/", async context =>
        {
            await context.Response.WriteAsync("Hi there!!");
        });
    
        endpoints.MapControllers();
    });
    

    In this way, we will receive the message Hi there!! every time we call the root of our API (because of the first parameter, /), and only when we use the GET HTTP verb (because of the MapGet method).

    Putting all together

    Now that we have everything in place, we can join the two parts and return the Assembly version at the root of our API.

    You could return the string exactly as it comes from the versionAttribute.InformationalVersion property we’ve seen before, or you could wrap it into an object.

    If you don’t want to specify a class for it, you can use an ExpandoObject instance and create new properties on the fly. Then, you have to serialize it into a string, and return it in the HTTP Response:

    endpoints.MapGet("/", async context =>
    {
        // get assembly version
        Assembly assembly = Assembly.GetEntryAssembly();
        AssemblyInformationalVersionAttribute versionAttribute = assembly.GetCustomAttribute<AssemblyInformationalVersionAttribute>();
        string assemblyVersion = versionAttribute.InformationalVersion;
    
        // create the dynamic object
        dynamic result = new ExpandoObject();
        result.version = assemblyVersion;
    
        // serialize the object
        string versionAsText = JsonSerializer.Serialize(result);
    
        // return it as a string
        await context.Response.WriteAsync(versionAsText);
    });
    

    That’s it!

    Of course, if you want only the version as a string without the dynamic object, you can simplify the MapGet method in this way:

    endpoints.MapGet("/", async context =>
    {
        var version = Assembly.GetEntryAssembly().GetCustomAttribute<AssemblyInformationalVersionAttribute>().InformationalVersion;
        await context.Response.WriteAsync(version);
    });
    

    But, for this example, let’s stay with the full object.

    Let’s try it: update Assembly version and retrieve it from API

    After tidying up the code, the UseEndpoints section will have this form:

    app.UseEndpoints(endpoints =>
    {
        endpoints.MapGet("/", async context =>
        {
            dynamic result = new ExpandoObject();
            result.version = Assembly.GetEntryAssembly().GetCustomAttribute<AssemblyInformationalVersionAttribute>().InformationalVersion;
            string versionAsText = JsonSerializer.Serialize(result);
            await context.Response.WriteAsync(versionAsText);
        });
    
        endpoints.MapControllers();
    });
    

    or, if you want to clean up your code, you could simplify it like this:

    app.UseEndpoints(endpoints =>
    {
        endpoints.WithAssemblyVersionOnRoot();
        endpoints.MapControllers();
    });
    

    WithAssemblyVersionOnRoot is an extension method I created to wrap that logic and make the UseEndpoints method cleaner. If you want to learn how to create extension methods with C#, and what some of the gotchas are, head to this article!
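
    The article doesn’t show the body of that extension method; here is a minimal sketch of how it might look, assuming it lives in a static class of your own (the class name below is illustrative, not taken from the original code):

    // Requires: System.Dynamic, System.Reflection, System.Text.Json,
    // Microsoft.AspNetCore.Builder, Microsoft.AspNetCore.Http, Microsoft.AspNetCore.Routing
    public static class EndpointRouteBuilderExtensions
    {
        public static void WithAssemblyVersionOnRoot(this IEndpointRouteBuilder endpoints)
        {
            // Same logic as above, just wrapped so UseEndpoints stays clean
            endpoints.MapGet("/", async context =>
            {
                dynamic result = new ExpandoObject();
                result.version = Assembly.GetEntryAssembly()
                    .GetCustomAttribute<AssemblyInformationalVersionAttribute>()
                    .InformationalVersion;

                string versionAsText = JsonSerializer.Serialize(result);
                await context.Response.WriteAsync(versionAsText);
            });
        }
    }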

    To see the result, open Visual Studio, select the API project and press Alt + Enter to navigate to the Project properties. Here, under the Package tab, define the version in the Package version section.

    Tab on Visual Studio used to define assembly version

    In this screen, you can set the value of the package that will be built.
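
    If you prefer editing the project file directly instead of going through the UI, you can set the same value in the .csproj via the Version property; a minimal sketch (the version number is just an example):

    <PropertyGroup>
      <Version>1.2.3</Version>
    </PropertyGroup>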

    To double-check that the version is correct, head to the bin folder and locate the exe related to your project: right-click on it, go to properties and to the details tab. Here you can see the details of that exe:

    Assembly version on exe properties

    Noticed the Product version? That’s exactly what we’ve set up on Visual Studio.

    So, now it’s time to run the application.

    Get back to Visual Studio, run the application, and navigate to the root of the API.

    Finally, we can enjoy the result!

    Assembly version as exposed by the API endpoint

    Quite simple, isn’t it?

    Wrapping up

    In this article, we’ve seen how to expose the version of the running assembly on a specific route of our API.

    This is useful to help you understand which version is currently running in an environment without accessing the CD pipelines to see which version has been deployed.

    Also, you can use this information as a kind of health check, since the data exposed are static and do not depend on any input or database status: the simplest match for getting info about the readiness of your application.

    What other info would you add to the exposed object? Let me know in the comment section 👇

    Happy coding!



    Source link

  • How to choose meaningful names? | Code4IT


    One of the fundamentals of clean code is: use meaningful names.

    But choosing meaningful names takes time!

    Time spent finding good names is time saved trying to figure out what you meant.

    How to approach it? Good names do not come on the first try!

    My suggestion is: at first, write down the code as it comes.

    public static string Get(string input)
    {
      char[] arr = new char[input.Length];
      int i = input.Length - 1;
      foreach (var e in input)
      {
        arr[i] = e;
        i--;
      }
    
      return new String(arr);
    }
    

    And then, when you have almost everything clear, choose better names for

    • classes
    • methods
    • parameters
    • variables
    • namespaces
    • libraries
    public static string GetReversedString(string originalString)
    {
      char[] reversedChars = new char[originalString.Length];
      int currentIndex = originalString.Length - 1;
      foreach (var currentChar in originalString)
      {
        reversedChars[currentIndex] = currentChar;
        currentIndex--;
      }
    
      return new String(reversedChars);
    }
    

    Probably, you’ll never reach perfection. Sad, but true.

    You might want to add some tests to your code, right? RIGHT??

    A good moment to choose better names is while writing tests: at that moment your tests act as clients of your production code, so if you find that the name of a method does not fully represent its meaning, or the parameter names are misleading, this is a good moment to improve them.
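
    As a minimal sketch (assuming an xUnit test project, and that the method above lives in a class called StringUtilities – both hypothetical names), a test like this immediately shows whether the names read well from the caller’s point of view:

    [Fact]
    public void GetReversedString_ReturnsCharactersInReverseOrder()
    {
        // Call the method exactly as a client would
        string reversed = StringUtilities.GetReversedString("hello");

        // The characters come back in reverse order
        Assert.Equal("olleh", reversed);
    }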

    And don’t forget about private variables and methods!

    So, what is “a good name”?

    A good name should express:

    • its meaning (what does the method do?)
    • its scope (for items in loops, even var i = 0 is acceptable, if the scope is small)
    • what it represents (originalString is, of course, the original string)

    👉 Let’s discuss it on Twitter or on the comment section below!

    🐧

    This article first appeared on Code4IT





    Source link

  • Azure Service Bus and C#


    Azure Service bus is a message broker generally used for sharing messages between applications. In this article, we’re gonna see an introduction to Azure Service Bus, and how to work with it with .NET and C#

    Azure Service Bus is a message broker that allows you to implement queues and pub-sub topics. It is incredibly common to use queues to manage the communication between microservices: it is a simple way to send messages between applications without binding them tightly.

    In this introduction, we’re going to learn the basics of Azure Service Bus: what it is, how to create a Bus and a Queue, how to send and receive messages on the Bus with C#, and more.

    This is the first part of a series about Azure Service Bus. We will see:

    1. An introduction to Azure Service Bus with C#
    2. Queues vs Topics
    3. Handling Azure Service Bus errors with .NET

    But, for now, let’s start from the basics.

    What is Azure Service Bus?

    Azure Service Bus is a complex structure that allows you to send content through a queue.

    As you may already know, a queue is… well, a queue! First in, first out!

    This means that the messages will be delivered in the same order as they were sent.

    Queue of penguins

    Why is using a queue becoming more and more common for scalable applications?
    Let’s consider this use case: you are developing a microservices-based application. With the common approach, communication occurs via HTTP: this means that

    • if the receiver is unreachable, the HTTP message is lost (unless you add some kind of retry policy)
    • if you have to scale out, you will need to add a traffic manager/load balancer to manage which instance must process the HTTP Request

    On the contrary, by using a queue,

    • if the receiver is down, the message stays in the queue until the receiver becomes available again
    • if you have to scale out, nothing changes, because the first instance that receives the message removes it from the queue, so you will not have multiple receivers that process the same message.

    How to create an Azure Service Bus instance

    It is really simple to create a new Service Bus on Azure!

    Just open the Azure Portal, head to the Service Bus section, and start creating a new resource.

    You will be prompted to choose which subscription will be linked to this new resource, and what the name of that resource will be.

    Lastly, you will have to choose the pricing tier to apply.

    Service Bus creation wizard on Azure UI

    There are 3 pricing tiers available:

    • Basic: its price depends on how many messages you send. At the time of writing, with the Basic tier you pay $0.05 per million messages sent.
    • Standard: similar to the Basic tier, but it allows you to have both Queues and Topics. You’ll see the difference between Queues and Topics in the next article
    • Premium: zone-redundant, with both Queues and Topics; of course, quite expensive

    So now, you can create the resource and see it directly on the browser.

    Policies and Connection Strings

    The first thing to do to connect to the Azure Service Bus is to create a Policy that allows you to perform specific operations on the Bus.

    By default, under the Shared access policies tab you’ll see a policy called RootManageSharedAccessKey: this is the default Policy that allows you to send and receive messages on the Bus.

    To get the connection string, click on that Policy and head to Primary Connection String:

    How to define Service Bus Policy via UI

    A connection string for the Service Bus looks like this:

    Endpoint=sb://c4it-testbus.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=my-secret-key
    

    Let’s break it down:

    The first part represents the Host name: this is the value you’ve set in the creation wizard, and the one you can see on the Overview tab:

    Service Bus instance Host name

    Then, you’ll see the SharedAccessKeyName field, which contains the name of the policy to use (in this case, RootManageSharedAccessKey).

    Then, we have the secret Key. If you select the Primary Connection String you will use the Primary Key; same if you use the Secondary Connection String.

    Keep that connection string handy, we’re gonna use it in a moment!

    Adding a queue

    Now that we have created the general infrastructure, we need to create a Queue. This is the core of the bus – all the messages pass through a queue.

    To create one, on the Azure site head to Entities > Queues and create a new queue.

    You will be prompted to add different values, but for now, we are only interested in defining its name.

    Write the name of the queue and click Create.

    Create queue panel on Azure UI

    Once you’ve created your queue (for this example, I’ve named it PizzaOrders), you’ll be able to see it in the Queues list and see its details.

    You can even define one or more policies for that specific queue, just as we did before: you’ll be able to generate a connection string similar to the one we’ve already analyzed. The only difference is that, here, you will see a new field in the connection string, EntityPath, whose value is the name of the related queue.

    So, a full connection string will have this form:

    Service Bus connection string breakdown
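
    In text form, it is the same connection string we saw before, plus the EntityPath field (the values here are placeholders):

    Endpoint=sb://c4it-testbus.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=my-secret-key;EntityPath=PizzaOrders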

    ServiceBusExplorer – an OSS UI for accessing Azure Service Bus

    How can you see what happens inside the Service Bus?

    You have two options: use the Service Bus Explorer tool directly on Azure:

    Service Bus Explorer on Azure UI

    Or use an external tool.

    I honestly prefer to use ServiceBusExplorer, a project that you can download from Chocolatey: this open-source tool allows you to see what is happening inside Azure Service Bus. Just insert your connection string and… voilà! You’re ready to go!

    ServiceBusExplorer project on Windows

    With this tool, you can see the status of all the queues, as well as send, read, and delete messages.

    If you want to save a connection, you have to open that tool as Administrator, otherwise, you won’t have enough rights to save it.

    How to send and receive messages with .NET 5

    To test it, we’re gonna create a simple project that manages pizza orders.
    A .NET 5 API application receives a list of pizzas to be ordered, then it creates a new message for every pizza received and sends each one to the PizzaOrders queue.

    With another application, we’re gonna receive the order of every single pizza by reading it from the same queue.

    For both applications, you’ll need to install the Azure.Messaging.ServiceBus NuGet package.
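
    For reference, you can add it from the command line with the .NET CLI (or use the NuGet Package Manager in Visual Studio):

    dotnet add package Azure.Messaging.ServiceBus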

    How to send messages on Azure Service Bus

    The API application that receives pizza orders from the clients is very simple: just a controller with a single action.

    [ApiController]
    [Route("[controller]")]
    public class PizzaOrderController : ControllerBase
    {
        private string ConnectionString = ""; //hidden
    
        private string QueueName = "PizzaOrders";
    
        [HttpPost]
        public async Task<IActionResult> CreateOrder(IEnumerable<PizzaOrder> orders)
        {
            await ProcessOrder(orders);
            return Ok();
        }
    }
    

    Nothing fancy: it just receives a list of PizzaOrder objects with this shape:

    public class PizzaOrder
    {
        public string Name { get; set; }
        public string[] Toppings { get; set; }
    }
    

    and processes them.
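
    For context, a request body for the CreateOrder action might look like this (the values are purely illustrative):

    [
      { "name": "Margherita", "toppings": [ "basil" ] },
      { "name": "Diavola", "toppings": [ "spicy salami" ] }
    ]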

    As you can imagine, the core of the application is the ProcessOrder method.

    private async Task ProcessOrder(IEnumerable<PizzaOrder> orders)
    {
        await using (ServiceBusClient client = new ServiceBusClient(ConnectionString))
        {
            ServiceBusSender sender = client.CreateSender(QueueName);
    
            foreach (var order in orders)
            {
                string jsonEntity = JsonSerializer.Serialize(order);
                ServiceBusMessage serializedContents = new ServiceBusMessage(jsonEntity);
                await sender.SendMessageAsync(serializedContents);
            }
        }
    }
    

    Let’s break it down.

    We need to create a client to connect to the Service Bus by using the specified Connection string:

    await using (ServiceBusClient client = new ServiceBusClient(ConnectionString))
    {
    }
    

    This client must be disposed after its use.

    Then, we need to create a ServiceBusSender object whose sole role is to send messages to a specific queue:

    ServiceBusSender sender = client.CreateSender(QueueName);
    

    Lastly, for every pizza order, we convert the object into a string and we send it as a message in the queue.

    // Serialize as JSON string
    string jsonEntity = JsonSerializer.Serialize(order);
    
    // Create the Service Bus message
    ServiceBusMessage serializedContents = new ServiceBusMessage(jsonEntity);
    
    // Send the message on the Bus
    await sender.SendMessageAsync(serializedContents);
    

    Hey! Never used async, await, and Task? If you want a short (but quite thorough) introduction to asynchronous programming, head to this article!

    And that’s it! Now the message is available on the PizzaOrders queue and can be received by any client subscribed to it.

    Pizza Order message as shown on ServiceBusExplorer

    Here I serialized the PizzaOrder into a JSON string. This is not mandatory: you can send messages in whichever format you want: JSON, XML, plain text, BinaryData… It’s up to you!

    Also, you can add lots of properties to each message. To read the full list, head to the ServiceBusMessage Class documentation.
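
    For instance, here are a couple of the properties you can set (illustrative values; see the documentation linked above for the full list):

    // Attach some metadata to the message before sending it
    ServiceBusMessage message = new ServiceBusMessage(jsonEntity)
    {
        ContentType = "application/json",
        MessageId = Guid.NewGuid().ToString()
    };
    message.ApplicationProperties.Add("source", "PizzaOrderController");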

    How to receive messages on Azure Service Bus

    Once we have the messages on the Bus, we need to read them.

    To demonstrate how to read messages from a queue using C#, I have created a simple Console App, named PizzaChef. The first thing to do, of course, is to install the Azure.Messaging.ServiceBus NuGet package.

    As usual, we need a ServiceBusClient object to access the resources on Azure Service Bus. Just as we did before, create a new Client in this way:

    ServiceBusClient serviceBusClient = new ServiceBusClient(ConnectionString);
    

    This time, instead of using a ServiceBusSender, we need to create a ServiceBusProcessor object which, of course, will process all the messages coming from the Queue. Since receiving a message on the queue is an asynchronous operation, we need to register an Event Handler both for when we receive the message and when we receive an error:

    ServiceBusProcessor _ordersProcessor = serviceBusClient.CreateProcessor(QueueName);
    _ordersProcessor.ProcessMessageAsync += PizzaItemMessageHandler;
    _ordersProcessor.ProcessErrorAsync += PizzaItemErrorHandler;
    

    For now, let’s add an empty implementation of both handlers.

    private Task PizzaItemErrorHandler(ProcessErrorEventArgs arg)
    {
        // left empty for now: error handling is covered in the last article of this series
        return Task.CompletedTask;
    }

    private async Task PizzaItemMessageHandler(ProcessMessageEventArgs args)
    {
        // left empty for now: we'll fill it in below
        await Task.CompletedTask;
    }
    

    Note: in this article I’ll implement only the PizzaItemMessageHandler method. The PizzaItemErrorHandler, however, must be at least declared, even if empty: you will get an exception if you forget about it. Anyways, we’ll implement it in the last article of this series, the one about error handling.

    To read the content received in the PizzaItemMessageHandler method, you must simply access the Message.Body property of the args parameter:

    string body = args.Message.Body.ToString();
    

    And, from here, you can do whatever you want with the body of the message. For instance, you can deserialize it into an object. Of course, you can reuse the PizzaOrder class we used before, or create a new class with more properties that is still compatible with the content of the message.

    public class ProcessedPizzaOrder
    {
        public string Name { get; set; }
        public string[] Toppings { get; set; }
    
        public override string ToString()
        {
            if (Toppings?.Any() == true)
                return $"Pizza {Name} with some toppings: {string.Join(',', Toppings)}";
            else
                return $"Pizza {Name} without toppings";
        }
    }
    

    Lastly, we need to mark the message as complete.

    await args.CompleteMessageAsync(args.Message);
    

    Now we can see the full example of the PizzaItemMessageHandler implementation:

    private async Task PizzaItemMessageHandler(ProcessMessageEventArgs args)
    {
        try
        {
            string body = args.Message.Body.ToString();
            Console.WriteLine("Received " + body);
    
            var processedPizza = JsonSerializer.Deserialize<ProcessedPizzaOrder>(body);
    
            Console.WriteLine($"Processing {processedPizza}");
    
            // complete the message: once completed, it is deleted from the queue
            await args.CompleteMessageAsync(args.Message);
        }
        catch (System.Exception ex)
        {
            // handle exception
        }
    }
    

    Does it work? NO.

    We forgot to start processing the incoming messages. It’s simple: in the Main method, right after the declaration of the ServiceBusProcessor object, we need to call StartProcessingAsync to start processing and, similarly, StopProcessingAsync to end it.

    Here’s the full example of the Main method: pay attention to the calls to Start and Stop processing.

    private static async Task Main(string[] args)
    {
        ServiceBusProcessor _ordersProcessor = null;
        try
        {
            ServiceBusClient serviceBusClient = new ServiceBusClient(ConnectionString);
    
            _ordersProcessor = serviceBusClient.CreateProcessor(QueueName);
            _ordersProcessor.ProcessMessageAsync += PizzaItemMessageHandler;
            _ordersProcessor.ProcessErrorAsync += PizzaItemErrorHandler;
            await _ordersProcessor.StartProcessingAsync();
    
            Console.WriteLine("Waiting for pizza orders");
            Console.ReadKey();
        }
        catch (Exception)
        {
            throw;
        }
        finally
        {
            if (_ordersProcessor != null)
                await _ordersProcessor.StopProcessingAsync();
        }
    }
    

    While the call to StartProcessingAsync is mandatory (otherwise, how would you receive messages?), the call to StopProcessingAsync, in a console application, can be skipped, since we are shutting down the application anyway. At least, I think so: I still haven’t found anything that says whether to call it or skip it. If you know anything, please contact me on Twitter or, even better, here in the comments section – so that we can keep the conversation going.

    Wrapping up

    This is part of what I’ve learned from my first approach with Azure Service Bus, and the use of Queues in microservice architectures.

    Is there anything else I should say? Have you ever used queues in your applications? As usual, feel free to drop a comment in the section below, or to contact me on Twitter.

    In the next article, we’re gonna explore another topic about Azure Service Bus, called… Topic! We will learn how to use them and what the difference between a Queue and a Topic is.

    But, for now, happy coding!



    Source link

  • Fluid Worlds and Layered Visions: Tomoya Okada’s Digital Craft

    Fluid Worlds and Layered Visions: Tomoya Okada’s Digital Craft


    Since 2020, I have been working as a frontend engineer in Japan.

    While my main role is in engineering, I also have a strong interest in design, and I have created numerous personal projects that reflect this passion.

    I began studying web development independently in 2020, and after working as a freelancer for about four years, I am now employed at a production company (as of 2025). Recently, I’ve also developed an interest in graphic design and 3D modeling. I hope to continue expanding my knowledge across a wide range of fields—not limited to the web.

    My strengths lie in creating interactive experiences, particularly in the area of animation. When it comes to design, I excel at producing visuals that are both aesthetically appealing and highly functional.

    Featured Work

    Faint Film

    This website showcases the photographs I have taken so far. I aimed to create a soft, layered visual experience where each image gently overlaps the others.

    One of the most distinctive features of the site is the view-mode toggle on the main page.
    It offers two modes: a refined grid layout and a slider layout. The transition animation between these modes is something I’m particularly proud of, as I believe it turned out beautifully.

    I paid close attention to implementing animations that complement—rather than overpower—the photographs, which are the true focus of the site. Thanks to this balanced approach, the project has received recognition and multiple awards from various design platforms. It’s a piece of work that highlights my strengths in design.

    Portfolio v7

    This is the website I released as Portfolio Version 7. It became a site defined by bold parallax effects and smooth transitions. There were several challenges in this project, one of which was creating visuals using fluid expressions and generative AI.

    The flower image on the homepage was generated with AI, and I explored how to present this image as beautifully as possible. That exploration led me to fluid expression. By blending monochrome and color visuals that appear and disappear within the fluid, I was able to capture both fragility and beauty.

    This site embodies many of my creative experiments and challenges.

    Layrid

    This site is a commemorative project celebrating the creation of 100 personal works. The name Layrid is a coined word combining layer and grid, reflecting the idea of organizing accumulated works within a structured grid.

    Key features include a smooth opening animation, seamless navigation between subpages, and dynamic transitions of the top images. By incorporating bold WebGL-driven transitions within an experience that maintains continuity, I was able to create a striking contrast between stillness and motion.

    Portfolio v8

    This is my latest portfolio as of September 2025. For this site, I focused on achieving both clarity and functionality. On the homepage, you can view several of my latest projects at a glance, presented with unique open/close animations and organized for easy exploration.

    In addition, the site includes an archive of personal projects as well as a Playground that showcases graphics created purely for fun. Through this site, you can explore the wide range of works I continue to produce and gain a deeper understanding of who I am.

    I’m especially proud of the details, such as the subtle click sounds and the color scheme that shifts depending on the time of day.

    What I value

    Enjoying my work

    This doesn’t mean playing around or having fun in a casual sense. I believe that great websites are born when creators approach each project with passion and genuine enjoyment. That’s why I value making the process enjoyable for myself first: it forms the foundation for meaningful, high-quality work.

    Ensuring smooth communication

    Clear and smooth communication allows us to align on direction and create with a strong shared understanding. We make it a point to engage thoughtfully and carefully with both clients and team members, ensuring we move forward without misunderstandings.

    Continuing to learn every day

    This industry evolves at a rapid pace, with trends constantly shifting. That’s why I make it a priority to keep learning and updating my knowledge every day, never standing still.

    Final Thoughts

    First of all, I would like to express my heartfelt gratitude to Codrops for giving me this opportunity. Although I’m a developer, I’m also eager to further explore the field of design. I believe that having a well-rounded skill set is essential when working in the web industry. I will continue to grow by strengthening my passions while also improving in areas where I still lack.

    I actively share various websites and insights on social media, so please feel free to check them out.

    Thank you very much for reading to the end.



    Source link