Tag: Process

  • Setting the Stage: Inside the Process of Bringing Christian Fleming’s Work to Life in Print, Web, and 3D

    Setting the Stage: Inside the Process of Bringing Christian Fleming’s Work to Life in Print, Web, and 3D


    When two creatives collaborate, the design process becomes a shared stage — each bringing their own strengths, perspectives, and instincts. This project united designer/art director Artem Shcherban and 3D/motion designer Andrew Moskvin to help New York–based scenographer and costume designer Christian Fleming completely reimagine how his work is presented.

    What began as a portfolio refresh evolved into a cohesive visual system: a rigorously minimal print catalog, a single-page website concept, and a cinematic 3D visualization. Together, Artem and Andrew shaped an experience that distilled Christian’s theatrical sensibility into clear, atmospheric design across both physical and digital formats.

    From here, Artem picks up the story, walking us through how he approached the portfolio’s structure, the visual rules it would live by, and the thinking that shaped both its print and on-screen presence.

    Starting the Design Conversation

    Christian Fleming is a prominent designer and director based in New York City who works with theaters around the world creating visual spaces for performances. He approached me with a challenge: to update and rethink his portfolio so that it would be easy to send out to theater directors and curators, specifically in print format.

    Christian had a pretty clear understanding of what he wanted to show and how it should look: rigid Scandinavian minimalism, extreme clarity of composition, a minimum of elements and a presentation that would be understandable to absolutely anyone – regardless of age, profession or context.

    It was important to create a system that would:

    • be updated regularly (approximately every 3 weeks),
    • adapt to new projects,
    • and at the same time remain visually and semantically stable.

    There also needed to be an “About Christian” section in the structure, but this too had to fit within a strict framework of visual language.

    Designing a Flexible Visual System

    I started by carefully analyzing how Christian works. His primary language is visual. He thinks in images, light, texture and composition. So it was important to retain a sense of air and rhythm, but build a clear modular structure that he could confidently work with on his own.

    We came up with a simple adaptive system:

    • it adapts easily to images of different formats,
    • it scales for everything from PDFs to presentations,
    • and it can be used both digitally and offline.

    In the first stages, we tried several structures. However, Christian still felt that something was missing in the layout – the visuals and the logic were in conflict. We discussed which designs he wanted to show openly and which he didn’t. Some works had received major reviews and carried real weight, but could not be shown in full detail.

    The solution was to divide them into two meaningful blocks:

    “Selected Projects”, presented in full, and “Archival Projects”, with a focus on awards, reviews, and context. This approach preserved both structure and tone. The layout became balanced – and Christian responded to it immediately.

    After settling on the structure and understanding how it would work, we began creating the design itself and populating it with content. It was important from the start to teach Christian to add content on his own, as there were a lot of projects and they change quite often.

    One of the key strengths of our work is its versatility. Not only could the final file be emailed, but it could also be used as a print publication. This gave Christian the opportunity to hand out physical copies at meetings, premieres and professional events where tactility and attention to detail are important.

    Christian liked the first result, both in the way the system was laid out and the way I approached the task. Then I suggested: let’s update the website as well.

    Translating the Portfolio to a Single-Page Site

    This phase proved to be the most interesting, and the most challenging.

    Although the website looks simple, it took almost 3 months to build. From the very beginning, Christian and I tried to understand why he needed to update the site and how it should work together with the already established portfolio system.

    The main challenge was to show the visual side of his projects. Not just text or logos, but the atmosphere, the light, the costumes, the feeling of the scene.

    One of the restrictions Christian set was to make the site as concise as possible: ideally a single page, without a large number of pages or unnecessary transitions. It had to be simple, clear and intuitive, yet still user-friendly and informative. This was a real challenge, given the amount of content that needed to be presented.

    Designing with Stage Logic

    One of the key constraints that shaped the work on the site was Christian’s wish: no multiple pages. Everything had to be compact, coherent, clear and yet rich. This posed a special challenge: we had to accommodate a fairly large amount of information without overwhelming the viewer.

    I proposed a solution built on a theatrical metaphor: as in a stage blackout, the screen darkens and a new space appears. Each project becomes its own scene, with the user as a spectator — never leaving their seat, never clicking through menus. Navigation flows in smooth, seamless transitions, keeping attention focused and the emotional rhythm intact.

    Christian liked the idea, but it immediately raised a new challenge: how to fit everything important on one screen:

    • a short text about him,
    • social media links and a resume,
    • the title and description of the production,
    • and, if necessary, reviews.

    At the same time, the main visual content – photos and videos – had to remain in the center of attention and not overlap with the interface.

    Solving the Composition Puzzle

    We explored several layouts — from centered titles and multi-level disclosures to diagonal structures and thumbnail navigation. Some looked promising, but they lacked the sense of theatrical rhythm we wanted. The layouts felt crowded, with too much design and not enough air.

    The breakthrough came when we shifted focus from pure visuals to structural logic. We reduced each project view to four key elements: minimal information about Christian, the production title with the director’s name, a review (when available), and a button to select the project. Giving each element its own space created a layout that was both clear and flexible, without overloading the screen.

    Refining Through Iteration

    As with the book, the site went through several iterations:

    • In the first prototype, the central layout quickly proved unworkable – long play titles and director names didn’t fit on the screen, especially in the mobile version. We were losing scalability and not using all the available space.
    • In the second version, we moved the information blocks upwards – this gave us a logical hierarchy and kept the center of the screen uncluttered. The visual focus remained on the photos, and the text did not interfere with the perception of the scenography.
    • In the third round, the idea of “titles” appeared – a clear typographic structure where headings are distinguished only by weight, without changing the typeface. This was in keeping with the overall minimalist aesthetic, and Christian specifically mentioned that he didn’t want to use more than one font or style unless necessary.

    We also decided to stylistically separate the reviews from the main description. We italicized them and put them just below. This made it clear what belonged to the author and what was a response to the author’s work.

    Bringing Theatrical Flow to Navigation

    The last open issue was navigation between projects. I proposed two scenarios:

    1. Navigating with arrows, as if the viewer were leafing through the play scene by scene.
    2. A clickable menu with a list of works for those who want to go directly.

    Christian was concerned about one question: wouldn’t the user lose their bearings if they didn’t see the list all the time? We discussed this and came to the conclusion that most visitors don’t come to the site to “look for a specific work”. They come to feel the atmosphere and “experience” his theater. So the basic scenario is sequential browsing, like moving through a play. The menu is available, but stays out of the way – it should not break the sense of involvement.

    What We Learned About Theatrical Design

    We didn’t build just a website. We built an experience. It is not a digital storefront, but a space that reflects the way Christian works. He is an artist who thinks in the rhythm of the stage, and it was essential not to break that rhythm.

    The result is a place where the viewer isn’t distracted; they inhabit it. Navigation, structure, and interface quietly support this experience. Much of that comes from Christian’s clear and thoughtful feedback, which shaped the process at every step. This project is a reminder that even work which appears simple is defined by countless small decisions, each influencing not only how it functions but also the mood it creates from the very beginning.

    Extending the Design from Screen to Print

    Once the site was complete, a new question emerged: how should this work be presented in the most meaningful way?

    The digital format was only part of the answer. We also envisioned a printed edition — something that could be mailed or handed over in person as a physical object. In the theater world, where visual presence and tactility carry as much weight as the idea itself, this felt essential.

    We developed a set of layouts, but bringing the catalog to life as intended proved slow. Christian’s schedule with his theater work left little time to finalize the print production. We needed an alternative that could convey not only the design but also the atmosphere and weight of the finished book.

    Turning the Book into a Cinematic Object

    At this stage, 3D and motion designer Andrew Moskvin joined the project. We shared the brief with him — not just to present the catalog, but to embed it within the theatrical aesthetic, preserving the play of light, texture, air, and mood that defined the website.

    Andrew was immediately enthusiastic. After a quick call, he dove into the process. I assembled all the pages of the print version we had, and together we discussed storyboards, perspectives, atmosphere, possible scenes, and materials that could deepen the experience. The goal was more than simply showing the layout — we wanted cinematic shots where every fold of fabric and every spot of light served a single dramaturgy.

    The result exceeded expectations. Andrew didn’t just recreate the printed version; he brought it to life. His work was subtle and precise, with a deep respect for context. He captured not only the mood but also the intent behind each spread, giving the book weight, materiality, and presence — the kind we imagined holding in our hands and leafing through in person.

    Andrew will share his development process below.

    Breaking Down the 3D Process

    The Concept

    At the very start, I wanted my work to blend seamlessly with the ideas that had already been established. Christian Fleming is a scenographer and costume designer, so the visual system needed to reflect his world. Since the project was deeply rooted in the theatrical aesthetic, my 3D work had to naturally blend into that atmosphere. Artem’s direction played a key role in shaping the unique look envisioned by Christian Fleming — rich with stage-like presence, bold compositions, and intentional use of space. My task was to ensure that the 3D elements not only supported this world, but also felt like an organic extension of it — capturing the same mood, lighting nuances, and visual rhythm that define a theatrical setting.

    The Tools

    For the entire 3D pipeline, I worked in:

    1. Cinema 4D for modeling and scene setup
    2. Redshift for rendering 
    3. After Effects for compositing 
    4. Photoshop for color correcting static images

    Modeling the Book

    The book was modeled entirely from scratch. Artem and I discussed the form and proportions, and after several iterations, we finalized the design direction. I focused on the small details that bring realism: the curvature of the hardcover spine, beveled edges, the separation between the cover and pages, and the layered structure of the paper block. I also modeled the cloth texture wrapping the spine, giving the book a tactile, fabric-like look. The geometry was built to hold up in close-up shots and fit the theatrical lighting.

    Lighting with a Theatrical Eye

    Lighting was one of the most important parts of this process. I wanted the scenes to feel theatrical — as if the objects were placed on a stage under carefully controlled spotlights. Using a combination of area lights and spotlights in Redshift, I shaped the lighting to create soft gradients and shadows on the surfaces. The setup was designed to emphasize the geometry without flattening it, always preserving depth and direction. A subtle backlight highlight played a key role in defining the edges and enhancing the overall form.

    I think I spent more time on lighting than on modeling, since lighting has always been more experimental for me — even in product scenes.

    One small but impactful trick I always use is setting up a separate HDRI map just for reflections. I disable its contribution to diffuse lighting by setting the diffuse value to 0, while keeping reflections at 1. This allows the reflections to pop more without affecting the overall lighting of the scene. It’s a simple setup, but it gives you way more control over how materials respond — especially in stylized or highly art-directed environments.

    Building the Materials

    When I was creating the materials, I noticed that Artem had used a checkerboard texture for the cover. So I thought — why not take that idea further and implement it directly into the material? I added a subtle bump using a checker texture on the sides and front part of the book.

    I also experimented quite a bit with displacement. Initially, I had the idea to make the title metallic, but it felt too predictable. So instead, I went with a white title featuring embossed details, while keeping the checker bump texture underneath.

    This actually ties back to the modeling process — for the displacement to work properly, the geometry had to be evenly dense and ready for subdivision. 

    I created a mask in Photoshop and applied a procedural Gaussian blur using a Smart Object. Without the blur, the displacement looked harsh and unrefined — even a slight blur made a noticeable difference.

    The main challenge with using white, as always, was avoiding blown-out highlights. I had to carefully balance the lighting and tweak the material settings to make the title clean and visible without overexposing it.

    One of the more unusual challenges in this project was animating the page slide and making the pages differ. I didn’t want the pages to feel too repetitive, but I also didn’t want to create dozens of individual materials for each page. To find a balance, I created two different materials for two pages and randomized them inside the Cloner. It was a bit of a workaround — mostly due to limitations inside the Shader Switch node — but it worked well enough to create the illusion of variety without significantly increasing the complexity of the setup.

    There’s a really useful node in Redshift called Color User Data — especially when working with the MoGraph system to trigger object index values. One of the strangest (and probably least intuitive) things I did in this setup was using a Change Range node to remap those index values properly according to the number of textures I had. With that in place, I built a system that used an index to mix between all the textures inside a Shader Switch node. This allowed me to get true variation across the pages without manually assigning materials to each one.

    You might’ve noticed that the pages look a bit too bright for a real-world scenario — and that was actually a deliberate choice. I often use a trick that helps me art-direct material brightness independently of the scene’s lighting. The key node here is Color Correct Node.

    Inside it, there’s a parameter called Level. If you set it higher than 1, it increases the overall brightness of the texture output — without affecting shadows or highlights too aggressively. This also works in reverse: if your texture has areas that are too bright (like pure white), lowering the Level value below 1 will tone it down without needing to modify the source texture.

    It’s a simple trick, but incredibly useful when you want fine control over how materials react in stylized or theatrical lighting setups.

    The red cloth material I used throughout the scene is another interesting part of the project. I wanted it to have a strong tactile feel — something that looks thick, textured, and physically present. To achieve that, I relied heavily on geometry. I used a Redshift Object Tag with Subdivision (under the Geometry tab) enabled to add more detail where it was needed. This helped the cloth catch light properly and hold up in close-up shots.

    For the translucent look, I originally experimented with Subsurface Scattering, but it didn’t give me the control I wanted. So instead, I used an Opacity setup driven by Ramp and Change Range nodes. That gave me just enough falloff and variation to fake the look of light passing through thinner areas of the fabric — and in the end, it worked surprisingly well.

    Animating the Pages

    This was by far the most experimental part of the project for me. The amount of improvisation — and the complete lack of confidence in what the next frame would be — made the process both fun and flexible.

    What you’re about to see might look a bit chaotic, so let me quickly walk you through how it all started.

    The simulation started with a subject — in our case, a page. It had to have the proper form, and by that I mean the right topology. Specifically, it needed to consist only of horizontal segments; otherwise, it would bend unevenly under the forces present in the scene. (And yes, I did try versions with even polygons — it got messy.)

    I set up all the pages in a Cloner so I could easily adjust any parameters I needed, and added a bit of randomness using a Random Effector.

    In the video, you can see a plane on the side that connects to the pages — that was actually the first idea I had when thinking about how to run the simulation. The plane has a Connect tag that links all the pages to it, so when it rotates, they all follow along.

    I won’t go into all the force settings — most of them were experimental, and animations like this always require a bit of creative adjustment.

    The main force was wind. The pages did want to slide just from the plane with the Connect tag, but I needed to give them an extra push from underneath — that’s where wind came in handy.

    I also used a Field Force to move the pages mid-air, from the center outward to the other side.

    Probably the most important part was how I triggered the “Mix Animation.” I used a Vertex Map tag on the Cloner to paint a map using a Field, which then drove the Mix Animation parameter in the Cloth tag. This setup made the pages activate one by one, creating a natural, finger-like sliding motion, as seen in the video.

    Postprocessing

    I didn’t go too heavy on post-processing, but there’s one plugin I have to mention — Deep Glow. It gives amazing results. By tweaking the threshold, you can make it react only to the brightest areas, which creates a super clean, glowing effect.

    The Final Theatrical Ecosystem

    In the end, Christian was delighted with the outcome. Together we had built more than a portfolio — we had created a cohesive theatrical ecosystem. It moved fluidly from digital performance to printed object, from live stage to interface, and from emotion to technology.

    The experience is pared back to its essence: no superfluous effects, no unnecessary clicks, nothing to pull focus. What remains is what matters most — the work itself, framed in a way that stays quietly behind the scenes yet comes fully alive in the viewer’s hands and on their screen.



    Source link

  • Exploring the Process of Building a Procedural 3D Kitchen Designer with Three.js

    Exploring the Process of Building a Procedural 3D Kitchen Designer with Three.js


    Back in November 2024, I shared a post on X about a tool I was building to help visualize kitchen remodels. The response from the Three.js community was overwhelmingly positive. The demo showed how procedural rendering techniques—often used in games—can be applied to real-world use cases like designing and rendering an entire kitchen in under 60 seconds.

    In this article, I’ll walk through the process and thinking behind building this kind of procedural 3D kitchen design tool using vanilla Three.js and TypeScript—from drawing walls and defining cabinet segments to auto-generating full kitchen layouts. Along the way, I’ll share key technical choices, lessons learned, and ideas for where this could evolve next.

    You can try out an interactive demo of the latest version here: https://kitchen-designer-demo.vercel.app/. (Tip: Press the “/” key to toggle between 2D and 3D views.)

    Designing Room Layouts with Walls

    Example of user drawing a simple room shape using the built-in wall module.

    To initiate our project, we begin with the wall drawing module. At a high level, this is akin to Figma’s pen tool, where the user can add one line segment at a time until a closed—or open-ended—polygon is complete on an infinite 2D canvas. In our build, each line segment represents a single wall as a 2D plane from coordinate A to coordinate B, while the complete polygon outlines the perimeter envelope of a room.

    1. We begin by capturing the [X, Z] coordinates (with Y oriented upwards) of the user’s initial click on the infinite floor plane. This 2D point is obtained via Three.js’s built-in raycaster for intersection detection, establishing Point A.
    2. As the user hovers the cursor over a new spot on the floor, we apply the same intersection logic to determine a temporary Point B. During this movement, a preview line segment appears, connecting the fixed Point A to the dynamic Point B for visual feedback.
    3. Upon the user’s second click to confirm Point B, we append the line segment (defined by Points A and B) to an array of segments. The former Point B instantly becomes the new Point A, allowing us to continue the drawing process with additional line segments.

    Here is a simplified code snippet demonstrating a basic 2D pen-draw tool using Three.js:

    import * as THREE from 'three';
    
    const scene = new THREE.Scene();
    const camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 1000);
    camera.position.set(0, 5, 10); // Position camera above the floor looking down
    camera.lookAt(0, 0, 0);
    
    const renderer = new THREE.WebGLRenderer();
    renderer.setSize(window.innerWidth, window.innerHeight);
    document.body.appendChild(renderer.domElement);
    
    // Create an infinite floor plane for raycasting
    const floorGeometry = new THREE.PlaneGeometry(100, 100);
    const floorMaterial = new THREE.MeshBasicMaterial({ color: 0xcccccc, side: THREE.DoubleSide });
    const floor = new THREE.Mesh(floorGeometry, floorMaterial);
    floor.rotation.x = -Math.PI / 2; // Lay flat on XZ plane
    scene.add(floor);
    
    const raycaster = new THREE.Raycaster();
    const mouse = new THREE.Vector2();
    let points: THREE.Vector3[] = []; // i.e. wall endpoints
    let tempLine: THREE.Line | null = null;
    const walls: THREE.Line[] = [];
    
    function getFloorIntersection(event: MouseEvent): THREE.Vector3 | null {
      mouse.x = (event.clientX / window.innerWidth) * 2 - 1;
      mouse.y = -(event.clientY / window.innerHeight) * 2 + 1;
      raycaster.setFromCamera(mouse, camera);
      const intersects = raycaster.intersectObject(floor);
      if (intersects.length > 0) {
        // Round to simplify coordinates (optional for cleaner drawing)
        const point = intersects[0].point;
        point.x = Math.round(point.x);
        point.z = Math.round(point.z);
        point.y = 0; // Ensure on floor plane
        return point;
      }
      return null;
    }
    
    // Update temporary line preview
    function onMouseMove(event: MouseEvent) {
      const point = getFloorIntersection(event);
      if (point && points.length > 0) {
        // Remove old temp line if exists
        if (tempLine) {
          scene.remove(tempLine);
          tempLine = null;
        }
        // Create new temp line from last point to current hover
        const geometry = new THREE.BufferGeometry().setFromPoints([points[points.length - 1], point]);
        const material = new THREE.LineBasicMaterial({ color: 0x0000ff }); // Blue for temp
        tempLine = new THREE.Line(geometry, material);
        scene.add(tempLine);
      }
    }
    
    // Add a new point and draw permanent wall segment
    function onMouseDown(event: MouseEvent) {
      if (event.button !== 0) return; // Left click only
      const point = getFloorIntersection(event);
      if (point) {
        points.push(point);
        if (points.length > 1) {
          // Draw permanent wall line from previous to current point
          const geometry = new THREE.BufferGeometry().setFromPoints([points[points.length - 2], points[points.length - 1]]);
          const material = new THREE.LineBasicMaterial({ color: 0xff0000 }); // Red for permanent
          const wall = new THREE.Line(geometry, material);
          scene.add(wall);
          walls.push(wall);
        }
        // Remove temp line after click
        if (tempLine) {
          scene.remove(tempLine);
          tempLine = null;
        }
      }
    }
    
    // Add event listeners
    window.addEventListener('mousemove', onMouseMove);
    window.addEventListener('mousedown', onMouseDown);
    
    // Animation loop
    function animate() {
      requestAnimationFrame(animate);
      renderer.render(scene, camera);
    }
    animate();

    The above code snippet is a very basic 2D pen tool, and yet this information is enough to generate an entire room instance. For reference: not only does each line segment represent a wall (2D plane), but the set of accumulated points can also be used to auto-generate the room’s floor mesh, and likewise the ceiling mesh (the inverse of the floor mesh).
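
    For illustration, a minimal sketch of turning those accumulated points into a floor mesh with THREE.Shape and ShapeGeometry could look like this (a sketch under the assumption that `points` is the array collected by the pen tool above, not the exact implementation):

    // Hypothetical helper: build a floor mesh from the wall endpoints drawn above
    function createFloorMesh(points: THREE.Vector3[]): THREE.Mesh {
      // Trace the room outline in 2D using the [X, Z] coordinates of each point
      const shape = new THREE.Shape();
      shape.moveTo(points[0].x, points[0].z);
      for (let i = 1; i < points.length; i++) {
        shape.lineTo(points[i].x, points[i].z);
      }
      shape.closePath();
    
      const geometry = new THREE.ShapeGeometry(shape);
      const material = new THREE.MeshBasicMaterial({ color: 0xdddddd, side: THREE.DoubleSide });
      const floorMesh = new THREE.Mesh(geometry, material);
      floorMesh.rotation.x = Math.PI / 2; // ShapeGeometry lies in the XY plane; lay it flat on XZ
      return floorMesh;
    }
    
    // Usage: scene.add(createFloorMesh(points));
    // A ceiling could clone this mesh and set position.y to the wall height (e.g. 96 inches).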

    In order to view the planes representing the walls in 3D, one can transform each THREE.Line into a custom Wall class object, which contains both a line (for orthogonal 2D “floor plan” view) and a 2D inward-facing plane (for perspective 3D “room” view). To build this class:

    class Wall extends THREE.Group {
      constructor(length: number, height: number = 96, thickness: number = 4) {
        super();
    
        // 2D line for top view, along the x-axis
        const lineGeometry = new THREE.BufferGeometry().setFromPoints([
          new THREE.Vector3(0, 0, 0),
          new THREE.Vector3(length, 0, 0),
        ]);
        const lineMaterial = new THREE.LineBasicMaterial({ color: 0xff0000 });
        const line = new THREE.Line(lineGeometry, lineMaterial);
        this.add(line);
    
        // 3D wall as a box for thickness
        const wallGeometry = new THREE.BoxGeometry(length, height, thickness);
        const wallMaterial = new THREE.MeshBasicMaterial({ color: 0xaaaaaa, side: THREE.DoubleSide });
        const wall = new THREE.Mesh(wallGeometry, wallMaterial);
        wall.position.set(length / 2, height / 2, 0);
        this.add(wall);
      }
    }

    We can now update the wall draw module to utilize this newly created Wall object:

    // Update our variables
    let tempWall: Wall | null = null;
    const walls: Wall[] = [];
    
    // Replace line creation in onMouseDown with
    if (points.length > 1) {
      const start = points[points.length - 2];
      const end = points[points.length - 1];
      const direction = end.clone().sub(start);
      const length = direction.length();
      const wall = new Wall(length);
      wall.position.copy(start);
      wall.rotation.y = Math.atan2(direction.z, direction.x); // Align along direction (assuming CCW for inward facing)
      scene.add(wall);
      walls.push(wall);
    }
    

    Upon adding the floor and ceiling meshes, we can further transform our wall module into a room generation module. To recap what we have just created: by adding walls one by one, we have given the user the ability to create full rooms with walls, floors, and ceilings—all of which can be adjusted later in the scene.

    User dragging out the wall in 3D perspective camera-view.

    Generating Cabinets with Procedural Modeling

    Our cabinet-related logic can consist of countertops, base cabinets, and wall cabinets.

    Rather than taking several minutes to add the cabinets on a case-by-case basis—for example, like with IKEA’s 3D kitchen builder—it’s possible to add all the cabinets at once via a single user action. One method to employ here is to allow the user to draw high-level cabinet line segments, in the same manner as the wall draw module.

    In this module, each cabinet segment will transform into a linear row of base and wall cabinets, along with a parametrically generated countertop mesh on top of the base cabinets. As the user creates the segments, we can automatically populate each line segment with pre-made 3D cabinet meshes created in modeling software like Blender. Ultimately, each cabinet’s width, depth, and height parameters will be fixed, while the width of the last cabinet can be dynamic to fill the remaining space. We use a cabinet filler piece mesh here—a regular plank, with its scale-X parameter stretched or compressed as needed.

    Creating the Cabinet Line Segments

    User can make a half-peninsula shape by dragging the cabinetry line segments alongside the walls, then in free-space.

    Here we will construct a dedicated cabinet module, with the aforementioned cabinet line segment logic. This process is very similar to the wall drawing mechanism, where users can draw straight lines on the floor plane using mouse clicks to define both start and end points. Unlike walls, which can be represented by simple thin lines, cabinet line segments need to account for a standard depth of 24 inches to represent the base cabinets’ footprint. These segments do not require closing-polygon logic, as they can be standalone rows or L-shapes, as is common in most kitchen layouts.

    We can further improve the user experience by incorporating snapping functionality, where the endpoints of a cabinet line segment automatically align to nearby wall endpoints or wall intersections, if within a certain threshold (e.g., 4 inches). This ensures cabinets fit snugly against walls without requiring manual precision. For simplicity, we’ll outline the snapping logic in code but focus on the core drawing functionality.

    We can start by defining the CabinetSegment class. Like the walls, this should be its own class, as we will later add the auto-populating 3D cabinet models.

    class CabinetSegment extends THREE.Group {
      public length: number;
    
      constructor(length: number, height: number = 96, depth: number = 24, color: number = 0xff0000) {
        super();
        this.length = length;
        const geometry = new THREE.BoxGeometry(length, height, depth);
        const material = new THREE.MeshBasicMaterial({ color, wireframe: true });
        const box = new THREE.Mesh(geometry, material);
        box.position.set(length / 2, height / 2, depth / 2); // Shift so depth spans 0 to depth (inward)
        this.add(box);
      }
    }

    Once we have the cabinet segment, we can use it in a manner very similar to the wall line segments:

    let cabinetPoints: THREE.Vector3[] = [];
    let tempCabinet: CabinetSegment | null = null;
    const cabinetSegments: CabinetSegment[] = [];
    const CABINET_DEPTH = 24; // everything in inches
    const CABINET_SEGMENT_HEIGHT = 96; // i.e. both wall & base cabinets -> group should extend to ceiling
    const SNAPPING_DISTANCE = 4;
    
    function getSnappedPoint(point: THREE.Vector3): THREE.Vector3 {
      // Simple snapping: check against existing wall points (wallPoints array from wall module)
      for (const wallPoint of wallPoints) {
        if (point.distanceTo(wallPoint) < SNAPPING_DISTANCE) return wallPoint;
      }
      return point;
    }
    
    // Update temporary cabinet preview
    function onMouseMoveCabinet(event: MouseEvent) {
      const point = getFloorIntersection(event);
      if (point && cabinetPoints.length > 0) {
        const snappedPoint = getSnappedPoint(point);
        if (tempCabinet) {
          scene.remove(tempCabinet);
          tempCabinet = null;
        }
        const start = cabinetPoints[cabinetPoints.length - 1];
        const direction = snappedPoint.clone().sub(start);
        const length = direction.length();
        if (length > 0) {
          tempCabinet = new CabinetSegment(length, CABINET_SEGMENT_HEIGHT, CABINET_DEPTH, 0x0000ff); // Blue for temp
          tempCabinet.position.copy(start);
          tempCabinet.rotation.y = Math.atan2(direction.z, direction.x);
          scene.add(tempCabinet);
        }
      }
    }
    
    // Add a new point and draw permanent cabinet segment
    function onMouseDownCabinet(event: MouseEvent) {
      if (event.button !== 0) return;
      const point = getFloorIntersection(event);
      if (point) {
        const snappedPoint = getSnappedPoint(point);
        cabinetPoints.push(snappedPoint);
        if (cabinetPoints.length > 1) {
          const start = cabinetPoints[cabinetPoints.length - 2];
          const end = cabinetPoints[cabinetPoints.length - 1];
          const direction = end.clone().sub(start);
          const length = direction.length();
          if (length > 0) {
            const segment = new CabinetSegment(length, CABINET_SEGMENT_HEIGHT, CABINET_DEPTH, 0xff0000); // Red for permanent
            segment.position.copy(start);
            segment.rotation.y = Math.atan2(direction.z, direction.x);
            scene.add(segment);
            cabinetSegments.push(segment);
          }
        }
        if (tempCabinet) {
          scene.remove(tempCabinet);
          tempCabinet = null;
        }
      }
    }
    
    // Add separate event listeners for cabinet mode (e.g., toggled via UI button)
    window.addEventListener('mousemove', onMouseMoveCabinet);
    window.addEventListener('mousedown', onMouseDownCabinet);

    Auto-Populating the Line Segments with Live Cabinet Models

    Here we fill 2 line-segments with 3D cabinet models (base & wall), and countertop meshes.

    Once the cabinet line segments are defined, we can procedurally populate them with detailed components. This involves dividing each segment vertically into three layers: base cabinets at the bottom, countertops in the middle, and wall cabinets above. For the base and wall cabinets, we’ll use an optimization function to divide the segment’s length into standard widths (preferring 30-inch cabinets), with any remainder filled using the filler piece mentioned above. Countertops are even simpler—they form a single continuous slab stretching the full length of the segment.

    The base cabinets are set to 24 inches deep and 34.5 inches high. Countertops add 1.5 inches in height and extend to 25.5 inches deep (including a 1.5-inch overhang). Wall cabinets start at 54 inches high (18 inches above the countertop), measure 12 inches deep, and are 30 inches tall. After generating these placeholder bounding boxes, we can replace them with preloaded 3D models from Blender using a loading function (e.g., via GLTFLoader).

    // Constants in inches
    const BASE_HEIGHT = 34.5;
    const COUNTER_HEIGHT = 1.5;
    const WALL_HEIGHT = 30;
    const WALL_START_Y = 54;
    const BASE_DEPTH = 24;
    const COUNTER_DEPTH = 25.5;
    const WALL_DEPTH = 12;
    
    const DEFAULT_MODEL_WIDTH = 30;
    
    // Filler-piece information
    const FILLER_PIECE_FALLBACK_PATH = 'models/filler_piece.glb'
    const FILLER_PIECE_WIDTH = 3;
    const FILLER_PIECE_HEIGHT = 12;
    const FILLER_PIECE_DEPTH = 24;

    To handle individual cabinets, we’ll create a simple Cabinet class that manages the placeholder and model loading.

    import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';
    
    const loader = new GLTFLoader();
    
    class Cabinet extends THREE.Group {
      constructor(width: number, height: number, depth: number, modelPath: string, color: number) {
        super();
    
        // Placeholder box
        const geometry = new THREE.BoxGeometry(width, height, depth);
        const material = new THREE.MeshBasicMaterial({ color });
        const placeholder = new THREE.Mesh(geometry, material);
        this.add(placeholder);
    
    
        // Load and replace with model async
    
        // Case: Non-standard width -> use filler piece
        if (width < DEFAULT_MODEL_WIDTH) {
          loader.load(FILLER_PIECE_FALLBACK_PATH, (gltf) => {
            const model = gltf.scene;
            model.scale.set(
              width / FILLER_PIECE_WIDTH,
              height / FILLER_PIECE_HEIGHT,
              depth / FILLER_PIECE_DEPTH,
            );
            this.add(model);
            this.remove(placeholder);
          });
          return; // Use only the filler piece; skip loading the standard cabinet model
        }
    
        loader.load(modelPath, (gltf) => {
          const model = gltf.scene;
          model.scale.set(width / DEFAULT_MODEL_WIDTH, 1, 1); // Scale width
          this.add(model);
          this.remove(placeholder);
        });
      }
    }

    Then, we can add a populate method to the existing CabinetSegment class:

    function splitIntoCabinets(width: number): number[] {
      const cabinets = [];
      // Preferred width
      while (width >= DEFAULT_MODEL_WIDTH) {
        cabinets.push(DEFAULT_MODEL_WIDTH);
        width -= DEFAULT_MODEL_WIDTH;
      }
      if (width > 0) {
        cabinets.push(width); // Custom empty slot
      }
      return cabinets;
    }
    
    class CabinetSegment extends THREE.Group {
      // ... (existing constructor and properties)
    
      populate() {
        // Remove placeholder line and box
        while (this.children.length > 0) {
          this.remove(this.children[0]);
        }
    
        let offset = 0;
        const widths = splitIntoCabinets(this.length);
    
        // Base cabinets
        widths.forEach((width) => {
          const baseCab = new Cabinet(width, BASE_HEIGHT, BASE_DEPTH, 'models/base_cabinet.glb', 0x8b4513);
          baseCab.position.set(offset + width / 2, BASE_HEIGHT / 2, BASE_DEPTH / 2);
          this.add(baseCab);
          offset += width;
        });
    
        // Countertop (single slab, no model)
        const counterGeometry = new THREE.BoxGeometry(this.length, COUNTER_HEIGHT, COUNTER_DEPTH);
        const counterMaterial = new THREE.MeshBasicMaterial({ color: 0xa9a9a9 });
        const counter = new THREE.Mesh(counterGeometry, counterMaterial);
        counter.position.set(this.length / 2, BASE_HEIGHT + COUNTER_HEIGHT / 2, COUNTER_DEPTH / 2);
        this.add(counter);
    
        // Wall cabinets
        offset = 0;
        widths.forEach((width) => {
          const wallCab = new Cabinet(width, WALL_HEIGHT, WALL_DEPTH, 'models/wall_cabinet.glb', 0x4b0082);
          wallCab.position.set(offset + width / 2, WALL_START_Y + WALL_HEIGHT / 2, WALL_DEPTH / 2);
          this.add(wallCab);
          offset += width;
        });
      }
    }
    
    // Call for each cabinetSegment after drawing
    cabinetSegments.forEach((segment) => segment.populate());

    Further Improvements & Optimizations

    We can further improve the scene with appliances, varying-height cabinets, crown molding, etc.

    At this point, we should have the foundational elements of room and cabinet creation logic fully in place. In order to take this project from a rudimentary segment-drawing app into the practical realm—along with dynamic cabinets, multiple realistic material options, and varying real appliance meshes—we can further enhance the user experience through several targeted refinements:

    • We can implement a detection mechanism to determine if a cabinet line segment is in contact with a wall line segment (see the sketch after this list).
      • For cabinet rows that run parallel to walls, we can automatically incorporate a backsplash in the space between the wall cabinets and the countertop surface.
      • For cabinet segments not adjacent to walls, we can remove the upper wall cabinets and extend the countertop by an additional 15 inches, aligning with standard practices for kitchen islands or peninsulas.
    • We can introduce drag-and-drop functionality for appliances, each with predefined widths, allowing users to position them along the line segment. This integration will instruct our cabinet-splitting algorithm to exclude those areas from dynamic cabinet generation.
    • Additionally, we can give users more flexibility by enabling the swapping of one appliance with another, applying different textures to our 3D models, and adjusting default dimensions—such as wall cabinet depth or countertop overhang—to suit specific preferences.
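
    Here is a minimal sketch of the wall-contact check mentioned in the first point above. The Segment2D shape, the threshold values, and the isAgainstWall name are assumptions for this example, and a real implementation would also account for which side of the line the cabinet depth extends toward:

    // Hypothetical sketch: treat each wall and cabinet run as a 2D segment on the floor plane
    // and consider them "in contact" when they are roughly parallel and closer than a threshold.
    interface Segment2D {
      start: THREE.Vector3; // endpoints on the floor plane (y = 0)
      end: THREE.Vector3;
    }
    
    const CONTACT_THRESHOLD = 1; // inches
    const PARALLEL_TOLERANCE = 0.05; // radians
    
    function isAgainstWall(cabinet: Segment2D, wall: Segment2D): boolean {
      const cabDir = cabinet.end.clone().sub(cabinet.start).normalize();
      const wallDir = wall.end.clone().sub(wall.start).normalize();
    
      // Roughly parallel (or anti-parallel)?
      const angle = Math.acos(Math.min(1, Math.abs(cabDir.dot(wallDir))));
      if (angle > PARALLEL_TOLERANCE) return false;
    
      // Distance from the cabinet's midpoint to the infinite line through the wall
      const mid = cabinet.start.clone().add(cabinet.end).multiplyScalar(0.5);
      const along = mid.clone().sub(wall.start).dot(wallDir);
      const closest = wall.start.clone().add(wallDir.clone().multiplyScalar(along));
    
      return mid.distanceTo(closest) < CONTACT_THRESHOLD;
    }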

    All these core components lead us to a comprehensive, interactive application that enables the rapid rendering of a complete kitchen: cabinets, countertops, and appliances, in a fully interactive, user-driven experience.

    The aim of this project is to demonstrate that complex 3D tasks can be distilled down to simple user actions. It is fully possible to take the high-dimensional complexity of 3D tooling—with seemingly limitless controls—and encode these complexities into low-dimensional, easily adjustable parameters. Whether the developer chooses to expose these parameters to the user or an LLM, the end result is that historically complicated 3D processes can become simple, and thus the entire contents of a 3D scene can be fully transformed with only a few parameters.

    If you find this type of development interesting, have any great ideas, or would love to contribute to the evolution of this product, I strongly welcome you to reach out to me via email. I firmly believe that only recently has it become possible to build home design software that is so wickedly fast and intuitive that any person—regardless of architectural merit—will be able to design their own single-family home in less than 5 minutes via a web app, while fully adhering to local zoning, architectural, and design requirements. All the infrastructure necessary to accomplish this already exists; all it takes is a team of crazy, ambitious developers looking to change the standard of architectural home design.





    Source link

  • How to kill a process running on a local port in Windows | Code4IT

    How to kill a process running on a local port in Windows | Code4IT


    Now you can’t run your application because another process already uses the port. How can you find that process? And how can you kill it?

    Sometimes, when trying to run your ASP.NET application, there’s something stopping you.

    Have you ever found a message like this?

    Failed to bind to address https://127.0.0.1:7261: address already in use.

    You can try over and over again, you can also restart the application, but the port still appears to be used by another process.

    How can you find the process that is running on a local port? How can you kill it to free up the port and, eventually, be able to run your application?

    In this article, we will learn how to find the blocking port in Windows 10 and Windows 11, and then we will learn how to kill that process given its PID.

    How to find the process running on a port on Windows 11 using PowerShell

    Let’s see how to identify the process that is running on port 7261.

    Open a PowerShell and run the netstat command:

    NETSTAT is a command that shows info about the active TCP/IP network connections. It accepts several options. In this case, we will use:

    • -n: Displays addresses and port numbers in numerical form.
    • -o: Displays the owning process ID associated with each connection.
    • -a: Displays all connections and listening ports.
    • -p: Filters for a specific protocol (TCP or UDP).
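
    Putting those options together, the command to list all active TCP connections with their owning PIDs is the unfiltered form of the command used later in this article:

    netstat -noa -p TCP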

    Netstat command to show all active TCP connections

    Notice that the last column lists the PID (Process ID) bound to each connection.

    From here, we can use the findstr command to get only the rows with a specific string (the searched port number).

    netstat -noa -p TCP | findstr 7261
    

    Netstat info filtered by string

    Now, by looking at the last column, we can identify the Process ID: 19160.

    How to kill a process given its PID on Windows or PowerShell

    Now that we have the Process ID (PID), we can open the Task Manager, paste the PID value in the topmost textbox, and find the related application.

    In our case, it was an instance of Visual Studio running an API application. We can now kill the process by hitting End Task.

    Using Task Manager on Windows11 to find the process with specified ID

    If you prefer working with PowerShell, you can find the details of the related process by using the Get-Process command:
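
    For example, assuming the PID 19160 identified above, a typical call would be:

    Get-Process -Id 19160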

    Process info found using PowerShell

    Then, you can use the taskkill command by specifying the PID, using the /PID flag, and adding the /F flag to force the killing of the process.
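
    With the same example PID, the command would look like this:

    taskkill /PID 19160 /F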

    We have killed the process related to the running application. Visual Studio is still working, of course.

    Further readings

    Hey, what are these fancy colours on the PowerShell?

    It’s a customization I added to show the current folder and the info about the associated GIT repository. It’s incredibly useful while developing and navigating the file system with PowerShell.

    🔗 OhMyPosh and CascadiaCode not working on PowerShell? How to fix it in Windows 10 and Windows 11 Integrated Terminal

    This article first appeared on Code4IT 🐧

    Wrapping up

    As you can imagine, this article exists because I often forget how to find the process that stops my development.

    It’s always nice to delve into these topics to learn more about what you can do with PowerShell and which flags are available for a command.

    I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛

    Happy coding!

    🐧





    Source link

  • Deconstructing the 35mm Website: A Look at the Process and Technical Details

    Deconstructing the 35mm Website: A Look at the Process and Technical Details


    The Idea Behind the Project

    This project primarily serves as a technical demo and learning material. It began when I decided to start learning Blender. I followed a few tutorials, then decided to do a small project using it—so I chose to create the Canon F-1 camera!

    After that, I decided to export the project to Three.js to add some cool post-processing shader effects. I wanted to create a sketch effect similar to what I had seen in some repair guides.

    After spending a few hours experimenting with it, I decided to integrate it into a fully functional website featuring some cool shaders and 3D effects!

    In this article, I’m going to walk through some of the key features of the site and provide a technical breakdown, assuming you already have a basic or beginner-level understanding of Three.js and shaders.

    1. The Edge Detection Shader

    Three.js includes a built-in edge detection shader called SobelOperatorShader. Basically, it detects edges based on color contrast—it draws a line between two areas with a strong enough difference in color.

    To make my effect work the way I want, I need to assign a unique color to each area I want to highlight on my model. This way, Three.js will draw a line around those areas.

    Here’s my model with all the materials applied:

    This way, Three.js can accurately detect each area I want to highlight!

    As you can see, the lines are not all the same intensity—some are white, while others are light gray. This is because, by default, line intensity depends on contrast: edges with lower contrast appear with lighter lines. To fix this, I manually modified the post-processing shader to make all lines fully white, regardless of contrast.

    The shader can be found in:

    node_modules/three/examples/jsm/shaders/SobelOperatorShader.js

    I copied the contents of the fragment shader into a separate file so I could freely modify it.

    uniform sampler2D tDiffuse;
    uniform vec2 resolution;
    varying vec2 vUv;
    
    float sobel(sampler2D tDiffuse,vec2 texel)
    {
        // kernel definition (in glsl matrices are filled in column-major order)
    
        const mat3 Gx = mat3( -1, -2, -1, 0, 0, 0, 1, 2, 1 ); // x direction kernel
        const mat3 Gy = mat3( -1, 0, 1, -2, 0, 2, -1, 0, 1 ); // y direction kernel
    
        // fetch the 3x3 neighbourhood of a fragment
    
        // first column
    
        float tx0y0 = texture2D( tDiffuse, vUv + texel * vec2( -1, -1 ) ).r;
        float tx0y1 = texture2D( tDiffuse, vUv + texel * vec2( -1,  0 ) ).r;
        float tx0y2 = texture2D( tDiffuse, vUv + texel * vec2( -1,  1 ) ).r;
    
        // second column
    
        float tx1y0 = texture2D( tDiffuse, vUv + texel * vec2(  0, -1 ) ).r;
        float tx1y1 = texture2D( tDiffuse, vUv + texel * vec2(  0,  0 ) ).r;
        float tx1y2 = texture2D( tDiffuse, vUv + texel * vec2(  0,  1 ) ).r;
    
        // third column
    
        float tx2y0 = texture2D( tDiffuse, vUv + texel * vec2(  1, -1 ) ).r;
        float tx2y1 = texture2D( tDiffuse, vUv + texel * vec2(  1,  0 ) ).r;
        float tx2y2 = texture2D( tDiffuse, vUv + texel * vec2(  1,  1 ) ).r;
    
        // gradient value in x direction
    
        float valueGx = Gx[0][0] * tx0y0 + Gx[1][0] * tx1y0 + Gx[2][0] * tx2y0 +
            Gx[0][1] * tx0y1 + Gx[1][1] * tx1y1 + Gx[2][1] * tx2y1 +
            Gx[0][2] * tx0y2 + Gx[1][2] * tx1y2 + Gx[2][2] * tx2y2;
    
        // gradient value in y direction
    
        float valueGy = Gy[0][0] * tx0y0 + Gy[1][0] * tx1y0 + Gy[2][0] * tx2y0 +
            Gy[0][1] * tx0y1 + Gy[1][1] * tx1y1 + Gy[2][1] * tx2y1 +
            Gy[0][2] * tx0y2 + Gy[1][2] * tx1y2 + Gy[2][2] * tx2y2;
    
        // magnitude of the total gradient
    
        float G = sqrt( ( valueGx * valueGx ) + ( valueGy * valueGy ) );
    
        return G;
    }
    
    
    void main() {
    
        vec2 texel = vec2( 1.0 / resolution.x, 1.0 / resolution.y );
        
        vec4 t = texture2D(tDiffuse,vUv);    
    
        float G = sobel(tDiffuse,texel);
        G = G > 0.001 ? 1. : 0.;
            
        gl_FragColor = vec4(vec3(G),1.0);
    
        #include <colorspace_fragment>
    }

    What I’m doing here is moving all the edge detection logic into the Sobel function. Then, I pass the tDiffuse texture—which is the composer’s render—to this function.

    This way, I can modify the output of the edge detection shader before passing it back to the composer:

    float G = sobel(tDiffuse,texel);
    G = G > 0.001 ? 1. : 0.;

    G represents the intensity of the edge detection. It’s a single value because the lines are monochrome. G ranges from 0 to 1, where 0 means full black (no edge detected) and 1 means full white (strong contrast detected).

    As mentioned earlier, this value depends on the contrast. What I’m doing in the second line is forcing G to be 1 if it’s above a certain threshold (I chose 0.001, but you could pick a smaller value if you want).

    This way I can get all the edges to have the same intensity.

    Here’s how I’m applying the custom fragment shader to the Sobel Operator shader pass:

    import { SobelOperatorShader } from "three/addons/shaders/SobelOperatorShader.js"
    import { ShaderPass } from "three/addons/postprocessing/ShaderPass.js"
    
    
    export default class CannonF1 {
        constructor() {
            //....code
        }
    
        setupPostprocessing()
        {
    
            SobelOperatorShader.fragmentShader = sobelFragment
    
            this.effectSobel = new ShaderPass(SobelOperatorShader)
            this.effectSobel.uniforms["resolution"].value.x =
            window.innerWidth * Math.min(window.devicePixelRatio, 2)
            this.effectSobel.uniforms["resolution"].value.y =
            window.innerHeight * Math.min(window.devicePixelRatio, 2)
    
            this.composer.addPass(this.effectSobel)
        }
    }

    2. The Mesh Highlight on Hover Effect

    Next, let’s take a look at the lens parts section.

    This is mainly achieved using a Three.js utility called RenderTarget.

    A render target is a buffer where the GPU draws pixels for a scene being rendered off-screen. It’s commonly used in effects like post-processing, where the rendered image is processed before being displayed on the screen.

    Basically, this allows me to render my scene twice per frame: once with only the highlighted mesh, and once without it.

    First I setup the render targets:

    /* 
      ....Code 
    */
    
    createRenderTargets() {
        const sizes = {
          width:
            window.innerWidth * Math.ceil(Math.min(2, window.devicePixelRatio)),
          height:
            window.innerHeight * Math.ceil(Math.min(2, window.devicePixelRatio)),
        }
    
        this.renderTargetA = new THREE.WebGLRenderTarget(
          sizes.width,
          sizes.height,
          rtParams
        )
    
        this.renderTargetB = new THREE.WebGLRenderTarget(
          sizes.width,
          sizes.height,
          rtParams
        )
      }
    
    /* 
      ...Code 
    */

    Then, using the Three.js Raycaster, I can retrieve the uuid of the mesh that is being hovered over:

    onMouseMove(event: MouseEvent) {
        this.mouse.x = (event.clientX / window.innerWidth) * 2 - 1
        this.mouse.y = -(event.clientY / window.innerHeight) * 2 + 1
    
        this.raycaster.setFromCamera(this.mouse, this.camera)
        const intersects = this.raycaster.intersectObjects(this.scene.children)
        const target = intersects[0]
    
        if (target && "material" in target.object) {
          const targetMesh = intersects[0].object as THREE.Mesh
          this.cannonF1?.onSelectMesh(targetMesh.uuid)
        } else {
          this.cannonF1?.onSelectMesh()
        }
      }

    In the onSelectMesh method, I set the value of this.selectedMeshName to the name of the mesh group that contains the target mesh from the Raycaster (I’m using names to refer to groups of meshes).
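
    A minimal sketch of that method (assuming the same mesheUuidToName lookup used in the render loop below, not the exact implementation) could look like this:

    onSelectMesh(uuid?: string) {
      if (uuid && this.mesheUuidToName[uuid]) {
        // Remember the group name so the render loop can split the scene across the two render targets
        this.selectedMeshName = this.mesheUuidToName[uuid]
      } else {
        // Nothing hovered: clear the selection so every mesh ends up in renderTargetA
        this.selectedMeshName = null
      }
    }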

    This way, in my render loop, I can create two distinct renders:

    • One render (renderTargetA) with all the meshes except the hovered mesh
    • Another render (renderTargetB) with only the hovered mesh
    render() {
        // Render renderTargetA
        this.modelChildren.forEach((mesh) => {
          if (this.mesheUuidToName[mesh.uuid] === this.selectedMeshName) {
            mesh.visible = false
          } else {
            mesh.visible = true
          }
        })
    
        this.renderer.setRenderTarget(this.renderTargetA)
        this.renderer.render(this.scene, this.camera)
    
        // Render renderTargetB
        this.modelChildren.forEach((mesh) => {
          if (this.mesheUuidToName[mesh.uuid] === this.selectedMeshName) {
            mesh.visible = true
          } else {
            mesh.visible = false
          }
        })
        if (this.targetedMesh) {
          this.targetedMesh.children.forEach((child) => {
            child.visible = true
          })
        }
    
        this.renderer.setRenderTarget(this.renderTargetB)
        this.renderer.render(this.scene, this.camera)
    
        // Hide the source meshes before the final composer pass; the on-screen
        // result comes from the Sobel shader that merges the two render targets
        this.modelChildren.forEach((mesh) => {
          mesh.visible = false
        })
    
        this.effectSobel.uniforms.tDiffuse1.value = this.renderTargetA.texture
        this.effectSobel.uniforms.tDiffuse2.value = this.renderTargetB.texture
    
        this.renderer.setRenderTarget(null)
      }

    This is what the renderTargetA render looks like:

    …and renderTargetB:

    As you can see, I’m sending both renders as texture uniforms to the effectSobel shader. The post-processing shader then “merges” these two renders into a single output.

    At this point, we have two renders of the scene, and the post-processing shader needs to decide which one to display. Initially, I thought of simply combining them by adding the two textures together, but that didn’t produce the correct result:

    What I needed was a way to hide the pixels of one render when they are “covered” by pixels from another render.

    To achieve this, I used the distance of each vertex from the camera. This meant I had to go through all the meshes in the model and modify their materials. However, since the mesh colors are important for the edge detection effect, I couldn’t change their colors.

    Instead, I used the alpha channel of each individual vertex to set the distance from the camera.

    #include <common>
    
    varying vec3 vPosition;
    uniform vec3 uColor;
    
    float normalizeRange(float value, float oldMin, float oldMax, float newMin, float newMax) {
        float normalized = (value - oldMin) / (oldMax - oldMin);
        
        return newMin + (newMax - newMin) * normalized;
    }
    
    void main()
    {
        float dist = distance(vPosition,cameraPosition);
    
        float l = luminance( uColor );
    
        gl_FragColor=vec4(vec3(l),normalizeRange(dist,0.,20.,0.,1.));
    
        #include <colorspace_fragment>
    }

    Here’s an explanation of this shader:

    • First, the luminance function is a built-in Three.js shader helper that becomes available once the <common> shader chunk is included. Using luminance with the Sobel operator is recommended, since edge detection works better on a single-channel grayscale input.
    • The uColor value represents the initial color of the mesh.
    • The dist value calculates the distance between the vertex position (passed from the vertex shader via a varying) and the camera, using the built-in cameraPosition variable in Three.js shaders.
    • Finally, I pass this distance through the alpha channel. Since the alpha value can’t exceed 1, I use a normalized version of the distance.
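
    On the JavaScript side, this means every mesh of the model gets a material that writes its luminance to RGB and its normalized camera distance to alpha. A rough sketch of how that assignment could look (distanceVertexShader and distanceFragmentShader are assumed to contain the GLSL above plus a matching vertex stage that sets vPosition):

    // Hypothetical sketch: swap each mesh material for a ShaderMaterial that
    // writes luminance to RGB and the normalized camera distance to alpha.
    // distanceVertexShader / distanceFragmentShader are assumed to hold the
    // shader code shown above.
    this.modelChildren.forEach((mesh) => {
      const baseColor = (mesh.material as THREE.MeshStandardMaterial).color

      mesh.material = new THREE.ShaderMaterial({
        vertexShader: distanceVertexShader,
        fragmentShader: distanceFragmentShader,
        uniforms: {
          uColor: new THREE.Uniform(baseColor.clone()),
        },
      })
    })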

    And here is the updated logic for the postprocessing shader:

    uniform sampler2D tDiffuse;
    uniform sampler2D tDiffuse1;
    uniform sampler2D tDiffuse2;
    uniform vec2 resolution;
    varying vec2 vUv;
    
    float sobel(sampler2D tDiffuse,vec2 texel)
    {
        //sobel operator
    }
    
    
    void main() {
    
        vec2 texel = vec2( 1.0 / resolution.x, 1.0 / resolution.y );
        
        vec4 t = texture2D(tDiffuse,vUv);
    
        vec4 t1 = texture2D(tDiffuse1,vUv);
        vec4 t2 = texture2D(tDiffuse2,vUv);     
    
        if(t1.a==0.)
        {
            t1.a = 1.;
        }
        if(t2.a==0.)
        {
            t2.a = 1.;
        }
    
    
        float G = sobel(tDiffuse1,texel);
        G= G > 0.001 ? 1. : 0.;
        float Gs = sobel(tDiffuse2,texel);
        Gs = Gs > 0.001 ? 1. : 0.;
        
        vec4 s1 = vec4(vec3(G),1.);
        
        vec4 s2 = vec4(vec3(Gs),1.);    
        
        vec4 sobelTexture = vec4(vec3(0.),1.);
    
    
        if(t1.a>t2.a)
        {
            sobelTexture = s2;       
        }    
        else{
            sobelTexture = s1;
        }    
    
            
        gl_FragColor = sobelTexture;
    
        #include <colorspace_fragment>
    }

    Now that the alpha channel of each texture contains the distance to the camera, I can simply compare the two and display the render whose vertices are closer to the camera.

    3. The Film Roll Effect

    Next is the film roll component that moves and twists on scroll.

    This effect is achieved entirely with shaders: the component is a single plane with a shader material.

    All the data is sent to the shader through uniforms:

    export default class Film {  
      constructor() {
        //...code
      }
    
      createGeometry() {
        this.geometry = new THREE.PlaneGeometry(
          60,
          2,
          100,
          10
        )
      }
    
      createMaterial() {
        this.material = new THREE.ShaderMaterial({
          vertexShader,
          fragmentShader,
          side: THREE.DoubleSide,
          transparent: true,
          depthWrite: false,
          blending: THREE.CustomBlending,
          blendEquation: THREE.MaxEquation,
          blendSrc: THREE.SrcAlphaFactor,
          blendDst: THREE.OneMinusSrcAlphaFactor,
          uniforms: {
            uPlaneWidth: new THREE.Uniform(this.geometry.parameters.width),
            uRadius: new THREE.Uniform(2),
            uXZfreq: new THREE.Uniform(3.525),
            uYfreq: new THREE.Uniform(2.155),
            uOffset: new THREE.Uniform(0),
            uAlphaMap: new THREE.Uniform(
              window.preloader.loadTexture(
                "./alpha-map.jpg",
                "film-alpha-map",
                (texture) => {
                  texture.wrapS = THREE.RepeatWrapping
                  const { width, height } = texture.image
                  this.material.uniforms.uAlphaMapResolution.value =
                    new THREE.Vector2(width, height)
                }
              )
            ),
            //uImages: new THREE.Uniform(new THREE.Vector4()),
            uImages: new THREE.Uniform(
              window.preloader.loadTexture(
                "/film-texture.png",
                "film-image-texture",
                (tex) => {
                  tex.wrapS = THREE.RepeatWrapping
                }
              )
            ),
            uRepeatFactor: new THREE.Uniform(this.repeatFactor),
            uImagesCount: new THREE.Uniform(this.images.length * this.repeatFactor),
            uAlphaMapResolution: new THREE.Uniform(new THREE.Vector2()),
            uFilmColor: new THREE.Uniform(window.colors.orange1),
          },
        })
      }  
    
      createMesh() {
        this.mesh = new THREE.Mesh(this.geometry, this.material)
        this.scene.add(this.mesh)
      }
    }

    The main vertex shader uniforms are:

    • uRadius is the radius of the cylinder shape
    • uXZfreq is the frequency of the twists on the (X,Z) plane
    • uYfreq is a cylinder height factor
    • uOffset is the vertical offset of the roll when you scroll up and down

    Here is how they are used in the vertex shader:

    #define PI 3.14159265359
    
    uniform float uPlaneWidth;
    uniform float uXZfreq;
    uniform float uYfreq;
    varying vec2 vUv;
    uniform float uOffset;
    varying vec3 vPosition;
    uniform float uRadius;
    
    void main()
    {
        vec3 np = position;
        float theta = -(PI*np.x)/(uPlaneWidth*0.5);
    
    
        np.x=cos(uXZfreq*theta+uOffset)*uRadius;
        np.y+=theta*uYfreq;
        np.z=sin(uXZfreq*theta+uOffset)*uRadius;
        
        vec4 modelPosition = modelMatrix * vec4(np, 1.0);
    
        
        vec4 viewPosition = viewMatrix * modelPosition;
        vec4 projectedPosition = projectionMatrix * viewPosition;
        gl_Position = projectedPosition;    
    
    
        vUv=uv;
        vPosition=np;
    }

    As you can see, these uniforms modify the initial position attribute to give the plane the shape of a cylinder. The modified X and Z components include uOffset in their angle, and this uniform is driven by a ScrollTrigger timeline to produce the twist-on-scroll effect.

    // gsap and ScrollTrigger imports (ScrollTrigger must be registered once)
    import gsap from "gsap"
    import { ScrollTrigger } from "gsap/ScrollTrigger"
    
    gsap.registerPlugin(ScrollTrigger)
    
    const tl = gsap.timeline({
      scrollTrigger: {
        trigger: this.section,
        start: "top bottom",
        end: "bottom top",
        scrub: true,
        invalidateOnRefresh: true,        
      },
    })    
    
    tl.to(
      this.material.uniforms.uOffset,
      {
        value: 10,
        duration: 1,
      },
      0
    )

    Conclusion

    That’s it for the most part! Don’t feel frustrated if you don’t understand everything right away—I often got stuck for days on certain parts and didn’t know every technical detail before I started building.

    I learned so much from this project, and I hope you’ll find it just as useful!

    Thank you for reading, and thanks to Codrops for featuring me again!




  • How Has Medical Technology Impacted the Surrogacy Process?


    Advancements in medical technology have significantly transformed the surrogacy process, offering new opportunities and improving outcomes for all parties involved. From the initial application to post-birth care, technology plays a crucial role in making surrogacy a viable and successful option for many families. Let’s explore how these advancements have impacted the various stages of the surrogacy journey.

    Streamlining the Application Process

    Every year, thousands of women express their interest in becoming surrogate mothers. The process begins with a thorough application and screening to ensure candidates are suitable for the role. Medical technology has streamlined this initial stage, enabling agencies to efficiently process and review applications. Online platforms and databases allow for quick and secure submission of documents, while advanced screening tools help identify potential surrogates who meet the necessary health and psychological criteria.

    Ensuring Health and Compatibility

    The first three months of the surrogacy process involve a rigorous schedule of paperwork, legal formalities, and medical exams, as stated by Elevate Baby. Medical technology has enhanced these early stages by providing sophisticated diagnostic tools and tests. Surrogate mothers undergo comprehensive health evaluations to ensure they are physically capable of carrying a pregnancy to term. This includes blood tests, ultrasounds, and other imaging techniques that offer detailed insights into their health status. These exams help identify any potential issues early on, ensuring a smooth and safe journey ahead.

    Facilitating Legal and Ethical Compliance

    Legal aspects are a critical component of the surrogacy process. The initial months also involve meticulous legal work to protect the rights and responsibilities of all parties. Medical technology aids in this by ensuring accurate and secure documentation. Digital contracts and electronic signatures have replaced traditional paperwork, making the process more efficient and less prone to errors. Secure online portals allow for the easy sharing and storage of legal documents, ensuring compliance with local regulations and ethical standards.

    Enhancing Fertility Treatments

    One of the most significant impacts of medical technology on surrogacy is in the realm of fertility treatments. In vitro fertilization (IVF) is a cornerstone of the surrogacy process, and advancements in this field have greatly improved success rates. Technologies such as preimplantation genetic testing (PGT) allow for the screening of embryos for genetic abnormalities before implantation. This increases the likelihood of a healthy pregnancy and reduces the risk of complications. Additionally, innovations in cryopreservation enable the freezing and storage of eggs, sperm, and embryos, providing greater flexibility and options for intended parents and surrogates.

    Monitoring Pregnancy and Health

    Throughout the surrogacy journey, continuous monitoring of the surrogate’s health is paramount. Modern medical technology offers a range of tools to track the progress of the pregnancy and ensure the well-being of both the surrogate and the developing baby. Regular ultrasounds, non-invasive prenatal testing (NIPT), and wearable health devices provide real-time data on the surrogate’s condition. This information allows healthcare providers to promptly address any concerns and make informed decisions to support a healthy pregnancy.

    Supporting Emotional Well-being

    The surrogacy process can be emotionally taxing for all involved. Medical technology also plays a role in supporting the mental health of surrogate mothers. Telemedicine and virtual counseling services offer accessible support, allowing surrogates to connect with mental health professionals from the comfort of their homes. These resources help surrogates manage stress, anxiety, and other emotional challenges, ensuring a positive and fulfilling experience.

    Post-Birth Care and Follow-Up

    After the birth of the child, medical technology continues to be essential. Surrogates receive comprehensive post-birth care to ensure their physical and emotional recovery. Regular follow-up visits and check-ups are facilitated by advanced medical scheduling systems and electronic health records, ensuring continuity of care. It is recommended that individuals visit a doctor at least once a year to maintain their overall health, and this applies to surrogate mothers as well. Annual check-ups help monitor long-term health outcomes and provide ongoing support.


