    Behind the Curtain: Building Aurel’s Grand Theater from Design to Code

    “Aurel’s Grand Theater” is an experimental, unconventional solo portfolio project that invites users to read case
    studies, solve mysteries to unlock secret pages, or freely explore the theater – jumping around and even smashing
    things!

    I had an absolute blast working on it, even though it took much longer than I anticipated. Once I finally settled on a
    creative direction, the project took about a year to complete – but reaching that direction took nearly two years on
    its own. Throughout the journey, I balanced a full-time job as a lead web developer, freelance gigs, and an unexpected
    relocation to the other side of the world. The cherry on top? I went through way too many artistic iterations. It’s my
    longest solo project to date, but also one of the most fun and creatively rewarding. It gave me the chance to dive
    deep into creative coding and design.

    This article takes you behind the scenes of the project – covering everything from design to code, including tools,
    inspiration, project architecture, design patterns, and even feature breakdowns with code snippets you can adapt for
    your own work.

    The Creative Process: Behind the Curtain

    Genesis

    After eight years, my portfolio no longer reflected my skills or creativity. I wanted to create something unconventional – an experience where visitors become active participants rather than passive observers. Most importantly, I wanted it to be something I’d genuinely enjoy building. I was wrapping up “Leap for Mankind” at the time and had a blast working on it, blending storytelling with game and interactive elements. I wanted to create another experimental website that combines game mechanics with a narrative experience.

    From the beginning, I envisioned a small character that could freely explore its environment – smashing objects, interacting with surrounding elements, and navigating not just the floor but also vertical spaces by jumping onto tables and chairs. The goal was to transform the portfolio from a passive viewing experience into a fun, interactive one. At the same time, I recognized that some content demands clarity over creativity. For example, case studies require a more traditional format that emphasizes readability.

    One of the key challenges, then, was designing a portfolio that could seamlessly transition between an immersive 3D game world and more conventional documentation pages – without disrupting the overall experience.

    Building the Foundation

    I had a general concept of the website in mind, so I started coding a proof of concept (POC) for the game back in
    2022. In this early version, the player could move around, bump into objects, and jump – laying the foundation for the
    interactive world I envisioned. Interestingly, much of the core code structure from that POC made it into the final
    product. While the technical side was coming together, I still hadn’t figured out the artistic direction at that
    point.

    Trials and Errors

    As a full-time web developer, I rarely find myself wrestling with artistic direction. Until now, every freelance and
    side project I took on began with a clear creative vision that simply needed technical execution.

    This time was different. At first, I leaned toward a cartoonish aesthetic with bold outlines, thinking it would
    emphasize my creativity. I tried to convince myself it worked, but something felt off – especially when pairing the
    visual style with the user interface. The disconnect between my vision and its execution was unfamiliar territory, and
    it led me down a long and winding path of creative exploration.

    Early artistic direction

    I experimented with other styles too, like painterly visuals, which held promise but proved too time-consuming. Each
    artistic direction felt either not suitable for me or beyond my practical capabilities as a developer moonlighting as
    a designer.

    The theater concept – which ultimately became central to the portfolio’s identity – arrived surprisingly late. It
    wasn’t part of the original vision but surfaced only after countless iterations and discarded ideas. In total,
    finding an artistic direction that truly resonated took nearly two years – a journey further complicated by a major
    relocation across continents, ongoing work and freelance commitments, and personal responsibilities.

    The extended timeline wasn’t due to technical complexity, but to an unexpected battle with creative identity. What
    began as a straightforward portfolio refresh evolved into a deeper exploration of how to merge professional
    presentation with personal expression – pushing me far beyond code and into the world of creative direction.

    Tools & Inspiration: The Heart of Creation

    After numerous iterations and abandoned concepts, I finally arrived at a creative direction that resonated with my
    vision. Rather than detailing every artistic detour, I’ll focus on the tools and direction that ultimately led to the
    final product.

    Design Stack

    Below is the stack I use to design my 3D projects:

    UI/UX & Visual Design

    • Figma
      : When I first started, everything was laid out in a Photoshop file. Over the years, I tried various design tools,
      but I’ve been using Figma consistently since 2018 – and I’ve been really satisfied with it ever since.
    • Miro
      : Great for moodboarding and early ideation. It helps me visually organize thoughts and explore concepts during the
      initial phase.

    3D Modeling & Texturing

    • Blender
      : My favorite tool for 3D modeling. It’s incredibly powerful and flexible, though it does have a steep learning
      curve at first. Still, it’s well worth the effort for the level of creative control it offers.
    • Adobe Substance 3D Painter
      : The gold standard in my workflow for texture painting. It’s expensive, but the quality and precision it delivers
      make it indispensable.

    Image Editing

    • Krita
      : I only need light photo editing, and Krita handles that perfectly without locking me into Adobe’s ecosystem – a
      practical and efficient alternative.

    Drawing Inspiration from Storytellers

    While I drew inspiration from many sources, the most influential were Studio Ghibli and the mystical world of Harry
    Potter. Ghibli’s meticulous attention to environmental detail shaped my understanding of atmosphere, while the
    enchanting realism of the Harry Potter universe helped define the mood I wanted to evoke. I also browsed platforms
    like ArtStation and Pinterest for broader visual inspiration, while sites like Behance, FWA, and Awwwards influenced
    the more granular aspects of UX/UI design.

    Initially, I organized these references on an InVision board. However, when the platform shut down mid-project, I had
    to migrate everything to Miro – an unexpected transition and symbolic disruption that echoed the broader delays in the
    project.

    Mood board of Aurel’s Grand Theater

    Designing the Theater

    The theater concept emerged as the perfect metaphor for a portfolio: a space where different works could be presented
    as “performances,” while maintaining a cohesive environment. It also aligned beautifully with the nostalgic,
    pre-digital vibe inspired by many of my visual references.

    Environment design is a specialized discipline I wasn’t very familiar with initially. To create a theater that felt
    visually engaging and believable, I studied techniques from the
    FZD School
    . These approaches were invaluable in conceptualizing spaces that truly feel alive: places where you can sense people
    living their lives, working, and interacting with the environment.

    To make the environment feel genuinely inhabited, I incorporated details that suggest human presence: scattered props,
    tools, theater posters, food items, pamphlets, and even bits of miscellaneous junk throughout the space. These
    seemingly minor elements were crucial in transforming the static 3D model into a setting rich with history, mood, and
    character.

    The 3D Modeling Process

    Optimizing for Web Performance

    Creating 3D environments for the web comes with unique challenges that differ significantly from modeling for
    pre-rendered video. When scenes need to be rendered in real time by a browser, every polygon matters.

    To address this, I adopted a strict low-poly approach and focused heavily on building reusable modular components.
    These elements could be instantiated throughout the environment without duplicating unnecessary geometry or textures.

    While the final result is still relatively heavy, this modular system allowed me to construct more complex and
    detailed scenes while maintaining reasonable download sizes and rendering performance – something that wouldn’t have
    been possible otherwise.

    Texture Over Geometry

    Rather than modeling intricate details that would increase polygon counts, I leveraged textures to suggest complexity.

    Adobe Substance 3D became my primary tool for creating rich material surfaces that could convey detail without
    overloading the renderer. This approach was particularly effective for elements like the traditional Hanok windows
    with their intricate wooden lattice patterns. Instead of modeling each panel, which would have been
    performance-prohibitive, I painted the details into textures and applied them to simple geometric forms.
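
    To give a concrete sense of the idea, here is a minimal, hypothetical Three.js sketch (the texture paths are
    placeholders, not the project’s assets): the painted detail lives entirely in the diffuse and normal maps, while the
    geometry stays a simple plane.

    // Minimal sketch: detail comes from textures, not polygons
    import * as THREE from "three";

    const loader = new THREE.TextureLoader();

    const windowMaterial = new THREE.MeshStandardMaterial({
      map: loader.load("/textures/hanok_window_diffuse.jpg"), // hypothetical path
      normalMap: loader.load("/textures/hanok_window_normal.jpg"), // hypothetical path
    });

    // A flat plane is enough: the lattice pattern is painted, not modeled
    const hanokWindow = new THREE.Mesh(new THREE.PlaneGeometry(1, 2), windowMaterial);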

    Frameworks & Patterns: Behind the Scenes of Development

    Tech Stack

    This is a comprehensive overview of the technology stack I used for Aurel’s Grand Theater website, leveraging my
    existing expertise while incorporating specialized tools for animation and 3D effects.

    Core Framework

    • Vue.js
      : While I previously worked with React, Vue has been my primary framework since 2018. Beyond simply enjoying the
      framework, it makes sense for me to maintain consistency between the tools I use at work and on my side projects. I
      also use Vite and Pinia.

    Animation & Interaction

    • GSAP
      : A cornerstone of my development toolkit for nearly a decade, primarily utilized on this project for:

      • ScrollTrigger functionality
      • MotionPath animations
      • Timeline and tweens
      • As a personal challenge, I created my own text-splitting functionality for this project (since it wasn’t client
        work), but I highly recommend GSAP’s SplitText for most use cases.
    • Lenis
      : My go-to library for smooth scrolling. It integrates beautifully with scroll animations, especially when working
      with Three.js – a minimal wiring sketch follows below.
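
    This wiring isn’t shown in the project’s code, but the typical Lenis + ScrollTrigger setup (based on the Lenis
    documentation) looks like this:

    import Lenis from "lenis"; // older versions ship as "@studio-freight/lenis"
    import { gsap } from "gsap";
    import { ScrollTrigger } from "gsap/ScrollTrigger";

    gsap.registerPlugin(ScrollTrigger);

    const lenis = new Lenis();

    // Keep ScrollTrigger in sync with Lenis' virtual scroll position
    lenis.on("scroll", ScrollTrigger.update);

    // Drive Lenis from GSAP's ticker (ticker time is in seconds, Lenis expects ms)
    gsap.ticker.add((time) => {
      lenis.raf(time * 1000);
    });
    gsap.ticker.lagSmoothing(0);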

    3D Graphics & Physics

    • Three.js
      : My favorite 3D framework and a key part of my toolkit since 2015. I enjoy using it to bring interactive 3D
      elements to the web.
    • Cannon.js
      : Powers the site’s physics simulations. While I considered alternatives like Rapier, I stuck with Cannon.js since
      it was already integrated into my 2022 proof-of-concept. Replacing it would have introduced unnecessary delays.

    Styling

    • Queso
      : A headless CSS framework developed at MamboMambo (my workplace). I chose it for its comprehensive starter
      components and seamless integration with my workflow. Despite being in beta, it’s already reliable and flexible.

    This tech stack strikes a balance between familiar tools and specialized libraries that enable the visual and
    interactive elements that define the site’s experience.

    Architecture

    I follow Clean Code principles and other industry best practices, including aiming to keep my files small,
    independent, reusable, concise, and testable.

    I’ve also adopted the component folder architecture developed at my workplace. Instead of placing Vue files directly
    inside the ./components directory, each component resides in its own folder. This folder contains the Vue file along
    with related types, unit tests, supporting files, and any child components.

    Although initially designed for Vue components, I’ve found this structure works equally well for organizing logic
    with TypeScript files, utilities, directives, and more. It’s a clean, consistent system that improves code
    readability, maintainability, and scalability.

    MyFile
    ├── MyFile.vue
    ├── MyFile.test.ts
    ├── MyFile.types.ts
    ├── index.ts (export the types and the vue file)
    ├── data.json (optional files needed in MyFile.vue such as .json files)
    │ 
    ├── components
    │   ├── MyFileChildren
    │   │   ├── MyFileChildren.vue
    │   │   ├── MyFileChildren.test.ts
    │   │   ├── MyFileChildren.types.ts
    │   │   ├── index.ts
    │   ├── MyFileSecondChildren
    │   │   ├── MyFileSecondChildren.vue
    │   │   ├── MyFileSecondChildren.test.ts
    │   │   ├── MyFileSecondChildren.types.ts
    │   │   ├── index.ts
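
    The index.ts in each folder is just a small barrel file; a minimal sketch (names hypothetical) could look like this:

    // MyFile/index.ts – re-export the component and its types
    export { default as MyFile } from "./MyFile.vue";
    export * from "./MyFile.types";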

    The overall project architecture follows the high-level structure outlined below.

    src/
    ├── assets/             # Static assets like images, fonts, and styles
    ├── components/         # Vue components
    ├── composables/        # Vue composables for shared logic
    ├── constant/           # Project wide constants
    ├── data/               # Project wide data files
    ├── directives/         # Vue custom directives
    ├── router/             # Vue Router configuration and routes
    ├── services/           # Services (e.g., i18n)
    ├── stores/             # State management (Pinia)
    ├── three/              
    │   ├── Experience/    
    │   │   ├── Theater/                 # Theater experience
    │   │   │   ├── Experience/          # Core experience logic
    │   │   │   ├── Progress/            # Loading and progress management
    │   │   │   ├── Camera/              # Camera configuration and controls
    │   │   │   ├── Renderer/            # WebGL renderer setup and configuration
    │   │   │   ├── Sources/             # List of resources
    │   │   │   ├── Physics/             # Physics simulation and interactions
    │   │   │   │   ├── PhysicsMaterial/ # Physics Material
    │   │   │   │   ├── Shared/          # Physics for models shared across scenes
    │   │   │   │   │   ├── Pit/         # Physics simulation and interactions
    │   │   │   │   │   │   ├── Pit.ts   # Physics for models in the pit
    │   │   │   │   │   │   ├── ...       
    │   │   │   │   ├── Triggers/         # Physics Triggers
    │   │   │   │   ├── Scenes/           # Physics for About/Leap/Mont-Saint-Michel
    │   │   │   │   │   ├── Leap/         
    │   │   │   │   │   │   ├── Leap.ts   # Physics for Leap For Mankind's models       
    │   │   │   │   │   │   ├── ...         
    │   │   │   │   │   └── ...          
    │   │   │   ├── World/               # 3D world setup and management
    │   │   │   │   ├── World/           # Main world configuration and setup
    │   │   │   │   ├── PlayerModel/     # Player character model and controls
    │   │   │   │   ├── CameraTransition/ # Camera movement and transitions
    │   │   │   │   ├── Environments/    # Environment setup and management
    │   │   │   │   │   ├── Environment.ts # Environment configuration
    │   │   │   │   │   └── types.ts     # Environment type definitions
    │   │   │   │   ├── Scenes/          # Different scene configurations
    │   │   │   │   │   ├── Leap/ 
    │   │   │   │   │   │   ├── Leap.ts  # Leap For Mankind model's logic
    │   │   │   │   │   └── ...      
    │   │   │   │   ├── Tutorial/        # Tutorial meshes & logic
    │   │   │   │   ├── Bleed/           # Bleed effect logic
    │   │   │   │   ├── Bird/            # Bird model logic
    │   │   │   │   ├── Markers/         # Points of interest
    │   │   │   │   ├── Shared/          # Models & meshes used across scenes
    │   │   │   │   └── ...         
    │   │   │   ├── SharedMaterials/     # Reusable Three.js materials
    │   │   │   └── PostProcessing/      # Post-processing effects
    │   │   │
    │   │   ├── Basement/                # Basement experience
    │   │   ├── Idle/                    # Idle state experience
    │   │   ├── Error404/                # 404 error experience
    │   │   ├── Constant/                # Three.js related constants
    │   │   ├── Factories/               # Three.js factory code
    │   │   │   ├── RopeMaterialGenerator/
    │   │   │   │   ├── RopeMaterialGenerator.ts        
    │   │   │   │   └── ...
    │   │   │   ├── ... 
    │   │   ├── Utils/                   # Three.js utilities and other reusable functions
    │   │   └── Shaders/                 # Shaders programs
    ├── types/              # Project-wide TypeScript type definitions
    ├── utils/              # Utility functions and helpers
    ├── vendors/            # Third-party vendor code
    ├── views/              # Page components and layouts
    ├── workers/            # Web Workers
    ├── App.vue             # Root Vue component
    └── main.ts             # Application entry point

    This structured approach helps me manage the codebase efficiently and maintain a clear separation of concerns,
    making both development and future maintenance significantly more straightforward.

    Design Patterns

    Singleton

    Singletons play a key role in this type of project architecture, enabling efficient code reuse without incurring
    performance penalties.

    import Experience from "@/three/Experience/Experience";
    import type { Scene } from "@/types/three.types";
    
    let instance: SingletonExample | null = null;
    
    export default class SingletonExample {
      private scene: Scene;
      private experience: Experience;
    
      constructor() {
        if (instance) {
          return instance;
        }
        instance = this;
    
        this.experience = new Experience();
        this.scene = this.experience.scene;
      }
    
      init() {
        // initialize the singleton
      }
    
      someMethod() {
        // some method
      }
    
      update() {
        // update the singleton
      }
      
      update10fps() {
        // Optional: update methods capped at 10FPS
      }
    
      destroySingleton() {
        // clean up three.js + destroy the singleton
      }
    }
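
    Because the constructor returns the cached instance, every call to new SingletonExample() yields the same object:

    // Both variables point to the same instance of the class above
    const a = new SingletonExample();
    const b = new SingletonExample();
    console.log(a === b); // true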
    

    Split Responsibility Architecture

    As shown earlier in the project architecture section, I deliberately separated physics management from model handling
    to produce smaller, more maintainable files.

    World Management Files:

    These files are responsible for initializing factories and managing meshes within the main loop. They may also include
    functions specific to individual world items.

    Here’s an example of one such file:

    // src/three/Experience/Theater/mockFileModel/mockFileModel.ts
    import Experience from "@/three/Experience/Theater/Experience/Experience";
    
    import type {
      List,
      LoadModel
    } from "@/types/experience/experience.types";
    import type { Scene } from "@/types/three.types";
    import type Physics from "@/three/Experience/Theater/Physics/Physics";
    import type { Resources } from "@/three/Experience/Utils/Ressources/Resources";
    import type { MaterialGenerator } from "@/types/experience/materialGeneratorType";
    
    
    let instance: MockWorldFile | null = null;
    export default class MockWorldFile {
      private experience: Experience;
      private list: List;
      private physics: Physics;
      private resources: Resources;
      private scene: Scene;
      private materialGenerator: MaterialGenerator;
      public loadModel: LoadModel;
      constructor() {
        //    Singleton
        if (instance) {
          return instance;
        }
    
        instance = this;
    
        this.experience = new Experience();
        this.scene = this.experience.scene;
        this.resources = this.experience.resources;
        this.physics = this.experience.physics;
    
        // factories
        this.materialGenerator = this.experience.materialGenerator;
        this.loadModel = this.experience.loadModel;
    
         // Most of the materials are initialized in a file called sharedMaterials
        const bakedMaterial = this.experience.world.sharedMaterials.bakedMaterial;
        // physics infos such as position, rotation, scale, weight etc.
        const paintBucketPhysics = this.physics.items.paintBucket; 
    
        // Array of model objects, used to update each model's position, rotation, scale, etc.
        this.list = {
          paintBucket: [],
          ...
        };
    
        // get the resource file
        const resourcePaintBucket = this.resources.items.paintBucketWhite;
    
         // Reusable code to add models with physics to the scene; more on this later.
        this.loadModel.setModels(
          resourcePaintBucket.scene,
          paintBucketPhysics,
          "paintBucketWhite",
          bakedMaterial,
          true,
          true,
          false,
          false,
          false,
          this.list.paintBucket,
          this.physics.mock,
          "metalBowlFalling",
        );
      }
    
      otherMethod() {
        ...
      }
    
      destroySingleton() {
        ...
      }
    }

    Physics Management Files

    These files trigger the factories to apply physics to meshes, store the resulting physics bodies, and update mesh
    positions on each frame.

    // src/three/Experience/Theater/pathTo/mockFilePhysics
    import Experience from "@/three/Experience/Theater/Experience/Experience";
    
    import additionalShape from "./additionalShape.json";
    
    import type {
      PhysicsResources,
      TrackName,
      List,
      modelsList
    } from "@/types/experience/experience.types";
    import type { cannonObject } from "@/types/three.types";
    import type PhysicsGenerator from "../Factories/PhysicsGenerator/PhysicsGenerator";
    import type UpdateLocation from "../Utils/UpdateLocation/UpdateLocation";
    import type UpdatePositionMesh from "../Utils/UpdatePositionMesh/UpdatePositionMesh";
    import type AudioGenerator from "../Utils/AudioGenerator/AudioGenerator";
    
    let instance: MockFilePhysics | null = null;
    
    export default class MockFilePhysics {
      private experience: Experience;
      private list: List;
      private physicsGenerator: PhysicsGenerator;
      private updateLocation: UpdateLocation;
      private modelsList: modelsList;
      private updatePositionMesh: UpdatePositionMesh;
      private audioGenerator: AudioGenerator;
    
      constructor() {
        //    Singleton
        if (instance) {
          return instance;
        }
        instance = this;
    
        this.experience = new Experience();
        this.debug = this.experience.debug;
        this.physicsGenerator = this.experience.physicsGenerator;
        this.updateLocation = this.experience.updateLocation;
        this.updatePositionMesh = this.experience.updatePositionMesh;
        this.audioGenerator = this.experience.audioGenerator;
    
        // Array of objects of physics. This will be used to update the model's position, rotation, scale etc.
        this.list = {
          paintBucket: [],
        };
      }
    
      setModelsList() {
        // When the load progress reaches a certain percentage, we can set the models list, avoiding some potential
        // bugs or unnecessary conditional logic. Note that the update method never runs until the scene is fully ready.
        this.modelsList = this.experience.world.constructionToolsModel.list;
      }
    
      addNewItem(
        element: PhysicsResources,
        listName: string,
        trackName: TrackName,
        sleepSpeedLimit: number | null = null,
      ) {
    
        // factory to add physics, I will talk about that later
        const itemWithPhysics = this.physicsGenerator.createItemPhysics(
          element,
          null,
          true,
          true,
          trackName,
          sleepSpeedLimit,
        );
    
        // Additional optional shapes to the item if needed
        switch (listName) {
          case "broom":
            this.physicsGenerator.addMultipleAdditionalShapesToItem(
              itemWithPhysics,
              additionalShape.broomHandle,
            );
            break;
    
        }
    
        this.list[listName].push(itemWithPhysics);
      }
    
      // This method is called every frame.
      update() {
        // reusable code to update the position of the mesh
        this.updatePositionMesh.updatePositionMesh(
          this.modelsList["paintBucket"],
          this.list["paintBucket"],
        );
      }
    
    
      destroySingleton() {
        ...
      }
    }

    Since the logic for updating mesh positions is consistent across the project, I created reusable code that can be
    applied in nearly all physics-related files.

    // src/three/Experience/Utils/UpdatePositionMesh/UpdatePositionMesh.ts
    
    // Assumed import – MeshList and PhysicList are project types (mesh wrappers and physics bodies)
    import type { MeshList, PhysicList } from "@/types/experience/experience.types";
    
    export default class UpdatePositionMesh {
      updatePositionMesh(meshList: MeshList, physicList: PhysicList) {
        for (let index = 0; index < physicList.length; index++) {
          const physic = physicList[index];
          const model = meshList[index].model;
    
          model.position.set(
            physic.position.x,
            physic.position.y,
            physic.position.z
          );
          model.quaternion.set(
            physic.quaternion.x,
            physic.quaternion.y,
            physic.quaternion.z,
            physic.quaternion.w
          );
        }
      }
    }

    Factory Patterns

    To avoid redundant code, I built the project around reusable factories. While it includes several, these two are the
    most essential:

    Model Factory: LoadModel

    With few exceptions, all models—whether instanced or regular, with or without physics—are added through this factory.

    // src/three/Experience/factories/LoadModel/LoadModel.ts
    import * as THREE from "three";
    import Experience from "@/three/Experience/Theater/Experience/Experience";
    
    import type {
      PhysicsResources,
      TrackName,
      List,
      ModelListPath,
      PhysicsListPath
    } from "@/types/experience/experience.types";
    import type { LoadModelMaterial } from "./types";
    import type { Material, Scene, Mesh } from "@/types/three.types";
    import type Progress from "@/three/Experience/Utils/Progress/Progress";
    import type AddPhysicsToModel from "@/three/Experience/factories/AddPhysicsToModel/AddPhysicsToModel";
    
    let instance: LoadModel | null = null;
    
    
    export default class LoadModel {
      public experience: Experience;
      public progress: Progress;
      public mesh: Mesh;
      public addPhysicsToModel: AddPhysicsToModel;
      public scene: Scene;
    
    
      constructor() {
        if (instance) {
          return instance;
        }
        instance = this;
    
        this.experience = new Experience();
        this.scene = this.experience.scene;
        this.progress = this.experience.progress;
        this.addPhysicsToModel = this.experience.addPhysicsToModel;
      }
    
    
      async setModels(
        model: Model,
        list: PhysicsResources[],
        physicsList: string,
        bakedMaterial: LoadModelMaterial,
        isCastShadow: boolean = false,
        isReceiveShadow: boolean = false,
        isInstancedModel: boolean = false,
        isDoubleSided: boolean = false,
        modelListPath: ModelListPath,
        physicsListPath: PhysicsListPath,
        trackName: TrackName = null,
        sleepSpeedLimit: number | null = null,
      ) {
        const loadedModel = isInstancedModel
          ? await this.addInstancedModel(
              model,
              bakedMaterial,
              true,
              true,
              isDoubleSided,
              isCastShadow,
              isReceiveShadow,
              list.length,
            )
            : await this.addModel(
                model,
                bakedMaterial,
                true,
                true,
                isDoubleSided,
                isCastShadow,
                isReceiveShadow,
              );
    
    
        this.addPhysicsToModel.loopListThenAddModelToSceneThenToPhysics(
          list,
          modelListPath,
          physicsListPath,
          physicsList,
          loadedModel,
          isInstancedModel,
          trackName,
          sleepSpeedLimit,
        );
      }
    
    
      addModel = (
        model: Model,
        material: Material,
        isTransparent: boolean = false,
        isFrustumCulled: boolean = true,
        isDoubleSided: boolean = false,
        isCastShadow: boolean = false,
        isReceiveShadow: boolean = false,
        isClone: boolean = true,
      ) => {
        model.traverse((child: THREE.Object3D) => {
          !isFrustumCulled ? (child.frustumCulled = false) : null;
          if (child instanceof THREE.Mesh) {
            child.castShadow = isCastShadow;
            child.receiveShadow = isReceiveShadow;
    
        if (material) {
          child.material = this.setMaterialOrCloneMaterial(isClone, material);
        }
    
            child.material.transparent = isTransparent;
            isDoubleSided ? (child.material.side = THREE.DoubleSide) : null;
            isReceiveShadow ? child.geometry.computeVertexNormals() : null; // https://discourse.threejs.org/t/gltf-model-shadows-not-receiving-with-gltfmeshstandardsgmaterial/24112/9
          }
        });
    
        this.progress.addLoadedModel(); // Update the number of items loaded
        return { model: model };
      };
    
    
      setMaterialOrCloneMaterial(isClone: boolean, material: Material) {
        return isClone ? material.clone() : material;
      }
    
    
      addInstancedModel = () => {
       ...
      };
    
      // other methods
    
    
      destroySingleton() {
        ...
      }
    }

    Physics Factory: PhysicsGenerator

    This factory has a single responsibility: creating physics properties for meshes.

    // src/three/Experience/Utils/PhysicsGenerator/PhysicsGenerator.ts
    import Experience from "@/three/Experience/Theater/Experience/Experience";
    import * as CANNON from "cannon-es";
    
    import CannonUtils from "@/utils/cannonUtils.js";
    
    import type {
      Quaternion,
      PhysicsItemPosition,
      PhysicsItemType,
      PhysicsResources,
      TrackName,
      CannonObject,
    } from "@/types/experience/experience.types";
    
    import type { Scene, ConvexGeometry } from "@/types/three.types";
    import type Progress from "@/three/Experience/Utils/Progress/Progress";
    import type AudioGenerator from "@/three/Experience/Utils/AudioGenerator/AudioGenerator";
    import type Physics from "@/three/Experience/Theater/Physics/Physics";
    import type { physicsShape } from "./PhysicsGenerator.types"
    
    let instance: PhysicsGenerator | null = null;
    
    export default class PhysicsGenerator {
      public experience: Experience;
      public physics: Physics;
      public currentScene: string | null = null;
      public progress: Progress;
      public audioGenerator: AudioGenerator;
    
      constructor() {
        //    Singleton
        if (instance) {
          return instance;
        }
        instance = this;
    
        this.experience = new Experience();
        this.resources = this.experience.resources;
        this.audioGenerator = this.experience.audioGenerator;
        this.physics = this.experience.physics;
        this.progress = this.experience.progress;
    
        this.currentScene = this.experience.currentScene;
      }
    
    
      //#region add physics to an object
    
      createItemPhysics(
        source: PhysicsResources, // object containing physics info such as mass, shape, position...
        convex: ConvexGeometry | null = null,
        allowSleep: boolean = true,
        isBodyToAdd: boolean = true,
        trackName: TrackName = null,
        sleepSpeedLimit: number | null = null
      ) {
        const setSpeedLimit = sleepSpeedLimit ?? 0.15;
    
        // For this project I needed to detect if the user was in the Mont-Saint-Michel, Leap For Mankind, About or Archives scene.
        const localCurrentScene = source.locations[this.currentScene]
          ? this.currentScene
          : "about";
    
        switch (source.type as physicsShape) {
          case "box": {
            const boxShape = new CANNON.Box(new CANNON.Vec3(...source.shape));
            const boxBody = new CANNON.Body({
              mass: source.mass,
              position: new CANNON.Vec3(
                source.locations[localCurrentScene].position.x,
                source.locations[localCurrentScene].position.y,
                source.locations[localCurrentScene].position.z
              ),
              allowSleep: allowSleep,
              shape: boxShape,
              material: source.material
                ? source.material
                : this.physics.physics.defaultMaterial,
              sleepSpeedLimit: setSpeedLimit,
            });
    
            source.locations[localCurrentScene].quaternion
              && (boxBody.quaternion.y =
                  source.locations[localCurrentScene].quaternion.y);
    
            this.physics.physics.addBody(boxBody);
            this.updatedLoadedItem();
    
            // Add optional SFX that will be played if the item collides with another physics item
            trackName
              && this.audioGenerator.addEventListenersToObject(boxBody, trackName);
    
            return boxBody;
          }
    
          // Then it's basically the same logic for all the other cases
          case "sphere": {
            ...
          }
    
          case "cylinder": {
           ...
          }
    
          case "plane": {
           ...
          }
    
          case "trigger": {
          ...
          }
    
          case "torus": {
            ...
          }
    
          case "trimesh": {
           ...
          }
    
          case "polyhedron": {
            ...
          }
    
          default:
            ...
            break;
        }
      }
    
      updatedLoadedItem() {
        this.progress.addLoadedPhysicsItem(); // Update the number of items loaded (physics only)
      }
    
      //#endregion add physics to an object
    
      // other
    
      destroySingleton() {
        ...
      }
    }

    FPS Capping

    With over 100 models and approximately 150 physics items loaded in the main scene, Aurel’s Grand Theater required
    performance-driven coding from the outset.

    If I were to rebuild the project today, I would leverage GPU computing much more intensively. However, when I started the
    proof of concept in 2022, GPU computing for the web was still relatively new and not fully mature—at least, that was
    my perception at the time. Rather than recoding everything, I worked with what I had, which also presented a great
    personal challenge. In addition to using low-poly models and employing classic optimization techniques, I extensively
    used instanced meshes for all small, reusable items—even those with physics. I also relied on many other
    under-the-hood techniques to keep the performance as smooth as possible on this CPU-intensive website.
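
    To illustrate the instancing idea outside the project’s loader code, here is a minimal, self-contained Three.js
    sketch – one draw call renders many copies of the same small prop:

    import * as THREE from "three";

    const scene = new THREE.Scene();
    const geometry = new THREE.BoxGeometry(0.2, 0.2, 0.2);
    const material = new THREE.MeshStandardMaterial();
    const count = 150;

    // One InstancedMesh instead of 150 separate meshes
    const props = new THREE.InstancedMesh(geometry, material, count);
    const dummy = new THREE.Object3D();

    for (let i = 0; i < count; i++) {
      dummy.position.set(Math.random() * 10, 0, Math.random() * 10);
      dummy.updateMatrix();
      props.setMatrixAt(i, dummy.matrix);
    }
    props.instanceMatrix.needsUpdate = true;
    scene.add(props);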

    One particularly helpful approach I implemented was adaptive frame rates. By capping updates at different rates (60,
    30, or 10 FPS) depending on how often each piece of logic actually needs to run, I avoided unnecessary work – after
    all, some logic doesn’t need to run every frame. This is a simple yet effective technique that can easily be
    incorporated into your own project.

    Now, let’s take a look at the file responsible for managing time in the project.

    // src/three/Experience/Utils/Time/Time.ts
    import * as THREE from "three";
    import EventEmitter from "@/three/Experience/Utils/EventEmitter/EventEmitter";
    
    let instance: Time | null = null;
    let animationFrameId: number | null = null;
    const clock = new THREE.Clock();
    
    export default class Time extends EventEmitter {
      private lastTick60FPS: number = 0;
      private lastTick30FPS: number = 0;
      private lastTick10FPS: number = 0;
    
      private accumulator60FPS: number = 0;
      private accumulator30FPS: number = 0;
      private accumulator10FPS: number = 0;
    
      public start: number = 0;
      public current: number = 0;
      public elapsed: number = 0;
      public delta: number = 0;
      public delta60FPS: number = 0;
      public delta30FPS: number = 0;
      public delta10FPS: number = 0;
    
      constructor() {
        if (instance) {
          return instance;
        }
        super();
        instance = this;
      }
    
      tick() {
        const currentTime: number = clock.getElapsedTime() * 1000;
    
        this.delta = currentTime - this.current;
        this.current = currentTime;
    
        // Accumulate the time that has passed
        this.accumulator60FPS += this.delta;
        this.accumulator30FPS += this.delta;
        this.accumulator10FPS += this.delta;
    
        // Trigger uncapped tick event using the project's EventEmitter class
        this.trigger("tick");
    
        // Trigger 60FPS tick event
        if (this.accumulator60FPS >= 1000 / 60) {
          this.delta60FPS = currentTime - this.lastTick60FPS;
          this.lastTick60FPS = currentTime;
    
          // Same logic as "this.trigger("tick")" but for 60FPS
          this.trigger("tick60FPS");
          this.accumulator60FPS -= 1000 / 60;
        }
    
        // Trigger 30FPS tick event
        if (this.accumulator30FPS >= 1000 / 30) {
          this.delta30FPS = currentTime - this.lastTick30FPS;
          this.lastTick30FPS = currentTime;
    
          this.trigger("tick30FPS");
          this.accumulator30FPS -= 1000 / 30;
        }
    
        // Trigger 10FPS tick event
        if (this.accumulator10FPS >= 1000 / 10) {
          this.delta10FPS = currentTime - this.lastTick10FPS;
          this.lastTick10FPS = currentTime;
    
          this.trigger("tick10FPS");
          this.accumulator10FPS -= 1000 / 10;
        }
    
        animationFrameId = window.requestAnimationFrame(() => {
          this.tick();
        });
      }
    }
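
    Time extends the project’s own EventEmitter. Its implementation isn’t shown here, so below is a minimal stand-in
    with the same assumed on/trigger API, in case you want to adapt the snippet above:

    // Minimal EventEmitter sketch (assumed API – only what Time needs)
    type Callback = (...args: unknown[]) => void;

    export default class EventEmitter {
      private listeners: Record<string, Callback[]> = {};

      on(event: string, callback: Callback) {
        (this.listeners[event] ??= []).push(callback);
      }

      trigger(event: string, ...args: unknown[]) {
        this.listeners[event]?.forEach((callback) => callback(...args));
      }
    }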
    

    Then, in the Experience.ts file, we simply place the methods according to the required FPS.

    constructor() {
      if (instance) {
        return instance;
      }

      ...

      this.time = new Time();

      ...

      // The game loops (here called tick) are updated when the EventEmitter class is triggered.
      this.time.on("tick", () => {
        this.update();
      });
      this.time.on("tick60FPS", () => {
        this.update60();
      });
      this.time.on("tick30FPS", () => {
        this.update30();
      });
      this.time.on("tick10FPS", () => {
        this.update10();
      });
    }

    update() {
      this.renderer.update();
    }

    update60() {
      this.camera.update60FPS();
      this.world.update60FPS();
      this.physics.update60FPS();
    }

    update30() {
      this.physics.update30FPS();
      this.world.update30FPS();
    }

    update10() {
      this.physics.update10FPS();
      this.world.update10FPS();
    }
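
    In practice, each world item can then expose update methods only for the rates it actually needs. A hypothetical
    consumer might look like this:

    // Hypothetical world item: motion runs at 60 FPS, cheap housekeeping at 10 FPS
    class Bird {
      update60FPS() {
        // advance the flight path – needs to look smooth every frame
      }

      update10FPS() {
        // distance checks, sound triggers, despawn logic – 10 FPS is plenty
      }
    }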

    Selected Feature Breakdown: Code & Explanation

    Cinematic Page Transitions: Return Animation Effects

    Inspired by techniques from the film industry, the transitions between the 3D game and the more traditionally
    structured pages, such as the Case Studies, About, and Credits pages, were carefully designed to feel seamless and
    cinematic.

    The first-time visit animation provides context and immerses users in the website experience. Meanwhile, the other
    page transitions play a crucial role in ensuring a smooth shift between the game and the more conventional layout of
    the Case Studies and About page, preserving immersion while naturally guiding users from one experience to the next.
    Without these transitions, it would feel like abruptly jumping between two entirely different worlds.

    I’ll do a deep dive into the code for the animation that plays when the user returns from the basement level. It’s a
    bit simpler than the other cinematic transitions, but the underlying logic is the same, which makes it easier for you
    to adapt to another project.

    Here’s the base file:

    // src/three/Experience/Theater/World/CameraTransition/CameraIntroReturning.ts
    
    import { Vector3, CatmullRomCurve3 } from "three";
    import Experience from "@/three/Experience/Theater/Experience/Experience";
    import { DebugPath } from "@/three/Experience/Utils/DebugPath/DebugPath";
    
    import { createSmoothLookAtTransition } from "./cameraUtils";
    import { setPlayerPosition } from "@/three/Experience/Utils/playerPositionUtils";
    
    import { gsap } from "gsap";
    import { MotionPathPlugin } from "gsap/MotionPathPlugin";
    
    import {
      CAMERA_POSITION_SEAT,
      PLAYER_POSITION_RETURNING,
    } from "@/three/Experience/Constant/PlayerPosition";
    
    import type { Debug } from "@/three/Experience/Utils/Debugger/types";
    import type { Scene, Camera } from "@/types/three.types";
    
    
    const DURATION_RETURNING_FORWARD = 5;
    const DURATION_LOOKAT_RETURNING_FORWARD = 4;
    const RETURNING_PLAYER_QUATERNION = [0, 0, 0, 1];
    const RETURNING_PLAYER_CAMERA_FINAL_POSITION = [
      7.3927162062108955, 3.4067893207543367, 4.151297331541345,
    ];
    const RETURNING_PLAYER_ROTATION = -0.3;
    const RETURNING_PLAYER_CAMERA_FINAL_LOOKAT = [
      2.998858990830107, 2.5067893207543412, -1.55606797749978944,
    ];
    
    gsap.registerPlugin(MotionPathPlugin);
    
    let instance: CameraIntroReturning | null = null;
    
    export default class CameraIntroReturning {
      private scene: Scene;
      private experience: Experience;
      private timelineAnimation: GSAPTimeline;
      private debug: Debug;
      private debugPath: DebugPath;
      private camera: Camera;
      private lookAtTransitionStarted: boolean = false;
    
      constructor() {
        if (instance) {
          return instance;
        }
        instance = this;
    
        this.experience = new Experience();
        this.scene = this.experience.scene;
        this.debug = this.experience.debug;
    
        this.timelineAnimation = gsap.timeline({
          paused: true,
          onComplete: () => {
            this.timelineAnimation.clear().kill();
          },
        });
      }
      init() {
        this.camera = this.experience.camera.instance;
        this.initPath();
      }
    
      initPath() {
        ...
      }
      
      initTimeline() {
        ...
      }
    
      createSmoothLookAtTransition(
       ...
      }
    
      setPositionPlayer() {
       ...
      }
    
      playAnimation() {
       ...
      }
    
      ...
    
      destroySingleton() {
       ...
      }
    }

    The init method, called from another file, initiates the creation of the animation. First we set the path for the
    animation, then the timeline.

    init() {
        this.camera = this.experience.camera.instance;
        this.initPath();
     }
    
    initPath() {
      // create the path for the camera
      const pathPoints = new CatmullRomCurve3([
        new Vector3(CAMERA_POSITION_SEAT[0], CAMERA_POSITION_SEAT[1], 15),
        new Vector3(5.12, 4, 8.18),
        new Vector3(...RETURNING_PLAYER_CAMERA_FINAL_POSITION),
      ]);
    
      // init the timeline
      this.initTimeline(pathPoints);
    }
    
    initTimeline(path: CatmullRomCurve3) {
     ...
    }

    The timeline animation is split into two parts: a) the camera moves vertically from the basement to the theater,
    above the seats.

    ...
    
    initTimeline(path: CatmullRomCurve3) {
        // get the points
        const pathPoints = path.getPoints(30);
    
        // create the gsap timeline
        this.timelineAnimation
          // set the initial position
          .set(this.camera.position, {
            x: CAMERA_POSITION_SEAT[0],
            y: CAMERA_POSITION_SEAT[1] - 3,
            z: 15,
          })
          .add(() => {
            this.camera.lookAt(3.5, 1, 0);
          })
          //   Start the animation! In this case the camera is moving from the basement to above the seat
          .to(this.camera.position, {
            x: CAMERA_POSITION_SEAT[0],
            y: CAMERA_POSITION_SEAT[1],
            z: 15,
            duration: 3,
            ease: "elastic.out(0.1,0.1)",
          })
          .to(
            this.camera.position,
            {
    		      ...
            },
          )
          ...
      }

    b) The camera follows a path while smoothly transitioning its view to the final location.

     .to(
        this.camera.position,
        {
          // then we use motion path to move the camera to the player behind the raccoon
          motionPath: {
            path: pathPoints,
            curviness: 0,
            autoRotate: false,
          },
          ease: "power1.inOut",
          duration: DURATION_RETURNING_FORWARD,
          onUpdate: function () {
            const progress = this.progress();
    
            // wait until progress reaches a certain point to rotate to the camera at the player LookAt
            if (
              progress >=
                1 -
                  DURATION_LOOKAT_RETURNING_FORWARD /
                    DURATION_RETURNING_FORWARD &&
              !this.lookAtTransitionStarted
            ) {
              this.lookAtTransitionStarted = true;

               // Create a new Vector3 to store the current look direction
               const currentLookAt = new Vector3();
    
                // Get the current camera's forward direction (where it's looking)
                instance!.camera.getWorldDirection(currentLookAt);
    
                // Extend the look direction by 100 units and add the camera's position
                // This creates a point in space that the camera is currently looking at
                currentLookAt.multiplyScalar(100).add(instance!.camera.position);
    
            // smooth lookAt animation
            createSmoothLookAtTransition(
              currentLookAt,
              new Vector3(...RETURNING_PLAYER_CAMERA_FINAL_LOOKAT),
              DURATION_LOOKAT_RETURNING_FORWARD,
              instance!.camera
            );
            }
          },
        },
      )
      .add(() => {
        // animation is completed, you can add some code here
      });

    As you may have noticed, I used a utility function called createSmoothLookAtTransition, since I needed this
    functionality in multiple places. Note that inside onUpdate, this refers to the GSAP tween, which is why the snippet
    reaches for the singleton instance to access the camera.

    import type { Vector3 } from "three";
    import { gsap } from "gsap";
    
    import type { Camera } from "@/types/three.types";
    
    export const createSmoothLookAtTransition = (
      from: Vector3,
      to: Vector3,
      duration: number,
      camera: Camera,
      ease: string = "power2.out",
    ) => {
      const lookAtPosition = { x: from.x, y: from.y, z: from.z };
      return gsap.to(lookAtPosition, {
        x: to.x,
        y: to.y,
        z: to.z,
        duration,
        ease: ease,
        onUpdate: () => {
          camera.lookAt(lookAtPosition.x, lookAtPosition.y, lookAtPosition.z);
        },
      });
    };

    With everything ready, the animation sequence runs when playAnimation() is triggered.

    playAnimation() {
        // first set the position of the player
        this.setPositionPlayer();
        // then play the animation
        this.timelineAnimation.play();
      }
    
      setPositionPlayer() {
       // A simple utility to update the player's position when the user lands in a scene, returns, or switches scenes.
        setPlayerPosition(this.experience, {
          position: PLAYER_POSITION_RETURNING,
          quaternion: RETURNING_PLAYER_QUATERNION,
          rotation: RETURNING_PLAYER_ROTATION,
        });
      }

    Scroll-Triggered Animations: Showcasing Books on About Pages

    While the game is fun and filled with details, the case studies and about pages are crucial to the overall experience,
    even though they follow a more standardized format. These pages still have their own unique appeal. They are filled
    with subtle details and animations, particularly scroll-triggered effects such as split text animations when
    paragraphs enter the viewport, along with fade-out effects on SVGs and other assets. These animations create a vibe
    that mirrors the mysterious yet intriguing atmosphere of the game, inviting visitors to keep scrolling and exploring.

    While I can’t cover every animation in detail, I’d like to share the technical approach behind the book animations
    featured on the about page. This effect blends DOM scroll event tracking with a Three.js scene, creating a seamless
    interaction between the user’s scrolling behavior and the 3D-rendered books. As visitors scroll down the page, the
    books transition elegantly and respond dynamically to their movement.

    Before we dive into the Three.js file, let’s look at the Vue component.

    //src/components/BookGallery/BookGallery.vue
    <template>
      <!-- the ID is used in the three.js file -->
      <div class="book-gallery" id="bookGallery" ref="bookGallery"></div>
    </template>
    
    <script setup lang="ts">
    import { onBeforeUnmount, onMounted, onUnmounted, ref } from "vue";
    
    import gsap from "gsap";
    import { ScrollTrigger } from "gsap/ScrollTrigger";
    
    import type { BookGalleryProps } from "./types";
    
    gsap.registerPlugin(ScrollTrigger);
    
    const props = withDefaults(defineProps<BookGalleryProps>(), {});
    
    const bookGallery = ref<HTMLElement | null>(null);
    
    const setupScrollTriggers = () => {
     ...
    };
    
    const triggerAnimation = (index: number) => {
      ...
    };
    
    onMounted(() => {
      setupScrollTriggers();
    });
    
    onUnmounted(() => {
      ...
    });
    </script>
    
    <style lang="scss" scoped>
    .book-gallery {
      position: relative;
      height: 400svh; // 100svh * 4 books
    }
    </style>

    Thresholds are defined for each book to determine which one will be active – that is, the book that will face the
    camera. The first threshold sits at 15% of the gallery height, and the remaining three are spaced evenly up to 75%.

    const setupScrollTriggers = () => {
      if (!bookGallery.value) return;
    
      const galleryHeight = bookGallery.value.clientHeight;
      const scrollThresholds = [
        galleryHeight * 0.15,
        galleryHeight * (0.25 + (0.75 - 0.25) / 3),
        galleryHeight * (0.25 + (2 * (0.75 - 0.25)) / 3),
        galleryHeight * 0.75,
      ];
    
      ...
    };

    Then I added some GSAP magic by looping through each threshold and attaching a ScrollTrigger to it.

    const setupScrollTriggers = () => {
    
      ...
    
      scrollThresholds.forEach((threshold, index) => {
        ScrollTrigger.create({
          trigger: bookGallery.value,
          markers: false,
          start: `top+=${threshold} center`,
          end: `top+=${galleryHeight * 0.5} bottom`,
          onEnter: () => {
            triggerAnimation(index);
          },
          onEnterBack: () => {
            triggerAnimation(index);
          },
          once: false,
        });
      });
    };

    On scroll, when the user enters or re-enters a section defined by the thresholds, a function is triggered within a
    Three.js file.

    const triggerAnimation = (index: number) => {
      window.experience?.world?.books?.createAnimation(index);
    };
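
    Note that triggerAnimation reaches into a global experience object. If you adopt this pattern with TypeScript, you
    will likely want a global declaration along these lines (hypothetical – the article doesn’t show it):

    // Hypothetical augmentation so TypeScript accepts window.experience (must live in a module)
    import type Experience from "@/three/Experience/Basement/Experience/Experience";

    declare global {
      interface Window {
        experience?: Experience;
      }
    }
    export {};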

    Now let’s look at the Three.js file:

    // src/three/Experience/Basement/World/Books/Books.ts
    
    import * as THREE from "three";
    import Experience from "@/three/Experience/Basement/Experience/Experience";
    
    import { SCROLL_RATIO } from "@/constant/scroll";
    
    import { gsap } from "gsap";
    
    import type { Book } from "./books.types";
    import type { Material, Scene, Texture, ThreeGroup } from "@/types/three.types";
    import type { Sizes } from "@/three/Experience/Utils/Sizes/types";
    import type LoadModel from "@/three/Experience/factories/LoadModel/LoadModel";
    import type MaterialGenerator from "@/three/Experience/factories/MaterialGenerator/BasicMaterialGenerator";
    import type Resources from "@/three/Experience/Utils/Ressources/Resources";
    
    const GSAP_EASE = "power2.out";
    const GSAP_DURATION = 1;
    const NB_OF_VIEWPORTS_BOOK_SECTION = 5;
    
    let instance: Books | null = null;
    
    export default class Books {
      public scene: Scene;
      public experience: Experience;
      public resources: Resources;
      public loadModel: LoadModel;
      public sizes: Sizes;
    
      public materialGenerator: MaterialGenerator;
      public resourceDiffuse: Texture;
      public resourceNormal: Texture;
      public bakedMaterial: Material;
    
      public startingPositionY: number;
      public originalPosition: Book[];
      public activeIndex: number = 0;
      public isAnimationRunning: boolean = false;
      
      public bookGalleryElement: HTMLElement | null = null;
      public bookSectionHeight: number;
      public booksGroup: ThreeGroup;
    
    
      constructor() {
        if (instance) {
          return instance;
        }
        instance = this;
    
        this.experience = new Experience();
        this.scene = this.experience.sceneSecondary; // I am using a second scene for the books, so it's not affected by the primary scene (basement in the background)
        this.sizes = this.experience.sizes;
        
        this.resources = this.experience.resources;
        this.materialGenerator = this.experience.materialGenerator;
    
        this.init();
      }
    
      init() {
        ...
      }
    
      initModels() {
       ...
      }
    
      findPosition() {
       ...
      }
    
      setBookSectionHeight() {
       ...
      }
    
      initBooks() {
       ...
      }
    
      initBook() {
       ...
      }
    
      createAnimation() {
        ...
      }
    
      toggleIsAnimationRunning() {
        ...
      }
    
      ...
    
      destroySingleton() {
        ...
      }
    }

    When the file is initialized, we set up the textures and positions of the books.

    init() {
      this.initModels();
      this.findPosition();
      this.setBookSectionHeight();
      this.initBooks();
    }
    
    initModels() {
      this.originalPosition = [
          {
          name: "book1",
          meshName: null, // the name of the mesh from Blender will dynamically be written here
          position: { x: 0, y: -0, z: 20 },
          rotation: { x: 0, y: Math.PI / 2.2, z: 0 }, // some rotation on the y axis so it looks more natural when the books are piled
        },
        {
          name: "book2",
          meshName: null,
          position: { x: 0, y: -0.25, z: 20 },
          rotation: { x: 0, y: Math.PI / 1.8, z: 0 },
        },
        {
          name: "book3",
          meshName: null,
          position: { x: 0, y: -0.52, z: 20 },
          rotation: { x: 0, y: Math.PI / 2, z: 0 },
        },
        {
          name: "book4",
          meshName: null,
          position: { x: 0, y: -0.73, z: 20 },
          rotation: { x: 0, y: Math.PI / 2.3, z: 0 },
        },
      ];
    
      this.resourceDiffuse = this.resources.items.bookDiffuse;
      this.resourceNormal = this.resources.items.bookNormal;
    
        // a reusable class to set the material and normal map
      this.bakedMaterial = this.materialGenerator.setStandardMaterialAndNormal(
        this.resourceDiffuse,
        this.resourceNormal
      );
    }
    
    //#region position of the books
    
    // Finds the initial position of the book gallery in the DOM
    findPosition() {
      this.bookGalleryElement = document.getElementById("bookGallery");
    
      if (this.bookGalleryElement) {
        const rect = this.bookGalleryElement.getBoundingClientRect();
        this.startingPositionY = (rect.top + window.scrollY) / 200;
      }
    }
    
    //  Sets the height of the book section based on viewport and scroll ratio
    setBookSectionHeight() {
      this.bookSectionHeight =
        this.sizes.height * NB_OF_VIEWPORTS_BOOK_SECTION * SCROLL_RATIO;
    }
    
    //#endregion position of the books
    

    Each book mesh is created and added to the scene as a THREE.Group.

    init() {
      ...
      this.initBooks();
    }
    
    ...
    
    initBooks() {
      this.booksGroup = new THREE.Group();
      this.scene.add(this.booksGroup);
      
      this.originalPosition.forEach((position, index) => {
        this.initBook(index, position);
      });
    }
    
    initBook(index: number, position: Book) {
      const bookModel = this.experience.resources.items[position.name].scene;
      this.originalPosition[index].meshName = bookModel.children[0].name;
    
      // Reusable code to set the models. More details under the Design Patterns section
      this.loadModel.addModel(
        bookModel,
        this.bakedMaterial,
        false,
        false,
        false,
        true,
        true,
        2,
        true
      );
    
      this.scene.add(bookModel);
    
      bookModel.position.set(
        position.position.x,
        position.position.y - this.startingPositionY,
        position.position.z
      );
      
      bookModel.rotateY(position.rotation.y);
      bookModel.scale.set(10, 10, 10);
      this.booksGroup.add(bookModel);
    }

    Each time a book enters or reenters its thresholds, the triggers from the Vue file run the createAnimation method in this file, which rotates the active book in front of the camera and stacks the other books into a pile.
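
    The Vue file itself isn't shown here. A hypothetical sketch of such a trigger (the scroll math and names are mine), assuming a listener that maps progress through the book section to an active index and calls the Books singleton:

    import { onMounted, onUnmounted, ref } from "vue";
    import Books from "@/three/Experience/Theater/Books"; // assumed path

    const activeIndex = ref(0);

    const onScroll = () => {
      const books = new Books(); // singleton: returns the existing instance
      // Progress through the book section, clamped to [0, 1]
      const progress = Math.min(1, Math.max(0, window.scrollY / books.bookSectionHeight));
      const index = Math.min(3, Math.floor(progress * 4)); // 4 books
      if (index !== activeIndex.value) {
        activeIndex.value = index;
        books.createAnimation(index);
      }
    };

    onMounted(() => window.addEventListener("scroll", onScroll, { passive: true }));
    onUnmounted(() => window.removeEventListener("scroll", onScroll));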

    ...
    
    createAnimation(activeIndex: number) {
        if (!this.originalPosition) return;
    
        this.originalPosition.forEach((item: Book) => {
          const bookModel = this.scene.getObjectByName(item.meshName);
          if (bookModel) {
            gsap.killTweensOf(bookModel.rotation);
            gsap.killTweensOf(bookModel.position);
          }
        });
        this.toggleIsAnimationRunning(true);
    
        this.activeIndex = activeIndex;
        this.originalPosition.forEach((item: Book, index: number) => {
          const bookModel = this.scene.getObjectByName(item.meshName);
    
          if (bookModel) {
            if (index === activeIndex) {
              gsap.to(bookModel.rotation, {
                x: Math.PI / 2,
                z: Math.PI / 2.2,
                y: 0,
                duration: 2,
                ease: GSAP_EASE,
                delay: 0.3,
                onComplete: () => {
                  this.toggleIsAnimationRunning(false);
                },
              });
              gsap.to(bookModel.position, {
                y: 0,
                duration: GSAP_DURATION,
                ease: GSAP_EASE,
                delay: 0.1,
              });
            } else {
              // pile up the inactive books
              gsap.to(bookModel.rotation, {
                x: 0,
                y: 0,
                z: 0,
                duration: GSAP_DURATION - 0.2,
                ease: GSAP_EASE,
              });
    
              const newYPosition = activeIndex < index ? -0.14 : +0.14;
    
              gsap.to(bookModel.position, {
                y: newYPosition,
                duration: GSAP_DURATION,
                ease: GSAP_EASE,
                delay: 0.1,
              });
            }
          }
        });
      }
    
    
      toggleIsAnimationRunning(bool: boolean) {
        this.isAnimationRunning = bool;
      }

    Interactive Physics Simulations: Rope Dynamics

    The game is the main attraction of the website. The entire concept began back in 2022, when I set out to build a small
    mini-game where you could jump on tables and smash things; it was my favorite part of the project to work on.

    Beyond being fun to develop, the interactive physics elements make the experience more engaging, adding a whole new
    layer of excitement and exploration that simply isn’t possible in a flat, static environment.

    While I can't possibly cover all the physics-related elements, one of my favorites is the rope system near the menu.
    It’s a subtle detail, but it was one of the first things I coded when I started leaning into a more theatrical,
    artistic direction.

    The ropes were also built with performance in mind—optimized to look and behave convincingly without dragging down the
    framerate.

    This is the base file for the meshes:

    // src/three/Experience/Theater/World/Theater/Rope/RopeModel.ts
    
    import * as THREE from "three";
    import Experience from "@/three/Experience/Theater/Experience/Experience";
    import RopeMaterialGenerator from "@/three/Experience/Factories/MaterialGenerator/RopeMaterialGenerator";
    
    import ropesLocation from "./ropesLocation.json";
    
    import type { Location, List } from "@/types/experience/experience.types";
    import type { Scene, Resources, Physics, RopeMesh, CurveQuad, Material } from "@/types/three.types";
    
    let instance: RopeModel | null = null;
    
    export default class RopeModel {
      public scene: Scene;
      public experience: Experience;
      public resources: Resources;
      public physics: Physics;
      public material: Material;
      public list: List;
      public ropeMaterialGenerator: RopeMaterialGenerator;
    
      public ropeLength: number = 20;
      public ropeRadius: number = 0.02;
      public ropeRadiusSegments: number = 8;
    
      constructor() {
        //    Singleton
        if (instance) {
          return instance;
        }
        instance = this;
    
        this.experience = new Experience();
        this.scene = this.experience.scene;
        this.resources = this.experience.resources;
        this.physics = this.experience.physics;
        this.ropeMaterialGenerator = new RopeMaterialGenerator();
        
        this.ropeLength = this.experience.physics.rope.numberOfSpheres || 20;
        this.ropeRadius = 0.02;
        this.ropeRadiusSegments = 8;
    
        this.list = {
          rope: [],
        };
    
        this.initRope();
      }
      
      initRope() {
       ...
      }
      
      createRope() {
        ...
      }
      
      setArrayOfVector3() {
        ...
      }
      
      setYValues() {
        ...
      }
      
      setMaterial() {
        ...
      }
    
      addRopeToScene() {
        ...
      }
    
      //#region update at 60FPS
      update() {
       ...
      }
      
      updateLineGeometry() {
       ...
      }
      //#endregion update at 60FPS
    
      destroySingleton() {
        ...
      }
    }

    Mesh creation is initiated inside the constructor.

    // src/three/Experience/Theater/World/Theater/Rope/RopeModel.ts
    
     constructor() {
    	...
        this.initRope();
      }
      
      initRope() {
        // Generate the material that will be used for all ropes
        this.setMaterial();
    
        // Create a rope at each location specified in the ropesLocation configuration
        ropesLocation.forEach((location) => {
          this.createRope(location);
        });
      }
    
      createRope(location: Location) {
        // Generate the curve that defines the rope's path
        const curveQuad = this.setArrayOfVector3();
        this.setYValues(curveQuad);
    
        const tube = new THREE.TubeGeometry(
          curveQuad,
          this.ropeLength,
          this.ropeRadius,
          this.ropeRadiusSegments,
          false
        );
    
        const rope = new THREE.Mesh(tube, this.material);
    
        rope.geometry.attributes.position.needsUpdate = true;
    
        // Add the rope to the scene and set up its physics. I'll explain it later.
        this.addRopeToScene(rope, location);
      }
    
      setArrayOfVector3() {
        // Create points in a vertical line, spaced 1 unit apart
        const points = [];
        for (let index = 0; index < this.ropeLength; index++) {
          points.push(new THREE.Vector3(10, 9 - index, 0));
        }
        // Turn the points into a smooth curve that the tube geometry can follow
        return new THREE.CatmullRomCurve3(points, false, "catmullrom", 0.1);
      }
    
      setYValues(curve: CurveQuad) {
        // Set each point's Y value to its index, creating a vertical line
        for (let i = 0; i < curve.points.length; i++) {
          curve.points[i].y = i;
        }
      }
      
      setMaterial(){
    	  ...
      }

    Since the rope texture is used in multiple places, I use a factory pattern for efficiency.

    ...
    
    setMaterial() {
        this.material = this.ropeMaterialGenerator.generateRopeMaterial(
          "rope",
          0x3a301d, // Brown color
          1.68, // Normal Repeat
          0.902, // Normal Intensity
          21.718, // Noise Strength
          1.57, // UV Rotation
          9.14, // UV Height
          this.resources.items.ropeDiffuse, // Diffuse texture map
          this.resources.items.ropeNormal // Normal map for surface detail
        );
      }

    // src/three/Experience/Factories/MaterialGenerator/RopeMaterialGenerator.ts
    import * as THREE from "three";
    import Experience from "@/three/Experience/Theater/Experience/Experience";
    
    import vertexShader from "@/three/Experience/Shaders/Rope/vertex.glsl";
    import fragmentShader from "@/three/Experience/Shaders/Rope/fragment.glsl";
    
    import type { ResourceDiffuse, ResourceNormal } from "@/types/three.types";
    import type Debug from "@/three/Experience/Utils/Debugger/Debug";
    
    let instance: RopeMaterialGenerator | null = null;
    
    export default class RopeMaterialGenerator {
      public experience: Experience;
    
      private debug: Debug;
    
      constructor() {
        //    Singleton
        if (instance) {
          return instance;
        }
        instance = this;
    
        this.experience = new Experience();
        this.debug = this.experience.debug;
      }
    
      generateRopeMaterial(
        name: string,
        uLightColor: number,
        uNormalRepeat: number,
        uNormalIntensity: number,
        uNoiseStrength: number,
        uvRotate: number,
        uvHeight: number,
        resourceDiffuse: ResourceDiffuse,
        resourceNormal: ResourceNormal
      ) {
        const normalTexture = resourceNormal;
        normalTexture.wrapS = THREE.RepeatWrapping;
        normalTexture.wrapT = THREE.RepeatWrapping;
    
        const diffuseTexture = resourceDiffuse;
        diffuseTexture.wrapS = THREE.RepeatWrapping;
        diffuseTexture.wrapT = THREE.RepeatWrapping;
    
        const customUniforms = {
          uAddedLight: {
            value: new THREE.Color(0x000000),
          },
          uLightColor: {
            value: new THREE.Color(uLightColor),
          },
          uNormalRepeat: {
            value: uNormalRepeat,
          },
          uNormalIntensity: {
            value: uNormalIntensity,
          },
          uNoiseStrength: {
            value: uNoiseStrength,
          },
          uShadowStrength: {
            value: 1.296,
          },
          uvRotate: {
            value: uvRotate, 
          },
          uvHeight: {
            value: uvHeight,
          },
          uLightPosition: {
            value: new THREE.Vector3(60, 100, 60),
          },
          normalMap: {
            value: normalTexture,
          },
          diffuseMap: {
            value: diffuseTexture,
          },
          uAlpha: {
            value: 1,
          },
        };
    
        const shaderUniforms = THREE.UniformsUtils.clone(
          THREE.UniformsLib["lights"]
        );
        const shaderUniformsNormal = THREE.UniformsUtils.clone(
          THREE.UniformsLib["normalmap"]
        );
        const uniforms = Object.assign(
          shaderUniforms,
          shaderUniformsNormal,
          customUniforms
        );
    
        const ropeMaterial = new THREE.ShaderMaterial({
          uniforms: uniforms,
          vertexShader: vertexShader,
          fragmentShader: fragmentShader,
          precision: "lowp", // request low shader precision
        });

        return ropeMaterial;
      }
      
      
      destroySingleton() {
        ...
      }
    }
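
    Because the generator is a singleton, any other part of the scene can grab the same instance and produce a material with its own tuning values. A hypothetical consumer (the names and numbers below are mine, not from the project):

    const generator = new RopeMaterialGenerator(); // returns the cached instance
    const curtainMaterial = generator.generateRopeMaterial(
      "curtain", // hypothetical consumer
      0x5a1f1f, // assumed color
      2.0, // Normal Repeat
      0.8, // Normal Intensity
      15.0, // Noise Strength
      0, // UV Rotation
      4.0, // UV Height
      resources.items.curtainDiffuse, // hypothetical textures
      resources.items.curtainNormal
    );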
    

    The vertex and fragment shaders:

    // src/three/Experience/Shaders/Rope/vertex.glsl
    
    uniform float uNoiseStrength;      // Controls the intensity of noise effect
    uniform float uNormalIntensity;    // Controls the strength of normal mapping
    uniform float uNormalRepeat;       // Controls the tiling of normal map
    uniform vec3 uLightColor;          // Color of the light source
    uniform float uShadowStrength;     // Intensity of shadow effect
    uniform vec3 uLightPosition;       // Position of the light source
    uniform float uvRotate;            // Rotation angle for UV coordinates
    uniform float uvHeight;            // Height scaling for UV coordinates
    uniform bool isShadowBothSides;    // Flag for double-sided shadow rendering
    
    
    varying float vNoiseStrength;      // Passes noise strength to fragment shader
    varying float vNormalIntensity;    // Passes normal intensity to fragment shader
    varying float vNormalRepeat;       // Passes normal repeat to fragment shader
    varying vec2 vUv;                  // UV coordinates for texture mapping
    varying vec3 vColorPrimary;        // Primary color for the material
    varying vec3 viewPos;              // Position in view space
    varying vec3 vLightColor;          // Light color passed to fragment shader
    varying vec3 worldPos;             // Position in world space
    varying float vShadowStrength;     // Shadow strength passed to fragment shader
    varying vec3 vLightPosition;       // Light position passed to fragment shader
    
    // Helper function to create a 2D rotation matrix
    mat2 rotate(float angle) {
        return mat2(cos(angle), -sin(angle), sin(angle), cos(angle));
    }
    
    void main() {
        // Calculate rotation angle and its sine/cosine components
        float angle = 1.0 * uvRotate;
        float s = sin(angle);
        float c = cos(angle);
    
        // Create rotation matrix for UV coordinates
        mat2 rotationMatrix = mat2(c, s, -s, c);
    
        // Define pivot point for UV rotation
        vec2 pivot = vec2(0.5, 0.5);
    
        // Transform vertex position to clip space
        gl_Position = projectionMatrix * viewMatrix * modelMatrix * vec4(position, 1.0);
    
        // Apply rotation and height scaling to UV coordinates
        vUv = rotationMatrix * (uv - pivot) + pivot;
        vUv.y *= uvHeight;
    
        // Pass various parameters to fragment shader
        vNormalRepeat = uNormalRepeat;
        vNormalIntensity = uNormalIntensity;
        viewPos = vec3(0.0, 0.0, 0.0);  // Initialize view position
        vNoiseStrength = uNoiseStrength;
        vLightColor = uLightColor;
        vShadowStrength = uShadowStrength;
        vLightPosition = uLightPosition;
    }

    // src/three/Experience/Shaders/Rope/fragment.glsl
    // Uniform textures for normal and diffuse mapping
    uniform sampler2D normalMap;
    uniform sampler2D diffuseMap;
    
    // Varying variables passed from vertex shader
    varying float vNoiseStrength;
    varying float vNormalIntensity;
    varying float vNormalRepeat;
    varying vec2 vUv;
    varying vec3 viewPos;
    varying vec3 vLightColor;
    varying vec3 worldPos;
    varying float vShadowStrength;
    varying vec3 vLightPosition;
    
    // Constants for lighting calculations
    const float specularStrength = 0.8;
    const vec4 colorShadowTop = vec4(vec3(0.0, 0.0, 0.0), 1.0);
    
    void main() {
        // normal, diffuse and light accumulation
        vec3 samNorm = texture2D(normalMap, vUv * vNormalRepeat).xyz * 2.0 - 1.0;
        vec4 diffuse = texture2D(diffuseMap, vUv * vNormalRepeat);
        vec4 addedLights = vec4(0.0, 0.0, 0.0, 1.0);
    
        // Calculate diffuse lighting
        vec3 lightDir = normalize(vLightPosition - worldPos);
        float diff = max(dot(lightDir, samNorm), 0.0);
        addedLights.rgb += diff * vLightColor;
    
        // Calculate specular lighting
        vec3 viewDir = normalize(viewPos - worldPos);
        vec3 reflectDir = reflect(-lightDir, samNorm);
        float spec = pow(max(dot(viewDir, reflectDir), 0.0), 16.0);
        addedLights.rgb += specularStrength * spec * vLightColor;
    
        // Calculate the top shadow effect: the higher it is, the darker it gets.
        float shadowTopStrength = 1.0 - pow(vUv.y, vShadowStrength) * 0.5;
        float shadowFactor = smoothstep(0.0, 0.5, shadowTopStrength);
    
        // Mix diffuse color with shadow. 
        vec4 mixedColorWithShadowTop = mix(diffuse, colorShadowTop, shadowFactor);
        // Mix lighting with shadow
        vec4 addedLightWithTopShadow = mix(addedLights, colorShadowTop, shadowFactor);
    
        // Final color composition with normal intensity control
        gl_FragColor = mix(mixedColorWithShadowTop, addedLightWithTopShadow, vNormalIntensity);
    }

    Once the material is created and added to the mesh, the addRopeToScene function adds the rope to the scene, then calls the addPhysicsToRope function from the physics file.

    // src/three/Experience/Theater/World/Theater/Rope/RopeModel.ts
      addRopeToScene(mesh: Mesh, location: Location) {
        this.list.rope.push(mesh); //Add the rope to an array, which will be used by the physics file to update the mesh
        this.scene.add(mesh);
        this.physics.rope.addPhysicsToRope(location); // same as src/three/Experience/Theater/Physics/Theater/Rope/Rope.addPhysicsToRope(location)
      }

    Let's now focus on the physics file.

    // src/three/Experience/Theater/Physics/Theater/Rope/Rope.ts
    
    import * as CANNON from "cannon-es";
    
    import Experience from "@/three/Experience/Theater/Experience/Experience";
    
    import type { Location } from "@/types/experience.types";
    import type Physics from "@/three/Experience/Theater/Physics/Physics";
    import type { Scene, SphereBody } from "@/types/three.types";
    
    let instance: Rope | null = null;
    
    const SIZE_SPHERE = 0.05;
    const ANGULAR_DAMPING = 1;
    const DISTANCE_BETWEEN_SPHERES = SIZE_SPHERE * 5;
    const DISTANCE_BETWEEN_SPHERES_BOTTOM = 2.3;
    const DISTANCE_BETWEEN_SPHERES_TOP = 6;
    const LINEAR_DAMPING = 0.5;
    const NUMBER_OF_SPHERES = 20;
    
    export default class Rope {
      public experience: Experience;
      public physics: Physics;
      public scene: Scene;
      public list: { rope: SphereBody[][] };
    
      constructor() {
        //    Singleton
        if (instance) {
          return instance;
        }
        instance = this;
    
        this.experience = new Experience();
        this.scene = this.experience.scene;
        this.physics = this.experience.physics;
    
        this.list = {
          rope: [],
        };
      }
    
      //#region add physics
      addPhysicsToRope() {
       ...
      }
    
      setRopePhysics() {
        ...
      }
      
      setMassRope() {
       ...
      }
      
      setDistanceBetweenSpheres() {
        ...
      }
      
      setDistanceBetweenConstraints() {
       ...
      }
      
      addConstraints() {
        ...
      }
      //#endregion add physics
    
      //#region update at 60FPS
      update() {
        ...
      }
    
      loopRopeWithPhysics() {
        ...
      }
      
      updatePoints() {
        ...
      }
      //#endregion update at 60FPS
    
      destroySingleton() {
        ...
      }
    }

    The rope’s physics is created from the mesh file using the method addPhysicsToRope, called via this.physics.rope.addPhysicsToRope(location).

    addPhysicsToRope(location: Location) {
      this.setRopePhysics(location);
    }
    
    setRopePhysics(location: Location) {
      const sphereShape = new CANNON.Sphere(SIZE_SPHERE);
      const rope = [];
    
      let lastBody = null;
      for (let index = 0; index < NUMBER_OF_SPHERES; index++) {
        // Create physics body for each sphere in the rope. The spheres will be what collide with the player
        const spherebody = new CANNON.Body({ mass: this.setMassRope(index) });
    
        spherebody.addShape(sphereShape);
        spherebody.position.set(
          location.x,
          location.y - index * DISTANCE_BETWEEN_SPHERES,
          location.z
        );
        this.physics.physics.addBody(spherebody);
        rope.push(spherebody);
        spherebody.linearDamping = LINEAR_DAMPING;
        spherebody.angularDamping = ANGULAR_DAMPING;
    
        // Create constraints between consecutive spheres
        if (lastBody !== null) {
          this.addConstraints(spherebody, lastBody, index);
        }
    
        lastBody = spherebody;
    
        if (index + 1 === NUMBER_OF_SPHERES) {
          this.list.rope.push(rope);
        }
      }
    }
    
    setMassRope(index: number) {
      return index === 0 ? 0 : 2; // first sphere is fixed (mass 0)
    }
    
    setDistanceBetweenSpheres(index: number, locationY: number) {
      return locationY - DISTANCE_BETWEEN_SPHERES * index;
    }
    
    setDistanceBetweenConstraints(index: number) {
    // Since the user only interacts with the spheres at the bottom, the distance between the spheres gradually increases from the bottom to the top
      if (index <= 2) {
        return DISTANCE_BETWEEN_SPHERES * DISTANCE_BETWEEN_SPHERES_TOP;
      }
      if (index > 2 && index <= 8) {
        return DISTANCE_BETWEEN_SPHERES * DISTANCE_BETWEEN_SPHERES_BOTTOM;
      }
      return DISTANCE_BETWEEN_SPHERES;
    }
    
    addConstraints(
      sphereBody: CANNON.Body,
      lastBody: CANNON.Body,
      index: number
    ) {
      this.physics.physics.addConstraint(
        new CANNON.DistanceConstraint(
          sphereBody,
          lastBody,
          this.setDistanceBetweenConstraints(index)
        )
      );
    }
    

    When configuring physics parameters, strategy is key. Although users won't consciously notice during gameplay, they
    can only interact with the lower portion of the rope. Therefore, I concentrated more physics detail where it matters –
    by adding more spheres to the bottom of the rope.

    Since the user only interacts with the bottom of the rope, the density of physics spheres is higher at the bottom than
    at the top.

    Rope meshes are then updated every frame from the physics file.

     //#region update at 60FPS
     update() {
      this.loopRopeWithPhysics();
    }
    
    loopRopeWithPhysics() {
      for (let index = 0; index < this.list.rope.length; index++) {
        this.updatePoints(this.list.rope[index], index);
      }
    }
    
    updatePoints(element: CANNON.Body[], indexParent: number) {
      element.forEach((item: CANNON.Body, index: number) => {
        // Update the mesh with the location of each of the physics spheres
        this.experience.world.rope.list.rope[
          indexParent
        ].geometry.parameters.path.points[index].copy(item.position);
      });
    }
    //#endregion update at 60FPS
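
    The updateLineGeometry method is elided above; a minimal sketch, assuming it rebuilds each tube from its curve (whose points were just synced to the physics spheres) once per frame:

    updateLineGeometry() {
      this.list.rope.forEach((rope) => {
        // The curve's points were updated in updatePoints(); rebuild the tube around it
        const path = rope.geometry.parameters.path;
        rope.geometry.dispose(); // free the old geometry to avoid leaking GPU memory
        rope.geometry = new THREE.TubeGeometry(
          path,
          this.ropeLength,
          this.ropeRadius,
          this.ropeRadiusSegments,
          false
        );
      });
    }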

    Animations in the DOM – Ticket-Tearing Particles

    While the website heavily relies on Three.js to create an immersive experience, many elements remain DOM-based. One of
    my goals for this portfolio was to combine both worlds: the rich, interactive 3D environments and the efficiency of
    traditional DOM elements. Furthermore, I genuinely enjoy coding DOM-based micro-interactions, so skipping out on them
    wasn't an option!

    One of my favorite DOM animations is the ticket-tearing effect, especially the particles flying away. It's subtle,
    but adds a bit of charm. The effect is not only fun to watch but also relatively easy to adapt to other projects.
    First, let's look at the structure of the components.

    TicketBase.vue is a fairly simple file with minimal styling. It handles the tearing animation and a few basic
    functions. Everything else related to the ticket, such as the styling, is handled by other components passed through
    slots.

    To make things clearer, I've cleaned up my TicketBase.vue file a bit to highlight how the particle effect works.

    import { computed, ref, watch, useSlots } from "vue";
    import { useAudioStore } from "@/stores/audio";
    
    import type { TicketBaseProps } from "./types";
    
    const props = withDefaults(defineProps<TicketBaseProps>(), {
      isTearVisible: true,
      isLocked: false,
      cardId: null,
      isFirstTear: false,
      runTearAnimation: false,
      isTearable: false,
      markup: "button",
    });
    
    const { setCurrentFx } = useAudioStore();
    
    const emit = defineEmits(["hover:enter", "hover:leave"]);
    
    const particleContainer = ref<HTMLElement | null>(null);
    const particleContainerTop = ref<HTMLElement | null>(null);
    const timeoutParticles = ref<NodeJS.Timeout | null>(null);
    const isAnimationStarted = ref<boolean>(false);
    const isTearRipped = ref<boolean>(false);
    
    const isTearable = computed(
      () => props.isTearVisible || props.isFirstTear
    );
    
    const handleClick = () => {
      ...
    };
    
    const runTearAnimation = () => {
      ...
    };
    
    const createParticles = () => {
      ...
    };
    
    const deleteParticles = () => {
      ...
    };
    
    const toggleIsAnimationStarted = () => {
    ...
    };
    
    const cssClasses = computed(() => [
      ...
    ]);
    
    
    
    /* styles from the component's <style> block */
    .ticket-base {
       ...
     }

    /* particles can't be scoped */
    .particle {
    ...
    }

    When a ticket is clicked (or the user presses Enter), it runs the function handleClick(), which then calls runTearAnimation().

    const handleClick = () => {
      if (!props.isTearable || props.isLocked || isAnimationStarted.value) return;
    	...
    
      runTearAnimation();
    };
    
    ...
    
    const runTearAnimation = () => {
      toggleIsAnimationStarted(true);
    
      createParticles(particleContainerTop.value, "bottom");
      createParticles(particleContainer.value, "top");
      isTearRipped.value = true;
      // add other functions, such as the tearing SFX
    };
    
    
    ...
    
    const toggleIsAnimationStarted = (bool: boolean) => {
      isAnimationStarted.value = bool;
    };

    The createParticles function creates a few new <div> elements, which act as the little particles. These divs are then appended to either the main part of the ticket or the torn part.

    const createParticles = (container: HTMLElement, direction: string) => {
      const numParticles = 5;
      for (let i = 0; i < numParticles; i++) {
        const particle = document.createElement("div");
        particle.className = "particle";

        // Calculate left position based on index and add small random offset
        const baseLeft = (i / numParticles) * 100;
        const randomOffset = (Math.random() - 0.5) * 10;
        particle.style.left = `calc(${baseLeft}% + ${randomOffset}%)`;

        // Assign unique animation properties
        const duration = Math.random() * 0.3 + 0.1;
        const translateY = (i / numParticles) * -20 - 2;
        const scale = Math.random() * 0.5 + 0.5;
        const delay = ((numParticles - i - 1) / numParticles) * 0; // the * 0 disables the stagger; remove it to stagger the particles

        // Pick the keyframes matching the direction of the tear
        const animationName = direction === "bottom" ? "flyAwayBottom" : "flyAway";
        particle.style.animation = `${animationName} ${duration}s ${delay}s ease-in forwards`;
        particle.style.setProperty("--translateY", `${translateY}px`);
        particle.style.setProperty("--scale", scale.toString());

        container.appendChild(particle);

        // Remove particle after animation ends
        particle.addEventListener("animationend", () => {
          particle.remove();
        });
      }
    };

    The particles are animated using a CSS keyframes animation called flyAway or flyAwayBottom.

    .particle {
      position: absolute;
      width: 0.2rem;
      height: 0.2rem;
      background-color: var(--color-particles); /* === #655c52 */
    
      animation: flyAway 3s ease-in forwards;
    }
    
    @keyframes flyAway {
      0% {
        transform: translateY(0) scale(1);
        opacity: 1;
      }
      100% {
        transform: translateY(var(--translateY)) scale(var(--scale));
        opacity: 0;
      }
    }
    
    @keyframes flyAwayBottom {
      0% {
        transform: translateY(0) scale(1);
        opacity: 1;
      }
      100% {
        transform: translateY(calc(var(--translateY) * -1)) scale(var(--scale));
        opacity: 0;
      }
    }

    Additional Featured Animations

    There are so many features, details, easter eggs, and animations I wanted to cover in this article, but it's simply
    not possible to go through everything; many of them deserve their own tutorial.

    That said, here are some of my favorites to code. They definitely deserve a spot in this article.

    Reflections on Aurel’s Grand Theater

    Even though it took longer than I originally anticipated, Aurel's Grand Theater was an incredibly fun and rewarding
    project to work on. Because it wasn't a client project, it offered a rare opportunity to freely experiment, explore
    new ideas, and push myself outside my comfort zone, without the usual constraints of budgets or deadlines.

    Looking back, there are definitely things I'd approach differently if I were to start again. I'd spend more time
    defining the art direction upfront, lean more heavily into the GPU, and perhaps implement Rapier. But despite these
    reflections, I had an amazing time building this project and I'm satisfied with the final result.

    While recognition was never the goal, I'm deeply honored that the site was acknowledged. It received FWA of the Day,
    Awwwards Site of the Day and Developer Award, as well as GSAP's Site of the Week and Site of the Month.

    I'm truly grateful for the recognition, and I hope this behind-the-scenes look and shared code snippets inspire you
    in your own creative coding journey.




  • Is XDR the Ultimate Answer to Withstanding the Modern Cyberwarfare Era?

    Is XDR the Ultimate Answer to Withstanding the Modern Cyberwarfare Era?


    The digital realm has morphed into a volatile battleground. Organizations are no longer just facing isolated cyber incidents but are squarely in the crosshairs of sophisticated cyberwarfare. Nation-states, organized cybercrime syndicates, and resourceful individual attackers constantly pursue vulnerabilities, launching relentless attacks. Traditional security measures are increasingly insufficient, leaving businesses dangerously exposed. So, how can organizations effectively defend their critical digital assets against this escalating tide of sophisticated and persistent threats? The answer, with increasing certainty, lies in the power of Extended Detection and Response (XDR).

    The Limitations of Traditional Security in the Cyberwarfare Era

    For years, security teams have been navigating a fragmented landscape of disparate security tools. Endpoint Detection and Response (EDR), Network Detection and Response (NDR), email security gateways, and cloud security solutions have operated independently, each generating a stream of alerts that often lacked crucial context and demanded time-consuming manual correlation. This lack of integration created significant blind spots, allowing malicious actors to stealthily move laterally within networks and establish long-term footholds, leading to substantial damage and data breaches. The complexity inherent in managing these siloed systems has become a major impediment to effective threat defense in this new era of cyber warfare.


    XDR: A Unified Defense Against Advanced Cyber Threats

    XDR fundamentally breaks down these security silos. It’s more than just an upgrade to EDR; it represents a transformative shift towards a unified security incident detection and response platform that spans multiple critical security layers. Imagine having a centralized view that provides a comprehensive understanding of your entire security posture, seamlessly correlating data from your endpoints, network infrastructure, email communications, cloud workloads, and more. This holistic visibility forms the bedrock of a resilient defense strategy in the face of modern cyberwarfare tactics.

    Key Advantages of XDR in the Age of Cyber Warfare

    Unprecedented Visibility and Context for Effective Cyber Defense:

    XDR ingests and intelligently analyzes data from a wide array of security telemetry sources, providing a rich and contextual understanding of emerging threats. Instead of dealing with isolated and often confusing alerts, security teams gain a complete narrative of an attack lifecycle, from the initial point of entry to lateral movement attempts and data exfiltration activities. This comprehensive context empowers security analysts to accurately assess the scope and severity of a security incident, leading to more informed and effective response actions against sophisticated cyber threats.

    Enhanced Threat Detection Capabilities Against Advanced Attacks

    By correlating seemingly disparate data points across multiple security domains, XDR can effectively identify sophisticated and evasive attacks that might easily bypass traditional, siloed security tools. Subtle anomalies and seemingly innocuous behavioral patterns, which could appear benign in isolation, can paint a clear and alarming picture of malicious activity when analyzed holistically by XDR. This significantly enhances the ability to detect and neutralize advanced persistent threats (APTs), zero-day exploits, and other complex cyberattacks that characterize modern cyber warfare.

    Faster and More Efficient Incident Response in a Cyber Warfare Scenario

    In the high-pressure environment of cyber warfare, rapid response is paramount. XDR automates many of the time-consuming and manual tasks associated with traditional incident response processes, such as comprehensive data collection, in-depth threat analysis, and thorough investigation workflows. This automation enables security teams to respond with greater speed and decisiveness, effectively containing security breaches before they can escalate and minimizing the potential impact of a successful cyberattack. Automated response actions, such as isolating compromised endpoints or blocking malicious network traffic, can be triggered swiftly and consistently based on the correlated intelligence provided by XDR.

    Improved Productivity for Security Analysts Facing Cyber Warfare Challenges

    The sheer volume of security alerts generated by a collection of disconnected security tools can quickly overwhelm even the most skilled security teams, leading to alert fatigue and a higher risk of genuinely critical threats being missed. XDR addresses this challenge by consolidating alerts from across the security landscape, intelligently prioritizing them based on rich contextual information, and providing security analysts with the comprehensive information they need to quickly understand, triage, and effectively respond to security incidents. This significantly reduces the workload on security teams, freeing up valuable time and resources to focus on proactive threat hunting activities and the implementation of more robust preventative security measures against the evolving threats of cyber warfare.


    Proactive Threat Hunting Capabilities in the Cyber Warfare Landscape

    With a unified and comprehensive view of the entire security landscape provided by XDR, security analysts can proactively hunt for hidden and sophisticated threats and subtle indicators of compromise (IOCs) that might not trigger traditional, signature-based security alerts. By leveraging the power of correlated data analysis and applying advanced behavioral analytics, security teams can uncover dormant threats and potential attack vectors before they can be exploited and cause significant harm in the context of ongoing cyber warfare.

    Future-Proofing Your Security Posture Against Evolving Cyber Threats

    The cyber threat landscape is in a constant state of evolution, with new attack vectors, sophisticated techniques, and increasingly complex methodologies emerging on a regular basis. XDR’s inherently unified architecture and its ability to seamlessly integrate with new and emerging security layers ensure that your organization’s defenses remain adaptable and highly resilient in the face of future, as-yet-unknown threats that characterize the dynamic nature of cyber warfare.

    Introducing Seqrite XDR: Your AI-Powered Shield in the Cyberwarfare Era

    In this challenging and ever-evolving cyberwarfare landscape, Seqrite XDR emerges as your powerful and intelligent ally. Now featuring SIA – Seqrite Intelligent Assistant, a groundbreaking virtual security analyst powered by the latest advancements in GenAI technology, Seqrite XDR revolutionizes your organization’s security operations. SIA acts as a crucial force multiplier for your security team, significantly simplifying complex security tasks, dramatically accelerating in-depth threat investigations through intelligent contextual summarization and actionable insights, and delivering clear, concise, and natural language-based recommendations directly to your analysts.

    Unlock Unprecedented Security Capabilities with Seqrite XDR and SIA

    • SIA – Your LLM Powered Virtual Security Analyst: Leverage the power of cutting-edge Gen AI to achieve faster response times and enhanced security analysis. SIA provides instant access to critical incident details, Indicators of Compromise (IOCs), and comprehensive incident timelines. Seamlessly deep-link to relevant incidents, security rules, and automated playbooks across the entire Seqrite XDR platform, empowering your analysts with immediate context and accelerating their workflows.
    • Speed Up Your Response with Intelligent Automation: Gain instant access to all critical incident-related information, including IOCs and detailed incident timelines. Benefit from seamless deep-linking capabilities to incidents, relevant security rules, and automated playbooks across the Seqrite XDR platform, significantly accelerating your team’s response capabilities in the face of cyber threats.
    • Strengthen Your Investigations with AI-Powered Insights: Leverage SIA to gain comprehensive contextual summarization of complex security events, providing your analysts with a clear understanding of the attack narrative. Receive valuable insights into similar past threats, suggested mitigation strategies tailored to your environment, and emerging threat trends, empowering your team to make more informed decisions during critical investigations.
    • Make Smarter Security Decisions with AI-Driven Recommendations: Utilize pre-built and intuitive conversational prompts specifically designed for security analysts, enabling them to quickly query and understand complex security data. Benefit from clear visualizations, concise summaries of key findings, and structured, actionable recommendations generated by SIA, empowering your team to make more effective and timely security decisions.

    With Seqrite XDR, now enhanced with the power of SIA – your GenAI-powered virtual security analyst, you can transform your organization’s security posture by proactively uncovering hidden threats and sophisticated adversaries that traditional, siloed security tools often miss. Don’t wait until it’s too late.

    Contact our cybersecurity experts today to learn how Seqrite XDR and SIA can provide the ultimate answer to withstanding the modern cyberwarfare era. Request a personalized demo now to experience the future of intelligent security.

     




  • How to Choose the Right ZTNA Solution for your Enterprise

    How to Choose the Right ZTNA Solution for your Enterprise


    As organizations continue to embrace hybrid work models and migrate applications to the cloud, traditional network security approaches like VPNs are proving inadequate. Zero Trust Network Access (ZTNA) has emerged as the modern framework for secure access, operating on the principle of “never trust, always verify.” However, with numerous vendors offering different ZTNA solutions, selecting the right one requires careful consideration of organizational needs, solution types, key features, and implementation factors.

    Assessing Organizational Requirements

    The first step in selecting a ZTNA solution is thoroughly evaluating your organization’s specific needs. Consider the nature of your workforce: do employees work remotely, in-office, or in a hybrid arrangement? The solution must accommodate secure access from various locations while ensuring productivity. Additionally, assess whether third-party vendors or contractors require controlled access to specific resources, as this will influence whether an agent-based or agentless approach is more suitable.

    Another critical factor is the sensitivity of the data and applications being accessed. Organizations handling financial, healthcare, or other regulated data must ensure the ZTNA solution complies with industry standards such as GDPR, HIPAA, or SOC 2. Furthermore, examine how the solution integrates with your existing security infrastructure, including identity and access management (IAM) systems, endpoint detection and response (EDR) tools, and security information and event management (SIEM) platforms. A seamless integration ensures cohesive security policies and reduces operational complexity.

    Understanding ZTNA Deployment Models

    ZTNA solutions generally fall into two primary categories: service-initiated (agent-based) and network-initiated (agentless). Service-initiated ZTNA requires installing a lightweight agent on user devices, which then connects to a cloud-based broker that enforces access policies. This model is ideal for organizations with managed corporate devices, as it provides granular control over endpoint security.

    On the other hand, network-initiated ZTNA does not require software installation. Instead, users access resources through a web portal or browser, with policies enforced via DNS or routing controls. This approach is better suited for third-party users or unmanaged devices, offering flexibility without compromising security. Some vendors provide hybrid models that combine both approaches, allowing organizations to tailor access based on user roles and device types.

    Essential Features of a Robust ZTNA Solution

    When evaluating ZTNA providers, prioritize solutions that offer strong identity-centric security. Multi-factor authentication (MFA) and continuous authentication mechanisms, such as behavioral analytics, ensure that only verified users gain access. Role-based access control (RBAC) further enhances security by enforcing the principle of least privilege, granting users access only to the resources they need.

    Granular access controls are another critical feature. Look for solutions that provide application-level segmentation rather than just network-level controls. Context-aware policies, which consider device posture, geographic location, and time of access, add a layer of security.

    Moreover, a robust ZTNA solution should include several other essential features to ensure security and flexibility. It must support user-device binding to securely associate users with their specific devices. Additionally, it should support local users to accommodate on-premises authentication needs. Compatibility with legacy identity providers (IdPs) is crucial for seamless integration with existing systems. Furthermore, the solution should enable session recording over various protocols to enhance monitoring and compliance.

    Integration capabilities should not be overlooked. The ideal ZTNA solution should seamlessly connect with existing security tools, such as SIEM and SOAR platforms, for centralized monitoring and incident response. Additionally, API-based automation can streamline policy management, reducing administrative overhead. Finally, user experience plays a pivotal role in adoption. Features like single sign-on (SSO) and fast, reliable connectivity help maintain productivity while ensuring security.

    Evaluating Deployment and Cost Considerations

    Implementation complexity and cost are decisive factors in choosing a ZTNA solution. Cloud-based ZTNA, delivered as a SaaS offering, typically involves minimal deployment effort and is ideal for organizations with predominantly cloud-based applications. While offering greater control, on-premises deployments require more extensive setup and maintenance, making them better suited for highly regulated industries with strict data residency requirements. Hybrid models strike a balance, catering to organizations with mixed infrastructure.

    Cost structures vary among providers, with some offering per-user licensing and others charging based on application access. Be mindful of potential hidden costs, such as bandwidth usage or fees for additional security integrations. Conducting a proof-of-concept (POC) trial can provide valuable insights into the solution’s real-world performance and help justify investment by demonstrating potential cost savings, such as reduced VPN expenses or improved security efficiency.

    Conclusion: Making an Informed Decision

    Choosing the right ZTNA solution demands a structured approach. Begin by assessing your organization’s unique requirements, including workforce dynamics, data sensitivity, and existing security infrastructure. Next, understand the different deployment models to determine whether an agent-based, agentless, or hybrid solution aligns with your needs. Prioritize features that enhance security without compromising usability and carefully evaluate deployment efforts and costs to ensure smooth implementation.

    By following this comprehensive guide, organizations can adopt a ZTNA solution that strengthens security and supports operational efficiency and scalability. As the threat landscape evolves, a well-chosen ZTNA framework will provide flexibility and resilience to safeguard critical assets in an increasingly perimeter-less world.

    Discover how Seqrite ZTNA can transform your organization’s security with a robust, cloud-native zero-trust solution tailored for modern enterprises. Contact us today or request a demo to start your journey toward a more secure and efficient network!




  • Integrating Rive into a React Project: Behind the Scenes of Valley Adventures

    Integrating Rive into a React Project: Behind the Scenes of Valley Adventures


    Bringing new tools into a workflow is always exciting—curiosity bumps up against the comfort of familiar methods. But when our longtime client, Chumbi Valley, came to us with their Valley Adventures project, we saw the perfect opportunity to experiment with Rive and craft cartoon-style animations that matched the playful spirit of the brand.

    Rive is a powerful real-time interactive design tool with built-in support for interactivity through State Machines. In this guide, we’ll walk you through how we integrated a .riv file into a React environment and added mouse-responsive animations.

    We’ll also walk through a modernized integration method using Rive’s newer Data Binding feature—our current preferred approach for achieving the same animation with less complexity and greater flexibility.

    Animation Concept & File Preparation

    Valley Adventures is a gamified Chumbi NFT staking program, where magical creatures called Chumbi inhabit an enchanted world. The visual direction leans heavily into fairytale book illustrations—vibrant colors, playful characters, and a whimsical, cartoon-like aesthetic.

    To immediately immerse users in this world, we went with a full-section hero animation on the landing page. We split the animation into two parts:

    • an idle animation that brings the scene to life;
    • a cursor-triggered parallax effect, adding depth and interactivity.

    Several elements animate simultaneously—background layers like rustling leaves and flickering fireflies, along with foreground characters that react to movement. The result is a dynamic, storybook-like experience that invites users to explore.

    The most interesting—and trickiest—part of the integration was tying animations to mouse tracking. Rive provides a built-in way to handle this: by applying constraints with varying strengths to elements within a group that’s linked to Mouse Tracking, which itself responds to the cursor’s position.

    However, we encountered a limitation with this approach: the HTML buttons layered above the Rive asset were blocking the hover state, preventing it from triggering the animation beneath.

    To work around this, we used a more robust method that gave us finer control and avoided those problems altogether. 

    Here’s how we approached it:

    1. Create four separate timelines, each with a single keyframe representing an extreme position of the animation group:
      • Far left
      • Far right
      • Top
      • Bottom
    2. Add two animation layers, each responsible for blending between opposite keyframes:
      • Layer 1 blends the far-left and far-right timelines
      • Layer 2 blends the top and bottom timelines
    3. Tie each layer’s blend amount to a numeric input—one for the X axis, one for the Y axis.

    By adjusting the values of these inputs based on the cursor’s position, you can control how tightly the animation responds on each axis. This approach gives you a smoother, more customizable parallax effect—and prevents unexpected behavior caused by overlapping UI.

    Once the animation is ready, simply export it as a .riv file—and leave the rest of the magic to the devs.

    How We Did It: Integrating a Rive File into a React Project

    Before we dive further, let’s clarify what a .riv file actually is.

    A .riv file is the export format from the Rive editor. It can include:

    • vector graphics,
    • timeline animations,
    • a State Machine with input parameters.

    In our case, we’re using a State Machine with two numeric inputs: Axis_X and Axis_Y. These inputs are tied to how we control animation in Rive, using values from the X and Y axes of the cursor’s position.

    These inputs drive the movement of different elements—like the swaying leaves, fluttering fireflies, and even subtle character reactions—creating a smooth, interactive experience that responds to the user’s mouse.

    Step-by-Step Integration

    Step 1: Install the Rive React runtime

    Install the official package:

    npm install @rive-app/react-canvas

    Step 2: Create an Animation Component

    Create a component called RiveBackground.tsx to handle loading and rendering the animation.
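
    A minimal starting skeleton for that component might look like this (the fully configured version appears at the end of this section):

    import { useRive } from '@rive-app/react-canvas';

    export function RiveBackground({ className }: { className?: string }) {
      const { setCanvasRef, setContainerRef } = useRive({
        src: 'https://cdn.rive.app/animations/hero.riv', // same file as in the next step
        autoplay: true,
      });

      return (
        <div ref={setContainerRef} className={className}>
          <canvas ref={setCanvasRef} />
        </div>
      );
    }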

    Step 3: Connect animation

    const { rive, setCanvasRef, setContainerRef } = useRive({
      src: 'https://cdn.rive.app/animations/hero.riv',
      autoplay: true,
      layout: new Layout({ fit: Fit.Cover, alignment: Alignment.Center }),
      onLoad: () => setIsLoaded(true),
      enableRiveAssetCDN: true,
    });
    

    For a better understanding, let’s take a closer look at each prop you’ll typically use when working with Rive in React:

    What each option does:

    • src – Path to your .riv file; it can be local or hosted via CDN
    • autoplay – Automatically starts the animation once it's loaded
    • layout – Controls how the animation fits into the canvas (we're using Cover and Center)
    • onLoad – Callback that fires when the animation is ready, useful for setting isLoaded
    • enableRiveAssetCDN – Allows loading of external assets (like fonts or textures) from Rive's CDN

    Step 4: Connect State Machine Inputs

    const numX = useStateMachineInput(rive, 'State Machine 1', 'Axis_X', 0);
    const numY = useStateMachineInput(rive, 'State Machine 1', 'Axis_Y', 0);

    This setup connects directly to the input values defined inside the State Machine, allowing us to update them dynamically in response to user interaction.

    • State Machine 1 — the name of your State Machine, exactly as defined in the Rive editor
    • Axis_X and Axis_Y — numeric inputs that control movement based on cursor position
    • 0 — the initial (default) value for each input

    ☝️ Important: Make sure your .riv file includes the exact names: Axis_X, Axis_Y, and State Machine 1. These must match what’s defined in the Rive editor — otherwise, the animation won’t respond as expected.

    Step 5: Handle Mouse Movement

    useEffect(() => {
      if (!numX || !numY) return;
    
      const handleMouseMove = (e: MouseEvent) => {
        const { innerWidth, innerHeight } = window;
        numX.value = (e.clientX / innerWidth) * 100;
        numY.value = 100 - (e.clientY / innerHeight) * 100;
      };
    
      window.addEventListener('mousemove', handleMouseMove);
      return () => window.removeEventListener('mousemove', handleMouseMove);
    }, [numX, numY]);

    What’s happening here:

    • We use clientX and clientY to track the mouse position within the browser window.
    • The values are normalized to a 0–100 range, matching what the animation expects.
    • These normalized values are then passed to the Axis_X and Axis_Y inputs in the Rive State Machine, driving the interactive animation.

    ⚠️ Important: Always remember to remove the event listener when the component unmounts to avoid memory leaks and unwanted behavior. 

    Step 6: Cleanup and Render the Component

    useEffect(() => {
      return () => rive?.cleanup();
    }, [rive]);

    And the render:

    return (
      <div
        ref={setContainerRef}
        className={`rive-container ${className ?? ''} ${isLoaded ? 'show' : 'hide'}`}
      >
        <canvas ref={setCanvasRef} />
      </div>
    );

    • cleanup() — frees up resources when the component unmounts. Always call this to prevent memory leaks.
    • setCanvasRef and setContainerRef — these must be connected to the correct DOM elements in order for Rive to render the animation properly.

    And here’s the complete code:

    import {
      useRive,
      useStateMachineInput,
      Layout,
      Fit,
      Alignment,
    } from '@rive-app/react-canvas';
    import { useEffect, useState } from 'react';
    
    export function RiveBackground({ className }: { className?: string }) {
      const [isLoaded, setIsLoaded] = useState(false);
    
      const { rive, setCanvasRef, setContainerRef } = useRive({
        src: 'https://cdn.rive.app/animations/hero.riv',
        animations: ['State Machine 1', 'Timeline 1', 'Timeline 2'],
        autoplay: true,
        layout: new Layout({ fit: Fit.Cover, alignment: Alignment.Center }),
        onLoad: () => setIsLoaded(true),
        enableRiveAssetCDN: true,
      });
    
      const numX = useStateMachineInput(rive, 'State Machine 1', 'Axis_X', 0);
      const numY = useStateMachineInput(rive, 'State Machine 1', 'Axis_Y', 0);
    
      useEffect(() => {
        if (!numX || !numY) return;
    
        const handleMouseMove = (e: MouseEvent) => {
          const { innerWidth, innerHeight } = window;
          numX.value = (e.clientX / innerWidth) * 100;
          numY.value = 100 - (e.clientY / innerHeight) * 100;
        };
    
        window.addEventListener('mousemove', handleMouseMove);
        return () => window.removeEventListener('mousemove', handleMouseMove);
      }, [numX, numY]);
    
      useEffect(() => {
        return () => {
          rive?.cleanup();
        };
      }, [rive]);
    
      return (
        <div
          ref={setContainerRef}
          className={`rive-container ${className ?? ''} ${isLoaded ? 'show' : 'hide'}`}
        >
          <canvas ref={setCanvasRef} />
        </div>
      );
    }
    

    Step 7: Use the Component

    Now you can use the RiveBackground like any other component:

    <RiveBackground className="hero-background" />

    Step 8: Preload the WASM File

    To avoid loading the .wasm file at runtime—which can delay the initial render—you can preload it in App.tsx:

    import riveWASMResource from '@rive-app/canvas/rive.wasm';
    
    <link
      rel="preload"
      href={riveWASMResource}
      as="fetch"
      crossOrigin="anonymous"
    />

    This is especially useful if you’re optimizing for first paint or overall performance.

    Simple Parallax: A New Approach with Data Binding

    In the first part of this article, we used a classic approach with a State Machine to create the parallax animation in Rive. We built four separate animations (top, bottom, left, right), controlled them using input variables, and blended their states to create smooth motion. This method made sense at the time, especially before Data Binding support was introduced.

    But now that Data Binding is available in Rive, achieving the same effect is much simpler—just a few steps. Data binding in Rive is a system that connects editor elements to dynamic data and code via view models, enabling reactive, runtime-driven updates and interactions between design and development.

    In this section, we’ll show how to refactor the original Rive file and code using the new approach.

    Updating the Rive File

    1. Remove the old setup:
      • Go to the State Machine.
      • Delete the input variables: top, bottom, left, right.
      • Remove the blending states and their associated animations.
    2. Group the parallax layers:
      • Wrap all the parallax layers into a new group—e.g., ParallaxGroup.
    3. Create binding parameters:
      • Select ParallaxGroup and add:
        • pointerX (Number)
        • pointerY (Number)
    4. Bind coordinates:
      • In the properties panel, set:
        • X → pointerX
        • Y → pointerY

    Now the group will move dynamically based on values passed from JavaScript.

    The Updated JS Code

    Before we dive into the updated JavaScript, let’s quickly define an important concept:

    When using Data Binding in Rive, viewModelInstance refers to the runtime object that links your Rive file’s bindable properties (like pointerX or pointerY) to your app’s logic. In the Rive editor, you assign these properties to elements like positions, scales, or rotations. At runtime, your code accesses and updates them through the viewModelInstance—allowing for real-time, declarative control without needing a State Machine.

    With that in mind, here’s how the new setup replaces the old input-driven logic:

    import { useRive } from '@rive-app/react-canvas';
    import { useEffect, useState } from 'react';
    
    export function ParallaxEffect({ className }: { className?: string }) {
      const [isLoaded, setIsLoaded] = useState(false);
    
      const { rive, setCanvasRef, setContainerRef } = useRive({
        src: 'https://cdn.rive.app/animations/hero.riv',
        autoplay: true,
        autoBind: true,
        onLoad: () => setIsLoaded(true),
      });
    
      useEffect(() => {
        if (!rive) return;
    
        const vmi = rive.viewModelInstance;
        const pointerX = vmi?.number('pointerX');
        const pointerY = vmi?.number('pointerY');
    
        if (!pointerX || !pointerY) return;
    
        const handleMouseMove = (e: MouseEvent) => {
          const { innerWidth, innerHeight } = window;
          const x = (e.clientX / innerWidth) * 100;
          const y = 100 - (e.clientY / innerHeight) * 100;
          pointerX.value = x;
          pointerY.value = y;
        };
    
        window.addEventListener('mousemove', handleMouseMove);
    
        return () => {
          window.removeEventListener('mousemove', handleMouseMove);
          rive.cleanup();
        };
      }, [rive]);
    
      return (
        <div
          ref={setContainerRef}
          className={`rive-container ${className ?? ''} ${isLoaded ? 'show' : 'hide'}`}
        >
          <canvas ref={setCanvasRef} />
        </div>
      );
    }

    The Result

    You get the same parallax effect, but:

    • without input variables or blending;
    • without a State Machine;
    • with simple control via the ViewModel.

    Official Live Example from Rive

    👉 CodeSandbox: Data Binding Parallax

    Conclusion

    Data Binding is a major step forward for interactive Rive animations. Effects like parallax can now be set up faster, more reliably, and with cleaner logic. We strongly recommend this approach for new projects.

    Final Thoughts

    So why did we choose Rive over Lottie for this project?

    • Interactivity: With Lottie, achieving the same level of interactivity would’ve required building a custom logic layer from scratch. With Rive, we got that behavior baked into the file—plug and play.
    • Optimization: Rive gives you more control over each asset inside the .riv file, and the output tends to be lighter overall.

    Our biggest takeaway? Don’t be afraid to experiment with new tools—especially when they feel like the right fit for your project’s concept. Rive matched the playful, interactive vibe of Valley Adventures perfectly, and we’re excited to keep exploring what it can do.



    Source link

  • Sine and Cosine – A friendly guide to the unit circle



    Welcome to the world of sine and cosine! These two functions are the backbone of trigonometry, and they’re much simpler than they seem. In this article, we will explore the unit circle, the home of sine and cosine, and learn





    Source link

  • Deploy CoreML Models on the Server with Vapor | by Drew Althage


    Recently, at Sovrn, we had an AI Hackathon where we were encouraged to experiment with anything related to machine learning. The Hackathon yielded some fantastic projects from across the company. Everything from SQL query generators to chatbots that can answer questions about our products and other incredible work. I thought this would be a great opportunity to learn more about Apple’s ML tools and maybe even build something with real business value.

    A few of my colleagues and I teamed up to play with CreateML and CoreML to see if we could integrate some ML functionality into our iOS app. We got a model trained and integrated into our app in several hours, which was pretty amazing. But we quickly realized that we had a few problems to solve before we could actually ship this thing.

    • The model was hefty. It was about 50MB. That’s a lot of space to take up in our app bundle.
    • We wanted to update the model without releasing a new app version.
    • We wanted to use the model in the web browser as well.

    We didn’t have time to solve all of these problems. But the other day I was exploring the Vapor web framework and the thought hit me, “Why not deploy CoreML models on the server?”



    Source link

  • 3 Fundamental Concepts to Fully Understand how the Fetch API Works | by Jay Cruz


    Cyberpunk-inspired scene with a cyborg Pitbull dog playing fetch.
    Generated with DALL-E 3

    Understanding the Fetch API can be challenging, particularly for those new to JavaScript’s unique approach to handling asynchronous operations. Among the many features of modern JavaScript, the Fetch API stands out for its ability to handle network requests elegantly. However, the syntax of chaining .then() methods can seem unusual at first glance. To fully grasp how the Fetch API works, it’s vital to understand three core concepts:

    In programming, synchronous code is executed in sequence. Each statement waits for the previous one to finish before executing. JavaScript, being single-threaded, runs code in a linear fashion. However, certain operations, like network requests, file system tasks, or timers, could block this thread, making the user experience unresponsive.

    Here’s a simple example of synchronous code:

    function doTaskOne() {
      console.log('Task 1 completed');
    }

    function doTaskTwo() {
      console.log('Task 2 completed');
    }

    doTaskOne();
    doTaskTwo();
    // Output:
    // Task 1 completed
    // Task 2 completed
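
    For contrast, here is a minimal sketch of the asynchronous counterpart (using setTimeout purely for illustration): the second task is scheduled for later instead of blocking the thread, so execution continues past it immediately.

    function doTaskOne() {
      console.log('Task 1 completed');
    }

    function doTaskTwo() {
      // Scheduled to run later; the main thread is not blocked while waiting.
      setTimeout(() => {
        console.log('Task 2 completed');
      }, 1000);
    }

    doTaskOne();
    doTaskTwo();
    console.log('Both tasks started');
    // Output:
    // Task 1 completed
    // Both tasks started
    // Task 2 completed   (about one second later)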



    Source link

  • WebAssembly with Go: Taking Web Apps to the Next Level | by Ege Aytin


    Let’s dive a bit deeper into the heart of our WebAssembly integration by exploring the key segments of our Go-based WASM code.

    The first step involves preparing and specifying our Go code for compilation to a WebAssembly target.

    //go:build wasm
    // +build wasm

    These lines serve as directives to the Go compiler, signaling that the following code is designated for a WebAssembly runtime environment. Specifically:

    • //go:build wasm: A build constraint ensuring the code is compiled only for WASM targets, adhering to modern syntax.
    • // +build wasm: An analogous constraint, utilizing older syntax for compatibility with prior Go versions.

    In essence, these directives guide the compiler to include this code segment only when compiling for a WebAssembly architecture, ensuring an appropriate setup and function within this specific runtime.

    package main

    import (
        "context"
        "encoding/json"
        "syscall/js"

        "google.golang.org/protobuf/encoding/protojson"

        "github.com/Permify/permify/pkg/development"
    )

    var dev *development.Development

    func run() js.Func {
        // The `run` function returns a new JavaScript function
        // that wraps the Go function.
        return js.FuncOf(func(this js.Value, args []js.Value) interface{} {
            // t will be used to store the unmarshaled JSON data.
            // The use of an empty interface{} type means it can hold any type of value.
            var t interface{}

            // Unmarshal JSON from the JavaScript function argument (args[0]) into Go's data structure (a map).
            // args[0].String() gets the JSON string from the JavaScript argument,
            // which is then converted to bytes and unmarshaled (parsed) into `t`.
            err := json.Unmarshal([]byte(args[0].String()), &t)

            // If an error occurs while unmarshaling (parsing) the JSON,
            // return an array with the error message "invalid JSON" to JavaScript.
            if err != nil {
                return js.ValueOf([]interface{}{"invalid JSON"})
            }

            // Assert that the parsed JSON (`t`) is a map with string keys.
            // This step ensures the unmarshaled JSON is of the expected type (map).
            input, ok := t.(map[string]interface{})

            // If the assertion fails (`ok` is false),
            // return an array with the error message "invalid JSON" to JavaScript.
            if !ok {
                return js.ValueOf([]interface{}{"invalid JSON"})
            }

            // Run the main logic of the application with the parsed input.
            // `dev.Run` processes `input` and returns any errors encountered along the way.
            errors := dev.Run(context.Background(), input)

            // If no errors are present (the `errors` slice is empty),
            // return an empty array to JavaScript to indicate success.
            if len(errors) == 0 {
                return js.ValueOf([]interface{}{})
            }

            // If there are errors, marshal each one to a JSON string.
            // `vs` is a slice that will store each of these JSON error strings.
            vs := make([]interface{}, 0, len(errors))

            // Iterate through each error in the `errors` slice.
            for _, r := range errors {
                // Convert the error `r` to a JSON string and store it in `result`.
                // If marshaling fails, return an array with that error message to JavaScript.
                result, err := json.Marshal(r)
                if err != nil {
                    return js.ValueOf([]interface{}{err.Error()})
                }
                // Add the JSON error string to the `vs` slice.
                vs = append(vs, string(result))
            }

            // Return the `vs` slice (containing all JSON error strings) to JavaScript.
            return js.ValueOf(vs)
        })
    }

    Within Permify, the run function is the cornerstone, performing a crucial bridging operation between JavaScript inputs and Go's processing capabilities. It orchestrates real-time data interchange in JSON format, ensuring that Permify's core functionalities are smoothly and instantaneously accessible via a browser interface.

    Digging into run:

    • JSON Data Interchange: Translating JavaScript inputs into a format utilizable by Go, the function unmarshals JSON, transferring data between JS and Go, assuring that the robust processing capabilities of Go can seamlessly manipulate browser-sourced inputs.
    • Error Handling: Ensuring clarity and user-awareness, it conducts meticulous error-checking during data parsing and processing, returning relevant error messages back to the JavaScript environment to ensure user-friendly interactions.
    • Contextual Processing: By employing dev.Run, it processes the parsed input within a certain context, managing application logic while handling potential errors to assure steady data management and user feedback.
    • Bidirectional Communication: As errors are marshaled back into JSON format and returned to JavaScript, the function ensures a two-way data flow, keeping both environments in synchronized harmony.

    Thus, through adeptly managing data, error-handling, and ensuring a fluid two-way communication channel, run serves as an integral bridge, linking JavaScript and Go to ensure the smooth, real-time operation of Permify within a browser interface. This facilitation of interaction not only heightens user experience but also leverages the respective strengths of JavaScript and Go within the Permify environment.

    // Continuing from the previously discussed code...

    func main() {
        // Instantiate a channel, 'ch', with no buffer, acting as a synchronization point for the goroutine.
        ch := make(chan struct{}, 0)

        // Create a new instance of 'Container' from the 'development' package and assign it to the global variable 'dev'.
        dev = development.NewContainer()

        // Attach the previously defined 'run' function to the global JavaScript object,
        // making it callable from the JavaScript environment.
        js.Global().Set("run", run())

        // Utilize a channel receive expression to halt the 'main' goroutine, preventing the program from terminating.
        <-ch
    }

    1. ch := make(chan struct{}, 0): A synchronization channel is created to coordinate the activity of goroutines (concurrent threads in Go).
    2. dev = development.NewContainer(): Initializes a new container instance from the development package and assigns it to dev.
    3. js.Global().Set("run", run()): Exposes the Go run function to the global JavaScript context, enabling JavaScript to call Go functions.
    4. <-ch: Halts the main goroutine indefinitely, ensuring that the Go WebAssembly module remains active in the JavaScript environment.

    In summary, the code establishes a Go environment running within WebAssembly that exposes specific functionality (run function) to the JavaScript side and keeps itself active and available for function calls from JavaScript.

    Before we delve into Permify’s rich functionalities, it’s paramount to elucidate the steps of converting our Go code into a WASM module, priming it for browser execution.

    For enthusiasts eager to delve deep into the complete Go codebase, don’t hesitate to browse our GitHub repository: Permify Wasm Code.

    Kickstart the transformation of our Go application into a WASM binary with this command:

    GOOS=js GOARCH=wasm go build -o permify.wasm main.go

    This directive cues the Go compiler to churn out a .wasm binary attuned for JavaScript environments, with main.go as the source. The output, permify.wasm, is a concise rendition of our Go capabilities, primed for web deployment.

    In conjunction with the WASM binary, the Go ecosystem offers an indispensable JavaScript piece named wasm_exec.js. It’s pivotal for initializing and facilitating our WASM module within a browser setting. You can typically locate this essential script inside the Go installation, under misc/wasm.

    However, to streamline your journey, we’ve hosted wasm_exec.js right here for direct access: wasm_exec.

    cp "$(go env GOROOT)/misc/wasm/wasm_exec.js" .

    Equipped with these pivotal assets — the WASM binary and its companion JavaScript — the stage is set for their integration into our frontend.

    To kick things off, ensure you have a directory structure that clearly separates your WebAssembly-related code from the rest of your application. In the structure below, the loadWasm folder is where all the magic happens:

    loadWasm/

    ├── index.tsx // Your main React component that integrates WASM.
    ├── wasm_exec.js // Provided by Go, bridges the gap between Go's WASM and JS.
    └── wasmTypes.d.ts // TypeScript type declarations for WebAssembly.

    To view the complete structure and delve into the specifics of each file, refer to the Permify Playground on GitHub.

    Inside the wasmTypes.d.ts, global type declarations are made which expand upon the Window interface to acknowledge the new methods brought in by Go’s WebAssembly:

    declare global {
      export interface Window {
        Go: any;
        run: (shape: string) => any[];
      }
    }
    export {};

    This ensures TypeScript recognizes the Go constructor and the run method when called on the global window object.

    In index.tsx, several critical tasks are accomplished:

    • Import Dependencies: First off, we import the required JS and TypeScript declarations:

      import "./wasm_exec.js";
      import "./wasmTypes.d.ts";

    • WebAssembly Initialization: The asynchronous function loadWasm takes care of the entire process:

      async function loadWasm(): Promise<void> {
        const goWasm = new window.Go();
        const result = await WebAssembly.instantiateStreaming(
          fetch("play.wasm"),
          goWasm.importObject
        );
        goWasm.run(result.instance);
      }

    Here, new window.Go() initializes the Go WASM environment. WebAssembly.instantiateStreaming fetches the WASM module, compiles it, and creates an instance. Finally, goWasm.run activates the WASM module.
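
    One practical note: WebAssembly.instantiateStreaming requires the server to deliver the .wasm file with the Content-Type: application/wasm header. If your setup cannot guarantee that, a common fallback is to fetch the bytes first and call WebAssembly.instantiate. Below is a minimal sketch under that assumption, with loadWasmWithFallback as an illustrative name:

      async function loadWasmWithFallback(): Promise<void> {
        const goWasm = new window.Go();
        try {
          // Fast path: compile while the bytes stream in
          // (requires the application/wasm MIME type).
          const result = await WebAssembly.instantiateStreaming(
            fetch("play.wasm"),
            goWasm.importObject
          );
          goWasm.run(result.instance);
        } catch {
          // Fallback: download fully, then compile and instantiate from an ArrayBuffer.
          const bytes = await (await fetch("play.wasm")).arrayBuffer();
          const result = await WebAssembly.instantiate(bytes, goWasm.importObject);
          goWasm.run(result.instance);
        }
      }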

    • React Component with Loader UI: The LoadWasm component uses the useEffect hook to asynchronously load the WebAssembly when the component mounts:

      export const LoadWasm: React.FC<React.PropsWithChildren<{}>> = (props) => {
        const [isLoading, setIsLoading] = React.useState(true);

        useEffect(() => {
          loadWasm().then(() => {
            setIsLoading(false);
          });
        }, []);

        if (isLoading) {
          return (
            <div className="wasm-loader-background h-screen">
              <div className="center-of-screen">
                <SVG src={toAbsoluteUrl("/media/svg/rocket.svg")} />
              </div>
            </div>
          );
        } else {
          return <React.Fragment>{props.children}</React.Fragment>;
        }
      };

    While loading, an SVG rocket is displayed to indicate that initialization is ongoing. This feedback is crucial, as users might otherwise be unsure what is happening behind the scenes. Once loading completes, the children components or content will render.

    Given that your Go WASM module exposes a method named run, you can invoke it as follows:

    function Run(shape) {
      return new Promise((resolve) => {
        let res = window.run(shape);
        resolve(res);
      });
    }

    This function essentially acts as a bridge, allowing the React frontend to communicate with the Go backend logic encapsulated in the WASM.

    To integrate a button that triggers the WebAssembly function when clicked, follow these steps:

    1. Creating the Button Component

    First, we’ll create a simple React component with a button:

    import React from "react";

    type RunButtonProps = {
      shape: string;
      onResult: (result: any[]) => void;
    };

    function RunButton({ shape, onResult }: RunButtonProps) {
      const handleClick = async () => {
        let result = await Run(shape);
        onResult(result);
      };

      return <button onClick={handleClick}>Run WebAssembly</button>;
    }

    In the code above, the RunButton component accepts two props:

    • shape: The shape argument to pass to the WebAssembly run function.
    • onResult: A callback function that receives the result of the WebAssembly function and can be used to update the state or display the result in the UI.
    2. Integrating the Button in the Main Component

    Now, in your main component (or wherever you’d like to place the button), integrate the RunButton:

    import React, { useState } from "react";
    import RunButton from "./path_to_RunButton_component"; // Replace with the actual path

    function App() {
      const [result, setResult] = useState<any[]>([]);

      // Define the shape content
      const shapeContent = {
        schema: `|-
          entity user {}

          entity account {
            relation owner @user
            relation following @user
            relation follower @user

            attribute public boolean
            action view = (owner or follower) or public
          }

          entity post {
            relation account @account

            attribute restricted boolean

            action view = account.view

            action comment = account.following not restricted
            action like = account.following not restricted
          }`,
        relationships: [
          "account:1#owner@user:kevin",
          "account:2#owner@user:george",
          "account:1#following@user:george",
          "account:2#follower@user:kevin",
          "post:1#account@account:1",
          "post:2#account@account:2",
        ],
        attributes: [
          "account:1$public|boolean:true",
          "account:2$public|boolean:false",
          "post:1$restricted|boolean:false",
          "post:2$restricted|boolean:true",
        ],
        scenarios: [
          {
            name: "Account Viewing Permissions",
            description:
              "Evaluate account viewing permissions for 'kevin' and 'george'.",
            checks: [
              {
                entity: "account:1",
                subject: "user:kevin",
                assertions: {
                  view: true,
                },
              },
            ],
          },
        ],
      };

      return (
        <div>
          <RunButton shape={JSON.stringify(shapeContent)} onResult={setResult} />
          <div>
            Results:
            <ul>
              {result.map((item, index) => (
                <li key={index}>{item}</li>
              ))}
            </ul>
          </div>
        </div>
      );
    }

    In this example, App is a component that contains the RunButton. When the button is clicked, the result from the WebAssembly function is displayed in a list below the button.

    Throughout this exploration, we unfolded the integration of WebAssembly with Go, illuminating a path toward enhanced web development and more responsive user interactions in the browser.

    The journey involved setting up the Go environment, converting Go code to WebAssembly, and executing it within a web context, ultimately giving life to the interactive platform showcased at play.permify.co.

    This platform stands not only as an example but as a demonstration of the concrete capabilities achievable when these technologies are combined.



    Source link

  • Advisory: Pahalgam Attack themed decoys used by APT36 to target the Indian Government

    Advisory: Pahalgam Attack themed decoys used by APT36 to target the Indian Government


    Seqrite Labs APT team has discovered “Pahalgam Terror Attack” themed documents being used by the Pakistan-linked APT group Transparent Tribe (APT36) to target Indian Government and Defense personnel. The campaign involves both credential phishing and deployment of malicious payloads, with fake domains impersonating Jammu & Kashmir Police and Indian Air Force (IAF) created shortly after the April 22, 2025 attack. This advisory details the phishing PDFs and domains used, so defenders can uncover similar activity, along with the macro-laced document used to deploy the group’s well-known Crimson RAT.

    Analysis

    The PDF in question was created on April 24, 2025, with the author listed as “Kalu Badshah”. The names of these phishing documents relate to the Indian Government’s response measures to the attack:

    • “Action Points & Response by Govt Regarding Pahalgam Terror Attack .pdf”
    • “Report Update Regarding Pahalgam Terror Attack.pdf”

    The content of the document is masked and the link embedded within the document is the primary vector for the attack. If clicked, it leads to a fake login page which is part of a social engineering effort to lure individuals. The embedded URL triggered is:

    • hxxps://jkpolice[.]gov[.]in[.]kashmirattack[.]exposed/service/home/

    The domain mimics the legitimate Jammu & Kashmir Police site (jkpolice[.]gov[.]in), an official Indian police website, but the fake one is actually a subdomain of kashmirattack[.]exposed.


    The addition of “kashmirattack” indicates a thematic connection to the sensitive geopolitical issue, in this case, related to the recent attack in the Kashmir region. Once the government credentials are entered for @gov.in or @nic.in, they are sent directly back to the host. Pivoting on the author’s name, we observed multiple such phishing documents.


    Multiple names have been observed for each phishing document related to various government and defence meetings to lure the targets, showcasing how quickly the group crafts lures around ongoing events in the country:

    • Report & Update Regarding Pahalgam Terror Attack.pdf
    • Report Update Regarding Pahalgam Terror Attack.pdf
    • Action Points & Response by Govt Regarding Pahalgam Terror Attack .pdf
    • J&K Police Letter Dated 17 April 2025.pdf
    • ROD on Review Meeting held on 10 April 2025 by Secy DRDO.pdf
    • RECORD OF DISCUSSION TECHNICAL REVIEW MEETING NOTICE, 07 April 2025 (1).pdf
    • MEETING NOTICE – 13th JWG meeting between India and Nepal.pdf
    • Agenda Points for Joint Venture Meeting at IHQ MoD on 04 March 2025.pdf
    • DO Letter Integrated HQ of MoD dated 3 March.pdf
    • Collegiate Meeting Notice & Action Points MoD 24 March.pdf
    • Letter to the Raksha Mantri Office Dated 26 Feb 2025.pdf
    • pdf
    • Alleged Case of Sexual Harassment by Senior Army Officer.pdf
    • Agenda Points of Meeting of Dept of Defence held at 11March 25.html
    • Action Points of Meeting of Dept of Defence held at 10March 25.html
    • Agenda Points of Meeting of External Affairs Dept 10 March 25.pdf.html

    PowerPoint PPAM Dropper

    A PowerPoint add-in file with the same name as the phishing document, “Report & Update Regarding Pahalgam Terror Attack.ppam”, has been identified which contains malicious macros. It extracts the embedded files into a hidden directory with a dynamic name under the user’s profile, determines the payload based on the Windows version, and eventually opens the decoy file, which embeds the same phishing URL, while executing the Crimson RAT payload.


    The final Crimson RAT payload has the internal name “jnmxrvt hcsm.exe” and is dropped as “WEISTT.jpg”, with a similar PDB convention:

    • C:\jnmhxrv cstm\jnmhxrv cstm\obj\Debug\jnmhxrv cstm.pdb

    All three RAT payloads have a compilation timestamp of 2025-04-21, just before the Pahalgam terror attack. As usual, a hardcoded default IP is present as a decoy; the actual C2 after decoding is 93.127.133[.]58. Apart from retrieving system and user information, the RAT supports the following 22 commands for command and control.

    Command | Functionality
    procl / getavs | Get a list of all processes
    endpo | Kill a process based on PID
    scrsz | Set the screen size to capture
    cscreen | Get a screenshot
    dirs | Get all disk drives
    stops | Stop screen capture
    filsz | Get file information (name, creation time, size)
    dowf | Download a file from the C2
    cnls | Stop uploading, downloading, and screen capture
    scren | Get screenshots continuously
    thumb | Get a thumbnail of the image as a GIF, sized 200×150
    putsrt | Set persistence via the Run registry key
    udlt | Download & execute a file from the C2 under the name ‘vdhairtn’
    delt | Delete a file
    file | Exfiltrate a file to the C2
    info | Get machine info (computer name, username, IP, OS name, etc.)
    runf | Execute a command
    afile | Exfiltrate a file to the C2 with additional information
    listf | Search files based on extension
    dowr | Download a file from the C2 (no execution)
    fles | Get the list of files in a directory
    fldr | Get the list of folders in a directory

    Infrastructure and Attribution

    The phishing domains identified through hunting were registered within a day or two of the corresponding documents’ creation.

    Domain | Created | IP | ASN
    jkpolice[.]gov[.]in[.]kashmirattack[.]exposed | 2025-04-24 | 37.221.64.134, 78.40.143.189 | AS 200019 (Alexhost Srl), AS 45839 (Shinjiru Technology)
    iaf[.]nic[.]in[.]ministryofdefenceindia[.]org | 2025-04-16 | 37.221.64.134 | AS 200019 (Alexhost Srl)
    email[.]gov[.]in[.]ministryofdefenceindia[.]org | 2025-04-16 | 45.141.58.224 | AS 213373 (IP Connect Inc)
    email[.]gov[.]in[.]departmentofdefenceindia[.]link | 2025-02-18 | 45.141.59.167 | AS 213373 (IP Connect Inc)
    email[.]gov[.]in[.]departmentofdefence[.]de | 2025-04-10 | 45.141.58.224 | AS 213373 (IP Connect Inc)
    email[.]gov[.]in[.]briefcases[.]email | 2025-04-06 | 45.141.58.224, 78.40.143.98 | AS 213373 (IP Connect Inc), AS 45839 (Shinjiru Technology)
    email[.]gov[.]in[.]modindia[.]link | 2025-03-02 | 84.54.51.12 | AS 200019 (Alexhost Srl)
    email[.]gov[.]in[.]defenceindia[.]ltd | 2025-03-20 | 45.141.58.224, 45.141.58.33 | AS 213373 (IP Connect Inc)
    email[.]gov[.]in[.]indiadefencedepartment[.]link | 2025-02-25 | 45.141.59.167 | AS 213373 (IP Connect Inc)
    email[.]gov[.]in[.]departmentofspace[.]info | 2025-04-20 | 45.141.58.224 | AS 213373 (IP Connect Inc)
    email[.]gov[.]in[.]indiangov[.]download | 2025-04-06 | 45.141.58.33, 78.40.143.98 | AS 213373 (IP Connect Inc), AS 45839 (Shinjiru Technology)
    indianarmy[.]nic[.]in[.]departmentofdefence[.]de | 2025-04-10 | 176.65.143.215 | AS 215208
    indianarmy[.]nic[.]in[.]ministryofdefenceindia[.]org | 2025-04-16 | 176.65.143.215 | AS 215208
    email[.]gov[.]in[.]indiandefence[.]work | 2025-03-10 | 45.141.59.72 | AS 213373 (IP Connect Inc)
    email[.]gov[.]in[.]indiangov[.]download | 2025-04-06 | 78.40.143.98 | AS 45839 (Shinjiru Technology)
    email[.]gov[.]in[.]drdosurvey[.]info | 2025-03-19 | 192.64.118.76 | AS 22612 (NAMECHEAP-NET)

    This kind of attack is typical in hacktivism, where the goal is to create chaos or spread a political message by exploiting sensitive or emotionally charged issues. In this case, the threat actor is exploiting existing tensions surrounding Kashmir to maximize the impact of their campaign and extract intelligence around these issues.

    The suspicious domains are part of a phishing and disinformation infrastructure consistent with tactics previously used by APT36 (Transparent Tribe) that has a long history of targeting:

    • Indian military personnel
    • Government agencies
    • Defense and research organizations
    • Activists and journalists focused on Kashmir

    PPAM files have been used for initial access for many years, embedding malicious executables as OLE objects. Domain impersonation to create deceptive URLs that mimic Indian government or military infrastructure has been seen consistently since last year. The group often exploits sensitive topics like the Kashmir conflict, border skirmishes, and military movements to craft lures for spear-phishing campaigns. Hence these campaigns, which delivered Crimson RAT hidden behind fake documents or malicious links embedded in spoofed domains, are attributed to APT36 with high confidence.

    Potential Impact: Geopolitical and Cybersecurity Implications

    The combination of a geopolitical theme and cybersecurity tactics suggests that this document is part of a broader disinformation campaign. The reference to Kashmir, a region with longstanding political and territorial disputes, indicates the attacker’s intention to exploit sensitive topics to stir unrest or create division.

    Additionally, using PDF files as a delivery mechanism for malicious links is a proven technique aiming to influence public perception, spread propaganda, or cause disruptions. Here’s how the impact could manifest:

    • Disruption of Sensitive Operations: If an official or government worker were to interact with this document, it could compromise their personal or organizational security.
    • Information Operations: The document could lead to the exposure of sensitive documents or the dissemination of false information, thereby creating confusion and distrust among the public.
    • Espionage and Data Breaches: The phishing attempt could ultimately lead to the theft of sensitive data or the deployment of malware within the target’s network, paving the way for further exploitation.

    Recommendations

    Email & Document Screening: Implement advanced threat protection to scan PDFs and attachments for embedded malicious links or payloads.

    Restrict Macro Execution: Disable macros by default, especially from untrusted sources, across all endpoints.

    Network Segmentation & Access Controls: Limit access to sensitive systems and data; apply the principle of least privilege.

    User Awareness & Training: Conduct regular training on recognizing phishing, disinformation, and geopolitical manipulation tactics.

    Incident Response Preparedness: Ensure a tested response plan is in place for phishing, disinformation, or suspected nation-state activity.

    Threat Intelligence Integration: Leverage geopolitical threat intel to identify targeted campaigns and proactively block indicators of compromise (IOCs).

    Monitor for Anomalous Behaviour: Use behavioural analytics to detect unusual access patterns or data exfiltration attempts.

    IOCs

    Phishing Documents

    c4fb60217e3d43eac92074c45228506a

    172fff2634545cf59d59c179d139e0aa

    7b08580a4f6995f645a5bf8addbefa68

    1b71434e049fb8765d528ecabd722072

    c4f591cad9d158e2fbb0ed6425ce3804

    5f03629508f46e822cf08d7864f585d3

    f5cd5f616a482645bbf8f4c51ee38958

    fa2c39adbb0ca7aeab5bc5cd1ffb2f08

    00cd306f7cdcfe187c561dd42ab40f33

    ca27970308b2fdeaa3a8e8e53c86cd3e

    Phishing Domains

    jkpolice[.]gov[.]in[.]kashmirattack[.]exposed

    iaf[.]nic[.]in[.]ministryofdefenceindia[.]org

    email[.]gov[.]in[.]ministryofdefenceindia[.]org

    email[.]gov[.]in[.]departmentofdefenceindia[.]link

    email[.]gov[.]in[.]departmentofdefence[.]de

    email[.]gov[.]in[.]briefcases[.]email

    email[.]gov[.]in[.]modindia[.]link

    email[.]gov[.]in[.]defenceindia[.]ltd

    email[.]gov[.]in[.]indiadefencedepartment[.]link

    email[.]gov[.]in[.]departmentofspace[.]info

    email[.]gov[.]in[.]indiangov[.]download

    indianarmy[.]nic[.]in[.]departmentofdefence[.]de

    indianarmy[.]nic[.]in[.]ministryofdefenceindia[.]org

    email[.]gov[.]in[.]indiandefence[.]work

    email[.]gov[.]in[.]indiangov[.]download

    email[.]gov[.]in[.]drdosurvey[.]info

    Phishing URLs

    hxxps://iaf[.]nic[.]in[.]ministryofdefenceindia[.]org/publications/default[.]htm

    hxxps://jkpolice[.]gov[.]in[.]kashmirattack[.]exposed/service/home

    hxxps://email[.]gov[.]in[.]ministryofdefenceindia[.]org/service/home/

    hxxps://email[.]gov[.]in[.]departmentofdefenceindia[.]link/service/home/

    hxxps://email[.]gov[.]in[.]departmentofdefence[.]de/service/home/

    hxxps://email[.]gov[.]in[.]indiangov[.]download/service/home/

    hxxps://indianarmy[.]nic[.]in[.]departmentofdefence[.]de/publications/publications-site-main/index[.]html

    hxxps://indianarmy[.]nic[.]in[.]ministryofdefenceindia[.]org/publications/publications-site-main/index[.]htm

    hxxps://email[.]gov[.]in[.]briefcases[.]email/service/home/

    hxxps://email[.]gov[.]in[.]modindia[.]link/service/home/

    hxxps://email[.]gov[.]in[.]defenceindia[.]ltd/service/home/

    hxxps://email[.]gov[.]in[.]indiadefencedepartment[.]link/service/home/

    hxxps://email[.]gov[.]in[.]departmentofspace[.]info/service/home/

    hxxps://email[.]gov[.]in[.]indiandefence[.]work/service/home/

    PPAM/XLAM

    d946e3e94fec670f9e47aca186ecaabe

    e18c4172329c32d8394ba0658d5212c2

    2fde001f4c17c8613480091fa48b55a0

    c1f4c9f969f955dec2465317b526b600

    Crimson RAT

    026e8e7acb2f2a156f8afff64fd54066

    fb64c22d37c502bde55b19688d40c803

    70b8040730c62e4a52a904251fa74029

    3efec6ffcbfe79f71f5410eb46f1c19e

    b03211f6feccd3a62273368b52f6079d

    93.127.133.58 (Ports – 1097, 17241, 19821, 21817, 23221, 27425)

    104.129.27.14 (Ports – 8108, 16197, 19867, 28784, 30123)

    MITRE ATT&CK

    Tactic | Technique | Name
    Reconnaissance | T1598.003 | Phishing for Information: Spearphishing Link
    Resource Development | T1583.001 | Acquire Infrastructure: Domains
    Initial Access | T1566.001 | Phishing: Spearphishing Attachment
    Execution | T1204.001 | User Execution: Malicious Link
    Execution | T1059.005 | Command and Scripting Interpreter: Visual Basic
    Persistence | T1547.001 | Boot or Logon Autostart Execution: Registry Run Keys / Startup Folder
    Discovery | T1033 | System Owner/User Discovery
    Discovery | T1057 | Process Discovery
    Discovery | T1082 | System Information Discovery
    Discovery | T1083 | File and Directory Discovery
    Collection | T1005 | Data from Local System
    Collection | T1113 | Screen Capture
    Exfiltration | T1041 | Exfiltration Over C2 Channel

    Authors:

    Sathwik Ram Prakki

    Rhishav Kanjilal



    Source link

  • Native Design Tokens: The Foundation of Consistent, Scalable, Open Design

    Native Design Tokens: The Foundation of Consistent, Scalable, Open Design


    As design and development teams grow and projects span across web, mobile, and internal tools, keeping everything consistent becomes tricky. Even small changes, like updating a brand color or adjusting spacing, can turn into hours of manual work across design files, codebases, and documentation. It is easy for things to drift out of sync.

    That is where design tokens come in. They are a way to define and reuse the key design decisions like colors, typography, and spacing in a format that both designers and developers can use. Instead of repeating values manually, tokens let teams manage these decisions from a central place and apply them consistently across tools and platforms.

    With Penpot’s new native support for design tokens, this workflow becomes more accessible and better integrated. Designers can now create and manage tokens directly inside their design files. Developers can rely on those same tokens being structured and available for use in code. No plugins, no copy pasting, no mismatched styles.

    In this article, we will look at what design tokens are and why they matter, walk through how Penpot implements them, and explore some real world workflows and use cases. Whether you are working solo or managing a large design system, tokens can help bring order and clarity to your design decisions—and we will show you how.

    What are Design Tokens?

    Design tokens are a way to describe the small but important visual decisions that make up your user interface. Things like primary colors, heading sizes, border radius, or spacing between elements. Instead of hardcoding those values in a design file or writing them directly into code, you give each one a name and store it as a token.

    Each token is a small piece of structured data. It has a name, a value, and a type. For example, a button background might be defined like this:

    "button-background": {
      "$value": "#005FCC",
      "$type": "color"
    }

    By putting all your decisions into a token format like this, they can be shared and reused across different projects and tools. Designers can use tokens inside the design tool, while developers can use them to generate CSS variables, theme files, or design system code. It is a way to keep everyone aligned, without needing to sync manually.
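
    To make that concrete, here is a minimal sketch in TypeScript of how a token object in this format could be flattened into CSS custom properties. The types and function name are illustrative, not part of any specific Penpot export:

    // Minimal sketch: flatten a DTCG-style token object into CSS custom properties.
    // The token shape here is illustrative, not a specific Penpot export format.
    type Token = { $value: string; $type: string };
    type TokenGroup = { [name: string]: Token | TokenGroup };

    function toCssVariables(tokens: TokenGroup, prefix = ""): string[] {
      return Object.entries(tokens).flatMap(([name, node]) => {
        const path = prefix ? `${prefix}-${name}` : name;
        return "$value" in node
          ? [`--${path}: ${(node as Token).$value};`]
          : toCssVariables(node as TokenGroup, path);
      });
    }

    // toCssVariables({ "button-background": { $value: "#005FCC", $type: "color" } })
    // → ["--button-background: #005FCC;"]

    Wrapped in a :root { ... } rule, the generated lines can be dropped into a stylesheet, so changing a token value propagates to every component that uses the corresponding variable.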

    The idea behind tokens has been around for a while, but it is often hard to implement unless you are using very specific tools or have custom workflows in place. Penpot changes that by building token support directly into the tool. You do not need extra plugins or complex naming systems. You define tokens once, and they are available everywhere in your design.

    Tokens are also flexible. You can create simple ones like colors or font sizes, or more complex groups for shadows, typography, or spacing systems. You can even reference other tokens, so if your design language evolves, you only need to change one thing.

    Why Should You Care About Design Tokens?

    Consistency and efficiency are two of the main reasons design tokens are becoming essential in design and development work. They reduce the need for manual coordination, avoid inconsistencies, and make it easier to scale design decisions. Here is how they help across different roles:

    For designers
    Tokens remove the need to repeat yourself. Instead of manually applying the same color or spacing across every frame, you define those values once and apply them as tokens. That means no more copy-pasting styles or fixing inconsistencies later. Everything stays consistent, and updates take seconds, not hours.

    For developers
    You get design values in a format that is ready to use. Tokens act as a shared language between design and code, so instead of pulling hex codes out of a mockup, you work directly with the same values defined by the design team. It reduces friction, avoids mismatches, and makes handoff smoother.

    For teams and larger systems
    Tokens are especially useful when multiple people are working on the same product or when you are managing a design system across several platforms or brands. They allow you to define decisions once and reuse them everywhere, keeping things in sync and easy to update when the brand evolves or when new platforms are added.

    Watch this quick and complete demo as Laura Kalbag, designer, developer and educator at Penpot, highlights the key benefits and main uses of Penpot’s design tokens:

    What Sets Penpot Apart?

    Penpot is not just adding support for design tokens as a separate feature. Tokens are being built directly into how Penpot works. They are part of the core design process, not an extra tool you have to manage on the side.

    You can create tokens from the canvas or from the token panel, organize them into sets, and apply them to components, styles, or entire boards. You do not need to keep track of where a value is used—Penpot does that for you. When you change a token, any component using it updates automatically.

    Take a look at this really great overview:

    Tokens in Penpot follow the same format defined by the Design Tokens Community Group, which makes them easy to sync with code and other tools. They are stored in a way that works across platforms, and they are built to be shared, copied, or extended as your project grows.

    You also get extra capabilities like:

    • Tokens that can store text, numbers, and more
    • Math operations between tokens (for example, spacing that is based on a base value)
    • Integration with Penpot’s graph engine, so you can define logic and conditions around your tokens

    That means you can do more than just store values—you can create systems that adapt based on context or scale with your product.

    Key features

    Penpot design tokens support different token types, themes, and sets.

    Design tokens in Penpot are built to be practical and flexible from the start. Whether you are setting up a simple style guide or building a full design system, these features help you stay consistent without extra effort.

    • Native to the platform
      Tokens are a core part of Penpot. You do not need plugins, workarounds, or naming tricks to make them work. You can create, edit, and apply them directly in your files.
    • Based on open standards
      Penpot follows the format defined by the Design Tokens Community Group (W3C), which means your tokens are portable and ready for integration with other tools or codebases.
    • Component aware
      You can inspect which tokens are applied to components right on the canvas, and copy them out for use in code or documentation.
    • Supports multiple types
      Tokens can represent strings, numbers, colors, font families, shadows, and more. This means you are not limited to visual values—you can also manage logic-based or structural decisions.
    • Math support
      Define tokens in relation to others. For example, you can set a spacing token to be twice your base unit, and it will update automatically when the base changes.
    • Graph engine integration
      Tokens can be part of more advanced workflows using Penpot’s visual graph engine. This opens the door for conditional styling, dynamic UI variations, or even generative design.

    Practical Use Cases

    Design tokens are flexible building blocks that can support a range of workflows. Here are a few ways they’re already proving useful:

    • Scaling across platforms
      Tokens make it easier to maintain visual consistency across web, mobile, and desktop interfaces. When spacing, colors, and typography are tokenized, they adapt across screen sizes and tech stacks without manual rework.
    • Creating themes and variants
      Whether you’re supporting light and dark modes, multiple brands, or regional styles, tokens let you swap out entire visual styles by changing a single set of values—without touching your components (see the sketch after this list).
    • Simplifying handoff and implementation
      Because tokens are defined in code-friendly formats, they eliminate guesswork. Developers can use tokens as source-of-truth values, reducing design drift and unnecessary back-and-forth.
    • Prototyping and iterating quickly
      Tokens make it easier to explore design ideas without breaking things. Want to try out a new font scale or update your color palette? Change the token values and everything updates—no tedious find-and-replace needed.
    • Versioning design decisions
      You can track changes to tokens over time just like code. That means your design system becomes easier to maintain, document, and evolve—without losing control.
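
    As a sketch of the theming case above (the token names and values are invented for illustration), two token sets expressed as CSS custom properties can be swapped at runtime:

    // Hypothetical token sets for two themes; names and values are illustrative.
    const themes = {
      light: { "color-bg": "#FFFFFF", "color-text": "#111111" },
      dark:  { "color-bg": "#111111", "color-text": "#EEEEEE" },
    };

    function applyTheme(name: keyof typeof themes): void {
      for (const [token, value] of Object.entries(themes[name])) {
        // Components styled with var(--color-bg) etc. update immediately.
        document.documentElement.style.setProperty(`--${token}`, value);
      }
    }

    applyTheme("dark");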

    Your First Tokens in Penpot

    So how do you actually work with tokens in Penpot?

    The best way to understand design tokens is to try them out. Penpot makes this surprisingly approachable, even if you’re new to the concept. Here’s how to start creating and using tokens inside the editor.

    Creating a Token

    1. Open your project and click on the Tokens tab in the left panel.
    2. You’ll see a list of token types like color, dimension, font size, etc.
    3. Click the + button next to any token type to create a new token.

    You’ll be asked to fill in:

    • Name: Something like dimension.small or color.primary
    • Value: For example, 8px for a dimension, or #005FCC for a color
    • Description (optional): A short note about what it’s for

    Hit Save, and your token will appear in the list. Tokens are grouped by type, so it stays tidy even as your set grows.

    If you try to create a token with a name that already exists, you’ll get an error. Token names must be unique.

    Editing and Duplicating Tokens

    You can right-click any token to edit or duplicate it.

    • Edit: Change the name, value, or description
    • Duplicate: Makes a copy with -copy added to the name

    Handy if you’re exploring alternatives or setting up variants.

    Referencing Other Tokens (Aliases)

    Tokens can point to other tokens. This lets you define a base token and reuse it across multiple other tokens. If the base value changes, everything that references it updates automatically.

    For example:

    1. Create a token called dimension.small with a value of 8px
    2. Create another token called spacing.small
    3. In spacing.small, set the value to {dimension.small}

    Now if you ever update dimension.small to 4px, the spacing token will reflect that change too.

    Token references are case-sensitive, so be precise.
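
    For reference, the alias from the steps above would look roughly like this in the DTCG-style JSON format shown earlier (a sketch, not Penpot’s exact export format):

    {
      "dimension": {
        "small": { "$value": "8px", "$type": "dimension" }
      },
      "spacing": {
        "small": { "$value": "{dimension.small}", "$type": "dimension" }
      }
    }

    Update dimension.small once, and every token referencing it follows.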

    Using Math in Tokens

    Penpot supports simple math in token values—especially useful for dimension tokens.

    You can write things like:

    • {dimension.base} * 2
    • 16 + 4
    • {spacing.small} + {spacing.medium}

    Let’s say dimension.base is 4px, and you want a larger version that’s always double. You can set dimension.large to:

    {dimension.base} * 2

    This means if you ever change the base, the large size follows along.

    Math expressions support basic operators:

    • + addition
    • - subtraction
    • * multiplication

    This adds a lightweight logic layer to your design decisions—especially handy for spacing scales, typography ramps, or breakpoints.

    What’s Next for Penpot Design Tokens?

    Penpot has an exciting roadmap for design tokens that will continue to expand their functionality:

    • GitHub Sync: A feature allowing teams to easily export and import design tokens, facilitating smooth collaboration between design and development teams.
    • Gradients: An upcoming addition to design tokens, enabling designers to work with gradients as part of their design system.
    • REST API & Automation: The future addition of a REST API will enable even deeper integrations and allow teams to automate their design workflows.

    Since Penpot is open source and works in a culture of sharing as much as possible, as early as possible, you can check out their open Taiga board to see what the team is working on in real time and what’s coming up next.

    Conclusion

    Penpot’s design tokens are more than just a tool for managing visual consistency—they are a game-changer for how design and development teams collaborate. Whether you’re a junior UI designer trying to learn scalable design practices, a senior developer looking to streamline design implementation, or an enterprise team managing a complex design system, design tokens can help bring order to complexity.

    As Penpot continues to refine and expand this feature, now is the perfect time to explore the possibilities it offers.

    Give it a try!

    Are you excited about Penpot’s new design token feature? Check it out, explore the potential of scalable design, and stay tuned for updates. We look forward to seeing how you incorporate design tokens into your workflow!



    Source link