Tag: Design

  • How Readymag’s free layout model drives unconventional web design




    Readymag is a design tool for creating websites on a blank canvas. Grids and templates remain useful, but Readymag also makes room for another approach, one where designers can experiment more freely with composition, storytelling, and visual rhythm. As the web evolves, the free layout model feels increasingly relevant beyond art or experimental work. 

    Between structure and freedom

    Design history often swings between order and freedom. Some seek clarity and repetition, while others chase the chance to break rules for expression and surprise. Web design reflects this tension, shaped from the start by both technical limits and visual experimentation.

    Printing technology once dictated strict, grid-based layouts, later formalized by the Swiss school of graphic design. Early web technologies echoed this logic, making grids the default structure for clarity and usability. Yet many have pushed against it. Avant-garde and postmodern designers experimented with chaotic compositions, and on the web, Flash-era sites turned pages into performances.

    Today, grid and freedom approaches coexist. Tools like Readymag make it possible to borrow from both as needed, sometimes emphasizing structure, sometimes prioritizing expressiveness through typography, imagery, and motion.

    The philosophy and psychology of freedom

    If the grid in design symbolizes order, free layout is its breakaway gesture. Beyond altering page composition, it reflects deeper psychological and philosophical drives: the urge to experiment, assert individuality, and search for new meanings. Printing presses produce flawless, identical letters. A handwritten mark is always unique. Free layout works the same way: it allows designers to create something unique and memorable.

    Working without the grid means inviting randomness, juxtaposing the incompatible, chasing unexpected solutions. Not all experiments yield finished products, but they often shape new languages. In this sense, free layout isn’t chaos for chaos’s sake—it’s a laboratory where future standards are born.

    Freedom also changes the user’s experience. While grids reduce cognitive load, free composition is useful in creating emphasis and rhythm. Psychologists note that attention sharpens when expectations are disrupted. The most engaging designs often draw on both approaches, balancing clarity with moments of surprise.

    How it works in practice

    While the philosophy of free layout may sound abstract, tools make it tangible. Each editor or builder imposes its own logic: some enforce rigid structures, others allow almost unlimited freedom. Comparing them shows how this philosophy plays out in practice.

    Classic digital design tools like Photoshop were built as a blank canvas: the designer chooses whether or not to use a grid. Interface tools like Figma also offer both modes—you can stick to columns and auto-layout, or position elements freely and experiment with composition.

    By contrast, pure web builders follow code logic. They work with containers, sections, and grids. Here the designer acts like an architect, assembling a structure that will display consistently across devices, support responsiveness, and guarantee predictability. Freedom is limited in favor of stability and usability.
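    To make the contrast concrete, the two logics can be sketched in a few lines of CSS (a minimal, hypothetical illustration; the class names are not taken from any particular tool):

```css
/* Builder logic: a rigid column grid; every element snaps to a track. */
.page {
  display: grid;
  grid-template-columns: repeat(12, 1fr);
  gap: 16px;
}
.hero  { grid-column: 1 / -1; }  /* spans all twelve columns */
.aside { grid-column: 10 / 13; } /* locked to the last three */

/* Canvas logic: elements sit at arbitrary coordinates,
   the way a freeform tool places them. */
.canvas { position: relative; }
.canvas .headline {
  position: absolute;
  top: 12%;
  left: 40px;
  transform: rotate(-6deg); /* e.g., a headline running diagonally */
}
```

    The first approach guarantees alignment and predictable reflow across devices; the second trades that predictability for compositional freedom, which is essentially the trade-off between builder logic and canvas logic.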

    Readymag stands apart. Its philosophy is closer to InDesign than to HTML: a blank canvas where elements can be placed however the designer wishes. The power of this approach is in prioritizing storytelling, impression, and experimentation. 

    Storytelling and creativity

    Free layout gives the author a key tool: to direct attention the way a filmmaker frames a shot. Magazine longreads, promo pages, art projects—all of these rely on narrative. The reader needs to be guided through the story, tension built, emphasis placed. A strict grid often hinders this: it imposes uniform rhythm, equalizes blocks, and drains momentum. Free layout, by contrast, enables visual drama—a headline slicing into a photo, text running diagonally, an illustration spilling past the frame. Reading turns into an experience.

    The best websites of recent years show this in practice. They use deliberately broken grids: elements that float, shift, and create the sense of a living space. The unconventional arrangement itself becomes part of the story. Users don’t just read or look; they walk through the composition. Chaotic typography or abrupt animation goes beyond simple illustration and becomes a metaphor.

    Let’s explore a few examples of how this works in practice (all the websites below were made by Readymag users).

    This multimedia longread on the Nagorno-Karabakh conflict traces its history and recent escalation through text and imagery. The design relies on bold typography, layered photographs, and shifting compositions that alternate between grid-like order and free placement. Scrolling becomes a narrative device: sections unfold with rhythm and contrast, guiding the reader while leaving space for visual tension and moments of surprise. The result is a reading experience that balances structure with expressiveness, reflecting the gravity of the subject through form as well as content.

    On this website a collection of P.Y.E. sunglasses is presented through an immersive layout. Scrolling triggers rotations, shifts, and lens-like distortions, turning the screen into an expressive, almost performative space. Here, free composition sets the mood and builds a narrative around the product. Yet when it comes to the catalog itself, the design switches back to a clear grid, allowing for easy comparison of models and prices.

    Everything.can.be.scanned collects ordinary objects—tickets, pill packs, toys, scraps—and presents them as digital scans. The interface abandons order: items float in cluttered compositions, and the user is invited to drag them around, building their own arrangements. Texts and playful interactions, like catching disappearing shadows, add layers of exploration. Here, free layout is not just an aesthetic choice but the core mechanic, turning randomness into a way of seeing.

    Hayal & Hakikat recounts the story of Ottoman-era convicts through archival portraits that appear in sequence as the user scrolls. The repetition of images creates a grid-like rhythm, while interruptions like shifts in placement and sudden pauses break the order and add dramatic tension. The balance of structure and disruption mirrors the subject itself, turning the act of looking into part of the narrative.

    The analogy with film and theater is clear. Editing isn’t built from uniform shots: directors speed or slow the rhythm, insert sharp cuts, break continuity for dramatic effect. Theater works the same way—through pauses, sudden light changes, an actor stepping into the audience. On the web, free layout plays that role. It can disrupt the scrolling rhythm, halt attention, force the user to reset expectations. It is a language of emotion rather than information. More than a compositional device, it becomes a narrative tool—shaping story dynamics, heightening drama, setting rhythm. Where the goal is to engage, surprise, and immerse, it often proves stronger than the traditional grid.

    The future

    Today, freeform layout on the web is still often seen as a niche tool used in art projects and experimental media. But as technology evolves, it’s becoming clear that its philosophy can move beyond experimentation and grow into one of the fundamental languages of the future internet.

    A similar shift once happened in print. The transition from letterpress to phototypesetting and then to modern printing technologies expanded what was possible on the page and gave designers more freedom with layouts. The web is going through the same process: early constraints shaped a grid-based logic, but new technologies and tools like Readymag make it much simpler to experiment with custom arrangements when the project calls for it.

    User expectations are also changing. A generation raised on games, TikTok, and memes is attuned not to linear order but to flow, interplay, unpredictability. For them, strict grids may feel corporate, even dull. This suggests that in the future, grid-based and freeform layouts will continue to coexist, each used where it works best, and often together in the same design.




  • Developing Creativity & Emotional Design Skills for Beginners




    This article kicks off our series “Creating Emotionally Meaningful Experiences with AI, Three.js, and Blender.” In it, Andrew invites us into his world and shares a deeply personal journey into creativity, emotion, and the joy of making. It may just shift how we see our own creative potential and the meaning behind what we make.

    Introduction

    Before I start, I want to give credit to Miffy by Dick Bruna, Denis Wipart, Moon, Southern Shotty, Xianyao Wei, Ning Huang, and Evelyn Hsiao. The characters belong to the Miffy Universe by Dick Bruna; the 3D characters you are seeing are a fan recreation of his artwork. Denis, Moon, and Southern Shotty were the main inspirations for the scenes. I also want to give a shoutout to Ning Huang, Xianyao Wei, and Evelyn Hsiao, as they helped with scene idea generation, concepts, and inspiration. For the full list of credits, and the Blender and Figma files, see the GitHub.

    My opinions and writing are entirely my own; they are not, and should not be taken as, a reflection of the individuals credited in this article, and they most definitely should not be taken as whole/universal truths. We each have our own systems of beliefs, and this article and future articles are reflections of mine. That doesn’t mean I’m right or wrong; that determination is up to you.

    This article is part of our series Creating Emotionally Meaningful Experiences with AI, Three.js, and Blender:

    • Part 1: Developing Creativity & Emotional Design Skills for Beginners
      Learn how to overcome creative block, copy-and-tweak with confidence, and design projects that truly resonate.
    • Part 2: Overcoming the AI Emotional Void & Building Creative Safety
      Explore how to overcome the AI emotional void & the importance of psychological safety for creative work.
    • Part 3: Finding Meaning: Emotional Regulation & Creative Intuition
      Developing emotional regulation and pattern matching skills and how to give meaning to your work for beginners.

    Who this series is for

    If you talk to talented and famous people today, a lot of them will admit that when they first started doing what they do now, they thought they were “too dumb” to understand it. If you read the designer/developer spotlights here on Codrops, you’ll see a lot of very famous and talented people claim the same thing: that when they first started, they felt like a fraud, incapable of doing it. And yet, now they are known for what they do, amazing at it, pushing the industry forward, and inspiring others. Here’s Mr. Doob, the legendary creator of Three.js, admitting he was convinced he wasn’t smart enough at first, as have other famous artists (including Danish Mir and crzyzhaa). They don’t say that because they’re lying and want to seem humble. They say it because it’s true. Getting older is realizing how broken people are, even if they’re famous and talented, and how we fake so many aspects of ourselves as humans. The difference between those you admire and yourself is likely just consistency, time, luck, and emotional management skills, which are the things I want to discuss in this article.

    A lot of people are self-aware of their problems, but not self-aware enough to know how to get themselves to fix those problems. That’s why we have therapists and life coaches: to provide guidance on how to actually change oneself. The great news is that there are ways to develop that ability effectively even without a therapist or life coach. You already change and grow naturally over the years, but instead of letting it be passive, you can make it far more active. Of course you’ll never be perfect, but perfection isn’t the goal; growth is.

    This series isn’t for the talented people out there; it’s for the people who don’t believe they are talented when they actually are. It’s for those who suffer from psychological blockers like extreme perfectionism that can lead to boredom, unfulfilled dreams, or chronic self-doubt. Talent is less about having natural abilities and more about consistency and having systems in place that make you consistent. That takes emotional work, and hopefully emotional work I can make understandable.

    This series is also for those who want to make things emotionally meaningful. While what makes something “meaningful” is highly subjective, I hope to introduce broader patterns and systems that can help you develop your own natural intuition and insight, so you can emotionally connect with and help others more easily. If you’re on the business side, well, with products and services today being so similar, the main differentiator/competitive advantage is no longer the capabilities of a product/service, but how you make people feel. This is especially true now with AI, which has accelerated the need for emotionally meaningful experiences. The societal trends we see today highlight this growing emotional void, e.g., the Gen Z dating crisis and the rise of public vulnerability like “20 things I wish I knew in my 20s.” In other words, younger generations want psychological safety that traditional structures and value systems struggle to support. Learning empathy for marketing purposes sounds insidious, but this is a highly nuanced topic that needs a separate article or, quite honestly, a book. I will cover this more in Part 2 and Part 3, and not very much in this article.

    For the record, I still doubt myself a lot. There are a lot of days where I pretend to know what I’m doing but secretly learn stuff while doing the work. And that’s normal; it’s called imposter syndrome, and it’ll probably never go away, but at the very least you shouldn’t feel like an imposter unless you lack integrity. At some point, though, you become self-aware enough to realize (mostly) what your limitations are and what they aren’t, and to adjust accordingly. That way you never fake too much confidence and overpromise while underdelivering. If you asked me about React best practices or optimizations, I probably couldn’t answer many of your questions. However, give me a day and I can probably get back to you with an answer and an idea of how it would change my future and/or existing projects. And honestly, that’s what you do on the job all the time.

    In other words, it’s not about who you are or what you know at the current moment, it’s about having the systems in place (whether conscious or not) that allow you to feel confident in yourself to tackle a problem with certainty. When you tell someone you can help them with their project, you’re not saying you know exactly how in the moment, but what you are saying is you know you will be capable of figuring it out and can do it within the constraints (e.g., budget/deadline) agreed upon by both parties. The issue here is that many of us lack self-awareness of what our capabilities really are. I hope this series helps in growing that self-awareness.

    I will note, though, that as you go through this series you may feel highly uncomfortable or even guilty, and that’s totally normal. I still feel guilty and uncomfortable every day when I discover unflattering truths about myself or when my actions violate my own words. Of course, I hope you feel inspired, but I’m not asking you to change or grow; just keep doing what you’re doing. If you hate yourself before reading this article/series, you won’t magically stop hating yourself afterward, despite that being my intention. Sometimes the pain of staying the same has to feel worse than the pain of changing before your brain decides to make a decision. That’s nothing to be ashamed of; I went through that. I didn’t choose growth willingly, I was forced into growing. However, it’s best not to wait for pain or use that as an excuse. My recent growth is definitely a blend of choice and being forced, but not entirely forced anymore.

    Emotional processing takes time, your logical understanding comes first which makes it seem like you “should” be able to change, but that’s not how change works. Just let it sit, and time will show you what you really value deep down. Maybe it’s now, maybe it’s later, or maybe it’s never. It doesn’t matter. No matter what happens, don’t judge yourself, just seek to understand yourself better.

    1. Intro to Emotional Design and Emotional Design Patterns

    1.1 What is Creativity and Overcoming Creative Block

    To better understand emotional design (which requires one to be creative), let’s take a look at what creativity is.

    I often hear people say they aren’t creative, or that they’re only good at copying things and have difficulties translating their thoughts into things other people can see or use. This is an extremely common feeling, even among super talented creatives. There’s a term for it: creative block, which shows up as difficulty starting or completing projects, an inability to generate ideas, and so on. There are a lot of causes for it, like an external environment that is dry and dull, or extreme perfectionism from bad parenting. There are different solutions for each of these problems, but since there are so many causes of creative block, I want to try and provide a broader solution by discussing what creativity actually is.

    Simply put, creativity is the process of copying others and tweaking things. In other words, we all stand on the shoulders of giants. Think of any viral website or project, and it’s likely a combination of things you’ve probably seen before, just in a slightly different way. We tend to call it “inspiration,” but behind inspiration is copying. “Copying” has a negative connotation, when in reality we do it all around us every single day. You see someone’s clothing you admire? You start wearing that brand, copying them too, but slightly adjusting the style. Then someone likes your style, copies you, and modifies it slightly, and the pattern goes on and on until the style becomes so different you can’t tell.

    Copying and tweaking exist everywhere. Slack, Discord, and Microsoft Teams are all similar. Even life’s creativity works this way: each of us is a copy of human DNA, tweaked into a unique individual with distinct characteristics. The list never ends. Everything is a copy of something with tweaks. When you see everything in the world as copies of each other, you learn way faster, and you can better identify the differences between those copies and how to create differences between them. In a sense, copying is just consistency, e.g., you copy similar actions and routines from one day to the next. But you don’t want to stay consistent if it’s making you miserable (i.e., consistently harmful actions vs. consistently non-harmful actions). That’s where tweaking, AKA growth, comes in.

    I highly recommend watching this video on creativity. Even though I developed my thoughts independently before watching it, someone else had already said things very similar to what I’m saying. I’m not even being novel when I thought I was. My novelty is not in my idea, but in the way I say/execute that same idea; that’s my “tweaking” part. I essentially “copied” them, even though I didn’t know they existed until after I wrote this article and came back to add this paragraph. I don’t agree with some of their framing, but the underlying idea/concept is the same as what I present in this section. The book Steal Like An Artist shares similar views as well, though it frames them differently too.

    In other words, creativity is just about looking at what others are doing and seeing how you can do it a bit differently. The distinction that some domains are inherently “creative” or “not creative” is outdated. The reason we have this distinction is that we like to simplify things, we need to communicate, and there are economic factors. From a societal perspective, being a lawyer isn’t considered creative, but it is incredibly creative in the sense that you have to know when to present evidence, when to hold back, and how to say things to emotionally appeal to a jury. That’s all creativity too. Perhaps it’s not “artistic” creativity, but it’s definitely emotional and linguistic creativity. And the cool thing is that you can use those lawyer timing tactics in video game design patterns too!

    For the past thousand years, we humans have worked to simplify things. That’s why we have biases and stereotypes, and why we like to be lazy. Our natural default as humans is to simplify (less cognitive load = less resource consumption for our bodies = better survival chances), and that’s why we ended up with “domains” and fields of study. Today we’re realizing that many fields are more interconnected than we thought. Modern breakthroughs happen when different domains are cross-pollinated. That’s how “the Godfather of AI” created the idea that LLMs are based on: he applied the workings of the human brain to technology, and he said in an interview to “be a contrarian” because people thought what he was doing was dumb. But he wasn’t just a contrarian, he was also a conformist. The contrarian in him was the “tweaking” part, but he still built his knowledge on the math and science of other researchers, the “copying” part.

    Incredible breakthroughs or paradigm shifts are just people who have spent a significant amount of time tweaking. The term “tweaking” isn’t meant to be reductive/dismissive, as you can have very sophisticated and non-obvious tweaking patterns; the idea is that even huge changes start from copying. Copying and tweaking isn’t meant to be easy or some mechanical process, it’s just an easy way of thinking about creativity. I’m not alone on this: Steve Jobs’s email to himself expresses a similar sentiment, as do many other people we typically see as geniuses or industry leaders. Realizing this doesn’t mean you’re a “good,” humble person, but it does mean that you are self-aware enough to realize that you are the sum of the world and people around you (the good and the bad). The more you uncover the sources that make you who you are, the more you learn about yourself (AKA increasing your self-awareness). Whether you’re cynical and dislike most people or not, in some odd way it does bring a certain peace and optimism when you accept that you are the sum of other people.

    Tweaking is just like exercise. We know we should exercise, and some do it consistently, some try then give up, and others do it on and off. And we all have different reasons for wanting to exercise: some do it to attract a romantic partner, while others do it to lose weight, feel better, or manage health issues, or a combination of multiple reasons. It’s the same with why we’re creative and push through that “ugly phase” of work: some do it because they’re driven by money, others because they want to use their creativity to connect with others.

    Copying is the foundation of creativity; it’s not something to feel bad about. Whether you believe it or not, copying things means you’re on the right track to becoming more “creative” in the way people typically interpret that word. If you always copy and never tweak, though, you’re like someone who always talks about wanting to exercise but never does it. Just as you can’t learn the piano by only watching someone play, you have to actually practice with a piano. Likewise, with creativity, you actually have to practice being creative by tweaking.

    Literally the only thing you have to do is credit others. No one thinks you’re less creative when you credit your inspirations, AKA the people/things you copied and tweaked from. If they do, they’re the kind of people who will weaponize transparency, and honestly anything, against you, and those people always exist. It’s really easy to feel that people will judge you, but that’s just a defense mechanism that kicks in when you aren’t confident in yourself and don’t love yourself enough (of course, that’s not the only reason people hide credits, but it is a very common one). It took me years to overcome this, as I used to hide my inspirations to make people think I was more creative than I am. However, as I developed more self-confidence, I felt no need to hide the people I took inspiration from. If you still hide credits like I used to, don’t feel embarrassed about it. Self-love is a journey in itself, and having low self-esteem may not even have been your fault (like if you had abusive parents or were bullied).

    So how do you overcome creative block? Simply put, you find fun in the process. Treat your failures like exercises and explorations of curiosity rather than reflections on your own capabilities and self-worth. Instead of thinking, “Wow, I spent 3 hours on that, what a waste of time, I must not be good enough” think, “Wow I’m really proud of myself I spent 3 hours trying something new and failing. I discovered how not to do something which will help me later because I know I have to fail 1000 times before I succeed.” Don’t feed your self-doubt. Treat learning to be creative like any other skill, whether that be learning how to code, piano, or some sport etc. That’s pretty generic advice you’ve probably heard before, but hopefully this section provides the context on where that advice comes from.

    There’s also the argument that some people are born more creative than others. Sure, but that doesn’t mean you can’t work on it if you want to. And quite honestly, we’re all born different haha; some people are born with genetics for being taller, others shorter, and so on. As someone who used to suck at art, I can safely say that if you have the right methods, you can develop creative/artistic abilities, even if the path is less explicit than in other fields.

    It is important to note that just because you spend a lot of time tweaking things doesn’t mean you’ll get better outcomes. You can be “more creative” but create things that no one really likes. Like a child who draws a crazy creature: it’s very original, but not many people enjoy it beyond those around them (although even that can be resonant). Just like exercise, you can do a lot of it, but if you do it wrong, you won’t get optimal health outcomes. However, this is not the point of this section. This section is not a guide on how to practice creativity effectively; rather, it’s about how to overcome creative block. The next sections address how to create well-received projects and practice creativity more effectively.

    1.2 A note on well-received emotionally resonant projects

    Take a look at the following room portfolios in the screenshot below. I included one from the legendary Bruno Simon, one from Henry Heffernan, a very talented developer, and one I made myself (yes, including myself is self-aggrandizing in a way, but my main intention is to show I practice what I preach). They all performed well in terms of publicity in the creative space (some more than others, obviously, but that’s not the point). The question, then, is why did these stand out among so many other room portfolios? I mean, at the end of the day, look at them, they’re just rooms? Where’s the originality? Snorefest central, thank you, next (I’m joking, I love these websites haha).

    From left to right, Bruno Simon, Henry Heffernan, and Soo Ah’s room portfolios

    Take a look at another set of websites that were emotionally resonant. Again, where’s the originality? They’re all looping worlds with some sort of person on a two-wheeled vehicle.

    From left to right, By super talented Sébastien Lempens, Joshua, and a team of creatives.

    If you look at all six of these websites and break them down to their basics, it does seem like they all just copied each other. That must mean all I have to do is make my own room portfolio or looping world with a two-wheeled vehicle and people will like it!

    Well, no, that’s not how it works. There are a lot of factors that go into each of these that make them stand out even though, at a base level, you can view them as “unoriginal.” You can obviously tell all of them spent a lot of time tweaking, but they tweaked in the right ways to achieve a third factor: emotional resonance. It’s like music: most modern well-performing songs are based on very popular chord progressions (basically sets of notes in specific orders that people have determined sound pretty good). Just because you select a common chord progression a lot of famous songs use doesn’t mean you’ll make something that takes off in the music industry. Similarly, if you’re writing a book, you can use the same “hero’s journey” plotline many famous books use and still have your book not perform well.

    You might argue that other factors like luck, timing, and people’s reputation/status greatly contributed to their “success,” and you’d be 100% right, but those are factors that are largely out of your control, or are simply by-products of doing the right tweaking with the right emotional resonance in the first place. So let’s focus on the parts you can control more directly.

    Emotional resonance consists of many components, so let’s break down where the emotional resonance comes from in each project. At the end of the day, it’s rooted in human psychology, but instead of getting into academic terms, let’s focus on the high-level concepts. I can’t cover every component, but I’ll try to cover the ones that are easier to understand.

    • Bruno Simon’s Room
      • He’s famous and is a trendsetter. People naturally look up to and admire talented people and give more weight to their creations.
      • He created a room of high semi-realistic fidelity with cool functionalities. This appeals to developers and inspires them to pick up 3D art and 3D artists to pick up code. Not many people knew you could do something to this level of fidelity in a browser before. The novelty shock created resonance.
      • He made something personal to him, his own room. It feels like an intimate, vulnerable look into his own life and what he values. There are a ton of objects in there that represent who he is: the dog bed suggests he is a dog owner with a caring side for animals, and the streamer setup suggests he’s modern and makes videos. This creates a connection with him as a person and makes you want to create your own personal sharing through a room portfolio.
    • Henry Heffernan’s Room
      • Like Bruno’s, this is super personal. It clearly shows he has a passion for old-school computers, and the nostalgic games on the computer definitely had an emotional impact on a lot of people in that way.
      • It’s very interactive and realistic, and subtle details like screen flickering and fingerprints show attention to detail. When we see a screen flicker in real life or fingerprints on our screen, we get upset and frustrated, but when it’s done purposefully in his portfolio, it becomes funny, thoughtful, and perhaps a little endearing. Shared frustration through a new medium creates emotional resonance. Here, the details also suggest he’d carry that care over into other aspects of his creations and technical skill, which is also inherently attractive.
    • Soo Ah’s Room
      • Cute things, basically. Not much else to say here. There are also popular references to League of Legends, BTS, Kirby, and motivational quotes, appealing to a wide demographic of gamers, K-pop fans, and people into self-help.
      • The colors are lavender and pink, which are associated with soothing creams/smells/moods, and the rounded corners of the 3D models contribute to the softness. There’s also a dominant wood theme that gives it a homey, “one-with-nature” feel.
    • Joshua’s World
      • Simple and easy to understand. People immediately know stories about bikers who travel long distances, and he used that concept as a way to share his own journey. All models have a consistent color palette and subtle animations for detail.
      • The low-poly style is inherently cute.
    • Sébastien Lempens’ Portfolio
      • Uses the Eiffel tower, a reference to Paris, a very romantic city. The idea of exploring a city like Paris inherently touches people’s emotions.
      • The scene takes place during sunset/sunrise, another emotionally moving time for a lot of people. There’s even a term for it: the “golden hour.”
      • The wind turbines signify space and clean energy (or cleanliness in general). We all love clean cities we can explore, and cleanliness signals safety.
      • The Ferris wheel evokes playfulness.
    • Molazone
      • While this isn’t as widely known (at least to my knowledge), it did win FWA of the month, so it did quite well among judges, but perhaps not outside the creative circle. A large part of that is probably because it was created by a professional studio and team, which inherently puts it at a disadvantage in terms of emotional resonance. If it had been made by a single person or a small group of individuals rather than an official studio, it definitely would have performed better on social media. However, I think it’s one of those sites that someone will eventually repost, and it will then get a lot of attention.
      • In any case, this resonated with the judges not only for its design consistency and technical complexity but also, like the others, for its huge adventure component: the colosseum, water bridge, dark forest, etc. are all common tropes associated with adventure, danger, and mystery, all intended to evoke emotional responses.

    There’s also something they all share in common: they’re 3D websites. Since most websites are 2D, seeing 3D in a website inherently has a “wow” factor. A more technical demographic, like gamers, might not be as impressed, but once they learn it’s a website, it typically has a larger impact than if you told them it was a standalone downloadable game. Of course, people who work in this field have probably become desensitized to the “wow” factor.

    Now you might be reading my analysis and think it applies to your website too, e.g., “I’ve got cute things too, like low-poly models, so why am I not getting attention?” Well, the difference is probably that you don’t yet know how to convey the emotions effectively. It’s a pretty common pattern known as the Dunning–Kruger effect: you don’t know what you don’t know, so you think yours is as good as theirs. This was me for a long time, and still is today in the areas I’m not good at, so I always specifically look for that gap in my knowledge. The way to find that gap is to observe more work and try to identify more patterns (more details in Section 1.3).

    You might also think this is a post-hoc analysis and I’m making things up. That it’s easy to look at well-received projects in retrospect. That may be partially true, but this is also how we analyze finished projects to see what we should keep doing or not, and it informs future design choices. We do this all the time, which is why many creatives are so consistent: we use post-hoc analysis to predict outcomes for future works. In the next section, I do an analysis before anyone has seen the website, and I’m confident in that analysis and its outcome.

    Try to discover what you don’t know. Maybe your design isn’t consistent, or you didn’t know you should round the corners of your 3D models to make them even cuter. Or you rounded the corners too much and it looks chaotic rather than cute. When we learn design/art principles and fundamentals, we’re just learning applied psychology and describing patterns that others have already discovered about us as a species. So you can learn the fundamentals, but also develop your own awareness of patterns across well-received projects. In a way, you create your own sense, intuition, and set of fundamentals to work from, which is honestly pretty cool. I talk about this in the next section.

    1.3 How to make things emotionally resonate by focusing on universal human emotions/experiences & develop your creative intuition with exercises (AKA starting a bit reductive and mechanical in order to eventually be intuitive and unique)

    While we’re all unique in our own ways, we share some fundamental universal experiences and emotions. Those core feelings are exactly what you want to identify and evoke when working on your projects. We have so many terms for this, like “UI design,” “3D artist,” “UX researcher,” but at the end of the day it’s about emotions and how you make people feel about your work. You’re not a “designer” or a “developer”; you’re a person who creates things that evoke emotions in yourself and others. Even a UX researcher just focuses on how to make a user less frustrated. Frustration is the core emotion behind why they do the work they do. Again, as a society we just like to simplify and label things. These job titles artificially constrain what you think you are capable of (even if subconsciously). They constrain you to a set of practices within an established “domain” and don’t encourage out-of-domain thinking and cross-pollination. Think outside of your domain and more universally.

    Below, I break down the emotional design components of the demo scenes for this article, hopefully showing that focusing on core fundamental concepts leads to a lot of shared patterns. I’m not focusing on the obvious point that they’re personalized for Codrops (which itself causes an emotional impact), just on broadly applicable design patterns.

    There are three key factors I want to point out (there are many others, but for the sake of simplicity, let’s focus on these). These factors contribute to how emotionally impactful something you create will be; for example, if it feels complete, full, original, and filled with emotional artifacts, people will recognize the effort and thoughtfulness.

    • Completion – how much of the scene is done. Of course, this metric is very subjective. For example, if you’re making something standalone rather than a full scene, the emotional context is different. Sometimes “less is more.” But let’s keep it simple for now.
    • Time spent tweaking – roughly how “original” you could say the result is, relative to how much I copied from references/inspiration.
    • Universal Emotions – the (broader/high-level) emotions each scene is intended to evoke.

    Notice how all of these scenes are different styles, yet the key factors are pretty similar. This suggests the overall emotional impact of these scenes is pretty similar, even if that impact appeals to different audience demographics due to stylistic and theming preferences. In other words, some people might like one style over another (toonish 2.5D vs. stylized 3D vs. realistic paper) or one general idea better (if you like food, you’d prefer the sushi shop; if you prefer adventure, the paper world), but no matter which demographic it appeals to, the strength of the appeal is likely about equal.

    That impact is as follows: enough to be inspirational, but not as emotionally impactful as more polished projects can be. So let’s go one by one through these factors for each scene to see why that’s the resulting impact, and uncover some emotional design patterns along the way.

    For the Paper Pirate One

    • Completion (5/10) – This scene is far from complete. It’s missing quite a few details, like clouds, a rotating sun, moving birds, etc. It could also use better on-scroll animations rather than a slight move left to right. For example, when I enter the scene, the puppet sticks could animate in from the top and bottom respectively, or Miffy’s arm could be slightly moving, and maybe on clicking Miffy, she would jump and take a swing at the Codrops Sea Monster, which would bounce back a little before returning to fight, making the scene much more engaging. Similarly, I could click on Panda Boris and he might tag-team jump with Miffy, etc.
    • Time spent tweaking (5/10) – For this idea, I’ve seen a lot of online puppet shows (albeit with wood) with very similar concepts, which is how I came up with it. It’s pretty non-original, except instead of using wood I decided to use paper. Even though my original inspiration was wood puppet shows, I later discovered paper puppet shows already exist too. Now everyone who knows about paper puppet shows thinks I’m unoriginal and probably copied those, even though I didn’t (and that’s kind of beautiful at the same time). The characters are just adjusted copies of graphics online with different outfits (original Miffy art and Boris art here). Both pirate outfits are super common online too. So technically I didn’t do anything special whatsoever. The most “unique” thing in this scene is the Codrops Sea Monster, and that was inspired by Sid from Ice Age.
    • Universal Emotions (Cute, Playful, Adventure)
      • Cute – The characters themselves, overly rounded cutouts, and exaggerated proportions (like the Codrops Sea Monster’s eyes or the ship’s sail) = soft and round like plushies or bears, but conveyed through paper.
      • Playful – These are children’s characters dressing up as pirates, which is inherently playful. There’s also the element of using notebook paper: taking something mundane and turning it into a source of joy feels playful.
      • Adventure – Adventure isn’t technically a “universal emotion,” but it’s an adjective that contains an experience, or set, of universal human emotions like fear and excitement. You could break it down further into those core emotions if you want to, but for simplicity let’s use this adjective and call it a universal experience. There are many elements of adventure here: traveling the open sea, fighting a fantasy monster, and being a pirate, which implies more adventures and treasure to come. It’s also a “safe danger” medium, in the sense that you’re never actually going to fight a sea monster like this because it’s digital. It’s the same as being around an insect expert who knows which insects are dangerous. If you’re usually afraid of the woods because of insects, having that expert near you makes you feel safer and makes you want to go into the woods and feel that adventure: the thrill of “safe danger.”

    For the Sushi & Ramen Restaurant One

    • Completion (5/10) – Again, this scene is far from complete. There’s not much detail in the restaurant, not much storytelling around the shop, and Pusheen seems just placed there with no real context other than a minimal contribution to the cuteness factor. Quite honestly, if Pusheen were by herself, she might have more emotional impact than the entire scene has collectively, because she’d be in a different frame of context than a whole scene. Also, the colors aren’t quite done and feel a bit rushed (like the plant pot), and the informational sign up front just uses white bars rather than something cooler, like characters or food shapes used in a more playful way.
    • Time spent tweaking (5/10) – If you look at the original scene I was inspired by, you can see the idea is pretty much the same: cute animal, slight handpainted shine, and colorful mixing. The only difference is that I adapted the characters to a new style and added a bunch of food.
    • Universal Emotions (Cute, Playful, Craving)
      • Cute – Again, very rounded characters, bright colors.
      • Playful – It’s inherently playful having characters like a rabbit or panda cooking. Especially since they’re kids and the restaurant is shaped like the Codrops logo, it feels more like a dress-up pretend activity than running a real restaurant.
      • Craving – Okay, need I say more? FOOD!!!! 😋😋😋😋😋😋That moment when you wanna eat something you love and find delicious, pure euphoria.

    For Snow Scene One

    • Completion (5/10) – Yes, this scene is far from complete as well; it was a missed opportunity not to add more tree decorations and have them swaying/bouncing to the music. I could’ve also added some cute presents, snow bunnies roaming around, etc.
    • Time spent tweaking (5/10) – Look at the original scene I was inspired by. I didn’t really do anything new, and the outfits are copies from official Miffy videos.
    • Universal Emotions (Cute, Playful, Mischievous)
      • Cute – Like the other two, everything is very rounded, using stylized textures rather than realistic ones for the wood, stone, tree, etc.
      • Playful – It’s a snowball fight with a snowman and outdoor winter activities.
      • Mischievous – There’s a “no codrops” sign there, for whatever reason, but just the fact that it’s there adds to the rebellious vibe. Most importantly, you can see Miffy throwing snowballs at the house, which is mischievous.

    Of course, there are a lot of other factors, such as who created the scene, its purpose, how familiar people are with 3D art/websites, and when and where the website was posted, that all contribute to the emotional impact. However, this section is focused on getting someone to think about emotional design (i.e., the core of why we do what we do) rather than just metrics (e.g., how many views/likes you’ll get). The moment you start creating things while focusing on the emotion behind the scenes is the moment you become better at observing and identifying your knowledge gaps. Notice how all three scenes are in three different styles yet all feel cute? Once you identify and internalize what evokes that feeling of cuteness, it will carry over to whatever you decide to execute with that emotion.

    Take a look at the art below. The primary emotion/experience is cuteness. Think about why: notice the gradients, the color mixing and color palette, the faces on the characters, the use of miniatures to signify smallness, all the round shapes, and the handpainted highlights. You’ll see these patterns repeated across cute works in a variety of different styles; it’s all very similar principles, just in different styles. You can also see how many different artists land on very similar styles independently, or get inspired by each other ALL the time. It’s just like how two people independently discovered calculus, or how programmers copy and tweak open-source code repos. We all share universal human emotions and experiences; they are bound to be repeated. All you have to do is identify those emotional design patterns. It’s not much different from relating to someone who got a paper cut when you get a paper cut. That’s a universal emotional pattern, just like these art pieces of cuteness.

    Left pixel art by robertlbybee and right toon art by Stylized Box
    Left tea bag by JZ and right juice box by levimagony

    Guess where else these emotional design patterns exist? You’re absolutely correct: UI design, typography, real life, etc.! Take a look at the following, all very roundish again. You’ve intuitively known it your entire life, but hopefully putting things side by side shows how similar everything really is. It’s not magic, it’s observation.

    Random images of cute things I found online. They seem disconnected from each other, but they’re all so consistently round and puffy (which are the exact factors that contribute to the universal emotional design pattern)!

    You don’t have to be a 3D artist, UX researcher, UI designer, a game developer, a chef, or an animal expert to recognize all these things make you feel cuteness. That universal emotion pattern is consistent across all these “fields”/”domains.”

    If this is so “obvious” then why am I not super rich and famous like these other artists? Likewise, why aren’t the creativity and cognitive science researchers rich and famous if they “know” what creativity is? Why not just create a complete project, go viral, and make a lot of money? Well, because we’re humans and we each have our own limitations and personal goals.

    For me specifically, I’m a teacher. I can guide someone in a direction, but it doesn’t necessarily mean I’m the “best” at that direction. It’s like saying all high-school math teachers should stop teaching math because they’re not innovative in their math field. I’m just a teacher, not a standout top-tier practitioner. My main source of emotional satisfaction is from inspiring beginner creatives with concept websites rather than spending the extra days/weeks polishing a project for maximum impact among more experienced individuals.

    Of course, the opposite is true as well. Just because someone is a standout top-tier practitioner does not mean they feel fulfilled from teaching and can teach well. Someone can be highly creative without being self-aware enough to break down how they developed their creativity in easily digestible pieces of information.

    1.4 Applying what we learned with creativity exercises

    Talk is cheap, so let’s put the things discussed in this article into practice. If you want, open up a design tool like Figma to follow along. You don’t need to be creative or artistic at all to follow along, which is the whole point of this article.

    Emotional design principles + copying and tweaking = “new” art

    Take a look at the image directly above; let me walk you through steps you can try on your own. We’re basically copying the two existing artworks and combining them to make something “new” with emotional design tweaks.

    • Step 1 – Make a rectangle.
    • Step 2 – Round the corners to suggest softness [cute emotional design pattern applied]
    • Step 3 – Make the stroke width thicker to suggest thickness and plumpness [cute emotional design pattern applied]
    • Step 4 – Round and bloat the entire shape to signify plumpness [cute emotional design pattern applied]
    • Step 5 – Copy the face from the pink pixel art character and add another box for the straw detail [copy real-life juice boxes and artist].
    • Step 6 – Thicken the outline just like the body [cute emotional design pattern applied]
    • Step 7 – Round the straw just like the body [cute emotional design pattern applied]
    • Step 8 – Add a hole for the straw to make it blend in [copy real-life juice boxes]
    • Step 9 – Copy the diamond stars from Levi’s art [copy artist]

    In probably a minute or two, you’ve just created a cute juice box character 🥳🥳🥳!!! You should be really proud of yourself, and I’m proud of you 😊!! Obviously, it’s nothing that will stand out, but that’s how we all start. You might feel like you just copied those two artists, but that’s exactly what creativity is: copying and tweaking!!! You are ALWAYS the sum of the world around you, so lean into that when you make things. Now you can apply these principles to any sort of object you want, like a cute rock character! The more you practice, the better your intuition will become and the faster you’ll get at tweaking!

    Literally a random rock character made in 10 seconds with the copy and pasted face + diamond stars
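    If you’d rather follow along in code than in a design tool, the same recipe from the steps above (rounded corners, thick outline, simple face) can be sketched as a generated SVG string. This is only a toy sketch; every number, color, and function name here is arbitrary:

```javascript
// Toy sketch: the "cute" recipe (round + plump + thick outline + simple face)
// expressed as a generated SVG string. All values are arbitrary.
function cuteJuiceBoxSVG() {
  return `
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 120 150">
  <!-- Steps 1-4: a rounded, plump body with a thick outline -->
  <rect x="15" y="30" width="90" height="110" rx="22"
        fill="#ffd9e8" stroke="#5a3d4f" stroke-width="7"/>
  <!-- Steps 5-7: a rounded straw with the same thick outline -->
  <rect x="78" y="8" width="14" height="34" rx="7"
        fill="#ffffff" stroke="#5a3d4f" stroke-width="7"/>
  <!-- Step 5: a simple face (dot eyes + small smile) -->
  <circle cx="45" cy="80" r="5" fill="#5a3d4f"/>
  <circle cx="75" cy="80" r="5" fill="#5a3d4f"/>
  <path d="M52 95 Q60 102 68 95" fill="none" stroke="#5a3d4f" stroke-width="4"/>
</svg>`;
}
```

    Drop the returned string into a page (e.g., with `insertAdjacentHTML`) and you get a blobby pastel box with a face: the same “round + thick = cute” pattern, just written out instead of drawn.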

    So what should you do next? Well, just keep practicing. Maybe take four reference images and copy something you like from each to create something “new.” You could even copy me and extend it: why not make a rock-shaped juice box character? WHO’S STOPPING US FROM MAKING A ROCK-SHAPED JUICE BOX MWHAHAAHAHAHAHAHAHA 😈😈😈😈. And of course, observe more. Did you take a look at the ghost’s face in the Tea Bag by JZ in the image above? Doesn’t it look very similar to the character we copied from? The mouth is just closer to the eyes and elongated! The face is also missing the blush, but we can keep that for our ghost version!

    Another exercise is practicing medium transfer. The video below is a PowerPoint presentation I made two years ago for a college class. You could recreate something similar on a website: copy it and tweak it with HTML/CSS/JS, and make a reusable case file component. Of course, you don’t have to use what I did; just take something you like from one medium and put it into a different one. In this case, it went from real-life-looking case files -> PowerPoint presentation -> website interaction. Notice the job titles and mediums: FBI agent (real case files), designer (PowerPoint), and programmer (website implementation). Be the bridge between mediums. Or should I say, be the medium between the mediums, haha.
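    As a rough sketch of what a “reusable case file component” could look like, here’s a small renderer that turns hypothetical case data into an HTML string. All field names and class names are made up; swap in whatever fits your own design:

```javascript
// Hypothetical case-file data shape: { codename, status, notes }.
// Returns a folder-style card as an HTML string so it can be reused anywhere.
function renderCaseFile({ codename, status, notes }) {
  const stamp = status === 'CLOSED' ? 'CASE CLOSED' : 'ACTIVE';
  return [
    '<article class="case-file">',
    `  <header class="case-file__tab">FILE: ${codename}</header>`,
    `  <div class="case-file__stamp">${stamp}</div>`,
    '  <ul class="case-file__notes">',
    // One list item per investigation note
    ...notes.map((n) => `    <li>${n}</li>`),
    '  </ul>',
    '</article>',
  ].join('\n');
}
```

    In a browser you’d mount it with something like `document.body.insertAdjacentHTML('beforeend', renderCaseFile({...}))`, then style the tab, stamp, and paper texture in CSS.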

    There are so many other ways you can use this same exact FBI context, for example:

    • Case files are secretive, so why not add an easter egg on your website that is secretive?
      • You could add a secret code, letters you can click on to discover and unscramble to unlock a hidden portion of your website.
    • Crime’s emotions/experiences are dark, intense, suspicious, and moody; think about what kind of color palette is associated with that.
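    The secret-code idea above could be sketched as a tiny bit of logic that collects clicked letters and unlocks once they unscramble into the secret word. This is a minimal sketch with made-up names; in a real site, each hidden letter element’s click handler would call `clickLetter`:

```javascript
// Hypothetical easter egg: hidden letters are scattered across the page.
// Clicking them collects characters; once the collection is an anagram of
// the secret word, the hidden section unlocks.
function createEasterEgg(secretWord, onUnlock) {
  const collected = [];
  // Compare words order-independently by sorting their letters.
  const normalize = (w) => w.toLowerCase().split('').sort().join('');
  return {
    clickLetter(letter) {
      collected.push(letter);
      if (normalize(collected.join('')) === normalize(secretWord)) {
        onUnlock(); // e.g., reveal a hidden DOM section
        return true;
      }
      return false;
    },
    get collected() {
      return [...collected];
    },
  };
}
```

    The unlock check here is deliberately forgiving (any order works); you could make it stricter by requiring the letters in sequence instead.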

    The last exercise I’ll give is literally just to trial-and-error random stuff. Don’t even think about what you’re doing. Don’t look at the clock; just take a reference and run with it quickly. Don’t worry about whether it’s a “good” reference or anything like that at all. If it ends up being fun, then you can continue! Creativity is just as much trial and error as coding or anything else you’re learning. You can see below that I googled a random architecture image and used it to design a graphic with the same exact copy-and-pasted face from our juice box character, just with added eyebrows. The character’s shape is from the image, and the “beard” is just the stair railings. Everything is source material if you want it to be.

    Just some random copying and tweaking. Let yourself go; remove the pressure of creating something cool. Just make something and have fun while you do it. Stop thinking about how good it is or what its purpose is. The only purpose is to have fun.

    You can also seem “deep” and make up meanings in your existing works; for example, my secret hidden message is “LOVE.” It wasn’t intended at all, but I’ll pretend it was intentional and that I totally didn’t just discover it in retrospect! The point isn’t to seem deep to others, but to discover unseen patterns in your own works. This way, when you are inspired by something, your brain will naturally start looking for ways to incorporate actual deep hidden meanings into your works. Look for meanings even when there are none, exactly as I did here.

    Randomly discovering a secret word (“LOVE”) in my scenes in retrospect for fun.

    In closing, don’t just stop at the feeling you’re feeling. Ask yourself why you feel the way you do, and why others feel the way they do. What is it about that thing that makes you and others feel that way? Develop that self-awareness and empathy. You’ll discover so many patterns, and the exceptions too, like making cute things with sharp shapes instead of rounded ones. Once you understand how to convey emotions, you can bend the rules and the emotions you evoke any way you want. You’ll eventually tweak so much that you end up developing your own style before you even realize it! People will ask you to make something, and you’ll be able to do it quickly without even looking at a reference. Of course, that doesn’t mean giving up references, but it does mean you’ve finally developed that ability and intuition.

    Analyzing emotions isn’t meant to take away the innocence of our spontaneous, authentic feelings. If anything, being aware of them makes us feel them more deeply and warmly. It’s like knowing how your favorite food is made: it just makes you appreciate it more.

    1.5 Pattern matching outside of art

    I hope this article makes it clear that pattern matching is basically just your intuition, and it exists outside of analyzing art. You actually do it every single day whether you realize it or not (e.g., when you’re trying to read or guess other people’s intentions). This article just makes it more explicit by looking at the intuitive brain of a creative person.

    Bringing your subconscious processes into your conscious brain feels unnatural, but it gets easier over time. It’s just developing self-awareness: when you were a toddler you weren’t really self-aware, but as you got older, you discovered more parts of yourself. Some people never work on self-awareness and plateau emotionally; that’s why there are abusive parents who are 50+ years old. Working on yourself is always a choice.

    You can see extreme forms of pattern matching, like what we did here, outside of the art field. For example, this famous YouTuber can quickly identify locations based on just the sky or the ground! It seems bizarre, but if you break it down into patterns, it actually makes sense how someone is capable of something that amazing. Certain countries have certain trees, certain clouds, certain skies, etc. Most people don’t see these patterns. They just think sky = sky, rather than sky = gradient + colors + type of cloud + direction of gradient + night vs. day + weather associated with that type of cloud/sky + which countries have weather more likely to produce that kind of cloud/sky. He sees all those patterns that others don’t.

    It’s also partly the fact that games like GeoGuessr aren’t constantly updated with new images of places, so when you guess on photos that aren’t pure skies, you intuitively memorize what kinds of skies go with what kinds of places. In other words, if you look at the same 10,000 images 100 times, then even if you don’t pay attention to the skies in those images, your brain picks up subconscious patterns and intuition by default that you can tap into later when you need them.

    Pattern matching for him is like: “Okay, this image has type A trees, which means countries X, Y, or Z; it has type B concrete, which means X or Z; and it has type C clouds, which means it likely has weather type G, which means it’s in the northern hemisphere, so it’s likely country Z.” That’s how it works when you make art too! More on emotional pattern matching and similar topics in future articles!
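    That cue-stacking logic can be sketched as a toy scorer, where each observed cue votes for the countries it’s consistent with and the country with the most votes wins. All cue names and country lists below are made up purely for illustration:

```javascript
// Toy cue table: which countries each visual cue is consistent with.
// Entirely fabricated data, just to illustrate the scoring idea.
const cueTable = {
  treeTypeA: ['Norway', 'Sweden', 'Finland'],
  concreteTypeB: ['Norway', 'Finland'],
  cloudTypeC: ['Norway'],
};

function guessCountry(observedCues) {
  const scores = {};
  for (const cue of observedCues) {
    // Each cue adds one vote to every country it's consistent with.
    for (const country of cueTable[cue] ?? []) {
      scores[country] = (scores[country] ?? 0) + 1;
    }
  }
  // Return the country supported by the most cues, or null if no cues matched.
  const ranked = Object.entries(scores).sort((a, b) => b[1] - a[1]);
  return ranked.length ? ranked[0][0] : null;
}
```

    With all three cues observed, the fictional “Norway” wins with three votes; real systems would weight cues by reliability rather than counting them equally, but the narrowing-down structure is the same.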

    2. Technical Implementation

    This section briefly covers some technical details of this project. I didn’t do many new things compared to the article I wrote here (just make sure to check the README file on GitHub for corrections to that article as well). This project is also mainly built with AI-generated code, so don’t look to the code for inspiration there 😅. This section is not beginner-friendly and assumes knowledge of Blender and programming concepts.

    2.1 Created a Blender Python Script using AI to select all objects and add a second UV Map named “SimpleBake”

    The SimpleBake Blender addon has an option that lets you use a pre-existing UV map named “SimpleBake.” It’s super tedious to manually select each object and create that UV map, so I asked ChatGPT to generate a script that does it automatically.

    import bpy
    
    # Loop through all mesh objects in the scene
    for obj in bpy.data.objects:
        if obj.type == 'MESH':
            uv_layers = obj.data.uv_layers
    
            # Add "SimpleBake" UV map if it doesn't exist
            if "SimpleBake" not in uv_layers:
                new_uv = uv_layers.new(name="SimpleBake")
                print(f"Added 'SimpleBake' UV map to: {obj.name}")
            else:
                new_uv = uv_layers["SimpleBake"]
                print(f"'SimpleBake' UV map already exists in: {obj.name}")
    
            # Set "SimpleBake" as the active (selected) UV map — but not the render UV map
            uv_layers.active = new_uv  # SELECTED in the UI

    2.2 Created a Blender Addon with AI to export curve as JSON or a three.js curve

    Basically the title: I created this with Claude AI in three prompts. The first was to create a plugin that could export a curve as points, the second was to export only the control points rather than sampled points, and the third was to output in three.js curve format and update the coordinate system. With better prompting you could probably do it in one prompt.

    bl_info = {
        "name": "Curve to Three.js Points Exporter",
        "author": "Claude",
        "version": (1, 0),
        "blender": (3, 0, 0),
        "location": "File > Export > Curve to Three.js Points",
        "description": "Export curve points for Three.js CatmullRomCurve3",
        "warning": "",
        "doc_url": "",
        "category": "Import-Export",
    }
    
    import bpy
    import bmesh
    from bpy.props import StringProperty, IntProperty, BoolProperty
    from bpy_extras.io_utils import ExportHelper
    from mathutils import Vector
    import json
    import os
    
    class ExportCurveToThreeJS(bpy.types.Operator, ExportHelper):
        """Export curve points for Three.js CatmullRomCurve3"""
        bl_idname = "export_curve.threejs_points"
        bl_label = "Export Curve to Three.js Points"
        
        filename_ext = ".json"
        
        filter_glob: StringProperty(
            default="*.json",
            options={'HIDDEN'},
            maxlen=255,
        )
        
        # Properties
        sample_count: IntProperty(
            name="Sample Count",
            description="Number of points to sample from the curve",
            default=50,
            min=3,
            max=1000,
        )
        
        export_format: bpy.props.EnumProperty(
            name="Export Format",
            description="Choose export format",
            items=[
                ('JSON', "JSON", "Export as JSON file"),
                ('JS', "JavaScript", "Export as JavaScript file"),
            ],
            default='JSON',
        )
        
        point_source: bpy.props.EnumProperty(
            name="Point Source",
            description="Choose what points to export",
            items=[
                ('CONTROL', "Control Points", "Use original curve control points"),
                ('SAMPLED', "Sampled Points", "Sample points along the curve"),
            ],
            default='CONTROL',
        )
        
        include_tangents: BoolProperty(
            name="Include Tangents",
            description="Export tangent vectors at each point",
            default=False,
        )
        
        def execute(self, context):
            return self.export_curve(context)
        
        def export_curve(self, context):
            # Get the active object
            obj = context.active_object
            
            if not obj:
                self.report({'ERROR'}, "No active object selected")
                return {'CANCELLED'}
            
            if obj.type != 'CURVE':
                self.report({'ERROR'}, "Selected object is not a curve")
                return {'CANCELLED'}
            
            # Get curve data
            curve = obj.data
            
            # Sample points along the curve
            points = []
            tangents = []
            
            if self.point_source == 'CONTROL':
                # Extract control points directly from curve
                for spline in curve.splines:
                    if spline.type == 'NURBS':
                        # NURBS curve - use control points
                        for point in spline.points:
                            # Convert homogeneous coordinates to 3D
                            world_pos = obj.matrix_world @ Vector((point.co[0], point.co[1], point.co[2]))
                            # Convert Blender (Z-up) to Three.js (Y-up): X, Z, -Y
                            points.append([world_pos.x, world_pos.z, -world_pos.y])
                            
                    elif spline.type == 'BEZIER':
                        # Bezier curve - use control points
                        for point in spline.bezier_points:
                            world_pos = obj.matrix_world @ point.co
                            # Convert Blender (Z-up) to Three.js (Y-up): X, Z, -Y
                            points.append([world_pos.x, world_pos.z, -world_pos.y])
                            
                    elif spline.type == 'POLY':
                        # Poly curve - use points
                        for point in spline.points:
                            world_pos = obj.matrix_world @ Vector((point.co[0], point.co[1], point.co[2]))
                            # Convert Blender (Z-up) to Three.js (Y-up): X, Z, -Y
                            points.append([world_pos.x, world_pos.z, -world_pos.y])
            else:
                # Sample points along the evaluated curve
                depsgraph = context.evaluated_depsgraph_get()
                eval_obj = obj.evaluated_get(depsgraph)
                mesh = eval_obj.to_mesh()
                
                if not mesh:
                    self.report({'ERROR'}, "Could not convert curve to mesh")
                    return {'CANCELLED'}
                
                # Create bmesh from mesh
                bm = bmesh.new()
                bm.from_mesh(mesh)
                bm.verts.ensure_lookup_table()  # Required before indexing bm.verts below
                
                # Get vertices (points along the curve)
                if len(bm.verts) == 0:
                    self.report({'ERROR'}, "Curve has no vertices")
                    bm.free()
                    return {'CANCELLED'}
                
                # Sample evenly distributed points
                # (assumes mesh vertices are ordered along the curve,
                # which holds for Blender's curve-to-mesh conversion)
                for i in range(self.sample_count):
                    # Calculate interpolation factor
                    t = i / (self.sample_count - 1)
                    vert_index = int(t * (len(bm.verts) - 1))
                    
                    # Get vertex position
                    vert = bm.verts[vert_index]
                    world_pos = obj.matrix_world @ vert.co
                    # Convert Blender (Z-up) to Three.js (Y-up): X, Z, -Y
                    points.append([world_pos.x, world_pos.z, -world_pos.y])
                    
                    if self.include_tangents:
                        # Vertex normals stand in for tangents here; Blender's
                        # curve-to-mesh conversion does not expose true tangents
                        world_normal = obj.matrix_world.to_3x3() @ vert.normal
                        # Convert to the Three.js coordinate system
                        tangents.append([world_normal.x, world_normal.z, -world_normal.y])
                
                bm.free()
                eval_obj.to_mesh_clear()  # Release the temporary evaluated mesh
            
            if len(points) == 0:
                self.report({'ERROR'}, "No points found in curve")
                return {'CANCELLED'}
            
            # Prepare export data
            export_data = {
                "points": points,
                "count": len(points),
                "curve_name": obj.name,
                "blender_version": bpy.app.version_string,
            }
            
            if self.include_tangents:
                export_data["tangents"] = tangents
            
            # Export based on format
            if self.export_format == 'JSON':
                self.export_json(export_data)
            else:
                self.export_javascript(export_data)
            
            self.report({'INFO'}, f"Exported {len(points)} points from curve '{obj.name}'")
            return {'FINISHED'}
        
        def export_json(self, data):
            """Export as JSON file"""
            with open(self.filepath, 'w') as f:
                json.dump(data, f, indent=2)
        
        def export_javascript(self, data):
            """Export as JavaScript file with Three.js code"""
            # Change file extension to .js
            filepath = os.path.splitext(self.filepath)[0] + '.js'
            
            with open(filepath, 'w') as f:
                f.write("// Three.js CatmullRomCurve3 from Blender\n")
                f.write("// Generated by Blender Curve to Three.js Points Exporter\n")
                f.write("// Coordinates converted from Blender (Z-up) to Three.js (Y-up)\n\n")
                f.write("import * as THREE from 'three';\n\n")
                
                # Write points array
                f.write("const curvePoints = [\n")
                for point in data["points"]:
                    f.write(f"  new THREE.Vector3({point[0]:.6f}, {point[1]:.6f}, {point[2]:.6f}),\n")
                f.write("];\n\n")
                
                # Write curve creation code
                f.write("// Create the CatmullRomCurve3\n")
                f.write("const curve = new THREE.CatmullRomCurve3(curvePoints);\n")
                f.write("curve.closed = false; // Set to true if your curve should be closed\n\n")
                
                # Write usage example
                f.write("// Usage example:\n")
                f.write("// const points = curve.getPoints(100); // Get 100 points along the curve\n")
                f.write("// const geometry = new THREE.BufferGeometry().setFromPoints(points);\n")
                f.write("// const material = new THREE.LineBasicMaterial({ color: 0xff0000 });\n")
                f.write("// const line = new THREE.Line(geometry, material);\n")
                f.write("// scene.add(line);\n\n")
                
                f.write("export { curve, curvePoints };\n")
    
    
    def menu_func_export(self, context):
        self.layout.operator(ExportCurveToThreeJS.bl_idname, text="Curve to Three.js Points")
    
    
    def register():
        bpy.utils.register_class(ExportCurveToThreeJS)
        bpy.types.TOPBAR_MT_file_export.append(menu_func_export)
    
    
    def unregister():
        bpy.utils.unregister_class(ExportCurveToThreeJS)
        bpy.types.TOPBAR_MT_file_export.remove(menu_func_export)
    
    
    if __name__ == "__main__":
        register()
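    On the web side, the exported JSON can be consumed with very little ceremony. Here is a minimal sketch — the `polylineLength` helper and the sample payload are illustrative, not part of the add-on; in the real site the `points` array would feed `THREE.CatmullRomCurve3` directly, as the generated JavaScript file shows:

    ```javascript
    // Minimal consumer for the exporter's JSON output (illustrative sketch).
    // The exported file has the shape: { points: [[x, y, z], ...], count, curve_name }.
    // Coordinates are already in Three.js Y-up order, so they can be passed
    // straight to THREE.CatmullRomCurve3 in a real app.

    function polylineLength(points) {
      // Sum straight-line distances between consecutive exported points.
      let length = 0;
      for (let i = 1; i < points.length; i++) {
        const [ax, ay, az] = points[i - 1];
        const [bx, by, bz] = points[i];
        length += Math.hypot(bx - ax, by - ay, bz - az);
      }
      return length;
    }

    // Example with a tiny hand-made export payload:
    const exportData = {
      points: [[0, 0, 0], [1, 0, 0], [1, 1, 0]],
      count: 3,
      curve_name: "DemoCurve",
    };

    console.log(polylineLength(exportData.points)); // 2
    ```

    A quick length check like this is handy for sanity-testing an export before wiring the points into a camera path.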

    2.3 A boatload of conditional rendering

    There were no complex render targets or anything like that for the scene transitions: just pre-positioned 3D objects toggling their visibility on and off, conditionally rendered based on how far along the curve the camera has progressed. Conditionally rendering a lot of objects at once isn’t great practice for complex scenes like this, since it can cause crashes, but it works for a demo on most desktops and laptops.
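    The idea can be sketched as a pure function mapping the camera’s progress along the curve to the set of visible scene groups. The group names and range values below are made up for illustration:

    ```javascript
    // Hypothetical scene schedule: each entry says which progress range
    // (0..1 along the curve) a group of pre-positioned objects is visible in.
    const sceneRanges = [
      { name: "harbor", from: 0.0, to: 0.35 },
      { name: "openSea", from: 0.3, to: 0.7 },
      { name: "nightShip", from: 0.65, to: 1.0 },
    ];

    // Return the names of groups that should render at a given progress value.
    // In a React Three Fiber setup this would drive `visible` props or
    // conditional JSX for each group.
    function visibleGroups(progress, ranges) {
      return ranges
        .filter(({ from, to }) => progress >= from && progress <= to)
        .map(({ name }) => name);
    }

    console.log(visibleGroups(0.32, sceneRanges)); // ["harbor", "openSea"]
    ```

    Overlapping ranges give neighboring scenes a window in which both are mounted, which is what makes the hand-off between them feel seamless.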

    How the models were set up in Blender

    2.4 Creating invisible bounding boxes for SVGs with Figma

    When I wanted to make sleeping Miffy and Panda Boris for the nighttime ship scene, I did not design them at the same size as their day versions. That meant that when I replaced the image textures with the night versions, the default UV map no longer looked right. While I could have adjusted the UV or the position of the plane in code, it was easier to create an invisible bounding box in Figma with the same width and height as the day characters and fit the nighttime characters within it.

    Final Words

    I’m not a programmer, an artist, or a UI designer. I’m someone who creates things.

    I’m not a creative. I’m a copier who tweaks things.

    I’m not talented. I’m an observer recognizing emotions.

    You can call yourself a programmer, an artist, a creative, or whatever you want if the time calls for it, but don’t let these words define you and your capabilities. The only way to stay relevant with AI in the picture is by incorporating cross-domain thinking beyond labels. Become interdisciplinary. It sounds daunting, but if you have the right learning systems in place, you can somewhat learn multiple domains in roughly the same time it would take you to learn one domain with ineffective learning systems (e.g., spending 1000 hours in tutorial hell in one domain vs spending 1000 hours pattern matching across multiple domains). Effective learning systems will show you connections and parallels between fields of study, speeding up your learning process. It’s quite similar to how we identified a core emotional design pattern that shows up in real-life, 3D art, 2D art, and UI designs.

    Yes, this article has elements of survivorship bias: plenty of people create round things without ever creating cute things that spark emotional resonance, because there are far more factors to cuteness than roundness alone. But the purpose was to show a path many creative people take to become more creative, not to document every last step of intuition development and every single observation and design choice. In future articles, I’ll address more ways to make the “tweaking” phase effective.

    As humans, we like to take complex arguments, simplify them through our pre-existing biases, and treat those simplifications as fundamentally true when they’re only partial truths. Simplifications like those in this article are not truths; they’re building blocks meant to guide someone toward a specific view (in this case, that creativity is learnable). This article offered a highly reductive, systematic, analytical framework for creativity that I hope will naturally lead someone to develop creative intuition and spontaneous insight.

    If you look at your life, there are probably so many moments you thought something was too difficult at first until you realized it wasn’t, whether that be getting in a relationship, public speaking, calculus, 3D art, programming, or literally anything you used to be afraid of and now aren’t. Treat creativity just like those things. Just another thing you think is difficult but know you’ll get better at with time.

    Anyway, I hope this article unlocked some potential cognitive blockers you have and made you realize that you’ve got hidden skills inside of you. Growth is slow and quite painful at times. Take your time! Or maybe you’re perfectly happy where you’re at and don’t really want to change, which is totally okay as well. Like I said, never judge yourself and I’m not going to judge or pressure you either. You’re special in your own way. Your interpretation of this article, whether you think it’s good or bad, helpful or not helpful, that’s what makes you special.

    Don’t judge, just reflect and seek to understand. Time will show you your values.

    With a lot of love,

    Andrew~😊






  • Global by Design: Leading Across Borders to Shape Digital Experiences



    I’m Oliver Muñoz, the founder of Uncommon, a digital studio based in Melbourne. These days, I focus less on fine pixels myself and more on leading teams across time zones to do their best work.

    After more than a decade freelancing, I decided I wanted to spend more time with my family and less in front of the computer. My first son was about to be born, and I knew I had to make a choice: keep designing every detail myself, or step into leadership and create more space to be present at home. That decision to delegate and trust others was the moment I gave creative leadership a real go.

    This story is not about pixels, code, or prototypes; it is about what it takes to lead creatives across time zones and cultures toward a shared vision that wins awards.

    Origins of leadership

    I always wanted to lead by example, but during my agency years, the opportunity never quite came. It could be because I was freelancing, maybe it was my craft, or perhaps it was the fact that I was an immigrant. At times, I felt I had to work double to get half as far.

    One pivotal moment came after contracting for a global agency for twelve months. The design director offered me a full-time role as a Senior Designer, but I only agreed on the condition that she would mentor me into a Design Lead role within six months. She could not commit, so I declined on the spot. That was when I realised leadership was not something I would be handed; I had to create the opportunity myself.

    Building a global team

    At Uncommon, I believe in bringing in the right experts for each project, no matter where they are in the world. The foundation is always the same: communication, collaboration and clarity. Those three pillars do not just apply to us internally; they extend to our clients and their teams as well.

    We rely on all the usual communication tools, but with one rule: every project discussion must live in the dedicated Slack channel. That way time zones do not become bottlenecks; someone in Europe can wake up and skim through everything discussed in Australia the previous day without losing context.

    The other challenge is culture. Many of my team members do not speak English as their first language (mine is Español/Spanish), so sometimes feedback can come across as blunt or even harsh when literally translated. Part of my job as a leader is to read between the lines and make sure nothing gets lost or misinterpreted in translation.

    Creative sessions and collaboration

    Every project begins with a strategy workshop with the client. Because of geography, not everyone can join live, so we document everything and share it back with the team. From there, each creative gets space to explore, research and design independently. A few days later, we regroup online, share progress and spark new ideas off each other’s work.

    I encourage the team to seek inspiration outside the obvious. If we are designing a healthcare booking system, do not just look at other healthcare apps; look at how airlines handle complex flows, or how Airbnb structures information. Borrow what works and apply it in unexpected places.

    Inevitably, different perspectives lead to different opinions. When we hit a deadlock, I return to the brief and the workshop findings to guide us. Often, it comes down to cultural context; the way something works in the U.S. is not necessarily right for Australia. Luckily, I tend to choose collaborators who are already a few steps ahead of the brief, so real deadlocks are rare.

    The human side of leadership

    Remote leadership means I cannot control the environment in which my team works. Distractions happen. Sometimes it is tempting to accept the first idea for a small component and move on. When that happens, I ask the team to park the safe option and keep searching for something more inventive. It is not always popular in the moment; people can get frustrated with me, but when the work earns recognition from peers or even industries outside our own, the team sees the value in going the extra mile.

    I have also learned I do not need to have all the answers. Initially, I attempted to solve everything on my own. Now, when in doubt, I let the team debate and find their way forward. They are the experts. My job is to steer, not dictate. Sometimes the best leadership move is simply to pause, take a breath, and let go.

    Leading for outcomes

    Awards were never the goal. They are a pat on the back, not the finish line. At the end of the day, an award is just the result of votes from people you have probably never met. What matters more is that the work solved the client’s problem in a way that surprised them and us.

    That said, awards do have a practical benefit. Clients discover us through those platforms, and it helps attract the kind of people who value craft. So while they are not everything, they have become part of our strategy for growth.

    Style and values

    I do not see myself as a director with a rigid script, but more as a coach who sets the stage for others to shine. Part of my job is to recognise strengths, knowing who will thrive on a marketing website versus who will excel in product design, and put people in the right role.

    My non-negotiables are openness and empathy. I need to stay open to better ideas than my own, and I need to understand when life outside of work affects someone’s pace.

    Humility, to me, means surrounding myself with people who are better than I am. If I am consistently producing more or better work than my team, then I have hired the wrong people. The best sign that I am doing my job well is being the worst designer in the room.

    Looking back

    Every project brings its challenges: distance, culture, and deadlines. But the hardest moments are usually about trust. Trusting the team to explore without me hovering, and trusting myself to step back and let them solve problems. The lesson I keep coming back to is that leadership is less about control and more about creating the conditions for trust to grow.

    Inspiration and advice

    Early in my career, after a failed internship, the Creative Director pulled me aside and said, “I have been to your country, eaten your food, talked to the locals. You need to embrace who you are and where you come from; that is how you will succeed.” That advice has stuck with me. Play to your strengths. Do not try to be something you are not.

    For anyone leading a globally distributed team, my advice is simple: have cultural context. Your experiences are not the same as your team’s. Take time for casual, human conversations that are not about deadlines. Asking about someone’s cat or weekend can go further than you think.

    Looking ahead, I hope leadership becomes more relaxed, more human. Less about the suit, more about the fun. We all need to remember why we started doing this in the first place.

    Closing

    This project proved to me that creativity does not live in a single city or time zone. It thrives when people from different backgrounds rally around a shared vision. Leadership, in this context, is about orchestrating that energy, not controlling it.

    I am not here to sell a course or a product. But if you would like to follow along as I keep exploring what it means to lead and create in a global, digital-first world, you can find me on LinkedIn or Instagram. I share the wins, the lessons, and sometimes even the doubts, because that is all part of the journey.




  • The Journey Behind inspo.page: A Better Way to Collect Web Design Inspiration




    Have you ever landed on a website and thought, “Wow, this is absolutely beautiful”? You know that feeling when every little animation flows perfectly, when clicking a button feels satisfying, when the whole experience just feels premium.

    That’s exactly what happened to me a few years ago, and it changed everything.

    The Moment Everything Clicked

    I was browsing the web when I stumbled across one of those websites. You know the type where every micro-animation has been crafted with care, where every transition feels intentional. It wasn’t just pretty; it made me feel something.

    That’s when I got hooked on web design.

    But here’s the thing: I wanted to create websites like that too. I wanted to capture that same magic, those same emotions. So I started doing what any curious designer does. I began collecting inspiration.

    Spotting a Gap

    At first, I used the usual inspiration websites. They’re fantastic for discovering beautiful sites and getting that creative spark. But I noticed something: they showed you the whole website, which is great for overall inspiration.

    The thing is, sometimes I’d get obsessed with just one specific detail. Maybe it was a button animation, or how an accordion opened, or a really smooth page transition. I’d bookmark the entire site, but then later I’d spend ages trying to find that one perfect element again.

    I started thinking there might be room for something more specific. Something where you could find inspiration at the component level, not just the full-site level.

    Starting Small

    So I started building my own library. Whenever I saw something cool (a smooth page transition, an elegant pricing section, a cool navigation animation) I’d record it and save it with really specific tags like “card,” “hero section,” or “page transition.”

    Early versions of the local library I kept in Eagle

    Real, useful categories that actually helped me find what I needed later. I did this for years. It became my secret weapon for client projects and personal work.

    From Personal Tool to Public Resource

    After a few years of building this personal collection, I had a thought: “If this helps me so much, maybe other designers and developers could use it too.”

    That’s when I decided I should share this with the world. But I didn’t want to just dump my library online and call it a day. It was really important to me that people could filter stuff easily, that it would be intuitive, and that it would work well on both mobile and desktop. I wanted it to look good and actually be useful.

    An early version of inspo.page; the filters were not yet sticky at the bottom

    That’s how inspo.page was born.

    How It Actually Works

    The idea behind inspo.page is simple: instead of broad categories, I built three specific filter systems:

    • What – All the different components and layouts. Looking for card designs? Different types of lists? Different types of modals? It’s all here.
    • Where – Sections of websites. Need inspiration for a hero section? A pricing page? Social proof section? Filter by where it appears on a website.
    • Motion – Everything related to movement. Page transitions, parallax effects, hover animations.

    The magic happens when you combine these filters. Want to see card animations specifically for pricing sections? Or parallax effects used for presenting services? Just stack the filters and get exactly what you’re looking for.
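    Stacking filters like this boils down to an AND across the three tag dimensions. A minimal sketch — the item shape and tag names here are assumptions for illustration, not inspo.page’s actual data model:

    ```javascript
    // Hypothetical inspiration items, tagged along the three filter axes.
    const items = [
      { id: 1, what: ["card"], where: ["pricing"], motion: ["hover"] },
      { id: 2, what: ["card"], where: ["hero"], motion: ["parallax"] },
      { id: 3, what: ["modal"], where: ["pricing"], motion: [] },
    ];

    // An item matches when it satisfies every active filter dimension;
    // an empty selection on a dimension means "don't filter by it".
    function applyFilters(items, { what = [], where = [], motion = [] }) {
      const matches = (tags, selected) =>
        selected.length === 0 || selected.every((t) => tags.includes(t));
      return items.filter(
        (item) =>
          matches(item.what, what) &&
          matches(item.where, where) &&
          matches(item.motion, motion)
      );
    }

    // Card animations specifically for pricing sections:
    console.log(
      applyFilters(items, { what: ["card"], where: ["pricing"] }).map((i) => i.id)
    ); // [1]
    ```

    Treating an empty dimension as “match everything” is what lets a single filter function serve both broad browsing and very specific stacked queries.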

    The Technical Side

    On the technical side, I’m using Astro and Sanity. Because I’m sometimes lazy, and because I wanted a project that’s future-proof, I made it as simple as possible for myself to curate inspiration.

    That’s why I came up with an automation system where I just hit record and that’s it. It automatically grabs the URL, creates different video versions, compresses everything, uploads it all to Bunny.net, and then sends it to the CMS, so all I have to do is tag it and publish.

    Tagging system inside Sanity

    I really wanted to find a system that makes it as easy as possible for me to do what I want to do because I knew if there was too much resistance, I’d eventually stop doing it.

    The Hardest Part

    You’d probably think the hardest part was all the technical stuff like setting up automations and managing video uploads. But honestly, that was the easy part.

    The real challenge was figuring out how to organize everything so people could actually find what they’re looking for.

    I must have redesigned the entire tagging system at least 10 times. Every time I thought I had it figured out, I’d realize it was either way too complicated or way too vague. Too many specific tags and people get overwhelmed scrolling through endless options. Too few broad categories and everything just gets lumped together uselessly.

    It’s this weird balancing act. You need enough categories to be helpful, but not so many that people give up before they even start filtering. And the categories have to make sense to everyone, not just me.

    I think I’ve got a system now that works pretty well, but it might change in the future. If users tell me there’s a better way to organize things, I’m really all ears because honestly, it’s a difficult problem to solve. Even though I have something that seems to work now, there might be a much better approach out there.

    The Human Touch in an AI World

    Here’s something I think about a lot: AI can build a decent-looking website in minutes now. Seriously, it’s pretty impressive.

    But there’s still something missing. AI can handle layouts and basic styling, but it can’t nail the human stuff yet. Things like the timing of a hover effect, the weight of a transition, or knowing exactly how a micro-interaction should feel. That’s pure taste and intuition.

    Those tiny details are what make websites feel alive instead of just functional. And in a world where anyone can generate a website in 5 minutes, those details are becoming more valuable than ever.

    That’s exactly where inspo.page comes in. It helps you find inspiration for the things that separate good websites from unforgettable ones.

    What’s Next

    Every week, I’m adding more inspiration to the platform. I’m not trying to build the biggest collection out there, just something genuinely useful. If I can help a few designers and developers find that perfect animation a little bit faster, then I’m happy.

    Want to check it out? Head over to inspo.page and see if you can find your next favorite interaction. You can filter by specific components (like cards, buttons, modals, etc.), website sections (hero, pricing, etc.), or motion patterns (parallax, page transitions, you name it).

    And if you stumble across a website with some really nice animations or micro-interactions, feel free to share it using the feedback button (top right) on the site. I’m always on the lookout for inspiration pieces that have that special touch. Can’t promise I’ll add everything, but I definitely check out what people send.

    Hope you find something that sparks your next great design!




  • Design Has Never Been More Important: Inside Shopify’s Acquisition of Molly



    When the conversation turns to artificial intelligence, many assume that design is one of the professions most at risk of automation. But Shopify’s latest move sends a very different message. The e-commerce giant revived the role of Chief Design Officer earlier this year and acquired Brooklyn-based creative studio Molly — signaling that, far from being diminished, design will sit at the center of its AI strategy.

    At the helm is Carl Rivera, Shopify’s Chief Design Officer, who believes this moment is an inflection point not just for the company, but for the design industry as a whole.

    “At a time when the market is saying maybe you don’t need designers anymore,” Rivera told me, “we’re saying the opposite. They’ve never been more important than they are right now.”

    A Statement of Intent

    Shopify has a long history of treating design as a strategic advantage. In its early days, co-founder Daniel Weinand held the title of Chief Design Officer and helped shape Shopify’s user-first approach. But when Weinand left the company, the role disappeared — until now.

    Bringing it back, Rivera argues, is both symbolic and practical. “It’s really interesting to consider that the moment Shopify decides to reinstate the Chief Design Officer role is at the dawn of AI,” he said. “That’s not a coincidence.”

    For Rivera, design is the best tool for navigating uncertainty. “When you face ambiguity and don’t know where the world is going, there’s no better way to imagine that future than through design,” he explained. “Design turns abstract ideas into something you can hold and touch, so everyone can align on the same vision.”

    Why Molly?

    Central to Shopify’s announcement is the acquisition of Molly, the Brooklyn-based design studio co-founded by Jaytel and Marvin Schwaibold. Known for their experimental but disciplined approach, Molly has collaborated with Shopify in the past.

    Rivera recalled how the deal came together almost organically. “I was having dinner with Marvin, and we were talking about the future I wanted to build at Shopify. The alignment was immediate. It was like — of course we should do this together. We could go faster, go further, and it would be more fun.”

    The studio will operate as an internal agency, but Rivera is careful to stress that Molly won’t exist in isolation. “What attracted me to Molly is not just their output, but their culture,” he said. “That culture is exactly the one we want to spread across Shopify. They’ll be a cultural pillar that helps manifest the ways of working we want everyone to embrace.”

    Importantly, the internal agency won’t replace Shopify’s existing design teams. Instead, it will augment them in moments that call for speed, experimentation, or tackling problems shaped by AI. “If something changes in the market and we need to respond quickly, Molly can embed with a team for a few months, supercharging their generative process,” Rivera explained.

    Redefining AI + Design

    Rivera is energized by the possibilities of AI and how it can transform the way people interact with technology. While today’s implementations often serve as early steps in that journey, he believes the real opportunity lies in what comes next.

    He acknowledges that many current products still treat AI as an add-on. “You have the product, which looks the same as it has for ten years, and then a little panel next to it that says AI. That can’t be the future,” Rivera said.

    For him, these early patterns are just the beginning — a foundation to build on. He envisions AI woven deeply into user experiences, reshaping interaction patterns themselves. “If AI had existed ten years ago, I don’t believe products would look the way they do today. We need to move beyond chat as the default interface and create experiences where AI feels native, invisible, and context-aware.”

    That, he argues, is where design proves indispensable. “It’s designers who will define the interaction patterns of AI in commerce. This is our role: to make the abstract real, to imagine the future, and to bring it into the present.”

    Measuring Success: Subjective by Design

    In a world obsessed with metrics, Rivera offers a refreshingly contrarian view of how design success should be measured.

    “Designers have often felt insecure, so they chase numbers to prove their value,” he said. “But to me, the most important measure isn’t a KPI. It’s whether the work feels right. Are we proud of it? Did it accelerate our vision? Does it make the product more delightful? I’m comfortable leaning on instinct.”

    That doesn’t mean ignoring business outcomes. But Rivera wants his teams to be guided first by craft, ambition, and impact on user experience — not by dashboards.

    Advice for Designers in an AI Era

    For independent designers and studio owners — many of whom worry that AI might disrupt their livelihoods — Rivera offers encouragement.

    He believes the most valuable skill today is adaptability: “The best trait a designer can have right now is the ability to quickly learn a new problem and generate many different options. That’s what the agency world trains you to do, and it’s exactly what big companies like Shopify need.”

    In fact, Rivera sees agency and freelance experience as increasingly attractive in large-scale design hiring. “People who have jumped between many problems quickly bring a unique skill set. That adaptability is crucial when technology and user expectations are changing so fast.”

    The Ambition at Shopify

    Rivera is clear about his mandate. He sums it up in three goals:

    1. Build the place where the world’s best designers choose to work.
    2. Enable them to do the best work of their careers.
    3. Define the future interaction patterns of AI in commerce.

    It’s an ambitious vision, but one he believes is within reach. “Ambition begets ambition,” he told his team in a recent message. “By raising expectations for ourselves and each other, we’ll attract people who want that environment, and they’ll keep raising the bar.”

    For Shopify, investing in design now goes beyond aesthetics. It is about shaping the future of commerce itself. As Rivera put it:

    “We don’t need to dream up sci-fi scenarios. The future is already here — just unevenly distributed. Our job is to bring it into the hands of entrepreneurs and make it usable for everyone.”

    Borrowing from William Gibson’s famous line, Rivera frames Shopify’s bet on Molly and design as a way of redistributing that future, through creativity, craft, and culture.






  • Design as Rhythm and Rebellion: The Work of Enrico Gisana



    My name is Enrico Gisana, and I’m a creative director, graphic and motion designer.

    I’m the co-founder of GG—OFFICE, a small independent visual arts studio based in Modica, Sicily. I consider myself a multidisciplinary designer because I bring together different skills and visual languages. I work across analog and digital media, combining graphic design, typography, and animation, often blending these elements through experimental approaches. My design approach aims to push the boundaries of traditional graphic conventions, constantly questioning established norms to explore new visual possibilities.

    My work mainly focuses on branding, typography, and motion design, with a particular emphasis on kinetic typography.

    Between 2017 and 2025, I led numerous graphic and motion design workshops at various universities and art academies in Italy, including Abadir (Catania), Accademia di Belle Arti di Frosinone, Accademia di Belle Arti di Roma, CFP Bauer (Milan), and UNIRSM (San Marino). Since 2020, I’ve been teaching motion design at Abadir Academy in Catania, and since 2025, kinetic typography at CFP Bauer in Milan.

    Featured work

    TYPEXCEL — Variable font

    I designed an online half-day workshop for high school students on the occasion of an open day at the Academy of Design and Visual Communication Abadir, held in 2021.

    The goal of this workshop was to create a first contact with graphic design, but most of all with typography, using an Excel spreadsheet as a modular grid composed of editable and variable cells, instead of professional software that requires specialized knowledge.

    The cell pattern allowed the students to create letters, icons, and glyphs. It was a stimulating exercise that helped them discover and develop their own design and creative skills.

    This project was published in Slanted Magazine N°40 “Experimental Type”.

    DEMO Festival

    DEMO Festival (Design in Motion Festival) is one of the world’s most prominent motion design festivals, founded by the renowned Dutch studio Studio Dumbar. The festival takes over the entire digital screen network of Amsterdam Central Station, transforming public space into a 24-hour exhibition of cutting-edge motion work from around the globe.

    I’ve had the honor of being selected multiple times to showcase my work at DEMO: in 2019 with EYE SEQUENCE; in 2022 with ALIEN TYPE and VERTICAL; and again in 2025 with ALIEN TRIBE, HELLOCIAOHALLOSALUTHOLA, and FREE JAZZ.

    In the 2025 edition, ALIEN TRIBE and HELLOCIAOHALLOSALUTHOLA were also selected for the Special Screens program, which extended the festival’s presence beyond the Netherlands. These works were exhibited in digital spaces across cities including Eindhoven, Rotterdam, Tilburg, Utrecht, Hamburg, and Düsseldorf, reaching a broader international audience.

    MARCO FORMENTINI

    My collaboration with Italian footwear designer Marco Formentini, based in Amsterdam, began with the creation of his visual identity and gradually expanded into other areas, including apparel experiments and the design of his personal website.

    Each phase of the project reflects his eclectic and process-driven approach to design, while also allowing me to explore form, texture, and narrative through different media.

    Below is a closer look at the three main outputs of this collaboration: logo, t-shirt, and website.

    Logo

    Designed for Italian footwear designer Marco Formentini, this logo reflects his broad, exploratory approach to design. Rather than sticking to a traditional monogram, I fused the letters “M” and “F” into a single, abstract shape, something that feels more like a symbol than a set of initials. The result is a wild, otherworldly mark that evokes movement, edge, and invention, mirroring Marco’s ability to shift across styles and scales while always keeping his own perspective.

    Website

    I conceived Marco Formentini’s website as a container, a digital portfolio without a fixed structure. It gathers images, sketches, prototypes, and renderings not through a linear narrative but through a visual flow that embraces randomness.

    The layout is split into two vertical columns, each filled with different types of visual content. By moving the cursor left or right, the columns dynamically resize, allowing the user to shift focus and explore the material in an intuitive and fluid way. This interactive system reflects Marco’s eclectic approach to footwear design, a space where experimentation and process take visual form.
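    The cursor-driven column resizing described above can be sketched as a simple mapping from horizontal cursor position to column width. This is an illustrative guess at the mechanism, not the site's actual code; the `MIN_WIDTH`/`MAX_WIDTH` bounds are assumptions.

```typescript
// Hypothetical sketch: map cursor X to the left column's width (in %);
// the right column takes the remainder. Bounds are invented for illustration.
const MIN_WIDTH = 20; // %
const MAX_WIDTH = 80; // %

function leftColumnWidth(cursorX: number, viewportWidth: number): number {
  // Normalize the cursor position to 0..1, clamped to the viewport.
  const t = Math.min(1, Math.max(0, cursorX / viewportWidth));
  return MIN_WIDTH + t * (MAX_WIDTH - MIN_WIDTH);
}

// Cursor at the far left -> left column shrinks; far right -> it grows.
leftColumnWidth(0, 1000);    // 20
leftColumnWidth(500, 1000);  // 50
leftColumnWidth(1000, 1000); // 80
```

    In a real page this function would feed a CSS custom property or flex-basis on each `mousemove` event.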

    Website development by Marco Buccolo.

    Check it out: marco-formentini.com

    T—Shirt

    Shortly after working on his personal brand, I shared with Marco Formentini a few early graphic proposals for a potential t-shirt design, while he happened to be traveling through the Philippines with his friend Jo.

    Without waiting for a full release, he spontaneously had a few pieces printed at a local shop he stumbled upon during the trip, mixing one of the designs on the front with a different proposal on the back. An unexpected real-world test run for the identity, worn into the streets before even hitting the studio.

    Ditroit

    This poster was created to celebrate the 15th anniversary of Ditroit, a motion design and 3D studio based in Milan.

    At the center is an expressive “15”, a tribute to the studio’s founder, a longtime friend and former graffiti companion. The design reconnects the present with our shared creative roots and the formative energy of those early years.

    Silver on black: a color pairing rooted in our early graffiti experiments, reimagined here to celebrate fifteen years of visual exploration.

    Tightype

    A series of typographic animations I created for the launch of Habitas, the typeface designed by Tightype and released in 2021.

    The project explores type in motion, not just as a vehicle for content but as a form of visual expression in itself. Shapes bounce, rotate and multiply, revealing the personality of the font through rhythm and movement.

    Jane Machine

    SH SH SH SH is the latest LP from Jane Machine.

    The cover is defined by the central element of the lips, directly inspired by the album’s title. The lips not only mimic the movement of the “sh” sound but also evoke the noise of tearing paper. I amplified this effect through the creative process by first printing a photograph of the lips and then tearing it, introducing a tactile quality that contrasts with and complements the more electronic aesthetic of the colors and typography.

    Background

    I’m a creative director and graphic & motion designer with a strong focus on typography.

    My visual journey started around the age of 12, shaped by underground culture: I was into graffiti, hip hop, breakdancing, and skateboarding.

    As I grew up, I explored other scenes, from punk to tekno, from drum and bass to more experimental electronic music. What always drew me in, beyond the music itself, was the visual world around it: free party flyers, record sleeves, logos, and type everywhere.

    Between 2004 and 2010, I produced tekno music, an experience that deeply shaped my approach to design. That’s where I first learned about timelines, beats, and rhythm, all elements that today are at the core of how I work with motion.

    Art has also played a major role in shaping my visual culture, from the primitive signs of hieroglyphs to Cubism, Dadaism, Russian Constructivism, and the expressive intensity of Antonio Ligabue.

    The aesthetics and attitude of those worlds continue to influence everything I do and how I see things.

    In 2013, I graduated in Graphic Design from IED Milano and started working with various agencies. In 2014, I moved back to Modica, Sicily, where I’m still based today.

    Some of my animation work has been featured at DEMO Festival, the international motion design event curated by Studio Dumbar, in the 2019, 2022, and 2025 editions.

    In 2022, I was published in Slanted Magazine #40 (EXPERIMENTAL TYPE) with TYPEXCEL, Variable font, a project developed for a typography workshop aimed at high school students, entirely built inside an Excel spreadsheet.

    Since 2020, I’ve been teaching Motion Design at Abadir, Academy of Design and Visual Communication in Catania, and in 2025 I started teaching Type in Motion at Bauer in Milan.

    In 2021, together with Francesca Giampiccolo, I founded GG—OFFICE, a small independent visual studio based in Modica, Sicily.

    GG—OFFICE is a design space where branding and motion meet through a tailored and experimental approach. Every project grows from dialogue, evolves through research, and aims to shape contemporary, honest, and visually forward identities.

    In 2025, Francesca and I gave a talk on the theme of madness at Desina Festival in Naples, a wild, fun, and beautifully chaotic experience.

    Design Philosophy

    My approach to design is rooted in thought (I think a lot), as well as in research, rhythm, and an almost obsessive production of drafts.

    Every project is a unique journey where form always follows meaning, never simply the client’s instructions.

    This is not about being contrary; it’s about bringing depth, intention and a point of view to the process.

    I channel the raw energy and DIY mindset of the subcultures that shaped me early on. I’m referring to those gritty, visual sound-driven scenes that pushed boundaries and blurred the line between image and sound. I’m not talking about the music itself, but about the visual culture that surrounded it. That spirit still fuels my creative engine today.

    Typography is my playground, not just a visual tool but a way to express structure, rhythm and movement.

    Sometimes I push letterforms to their limit, to the point where they lose readability and become pure visual matter.

    Whether I’m building a brand identity or animating graphics, I’m always exploring new visual languages, narrative rhythms and spatial poetry.

    Tools and Techniques

    I work across analog and digital tools, but most of my design and animation takes shape in Adobe Illustrator, After Effects, InDesign and Photoshop. And sometimes even Excel 🙂, especially when I want to break the rules and rethink typography in unconventional ways.

    I’m drawn to processes that allow for exploration and controlled chaos. I love building visual systems, breaking them apart and reconstructing them with intention.

    Typography, to me, is a living structure: modular, dynamic, and often influenced by visual or musical rhythm.

    My workflow starts with in-depth research and a large amount of hand sketching.

    I then digitize the material, print it, manipulate it manually by cutting, collaging and intervening physically, then scan it again and bring it back into the digital space.

    This back-and-forth between mediums helps me achieve a material quality and a sense of imperfection that pure digital work often lacks.

    Inspiration

    Beyond the underground scenes and art movements I mentioned earlier, my inspiration comes from everything around me. I’m a keen observer and deeply analytical. Since I was a kid, I’ve been fascinated by people’s gestures, movements, and subtle expressions.

    For example, when I used to go to parties, I would often stand next to the DJ, not just to watch their technique, but to study their body language, movements, and micro-expressions. Even the smallest gesture can spark an idea.

    I believe inspiration is everywhere. It’s about being present and training your eye to notice the details most people overlook.

    Future Goals

    I don’t have a specific goal or destination. My main aim is to keep doing things well and to never lose my curiosity. For me, curiosity is the fuel that drives creativity and growth, so I want to stay open, keep exploring, and enjoy the process without forcing a fixed outcome.

    Message to Readers

    Design is not art!

    Design is method, planning, and process. However, that method can, and sometimes should, be challenged, as long as you remain fully aware of what you are doing. It is essential that what you create can be reproduced consistently and, depending on the project, works effectively across different media and formats. I always tell my students that you need to know the rules before you can break them. To do good design, you need a lot of passion and a lot of patience.

    Contact




  • Rethinking Design: Why Privacy Shouldn’t Be an Afterthought



    As organizations continue to embrace digital transformation, how we think about personal data has changed fundamentally. Data is no longer just a by-product of business processes; it is often the product itself. This shift brings a pressing responsibility: privacy cannot be treated as an after-the-fact fix. It must be part of the architecture from the outset.

    This is the thinking behind Privacy by Design. This concept is gaining renewed attention not just because regulators endorse it but also because it is increasingly seen as a marker of digital maturity.

    So, what is Privacy by Design?

    At a basic level, Privacy by Design (often abbreviated as PbD) means designing systems, products, and processes with privacy built into them from the start. It’s not a tool or a checklist; it’s a way of thinking.

    Rather than waiting until the end of the development cycle to address privacy risks, teams proactively factor privacy into the design, architecture, and decision-making stages. This means asking the right questions early:

    • Do we need to collect this data?
    • How will it be stored, shared, and eventually deleted?
    • Are there less invasive ways to achieve the same business goal?

    This mindset goes beyond technology. It is as much about product strategy and organizational alignment as it is about encryption or access controls.
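    As a toy illustration of the first question (do we need to collect this data?), the collection boundary can allow only fields with a stated purpose through to storage. A minimal TypeScript sketch; the field names are hypothetical, not from any specific product:

```typescript
// Hypothetical sketch: enforce data minimization at the collection boundary.
// Field names ("email", "displayName", "trackingId") are illustrative only.
type UserRecord = Record<string, unknown>;

// Only fields with a documented purpose may be persisted.
const ALLOWED_FIELDS = new Set(["email", "displayName"]);

function minimize(input: UserRecord): UserRecord {
  const out: UserRecord = {};
  for (const [key, value] of Object.entries(input)) {
    if (ALLOWED_FIELDS.has(key)) out[key] = value;
  }
  return out;
}

// Anything not explicitly justified (e.g. a tracking id) is dropped before storage.
const stored = minimize({ email: "a@b.c", displayName: "Ada", trackingId: "x1" });
// stored -> { email: "a@b.c", displayName: "Ada" }
```

    The point is architectural: the allow-list makes "why are we collecting this?" an explicit, reviewable decision rather than an accident of the payload shape.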

    Why It’s Becoming Non-Negotiable

    The global regulatory environment is a key driver here. GDPR, for instance, formalized this approach in Article 25, which explicitly calls for “data protection by design and by default.” However, the need for privacy by design is not just about staying compliant.

    Customers today are more aware than ever of how their data is used. Organizations that respect that reality – minimizing collection, improving transparency, and offering control – tend to earn more trust. And in a landscape where trust is hard to gain and easy to lose, that’s a competitive advantage.

    Moreover, designing with privacy in mind from an engineering perspective reduces technical debt. Fixing privacy issues after launch usually means expensive rework and rushed patches. Building it right from day one leads to better outcomes.

    Turning Principles into Practice

    For many teams, the challenge is not agreeing with the idea but knowing how to apply it. Here’s what implementation often looks like in practice:

    1. Product & Engineering Collaboration

    Product teams define what data is needed and why. Engineering teams determine how it’s collected, stored, and protected. Early conversations between both help identify red flags and trade-offs before anything goes live.

    2. Embedding Privacy into Architecture

    This includes designing data flows with limitations, such as separating identifiers, encrypting sensitive attributes at rest, and ensuring role-based access to personal data. These aren’t just compliance tasks; they are sound design practices that also improve security posture.
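    One way to make "separating identifiers" concrete is pseudonymization: direct identifiers live in one locked-down store, behavioral attributes in another, linked only by a random pseudonym. A hypothetical sketch, not a reference to any particular system:

```typescript
// Hypothetical sketch of identifier separation via pseudonymization.
import { randomUUID } from "node:crypto";

interface Identity { pseudonym: string; email: string }       // PII store
interface Profile  { pseudonym: string; preferences: string[] } // no PII

const identities: Identity[] = [];
const profiles: Profile[] = [];

function register(email: string, preferences: string[]): string {
  const pseudonym = randomUUID();
  identities.push({ pseudonym, email });     // tightly access-controlled
  profiles.push({ pseudonym, preferences }); // analytics-facing, no identifiers
  return pseudonym;
}

const p = register("a@b.c", ["dark-mode"]);
// The analytics store can be queried without ever touching the email.
const profile = profiles.find((row) => row.pseudonym === p);
```

    A breach or over-broad query against the profile store then exposes behavior, not identity, and re-linking requires access to both stores.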

    3. Privacy as a Default Setting

    Instead of asking users to configure privacy settings after onboarding, PbD insists on secure defaults. If a feature collects data, users should have to opt in, not find a buried toggle to opt out.
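    The opt-in default can be as simple as a settings object whose collection flags all start off. A minimal sketch; the flag names are invented for illustration:

```typescript
// Hypothetical sketch: secure-by-default privacy settings.
interface PrivacySettings {
  analytics: boolean;
  personalization: boolean;
  thirdPartySharing: boolean;
}

// Nothing is collected until the user explicitly says so.
function defaultSettings(): PrivacySettings {
  return { analytics: false, personalization: false, thirdPartySharing: false };
}

// Consent is granted per feature, never in bulk.
function optIn(s: PrivacySettings, key: keyof PrivacySettings): PrivacySettings {
  const next = { ...s };
  next[key] = true;
  return next;
}

const s0 = defaultSettings();
const s1 = optIn(s0, "analytics"); // only analytics is enabled
```

    Returning a new object instead of mutating also leaves an audit-friendly trail of consent changes.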

    4. Periodic Reviews, Not Just One-Time Checks

    Privacy by Design isn’t a one-and-done activity. As systems evolve and new features roll out, periodic reviews help ensure that decisions made early on still hold up in practice.

    5. Cross-Functional Awareness

    Not every developer needs to be a privacy expert, but everyone in the development lifecycle—from analysts to QA—should be familiar with core privacy principles. A shared vocabulary goes a long way toward spotting and resolving issues early.

    Going Beyond Compliance

    A common mistake is to treat Privacy by Design as a box to tick. However, the organizations that do it well tend to treat it differently.

    They don’t ask, “What’s the minimum we need to do to comply?” Instead, they ask, “How do we build responsibly?”

    They don’t design features and then layer privacy on top. They build privacy into the feature.

    They don’t stop at policies. They create workflows and tooling that enforce those policies consistently.

    This mindset fosters resilience, reduces risk, and, over time, becomes part of the organization’s culture: product ideas are evaluated not only for feasibility and market fit but also for ethical and privacy alignment.

    Final Thoughts

    Privacy by Design is about intent. When teams build with privacy in mind, they send a message that the organization values the people behind the data.

    This approach is very much expected in an era where privacy concerns are at the centre of digital discourse. For those leading security, compliance, or product teams, the real opportunity lies in making privacy both a requirement and a differentiator.

    Seqrite brings Privacy by Design to life with automated tools for data discovery, classification, and protection—right from the start. Our solutions embed privacy into every layer of your IT infrastructure, ensuring compliance and building trust. Explore how Seqrite can simplify your privacy journey.

     




  • Behind the Curtain: Building Aurel’s Grand Theater from Design to Code



    “Aurel’s Grand Theater” is an experimental, unconventional solo portfolio project that invites users to read case
    studies, solve mysteries to unlock secret pages, or freely explore the theater – jumping around and even smashing
    things!

    I had an absolute blast working on it, even though it took much longer than I anticipated. Once I finally settled on a
    creative direction, the project took about a year to complete – but reaching that direction took nearly two years on
    its own. Throughout the journey, I balanced a full-time job as a lead web developer, freelance gigs, and an unexpected
    relocation to the other side of the world. The cherry on top? I went through
    way
    too many artistic iterations. It’s my longest solo project to date, but also one of the most fun and creatively
    rewarding. It gave me the chance to dive deep into creative coding and design.

    This article takes you behind the scenes of the project – covering everything from design to code, including tools,
    inspiration, project architecture, design patterns, and even feature breakdowns with code snippets you can adapt for
    your own work.

    The Creative Process: Behind the Curtain

    Genesis

    After eight years, my portfolio no longer reflected my skills or creativity. I wanted to create something unconventional – an experience where visitors become active participants rather than passive observers. Most importantly, I wanted it to be something I’d genuinely enjoy building. I was wrapping up “Leap for Mankind” at the time and had a blast working on it, blending storytelling with game and interactive elements. I wanted to create another experimental website that combines game mechanics with a narrative experience.

    From the beginning, I envisioned a small character that could freely explore its environment – smashing objects, interacting with surrounding elements, and navigating not just the floor but also vertical spaces by jumping onto tables and chairs. The goal was to transform the portfolio from a passive viewing experience into a fun, interactive one. At the same time, I recognized that some content demands clarity over creativity. For example, case studies require a more traditional format that emphasizes readability.

    One of the key challenges, then, was designing a portfolio that could seamlessly transition between an immersive 3D game world and more conventional documentation pages – without disrupting the overall experience.

    Building the Foundation

    I had a general concept of the website in mind, so I started coding a proof of concept (POC) for the game back in
    2022. In this early version, the player could move around, bump into objects, and jump – laying the foundation for the
    interactive world I envisioned. Interestingly, much of the core code structure from that POC made it into the final
    product. While the technical side was coming together, I still hadn’t figured out the artistic direction at that
    point.

    Trials and Errors

    As a full-time web developer, I rarely find myself wrestling with artistic direction. Until now, every freelance and
    side project I took on began with a clear creative vision that simply needed technical execution.

    This time was different. At first, I leaned toward a cartoonish aesthetic with bold outlines, thinking it would
    emphasize my creativity. I tried to convince myself it worked, but something felt off – especially when pairing the
    visual style with the user interface. The disconnect between my vision and its execution was unfamiliar territory, and
    it led me down a long and winding path of creative exploration.

    Early artistic direction

    I experimented with other styles too, like painterly visuals, which held promise but proved too time-consuming. Each
    artistic direction felt either not suitable for me or beyond my practical capabilities as a developer moonlighting as
    a designer.

    The theater concept – which ultimately became central to the portfolio’s identity – arrived surprisingly late. It
    wasn’t part of the original vision but surfaced only after countless iterations and discarded ideas. In total,
    finding an artistic direction that truly resonated took nearly two years – a journey further complicated by a major
    relocation across continents, ongoing work and freelance commitments, and personal responsibilities.

    The extended timeline wasn’t due to technical complexity, but to an unexpected battle with creative identity. What
    began as a straightforward portfolio refresh evolved into a deeper exploration of how to merge professional
    presentation with personal expression – pushing me far beyond code and into the world of creative direction.

    Tools & Inspiration: The Heart of Creation

    After numerous iterations and abandoned concepts, I finally arrived at a creative direction that resonated with my
    vision. Rather than detailing every artistic detour, I’ll focus on the tools and direction that ultimately led to the
    final product.

    Design Stack

    Below is the stack I use to design my 3D projects:

    UI/UX & Visual Design

    • Figma
      : When I first started, everything was laid out in a Photoshop file. Over the years, I tried various design tools,
      but I’ve been using Figma consistently since 2018 – and I’ve been really satisfied with it ever since.
    • Miro
      : Great for moodboarding and early ideation. It helps me visually organize thoughts and explore concepts during the
      initial phase.

    3D Modeling & Texturing

    • Blender
      : My favorite tool for 3D modeling. It’s incredibly powerful and flexible, though it does have a steep learning
      curve at first. Still, it’s well worth the effort for the level of creative control it offers.
    • Adobe Substance 3D Painter
      : The gold standard in my workflow for texture painting. It’s expensive, but the quality and precision it delivers
      make it indispensable.

    Image Editing

    • Krita
      : I only need light photo editing, and Krita handles that perfectly without locking me into Adobe’s ecosystem – a
      practical and efficient alternative.

    Drawing Inspiration from Storytellers

    While I drew inspiration from many sources, the most influential were Studio Ghibli and the mystical world of Harry
    Potter. Ghibli’s meticulous attention to environmental detail shaped my understanding of atmosphere, while the
    enchanting realism of the Harry Potter universe helped define the mood I wanted to evoke. I also browsed platforms
    like ArtStation and Pinterest for broader visual inspiration, while sites like Behance, FWA, and Awwwards influenced
    the more granular aspects of UX/UI design.

    Initially, I organized these references on an InVision board. However, when the platform shut down mid-project, I had
    to migrate everything to Miro – an unexpected transition and symbolic disruption that echoed the broader delays in the
    project.

    Mood board of Aurel’s Grand Theater

    Designing the Theater

    The theater concept emerged as the perfect metaphor for a portfolio: a space where different works could be presented
    as “performances,” while maintaining a cohesive environment. It also aligned beautifully with the nostalgic,
    pre-digital vibe inspired by many of my visual references.

    Environment design is a specialized discipline I wasn’t very familiar with initially. To create a theater that felt
    visually engaging and believable, I studied techniques from the
    FZD School
    . These approaches were invaluable in conceptualizing spaces that truly feel alive: places where you can sense people
    living their lives, working, and interacting with the environment.

    To make the environment feel genuinely inhabited, I incorporated details that suggest human presence: scattered props,
    tools, theater posters, food items, pamphlets, and even bits of miscellaneous junk throughout the space. These
    seemingly minor elements were crucial in transforming the static 3D model into a setting rich with history, mood, and
    character.

    The 3D Modeling Process

    Optimizing for Web Performance

    Creating 3D environments for the web comes with unique challenges that differ significantly from modeling for pre-rendered video. When
    scenes need to be rendered in real-time by a browser, every polygon matters.

    To address this, I adopted a strict low-poly approach and focused heavily on building reusable modular components.
    These elements could be instantiated throughout the environment without duplicating unnecessary geometry or textures.

    While the final result is still relatively heavy, this modular system allowed me to construct more complex and
    detailed scenes while maintaining reasonable download sizes and rendering performance, which wouldn’t have been
    possible without this approach.

    Texture Over Geometry

    Rather than modeling intricate details that would increase polygon counts, I leveraged textures to suggest complexity.

    Adobe Substance 3D became my primary tool for creating rich material surfaces that could convey detail without
    overloading the renderer. This approach was particularly effective for elements like the traditional Hanok windows
    with their intricate wooden lattice patterns. Instead of modeling each panel, which would have been
    performance-prohibitive, I painted the details into textures and applied them to simple geometric forms.

    Frameworks & Patterns: Behind the Scenes of Development

    Tech Stack

    This is a comprehensive overview of the technology stack I used for Aurel’s Grand Theater website, leveraging my
    existing expertise while incorporating specialized tools for animation and 3D effects.

    Core Framework

    • Vue.js
      : While I previously worked with React, Vue has been my primary framework since 2018. Beyond simply enjoying and
      loving this framework, it makes sense for me to maintain consistency between the tools I use at work and on my side
      projects. I also use Vite and Pinia.

    Animation & Interaction

    • GSAP
      : A cornerstone of my development toolkit for nearly a decade, primarily utilized on this project for:

      • ScrollTrigger functionality
      • MotionPath animations
      • Timeline and tweens
      • As a personal challenge, I created my own text-splitting functionality for this project (since it wasn’t client
        work), but I highly recommend GSAP’s SplitText for most use cases.
    • Lenis
      : My go-to library for smooth scrolling. It integrates beautifully with scroll animations, especially when working
      with Three.js.
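    The DIY text-splitting mentioned above can be sketched, in broad strokes, as wrapping each character in its own element so it can be animated individually. This is a generic illustration of the technique, not the author's actual implementation; the class name is invented:

```typescript
// Hypothetical sketch of a minimal text splitter: one <span> per character,
// so GSAP timelines can stagger-animate each glyph. Spaces are left unwrapped.
function splitChars(text: string): string {
  return text
    .split("")
    .map((ch) => (ch === " " ? " " : `<span class="char">${ch}</span>`))
    .join("");
}

splitChars("Hi");
// '<span class="char">H</span><span class="char">i</span>'
```

    In practice the result would be assigned to an element's `innerHTML`, after which something like `gsap.from(".char", { y: 20, stagger: 0.03 })` animates the pieces.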

    3D Graphics & Physics

    • Three.js
      : My favorite 3D framework and a key part of my toolkit since 2015. I enjoy using it to bring interactive 3D
      elements to the web.
    • Cannon.js
      : Powers the site’s physics simulations. While I considered alternatives like Rapier, I stuck with Cannon.js since
      it was already integrated into my 2022 proof-of-concept. Replacing it would have introduced unnecessary delays.
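    Under the hood, engines like Cannon.js advance the world in small, fixed time steps. A toy, engine-free illustration of that pattern (not Cannon.js's API; the floor handling here is deliberately naive):

```typescript
// Toy fixed-timestep integration: a body falls under gravity and stops
// at floor height. Real engines resolve collisions far more carefully.
interface Body { y: number; vy: number }

const GRAVITY = -9.82; // m/s^2 (the magnitude Cannon.js uses by default)
const FLOOR = 0;

function step(body: Body, dt: number): Body {
  const vy = body.vy + GRAVITY * dt; // integrate velocity
  let y = body.y + vy * dt;          // integrate position
  if (y < FLOOR) y = FLOOR;          // naive floor "collision"
  return { y, vy: y === FLOOR ? 0 : vy };
}

let body: Body = { y: 10, vy: 0 };
for (let i = 0; i < 600; i++) body = step(body, 1 / 60); // ~10 seconds at 60 Hz
// body.y === 0 (the body has landed and come to rest)
```

    The same loop shape is what drives the portfolio-style interactions: step the physics world each frame, then copy body positions onto the Three.js meshes.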

    Styling

    • Queso
      : A headless CSS framework developed at MamboMambo (my workplace). I chose it for its comprehensive starter
      components and seamless integration with my workflow. Despite being in beta, it’s already reliable and flexible.

    This tech stack strikes a balance between familiar tools and specialized libraries that enable the visual and
    interactive elements that define the site’s experience.

    Architecture

    I follow Clean Code principles and other industry best practices, including aiming to keep my files small,
    independent, reusable, concise, and testable.

    I’ve also adopted the component folder architecture developed at my workplace. Instead of placing
    Vue
    files directly inside the
    ./components
    directory, each component resides in its own folder. This folder contains the
    Vue
    file along with related types, unit tests, supporting files, and any child components.

    Although initially designed for
    Vue
    components, I’ve found this structure works equally well for organizing logic with
    Typescript
    files,
    utilities
    ,
    directives
    , and more. It’s a clean, consistent system that improves code readability, maintainability, and scalability.

    MyFile
    ├── MyFile.vue
    ├── MyFile.test.ts
    ├── MyFile.types.ts
    ├── index.ts (export the types and the vue file)
    ├── data.json (optional files needed in MyFile.vue such as .json files)
    │ 
    ├── components
    │   ├── MyFileChildren
    │   │   ├── MyFileChildren.vue
    │   │   ├── MyFileChildren.test.ts
    │   │   ├── MyFileChildren.types.ts
    │   │   ├── index.ts
    │   ├── MyFileSecondChildren
    │   │   ├── MyFileSecondChildren.vue
    │   │   ├── MyFileSecondChildren.test.ts
    │   │   ├── MyFileSecondChildren.types.ts
    │   │   ├── index.ts

    The overall project architecture follows the high-level structure outlined below.

    src/
    ├── assets/             # Static assets like images, fonts, and styles
    ├── components/         # Vue components
    ├── composables/        # Vue composables for shared logic
    ├── constant/           # Project-wide constants
    ├── data/               # Project-wide data files
    ├── directives/         # Vue custom directives
    ├── router/             # Vue Router configuration and routes
    ├── services/           # Services (e.g. i18n)
    ├── stores/             # State management (Pinia)
    ├── three/              
    │   ├── Experience/    
    │   │   ├── Theater/                 # Theater experience
    │   │   │   ├── Experience/          # Core experience logic
    │   │   │   ├── Progress/            # Loading and progress management
    │   │   │   ├── Camera/              # Camera configuration and controls
    │   │   │   ├── Renderer/            # WebGL renderer setup and configuration
    │   │   │   ├── Sources/             # List of resources
    │   │   │   ├── Physics/             # Physics simulation and interactions
    │   │   │   │   ├── PhysicsMaterial/ # Physics Material
    │   │   │   │   ├── Shared/          # Physics for models shared across scenes
    │   │   │   │   │   ├── Pit/         # Physics simulation and interactions
    │   │   │   │   │   │   ├── Pit.ts   # Physics for models in the pit
    │   │   │   │   │   │   ├── ...       
    │   │   │   │   ├── Triggers/         # Physics Triggers
    │   │   │   │   ├── Scenes/           # Physics for About/Leap/Mont-Saint-Michel
    │   │   │   │   │   ├── Leap/         
    │   │   │   │   │   │   ├── Leap.ts   # Physics for Leap For Mankind's models       
    │   │   │   │   │   │   ├── ...         
    │   │   │   │   │   └── ...          
    │   │   │   ├── World/               # 3D world setup and management
    │   │   │   │   ├── World/           # Main world configuration and setup
    │   │   │   │   ├── PlayerModel/     # Player character model and controls
    │   │   │   │   ├── CameraTransition/ # Camera movement and transitions
    │   │   │   │   ├── Environments/    # Environment setup and management
    │   │   │   │   │   ├── Environment.ts # Environment configuration
    │   │   │   │   │   └── types.ts     # Environment type definitions
    │   │   │   │   ├── Scenes/          # Different scene configurations
    │   │   │   │   │   ├── Leap/ 
    │   │   │   │   │   │   ├── Leap.ts  # Leap For Mankind model's logic
    │   │   │   │   │   └── ...      
    │   │   │   │   ├── Tutorial/        # Tutorial meshes & logic
    │   │   │   │   ├── Bleed/           # Bleed effect logic
    │   │   │   │   ├── Bird/            # Bird model logic
    │   │   │   │   ├── Markers/         # Points of interest
    │   │   │   │   ├── Shared/          # Models & meshes used across scenes
    │   │   │   │   └── ...         
    │   │   │   ├── SharedMaterials/     # Reusable Three.js materials
    │   │   │   └── PostProcessing/      # Post-processing effects
    │   │   │
    │   │   ├── Basement/                # Basement experience
    │   │   ├── Idle/                    # Idle state experience
    │   │   ├── Error404/                # 404 error experience
    │   │   ├── Constant/                # Three.js related constants
    │   │   ├── Factories/               # Three.js factory code
    │   │   │   ├── RopeMaterialGenerator/
    │   │   │   │   ├── RopeMaterialGenerator.ts        
    │   │   │   │   └── ...
    │   │   │   ├── ... 
    │   ├── Utils/                   # Three.js utilities and other reusable functions
    │   │   └── Shaders/                 # Shaders programs
    ├── types/              # Project-wide TypeScript type definitions
    ├── utils/              # Utility functions and helpers
    ├── vendors/            # Third-party vendor code
    ├── views/              # Page components and layouts
    ├── workers/            # Web Workers
    ├── App.vue             # Root Vue component
    └── main.ts             # Application entry point

    This structured approach helps me manage the codebase efficiently and maintain a clear separation of concerns,
    making both development and future maintenance significantly more straightforward.

    Design Patterns

    Singleton

    Singletons play a key role in this type of project architecture, enabling efficient code reuse without incurring
    performance penalties.

    import Experience from "@/three/Experience/Experience";
    import type { Scene } from "@/types/three.types";
    
    let instance: SingletonExample | null = null;
    
    export default class SingletonExample {
      private scene: Scene;
      private experience: Experience;
    
      constructor() {
        if (instance) {
          return instance;
        }
        instance = this;
    
        this.experience = new Experience();
        this.scene = this.experience.scene;
      }
    
      init() {
        // initialize the singleton
      }
    
      someMethod() {
        // some method
      }
    
      update() {
        // update the singleton
      }
      
      update10FPS() {
        // Optional: update methods capped at 10FPS
      }
    
      destroySingleton() {
        // clean up three.js + destroy the singleton
      }
    }
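    Because Experience and Scene come from the project, the class above isn't runnable on its own. The essence of the pattern can be sketched in isolation; Counter here is a hypothetical stand-in class with no Three.js dependencies:

    ```typescript
    // Standalone sketch of the module-level singleton pattern used above.
    let counterInstance: Counter | null = null;

    class Counter {
      public count = 0;

      constructor() {
        // Return the existing instance instead of creating a new one
        if (counterInstance) {
          return counterInstance;
        }
        counterInstance = this;
      }
    }

    const a = new Counter();
    const b = new Counter(); // returns the existing instance, not a new object
    a.count++;

    console.log(a === b, b.count); // true 1
    ```

    Every file that calls `new SingletonExample()` therefore shares the same state, which is what makes the cross-file wiring (`this.experience = new Experience()`) cheap.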
    

    Split Responsibility Architecture

    As shown earlier in the project architecture section, I deliberately separated physics management from model handling
    to produce smaller, more maintainable files.

    World Management Files:

    These files are responsible for initializing factories and managing meshes within the main loop. They may also include
    functions specific to individual world items.

    Here’s an example of one such file:

    // src/three/Experience/Theater/mockFileModel/mockFileModel.ts
    import Experience from "@/three/Experience/Theater/Experience/Experience";
    
    import type {
      List,
      LoadModel
    } from "@/types/experience/experience.types";
    import type { Scene } from "@/types/three.types";
    import type Physics from "@/three/Experience/Theater/Physics/Physics";
    import type { Resources } from "@/three/Experience/Utils/Ressources/Resources";
    import type { MaterialGenerator } from "@/types/experience/materialGeneratorType";
    
    
    let instance: MockWorldFile | null = null;
    export default class MockWorldFile {
      private experience: Experience;
      private list: List;
      private physics: Physics;
      private resources: Resources;
      private scene: Scene;
      private materialGenerator: MaterialGenerator;
      public loadModel: LoadModel;
      constructor() {
        //    Singleton
        if (instance) {
          return instance;
        }
    
        instance = this;
    
        this.experience = new Experience();
        this.scene = this.experience.scene;
        this.resources = this.experience.resources;
        this.physics = this.experience.physics;
    
        // factories
        this.materialGenerator = this.experience.materialGenerator;
        this.loadModel = this.experience.loadModel;
    
        // Most of the materials are initialized in a file called sharedMaterials
        const bakedMaterial = this.experience.world.sharedMaterials.bakedMaterial;
        // physics info such as position, rotation, scale, weight, etc.
        const paintBucketPhysics = this.physics.items.paintBucket; 
    
        // Array of model objects, used to update their position, rotation, scale, etc.
        this.list = {
          paintBucket: [],
          ...
        };
    
        // get the resource file
        const resourcePaintBucket = this.resources.items.paintBucketWhite;
    
        // Reusable code to add models with physics to the scene. I will talk about that later.
        this.loadModel.setModels(
          resourcePaintBucket.scene,
          paintBucketPhysics,
          "paintBucketWhite",
          bakedMaterial,
          true,
          true,
          false,
          false,
          false,
          this.list.paintBucket,
          this.physics.mock,
          "metalBowlFalling",
        );
      }
    
      otherMethod() {
        ...
      }
    
      destroySingleton() {
        ...
      }
    }

    Physics Management Files

    These files trigger the factories to apply physics to meshes, store the resulting physics bodies, and update mesh
    positions on each frame.

    // src/three/Experience/Theater/pathTo/mockFilePhysics
    import Experience from "@/three/Experience/Theater/Experience/Experience";
    
    import additionalShape from "./additionalShape.json";
    
    import type {
      PhysicsResources,
      TrackName,
      List,
      modelsList
    } from "@/types/experience/experience.types";
    import type { cannonObject } from "@/types/three.types";
    import type PhysicsGenerator from "../Factories/PhysicsGenerator/PhysicsGenerator";
    import type UpdateLocation from "../Utils/UpdateLocation/UpdateLocation";
    import type UpdatePositionMesh from "../Utils/UpdatePositionMesh/UpdatePositionMesh";
    import type AudioGenerator from "../Utils/AudioGenerator/AudioGenerator";
    
    let instance: MockFilePhysics | null = null;
    
    export default class MockFilePhysics {
      private experience: Experience;
      private list: List;
      private physicsGenerator: PhysicsGenerator;
      private updateLocation: UpdateLocation;
      private modelsList: modelsList;
      private updatePositionMesh: UpdatePositionMesh;
      private audioGenerator: AudioGenerator;
      private debug: Experience["debug"];
    
      constructor() {
        //    Singleton
        if (instance) {
          return instance;
        }
        instance = this;
    
        this.experience = new Experience();
        this.debug = this.experience.debug;
        this.physicsGenerator = this.experience.physicsGenerator;
        this.updateLocation = this.experience.updateLocation;
        this.updatePositionMesh = this.experience.updatePositionMesh;
        this.audioGenerator = this.experience.audioGenerator;
    
        // Array of physics bodies, used to update the models' position, rotation, scale, etc.
        this.list = {
          paintBucket: [],
        };
      }
    
      setModelsList() {
        // When the load progress reaches a certain percentage, we can set the models list,
        // avoiding some potential bugs or unnecessary conditional logic. Note that the
        // update method is never run until the scene is fully ready.
        this.modelsList = this.experience.world.constructionToolsModel.list;
      }
    
      addNewItem(
        element: PhysicsResources,
        listName: string,
        trackName: TrackName,
        sleepSpeedLimit: number | null = null,
      ) {
    
        // factory to add physics, I will talk about that later
        const itemWithPhysics = this.physicsGenerator.createItemPhysics(
          element,
          null,
          true,
          true,
          trackName,
          sleepSpeedLimit,
        );
    
        // Additional optional shapes to the item if needed
        switch (listName) {
          case "broom":
            this.physicsGenerator.addMultipleAdditionalShapesToItem(
              itemWithPhysics,
              additionalShape.broomHandle,
            );
            break;
    
        }
    
        this.list[listName].push(itemWithPhysics);
      }
    
      // this method is called every frame.
      update() {
        // reusable code to update the position of the mesh
        this.updatePositionMesh.updatePositionMesh(
          this.modelsList["paintBucket"],
          this.list["paintBucket"],
        );
      }
    
    
      destroySingleton() {
        ...
      }
    }

    Since the logic for updating mesh positions is consistent across the project, I created reusable code that can be
    applied in nearly all physics-related files.

    // src/three/Experience/Utils/UpdatePositionMesh/UpdatePositionMesh.ts
    
    export default class UpdatePositionMesh {
      updatePositionMesh(meshList: MeshList, physicList: PhysicList) {
        for (let index = 0; index < physicList.length; index++) {
          const physic = physicList[index];
          const model = meshList[index].model;
    
          model.position.set(
            physic.position.x,
            physic.position.y,
            physic.position.z
          );
          model.quaternion.set(
            physic.quaternion.x,
            physic.quaternion.y,
            physic.quaternion.z,
            physic.quaternion.w
          );
        }
      }
    }
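    To make the effect of this loop concrete, here is a standalone sketch that inlines the same per-frame copy, using plain-object stand-ins for the Three.js model and cannon-es body (all names here are hypothetical):

    ```typescript
    // Mock types mimicking the .position.set / .quaternion.set APIs.
    type Vec3 = { x: number; y: number; z: number };
    type Quat = { x: number; y: number; z: number; w: number };

    class MockVec3 {
      x = 0; y = 0; z = 0;
      set(x: number, y: number, z: number) { this.x = x; this.y = y; this.z = z; }
    }
    class MockQuat {
      x = 0; y = 0; z = 0; w = 1;
      set(x: number, y: number, z: number, w: number) {
        this.x = x; this.y = y; this.z = z; this.w = w;
      }
    }
    class MockModel {
      position = new MockVec3();
      quaternion = new MockQuat();
    }

    const physicList: { position: Vec3; quaternion: Quat }[] = [
      { position: { x: 1, y: 2, z: 3 }, quaternion: { x: 0, y: 0, z: 0, w: 1 } },
    ];
    const meshList = [{ model: new MockModel() }];

    // The same copy performed each frame by updatePositionMesh:
    for (let index = 0; index < physicList.length; index++) {
      const physic = physicList[index];
      const model = meshList[index].model;
      model.position.set(physic.position.x, physic.position.y, physic.position.z);
      model.quaternion.set(
        physic.quaternion.x,
        physic.quaternion.y,
        physic.quaternion.z,
        physic.quaternion.w,
      );
    }

    console.log(meshList[0].model.position.x, meshList[0].model.position.z); // 1 3
    ```

    The physics simulation owns the source of truth; the render meshes are passive followers, which is why a single generic helper can serve every physics file.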

    Factory Patterns

    To avoid redundant code, I built a system around reusable factories. While the project includes multiple factories,
    these two are the most essential:

    Model Factory
    : LoadModel

    With few exceptions, all models—whether instanced or regular, with or without physics—are added through this factory.

    // src/three/Experience/factories/LoadModel/LoadModel.ts
    import * as THREE from "three";
    import Experience from "@/three/Experience/Theater/Experience/Experience";
    
    import type {
      PhysicsResources,
      TrackName,
      List,
      ModelListPath,
      PhysicsListPath
    } from "@/types/experience/experience.types";
    import type { LoadModelMaterial } from "./types";
    import type { Model, Material, Scene, Mesh } from "@/types/three.types";
    import type Progress from "@/three/Experience/Utils/Progress/Progress";
    import type AddPhysicsToModel from "@/three/Experience/factories/AddPhysicsToModel/AddPhysicsToModel";
    
    let instance: LoadModel | null = null;
    
    
    export default class LoadModel {
      public experience: Experience;
      public progress: Progress;
      public mesh: Mesh;
      public addPhysicsToModel: AddPhysicsToModel;
      public scene: Scene;
    
    
      constructor() {
        if (instance) {
          return instance;
        }
        instance = this;
    
        this.experience = new Experience();
        this.scene = this.experience.scene;
        this.progress = this.experience.progress;
        this.addPhysicsToModel = this.experience.addPhysicsToModel;
      }
    
    
      async setModels(
        model: Model,
        list: PhysicsResources[],
        physicsList: string,
        bakedMaterial: LoadModelMaterial,
        isCastShadow: boolean = false,
        isReceiveShadow: boolean = false,
        isInstancedModel: boolean = false,
        isDoubleSided: boolean = false,
        modelListPath: ModelListPath,
        physicsListPath: PhysicsListPath,
        trackName: TrackName = null,
        sleepSpeedLimit: number | null = null,
      ) {
        const loadedModel = isInstancedModel
          ? await this.addInstancedModel(
              model,
              bakedMaterial,
              true,
              true,
              isDoubleSided,
              isCastShadow,
              isReceiveShadow,
              list.length,
            )
            : await this.addModel(
                model,
                bakedMaterial,
                true,
                true,
                isDoubleSided,
                isCastShadow,
                isReceiveShadow,
              );
    
    
        this.addPhysicsToModel.loopListThenAddModelToSceneThenToPhysics(
          list,
          modelListPath,
          physicsListPath,
          physicsList,
          loadedModel,
          isInstancedModel,
          trackName,
          sleepSpeedLimit,
        );
      }
    
    
      addModel = (
        model: Model,
        material: Material,
        isTransparent: boolean = false,
        isFrustumCulled: boolean = true,
        isDoubleSided: boolean = false,
        isCastShadow: boolean = false,
        isReceiveShadow: boolean = false,
        isClone: boolean = true,
      ) => {
        model.traverse((child: THREE.Object3D) => {
          if (!isFrustumCulled) child.frustumCulled = false;
          if (child instanceof THREE.Mesh) {
            child.castShadow = isCastShadow;
            child.receiveShadow = isReceiveShadow;
    
            if (material) {
              child.material = this.setMaterialOrCloneMaterial(isClone, material);
            }
    
            child.material.transparent = isTransparent;
            if (isDoubleSided) child.material.side = THREE.DoubleSide;
            if (isReceiveShadow) child.geometry.computeVertexNormals(); // https://discourse.threejs.org/t/gltf-model-shadows-not-receiving-with-gltfmeshstandardsgmaterial/24112/9
          }
        });
    
        this.progress.addLoadedModel(); // Update the number of items loaded
        return { model: model };
      };
    
    
      setMaterialOrCloneMaterial(isClone: boolean, material: Material) {
        return isClone ? material.clone() : material;
      }
    
    
      addInstancedModel = () => {
       ...
      };
    
      // other methods
    
    
      destroySingleton() {
        ...
      }
    }
    Physics Factory
    : PhysicsGenerator

    This factory has a single responsibility: creating physics properties for meshes.

    // src/three/Experience/Utils/PhysicsGenerator/PhysicsGenerator.ts
    import Experience from "@/three/Experience/Theater/Experience/Experience";
    import * as CANNON from "cannon-es";
    
    import CannonUtils from "@/utils/cannonUtils.js";
    
    import type {
      Quaternion,
      PhysicsItemPosition,
      PhysicsItemType,
      PhysicsResources,
      TrackName,
      CannonObject,
    } from "@/types/experience/experience.types";
    
    import type { Scene, ConvexGeometry } from "@/types/three.types";
    import type Progress from "@/three/Experience/Utils/Progress/Progress";
    import type { Resources } from "@/three/Experience/Utils/Ressources/Resources";
    import type AudioGenerator from "@/three/Experience/Utils/AudioGenerator/AudioGenerator";
    import type Physics from "@/three/Experience/Theater/Physics/Physics";
    import type { physicsShape } from "./PhysicsGenerator.types";
    
    let instance: PhysicsGenerator | null = null;
    
    export default class PhysicsGenerator {
      public experience: Experience;
      public physics: Physics;
      public resources: Resources;
      public currentScene: string | null = null;
      public progress: Progress;
      public audioGenerator: AudioGenerator;
    
      constructor() {
        //    Singleton
        if (instance) {
          return instance;
        }
        instance = this;
    
        this.experience = new Experience();
        this.resources = this.experience.resources;
        this.audioGenerator = this.experience.audioGenerator;
        this.physics = this.experience.physics;
        this.progress = this.experience.progress;
    
        this.currentScene = this.experience.currentScene;
      }
    
    
      //#region add physics to an object
    
      createItemPhysics(
        source: PhysicsResources, // object containing physics info such as mass, shape, position....
        convex: ConvexGeometry | null = null,
        allowSleep: boolean = true,
        isBodyToAdd: boolean = true,
        trackName: TrackName = null,
        sleepSpeedLimit: number | null = null
      ) {
        const setSpeedLimit = sleepSpeedLimit ?? 0.15;
    
        // For this project I needed to detect if the user was in the Mont-Saint-Michel, Leap For Mankind, About or Archives scene.
        const localCurrentScene = source.locations[this.currentScene]
          ? this.currentScene
          : "about";
    
        switch (source.type as physicsShape) {
          case "box": {
            const boxShape = new CANNON.Box(new CANNON.Vec3(...source.shape));
            const boxBody = new CANNON.Body({
              mass: source.mass,
              position: new CANNON.Vec3(
                source.locations[localCurrentScene].position.x,
                source.locations[localCurrentScene].position.y,
                source.locations[localCurrentScene].position.z
              ),
              allowSleep: allowSleep,
              shape: boxShape,
              material: source.material
                ? source.material
                : this.physics.physics.defaultMaterial,
              sleepSpeedLimit: setSpeedLimit,
            });
    
            source.locations[localCurrentScene].quaternion
              && (boxBody.quaternion.y =
                  source.locations[localCurrentScene].quaternion.y);
    
            this.physics.physics.addBody(boxBody);
            this.updatedLoadedItem();
    
            // Add optional SFX that will be played if the item collides with another physics item
        trackName
          && this.audioGenerator.addEventListenersToObject(boxBody, trackName);
    
            return boxBody;
          }
    
      // Then it's basically the same logic for all other cases
          case "sphere": {
            ...
          }
    
          case "cylinder": {
           ...
          }
    
          case "plane": {
           ...
          }
    
          case "trigger": {
          ...
          }
    
          case "torus": {
            ...
          }
    
          case "trimesh": {
           ...
          }
    
          case "polyhedron": {
            ...
          }
    
          default:
            ...
            break;
        }
      }
    
      updatedLoadedItem() {
        this.progress.addLoadedPhysicsItem(); // Update the number of items loaded (physics only)
      }
    
      //#endregion add physics to an object
    
      // other
    
      destroySingleton() {
        ...
      }
    }

    FPS Capping

    With over 100 models and approximately 150 physics items loaded in the main scene, Aurel’s Grand Theater required
    performance-driven coding from the outset.

    If I were to rebuild the project today, I would leverage GPU computing much more intensively. However, when I started the
    proof of concept in 2022, GPU computing for the web was still relatively new and not fully mature—at least, that was
    my perception at the time. Rather than recoding everything, I worked with what I had, which also presented a great
    personal challenge. In addition to using low-poly models and employing classic optimization techniques, I extensively
    used instanced meshes for all small, reusable items—even those with physics. I also relied on many other
    under-the-hood techniques to keep the performance as smooth as possible on this CPU-intensive website.

    One particularly helpful approach I implemented was adaptive frame rates. By capping update loops at different rates
    (60, 30, or 10 FPS), depending on whether the logic actually required running at full frame rate, I cut unnecessary
    work. After all, some logic doesn't need to run on every rendered frame. This is a simple yet effective technique
    that can easily be incorporated into your own project.
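    The core of the technique is a per-rate accumulator: each frame's delta time is added to an accumulator, and the capped callback fires only once the accumulator crosses its timestep. A minimal, deterministic sketch, driven by fake frame deltas instead of requestAnimationFrame (makeCappedTicker is a hypothetical helper, not from the project):

    ```typescript
    // Fire onTick at most `fps` times per simulated second.
    function makeCappedTicker(fps: number, onTick: () => void) {
      let accumulator = 0;
      const step = 1000 / fps; // ms per capped tick
      return (deltaMs: number) => {
        accumulator += deltaMs;
        if (accumulator >= step) {
          onTick();
          accumulator -= step; // keep the remainder rather than resetting to 0
        }
      };
    }

    let calls10FPS = 0;
    const tick10 = makeCappedTicker(10, () => calls10FPS++);

    // Simulate one second of 60FPS frames (~16.67ms each):
    for (let frame = 0; frame < 60; frame++) tick10(1000 / 60);

    console.log(calls10FPS); // 10
    ```

    Subtracting the step instead of zeroing the accumulator preserves leftover time, so capped ticks stay evenly spaced over the long run.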

    Now, let's take a look at the file responsible for managing time in the project.

    // src/three/Experience/Utils/Time/Time.ts
    import * as THREE from "three";
    import EventEmitter from "@/three/Experience/Utils/EventEmitter/EventEmitter";
    
    let instance: Time | null = null;
    let animationFrameId: number | null = null;
    const clock = new THREE.Clock();
    
    export default class Time extends EventEmitter {
      private lastTick60FPS: number = 0;
      private lastTick30FPS: number = 0;
      private lastTick10FPS: number = 0;
    
      private accumulator60FPS: number = 0;
      private accumulator30FPS: number = 0;
      private accumulator10FPS: number = 0;
    
      public start: number = 0;
      public current: number = 0;
      public elapsed: number = 0;
      public delta: number = 0;
      public delta60FPS: number = 0;
      public delta30FPS: number = 0;
      public delta10FPS: number = 0;
    
      constructor() {
        if (instance) {
          return instance;
        }
        super();
        instance = this;
      }
    
      tick() {
        const currentTime: number = clock.getElapsedTime() * 1000;
    
        this.delta = currentTime - this.current;
        this.current = currentTime;
    
        // Accumulate the time that has passed
        this.accumulator60FPS += this.delta;
        this.accumulator30FPS += this.delta;
        this.accumulator10FPS += this.delta;
    
        // Trigger uncapped tick event using the project's EventEmitter class
        this.trigger("tick");
    
        // Trigger 60FPS tick event
        if (this.accumulator60FPS >= 1000 / 60) {
          this.delta60FPS = currentTime - this.lastTick60FPS;
          this.lastTick60FPS = currentTime;
    
          // Same logic as "this.trigger("tick")" but for 60FPS
          this.trigger("tick60FPS");
          this.accumulator60FPS -= 1000 / 60;
        }
    
        // Trigger 30FPS tick event
        if (this.accumulator30FPS >= 1000 / 30) {
          this.delta30FPS = currentTime - this.lastTick30FPS;
          this.lastTick30FPS = currentTime;
    
          this.trigger("tick30FPS");
          this.accumulator30FPS -= 1000 / 30;
        }
    
        // Trigger 10FPS tick event
        if (this.accumulator10FPS >= 1000 / 10) {
          this.delta10FPS = currentTime - this.lastTick10FPS;
          this.lastTick10FPS = currentTime;
    
          this.trigger("tick10FPS");
          this.accumulator10FPS -= 1000 / 10;
        }
    
        animationFrameId = window.requestAnimationFrame(() => {
          this.tick();
        });
      }
    }
    

    Then, in the
    Experience.ts
    file, we simply place the methods according to the required FPS.

    constructor() {
      if (instance) {
        return instance;
      }
    
      ...
    
      this.time = new Time();
    
      ...
    
      //  The game loops (here called tick) are updated when the EventEmitter class is triggered.
      this.time.on("tick", () => {
        this.update();
      });
      this.time.on("tick60FPS", () => {
        this.update60();
      });
      this.time.on("tick30FPS", () => {
        this.update30();
      });
      this.time.on("tick10FPS", () => {
        this.update10();
      });
    }
    
    
      update() {
        this.renderer.update();
      }
    
      update60() {
        this.camera.update60FPS();
        this.world.update60FPS(); 
        this.physics.update60FPS();
      }
    
      update30() {
        this.physics.update30FPS();
        this.world.update30FPS();
      }
      
      update10() {
        this.physics.update10FPS();
        this.world.update10FPS();	
      }

    Selected Feature Breakdown: Code & Explanation

    Cinematic Page Transitions: Return Animation Effects

    Inspired by techniques from the film industry, the transitions between the 3D game and the more traditionally
    structured pages, such as the Case Studies, About, and Credits pages, were carefully designed to feel seamless and
    cinematic.

    The first-time visit animation provides context and immerses users into the website experience. Meanwhile, the other
    page transitions play a crucial role in ensuring a smooth shift between the game and the more conventional layout of
    the Case Studies and About page, preserving immersion while naturally guiding users from one experience to the next.
    Without these transitions, it would feel like abruptly jumping between two entirely different worlds.

    I’ll do a deep dive into the code for the animation when the user returns from the basement level. It’s a bit simpler
    than the other cinematic transitions but the underlying logic is the same, which makes it easier for you to adapt it
    to another project.

    Here's the base file:

    // src/three/Experience/Theater/World/CameraTransition/CameraIntroReturning.ts
    
    import { Vector3, CatmullRomCurve3 } from "three";
    import Experience from "@/three/Experience/Theater/Experience/Experience";
    import { DebugPath } from "@/three/Experience/Utils/DebugPath/DebugPath";
    
    import { createSmoothLookAtTransition } from "./cameraUtils";
    import { setPlayerPosition } from "@/three/Experience/Utils/playerPositionUtils";
    
    import { gsap } from "gsap";
    import { MotionPathPlugin } from "gsap/MotionPathPlugin";
    
    import {
      CAMERA_POSITION_SEAT,
      PLAYER_POSITION_RETURNING,
    } from "@/three/Experience/Constant/PlayerPosition";
    
    import type { Debug } from "@/three/Experience/Utils/Debugger/types";
    import type { Scene, Camera } from "@/types/three.types";
    
    
    const DURATION_RETURNING_FORWARD = 5;
    const DURATION_LOOKAT_RETURNING_FORWARD = 4;
    const RETURNING_PLAYER_QUATERNION = [0, 0, 0, 1];
    const RETURNING_PLAYER_CAMERA_FINAL_POSITION = [
      7.3927162062108955, 3.4067893207543367, 4.151297331541345,
    ];
    const RETURNING_PLAYER_ROTATION = -0.3;
    const RETURNING_PLAYER_CAMERA_FINAL_LOOKAT = [
      2.998858990830107, 2.5067893207543412, -1.55606797749978944,
    ];
    
    gsap.registerPlugin(MotionPathPlugin);
    
    let instance: CameraIntroReturning | null = null;
    
    export default class CameraIntroReturning {
      private scene: Scene;
      private experience: Experience;
      private timelineAnimation: GSAPTimeline;
      private debug: Debug;
      private debugPath: DebugPath;
      private camera: Camera;
      private lookAtTransitionStarted: boolean = false;
    
      constructor() {
        if (instance) {
          return instance;
        }
        instance = this;
    
        this.experience = new Experience();
        this.scene = this.experience.scene;
        this.debug = this.experience.debug;
    
        this.timelineAnimation = gsap.timeline({
          paused: true,
          onComplete: () => {
            this.timelineAnimation.clear().kill();
          },
        });
      }
      init() {
        this.camera = this.experience.camera.instance;
        this.initPath();
      }
    
      initPath() {
        ...
      }
      
      initTimeline() {
        ...
      }
    
      createSmoothLookAtTransition(
       ...
      }
    
      setPositionPlayer() {
       ...
      }
    
      playAnimation() {
       ...
      }
    
      ...
    
      destroySingleton() {
       ...
      }
    }

    The
    init
    method, called from another file, initiates the creation of the animation. First, we set the path for the animation,
    then build the timeline.

    init() {
        this.camera = this.experience.camera.instance;
        this.initPath();
     }
    
    initPath() {
      // create the path for the camera
      const pathPoints = new CatmullRomCurve3([
        new Vector3(CAMERA_POSITION_SEAT[0], CAMERA_POSITION_SEAT[1], 15),
        new Vector3(5.12, 4, 8.18),
        new Vector3(...RETURNING_PLAYER_CAMERA_FINAL_POSITION),
      ]);
    
      // init the timeline
      this.initTimeline(pathPoints);
    }
    
    initTimeline(path: CatmullRomCurve3) {
     ...
    }

    The timeline animation is split into two parts: a) the camera moves vertically from the basement to the theater,
    above the seats.

    ...
    
    initTimeline(path: CatmullRomCurve3) {
        // get the points
        const pathPoints = path.getPoints(30);
    
        // create the gsap timeline
        this.timelineAnimation
          // set the initial position
          .set(this.camera.position, {
            x: CAMERA_POSITION_SEAT[0],
            y: CAMERA_POSITION_SEAT[1] - 3,
            z: 15,
          })
          .add(() => {
            this.camera.lookAt(3.5, 1, 0);
          })
          //   Start the animation! In this case the camera is moving from the basement to above the seat
          .to(this.camera.position, {
            x: CAMERA_POSITION_SEAT[0],
            y: CAMERA_POSITION_SEAT[1],
            z: 15,
            duration: 3,
            ease: "elastic.out(0.1,0.1)",
          })
          .to(
            this.camera.position,
            {
    		      ...
            },
          )
          ...
      }

    b) The camera follows a path while smoothly transitioning its view to the final location.

     .to(
        this.camera.position,
        {
          // then we use motion path to move the camera to the player behind the raccoon
          motionPath: {
            path: pathPoints,
            curviness: 0,
            autoRotate: false,
          },
          ease: "power1.inOut",
          duration: DURATION_RETURNING_FORWARD,
          onUpdate: function () {
            const progress = this.progress();
    
            // wait until progress reaches a certain point to rotate to the camera at the player LookAt
            if (
              progress >=
                1 -
                  DURATION_LOOKAT_RETURNING_FORWARD /
                    DURATION_RETURNING_FORWARD &&
              !instance!.lookAtTransitionStarted
            ) {
               // `this` refers to the tween inside onUpdate, so the class
               // instance is reached through the singleton reference instead
               instance!.lookAtTransitionStarted = true;

               // Create a new Vector3 to store the current look direction
               const currentLookAt = new Vector3();

                // Get the current camera's forward direction (where it's looking)
                instance!.camera.getWorldDirection(currentLookAt);

                // Extend the look direction by 100 units and add the camera's position
                // This creates a point in space that the camera is currently looking at
                currentLookAt.multiplyScalar(100).add(instance!.camera.position);

                // smooth lookAt animation
                createSmoothLookAtTransition(
                  currentLookAt,
                  new Vector3(...RETURNING_PLAYER_CAMERA_FINAL_LOOKAT),
                  DURATION_LOOKAT_RETURNING_FORWARD,
                  instance!.camera
                );
            }
          },
        },
      )
      .add(() => {
        // animation is completed, you can add some code here
      });

    As you noticed, I used a utility function called createSmoothLookAtTransition, since I needed this functionality in
    multiple places.

    import type { Vector3 } from "three";
    import { gsap } from "gsap";
    
    import type { Camera } from "@/types/three.types";
    
    export const createSmoothLookAtTransition = (
      from: Vector3,
      to: Vector3,
      duration: number,
      camera: Camera,
      ease: string = "power2.out",
    ) => {
      const lookAtPosition = { x: from.x, y: from.y, z: from.z };
      return gsap.to(lookAtPosition, {
        x: to.x,
        y: to.y,
        z: to.z,
        duration,
        ease: ease,
        onUpdate: () => {
          camera.lookAt(lookAtPosition.x, lookAtPosition.y, lookAtPosition.z);
        },
      });
    };
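    The returned tween interpolates a plain x/y/z object and re-points the camera on every update. Stripped of GSAP and
    Three.js, the core interpolation it performs can be sketched as a pure function (names here are illustrative, not
    from the project; GSAP would feed in an eased progress value rather than a linear one):

```typescript
type Vec3 = { x: number; y: number; z: number };

// Linearly interpolate the lookAt target for a tween progress t in [0, 1].
// At t = 0 the camera still looks at its current target; at t = 1 it looks
// at the final one. Each frame, camera.lookAt is called with this point.
const lerpLookAt = (from: Vec3, to: Vec3, t: number): Vec3 => ({
  x: from.x + (to.x - from.x) * t,
  y: from.y + (to.y - from.y) * t,
  z: from.z + (to.z - from.z) * t,
});
```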

    With everything ready, the animation sequence runs when playAnimation() is triggered.

    playAnimation() {
        // first set the position of the player
        this.setPositionPlayer();
        // then play the animation
        this.timelineAnimation.play();
      }
    
      setPositionPlayer() {
       // a simple util that updates the position of the player when the user lands in the scene, returns, or switches scenes.
        setPlayerPosition(this.experience, {
          position: PLAYER_POSITION_RETURNING,
          quaternion: RETURNING_PLAYER_QUATERNION,
          rotation: RETURNING_PLAYER_ROTATION,
        });
      }

    Scroll-Triggered Animations: Showcasing Books on About Pages

    While the game is fun and filled with details, the case studies and about pages are crucial to the overall experience,
    even though they follow a more standardized format. These pages still have their own unique appeal. They are filled
    with subtle details and animations, particularly scroll-triggered effects such as split text animations when
    paragraphs enter the viewport, along with fade-out effects on SVGs and other assets. These animations create a vibe
    that mirrors the mysterious yet intriguing atmosphere of the game, inviting visitors to keep scrolling and exploring.
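    Split-text effects like these typically break each paragraph into word (or character) spans first, then
    stagger-animate the spans once the paragraph enters the viewport. As a rough, DOM-free sketch of those two steps
    (these helpers are illustrative, not the site's actual code, which relies on GSAP's tooling):

```typescript
// Split a paragraph into word tokens; each token would then be wrapped
// in a span so it can be animated individually.
const splitIntoWords = (text: string): string[] =>
  text.split(/\s+/).filter((word) => word.length > 0);

// Each word gets an increasing delay, producing the cascading reveal.
const staggerDelays = (wordCount: number, step = 0.04): number[] =>
  Array.from({ length: wordCount }, (_, i) => i * step);
```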

    While I can't cover every animation in detail, I'd like to share the technical approach behind the book animations
    featured on the about page. This effect blends DOM scroll event tracking with a Three.js scene, creating a seamless
    interaction between the user's scrolling behavior and the 3D-rendered books. As visitors scroll down the page, the
    books transition elegantly and respond dynamically to their movement.

    Before we dive into the Three.js file, let's look at the Vue component.

    //src/components/BookGallery/BookGallery.vue
    <template>
      <!-- the ID is used in the three.js file -->
      <div class="book-gallery" id="bookGallery" ref="bookGallery"></div>
    </template>
    
    <script setup lang="ts">
    import { onMounted, onUnmounted, ref } from "vue";
    
    import gsap from "gsap";
    import { ScrollTrigger } from "gsap/ScrollTrigger";
    
    import type { BookGalleryProps } from "./types";
    
    gsap.registerPlugin(ScrollTrigger);
    
    const props = withDefaults(defineProps<BookGalleryProps>(), {});
    
    const bookGallery = ref<HTMLBaseElement | null>(null);
    
    const setupScrollTriggers = () => {
     ...
    };
    
    const triggerAnimation = (index: number) => {
      ...
    };
    
    onMounted(() => {
      setupScrollTriggers();
    });
    
    onUnmounted(() => {
      ...
    });
    </script>
    
    <style lang="scss" scoped>
    .book-gallery {
      position: relative;
      height: 400svh; // 100svh * 4 books
    }
    </style>

    Thresholds are defined for each book to determine which one is active, that is, which book faces the camera.

    const setupScrollTriggers = () => {
      if (!bookGallery.value) return;
    
      const galleryHeight = bookGallery.value.clientHeight;
      const scrollThresholds = [
        galleryHeight * 0.15,
        galleryHeight * (0.25 + (0.75 - 0.25) / 3),
        galleryHeight * (0.25 + (2 * (0.75 - 0.25)) / 3),
        galleryHeight * 0.75,
      ];
    
      ...
    };
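    The fractions are easier to read as a small pure function: the first and last books activate at 15% and 75% of the
    gallery height, and the middle two split the 25%–75% band into thirds. A sketch mirroring the constants above (the
    function name is mine, not from the project):

```typescript
// Pixel offsets at which each of the four books becomes active,
// mirroring the scrollThresholds computation above.
const computeScrollThresholds = (galleryHeight: number): number[] => {
  const band = 0.75 - 0.25; // the middle books divide this band into thirds
  return [0.15, 0.25 + band / 3, 0.25 + (2 * band) / 3, 0.75].map(
    (fraction) => galleryHeight * fraction,
  );
};
```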

    Then I added some GSAP magic by looping through each threshold and attaching a ScrollTrigger to it.

    const setupScrollTriggers = () => {
    
    	...
    
    	scrollThresholds.forEach((threshold, index) => {
    	    ScrollTrigger.create({
    	      trigger: bookGallery.value,
    	      markers: false,
    	      start: `top+=${threshold} center`,
    	      end: `top+=${galleryHeight * 0.5} bottom`,
    	      onEnter: () => {
    	        triggerAnimation(index);
    	      },
    	      onEnterBack: () => {
    	        triggerAnimation(index);
    	      },
    	      once: false,
    	    });
    	  });
    };

    On scroll, when the user enters or re-enters a section defined by the thresholds, a function is triggered within a
    Three.js file.

    const triggerAnimation = (index: number) => {
      window.experience?.world?.books?.createAnimation(index);
    };
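    Because the Vue layer and the Three.js experience boot independently, the call goes through optional chaining on a
    shared global: if the 3D world isn't ready yet, the trigger is simply a no-op. A minimal typed sketch of that bridge
    (the interface names are hypothetical; the real project hangs the experience off window):

```typescript
interface BooksLike {
  createAnimation(index: number): void;
}
interface ExperienceLike {
  world?: { books?: BooksLike };
}

// Safe no-op trigger: nothing happens until the Three.js side has
// attached itself to the shared root (window.experience in the project).
const safeTrigger = (root: { experience?: ExperienceLike }, index: number): void => {
  root.experience?.world?.books?.createAnimation(index);
};
```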

    Now let's look at the Three.js file:

    // src/three/Experience/Basement/World/Books/Books.ts
    
    import * as THREE from "three";
    import Experience from "@/three/Experience/Basement/Experience/Experience";
    
    import { SCROLL_RATIO } from "@/constant/scroll";
    
    import { gsap } from "gsap";
    
    import type { Book } from "./books.types";
    import type { Material, Scene, Texture, ThreeGroup } from "@/types/three.types";
    import type { Sizes } from "@/three/Experience/Utils/Sizes/types";
    import type LoadModel from "@/three/Experience/factories/LoadModel/LoadModel";
    import type MaterialGenerator from "@/three/Experience/factories/MaterialGenerator/BasicMaterialGenerator";
    import type Resources from "@/three/Experience/Utils/Ressources/Resources";
    
    const GSAP_EASE = "power2.out";
    const GSAP_DURATION = 1;
    const NB_OF_VIEWPORTS_BOOK_SECTION = 5;
    
    let instance: Books | null = null;
    
    export default class Books {
      public scene: Scene;
      public experience: Experience;
      public resources: Resources;
      public loadModel: LoadModel;
      public sizes: Sizes;
    
      public materialGenerator: MaterialGenerator;
      public resourceDiffuse: Texture;
      public resourceNormal: Texture;
      public bakedMaterial: Material;
    
      public startingPostionY: number;
      public originalPosition: Book[];
      public activeIndex: number = 0;
      public isAnimationRunning: boolean = false;
      
      public bookGalleryElement: HTMLElement | null = null;
      public bookSectionHeight: number;
      public booksGroup: ThreeGroup;
    
    
      constructor() {
        if (instance) {
          return instance;
        }
        instance = this;
    
        this.experience = new Experience();
        this.scene = this.experience.sceneSecondary; // I am using a second scene for the books, so it's not affected by the primary scene (basement in the background)
        this.sizes = this.experience.sizes;
        
        this.resources = this.experience.resources;
        this.materialGenerator = this.experience.materialGenerator;
    
        this.init();
      }
    
      init() {
        ...
      }
    
      initModels() {
       ...
      }
    
      findPosition() {
       ...
      }
    
      setBookSectionHeight() {
       ...
      }
    
      initBooks() {
       ...
      }
    
      initBook() {
       ...
      }
    
      createAnimation() {
        ...
      }
    
      toggleIsAnimationRunning() {
        ...
      }
    
      ...
    
      destroySingleton() {
        ...
      }
    }

    When the file is initialized, we set up the textures and positions of the books.

    init() {
      this.initModels();
      this.findPosition();
      this.setBookSectionHeight();
      this.initBooks();
    }
    
    initModels() {
      this.originalPosition = [
          {
          name: "book1",
          meshName: null, // the name of the mesh from Blender will dynamically be written here
          position: { x: 0, y: -0, z: 20 },
          rotation: { x: 0, y: Math.PI / 2.2, z: 0 }, // some rotation on the y axis so it looks more natural when the books are piled
        },
        {
          name: "book2",
          meshName: null,
          position: { x: 0, y: -0.25, z: 20 },
          rotation: { x: 0, y: Math.PI / 1.8, z: 0 },
        },
        {
          name: "book3",
          meshName: null,
          position: { x: 0, y: -0.52, z: 20 },
          rotation: { x: 0, y: Math.PI / 2, z: 0 },
        },
        {
          name: "book4",
          meshName: null,
          position: { x: 0, y: -0.73, z: 20 },
          rotation: { x: 0, y: Math.PI / 2.3, z: 0 },
        },
      ];
    
      this.resourceDiffuse = this.resources.items.bookDiffuse;
      this.resourceNormal = this.resources.items.bookNormal;
    
        // a reusable class to set the material and normal map
      this.bakedMaterial = this.materialGenerator.setStandardMaterialAndNormal(
        this.resourceDiffuse,
        this.resourceNormal
      );
    }
    
    //#region position of the books
    
    // Finds the initial position of the book gallery in the DOM
    findPosition() {
      this.bookGalleryElement = document.getElementById("bookGallery");
    
      if (this.bookGalleryElement) {
        const rect = this.bookGalleryElement.getBoundingClientRect();
        this.startingPostionY = (rect.top + window.scrollY) / 200;
      }
    }
    
    //  Sets the height of the book section based on viewport and scroll ratio
    setBookSectionHeight() {
      this.bookSectionHeight =
        this.sizes.height * NB_OF_VIEWPORTS_BOOK_SECTION * SCROLL_RATIO;
    }
    
    //#endregion position of the books
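    Both values boil down to simple arithmetic: the gallery's document offset is divided by a fixed pixel-to-scene-unit
    ratio (200 in the code above), and the section height multiplies the viewport height by the number of viewports and
    the scroll ratio. A sketch of both mappings (constant and function names here are illustrative):

```typescript
const PX_PER_SCENE_UNIT = 200; // ratio used in findPosition above

// Vertical scene offset of the gallery, from its document-space pixel offset.
const toSceneOffsetY = (rectTop: number, scrollY: number): number =>
  (rectTop + scrollY) / PX_PER_SCENE_UNIT;

// Pixel height of the scrollable book section.
const sectionHeight = (
  viewportHeight: number,
  viewports: number,
  scrollRatio: number,
): number => viewportHeight * viewports * scrollRatio;
```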
    

    Each book mesh is created and added to the scene as a
    THREE.Group
    .

    init() {
      ...
      this.initBooks();
    }
    
    ...
    
    initBooks() {
      this.booksGroup = new THREE.Group();
      this.scene.add(this.booksGroup);
      
      this.originalPosition.forEach((position, index) => {
        this.initBook(index, position);
      });
    }
    
    initBook(index: number, position: Book) {
      const bookModel = this.experience.resources.items[position.name].scene;
      this.originalPosition[index].meshName = bookModel.children[0].name;
    
      //Reusable code to set the models. More details under the Design Patterns section
      this.loadModel.addModel(
        bookModel,
        this.bakedMaterial,
        false,
        false,
        false,
        true,
        true,
        2,
        true
      );
    
      this.scene.add(bookModel);
    
      bookModel.position.set(
        position.position.x,
        position.position.y - this.startingPostionY,
        position.position.z
      );
      
      bookModel.rotateY(position.rotation.y);
      bookModel.scale.set(10, 10, 10);
      this.booksGroup.add(bookModel);
    }

    Each time a book enters or re-enters its thresholds, the triggers from the Vue file run the createAnimation method
    in this file, which rotates the active book in front of the camera and stacks the other books into a pile.

    ...
    
    createAnimation(activeIndex: number) {
        if (!this.originalPosition) return;
    
        this.originalPosition.forEach((item: Book) => {
          const bookModel = this.scene.getObjectByName(item.meshName);
          if (bookModel) {
            gsap.killTweensOf(bookModel.rotation);
            gsap.killTweensOf(bookModel.position);
          }
        });
        this.toggleIsAnimationRunning(true);
    
        this.activeIndex = activeIndex;
        this.originalPosition.forEach((item: Book, index: number) => {
          const bookModel = this.scene.getObjectByName(item.meshName);
    
          if (bookModel) {
            if (index === activeIndex) {
              gsap.to(bookModel.rotation, {
                x: Math.PI / 2,
                z: Math.PI / 2.2,
                y: 0,
                duration: 2,
                ease: GSAP_EASE,
                delay: 0.3,
                onComplete: () => {
                  this.toggleIsAnimationRunning(false);
                },
              });
              gsap.to(bookModel.position, {
                y: 0,
                duration: GSAP_DURATION,
                ease: GSAP_EASE,
                delay: 0.1,
              });
            } else {
            // pile the inactive books
              gsap.to(bookModel.rotation, {
                x: 0,
                y: 0,
                z: 0,
                duration: GSAP_DURATION - 0.2,
                ease: GSAP_EASE,
              });
    
              const newYPosition = activeIndex < index ? -0.14 : +0.14;
    
              gsap.to(bookModel.position, {
                y: newYPosition,
                duration: GSAP_DURATION,
                ease: GSAP_EASE,
                delay: 0.1,
              });
            }
          }
        });
      }
    
    
      toggleIsAnimationRunning(bool: boolean) {
        this.isAnimationRunning = bool;
      }

    Interactive Physics Simulations: Rope Dynamics

    The game is the main attraction of the website. The entire concept began back in 2022, when I set out to build a
    small mini-game where you could jump on tables and smash things, and it was my favorite part to work on.

    Beyond being fun to develop, the interactive physics elements make the experience more engaging, adding a whole new
    layer of excitement and exploration that simply isn’t possible in a flat, static environment.

    While I can't possibly cover all the physics-related elements, one of my favorites is the rope system near the menu.
    It's a subtle detail, but it was one of the first things I coded when I started leaning into a more theatrical,
    artistic direction.

    The ropes were also built with performance in mind—optimized to look and behave convincingly without dragging down the
    framerate.

    This is the base file for the meshes:

    // src/three/Experience/Theater/World/Theater/Rope/RopeModel.ts
    
    import * as THREE from "three";
    import Experience from "@/three/Experience/Theater/Experience/Experience";
    import RopeMaterialGenerator from "@/three/Experience/Factories/MaterialGenerator/RopeMaterialGenerator";
    
    import ropesLocation from "./ropesLocation.json";
    
    import type { Location, List } from "@/types/experience/experience.types";
    import type { Scene, Resources, Physics, RopeMesh, CurveQuad, Material } from "@/types/three.types";
    
    let instance: RopeModel | null = null;
    
    export default class RopeModel {
      public scene: Scene;
      public experience: Experience;
      public resources: Resources;
      public physics: Physics;
      public material: Material;
      public list: List;
      public ropeMaterialGenerator: RopeMaterialGenerator;
    
      public ropeLength: number = 20;
      public ropeRadius: number = 0.02;
      public ropeRadiusSegments: number = 8;
    
      constructor() {
        //    Singleton
        if (instance) {
          return instance;
        }
        instance = this;
    
        this.experience = new Experience();
        this.scene = this.experience.scene;
        this.resources = this.experience.resources;
        this.physics = this.experience.physics;
        this.ropeMaterialGenerator = new RopeMaterialGenerator();
        
        this.ropeLength = this.experience.physics.rope.numberOfSpheres || 20;
        this.ropeRadius = 0.02;
        this.ropeRadiusSegments = 8;
    
        this.list = {
          rope: [],
        };
    
        this.initRope();
      }
      
      initRope() {
       ...
      }
      
      createRope() {
        ...
      }
      
      setArrayOfVertor3() {
        ...
      }
      
      setYValues() {
        ...
      }
      
      setMaterial() {
        ...
      }
    
      addRopeToScene() {
        ...
      }
    
      //#region update at 60FPS
      update() {
       ...
      }
      
      updateLineGeometry() {
       ...
      }
      //#endregion update at 60FPS
    
      destroySingleton() {
        ...
      }
    }

    Mesh creation is initiated inside the constructor.

    // src/three/Experience/Theater/World/Theater/Rope/RopeModel.ts
    
     constructor() {
    	...
        this.initRope();
      }
      
      initRope() {
        // Generate the material that will be used for all ropes
        this.setMaterial();
    
        // Create a rope at each location specified in the ropesLocation configuration
        ropesLocation.forEach((location) => {
          this.createRope(location);
        });
      }
    
      createRope(location: Location) {
        // Generate the curve that defines the rope's path
        const curveQuad = this.setArrayOfVertor3();
        this.setYValues(curveQuad);
    
        const tube = new THREE.TubeGeometry(
          curveQuad,
          this.ropeLength,
          this.ropeRadius,
          this.ropeRadiusSegments,
          false
        );
    
        const rope = new THREE.Mesh(tube, this.material);
    
        rope.geometry.attributes.position.needsUpdate = true;
    
        // Add the rope to the scene and set up its physics. I'll explain it later.
        this.addRopeToScene(rope, location);
      }
    
      setArrayOfVertor3() {
        const arrayLimit = this.ropeLength;
        const setArrayOfVertor3 = [];
        // Create points in a vertical line, spaced 1 unit apart
        for (let index = 0; index < arrayLimit; index++) {
          setArrayOfVertor3.push(new THREE.Vector3(10, 9 - index, 0));
          if (index + 1 === arrayLimit) {
            return new THREE.CatmullRomCurve3(
              setArrayOfVertor3,
              false,
              "catmullrom",
              0.1
            );
          }
        }
      }
    
      setYValues(curve: CurveQuad) {
        // Set each point's Y value to its index, creating a vertical line
        for (let i = 0; i < curve.points.length; i++) {
          curve.points[i].y = i;
        }
      }
      
      setMaterial(){
    	  ...
      }

    Since the rope texture is used in multiple places, I use a factory pattern for efficiency.

    ...
    
    setMaterial() {
        this.material = this.ropeMaterialGenerator.generateRopeMaterial(
          "rope",
          0x3a301d, // Brown color
          1.68, // Normal Repeat
          0.902, // Normal Intensity
          21.718, // Noise Strength
          1.57, // UV Rotation
          9.14, // UV Height
          this.resources.items.ropeDiffuse, // Diffuse texture map
          this.resources.items.ropeNormal // Normal map for surface detail
        );
      }
    // src/three/Experience/Factories/MaterialGenerator/RopeMaterialGenerator.ts
    import * as THREE from "three";
    import Experience from "@/three/Experience/Theater/Experience/Experience";
    
    import vertexShader from "@/three/Experience/Shaders/Rope/vertex.glsl";
    import fragmentShader from "@/three/Experience/Shaders/Rope/fragment.glsl";
    
    import type { ResourceDiffuse, RessourceNormal } from "@/types/three.types";
    import type Debug from "@/three/Experience/Utils/Debugger/Debug";
    
    let instance: RopeMaterialGenerator | null = null;
    
    export default class RopeMaterialGenerator {
      public experience: Experience;
    
      private debug: Debug;
    
      constructor() {
        //    Singleton
        if (instance) {
          return instance;
        }
        instance = this;
    
        this.experience = new Experience();
        this.debug = this.experience.debug;
      }
    
      generateRopeMaterial(
        name: string,
        uLightColor: number,
        uNormalRepeat: number,
        uNormalIntensity: number,
        uNoiseStrength: number,
        uvRotate: number,
        uvHeight: number,
        resourceDiffuse: ResourceDiffuse,
        ressourceNormal: RessourceNormal
      ) {
        const normalTexture = ressourceNormal;
        normalTexture.wrapS = THREE.RepeatWrapping;
        normalTexture.wrapT = THREE.RepeatWrapping;
    
        const diffuseTexture = resourceDiffuse;
        diffuseTexture.wrapS = THREE.RepeatWrapping;
        diffuseTexture.wrapT = THREE.RepeatWrapping;
    
        const customUniforms = {
          uAddedLight: {
            value: new THREE.Color(0x000000),
          },
          uLightColor: {
            value: new THREE.Color(uLightColor),
          },
          uNormalRepeat: {
            value: uNormalRepeat,
          },
          uNormalIntensity: {
            value: uNormalIntensity,
          },
          uNoiseStrength: {
            value: uNoiseStrength,
          },
          uShadowStrength: {
            value: 1.296,
          },
          uvRotate: {
            value: uvRotate, 
          },
          uvHeight: {
            value: uvHeight,
          },
          uLightPosition: {
            value: new THREE.Vector3(60, 100, 60),
          },
          normalMap: {
            value: normalTexture,
          },
          diffuseMap: {
            value: diffuseTexture,
          },
          uAlpha: {
            value: 1,
          },
        };
    
        const shaderUniforms = THREE.UniformsUtils.clone(
          THREE.UniformsLib["lights"]
        );
        const shaderUniformsNormal = THREE.UniformsUtils.clone(
          THREE.UniformsLib["normalmap"]
        );
        const uniforms = Object.assign(
          shaderUniforms,
          shaderUniformsNormal,
          customUniforms
        );
    
        const materialFloor = new THREE.ShaderMaterial({
          uniforms: uniforms,
          vertexShader: vertexShader,
          fragmentShader: fragmentShader,
          precision: "lowp",
        });
    
        return materialFloor;
      }
      
      
      destroySingleton() {
        ...
      }
    }
    

    The vertex and fragment shaders:

    // src/three/Experience/Shaders/Rope/vertex.glsl
    
    uniform float uNoiseStrength;      // Controls the intensity of noise effect
    uniform float uNormalIntensity;    // Controls the strength of normal mapping
    uniform float uNormalRepeat;       // Controls the tiling of normal map
    uniform vec3 uLightColor;          // Color of the light source
    uniform float uShadowStrength;     // Intensity of shadow effect
    uniform vec3 uLightPosition;       // Position of the light source
    uniform float uvRotate;            // Rotation angle for UV coordinates
    uniform float uvHeight;            // Height scaling for UV coordinates
    uniform bool isShadowBothSides;    // Flag for double-sided shadow rendering
    
    
    varying float vNoiseStrength;      // Passes noise strength to fragment shader
    varying float vNormalIntensity;    // Passes normal intensity to fragment shader
    varying float vNormalRepeat;       // Passes normal repeat to fragment shader
    varying vec2 vUv;                  // UV coordinates for texture mapping
    varying vec3 vColorPrimary;        // Primary color for the material
    varying vec3 viewPos;              // Position in view space
    varying vec3 vLightColor;          // Light color passed to fragment shader
    varying vec3 worldPos;             // Position in world space
    varying float vShadowStrength;     // Shadow strength passed to fragment shader
    varying vec3 vLightPosition;       // Light position passed to fragment shader
    
    // Helper function to create a 2D rotation matrix
    mat2 rotate(float angle) {
        return mat2(cos(angle), -sin(angle), sin(angle), cos(angle));
    }
    
    void main() {
        // Calculate rotation angle and its sine/cosine components
        float angle = 1.0 * uvRotate;
        float s = sin(angle);
        float c = cos(angle);
    
        // Create rotation matrix for UV coordinates
        mat2 rotationMatrix = mat2(c, s, -s, c);
    
        // Define pivot point for UV rotation
        vec2 pivot = vec2(0.5, 0.5);
    
        // Transform vertex position to clip space
        gl_Position = projectionMatrix * viewMatrix * modelMatrix * vec4(position, 1.0);
    
        // Apply rotation and height scaling to UV coordinates
        vUv = rotationMatrix * (uv - pivot) + pivot;
        vUv.y *= uvHeight;
    
        // Pass various parameters to fragment shader
        vNormalRepeat = uNormalRepeat;
        vNormalIntensity = uNormalIntensity;
        viewPos = vec3(0.0, 0.0, 0.0);  // Initialize view position
        vNoiseStrength = uNoiseStrength;
        vLightColor = uLightColor;
        vShadowStrength = uShadowStrength;
        vLightPosition = uLightPosition;
    }
    // src/three/Experience/Shaders/Rope/fragment.glsl
    // Uniform textures for normal and diffuse mapping
    uniform sampler2D normalMap;
    uniform sampler2D diffuseMap;
    
    // Varying variables passed from vertex shader
    varying float vNoiseStrength;
    varying float vNormalIntensity;
    varying float vNormalRepeat;
    varying vec2 vUv;
    varying vec3 viewPos;
    varying vec3 vLightColor;
    varying vec3 worldPos;
    varying float vShadowStrength;
    varying vec3 vLightPosition;
    
    // Constants for lighting calculations
    const float specularStrength = 0.8;
    const vec4 colorShadowTop = vec4(vec3(0.0, 0.0, 0.0), 1.0);
    
    void main() {
        // normal, diffuse and light accumulation
        vec3 samNorm = texture2D(normalMap, vUv * vNormalRepeat).xyz * 2.0 - 1.0;
        vec4 diffuse = texture2D(diffuseMap, vUv * vNormalRepeat);
        vec4 addedLights = vec4(0.0, 0.0, 0.0, 1.0);
    
        // Calculate diffuse lighting
        vec3 lightDir = normalize(vLightPosition - worldPos);
        float diff = max(dot(lightDir, samNorm), 0.0);
        addedLights.rgb += diff * vLightColor;
    
        // Calculate specular lighting
        vec3 viewDir = normalize(viewPos - worldPos);
        vec3 reflectDir = reflect(-lightDir, samNorm);
        float spec = pow(max(dot(viewDir, reflectDir), 0.0), 16.0);
        addedLights.rgb += specularStrength * spec * vLightColor;
    
        // Calculate top shadow effect. In this case, the higher it is, the darker it gets.
        float shadowTopStrength = 1.0 - pow(vUv.y, vShadowStrength) * 0.5;
        float shadowFactor = smoothstep(0.0, 0.5, shadowTopStrength);
    
        // Mix diffuse color with shadow. 
        vec4 mixedColorWithShadowTop = mix(diffuse, colorShadowTop, shadowFactor);
        // Mix lighting with shadow
        vec4 addedLightWithTopShadow = mix(addedLights, colorShadowTop, shadowFactor);
    
        // Final color composition with normal intensity control
        gl_FragColor = mix(mixedColorWithShadowTop, addedLightWithTopShadow, vNormalIntensity);
    }

    Once the material is created and added to the mesh, the addRopeToScene function adds the rope to the scene, then
    calls the addPhysicsToRope function from the physics file.

    // src/three/Experience/Theater/World/Theater/Rope/RopeModel.ts
      addRopeToScene(mesh: Mesh, location: Location) {
        this.list.rope.push(mesh); //Add the rope to an array, which will be used by the physics file to update the mesh
        this.scene.add(mesh);
        this.physics.rope.addPhysicsToRope(location); // same as src/three/Experience/Theater/Physics/Theater/Rope/Rope.addPhysicsToRope(location)
      }
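    As the comment above notes, the mesh is pushed into an array so the physics side can update it every frame: the
    update loop copies each simulated sphere's position back onto the corresponding curve point before the tube geometry
    is refreshed. Reduced to its data flow, that synchronization step looks roughly like this (a sketch; the actual
    update methods are elided above, and the names here are mine):

```typescript
type Vec3 = { x: number; y: number; z: number };

// Copy the simulated sphere positions onto the curve points the tube
// geometry is rebuilt from, keeping mesh and physics in lockstep.
const syncPointsToBodies = (points: Vec3[], bodies: Vec3[]): void => {
  const count = Math.min(points.length, bodies.length);
  for (let i = 0; i < count; i++) {
    points[i].x = bodies[i].x;
    points[i].y = bodies[i].y;
    points[i].z = bodies[i].z;
  }
};
```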

    Let's now focus on the physics file.

    // src/three/Experience/Theater/Physics/Theater/Rope/Rope.ts
    
    import * as CANNON from "cannon-es";
    
    import Experience from "@/three/Experience/Theater/Experience/Experience";
    
    import type { Location } from "@/types/experience.types";
    import type Physics from "@/three/Experience/Theater/Physics/Physics";
    import type { Scene, SphereBody } from "@/types/three.types";
    
    let instance: Rope | null = null;
    
    const SIZE_SPHERE = 0.05;
    const ANGULAR_DAMPING = 1;
    const DISTANCE_BETWEEN_SPHERES = SIZE_SPHERE * 5;
    const DISTANCE_BETWEEN_SPHERES_BOTTOM = 2.3;
    const DISTANCE_BETWEEN_SPHERES_TOP = 6;
    const LINEAR_DAMPING = 0.5;
    const NUMBER_OF_SPHERES = 20;
    
    export default class Rope {
      public experience: Experience;
      public physics: Physics;
      public scene: Scene;
      public list: { rope: SphereBody[][] };
    
      constructor() {
        //    Singleton
        if (instance) {
          return instance;
        }
        instance = this;
    
        this.experience = new Experience();
        this.scene = this.experience.scene;
        this.physics = this.experience.physics;
    
        this.list = {
          rope: [],
        };
      }
    
      //#region add physics
      addPhysicsToRope() {
       ...
      }
    
      setRopePhysics() {
        ...
      }
      
      setMassRope() {
       ...
      }
      
      setDistanceBetweenSpheres() {
        ...
      }
      
      setDistanceBetweenConstraints() {
       ...
      }
      
      addConstraints() {
        ...
      }
      //#endregion add physics
    
      //#region update at 60FPS
      update() {
        ...
      }
    
      loopRopeWithPhysics() {
        ...
      }
      
      updatePoints() {
        ...
      }
      //#endregion update at 60FPS
    
      destroySingleton() {
        ...
      }
    }

    The rope’s physics is created from the mesh file via the method
    addPhysicsToRope
    , called as
    this.physics.rope.addPhysicsToRope(location);

    addPhysicsToRope(location: Location) {
      this.setRopePhysics(location);
    }
    
    setRopePhysics(location: Location) {
      const sphereShape = new CANNON.Sphere(SIZE_SPHERE);
      const rope = [];
    
      let lastBody = null;
      for (let index = 0; index < NUMBER_OF_SPHERES; index++) {
        // Create physics body for each sphere in the rope. The spheres will be what collide with the player
        const spherebody = new CANNON.Body({ mass: this.setMassRope(index) });
    
        spherebody.addShape(sphereShape);
        spherebody.position.set(
          location.x,
          location.y - index * DISTANCE_BETWEEN_SPHERES,
          location.z
        );
        this.physics.physics.addBody(spherebody);
        rope.push(spherebody);
        spherebody.linearDamping = LINEAR_DAMPING;
        spherebody.angularDamping = ANGULAR_DAMPING;
    
        // Create constraints between consecutive spheres
        if (lastBody !== null) {
          this.addConstraints(spherebody, lastBody, index);
        }
    
        lastBody = spherebody;
    
        if (index + 1 === NUMBER_OF_SPHERES) {
          this.list.rope.push(rope);
        }
      }
    }
    
    setMassRope(index: number) {
      return index === 0 ? 0 : 2; // first sphere is fixed (mass 0)
    }
    
    setDistanceBetweenSpheres(index: number, locationY: number) {
      return locationY - DISTANCE_BETWEEN_SPHERES * index;
    }
    
    setDistanceBetweenConstraints(index: number) {
    // Since the user only interacts with the spheres at the bottom of the rope, the distance between spheres gradually increases from the bottom to the top
      if (index <= 2) {
        return DISTANCE_BETWEEN_SPHERES * DISTANCE_BETWEEN_SPHERES_TOP;
      }
      if (index > 2 && index <= 8) {
        return DISTANCE_BETWEEN_SPHERES * DISTANCE_BETWEEN_SPHERES_BOTTOM;
      }
      return DISTANCE_BETWEEN_SPHERES;
    }
    
    addConstraints(
      sphereBody: CANNON.Body,
      lastBody: CANNON.Body,
      index: number
    ) {
      this.physics.physics.addConstraint(
        new CANNON.DistanceConstraint(
          sphereBody,
          lastBody,
          this.setDistanceBetweenConstraints(index)
        )
      );
    }
    

    When configuring physics parameters, strategy is key. Although users won’t consciously notice it during gameplay,
    they can only interact with the lower portion of the rope. Therefore, I concentrated more physics detail where it
    matters: since the user only interacts with the bottom of the rope, the density of physics spheres is higher at the
    bottom of the rope than at the top.
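    To make the spacing gradient concrete, here is a small standalone sketch that reuses the constants and the same
    branching as setDistanceBetweenConstraints from the file above; the helper name is mine, not from the source:

```typescript
// Constants copied from Rope.ts above.
const SIZE_SPHERE = 0.05;
const DISTANCE_BETWEEN_SPHERES = SIZE_SPHERE * 5; // 0.25
const DISTANCE_BETWEEN_SPHERES_BOTTOM = 2.3;
const DISTANCE_BETWEEN_SPHERES_TOP = 6;

// Same branching as setDistanceBetweenConstraints(). Index 0 is the fixed
// sphere at the top; higher indices sit lower on the rope.
function distanceForIndex(index: number): number {
  if (index <= 2) {
    return DISTANCE_BETWEEN_SPHERES * DISTANCE_BETWEEN_SPHERES_TOP; // 1.5: widest, top
  }
  if (index <= 8) {
    return DISTANCE_BETWEEN_SPHERES * DISTANCE_BETWEEN_SPHERES_BOTTOM; // ≈0.575: middle
  }
  return DISTANCE_BETWEEN_SPHERES; // 0.25: densest, bottom, where the player interacts
}
```

    With 20 spheres this yields three bands of constraint lengths, so most of the simulation resolution ends up in the
    bottom half of the rope.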

    Rope meshes are then updated every frame from the physics file.

     //#region update at 60FPS
     update() {
      this.loopRopeWithPhysics();
    }
    
    loopRopeWithPhysics() {
      for (let index = 0; index < this.list.rope.length; index++) {
        this.updatePoints(this.list.rope[index], index);
      }
    }
    
    updatePoints(element: CANNON.Body[], indexParent: number) {
      element.forEach((item: CANNON.Body, index: number) => {
        // Update the mesh with the location of each of the physics spheres
        this.experience.world.rope.list.rope[
          indexParent
        ].geometry.parameters.path.points[index].copy(item.position);
      });
    }
    //#endregion update at 60FPS

    Animations in the DOM – ticket tearing particles

    While the website heavily relies on Three.js to create an immersive experience, many elements remain DOM-based. One of
    my goals for this portfolio was to combine both worlds: the rich, interactive 3D environments and the efficiency of
    traditional DOM elements. Furthermore, I genuinely enjoy coding DOM-based micro-interactions, so skipping them
    wasn’t an option!

    One of my favorite DOM animations is the ticket-tearing effect, especially the particles flying away. It’s subtle,
    but adds a bit of charm. The effect is not only fun to watch but also relatively easy to adapt to other projects.
    First, let’s look at the structure of the components.

    TicketBase.vue
    is a fairly simple file with minimal styling. It handles the tearing animation and a few basic functions. Everything
    else related to the ticket, such as its styling, is handled by other components passed through slots.

    To make things clearer, I’ve cleaned up my
    TicketBase.vue
    file a bit to highlight how the particle effect works.

    import { computed, ref, watch, useSlots } from "vue";
    import { useAudioStore } from "@/stores/audio";
    
    import type { TicketBaseProps } from "./types";
    
    const props = withDefaults(defineProps<TicketBaseProps>(), {
      isTearVisible: true,
      isLocked: false,
      cardId: null,
      isFirstTear: false,
      runTearAnimation: false,
      isTearable: false,
      markup: "button",
    });
    
    const { setCurrentFx } = useAudioStore();
    
    const emit = defineEmits(["hover:enter", "hover:leave"]);
    
    const particleContainer = ref<HTMLElement | null>(null);
    const particleContainerTop = ref<HTMLElement | null>(null);
    const timeoutParticles = ref<NodeJS.Timeout | null>(null);
    const isAnimationStarted = ref<boolean>(false);
    const isTearRipped = ref<boolean>(false);
    
    const isTearable = computed(
      () => props.isTearVisible || (!props.isTearVisible && props.isFirstTear)
    );
    
    const handleClick = () => {
      ...
    };
    
    const runTearAnimation = () => {
      ...
    };
    
    const createParticles = () => {
      ...
    };
    
    const deleteParticles = () => {
      ...
    };
    
    const toggleIsAnimationStarted = () => {
    ...
    };
    
    const cssClasses = computed(() => [
      ...
    ]);
    
    
    
    .ticket-base {
       ...
     }
    
    
    
    /* particles can't be scoped */
    .particle {
    ...
    }

    When a ticket is clicked (or the user presses Enter), it runs the function
    handleClick()
    , which then calls
    runTearAnimation()
    .

    const handleClick = () => {
      if (!props.isTearable || props.isLocked || isAnimationStarted.value) return;
    	...
    
      runTearAnimation();
    };
    
    ...
    
    const runTearAnimation = () => {
      toggleIsAnimationStarted(true);
    
      createParticles(particleContainerTop.value, "bottom");
      createParticles(particleContainer.value, "top");
      isTearRipped.value = true;
      // add other functions such as the tearing SFX
    };
    
    
    ...
    
    const toggleIsAnimationStarted = (bool: boolean) => {
      isAnimationStarted.value = bool;
    };

    The
    createParticles
    function creates a few new
    <div>
    elements, which act as the little particles. These divs are then appended to either the main part of the ticket or the
    torn part.

    const createParticles = (container: HTMLElement, direction: string) => {
      const numParticles = 5;
      for (let i = 0; i < numParticles; i++) {
        const particle = document.createElement("div");
        particle.className = "particle";
    
        // Calculate left position based on index and add a small random offset
        const baseLeft = (i / numParticles) * 100;
        const randomOffset = (Math.random() - 0.5) * 10;
        particle.style.left = `calc(${baseLeft}% + ${randomOffset}%)`;
    
        // Assign unique animation properties
        const duration = Math.random() * 0.3 + 0.1;
        const translateY = (i / numParticles) * -20 - 2;
        const scale = Math.random() * 0.5 + 0.5;
        const delay = 0; // no stagger: every particle starts at once
    
        // Pick the keyframes animation matching the tear direction
        const animationName = direction === "bottom" ? "flyAwayBottom" : "flyAway";
        particle.style.animation = `${animationName} ${duration}s ${delay}s ease-in forwards`;
        particle.style.setProperty("--translateY", `${translateY}px`);
        particle.style.setProperty("--scale", scale.toString());
    
        container.appendChild(particle);
    
        // Remove the particle once its animation ends
        particle.addEventListener("animationend", () => {
          particle.remove();
        });
      }
    };
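    Since the per-particle style math is pure arithmetic, it can be factored into a testable helper. The following is my
    own refactoring sketch, not code from the component; injecting the random source makes the values reproducible in
    tests:

```typescript
// Hypothetical helper (not in TicketBase.vue): computes the inline-style
// values that createParticles() assigns to each particle.
interface ParticleStyle {
  baseLeftPercent: number; // particles spread evenly across the container
  durationSeconds: number; // random within [0.1, 0.4)
  translateYPx: number;    // higher-index particles drift further up
  scale: number;           // random within [0.5, 1)
}

function particleStyle(
  i: number,
  numParticles: number,
  random: () => number = Math.random
): ParticleStyle {
  return {
    baseLeftPercent: (i / numParticles) * 100,
    durationSeconds: random() * 0.3 + 0.1,
    translateYPx: (i / numParticles) * -20 - 2,
    scale: random() * 0.5 + 0.5,
  };
}
```

    The DOM code then only has to build the `div`, apply these values, and append it, which keeps the randomness in one
    place.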

    The particles are animated using a CSS keyframes animation called
    flyAway
    or
    flyAwayBottom
    .

    .particle {
      position: absolute;
      width: 0.2rem;
      height: 0.2rem;
      background-color: var(--color-particles); /* === #655c52 */
    
      animation: flyAway 3s ease-in forwards;
    }
    
    @keyframes flyAway {
      0% {
        transform: translateY(0) scale(1);
        opacity: 1;
      }
      100% {
        transform: translateY(var(--translateY)) scale(var(--scale));
        opacity: 0;
      }
    }
    
    @keyframes flyAwayBottom {
      0% {
        transform: translateY(0) scale(1);
        opacity: 1;
      }
      100% {
        transform: translateY(calc(var(--translateY) * -1)) scale(var(--scale));
        opacity: 0;
      }
    }

    Additional Featured Animations

    There are so many features, details, easter eggs, and animations I wanted to cover in this article, but it’s simply
    not possible to go through everything; many deserve their own tutorial.

    That said, here are some of my favorites to code. They definitely deserve a spot in this article.

    Reflections on Aurel’s Grand Theater

    Even though it took longer than I originally anticipated, Aurel’s Grand Theater was an incredibly fun and rewarding
    project to work on. Because it wasn’t a client project, it offered a rare opportunity to freely experiment, explore
    new ideas, and push myself outside my comfort zone, without the usual constraints of budgets or deadlines.

    Looking back, there are definitely things I’d approach differently if I were to start again. I’d spend more time
    defining the art direction upfront, lean more heavily on the GPU, and perhaps implement Rapier. But despite these
    reflections, I had an amazing time building this project and I’m satisfied with the final result.

    While recognition was never the goal, I’m deeply honored that the site was acknowledged. It received FWA of the Day,
    Awwwards Site of the Day and Developer Award, as well as GSAP’s Site of the Week and Site of the Month.

    I’m truly grateful for the recognition, and I hope this behind-the-scenes look and shared code snippets inspire you
    in your own creative coding journey.




  • Native Design Tokens: The Foundation of Consistent, Scalable, Open Design



    As design and development teams grow and projects span across web, mobile, and internal tools, keeping everything consistent becomes tricky. Even small changes, like updating a brand color or adjusting spacing, can turn into hours of manual work across design files, codebases, and documentation. It is easy for things to drift out of sync.

    That is where design tokens come in. They are a way to define and reuse the key design decisions like colors, typography, and spacing in a format that both designers and developers can use. Instead of repeating values manually, tokens let teams manage these decisions from a central place and apply them consistently across tools and platforms.

    With Penpot’s new native support for design tokens, this workflow becomes more accessible and better integrated. Designers can now create and manage tokens directly inside their design files. Developers can rely on those same tokens being structured and available for use in code. No plugins, no copy pasting, no mismatched styles.

    In this article, we will look at what design tokens are and why they matter, walk through how Penpot implements them, and explore some real world workflows and use cases. Whether you are working solo or managing a large design system, tokens can help bring order and clarity to your design decisions—and we will show you how.

    What are Design Tokens?

    Design tokens are a way to describe the small but important visual decisions that make up your user interface. Things like primary colors, heading sizes, border radius, or spacing between elements. Instead of hardcoding those values in a design file or writing them directly into code, you give each one a name and store it as a token.

    Each token is a small piece of structured data. It has a name, a value, and a type. For example, a button background might be defined like this:

    "button-background": {
      "$value": "#005FCC",
      "$type": "color"
    }

    By putting all your decisions into a token format like this, they can be shared and reused across different projects and tools. Designers can use tokens inside the design tool, while developers can use them to generate CSS variables, theme files, or design system code. It is a way to keep everyone aligned, without needing to sync manually.
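    As a sketch of what "generate CSS variables" from such tokens can look like, here is a minimal converter for a flat
    map of DTCG-style tokens like the one above; the function and its output naming are my own, not part of Penpot:

```typescript
// A DTCG-style token: a value plus a $type, as in the example above.
interface Token {
  $value: string;
  $type: string;
}

// Turn a flat token map into CSS custom properties under :root.
function tokensToCss(tokens: Record<string, Token>): string {
  const lines = Object.entries(tokens).map(
    ([name, token]) => `  --${name}: ${token.$value};`
  );
  return `:root {\n${lines.join("\n")}\n}`;
}

const css = tokensToCss({
  "button-background": { $value: "#005FCC", $type: "color" },
});
// css === ":root {\n  --button-background: #005FCC;\n}"
```

    A real pipeline (Style Dictionary, for example) adds naming transforms and platform targets, but the core idea is
    exactly this: one structured source of truth rendered into each platform’s format.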

    The idea behind tokens has been around for a while, but it is often hard to implement unless you are using very specific tools or have custom workflows in place. Penpot changes that by building token support directly into the tool. You do not need extra plugins or complex naming systems. You define tokens once, and they are available everywhere in your design.

    Tokens are also flexible. You can create simple ones like colors or font sizes, or more complex groups for shadows, typography, or spacing systems. You can even reference other tokens, so if your design language evolves, you only need to change one thing.

    Why Should You Care About Design Tokens?

    Consistency and efficiency are two of the main reasons design tokens are becoming essential in design and development work. They reduce the need for manual coordination, avoid inconsistencies, and make it easier to scale design decisions. Here is how they help across different roles:

    For designers
    Tokens remove the need to repeat yourself. Instead of manually applying the same color or spacing across every frame, you define those values once and apply them as tokens. That means no more copy-pasting styles or fixing inconsistencies later. Everything stays consistent, and updates take seconds, not hours.

    For developers
    You get design values in a format that is ready to use. Tokens act as a shared language between design and code, so instead of pulling hex codes out of a mockup, you work directly with the same values defined by the design team. It reduces friction, avoids mismatches, and makes handoff smoother.

    For teams and larger systems
    Tokens are especially useful when multiple people are working on the same product or when you are managing a design system across several platforms or brands. They allow you to define decisions once and reuse them everywhere, keeping things in sync and easy to update when the brand evolves or when new platforms are added.

    Watch this quick and complete demo as Laura Kalbag, designer, developer and educator at Penpot, highlights the key benefits and main uses of Penpot’s design tokens:

    What Sets Penpot Apart?

    Penpot is not just adding support for design tokens as a separate feature. Tokens are being built directly into how Penpot works. They are part of the core design process, not an extra tool you have to manage on the side.

    You can create tokens from the canvas or from the token panel, organize them into sets, and apply them to components, styles, or entire boards. You do not need to keep track of where a value is used—Penpot does that for you. When you change a token, any component using it updates automatically.


    Tokens in Penpot follow the same format defined by the Design Tokens Community Group, which makes them easy to sync with code and other tools. They are stored in a way that works across platforms, and they are built to be shared, copied, or extended as your project grows.

    You also get extra capabilities like:

    • Tokens that can store text, numbers, and more
    • Math operations between tokens (for example, spacing that is based on a base value)
    • Integration with Penpot’s graph engine, so you can define logic and conditions around your tokens

    That means you can do more than just store values—you can create systems that adapt based on context or scale with your product.

    Key features

    Penpot design tokens support different token types, themes, and sets.

    Design tokens in Penpot are built to be practical and flexible from the start. Whether you are setting up a simple style guide or building a full design system, these features help you stay consistent without extra effort.

    • Native to the platform
      Tokens are a core part of Penpot. You do not need plugins, workarounds, or naming tricks to make them work. You can create, edit, and apply them directly in your files.
    • Based on open standards
      Penpot follows the format defined by the Design Tokens Community Group (W3C), which means your tokens are portable and ready for integration with other tools or codebases.
    • Component aware
      You can inspect which tokens are applied to components right on the canvas, and copy them out for use in code or documentation.
    • Supports multiple types
      Tokens can represent strings, numbers, colors, font families, shadows, and more. This means you are not limited to visual values—you can also manage logic-based or structural decisions.
    • Math support
      Define tokens in relation to others. For example, you can set a spacing token to be twice your base unit, and it will update automatically when the base changes.
    • Graph engine integration
      Tokens can be part of more advanced workflows using Penpot’s visual graph engine. This opens the door for conditional styling, dynamic UI variations, or even generative design.

    Practical Use Cases

    Design tokens are flexible building blocks that can support a range of workflows. Here are a few ways they’re already proving useful:

    • Scaling across platforms
      Tokens make it easier to maintain visual consistency across web, mobile, and desktop interfaces. When spacing, colors, and typography are tokenized, they adapt across screen sizes and tech stacks without manual rework.
    • Creating themes and variants
      Whether you’re supporting light and dark modes, multiple brands, or regional styles, tokens let you swap out entire visual styles by changing a single set of values—without touching your components.
    • Simplifying handoff and implementation
      Because tokens are defined in code-friendly formats, they eliminate guesswork. Developers can use tokens as source-of-truth values, reducing design drift and unnecessary back-and-forth.
    • Prototyping and iterating quickly
      Tokens make it easier to explore design ideas without breaking things. Want to try out a new font scale or update your color palette? Change the token values and everything updates—no tedious find-and-replace needed.
    • Versioning design decisions
      You can track changes to tokens over time just like code. That means your design system becomes easier to maintain, document, and evolve—without losing control.

    Your First Tokens in Penpot

    So how do you actually work with tokens in Penpot?

    The best way to understand design tokens is to try them out. Penpot makes this surprisingly approachable, even if you’re new to the concept. Here’s how to start creating and using tokens inside the editor.

    Creating a Token

    1. Open your project and click on the Tokens tab in the left panel.
    2. You’ll see a list of token types like color, dimension, font size, etc.
    3. Click the + button next to any token type to create a new token.

    You’ll be asked to fill in:

    • Name: Something like dimension.small or color.primary
    • Value: For example, 8px for a dimension, or #005FCC for a color
    • Description (optional): A short note about what it’s for

    Hit Save, and your token will appear in the list. Tokens are grouped by type, so it stays tidy even as your set grows.

    If you try to create a token with a name that already exists, you’ll get an error. Token names must be unique.

    Editing and Duplicating Tokens

    You can right-click any token to edit or duplicate it.

    • Edit: Change the name, value, or description
    • Duplicate: Makes a copy with -copy added to the name

    Handy if you’re exploring alternatives or setting up variants.

    Referencing Other Tokens (Aliases)

    Tokens can point to other tokens. This lets you define a base token and reuse it across multiple other tokens. If the base value changes, everything that references it updates automatically.

    For example:

    1. Create a token called dimension.small with a value of 8px
    2. Create another token called spacing.small
    3. In spacing.small, set the value to {dimension.small}

    Now if you ever update dimension.small to 4px, the spacing token will reflect that change too.

    Token references are case-sensitive, so be precise.
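    The alias mechanism described above can be sketched as a small resolver; this is my own illustration of the idea,
    not Penpot’s implementation:

```typescript
// Resolve {token.name} references in a flat token map.
// Lookups are case-sensitive, matching the note above.
function resolveToken(
  name: string,
  tokens: Record<string, string>,
  seen: Set<string> = new Set()
): string {
  if (seen.has(name)) throw new Error(`circular reference: ${name}`);
  const value = tokens[name];
  if (value === undefined) throw new Error(`unknown token: ${name}`);
  // Replace every {alias} with the resolved value of the aliased token.
  return value.replace(/\{([^}]+)\}/g, (_match, ref) =>
    resolveToken(ref, tokens, new Set(seen).add(name))
  );
}

const tokens: Record<string, string> = {
  "dimension.small": "8px",
  "spacing.small": "{dimension.small}",
};
// resolveToken("spacing.small", tokens) -> "8px"
```

    Changing dimension.small to 4px and resolving again returns "4px", which is exactly the update-once behavior the
    steps above describe.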

    Using Math in Tokens

    Penpot supports simple math in token values—especially useful for dimension tokens.

    You can write things like:

    • {dimension.base} * 2
    • 16 + 4
    • {spacing.small} + {spacing.medium}

    Let’s say dimension.base is 4px, and you want a larger version that’s always double. You can set dimension.large to:

    {dimension.base} * 2
    

    This means if you ever change the base, the large size follows along.

    Math expressions support basic operators:

    • + addition
    • - subtraction
    • * multiplication

    This adds a lightweight logic layer to your design decisions—especially handy for spacing scales, typography ramps, or breakpoints.
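    To ground this, here is a toy evaluator for the single-operator expressions listed above. It is entirely my own
    sketch (Penpot’s actual parser surely handles more), assuming a term is either a number like 16 or 4px, or an alias
    like {dimension.base}:

```typescript
// Evaluate a token math expression of the form "<term>" or "<term> <op> <term>".
function evalTokenMath(expr: string, tokens: Record<string, string>): number {
  const term = (t: string): number => {
    const aliased = t.match(/^\{(.+)\}$/);
    const raw = aliased ? tokens[aliased[1]] : t;
    return parseFloat(raw); // parseFloat("8px") -> 8
  };
  const m = expr.trim().match(/^(\S+)\s*([+*-])\s*(\S+)$/);
  if (!m) return term(expr.trim()); // no operator: a plain term
  const a = term(m[1]);
  const b = term(m[3]);
  return m[2] === "+" ? a + b : m[2] === "-" ? a - b : a * b;
}

// With dimension.base = "4px", "{dimension.base} * 2" evaluates to 8.
```

    The point is not the parser itself but the workflow it enables: derived sizes stay expressed as relationships, so
    changing the base token updates every expression built on it.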

    What’s Next for Penpot Design Tokens?

    Penpot has an exciting roadmap for design tokens that will continue to expand their functionality:

    • GitHub Sync: A feature allowing teams to easily export and import design tokens, facilitating smooth collaboration between design and development teams.
    • Gradients: An upcoming addition to design tokens, enabling designers to work with gradients as part of their design system.
    • REST API & Automation: The future addition of a REST API will enable even deeper integrations and allow teams to automate their design workflows.

    Since Penpot is open source and works under a culture of sharing as much as they can, as early as possible, you can check out their open Taiga board to see what the team is working on in real time and what’s coming up next.

    Conclusion

    Penpot’s design tokens are more than just a tool for managing visual consistency—they are a game-changer for how design and development teams collaborate. Whether you’re a junior UI designer trying to learn scalable design practices, a senior developer looking to streamline design implementation, or an enterprise team managing a complex design system, design tokens can help bring order to complexity.

    As Penpot continues to refine and expand this feature, now is the perfect time to explore the possibilities it offers.

    Give it a try!

    Are you excited about Penpot’s new design token feature? Check it out, explore the potential of scalable design, and
    stay tuned for updates. We look forward to seeing how you incorporate design tokens into your workflow!


