NITEX is not just another fashion-tech company. Their mission is to redefine the supply chain for fashion – bringing speed, sustainability, and intelligence to a traditionally rigid process. Their platform spans the entire workflow: design, trend forecasting, material sourcing, production, and logistics. In short, they offer a seamless, end-to-end system for brands who want to move faster and smarter.
When NITEX approached us, the challenge was clear: they needed more than a website. They needed a platform that could translate their vision into an experience that worked for multiple audiences – brands seeking services, investors looking for clarity, factories wanting partnerships, and talent exploring opportunities.
The project took shape over several months, moving from brand definition to UX architecture, UI design, and technical development. The turning point came with the realization that a single, linear site could not balance storytelling with action. To resolve this, we developed a dual-structure model: one path for narrative and inspiration, and another for practical conversion. This idea shaped every design and technical decision moving forward.
Crafting the Hybrid Identity
NITEX’s identity needed to reflect a unique duality: part fashion brand, part technology company. Our approach was to build a system that could flex between editorial elegance and sharp technical clarity.
At the heart of the identity sits the NITEX logo, an angular form created from a forward-leaning N and X. This symbol is more than a mark – it acts as a flexible frame. The hollow center creates a canvas for imagery, data, or color, visualizing collaboration and adaptability.
This angular geometry informed much of the visual language across the site:
Buttons expand or tilt along the logo’s angles when hovered.
The progress bar in navigation and footer fills in the same diagonal form.
Headlines reveal themselves with angled wipes, reinforcing a consistent rhythm.
Typography was kept bold yet minimal, with global sans-serif structures that feel equally at home in high fashion and digital environments.
Imagery played an equally important role. We chose photography that conveyed motion and energy, often with candid blur or dynamic framing. To push this further, we incorporated AI-generated visuals, adding intensity and reinforcing the sense of momentum at the core of the NITEX story.
The result is a brand system that feels dynamic, flexible, and scalable – capable of stretching from streetwear to luxury contexts while always staying rooted in clarity and adaptability.
Building the Engine
A complex brand and experience required a strong technical foundation. For this, our developers chose tools that balanced performance, flexibility, and scalability:
Frontend: Nuxt
Backend / CMS: Sanity
Animations & Motion: GSAP and the Web Animations API
The heavy reliance on native CSS transitions and the Web Animations API ensured smooth performance even on low-powered devices. GSAP was used to orchestrate more complex transitions while still keeping load times and resource use efficient. A key architectural decision was to give overlays their own URLs. This meant that when users opened deep-dive layers or content modules, those states were addressable, shareable, and SEO-friendly. This approach kept the experience immersive while ensuring that content remained accessible outside the narrative scroll.
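As a rough sketch of that idea: an overlay's open state can live in the route itself, so deep links and browser history work for free. The composable below is illustrative and not taken from the actual NITEX codebase:
import { computed } from "vue";
import { useRoute, useRouter } from "vue-router";

// Hypothetical composable: the overlay is "open" whenever the URL
// carries a chapter segment, making each deep-dive state addressable.
export function useOverlay() {
  const route = useRoute();
  const router = useRouter();

  const isOpen = computed(() => Boolean(route.params.chapter));

  const open = (chapter: string) => router.push(`/stories/${chapter}`);
  const close = () => router.push("/stories");

  return { isOpen, open, close };
}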
Defining the Flow
Several features stand out in the NITEX site for how they balance storytelling with functionality:
Expandable overlays: Each narrative chapter can unfold into deep-dive layers – showing case studies, workflow diagrams, or leadership perspectives without breaking the scroll.
Dynamic conversion flows: Forms adapt to the user’s audience type – brands, investors, talent, or factories – showing tailored fields and next steps.
Calendar integration: Visitors can book demos or design lab visits directly, streamlining the lead process and reinforcing immediacy.
This mix of storytelling modules and smart conversion flows ensured that every audience had a pathway forward, whether to be inspired, informed, or engaged.
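As a simple illustration of how such audience-aware forms can be structured (the field names and next steps below are hypothetical, not NITEX's actual schema):
type Audience = "brand" | "investor" | "factory" | "talent";

// Each audience type maps to its own set of form fields and follow-up step.
const conversionFlows: Record<Audience, { fields: string[]; nextStep: string }> = {
  brand: { fields: ["company", "category", "timeline"], nextStep: "book-demo" },
  investor: { fields: ["fund", "stage"], nextStep: "request-deck" },
  factory: { fields: ["capabilities", "certifications"], nextStep: "partner-call" },
  talent: { fields: ["role", "portfolio"], nextStep: "open-positions" },
};

// Render only the fields relevant to the visitor's chosen audience type.
const flowFor = (audience: Audience) => conversionFlows[audience];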
Bringing It to Life
NITEX’s brand identity found its fullest expression in the motion and interaction design of the site. The site opens with scroll-based storytelling, each chapter unfolding with smooth transitions. Page transitions maintain energy, using angled wipes and overlays that slide in from the side. These overlays carry their own links, allowing users to dive deep without losing orientation. The angular motion language of the logo carries through:
Buttons expand dynamically on hover.
Rectangular components tilt into angular forms.
In the dual-image module, the N and X frame tracks the viewport, dynamically revealing new perspectives.
This creates a consistent visual rhythm, where every motion feels connected to the brand’s DNA. The imagery reinforces this, emphasizing speed and creativity through motion blur, candid composition, and AI-driven intensity. Importantly, we kept the overall experience modular and scalable. Each content block is built on a flexible grid with clear typographic hierarchy. This ensures usability while leaving room for surprise – whether it’s an animated reveal, a bold image transition, or a subtle interactive detail.
Under the Hood
From a structural standpoint, the site was designed to scale as NITEX grows. The codebase follows a modular approach, with reusable components that can be repurposed across sections. Sanity’s CMS allows editors to easily add new chapters, forms, or modules without breaking the system.
The split-entry structure – narrative vs. action – was the architectural anchor. This allowed us to keep storytelling immersive without sacrificing usability for users who came with a clear transactional intent.
Looking Back
This project was as much about balance as it was about creativity. Balancing brand storytelling with user conversion. Balancing motion and expressiveness with speed and performance. Balancing multiple audience needs within a single coherent system.
One of the most rewarding aspects was seeing how the dual-experience model solved what initially felt like an unsolvable challenge: how to serve users who want inspiration and those who want action without building two entirely separate sites.
The deep-dive overlays also proved powerful, letting NITEX show rather than just tell their story. They allowed us to layer complexity while keeping the surface experience clean and intuitive.
Looking ahead, the NITEX platform is built to evolve. Future possibilities include investor dashboards with live performance metrics, brand-specific case modules curated by industry, or interactive workflow tools aligned with NITEX’s trend-to-delivery logic. The foundation we built makes all of this possible.
Ultimately, the NITEX project reflects the company’s own values: clarity, adaptability, and speed. For us, it was an opportunity to merge brand design, UX, UI, and development into a single seamless system – one that redefines what a fashion-tech platform can look and feel like.
Codrops’ “design” has been long overdue for a refresh. I’ve had ideas for a new look floating around for ages, but actually making time to bring them to life has been tough. It’s the classic shoemaker’s shoes problem: I spend my days answering emails, editing articles and (mostly) managing Codrops and the amazing contributions from the community, while the site itself quietly gathers dust 😂
Still, the thought of reimagining Codrops has been sitting in the back of my mind. I’d already been eyeing Anima as a tool that could make the process faster, so I reached out to their team. They were kind enough to support us with this review (thank you so much!) and it’s a true win-win: I get to finally test my idea for Codrops, and you get a good look at how the tool holds up in practice 🤜🤛
So, Anima is a platform made to bridge the gap between design and development. It allows you to take an existing website, either one of your own projects or something live on the web, and bring it into a workspace where the layout and elements can be inspected, edited, and reworked. From there, you can export the result as clean, production-ready code in React, HTML/CSS, or Tailwind. In practice, this means you can quickly prototype new directions, remix existing layouts, or test ideas without starting completely from scratch.
Obviously, you should not use this to copy other people’s work, but rather to prototype your own ideas and remix your projects!
Let me take you along on a little experiment I ran with it.
Getting started
Anima Link to Code was introduced in July this year and promises to take any design or web page and transform it into live, editable code. You can generate, preview, and export production-ready code in React, TypeScript, Tailwind CSS, or plain HTML and CSS. That means you can start with a familiar environment, test an idea, and immediately see how it holds up in real code rather than staying stuck in the design stage. It also means you can poke around, break things, and try different directions without manually rebuilding the scaffolding each time. That kind of speed is what usually makes or breaks whether I stick with an experiment or abandon it halfway through.
To begin, I decided to use the Codrops homepage as my guinea pig. I have always wondered how it would feel reimagined as a bento-style grid. Normally, if I wanted to try that, I would either spend hours rewriting markup and CSS by hand or rely on an AI prompt that would often spiral into unrelated layouts and syntax errors. It would already be a great help if I could visualize my idea and play with it a bit!
After pasting in the Codrops URL, this is what came out. A React project was generated in seconds.
The first impression was surprisingly positive. The homepage looked recognizable and the layout did not completely collapse. Yes, there was a small glitch where the Webzibition box background was not sized correctly, but overall it was close enough that I felt comfortable moving on. That is already more than I can say for many auto-generation tools where the output is so mangled that you do not even know where to start.
Experimenting with a bento grid
Now for the fun part. I typed a simple prompt that said, “Make a bento grid of all these items.” Almost immediately I hit an error. My usual instinct in this situation is to give up, since vibe coding often collapses the moment an error shows up and then becomes a spiral of debugging someone else’s half-generated mess. But let’s try again instead of quitting right away 🙂 The fix worked and I got a quirky but functioning bento grid layout:
The result was not exactly what I had in mind. Some elements felt off balance and the spacing was not ideal. Still, I had something on screen to iterate on, which is already a win compared to starting from scratch. So I pushed further. Could I bring the Creative Hub and Webzibition modules into this grid? A natural language prompt like “Place the Creative Hub box into the bento style container of the articles” felt like a good test.
And yes, it actually worked. The Creative Hub box slipped into the grid container:
The layout was starting to look cramped, so I tried another prompt. I asked Anima to also move the Webzibition box into the same container and to make it span full width. The generation was quick with barely a pause, and suddenly the page turned into this:
This really showed me what it’s good at: iteration is fast. You don’t have to stop, rethink the grid, or rewrite CSS by hand. You just throw an idea in, see what comes back, and keep moving. It feels more like sketching in a notebook than carefully planning a layout. For prototyping, that rhythm is exactly what I want. Really into this type of layout for Codrops!
Looking under the hood
Visuals are only half the story. The bigger question is what kind of code Anima actually produces. I opened the generated React and Tailwind output, fully expecting a sea of meaningless divs and tangled class names.
To my surprise, the code was clean. Semantic elements were present, the structure was logical, and everything was just readable. There was no obvious divitis, and the markup did not feel like something I would want to burn and rewrite from scratch. It even got me thinking about how much simpler maintaining Codrops might be if it were a lean React app with Tailwind instead of living inside the layers of WordPress 😂
There is also a Chrome extension called Web to Code, which lets you capture any page you are browsing and instantly get editable code. With it, inner pages like dashboards, login screens, or even private areas of a site you are working on can be pulled into a sandbox and played with directly.
Pros and cons
Pros: Fast iteration, surprisingly clean code, easy setup, beginner-friendly, genuinely fun to experiment with.
Cons: Occasional glitches, exported code still needs cleanup, limited customization, not fully production-ready.
Final thoughts
Anima is not magic and it is not perfect. It will not replace deliberate coding, and it should not. But as a tool for quick prototyping, remixing existing designs, or exploring how a site might feel with a new structure, it is genuinely fun and surprisingly capable. The real highlight for me is the speed of iteration: you try an idea, see the result instantly, and either refine it or move on. That rhythm is addictive for creative developers who like to sketch in code rather than commit to heavy rebuilds from scratch.
Verdict: Anima shines as a playground for experimentation and learning. If you’re a designer or developer who enjoys fast iteration, you’ll likely find it inspiring. If you need production-ready results for client work, you’ll still want to polish the output or stick with more mature frameworks. But for curiosity, prototyping, and a spark of creative joy, Anima is worth your time and you might be surprised at how much fun it is to remix the web this way.
At the beginning of 2025, I finally decided to build myself a new portfolio. I still pretty much liked the one I made back in 2021, but I felt the need to put to good use all the cool stuff I’ve learned these past couple years working with WebGPU. And, besides, half of the projects featured in my case studies had been put offline anyway, so it was about time.
I didn’t really know where I was going at this point, except that:
It would, of course, feature multiple procedurally generated WebGPU scenes. I already had a few concepts to explore in mind, like particles or boids simulation.
I wanted to take care of the design myself. It may seem weird, especially since I was very happy with what Gilles came up with for my last portfolio, and also because I do suck at design. But this would give me more freedom, and I’ve also always liked building things from scratch on my own.
Last but not least, it had to be fun!
1. The journey
The (tough) design and content process
Don’t do this!
At first, I had no idea what to do design-wise. Fonts, colors: there are so many things that could go wrong.
I started with simple light and dark colors, kept the fonts Gilles had chosen for my previous portfolio and started to copy/paste its old text content. It didn’t feel that great, and it wasn’t fun for sure.
The very first design iterations… Still a long way to go!
I definitely needed colors. I could have wasted a few hours (or days) choosing the right pairing, but instead I decided this could be the right opportunity to use the random color palette generator utility I’d coded a few years ago. I cleaned the code a bit, created a repo, published it to npm and added it to my project. I also slightly changed the tone of the copywriting, and that led me to something still not that great, but a bit more fun.
Slowly getting there
I let it sit for a while and started working on other parts of the site, such as integrating the CMS or experimenting with the WebGPU scenes. It’s only after a long iteration process that I finally settled on this kind of old-school video game retro vibe mixed with a more cheerful, cartoonish aesthetic, almost Candy Crush-esque. Impactful headings, popping animations, banded gradients… you name it.
Of course, I’ve never gone as far as creating a Figma project (I did select a few reference images as a moodboard though) and just tested a ton of stuff directly with code until I felt it wasn’t that bad anymore. All in all, it was a very long and painful process, and I guess every designer would agree at this point: don’t do this!
A few images from my final moodboard – all credits go to their respective authors.
Do you actually read portfolio content?
Another painful point was to settle on the actual content and overall structure of the site. Do I need detailed case studies pages? Do I need pages at all? Will the users even read all those long blocks of text I will struggle to write?
In the end, I chose to drop the case studies pages. I had a couple of reasons to do so:
Oftentimes the project ends up being taken offline for various reasons, and you end up showcasing something the user cannot visit anymore. This is exactly what happened on my previous portfolio.
Most of the client work I’ve been doing those past years has been for agencies, and I’m not always allowed to publicly share them. I have no problem with that, but it slightly reduced the number of projects I could highlight.
From there on, it was a quick decision to just go with a single landing page. I’d put direct links to the projects I could highlight and small videos of all the other projects or personal works I could feature. On top of that, I’d add a few “about” sections mixed with my WebGPU scenes, and that’d be the gist of it.
Speaking of the WebGPU scenes, I really wanted them to be meaningful, not just a technical demonstration of what I could do. But we’ll get to that later.
The final UX twist
After a few months, I felt like I was entering the final stage of development. The page structure was mostly done, all my various sections were there and I was working on the final animations and micro-interaction tweaks.
So I took a step back, and looked back at my initial expectations. I had my WebGPU scenes showcasing my various technical skills. I had handled the design myself, and it wasn’t that bad. But were the flashy colors and animations enough to make it a really fun experience overall?
I think you already know the answer. Something was missing. Except for the random color palette switcher, the UX basically consisted of scroll-driven animations. Most of the 3D scenes interactions were rudimentary. I needed an idea.
The design already had this video game cheerful look. So… What if I turned my whole portfolio into a game? Once again, I started writing down my ideas:
The user would need to interact with the different UI elements to unlock the theme switcher and color palette generator buttons.
Each WebGPU scene could serve as a way to unlock the following content, acting as a very basic “puzzle” game.
Keep track of the user’s overall progress.
Allow the user to skip the whole game process if they want to.
This means most of the users wouldn’t ever make it to the footer, or use this random palette generator tool I’ve struggled to implement. This might very well be the riskiest, stupidest decision I’ve made so far. But it would give my portfolio this unique and fun touch I was looking for in the first place, so I went all in.
Of course, it goes without saying it implied a major refactoring of the whole code and I needed to come up with original interaction ideas for the WebGPU scenes, but I like to think it was worth it.
Gamification mechanisms: unlocking content and rewarding message
Are you one of the few that unlocked the color palette generator button?
2. Technical study
Now that you know all the whys, let’s have a look at the hows!
Tech stack
I decided to try Sanity Studio, as I’d never worked with it before, and since I knew this would be a relatively small project, it seemed a perfect fit to start using it. Even though I feel like I’ve only scratched its surface, I liked the overall developer experience it provided. On the other hand, I already had a good experience working with Nuxt 3, so this was an easy choice.
No need to mention why I chose GSAP and Lenis — everyone knows those are great tools to deliver smooth animated websites.
Of course, the WebGPU scenes had to be done with gpu-curtains, the 3D engine I spent so much time working on these past two years. It was a great way to test it in a real-life scenario and gave me the opportunity to fix a few bugs or add a couple features along the way.
And since I wanted the whole process to be as transparent as possible, I’ve published the whole source code as a monorepo on GitHub.
Animations
I won’t go too deep into how I handled the various animations, simply because I’ve essentially used CSS and a bit of GSAP here and there, mostly for canvas animations, SplitText effects or the videos carousel using ScrollTrigger observer.
The basic scenes
There are a lot of components on the website that needed to draw something onto a <canvas> and react to the theme and/or color palette changes.
Since switching the theme from light to dark (or vice versa) also updates the color palette by tweaking the HSV value component of the colors a bit, I’ve just put a setColors() method in there to handle these changes.
The progress handling here is actually a remnant of when the WebGPU scene animations were mostly scroll-driven (before I introduced the game mechanisms), but since a few scenes still used it, I kept it in there.
All the 2D canvas scenes extend that class, including the WebGPU fallback scenes, the theme switcher button and the dynamic favicon generator (did you notice that?).
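Here's a simplified sketch of that base class; the property names are approximations of the real code in the monorepo mentioned above:
export class Scene {
  ctx: CanvasRenderingContext2D;
  colors: string[];
  progress = 0; // remnant of the scroll-driven days, still used by a few scenes

  constructor({ canvas, colors }: { canvas: HTMLCanvasElement; colors: string[] }) {
    this.ctx = canvas.getContext("2d") as CanvasRenderingContext2D;
    this.colors = colors;
  }

  // called whenever the theme or the color palette changes
  setColors(colors: string[]) {
    this.colors = colors;
  }

  onProgress(progress: number) {
    this.progress = progress;
  }

  render() {
    // each concrete scene overrides this
  }
}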
The WebGPU scenes
One of the very cool features introduced by WebGPU is that you can render to multiple <canvas> elements using only one WebGPU device. I used this to build 4 different scenes (we’ll take a closer look at each of them below) that all extend a WebGPUScene.ts class:
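(What follows is a simplified reconstruction of that class, not the verbatim source; the real version lives in the monorepo mentioned above.)
import { GPUCurtains } from "gpu-curtains";
import { QualityManager } from "~/utils/QualityManager"; // hypothetical path

export class WebGPUScene {
  gpuCurtains: GPUCurtains;
  container: HTMLElement;
  qualityManager: QualityManager;
  quality: number;

  constructor({
    gpuCurtains,
    container,
    targetFPS = 60,
  }: {
    gpuCurtains: GPUCurtains;
    container: HTMLElement;
    targetFPS?: number;
  }) {
    // every scene shares the same WebGPU device through a single GPUCurtains
    // instance, but renders to its own <canvas> container
    this.gpuCurtains = gpuCurtains;
    this.container = container;

    this.qualityManager = new QualityManager({
      label: "WebGPU scene",
      targetFPS,
      onQualityChange: (newQuality) => this.onQualityChange(newQuality),
    });
    this.quality = this.qualityManager.quality.current;
  }

  // called on each render tick so the quality manager can track FPS
  onRender() {
    this.qualityManager.update();
  }

  onQualityChange(newQuality: number) {
    this.quality = newQuality;
    // each scene reacts by adjusting pixel ratio, effects, etc.
  }
}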
In the real version, this class also handles the creation of a Tweakpane GUI folder (useful for debugging or tweaking values), but for the sake of clarity I removed the related code here.
As you can see, each of these scenes closely monitors its own performance using a custom QualityManager class. We’ll talk about that later, in the performance section.
Okay, now that we have the basic architecture in mind, let’s break down each of the WebGPU scenes!
Since WebGPU is not fully supported yet, I’ve created fallback versions using the 2D canvas API and the Scene class we’ve seen above for each of the following scenes.
Hero scene
The scenes featured in the portfolio roughly follow a complexity order, meaning the more you advance in the portfolio, the more technically involved the scenes become.
In that way, the hero scene is by far the simplest technically speaking, but it had to look particularly striking and engaging to immediately capture the user’s attention. It was conceived as some sort of mobile puzzle game splash screen.
Let’s go!
It’s made of a basic, single fullscreen quad. The idea here is to first rotate its UV components each frame, map them to polar coordinates and use that to create colored triangle segments.
// Center UVs at (0.5, 0.5)
var centeredUV = uv - vec2f(0.5);
// Apply rotation using a 2D rotation matrix
let angleOffset = params.time * params.speed; // Rotation angle in radians
let cosA = cos(angleOffset);
let sinA = sin(angleOffset);
// Rotate the centered UVs
centeredUV = vec2<f32>(
cosA * centeredUV.x - sinA * centeredUV.y,
sinA * centeredUV.x + cosA * centeredUV.y
);
// Convert to polar coordinates
let angle = atan2(centeredUV.y, centeredUV.x); // Angle in radians
let radius = length(centeredUV);
// Map angle to triangle index
let totalSegments = params.numTriangles * f32(params.nbColors) * params.fillColorRatio;
let normalizedAngle = (angle + PI) / (2.0 * PI); // Normalize to [0,1]
let triIndex = floor(normalizedAngle * totalSegments); // Get triangle index
// Compute fractional part for blending
let segmentFraction = fract(normalizedAngle * totalSegments); // Value in [0,1] within segment
let isEmpty = (i32(triIndex) % i32(params.fillColorRatio)) == i32(params.fillColorRatio - 1.0);
let colorIndex = i32(triIndex / params.fillColorRatio) % params.nbColors; // Use half as many color indices
let color = select(vec4(params.colors[colorIndex], 1.0), vec4f(0.0), isEmpty);
There’s actually a wavy noise applied to the UV beforehand using concentric circles, but you get the idea.
Interestingly enough, the most difficult part was to achieve the rounded rectangle entering animation while preserving the correct aspect ratio. This was done using this function:
fn roundedRectSDF(uv: vec2f, resolution: vec2f, radiusPx: f32) -> f32 {
let aspect = resolution.x / resolution.y;
// Convert pixel values to normalized UV space
let marginUV = vec2f(radiusPx) / resolution;
let radiusUV = vec2f(radiusPx) / resolution;
// Adjust radius X for aspect ratio
let radius = vec2f(radiusUV.x * aspect, radiusUV.y);
// Center UV around (0,0) and apply scale (progress)
var p = uv * 2.0 - 1.0; // [0,1] → [-1,1]
p.x *= aspect; // fix aspect
p /= max(0.0001, params.showProgress); // apply scaling
p = abs(p);
// Half size of the rounded rect
let halfSize = vec2f(1.0) - marginUV * 2.0 - radiusUV * 2.0;
let halfSizeScaled = vec2f(halfSize.x * aspect, halfSize.y);
let d = p - halfSizeScaled;
let outside = max(d, vec2f(0.0));
let dist = length(outside) + min(max(d.x, d.y), 0.0) - radius.x * 2.0;
return dist;
}
Highlighted videos slider scene
Next up is the highlighted videos slider. The original idea came from an old WebGL prototype I had built a few years ago and never used.
Do you spot any similarities?
The idea is to displace the planes’ vertices to wrap them around a cylinder.
var position: vec3f = attributes.position;
// curve
let angle: f32 = 1.0 / curve.nbItems;
let cosAngle = cos(position.x * PI * angle);
let sinAngle = sin(position.x * PI * angle);
position.z = cosAngle * curve.itemWidth;
position.x = sinAngle;
I obviously used this for the year titles, whereas the videos and trail effects behind them are distorted using a post-processing pass.
While this was originally tied to the vertical scroll values (and I really liked the feeling it produced), I had to update its behavior when I switched to the whole gamification idea, making it a horizontal carousel.
Going at the speed of light!
Thanks to gpu-curtains’ DOM-to-WebGPU syncing capabilities, it was relatively easy to set up the videos grid prototype using the Plane class.
The trail effect is done using a compute shader writing to a storage texture. The compute shader only runs when necessary, which means when the slider is moving. I’m sure it could have been done in a thousand different ways, but it was a good excuse to play with compute shaders and storage textures. Here’s the compute shader involved:
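(A simplified sketch rather than the verbatim shader: it fades the previous frame, then stamps the current pointer position into the storage texture. Parameter names are illustrative, and the bounds check is omitted for brevity.)
struct TrailParams {
  pointerPosition: vec2f,
  radius: f32,
  dissipation: f32
};

@group(0) @binding(0) var<uniform> params: TrailParams;
@group(0) @binding(1) var previousTexture: texture_2d<f32>;
@group(0) @binding(2) var trailTexture: texture_storage_2d<rgba16float, write>;

@compute @workgroup_size(16, 16) fn main(
  @builtin(global_invocation_id) id: vec3<u32>
) {
  let coords = vec2<i32>(id.xy);

  // fade the previous frame to slowly dissipate the trail
  var color = textureLoad(previousTexture, coords, 0) * params.dissipation;

  // stamp the current pointer/slider position into the trail
  let dist = distance(vec2f(id.xy), params.pointerPosition);
  let stamp = 1.0 - smoothstep(0.0, params.radius, dist);
  color = max(color, vec4f(stamp));

  textureStore(trailTexture, coords, color);
}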
I thought I was done here, but while running production build tests I stumbled upon an issue. Unfortunately, preloading all those videos to use as WebGPU textures resulted in a huge initial payload and also significantly affected the CPU load. To mitigate that, I implemented sequential video preloading, waiting for each video to have enough data before loading the next one. This gave a huge boost to both initial load time and CPU overhead.
Sequential videos loading waterfall
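In code, the idea boils down to something like the following sketch (simplified; the real implementation also feeds each video into a WebGPU texture):
// Waits for each video to have enough data (canplaythrough) before starting the next one.
async function preloadVideosSequentially(sources: string[]): Promise<HTMLVideoElement[]> {
  const videos: HTMLVideoElement[] = [];

  for (const src of sources) {
    const video = document.createElement("video");
    video.muted = true;
    video.preload = "auto";
    video.src = src;

    // wait until the browser thinks it can play through without stalling
    await new Promise<void>((resolve) => {
      video.addEventListener("canplaythrough", () => resolve(), { once: true });
      video.load();
    });

    videos.push(video);
  }

  return videos;
}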
Invoices scene
The third WebGPU scene was initially supposed to be my own take on 3D boids simulation, using instancing and a compute shader. After a bit of work, I had a bunch of instances that were following my mouse, but the end result was not living up to my expectations. The spheres were sometimes overlapping each other, or disappearing behind the edges of the screen. I kept improving it, adding self-collision, edge detection and attraction/repulsion mechanisms until I was happy enough with the result.
I like to call it the “invoices” scene, because the sphere instances actually represent all the invoices I issued during my freelance career, scaled based on the amounts. Since I’m using Google Sheets to handle most of my accounting, I’ve made a little script that gathers all my invoice amounts in a single, separate private sheet each time I update my accounting sheets. I then fetch and parse that sheet to create the instances. It was a fun little side exercise and turns this scene into an ironically meaningful experiment: each time you click and hold, you kind of help me collect my money.
Give me my money!
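The fetching part can stay very small. Here's a hedged sketch assuming the sheet is published as CSV, with a hypothetical URL shape and column layout:
async function fetchInvoiceAmounts(sheetUrl: string): Promise<number[]> {
  const response = await fetch(sheetUrl); // e.g. a ".../export?format=csv" link
  const csv = await response.text();

  // one invoice amount per row, first column (illustrative layout)
  return csv
    .split("\n")
    .map((row) => parseFloat(row.split(",")[0]))
    .filter((amount) => !Number.isNaN(amount));
}

// Each amount then drives the radius of one sphere instance.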
The compute shader uses a buffer ping-pong technique: you start with two identically filled buffers (e.g. packed raw data), then at each compute dispatch call you read the data from the first buffer and update the second one accordingly. Once done, you swap the two buffers before the next call and repeat the process. If you’re familiar with WebGL, this is often done with textures. WebGPU and compute shaders allow us to do it with buffers, which is way more powerful. Here is the complete compute shader code:
struct ParticleB {
position: vec4f,
velocity: vec4f,
rotation: vec4f,
angularVelocity: vec4f,
data: vec4f
};
struct ParticleA {
position: vec4f,
velocity: vec4f,
rotation: vec4f,
angularVelocity: vec4f,
data: vec4f
};
struct SimParams {
deltaT: f32,
mousePosition: vec3f,
mouseAttraction: f32,
spheresRepulsion: f32,
boxReboundFactor: f32,
boxPlanes: array<vec4f, 6>
};
@group(0) @binding(0) var<uniform> params: SimParams;
@group(0) @binding(1) var<storage, read> particlesA: array<ParticleA>;
@group(0) @binding(2) var<storage, read_write> particlesB: array<ParticleB>;
fn constrainToFrustum(pos: vec3<f32>, ptr_velocity: ptr<function, vec3<f32>>, radius: f32) -> vec3<f32> {
var correctedPos = pos;
for (var i = 0u; i < 6u; i++) { // Loop through 6 frustum planes
let plane = params.boxPlanes[i];
let dist = dot(plane.xyz, correctedPos) + plane.w;
if (dist < radius) { // If inside the plane boundary (radius = 1)
// Move the point inside the frustum
let correction = plane.xyz * (-dist + radius); // Push inside the frustum
// Apply the position correction
correctedPos += correction;
// Reflect velocity with damping
let normal = plane.xyz;
let velocityAlongNormal = dot(*(ptr_velocity), normal);
if (velocityAlongNormal < 0.0) { // Ensure we only reflect if moving towards the plane
*(ptr_velocity) -= (1.0 + params.boxReboundFactor) * velocityAlongNormal * normal;
}
}
}
return correctedPos;
}
fn quaternionFromAngularVelocity(omega: vec3f, dt: f32) -> vec4f {
let theta = length(omega) * dt;
if (theta < 1e-5) {
return vec4(0.0, 0.0, 0.0, 1.0);
}
let axis = normalize(omega);
let halfTheta = 0.5 * theta;
let sinHalf = sin(halfTheta);
return vec4(axis * sinHalf, cos(halfTheta));
}
fn quaternionMul(a: vec4f, b: vec4f) -> vec4f {
return vec4(
a.w * b.xyz + b.w * a.xyz + cross(a.xyz, b.xyz),
a.w * b.w - dot(a.xyz, b.xyz)
);
}
fn integrateQuaternion(q: vec4f, angularVel: vec3f, dt: f32) -> vec4f {
let omega = vec4(angularVel, 0.0);
let dq = 0.5 * quaternionMul(q, omega);
return normalize(q + dq * dt);
}
@compute @workgroup_size(64) fn main(
@builtin(global_invocation_id) GlobalInvocationID: vec3<u32>
) {
var index = GlobalInvocationID.x;
var vPos = particlesA[index].position.xyz;
var vVel = particlesA[index].velocity.xyz;
var collision = particlesA[index].velocity.w;
var vQuat = particlesA[index].rotation;
var angularVelocity = particlesA[index].angularVelocity.xyz;
var vData = particlesA[index].data;
let sphereRadius = vData.x;
var newCollision = vData.y;
collision += (newCollision - collision) * 0.2;
collision = smoothstep(0.0, 1.0, collision);
newCollision = max(0.0, newCollision - 0.0325);
let mousePosition: vec3f = params.mousePosition;
let minDistance: f32 = sphereRadius; // Minimum allowed distance between spheres
// Compute attraction towards sphere 0
var directionToCenter = mousePosition - vPos;
let distanceToCenter = length(directionToCenter);
// Slow down when close to the attractor
var dampingFactor = smoothstep(0.0, minDistance, distanceToCenter);
if (distanceToCenter > minDistance && params.mouseAttraction > 0.0) { // Only attract if outside the minimum distance
vVel += normalize(directionToCenter) * params.mouseAttraction * dampingFactor;
vVel *= 0.95;
}
// Collision Handling: Packing spheres instead of pushing them away
var particlesArrayLength = arrayLength(&particlesA);
for (var i = 0u; i < particlesArrayLength; i++) {
if (i == index) {
continue;
}
let otherPos = particlesA[i].position.xyz;
let otherRadius = particlesA[i].data.x;
let collisionMinDist = sphereRadius + otherRadius;
let toOther = otherPos - vPos;
let dist = length(toOther);
if (dist < collisionMinDist) {
let pushDir = normalize(toOther);
let overlap = collisionMinDist - dist;
let pushStrength = otherRadius / sphereRadius; // radius
// Push away proportionally to overlap
vVel -= pushDir * (overlap * params.spheresRepulsion) * pushStrength;
newCollision = min(1.0, pushStrength * 1.5);
let r = normalize(cross(pushDir, vVel));
angularVelocity += r * length(vVel) * 0.1 * pushStrength;
}
}
let projectedVelocity = dot(vVel, directionToCenter); // Velocity component towards mouse
let mainSphereRadius = 1.0;
if(distanceToCenter <= (mainSphereRadius + minDistance)) {
let pushDir = normalize(directionToCenter);
let overlap = (mainSphereRadius + minDistance) - distanceToCenter;
// Push away proportionally to overlap
vVel -= pushDir * (overlap * params.spheresRepulsion) * (2.0 + params.mouseAttraction);
newCollision = 1.0;
if(params.mouseAttraction > 0.0) {
vPos -= pushDir * overlap;
}
let r = normalize(cross(pushDir, vVel));
angularVelocity += r * length(vVel) * 0.05;
}
vPos = constrainToFrustum(vPos, &vVel, sphereRadius);
// Apply velocity update
vPos += vVel * params.deltaT;
angularVelocity *= 0.98;
let updatedQuat = integrateQuaternion(vQuat, angularVelocity, params.deltaT);
// Write back
particlesB[index].position = vec4(vPos, 0.0);
particlesB[index].velocity = vec4(vVel, collision);
particlesB[index].data = vec4(vData.x, newCollision, vData.z, vData.w);
particlesB[index].rotation = updatedQuat;
particlesB[index].angularVelocity = vec4(angularVelocity, 1.0);
}
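On the JavaScript side, the ping-pong itself comes down to two bind groups referencing the same two buffers in alternating read/write order. A framework like gpu-curtains abstracts this away, but in raw WebGPU it would look roughly like this sketch (assuming the device, compute pipeline, params buffer and the two identically filled storage buffers already exist):
// two bind groups that swap which buffer is read and which is written
const bindGroups = [0, 1].map((i) =>
  device.createBindGroup({
    layout: computePipeline.getBindGroupLayout(0),
    entries: [
      { binding: 0, resource: { buffer: paramsBuffer } },
      { binding: 1, resource: { buffer: i === 0 ? particlesBufferA : particlesBufferB } }, // read
      { binding: 2, resource: { buffer: i === 0 ? particlesBufferB : particlesBufferA } }, // read_write
    ],
  })
);

let frame = 0;

function dispatchCompute(pass: GPUComputePassEncoder, particleCount: number) {
  pass.setPipeline(computePipeline);
  // alternate the bind groups: last frame's output becomes this frame's input
  pass.setBindGroup(0, bindGroups[frame % 2]);
  pass.dispatchWorkgroups(Math.ceil(particleCount / 64)); // matches @workgroup_size(64)
  frame++;
}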
One of my main inspirations for this scene was this awesome demo by Patrick Schroen. I spent a lot of time looking for the right rendering tricks to use and finally settled on volumetric lighting. The implementation is quite similar to what Maxime Heckel explained in this excellent breakdown article. Funnily enough, I was already deep into my own implementation when he released that piece, and I owe him the idea of using a blue noise texture.
Volumetric lighting debugging
As a side note, during the development phase this was the first scene that required an actual user interaction and it played a pivotal role in my decision to turn my folio into a game.
Open source scene
For the last scene, I wanted to experiment a bit more with particles and curl noise because I’ve always liked how organic and beautiful it can get. I had already published an article using these concepts, so I had to come up with something different. Jaume Sanchez’ Polygon Shredder definitely was a major inspiration here.
Since this experiment was part of my open source commitment section, I had the idea to use my GitHub statistics as a data source for the particles. Each statistic (number of commits, followers, issues closed and so on) is assigned to a color and turned into a bunch of particles. You can even toggle them on and off using the filters in the information pop-up. Once again, this changed a rather technical demo into something more meaningful.
Curl noise and particles are always a good match
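As a rough sketch, the data plumbing can be a single call to the public GitHub API; only two stats are shown here and the colors are placeholders, not the real mapping:
interface ParticleGroup {
  label: string;
  color: string;
  count: number;
}

async function githubParticleGroups(user: string): Promise<ParticleGroup[]> {
  // the /users endpoint exposes followers and public repo counts;
  // the real scene aggregates more stats (commits, issues closed, etc.)
  const res = await fetch(`https://api.github.com/users/${user}`);
  const { followers, public_repos } = await res.json();

  return [
    { label: "Followers", color: "#ff6b6b", count: followers },
    { label: "Repositories", color: "#4ecdc4", count: public_repos },
  ];
}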
While working on the portfolio, I was also exploring new rendering techniques with gpu-curtains such as planar reflections. Traditionally used for mirror effects or floor reflections, it consists of rendering a part of your scene a second time but from a different camera angle and projecting it onto a plane. Having nailed this, I thought it would be a perfect match there and added it to the scene.
Last but not least, and as a reminder of the retro video games vibe, I wanted to add a pixelated mouse trail post-processing effect. I soon realized it would be way too much though, and ended up showing it only when the user is actually drawing a line, making it more subtle.
Using the filters can actually help you unlock features!
Performance and accessibility
On such highly interactive and immersive pages, performance is key. Here are a few tricks I’ve used to try to maintain the most fluid experience across all devices.
Dynamic imports
I’ve used Nuxt’s dynamically imported components and lazy hydration for almost every non-critical component of the page. In the same way, all WebGPU scenes are dynamically loaded only if WebGPU is supported. This significantly decreased the initial page load time.
// pseudo code
import type { WebGPUHeroScene } from "~/scenes/hero/WebGPUHeroScene";
import { CanvasHeroScene } from "~/scenes/hero/CanvasHeroScene";
let scene: WebGPUHeroScene | CanvasHeroScene | null;
const canvas = useTemplateRef("canvas");
const { colors } = usePaletteGenerator();
onMounted(async () => {
const { $gpuCurtains, $hasWebGPU, $isReducedMotion } = useNuxtApp();
if ($hasWebGPU && canvas.value) {
const { WebGPUHeroScene } = await import("~/scenes/hero/WebGPUHeroScene");
scene = new WebGPUHeroScene({
gpuCurtains: $gpuCurtains,
container: canvas.value,
colors: colors.value,
});
} else if (canvas.value) {
scene = new CanvasHeroScene({
container: canvas.value,
isReducedMotion: $isReducedMotion,
colors: colors.value,
});
}
});
I’m not particularly fond of Lighthouse reports but as you can see the test result is quite good (note that it’s running without WebGPU though).
PageSpeed Insights report
Monitoring WebGPU performance in real time
I’ve briefly mentioned it earlier, but each WebGPU scene actually monitors its own performance by keeping track of its FPS rate in real time. To do so, I’ve written two separate classes: FPSWatcher, which records the average FPS over a given period of time, and QualityManager, which uses an FPSWatcher to set a current quality rating on a 0 to 10 scale based on the average FPS.
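Here's a minimal sketch of what FPSWatcher could look like (simplified from the real class in the monorepo):
export interface FPSWatcherParams {
  updateDelay?: number; // in milliseconds
  onWatch?: (averageFPS: number) => void;
}

export default class FPSWatcher {
  updateDelay: number;
  onWatch: (averageFPS: number) => void;
  frames: number[] = [];
  lastTime = performance.now();
  elapsedTime = 0;

  constructor({ updateDelay = 1000, onWatch = () => {} }: FPSWatcherParams = {}) {
    this.updateDelay = updateDelay;
    this.onWatch = onWatch;
  }

  restart() {
    this.frames = [];
    this.elapsedTime = 0;
    this.lastTime = performance.now();
  }

  update() {
    // record the elapsed time between two render calls
    const now = performance.now();
    const delta = now - this.lastTime;
    this.lastTime = now;

    this.frames.push(delta);
    this.elapsedTime += delta;

    if (this.elapsedTime >= this.updateDelay) {
      // average frame time over the period, converted to FPS
      const averageDelta = this.frames.reduce((a, b) => a + b, 0) / this.frames.length;
      this.onWatch(Math.round(1000 / averageDelta));
      this.restart();
    }
  }
}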
It’s very basic: I just record the elapsed time between two render calls, put that into an array, and run a callback every updateDelay milliseconds with the latest average FPS value. It is then used by the QualityManager class, which does all the heavy lifting to assign an accurate current quality score:
import type { FPSWatcherParams } from "./FPSWatcher";
import FPSWatcher from "./FPSWatcher";
export interface QualityManagerParams {
label?: string;
updateDelay?: FPSWatcherParams["updateDelay"];
targetFPS?: number;
onQualityChange?: (newQuality: number) => void;
}
export class QualityManager {
label: string;
fpsWatcher: FPSWatcher;
targetFPS: number;
#lastFPS: number | null;
#active: boolean;
onQualityChange: (newQuality: number) => void;
quality: {
current: number;
min: number;
max: number;
};
constructor({
label = "Quality manager",
updateDelay = 1000,
targetFPS = 60,
onQualityChange = (newQuality) => {},
}: QualityManagerParams = {}) {
this.label = label;
this.onQualityChange = onQualityChange;
this.quality = {
min: 0,
max: 10,
current: 7,
};
this.#active = true;
this.targetFPS = targetFPS;
this.#lastFPS = null;
this.fpsWatcher = new FPSWatcher({
updateDelay,
onWatch: (averageFPS) => this.onFPSWatcherUpdate(averageFPS),
});
}
get active() {
return this.#active;
}
set active(value: boolean) {
if (!this.active && value) {
this.fpsWatcher.restart();
}
this.#active = value;
}
onFPSWatcherUpdate(averageFPS = 0) {
const lastFpsRatio = this.#lastFPS
? Math.round(averageFPS / this.#lastFPS)
: 1;
const fpsRatio = (averageFPS + lastFpsRatio) / this.targetFPS;
// if fps ratio is over 0.95, we should increase
// else we decrease
const boostedFpsRatio = fpsRatio / 0.95;
// smooth change multiplier avoid huge changes in quality
// except if we've seen a big change from last FPS values
const smoothChangeMultiplier = 0.5 * lastFpsRatio;
// quality difference that should be applied (number with 2 decimals)
const qualityDiff =
Math.round((boostedFpsRatio - 1) * 100) * 0.1 * smoothChangeMultiplier;
if (Math.abs(qualityDiff) > 0.25) {
const newQuality = Math.min(
Math.max(
this.quality.current + Math.round(qualityDiff),
this.quality.min
),
this.quality.max
);
this.setCurrentQuality(newQuality);
}
this.#lastFPS = averageFPS;
}
setCurrentQuality(newQuality: number) {
this.quality.current = newQuality;
this.onQualityChange(this.quality.current);
}
update() {
if (this.active) {
this.fpsWatcher.update();
}
}
}
The most difficult part here is to smoothly handle the quality changes to avoid huge drops or gains in quality. You also don’t want to fall into a loop where, for example:
The average FPS is poor, so you degrade your current quality.
You detect a quality loss and therefore decide to switch off an important feature, such as shadow mapping.
Removing the shadow mapping gives you an FPS boost, and after the expected delay the current quality is upgraded.
You detect a quality gain, decide to re-enable shadow mapping and soon enough, you’re back to step 1.
Typically, the quality rating is used to update things such as the current pixel ratio of the scene, frame buffer resolutions, the number of shadow map PCF samples, volumetric raymarching steps and so on. In worst-case scenarios, it can even disable shadow mapping or post-processing effects.
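As an illustration, such a mapping can be a small pure function (the thresholds and values below are made up, not the site's actual numbers):
interface RenderSettings {
  pixelRatio: number;
  shadowMapPCFSamples: number;
  volumetricSteps: number;
  shadowsEnabled: boolean;
  postProcessingEnabled: boolean;
}

function settingsForQuality(quality: number): RenderSettings {
  const t = quality / 10; // normalize the 0 to 10 score to [0, 1]
  return {
    // scale pixel ratio between 1 and the device's own (capped at 2)
    pixelRatio: 1 + (Math.min(window.devicePixelRatio, 2) - 1) * t,
    shadowMapPCFSamples: quality >= 7 ? 3 : 1,
    volumetricSteps: Math.round(24 + 40 * t),
    shadowsEnabled: quality >= 3, // worst case: disable shadow mapping...
    postProcessingEnabled: quality >= 2, // ...and post-processing entirely
  };
}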
Accessibility
Finally, the site had to respect at least the basic accessibility standards. I’m not an accessibility expert and I may have made a few mistakes here and there, but the key points are that the HTML is semantically correct, it is possible to navigate using the keyboard, and the prefers-reduced-motion preference is respected. I achieved that by disabling the gamification concept entirely for these users, removing every CSS and JavaScript animation, and making the scenes fall back to their 2D canvas versions, without being animated at all.
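The detection itself is tiny. Here's a hedged sketch of how those flags can be derived; the actual plugin exposes them as $isReducedMotion and $hasWebGPU, as seen in the pseudo code earlier:
// prefers-reduced-motion media query, evaluated once at startup
const isReducedMotion = window.matchMedia("(prefers-reduced-motion: reduce)").matches;

// WebGPU support check: navigator.gpu only exists in supporting browsers
const hasWebGPU = "gpu" in navigator;

// derived flags (illustrative, not the actual plugin code)
const config = {
  gamification: !isReducedMotion,
  animations: !isReducedMotion,
  webGPUScenes: hasWebGPU && !isReducedMotion,
};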
Conclusion
Well, it was a long journey, wasn’t it?
Working on my portfolio these past 6 months has been a truly demanding task, technically but also emotionally. I still have a lot of self-doubt about the overall design, key UX choices or level of creativity. I also think that it honestly kind of sums up who I am, both as a developer and as a person. In the end, it’s probably what matters most.
I hope that you’ve learnt a few things reading this case study, whether it be about technical stuff or my own creative process. Thank you all, and remember: stay fun!
Doing a tech talk is easy. Doing a good talk is harder. Here are some tips to improve the delivery of your tech talks.
I love to deliver tech talks: they help me improve both my technical and communication skills.
Hey! If you’re just getting started with tech talks, don’t miss my article Thoughts after my very first public speech, where I explained what I did right and what I did wrong at my very first tech talk. Learn from my errors, and avoid them! 💪
On one hand, teaching requires technical preparation: you need to know what you’re talking about, and you need to know it pretty well. Even more, you need to know some advanced stuff to give the audience something they will remember – if everything is obvious, what will they remember from your talk?
On the other hand, tech talks require good communication skills: your job is to deliver a message to your audience, and you can do it only if your intent is clear and you avoid talking about useless (or misleading) stuff.
But, in the end, only having good content is not enough: you need to shape the talk in a way that stimulates the attention of the public and does not bore them.
Note: I still have a lot of room for improvement myself, so these are points I keep working on for my own talks!
1- Tell your audience what topics your talk covers
Why should someone attend your talk?
This is a simple question, but it must be clear to you way before submitting your talk to CFPs. Usually, the best reason to attend is the content of the talk (unless you attend a conference only for the free pizza and swag!).
You should always make the topic of your talk explicit.
Where, and when?
In the title: the title should express what you’re going to say. «Azure DevOps: an intro to build and release pipelines» is better than «Let’s work with Azure DevOps!». Yes, it’s less fancy, but it makes clear the scope (build and release pipelines), the tool (Azure DevOps), and the difficulty of your talk (it’s an intro, not a talk that targets experts).
In the description of your talk: when submitting to CFPs, when sharing it on social media, and everywhere else you can add some text to describe your talk, you should add some more details. For instance, «In this session, we’re gonna see how to build and release .NET Core projects with Azure DevOps pipelines, how to use PR builds, how to manage variable substitution with Variable Groups…». This will help the reader decide whether or not to attend your session.
At the beginning of your talk: this is for people who forgot to read the session description. Repeat the points you’re gonna cover at the beginning of your talk, right after the title and the slide about who you are. This way, attendees can leave if they find out that the topic is not what they were expecting from the title. They don’t lose time on anything that isn’t interesting for them, and you don’t lose your focus watching their bored faces.
2- Divide the talks into smaller blocks
Think of your own experience: are you able to keep the focus on a 1-hour long talk? Or do you get distracted after 10 minutes, start wandering with the mind, and so on?
Well, that’s normal. Generally, people have a short attention span. This means that you cannot talk for 60 minutes about the same topic: your audience will get bored soon.
So, you should split your talk into several smaller blocks. A good idea is to separate the sub-topics into 5 or 10 minute slots, to help people understand the precise topic of a block and, in case, pay less attention to that specific block (maybe because it’s a topic they already know, so not focusing 100% is fine).
3- Wake up the audience with simple questions
Sometimes the easiest way to regain the attention of the attendees is to ask them some simple questions: «Can you see my screen?», «Have any of you already used this tool?».
It’s easy to reply to these questions, even without thinking too much about the answer.
These kinds of questions will wake up the audience and help them focus on what you’re saying for a bit longer.
Needless to say, avoid asking those questions too many times, and don’t always repeat the same question.
4- Choose the right slide layout
Many monitors and screens are now in 16:9. So remember to adapt the slide layout to that format.
In the image below, we can see how the slide layout impacts the overall look: slides with a 4:3 layout are too small for current devices, and they just look… ugly!
Slides in 16:9 feel more natural for many screen layouts.
It’s a simple trick to remember, but it may have a great impact on your delivery.
5- Don’t move your hands and body if it’s not necessary
Moving your body too much drives the attention away from the content of your talk. Avoid fidgeting and moving your hands and head too much.
Remember that every movement of your body should have a meaning. Use your movements to drive attention to a specific topic, or to imitate and explain some details.
For instance, use your hands to simulate how some modules communicate with each other.
6- Mind how the audience sees your screen
When preparing your presentation, you are used to thinking of how you see the screen: you have your monitor size and resolution, and you can adjust your content based on that info.
But you don’t know how the audience will see your screen.
If you are doing an in-person talk, pay attention to the screens the audience sees: is the resolution fine? Do you have to increase the font size? Is it fine both for folks on the front and the last seats?
On the other hand, when doing an online talk, you don’t know which device your audience will use: PC, tablet, smart TV, smartphone?
This means that you can’t rely on the mouse cursor to point at a specific part of your monitor (e.g. some text, a button, a menu item), as your audience may not see it.
A good idea is to use a tool like ZoomIt: it allows you to zoom into a part of your screen and draw lines on a virtual layer.
So, instead of saying «now click this button – hey, can you see my cursor?», use ZoomIt to zoom in on that button or, even better, to draw a rectangle or an arrow to highlight it.
7- Pin your presentation folder in File Explorer
As we’ve already discussed in my article 10 underestimated tasks to do before your next virtual presentation, you should hide all the desktop icons – they tend to distract the audience. This also implies that even the folder you use to store the presentation assets has to be hidden.
But now… Damn, you’ve just closed the folder with all the conference assets! Now you have to find it again and navigate through your personal folders.
If you use Windows, luckily you can simply right-click on your folder, click Pin to Quick access
and have it displayed in the sidebar of any folder you open.
In this way, you can easily reach any folder with just one click.
So your “main” folder will not be visible on your desktop, but you can still open it via the Quick Access panel.
8- Stress when a topic is important
You have created the presentation. You know why you built it and what the important parts are. But does your audience know what is important to remember?
If you are talking for one hour, you are giving the public a lot of information. Some of it is trivia, some are niche details, and some are the key points of a topic.
So, make it clear what is important to remember and what is just a “good-to-know”.
For instance, when talking about clean code, stress why following a certain rule can be a game-changer. «Use consistent names when classes have similar meanings» and «Choose whether to use tabs or spaces, and use them for all your files» are both valid tips, but the first one carries a different weight compared to the latter.
Again, spend more time on the important stuff, and explicitly tell the audience that that part is important (and why).
9- Use the slide space in the best way possible
Let’s talk about the size of the slides’ font: should you keep it consistent, or adapt it to the text and space in each slide?
I thought that keeping it consistent was a good idea – somehow it hurts my brain seeing different sizes in different slides.
But then I realized that there are some exceptions: for example, when a slide contains only a few words or a few points in a bullet list. In that case, you should occupy the space in a better way, to avoid all the emptiness around your text.
Here we have 2 slides with the same font:
The first one is fine, the second one is too empty.
Let’s adjust the font of the second slide:
It’s a bit better. Not excellent, but at least the audience can read it. The text is a bit bigger, but you’ll hardly notice it.
10- Turn off all the notifications
It’s simple: if you are sharing your screen, you don’t want your audience to see those weird messages you receive on Discord or the spam emails on Outlook.
So, turn off all the notifications. Of course, unless you are demonstrating how to integrate your stuff with platforms like Slack, Teams et cetera.
11- Use the slides as a reference, not as a teleprompter
Avoid full sentences in your slides. Nobody’s gonna read them – even more, if the audience is not paying attention!
So, prefer putting just a few words instead of full, long sentences: you should not read your slides as if they were a teleprompter, and you should help your audience get back on track if they lose their focus.
Two bullet points like “Keep track of your progress” and “Fix weakness” are better than a single phrase like “Remember to use some tool to keep track of the progress of your project, so that you can find the weak points and fix them”.
– of course, unless it is a quote: you should write the full text if it is important.
12- “End” is the word
«We’re nearly at the end of this session.»
A simple yet powerful statement that can wake up your audience.
When you’ve lost your focus, you get triggered by certain words, like end. You unconsciously remember that you are at that conference for a reason, and you have to focus to get the most from the last minutes of the talk.
So, try triggering the subconscious of your audience with some words like ending.
13- Recap what you’ve explained
Finally, you’re at the end of your talk.
What should the audience remember from it?
Spend some time recapping what you’ve seen, the key points of your talk, and what you’d like the others to remember.
It is a good way to help the audience focus again and think of questions to bring to your attention.
Wrapping up
In this article, I’ve summarized some of the things I’ve worked on to improve my tech talks.
There is still a lot to do, and a lot to learn. But I hope that those simple tricks will help other newbies like me to improve their talks.
This article kicks off our series “Creating Emotionally Meaningful Experiences with AI, Three.js, and Blender.” In it, Andrew invites us into his world and shares a deeply personal journey into creativity, emotion, and the joy of making. It may just shift how we see our own creative potential and the meaning behind what we make.
Introduction
Before I start, I want to give credit to Miffy by Dick Bruna, Denis Wipart, Moon, Southern Shotty, Xianyao Wei, Ning Huang, and Evelyn Hsiao. The characters belong to the Miffy Universe by Dick Bruna; the 3D characters you are seeing are a recreation of his art as a fan piece. Denis, Moon, and Southern Shotty were the main inspirations for the scenes. I also want to give a shout-out to Ning Huang, Xianyao Wei, and Evelyn Hsiao, as they helped with scene idea generation, concepts, and inspiration. For the full list of credits, and the Blender and Figma files, see the GitHub.
My opinions and writing are entirely my own; they are not, and should not be taken as, a reflection of the individuals credited in this article, and most definitely not as whole/universal truths. We each have our own systems of beliefs, and this article and future articles are reflections of mine. It doesn’t mean I’m right or wrong; that determination is up to you.
This article is part of our series Creating Emotionally Meaningful Experiences with AI, Three.js, and Blender:
Part 1: Developing Creativity & Emotional Design Skills for Beginners Learn how to overcome creative block, copy-and-tweak with confidence, and design projects that truly resonate.
Part 2: Overcoming the AI Emotional Void & Building Creative Safety Explore how to overcome the AI emotional void & the importance of psychological safety for creative work.
Part 3: Finding Meaning: Emotional Regulation & Creative Intuition Developing emotional regulation and pattern matching skills and how to give meaning to your work for beginners.
Who this series is for
If you talk to the talented and famous people of today, a lot of them will admit that when they first started doing what they do now, they thought they were “too dumb” to understand it. If you read the designer/developer spotlights here on Codrops, you’ll see a lot of very famous and talented people claim the same thing: that when they first started, they felt like a fraud, incapable of doing it. And yet, now they are known for what they’re doing, amazing at it, pushing the industry forward, and inspiring others. Here’s Mr. Doob, the legendary creator of Three.js, admitting he was convinced he wasn’t smart enough at first, along with other famous artists (including Danish Mir and crzyzhaa). They don’t say that because they’re lying and want to seem humble. They say it because it’s true. Getting older is realizing how broken people are, even if they’re famous and talented, and how we fake so many aspects of ourselves as humans. The difference between those you admire and yourself is likely just consistency, time, luck, and emotional management skills, which are things I want to discuss in this article.
A lot of people are self-aware of their problems, but not self-aware enough to know how to get themselves to fix those problems. That's why we have therapists and life coaches: to help provide guidance on how to actually change oneself. The great news is that there are ways to develop that ability effectively without a therapist or life coach. You already change and grow naturally over the years, but instead of letting it be passive, you can make it far more active. Of course you'll never be perfect, but perfection isn't the goal, growth is.
This series isn’t for the talented people out there, it’s for the people that don’t believe they are talented when they actually are. It’s for those who suffer from psychological blockers like extreme perfectionism that can lead to boredom, unfulfilled dreams, or chronic self-doubt. Talent is less about having natural abilities and more about consistency and having systems in place that make you consistent. That takes emotional work, and hopefully emotional work I can make understandable.
This series is also for those who want to make things emotionally meaningful. While what makes something "meaningful" is highly subjective, I hope to introduce broader patterns and systems that can help you develop your own unique natural intuition and insight. That way, you can emotionally connect with and help others more easily. If you're on the business side, well, with products and services today being so similar, the main differentiator/competitive advantage is no longer the capabilities of a product/service, but how you make people feel. This is especially true now with AI, which has accelerated the need for emotionally meaningful experiences. The societal trends we see today highlight this growing emotional void, e.g., the Gen Z dating crisis and the rise of public vulnerability like "20 things I wish I knew in my 20s" posts. In other words, younger generations want psychological safety that traditional structures and value systems struggle to support. Learning empathy for marketing purposes sounds insidious, but this is a highly nuanced topic that needs a separate article or, quite honestly, a book. I will cover this more in Part 2 and Part 3 and not very much in this article.
For the record, I still doubt myself a lot. There are a lot of days where I pretend to know what I'm doing but secretly learn things while doing the work. And that's normal; it's called imposter syndrome, and it'll probably never go away, but at the very least you shouldn't feel like an imposter unless you lack integrity. At some point, though, you become self-aware enough to realize (mostly) what your limitations are and aren't, and you adjust accordingly. That way you never fake too much confidence and overpromise while underdelivering. If you asked me about React best practices or optimizations, I probably couldn't answer many questions. However, give me a day and I can probably get back to you with an answer and how it would change my future and/or existing projects. And honestly, that's what you do on the job all the time.
In other words, it’s not about who you are or what you know at the current moment, it’s about having the systems in place (whether conscious or not) that allow you to feel confident in yourself to tackle a problem with certainty. When you tell someone you can help them with their project, you’re not saying you know exactly how in the moment, but what you are saying is you know you will be capable of figuring it out and can do it within the constraints (e.g., budget/deadline) agreed upon by both parties. The issue here is that many of us lack self-awareness of what our capabilities really are. I hope this series helps in growing that self-awareness.
I will note, though, as you go through this series you may feel highly uncomfortable or even guilty and that’s totally normal. I still feel guilty and uncomfortable every day discovering unflattering truths about myself or when my actions violate my own words. Of course, I hope you feel inspired, but I’m not asking you to change or grow, just keep doing what you’re doing. If you hate yourself before reading this article/series, you won’t magically stop hating yourself after reading this article despite that being my intention. Sometimes the pain of staying the same has to feel worse than the pain of changing before your brain decides to make a decision. That’s nothing to be ashamed about, I went through that. I didn’t choose growth willingly, I was forced into growing. However, it’s best not to wait for pain or use that as an excuse. My recent growth is definitely a blend of choice and being forced, but not entirely forced anymore.
Emotional processing takes time. Your logical understanding comes first, which makes it seem like you "should" be able to change, but that's not how change works. Just let it sit, and time will show you what you really value deep down. Maybe it's now, maybe it's later, or maybe it's never. It doesn't matter. No matter what happens, don't judge yourself, just seek to understand yourself better.
1. Intro to Emotional Design and Emotional Design Patterns
1.1 What is Creativity and Overcoming Creative Block
To better understand emotional design (which requires one to be creative), let's take a look at what creativity is.
I often hear people say they aren't creative, or that they're only good at copying things and have difficulty translating their thoughts into things other people can see or use. This is an extremely common feeling, even among super talented creatives. There's a term for it: creative block, identified by difficulty starting or completing projects, an inability to generate ideas, etc. There are many causes for it: maybe your external environment is dry and dull, or maybe you have extreme perfectionism from bad parenting. There are different solutions for each of these problems, but since there are so many causes of creative block, I want to try to provide a broader solution by discussing what creativity actually is.
Simply put, creativity is the process of copying others and tweaking things. In other words, we all stand on the shoulders of giants. Think of any viral website or project and it's likely a combination of things you've probably seen before, just in a slightly different way. We tend to call it "inspiration," but behind inspiration is copying. "Copying" has a negative connotation, when in reality we do it all around us every single day. You see someone's clothing you admire? You start wearing that brand, copying them too, but slightly adjusting the style. Then someone likes your style, copies you, and modifies it slightly, and the pattern goes on and on until the style becomes so different you can't tell.
Copying and tweaking exists everywhere. Slack, Discord, and Microsoft Teams are all similar. Even life's own creativity is about copying human DNA and tweaking it to create each of us as unique individuals with distinct characteristics. The list never ends. Everything is a copy of something with tweaks. When you see everything in the world as copies of each other, you learn way faster, and you can better identify the differences between those copies and how to create differences between them. In a sense, copying is just consistency, e.g., you copy similar actions and routines from one day to another. But you don't want to stay consistent if it's making you miserable (i.e., consistently harmful actions vs. consistently non-harmful actions). That's where tweaking, AKA growth, comes in.
I highly recommend watching this video on creativity. Even though I developed my thoughts independently prior to watching this video, someone else already said very similar things to what I’m saying. I’m not even being novel when I thought I was. My novelty is not in my idea, but the way I say/execute that same idea, that’s my “tweaking” part. I essentially “copied” them, even when I didn’t know they existed until after I wrote this article and came back to add this paragraph. I don’t agree with some of their framing, but the underlying idea/concept is the same as what I present in this section. The book Steal Like An Artist also shares similar views as I do, but it is framed differently as well.
In other words, creativity is just about looking at what others are doing and seeing how you can do it a bit differently. The distinction that different domains are inherently "creative" or "not creative" is outdated. The reason we have this distinction is that we like to simplify things, we need to communicate, and there are economic factors. From a societal perspective, being a lawyer isn't considered creative, but it is incredibly creative in the sense that you have to know when to present evidence, when to hold back, and how to phrase things to emotionally appeal to a jury. That's all creativity too. Perhaps it's not "artistic" creativity, but it's definitely emotional and linguistic creativity. And the cool thing is that you can use those lawyer timing tactics in video game design patterns too!
For the past thousand years, we as humans have worked to simplify things. That's why we have biases and stereotypes, and why we like to be lazy. Our natural default as humans is to simplify (less cognitive load = less resource consumption for our bodies = better survival chances), and that's why we ended up with "domains" and fields of study. Today we're realizing that many fields are more interconnected than we thought. Modern breakthroughs happen when different domains are cross-pollinated. That's how "the Godfather of AI" created the ideas that LLMs are based on: he applied the workings of the human brain to technology, and in an interview he advised people to "be a contrarian" because people thought what he was doing was dumb. But he wasn't just a contrarian, he was also a conformist. The contrarian in him was the "tweaking" part, but he still built his knowledge on the math and science of other researchers, the "copying" part.
Incredible breakthroughs or paradigm shifts come from people who have spent a significant amount of time tweaking. The term "tweaking" isn't meant to be reductive/dismissive, as you can have very sophisticated and non-obvious tweaking patterns; the idea is that even huge changes start from copying. Copying and tweaking isn't meant to be easy or some mechanical process, it's just an easy way of thinking about creativity. I'm not alone on this: Steve Jobs's email to himself expresses a similar sentiment, as do many other people we typically see as geniuses or industry leaders. Realizing this doesn't automatically make you a "good," humble person, but it does mean you are self-aware enough to realize that you are the sum of the world and people around you (the good and the bad). The more you uncover the sources that make you who you are, the more you learn about yourself (AKA increasing your self-awareness). Whether you're cynical and dislike most people or not, in some odd way it brings a certain peace and optimism to accept that you are the sum of other people.
Tweaking is just like exercise. We know we should exercise, and some do it consistently, some try then give up, and others do it on and off. And we all have different reasons for exercising: some do it to attract a romantic partner, while others do it to lose weight, feel better, or manage health issues, or some combination of reasons. It's the same with creativity and pushing through that "ugly phase" of work: some do it because they're driven by money, others because they want to use their creativity to connect with others.
Copying is the foundation of creativity; it's not something to feel bad about. Whether you believe it or not, copying things means you're on the right track to becoming more "creative" in the way people typically interpret that word. If you always copy and never tweak, then you're like someone who always talks about wanting to exercise but never does it. Just like how you can't learn the piano by only watching someone play, you have to actually practice with a piano. Likewise, with creativity, you actually have to practice being creative by tweaking.
Literally the only thing you have to do is credit others. No one thinks you're less creative when you credit your inspirations, AKA the people/things you copied and tweaked from. If they do, they're the kind of people who will weaponize transparency, and honestly anything, against you, and those people always exist. It's really easy to feel that people will judge you, but that's just a defense mechanism for when you aren't confident in yourself and don't love yourself enough (of course, that's not the only reason people hide credits, but it is a very common one). It took me years to overcome this, as I used to hide my inspirations to make people think I'm more creative than I am. However, as I developed more self-confidence, I felt no need to hide the people I took inspiration from. If you still hide credits like I used to, don't feel embarrassed about it. Self-love is a journey in itself, and having low self-esteem may not even have been your fault (like if you had abusive parents or were bullied).
So how do you overcome creative block? Simply put, you find fun in the process. Treat your failures like exercises and explorations of curiosity rather than reflections on your own capabilities and self-worth. Instead of thinking, “Wow, I spent 3 hours on that, what a waste of time, I must not be good enough” think, “Wow I’m really proud of myself I spent 3 hours trying something new and failing. I discovered how not to do something which will help me later because I know I have to fail 1000 times before I succeed.” Don’t feed your self-doubt. Treat learning to be creative like any other skill, whether that be learning how to code, piano, or some sport etc. That’s pretty generic advice you’ve probably heard before, but hopefully this section provides the context on where that advice comes from.
There’s also this argument that some are born more creative than others. Sure, but it doesn’t mean you can’t work on it if you want to. And quite honestly we’re all born differently haha, some people are born with taller genetics while others shorter genetics etc. As someone who used to suck at art, I can safely say if you have the right methods, you can develop creative/artistic abilities even if it’s less explicit than other fields.
It is important to note that just because you spend a lot of time tweaking things doesn't mean you'll get better outcomes. You can be "more creative" but create things that no one really likes. Like a child who draws a crazy creature: it's very original, but not many enjoy it beyond the people around them (although even that can be resonant). Just like exercise, you can do a lot of it, but if you do it wrong, you won't get optimal health outcomes. However, this is not the point of this section. This section is not a guide on how to practice creativity effectively; rather, it's about how to overcome creative block. The next sections address how to create "well-received" projects and practice creativity more "effectively."
1.2 A note on well-received emotionally resonant projects
Take a look at the following room portfolios in the screenshot below. I included one from the legendary Bruno Simon, one from Henry Heffernan, a very talented developer, and one I made myself (yes, including myself is self-aggrandizing in a way, but my main intention is to show I practice what I preach). They all performed well in terms of publicity in the creative space (some more than others, obviously, but that's not the point). The question is then: why did these stand out among many other room portfolios? I mean, at the end of the day, look at them, they're just rooms? Where's the originality? Snorefest central, thank you next (I'm joking, I love these websites haha).
From left to right: Bruno Simon's, Henry Heffernan's, and Soo Ah's room portfolios
Take a look at another set of websites that were emotionally resonant. Where's the originality here, either? They're all looping worlds with some sort of person on a two-wheeled vehicle.
If you look at all six of these websites and break them down to their basics, it does seem like they all just copied each other. That must mean all I have to do is make my own room portfolio or looping world with a two-wheeled vehicle and people will like it!
Well, no, that’s not how it works. There’s a lot of factors that go into each of these that make them stand out even though, at a base level, you can view them as “unoriginal.” You can obviously tell all of them spend a lot of time tweaking, but they tweaked appropriately to impact the third factor, creating emotional resonance. It’s like music, most modern well-performing songs are based on very popular chord progressions (basically sets of notes in specific orders that people have determined sound pretty good). Just because you select a common chord progression a lot of famous songs use doesn’t mean you’ll make something that takes off in the music industry. Similarly, if you’re writing a book you can use the same “hero’s journey” plotline many famous books use, but have your book not perform well.
You might argue that other factors like luck, timing, and people's reputation/status greatly contributed to their "success," and you'd be 100% right, but those are factors that are largely out of your control, or simply by-products of doing the right tweaking with the right emotional resonance in the first place. So let's focus on the parts you can control more directly.
Emotional resonance consists of many components, so let's break down where the emotional resonance comes from in each project. At the end of the day, it's rooted in human psychology, but instead of getting into academic terms, let's focus on the high-level concepts. I can't cover every component, but I'll try to cover the easier ones to understand.
Bruno Simon’s Room
He’s famous and is a trendsetter. People naturally look up to and admire talented people and give more weight to their creations.
He created a room of high semi-realistic fidelity with cool functionalities. This appeals to developers and inspires them to pick up 3D art and 3D artists to pick up code. Not many people knew you could do something to this level of fidelity in a browser before. The novelty shock created resonance.
He made something personal to him: his own room. It feels like an intimate, vulnerable look into his own life and what he values. There are a ton of objects in there that represent who he is; the dog bed suggests he is a dog owner with a caring side for animals, and the streamer setup suggests he's modern and makes videos. This creates a connection with him as a person and makes you want to share something personal of your own through a room portfolio.
Henry Heffernan’s Room
Like Bruno’s, this is super personal. It clearly shows he has a passion for old-school computers, and the nostalgic games on the computer definitely had an emotional impact on a lot of people.
It’s very interactive and realistic, and subtle details like screen flickering and fingerprints show attention to detail. When we see a screen flicker in real life or fingerprints on our screen, we get upset and frustrated, but when it’s done purposefully in his portfolio, it becomes funny, thoughtful, and perhaps a little endearing. Shared frustration through a new medium creates emotional resonance. The details also suggest he’d carry that same care over into other aspects of his creations and technical skill, which is also inherently attractive.
Soo Ah’s Room
Cute things, basically. Not much else to say here. There are also popular references to League of Legends, BTS, motivational quotes, and Kirby, appealing to a wide demographic of gamers, K-pop fans, and those into self-help motivational quotes.
The colors are also lavender and pink, which are associated with soothing creams/smells/moods, and the rounded corners of the 3D models contribute to the softness. There’s also a dominant wood theme, which gives it that homey, “one-with-nature” vibe.
Joshua’s World
Simple and easy to understand. People immediately recognize the story of a biker traveling long distances, and he used that concept as a way to share his own journey. All models have a consistent color palette and subtle animations for detail.
The low-poly style is inherently cute.
Sébastien Lempens’ Portfolio
Uses the Eiffel Tower, a reference to Paris, a very romantic city. The idea of exploring a city like Paris inherently touches people’s emotions.
The scene takes place during sunset/sunrise, another really emotionally moving time for a lot of people. There’s even a term for this: the “golden hour.”
The wind turbines signify space and clean energy (or cleanliness in general). We all love clean cities we can explore, and cleanliness signals safety.
The Ferris wheel evokes playfulness.
Molazone
While this isn’t as widely known (at least to my knowledge), it did win FWA of the month, so it did quite well among judges, though perhaps it’s not so well known outside of the creative circle. A large part of that is probably because it was created by a professional studio and team, which inherently puts it at a disadvantage in terms of emotional resonance. If it had been made by a single person or a small team of individuals rather than an official studio, it definitely would have performed better on social media. However, I think it’s one of those sites that someone will repost and then get a lot of attention.
In any case, this resonated with the judges not only for its design consistency and technical complexity but also because, like the others, it has a huge adventure component: the colosseum, water bridge, dark forest, etc. are all very common tropes associated with adventure, danger, and mystery, all intended to evoke emotional responses.
There is also something they all share in common: they’re 3D websites. Since most websites are 2D, seeing 3D in a website inherently has a “wow” factor to it. A more technical demographic, like gamers, might not be as impressed, but once they learn it’s a website, it typically has a larger impact than if you told them it was a standalone downloadable game. Of course, people who work in this field have probably become desensitized to the “wow” factor.
Now you might be reading my analysis and think it applies to your website too, e.g., “I’ve got cute things too, like low-poly models, why am I not getting attention?” Well, the difference there is probably the fact that you don’t know how to convey the emotions effectively. It’s a pretty common pattern known as the Dunning-Kruger effect: you don’t know what you don’t know, so you think yours is as good as theirs. This was me for a long time, and it’s still me today in the areas I’m not good at, so I always specifically look for that gap in my knowledge. The way to find that gap is to observe more work and try to identify more patterns (more details in Section 1.3).
You might also think this is a post-hoc analysis and I’m making things up. That it’s easy to look at well-received projects in retrospect. That may be partially true, but this is also how we analyze finished projects to see what we should keep doing or not, and it informs future design choices. We do this all the time, which is why many creatives are so consistent: we do a post-hoc analysis to predict outcomes for future works. In the next section, I do an analysis before anyone has seen the website, and I’m confident in that analysis and its outcome.
Try to discover what you don’t know. Maybe your design isn’t consistent, or you didn’t know you should round the corners of your 3D models to make them even cuter. Or you rounded the corners too much and it looks chaotic rather than cute. When we learn design/art principles/fundamentals, we’re just learning applied psychology: describing patterns that others have already discovered about our species. So you can learn the fundamentals, but also develop your own awareness of patterns across well-received projects. In a way, you create your own sense, intuition, and set of fundamentals to work from, which is honestly pretty cool. I talk about this in the next section.
1.3 How to make things emotionally resonate by focusing on universal human emotions/experiences & develop your creative intuition with exercises (AKA starting a bit reductive and mechanical in order to eventually be intuitive and unique)
While we’re all unique in our own ways, we share some fundamental universal experiences and emotions. Those core feelings are exactly what you want to identify and invoke when working on your projects. We have so many terms for this like “UI design,” “3D artist,” “UX researcher,” but really at the end of the day it’s about your emotions and how you make people feel about your stuff. You’re not a “designer” or a “developer,” you’re a person who creates things that invoke emotions in yourself and others. Even a UX researcher just focuses on how to make a user less frustrated. Frustration is the core emotion of why they do the work they do. Again, as a society we just like to simplify and label things. These job titles artificially constrain you to what you think you are capable of (even if subconsciously). They constrain you to a set of practices within that established “domain” and don’t encourage out-of-domain thinking and cross pollination. Think outside of your domain and more universally.
I break down the emotional design components of the demo scenes for this article below, to hopefully show that focusing on core fundamental concepts leads to a lot of shared patterns. I am not focusing on the obvious idea that they’re personalized for Codrops (which itself causes an emotional impact), just on broadly applicable design patterns.
There are three key factors I want to point out (there are many others I could include, but for the sake of simplicity, let’s focus on these). These factors contribute to how emotionally impactful something you create will be; for example, if it feels complete, full, original, and filled with emotional artifacts, people will recognize the effort and the thoughtfulness.
Completion – how much of the scene is done. Of course, this metric is very subjective. For example, if you’re doing something standalone rather than a full scene, the emotional context is different. Sometimes “less is more.” But let’s keep it simple for now.
Time spent tweaking – how “original” you could probably say it was relative to my amount of copying from references/inspiration.
Universal Emotions – the (broader/high-level) emotions that each scene is intended to invoke.
Notice how all of these scenes are different styles, but the key factors are pretty similar. This suggests that the overall emotional impact of all of these scenes is pretty similar, even if that impact appeals to different audience demographics due to stylistic and theming preferences. In other words, some people might like one style over another (toonish 2.5D vs. stylized 3D vs. realistic paper) or one general idea better (if you like food, you’d prefer the sushi shop; if you prefer adventure, you’d like the paper world), but no matter which demographic it appeals to, the appeal is likely equal.
That impact is as follows: enough to be inspirational, but not as emotionally impactful as more polished projects can be. So let’s go one by one through these factors for each of these scenes to see why that’s their resulting impact, and uncover some emotional design patterns along the way.
For the Paper Pirate One
Completion (5/10) – This scene is far from complete. It’s missing quite a few details, like moving clouds, a rotating sun, flying birds, etc. It could also use better animations on scroll rather than a slight move left to right. For example, when I enter the scene, all the puppet sticks could animate in from the top and bottom respectively, or Miffy’s arm could be slightly moving; and maybe on click, Miffy could jump and take a swing at the Codrops Sea Monster, which would bounce back a little before returning to fight, making the scene much more engaging. Similarly, I could click on Panda Boris and he might tag-team jump with Miffy, etc.
Time spent tweaking (5/10) – For this idea, I’ve seen a lot of online puppet shows (albeit with wood) that have very similar concepts, which is how I came up with the idea. It’s pretty non-original, except instead of using wood I decided to use paper. Even though my original inspiration was wood puppet shows, I later discovered paper puppet shows already exist too. Now everyone who knows about paper puppet shows thinks I’m unoriginal and probably copied those, even when I didn’t (and that’s kind of beautiful at the same time). The characters are just adjusted copies of graphics online with different outfits (original Miffy art and Boris art here). Both pirate outfits are super common online too. So technically I didn’t do anything special whatsoever. The most “unique” thing in this scene was the Codrops Sea Monster, and that was inspired by Sid from Ice Age.
Universal Emotions (Cute, Playful, Adventure) –
Cute – The characters themselves, overly rounded cutouts, and exaggerated proportions (like the Codrops Sea Monster’s eyes or the ship’s sail) = soft and round like plushies or bears, but conveyed through paper.
Playful – These are children’s characters dressing up as pirates, which is inherently playful. There’s also the element of using notebook paper: taking something mundane and turning it into a source of joy feels playful.
Adventure – Adventure isn’t technically a “universal emotion,” but it’s an adjective that contains an experience or set of universal human emotions, like fear and excitement. So you could break it down further into those core emotions if you want to, but for simplicity, let’s use this adjective and call it a universal experience. There are many elements of adventure here: traveling on the open sea, fighting a fantasy monster, and being a pirate, which implies more adventures and treasure. It’s also a “safe danger” medium in the sense that you’re never actually going to fight a sea monster like this, because it’s digital. It’s the same as being around an insect expert who knows which insects are dangerous or not. If you’re usually afraid of the woods because of insects, having that expert near you will make you feel safer, want to go into the woods, and feel that adventure and the thrill of “safe danger.”
For the Sushi & Ramen Restaurant One
Completion (5/10) – Again, this scene is far from complete. There’s not much detail in the restaurant, there’s not much storytelling around the shop, and Pusheen seems to be just placed there with no real context other than a minimal contribution to the cuteness factor. Quite honestly, if Pusheen were by herself, it might have more of an emotional impact than the entire scene has collectively, because it would be in a different frame of context compared to a whole scene. Also, the colors aren’t quite done and feel a bit rushed, like the plant pot, and the informational sign up front just uses white bars rather than something cooler, like characters/shapes of food presented in a more playful way.
Time spent tweaking (5/10) – If you look at the original scene I was inspired by, you can see the idea is pretty much the same: cute animal, slight hand-painted shine, and colorful mixing. The only thing I did differently was adapt the characters to a new style and add a bunch of food.
Universal Emotions (Cute, Playful, Craving) –
Cute – Again, very rounded characters, bright colors.
Playful – It’s inherently playful having characters like a rabbit or panda cooking. Especially since they’re kids and the restaurant is in the shape of the Codrops logo, it feels more like engaging in a dress-up pretend activity rather than running a real restaurant.
Craving – Okay, need I say more? FOOD!!!! 😋😋😋😋😋😋 That moment when you wanna eat something you love and find delicious: pure euphoria.
For the Snow Scene One
Completion (5/10) – Yes, this scene is far from complete as well; a missed opportunity to add more tree decorations and have them swaying/bouncing to the music. I could’ve added some cute presents, snow bunnies roaming around, etc.
Time spent tweaking (5/10) – Look at the original scene I was inspired by. I didn’t really do anything new and the outfits were copies from official Miffy videos.
Universal Emotions (Cute, Playful, Mischievous) –
Cute – Like the other two, very round, using stylized textures rather than realistic ones for the wood, stone, tree, etc.
Playful – It’s a snowball fight with a snowman and outdoor winter activities.
Mischievous – There’s a “no codrops” sign there, for whatever reason, I don’t know, but just the fact that it’s there adds to the rebellious vibe. Most importantly, though, you can see Miffy throwing snowballs at the house, which is mischievous.
Of course, there are a lot of other factors, such as who created the scene, the purpose of the scene, how familiar people are with 3D art/websites, when the website was posted, where it was posted, etc. that all contribute to the emotional impact. However, this section is just focused on getting someone to think about emotional design (i.e., the core of why we do what we do) rather than just metrics (e.g., how many views/likes you’ll get). The moment you start creating things while focusing on the emotion behind the scenes is how you become better at observing and identifying your knowledge gaps. Notice how all three scenes are in three different styles yet all feel cute? Once you identify and internalize what evokes the cuteness emotional feeling, it will apply to whatever you decide to execute with that emotion.
Take a look at the art below. The primary emotion/experience is cute. Think about why, notice the gradients, the color mixing and color palette, the faces on the characters, the use of miniatures to signify smallness, all the round stuff, and the handpainted highlights etc. You’ll see these patterns repeated across cute works in a variety of different styles. It’s all very similar principles just in a different style. You can also see how so many different artists land on very similar styles independently or get inspired from each other ALL the time. It’s just like how two people independently discovered calculus or how programmers copy and tweak open-source code repos. We all share universal human emotions and experiences, they are bound to be repeated. All you have to do is identify those emotional design patterns. It’s not much different from relating to someone who got a paper cut when you get a paper cut. That is a universal emotional pattern just like these art pieces of cuteness.
Guess where else these emotional design patterns exist? You’re absolutely correct! UI designs, typography, real life, etc.! Take a look at the following, all very roundish again. You’ve intuitively known it your entire life, but hopefully putting it side by side shows how similar everything really is. It’s not magic, it’s observation.
Random images of cute things I found online. They seem disconnected from each other, but they’re all so consistently round and puffy (which are the exact factors that contribute to the universal emotional design pattern)!
You don’t have to be a 3D artist, UX researcher, UI designer, a game developer, a chef, or an animal expert to recognize all these things make you feel cuteness. That universal emotion pattern is consistent across all these “fields”/”domains.”
If this is so “obvious” then why am I not super rich and famous like these other artists? Likewise, why aren’t the creativity and cognitive science researchers rich and famous if they “know” what creativity is? Why not just create a complete project, go viral, and make a lot of money? Well, because we’re humans and we each have our own limitations and personal goals.
For me specifically, I’m a teacher. I can guide someone in a direction, but it doesn’t necessarily mean I’m the “best” at that direction. It’s like saying all high-school math teachers should stop teaching math because they’re not innovative in their math field. I’m just a teacher, not a standout top-tier practitioner. My main source of emotional satisfaction is from inspiring beginner creatives with concept websites rather than spending the extra days/weeks polishing a project for maximum impact among more experienced individuals.
Of course, the opposite is true as well. Just because someone is a standout top-tier practitioner does not mean they feel fulfilled from teaching and can teach well. Someone can be highly creative without being self-aware enough to break down how they developed their creativity in easily digestible pieces of information.
1.4 Applying what we learned with creativity exercises
Talk is cheap, so let’s put the things discussed in this article into practice. If you want, open up a design tool like Figma to follow along. You don’t need to be creative or artistic at all to follow along, which is the whole point of this article.
Emotional design principles + copying and tweaking = “new” art
Take a look at the image directly above; let me walk you through the steps so you can try them on your own. We’re basically copying the two existing artworks and combining them to make something “new” with emotional design tweaks.
Step 1 – Make a rectangle.
Step 2 – Round the corners to suggest softness [cute emotional design pattern applied]
Step 3 – Make the stroke width thicker to suggest thickness and plumpness [cute emotional design pattern applied]
Step 4 – Round and bloat the whole shape to signify plumpness [cute emotional design pattern applied]
Step 5 – Copy the face from the pink pixel art character and add another box for the straw detail [copy real-life juice boxes and artist].
Step 6 – Thicken the outline just like the body [cute emotional design pattern applied]
Step 7 – Round the straw just like the body [cute emotional design pattern applied]
Step 8 – Add a hole for the straw to make it blend in and copy the diamond stars from Levi’s art [copy real-life juice boxes and artist]
Step 9 – Copy the diamond stars from Levi’s art [copy artist]
In probably a minute or two, you’ve just created a cute juice box character 🥳🥳🥳!!! You should be really proud of yourself and I’m proud of you 😊!! Obviously, it’s nothing that will stand out, but that’s how we all start. You might feel like you just copied those two artists, but that’s exactly what creativity is: copying and tweaking!!! You are ALWAYS the sum of the world around you, so lean into that when you make things. Now you can just copy these principles to any sort of object you want, like a cute rock character! The more you practice, the better your intuition will be and the faster you get at tweaking!
Literally a random rock character made in 10 seconds with the copy and pasted face + diamond stars
So what to do next? Well, just keep practicing. Maybe take four reference images and copy something you like from each of those references to create something “new.” Like you could even copy me and extend it, why not make a rock shaped juice box cute character? WHO’S STOPPING US FROM MAKING A ROCK SHAPED JUICE BOX MWHAHAAHAHAHAHAHAHA 😈😈😈😈. And of course, observe more. Did you take a look at the ghost’s face in the Tea Bag by JZ in the image above? Doesn’t it look very similar to the character we copied from? The mouth is just closer to the eyes and elongated! The face is also missing the blush, but we can keep that for our ghost versions!
Another exercise is practicing medium transfer. The video below is a PowerPoint presentation I made two years ago for a college class. You could recreate something similar on a website! Copy it and tweak it with HTML/CSS/JS. Make a reusable case file component. Of course, you don’t have to use what I did. Just take something from one medium that you like and put it into a different medium. In this case, it went from real-life looking case files -> PowerPoint presentation -> website interaction. Notice the job titles and mediums: FBI agent (real case files), a designer (PowerPoint), and a programmer (website implementation). Be the bridge between mediums. Or should I say, be the medium between the mediums haha.
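If you want a starting point for the website version of that exercise, here is a minimal sketch of a reusable case-file interaction in plain JavaScript. Every name and detail here is hypothetical, just one way to structure the idea, not how my PowerPoint version works:

// Minimal sketch of a reusable "case file" component in plain JS.
// All names here are hypothetical; it's just one way to structure the idea.
function createCaseFile({ title, summary }) {
  const file = document.createElement('article');
  file.className = 'case-file';

  // The folder tab acts as the toggle
  const tab = document.createElement('button');
  tab.textContent = `CASE: ${title}`;

  // The contents start hidden, like a closed folder
  const contents = document.createElement('p');
  contents.textContent = summary;
  contents.hidden = true;

  // Clicking the tab flips the folder open or closed
  tab.addEventListener('click', () => {
    contents.hidden = !contents.hidden;
  });

  file.append(tab, contents);
  return file;
}

// Usage: drop a few case files into the page
document.body.append(
  createCaseFile({ title: 'The Missing Sock', summary: 'Last seen near the dryer.' }),
  createCaseFile({ title: 'Cookie Jar Heist', summary: 'Crumbs found at the scene.' }),
);

From there, the tweaking part is all yours: style the tab like a manila folder, animate the reveal, add a paper texture, whatever carries the case-file feeling into the new medium.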
There are so many other ways you can use this same exact FBI context, for example:
Case files are secretive, so why not add an easter egg on your website that is secretive?
You could add a secret code, letters you can click on to discover and unscramble to unlock a hidden portion of your website.
Crime’s emotions/experiences are dark, intense, suspicious, and moody; think about what kind of color palette is associated with that.
The last exercise I’ll give is literally just to trial and error random stuff. Don’t even think about what you’re doing. Don’t look at the clock, just take a reference and run with it quickly. Don’t worry about if it’s a “good” reference or anything like that at all. If it ends up being fun then you can continue! Creativity is just as much trial and error as coding or anything else you’re learning. You can see below I googled a random architecture image online and used that to design a graphic using the same exact copy and pasted face from our juice box character, just with added eyebrows. The character shape is from the image, and the “beard” is just the stair railings. Everything is source material if you want it to be.
Just some random copying and tweaking. Let yourself go, remove the pressure of creating something cool. Just make something and have fun when you do it. Stop thinking how good it is or what’s the purpose. The only purpose is to have fun.
You can also seem “deep” and make up stuff in your existing works, like my secret hidden message is “LOVE.” It wasn’t intended at all, but I’ll pretend that it was intentional and I totally didn’t just discover this looking in retrospect! The point isn’t to seem deep to others, but to discover unseen patterns in your own works! This way, when you are inspired by something, your brain will naturally start looking for ways to incorporate actual deep hidden meanings in your works. Look for meanings when there are no meanings, exactly as I did here.
Randomly discovering a secret word (“LOVE”) in my scenes in retrospect for fun.
In closing, don’t just stop at the feeling that you’re feeling. Ask yourself “why” you feel the way that you do and why others feel the way they do. What is it about that thing that makes you and others feel that way? Develop that self-awareness and empathy. You’ll discover so many patterns and the exceptions too, like making cute things with sharp objects instead of rounded ones. Once you understand how to convey emotions, you can bend the rules and emotions you evoke in any way you want to. You’ll eventually tweak so much and end up developing your own style before you even realize it! People will ask you to make something and you can quickly do so even without looking at a reference! Of course, that doesn’t mean give up references, but it does mean you’ve finally developed that ability and intuition.
Analyzing emotions is not meant to take away the innocence of our spontaneous and authentic feelings. If anything, being aware of them makes us feel them more deeply and warmly. It’s like knowing how your favorite food is made: it just makes you appreciate it more.
1.5 Pattern matching outside of art
I hope this article makes it clear that pattern matching is basically just your intuition and exists elsewhere outside of analyzing art. You actually do it every single day whether you realize it or not (e.g., when you’re trying to read/guess other people’s intentions). This article just makes it more explicit by taking a look at the intuitive brain of a creative person.
Bringing your subconscious processes to your conscious brain feels unnatural, but it gets easier over time. It’s just developing self-awareness; like when you were a toddler, you weren’t really self-aware, but as you got older, you discovered more parts of yourself. Some people never work on self-awareness and peak emotionally; that’s why you have abusive parents who are 50+ years old. Working on yourself is always a choice.
You can see extreme forms of pattern matching like what we did outside of the art field. For example, this famous YouTuber can quickly identify locations based on just a sky or the ground! It seems bizarre, but if you break it down into patterns, it actually makes sense how someone is capable of doing something that amazing. Certain countries have certain trees, certain clouds, certain skies, etc. Most people don’t see these patterns. They just think sky = sky, rather than sky = gradient + colors + type of cloud + direction of gradient + certain skies + is it nighttime or daytime + weather associated with that type of cloud/sky + which countries have weather more likely to produce that kind of cloud/sky. He sees all those patterns that others don’t.
It’s also partly the fact that games like GeoGuessr aren’t updated constantly with new images of places, so when you guess on photos that aren’t pure skies, you somewhat intuitively memorize what kinds of skies are matched with what kinds of places. In other words, if you look at the same 10,000 images 100 times, then even if you don’t pay attention to the skies in those 10,000 images, your brain has already picked up subconscious patterns and intuition you can tap into later when you need them.
Pattern matching for him is like “Okay this image has type A trees which means XYZ countries, and it has type B concrete which means XZ countries, and it has type C clouds which means it likely has weather type G which means it’s in the northern hemisphere so it’s likely country Z.” That’s how it works when you make art too! More on emotional pattern matching and similar topics in the future articles!
2. Technical Implementation
This section will just briefly cover some technical details of this project. I didn’t do many new things here compared to the article I wrote here (just make sure to check the ReadMe file on GitHub for the corrections to that article as well). This project is also mainly coded with AI-generated code, so don’t look at the code for any inspiration there 😅. This section is not beginner-friendly and assumes knowledge of Blender and programming concepts.
2.1 Created a Blender Python Script using AI to select all objects and add a second UV Map named “SimpleBake”
For the SimpleBake Blender addon, there’s an option that allows you to use a pre-existing UV map named “SimpleBake.” It’s super tedious to manually select each object and create that UV map, so I asked ChatGPT to generate a script that does it automatically.
import bpy

# Loop through all mesh objects in the scene
for obj in bpy.data.objects:
    if obj.type == 'MESH':
        uv_layers = obj.data.uv_layers

        # Add "SimpleBake" UV map if it doesn't exist
        if "SimpleBake" not in uv_layers:
            new_uv = uv_layers.new(name="SimpleBake")
            print(f"Added 'SimpleBake' UV map to: {obj.name}")
        else:
            new_uv = uv_layers["SimpleBake"]
            print(f"'SimpleBake' UV map already exists in: {obj.name}")

        # Set "SimpleBake" as the active (selected) UV map, but not the render UV map
        uv_layers.active = new_uv  # SELECTED in the UI
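To try it, paste the script into the Text Editor in Blender’s Scripting workspace and press Run Script. Since it only adds the “SimpleBake” UV map where it’s missing, it’s safe to run more than once.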
2.2 Created a Blender Addon with AI to export curve as JSON or a three.js curve
Basically what the title says: I created this with Claude AI in three prompts. The first created a plugin that could export a curve as points, the second asked it to export only the control points rather than sampled points, and the third asked it to output the Three.js curve format and convert the coordinate system. With better prompting, you could probably do it in one prompt.
bl_info = {
    "name": "Curve to Three.js Points Exporter",
    "author": "Claude",
    "version": (1, 0),
    "blender": (3, 0, 0),
    "location": "File > Export > Curve to Three.js Points",
    "description": "Export curve points for Three.js CatmullRomCurve3",
    "warning": "",
    "doc_url": "",
    "category": "Import-Export",
}

import bpy
import bmesh
from bpy.props import StringProperty, IntProperty, BoolProperty
from bpy_extras.io_utils import ExportHelper
from mathutils import Vector
import json
import os


class ExportCurveToThreeJS(bpy.types.Operator, ExportHelper):
    """Export curve points for Three.js CatmullRomCurve3"""
    bl_idname = "export_curve.threejs_points"
    bl_label = "Export Curve to Three.js Points"

    filename_ext = ".json"

    filter_glob: StringProperty(
        default="*.json",
        options={'HIDDEN'},
        maxlen=255,
    )

    # Properties
    sample_count: IntProperty(
        name="Sample Count",
        description="Number of points to sample from the curve",
        default=50,
        min=3,
        max=1000,
    )

    export_format: bpy.props.EnumProperty(
        name="Export Format",
        description="Choose export format",
        items=[
            ('JSON', "JSON", "Export as JSON file"),
            ('JS', "JavaScript", "Export as JavaScript file"),
        ],
        default='JSON',
    )

    point_source: bpy.props.EnumProperty(
        name="Point Source",
        description="Choose what points to export",
        items=[
            ('CONTROL', "Control Points", "Use original curve control points"),
            ('SAMPLED', "Sampled Points", "Sample points along the curve"),
        ],
        default='CONTROL',
    )

    include_tangents: BoolProperty(
        name="Include Tangents",
        description="Export tangent vectors at each point",
        default=False,
    )

    def execute(self, context):
        return self.export_curve(context)

    def export_curve(self, context):
        # Get the active object
        obj = context.active_object

        if not obj:
            self.report({'ERROR'}, "No active object selected")
            return {'CANCELLED'}

        if obj.type != 'CURVE':
            self.report({'ERROR'}, "Selected object is not a curve")
            return {'CANCELLED'}

        # Get curve data
        curve = obj.data

        # Sample points along the curve
        points = []
        tangents = []

        if self.point_source == 'CONTROL':
            # Extract control points directly from curve
            for spline in curve.splines:
                if spline.type == 'NURBS':
                    # NURBS curve - use control points
                    for point in spline.points:
                        # Convert homogeneous coordinates to 3D
                        world_pos = obj.matrix_world @ Vector((point.co[0], point.co[1], point.co[2]))
                        # Convert Blender (Z-up) to Three.js (Y-up): X, Z, -Y
                        points.append([world_pos.x, world_pos.z, -world_pos.y])
                elif spline.type == 'BEZIER':
                    # Bezier curve - use control points
                    for point in spline.bezier_points:
                        world_pos = obj.matrix_world @ point.co
                        # Convert Blender (Z-up) to Three.js (Y-up): X, Z, -Y
                        points.append([world_pos.x, world_pos.z, -world_pos.y])
                elif spline.type == 'POLY':
                    # Poly curve - use points
                    for point in spline.points:
                        world_pos = obj.matrix_world @ Vector((point.co[0], point.co[1], point.co[2]))
                        # Convert Blender (Z-up) to Three.js (Y-up): X, Z, -Y
                        points.append([world_pos.x, world_pos.z, -world_pos.y])
        else:
            # Sample points along the evaluated curve
            depsgraph = context.evaluated_depsgraph_get()
            eval_obj = obj.evaluated_get(depsgraph)
            mesh = eval_obj.to_mesh()

            if not mesh:
                self.report({'ERROR'}, "Could not convert curve to mesh")
                return {'CANCELLED'}

            # Create bmesh from mesh
            bm = bmesh.new()
            bm.from_mesh(mesh)

            # Get vertices (points along the curve)
            if len(bm.verts) == 0:
                self.report({'ERROR'}, "Curve has no vertices")
                bm.free()
                return {'CANCELLED'}

            # Sample evenly distributed points
            for i in range(self.sample_count):
                # Calculate interpolation factor
                t = i / (self.sample_count - 1)
                vert_index = int(t * (len(bm.verts) - 1))

                # Get vertex position
                vert = bm.verts[vert_index]
                world_pos = obj.matrix_world @ vert.co
                # Convert Blender (Z-up) to Three.js (Y-up): X, Z, -Y
                points.append([world_pos.x, world_pos.z, -world_pos.y])

                if self.include_tangents:
                    world_normal = obj.matrix_world.to_3x3() @ vert.normal
                    # Convert normal to Three.js coordinate system
                    tangents.append([world_normal.x, world_normal.z, -world_normal.y])

            bm.free()

        if len(points) == 0:
            self.report({'ERROR'}, "No points found in curve")
            return {'CANCELLED'}

        # Prepare export data
        export_data = {
            "points": points,
            "count": len(points),
            "curve_name": obj.name,
            "blender_version": bpy.app.version_string,
        }

        if self.include_tangents:
            export_data["tangents"] = tangents

        # Export based on format
        if self.export_format == 'JSON':
            self.export_json(export_data)
        else:
            self.export_javascript(export_data)

        self.report({'INFO'}, f"Exported {len(points)} points from curve '{obj.name}'")
        return {'FINISHED'}

    def export_json(self, data):
        """Export as JSON file"""
        with open(self.filepath, 'w') as f:
            json.dump(data, f, indent=2)

    def export_javascript(self, data):
        """Export as JavaScript file with Three.js code"""
        # Change file extension to .js
        filepath = os.path.splitext(self.filepath)[0] + '.js'

        with open(filepath, 'w') as f:
            f.write("// Three.js CatmullRomCurve3 from Blender\n")
            f.write("// Generated by Blender Curve to Three.js Points Exporter\n")
            f.write("// Coordinates converted from Blender (Z-up) to Three.js (Y-up)\n\n")
            f.write("import * as THREE from 'three';\n\n")

            # Write points array
            f.write("const curvePoints = [\n")
            for point in data["points"]:
                f.write(f"    new THREE.Vector3({point[0]:.6f}, {point[1]:.6f}, {point[2]:.6f}),\n")
            f.write("];\n\n")

            # Write curve creation code
            f.write("// Create the CatmullRomCurve3\n")
            f.write("const curve = new THREE.CatmullRomCurve3(curvePoints);\n")
            f.write("curve.closed = false; // Set to true if your curve should be closed\n\n")

            # Write usage example
            f.write("// Usage example:\n")
            f.write("// const points = curve.getPoints(100); // Get 100 points along the curve\n")
            f.write("// const geometry = new THREE.BufferGeometry().setFromPoints(points);\n")
            f.write("// const material = new THREE.LineBasicMaterial({ color: 0xff0000 });\n")
            f.write("// const line = new THREE.Line(geometry, material);\n")
            f.write("// scene.add(line);\n\n")

            f.write("export { curve, curvePoints };\n")


def menu_func_export(self, context):
    self.layout.operator(ExportCurveToThreeJS.bl_idname, text="Curve to Three.js Points")


def register():
    bpy.utils.register_class(ExportCurveToThreeJS)
    bpy.types.TOPBAR_MT_file_export.append(menu_func_export)


def unregister():
    bpy.utils.unregister_class(ExportCurveToThreeJS)
    bpy.types.TOPBAR_MT_file_export.remove(menu_func_export)


if __name__ == "__main__":
    register()
How the addon looks and works! Select the curve object, go to File > Export > Curve to Three.js Points, choose Control Points, and set the format to JavaScript.
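If you export as JSON instead of JavaScript, here is a minimal sketch of how you might consume that file in Three.js. The points array and key names match what the addon writes; the file URL is hypothetical, and the snippet assumes it runs in an ES module:

import * as THREE from 'three';

// Minimal sketch: load the addon's JSON export and rebuild the curve.
// '/curves/camera-path.json' is a hypothetical path; point it at wherever you host the file.
async function loadCurve(url) {
  const res = await fetch(url);
  const data = await res.json(); // { points: [[x, y, z], ...], count, curve_name, ... }

  // The addon already converted the points to Three.js (Y-up) space,
  // so they can be used directly.
  const vectors = data.points.map(([x, y, z]) => new THREE.Vector3(x, y, z));
  return new THREE.CatmullRomCurve3(vectors);
}

// Usage: sample the camera position at a progress value t in [0, 1]
const curve = await loadCurve('/curves/camera-path.json');
const cameraPos = curve.getPointAt(0.25); // a quarter of the way along the path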
2.3 A boatload of conditional rendering
There were no complex render targets or anything like that for the scene transitions; it was just pre-positioned 3D objects toggling their visibility on and off based on how far the camera has progressed along the curve. It’s not really good practice for complex scenes like these, as it can cause crashes if you conditionally render too much at once, but it works for a demo on most desktops/laptops at least.
How the models were set up in Blender
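To make the idea concrete, here is a minimal sketch of that kind of progress-based visibility toggling. The section names and ranges are hypothetical, not the actual values from the demo:

// `progress` is how far along the curve the camera is, normalized to [0, 1].
// Ranges overlap slightly so neighboring scenes stay visible during transitions.
const sections = [
  { name: 'pirateShip', start: 0.0, end: 0.35 },
  { name: 'sushiShop', start: 0.3, end: 0.7 },
  { name: 'snowScene', start: 0.65, end: 1.0 },
];

function updateVisibility(scene, progress) {
  for (const { name, start, end } of sections) {
    const object = scene.getObjectByName(name);
    if (!object) continue;
    // Hidden objects are skipped at draw time, which keeps draw calls down,
    // but their geometry and textures stay in memory.
    object.visible = progress >= start && progress <= end;
  }
}

Call updateVisibility(scene, progress) whenever the scroll or camera progress changes, and only the sections whose range contains the current progress get drawn.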
2.4 Creating invisible bounding boxes for SVGs with Figma
When I wanted to make sleeping Miffy and Panda Boris for the nighttime ship scene, I did not design them with the same size as their day versions. That means when I replaced the image textures with the night versions, the default UV map no longer looked good with them. While I could adjust the UV or position of the plane with code, it’s easier just to create an invisible bounding box with the same width and height as the day characters in Figma and have the nighttime characters fit within that bounding box.
Sleeping Boris character grouped together with a bounding box.
Final Words
I’m not a programmer, an artist, or a UI designer. I’m someone who creates things.
I’m not a creative. I’m a copier who tweaks things.
I’m not talented. I’m an observer recognizing emotions.
You can call yourself a programmer, an artist, a creative, or whatever you want if the time calls for it, but don’t let these words define you and your capabilities. The only way to stay relevant with AI in the picture is by incorporating cross-domain thinking beyond labels. Become interdisciplinary. It sounds daunting, but if you have the right learning systems in place, you can somewhat learn multiple domains in roughly the same time it would take you to learn one domain with ineffective learning systems (e.g., spending 1000 hours in tutorial hell in one domain vs spending 1000 hours pattern matching across multiple domains). Effective learning systems will show you connections and parallels between fields of study, speeding up your learning process. It’s quite similar to how we identified a core emotional design pattern that shows up in real-life, 3D art, 2D art, and UI designs.
Yes, this article has elements of survivorship bias: many people create round things and never end up creating cute things that spark emotional resonance, because there are far more factors to making something cute than just making it round. But the purpose was to show a path that many creative people take to become more creative, rather than documenting every last step of intuition development and every single observation and design choice. In future articles, I’ll address more ways to make the “tweaking” phase more effective.
As humans, we like to take complex arguments, simplify them through our pre-existing biases, and then treat those simplifications as fundamentally true when they’re just partial truths. Simplifications like the ones in this article are not truths; they’re building blocks meant to guide someone toward a specific view (in this case, the view that creativity is learnable). This article laid out a highly reductive, systematic, and analytical approach to creativity that I hope will naturally lead someone to develop creative intuition and spontaneous insight.
If you look at your life, there are probably so many moments you thought something was too difficult at first until you realized it wasn’t, whether that be getting in a relationship, public speaking, calculus, 3D art, programming, or literally anything you used to be afraid of and now aren’t. Treat creativity just like those things. Just another thing you think is difficult but know you’ll get better at with time.
Anyway, I hope this article removed some cognitive blockers you have and made you realize that you’ve got hidden skills inside of you. Growth is slow and quite painful at times. Take your time! Or maybe you’re perfectly happy where you’re at and don’t really want to change, which is totally okay as well. Like I said, never judge yourself, and I’m not going to judge or pressure you either. You’re special in your own way. Your interpretation of this article, whether you think it’s good or bad, helpful or not, is what makes you special.
Don’t judge, just reflect and seek to understand. Time will show you your values.
TLDR: Want to see how your site looks in different browsers without installing them all? Try a free online browser emulator at browserling.com/browse. It runs in your browser. No installs, no downloads.
What’s a Browser Emulator?
A browser emulator is a special browser that works like the real thing but runs on another computer in a virtual machine. You control it from your screen and can test sites in different browsers (Chrome, Firefox, Safari, Edge, etc.) without installing them. This makes it easy to spot issues and see how websites look for different users.
Why Do Developers Use It?
Web developers need their sites to look good everywhere: in Chrome, Firefox, Edge, Safari, even Internet Explorer (yep, some people still use it). A browser emulator lets you check your site across all these browsers quickly. You can spot layout issues, broken buttons, or weird CSS problems without multi-browser installs.
Is It Just for Developers?
Nope. Regular users can try it too. Say you want to see how a site behaves in another browser, or you’re curious why something looks strange in Safari but not in Chrome. Instead of downloading new browsers, you can just launch them in the emulator.
Can I Use It for Cross-Browser Testing?
Totally. That’s the main point. Browser emulators are built for cross-browser testing. You can load your website in multiple browsers and see exactly what users see. If your site looks great in Chrome but breaks in Firefox, the emulator shows you that.
Can I Test Websites on Mobile Browsers Too?
Yes. With a browser emulator you can try mobile versions of browsers like Android Chrome. This helps you see how your site looks on phones and tablets without needing to buy extra devices.
Do Browser Emulators Have Developer Tools?
Yes. Emulators give you access to tools like “Inspect Element”, so you can dig into code, check CSS, and test changes live.
What’s the Difference Between a Browser Emulator and a Browser Sandbox?
A browser emulator is for testing how sites look and act in different browsers. A browser sandbox is more for safe browsing and security, keeping dangerous stuff away from your device. Both run on remote computers, but they solve different problems.
Is It Safe?
Yes. Since the browsers run on remote computers, nothing touches your real device. If a site tries to crash a browser or run buggy code, it only affects the emulator, not your machine.
Do I Need to Download Anything?
Nope. Just open browserling.com/browse, pick a browser, and start testing. It runs on HTML5, JavaScript, and WebSockets right inside your own browser.
Can I Try Older Browsers?
Yep! That’s one of the best parts. Developers can test old browser versions that people still use but don’t want to install locally. This helps with bug fixing, design tweaks, and checking compatibility.
Is It Free?
There’s a free browser emulator with limited session time. If you need more testing time or access to more browsers, you can upgrade to a paid plan. Paid users get longer sessions, more browser options, mobile IPs, and file transfers.
What Is Browserling?
Browserling is a free online browser emulator and testing platform. It helps developers test websites across browsers, and it helps regular users open sites safely without downloading extra software.
Who Uses Browserling?
Web developers, designers, QA testers, cybersecurity folks, schools, and big companies. Anyone who needs to test websites or check how they behave in different browsers. Millions of people use Browserling to make the web safer and better.
TLDR: Want your team to browse the web safely without risking company devices or networks? Try free Remote Browser Isolation at browserling.com/browse. It runs right in your browser. No installs, no downloads.
What’s Remote Browser Isolation (RBI)?
Think of RBI as a “browser in the cloud”. Instead of running websites directly on your laptop or office PC, RBI loads them on a secure server somewhere else. You just see a clean, safe video stream of the website. Any risky code or malware stays far away from your company systems.
Why Should Managers Care?
One bad click from an employee can cost thousands in lost time, ransomware, or data leaks. RBI reduces that risk to almost zero. With RBI, your staff can open links, check supplier sites, or even handle suspicious web apps without bringing danger onto the corporate network.
Will RBI Slow Down My Employees?
Not really. Modern RBI is built to be fast. Websites load almost instantly, and employees barely notice they’re browsing through a secure remote session. For management, this means stronger security without hurting productivity.
Will Employees Push Back Against It?
Unlikely. Since RBI looks and feels like a normal browser, most employees won’t even notice the difference. For managers, that’s a win: stronger security without resistance or complaints about “new software”.
Can RBI Help with Compliance and Regulations?
Yes. Many industries (finance, healthcare, government) require strict data protection. RBI helps by keeping risky code and malware away from local systems. This reduces compliance headaches and shows auditors that you’re serious about security.
How Does RBI Compare to Firewalls and Antivirus?
Firewalls and antivirus tools are like locks on the door. RBI is like moving the door itself into a safe building across the street. Even if malware tries to sneak in, it never reaches your office network. Managers can think of RBI as another strong layer in the security stack.
Is It Safe for Regular Users?
Yes. Users don’t need to install anything complicated. RBI runs in the browser they already use. If a sketchy site tries to drop malware, it gets stuck in the isolated environment. Employees just see the site like normal, but nothing dangerous touches their device.
Can RBI Help with Phishing Emails?
Definitely. Your team can click on links from suspicious emails inside RBI. If the site is a phishing trap or hides malicious scripts, it can’t escape the isolated session. The real endpoint stays clean.
What About IT and Security Teams?
RBI is great for IT departments. Security teams can safely open suspicious URLs, test untrusted web apps, or check malware samples without spinning up a separate VM every time. It saves time and lowers the chance of accidents.
Do We Need Special Hardware or Software?
Nope. Just go to browserling.com/browse in your normal browser. It uses modern web tech (HTML5, JavaScript, WebSockets) to stream the remote session. No downloads, no installs, no admin rights needed.
Can Employees Use Different Browsers?
Yes. RBI services let you switch between Chrome, Firefox, Edge, Opera, and even older versions. This is useful for testing apps across multiple browsers without risking the actual machine.
Is It Free?
There’s a free version you can try right now with time limits. Paid plans are available for longer sessions, advanced controls, and enterprise features like policy enforcement and logging.
Is RBI Expensive to Roll Out?
Not at all. There are free trials and affordable enterprise plans. Because RBI runs from the employees’ existing browsers, there’s no big setup cost, no new servers, and almost no need for extra staff training. Managers can start small, then scale up if the company needs more seats or features.
What Is Browserling?
Browserling is a pioneer in online RBI technology. It lets individuals and companies run browsers safely in the cloud. Enterprises use it for:
Securing employee browsing
Testing apps and websites
Opening suspicious files and URLs
Protecting against phishing and malware
Who Uses Browserling?
Everyone from small businesses to Fortune 500 companies. IT managers, government agencies, financial firms, schools, and healthcare providers use Browserling’s RBI solution to keep employees safe online. RBI is especially popular in industries where compliance and data security really matter.
TLDR: Need to see how your site or app works on Android without buying a phone? Fire up a free online Android emulator at browserling.com/android. The emulator runs right in your browser – no downloads, no installs, no setup.
What’s an Online Android Emulator?
An Android emulator is like a “phone inside your computer”. It runs on a server in the cloud, and you control it through your browser. You can tap, swipe, and test apps or websites exactly like you would on a real phone.
Is It Safe to Use?
Yep! The emulator runs remotely, so anything you test stays contained. You don’t risk messing up your computer or your personal phone. Even if an app has bugs or hidden malware, it can’t escape the emulator.
Can I Test My Mobile Website?
Absolutely. Just load your site inside the emulator and see how it looks on an Android device. You’ll notice if text is too small, buttons don’t line up, or if your layout breaks on different screen sizes.
What About Cross-Browser Testing?
You can use the emulator to try your site in Chrome for Android, Firefox for Android, and other mobile browsers. It’s super handy for web developers who need to make sure their site works everywhere, not just on desktop.
Can I Use It for App Testing?
Yes! You can install APK files and see how your app behaves. This helps developers and testers find bugs before shipping updates to real users. It’s also great for QA teams who need to check features across different Android versions.
Do I Need to Download Anything?
Nope. Just open browserling.com/android in your browser. It uses HTML5, JavaScript, and WebSockets to stream the Android screen to you. No software, no setup, no stress.
Does It Work on Any Computer?
Yep. Since everything runs in your browser, you can use it on Windows, Mac, Linux, or even a Chromebook. No need for special hardware or software.
Can I Share My Test Sessions?
Totally. You can grab screenshots or record short clips right from the emulator. This makes it easy to share bugs with your team or show clients how their site looks on Android.
Can I Try Different Android Versions?
Totally. You can switch between Android versions and devices to see how your app or site looks on older and newer phones. This is useful for compatibility testing and catching weird bugs that only show up on certain systems.
Is It Free?
There’s a free Android emulator with limited session time. If you need longer sessions, more devices, or advanced features, there are paid plans available. The paid plans are built for developers, QA teams, and anyone doing serious testing.
What Is Browserling?
Browserling is a free online Android emulator and browser testing platform. It lets you run Android phones and web browsers straight from the cloud, right inside your browser. No downloads, no setup, no extra devices. Developers use it to test websites, QA teams use it to check mobile apps, and everyday users try it out to see how things look on Android without needing an actual phone.
Who Uses Browserling?
Web developers, mobile app testers, QA teams, and even governments use Browserling’s emulators. Big companies trust it for cross-device testing. It’s super handy if you’d rather test in the cloud than keep an Android lab on your desk.
As I always say, naming things is hard. We’ve already talked about this in a previous article.
If you create a simple, coherent dictionary, your classes will have better names because the same idea is always represented by the same word. This improves code readability and searchability. Also, just by looking at the names of your classes, you can grasp their meaning.
Say that we have 3 objects that perform similar operations: they download some content from external sources.
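As a made-up sketch of the problem (the class and method names here are my own illustration): each class picks its own synonym for the same operation, so a reader has to wonder whether download, fetch, and retrieve mean different things. Picking one word from a shared dictionary removes the doubt:

```javascript
// Before: three synonyms for the same operation.
class YouTubeDownloader {
  download(videoId) { /* ... */ }
}

class TwitterFetcher {
  fetch(tweetId) { /* ... */ }
}

class RssRetriever {
  retrieve(feedUrl) { /* ... */ }
}

// After: one word from the shared dictionary, used everywhere.
// (YouTubeDownloader already followed the convention.)
class TwitterDownloader {
  download(tweetId) { /* ... */ }
}

class RssDownloader {
  download(feedUrl) { /* ... */ }
}
```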
It’s as simple as that! Even a small change like this can drastically improve the readability and usability of your code!
So, keep an eye out for this small kind of issue when reviewing PRs, too.
Conclusion
A common dictionary helps everyone understand the code without misunderstandings. Of course, this tip applies not only to class names but to variables too. Avoid using synonyms for the same concept (e.g., video and clip). Instead of synonyms, use more specific names (YouTubeVideo instead of Video).
Any other ideas?
👉 Let’s discuss it on Twitter or in the comment section below!