In today’s regulatory climate, compliance is no longer a box-ticking exercise. It is a strategic necessity. Organizations across industries are under pressure to secure sensitive data, meet privacy obligations, and avoid hefty penalties. Yet, despite all the talk about “data visibility” and “compliance readiness,” one fundamental gap remains: unseen data—the information your business holds but doesn’t know about.
Unseen data isn’t just a blind spot—it’s a compliance time bomb waiting to trigger regulatory and reputational damage.
The Myth: Sensitive Data Lives Only in Databases
Many businesses operate under the dangerous assumption that sensitive information exists only in structured repositories like databases, ERP platforms, or CRM systems. While it’s true these systems hold vast amounts of personal and financial information, they’re far from the whole picture.
Reality check: Sensitive data is often scattered across endpoints, collaboration platforms, and forgotten storage locations. Think of HR documents on a laptop, customer details in a shared folder, or financial reports in someone’s email archive. These are prime targets for breaches—and they often escape compliance audits because they live outside the “official” data sources.
Myth vs Reality: Why Structured Data is Not the Whole Story
Yes, structured sources like SQL databases allow centralized access control and auditing. But compliance risks aren’t limited to structured data. Unstructured and endpoint data can be far more dangerous because:
They are harder to track.
They often bypass IT policies.
They get replicated in multiple places without oversight.
When organizations focus solely on structured data, they risk overlooking up to 50–70% of their sensitive information footprint.
The Challenge Without Complete Discovery
Without full-spectrum data discovery—covering structured, unstructured, and endpoint environments—organizations face several challenges:
Compliance Gaps – Regulations like GDPR, DPDPA, HIPAA, and CCPA require knowing all locations of personal data. If data is missed, compliance reports will be incomplete.
Increased Breach Risk – Cybercriminals exploit the easiest entry points, often targeting endpoints and poorly secured file shares.
Inefficient Remediation – Without knowing where data lives, security teams can’t effectively remove, encrypt, or mask it.
Costly Investigations – Post-breach forensics becomes slower and more expensive when data locations are unknown.
The Importance of Discovering Data Everywhere
A truly compliant organization knows where every piece of sensitive data resides, no matter the format or location. That means extending discovery capabilities to:
Structured Data
Where it lives: Databases, ERP, CRM, and transactional systems.
Why it matters: It holds core business-critical records, such as customer PII, payment data, and medical records.
Risks if ignored: Non-compliance with data subject rights requests; inaccurate reporting.
Unstructured Data
Where it lives: File servers, SharePoint, Teams, Slack, email archives, cloud storage.
Why it matters: Contains contracts, scanned IDs, reports, and sensitive documents in freeform formats.
Risks if ignored: Harder to monitor, control, and protect due to scattered storage.
Endpoint Data
Where it lives: Laptops, desktops, mobile devices (Windows, Mac, Linux).
Why it matters: Employees often store working copies of sensitive files locally.
Risks if ignored: Theft, loss, or compromise of devices can expose critical information.
Real-World Examples of Compliance Risks from Unseen Data
Healthcare Sector: A hospital’s breach investigation revealed patient records stored on a doctor’s laptop, which was never logged into official systems. GDPR fines followed.
Banking & Finance: An audit found loan application forms with customer PII on a shared drive, accessible to interns.
Retail: During a PCI DSS assessment, old CSV exports containing cardholder data were discovered in an unused cloud folder.
Government: Sensitive citizen records were emailed between departments, bypassing secure document transfer systems, and were later exposed in a phishing attack.
Closing the Gap: A Proactive Approach to Data Discovery
The only way to eliminate unseen data risks is to deploy comprehensive data discovery and classification tools that scan across servers, cloud platforms, and endpoints—automatically detecting sensitive content wherever it resides.
This proactive approach supports regulatory compliance, improves breach resilience, reduces audit stress, and ensures that data governance policies are meaningful in practice, not just on paper.
Bottom Line
Compliance isn’t just about protecting data you know exists—it’s about uncovering the data you don’t. From servers to endpoints, organizations need end-to-end visibility to safeguard against unseen risks and meet today’s stringent data protection laws.
In unit tests, sometimes you need to perform deep checks on the object passed to the mocked service. We will learn 3 ways to do that with Moq and C#
When writing unit tests, you can use Mocks to simulate the usage of class dependencies.
Even though some developers are strongly against the use of mocks, they can be useful, especially when the mocked operation does not return any value but you still want to check that you've called a specific method with the correct values.
In this article, we will learn 3 ways to check the values passed to the mocks when using Moq in our C# Unit Tests.
To better explain those 3 ways, I created this method:
public void UpdateUser(User user, Preference preference)
{
    var userDto = new UserDto
    {
        Id = user.id,
        UserName = user.username,
        LikesBeer = preference.likesBeer,
        LikesCoke = preference.likesCoke,
        LikesPizza = preference.likesPizza,
    };

    _userRepository.Update(userDto);
}
UpdateUser simply accepts two objects, user and preference, combines them into a single UserDto object, and then calls the Update method of _userRepository, which is an interface injected in the class constructor.
As you can see, we are not interested in the return value from _userRepository.Update. Rather, we are interested in checking that we are calling it with the right values.
We can do it in 3 ways.
Verify each property with It.Is
The simplest, most common way is by using It.Is<T> within the Verify method.
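Here is a sketch of what that looks like for the UserDto shown above (the test name and the Arrange values are assumptions, mirroring the tests later in the article):

[Test]
public void WithItIs()
{
    // Arrange
    var user = new User(1, "Davide");
    var preferences = new Preference(true, true, false);

    // Act
    userUpdater.UpdateUser(user, preferences);

    // Assert: every field is checked inline, inside the Verify call
    userRepo.Verify(_ => _.Update(It.Is<UserDto>(u =>
        u.Id == 1
        && u.UserName == "Davide"
        && u.LikesPizza == true
        && u.LikesBeer == true
        && u.LikesCoke == false)));
}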
This approach works well when you have to perform checks on only a few fields. But the more fields you add, the longer and messier that code becomes.
Also, a problem with this approach is that when it fails, it's hard to understand what caused the failure, because there is no indication of which specific field did not match the expectations.
Here’s an example of an error message:
Expected invocation on the mock at least once, but was never performed: _ => _.Update(It.Is<UserDto>(u => (((u.Id == 1 && u.UserName == "Davidde") && u.LikesPizza == True) && u.LikesBeer == True) && u.LikesCoke == False))
Performed invocations:
Mock<IUserRepository:1> (_):
IUserRepository.Update(UserDto { UserName = Davide, Id = 1, LikesPizza = True, LikesCoke = False, LikesBeer = True })
Can you spot the error? And what if you were checking 15 fields instead of 5?
Verify with external function
Another approach is by externalizing the function.
[Test]
public void WithExternalFunction()
{
    // Arrange
    var user = new User(1, "Davide");
    var preferences = new Preference(true, true, false);
    UserDto expected = new UserDto
    {
        Id = 1,
        UserName = "Davide",
        LikesBeer = true,
        LikesCoke = false,
        LikesPizza = true,
    };

    // Act
    userUpdater.UpdateUser(user, preferences);

    // Assert
    userRepo.Verify(_ => _.Update(It.Is<UserDto>(u => AreEqual(u, expected))));
}

private bool AreEqual(UserDto u, UserDto expected)
{
    Assert.AreEqual(expected.UserName, u.UserName);
    Assert.AreEqual(expected.Id, u.Id);
    Assert.AreEqual(expected.LikesBeer, u.LikesBeer);
    Assert.AreEqual(expected.LikesCoke, u.LikesCoke);
    Assert.AreEqual(expected.LikesPizza, u.LikesPizza);

    return true;
}
Here, we are passing an external function to the It.Is<T> method.
This approach allows us to define more explicit and comprehensive checks.
The good parts are that you gain more control over the assertions, and you also get better error messages when a test fails:
Expected string length 6 but was 7. Strings differ at index 5.
Expected: "Davide"
But was: "Davidde"
The bad part is that you end up stuffing your test class with lots of different methods, and the class can easily become hard to maintain. Unfortunately, we cannot use local functions here.
On the other hand, having external functions allows us to combine them when we need to do some tests that can be reused across test cases.
Intercepting the function parameters with Callback
Lastly, we can use a hidden gem of Moq: Callbacks.
With Callbacks, you can store in a local variable a reference to the object that the mocked method was called with.
[Test]
public void CompareWithCallback()
{
    // Arrange
    var user = new User(1, "Davide");
    var preferences = new Preference(true, true, false);

    UserDto actual = null;
    userRepo.Setup(_ => _.Update(It.IsAny<UserDto>()))
        .Callback(new InvocationAction(i => actual = (UserDto)i.Arguments[0]));

    UserDto expected = new UserDto
    {
        Id = 1,
        UserName = "Davide",
        LikesBeer = true,
        LikesCoke = false,
        LikesPizza = true,
    };

    // Act
    userUpdater.UpdateUser(user, preferences);

    // Assert
    Assert.IsTrue(AreEqual(expected, actual));
}
In this way, you can use it locally and run assertions directly on that object without relying on the Verify method.
Or, if you use records, you can use the auto-equality checks to simplify the Verify method as I did in the previous example.
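For instance, if UserDto were declared as a record (a hypothetical variation; the article's UserDto is a plain class), its built-in value equality would let you compare the whole captured object in a single assertion:

public record UserDto
{
    public int Id { get; init; }
    public string UserName { get; init; }
    public bool LikesBeer { get; init; }
    public bool LikesCoke { get; init; }
    public bool LikesPizza { get; init; }
}

// after capturing `actual` with the Callback:
Assert.AreEqual(expected, actual); // record equality compares all properties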
Wrapping up
In this article, we’ve explored 3 ways to perform checks on the objects passed to dependencies mocked with Moq.
Each way has its pros and cons, and it’s up to you to choose the approach that fits you the best.
I personally prefer the second and third approaches, as they allow me to perform better checks on the passed values.
When two creatives collaborate, the design process becomes a shared stage — each bringing their own strengths, perspectives, and instincts. This project united designer/art director Artem Shcherban and 3D/motion designer Andrew Moskvin to help New York–based scenographer and costume designer Christian Fleming completely reimagine how his work is presented.
What began as a portfolio refresh evolved into a cohesive visual system: a rigorously minimal print catalog, a single-page website concept, and a cinematic 3D visualization. Together, Artem and Andrew shaped an experience that distilled Christian’s theatrical sensibility into clear, atmospheric design across both physical and digital formats.
From here, Artem picks up the story, walking us through how he approached the portfolio’s structure, the visual rules it would live by, and the thinking that shaped both its print and on-screen presence.
Starting the Design Conversation
Christian Fleming is a prominent designer and director based in New York City who works with theaters around the world creating visual spaces for performances. He approached me with a challenge: to update and rethink his portfolio, to make it easy to send out to theater directors and curators. Specifically the print format.
Christian had a pretty clear understanding of what he wanted to show and how it should look: rigid Scandinavian minimalism, extreme clarity of composition, a minimum of elements and a presentation that would be understandable to absolutely anyone – regardless of age, profession or context.
It was important to create a system that would:
be updated regularly (approximately every 3 weeks),
adapt to new projects,
and at the same time remain visually and semantically stable.
There also needed to be an “About Christian” section in the structure, but this too had to fit within a strict framework of visual language.
Designing a Flexible Visual System
I started by carefully analyzing how Christian works. His primary language is visual. He thinks in images, light, texture and composition. So it was important to retain a sense of air and rhythm, but build a clear modular structure that he could confidently work with on his own.
We came up with a simple adaptive system:
it easily adapts to images of different formats,
scalable for everything from PDFs to presentations,
and can be used both digitally and offline.
In the first stages, we tried several structures. However, Christian still felt that there was something missing in the layout – the visuals and logic were in conflict. We discussed which designs he wanted to show openly and which he didn’t. Some works had global reviews and important weight, but could not be shown in all details.
The solution was to divide them into two meaningful blocks:
“Selected Projects”, with full submission, and “Archival Projects”, with a focus on awards, reviews, and context. This approach preserved both structure and tone. The layout became balanced – and Christian immediately responded to this.
After gathering the structure and understanding how it would work, we began creating the design itself and populating it with content. It was important from the start to train Christian to add content on his own, as there were a lot of projects and they change quite often.
One of the key pluses of our work is versatility. Not only could the final file be emailed, but it could also be used as a print publication. This gave Christian the opportunity to give physical copies at meetings, premieres and professional events where tactility and attention to detail are important.
Christian liked the first result, both in the way the system was laid out and the way I approached the task. Then I suggested: let’s update the website as well.
Translating the Portfolio to a Single-Page Site
This phase proved to be the most interesting, and the most challenging.
Although the website looks simple, it took almost 3 months to build. From the very beginning, Christian and I tried to understand why he needed to update the site and how it should work together with the already established portfolio system.
The main challenge was to show the visual side of his projects. Not just text or logos, but the atmosphere, the light, the costumes, the feeling of the scene.
One of the restrictions Christian set was to make the site as concise as possible: no sprawling set of pages (ideally just one), and no unnecessary transitions. It had to be simple, clear, and intuitive, yet still user-friendly and informative. This was a real challenge, given the amount of content that needed to be included.
Designing with Stage Logic
One of the key constraints that started the work on the site was Christian’s wish: no multiple pages. Everything had to be compact, coherent, clear and yet rich. This posed a special challenge. It was necessary to accommodate a fairly large amount of information without overloading the perception.
I proposed a solution built on a theatrical metaphor: as in a stage blackout, the screen darkens and a new space appears. Each project becomes its own scene, with the user as a spectator — never leaving their seat, never clicking through menus. Navigation flows in smooth, seamless transitions, keeping attention focused and the emotional rhythm intact.
Christian liked the idea, but immediately faced a new challenge: how to fit everything important on one screen:
a short text about him,
social media links and a resume,
the job title and description,
and, if necessary, reviews.
At the same time, the main visual content – photos and videos – had to remain in the center of attention and not overlap with the interface.
Solving the Composition Puzzle
We explored several layouts — from centered titles and multi-level disclosures to diagonal structures and thumbnail navigation. Some looked promising, but they lacked the sense of theatrical rhythm we wanted. The layouts felt crowded, with too much design and not enough air.
The breakthrough came when we shifted focus from pure visuals to structural logic. We reduced each project view to four key elements: minimal information about Christian, the production title with the director’s name, a review (when available), and a button to select the project. Giving each element its own space created a layout that was both clear and flexible, without overloading the screen.
Refining Through Iteration
As with the book, the site went through several iterations:
In the first prototype, the central layout quickly proved unworkable – long play titles and director names didn’t fit on the screen, especially in the mobile version. We were losing scalability and not using all the available space.
In the second version, we moved the information blocks upwards – this gave us a logical hierarchy and allowed us not to burden the center of the screen. The visual focus remained on the photos, and the text did not interfere with the perception of the scenography.
In the third round, the idea of “titles” appeared – a clear typographic structure, where titles are highlighted only by boldness, without changing the lettering. This was in keeping with the overall minimalist aesthetic, and Christian specifically mentioned that he didn’t want to use more than one font or style unless necessary.
We also decided to stylistically separate the reviews from the main description. We italicized them and put them just below. This made it clear what belonged to the author and what was a response to the author’s work.
Bringing Theatrical Flow to Navigation
The last open issue was navigation between projects. I proposed two scenarios:
Navigating with arrows, as if the viewer were leafing through the play scene by scene.
A clickable menu with a list of works for those who want to go directly.
Christian had one concern: wouldn't users lose their bearings if they couldn't see the list all the time? We discussed this and concluded that most visitors don't come to the site to look for a specific production. They come to feel the atmosphere and "experience" his theater. So the basic scenario is a sequential browsing experience, like moving through a play. The menu is available, but stays out of the way – it should not break the sense of involvement.
What We Learned About Theatrical Design
We didn’t build just a website. We built an experience. It is not a digital storefront, but a space that reflects the way Christian works. He is an artist who thinks in the rhythm of the stage, and it was essential not to break that rhythm.
The result is a place where the viewer isn’t distracted; they inhabit it. Navigation, structure, and interface quietly support this experience. Much of that comes from Christian’s clear and thoughtful feedback, which shaped the process at every step. This project is a reminder that even work which appears simple is defined by countless small decisions, each influencing not only how it functions but also the mood it creates from the very beginning.
Extending the Design from Screen to Print
Once the site was complete, a new question emerged: how should this work be presented in the most meaningful way?
The digital format was only part of the answer. We also envisioned a printed edition — something that could be mailed or handed over in person as a physical object. In the theater world, where visual presence and tactility carry as much weight as the idea itself, this felt essential.
We developed a set of layouts, but bringing the catalog to life as intended proved slow. Christian’s schedule with his theater work left little time to finalize the print production. We needed an alternative that could convey not only the design but also the atmosphere and weight of the finished book.
Turning the Book into a Cinematic Object
At this stage, 3D and motion designer Andrew Moskvin joined the project. We shared the brief with him — not just to present the catalog, but to embed it within the theatrical aesthetic, preserving the play of light, texture, air, and mood that defined the website.
Andrew was immediately enthusiastic. After a quick call, he dove into the process. I assembled all the pages of the print version we had, and together we discussed storyboards, perspectives, atmosphere, possible scenes, and materials that could deepen the experience. The goal was more than simply showing the layout — we wanted cinematic shots where every fold of fabric and every spot of light served a single dramaturgy.
The result exceeded expectations. Andrew didn’t just recreate the printed version; he brought it to life. His work was subtle and precise, with a deep respect for context. He captured not only the mood but also the intent behind each spread, giving the book weight, materiality, and presence — the kind we imagined holding in our hands and leafing through in person.
Andrew will share his development process below.
Breaking Down the 3D Process
The Concept
At the very start, I wanted my work to blend naturally into the ideas that had already been established. Christian Fleming is a scenographer and costume designer, so the visual system needed to reflect his world. Since the project was deeply rooted in the theatrical aesthetic, my 3D work had to naturally blend into that atmosphere. Artem’s direction played a key role in shaping the unique look envisioned by Christian Fleming — rich with stage-like presence, bold compositions, and intentional use of space. My task was to ensure that the 3D elements not only supported this world, but also felt like an organic extension of it — capturing the same mood, lighting nuances, and visual rhythm that define a theatrical setting.
The Tools
For the entire 3D pipeline, I worked in:
Cinema 4D for modeling and scene setup
Redshift for rendering
After Effects for compositing
Photoshop for color correcting static images
Modeling the Book
The book was modeled entirely from scratch. Artem and I discussed the form and proportions, and after several iterations, we finalized the design direction. I focused on the small details that bring realism: the curvature of the hardcover spine, beveled edges, the separation between the cover and pages, and the layered structure of the paper block. I also modeled the cloth texture wrapping the spine, giving the book a tactile, fabric-like look. The geometry was built to hold up in close-up shots and fit the theatrical lighting.
Lighting with a Theatrical Eye
Lighting was one of the most important parts of this process. I wanted the scenes to feel theatrical — as if the objects were placed on a stage under carefully controlled spotlights. Using a combination of area lights and spotlights in Redshift, I shaped the lighting to create soft gradients and shadows on the surfaces. The setup was designed to emphasize the geometry without flattening it, always preserving depth and direction. A subtle backlight highlight played a key role in defining the edges and enhancing the overall form.
I think I spent more time on lighting than on modeling, since lighting has always been more experimental for me — even in product scenes.
One small but impactful trick I always use is setting up a separate HDRI map just for reflections. I disable its contribution to diffuse lighting by setting the diffuse value to 0, while keeping reflections at 1. This allows the reflections to pop more without affecting the overall lighting of the scene. It’s a simple setup, but it gives you way more control over how materials respond — especially in stylized or highly art-directed environments.
Building the Materials
When I was creating the materials, I noticed that Artem had used a checkerboard texture for the cover. So I thought — why not take that idea further and implement it directly into the material? I added a subtle bump using a checker texture on the sides and front part of the book.
I also experimented quite a bit with displacement. Initially, I had the idea to make the title metallic, but it felt too predictable. So instead, I went with a white title featuring embossed details, while keeping the checker bump texture underneath.
This actually ties back to the modeling process — for the displacement to work properly, the geometry had to be evenly dense and ready for subdivision.
I created a mask in Photoshop and applied a procedural Gaussian blur using a Smart Object. Without the blur, the displacement looked harsh and unrefined — even a slight blur made a noticeable difference.
The main challenge with using white, as always, was avoiding blown-out highlights. I had to carefully balance the lighting and tweak the material settings to make the title clean and visible without overexposing it.
One of the more unusual challenges in this project was animating the page slide and making the pages differ. I didn’t want the pages to feel too repetitive, but I also didn’t want to create dozens of individual materials for each page. To find a balance, I created two different materials for two pages and made them random inside of the cloner. It was a bit of a workaround — mostly due to limitations inside the Shader switch node — but it worked well enough to create the illusion of variety without significantly increasing the complexity of the setup.
There’s a really useful node in Redshift called Color User Data — especially when working with the MoGraph system to trigger object index values. One of the strangest (and probably least intuitive) things I did in this setup was using a Change Range node to remap those index values properly according to the number of textures I had. With that in place, I built a system that used an index to mix between all the textures inside a Shader Switch node. This allowed me to get true variation across the pages without manually assigning materials to each one.
You might’ve noticed that the pages look a bit too bright for a real-world scenario — and that was actually a deliberate choice. I often use a trick that helps me art-direct material brightness independently of the scene’s lighting. The key node here is Color Correct Node.
Inside it, there’s a parameter called Level. If you set it higher than 1, it increases the overall brightness of the texture output — without affecting shadows or highlights too aggressively. This also works in reverse: if your texture has areas that are too bright (like pure white), lowering the Level value below 1 will tone it down without needing to modify the source texture.
It’s a simple trick, but incredibly useful when you want fine control over how materials react in stylized or theatrical lighting setups.
The red cloth material I used throughout the scene is another interesting part of the project. I wanted it to have a strong tactile feel — something that looks thick, textured, and physically present. To achieve that, I relied heavily on geometry. I used a Redshift Object Tag with Subdivision (under the Geometry tab) enabled to add more detail where it was needed. This helped the cloth catch light properly and hold up in close-up shots.
For the translucent look, I originally experimented with Subsurface Scattering, but it didn’t give me the control I wanted. So instead, I used an Opacity setup driven by a Ramp and Change Range nodes. That gave me just enough falloff and variation to fake the look of light passing through thinner areas of the fabric — and in the end, it worked surprisingly well.
Animating the Pages
This was by far the most experimental part of the project for me. The amount of improvisation — and the complete lack of confidence in what the next frame would be — made the process both fun and flexible.
What you’re about to see might look a bit chaotic, so let me quickly walk you through how it all started.
The simulation started with a subject — in our case, a page. It had to have the proper form, and by that I mean the right topology. Specifically, it needed to consist only of horizontal segments; otherwise, it would bend unevenly under the forces present in the scene. (And yes, I did try versions with even polygons — it got messy.)
I set up all the pages in a Cloner so I could easily adjust any parameters I needed, and added a bit of randomness using a Random Effector.
In the video, you can see a plane on the side that connects to the pages — that was actually the first idea I had when thinking about how to run the simulation. The plane has a Connect tag that links all the pages to it, so when it rotates, they all follow along.
I won’t go into all the force settings — most of them were experimental, and animations like this always require a bit of creative adjustment.
The main force was wind. The pages did want to slide just from the plane with the Connect tag, but I needed to give them an extra push from underneath — that’s where wind came in handy.
I also used a Field Force to move the pages mid-air, from the center outward to the other side.
Probably the most important part was how I triggered the “Mix Animation.” I used a Vertex Map tag on the Cloner to paint a map using a Field, which then drove the Mix Animation parameter in the Cloth tag. This setup made the pages activate one by one, creating a natural, finger-like sliding motion as seen in the video.
Postprocessing
I didn’t go too heavy on post-processing, but there’s one plugin I have to mention — Deep Glow. It gives amazing results. By tweaking the threshold, you can make it react only to the brightest areas, which creates a super clean, glowing effect.
The Final Theatrical Ecosystem
In the end, Christian was delighted with the outcome. Together we had built more than a portfolio — we had created a cohesive theatrical ecosystem. It moved fluidly from digital performance to printed object, from live stage to interface, and from emotion to technology.
The experience is pared back to its essence: no superfluous effects, no unnecessary clicks, nothing to pull focus. What remains is what matters most — the work itself, framed in a way that stays quietly behind the scenes yet comes fully alive in the viewer’s hands and on their screen.
Cyberattacks aren’t slowing down—they’re getting bolder and smarter. From phishing scams to ransomware outbreaks, the number of incidents has doubled or even tripled year over year. In today’s hybrid, multi-vendor IT landscape, protecting your organization’s digital assets requires choosing the top XDR vendor that can see and stop threats across every possible entry point.
Over the last five years, XDR (Extended Detection and Response) has emerged as one of the most promising cybersecurity innovations. Leading IT analysts agree: XDR solutions will play a central role in the future of cyber defense. But not all XDR platforms are created equal. Success depends on how well an XDR vendor integrates Endpoint Protection Platforms (EPP) and Endpoint Detection and Response (EDR) to detect, analyze, and neutralize threats in real time.
This guide will explain what makes a great XDR vendor and how Seqrite XDR compares to industry benchmarks. It also includes a practical checklist for confidently evaluating your next security investment.
Why Choosing the Right XDR Vendor Matters
Your XDR platform isn’t just another security tool; it’s the nerve center of your threat detection and response strategy. The best solutions act as a central brain, collecting security telemetry from:
Endpoints
Networks
Firewalls
Email
Identity systems
DNS
They don’t just collect this data; they correlate it intelligently, filter out the noise, and give your security team actionable insights to respond faster.
According to industry reports, over 80% of IT and cybersecurity professionals are increasing budgets for threat detection and response. If you choose the wrong vendor, you risk fragmented visibility, alert fatigue, and missed attacks.
Key Capabilities Every Top XDR Vendor Should Offer
When shortlisting top XDR vendors, here’s what to look for:
Advanced Threat Detection – Identify sophisticated, multi-layer attack patterns that bypass traditional tools.
Risk-Based Prioritization – Assign scores (1–1000) so you know which threats truly matter.
Unified Visibility – A centralized console to eliminate security silos.
Integration Flexibility – Native and third-party integrations to protect existing investments.
Automation & Orchestration – Automate repetitive workflows to respond in seconds, not hours.
MITRE ATT&CK Mapping – Know exactly which attacker tactics and techniques you can detect.
Remember, it’s the integration of EPP and EDR that makes or breaks an XDR solution’s effectiveness.
Your Unified Detection & Response Checklist
Use this checklist to compare vendors on a like-for-like basis:
Full telemetry coverage: Endpoints, networks, firewalls, email, identity, and DNS.
Native integration strength: Smooth backend-to-frontend integration for consistent coverage.
Real-time threat correlation: Remove false positives, detect real attacks faster.
Proactive security posture: Shift from reactive to predictive threat hunting.
MITRE ATT&CK alignment: Validate protection capabilities against industry-recognized standards.
Why Automation Is the Game-Changer
The top XDR vendors go beyond detection: they optimize your entire security operation. Automated playbooks can instantly execute containment actions when a threat is detected. Intelligent alert grouping cuts down on noise, preventing analyst burnout.
Automation isn’t just about speed; it’s about cost savings. A report by IBM Security shows that organizations with full automation save over ₹31 crore annually and detect/respond to breaches much faster than those relying on manual processes.
The Seqrite XDR Advantage
Seqrite XDR combines advanced detection, rich telemetry, and AI-driven automation into a single, unified platform. It offers:
Seamless integration with Seqrite Endpoint Protection (EPP) and Seqrite Endpoint Detection & Response (EDR) and third party telemetry sources.
MITRE ATT&CK-aligned visibility to stay ahead of attackers.
Automated playbooks to slash response times and reduce manual workload.
Unified console for complete visibility across your IT ecosystem.
GenAI-powered SIA (Seqrite Intelligent Assistant) – Your AI-Powered Virtual Security Analyst. SIA offers predefined prompts and conversational access to incident and alert data, streamlining investigations and making it faster for analysts to understand, prioritize, and respond to threats.
In a market crowded with XDR solutions, Seqrite delivers a future-ready, AI-augmented platform designed for today’s threats and tomorrow’s unknowns.
If you’re evaluating your next security investment, start with a vendor who understands the evolving threat landscape and backs it up with a platform built for speed, intelligence, and resilience.
In this article, I will show you two simple tricks that help me understand the deployment status of my .NET APIs
When I create Web APIs with .NET I usually add two “secret” endpoints that I can use to double-check the status of the deployment.
I generally expose two endpoints: one that shows me some info about the current environment, and another one that lists all the application settings defined after the deployment.
In this article, we will see how to create those two endpoints, how to update the values when building the application, and how to hide those endpoints.
Project setup
For this article, I will use a simple .NET 6 API project. We will use Minimal APIs, and we will use the appsettings.json file to load the application’s configuration values.
Since we are using Minimal APIs, you will have the endpoints defined in the Main method within the Program class.
To expose an endpoint that accepts the GET HTTP method, you can write
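For example, a minimal GET endpoint might look like this (the route and the returned message are placeholders, not the article's original values):

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// a GET endpoint mapped directly in Program.cs
app.MapGet("/ping", () => "pong");

app.Run();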
That’s all you need to know about .NET Minimal APIs for the sake of this article. Let’s move to the main topics ⏩
How to show environment info in .NET APIs
Let’s say that your code execution depends on the current Environment definition. Typical examples: when running in production, you may want to hide endpoints that are visible in other environments, or use a different error page when an unhandled exception is thrown.
Once the application has been deployed, how can you retrieve the info about the running environment?
Here we go:
app.MapGet("/env", async context =>
{
IWebHostEnvironment? hostEnvironment = context.RequestServices.GetRequiredService<IWebHostEnvironment>();
var thisEnv = new {
ApplicationName = hostEnvironment.ApplicationName,
Environment = hostEnvironment.EnvironmentName,
};
var jsonSerializerOptions = new JsonSerializerOptions { WriteIndented = true };
await context.Response.WriteAsJsonAsync(thisEnv, jsonSerializerOptions);
});
This endpoint is quite simple.
The context variable, which is of type HttpContext, exposes some properties. Among them, the RequestServices property allows us to retrieve the services that have been injected when starting up the application. We can then use GetRequiredService to get a service by its type and store it into a variable.
💡 GetRequiredService throws an exception if the service cannot be found. On the contrary, GetService returns null. I usually prefer GetRequiredService, but, as always, it depends on how you’re using it.
Then, we create an anonymous object with the information we’re interested in and finally return it as indented JSON.
It’s time to run it! Open a terminal, navigate to the API project folder (in my case, SecretEndpoint), and run dotnet run. The application will compile and start; you can then navigate to /env and see the default result:
How to change the Environment value
While the applicationName does not change – it is the name of the running assembly, so any other value would stop your application from running – you can (and may want to) change the Environment value.
When running the application using the command line, you can use the --environment flag to specify the Environment value.
So, running
dotnet run --environment MySplendidCustomEnvironment
will produce this result:
There’s another way to set the environment: update the launchSettings.json and run the application using Visual Studio.
To do that, open the launchSettings.json file and update the profile you are using by specifying the Environment name. In my case, the current profile section will be something like this:
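A hypothetical version of that profile section (the profile name, ports, and flags here are assumptions; the only part that matters is the environment variable):

"SecretEndpoint": {
  "commandName": "Project",
  "dotnetRunMessages": true,
  "launchBrowser": true,
  "applicationUrl": "https://localhost:7068;http://localhost:5068",
  "environmentVariables": {
    "ASPNETCORE_ENVIRONMENT": "EnvByProfile"
  }
}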
As you can see, the ASPNETCORE_ENVIRONMENT variable is set to EnvByProfile.
If you run the application using Visual Studio using that profile you will see the following result:
How to list all the configurations in .NET APIs
In my current company, we deploy applications using CI/CD pipelines.
This means that the final configuration values come from the combination of 3 sources:
the project’s appsettings file
the release pipeline
the deployment environment
You can easily understand how difficult it is to debug those applications without knowing the exact values for the configurations. That’s why I came up with these endpoints.
To print all the configurations, we’re gonna use an approach similar to the one we’ve used in the previous example.
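Here's a sketch of what that endpoint could look like, mirroring the /env endpoint above (the /conf route name is an assumption):

app.MapGet("/conf", async context =>
{
    IConfiguration configuration = context.RequestServices.GetRequiredService<IConfiguration>();

    // flatten the whole configuration tree into key-value pairs
    var allSettings = configuration.AsEnumerable()
        .Select(kv => new { kv.Key, kv.Value })
        .ToList();

    var jsonSerializerOptions = new JsonSerializerOptions { WriteIndented = true };
    await context.Response.WriteAsJsonAsync(allSettings, jsonSerializerOptions);
});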
What’s going on? We are retrieving the IConfiguration object, which contains all the configurations loaded at startup; then, we’re listing all the configurations as key-value pairs, and finally, we’re returning the list to the client.
As an example, here’s my current appsettings.json file:
That endpoint shows a lot more than you can imagine: take some time to have a look at those configurations – you’ll thank me later!
How to change the value of a variable
There are many ways to set the value of your variables.
The most common one is by creating an environment-specific appsettings file that overrides some values.
So, if your environment is called “EnvByProfile”, as we’ve defined in the previous example, the file will be named appsettings.EnvByProfile.json.
There are actually some other ways to override application variables: we will learn them in the next article, so stay tuned! 😎
3 ways to hide your endpoints from malicious eyes
Ok then, we have our endpoints up and running, but they are visible to anyone who correctly guesses their addresses. And you don’t want to expose such sensitive info to malicious eyes, right?
There are at least 3 simple ways to hide those endpoints:
Use a non-guessable endpoint: you can use an existing word, such as “housekeeper”, use random letters, such as “lkfrmlvkpeo”, or use a Guid, such as “E8E9F141-6458-416E-8412-BCC1B43CCB24”;
Specify a key on the query string: if that key is not found or has an invalid value, return a 404 Not Found result
Use an HTTP header, and, again, return 404 if it is not valid.
Both query strings and HTTP headers are available in the HttpContext object injected in the route definition.
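As an illustration, here's a sketch of the query-string variant (the key name and the expected value are made up for the example):

app.MapGet("/env", async context =>
{
    // return 404 unless the caller knows the "secret" query-string key
    if (context.Request.Query["secretKey"] != "my-not-so-secret-value")
    {
        context.Response.StatusCode = StatusCodes.Status404NotFound;
        return;
    }

    // ... same environment-info logic as before
});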
Now it’s your turn to find an appropriate way to hide these endpoints. How would you do that? Drop a comment below 📩
✒ Edit 2022-10-10: I thought it was quite obvious, but apparently it is not: these endpoints expose critical information about your applications and your infrastructure, so you should not expose them unless it is strictly necessary! If you have strong authentication in place, use it to secure those endpoints. If you don’t, hide those endpoints the best you can, and show only necessary data, and not everything. Strip out sensitive content. And, as soon as you don’t need that info anymore, remove those endpoints (comment them out or generate them only if a particular flag is set at compilation time). Another possible way is by using feature flags. In the end, take that example with a grain of salt: learn that you can expose them, but keep in mind that you should not expose them.
Further readings
We’ve used a quite new way to build and develop APIs with .NET, called “Minimal APIs”. You can read more here:
If you are not using Minimal APIs, you still might want to create such endpoints. We’ve talked about accessing the HttpContext to get info about the HTTP headers and query string. When using Controllers, accessing the HttpContext requires some more steps. Here’s an article that you may find interesting:
You already know it: using meaningful names for variables, methods, and classes allows you to write more readable and maintainable code.
It may happen that a good name for your business entity matches one of the reserved keywords in C#.
What to do, now?
There are tons of reserved keywords in C#. Some of these are
int
interface
else
null
short
event
params
Some of these names may be a good fit for describing your domain objects or your variables.
Talking about variables, have a look at this example:
var eventList = GetFootballEvents();

foreach (var event in eventList)
{
    // do something
}
That snippet will not work, since event is a reserved keyword.
You can solve this issue in 3 ways.
You can use a synonym, such as action:
var eventList = GetFootballEvents();

foreach (var action in eventList)
{
    // do something
}
But, you know, it doesn’t fully match the original meaning.
You can use the my prefix, like this:
var eventList = GetFootballEvents();

foreach (var myEvent in eventList)
{
    // do something
}
But… does it make sense? Is it really your event?
The third way is by using the @ prefix:
var eventList = GetFootballEvents();

foreach (var @event in eventList)
{
    // do something
}
That way, the code is still readable (even though, I admit, that @ is a bit weird to see around the code).
Of course, the same works for every keyword, like @int, @class, @public, and so on.
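For example (a hypothetical snippet, just to show the prefix applied to parameters and locals):

public void Register(string @class, int @int)
{
    var @public = $"{@class} has {@int} seats";
    Console.WriteLine(@public);
}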
Further readings
If you are interested in a list of reserved keywords in C#, have a look at this article:
LINQ is a set of methods that help developers perform operations on sets of items. There are tons of methods – do you know which is the one for you?
LINQ is one of the most loved functionalities by C# developers. It allows you to perform calculations and projections over a collection of items, making your code easy to build and, even more, easy to understand.
As of C# 11, there are dozens of methods and overloads you can choose from. Some of them seem similar, but there are differences that might not be obvious to C# beginners.
In this article, we’re gonna learn the differences between pairs of similar methods, so that you can choose the one that best fits your needs.
First vs FirstOrDefault
Both First and FirstOrDefault allow you to get the first item of a collection that matches a condition passed as a parameter, usually with a lambda expression:
int[] numbers = new int[] { -2, 1, 6, 12 };
var mod3OrDefault = numbers.FirstOrDefault(n => n % 3 == 0);
var mod3 = numbers.First(n => n % 3 == 0);
Using FirstOrDefault you get the first item that matches the condition. If no items are found you’ll get the default value for that type. The default value depends on the data type:
Data type    Default value
int          0
string       null
bool         false
object       null
To know the default value for a specific type, just use the default operator: for example, default(string) returns null.
So, coming back to FirstOrDefault, we have these two possible outcomes:
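A quick sketch of both outcomes, reusing the numbers array from above:

int[] numbers = new int[] { -2, 1, 6, 12 };

numbers.FirstOrDefault(n => n % 3 == 0); // 6: the first matching item
numbers.FirstOrDefault(n => n % 7 == 0); // 0: no match, so the default value for int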
On the other hand, First throws an InvalidOperationException with the message “Sequence contains no matching element” if no items in the collection match the filter criterion:
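For example, with the same array as before:

int[] numbers = new int[] { -2, 1, 6, 12 };

numbers.First(n => n % 3 == 0); // 6
numbers.First(n => n % 7 == 0); // throws InvalidOperationException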
First vs Single
While First returns the first item that satisfies the condition, even if two or more items satisfy it, Single ensures that no more than one item matches that condition.
If two or more items pass the filter, an InvalidOperationException is thrown with the message “Sequence contains more than one matching element”.
int[] numbers = new int[] { -2, 1, 6, 12 };

numbers.First(n => n % 3 == 0);  // 6
numbers.Single(n => n % 3 == 0); // throws exception because both 6 and 12 are accepted values
Both methods have their corresponding -OrDefault counterpart: SingleOrDefault returns the default value if no items are valid.
int[] numbers = new int[] { -2, 1, 6, 12 };

numbers.SingleOrDefault(n => n % 4 == 0); // 12
numbers.SingleOrDefault(n => n % 7 == 0); // 0, because no items are %7
numbers.SingleOrDefault(n => n % 3 == 0); // throws exception
Any vs Count
Both Any and Count give you indications about the presence or absence of items for which the specified predicate returns True.
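A minimal sketch of how the two read, using the same array as before:

int[] numbers = new int[] { -2, 1, 6, 12 };

bool anyMultipleOf3 = numbers.Any(n => n % 3 == 0);          // true: stops at the first match
bool countMultipleOf3 = numbers.Count(n => n % 3 == 0) > 0;  // true: but enumerates every item to count them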
In this article, we learned the differences between pairs of LINQ methods.
Each of them has a purpose, and you should use the right one for each case.
❓ A question for you: talking about performance, which is more efficient: First or Single? And what about Count() == 0 vs Any()? Drop a message below if you know the answer! 📩
I hope you enjoyed this article! Let’s keep in touch on Twitter or on LinkedIn, if you want! 🤜🤛
For months, Eduard Bodak has been sharing glimpses of his visually rich new website. Now, he’s pulling back the curtain to walk us through how three of its most striking animations were built. In this behind-the-scenes look, he shares the reasoning, technical decisions, and lessons learned—from performance trade-offs to working with CSS variables and a custom JavaScript architecture.
Overview
In this breakdown, I’ll walk you through three of the core GSAP animations on my site: flipping 3D cards that animate on scroll, an interactive card that reacts to mouse movement on the pricing page, and a circular layout of cards that subtly rotates as you scroll. I’ll share how I built each one, why I made certain decisions, and what I learned along the way.
Overview of the animations we’re handling here
I’m using Locomotive Scroll V5 in this project to handle scroll progress and viewport detection. Since it already offers built-in progress tracking via data attributes and CSS variables, I chose to use that directly for triggering animations. ScrollTrigger offers a lot of similar functionality in a more integrated way, but for this build, I wanted to keep everything centered around Locomotive’s scroll system to avoid overlap between two scroll-handling libraries.
Personally, I love the simplicity of Locomotive Scroll. You can just add data attributes to specify the trigger offset of the element within the viewport. You can also get a CSS variable --progress on the element through data attributes. This variable represents the current progress of the element and ranges between 0 and 1. This alone can animate a lot with just CSS.
I used this project to shift my focus toward more animations and visual details. It taught me a lot about GSAP, CSS, and how to adjust animations based on what feels right. I’ve always wanted to build sites that spark a little emotion when people visit them.
Note that this setup was tailored to the specific needs of the project, but in cases where scroll behavior, animations, and state management need to be tightly integrated, GSAP’s ScrollTrigger and ScrollSmoother can offer a more unified foundation.
Now, let’s take a closer look at the three animations in action!
Flipping 3D cards on scroll
I split the animation into two parts. The first is about the cards escaping on scroll. The second is about them coming back and flipping back.
While I’m using Locomotive Scroll, I need data-scroll to enable viewport detection on an element. data-scroll-offset specifies the trigger offset of the element within the viewport. It takes two values: one for the offset when the element enters the viewport, and a second for the offset when the element leaves the viewport. The same can be built with GSAP’s ScrollTrigger, just inside the JS.
data-scroll-event-progress="progressHero" will trigger the custom event I defined here. This event allows you to retrieve the current progress of the element, which ranges between 0 and 1.
Inside the JS we can add an EventListener based on the custom event we defined. Getting the progress from it and transfer it to the GSAP timeline.
Here, this.element is the section we defined before, the one with data-hero-animation.
Now we build the timeline method inside the class: get the current timeline progress, kill the old timeline, and clear any GSAP-applied inline styles (like transforms, opacity, etc.) to avoid residue.
We use requestAnimationFrame() to avoid layout thrashing and initialize a new, paused GSAP timeline. Since we are using Locomotive Scroll, it’s important to pause the timeline so that Locomotive’s progress can drive the animation.
Figuring out relative positioning per card. targetY moves each card down so it ends near the bottom of the container. yOffsets and rotationZValues give each card a unique vertical offset and rotation.
The actual GSAP timeline. Cards slide left or right based on their index (x). Rotate on Z slightly to look scattered. Slide downward (y) to target position. Shrink and tilt (scale, rotateX) for a 3D feel. index * 0.012: adds a subtle stagger between cards.
That’s our timeline for desktop. We can now set up GSAP’s matchMedia() to use it. We can also create different timelines based on the viewport. For example, to adjust the animation on mobile, where such an immersive effect wouldn’t work as well. Even for users who prefer reduced motion, the animation could simply move the cards slightly down and fade them out, as you can see on the live site.
Add this to our init() method to initialize the class when we call it.
init() {
  this.setupBreakpoints();
}
We can also add a div with a background color on top of the card and animate its opacity on scroll so it smoothly disappears.
When you look closely, the cards are floating a bit. To achieve that, we can add a repeating animation to the cards. It’s important to animate yPercent here, because we already animated y earlier, so there won’t be any conflicts.
gsap.utils.random(1.5, 2.5) comes in handy to make each floating animation a bit different, so it looks more natural. repeatRefresh: true lets the duration refresh on every repeat.
Part 02
We basically have the same structure as before. Only now we’re using a sticky container. The service_container has height: 350vh, and the service_sticky has min-height: 100vh. That’s our space to play the animation.
In the JS, we can use the progressService event as before to get our Locomotive Scroll progress. We just have another timeline here. I’m using keyframes to really fine-tune the animation.
const position = 2 - index - 1 changes the position, so cards start spread out: right, center, left. With that we can use those arrays [12, 0, -12] in the right order.
There’s the same setupBreakpoints() method as before, so we actually just need to change the timeline animation and can use the same setup as before, only in a new JS class.
We can add the same floating animation we used in part 01, and then we have the disappearing/appearing card effect.
Part 2.1
Another micro detail in that animation is the small progress preview of the three cards in the top right.
We add data-scroll-css-progress to the previous section to get a CSS variable --progress ranging from 0 to 1, which can be used for dynamic CSS effects. This data attribute comes from Locomotive Scroll.
Using CSS calc() with min() and max() to trigger animations at specific progress points. In this case, the first animation starts at 0% and finishes at 33%, the second starts at 33% and finishes at 66%, and the last starts at 66% and finishes at 100%.
On a closer look, you can see a small slide-in animation of the card before the mouse movement takes effect. This is built in GSAP using the onComplete() callback in the timeline. this.card refers to the element with data-price-card.
I’m using an elastic easing that I got from GSAPs Ease Visualizer. The timeline plays when the page loads and triggers the mouse movement animation once complete.
In our initAnimation() method, we can use GSAP’s matchMedia() to enable the mouse movement only when hover and mouse input are available.
this.mm = gsap.matchMedia();

initAnimation() {
  this.mm.add("(hover: hover) and (pointer: fine) and (prefers-reduced-motion: no-preference)", () => {
    gsap.ticker.add(this.mouseMovement);

    return () => {
      gsap.ticker.remove(this.mouseMovement);
    };
  });

  this.mm.add("(hover: none) and (pointer: coarse) and (prefers-reduced-motion: no-preference)", () => {
    ...
  });
}
By using the media queries hover: hover and pointer: fine, we target only devices that support a mouse and hover. With prefers-reduced-motion: no-preference, we add this animation only when reduced motion is not enabled, making it more accessible. For touch devices or smartphones, we can use hover: none and pointer: coarse to apply a different animation.
I’m using gsap.ticker to run the method this.mouseMovement, which contains the logic for handling the rotation animation.
I originally started with one of the free resources from Osmo (mouse follower) and built this mouse movement animation on top of it. I simplified it to only use the mouse’s x position, which was all I needed.
I also added calculations for how much the card can rotate on the y-axis, and it rotates the z-axis accordingly. That’s how we get this mouse movement animation.
When building these animations, there are always some edge cases I didn’t consider before. For example, what happens when I move my mouse outside the window? Or if I hover over a link or button, should the rotation animation still play?
I added behavior so that when the mouse moves outside, the card rotates back to its original position. The same behavior applies when the mouse leaves the hero section or hovers over navigation elements.
I added a state flag this.isHovering. At the start of mouseMovement(), we check if this.isHovering is false, and if so, return early. The onMouseLeave method rotates the card back to its original position.
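The guard and reset could look like this (a sketch; the exact values are assumptions, while this.isHovering and onMouseLeave come from the description above):
mouseMovement() {
  if (!this.isHovering) return; // no rotation while outside the hero or over nav elements
  // ...rotation logic from the sketch above
}

onMouseLeave() {
  this.isHovering = false;
  gsap.to(this.card, {
    rotationY: 0,
    rotationZ: 0,
    duration: 0.8,
    ease: "power3.out",
    overwrite: "auto",
  });
}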
We can adjust it further by adding another animation for mobile, since there’s no mouse movement there. Or a subtle reflection effect on the card like in the video. This is done by duplicating the card, adding an overlay with a gradient and backdrop-filter, and animating it similarly to the original card, but with opposite values.
Cards in a circular position that slightly rotate on scroll
First, we build the base of the circularly positioned cards in CSS.
At first, we add all 24 cards, then remove the ones we don’t need later because they’re never visible. In the CSS, the .wheel uses display: grid, so we apply grid-area: 1 / 1 to stack the cards. We later add an overlay before the wheel with the same grid-area. By sizing everything in em, a fluid font-size scales the wheel smoothly when the viewport is resized.
We can remove cards 8 through 19, since they sit behind the overlay and are never visible. It should look like this now.
By adding the data attributes and setup for viewport detection from Locomotive Scroll, which we used in previous modules, we can simply add our GSAP timeline for the rotation animation.
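As a rough sketch (the event name and rotation amount are placeholders, not the demo’s values), the wheel timeline can be scrubbed by the section’s progress just like before:
const wheelTl = gsap.timeline({ paused: true });

wheelTl.to(".wheel", {
  rotation: -15, // slight rotation across the whole scroll distance
  transformOrigin: "50% 50%",
  ease: "none",
});

window.addEventListener("progressWheel", (event) => {
  wheelTl.progress(event.detail.progress);
});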
There are probably smarter ways to build these animations than the ones I used. But since this is my first site after changing direction, built with GSAP, Locomotive Scroll V5, Swup.js, and CSS animations, I’m pretty happy with the result. This project became a personal playground for learning, and it really shows that you learn best by building what you imagine. I don’t know how many times I refactored my code along the way, but it gave me a good understanding of creating accessible animations.
I also did a lot of other animations on the site, mostly using CSS animations combined with JavaScript for the logic behind them.
There are also so many great resources out there to learn GSAP and CSS.
Where I learned the most:
It’s all about how you use it. You can copy and paste, which is fast but doesn’t help you learn much. Or you can build on it in your own way and make it yours; at least that’s what helped me learn the most in the end.
Back in November 2024, I shared a post on X about a tool I was building to help visualize kitchen remodels. The response from the Three.js community was overwhelmingly positive. The demo showed how procedural rendering techniques—often used in games—can be applied to real-world use cases like designing and rendering an entire kitchen in under 60 seconds.
In this article, I’ll walk through the process and thinking behind building this kind of procedural 3D kitchen design tool using vanilla Three.js and TypeScript—from drawing walls and defining cabinet segments to auto-generating full kitchen layouts. Along the way, I’ll share key technical choices, lessons learned, and ideas for where this could evolve next.
Have been wanting to redesign my parents’ kitchen for a while now
…so I built them a little 3D kitchen design-tool with @threejs, so they can quickly prototype floorplans/ideas
Here’s me designing a full kitchen remodel in ~60s 🙂
You can try out an interactive demo of the latest version here: https://kitchen-designer-demo.vercel.app/. (Tip: Press the “/” key to toggle between 2D and 3D views.)
Designing Room Layouts with Walls
Example of user drawing a simple room shape using the built-in wall module.
To initiate our project, we begin with the wall drawing module. At a high level, this is akin to Figma’s pen tool, where the user can add one line segment at a time until a closed—or open-ended—polygon is complete on an infinite 2D canvas. In our build, each line segment represents a single wall as a 2D plane from coordinate A to coordinate B, while the complete polygon outlines the perimeter envelope of a room.
We begin by capturing the [X, Z] coordinates (with Y oriented upwards) of the user’s initial click on the infinite floor plane. This 2D point is obtained via Three.js’s built-in raycaster for intersection detection, establishing Point A.
As the user hovers the cursor over a new spot on the floor, we apply the same intersection logic to determine a temporary Point B. During this movement, a preview line segment appears, connecting the fixed Point A to the dynamic Point B for visual feedback.
Upon the user’s second click to confirm Point B, we append the line segment (defined by Points A and B) to an array of segments. The former Point B instantly becomes the new Point A, allowing us to continue the drawing process with additional line segments.
Here is a simplified code snippet demonstrating a basic 2D pen-draw tool using Three.js:
import * as THREE from 'three';
const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 1000);
camera.position.set(0, 5, 10); // Position camera above the floor looking down
camera.lookAt(0, 0, 0);
const renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);
// Create an infinite floor plane for raycasting
const floorGeometry = new THREE.PlaneGeometry(100, 100);
const floorMaterial = new THREE.MeshBasicMaterial({ color: 0xcccccc, side: THREE.DoubleSide });
const floor = new THREE.Mesh(floorGeometry, floorMaterial);
floor.rotation.x = -Math.PI / 2; // Lay flat on XZ plane
scene.add(floor);
const raycaster = new THREE.Raycaster();
const mouse = new THREE.Vector2();
let points: THREE.Vector3[] = []; // i.e. wall endpoints
let tempLine: THREE.Line | null = null;
const walls: THREE.Line[] = [];
function getFloorIntersection(event: MouseEvent): THREE.Vector3 | null {
mouse.x = (event.clientX / window.innerWidth) * 2 - 1;
mouse.y = -(event.clientY / window.innerHeight) * 2 + 1;
raycaster.setFromCamera(mouse, camera);
const intersects = raycaster.intersectObject(floor);
if (intersects.length > 0) {
// Round to simplify coordinates (optional for cleaner drawing)
const point = intersects[0].point;
point.x = Math.round(point.x);
point.z = Math.round(point.z);
point.y = 0; // Ensure on floor plane
return point;
}
return null;
}
// Update temporary line preview
function onMouseMove(event: MouseEvent) {
const point = getFloorIntersection(event);
if (point && points.length > 0) {
// Remove old temp line if exists
if (tempLine) {
scene.remove(tempLine);
tempLine = null;
}
// Create new temp line from last point to current hover
const geometry = new THREE.BufferGeometry().setFromPoints([points[points.length - 1], point]);
const material = new THREE.LineBasicMaterial({ color: 0x0000ff }); // Blue for temp
tempLine = new THREE.Line(geometry, material);
scene.add(tempLine);
}
}
// Add a new point and draw permanent wall segment
function onMouseDown(event: MouseEvent) {
if (event.button !== 0) return; // Left click only
const point = getFloorIntersection(event);
if (point) {
points.push(point);
if (points.length > 1) {
// Draw permanent wall line from previous to current point
const geometry = new THREE.BufferGeometry().setFromPoints([points[points.length - 2], points[points.length - 1]]);
const material = new THREE.LineBasicMaterial({ color: 0xff0000 }); // Red for permanent
const wall = new THREE.Line(geometry, material);
scene.add(wall);
walls.push(wall);
}
// Remove temp line after click
if (tempLine) {
scene.remove(tempLine);
tempLine = null;
}
}
}
// Add event listeners
window.addEventListener('mousemove', onMouseMove);
window.addEventListener('mousedown', onMouseDown);
// Animation loop
function animate() {
requestAnimationFrame(animate);
renderer.render(scene, camera);
}
animate();
The above code snippet is a very basic 2D pen tool, and yet this information is enough to generate an entire room instance. For reference: not only does each line segment represent a wall (2D plane), but the set of accumulated points can also be used to auto-generate the room’s floor mesh, and likewise the ceiling mesh (the inverse of the floor mesh).
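As a hedged sketch (not the tool’s actual implementation, and createFloorMesh is a hypothetical helper), the floor can be built from the accumulated points with a THREE.Shape, and the ceiling is the same footprint lifted to the wall height:
// Build a floor mesh from the drawn wall points (assumes a closed polygon on the XZ plane)
function createFloorMesh(points: THREE.Vector3[]): THREE.Mesh {
  const shape = new THREE.Shape(points.map((p) => new THREE.Vector2(p.x, p.z)));
  const geometry = new THREE.ShapeGeometry(shape);
  geometry.rotateX(Math.PI / 2); // map the shape's XY plane onto the world XZ plane
  const material = new THREE.MeshBasicMaterial({ color: 0xdddddd, side: THREE.DoubleSide });
  return new THREE.Mesh(geometry, material);
}

// Ceiling: same footprint, lifted to the default wall height
const floorMesh = createFloorMesh(points);
const ceilingMesh = floorMesh.clone();
ceilingMesh.position.y = 96;
scene.add(floorMesh, ceilingMesh);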
In order to view the planes representing the walls in 3D, one can transform each THREE.Line into a custom Wall class object, which contains both a line (for orthogonal 2D “floor plan” view) and a 2D inward-facing plane (for perspective 3D “room” view). To build this class:
class Wall extends THREE.Group {
constructor(length: number, height: number = 96, thickness: number = 4) {
super();
// 2D line for top view, along the x-axis
const lineGeometry = new THREE.BufferGeometry().setFromPoints([
new THREE.Vector3(0, 0, 0),
new THREE.Vector3(length, 0, 0),
]);
const lineMaterial = new THREE.LineBasicMaterial({ color: 0xff0000 });
const line = new THREE.Line(lineGeometry, lineMaterial);
this.add(line);
// 3D wall as a box for thickness
const wallGeometry = new THREE.BoxGeometry(length, height, thickness);
const wallMaterial = new THREE.MeshBasicMaterial({ color: 0xaaaaaa, side: THREE.DoubleSide });
const wall = new THREE.Mesh(wallGeometry, wallMaterial);
wall.position.set(length / 2, height / 2, 0);
this.add(wall);
}
}
We can now update the wall draw module to utilize this newly created Wall object:
// Update our variables
let tempWall: Wall | null = null;
const walls: Wall[] = [];
// Replace line creation in onMouseDown with
if (points.length > 1) {
const start = points[points.length - 2];
const end = points[points.length - 1];
const direction = end.clone().sub(start);
const length = direction.length();
const wall = new Wall(length);
wall.position.copy(start);
wall.rotation.y = Math.atan2(-direction.z, direction.x); // Align the wall's local +X with the drawn direction (z negated because a positive Y rotation turns +X toward -Z)
scene.add(wall);
walls.push(wall);
}
Upon adding the floor and ceiling meshes, we can further transform our wall module into a room generation module. To recap what we have just created: by adding walls one by one, we have given the user the ability to create full rooms with walls, floors, and ceilings—all of which can be adjusted later in the scene.
User dragging out the wall in 3D perspective camera-view.
Generating Cabinets with Procedural Modeling
Our cabinet-related logic can consist of countertops, base cabinets, and wall cabinets.
Rather than taking several minutes to add the cabinets on a case-by-case basis—for example, like with IKEA’s 3D kitchen builder—it’s possible to add all the cabinets at once via a single user action. One method to employ here is to allow the user to draw high-level cabinet line segments, in the same manner as the wall draw module.
In this module, each cabinet segment will transform into a linear row of base and wall cabinets, along with a parametrically generated countertop mesh on top of the base cabinets. As the user creates the segments, we can automatically populate each one with 3D cabinet meshes pre-made in modeling software like Blender. Ultimately, each cabinet’s width, depth, and height are fixed, except for the last cabinet in a row, whose width can flex to fill the remaining space. We use a cabinet filler piece mesh here: a regular plank, with its scale-X parameter stretched or compressed as needed.
Creating the Cabinet Line Segments
User can make a half-peninsula shape by dragging the cabinetry line segments alongside the walls, then in free-space.
Here we will construct a dedicated cabinet module, with the aforementioned cabinet line segment logic. This process is very similar to the wall drawing mechanism, where users can draw straight lines on the floor plane using mouse clicks to define both start and end points. Unlike walls, which can be represented by simple thin lines, cabinet line segments need to account for a standard depth of 24 inches to represent the base cabinets’ footprint. These segments do not require closing-polygon logic, as they can be standalone rows or L-shapes, as is common in most kitchen layouts.
We can further improve the user experience by incorporating snapping functionality, where the endpoints of a cabinet line segment automatically align to nearby wall endpoints or wall intersections, if within a certain threshold (e.g., 4 inches). This ensures cabinets fit snugly against walls without requiring manual precision. For simplicity, we’ll outline the snapping logic in code but focus on the core drawing functionality.
We can start by defining the CabinetSegment class. Like the walls, this should be its own class, as we will later add the auto-populating 3D cabinet models.
class CabinetSegment extends THREE.Group {
public length: number;
constructor(length: number, height: number = 96, depth: number = 24, color: number = 0xff0000) {
super();
this.length = length;
const geometry = new THREE.BoxGeometry(length, height, depth);
const material = new THREE.MeshBasicMaterial({ color, wireframe: true });
const box = new THREE.Mesh(geometry, material);
box.position.set(length / 2, height / 2, depth / 2); // Shift so depth spans 0 to depth (inward)
this.add(box);
}
}
Once we have the cabinet segment, we can use it in a manner very similar to the wall line segments:
let cabinetPoints: THREE.Vector3[] = [];
let tempCabinet: CabinetSegment | null = null;
const cabinetSegments: CabinetSegment[] = [];
const CABINET_DEPTH = 24; // everything in inches
const CABINET_SEGMENT_HEIGHT = 96; // i.e. both wall & base cabinets -> group should extend to ceiling
const SNAPPING_DISTANCE = 4;
function getSnappedPoint(point: THREE.Vector3): THREE.Vector3 {
// Simple snapping: check against existing wall points (wallPoints array from wall module)
for (const wallPoint of wallPoints) {
if (point.distanceTo(wallPoint) < SNAPPING_DISTANCE) return wallPoint;
}
return point;
}
// Update temporary cabinet preview
function onMouseMoveCabinet(event: MouseEvent) {
const point = getFloorIntersection(event);
if (point && cabinetPoints.length > 0) {
const snappedPoint = getSnappedPoint(point);
if (tempCabinet) {
scene.remove(tempCabinet);
tempCabinet = null;
}
const start = cabinetPoints[cabinetPoints.length - 1];
const direction = snappedPoint.clone().sub(start);
const length = direction.length();
if (length > 0) {
tempCabinet = new CabinetSegment(length, CABINET_SEGMENT_HEIGHT, CABINET_DEPTH, 0x0000ff); // Blue for temp
tempCabinet.position.copy(start);
tempCabinet.rotation.y = Math.atan2(-direction.z, direction.x); // Align local +X with the drawn direction
scene.add(tempCabinet);
}
}
}
// Add a new point and draw permanent cabinet segment
function onMouseDownCabinet(event: MouseEvent) {
if (event.button !== 0) return;
const point = getFloorIntersection(event);
if (point) {
const snappedPoint = getSnappedPoint(point);
cabinetPoints.push(snappedPoint);
if (cabinetPoints.length > 1) {
const start = cabinetPoints[cabinetPoints.length - 2];
const end = cabinetPoints[cabinetPoints.length - 1];
const direction = end.clone().sub(start);
const length = direction.length();
if (length > 0) {
const segment = new CabinetSegment(length, CABINET_SEGMENT_HEIGHT, CABINET_DEPTH, 0xff0000); // Red for permanent
segment.position.copy(start);
segment.rotation.y = Math.atan2(-direction.z, direction.x); // Align local +X with the drawn direction
scene.add(segment);
cabinetSegments.push(segment);
}
}
if (tempCabinet) {
scene.remove(tempCabinet);
tempCabinet = null;
}
}
}
// Add separate event listeners for cabinet mode (e.g., toggled via UI button)
window.addEventListener('mousemove', onMouseMoveCabinet);
window.addEventListener('mousedown', onMouseDownCabinet);
Auto-Populating the Line Segments with Live Cabinet Models
Here we fill 2 line-segments with 3D cabinet models (base & wall), and countertop meshes.
Once the cabinet line segments are defined, we can procedurally populate them with detailed components. This involves dividing each segment vertically into three layers: base cabinets at the bottom, countertops in the middle, and wall cabinets above. For the base and wall cabinets, we’ll use an optimization function to divide the segment’s length into standard widths (preferring 30-inch cabinets), with any remainder filled using the filler piece mentioned above. Countertops are even simpler—they form a single continuous slab stretching the full length of the segment.
The base cabinets are set to 24 inches deep and 34.5 inches high. Countertops add 1.5 inches in height and extend to 25.5 inches deep (including a 1.5-inch overhang). Wall cabinets start at 54 inches high (18 inches above the countertop), measure 12 inches deep, and are 30 inches tall. After generating these placeholder bounding boxes, we can replace them with preloaded 3D models from Blender using a loading function (e.g., via GLTFLoader).
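Expressed as constants (in inches), these are the values the following snippets assume; the constant names, the filler-piece dimensions, and the model path are assumptions rather than part of the original tool:
const BASE_HEIGHT = 34.5;
const BASE_DEPTH = 24;
const COUNTER_HEIGHT = 1.5;
const COUNTER_DEPTH = 25.5;     // 24" base depth + 1.5" overhang
const WALL_START_Y = 54;        // countertop surface (36") + 18" gap
const WALL_HEIGHT = 30;
const WALL_DEPTH = 12;
const DEFAULT_MODEL_WIDTH = 30; // preferred cabinet width
const FILLER_PIECE_FALLBACK_PATH = 'models/filler_piece.glb'; // assumed path
const FILLER_PIECE_WIDTH = 3;   // assumed filler plank dimensions
const FILLER_PIECE_HEIGHT = 34.5;
const FILLER_PIECE_DEPTH = 24;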
To handle individual cabinets, we’ll create a simple Cabinet class that manages the placeholder and model loading.
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';
const loader = new GLTFLoader();
class Cabinet extends THREE.Group {
constructor(width: number, height: number, depth: number, modelPath: string, color: number) {
super();
// Placeholder box
const geometry = new THREE.BoxGeometry(width, height, depth);
const material = new THREE.MeshBasicMaterial({ color });
const placeholder = new THREE.Mesh(geometry, material);
this.add(placeholder);
// Load and replace with model async
// Case: Non-standard width -> use filler piece
if (width < DEFAULT_MODEL_WIDTH) {
loader.load(FILLER_PIECE_FALLBACK_PATH, (gltf) => {
const model = gltf.scene;
model.scale.set(
width / FILLER_PIECE_WIDTH,
height / FILLER_PIECE_HEIGHT,
depth / FILLER_PIECE_DEPTH,
);
this.add(model);
this.remove(placeholder);
});
return; // Skip the standard cabinet model; the filler piece already fills this slot
}
loader.load(modelPath, (gltf) => {
const model = gltf.scene;
model.scale.set(width / DEFAULT_MODEL_WIDTH, 1, 1); // Scale width
this.add(model);
this.remove(placeholder);
});
}
}
Then, we can add a populate method to the existing CabinetSegment class:
function splitIntoCabinets(width: number): number[] {
const cabinets: number[] = [];
// Preferred width
while (width >= DEFAULT_MODEL_WIDTH) {
cabinets.push(DEFAULT_MODEL_WIDTH);
width -= DEFAULT_MODEL_WIDTH;
}
if (width > 0) {
cabinets.push(width); // Remainder: a non-standard slot, later filled by the filler piece
}
return cabinets;
}
class CabinetSegment extends THREE.Group {
// ... (existing constructor and properties)
populate() {
// Remove placeholder line and box
while (this.children.length > 0) {
this.remove(this.children[0]);
}
let offset = 0;
const widths = splitIntoCabinets(this.length);
// Base cabinets
widths.forEach((width) => {
const baseCab = new Cabinet(width, BASE_HEIGHT, BASE_DEPTH, 'models/base_cabinet.glb', 0x8b4513);
baseCab.position.set(offset + width / 2, BASE_HEIGHT / 2, BASE_DEPTH / 2);
this.add(baseCab);
offset += width;
});
// Countertop (single slab, no model)
const counterGeometry = new THREE.BoxGeometry(this.length, COUNTER_HEIGHT, COUNTER_DEPTH);
const counterMaterial = new THREE.MeshBasicMaterial({ color: 0xa9a9a9 });
const counter = new THREE.Mesh(counterGeometry, counterMaterial);
counter.position.set(this.length / 2, BASE_HEIGHT + COUNTER_HEIGHT / 2, COUNTER_DEPTH / 2);
this.add(counter);
// Wall cabinets
offset = 0;
widths.forEach((width) => {
const wallCab = new Cabinet(width, WALL_HEIGHT, WALL_DEPTH, 'models/wall_cabinet.glb', 0x4b0082);
wallCab.position.set(offset + width / 2, WALL_START_Y + WALL_HEIGHT / 2, WALL_DEPTH / 2);
this.add(wallCab);
offset += width;
});
}
}
// Call for each cabinetSegment after drawing
cabinetSegments.forEach((segment) => segment.populate());
Further Improvements & Optimizations
We can further improve the scene with appliances, varying-height cabinets, crown molding, etc.
At this point, we should have the foundational elements of room and cabinet creation logic fully in place. To take this project from a rudimentary segment-drawing app into the practical realm, with dynamic cabinets, multiple realistic material options, and real appliance meshes, we can further enhance the user experience through several targeted refinements:
We can implement a detection mechanism to determine if a cabinet line segment is in contact with a wall line segment.
For cabinet rows that run parallel to walls, we can automatically incorporate a backsplash in the space between the wall cabinets and the countertop surface.
For cabinet segments not adjacent to walls, we can remove the upper wall cabinets and extend the countertop by an additional 15 inches, aligning with standard practices for kitchen islands or peninsulas.
We can introduce drag-and-drop functionality for appliances, each with predefined widths, allowing users to position them along the line segment. This integration will instruct our cabinet-splitting algorithm to exclude those areas from dynamic cabinet generation.
Additionally, we can give users more flexibility by enabling the swapping of one appliance with another, applying different textures to our 3D models, and adjusting default dimensions—such as wall cabinet depth or countertop overhang—to suit specific preferences.
All these core components add up to a comprehensive application that enables the rapid rendering of a complete kitchen: cabinets, countertops, and appliances, in a fully interactive, user-driven experience.
The aim of this project is to demonstrate that complex 3D tasks can be distilled down to simple user actions. It is fully possible to take the high-dimensional complexity of 3D tooling—with seemingly limitless controls—and encode these complexities into low-dimensional, easily adjustable parameters. Whether the developer chooses to expose these parameters to the user or an LLM, the end result is that historically complicated 3D processes can become simple, and thus the entire contents of a 3D scene can be fully transformed with only a few parameters.
If you find this type of development interesting, have any great ideas, or would love to contribute to the evolution of this product, I strongly welcome you to reach out to me via email. I firmly believe that only recently has it become possible to build home design software that is so wickedly fast and intuitive that any person—regardless of architectural merit—will be able to design their own single-family home in less than 5 minutes via a web app, while fully adhering to local zoning, architectural, and design requirements. All the infrastructure necessary to accomplish this already exists; all it takes is a team of crazy, ambitious developers looking to change the standard of architectural home design.