Recently, I came across some great inspiration for 3D animations. There are so many possibilities, but it can be tricky to find the right balance and not overdo it. Anything 3D on a website looks especially impressive when scrolled, as the motion reveals the magic of 3D to our eyes, even though the screen is flat (still) 🙂
This one gave me a lot of inspiration for an on-scroll effect:
Your dose of everyday inspiration — Featuring some of the most popular images on Savee™ this week.
And then this awesome reel by Thomas Monavon, too:
So here’s a small scroll experiment with rotating 3D panels, along with a page transition animation using GSAP:
You’ve surely heard the news that GSAP is now completely free, which means we can use those great plugins and share the code with you! In this specific example, I used the rewritten SplitText and ScrollSmoother.
This is just a proof of concept (especially the page transition).
I really hope you enjoy this and find it inspirational!
In today’s interconnected digital economy, organizations manage personal data across multiple jurisdictions, which requires a clear understanding of global data protection frameworks. The European Union’s General Data Protection Regulation (GDPR) and India’s Digital Personal Data Protection Act (DPDP) 2023 are two key regulations shaping the data privacy landscape. This guide provides a comparative analysis of the two, outlining key distinctions for businesses operating across both regions.
Understanding the GDPR: Key Considerations for Businesses
The GDPR, in force since May 2018, is a comprehensive data protection law that applies to any organization processing the personal data of EU residents, regardless of where the organization is located.
Territorial Scope: GDPR applies to organizations with an establishment in the EU or those that offer goods or services to, or monitor the behavior of, EU residents, requiring many global enterprises to comply.
Definition of Personal Data: The GDPR defines personal data as any information related to an identifiable individual. It further classifies sensitive personal data and imposes stricter processing requirements.
Principles of Processing: Compliance requires adherence to lawfulness, fairness, transparency, purpose limitation, data minimization, accuracy, storage limitation, integrity, confidentiality, and accountability in data processing activities.
Lawful Basis for Processing: Businesses must establish a lawful basis for processing, such as consent, contract, legal obligation, vital interests, public task, or legitimate interest.
Data Subject Rights: GDPR grants individuals rights, including access, rectification, erasure, restriction, data portability, and objection to processing, necessitating dedicated mechanisms to address these requests.
Obligations of Controllers and Processors: GDPR imposes direct responsibilities on data controllers and processors, requiring them to implement security measures, maintain processing records, and adhere to breach notification protocols.
Understanding the DPDP Act 2023: Implications for Businesses in India
The DPDP Act 2023, enacted in August 2023, establishes a legal framework for the processing of digital personal data in India.
Territorial Scope: The Act applies to digital personal data processing in India and processing outside India if it involves offering goods or services to Indian data principals.
Definition of Personal Data: Personal data refers to any data that identifies an individual, specifically in digital form. Unlike GDPR, the Act does not differentiate between general and sensitive personal data (though future classifications may emerge).
Principles of Data Processing: The Act mandates lawful and transparent processing, purpose limitation, data minimization, accuracy, storage limitation, security safeguards, and accountability.
Lawful Basis for Processing: The primary basis for processing is free, specific, informed, unconditional, and unambiguous consent, with certain legitimate-use exceptions.
Rights of Data Principals: Individuals can access, correct, and erase their data, seek grievance redressal, and nominate another person to exercise their rights if they become incapacitated.
Obligations of Data Fiduciaries and Processors: The Act imposes direct responsibilities on Data Fiduciaries (equivalent to GDPR controllers) to obtain consent, ensure data accuracy, implement safeguards, and report breaches. Data Processors (like GDPR processors) operate under contractual obligations set by Data Fiduciaries.
GDPR vs. DPDP: Key Differences for Businesses
| Feature | GDPR | DPDP Act 2023 | Business Implications |
| --- | --- | --- | --- |
| Data Scope | Covers both digital and non-digital personal data within a filing system. | Applies primarily to digital personal data. | Businesses need to assess their data inventory and processing activities, particularly for non-digital data handled in India. |
| Sensitive Data | Explicitly defines sensitive personal data and imposes stricter rules for processing it. | Currently applies a uniform standard to all digital personal data. | Organizations should be mindful of potential future classifications of sensitive data under DPDP. |
| Lawful Basis | Offers multiple lawful bases for processing, including legitimate interests and contractual necessity. | Primarily consent-based, with limited exceptions for legitimate uses. | Businesses need to prioritize obtaining explicit consent for data processing in India and carefully evaluate the scope of legitimate-use exceptions. |
| Individual Rights | Provides a broader range of rights, including data portability and the right to object to profiling. | Focuses on core rights such as access, correction, and erasure. | Compliance programs should address the specific set of rights granted under the DPDP Act. |
| Data Transfer | Strict mechanisms for international transfers, requiring adequacy decisions or appropriate safeguards. | Permits cross-border transfers except to countries specifically restricted by the Indian government. | Businesses need to monitor the list of restricted countries for data transfers from India. |
| Breach Notification | Requires notification to the supervisory authority within 72 hours unless the breach is unlikely to result in a risk; affected individuals must be informed when the risk to them is high. | Mandates notification to both the Data Protection Board and affected Data Principals for all breaches. | Organizations must establish comprehensive breach response plans aligned with DPDP’s broader notification requirements. |
| Enforcement | Enforced by supervisory authorities (Data Protection Authorities) in each EU member state. | Enforced by the central Data Protection Board of India. | Businesses need to be aware of the centralized enforcement mechanism under the DPDP Act. |
| Data Protection Officer (DPO) | Mandatory for certain organizations based on their processing activities. | Mandatory for Significant Data Fiduciaries, with criteria to be specified. | Organizations that meet the criteria for Significant Data Fiduciaries under DPDP will need to appoint a DPO. |
| Data Processor Obligations | Imposes direct obligations on data processors. | Obligations are primarily contractual, set by Data Fiduciaries. | Data Fiduciaries in India bear greater responsibility for ensuring the compliance of their Data Processors. |
Navigating Global Compliance: A Strategic Approach for Businesses
Organizations subject to GDPR and DPDP must implement a harmonized yet region-specific compliance strategy. Key focus areas include:
Data Mapping and Inventory: Identify and categorize personal data flows across jurisdictions to determine applicable regulatory requirements.
Consent Management: Implement mechanisms that align with GDPR’s “freely given, specific, informed, and unambiguous” consent standard and DPDP’s stricter “free, specific, informed, unconditional, and unambiguous” requirement. Ensure easy withdrawal options.
Data Security Measures: Deploy technical and organizational safeguards proportionate to data processing risks, meeting the security mandates of both regulations.
Data Breach Response Plan: Establish incident response protocols that meet GDPR and DPDP notification requirements, particularly DPDP’s broader scope.
Data Subject/Principal Rights Management: Develop workflows to handle data access, correction, and erasure requests under both regulations, ensuring compliance with response timelines.
Cross-Border Data Transfer Mechanisms: Implement safeguards for international data transfers, aligning with GDPR’s standard contractual clauses and DPDP’s yet-to-be-defined jurisdictional rules.
Appointment of DPO/Contact Person: Assess whether a Data Protection Officer (DPO) is required under GDPR or if the organization qualifies as a Significant Data Fiduciary under DPDP, necessitating a DPO or designated contact person.
Employee Training: Conduct training programs on data privacy laws and best practices to maintain team compliance awareness.
Regular Audits: Perform periodic audits to evaluate data protection measures, adapting to evolving regulatory guidelines.
Conclusion: Towards a Global Privacy-Centric Approach
While GDPR and the DPDP Act 2023 share a common goal of enhancing data protection, they differ in scope, consent requirements, and enforcement mechanisms. Businesses operating across multiple jurisdictions must adopt a comprehensive, adaptable compliance strategy that aligns with both regulations.
By strengthening data governance, implementing robust security controls, and fostering a privacy-first culture, organizations can navigate global data protection challenges effectively and build trust with stakeholders.
Yesterday Online PNG Tools smashed through 6.42M Google clicks and today it’s smashed through 6.43M Google clicks! That’s 10,000 new clicks in a single day – the smash train keeps on rollin’!
What Are Online PNG Tools?
Online PNG Tools offers a collection of easy-to-use web apps that help you work with PNG images right in your browser. It’s like a Swiss Army Knife for anything PNG-related. On this site, you can create transparent PNGs, edit icons, clean up logos, crop stamps, change colors of signatures, and customize stickers – there’s a tool for it all. The best part is that you don’t need to install anything or be a graphic designer. All tools are made for regular people who just want to get stuff done with their images. No sign-ups, no downloads – just quick and easy PNG editing tools.
Who Created Online PNG Tools?
Online PNG Tools were created by me and my team at Browserling. We’ve built simple, browser-based tools that anyone can use without needing to download or install anything. Along with PNG tools, we also work on cross-browser testing to help developers make sure their websites work great on all web browsers. Our mission is to make online tools that are fast, easy to use, and helpful for everyday tasks like editing icons, logos, and signatures.
Who Uses Online PNG Tools?
Online PNG Tools and Browserling are used by everyone – from casual users to professionals and even Fortune 100 companies. Casual users often use them to make memes, edit profile pictures, or remove backgrounds. Professionals use them to clean up logos, design icons, or prepare images for websites and apps.
Understanding the Fetch API can be challenging, particularly for those new to JavaScript’s unique approach to handling asynchronous operations. Among the many features of modern JavaScript, the Fetch API stands out for its ability to handle network requests elegantly. However, the syntax of chaining .then() methods can seem unusual at first glance. To fully grasp how the Fetch API works, it’s vital to understand three core concepts:
In programming, synchronous code is executed in sequence. Each statement waits for the previous one to finish before executing. JavaScript, being single-threaded, runs code in a linear fashion. However, certain operations, like network requests, file system tasks, or timers, could block this thread, making the user experience unresponsive.
Here’s a simple example of synchronous code:
```javascript
function doTaskOne() {
  console.log('Task 1 completed');
}

function doTaskTwo() {
  console.log('Task 2 completed');
}

doTaskOne(); // runs first and finishes...
doTaskTwo(); // ...before this one starts
```
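For contrast, here is the kind of .then() chaining the Fetch API uses. This is a generic illustration (the URL is a placeholder): the request runs in the background and the callbacks fire once the response arrives, so the code after the call is not blocked.

```javascript
// Fetch API with chained .then() calls: asynchronous and non-blocking.
fetch('https://api.example.com/data') // placeholder URL
  .then((response) => {
    if (!response.ok) {
      throw new Error(`HTTP error: ${response.status}`);
    }
    return response.json(); // parsing the body is itself asynchronous
  })
  .then((data) => {
    console.log('Data received:', data);
  })
  .catch((error) => {
    console.error('Request failed:', error);
  });

console.log('This line runs before the response arrives');
```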
Let’s dive a bit deeper into the heart of our WebAssembly integration by exploring the key segments of our Go-based WASM code.
The first step involves preparing and annotating our Go code so that it can be compiled for a WebAssembly runtime.
```go
//go:build wasm
// +build wasm
```
These lines serve as directives to the Go compiler, signaling that the following code is designated for a WebAssembly runtime environment. Specifically:
//go:build wasm: A build constraint ensuring the code is compiled only for WASM targets, adhering to modern syntax.
// +build wasm: An analogous constraint, utilizing older syntax for compatibility with prior Go versions.
In essence, these directives guide the compiler to include this code segment only when compiling for a WebAssembly architecture, ensuring an appropriate setup and function within this specific runtime.
```go
package main

import (
	"context"
	"encoding/json"
	"syscall/js"

	"google.golang.org/protobuf/encoding/protojson"

	"github.com/Permify/permify/pkg/development"
)

var dev *development.Development
```
```go
func run() js.Func {
	// The `run` function returns a new JavaScript function
	// that wraps the Go function.
	return js.FuncOf(func(this js.Value, args []js.Value) interface{} {
		// t will be used to store the unmarshaled JSON data.
		// The use of an empty interface{} type means it can hold any type of value.
		var t interface{}

		// Unmarshal JSON from the JavaScript function argument (args[0]) into Go's data structure (a map).
		// args[0].String() gets the JSON string from the JavaScript argument,
		// which is then converted to bytes and unmarshaled (parsed) into `t`.
		err := json.Unmarshal([]byte(args[0].String()), &t)

		// If an error occurs while unmarshaling (parsing) the JSON,
		// return an array with the error message "invalid JSON" to JavaScript.
		if err != nil {
			return js.ValueOf([]interface{}{"invalid JSON"})
		}

		// Assert that the parsed JSON (`t`) is a map with string keys.
		// This step ensures that the unmarshaled JSON is of the expected type (map).
		input, ok := t.(map[string]interface{})

		// If the assertion fails (`ok` is false),
		// return an array with the error message "invalid JSON" to JavaScript.
		if !ok {
			return js.ValueOf([]interface{}{"invalid JSON"})
		}

		// Run the main logic of the application with the parsed input.
		// `dev.Run` processes `input` and returns any errors encountered along the way.
		errors := dev.Run(context.Background(), input)

		// If no errors are present (the `errors` slice is empty),
		// return an empty array to JavaScript to indicate success.
		if len(errors) == 0 {
			return js.ValueOf([]interface{}{})
		}

		// If there are errors, each error in the `errors` slice is marshaled (converted) to a JSON string.
		// `vs` is a slice that will store each of these JSON error strings.
		vs := make([]interface{}, 0, len(errors))

		// Iterate through each error in the `errors` slice.
		for _, r := range errors {
			// Convert the error `r` to a JSON string.
			// If marshaling fails, return an array with that error message to JavaScript.
			result, err := json.Marshal(r)
			if err != nil {
				return js.ValueOf([]interface{}{err.Error()})
			}
			// Add the JSON error string to the `vs` slice.
			vs = append(vs, string(result))
		}

		// Return the `vs` slice (containing all JSON error strings) to JavaScript.
		return js.ValueOf(vs)
	})
}
```
Within the realm of Permify, the run function stands as a cornerstone, executing a crucial bridging operation between JavaScript inputs and Go’s processing capabilities. It orchestrates real-time data interchange in JSON format, safeguarding that Permify’s core functionalities are smoothly and instantaneously accessible via a browser interface.
Digging into run:
JSON Data Interchange: Translating JavaScript inputs into a format utilizable by Go, the function unmarshals JSON, transferring data between JS and Go, assuring that the robust processing capabilities of Go can seamlessly manipulate browser-sourced inputs.
Error Handling: Ensuring clarity and user-awareness, it conducts meticulous error-checking during data parsing and processing, returning relevant error messages back to the JavaScript environment to ensure user-friendly interactions.
Contextual Processing: By employing dev.Run, it processes the parsed input within a certain context, managing application logic while handling potential errors to assure steady data management and user feedback.
Bidirectional Communication: As errors are marshaled back into JSON format and returned to JavaScript, the function ensures a two-way data flow, keeping both environments in synchronized harmony.
Thus, through adeptly managing data, error-handling, and ensuring a fluid two-way communication channel, run serves as an integral bridge, linking JavaScript and Go to ensure the smooth, real-time operation of Permify within a browser interface. This facilitation of interaction not only heightens user experience but also leverages the respective strengths of JavaScript and Go within the Permify environment.
```go
// Continuing from the previously discussed code...

func main() {
	// Instantiate a channel, 'ch', with no buffer, acting as a synchronization point for the goroutine.
	ch := make(chan struct{}, 0)

	// Create a new instance of 'Container' from the 'development' package
	// and assign it to the global variable 'dev'.
	dev = development.NewContainer()

	// Attach the previously defined 'run' function to the global JavaScript object,
	// making it callable from the JavaScript environment.
	js.Global().Set("run", run())

	// Use a channel receive expression to halt the 'main' goroutine, preventing the program from terminating.
	<-ch
}
```
ch := make(chan struct{}, 0): A synchronization channel is created to coordinate the activity of goroutines (concurrent threads in Go).
dev = development.NewContainer(): Initializes a new container instance from the development package and assigns it to dev.
js.Global().Set("run", run()): Exposes the Go run function to the global JavaScript context, enabling JavaScript to call Go functions.
<-ch: Halts the main goroutine indefinitely, ensuring that the Go WebAssembly module remains active in the JavaScript environment.
In summary, the code establishes a Go environment running within WebAssembly that exposes specific functionality (run function) to the JavaScript side and keeps itself active and available for function calls from JavaScript.
Before we delve into Permify’s rich functionalities, it’s paramount to elucidate the steps of converting our Go code into a WASM module, priming it for browser execution.
For enthusiasts eager to delve deep into the complete Go codebase, don’t hesitate to browse our GitHub repository: Permify Wasm Code.
Kickstart the transformation of our Go application into a WASM binary with this command:
```bash
GOOS=js GOARCH=wasm go build -o permify.wasm main.go
```
This directive cues the Go compiler to churn out a .wasm binary attuned for JavaScript environments, with main.go as the source. The output, permify.wasm, is a concise rendition of our Go capabilities, primed for web deployment.
In conjunction with the WASM binary, the Go ecosystem offers an indispensable JavaScript piece named wasm_exec.js. It’s pivotal for initializing and facilitating our WASM module within a browser setting. You can typically locate this essential script inside the Go installation, under misc/wasm.
However, to streamline your journey, we’ve hosted wasm_exec.js right here for direct access: wasm_exec.
cp "$(go env GOROOT)/misc/wasm/wasm_exec.js" .
Equipped with these pivotal assets — the WASM binary and its companion JavaScript — the stage is set for its amalgamation into our frontend.
To kick things off, ensure you have a directory structure that clearly separates your WebAssembly-related code from the rest of your application. In the structure below, the loadWasm folder is where all the magic happens:
```
loadWasm/
│
├── index.tsx       // Your main React component that integrates WASM.
├── wasm_exec.js    // Provided by Go, bridges the gap between Go's WASM and JS.
└── wasmTypes.d.ts  // TypeScript type declarations for WebAssembly.
```
To view the complete structure and delve into the specifics of each file, refer to the Permify Playground on GitHub.
Inside the wasmTypes.d.ts, global type declarations are made which expand upon the Window interface to acknowledge the new methods brought in by Go’s WebAssembly:
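The declarations themselves aren’t included in this excerpt; a minimal sketch of what they might look like, assuming only the Go constructor from wasm_exec.js and the run function exported above (the real file in the repository may declare more), is:

```typescript
// wasmTypes.d.ts: a minimal sketch; the actual declarations in the Permify Playground may differ.
declare global {
  interface Window {
    // Constructor provided by wasm_exec.js
    Go: new () => {
      importObject: WebAssembly.Imports;
      run: (instance: WebAssembly.Instance) => Promise<void>;
    };
    // The Go function exposed via js.Global().Set("run", run());
    // it expects a JSON string and returns an array of error strings.
    run: (shape: string) => string[];
  }
}

export {};
```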
WebAssembly Initialization: The asynchronous function loadWasm takes care of the entire process:
```typescript
async function loadWasm(): Promise<void> {
  const goWasm = new window.Go();
  const result = await WebAssembly.instantiateStreaming(
    fetch("play.wasm"),
    goWasm.importObject
  );
  goWasm.run(result.instance);
}
```
Here, new window.Go() initializes the Go WASM environment. WebAssembly.instantiateStreaming fetches the WASM module, compiles it, and creates an instance. Finally, goWasm.run activates the WASM module.
React Component with Loader UI: The LoadWasm component uses the useEffect hook to asynchronously load the WebAssembly when the component mounts:
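The component code isn’t shown in this excerpt; a minimal sketch, assuming a simple loading flag and the loadWasm helper above (naming may differ from the real Permify Playground component), could be:

```tsx
import React, { useEffect, useState } from "react";

// Assumed to be the helper shown above; declared here only so the sketch is self-contained.
declare function loadWasm(): Promise<void>;

// Minimal sketch of the LoadWasm wrapper described below.
export function LoadWasm({ children }: { children: React.ReactNode }) {
  const [isLoading, setIsLoading] = useState(true);

  useEffect(() => {
    // Load and start the WASM module once, when the component mounts.
    loadWasm().then(() => setIsLoading(false));
  }, []);

  if (isLoading) {
    // Placeholder for the SVG rocket loader described below.
    return <div className="wasm-loader">Loading…</div>;
  }

  return <>{children}</>;
}
```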
While loading, an SVG rocket is displayed to indicate that initialization is ongoing. This feedback is crucial, as users might otherwise be uncertain about what’s happening behind the scenes. Once loading completes, the child components render.
Given that your Go WASM module exposes a function named run, you can invoke it as follows:
```typescript
function Run(shape) {
  return new Promise((resolve) => {
    let res = window.run(shape);
    resolve(res);
  });
}
```
This function essentially acts as a bridge, allowing the React frontend to communicate with the Go backend logic encapsulated in the WASM.
To integrate a button that triggers the WebAssembly function when clicked, follow these steps:
Creating the Button Component
First, we’ll create a simple React component with a button:
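The component itself isn’t included in this excerpt; a minimal sketch consistent with the two props described just below (the exact signatures and markup are assumptions) might look like this:

```tsx
import React from "react";

// Assumed to be the Run() bridge function shown earlier; declared here so the sketch is self-contained.
declare function Run(shape: string): Promise<string[]>;

// Minimal sketch of RunButton; the real component may differ.
type RunButtonProps = {
  shape: string; // typically the JSON-stringified shape content passed to the WASM `run` function
  onResult: (result: string[]) => void; // receives the result so the caller can update its state
};

export default function RunButton({ shape, onResult }: RunButtonProps) {
  const handleClick = async () => {
    const result = await Run(shape); // call into the Go WASM module
    onResult(result);
  };

  return <button onClick={handleClick}>Run</button>;
}
```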
In the code above, the RunButton component accepts two props:
shape: The shape argument to pass to the WebAssembly run function.
onResult: A callback function that receives the result of the WebAssembly function and can be used to update the state or display the result in the UI.
Integrating the Button in the Main Component
Now, in your main component (or wherever you’d like to place the button), integrate the RunButton:
```tsx
import React, { useState } from "react";
import RunButton from "./path_to_RunButton_component"; // Replace with the actual path

function App() {
  const [result, setResult] = useState<any[]>([]);

  // Define the shape content
  const shapeContent = {
    schema: `|-
  entity user {}`,
    // ... (the schema and the rest of the component are truncated in this excerpt;
    // see the Permify Playground repository for the full example)
  };
```
In this example, App is a component that contains the RunButton. When the button is clicked, the result from the WebAssembly function is displayed in a list below the button.
Throughout this exploration, the integration of WebAssembly with Go was unfolded, illuminating the pathway toward enhanced web development and optimal user interactions within browsers.
The journey involved setting up the Go environment, converting Go code to WebAssembly, and executing it within a web context, ultimately giving life to the interactive platform showcased at play.permify.co.
This platform stands not only as an example but also as a beacon, illustrating the concrete and potent capabilities achievable when intertwining these technological domains.
Yesterday Online PNG Tools smashed through 6.40M Google clicks and today it’s smashed through 6.41M Google clicks! That’s 10,000 new clicks in a single day – the smash train keeps on rollin’!
While experimenting with particle systems, I challenged myself to create particles with tails, similar to snakes moving through space. At first, I didn’t have access to TSL, so I tested basic ideas, like using noise derivatives and calculating previous steps for each particle, but none of them worked as expected.
I spent a long time pondering how to make it work, but all my solutions involved heavy testing with WebGL and GPGPU, which seemed like it would require too much code for a simple proof of concept. That’s when TSL (Three.js Shader Language) came into play. With its Compute Shaders, I was able to compute arrays and feed the results into materials, making it easier to test ideas quickly and efficiently. This allowed me to accomplish the task without much time lost.
Now, let’s dive into the step-by-step process of building the particle system, from setting up the environment to creating the trails and achieving that fluid movement.
Step 1: Set Up the Particle System
First, we’ll define the necessary uniforms that will be used to create and control the particles in the system.
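The exact uniforms aren’t listed in this excerpt; a small sketch of the idea, using TSL’s uniform() helper from three/tsl (the names and values here are placeholders of my own, not the article’s), might be:

```javascript
import { uniform } from 'three/tsl';

// Inside the class that sets up the particle system; placeholder uniforms, updated from the render loop.
this.uniforms = {
  uTime: uniform( 0 ),         // global time driving the noise field
  uNoiseScale: uniform( 0.5 ), // frequency of the simplex noise that moves the heads
  uSpeed: uniform( 1.0 ),      // overall movement speed of the snakes
};
```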
Next, create the variables that will define the parameters of the particle system. The “tails_count” variable determines how many segments each snake will have, while the “particles_count” defines the total number of segments in the scene. The “story_count” variable represents the number of frames used to store the position data for each segment. Increasing this value will increase the distance between segments, as we will store the position history of each one. The “story_snake” variable holds the history of one snake, while “full_story_length” stores the history for all snakes. These variables will be enough to bring the concept to life.
```javascript
tails_count = 7 // n-1 point tails
particles_count = this.tails_count * 200 // must be a multiple of tails_count
story_count = 5 // history frames stored per position
story_snake = this.tails_count * this.story_count
full_story_length = ( this.particles_count / this.tails_count ) * this.story_snake
```
Next, we need to create the buffers required for the computational shaders. The most important buffer to focus on is the “positionStoryBuffer,” which will store the position history of all segments. To understand how it works, imagine a train: the head of the train sets the direction, and the cars follow in the same path. By saving the position history of the head, we can use that data to determine the position of each car by referencing its position in the history.
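The buffer creation isn’t shown here; a rough sketch using instancedArray from three/tsl, available in recent three.js WebGPU builds (buffer names other than positionStoryBuffer are my own), could look like this:

```javascript
import { instancedArray } from 'three/tsl';

// Storage buffers for the compute shaders; sizes come from the variables defined above.
this.positionBuffer = instancedArray( this.particles_count, 'vec3' );        // current position of every segment
this.lifeBuffer = instancedArray( this.particles_count, 'float' );           // per-particle lifetime (illustrative)
this.positionStoryBuffer = instancedArray( this.full_story_length, 'vec3' ); // position history of all segments
```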
Now, let’s create the particle system with a material. I chose a standard material because it allows us to use an emissiveNode, which will interact with Bloom effects. For each segment, we’ll use a sphere and disable frustum culling to ensure the particles don’t accidentally disappear off the screen.
To initialize the positions of the particles, we’ll use a computational shader to reduce CPU usage and speed up page loading. We randomly generate the particle positions, which form a pseudo-cube shape. To keep the particles always visible on screen, we assign them a lifetime after which they disappear and won’t reappear from their starting positions. The “cycleStep” helps us assign each snake its own random positions, ensuring the tails are generated in the same location as the head. Finally, we send this data to the computation process.
For each frame, we compute the position history for each segment. The key aspect of the “computePositionStory” function is that new positions are recorded only from the head of the snake, and all positions are shifted one step forward using a queue algorithm.
Next, we update the positions of all particles, taking into account the recorded history of their positions. First, we use simplex noise to generate the new positions of the particles, allowing our snakes to move smoothly through space. Each particle also has its own lifetime, during which it moves and eventually resets to its original position. The key part of this function is determining which particle is the head and which is the tail. For the head, we generate a new position based on simplex noise, while for the tail, we use positions from the saved history.
To display the particle positions, we’ll create a simple function called “positionNode.” This function will not only output the positions but also apply a slight magnification effect to the head of the snake.
Now, you should be able to easily create position history buffers for other problem-solving tasks, and with TSL, this process becomes quick and efficient. I believe this project has potential for further development, such as transferring position data to model bones. This could enable the creation of beautiful, flying dragons or similar effects in 3D space. For this, a custom bone structure tailored to the project would be needed.
When designing a software system, we naturally focus more on the happy flow. But we should carefully plan to handle errors that fall into three categories: Validation, Transient, and Fatal.
When designing a new software system, it’s easy to focus mainly on the happy flow and forget that you must also handle errors.
You should carefully define and design how to handle errors: depending on the use case, error handling can have a huge impact on the architecture of your software system.
In this article, we’ll explore the three main categories of errors that we must always remember to address; for each type of error, we will showcase how addressing it can impact the software architecture differently.
An ideal system with only the happy path
To use a realistic example, let’s design a simple system with a single module named MainApplication: this module reads data from an external API, manipulates the data, and stores the result on the DB.
The system is called asynchronously, via a Message Queue, by an external service – that we are going to ignore.
The happy flow is pretty much the following:
An external system inserts some data into the Queue;
MainApplication reads the data from the Queue;
MainApplication calls an external API to retrieve some data;
MainApplication stores some data on the DB;
MainApplication sends a message on the queue with the operation result.
Now, the happy flow is simple. But we still have to define what to do in case of an error.
Introducing the Error Management Trio
In general, errors that need to be handled fall into three categories (that I decided to call “the Error Management Trio”): data validation, transient errors, and faults.
Data Validation focuses on the data used across the system, particularly the data you don’t control.
Transient Errors occur when the application’s overall status or its dependencies temporarily change to an invalid state.
Faults are errors that take down the whole application, and you cannot recover immediately.
The Trio does not take into account “errors” that are not properly errors: null values, queries that do not return any value, and so on. These, in my opinion, are all legitimate statuses that represent the lack of a value, but they are not errors with architectural relevance.
Data Validation: the first defence against invalid status
The Data Validation category focuses on ensuring that relevant data is in a valid status.
In particular, it aims at ensuring that data coming from external sources (for example, from the Body in an incoming HTTP request or from the result of a query on the database) is both syntactically and logically valid.
Suppose that the messages we receive from the queue are in the following format:
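The payload itself isn’t shown in this excerpt; judging from the validation rules that follow, it presumably carries a Username, a BookId, and an Operation. A hypothetical example (the JSON shape is an assumption):

```typescript
// Assumed shape of the incoming queue message, inferred from the validation rules below.
type IncomingMessage = {
  Username: string;                        // must not be empty
  BookId: number;                          // must be a positive number
  Operation: "Add" | "Remove" | "Refresh"; // only these values are allowed
};

// Example payload as it might appear on the queue:
const example: IncomingMessage = {
  Username: "davide",
  BookId: 42,
  Operation: "Add",
};
```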
We definitely need to perform some sort of validation on the message content.
For example:
The Username property must not be empty;
The BookId property must be a positive number;
The Operation property must have one of the following values: Add, Remove, Refresh;
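Taken together, a first-pass check of these rules, sketched here in TypeScript purely for illustration (the article doesn’t tie itself to a language), could be as simple as:

```typescript
// Illustrative only: validates the raw message parsed from the queue and returns the problems found.
function validateMessage(msg: any): string[] {
  const errors: string[] = [];

  if (typeof msg.Username !== "string" || msg.Username.trim() === "") {
    errors.push("Username must not be empty");
  }
  if (typeof msg.BookId !== "number" || msg.BookId <= 0) {
    errors.push("BookId must be a positive number");
  }
  if (!["Add", "Remove", "Refresh"].includes(msg.Operation)) {
    errors.push("Operation must be one of: Add, Remove, Refresh");
  }

  return errors;
}
```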
How does it impact our design?
We have several choices to deal with an invalid incoming message:

1. Ignore the whole message: if it doesn’t pass the validation, discard the message;
2. Send the message back to the caller, describing the type of error;
3. Try to fix it locally: if we are able to recreate a valid message, fix it and process it;
4. Try to fix it in a separate service: you will need to create a distinct service that receives the invalid message and tries to fix it. If it manages to fix the message, it re-inserts it in the original queue; otherwise, it sends a message to the response queue to notify about the impossibility of recreating a valid message.
As you can see, even for the simple input validation, the choices we make can have an impact on the structure of the architecture.
Suppose that you choose option #4: you will need to implement a brand new service (let’s call it ValidationFixesManager), configure a new queue, and keep track of the attempts to fix the message.
All of this only when considering the static validation. How would you validate your business rules? How would you ensure that, for instance, the Username is valid and the user is still active on the system?
Maybe you discover that the data stored on the database is incomplete or stale. Then you have to work out a way to handle that kind of data.
For example, you can:
run a background job that ensures that all the data is always valid;
enrich the data from the DB with newer data only when it is actually needed;
fine-tune the database consistency level.
We have just demonstrated a simple but important fact: data validation looks trivial, but depending on the needs of your system, it may impact how you design your system.
Transient Errors: temporary errors that may randomly occur
Even if the validation passes, temporary issues may prevent your operations from completing.
In the previous example, there are some possible cases to consider:
the external API is temporarily down, and you cannot retrieve the data you need;
the return queue is full, and you cannot add response messages;
the application is not able to connect to the DB due to network issues;
These kinds of issues are due to a temporary status of the system or of one of its dependencies.
Sure, you may add automatic retries: for instance, you can use Polly to automatically retry calls to the API. But what if it’s not enough?
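Polly is a .NET library, so the snippet below is not its API; it is just a language-agnostic sketch of the idea: retry a failing operation a few times with exponential backoff before giving up.

```typescript
// Naive retry with exponential backoff; an illustration of the pattern, not Polly's actual API.
async function withRetries<T>(operation: () => Promise<T>, maxAttempts = 3): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await operation();
    } catch (err) {
      lastError = err;
      // Wait 2^attempt * 100 ms before the next try (200 ms, 400 ms, 800 ms, ...).
      await new Promise((resolve) => setTimeout(resolve, 2 ** attempt * 100));
    }
  }
  throw lastError; // all retries failed: the transient error must be handled at a higher level
}
```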
Again, depending on your application’s requirements and the overall structure you started designing, solving this problem may bring you to unexpected paths.
Let’s say that the external API is returning a 500 HTTP error: this is a transient error, and it does not depend on the content of the request: the API is down, and you cannot do anything to solve it.
What can we do if all the retries fail?
If we can just accept the situation, we can return the error to the caller and move on with the next operation.
But if we need to keep trying until the operation goes well, we have (at least) two choices:
consume the message from the Queue, try calling the API, and, if it fails, re-insert the message on the queue (ideally, with some delay);
peek the message from the queue and try calling the API. If it fails, the message stays on the queue (and you need a way to read it again). Otherwise, we consider the message completed and remove it from the queue.
These are just two of the different solutions. But, as you can see, this choice will have, in the long run, a huge effect on the future of the application, both in terms of maintainability and performance.
Below is how the structure changes if we decide to send the failed messages back in the queue with some delay.
In both cases, we must remember that trying to call a service that is temporarily down is useless: maybe it’s time to use a Circuit Breaker?
Fatal Errors: when everything goes wrong
There is one type of error that is often neglected but that may deeply influence how your system behaves: fatal errors.
Examples of fatal errors are:
the host has consumed all the CPU or RAM;
the file system is corrupted;
the connection to an external system is interrupted due to network misconfigurations.
In short, fatal errors are errors you have no way to solve in the short run: they happen and stop everything you are doing.
This kind of error cannot be directly managed via application code, but you need to rely on other techniques.
For example, to make sure you won’t consume all the available RAM, you should plan for autoscaling of your resources. So you have to design the system with autoscaling in mind: this means, for example, that the system must be stateless and the application must run on infrastructure objects that can be configured to automatically manage resources (like Azure Functions, Kubernetes, and Azure App Services). Also: do you need horizontal or vertical scaling?
And, talking about the integrity of the system, how do you ensure that operations that were ongoing when the fatal error occurred can be completed?
One possible solution is to use a database table to keep track of the status of each operation, so that when the application restarts, it first completes pending operations, and then starts working on new operations.
A practical approach to address the Error Management Trio
There are too many errors to manage and too much effort to cover everything!
How can we cover everything? Well, it’s impossible: for every action we take to prevent an error, a new one may occur.
Let’s jump back to the example we saw for handling validation errors (using a new service that tries to fix the message). What if the ValidationFixesManager service is down or the message queue is unreachable? We tried to solve a problem, but we ended up with two more to be managed!
Let me introduce a practical approach to help you decide what needs to be addressed.
Step 1: list all the errors you can think of. Create a table to list all the possible errors that you expect can happen.

You can add a column describing the category the error falls into, as well as Probability and Impact on the system columns with a value (in this example, Low, Medium, and High) representing the probability that the error occurs and the impact it has on the overall application.
| Problem | Category | Probability | Impact on the system |
| --- | --- | --- | --- |
| Invalid message from queue | Data Validation | Medium | High |
| Invalid user data on DB | Data Validation | Low | Medium |
| Missing user on DB | Data Validation | Low | Low |
| API not reachable | Transient | High | High |
| DB not reachable | Transient | Low | High |
| File system corrupted | Fatal | Low | High |
| CPU limit reached | Fatal | Medium | High |
From here, you can pick the most urgent elements to be addressed.
Step 2: evaluate alternatives. Every error can be addressed in several ways (ignoring the error IS a valid alternative!). Take some time to explore all the alternatives.
Again, a table can be a good companion for this step. You can describe, for example:
the effort required to solve the error (Low, Medium, High)
the positive and negative consequences in terms (also) of quality attributes (aka: “-ilities”). Maybe a solution works fine for data integrity but has a negative impact on maintainability.
Step 3: use ADRs to describe how (and why) you will handle that specific error.
Take your time to thoroughly describe, using ADR documents, the problems you are trying to solve, the solutions taken into consideration, and the final choice.
Having everything written down in a shared file is fundamental for ensuring that, in the future, the present choices and necessities are taken into account, before saying “meh, that’s garbage!”
Further readings
Unfortunately, I feel that error handling is one of the most overlooked topics when designing a system. This also means that there are not lots and lots of articles and resources that explore this topic.
But, if you use queues, one of the components you should use to manage errors is the Dead Letter queue. Here’s a good article by Dorin Baba where he explains how to use Dead Letter queues to handle errors in asynchronous systems.
In this article, we used a Queue to trigger the beginning of the operation. When using Azure services, we have two types of message queues: Queues and Topics. Do you know the difference? Hint: other vendors use the same names to represent different concepts.
Whichever way you choose to manage an error, always remember to write down the reasons that guided you to that specific solution. An incredibly helpful way is by using ADRs.
This article highlights the importance of error management and the fact that even if we all want to avoid and prevent errors in our systems, we still have to take care of them and plan according to our needs.
I hope you enjoyed this article! Let’s keep in touch on LinkedIn or Twitter! 🤜🤛