In this article, we will explore how to analyze stocks using Python and Excel. We will fetch historical data for three popular stocks—Realty Income (O), McDonald’s (MCD), and Johnson & Johnson (JNJ) — calculate returns, factor in dividends, and visualize…
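A minimal sketch of the fetching step, assuming the yfinance package (as the title suggests; the variable names and parameters here are illustrative, not the article's final code):

import yfinance as yf

# Pull five years of daily history for the three tickers discussed here.
tickers = ["O", "MCD", "JNJ"]
data = yf.download(tickers, period="5y", auto_adjust=False)

# Simple daily returns from closing prices.
returns = data["Close"].pct_change().dropna()

# Dividend history per ticker, for the dividend-adjusted calculations.
dividends = {t: yf.Ticker(t).dividends for t in tickers}

print(returns.tail())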
-
Automate Stock Analysis with Python and Yfinance: Generate Excel Reports
-
3 Fundamental Concepts to Fully Understand how the Fetch API Works | by Jay Cruz
Understanding the Fetch API can be challenging, particularly for those new to JavaScript’s unique approach to handling asynchronous operations. Among the many features of modern JavaScript, the Fetch API stands out for its ability to handle network requests elegantly. However, the syntax of chaining `.then()` methods can seem unusual at first glance. To fully grasp how the Fetch API works, it’s vital to understand three core concepts.

In programming, synchronous code is executed in sequence. Each statement waits for the previous one to finish before executing. JavaScript, being single-threaded, runs code in a linear fashion. However, certain operations, like network requests, file system tasks, or timers, could block this thread, making the user experience unresponsive.
Here’s a simple example of synchronous code:
function doTaskOne() {
  console.log('Task 1 completed');
}

function doTaskTwo() {
  console.log('Task 2 completed');
}

doTaskOne();
doTaskTwo();

// Output:
// Task 1 completed
// Task 2 completed

-
Tuesday Sale! 50% OFF! 🎁
At Browserling and Online Tools we love sales.
We just created a new automated Tuesday Sale.
Now on Tuesdays, we show a 50% discount offer to all users who visit our site.
🔥 onlinetools.com/pricing
🔥 browserling.com/#pricing
Buy a subscription now and see you next time!
-
WebAssembly with Go: Taking Web Apps to the Next Level | by Ege Aytin
Let’s dive a bit deeper into the heart of our WebAssembly integration by exploring the key segments of our Go-based WASM code.
The first step involves preparing and specifying our Go code to be compiled for a WebAssembly runtime.

//go:build wasm
// +build wasm

These lines serve as directives to the Go compiler, signaling that the following code is designated for a WebAssembly runtime environment. Specifically:

- `//go:build wasm`: A build constraint ensuring the code is compiled only for WASM targets, adhering to modern syntax.
- `// +build wasm`: An analogous constraint, utilizing older syntax for compatibility with prior Go versions.

In essence, these directives guide the compiler to include this code segment only when compiling for a WebAssembly architecture, ensuring an appropriate setup and function within this specific runtime.
package main

import (
    "context"
    "encoding/json"
    "syscall/js"

    "google.golang.org/protobuf/encoding/protojson"

    "github.com/Permify/permify/pkg/development"
)

var dev *development.Development
func run() js.Func {
    // The `run` function returns a new JavaScript function
    // that wraps the Go function.
    return js.FuncOf(func(this js.Value, args []js.Value) interface{} {
        // t will be used to store the unmarshaled JSON data.
        // The use of an empty interface{} type means it can hold any type of value.
        var t interface{}

        // Unmarshal JSON from JavaScript function argument (args[0]) to Go's data structure (map).
        // args[0].String() gets the JSON string from the JavaScript argument,
        // which is then converted to bytes and unmarshaled (parsed) into the map `t`.
        err := json.Unmarshal([]byte(args[0].String()), &t)

        // If an error occurs during unmarshaling (parsing) the JSON,
        // it returns an array with the error message "invalid JSON" to JavaScript.
        if err != nil {
            return js.ValueOf([]interface{}{"invalid JSON"})
        }

        // Attempt to assert that the parsed JSON (`t`) is a map with string keys.
        // This step ensures that the unmarshaled JSON is of the expected type (map).
        input, ok := t.(map[string]interface{})

        // If the assertion is false (`ok` is false),
        // it returns an array with the error message "invalid JSON" to JavaScript.
        if !ok {
            return js.ValueOf([]interface{}{"invalid JSON"})
        }

        // Run the main logic of the application with the parsed input.
        // It’s assumed that `dev.Run` processes `input` in some way and returns any errors encountered during that process.
        errors := dev.Run(context.Background(), input)

        // If no errors are present (the length of the `errors` slice is 0),
        // return an empty array to JavaScript to indicate success with no errors.
        if len(errors) == 0 {
            return js.ValueOf([]interface{}{})
        }

        // If there are errors, each error in the `errors` slice is marshaled (converted) to a JSON string.
        // `vs` is a slice that will store each of these JSON error strings.
        vs := make([]interface{}, 0, len(errors))

        // Iterate through each error in the `errors` slice.
        for _, r := range errors {
            // Convert the error `r` to a JSON string and store it in `result`.
            // If an error occurs during this marshaling, it returns an array with that error message to JavaScript.
            result, err := json.Marshal(r)
            if err != nil {
                return js.ValueOf([]interface{}{err.Error()})
            }

            // Add the JSON error string to the `vs` slice.
            vs = append(vs, string(result))
        }

        // Return the `vs` slice (containing all JSON error strings) to JavaScript.
        return js.ValueOf(vs)
    })
}

Within the realm of Permify, the `run` function stands as a cornerstone, executing a crucial bridging operation between JavaScript inputs and Go’s processing capabilities. It orchestrates real-time data interchange in JSON format, safeguarding that Permify’s core functionalities are smoothly and instantaneously accessible via a browser interface.

Digging into `run`:

- JSON Data Interchange: Translating JavaScript inputs into a format utilizable by Go, the function unmarshals JSON, transferring data between JS and Go, assuring that the robust processing capabilities of Go can seamlessly manipulate browser-sourced inputs.
- Error Handling: Ensuring clarity and user-awareness, it conducts meticulous error-checking during data parsing and processing, returning relevant error messages back to the JavaScript environment to ensure user-friendly interactions.
- Contextual Processing: By employing `dev.Run`, it processes the parsed input within a certain context, managing application logic while handling potential errors to assure steady data management and user feedback.
- Bidirectional Communication: As errors are marshaled back into JSON format and returned to JavaScript, the function ensures a two-way data flow, keeping both environments in synchronized harmony.

Thus, through adeptly managing data, error handling, and ensuring a fluid two-way communication channel, `run` serves as an integral bridge, linking JavaScript and Go to ensure the smooth, real-time operation of Permify within a browser interface. This facilitation of interaction not only heightens user experience but also leverages the respective strengths of JavaScript and Go within the Permify environment.

// Continuing from the previously discussed code...
func main() {
    // Instantiate a channel, 'ch', with no buffer, acting as a synchronization point for the goroutine.
    ch := make(chan struct{}, 0)

    // Create a new instance of 'Container' from the 'development' package and assign it to the global variable 'dev'.
    dev = development.NewContainer()

    // Attach the previously defined 'run' function to the global JavaScript object,
    // making it callable from the JavaScript environment.
    js.Global().Set("run", run())

    // Utilize a channel receive expression to halt the 'main' goroutine, preventing the program from terminating.
    <-ch
}

- `ch := make(chan struct{}, 0)`: A synchronization channel is created to coordinate the activity of goroutines (concurrent threads in Go).
- `dev = development.NewContainer()`: Initializes a new container instance from the development package and assigns it to `dev`.
- `js.Global().Set("run", run())`: Exposes the Go `run` function to the global JavaScript context, enabling JavaScript to call Go functions.
- `<-ch`: Halts the `main` goroutine indefinitely, ensuring that the Go WebAssembly module remains active in the JavaScript environment.
In summary, the code establishes a Go environment running within WebAssembly that exposes specific functionality (the `run` function) to the JavaScript side and keeps itself active and available for function calls from JavaScript.

Before we delve into Permify’s rich functionalities, it’s paramount to elucidate the steps of converting our Go code into a WASM module, priming it for browser execution.

For enthusiasts eager to delve deep into the complete Go codebase, don’t hesitate to browse our GitHub repository: Permify Wasm Code.

Kickstart the transformation of our Go application into a WASM binary with this command:

GOOS=js GOARCH=wasm go build -o permify.wasm main.go

This directive cues the Go compiler to churn out a `.wasm` binary attuned for JavaScript environments, with `main.go` as the source. The output, `permify.wasm`, is a concise rendition of our Go capabilities, primed for web deployment.

In conjunction with the WASM binary, the Go ecosystem offers an indispensable JavaScript piece named `wasm_exec.js`. It’s pivotal for initializing and facilitating our WASM module within a browser setting. You can typically locate this essential script inside the Go installation, under `misc/wasm`.

However, to streamline your journey, we’ve hosted `wasm_exec.js` right here for direct access, or you can copy it from your Go installation:

cp "$(go env GOROOT)/misc/wasm/wasm_exec.js" .
Equipped with these pivotal assets — the WASM binary and its companion JavaScript — the stage is set for its amalgamation into our frontend.
To kick things off, ensure you have a directory structure that clearly separates your WebAssembly-related code from the rest of your application. Based on your given structure, the `loadWasm` folder seems to be where all the magic happens:

loadWasm/
│
├── index.tsx       // Your main React component that integrates WASM.
├── wasm_exec.js    // Provided by Go, bridges the gap between Go's WASM and JS.
└── wasmTypes.d.ts  // TypeScript type declarations for WebAssembly.

To view the complete structure and delve into the specifics of each file, refer to the Permify Playground on GitHub.
Inside the `wasmTypes.d.ts`, global type declarations are made which expand upon the Window interface to acknowledge the new methods brought in by Go’s WebAssembly:

declare global {
  export interface Window {
    Go: any;
    run: (shape: string) => any[];
  }
}
export {};

This ensures TypeScript recognizes the `Go` constructor and the `run` method when called on the global `window` object.

In `index.tsx`, several critical tasks are accomplished:

- Import Dependencies: First off, we import the required JS and TypeScript declarations:
import "./wasm_exec.js";
import "./wasmTypes.d.ts";- WebAssembly Initialization: The asynchronous function
loadWasm
takes care of the entire process:
async function loadWasm(): Promise<void> {
const goWasm = new window.Go();
const result = await WebAssembly.instantiateStreaming(
fetch("play.wasm"),
goWasm.importObject
);
goWasm.run(result.instance);
}Here,
new window.Go()
initializes the Go WASM environment.WebAssembly.instantiateStreaming
fetches the WASM module, compiles it, and creates an instance. Finally,goWasm.run
activates the WASM module.- React Component with Loader UI: The
LoadWasm
component uses theuseEffect
hook to asynchronously load the WebAssembly when the component mounts:
export const LoadWasm: React.FC<React.PropsWithChildren<{}>> = (props) => {
  const [isLoading, setIsLoading] = React.useState(true);

  useEffect(() => {
    loadWasm().then(() => {
      setIsLoading(false);
    });
  }, []);

  if (isLoading) {
    return (
      <div className="wasm-loader-background h-screen">
        <div className="center-of-screen">
          <SVG src={toAbsoluteUrl("/media/svg/rocket.svg")} />
        </div>
      </div>
    );
  } else {
    return <React.Fragment>{props.children}</React.Fragment>;
  }
};

While loading, an SVG rocket is displayed to indicate that initialization is ongoing. This feedback is crucial as users might otherwise be uncertain about what’s transpiring behind the scenes. Once loading completes, children components or content will render.
Given your Go WASM exposes a method named `run`, you can invoke it as follows:

function Run(shape) {
  return new Promise((resolve) => {
    let res = window.run(shape);
    resolve(res);
  });
}

This function essentially acts as a bridge, allowing the React frontend to communicate with the Go backend logic encapsulated in the WASM.
To integrate a button that triggers the WebAssembly function when clicked, follow these steps:
- Creating the Button Component
First, we’ll create a simple React component with a button:
import React from "react";
type RunButtonProps = {
shape: string;
onResult: (result: any[]) => void;
};function RunButton({ shape, onResult }: RunButtonProps) {
const handleClick = async () => {
let result = await Run(shape);
onResult(result);
};return <button onClick={handleClick}>Run WebAssembly</button>;
}In the code above, the
RunButton
component accepts two props:shape
: The shape argument to pass to the WebAssemblyrun
function.onResult
: A callback function that receives the result of the WebAssembly function and can be used to update the state or display the result in the UI.
- Integrating the Button in the Main Component
Now, in your main component (or wherever you’d like to place the button), integrate the
RunButton
:import React, { useState } from "react";
import RunButton from "./path_to_RunButton_component"; // Replace with the actual path

function App() {
  const [result, setResult] = useState<any[]>([]);

  // Define the shape content
  const shapeContent = {
    schema: `|-
      entity user {}

      entity account {
        relation owner @user
        relation following @user
        relation follower @user

        attribute public boolean

        action view = (owner or follower) or public
      }

      entity post {
        relation account @account

        attribute restricted boolean

        action view = account.view
        action comment = account.following not restricted
        action like = account.following not restricted
      }`,
    relationships: [
      "account:1#owner@user:kevin",
      "account:2#owner@user:george",
      "account:1#following@user:george",
      "account:2#follower@user:kevin",
      "post:1#account@account:1",
      "post:2#account@account:2",
    ],
    attributes: [
      "account:1$public|boolean:true",
      "account:2$public|boolean:false",
      "post:1$restricted|boolean:false",
      "post:2$restricted|boolean:true",
    ],
    scenarios: [
      {
        name: "Account Viewing Permissions",
        description:
          "Evaluate account viewing permissions for 'kevin' and 'george'.",
        checks: [
          {
            entity: "account:1",
            subject: "user:kevin",
            assertions: {
              view: true,
            },
          },
        ],
      },
    ],
  };

  return (
<div>
<RunButton shape={JSON.stringify(shapeContent)} onResult={setResult} />
<div>
Results:
<ul>
{result.map((item, index) => (
<li key={index}>{item}</li>
))}
</ul>
</div>
</div>
);
}

In this example, `App` is a component that contains the `RunButton`. When the button is clicked, the result from the WebAssembly function is displayed in a list below the button.

Throughout this exploration, the integration of WebAssembly with Go was unfolded, illuminating the pathway toward enhanced web development and optimal user interactions within browsers.
The journey involved setting up the Go environment, converting Go code to WebAssembly, and executing it within a web context, ultimately giving life to the interactive platform showcased at play.permify.co.
This platform stands not only as an example but also as a beacon, illustrating the concrete and potent capabilities achievable when intertwining these technological domains.
-
Python – Data Wrangling with Excel and Pandas – Useful code
Data wrangling with Excel and Pandas is actually quite a useful tool in the belt of any Excel professional, financial professional, data analyst or developer. Really, everyone can benefit from the well-defined libraries that ease people’s lives. These are the libraries used:
import pandas as pd # Main data manipulation
from openpyxl import Workbook # Excel writing
from openpyxl.styles import Font # Excel formatting (bold, colors)
import glob # File path handling
from datetime import datetime
Additionally, a function for making a unique Excel name is used:
def make_unique_name():
    timestamp = datetime.now().strftime('%Y%m%d_%H%M%S')
    return f'{timestamp}__report.xlsx'
An example from the video, where Jupyter Notebook is used.
In the YT video below, the following 8 points are discussed:
# Trick 1 – Simple reading of worksheet from Excel workbook
excel_file_name = "financial_data.xlsx"
df = pd.read_excel(excel_file_name,
                   sheet_name="Transactions",
                   parse_dates=["Date"],
                   dtype={"InvoiceID": str})

# Trick 2 – Combine Reports
income = pd.read_excel(excel_file_name, sheet_name="Income")
expenses = pd.read_excel(excel_file_name, sheet_name="Expenses")
combined = pd.concat([
    income.assign(From_Worksheet="Income"),
    expenses.assign(From_Worksheet="Expenses")
])

# Trick 3 – Fix Missing Values
combined["Amount"] = combined["Amount"].fillna(combined["Amount"].mean())

# Trick 4 – Formatting the exported Excel file
new_worksheet = make_unique_name()  # file name from the helper defined above
with pd.ExcelWriter(new_worksheet, engine="openpyxl") as writer:
    combined.to_excel(writer, index=False)
    workbook = writer.book
    worksheet = writer.sheets["Sheet1"]
    for cell in worksheet["1:1"]:
        # One Font object with both settings; a second assignment would discard the bold.
        cell.font = Font(bold=True, color="FFFF22")

# Trick 5 – Merging Excel Files
files = glob.glob("sales12/sales_*.xlsx")
annual_data = pd.concat([pd.read_excel(f) for f in files])

# Trick 6 – Smart Filtering
web_design_only = annual_data[annual_data["Description"] == "Web Design"]
small_transactions = annual_data[annual_data["Amount"] < 200]

# Trick 7 – Merging Tables
df_transactions = pd.read_excel(
    excel_file_name,
    sheet_name="Transactions")
df_customers = pd.read_excel(
    excel_file_name,
    sheet_name="Customers")
merged = pd.merge(
    df_transactions,
    df_customers,
    on="CustomerID"
)

# Trick 8 – Export Dataframe to Excel
with pd.ExcelWriter(new_worksheet, engine="openpyxl") as writer:
    merged.to_excel(writer)
The whole code with the Excel files is available in GitHub here.
https://www.youtube.com/watch?v=SXXc4WySZS4
Enjoy it!
-
6.41 Million Google Clicks! 💸
Yesterday Online PNG Tools smashed through 6.40M Google clicks and today it’s smashed through 6.41M Google clicks! That’s 10,000 new clicks in a single day – the smash train keeps on rollin’!
What Are Online PNG Tools?
Online PNG Tools offers a collection of easy-to-use web apps that help you work with PNG images right in your browser. It’s like a Swiss Army Knife for anything PNG-related. On this site, you can create transparent PNGs, edit icons, clean up logos, crop stamps, change colors of signatures, and customize stickers – there’s a tool for it all. The best part is that you don’t need to install anything or be a graphic designer. All tools are made for regular people who just want to get stuff done with their images. No sign-ups, no downloads – just quick and easy PNG editing tools.
Who Created Online PNG Tools?
Online PNG Tools were created by me and my team at Browserling. We’ve built simple, browser-based tools that anyone can use without needing to download or install anything. Along with PNG tools, we also work on cross-browser testing to help developers make sure their websites work great on all web browsers. Our mission is to make online tools that are fast, easy to use, and helpful for everyday tasks like editing icons, logos, and signatures.
Who Uses Online PNG Tools?
Online PNG Tools and Browserling are used by everyone – from casual users to professionals and even Fortune 100 companies. Casual users often use them to make memes, edit profile pictures, or remove backgrounds. Professionals use them to clean up logos, design icons, or prepare images for websites and apps.
Smash too and see you tomorrow at 6.42M clicks! 📈
PS. Use coupon code SMASHLING for a 30% discount on these tools at onlinePNGtools.com/pricing. 💸

-
Matrix Sentinels: Building Dynamic Particle Trails with TSL
While experimenting with particle systems, I challenged myself to create particles with tails, similar to snakes moving through space. At first, I didn’t have access to TSL, so I tested basic ideas, like using noise derivatives and calculating previous steps for each particle, but none of them worked as expected.
I spent a long time pondering how to make it work, but all my solutions involved heavy testing with WebGL and GPGPU, which seemed like it would require too much code for a simple proof of concept. That’s when TSL (Three.js Shader Language) came into play. With its Compute Shaders, I was able to compute arrays and feed the results into materials, making it easier to test ideas quickly and efficiently. This allowed me to accomplish the task without much time lost.
Now, let’s dive into the step-by-step process of building the particle system, from setting up the environment to creating the trails and achieving that fluid movement.
Step 1: Set Up the Particle System
First, we’ll define the necessary uniforms that will be used to create and control the particles in the system.
uniforms = {
    color: uniform( new THREE.Color( 0xffffff ).setRGB( 1, 1, 1 ) ),
    size: uniform( 0.489 ),
    uFlowFieldInfluence: uniform( 0.5 ),
    uFlowFieldStrength: uniform( 3.043 ),
    uFlowFieldFrequency: uniform( 0.207 ),
}
Next, create the variables that will define the parameters of the particle system. The “tails_count” variable determines how many segments each snake will have, while the “particles_count” defines the total number of segments in the scene. The “story_count” variable represents the number of frames used to store the position data for each segment. Increasing this value will increase the distance between segments, as we will store the position history of each one. The “story_snake” variable holds the history of one snake, while “full_story_length” stores the history for all snakes. These variables will be enough to bring the concept to life.
tails_count = 7 // n-1 point tails
particles_count = this.tails_count * 200 // need % tails_count
story_count = 5 // story for 1 position
story_snake = this.tails_count * this.story_count
full_story_length = ( this.particles_count / this.tails_count ) * this.story_snake
Next, we need to create the buffers required for the computational shaders. The most important buffer to focus on is the “positionStoryBuffer,” which will store the position history of all segments. To understand how it works, imagine a train: the head of the train sets the direction, and the cars follow in the same path. By saving the position history of the head, we can use that data to determine the position of each car by referencing its position in the history.
const positionsArray = new Float32Array( this.particles_count * 3 )
const lifeArray = new Float32Array( this.particles_count )

const positionInitBuffer = instancedArray( positionsArray, 'vec3' );
const positionBuffer = instancedArray( positionsArray, 'vec3' );

// Tails
const positionStoryBuffer = instancedArray( new Float32Array( this.particles_count * this.tails_count * this.story_count ), 'vec3' );

const lifeBuffer = instancedArray( lifeArray, 'float' );
Now, let’s create the particle system with a material. I chose a standard material because it allows us to use an emissiveNode, which will interact with Bloom effects. For each segment, we’ll use a sphere and disable frustum culling to ensure the particles don’t accidentally disappear off the screen.
const particlesMaterial = new THREE.MeshStandardNodeMaterial( { metalness: 1.0, roughness: 0 } );
particlesMaterial.emissiveNode = color( 0x00ff00 )

const sphereGeometry = new THREE.SphereGeometry( 0.1, 32, 32 );
const particlesMesh = this.particlesMesh = new THREE.InstancedMesh( sphereGeometry, particlesMaterial, this.particles_count );
particlesMesh.instanceMatrix.setUsage( THREE.DynamicDrawUsage );
particlesMesh.frustumCulled = false;

this.scene.add( this.particlesMesh )
Step 2: Initialize Particle Positions
To initialize the positions of the particles, we’ll use a computational shader to reduce CPU usage and speed up page loading. We randomly generate the particle positions, which form a pseudo-cube shape. To keep the particles always visible on screen, we assign them a lifetime after which they disappear and won’t reappear from their starting positions. The “cycleStep” helps us assign each snake its own random positions, ensuring the tails are generated in the same location as the head. Finally, we send this data to the computation process.
const computeInit = this.computeInit = Fn( () => {
    const position = positionBuffer.element( instanceIndex )
    const positionInit = positionInitBuffer.element( instanceIndex );
    const life = lifeBuffer.element( instanceIndex )

    // Position
    position.xyz = vec3(
        hash( instanceIndex.add( uint( Math.random() * 0xffffff ) ) ),
        hash( instanceIndex.add( uint( Math.random() * 0xffffff ) ) ),
        hash( instanceIndex.add( uint( Math.random() * 0xffffff ) ) )
    ).sub( 0.5 ).mul( vec3( 5, 5, 5 ) );

    // Copy Init
    positionInit.assign( position )

    const cycleStep = uint( float( instanceIndex ).div( this.tails_count ).floor() )

    // Life
    const lifeRandom = hash( cycleStep.add( uint( Math.random() * 0xffffff ) ) )
    life.assign( lifeRandom )

} )().compute( this.particles_count );

this.renderer.computeAsync( this.computeInit ).then( () => {
    this.initialCompute = true
} )
Initialization of particle position

Step 3: Compute Position History
For each frame, we compute the position history for each segment. The key aspect of the “computePositionStory” function is that new positions are recorded only from the head of the snake, and all positions are shifted one step forward using a queue algorithm.
const computePositionStory = this.computePositionStory = Fn( () => {
    const positionStory = positionStoryBuffer.element( instanceIndex )
    const cycleStep = instanceIndex.mod( uint( this.story_snake ) )
    const lastPosition = positionBuffer.element( uint( float( instanceIndex.div( this.story_snake ) ).floor().mul( this.tails_count ) ) )

    If( cycleStep.equal( 0 ), () => { // Head
        positionStory.assign( lastPosition )
    } )

    positionStoryBuffer.element( instanceIndex.add( 1 ) ).assign( positionStoryBuffer.element( instanceIndex ) )
} )().compute( this.full_story_length );
Step 4: Update Particle Positions
Next, we update the positions of all particles, taking into account the recorded history of their positions. First, we use simplex noise to generate the new positions of the particles, allowing our snakes to move smoothly through space. Each particle also has its own lifetime, during which it moves and eventually resets to its original position. The key part of this function is determining which particle is the head and which is the tail. For the head, we generate a new position based on simplex noise, while for the tail, we use positions from the saved history.
const computeUpdate = this.computeUpdate = Fn( () => {
    const position = positionBuffer.element( instanceIndex )
    const positionInit = positionInitBuffer.element( instanceIndex )
    const life = lifeBuffer.element( instanceIndex );

    const _time = time.mul( 0.2 )
    const uFlowFieldInfluence = this.uniforms.uFlowFieldInfluence
    const uFlowFieldStrength = this.uniforms.uFlowFieldStrength
    const uFlowFieldFrequency = this.uniforms.uFlowFieldFrequency

    If( life.greaterThanEqual( 1 ), () => {
        life.assign( life.mod( 1 ) )
        position.assign( positionInit )
    } ).Else( () => {
        life.addAssign( deltaTime.mul( 0.2 ) )
    } )

    // Strength
    const strength = simplexNoise4d( vec4( position.mul( 0.2 ), _time.add( 1 ) ) ).toVar()
    const influence = uFlowFieldInfluence.sub( 0.5 ).mul( -2.0 ).toVar()
    strength.assign( smoothstep( influence, 1.0, strength ) )

    // Flow field
    const flowField = vec3(
        simplexNoise4d( vec4( position.mul( uFlowFieldFrequency ).add( 0 ), _time ) ),
        simplexNoise4d( vec4( position.mul( uFlowFieldFrequency ).add( 1.0 ), _time ) ),
        simplexNoise4d( vec4( position.mul( uFlowFieldFrequency ).add( 2.0 ), _time ) )
    ).normalize()

    const cycleStep = instanceIndex.mod( uint( this.tails_count ) )

    If( cycleStep.equal( 0 ), () => { // Head
        const newPos = position.add( flowField.mul( deltaTime ).mul( uFlowFieldStrength ) /* * strength */ )
        position.assign( newPos )
    } ).Else( () => { // Tail
        const prevTail = positionStoryBuffer.element( instanceIndex.mul( this.story_count ) )
        position.assign( prevTail )
    } )

} )().compute( this.particles_count );
To display the particle positions, we’ll create a simple function called “positionNode.” This function will not only output the positions but also apply a slight magnification effect to the head of the snake.
particlesMaterial.positionNode = Fn( () => {
    const position = positionBuffer.element( instanceIndex );
    const cycleStep = instanceIndex.mod( uint( this.tails_count ) )

    const finalSize = this.uniforms.size.toVar()

    If( cycleStep.equal( 0 ), () => {
        finalSize.addAssign( 0.5 )
    } )

    return positionLocal.mul( finalSize ).add( position )
} )()
The final element will be to update the calculations on each frame.
async update( deltaTime ) {
    // Compute update
    if ( this.initialCompute ) {
        await this.renderer.computeAsync( this.computePositionStory )
        await this.renderer.computeAsync( this.computeUpdate )
    }
}
Final Result

Conclusion
Now, you should be able to easily create position history buffers for other problem-solving tasks, and with TSL, this process becomes quick and efficient. I believe this project has potential for further development, such as transferring position data to model bones. This could enable the creation of beautiful, flying dragons or similar effects in 3D space. For this, a custom bone structure tailored to the project would be needed.
-
Python – Monte Carlo Simulation – Useful code
Python can be used for various tasks. One of these is Monte Carlo simulation for future stock analysis. In the video below this is exactly what is happening. 🙂
10K simulations in 30 buckets for KO look like this.
Instead of explaining the video and its code (also available in GitHub), I will concentrate on why it is better to use log returns than simple returns in stock analysis, which is actually covered in the video as well. Below are the 3 main reasons:
1. Time-Additivity
Log returns sum over time, making multi-period calculations effortless. A 10% gain followed by a 10% loss doesn’t actually cancel out, and adding simple returns hides that; adding log returns gives the exact cumulative result.
2. Symmetry Matters
A +10% and -10% return aren’t true inverses in simple terms. Logs fix this, ensuring consistent math for gains and losses.
3. Better for Modeling
Log returns follow a near-normal distribution, crucial for statistical models like Monte Carlo simulations.
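A quick numeric check of the first two points (a standalone snippet, separate from the video’s code):

import numpy as np

up, down = 0.10, -0.10  # a +10% gain, then a -10% loss

# Simple returns: adding them suggests 0%, but the true combined result is -1%.
combined_simple = (1 + up) * (1 + down) - 1
print(combined_simple)  # -0.010000000000000009

# Log returns: adding them gives exactly the log of the true combined result,
# and equal-magnitude up/down moves are exact inverses.
log_up, log_down = np.log(1 + up), np.log(1 + down)
print(log_up + log_down)            # -0.01005... == np.log(0.99)
print(np.log(1.1), np.log(1 / 1.1)) # 0.09531..., -0.09531...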
When to Use Simple Returns?
Code Highlights
-
Davide’s Code and Architecture Notes
When designing a software system, we naturally focus more on the happy flow. But we should carefully plan to handle errors that fall into three categories: Validation, Transient, and Fatal.
When designing a new software system, it’s easy to focus mainly on the happy flow and forget that you must also handle errors.
You should carefully define and design how to handle errors: depending on the use case, error handling can have a huge impact on the architecture of your software system.
In this article, we’ll explore the three main categories of errors that we must always remember to address; for each type of error, we will showcase how addressing it can impact the software architecture differently.
An ideal system with only the happy path
To use a realistic example, let’s design a simple system with a single module named MainApplication: this module reads data from an external API, manipulates the data, and stores the result on the DB.
The system is called asynchronously, via a Message Queue, by an external service – that we are going to ignore.
The happy flow is pretty much the following:
- An external system inserts some data into the Queue;
- MainApplication reads the data from the Queue;
- MainApplication calls an external API to retrieve some data;
- MainApplication stores some data on the DB;
- MainApplication sends a message on the queue with the operation result.
Now, the happy flow is simple. But we should also cover what to do in case of an error.
Introducing the Error Management Trio
In general, errors that need to be handled fall into three categories (that I decided to call “the Error Management Trio”): data validation, transient errors, and faults.
Data Validation focuses on the data used across the system, particularly the data you don’t control.
Transient Errors occur when the application’s overall status or its dependencies temporarily change to an invalid state.
Faults are errors that take down the whole application, and you cannot recover immediately.
The Trio does not take into account “errors” that are not properly errors: null values, queries that do not return any value, and so on. These, in my opinion, are all legitimate statuses that represent the lack of a value but are not errors with architectural relevance.
Data Validation: the first defence against invalid status
The Data Validation category focuses on ensuring that relevant data is in a valid status.
In particular, it aims at ensuring that data coming from external sources (for example, from the Body in an incoming HTTP request or from the result of a query on the database) is both syntactically and logically valid.
Suppose that the messages we receive from the queue are in the following format:
{ "Username": "mr. captain", "BookId": 154, "Operation": "Add" }
We definitely need to perform some sort of validation on the message content; a small sketch after the list below shows the idea.
For example:
- The `Username` property must not be empty;
- The `BookId` property must be a positive number;
- The `Operation` property must have one of the following values: Add, Remove, Refresh;
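As a minimal illustration of these static checks (a Python sketch; the article itself is technology-agnostic, and the function name is made up):

VALID_OPERATIONS = {"Add", "Remove", "Refresh"}

def validate_message(msg: dict) -> list:
    """Return a list of validation errors; an empty list means the message is valid."""
    errors = []
    if not str(msg.get("Username", "")).strip():
        errors.append("Username must not be empty")
    book_id = msg.get("BookId")
    if not isinstance(book_id, int) or book_id <= 0:
        errors.append("BookId must be a positive number")
    if msg.get("Operation") not in VALID_OPERATIONS:
        errors.append("Operation must be one of: Add, Remove, Refresh")
    return errors

print(validate_message({"Username": "mr. captain", "BookId": 154, "Operation": "Add"}))  # []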
How does it impact our design?
We have several choices to deal with an invalid incoming message:
- ignore the whole message: if it doesn’t pass the validation, discard the message;
- send the message back to the caller, describing the type of error
- try to fix it locally: if we are able to recreate a valid message, we could try to fix it and process the incoming message;
- try to fix it in a separate service: you will need to create a distinct service that receives the invalid message and tries to fix it: if it manages to fix the message, it re-inserts it in the original queue; otherwise, it sends a message to the response queue to notify that a valid message could not be recreated.
As you can see, even for the simple input validation, the choices we make can have an impact on the structure of the architecture.
Suppose that you choose option #4: you will need to implement a brand new service (let’s call it ValidationFixesManager), configure a new queue, and keep track of the attempts to fix the message.
All of this only when considering the static validation. How would you validate your business rules? How would you ensure that, for instance, the Username is valid and the user is still active on the system?
Maybe you discover that the data stored in the database is incomplete or stale. Then you have to work out a way to handle such data.
For example, you can:
- run a background job that ensures that all the data is always valid;
- enrich the data from the DB with newer data only when it is actually needed;
- fine-tune the database consistency level.
We have just demonstrated a simple but important fact: data validation looks trivial, but depending on the needs of your system, it may impact how you design your system.
Transient Errors: temporary errors that may randomly occur
Even if the validation passes, temporary issues may prevent your operations from completing.
In the previous example, there are some possible cases to consider:
- the external API is temporarily down, and you cannot retrieve the data you need;
- the return queue is full, and you cannot add response messages;
- the application is not able to connect to the DB due to network issues;
These kinds of issues are due to a temporary status of the system or of one of its dependencies.
Sure, you may add automatic retries: for instance, you can use Polly to automatically retry accessing the API. But what if it’s not enough?
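The retry idea itself is simple; here is a hand-rolled sketch in Python (Polly plays this role in the .NET world):

import random
import time
import requests

def call_api_with_retries(url, max_attempts=4):
    """Retry transient failures (HTTP 5xx, network errors) with exponential backoff."""
    for attempt in range(1, max_attempts + 1):
        try:
            response = requests.get(url, timeout=5)
            if response.status_code < 500:
                return response  # success, or a non-transient client error
        except requests.RequestException:
            pass  # network hiccup: treat it as transient
        if attempt < max_attempts:
            time.sleep(2 ** attempt + random.random())  # backoff with jitter
    raise RuntimeError(f"API still failing after {max_attempts} attempts")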
Again, depending on your application’s requirements and the overall structure you started designing, solving this problem may bring you to unexpected paths.
Let’s say that the external API is returning a 500 HTTP error: this is a transient error, and it does not depend on the content of the request: the API is down, and you cannot do anything to solve it.
What can we do if all the retries fail?
If we can just accept the situation, we can return the error to the caller and move on with the next operation.
But if we need to keep trying until the operation goes well, we have (at least) two choices:
- consume the message from the Queue, try calling the API, and, if it fails, re-insert the message on the queue (ideally, with some delay; see the sketch after this list);
- peek the message from the queue and try calling the API. If it fails, the message stays on the queue (and you need a way to read it again). Otherwise, we consider the message completed and remove it from the queue.
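The first option might be sketched like this (the queue client here is hypothetical; real brokers such as RabbitMQ or Azure Service Bus provide delayed redelivery natively):

class TransientApiError(Exception):
    """Raised when the downstream API fails for temporary reasons."""

def process_one(queue, call_api):
    # `queue` is a hypothetical client exposing consume/publish/publish_result.
    message = queue.consume()  # removes the message from the queue
    if message is None:
        return  # nothing to process
    try:
        result = call_api(message.body)
        queue.publish_result(message.id, result)
    except TransientApiError:
        # Option #1: re-insert the message, ideally with a delay, and move on.
        queue.publish(message.body, delay_seconds=30)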
These are just two of the different solutions. But, as you can see, this choice will have, in the long run, a huge effect on the future of the application, both in terms of maintainability and performance.
Below is how the structure changes if we decide to send the failed messages back in the queue with some delay.
In both cases, we must remember that trying to call a service that is temporarily down is useless: maybe it’s time to use a Circuit Breaker?
Fatal Errors: when everything goes wrong
There is one type of error that is often neglected but that may deeply influence how your system behaves: fatal errors.
Examples of fatal errors are:
- the host has consumed all the CPU or RAM;
- the file system is corrupted;
- the connection to an external system is interrupted due to network misconfigurations.
In short, fatal errors are errors you have no way to solve in the short run: they happen and stop everything you are doing.
This kind of error cannot be directly managed via application code, but you need to rely on other techniques.
For example, to make sure you won’t consume all the available RAM, you should plan for autoscaling of your resources. So you have to design the system with autoscaling in mind: this means, for example, that the system must be stateless and the application must run on infrastructure objects that can be configured to automatically manage resources (like Azure Functions, Kubernetes, and Azure App Services). Also: do you need horizontal or vertical scaling?
And, talking about the integrity of the system, how do you ensure that operations that were ongoing when the fatal error occurred can be completed?
One possible solution is to use a database table to keep track of the status of each operation, so that when the application restarts, it first completes pending operations, and then starts working on new operations.
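A sketch of that idea (sqlite3 is used purely for illustration; the table layout is made up):

import sqlite3

conn = sqlite3.connect("operations.db")
conn.execute("""CREATE TABLE IF NOT EXISTS operations (
    id INTEGER PRIMARY KEY,
    payload TEXT NOT NULL,
    status TEXT NOT NULL DEFAULT 'pending'  -- 'pending' or 'done'
)""")

def resume_pending(handle_operation):
    """On startup, complete operations interrupted by a fatal error."""
    rows = conn.execute(
        "SELECT id, payload FROM operations WHERE status = 'pending'").fetchall()
    for op_id, payload in rows:
        handle_operation(payload)  # re-run the interrupted operation
        conn.execute("UPDATE operations SET status = 'done' WHERE id = ?", (op_id,))
        conn.commit()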
A practical approach to address the Error Management Trio
There are too many errors to manage and too much effort to cover everything!
How can we cover everything? Well, it’s impossible: for every action we take to prevent an error, a new one may occur.
Let’s jump back to the example we saw for handling validation errors (using a new service that tries to fix the message). What if the ValidationFixesManager service is down or the message queue is unreachable? We tried to solve a problem, but we ended up with two more to be managed!
Let me introduce a practical approach to help you decide what needs to be addressed.
Step 1: list all the errors you can think of. Create a table to list all the possible errors that you expect can happen.
You can add a column to describe the category the error falls into, as well as Probability and Impact on the system columns with a value (in this example, Low, Medium and High) that represents the probability that the error occurs and the impact it has on the overall application.

| Problem | Category | Probability | Impact on the system |
| --- | --- | --- | --- |
| Invalid message from queue | Data Validation | Medium | High |
| Invalid user data on DB | Data Validation | Low | Medium |
| Missing user on DB | Data Validation | Low | Low |
| API not reachable | Transient | High | High |
| DB not reachable | Transient | Low | High |
| File system corrupted | Fatal | Low | High |
| CPU limit reached | Fatal | Medium | High |

From here, you can pick the most urgent elements to be addressed.
Step 2: evaluate alternatives. Every error can be addressed in several ways (ignoring the error IS a valid alternative!). Take some time to explore all the alternatives.
Again, a table can be a good companion for this step. You can describe, for example:
- the effort required to solve the error (Low, Medium, High);
- the positive and negative consequences in terms (also) of quality attributes (aka: “-ilities”). Maybe a solution works fine for data integrity but has a negative impact on maintainability.

Step 3: use ADRs to describe how (and why) you will handle that specific error.
Take your time to thoroughly describe, using ADR documents, the problems you are trying to solve, the solutions taken into consideration, and the final choice.
Having everything written down in a shared file is fundamental for ensuring that, in the future, the present choices and necessities are taken into account, before saying “meh, that’s garbage!”
Further readings
Unfortunately, I feel that error handling is one of the most overlooked topics when designing a system. This also means that there are not lots and lots of articles and resources that explore this topic.
But, if you use queues, one of the components you should use to manage errors is the Dead Letter queue. Here’s a good article by Dorin Baba where he explains how to use Dead Letter queues to handle errors in asynchronous systems.
🔗 Handling errors like a pro or nah? Let’s talk about Dead Letters | Dorin Baba
This article first appeared on Code4IT 🐧
In this article, we used a Queue to trigger the beginning of the operation. When using Azure services, we have two types of message queues: Queues and Topics. Do you know the difference? Hint: other vendors use the same names to represent different concepts.
🔗 Azure Service Bus: Queues vs Topics | Code4IT
Whichever way you choose to manage an error, always remember to write down the reasons that guided you to use that specific solution. An incredibly helpful way is by using ADRs.
🔗 Tracking decision with Architecture Decision Records (ADRs) | Code4IT
Wrapping up
This article highlights the importance of error management and the fact that even if we all want to avoid and prevent errors in our systems, we still have to take care of them and plan according to our needs.
I hope you enjoyed this article! Let’s keep in touch on LinkedIn or Twitter! 🤜🤛
Happy coding!
🐧
-
Motion Highlights #5 | Codrops