Tag: Code

  • Rules of 114 and 144 – Useful code


    The Rule of 114 is a quick way to estimate how long it will take to triple your money with compound interest.  The idea is simple: divide 114 by the annual interest rate (in %), and you will get an approximate answer in years.

    • If you earn 10% annually, the time to triple your money is approximately: 114/10=11.4 years.

    Similarly, the Rule of 144 works for quadrupling your money. Divide 144 by the annual interest rate to estimate the time.

    • At 10% annual growth, the time to quadruple your money is: 144/10=14.4 years

    Why Do These Rules Work?

    These rules are approximations based on the exponential nature of compound interest. While they are not perfectly accurate for all rates, they are great for quick mental math, especially for interest rates in the 5–15% range. While the rules are convenient, always use the exact formula when accuracy matters!

    Exact Formulas?

    For precise calculations, use the exact formula based on logarithms:

    • To triple your money: t = ln(3) / ln(1 + r), where r is the annual rate as a decimal;
    • To quadruple your money: t = ln(4) / ln(1 + r).

    These rules for 3x and 4x can be summarized with the following Python formula:
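
    The author's exact snippet is in the linked repo; a minimal sketch along those lines could look like this:

    import math

    def years_to_multiply(target, annual_rate_pct):
        """Exact years to grow capital `target`-fold at the given annual rate (in %)."""
        return math.log(target) / math.log(1 + annual_rate_pct / 100)

    print(round(years_to_multiply(3, 10), 2))   # ~11.53 years (Rule of 114 estimates 11.4)
    print(round(years_to_multiply(4, 10), 2))   # ~14.55 years (Rule of 144 estimates 14.4)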

    Generally, these rules are explained in a bit more detail in the video below:

    https://www.youtube.com/watch?v=iDcPdcKi-oI

    The GitHub repository is here: https://github.com/Vitosh/Python_personal/tree/master/YouTube/024_Python-Rule-of-114

    Enjoy it! 🙂



    Source link

  • Trigonometric Functions – Sine – Useful code


    import numpy as np
    import matplotlib.pyplot as plt
    import matplotlib.animation as animation

    # Generate unit circle points
    theta = np.linspace(0, 2 * np.pi, 1000)
    x_circle = np.cos(theta)
    y_circle = np.sin(theta)

    # Initialize figure
    fig, ax = plt.subplots(figsize=(8, 8))
    ax.plot(x_circle, y_circle, 'b-', label="Unit Circle")  # Unit circle
    ax.axhline(0, color="gray", linestyle="dotted")
    ax.axvline(0, color="gray", linestyle="dotted")

    # Add dynamic triangle components
    triangle_line, = ax.plot([], [], 'r-', linewidth=2, label="Triangle Sides")
    point, = ax.plot([], [], 'ro')  # Moving point on the circle

    # Text for dynamic values
    dynamic_text = ax.text(0.03, 0.03, "", fontsize=12, color="black", ha="left", transform=ax.transAxes)

    # Set up axis limits and labels
    ax.set_xlim(-1.2, 1.2)
    ax.set_ylim(-1.2, 1.2)
    ax.set_title("Sine as a Triangle on the Unit Circle", fontsize=14)
    ax.set_xlabel("cos(θ)", fontsize=12)
    ax.set_ylabel("sin(θ)", fontsize=12)
    ax.legend(loc="upper left")

    # Animation update function
    def update(frame):
        angle = theta[frame]
        x_point = np.cos(angle)
        y_point = np.sin(angle)
        degrees = np.degrees(angle) % 360  # Convert radians to degrees

        # Update triangle: origin -> point on circle -> foot on the x-axis -> origin
        triangle_line.set_data([0, x_point, x_point, 0], [0, y_point, 0, 0])

        # Update point on the circle (coordinates wrapped in lists to avoid a deprecation warning)
        point.set_data([x_point], [y_point])

        # Update text for angle, opposite side length, and sin(θ)
        dynamic_text.set_text(f"Angle: {degrees:.1f}°\nOpposite Side Length: {y_point:.2f}\nsin(θ): {y_point:.2f}")
        return triangle_line, point, dynamic_text

    # Create animation
    ani = animation.FuncAnimation(fig, update, frames=len(theta), interval=20, blit=True)
    plt.show()



    Source link

  • VBA – Automated Pivot Filtering – Useful code


    Sub FilterPivotTableBasedOnSelectedTeams()

        Dim pt As PivotTable
        Dim selectedItemsRange As Range
        Dim fieldName As String
        Dim lastRowSelected As Long
        Dim pi As PivotItem

        Set pt = ThisWorkbook.Worksheets("PivotTable2").PivotTables("PivotTable2")
        lastRowSelected = LastRow(tblTemp.Name, 1)                          'tblTemp is the code name of the worksheet listing the selected teams in column A
        Set selectedItemsRange = tblTemp.Range("A1:A" & lastRowSelected)
        fieldName = "Team"
        pt.PivotFields(fieldName).ClearAllFilters

        Dim itemsTotal As Long
        itemsTotal = pt.PivotFields(fieldName).PivotItems.Count

        For Each pi In pt.PivotFields(fieldName).PivotItems
            If Not IsInRange(pi.Name, selectedItemsRange) Then
                itemsTotal = itemsTotal - 1
                If itemsTotal = 0 Then
                    Err.Raise 222, Description:="No value in the pivot!"
                    Exit Sub
                End If

                pi.Visible = False
            End If
        Next pi

    End Sub

    Function IsInRange(myValue As String, myRange As Range) As Boolean

        Dim myCell As Range
        IsInRange = False
        For Each myCell In myRange.Cells
            If myCell.Value = myValue Then
                IsInRange = True
                Exit Function
            End If
        Next myCell

    End Function

    Public Function LastRow(wsName As String, Optional columnToCheck As Long = 1) As Long

        Dim ws As Worksheet
        Set ws = ThisWorkbook.Worksheets(wsName)
        LastRow = ws.Cells(ws.Rows.Count, columnToCheck).End(xlUp).Row

    End Function



    Source link

  • Python – Data Wrangling with Excel and Pandas – Useful code



    Data wrangling with Excel and Pandas is actually quite a useful tool in the belt of any Excel professional, financial professional, data analyst or developer. Really, everyone can benefit from the well-defined libraries that ease people’s lives. These are the libraries used:

    Additionally, a function for making a unique Excel name is used:
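
    The original helper is in the linked GitHub repo; a minimal sketch of such a function, assuming a timestamp-based name, could look like this:

    from datetime import datetime

    def unique_excel_name(prefix="report"):
        # Append a timestamp so repeated exports never overwrite each other
        stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
        return f"{prefix}_{stamp}.xlsx"

    # unique_excel_name("sales") -> e.g. 'sales_20250101_101530.xlsx'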

    An example from the video, where a Jupyter Notebook is used.

    In the YT video below, the following 8 points are discussed:

    # Trick 1 – Simple reading of worksheet from Excel workbook

    # Trick 2 – Combine Reports

    # Trick 3 – Fix Missing Values

    # Trick 4 – Formatting the exported Excel file

    # Trick 5 – Merging Excel Files

    # Trick 6 – Smart Filtering

    # Trick 7 – Merging Tables

    # Trick 8 – Export Dataframe to Excel

    The whole code with the Excel files is available in GitHub here.
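
    As a quick hedged illustration of Trick 1 and Trick 8 (the file and sheet names below are placeholders, not the ones from the video), reading a worksheet and exporting a DataFrame can be as short as:

    import pandas as pd

    # Trick 1 - simple reading of a worksheet from an Excel workbook
    df = pd.read_excel("report.xlsx", sheet_name="Sheet1")

    # Trick 8 - export a DataFrame back to Excel without the index column
    df.to_excel("report_out.xlsx", sheet_name="Summary", index=False)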

    https://www.youtube.com/watch?v=SXXc4WySZS4

    Enjoy it!



    Source link

  • Python – Monte Carlo Simulation – Useful code



    Python can be used for various tasks. One of these is Monte Carlo simulation for future stock analysis. In the video below this is exactly what is happening. 🙂

    10K simulations in 30 buckets for KO look like this.

    Instead of explaining the video and its code (also available in GitHub), I will concentrate on why it is better to use log returns than simple returns in stock analysis, which is actually covered in the video as well. Below are the 3 main reasons:

    1. Time-Additivity

    Log returns sum over time, making multi-period calculations effortless. A 10% gain followed by a 10% loss doesn’t cancel out with simple returns—but it nearly does with logs.

    2. Symmetry Matters

    A +10% and -10% return aren’t true inverses in simple terms. Logs fix this, ensuring consistent math for gains and losses.

    3. Better for Modeling

    Log returns follow a near-normal distribution, crucial for statistical models like Monte Carlo simulations.
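
    A tiny numeric sketch (not from the video) that illustrates points 1 and 2:

    import numpy as np

    prices = np.array([100.0, 110.0, 99.0])  # +10% gain, then -10% loss

    simple = prices[1:] / prices[:-1] - 1      # [0.10, -0.10]
    logret = np.log(prices[1:] / prices[:-1])  # [0.0953, -0.1054]

    # Summing log returns recovers the true multi-period return
    print(simple.sum())            # 0.0 -> misleading, the price actually fell
    print(np.expm1(logret.sum()))  # -0.01 -> the real -1% over the two periods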

    When to Use Simple Returns?

    Code Highlights
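
    The full code is in the GitHub repo; as a hedged sketch of the general approach (the data source and parameters below are assumptions, not necessarily the video’s exact values), a log-return Monte Carlo could look like this:

    import numpy as np
    import matplotlib.pyplot as plt
    import yfinance as yf

    # Historical KO prices and their daily log returns
    prices = yf.Ticker("KO").history(period="5y")["Close"].dropna()
    log_ret = np.log(prices / prices.shift(1)).dropna()

    mu, sigma = log_ret.mean(), log_ret.std()
    days, n_sims = 252, 10_000  # one trading year, 10K simulations

    # Simulate daily log returns and compound them into terminal prices
    sim_daily = np.random.normal(mu, sigma, size=(days, n_sims))
    terminal = prices.iloc[-1] * np.exp(sim_daily.sum(axis=0))

    plt.hist(terminal, bins=30)  # 30 buckets, as in the plot above
    plt.title("KO - simulated price in one year")
    plt.show()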



    Source link

  • Davide’s Code and Architecture Notes



    When designing a software system, we naturally focus more on the happy flow. But we should carefully plan to handle errors that fall into three categories: Validation, Transient, and Fatal.



    When designing a new software system, it’s easy to focus mainly on the happy flow and forget that you must also handle errors.

    You should carefully define and design how to handle errors: depending on the use case, error handling can have a huge impact on the architecture of your software system.

    In this article, we’ll explore the three main categories of errors that we must always remember to address; for each type of error, we will showcase how addressing it can impact the software architecture differently.

    An ideal system with only the happy path

    To use a realistic example, let’s design a simple system with a single module named MainApplication: this module reads data from an external API, manipulates the data, and stores the result on the DB.

    The system is called asynchronously, via a Message Queue, by an external service – that we are going to ignore.

    The happy flow is pretty much the following:

    1. An external system inserts some data into the Queue;
    2. MainApplication reads the data from the Queue;
    3. MainApplication calls an external API to retrieve some data;
    4. MainApplication stores some data on the DB;
    5. MainApplication sends a message on the queue with the operation result.

    Happy flow for MainApplication

    Now, the happy flow is simple. But we still have to cover what to do in case of an error.

    Introducing the Error Management Trio

    In general, errors that need to be handled fall into three categories (that I decided to call “the Error Management Trio”): data validation, transient errors, and faults.

    Data Validation focuses on the data used across the system, particularly the data you don’t control.

    Transient Errors occur when the application’s overall status or its dependencies temporarily change to an invalid state.

    Faults are errors that take down the whole application, and you cannot recover immediately.

    The Trio does not take into account “errors” that are not properly errors: null values, queries that do not return any value, and so on. These, in my opinion, are all legitimate statuses that represent the lack of values but are not errors that have architectural relevance.

    The Error Management Trio schema

    Data Validation: the first defence against invalid status

    The Data Validation category focuses on ensuring that relevant data is in a valid status.

    In particular, it aims at ensuring that data coming from external sources (for example, from the Body in an incoming HTTP request or from the result of a query on the database) is both syntactically and logically valid.

    Suppose that the messages we receive from the queue are in the following format:

    {
      "Username": "mr. captain",
      "BookId": 154,
      "Operation": "Add"
    }
    

    We definitely need to perform some sort of validation on the message content.

    For example:

    • The Username property must not be empty;
    • The BookId property must be a positive number;
    • The Operation property must have one of the following values: Add, Remove, Refresh;
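
    As a minimal sketch of these three checks (written in Python to match the other snippets in this digest; the original article’s ecosystem is .NET, so this is only illustrative, not the author’s code):

    VALID_OPERATIONS = {"Add", "Remove", "Refresh"}

    def validate_message(msg: dict) -> list:
        """Return a list of validation errors; an empty list means the message is valid."""
        errors = []
        if not str(msg.get("Username", "")).strip():
            errors.append("Username must not be empty")
        if not isinstance(msg.get("BookId"), int) or msg["BookId"] <= 0:
            errors.append("BookId must be a positive number")
        if msg.get("Operation") not in VALID_OPERATIONS:
            errors.append("Operation must be one of: Add, Remove, Refresh")
        return errors

    # validate_message({"Username": "mr. captain", "BookId": 154, "Operation": "Add"}) -> []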

    How does it impact our design?

    We have several choices to deal with an invalid incoming message:

    1. ignore the whole message: if it doesn’t pass the validation, discard the message;
    2. send the message back to the caller, describing the type of error
    3. try to fix it locally: if we are able to recreate a valid message, we could try to fix it and process the incoming message;
    4. try to fix it in a separate service: you will need to create a distinct service that receives the invalid message and tries to fix it: if it manages to fix the message, it re-inserts it in the original queue; otherwise, it sends a message to the response queue to notify about the impossibility to recreate a valid message.

    As you can see, even for the simple input validation, the choices we make can have an impact on the structure of the architecture.

    Suppose that you choose option #4: you will need to implement a brand new service (let’s call it ValidationFixesManager), configure a new queue, and keep track of the attempts to fix the message.

    Example of Architecture with ValidationFixesManager component

    All of this only when considering the static validation. How would you validate your business rules? How would you ensure that, for instance, the Username is valid and the user is still active on the system?

    Maybe you discover that the data stored in the database is incomplete or stale. Then you have to work out a way to handle such data.

    For example, you can:

    • run a background job that ensures that all the data is always valid;
    • enrich the data from the DB with newer data only when it is actually needed;
    • fine-tune the database consistency level.

    We have just demonstrated a simple but important fact: data validation looks trivial, but depending on the needs of your system, it may impact how you design your system.

    Transient Errors: temporary errors that may randomly occur

    Even if the validation passes, temporary issues may prevent your operations from completing.

    In the previous example, there are some possible cases to consider:

    1. the external API is temporarily down, and you cannot retrieve the data you need;
    2. the return queue is full, and you cannot add response messages;
    3. the application is not able to connect to the DB due to network issues;

    These kinds of issues are due to a temporary status of the system or of one of its dependencies.

    Sure, you may add automatic retries: for instance, you can use Polly to automatically retry calls to the API. But what if it’s not enough?
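
    Polly is the idiomatic .NET choice here; purely as a language-agnostic illustration (sketched in Python to match the other snippets in this digest), a retry with exponential backoff boils down to something like this:

    import time

    def call_with_retries(call, max_attempts=3, base_delay=1.0):
        """Retry a flaky call with exponential backoff; re-raise after the last attempt."""
        for attempt in range(1, max_attempts + 1):
            try:
                return call()
            except Exception:
                if attempt == max_attempts:
                    raise  # all retries failed: the caller decides what happens next
                time.sleep(base_delay * 2 ** (attempt - 1))

    # Example: call_with_retries(lambda: requests.get(api_url, timeout=5))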

    Again, depending on your application’s requirements and the overall structure you started designing, solving this problem may bring you to unexpected paths.

    Let’s say that the external API is returning a 500 HTTP error: this is a transient error, and it does not depend on the content of the request. The API is down, and you cannot do anything to solve it.

    What can we do if all the retries fail?

    If we can just accept the situation, we can return the error to the caller and move on with the next operation.

    But if we need to keep trying until the operation goes well, we have (at least) two choices:

    1. consume the message from the Queue, try calling the API, and, if it fails, re-insert the message on the queue (ideally, with some delay);
    2. peek the message from the queue and try calling the API. If it fails, the message stays on the queue (and you need a way to read it again). Otherwise, we consider the message completed and remove it from the queue.

    These are just two of the different solutions. But, as you can see, this choice will have, in the long run, a huge effect on the future of the application, both in terms of maintainability and performance.

    Below is how the structure changes if we decide to send the failed messages back in the queue with some delay.

    The MainApplication now sends messages back on the queue

    In both cases, we must remember that trying to call a service that is temporarily down is useless: maybe it’s time to use a Circuit Breaker?

    Fatal Errors: when everything goes wrong

    There is one type of error that is often neglected but that may deeply influence how your system behaves: fatal errors.

    Examples of fatal errors are:

    • the host has consumed all the CPU or RAM;
    • the file system is corrupted;
    • the connection to an external system is interrupted due to network misconfigurations.

    In short, fatal errors are errors you have no way to solve in the short run: they happen and stop everything you are doing.

    This kind of error cannot be directly managed via application code, but you need to rely on other techniques.

    For example, to make sure you won’t consume all the available RAM, you should plan for autoscaling of your resources. So you have to design the system with autoscaling in mind: this means, for example, that the system must be stateless and the application must run on infrastructure objects that can be configured to automatically manage resources (like Azure Functions, Kubernetes, and Azure App Services). Also: do you need horizontal or vertical scaling?

    And, talking about the integrity of the system, how do you ensure that operations that were ongoing when the fatal error occurred can be completed?

    One possible solution is to use a database table to keep track of the status of each operation, so that when the application restarts, it first completes pending operations, and then starts working on new operations.

    A database keeps track of the failed operations
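
    A minimal sketch of that idea (SQLite and Python are used here purely for illustration; the article does not prescribe a specific technology):

    import sqlite3

    conn = sqlite3.connect("operations.db")
    conn.execute("""CREATE TABLE IF NOT EXISTS operations (
                        id      INTEGER PRIMARY KEY,
                        payload TEXT NOT NULL,
                        status  TEXT NOT NULL DEFAULT 'Pending')""")

    def resume_pending(process):
        # On restart, first complete whatever the fatal error interrupted...
        pending = conn.execute(
            "SELECT id, payload FROM operations WHERE status = 'Pending'").fetchall()
        for op_id, payload in pending:
            process(payload)
            conn.execute("UPDATE operations SET status = 'Done' WHERE id = ?", (op_id,))
            conn.commit()
        # ...and only then start accepting new operations.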

    A practical approach to address the Error Management Trio

    There are too many errors to manage and too much effort to cover everything!

    How can we cover everything? Well, it’s impossible: for every action we take to prevent an error, a new one may occur.

    Let’s jump back to the example we saw for handling validation errors (using a new service that tries to fix the message). What if the ValidationFixesManager service is down or the message queue is unreachable? We tried to solve a problem, but we ended up with two more to be managed!

    Let me introduce a practical approach to help you decide what needs to be addressed.

    Step 1: list all the errors you can think of. Create a table to list all the possible errors that you expect can happen.

    You can add a column to describe the category the error falls into, as well as Probability and Impact on the system columns, each with a value (in this example, Low, Medium and High) that represents the probability that this error occurs and the impact it has on the overall application.

    Problem | Category | Probability | Impact on the system
    Invalid message from queue | Data Validation | Medium | High
    Invalid user data on DB | Data Validation | Low | Medium
    Missing user on DB | Data Validation | Low | Low
    API not reachable | Transient | High | High
    DB not reachable | Transient | Low | High
    File system corrupted | Fatal | Low | High
    CPU limit reached | Fatal | Medium | High

    From here, you can pick the most urgent elements to be addressed.

    Step 2: evaluate alternatives. Every error can be addressed in several ways (ignoring the error IS a valid alternative!). Take some time to explore all the alternatives.

    Again, a table can be a good companion for this step. You can describe, for example:

    • the effort required to solve the error (Low, Medium, High);
    • the positive and negative consequences in terms (also) of quality attributes (aka “-ilities”). Maybe a solution works fine for data integrity but has a negative impact on maintainability.

    Step 3: use ADRs to describe how (and why) you will handle that specific error.

    Take your time to thoroughly describe, using ADR documents, the problems you are trying to solve, the solutions taken into consideration, and the final choice.

    Having everything written down in a shared file is fundamental for ensuring that, in the future, the present choices and necessities are taken into account, before saying “meh, that’s garbage!”

    Further readings

    Unfortunately, I feel that error handling is one of the most overlooked topics when designing a system. This also means that there are not many articles and resources exploring this topic.

    But, if you use queues, one of the components you should use to manage errors is the Dead Letter queue. Here’s a good article by Dorin Baba where he explains how to use Dead Letter queues to handle errors in asynchronous systems.

    🔗 Handling errors like a pro or nah? Let’s talk about Dead Letters | Dorin Baba

    This article first appeared on Code4IT 🐧

    In this article, we used a Queue to trigger the beginning of the operation. When using Azure services, we have two types of message queues: Queues and Topics. Do you know the difference? Hint: other vendors use the same names to represent different concepts.

    🔗 Azure Service Bus: Queues vs Topics | Code4IT

    Whichever way you choose to manage an error, always remember to write down the reasons that guided you to that specific solution. An incredibly helpful way is by using ADRs.

    🔗 Tracking decision with Architecture Decision Records (ADRs) | Code4IT

    Wrapping up

    This article highlights the importance of error management and the fact that even if we all want to avoid and prevent errors in our systems, we still have to take care of them and plan according to our needs.

    I hope you enjoyed this article! Let’s keep in touch on LinkedIn or Twitter! 🤜🤛

    Happy coding!

    🐧





    Source link

  • Python – Reading Financial Data From Internet – Useful code



    Reading financial data from the internet is sometimes challenging. In this short article with two Python snippets, I will show how to read it from Wikipedia and from an API delivering data in JSON format:

    This is how the financial JSON data from the API looks.

    Reading the data from the API is actually not tough if you have experience reading JSON with nested lists. If not, simply go with trial and error and eventually you will succeed:
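
    The exact endpoint is shown in the video; a hedged sketch of the general pattern (the URL and key below are placeholders, not the ones from the article) looks like this:

    import requests
    import pandas as pd

    # Placeholder endpoint - substitute the API used in the video
    url = "https://api.example.com/financials?symbol=AAPL"
    data = requests.get(url, timeout=10).json()

    # Drill into the nested lists/dicts until you reach the records you need,
    # then hand them to pandas for a tabular view
    df = pd.DataFrame(data["quarterlyReports"])  # key name is an assumption
    print(df.head())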

    Reading from Wikipedia is actually even easier – the site works flawlessly with pandas, and if you count the tables correctly, you will get what you want:
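
    For instance (the S&P 500 constituents page is my assumption here, not necessarily the table used in the video), pandas can read the page directly:

    import pandas as pd

    # read_html returns every <table> on the page; pick the right one by index
    url = "https://en.wikipedia.org/wiki/List_of_S%26P_500_companies"
    tables = pd.read_html(url)
    sp500 = tables[0]  # the first table holds the constituents
    print(sp500[["Symbol", "Security"]].head())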

    You might want to combine both sources, just in case:

    The YouTube video for this article is here:
    https://www.youtube.com/watch?v=Uj95BgimHa8
    The GitHub code is there – GitHub

    Enjoy it! 🙂



    Source link

  • Python – Simple Stock Analysis with yfinance – Useful code



    Sometimes, the graphs of stocks are useful. Sometimes these are not. In general, do your own research, none of this is financial advice.

    And while doing that, if you want to analyze stocks with just a few lines of Python, this article might help. This simple yet powerful script helps you spot potential buy and sell opportunities for Apple (AAPL) using two classic technical indicators: moving averages and RSI.

    Understanding the Strategy

    1. SMA Crossover: The Trend Following Signal

    The script first calculates two Simple Moving Averages (SMA):

    The crossover strategy is simple:

    This works because moving averages smooth out price noise, helping identify the overall trend direction.

    2. RSI: The Overbought/Oversold Indicator

    The Relative Strength Index (RSI) measures whether a stock is overbought or oversold:

    By combining SMA crossovers (trend confirmation) and RSI extremes (timing), we get stronger signals.

    This plot is generated with less than 40 lines of Python code

    The code looks like this:
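
    The original script is in the repo and the video; a hedged reconstruction of the idea (the 50/200-day windows and the 30/70 RSI thresholds are common defaults, assumed here rather than taken from the article) could look like this:

    import yfinance as yf
    import matplotlib.pyplot as plt

    # Daily prices for Apple
    data = yf.Ticker("AAPL").history(period="2y")
    close = data["Close"]

    # Simple moving averages (window sizes are assumed defaults)
    data["SMA50"] = close.rolling(50).mean()
    data["SMA200"] = close.rolling(200).mean()

    # RSI(14): average gain vs average loss over the last 14 days
    delta = close.diff()
    gain = delta.clip(lower=0).rolling(14).mean()
    loss = -delta.clip(upper=0).rolling(14).mean()
    data["RSI"] = 100 - 100 / (1 + gain / loss)

    # Signals: trend (SMA crossover) confirmed by timing (RSI extremes)
    buy = (data["SMA50"] > data["SMA200"]) & (data["RSI"] < 30)
    sell = (data["SMA50"] < data["SMA200"]) & (data["RSI"] > 70)

    fig, ax = plt.subplots(figsize=(10, 5))
    close.plot(ax=ax, label="AAPL close")
    data["SMA50"].plot(ax=ax, label="SMA 50")
    data["SMA200"].plot(ax=ax, label="SMA 200")
    ax.scatter(close.index[buy], close[buy], marker="^", color="green", label="Buy")
    ax.scatter(close.index[sell], close[sell], marker="v", color="red", label="Sell")
    ax.legend()
    plt.show()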

    The original code is explained in much more detail in the YT video below:

    https://www.youtube.com/watch?v=m0ayASmrZmE

    And it is available in GitHub as well.



    Source link

  • Calling AWS Bedrock from code. Using Python in a Jupyter notebook | by Thomas Reid


    Image by Author

    Using Python in a Jupyter notebook

    Many of you will know that every man and his dog are producing AI products or LLMs and integrating them with their products. Not surprisingly, AWS — the biggest cloud services provider — is also getting in on the act.

    What is Bedrock?

    Its AI offering is called Bedrock, and the following blurb from its website describes what Bedrock is.

    Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Stability AI, and Amazon via a single API, along with a broad set of capabilities you need to build generative AI applications, simplifying development while maintaining privacy and security. With Amazon Bedrock’s comprehensive capabilities, you can easily experiment with a variety of top FMs, privately customize them with your data using techniques such as fine-tuning and retrieval augmented generation (RAG), and create managed agents that execute complex business tasks — from booking travel and processing insurance claims to creating ad campaigns and managing inventory — all without writing any code. Since Amazon Bedrock is serverless, you don’t have to manage any infrastructure, and you can securely integrate and deploy generative AI…
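
    The article walks through the full notebook; as a minimal hedged sketch of the general call pattern (the region, model ID and prompt format below are assumptions, and the model must be enabled in your AWS account), invoking a model with boto3 looks roughly like this:

    import json
    import boto3

    # Bedrock runtime client - the region is an assumption
    client = boto3.client("bedrock-runtime", region_name="us-east-1")

    # The request body format depends on the chosen foundation model
    body = json.dumps({
        "prompt": "\n\nHuman: Summarise what Amazon Bedrock is in one sentence.\n\nAssistant:",
        "max_tokens_to_sample": 200,
    })

    response = client.invoke_model(
        modelId="anthropic.claude-v2",  # assumed model; pick one enabled in your account
        body=body,
    )

    result = json.loads(response["body"].read())
    print(result["completion"])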



    Source link

  • Write and Test Code Instantly With an Online Python Editor






    Source link