Tag: and

  • Fixing PHP Session Issues: Troubleshooting and Solutions.



    PHP sessions are essential for maintaining state and user data across multiple pages in web applications. However, they can sometimes be tricky to manage. Drawing from my own experiences, I’ll share some troubleshooting steps and solutions to common PHP session issues.

    1. Session Not Starting Properly

    Symptoms
    • Sessions are not being created.
    • $_SESSION variables are not being saved.
    Troubleshooting Steps
    1. Check session_start(): Ensure session_start() is called at the beginning of your script before any output is sent to the browser. This is a common oversight, and I’ve personally spent hours debugging a session issue only to find it was due to a missing session_start().
    <?php
    session_start();
    ?>
    

    2. Output Buffering: Make sure no HTML or whitespace appears before session_start(). This can be a subtle issue, especially if multiple developers are working on the same project.

    <?php
    ob_start();
    session_start();
    // Your code
    ob_end_flush();
    ?>
    

    3. Check error_log: Look at the PHP error log for any session-related errors. This step often provides valuable insights into what might be going wrong.

    Solutions
    • Always place session_start() at the very beginning of your script.
    • Use output buffering to prevent accidental output before sessions start.

    2. Session Variables Not Persisting

    Symptoms
    • Session variables reset on every page load.
    • Session data is not maintained across different pages.
    Troubleshooting Steps
    1. Session Cookie Settings: Check if the session cookie is being set correctly. This can sometimes be overlooked in development environments where cookies are frequently cleared.
    ini_set('session.cookie_lifetime', 0); // 0 = the session cookie lasts until the browser is closed
    

    2. Browser Settings: Ensure cookies are enabled in the browser. I’ve had instances where a simple browser setting was the culprit behind persistent session issues.

    3. Correct Session Variables: Ensure session variables are set correctly. Misconfigurations here can lead to confusing behavior.

    <?php
    session_start();
    $_SESSION['username'] = 'user';
    echo $_SESSION['username'];
    ?>
    
    Solutions
    • Verify that session_start() is called on every page where session data is accessed.
    • Ensure consistent session settings across all scripts.

    3. Session Expiring Too Soon

    Symptoms
    • Sessions are expiring before the expected time.
    • Users are being logged out prematurely.
    Troubleshooting Steps
    1. Session Timeout Settings: Check and adjust session.gc_maxlifetime and session.cookie_lifetime. In my experience, adjusting these settings can significantly improve user experience by keeping sessions active for the desired duration.
    ini_set('session.gc_maxlifetime', 3600); // 1 hour
    ini_set('session.cookie_lifetime', 3600);
    

    2. Garbage Collection: Ensure session garbage collection is not overly aggressive. Fine-tuning this setting can prevent premature session deletions.

    ini_set('session.gc_probability', 1); // together with gc_divisor, a 1/100 = 1% chance
    ini_set('session.gc_divisor', 100);   // of running garbage collection on each request
    
    Solutions
    • Adjust session.gc_maxlifetime and session.cookie_lifetime to reasonable values.
    • Balance garbage collection settings to prevent premature session deletion.

    4. Session Fixation

    Symptoms
    • Security vulnerability where an attacker can fixate a session ID and hijack a user session.
    Troubleshooting Steps
    1. Regenerate Session ID: Regenerate the session ID upon login or privilege change. This is a critical step in securing your application against session fixation attacks.
    session_regenerate_id(true); // true = delete the old session data so the previous ID cannot be reused
    

    2. Set Session Cookie Securely: Use httponly and secure flags for session cookies. This helps in preventing session hijacking through XSS attacks.

    ini_set('session.cookie_httponly', 1); // cookie is not readable from JavaScript
    ini_set('session.cookie_secure', 1);   // cookie is only sent over HTTPS
    
    Solutions
    • Always regenerate the session ID after login or significant changes in privileges.
    • Set the session cookie parameters to enhance security.




    Source link

  • IEnumerable vs ICollection, and why it matters | Code4IT




    Defining the best return type is crucial to creating a shared library whose behaviour is totally under your control.

    You should give the consumers of your libraries just the right amount of freedom to integrate and use the classes and structures you have defined.

    That’s why it is important to know the differences between interfaces like IEnumerable<T> and ICollection<T>: these interfaces are often used together but have totally different meanings.

    IEnumerable: loop through the items in the collection

    Suppose that IAmazingInterface is an interface you expose so that clients can interact with it without knowing the internal behaviour.

    You have defined it this way:

    public interface IAmazingInterface
    {
        IEnumerable<int> GetNumbers(int[] numbers);
    }
    

    As you can see, GetNumbers returns an IEnumerable<int>: this means that, unless they use particular tricks like reflection, clients will only be able to loop through the collection of items.

    Clients don’t know that, behind the scenes, AmazingClass uses a custom class MySpecificEnumberable.

    public class AmazingClass: IAmazingInterface
    {
        public IEnumerable<int> GetNumbers(int[] numbers)
            => new MySpecificEnumberable(numbers);
    }
    

    MySpecificEnumberable is a custom class whose purpose is to store the initial values in a sorted way. It implements IEnumerable<int>, so the only operations you have to support are the two implementations of GetEnumerator() – pay attention to the returned data type!

    public class MySpecificEnumberable : IEnumerable<int>
    {
        private readonly int[] _numbers;
    
        public MySpecificEnumberable(int[] numbers)
        {
            _numbers = numbers.OrderBy(_ => _).ToArray();
        }
    
        public IEnumerator<int> GetEnumerator()
        {
            foreach (var number in _numbers)
            {
                yield return number;
            }
        }
    
        IEnumerator IEnumerable.GetEnumerator()
            => _numbers.GetEnumerator();
    }
    

    Clients will then be able to loop all the items in the collection:

    IAmazingInterface something = new AmazingClass();
    var numbers = something.GetNumbers([1, 5, 6, 9, 8, 7, 3]);
    
    foreach (var number in numbers)
    {
        Console.WriteLine(number);
    }
    

    But you cannot add or remove items from it.

    ICollection: list, add, and remove items

    As we saw, IEnumerable<T> only allows you to loop through all the elements. However, you cannot add or remove items from an IEnumerable<T>.

    To do so, you need something that implements ICollection<T>, like the following class (I haven’t implemented any of these methods: I want you to focus on the operations provided, not on the implementation details).

    class MySpecificCollection : ICollection<int>
    {
        public int Count => throw new NotImplementedException();
    
        public bool IsReadOnly => throw new NotImplementedException();
    
        public void Add(int item) => throw new NotImplementedException();
    
        public void Clear() => throw new NotImplementedException();
    
        public bool Contains(int item) => throw new NotImplementedException();
    
        public void CopyTo(int[] array, int arrayIndex) => throw new NotImplementedException();
    
        public IEnumerator<int> GetEnumerator() => throw new NotImplementedException();
    
        public bool Remove(int item) => throw new NotImplementedException();
    
        IEnumerator IEnumerable.GetEnumerator() => throw new NotImplementedException();
    }
    

    ICollection<T> is a subtype of IEnumerable<T>, so everything we said before is still valid.

    However, having a class that implements ICollection<T> gives you full control over how items can be added or removed from the collection, allowing you to define custom behaviour. For instance, you can define that the Add method adds an integer only if it’s an odd number.
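
    As a concrete illustration of that idea, here is a minimal sketch of such a collection. The class name OddNumbersCollection and its internal List<int> are my own, purely illustrative choices, not code from this article (it needs using System.Collections and System.Collections.Generic).

    // Illustrative example: an ICollection<int> whose Add only accepts odd numbers.
    public class OddNumbersCollection : ICollection<int>
    {
        private readonly List<int> _items = new();
    
        public int Count => _items.Count;
        public bool IsReadOnly => false;
    
        public void Add(int item)
        {
            // Custom behaviour: ignore even numbers (throwing an exception is another valid choice).
            if (item % 2 != 0)
            {
                _items.Add(item);
            }
        }
    
        public void Clear() => _items.Clear();
        public bool Contains(int item) => _items.Contains(item);
        public void CopyTo(int[] array, int arrayIndex) => _items.CopyTo(array, arrayIndex);
        public bool Remove(int item) => _items.Remove(item);
        public IEnumerator<int> GetEnumerator() => _items.GetEnumerator();
        IEnumerator IEnumerable.GetEnumerator() => GetEnumerator();
    }
    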

    Why knowing the difference actually matters

    Classes and interfaces are meant to be used. If you are like me, you work on both the creation of the class and its consumption.

    So, if an interface must return a sequence of items, you most probably use the List shortcut: define the return type of the method as List<Item>, and then use it, regardless of whether the consumer just loops through it or also adds items to the sequence.

    // in the interface
    public interface ISomething
    {
        List<Item> PerformSomething(int[] numbers);
    }
    
    
    // in the consumer class
    ISomething instance = //omitted
    List<Item> myItems = instance.PerformSomething([2, 3, 4, 5]);
    

    Everything works fine, but it works because we are in control of both the definition and the consumer.

    What if you have to expose the library to something outside your control?

    You have to consider two elements:

    • consumers should not be able to tamper with your internal implementation (for example, by adding items when they are not supposed to);
    • you should be able to change the internal implementation as you wish without breaking changes.

    So, if you want your users to just enumerate the items within a collection, you may start this way:

    // in the interface
    public interface ISomething
    {
        IEnumerable<Item> PerformSomething(int[] numbers);
    }
    
    // in the implementation
    
    IEnumerable<Item> PerformSomething(int[] numbers)
    {
        return numbers.Select(x => new Item(x)).ToList();
    }
    
    // in the consumer class
    
    ISomething instance = //omitted
    IEnumerable<Item> myItems = instance.PerformSomething([2, 3, 4, 5]);
    

    Then, when the time comes, you can change the internal implementation of PerformSomething with a more custom class:

    // custom IEnumerable definition
    public class MyCustomEnumberable : IEnumerable<Item> { /*omitted*/ }
    
    // in the interface
    IEnumerable<Item> PerformSomething(int[] numbers)
    {
        MyCustomEnumberable customEnumerable = new MyCustomEnumberable();
        customEnumerable.DoSomething(numbers);
        return customEnumerable;
    }
    

    And the consumer will not notice the difference. Again, unless they try to use tricks to tamper with your code!

    This article first appeared on Code4IT 🐧

    Wrapping up

    While understanding the differences between IEnumerable and ICollection is trivial, understanding why you should care about them is not.

    IEnumerable and ICollection hierarchy

    I hope this article helped you understand that, yes, you can take the easy way and return a List everywhere, but it’s a choice you cannot always apply to a project, and one that will probably make breaking changes more frequent in the long run.

    I hope you enjoyed this article! Let’s keep in touch on LinkedIn or Twitter! 🤜🤛

    Happy coding!

    🐧





    Source link

  • Data Discovery and Classification for Modern Enterprises



    In today’s high-stakes digital arena, data is the lifeblood of every enterprise. From driving strategy to unlocking customer insights, enterprises depend on data like never before. But with significant volume comes great vulnerability.

    Imagine managing a massive warehouse without labels, shelves, or a map. That’s how most organizations handle their data today—scattered across endpoints, servers, SaaS apps, and cloud platforms, much of it unidentified and unsecured. This dark, unclassified data is inefficient and dangerous.

    At Seqrite, the path to resilient data privacy and governance begins with two foundational steps: Data Discovery and Classification.

    Shedding Light on Dark Data: The Discovery Imperative

    Before protecting your data, you need to know what you have and where it resides. That’s the core of data discovery—scanning your digital landscape to locate and identify every piece of information, from structured records in databases to unstructured files in cloud folders.

    Modern Privacy tools leverage AI and pattern recognition to unearth sensitive data, whether it’s PII, financial records, or health information, often hidden in unexpected places. Shockingly, nearly 75% of enterprise data remains unused, mainly because it goes undiscovered.

    Without this visibility, every security policy and compliance program stands on shaky ground.

    Data Classification: Assigning Value and Implementing Control

    Discovery tells you what data you have. Classification tells you how to treat it.

    Is it public? Internal? Confidential? Restricted? Classification assigns your data a business context and risk level so you can apply the right protection, retention, and sharing rules.

    This is especially critical in industries governed by privacy laws like GDPR, DPDP Act, and HIPAA, where treating all data the same is both inefficient and non-compliant.

    With classification in place, you can:

    • Prioritize protection for sensitive data
    • Automate DLP and encryption policies
    • Streamline responses to individual rights requests
    • Reduce the clutter of ROT (redundant, obsolete, trivial) data

    The Power of Discovery + Classification

    Together, discovery and classification form the bedrock of data governance. Think of them as a radar system and rulebook:

    • Discovery shows you the terrain.
    • Classification helps you navigate it safely.

    When integrated into broader data security workflows – like Zero Trust access control, insider threat detection, and consent management – they multiply the impact of every security investment.

    Five Reasons Enterprises Can’t Ignore this Duo

    1. Targeted Security Where It Matters Most

    You can’t secure what you can’t see. With clarity on your sensitive data’s location and classification, you can apply fine-tuned protections such as encryption, role-based access, and DLP—only where needed. That reduces attack surfaces and simplifies security operations.

    2. Compliance Without Chaos

    Global data laws are demanding and constantly evolving. Discovery and classification help you prove accountability, map personal data flows, and respond to rights requests accurately and on time.

    3. Storage & Cost Optimization

    Storing ROT data is expensive and risky. Discovery helps you declutter, archive, or delete non-critical data while lowering infrastructure costs and improving data agility.

    4. Proactive Risk Management

    The longer a breach goes undetected, the more damage it does. By continuously discovering and classifying data, you spot anomalies and vulnerabilities early, well before they spiral into crises.

    5. Better Decisions with Trustworthy Data

    Only clean, well-classified data can fuel accurate analytics and AI. Whether it’s refining customer journeys or optimizing supply chains, data quality starts with discovery and classification.

    In Conclusion: Know Your Data, Secure Your Future

    In a world where data is constantly growing, moving, and evolving, the ability to discover and classify is a strategic necessity. These foundational capabilities empower organizations to go beyond reactive compliance and security, helping them build proactive, resilient, and intelligent data ecosystems.

    Whether your goal is to stay ahead of regulatory demands, reduce operational risks, or unlock smarter insights, it all starts with knowing your data. Discovery and classification don’t just minimize exposure; they create clarity, control, and confidence.

    Enterprises looking to take control of their data can rely on Seqrite’s Data Privacy solution, which delivers powerful discovery and classification capabilities to turn information into an advantage.



    Source link

  • Rules of 114 and 144 – Useful code


    The Rule of 114 is a quick way to estimate how long it will take to triple your money with compound interest.  The idea is simple: divide 114 by the annual interest rate (in %), and you will get an approximate answer in years.

    • If you earn 10% annually, the time to triple your money is approximately 114 / 10 = 11.4 years.

    Similarly, the Rule of 144 works for quadrupling your money. Divide 144 by the annual interest rate to estimate the time.

    • At 10% annual growth, the time to quadruple your money is 144 / 10 = 14.4 years.

    Why Do These Rules Work?

    These rules are approximations based on the exponential nature of compound interest. While they are not perfectly accurate for all rates, they are great for quick mental math, especially for interest rates in the 5–15% range. While the rules are convenient, always use the exact formula when accuracy matters!

    Exact Formulas?

    For precise calculations, use the exact formulas based on logarithms, where r is the annual interest rate expressed as a decimal (e.g. 0.10 for 10%):

    • To triple your money: years = ln(3) / ln(1 + r)
    • To quadruple your money: years = ln(4) / ln(1 + r)

    Since ln(3) ≈ 1.10 and ln(4) ≈ 1.39, the “exact” rule numbers would be roughly 110 and 139; 114 and 144 are used instead because ln(1 + r) is slightly smaller than r, and the larger numerators keep the shortcut close to the true value at typical rates.

    Both the 3x and 4x rules boil down to a short Python formula, which is shown and explained in a bit more detail in the video below:

    https://www.youtube.com/watch?v=iDcPdcKi-oI

    The GitHub repository is here: https://github.com/Vitosh/Python_personal/tree/master/YouTube/024_Python-Rule-of-114

    Enjoy it! 🙂



    Source link

  • Sine and Cosine – A friendly guide to the unit circle



    Welcome to the world of sine and cosine! These two functions are the backbone of trigonometry, and they’re much simpler than they seem. In this article, we will explore the unit circle, the home of sine and cosine, and learn





    Source link

  • Automate Stock Analysis with Python and Yfinance: Generate Excel Reports



    In this article, we will explore how to analyze stocks using Python and Excel. We will fetch historical data for three popular stocks—Realty Income (O), McDonald’s (MCD), and Johnson & Johnson (JNJ) — calculate returns, factor in dividends, and visualize





    Source link

  • Python – Data Wrangling with Excel and Pandas – Useful code



    Data wrangling with Excel and Pandas is actually quite a useful tool in the belt of any Excel professional, financial professional, data analyst or developer. Really, everyone can benefit from well-defined libraries that ease people’s lives. These are the libraries used:

    Additionally, a function for making a unique Excel name is used:

    An example of the video, where Jupyter Notebook is used.

    In the YT video below, the following 8 points are discussed:

    # Trick 1 – Simple reading of worksheet from Excel workbook

    # Trick 2 – Combine Reports

    # Trick 3 – Fix Missing Values

    # Trick 4 – Formatting the exported Excel file

    # Trick 5 – Merging Excel Files

    # Trick 6 – Smart Filtering

    # Trick 7 – Merging Tables

    # Trick 8 – Export Dataframe to Excel

    The whole code, together with the Excel files, is available on GitHub here.

    https://www.youtube.com/watch?v=SXXc4WySZS4

    Enjoy it!



    Source link

  • Davide’s Code and Architecture Notes



    When designing a software system, we naturally focus more on the happy flow. But we should carefully plan to handle errors that fall into three categories: Validation, Transient, and Fatal.


    When designing a new software system, it’s easy to focus mainly on the happy flow and forget that you must also handle errors.

    You should carefully define and design how to handle errors: depending on the use case, error handling can have a huge impact on the architecture of your software system.

    In this article, we’ll explore the three main categories of errors that we must always remember to address; for each type of error, we will showcase how addressing it can impact the software architecture differently.

    An ideal system with only the happy path

    To use a realistic example, let’s design a simple system with a single module named MainApplication: this module reads data from an external API, manipulates the data, and stores the result on the DB.

    The system is called asynchronously, via a Message Queue, by an external service – that we are going to ignore.

    The happy flow is pretty much the following:

    1. An external system inserts some data into the Queue;
    2. MainApplication reads the data from the Queue;
    3. MainApplication calls an external API to retrieve some data;
    4. MainApplication stores some data on the DB;
    5. MainApplication sends a message on the queue with the operation result.

    Happy flow for MainApplication

    Now, the happy flow is simple. But we also have to cover what to do in case of an error.

    Introducing the Error Management Trio

    In general, errors that need to be handled fall into three categories (that I decided to call “the Error Management Trio”): data validation, transient errors, and faults.

    Data Validation focuses on the data used across the system, particularly the data you don’t control.

    Transient Errors occur when the application’s overall status or its dependencies temporarily change to an invalid state.

    Faults are errors that take down the whole application, and you cannot recover immediately.

    The Trio does not take into account “errors” that are not properly errors: null values, queries that do not return any value, and so on. These, in my opinion, are all legitimate statuses that represent the lack of a value, not errors with architectural relevance.

    The Error Management Trio schema

    Data Validation: the first defence against invalid status

    The Data Validation category focuses on ensuring that relevant data is in a valid status.

    In particular, it aims at ensuring that data coming from external sources (for example, from the Body in an incoming HTTP request or from the result of a query on the database) is both syntactically and logically valid.

    Suppose that the messages we receive from the queue are in the following format:

    {
      "Username": "mr. captain",
      "BookId": 154,
      "Operation": "Add"
    }
    

    We definitely need to perform some sort of validation on the message content.

    For example:

    • The Username property must not be empty;
    • The BookId property must be a positive number;
    • The Operation property must have one of the following values: Add, Remove, Refresh;
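
    As a minimal sketch of this kind of static validation (the IncomingMessage record, the MessageValidator class and its Validate method are illustrative names of mine, not part of the original design; it needs using System.Linq):

    // Illustrative example: static validation of the incoming queue message.
    public record IncomingMessage(string Username, int BookId, string Operation);

    public static class MessageValidator
    {
        private static readonly string[] AllowedOperations = { "Add", "Remove", "Refresh" };

        public static bool Validate(IncomingMessage message, out string error)
        {
            if (string.IsNullOrWhiteSpace(message.Username))
            {
                error = "The Username property must not be empty";
                return false;
            }

            if (message.BookId <= 0)
            {
                error = "The BookId property must be a positive number";
                return false;
            }

            if (!AllowedOperations.Contains(message.Operation))
            {
                error = "The Operation property must be one of: Add, Remove, Refresh";
                return false;
            }

            error = string.Empty;
            return true;
        }
    }
    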

    How does it impact our design?

    We have several choices to deal with an invalid incoming message:

    1. ignore the whole message: if it doesn’t pass the validation, discard the message;
    2. send the message back to the caller, describing the type of error
    3. try to fix it locally: if we are able to recreate a valid message, we could try to fix it and process the incoming message;
    4. try to fix it in a separate service: you will need to create a distinct service that receives the invalid message and tries to fix it. If it manages to fix the message, it re-inserts it in the original queue; otherwise, it sends a message to the response queue to notify the caller that a valid message could not be recreated.

    As you can see, even for the simple input validation, the choices we make can have an impact on the structure of the architecture.

    Suppose that you choose option #4: you will need to implement a brand new service (let’s call it ValidationFixesManager), configure a new queue, and keep track of the attempts to fix the message.

    Example of Architecture with ValidationFixesManager component

    All of this only when considering the static validation. How would you validate your business rules? How would you ensure that, for instance, the Username is valid and the user is still active on the system?

    Maybe you discover that the data stored in the database is incomplete or stale. Then you have to work out a way to handle that kind of data.

    For example, you can:

    • run a background job that ensures that all the data is always valid;
    • enrich the data from the DB with newer data only when it is actually needed;
    • fine-tune the database consistency level.

    We have just demonstrated a simple but important fact: data validation looks trivial, but depending on the needs of your system, it may impact how you design your system.

    Transient Errors: temporary errors that may randomly occur

    Even if the validation passes, temporary issues may prevent your operations from completing.

    In the previous example, there are some possible cases to consider:

    1. the external API is temporarily down, and you cannot retrieve the data you need;
    2. the return queue is full, and you cannot add response messages;
    3. the application is not able to connect to the DB due to network issues;

    These kinds of issues are due to a temporary status of the system or of one of its dependencies.

    Sure, you may add automatic retries: for instance, you can use Polly to automatically retry calls to the API. But what if that’s not enough?
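
    For reference, here is a minimal sketch of such an automatic retry with Polly (assuming the Polly NuGet package and a plain HttpClient; the URL, retry count and delays are placeholders of mine, not values from this article):

    // Illustrative example: retry transient HTTP failures with exponential backoff using Polly.
    // using System; using System.Net.Http; using Polly;
    var retryPolicy = Policy
        .Handle<HttpRequestException>()
        .WaitAndRetryAsync(3, attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt))); // waits 2s, 4s, 8s

    using var httpClient = new HttpClient();

    // inside an async method:
    HttpResponseMessage response = await retryPolicy.ExecuteAsync(async () =>
    {
        HttpResponseMessage result = await httpClient.GetAsync("https://external-api.example.com/data");
        result.EnsureSuccessStatusCode(); // a 5xx response throws HttpRequestException, triggering a retry
        return result;
    });
    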

    Again, depending on your application’s requirements and the overall structure you started designing, solving this problem may bring you to unexpected paths.

    Let’s say that the external API is returning a 500 HTTP error: this is a transient error that does not depend on the content of the request. The API is down, and you cannot do anything to solve it.

    What can we do if all the retries fail?

    If we can just accept the situation, we can return the error to the caller and move on with the next operation.

    But if we need to keep trying until the operation goes well, we have (at least) two choices:

    1. consume the message from the Queue, try calling the API, and, if it fails, re-insert the message on the queue (ideally, with some delay);
    2. peek the message from the queue and try calling the API. If it fails, the message stays on the queue (and you need a way to read it again). Otherwise, we consider the message completed and remove it from the queue.

    These are just two of the different solutions. But, as you can see, this choice will have, in the long run, a huge effect on the future of the application, both in terms of maintainability and performance.

    Below is how the structure changes if we decide to send the failed messages back in the queue with some delay.

    The MainApplication now sends messages back on the queue
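
    To make the first option more tangible, here is a rough sketch of the consume–try–re-enqueue loop. QueueMessage, IMessageQueue, IExternalApi and QueueProcessor are purely illustrative abstractions of mine, not types from a specific library (it needs using System and using System.Threading.Tasks):

    // Illustrative sketch of option 1: consume the message, try the call, re-enqueue on failure.
    public record QueueMessage(string Username, int BookId, string Operation);

    public interface IMessageQueue
    {
        Task<QueueMessage?> ConsumeAsync();
        Task EnqueueAsync(QueueMessage message, TimeSpan delay);
    }

    public interface IExternalApi
    {
        Task<string> RetrieveDataAsync(int bookId);
    }

    public class QueueProcessor
    {
        private readonly IMessageQueue _queue;
        private readonly IExternalApi _api;

        public QueueProcessor(IMessageQueue queue, IExternalApi api)
            => (_queue, _api) = (queue, api);

        public async Task ProcessNextAsync()
        {
            QueueMessage? message = await _queue.ConsumeAsync();
            if (message is null) return;

            try
            {
                string data = await _api.RetrieveDataAsync(message.BookId);
                // ...store the result on the DB and publish the response message (omitted)
            }
            catch (Exception)
            {
                // Transient failure: put the message back on the queue with a delay.
                await _queue.EnqueueAsync(message, TimeSpan.FromMinutes(5));
            }
        }
    }
    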

    In both cases, we must remember that trying to call a service that is temporarily down is useless: maybe it’s time to use a Circuit Breaker?

    Fatal Errors: when everything goes wrong

    There is one type of error that is often neglected but that may deeply influence how your system behaves: fatal errors.

    Examples of fatal errors are:

    • the host has consumed all the CPU or RAM;
    • the file system is corrupted;
    • the connection to an external system is interrupted due to network misconfigurations.

    In short, fatal errors are errors you have no way to solve in the short run: they happen and stop everything you are doing.

    This kind of error cannot be directly managed via application code, but you need to rely on other techniques.

    For example, to make sure you won’t consume all the available RAM, you should plan for autoscaling of your resources. So you have to design the system with autoscaling in mind: this means, for example, that the system must be stateless and the application must run on infrastructure objects that can be configured to automatically manage resources (like Azure Functions, Kubernetes, and Azure App Services). Also: do you need horizontal or vertical scaling?

    And, talking about the integrity of the system, how do you ensure that operations that were ongoing when the fatal error occurred can be completed?

    One possible solution is to use a database table to keep track of the status of each operation, so that when the application restarts, it first completes pending operations, and then starts working on new operations.

    A database keeps track of the failed operations
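
    As a rough sketch of that idea (the OperationStatus and OperationTracking names, and their columns, are made up for illustration, not part of the original design):

    // Illustrative example: one row per operation, updated as the work progresses.
    public enum OperationStatus { Pending, InProgress, Completed, Failed }

    public record OperationTracking(
        Guid OperationId,        // identifier of the operation being processed
        string Payload,          // the original message, so the operation can be replayed
        OperationStatus Status,  // Pending / InProgress / Completed / Failed
        DateTime LastUpdatedUtc);

    // On startup, the application first reloads the rows whose Status is Pending or InProgress
    // and completes them, and only then starts consuming new messages from the queue.
    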

    A practical approach to address the Error Management Trio

    There are too many errors to manage and too much effort to cover everything!

    How can we cover everything? Well, it’s impossible: for every action we take to prevent an error, a new one may occur.

    Let’s jump back to the example we saw for handling validation errors (using a new service that tries to fix the message). What if the ValidationFixesManager service is down or the message queue is unreachable? We tried to solve a problem, but we ended up with two more to be managed!

    Let me introduce a practical approach to help you decide what needs to be addressed.

    Step 1: list all the errors you can think of. Create a table with all the possible errors that you expect can happen.

    You can add a column that describes the category the error falls into, as well as Probability and Impact on the system columns, whose values (in this example, Low, Medium and High) represent how likely the error is to occur and the impact it has on the overall application.

    Problem | Category | Probability | Impact on the system
    Invalid message from queue | Data Validation | Medium | High
    Invalid user data on DB | Data Validation | Low | Medium
    Missing user on DB | Data Validation | Low | Low
    API not reachable | Transient | High | High
    DB not reachable | Transient | Low | High
    File system corrupted | Fatal | Low | High
    CPU limit reached | Fatal | Medium | High

    From here, you can pick the most urgent elements to be addressed.

    Step 2: evaluate alternatives. Every error can be addressed in several ways (ignoring the error IS a valid alternative!). Take some time to explore all the alternatives.

    Again, a table can be a good companion for this step. You can describe, for example:

    • the effort required to solve the error (Low, Medium, High);
    • the positive and negative consequences in terms (also) of quality attributes (aka: “-ilities”). Maybe a solution works fine for data integrity but has a negative impact on maintainability.

    Step 3: use ADRs to describe how (and why) you will handle that specific error.

    Take your time to thoroughly describe, using ADR documents, the problems you are trying to solve, the solutions taken into consideration, and the final choice.

    Having everything written down in a shared file is fundamental for ensuring that, in the future, the present choices and necessities are taken into account, before saying “meh, that’s garbage!”

    Further readings

    Unfortunately, I feel that error handling is one of the most overlooked topics when designing a system. This also means that there are not many articles and resources exploring this topic.

    But, if you use queues, one of the components you should use to manage errors is the Dead Letter queue. Here’s a good article by Dorin Baba where he explains how to use Dead Letter queues to handle errors in asynchronous systems.

    🔗 Handling errors like a pro or nah? Let’s talk about Dead Letters | Dorin Baba

    This article first appeared on Code4IT 🐧

    In this article, we used a Queue to trigger the beginning of the operation. When using Azure services, we have two types of message queues: Queues and Topics. Do you know the difference? Hint: other vendors use the same names to represent different concepts.

    🔗 Azure Service Bus: Queues vs Topics | Code4IT

    Whichever way you choose to manage an error, always remember to write down the reasons that guided you to that specific solution. An incredibly helpful way to do so is by using ADRs.

    🔗 Tracking decision with Architecture Decision Records (ADRs) | Code4IT

    Wrapping up

    This article highlights the importance of error management and the fact that even if we all want to avoid and prevent errors in our systems, we still have to take care of them and plan according to our needs.

    I hope you enjoyed this article! Let’s keep in touch on LinkedIn or Twitter! 🤜🤛

    Happy coding!

    🐧





    Source link

  • Easy logging management with Seq and ILogger in ASP.NET | Code4IT



    Seq is one of the best Log Sinks out there: it’s easy to install and configure, and can be added to an ASP.NET application with just a line of code.


    Logging is one of the most essential parts of any application.

    Wouldn’t it be great if we could scaffold and use a logging platform with just a few lines of code?

    In this article, we are going to learn how to install and use Seq as a destination for our logs, and how to make an ASP.NET 8 API application send its logs to Seq by using the native logging implementation.

    Seq: a sink and dashboard to manage your logs

    In the context of logging management, a “sink” is a receiver of the logs generated by one or many applications; it can be a cloud-based system, but it’s not mandatory: even a file on your local file system can be considered a sink.

    Seq is a Sink, and works by exposing a server that stores logs and events generated by an application. Clearly, other than just storing the logs, Seq allows you to view them, access their details, perform queries over the collection of logs, and much more.

    It’s free to use for individual usage, and comes with several pricing plans, depending on the usage and the size of the team.

    Let’s start small and install the free version.

    We have two options:

    1. Download it locally, using an installer (here’s the download page);
    2. Use Docker: pull the datalust/seq image locally and run the container on your Docker engine.

    Both ways will give you the same result.

    However, if you already have experience with Docker, I suggest you use the second approach.

    Once you have Docker installed and running locally, open a terminal.

    First, you have to pull the Seq image locally (I know, it’s not mandatory, but I prefer doing it in a separate step):
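
    docker pull datalust/seq:latest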

    Then, when you have it downloaded, you can start a new instance of Seq locally, exposing the UI on a specific port.

    docker run --name seq -d --restart unless-stopped -e ACCEPT_EULA=Y -p 5341:80 datalust/seq:latest
    

    Let’s break down the previous command:

    • docker run: This command is used to create and start a new Docker container.
    • --name seq: This option assigns the name seq to the container. Naming containers can make them easier to manage.
    • -d: This flag runs the container in detached mode, meaning it runs in the background.
    • --restart unless-stopped: This option ensures that the container will always restart unless it is explicitly stopped. This is useful for ensuring that the container remains running even after a reboot or if it crashes.
    • -e ACCEPT_EULA=Y: This sets an environment variable inside the container. In this case, it sets ACCEPT_EULA to Y, which likely indicates that you accept the End User License Agreement (EULA) for the software running in the container.
    • -p 5341:80: This maps port 5341 on your host machine to port 80 in the container. This allows you to access the service running on port 80 inside the container via port 5341 on your host.
    • datalust/seq:latest: This specifies the Docker image to use for the container. datalust/seq is the image name, and latest is the tag, indicating that you want to use the latest version of this image.

    So, this command runs a container named seq in the background, ensures it restarts unless stopped, sets an environment variable to accept the EULA, maps a host port to a container port, and uses the latest version of the datalust/seq image.

    It’s important to pay attention to the used port: by default, Seq uses port 5341 to interact with the UI and the API. If you prefer to use another port, feel free to do that – just remember that you’ll need some additional configuration.

    Now that Seq is installed on your machine, you can access its UI. Guess what? It’s on localhost:5341!

    Seq brand new instance

    However, Seq is “just” a container for our logs – but we have to produce them.

    A sample ASP.NET API project

    I’ve created a simple API project that exposes CRUD operations for a data model stored in memory (we don’t really care about the details).

    [ApiController]
    [Route("[controller]")]
    public class BooksController : ControllerBase
    {
        // In-memory catalogue used by these examples; its initialization is omitted for brevity.
        private readonly List<Book> booksCatalogue = new();

        public BooksController()
        {
        }
    
        [HttpGet("{id}")]
        public ActionResult<Book> GetBook([FromRoute] int id)
        {
    
            Book? book = booksCatalogue.SingleOrDefault(x => x.Id == id);
            return book switch
            {
                null => NotFound(),
                _ => Ok(book)
            };
        }
    }
    

    As you can see, the details here are not important.

    Even the Main method is the default one:

    var builder = WebApplication.CreateBuilder(args);
    
    builder.Services.AddControllers();
    
    builder.Services.AddEndpointsApiExplorer();
    builder.Services.AddSwaggerGen();
    
    var app = builder.Build();
    
    if (app.Environment.IsDevelopment())
    {
        app.UseSwagger();
        app.UseSwaggerUI();
    }
    
    app.UseHttpsRedirection();
    
    app.MapControllers();
    
    app.Run();
    

    We have the Controllers, we have Swagger… well, nothing fancy.

    Let’s mix it all together.

    How to integrate Seq with an ASP.NET application

    If you want to use Seq in an ASP.NET application (may it be an API application or whatever else), you have to add it to the startup pipeline.

    First, you have to install the proper NuGet package: Seq.Extensions.Logging.

    The Seq.Extensions.Logging NuGet package

    Then, you have to add it to your Services, calling the AddSeq() method:

    var builder = WebApplication.CreateBuilder(args);
    
    builder.Services.AddControllers();
    
    builder.Services.AddEndpointsApiExplorer();
    builder.Services.AddSwaggerGen();
    
    + builder.Services.AddLogging(lb => lb.AddSeq());
    
    var app = builder.Build();
    

    Now, Seq is ready to intercept whatever kind of log arrives at the specified port (remember, in our case, we are using the default one: 5341).
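
    If you exposed Seq on a non-default port, you can point the logger at the right address. If I recall correctly, the AddSeq() extension also accepts the Seq server URL as a parameter (the exact overload and the port 5342 below are assumptions/placeholders to verify, not values from this article):

    // Assumption: AddSeq() accepts the Seq server URL; 5342 is just a placeholder port.
    builder.Services.AddLogging(lb => lb.AddSeq("http://localhost:5342"));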

    We can try it out by adding an ILogger to the BooksController constructor:

    private readonly ILogger<BooksController> _logger;
    
    public BooksController(ILogger<BooksController> logger)
    {
        _logger = logger;
    }
    

    So that we can use the _logger instance to create logs as we want, using the necessary Log Level:

    [HttpGet("{id}")]
    public ActionResult<Book> GetBook([FromRoute] int id)
    {
        _logger.LogInformation("I am Information");
        _logger.LogWarning("I am Warning");
        _logger.LogError("I am Error");
        _logger.LogCritical("I am Critical");
    
        Book? book = booksCatalogue.SingleOrDefault(x => x.Id == id);
        return book switch
        {
            null => NotFound(),
            _ => Ok(book)
        };
    }
    

    Log messages on Seq

    Using Structured Logging with ILogger and Seq

    One of the best things about Seq is that it automatically handles Structured Logging.

    [HttpGet("{id}")]
    public ActionResult<Book> GetBook([FromRoute] int id)
    {
    _logger.LogInformation(
        "Looking if in my collection with {TotalBooksCount} books there is one with ID {SearchedId}",
        booksCatalogue.Count, id);
    
        Book? book = booksCatalogue.SingleOrDefault(x => x.Id == id);
        return book switch
        {
            null => NotFound(),
            _ => Ok(book)
        };
    }
    

    Have a look at this line:

    _logger.LogInformation(
        "Looking if in my collection with {TotalBooksCount} books there is one with ID {SearchedId}",
        booksCatalogue.Count, id);
    

    This line generates a string message, replaces all the placeholders, and, on top of that, creates two properties, SearchedId and TotalBooksCount; you can now define queries using these values.

    Structured Logs in Seq allow you to view additional logging properties

    Further readings

    I have to admit it: logging management is one of my favourite topics.

    I’ve already written a sort of introduction to Seq in the past, but at that time, I did not use the native ILogger, but Serilog, a well-known logging library that added some more functionalities on top of the native logger.

    🔗 Logging with Serilog and Seq | Code4IT

    This article first appeared on Code4IT 🐧

    In particular, Serilog can be useful for propagating Correlation IDs across multiple services so that you can fetch all the logs generated by a specific operation, even though they belong to separate applications.

    🔗 How to log Correlation IDs in .NET APIs with Serilog

    Feel free to search through my blog all the articles related to logging – I’m sure you will find interesting stuff!

    Wrapping up

    I think Seq is the best tool for local development: it’s easy to download and install, supports structured logging, and can be easily added to an ASP.NET application with just a line of code.

    I usually add it to my private projects, especially when the operations I run are complex enough to require some well-structured log.

    Given how easy it is to install, sometimes I use it for my work projects too: when I have to fix a bug but don’t want to use the centralized logging platform (since it’s quite complex to use), I add Seq as a destination sink, run the application, and analyze the logs on my local machine. Then, of course, I remove its reference, as I want it to be just a discardable piece of configuration.

    I hope you enjoyed this article! Let’s keep in touch on LinkedIn, Twitter or BlueSky! 🤜🤛

    Happy coding!

    🐧





    Source link

  • Automating Your DevOps: Writing Scripts that Save Time and Headaches | by Ulas Can Cengiz


    Or, how scripting revolutionized my workflow

    Photo by Stephen Dawson on Unsplash

    Imagine a time when factories were full of life, with gears turning and machines working together. It was a big change, like what’s happening today with computers. In the world of creating and managing software, we’re moving from doing things by hand to letting computers do the work. I’ve seen this change happen, and I can tell you, writing little programs, or “scripts,” is what’s making this change possible.

    Just like factories changed how things were made, these little programs are changing the way we handle software. They’re like a magic trick that turns long, boring tasks into quick and easy ones. In this article, I’m going to show you how these little programs fit into the bigger picture, how they make things better and faster, and the headaches they can take away.

    We’re going to go on a trip together. I’ll show you how things used to be done, talk about the different kinds of little programs and tools we use now, and share some of the tricks I’ve learned. I’ll tell you stories about times when these little programs really made a difference, give you tips, and show you some examples. So, buckle up, and let’s jump into this world where making and managing software is not just a job, but something really special.



    Source link