Blog

  • Motion Highlights #8 | Codrops



    The New Collective

    🎨✨💻 Stay ahead of the curve with handpicked, high-quality frontend development and design news, picked freshly every single day. No fluff, no filler—just the most relevant insights, inspiring reads, and updates to keep you in the know.

    Prefer a weekly digest in your inbox? No problem, we got you covered. Just subscribe here.




  • 5 Benefits of Generative AI in XDR: Revolutionizing Cybersecurity



    Generative Artificial Intelligence (GenAI) is transforming cybersecurity by enhancing Extended Detection and Response (XDR) systems, which integrate data from multiple security layers to provide comprehensive threat detection and response. By leveraging Generative AI, XDR solutions offer advanced capabilities that streamline operations, improve accuracy, and bolster organizational security. Below are five key benefits of integrating GenAI into XDR systems.

    1. Enhanced Threat Detection and Contextual Analysis
      GenAI significantly improves threat detection by analyzing vast datasets across endpoints, networks, and cloud environments. Unlike traditional systems, GenAI-powered XDR can identify complex patterns and anomalies, such as subtle indicators of advanced persistent threats (APTs). By correlating data from multiple sources, it provides contextual insights, enabling security teams to understand the scope and impact of threats more effectively. For instance, GenAI can detect unusual login patterns or malware behavior in real time, reducing false positives and ensuring precise threat identification.
    2. Automation for Faster Incident Response
      GenAI automates repetitive tasks, such as alert triage and incident investigation, allowing security teams to focus on strategic decision-making. By employing machine learning and natural language processing (NLP), GenAI can prioritize alerts based on severity, suggest mitigation steps, and even execute automated responses to contain threats. This reduces response times, minimizes human error, and ensures rapid remediation, which is critical in preventing data breaches or system compromises.
    3. Improved Accessibility for Analysts
      GenAI enables conversational interfaces that allow analysts, regardless of technical expertise, to interact with complex security data using natural language. This eliminates the need for specialized query languages or extensive training, making XDR tools more accessible. Analysts can quickly retrieve incident details, aggregate alerts, or access product documentation without navigating intricate dashboards, thereby reducing the learning curve and improving operational efficiency.
    4. Proactive Threat Intelligence and Prediction
      GenAI enhances XDR’s ability to predict and prevent threats by building behavioural models from historical data. For example, it can establish baseline user activity and flag deviations, such as multiple failed login attempts from unusual IP addresses, as potential account compromises. This predictive capability allows organizations to address vulnerabilities before they are exploited, shifting cybersecurity from reactive to proactive.
    5. Streamlined Reporting and Compliance
      GenAI simplifies the generation of reports and ensures compliance with regulatory requirements. By extracting key metrics and summarizing findings in natural language, it facilitates clear communication with stakeholders. Additionally, GenAI supports near real-time monitoring and audit logging, helping organizations meet compliance standards efficiently while reducing manual effort.

    Seqrite XDR: A Leader in GenAI-Driven XDR

    Seqrite XDR exemplifies the power of GenAI through its Seqrite Intelligent Assistant (SIA), a prompt-based, conversational interface designed to empower analysts. SIA offers 14 pre-defined prompt questions to kickstart investigations, enabling quick access to incident details, summarized analyses, and tailored mitigation steps. Analysts can query alert data without complex syntax, aggregate alerts by severity or MITRE techniques, and identify patterns across incidents. SIA also provides conversational access to Seqrite XDR documentation, security best practices, and step-by-step task guidance. Its contextual follow-up feature allows analysts to drill down into details or clarify technical terms effortlessly. By reducing the learning curve, saving time, and improving accessibility, SIA enables faster incident response and streamlined reporting, making Seqrite XDR an essential tool for modern cybersecurity strategies.

    Connect with us to see Seqrite XDR + SIA in action




  • Getting started with Load testing with K6 on Windows 11 | Code4IT



    Can your system withstand heavy loads? You can answer this question by running Load Tests, for example using K6, a free tool.


    Just a second! 🫷
    If you are here, it means that you are a software developer.
    So, you know that storage, networking, and domain management have a cost.

    If you want to support this blog, please ensure that you have disabled the adblocker for this site.
    I configured Google AdSense to show as few ads as possible – I don’t want to bother you with lots of ads, but I still need some to pay for the resources for my site.

    Thank you for your understanding.
    Davide

    Understanding how your system reacts to incoming network traffic is crucial to determining whether it’s stable, able to meet the expected SLO, and if the underlying infrastructure and architecture are fine.

    How can we simulate many incoming requests? How can we harvest the results of our API calls?

    In this article, we will learn how to use K6 to run load tests and display the final result locally in Windows 11.

    This article will be the foundation of future content, in which I’ll explore more topics related to load testing, performance tips, and more.

    What is Load Testing?

    Load testing simulates real-world usage conditions to ensure the software can handle high traffic without compromising performance or user experience.

    The importance of load testing lies in its ability to identify bottlenecks and weak points in the system that could lead to slow response times, errors, or crashes when under stress.

    By conducting load testing, developers can make necessary optimizations and improvements, ensuring the software is robust, reliable, and scalable. It’s an essential step in delivering a quality product that meets user expectations and maintains business continuity during peak usage times. If you think of it, a system unable to handle the incoming traffic may entirely or partially fail, leading to user dissatisfaction, loss of revenue, and damage to the company’s reputation.

    Ideally, you should plan to have automatic load tests in place in your Continuous Delivery pipelines, or, at least, ensure that you run Load tests in your production environment now and then. You then want to compare the test results with the previous ones to ensure that you haven’t introduced bottlenecks in the last releases.

    The demo project

    For the sake of this article, I created a simple .NET API project: it exposes just one endpoint, /randombook, which returns info about a random book stored in an in-memory Entity Framework DB context.

    int requestCount = 0;
    int concurrentExecutions = 0;
    object _lock = new();
    app.MapGet("/randombook", async (CancellationToken ct) =>
    {
    Book? thisBook = default;
        var delayMs = Random.Shared.Next(10, 10000);
        try
        {
            lock(_lock)
            {
                requestCount++;
                concurrentExecutions++;
                app.Logger.LogInformation("Request {Count}. Concurrent Executions {Executions}. Delay: {DelayMs}ms", requestCount, concurrentExecutions, delayMs);
            }
            using(ApiContext context = new ApiContext())
            {
                await Task.Delay(delayMs);
                if (ct.IsCancellationRequested)
                {
                    app.Logger.LogWarning("Cancellation requested");
                    throw new OperationCanceledException();
                }
                var allbooks = await context.Books.ToArrayAsync(ct);
                thisBook = Random.Shared.GetItems(allbooks, 1).First();
            }
        }
        catch (Exception ex)
        {
            app.Logger.LogError(ex, "An error occurred");
            return Results.Problem(ex.Message);
        }
        finally
        {
            lock(_lock)
            {
                concurrentExecutions--;
            }
        }
        return TypedResults.Ok(thisBook);
    });
    

    There are some details that I want to highlight before moving on with the demo.

    As you can see, I added a random delay to simulate a random RTT (round-trip time) for accessing the database:

    var delayMs = Random.Shared.Next(10, 10000);
    // omit
    await Task.Delay(delayMs);
    

    I then added a thread-safe counter to keep track of the active operations. I increase the value when the request begins, and decrease it when the request completes. The log message is defined in the lock section to avoid concurrency issues.

    lock (_lock)
    {
        requestCount++;
        concurrentExecutions++;
    
        app.Logger.LogInformation("Request {Count}. Concurrent Executions {Executions}. Delay: {DelayMs}ms",
            requestCount,
            concurrentExecutions,
            delayMs
     );
    }
    
    // and then
    
    lock (_lock)
    {
        concurrentExecutions--;
    }
    

    Of course, it’s not a perfect solution: it just fits my need for this article.

    Install and configure K6 on Windows 11

    With K6, you can run the Load Tests by defining the endpoint to call, the number of requests per minute, and some other configurations.

    It’s a free tool, and you can install it using Winget:

    winget install k6 --source winget
    

    You can ensure that you have installed it correctly by opening a Bash (and not a PowerShell) and executing the following command:

    k6 --version

    Note: You can actually use PowerShell, but you have to modify some system keys to make K6 recognizable as a command.

    The --version flag prints the installed version and the ID of the latest Git commit belonging to the installed package. For example, you will see k6.exe v0.50.0 (commit/f18209a5e3, go1.21.8, windows/amd64).

    Now, we can initialize the tool. Open a Bash and run the following command:

    k6 new

    This command generates a script.js file, which you will need to configure in order to set up the Load Testing configurations.

    Here’s the scaffolded file (I removed the comments that refer to parts we are not going to cover in this article):

    import http from "k6/http"
    import { sleep } from "k6"
    
    export const options = {
      // A number specifying the number of VUs to run concurrently.
      vus: 10,
      // A string specifying the total duration of the test run.
      duration: "30s",
    }
    
    export default function () {
      http.get("https://test.k6.io")
      sleep(1)
    }
    

    Let’s analyze the main parts:

    • vus: 10: VUs are the Virtual Users: they simulate the incoming requests that can be executed concurrently.
    • duration: '30s': this value represents the duration of the whole test run;
    • http.get('https://test.k6.io');: it’s the main function. We are going to call the specified endpoint and keep track of the responses, metrics, timings, and so on;
    • sleep(1): it’s the sleep time between each iteration.

    To run it, you need to call:

    k6 run script.js

    Understanding Virtual Users (VUs) in K6

    VUs, Iterations, Sleep time… how do they work together?

    I updated the script.js file to clarify how K6 works, and how it affects the API calls.

    The new version of the file is this:

    import http from "k6/http"
    import { sleep } from "k6"
    
    export const options = {
      vus: 1,
      duration: "30s",
    }
    
    export default function () {
      http.get("https://localhost:7261/randombook")
      sleep(1)
    }
    

    We are saying “Run the load testing for 30 seconds. I want only ONE execution to exist at a time. After each execution, sleep for 1 second”.

    Make sure to run the API project, and then run k6 run script.js.

    Let’s see what happens:

    1. K6 starts, and immediately calls the API.
    2. On the API, we can see the first incoming call. Once the response arrives, K6 sleeps for 1 second, and then sends the next request.

    By having a look at the logs printed from the application, we can see that we had no more than one concurrent request:

    Logs from 1 VU

    From the result screen, we can see that we have run our application for 30 seconds (plus another 30 seconds for graceful-stop) and that the max number of VUs was set to 1.

    Load Tests results with 1 VU

    Here, you can find the same results as plain text, making it easier to follow.

    execution: local
    script: script.js
    output: -
    
    scenarios: (100.00%) 1 scenario, 1 max VUs, 1m0s max duration (incl. graceful stop):
     * default: 1 looping VUs for 30s (gracefulStop: 30s)
    
    
    data_received..................: 2.8 kB 77 B/s
    data_sent......................: 867 B   24 B/s
    http_req_blocked...............: avg=20.62ms   min=0s       med=0s     max=123.77ms p(90)=61.88ms   p(95)=92.83ms
    http_req_connecting............: avg=316.64µs min=0s       med=0s     max=1.89ms   p(90)=949.95µs p(95)=1.42ms
    http_req_duration..............: avg=4.92s     min=125.65ms med=5.37s max=9.27s     p(90)=8.04s     p(95)=8.66s
    { expected_response:true }...: avg=4.92s     min=125.65ms med=5.37s max=9.27s     p(90)=8.04s     p(95)=8.66s
    http_req_failed................: 0.00%   ✓ 0         ✗ 6
    http_req_receiving.............: avg=1.12ms   min=0s       med=0s     max=6.76ms   p(90)=3.38ms   p(95)=5.07ms
    http_req_sending...............: avg=721.55µs min=0s       med=0s     max=4.32ms   p(90)=2.16ms   p(95)=3.24ms
    http_req_tls_handshaking.......: avg=13.52ms   min=0s       med=0s     max=81.12ms   p(90)=40.56ms   p(95)=60.84ms
    http_req_waiting...............: avg=4.92s     min=125.65ms med=5.37s max=9.27s     p(90)=8.03s     p(95)=8.65s
    http_reqs......................: 6       0.167939/s
    iteration_duration.............: avg=5.95s     min=1.13s     med=6.38s max=10.29s   p(90)=9.11s     p(95)=9.7s
    iterations.....................: 6       0.167939/s
    vus............................: 1       min=1       max=1
    vus_max........................: 1       min=1       max=1
    
    
    running (0m35.7s), 0/1 VUs, 6 complete and 0 interrupted iterations
    default ✓ [======================================] 1 VUs   30s
    

    Now, let me run the same script but update the VUs. We are going to run this configuration:

    export const options = {
      vus: 3,
      duration: "30s",
    }
    

    The result is similar, but this time we performed 16 requests instead of 6. That’s because, as you can see, there were up to 3 concurrent users accessing our APIs.

    Logs from 3 VU

    The final duration was still 30 seconds. However, we managed to serve 3x the users without impacting performance and without returning errors.

    Load Tests results with 3 VU

    Customize Load Testing properties

    We have just covered the surface of what K6 can do. Of course, there are many resources in the official K6 documentation, so I won’t repeat everything here.

    There are some parts, though, that I want to showcase here (so that you can deep dive into the ones you need).

    HTTP verbs

    In the previous examples, we used the GET HTTP method. As you can imagine, there are other methods that you can use.

    Each HTTP method has a corresponding JavaScript function. For example, we have:

    • get() for the GET method
    • post() for the POST method
    • put() for the PUT method
    • del() for the DELETE method.
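Building on the list above, here is a sketch of how the other verb functions can be called. The /books endpoint, payload shape, and IDs are hypothetical, only to show the call signatures (http.post and http.put take the body and an optional params object with headers); being a K6 script, it runs only under k6 run:

```javascript
import http from "k6/http"

export default function () {
  // Hypothetical endpoint and payload, only to illustrate the call shapes.
  const payload = JSON.stringify({ title: "My Book", pagesCount: 123 })
  const params = { headers: { "Content-Type": "application/json" } }

  http.post("https://localhost:7261/books", payload, params)   // create
  http.put("https://localhost:7261/books/1", payload, params)  // update
  http.del("https://localhost:7261/books/1")                   // delete
}
```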

    Stages

    You can create stages to define the different parts of the execution:

    export const options = {
      stages: [
        { duration: "30s", target: 20 },
        { duration: "1m30s", target: 10 },
        { duration: "20s", target: 0 },
      ],
    }
    

    With the previous example, I defined three stages:

    1. the first one lasts 30 seconds, and brings the load to 20 VUs;
    2. next, over the following 90 seconds, the number of VUs decreases to 10;
    3. finally, in the last 20 seconds, it slowly shuts down the remaining calls.

    Load Tests results with complex Stages

    As you can see from the result, the total duration was 2m20s (which corresponds to the sum of the stages), and the max number of VUs was 20 (the target defined in the first stage).

    Scenarios

    Scenarios allow you to define the details of how request iterations are executed.

    We always use a scenario, even if we don’t create one: in fact, we use the default scenario that gives us a predetermined time for the gracefulStop value, set to 30 seconds.

    We can define custom scenarios to tweak the different parameters used to define how the test should act.

    A scenario is nothing but a JSON element where you define arguments like duration, VUs, and so on.

    By defining a scenario, you can also decide to run tests on the same endpoint but using different behaviours: you can create a scenario for a gradual growth of users, one for an immediate peak, and so on.
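As a sketch of what custom scenario definitions can look like, here are two hypothetical scenarios against the same endpoint: a gradual ramp-up of users and an immediate peak. The executor names (ramping-vus, constant-vus) come from the K6 scenarios API; the scenario names and numbers are illustrative, and the script runs only under k6 run:

```javascript
import http from "k6/http"

export const options = {
  scenarios: {
    // Gradually grow to 20 VUs, then ramp back down.
    gradual_ramp: {
      executor: "ramping-vus",
      startVUs: 0,
      stages: [
        { duration: "30s", target: 20 },
        { duration: "30s", target: 0 },
      ],
      gracefulStop: "10s", // overrides the default 30s
    },
    // Hit the endpoint with 50 constant VUs right away.
    immediate_peak: {
      executor: "constant-vus",
      vus: 50,
      duration: "30s",
      startTime: "1m", // start after the first scenario has finished
    },
  },
}

export default function () {
  http.get("https://localhost:7261/randombook")
}
```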

    A glimpse at the final report

    Now, we can focus on the meaning of the data returned by the tool.

    Let’s use again the image we saw after running the script with the complex stages:

    Load Tests results with complex Stages

    We can see lots of values whose names are mostly self-explanatory.

    We can see, for example, data_received and data_sent, which tell you the size of the data sent and received.

    We have information about the duration and response of HTTP requests (http_req_duration, http_req_sending, http_reqs), as well as information about the several phases of an HTTP connection, like http_req_tls_handshaking.

    We finally have information about the configurations set in K6, such as iterations, vus, and vus_max.

    You can see the average value, the min and max, and some percentiles for most of the values.

    Wrapping up

    K6 is a nice tool for getting started with load testing.

    You can see more examples in the official documentation. I suggest taking some time to explore all the possibilities provided by K6.

    This article first appeared on Code4IT 🐧

    As I said before, this is just the beginning: in future articles, we will use K6 to understand how some technical choices impact the performance of the whole application.

    I hope you enjoyed this article! Let’s keep in touch on LinkedIn or Twitter! 🤜🤛

    Happy coding!






  • Path.Combine and Path.Join are similar but way different. | Code4IT




    When you need to compose the path to a folder or file location, you can rely on the Path class. It provides several static methods to create, analyze, and modify strings that represent file system paths.

    Path.Join and Path.Combine look similar, yet they have some important differences that you should know to get the result you are expecting.

    Path.Combine: take from the last absolute path

    Path.Combine concatenates several strings into a single string that represents a file path.

    Path.Combine("C:", "users", "davide");
    // C:\users\davide
    

    However, there’s a tricky behaviour: if any argument other than the first contains an absolute path, all the previous parts are discarded, and the returned string starts with the last absolute path:

    Path.Combine("foo", "C:bar", "baz");
    // C:bar\baz
    
    Path.Combine("foo", "C:bar", "baz", "D:we", "ranl");
    // D:we\ranl
    

    Path.Join: take everything

    Path.Join does not try to return an absolute path: it just joins the strings using the OS path separator:

    Path.Join("C:", "users", "davide");
    // C:\users\davide
    

    This means that if there is an absolute path in any argument position, all the previous parts are not discarded:

    Path.Join("foo", "C:bar", "baz");
    // foo\C:bar\baz
    
    Path.Join("foo", "C:bar", "baz", "D:we", "ranl");
    // foo\C:bar\baz\D:we\ranl
    

    Final comparison

    As you can see, the behaviour is slightly different.

    Let’s see a table where we call the two methods using the same input strings:

    Input                                     Path.Combine      Path.Join
    ["singlestring"]                          singlestring      singlestring
    ["foo", "bar", "baz"]                     foo\bar\baz       foo\bar\baz
    ["foo", " bar ", "baz"]                   foo\ bar \baz     foo\ bar \baz
    ["C:", "users", "davide"]                 C:\users\davide   C:\users\davide
    ["foo", " ", "baz"]                       foo\ \baz         foo\ \baz
    ["foo", "C:bar", "baz"]                   C:bar\baz         foo\C:bar\baz
    ["foo", "C:bar", "baz", "D:we", "ranl"]   D:we\ranl         foo\C:bar\baz\D:we\ranl
    ["C:", "/users", "/davide"]               /davide           C:/users/davide
    ["C:", "users/", "/davide"]               /davide           C:\users//davide
    ["C:", "\users", "\davide"]               \davide           C:\users\davide

    Have a look at some specific cases:

    • Neither method handles whitespace-only or empty values: ["foo", " ", "baz"] is transformed into foo\ \baz. Similarly, ["foo", " bar ", "baz"] is combined into foo\ bar \baz, without trimming the leading and trailing whitespace. So, always remove whitespace and empty values!
    • Path.Join handles the case of a part starting with / or \ in a not-so-obvious way: if a part starts with \, it is included in the final path; if it starts with /, you may end up with a duplicated // in the result. This behaviour depends on the path separator used by the OS: in my case, I ran these methods on Windows 11.

    Finally, always remember that the path separator depends on the Operating System that is running the code. Don’t assume that it will always be /: this assumption may be correct for one OS but wrong for another one.

    This article first appeared on Code4IT 🐧

    Wrapping up

    As we have learned, Path.Combine and Path.Join look similar but have profound differences.

    Dealing with path building may look easy, but it hides some complexity. Always remember to:

    • validate and clean your input before using either method (remove empty values, whitespace, and leading or trailing path separators);
    • always write some Unit Tests to cover all the necessary cases;

    I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛

    Happy coding!

    🐧






  • Seeding in-memory Entity Framework with realistic data with Bogus | Code4IT



    You don’t need a physical database to experiment with ORMs. You can use an in-memory DB and seed the database with realistic data generated with Bogus.


    Sometimes, you want to experiment with some features or create a demo project, but you don’t want to instantiate a real database instance.

    Also, you might want to use some realistic data – not just “test1”, 123, and so on. These values are easy to set but not very practical when demonstrating functionalities.

    In this article, we’re going to solve this problem by using Bogus and Entity Framework: you will learn how to generate realistic data and how to store them in an in-memory database.

    Bogus, a C# library for generating realistic data

    Bogus is a popular library for generating realistic data for your tests. It allows you to choose the category of dummy data that best suits your needs.

    It all starts by installing Bogus via NuGet by running Install-Package Bogus.

    From here, you can define the so-called Fakers, whose purpose is to generate dummy instances of your classes by auto-populating their fields.

    Let’s see a simple example. We have this POCO class named Book:

    public class Book
    {
        public Guid Id { get; set; }
        public string Title { get; set; }
        public int PagesCount { get; set; }
        public Genre[] Genres { get; set; }
        public DateOnly PublicationDate { get; set; }
        public string AuthorFirstName { get; set; }
        public string AuthorLastName { get; set; }
    }
    
    public enum Genre
    {
        Thriller, Fantasy, Romance, Biography
    }
    

    Note: for the sake of simplicity, I used a dumb approach: author’s first and last name are part of the Book info itself, and the Genres property is treated as an array of enums and not as a flagged enum.

    From here, we can start creating our Faker by specifying the referenced type:

    Faker<Book> bookFaker = new Faker<Book>();
    

    We can add one or more RuleFor methods to create rules used to generate each property.

    The simplest approach is to use the overload where the first parameter is an expression pointing to the property to be populated, and the second is a function that calls the methods provided by Bogus to create dummy data.

    Think of it as this pseudocode:

    faker.RuleFor(sm => sm.SomeProperty, f => f.SomeKindOfGenerator.GenerateSomething());
    

    Another approach is to pass as the first argument the name of the property like this:

    faker.RuleFor("myName", f => f.SomeKindOfGenerator.GenerateSomething());
    

    A third approach is to define a generator for a specific type, saying “every time you’re trying to map a property with this type, use this generator”:

    bookFaker.RuleForType(typeof(DateOnly), f => f.Date.PastDateOnly());
    

    Let’s dive deeper into Bogus, generating data for common types.

    Generate random IDs with Bogus

    We can generate random GUIDs like this:

    bookFaker.RuleFor(b => b.Id, f => f.Random.Guid());
    

    In a similar way, you can generate a UUID by calling f.Random.Uuid().

    Generate random text with Bogus

    We can generate random text following the Lorem Ipsum structure, picking a single word or a longer text.

    Using Text, you generate a random text:

    bookFaker.RuleFor(b => b.Title, f => f.Lorem.Text());
    

    However, you can use several other methods to generate text with different lengths, such as Letter, Word, Paragraphs, Sentences, and more.

    Working with Enums with Bogus

    If you have an enum, you can rely again on the Random property of the Faker and get a random subset of the enums like this:

    bookFaker.RuleFor(b => b.Genres, f => f.Random.EnumValues<Genre>(2));
    

    As you can see, I specified the number of random items to use (in this case, 2). If you don’t set it, it will take a random number of items.

    However, the previous method returns an array of elements. If you want to get a single enum, you should use f.Random.Enum<Genre>().

    Generate realistic person data with Bogus

    One of the most exciting features of Bogus is the ability to generate realistic data for common entities, such as a person.

    In particular, you can use the Person property to generate data related to the first name, last name, Gender, UserName, Phone, Website, and much more.

    You can use it this way:

    bookFaker.RuleFor(b => b.AuthorFirstName, f => f.Person.FirstName);
    bookFaker.RuleFor(b => b.AuthorLastName, f => f.Person.LastName);
    

    Generate final class instances with Bogus

    We can generate the actual items now that we’ve defined our rules.

    You just need to call the Generate method; you can also specify the number of items to generate by passing a number as the first parameter:

    List<Book> books = bookFaker.Generate(2);
    

    Suppose you want to generate a random quantity of items. In that case, you can use the GenerateBetween method, specifying the top and bottom limit:

    List<Book> books = bookFaker.GenerateBetween(2, 5);
    

    Wrapping up the Faker example

    Now that we’ve learned how to generate a Faker, we can refactor the code to make it easier to read:

    private List<Book> GenerateBooks(int count)
    {
        Faker<Book> bookFaker = new Faker<Book>()
            .RuleFor(b => b.Id, f => f.Random.Guid())
            .RuleFor(b => b.Title, f => f.Lorem.Text())
            .RuleFor(b => b.Genres, f => f.Random.EnumValues<Genre>())
            .RuleFor(b => b.AuthorFirstName, f => f.Person.FirstName)
            .RuleFor(b => b.AuthorLastName, f => f.Person.LastName)
            .RuleFor(nameof(Book.PagesCount), f => f.Random.Number(100, 800))
            .RuleForType(typeof(DateOnly), f => f.Date.PastDateOnly());
    
        return bookFaker.Generate(count);
    }
    

    If we run it, we can see it generates the following items:

    Bogus-generated data

    Seeding InMemory Entity Framework with dummy data

    Entity Framework is among the most famous ORMs in the .NET ecosystem. Even though it supports many integrations, sometimes you just want to store your items in memory without relying on any specific database implementation.

    Using Entity Framework InMemory provider

    To add this in-memory provider, you must install the Microsoft.EntityFrameworkCore.InMemory NuGet Package.

    Now you can add a new DbContext – which is a sort of container of all the types you store in your database – ensuring that the class inherits from DbContext.

    public class BooksDbContext : DbContext
    {
        public DbSet<Book> Books { get; set; }
    }
    

    You then have to declare the type of database you want to use by overriding the OnConfiguring method:

    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        optionsBuilder.UseInMemoryDatabase("BooksDatabase");
    }
    

    Note: even though it’s an in-memory database, you still need to declare the database name.

    Seeding the database with data generated with Bogus

    You can seed the database using the data generated by Bogus by overriding the OnModelCreating method:

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        base.OnModelCreating(modelBuilder);
    
        var booksFromBogus = BogusBookGenerator.GenerateBooks(15);
    
        modelBuilder.Entity<Book>().HasData(booksFromBogus);
    }
    

    Notice that we first create the items and then, using modelBuilder.Entity<Book>().HasData(booksFromBogus), we set the newly generated items as content for the Books DbSet.

    Consume dummy data generated with EF Core

    To wrap up, here’s the complete implementation of the DbContext:

    public class BooksDbContext : DbContext
    {
        public DbSet<Book> Books { get; set; }
    
        protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
        {
          optionsBuilder.UseInMemoryDatabase("BooksDatabase");
        }
    
        protected override void OnModelCreating(ModelBuilder modelBuilder)
        {
            base.OnModelCreating(modelBuilder);
    
            var booksFromBogus = BogusBookGenerator.GenerateBooks(15);
    
            modelBuilder.Entity<Book>().HasData(booksFromBogus);
        }
    }
    

    We are now ready to instantiate the DbContext, ensure that the Database has been created and seeded with the correct data, and perform the operations needed.

    using var dbContext = new BooksDbContext();
    dbContext.Database.EnsureCreated();
    
    var allBooks = await dbContext.Books.ToListAsync();
    
    var thrillerBooks = dbContext.Books
            .Where(b => b.Genres.Contains(Genre.Thriller))
            .ToList();
    

    Further readings

    In this blog, we’ve already discussed the Entity Framework. In particular, we used it to perform CRUD operations on a PostgreSQL database.

    🔗 How to perform CRUD operations with Entity Framework Core and PostgreSQL | Code4IT

    This article first appeared on Code4IT 🐧

    I suggest you explore the potentialities of Bogus: there are a lot of functionalities that I didn’t cover in this article, and they may make your tests and experiments meaningful and easier to understand.

    🔗 Bogus repository | GitHub

    Wrapping up

    Bogus is a great library for creating unit and integration tests. However, I find it useful to generate dummy data for several purposes, like creating a stub of a service, populating a UI with realistic data, or trying out other tools and functionalities.

    I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛

    Happy coding!

    🐧





    Source link

  • Mark a class as Sealed to prevent subclasses creation | Code4IT

    Mark a class as Sealed to prevent subclasses creation | Code4IT



    The O in SOLID stands for the Open-closed principle: according to the official definition, “software entities (classes, modules, functions, etc.) should be open for extension, but closed for modification.”

    To extend a class, you usually create a subclass, overriding or implementing methods from the parent class.

    Extend functionalities by using subclasses

    The most common way to extend a class is to mark it as abstract :

    public abstract class MyBaseClass
    {
      public DateOnly Date { get; init; }
      public string Title { get; init; }
    
      public abstract string GetFormattedString();
    
      public virtual string FormatDate() => Date.ToString("yyyy-MM-dd");
    }
    

Then, to extend it, you create a subclass and define the internal implementations of the extension points in the parent class:

    public class ConcreteClass : MyBaseClass
    {
      public override string GetFormattedString() => $"{Title} | {FormatDate()}";
    }
    

    As you know, this is the simplest example: overriding and implementing methods from an abstract class.

    You can override methods from a concrete class:

    public class MyBaseClass2
    {
      public DateOnly Date { get; init; }
      public string Title { get; init; }
    
      public string GetFormattedString() => $"{Title} ( {FormatDate()} )";
    
      public string FormatDate() => Date.ToString("yyyy-MM-dd");
    }
    
    public class ConcreteClass2 : MyBaseClass2
    {
      public new string GetFormattedString() => $"{Title} | {FormatDate()}";
    }
    

Notice that even though there are no abstract methods in the base class, you can replace the content of a method by using the new keyword. Strictly speaking, this is method hiding, not overriding: the new implementation is only used when the member is accessed through the derived type, while calls through a base-class reference still invoke the base implementation.
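A minimal sketch of the difference (using hypothetical BaseGreeter/SubGreeter types, not from the article): an override on a virtual member dispatches on the runtime type, while a new member is bound to the compile-time type of the reference:

```csharp
using System;

public class BaseGreeter
{
    public virtual string GreetVirtual() => "base (virtual)";
    public string GreetPlain() => "base (plain)";
}

public class SubGreeter : BaseGreeter
{
    public override string GreetVirtual() => "sub (override)";
    public new string GreetPlain() => "sub (new)";
}

public static class Demo
{
    public static void Main()
    {
        BaseGreeter g = new SubGreeter();
        Console.WriteLine(g.GreetVirtual()); // "sub (override)": dispatched on the runtime type
        Console.WriteLine(g.GreetPlain());   // "base (plain)": bound to the compile-time type
    }
}
```

This is why method hiding with new is easy to get wrong: the behavior silently depends on the declared type of the variable you call it through.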

    Prevent the creation of subclasses using the sealed keyword

Especially when exposing classes via NuGet, you may want to prevent consumers from creating subclasses and accessing the internal state of the structures you have defined.

    To prevent classes from being extended, you must mark your class as sealed:

    public sealed class MyBaseClass3
    {
      public DateOnly Date { get; init; }
      public string Title { get; init; }
    
      public string GetFormattedString() => $"{Title} ( {FormatDate()} )";
    
      public string FormatDate() => Date.ToString("yyyy-MM-dd");
    }
    
    public class ConcreteClass3 : MyBaseClass3
    {
    }
    

    This way, even if you declare ConcreteClass3 as a subclass of MyBaseClass3, you won’t be able to compile the application:

    Compilation error when trying to extend a sealed class

3 reasons to mark a class as sealed

    Ok, it’s easy to prevent a class from being extended by a subclass. But what are the benefits of having a sealed class?

    Marking a C# class as sealed can be beneficial for several reasons:

    1. Security by design: By marking a class as sealed, you prevent consumers from creating subclasses that can alter or extend critical functionalities of the base class in unintended ways.
    2. Performance improvements: The compiler can optimize sealed classes more effectively because it knows there are no subclasses. This will not bring substantial performance improvements, but it can still help if every nanosecond is important.
3. Explicit design intent: Sealing the class communicates to other developers that the class is not intended to be extended or modified. Consumers accept that they cannot modify or extend it, as it was designed that way on purpose.


    Wrapping up

    I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛

    Happy coding!

    🐧





    Source link

  • Developer Spotlight: Rogier de Boevé

    Developer Spotlight: Rogier de Boevé


    Hi! I’m Rogier de Boevé, an independent creative developer based in Belgium. Over the years, I’ve had the opportunity to collaborate with leading studios and agencies such as Dogstudio, Immersive Garden, North Kingdom, and Reflektor to craft immersive digital experiences for clients ranging from global tech platforms and luxury watchmakers to national broadcasters and iconic consumer brands.

    Following Wildfire

    Following Wildfire showcases an innovative project that uses artificial intelligence (AI) to detect early signs of wildfires by analyzing real-time images shared on social media. As wildfires become more frequent and devastating, tools like this are becoming essential.

    I focused primarily on the storytelling intro animation, which featured five WebGL scenes crafted entirely from particles. The response from the community was incredible as it earned two Webby Awards, a Clio, and was nominated for honors like FWA’s Site of the Year.

    Agency: Reflektor Digital

    Rogier de Boevé Portfolio (2024)

    Unlike client work, where you’re adapting to a brand or working closely with a team, my portfolio was a chance to take full creative control from start to finish. I handled both the design and development, which meant I could follow ideas that matched my artistic vision, without compromise. The result is something that feels fully mine.

    I wrote more about the process in a Codrops case study, where I shared some of the technical choices and visual thinking behind it. But what makes this project special to me is that it represents who I am, not just as a creative developer, but also as a digital designer and visual artist.

    Sprite × Marvel – Hall of Zero Limits

    The Hall of Zero Limits is an immersive, exploratory experience built in partnership with Black Panther: Wakanda Forever, to help creators find inspiration.

    Working with Dogstudio is always exciting and being part of the Sprite × Marvel collaboration even more so. It was easily one of the most prestigious and ambitious projects I’ve worked on to date. We created a virtual hall where users could explore, discover behind-the-scenes stories, and immerse themselves in the world of Wakanda.

    Agency: Dogstudio/DEPT

    The Roger

    Campaign site ‘The Roger’ for On Running. A performance shoe sportswear brand tailored to both casual and athletic needs.

    As a massive sports enthusiast and fan of Roger Federer, it was a privilege to be part of this project. I was commissioned by North Kingdom to develop a range of WebGL components promoting On Running’s new collection, The Roger. These included immersive storytelling scenes and a virtual showroom showcasing the entire collection.

    Agency: North Kingdom

    Philosophy

    Don’t approach a technical difficulty as an insurmountable burden, but as a creative challenge that starts with close collaboration.

Make a point to experiment and research a lot before saying “this can’t be done.” Many times, what seems impossible just needs a creative workaround or a fresh perspective. I’ve learned that sitting down with designers early and hashing out ideas together often results in finding the smartest approach to bring the concepts to life.

    Tools & Tech

    I don’t always get to choose which framework I have to work with, so I try to keep up with most of the popular frameworks as much as possible. When I do have the freedom to build from scratch, I often choose Astro.js because it’s simple, flexible, and customisable without imposing too many conventions or adding unnecessary complexity. If I’m just building a prototype and don’t need any content-driven pages, I use Vite, which is also the frontend tooling that Astro uses.

    Regardless of the framework, I often use open-source libraries that have a big impact on my work.

    • Three.js – JavaScript 3D library.
    • GSAP – Animation toolkit.
    • Theatre.js – Timeline-based motion library.

    Final thoughts

    At the end of the day, it’s not about how fancy the code is or whether you’re using the latest framework. It’s about crafting experiences that spark curiosity, interaction, and leave a lasting impression.

    Thanks for reading and if you ever want to collaborate, feel free to reach out.



    Source link

  • Create PDF in PHP Using FPDF.

    Create PDF in PHP Using FPDF.


In this post, I will explain how to create a PDF file in PHP. To do so, we will use FPDF, an open-source PHP library for generating PDFs on the server side. It has rich features, from adding a PDF page to creating grids and more.

    Example:

<?php
    require('fpdf/fpdf.php');
    $pdf = new FPDF(); 
    $pdf->AddPage();
    $pdf->SetFont('Arial','B',16);
    $pdf->Cell(80,10,'Hello World From FPDF!');
    $pdf->Output('test.pdf','I'); // Send to browser and display
    ?>

Output: the generated PDF is sent to the browser and displays the text “Hello World From FPDF!”.



    Source link

  • Operation Sindoor: Anatomy of a High-Stakes Cyber Siege

    Operation Sindoor: Anatomy of a High-Stakes Cyber Siege


    Overview

Seqrite Labs, India’s largest malware analysis lab, has identified multiple cyber events linked to Operation Sindoor, involving state-sponsored APT activity and coordinated hacktivist operations. Observed tactics included spear phishing, deployment of malicious scripts, website defacements, and unauthorized data leaks. The campaign exhibited a combination of cyber espionage tactics, hacktivist-driven disruptions, and elements of hybrid warfare. It targeted high-value Indian sectors, including defense, government IT infrastructure, healthcare, telecom, and education. Some of the activities were attributed to APT36 and SideCopy, Pakistan-aligned threat groups known for leveraging spoofed domains, malware payloads, and credential harvesting techniques against Indian military and government entities.

    Trigger Point: Initial Access Vector

On April 17, 2025, Indian cyber telemetry started to light up. Across the threat detection landscape, anomalies pointed towards government mail servers and defence infrastructure. Lure files carried names that mimicked urgency and legitimacy:

    These weren’t ordinary files. They were precision-guided attacks—documents laden with macros, shortcuts, and scripts that triggered covert command-and-control (C2) communications and malware deployments. Each lure played on public fear and national tragedy, weaponizing recent headlines like the Pahalgam Terror Attack. Further technical details can be found at :

    Following the initiation of Operation Sindoor on May 7th, a surge in hacktivist activities was observed, including coordinated defacements, data leaks, and disruptive cyber campaigns.

    Activity Timeline Graph – Operation Sindoor

    APT36: Evolution of a Digital Predator

    APT36, long associated with the use of Crimson RAT and social engineering, had evolved. Gone were the older Poseidon loaders—Ares, a modular, evasive malware framework, now formed the new spearhead.

    Tools & File Types:

    • .ppam, .xlam, .lnk, .xlsb, .msi
    • Macros triggering web queries:
      fogomyart[.]com/random.php
    • Payload delivery through spoofed Indian entities:
      zohidsindia[.]com, nationaldefensecollege[.]com, nationaldefencebackup[.]xyz
• Callback C2 IP: 167.86.97[.]58:17854

    APT36 used advanced TTPs during Operation Sindoor for stealthy infection, persistence, and command and control. Initial access was via spear phishing attachments (T1566.001) using malicious file types (.ppam, .xlam, .lnk, .xlsb, .msi). These triggered macros executed web queries (T1059.005) to domains like fogomyart[.]com. Payloads were delivered through spoofed Indian domains such as zohidsindia[.]com and nationaldefensecollege[.]com, with C2 communication via application layer protocols (T1071.001) to 167.86.97[.]58:17854. For execution and persistence, APT36 leveraged LOLBins (T1218), scheduled tasks (T1053.005), UAC bypasses (T1548.002), and obfuscated PowerShell scripts (T1059.001, T1027), enabling prolonged access while evading detection.

    Ares RAT grants full control over the compromised host, offering capabilities such as keylogging, screen capturing, file manipulation, credential theft, and remote command execution—similar to commercial RATs but tailored for stealth and evasion.

    Digital Infrastructure: Domains of Deception

    The operation’s domain arsenal resembled a covert intelligence operation:

    • pahalgamattack[.]com
    • operationsindoor2025[.]in
    • sindoor[.]website
    • sindoor[.]live

    These domains mimicked military and government entities, exploiting user trust and leveraging geo-political narratives for social engineering.

    Hacktivism in Tandem: The Shadow Battalion

    APT36 did not act alone. In parallel, hacktivist collectives coordinated disruptive attacks—DDoS, defacements, and data leaks—across key Indian sectors. Telegram groups synchronized actions under hashtags like #OpIndia, #OperationSindoor, and #PahalgamAttack, as portrayed in the image below.


    A quick timeline recap

    Most Targeted Sectors:

The Operation Sindoor campaign strategically targeted India’s critical sectors, focusing on defense entities such as the MoD, Army, Navy, and DRDO. The hacktivists claimed to have disrupted government IT infrastructure, including NIC and GSTN, with evidence of DDoS attacks and data leaks, while attempting breaches of healthcare institutions such as AIIMS and DRDO hospitals. Telecom giants like Jio and BSNL were probed, alongside multiple state-level educational and government portals, showcasing the breadth and coordination of the cyber offensive.

    Post-Campaign Threat Landscape

    From May 7–10, Seqrite telemetry reported:

    • 650+ confirmed DDoS/defacement events
    • 35+ hacktivist groups involved, 7 newly emerged
    • 26 custom detection signatures deployed across XDR

Detection Signatures:

• BAT.Sidecopy.49534.GC: SideCopy loader script
• LNK.Sidecopy.49535.GC: Macro-enabled shortcut
• MSI.Trojan.49537.GC: MSI-based Trojan dropper
• HTML.Trojan.49539.GC: HTML credential phisher
• Bat.downloader.49517: Download utility for RAT
• Txt.Enc.Sidecopy.49538.GC: Obfuscated loader

    IOCs: Indicators of Compromise

    Malicious Domains:

    • pahalgamattack[.]com
    • sindoor[.]live
    • operationsindoor2025[.]in
    • nationaldefensecollege[.]com
    • fogomyart[.]com/random.php

    Malicious Files:

    Callback IP:

• 167.86.97[.]58:17854 (Crimson RAT C2)

    VPS Traffic Origination:

    • Russia 🇷🇺
    • Germany 🇩🇪
    • Indonesia 🇮🇩
    • Singapore 🇸🇬

    The Mind Map of Chaos: Coordinated Disruption

    The hierarchy of the campaign looked more like a digital alliance than a lone operation:

    Seqrite’s Response

    To counteract the operation, Seqrite Labs deployed:

    • 26 detection rules across product lines
    • YARA signatures and correlation into STIP/MISP
    • XDR-wide alerting for SideCopy and Ares variants
    • Dark web and Telegram monitoring
    • Threat advisory dissemination to Indian entities


    Researcher’s Reflection

    Operation Sindoor revealed the blueprint of modern cyber warfare. It showcased how nation-state actors now collaborate with non-state hacktivists, merging technical intrusion with psychological operations. The evolution of APT36—especially the move from Poseidon to Ares—and the simultaneous hacktivist attacks signal a deliberate convergence of cyber espionage and ideological warfare.

    Instead of isolated malware campaigns, we now face digitally coordinated war games. The tools may change—macros, MSI files, DDoS scripts—but the objectives remain: destabilize, disinform, and disrupt.

    Conclusion

    Operation Sindoor represents a significant escalation in the cyber conflict landscape between India and Pakistan. The campaign, orchestrated by APT36 and allied hacktivist groups, leveraged a blend of advanced malware, spoofed infrastructure, and deceptive social engineering to infiltrate key Indian sectors.

    The strategic targeting of defense, government IT, healthcare, education, and telecom sectors underscores an intent to not just gather intelligence but also disrupt national operations. With the deployment of tools like Ares RAT, attackers gained complete remote access to infected systems—opening the door to surveillance, data theft, and potential sabotage of critical services.

    From an impact standpoint, this operation has:

    • Undermined trust in official digital communication by spoofing credible Indian domains.
    • Increased operational risks for sensitive departments by exposing infrastructure weaknesses.
    • Compromised public perception of cybersecurity readiness in government and defense.
    • Amplified geopolitical tension by using cyber means to project influence and provoke instability.

    The impact of this campaign on national cybersecurity and trust has been significant:

    • Data Exfiltration: Sensitive internal documents, credentials, and user information were exfiltrated from key organizations. This compromises operational security, strategic decision-making, and opens pathways for follow-up intrusions.
    • DDoS Attacks: Targeted denial-of-service attacks disrupted availability of critical government and public-facing services, affecting both internal workflows and citizen access during sensitive geopolitical periods.
    • Website Defacement: Several Indian government and institutional websites were defaced, undermining public confidence and serving as a psychological warfare tactic to project influence and cyber superiority.

    These developments highlight the urgent need for enhanced threat intelligence capabilities, robust incident response frameworks, and strategic public-private collaboration to counter such evolving hybrid threats.



    Source link

  • 6.50 Million Google Clicks! 💰

    6.50 Million Google Clicks! 💰


    Yesterday Online PNG Tools smashed through 6.49M Google clicks and today it’s smashed through 6.50M Google clicks! That’s 10,000 new clicks in a single day – the smash train keeps on rollin’!

    What Are Online PNG Tools?

    Online PNG Tools offers a collection of easy-to-use web apps that help you work with PNG images right in your browser. It’s like a Swiss Army Knife for anything PNG-related. On this site, you can create transparent PNGs, edit icons, clean up logos, crop stamps, change colors of signatures, and customize stickers – there’s a tool for it all. The best part is that you don’t need to install anything or be a graphic designer. All tools are made for regular people who just want to get stuff done with their images. No sign-ups, no downloads – just quick and easy PNG editing tools.

    Who Created Online PNG Tools?

Online PNG Tools were created by me and my team at Browserling. We’ve built simple, browser-based tools that anyone can use without needing to download or install anything. Along with PNG tools, we also work on cross-browser testing to help developers make sure their websites work great in all web browsers. Our mission is to make online tools that are fast, easy to use, and helpful for everyday tasks like editing icons, logos, and signatures.

    Who Uses Online PNG Tools?

    Online PNG Tools and Browserling are used by everyone – from casual users to professionals and even Fortune 100 companies. Casual users often use them to make memes, edit profile pictures, or remove backgrounds. Professionals use them to clean up logos, design icons, or prepare images for websites and apps.

    Smash too and see you tomorrow at 6.51M clicks! 📈

    PS. Use coupon code SMASHLING for a 30% discount on these tools at onlinePNGtools.com/pricing. 💸



    Source link