Tag: Best

  • Is Random.GetItems the best way to get random items in C# 12? | Code4IT



    You have a collection of items. You want to retrieve N elements randomly. Which alternatives do we have?


    One of the most common operations when dealing with collections of items is to retrieve a subset of these elements taken randomly.

    Before .NET 8, the most common way to retrieve random items was to order the collection using a random value and then take the first N items of the now sorted collection.

    From .NET 8 on, we have a new method in the Random class: GetItems.

    So, should we use this method or stick to the previous version? Are there other alternatives?

    For the sake of this article, I created a simple record type, CustomRecord, which just contains two properties.

    public record CustomRecord(int Id, string Name);
    

    I then stored a collection of such elements in an array. This article’s final goal is to find the best way to retrieve a random subset of such items. Spoiler alert: it all depends on your definition of best!
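
    Throughout the article, the snippets reference an Items array and a TotalItemsToBeRetrieved count. As a minimal setup sketch (the sizes here are arbitrary, chosen just for illustration):

    CustomRecord[] Items = Enumerable.Range(0, 100)
        .Select(i => new CustomRecord(i, $"Name {i}"))
        .ToArray();

    int TotalItemsToBeRetrieved = 10; // how many random elements we want back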

    Method #1: get random items with Random.GetItems

    Starting from .NET 8, released in 2023, we now have a new method belonging to the Random class: GetItems.

    There are three overloads:

    public T[] GetItems<T>(T[] choices, int length);
    public T[] GetItems<T>(ReadOnlySpan<T> choices, int length);
    public void GetItems<T>(ReadOnlySpan<T> choices, Span<T> destination);
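
    The third overload is worth a quick look: it writes the random picks into a caller-supplied destination buffer instead of allocating and returning a new array. A minimal sketch, assuming the same Items and TotalItemsToBeRetrieved as above (buffer is just an illustrative name):

    CustomRecord[] buffer = new CustomRecord[TotalItemsToBeRetrieved];
    Random.Shared.GetItems<CustomRecord>(Items, buffer); // fills the destination span in place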
    

    We will focus on the first overload, which accepts an array of items (choices) as input and returns an array of size length.

    We can use it as such:

    CustomRecord[] randomItems = Random.Shared.GetItems(Items, TotalItemsToBeRetrieved);
    

    Simple, neat, efficient. Or is it?

    Method #2: get the first N items from a shuffled copy of the initial array

    Another approach is to shuffle the whole initial array using Random.Shuffle, which takes an array as input and shuffles its items in place.

    Random.Shared.Shuffle(Items);
    CustomRecord[] randomItems = Items.Take(TotalItemsToBeRetrieved).ToArray();
    

    If you need to preserve the initial order of the items, you should create a copy of the initial array and shuffle only the copy. You can do this by using this syntax:

    CustomRecord[] copy = [.. Items];
    

    If you just need some random items and don’t care about the initial array, you can shuffle it without making a copy.

    Once we’ve shuffled the array, we can pick the first N items to get a subset of random elements.
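
    Putting the pieces together, here is a minimal sketch of the copy-preserving variant (it mirrors the WithShuffle benchmark below):

    CustomRecord[] copy = [.. Items];  // collection expression: shallow copy, original order preserved
    Random.Shared.Shuffle(copy);       // shuffle only the copy, in place
    CustomRecord[] randomItems = copy.Take(TotalItemsToBeRetrieved).ToArray();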

    Method #3: order by Guid, then take N elements

    Before .NET 8, one of the most used approaches was to order the whole collection by a random value, usually a newly generated Guid, and then take the first N items.

    var randomItems = Items
        .OrderBy(_ => Guid.NewGuid()) // THIS!
        .Take(TotalItemsToBeRetrieved)
        .ToArray();
    

    This approach works fine but has a disadvantage: it instantiates a new Guid for every item in the collection, which is expensive in terms of memory.

    Method #4: order by Number, then take N elements

    Another pre-.NET 8 approach was to generate a random number to use as a discriminator for ordering the collection, and then, again, take the first N items.

    var randomItems = Items
        .OrderBy(_ => Random.Shared.Next()) // THIS!
        .Take(TotalItemsToBeRetrieved)
        .ToArray();
    

    This approach is slightly better because generating a random integer is way faster than generating a new Guid.

    Benchmarks of the different operations

    It’s time to compare the approaches.

    I used BenchmarkDotNet to generate the reports and ChartBenchmark to represent the results visually.

    Let’s see how I structured the benchmark.

    [MemoryDiagnoser]
    public class RandomItemsBenchmark
    {
        [Params(100, 10_000, 1_000_000)]
        public int Size;
    
        private CustomRecord[] Items;
        private int TotalItemsToBeRetrieved;
        private CustomRecord[] Copy;
    
        [IterationSetup]
        public void Setup()
        {
            var ids = Enumerable.Range(0, Size).ToArray();
            Items = ids.Select(i => new CustomRecord(i, $"Name {i}")).ToArray();
            Copy = [.. Items];
    
            TotalItemsToBeRetrieved = Random.Shared.Next(Size);
        }
    
        [Benchmark(Baseline = true)]
        public void WithRandomGetItems()
        {
            CustomRecord[] randomItems = Random.Shared.GetItems(Items, TotalItemsToBeRetrieved);
            _ = randomItems.Length;
        }
    
        [Benchmark]
        public void WithRandomGuid()
        {
            CustomRecord[] randomItems = Items
                .OrderBy(_ => Guid.NewGuid())
                .Take(TotalItemsToBeRetrieved)
                .ToArray();
    
            _ = randomItems.Length;
        }
    
        [Benchmark]
        public void WithRandomNumber()
        {
            CustomRecord[] randomItems = Items
                .OrderBy(_ => Random.Shared.Next())
                .Take(TotalItemsToBeRetrieved)
                .ToArray();
    
            _ = randomItems.Length;
        }
    
        [Benchmark]
        public void WithShuffle()
        {
            CustomRecord[] copy = [.. Items];
    
            Random.Shared.Shuffle(copy);
            CustomRecord[] randomItems = copy.Take(TotalItemsToBeRetrieved).ToArray();
    
            _ = randomItems.Length;
        }
    
        [Benchmark]
        public void WithShuffleNoCopy()
        {
            Random.Shared.Shuffle(Copy);
            CustomRecord[] randomItems = Copy.Take(TotalItemsToBeRetrieved).ToArray();
    
            _ = randomItems.Length;
        }
    }
    

    We are going to run the benchmarks on arrays with different sizes. We will start with a smaller array with 100 items and move to a bigger one with one million items.

    We generate the initial array of CustomRecord instances for every iteration and store it in the Items field. Then, we randomly choose the number of items to get from the Items array and store it in the TotalItemsToBeRetrieved field.

    We also generate a copy of the initial array at every iteration; this way, we can run Random.Shuffle without modifying the original array.

    Finally, we define the body of the benchmarks using the implementations we saw before.

    Notice: I marked the benchmark for the GetItems method as a baseline, using [Benchmark(Baseline = true)]. This way, we can easily see the results ratio for the other methods compared to this specific method.

    When we run the benchmark, we can see this final result (for simplicity, I removed the Error, StdDev, and Median columns):

    Method              Size      Mean            Ratio  Allocated   Alloc Ratio
    ------------------  --------  --------------  -----  ----------  -----------
    WithRandomGetItems  100       6.442 us        1.00   424 B       1.00
    WithRandomGuid      100       39.481 us       6.64   3576 B      8.43
    WithRandomNumber    100       22.219 us       3.67   2256 B      5.32
    WithShuffle         100       7.038 us        1.16   1464 B      3.45
    WithShuffleNoCopy   100       4.254 us        0.73   624 B       1.47
    WithRandomGetItems  10000     58.401 us       1.00   5152 B      1.00
    WithRandomGuid      10000     2,369.693 us    65.73  305072 B    59.21
    WithRandomNumber    10000     1,828.325 us    56.47  217680 B    42.25
    WithShuffle         10000     180.978 us      4.74   84312 B     16.36
    WithShuffleNoCopy   10000     156.607 us      4.41   3472 B      0.67
    WithRandomGetItems  1000000   15,069.781 us   1.00   4391616 B   1.00
    WithRandomGuid      1000000   319,088.446 us  42.79  29434720 B  6.70
    WithRandomNumber    1000000   166,111.193 us  22.90  21512408 B  4.90
    WithShuffle         1000000   48,533.527 us   6.44   11575304 B  2.64
    WithShuffleNoCopy   1000000   37,166.068 us   4.57   6881080 B   1.57

    By looking at the numbers, we can notice that:

    • GetItems is generally the most performant method for both execution time and memory allocation (only the in-place shuffle without a copy beats it at the smallest size, at the cost of destroying the original order);
    • using Guid.NewGuid is the worst approach: depending on the array size, it's roughly 7 to 66 times slower than GetItems, and it allocates up to about 59 times the memory;
    • sorting by a random number is a bit better, but still roughly 4 to 56 times slower than GetItems, allocating around 5 to 42 times the memory;
    • shuffling the array in place and taking the first N elements is up to about 6 times slower than GetItems; if you also have to preserve the original array, you lose some of the memory advantage because you must allocate the cloned copy.

    Here’s the chart with the performance values. Notice that, for better readability, I used a Log10 scale.

    Results comparison for all executions

    If we move our focus to the array with one million items, we can better understand the impact of choosing one approach over another. Notice that here I used a linear scale, since the values are of the same order of magnitude.

    The purple line represents the memory allocation in bytes.

    Results comparison for one-million-items array

    So, should we use GetItems all over the place? Well, no! Let me tell you why.

    The problem with Random.GetItems: repeated elements

    There's a huge problem with the GetItems method: it samples with replacement, so the returned array can contain duplicate items. If you need N items without duplicates, GetItems is not the right choice.

    Here’s how you can demonstrate it.

    First, create an array of 100 distinct items. Then, using Random.Shared.GetItems, retrieve 100 items.

    The final array will have 100 items; the array may or may not contain duplicates.

    int[] source = Enumerable.Range(0, 100).ToArray();
    
    StringBuilder sb = new StringBuilder();
    
    for (int i = 1; i <= 200; i++)
    {
        HashSet<int> ints = Random.Shared.GetItems(source, 100).ToHashSet();
        sb.AppendLine($"run-{i}, {ints.Count}");
    }
    
    var finalCsv = sb.ToString();
    

    To check the number of distinct elements, I put the resulting array in a HashSet<int>. Since each run draws 100 items, the final size of the HashSet directly gives us the percentage of unique values.

    If the HashSet size is exactly 100, it means that GetItems retrieved each element from the original array exactly once.

    For simplicity, I formatted the result in CSV format so that I could generate plots with it.

    Unique values percentage returned by GetItems

    As you can see, on average, we get about 65% unique items and 35% duplicates. This matches the theory: when you sample 100 times with replacement from 100 items, the expected share of distinct items is 1 - (99/100)^100, which is roughly 63%.

    Further readings

    I used the Enumerable.Range method to generate the initial items.

    I wrote an article explaining how to use it, what to watch out for when using it, and more.

    🔗 LINQ’s Enumerable.Range to generate a sequence of consecutive numbers | Code4IT

    This article first appeared on Code4IT 🐧

    Wrapping up

    You should not blindly replace your current way of getting random items with Random.GetItems. Well, unless you are okay with having duplicates.

    If you need unique values, you should rely on other methods, such as Random.Shuffle.
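
    If a full shuffle feels wasteful when N is small, a partial Fisher-Yates shuffle stops after fixing the first N positions and still yields unique, uniformly chosen items. This is a sketch of the idea, not something from the article; TakeRandomUnique is a hypothetical helper name, and it assumes count <= source.Length:

    static T[] TakeRandomUnique<T>(T[] source, int count)
    {
        T[] pool = [.. source]; // copy, so the source keeps its original order
        for (int i = 0; i < count; i++)
        {
            // pick a random index in the not-yet-fixed tail and swap it into slot i
            int j = Random.Shared.Next(i, pool.Length);
            (pool[i], pool[j]) = (pool[j], pool[i]);
        }
        return pool[..count]; // the first count slots are a uniform sample without duplicates
    }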

    All in all, always remember to validate your assumptions by running experiments on the methods you are not sure you can trust!

    I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛

    Happy coding!

    🐧






  • Top 10 PHP Security Best Practices.


    In today’s digital landscape, security is a paramount concern for developers and users alike. With the increasing sophistication of cyber threats, ensuring the security of web applications is more critical than ever. PHP, being one of the most widely used server-side scripting languages, powers millions of websites and applications. However, its popularity also makes it a prime target for attackers.

    As a PHP developer, it is your responsibility to safeguard your applications and user data from potential threats. Whether you’re building a small personal project or a large-scale enterprise application, adhering to security best practices is essential. In this blog post, we will delve into the top PHP security best practices every developer should follow. From input validation and sanitization to secure session management and error handling, we’ll cover practical strategies to fortify your PHP applications against common vulnerabilities.

    Join us as we explore these crucial practices, providing you with actionable insights and code snippets to enhance the security of your PHP projects. By the end of this post, you'll have a solid understanding of how to implement these best practices, ensuring your applications are robust, secure, and resilient against potential attacks. Let's get started on the path to mastering PHP security!

    Here are some top PHP security best practices for developers:

    1. Input Validation and Sanitization
    • Validate Input: Always validate and sanitize all user inputs to prevent attacks such as SQL injection, XSS, and CSRF.
    • Use Built-in Functions: Use PHP functions like filter_var() to validate data, and htmlspecialchars() or htmlentities() to sanitize output.
    2. Use Prepared Statements
    • SQL Injection Prevention: Always use prepared statements and parameterized queries with PDO or MySQLi to prevent SQL injection attacks.
    $stmt = $pdo->prepare('SELECT * FROM users WHERE email = :email');
    $stmt->execute(['email' => $email]);
    3. Cross-Site Scripting (XSS) Prevention
    • Escape Output: Escape all user-generated content before outputting it to the browser using htmlspecialchars().
    • Content Security Policy (CSP): Implement CSP headers to prevent the execution of malicious scripts.
    4. Cross-Site Request Forgery (CSRF) Protection
    • Use CSRF Tokens: Include a unique token in each form submission and validate it on the server side.
    // Generating a CSRF token
    $_SESSION['csrf_token'] = bin2hex(random_bytes(32));
    
    // Including the token in a form
    echo '<input type="hidden" name="csrf_token" value="' . $_SESSION['csrf_token'] . '">';
    
    5. Session Management
    • Secure Cookies: Use the Secure and HttpOnly flags for cookies so they are sent only over HTTPS and cannot be read by JavaScript, mitigating cookie theft via XSS.
    session_set_cookie_params([
        'lifetime' => 0,
        'path' => '/',
        'domain' => '',
        'secure' => true,      // Only send cookies over HTTPS
        'httponly' => true,    // Prevent access via JavaScript
        'samesite' => 'Strict' // Prevent CSRF
    ]);
    session_start();
    • Regenerate Session IDs: Regenerate session IDs frequently, particularly after login, to prevent session fixation.
    session_regenerate_id(true);
    6. Error Handling and Reporting
    • Disable Error Display: Do not display errors in production. Log errors to a file instead.
    ini_set('display_errors', 0);
    ini_set('log_errors', 1);
    ini_set('error_log', '/path/to/error.log');
    7. Secure File Handling
    • File Uploads: Validate and sanitize file uploads. Restrict file types and ensure proper permissions are set on uploaded files.
    $allowed_types = ['image/jpeg', 'image/png'];
    // Don't trust $_FILES['file']['type'] (client-supplied); check the real MIME type
    $mime = (new finfo(FILEINFO_MIME_TYPE))->file($_FILES['file']['tmp_name']);
    if (!in_array($mime, $allowed_types)) {
        die('File type not allowed');
    }
    8. Secure Configuration
    • Use HTTPS: Always use HTTPS to encrypt data transmitted between the client and server.
    • Secure Configuration Files: Restrict access to configuration files. Store sensitive information like database credentials securely.
    9. Keep Software Updated
    • Update PHP and Libraries: Regularly update PHP, frameworks, and libraries to the latest versions to patch security vulnerabilities.
    10. Use Security Headers
    • Set Security Headers: Use headers like X-Content-Type-Options, X-Frame-Options, X-XSS-Protection, and Strict-Transport-Security to enhance security.
    header('X-Content-Type-Options: nosniff');
    header('X-Frame-Options: SAMEORIGIN');
    header('X-XSS-Protection: 1; mode=block');
    header('Strict-Transport-Security: max-age=31536000; includeSubDomains');


    By following these best practices, PHP developers can significantly enhance the security of their applications and protect against common vulnerabilities and attacks.





  • Zero Trust Best Practices for Enterprises and Businesses


    Cybersecurity threats are becoming more sophisticated and frequent in today’s digital landscape. Whether a large enterprise or a growing small business, organizations must pivot from traditional perimeter-based security models to a more modern, robust approach—Zero Trust Security. At its core, Zero Trust operates on a simple yet powerful principle: never trust, always verify.

    Implementing Zero Trust is not a one-size-fits-all approach. It requires careful planning, integration of the right technologies, and ongoing management. Here are some key zero trust best practices to help both enterprises and small businesses establish a strong zero-trust foundation:

    1. Leverage IAM and AD Integrations

    A successful Zero-Trust strategy begins with Identity and Access Management (IAM). Integrating IAM solutions with Active Directory (AD) or other identity providers helps centralize user authentication and enforce policies more effectively. These integrations allow for a unified view of user roles, permissions, and access patterns, essential for controlling who gets access to what and when.

    IAM and AD integrations also enable seamless single sign-on (SSO) capabilities, improving user experience while ensuring access control policies are consistently applied across your environment.

    If your organization does not have an identity provider (IdP) or AD, choose a Zero Trust solution with a user management feature for local users.

    2. Ensure Zero Trust for Both On-Prem and Remote Users

    Gone are the days when security could rely solely on protecting the corporate network perimeter. With the rise of hybrid work models, extending zero-trust principles beyond traditional office setups is critical. This means ensuring that both on-premises and remote users are subject to the same authentication, authorization, and continuous monitoring processes.

    Cloud-native Zero Trust Network Access (ZTNA) solutions help enforce consistent policies across all users, regardless of location or device. This is especially important for businesses with distributed teams or those who rely on contractors and third-party vendors.

    3. Implement MFA for All Users for Enhanced Security

    Multi-factor authentication (MFA) is one of the most effective ways to protect user identities and prevent unauthorized access. By requiring at least two forms of verification, such as a password and a one-time code sent to a mobile device, MFA dramatically reduces the risk of credential theft and phishing attacks.

    MFA should be mandatory for all users, including privileged administrators and third-party collaborators. It’s a low-hanging fruit that can yield high-security dividends for organizations of all sizes.

    4. Ensure Proper Device Posture Rules

    Zero Trust doesn’t stop at verifying users—it must also verify their devices’ health and security posture. Whether it’s a company-issued laptop or a personal mobile phone, devices should meet specific security criteria before being granted access to corporate resources.

    This includes checking for up-to-date antivirus software, secure OS configurations, and encryption settings. By enforcing device posture rules, businesses can reduce the attack surface and prevent compromised endpoints from becoming a gateway to sensitive data.

    5. Adopt Role-Based Access Control

    Access should always be granted on a need-to-know basis. Implementing Role-Based Access Control (RBAC) ensures that users only have access to the data and applications required to perform their job functions, nothing more, nothing less.

    This minimizes the risk of internal threats and lateral movement within the network in case of a breach. For small businesses, RBAC also helps simplify user management and audit processes, particularly when roles are clearly defined and policies are enforced consistently.

    6. Regularly Review and Update Policies

    Zero Trust is not a one-time setup; it's a continuous process. As businesses evolve, so do user roles, devices, applications, and threat landscapes. That's why it's essential to review and update your security policies regularly.

    Conduct periodic audits to identify outdated permissions, inactive accounts, and policy misconfigurations. Use analytics and monitoring tools to assess real-time risk levels and fine-tune access controls accordingly. This iterative approach ensures that your Zero Trust architecture remains agile and responsive to emerging threats.

    Final Thoughts

    Zero Trust is more than just a buzzword; it's a strategic shift that aligns security with modern business realities. Adopting these zero trust best practices can help you build a more resilient and secure IT environment, whether you are a large enterprise or a small business.

    By focusing on identity, device security, access control, and continuous policy refinement, organizations can reduce risk exposure and stay ahead of today’s ever-evolving cyber threats.

    Ready to take the next step in your Zero Trust journey? Start with what you have, plan for what you need, and adopt a security-first mindset across your organization.

    Embrace the Seqrite Zero Trust Access Solution and create a secure and resilient environment for your organization’s digital assets. Contact us today.



