
  • Why Zero‑Trust Access Is the Future of Secure Networking



    ZTNA vs VPN is a critical comparison in today’s hyperconnected world, where remote workforces, cloud-driven data flows, and ever-evolving threats make securing enterprise network access more complex than ever. Traditional tools like Virtual Private Networks (VPNs), which once stood as the gold standard of secure connectivity, are now showing their age. Enter Zero Trust Network Access (ZTNA): a modern, identity-centric approach that is rapidly replacing VPNs in forward-thinking organizations.

    The Rise and Fall of VPNs

    VPNs have long been trusted to provide secure remote access by creating an encrypted “tunnel” between the user and the corporate network. While VPNs are still widely used, they operate on a fundamental assumption: once a user is inside the network, that user can be trusted. This “castle and moat” model may have worked in the past, but in today’s threat landscape, it creates glaring vulnerabilities:

    • Over-privileged access: VPNs often grant users broad network access, increasing the risk of lateral movement by malicious actors.
    • Lack of visibility: VPNs provide limited user activity monitoring once access is granted.
    • Poor scalability: As remote workforces grow, VPNs become performance bottlenecks, especially under heavy loads.
    • Susceptibility to credential theft: VPNs rely heavily on usernames and passwords, which can be stolen or reused in credential stuffing attacks.

    What is Zero Trust Network Access (ZTNA)?

    ZTNA redefines secure access by flipping the trust model. It’s based on the principle of “never trust, always verify.” Instead of granting blanket access to the entire network, ZTNA enforces granular, identity-based access controls. Access is provided only after the user, device, and context are continuously verified.

    ZTNA architecture typically operates through a broker that evaluates user identity, device posture, and other contextual factors before granting access to a specific application, not the entire network. This minimizes exposure and helps prevent the lateral movement of threats.

    ZTNA vs VPN: The Key Differences

    Why ZTNA is the Future

    1. Security for the Cloud Era: ZTNA is designed for modern environments—cloud, hybrid, or multi-cloud. It secures access across on-prem and SaaS apps without the complexity of legacy infrastructure.
    2. Adaptive Access Controls: Access isn’t just based on credentials. ZTNA assesses user behavior, device health, location, and risk level in real time to dynamically permit or deny access.
    3. Enhanced User Experience: Unlike VPNs that slow down application performance, ZTNA delivers faster, direct-to-app connectivity, reducing latency and improving productivity.
    4. Minimized Attack Surface: Because ZTNA only exposes what’s necessary and hides the rest, the enterprise’s digital footprint becomes nearly invisible to attackers.
    5. Better Compliance & Visibility: With robust logging, analytics, and policy enforcement, ZTNA helps organizations meet compliance standards and gain detailed insights into access behaviors.

     Transitioning from VPN to ZTNA

    While ZTNA vs VPN continues to be a key consideration for IT leaders, it’s clear that Zero Trust offers a more future-ready approach. Although VPNs still serve specific legacy use cases, organizations aiming to modernize should begin their transition from VPN to ZTNA now. Adopting a phased, hybrid model enables businesses to secure critical applications with ZTNA while still leveraging VPN access for systems that require it.

    The key is to evaluate access needs, identify high-risk entry points, and prioritize business-critical applications for ZTNA implementation. Over time, enterprises can reduce their dependency on VPNs and shift toward a more resilient, Zero Trust architecture.

    Ready to Take the First Step Toward Zero Trust?

    Explore how Seqrite ZTNA enables secure, seamless, and scalable access for the modern workforce. Make the shift from outdated VPNs to a future-ready security model today.




  • Friday the 13th Sale (75% OFF!) 👻



    At Browserling and Online Tools we love sales.

    We just created a new automated Friday the 13th Sale.

    Now on all Fridays the 13th, we show a 75% discount offer to all users who visit our site.

    This year it runs on Jun 13th, next year on Feb 13th, etc.

    How did we find the dates of all Fridays the 13th?

    We used our Friday the 13th Finder tool!
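
    Under the hood, the calculation is simple: check the 13th of every month and keep the dates that fall on a Friday. Here’s a minimal sketch of that idea in C# (purely illustrative; it’s not the Finder tool’s actual code):

    // Illustrative sketch: find every Friday the 13th in the next three years
    // by checking the 13th of each month. Not the Finder tool's actual code.
    var start = new DateTime(2025, 6, 1);
    var end = start.AddYears(3);

    for (var month = start; month <= end; month = month.AddMonths(1))
    {
        var thirteenth = new DateTime(month.Year, month.Month, 13);
        if (thirteenth.DayOfWeek == DayOfWeek.Friday)
            Console.WriteLine(thirteenth.ToString("MMM d, yyyy")); // e.g. Jun 13, 2025
    }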

    Buy a Sub Now!

    What Is Browserling?

    Browserling is an online service that lets you test how other websites look and work in different web browsers, like Chrome, Firefox, or Safari, without needing to install them. It runs real browsers on real machines and streams them to your screen, kind of like remote desktop but focused on browsers. This helps web developers and regular users check for bugs, suspicious links, and weird stuff that happens in certain browsers. You just go to Browserling, pick a browser and version, and then enter the site you want to test. It’s quick, easy, and works from your browser with no downloads or installs.

    What Are Online Tools?

    Online Tools is an online service that offers free, browser-based productivity tools for everyday tasks like editing text, converting files, editing images, working with code, and way more. It’s an all-in-one Digital Swiss Army Knife with 1500+ utilities, so you can find the exact tool you need without installing anything. Just open the site, use what you need, and get things done fast.

    Who Uses Browserling and Online Tools?

    Browserling and Online Tools are used by millions of regular internet users, developers, designers, students, and even Fortune 100 companies. Browserling is handy for testing websites in different browsers without having to install them. Online Tools are used for simple tasks like resizing or converting images, or even fixing small file problems quickly without downloading any apps.

    Buy a subscription now and see you next time!




  • C# Tip: ObservableCollection – a data type to intercept changes to the collection | Code4IT




    Imagine you need a way to raise events whenever an item is added or removed from a collection.

    Instead of building a new class from scratch, you can use ObservableCollection<T> to store items, raise events, and act when the internal state of the collection changes.

    In this article, we will learn how to use ObservableCollection<T>, an out-of-the-box collection available in .NET.

    Introducing the ObservableCollection type

    ObservableCollection<T> is a generic collection coming from the System.Collections.ObjectModel namespace.

    It allows the most common operations, such as Add(T item) and Remove(T item), as you can expect from most of the collections in .NET.

    Moreover, it implements two interfaces:

    • INotifyCollectionChanged can be used to raise events when the internal collection is changed.
    • INotifyPropertyChanged can be used to raise events when one of the properties of the collection changes.

    Let’s see a simple example of the usage:

    var collection = new ObservableCollection<string>();
    
    collection.Add("Mario");
    collection.Add("Luigi");
    collection.Add("Peach");
    collection.Add("Bowser");
    
    collection.Remove("Luigi");
    
    collection.Add("Waluigi");
    
    _ = collection.Contains("Peach");
    
    collection.Move(1, 2);
    

    As you can see, we can do all the basic operations: add, remove, swap items (with the Move method), and check if the collection contains a specific value.

    You can simplify the initialization by passing a collection in the constructor:

     var collection = new ObservableCollection<string>(new string[] { "Mario", "Luigi", "Peach" });
    
     collection.Add("Bowser");
    
     collection.Remove("Luigi");
    
     collection.Add("Waluigi");
    
     _ = collection.Contains("Peach");
    
     collection.Move(1, 2);
    

    How to intercept changes to the underlying collection

    As we said, this data type implements INotifyCollectionChanged. Thanks to this interface, we can add event handlers to the CollectionChanged event and see what happens.

    var collection = new ObservableCollection<string>(new string[] { "Mario", "Luigi", "Peach" });
    collection.CollectionChanged += WhenCollectionChanges;
    
    Console.WriteLine("Adding Bowser...");
    collection.Add("Bowser");
    Console.WriteLine("");
    
    
    Console.WriteLine("Removing Luigi...");
    collection.Remove("Luigi");
    Console.WriteLine("");
    
    Console.WriteLine("Adding Waluigi...");
    collection.Add("Waluigi");
    Console.WriteLine("");
    
    Console.WriteLine("Searching for Peach...");
    var containsPeach = collection.Contains("Peach");
    Console.WriteLine("");
    
    Console.WriteLine("Swapping items...");
    collection.Move(1, 2);
    

    The WhenCollectionChanges method accepts a NotifyCollectionChangedEventArgs that gives you info about the intercepted changes:

    private void WhenCollectionChanges(object? sender, NotifyCollectionChangedEventArgs e)
    {
        var allItems = ((IEnumerable<object>)sender)?.Cast<string>().ToArray() ?? new string[] { "<empty>" };
        Console.WriteLine($"> Currently, the collection is {string.Join(',', allItems)}");
    
        Console.WriteLine($"> The operation is {e.Action}");
    
        var previousItems = e.OldItems?.Cast<string>()?.ToArray() ?? new string[] { "<empty>" };
        Console.WriteLine($"> Before the operation it was {string.Join(',', previousItems)}");
    
    
        var currentItems = e.NewItems?.Cast<string>()?.ToArray() ?? new string[] { "<empty>" };
        Console.WriteLine($"> Now, it is {string.Join(',', currentItems)}");
    }
    

    Every time an operation occurs, we write some logs.

    The result is:

    Adding Bowser...
    > Currently, the collection is Mario,Luigi,Peach,Bowser
    > The operation is Add
    > Before the operation it was <empty>
    > Now, it is Bowser
    
    Removing Luigi...
    > Currently, the collection is Mario,Peach,Bowser
    > The operation is Remove
    > Before the operation it was Luigi
    > Now, it is <empty>
    
    Adding Waluigi...
    > Currently, the collection is Mario,Peach,Bowser,Waluigi
    > The operation is Add
    > Before the operation it was <empty>
    > Now, it is Waluigi
    
    Searching for Peach...
    
    Swapping items...
    > Currently, the collection is Mario,Bowser,Peach,Waluigi
    > The operation is Move
    > Before the operation it was Peach
    > Now, it is Peach
    

    Notice a few points:

    • the sender parameter holds the current items in the collection. It’s an object?, so you have to cast it to another type to use it.
    • the NotifyCollectionChangedEventArgs has different meanings depending on the operation:
      • when adding a value, OldItems is null and NewItems contains the items added during the operation;
      • when removing an item, OldItems contains the value just removed, and NewItems is null.
      • when swapping two items, both OldItems and NewItems contain the item you are moving.

    How to intercept when a collection property has changed

    To run custom logic when a property changes, we need to add a delegate to the PropertyChanged event. However, it’s not available directly on the ObservableCollection type: you first have to cast the collection to an INotifyPropertyChanged:

    var collection = new ObservableCollection<string>(new string[] { "Mario", "Luigi", "Peach" });
    (collection as INotifyPropertyChanged).PropertyChanged += WhenPropertyChanges;
    
    Console.WriteLine("Adding Bowser...");
    collection.Add("Bowser");
    Console.WriteLine("");
    
    
    Console.WriteLine("Removing Luigi...");
    collection.Remove("Luigi");
    Console.WriteLine("");
    
    Console.WriteLine("Adding Waluigi...");
    collection.Add("Waluigi");
    Console.WriteLine("");
    
    Console.WriteLine("Searching for Peach...");
    var containsPeach = collection.Contains("Peach");
    Console.WriteLine("");
    
    Console.WriteLine("Swapping items...");
    collection.Move(1, 2);
    

    We can now specify the WhenPropertyChanges method as such:

    private void WhenPropertyChanges(object? sender, PropertyChangedEventArgs e)
    {
        var allItems = ((IEnumerable<object>)sender)?.Cast<string>().ToArray() ?? new string[] { "<empty>" };
        Console.WriteLine($"> Currently, the collection is {string.Join(',', allItems)}");
        Console.WriteLine($"> Property {e.PropertyName} has changed");
    }
    

    As you can see, we again have the sender parameter that contains the collection of items.

    Then, we have a parameter of type PropertyChangedEventArgs that we can use to get the name of the property that has changed, using the PropertyName property.

    Let’s run it.

    Adding Bowser...
    > Currently, the collection is Mario,Luigi,Peach,Bowser
    > Property Count has changed
    > Currently, the collection is Mario,Luigi,Peach,Bowser
    > Property Item[] has changed
    
    Removing Luigi...
    > Currently, the collection is Mario,Peach,Bowser
    > Property Count has changed
    > Currently, the collection is Mario,Peach,Bowser
    > Property Item[] has changed
    
    Adding Waluigi...
    > Currently, the collection is Mario,Peach,Bowser,Waluigi
    > Property Count has changed
    > Currently, the collection is Mario,Peach,Bowser,Waluigi
    > Property Item[] has changed
    
    Searching for Peach...
    
    Swapping items...
    > Currently, the collection is Mario,Bowser,Peach,Waluigi
    > Property Item[] has changed
    

    As you can see, for every add/remove operation, two events are raised: one to say that the Count has changed, and one to say that the internal Item[] has changed.

    However, notice what happens in the Swapping section: since you just change the order of the items, the Count property does not change.

    This article first appeared on Code4IT 🐧

    Final words

    As you probably noticed, events are fired only for operations performed after the collection has been initialized: the items passed in the constructor are considered the initial state, and all the subsequent operations that mutate the state can raise events.

    Also, notice that events are fired only when the collection itself changes, not when a property of a stored item changes. If the collection holds more complex classes, like:

    public class User
    {
        public string Name { get; set; }
    }
    

    No event is fired if you change the value of the Name property of an object already part of the collection:

    var me = new User { Name = "Davide" };
    var collection = new ObservableCollection<User>(new User[] { me });
    
    collection.CollectionChanged += WhenCollectionChanges;
    (collection as INotifyPropertyChanged).PropertyChanged += WhenPropertyChanges;
    
    me.Name = "Updated"; // It does not fire any event!
    

    Notice that ObservableCollection<T> is not thread-safe! You can find an interesting article by Gérald Barré (aka Meziantou) where he explains a thread-safe version of ObservableCollection<T> he created. Check it out!

    As always, I suggest exploring the language and toying with the parameters, properties, data types, etc.

    You’ll find lots of exciting things that may come in handy.

    I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛

    Happy coding!

    🐧






  • Trapped by a Call: Understanding the Digital Arrest Scam



    Digital Arrest Scam:

    It all starts with a phone call that seems routine at first—measured, official-sounding, and unexpectedly serious. On the other end is someone claiming to represent a government body, calmly accusing you of crimes you’ve never committed—drug trafficking, money laundering, or something just as alarming. They mention your name, address, and other personal details with unnerving accuracy, making the whole exchange feel disturbingly real. This is the unsettling face of what is now known as the “digital arrest” scam—a new and fast-evolving form of cyber fraud that feeds on fear and misplaced trust.

     

    Data Acquisition:

    Scammers are capitalizing on massive data breaches to obtain IMSI numbers and SIM details, leveraging Home Location Register data to create meticulously crafted victim profiles. This precise profiling lends digital arrest scams an air of authenticity, making them disturbingly plausible. The April 2024 BoAt breach exposed a staggering 7.5 million customers’ data, including names and contact information, which subsequently surfaced on the dark web. Similarly, Hathway’s system was compromised via a Laravel vulnerability, resulting in the leak of sensitive customer data, including Aadhaar numbers, passport details, and KYC documents. As this personal data circulates, the threat of identity theft and targeted scams becomes increasingly palpable.


    Physical Data Submission and Local Leaks:

    You submit your passport; it gets digitized and stored without proper security. Later, those exact documents are emailed back to you with fake police letterheads, claiming they were found in a drug bust. It’s a terrifying scam, turning your own physically submitted, insecurely stored data into a weapon used to threaten and coerce you.

    Medium of Communication:

    Scammers hit you via voice calls, spoofing numbers to look like “Delhi Police” or “Govt of India” with high-pressure scripts. They also use video calls on platforms like WhatsApp or Zoom, complete with fake police station backgrounds and uniforms to appear legitimate. Beyond that, watch out for fake SMS with tricky links, and especially emails impersonating government domains with forged court summons or FIRs attached—often even showing your own leaked ID. It’s all designed to look incredibly official and scare you into compliance.

    Typical props include fake KYC update requests and fake arrest warrants mailed to the victim. Keep in mind: a payment reference number is always unique to a transaction, and any PAN update must come from the Income Tax Authority, India.

    Victims of the Scam Across Age Groups:

    Scammers target all ages, but differently. Elderly folks often fall victim due to less familiarity with cyber tricks and more trust in digital authority. Young adults and students are hit hard too, fearing career defamation, especially since their strong social media presence makes their digital footprints easy to trace. Finally, working professionals are exploited through fears of job loss or social humiliation, playing on their reputation.

    Common Payment Methods and How Scammers Fake Authenticity:

    Scammers make their payment flows look authentic by mimicking well-known Indian bank gateways, even accepting card or UPI details. They go as far as buying valid SSL certificates to get that “secure lock” icon. Plus, they manipulate domain names with typosquatting, making fake sites look almost identical to real government URLs. It’s all designed to trick you into believing their payment methods are legitimate.

    How to spot a fake website: non-government email addresses, such as cscsarkaripariksha[@]gmail[.]com or info[@]sarkaripariksha[.]com.

    Official government websites always use @gov.in or @nic.in domains.

    How Quick Heal AntiFraud.AI Detects & Blocks Digital Arrest Scams

    1. Scam Call Detection (Before You Pick Up)

    🔔 Feature: Fraud Call Alert + Scam Protection

    • AntiFraud.AI uses AI and global scam databases to flag known fraud numbers.
    • If a scammer is spoofing a number (e.g., police, government, bank), you’ll see an on-screen Fraud Risk Alert before answering—helping you avoid the trap early.

    2. Preventing Remote Access Tools

    🖥️ Feature: Screen Share Alert + Fraud App Detector

    • Detects if the scammer persuades you to install screen-sharing apps like Anydesk, TeamViewer, or any malicious APK.
    • AntiFraud.AI immediately sends a high-risk alert:
      “⚠️ Screen sharing detected. This may be a scam.”
    • If unauthorized apps are found, they are flagged and disabled.

     

    3. Banking Activity Monitoring in Real-Time

    💳 Feature: Banking Fraud Alert + Unauthorized Access Alert

    • If a scammer gets access and initiates a money transfer, AntiFraud.AI monitors banking behavior using AI.
    • It identifies suspicious patterns (large transfers, new payees, unusual logins) and immediately alerts you to block or verify them.

     

    4. Payment Verification Interception

    🔐 Feature: Payee Name Announcer + Secure Payments

    • Before completing any transaction, the system reads out the payee’s name, warning you if it’s not a verified recipient.
    • Safe Banking mode blocks unsafe payment pages, fake links, or phishing apps often used by scammers.

     

    5. SIM Swap & Call Forwarding Detection

    📞 Feature: Call Forwarding Alert

    • Scammers sometimes redirect calls and SMS to capture OTPs or bypass security layers.
    • AntiFraud.AI instantly notifies you of any SIM manipulation or call forwarding activity—giving you time to stop fraud before it starts.

    6. Post-Incident Protection & Guidance

    🆘 Feature: Victim of a Fraud? + Fraud Protect Buddy

    If you’re caught mid-scam or realize too late, AntiFraud.AI helps you take action:
    • Block transactions
    • Notify your bank
    • Report the scam to authorities
    • Access recovery support

     

    Ending the Trap: Your Cyber Safety Checklist:

    • Stay alert, don’t panic, and always double-check before you act.
    • Never share your OTP or bank details over calls or messages.
    • Never post personal information on social media, as victim profiling often begins there.
    • An arrest cannot be made digitally, whether the offence is cognizable or non-cognizable.
    • Never answer video/audio calls from unknown numbers.
    • No law enforcement agency demands money over audio, video, or SMS platforms.
    • An arrest warrant must be presented physically; it can never be emailed.
    • Don’t blindly click any link; it may be a fraudulent or phishing threat.
    • Always check email domains: genuine government emails end in @gov.in or @nic.in.
    • Regularly check government websites for advisories on fraudulent web notices and user awareness digests.
    • Install security tools like AntiFraud.AI that detect fraudulent activity, alert you, and help ensure your digital security.

    Report to the following Government websites:

    • https://cybercrime.gov.in – Report cybercrime complaints (National Cyber Crime Reporting Portal)
    • https://www.cert-in.org.in – Indian Computer Emergency Response Team
    • Call 1930 – Cybercrime Helpline Number
    • Visit your nearest cyber police station for support

     Stay alert. Prevention is the best protection.




  • Deconstructing the 35mm Website: A Look at the Process and Technical Details



    The Idea Behind the Project

    This project primarily serves as a technical demo and learning material. It began when I decided to start learning Blender. I followed a few tutorials, then decided to do a small project using it—so I chose to create the Canon F-1 camera!

    After that, I decided to export the project to Three.js to add some cool post-processing shader effects. I wanted to create a sketch effect similar to what I had seen in some repair guides.

    After spending a few hours experimenting with it, I decided to integrate it into a fully functional website featuring some cool shaders and 3D effects!

    In this article, I’m going to walk through some of the key features of the site and provide a technical breakdown, assuming you already have a basic or beginner-level understanding of Three.js and shaders.

    1. The Edge Detection Shader

    Three.js includes a built-in edge detection shader called SobelOperatorShader. Basically, it detects edges based on color contrast—it draws a line between two areas with a strong enough difference in color.

    To make my effect work the way I want, I need to assign a unique color to each area I want to highlight on my model. This way, Three.js will draw a line around those areas.

    Here’s my model with all the materials applied:

    This way, Three.js can accurately detect each area I want to highlight!

    As you can see, the lines are not all the same intensity—some are white, while others are light gray. This is because, by default, line intensity depends on contrast: edges with lower contrast appear with lighter lines. To fix this, I manually modified the post-processing shader to make all lines fully white, regardless of contrast.

    The shader can be found in:

    node_modules/three/examples/jsm/shaders/SobelOperatorShader.js

    I copied the contents of the fragment shader into a separate file so I could freely modify it.

    uniform sampler2D tDiffuse;
    uniform vec2 resolution;
    varying vec2 vUv;
    
    float sobel(sampler2D tDiffuse,vec2 texel)
    {
        // kernel definition (in glsl matrices are filled in column-major order)
    
        const mat3 Gx = mat3( -1, -2, -1, 0, 0, 0, 1, 2, 1 ); // x direction kernel
        const mat3 Gy = mat3( -1, 0, 1, -2, 0, 2, -1, 0, 1 ); // y direction kernel
    
        // fetch the 3x3 neighbourhood of a fragment
    
        // first column
    
        float tx0y0 = texture2D( tDiffuse, vUv + texel * vec2( -1, -1 ) ).r;
        float tx0y1 = texture2D( tDiffuse, vUv + texel * vec2( -1,  0 ) ).r;
        float tx0y2 = texture2D( tDiffuse, vUv + texel * vec2( -1,  1 ) ).r;
    
        // second column
    
        float tx1y0 = texture2D( tDiffuse, vUv + texel * vec2(  0, -1 ) ).r;
        float tx1y1 = texture2D( tDiffuse, vUv + texel * vec2(  0,  0 ) ).r;
        float tx1y2 = texture2D( tDiffuse, vUv + texel * vec2(  0,  1 ) ).r;
    
        // third column
    
        float tx2y0 = texture2D( tDiffuse, vUv + texel * vec2(  1, -1 ) ).r;
        float tx2y1 = texture2D( tDiffuse, vUv + texel * vec2(  1,  0 ) ).r;
        float tx2y2 = texture2D( tDiffuse, vUv + texel * vec2(  1,  1 ) ).r;
    
        // gradient value in x direction
    
        float valueGx = Gx[0][0] * tx0y0 + Gx[1][0] * tx1y0 + Gx[2][0] * tx2y0 +
            Gx[0][1] * tx0y1 + Gx[1][1] * tx1y1 + Gx[2][1] * tx2y1 +
            Gx[0][2] * tx0y2 + Gx[1][2] * tx1y2 + Gx[2][2] * tx2y2;
    
        // gradient value in y direction
    
        float valueGy = Gy[0][0] * tx0y0 + Gy[1][0] * tx1y0 + Gy[2][0] * tx2y0 +
            Gy[0][1] * tx0y1 + Gy[1][1] * tx1y1 + Gy[2][1] * tx2y1 +
            Gy[0][2] * tx0y2 + Gy[1][2] * tx1y2 + Gy[2][2] * tx2y2;
    
        // magnitude of the total gradient
    
        float G = sqrt( ( valueGx * valueGx ) + ( valueGy * valueGy ) );
    
        return G;
    }
    
    
    void main() {
    
        vec2 texel = vec2( 1.0 / resolution.x, 1.0 / resolution.y );
        
        float G = sobel(tDiffuse,texel);
        G = G > 0.001 ? 1. : 0.;
            
        gl_FragColor = vec4(vec3(G),1.0);
    
        #include <colorspace_fragment>
    }

    What I’m doing here is moving all the edge detection logic into the Sobel function. Then, I pass the tDiffuse texture—which is the composer’s render—to this function.

    This way, I can modify the output of the edge detection shader before passing it back to the composer:

    float G = sobel(tDiffuse,texel);
    G = G > 0.001 ? 1. : 0.;

    G represents the intensity of the edge detection. It’s a single value because the lines are monochrome. G ranges from 0 to 1, where 0 means full black (no edge detected) and 1 means full white (strong contrast detected).

    As mentioned earlier, this value depends on the contrast. What I’m doing in the second line is forcing G to be 1 if it’s above a certain threshold (I chose 0.001, but you could pick a smaller value if you want).

    This way I can get all the edges to have the same intensity.

    Here’s how I’m applying the custom fragment shader to the Sobel Operator shader pass:

    import { SobelOperatorShader } from "three/addons/shaders/SobelOperatorShader.js"
    import { ShaderPass } from "three/addons/postprocessing/ShaderPass.js"
    
    
    export default class CannonF1 {
        constructor() {
            //....code
        }
    
        setupPostprocessing()
        {
    
            SobelOperatorShader.fragmentShader = sobelFragment
    
            this.effectSobel = new ShaderPass(SobelOperatorShader)
            this.effectSobel.uniforms["resolution"].value.x =
            window.innerWidth * Math.min(window.devicePixelRatio, 2)
            this.effectSobel.uniforms["resolution"].value.y =
            window.innerHeight * Math.min(window.devicePixelRatio, 2)
    
            this.composer.addPass(this.effectSobel)
        }
    }

    2. The Mesh Highlight on Hover Effect

    Next, let’s take a look at the lens parts section.

    This is mainly achieved using a Three.js utility called RenderTarget.

    A render target is a buffer where the GPU draws pixels for a scene being rendered off-screen. It’s commonly used in effects like post-processing, where the rendered image is processed before being displayed on the screen.

    Basically, this allows me to render my scene twice per frame: once with only the highlighted mesh, and once without it.

    First I setup the render targets:

    /* 
      ....Code 
    */
    
    createRenderTargets() {
        const sizes = {
          width:
            window.innerWidth * Math.ceil(Math.min(2, window.devicePixelRatio)),
          height:
            window.innerHeight * Math.ceil(Math.min(2, window.devicePixelRatio)),
        }
    
        this.renderTargetA = new THREE.WebGLRenderTarget(
          sizes.width,
          sizes.height,
          rtParams
        )
    
        this.renderTargetB = new THREE.WebGLRenderTarget(
          sizes.width,
          sizes.height,
          rtParams
        )
      }
    
    /* 
      ...Code 
    */

    Then, using the Three.js Raycaster, I can retrieve the uuid of the mesh that is being hovered on:

    onMouseMove(event: MouseEvent) {
        this.mouse.x = (event.clientX / window.innerWidth) * 2 - 1
        this.mouse.y = -(event.clientY / window.innerHeight) * 2 + 1
    
        this.raycaster.setFromCamera(this.mouse, this.camera)
        const intersects = this.raycaster.intersectObjects(this.scene.children)
        const target = intersects[0]
    
        if (target && "material" in target.object) {
          const targetMesh = intersects[0].object as THREE.Mesh
          this.cannonF1?.onSelectMesh(targetMesh.uuid)
        } else {
          this.cannonF1?.onSelectMesh()
        }
      }

    In the onSelectMesh method, I set the value of this.selectedMeshName to the name of the mesh group that contains the target mesh from the Raycaster (I’m using names to refer to groups of meshes).

    This way, in my render loop, I can create two distinct renders:

    • One render (renderTargetA) with all the meshes except the hovered mesh
    • Another render (renderTargetB) with only the hovered mesh
    render() {
        // Render renderTargetA
        this.modelChildren.forEach((mesh) => {
          if (this.mesheUuidToName[mesh.uuid] === this.selectedMeshName) {
            mesh.visible = false
          } else {
            mesh.visible = true
          }
        })
    
        this.renderer.setRenderTarget(this.renderTargetA)
        this.renderer.render(this.scene, this.camera)
    
        // Render renderTargetB
        this.modelChildren.forEach((mesh) => {
          if (this.mesheUuidToName[mesh.uuid] === this.selectedMeshName) {
            mesh.visible = true
          } else {
            mesh.visible = false
          }
        })
        if (this.targetedMesh) {
          this.targetedMesh.children.forEach((child) => {
            child.visible = true
          })
        }
    
        this.renderer.setRenderTarget(this.renderTargetB)
        this.renderer.render(this.scene, this.camera)
    
        this.modelChildren.forEach((mesh) => {
          mesh.visible = false
        })    
    
        this.effectSobel.uniforms.tDiffuse1.value = this.renderTargetA.texture
        this.effectSobel.uniforms.tDiffuse2.value = this.renderTargetB.texture
    
        this.renderer.setRenderTarget(null)
      }

    This is what the renderTargetA render looks like:

    …and renderTargetB:

    As you can see, I’m sending both renders as texture uniforms to the effectSobel shader. The post-processing shader then “merges” these two renders into a single output.

    At this point, we have two renders of the scene, and the post-processing shader needs to decide which one to display. Initially, I thought of simply combining them by adding the two textures together, but that didn’t produce the correct result:

    What I needed was a way to hide the pixels of one render when they are “covered” by pixels from another render.

    To achieve this, I used the distance of each vertex from the camera. This meant I had to go through all the meshes in the model and modify their materials. However, since the mesh colors are important for the edge detection effect, I couldn’t change their colors.

    Instead, I used the alpha channel of each individual vertex to set the distance from the camera.

    #include <common>
    
    varying vec3 vPosition;
    uniform vec3 uColor;
    
    float normalizeRange(float value, float oldMin, float oldMax, float newMin, float newMax) {
        float normalized = (value - oldMin) / (oldMax - oldMin);
        
        return newMin + (newMax - newMin) * normalized;
    }
    
    void main()
    {
        float dist = distance(vPosition,cameraPosition);
    
        float l = luminance( uColor );
    
        gl_FragColor=vec4(vec3(l),normalizeRange(dist,0.,20.,0.,1.));
    
        #include <colorspace_fragment>
    }

    Here’s an explanation of this shader:

    • First, the luminance function is a built-in Three.js shader utility imported from the <common> module. It’s recommended to use this function with the Sobel effect to improve edge detection results.
    • The uColor value represents the initial color of the mesh.
    • The dist value calculates the distance between the vertex position (passed from the vertex shader via a varying) and the camera, using the built-in cameraPosition variable in Three.js shaders.
    • Finally, I pass this distance through the alpha channel. Since the alpha value can’t exceed 1, I use a normalized version of the distance.

    And here is the updated logic for the postprocessing shader:

    uniform sampler2D tDiffuse;
    uniform sampler2D tDiffuse1;
    uniform sampler2D tDiffuse2;
    uniform vec2 resolution;
    varying vec2 vUv;
    
    float sobel(sampler2D tDiffuse,vec2 texel)
    {
        //sobel operator
    }
    
    
    void main() {
    
        vec2 texel = vec2( 1.0 / resolution.x, 1.0 / resolution.y );
        
        vec4 t = texture2D(tDiffuse,vUv);
    
        vec4 t1 = texture2D(tDiffuse1,vUv);
        vec4 t2 = texture2D(tDiffuse2,vUv);     
    
        if(t1.a==0.)
        {
            t1.a = 1.;
        }
        if(t2.a==0.)
        {
            t2.a = 1.;
        }
    
    
        float G = sobel(tDiffuse1,texel);
        G= G > 0.001 ? 1. : 0.;
        float Gs = sobel(tDiffuse2,texel);
        Gs = Gs > 0.001 ? 1. : 0.;
        
        vec4 s1 = vec4(vec3(G),1.);
        
        vec4 s2 = vec4(vec3(Gs),1.);    
        
        vec4 sobelTexture = vec4(vec3(0.),1.);
    
    
        if(t1.a>t2.a)
        {
            sobelTexture = s2;       
        }    
        else{
            sobelTexture = s1;
        }    
    
            
        gl_FragColor = sobelTexture;
    
        #include <colorspace_fragment>
    }

    Now that the alpha channel of the textures contains the distance to the camera, I can simply compare them and display the render whose vertices are closer to the camera.

    3. The Film Roll Effect

    Next is the film roll component that moves and twists on scroll.

    This effect is achieved using only shaders: the component is a single plane with a shader material.

    All the data is sent to the shader through uniforms:

    export default class Film {  
      constructor() {
        //...code
      }
    
      createGeometry() {
        this.geometry = new THREE.PlaneGeometry(
          60,
          2,
          100,
          10
        )
      }
    
      createMaterial() {
        this.material = new THREE.ShaderMaterial({
          vertexShader,
          fragmentShader,
          side: THREE.DoubleSide,
          transparent: true,
          depthWrite: false,
          blending: THREE.CustomBlending,
          blendEquation: THREE.MaxEquation,
          blendSrc: THREE.SrcAlphaFactor,
          blendDst: THREE.OneMinusSrcAlphaFactor,
          uniforms: {
            uPlaneWidth: new THREE.Uniform(this.geometry.parameters.width),
            uRadius: new THREE.Uniform(2),
            uXZfreq: new THREE.Uniform(3.525),
            uYfreq: new THREE.Uniform(2.155),
            uOffset: new THREE.Uniform(0),
            uAlphaMap: new THREE.Uniform(
              window.preloader.loadTexture(
                "./alpha-map.jpg",
                "film-alpha-map",
                (texture) => {
                  texture.wrapS = THREE.RepeatWrapping
                  const { width, height } = texture.image
                  this.material.uniforms.uAlphaMapResolution.value =
                    new THREE.Vector2(width, height)
                }
              )
            ),
            //uImages: new THREE.Uniform(new THREE.Vector4()),
            uImages: new THREE.Uniform(
              window.preloader.loadTexture(
                "/film-texture.png",
                "film-image-texture",
                (tex) => {
                  tex.wrapS = THREE.RepeatWrapping
                }
              )
            ),
            uRepeatFactor: new THREE.Uniform(this.repeatFactor),
            uImagesCount: new THREE.Uniform(this.images.length * this.repeatFactor),
            uAlphaMapResolution: new THREE.Uniform(new THREE.Vector2()),
            uFilmColor: new THREE.Uniform(window.colors.orange1),
          },
        })
      }  
    
      createMesh() {
        this.mesh = new THREE.Mesh(this.geometry, this.material)
        this.scene.add(this.mesh)
      }
    }

    The main vertex shader uniforms are:

    • uRadius is the radius of the cylinder shape
    • uXZfreq is the frequency of the twists on the (X,Z) plane
    • uYfreq is a cylinder height factor
    • uOffset is the vertical offset of the roll when you scroll up and down

    Here is how they are used in the vertex shader:

    #define PI 3.14159265359
    
    uniform float uPlaneWidth;
    uniform float uXZfreq;
    uniform float uYfreq;
    varying vec2 vUv;
    uniform float uOffset;
    varying vec3 vPosition;
    uniform float uRadius;
    
    void main()
    {
        vec3 np = position;
        float theta = -(PI*np.x)/(uPlaneWidth*0.5);
    
    
        np.x=cos(uXZfreq*theta+uOffset)*uRadius;
        np.y+=theta*uYfreq;
        np.z=sin(uXZfreq*theta+uOffset)*uRadius;
        
        vec4 modelPosition = modelMatrix * vec4(np, 1.0);
    
        
        vec4 viewPosition = viewMatrix * modelPosition;
        vec4 projectedPosition = projectionMatrix * viewPosition;
        gl_Position = projectedPosition;    
    
    
        vUv=uv;
        vPosition=np;
    }

    As you can see, they are used to modify the initial position attribute to give the plane the shape of a cylinder. The modified position’s X, Y, and Z components use uOffset in their frequency, and this uniform is linked to a ScrollTrigger timeline that produces the twist-on-scroll effect.

    const tl = gsap.timeline({
      scrollTrigger: {
        trigger: this.section,
        start: "top bottom",
        end: "bottom top",
        scrub: true,
        invalidateOnRefresh: true,        
      },
    })    
    
    tl.to(
      this.material.uniforms.uOffset,
      {
        value: 10,
        duration: 1,
      },
      0
    )

    Conclusion

    That’s it for the most part! Don’t feel frustrated if you don’t understand everything right away—I often got stuck for days on certain parts and didn’t know every technical detail before I started building.

    I learned so much from this project, and I hope you’ll find it just as useful!

    Thank you for reading, and thanks to Codrops for featuring me again!




  • Is Random.GetItems the best way to get random items in C# 12? | Code4IT



    You have a collection of items. You want to retrieve N elements randomly. Which alternatives do we have?


    One of the most common operations when dealing with collections of items is to retrieve a subset of these elements taken randomly.

    Before .NET 8, the most common way to retrieve random items was to order the collection using a random value and then take the first N items of the now sorted collection.

    From .NET 8 on, we have a new method in the Random class: GetItems.

    So, should we use this method or stick to the previous version? Are there other alternatives?

    For the sake of this article, I created a simple record type, CustomRecord, which just contains two properties.

    public record CustomRecord(int Id, string Name);
    

    I then stored a collection of such elements in an array. This article’s final goal is to find the best way to retrieve a random subset of such items. Spoiler alert: it all depends on your definition of best!

    Method #1: get random items with Random.GetItems

    Starting from .NET 8, released in 2023, we now have a new method belonging to the Random class: GetItems.

    There are three overloads:

    public T[] GetItems<T>(T[] choices, int length);
    public T[] GetItems<T>(ReadOnlySpan<T> choices, int length);
    public void GetItems<T>(ReadOnlySpan<T> choices, Span<T> destination);
    

    We will focus on the first overload, which accepts an array of items (choices) in input and returns an array of size length.

    We can use it as such:

    CustomRecord[] randomItems = Random.Shared.GetItems(Items, TotalItemsToBeRetrieved);
    

    Simple, neat, efficient. Or is it?

    Method #2: get the first N items from a shuffled copy of the initial array

    Another approach is to shuffle the whole initial array using Random.Shuffle. It takes an array as input and shuffles its items in place.

    Random.Shared.Shuffle(Items);
    CustomRecord[] randomItems = Items.Take(TotalItemsToBeRetrieved).ToArray();
    

    If you need to preserve the initial order of the items, you should create a copy of the initial array and shuffle only the copy. You can do this by using this syntax:

    CustomRecord[] copy = [.. Items];
    

    If you just need some random items and don’t care about the initial array, you can shuffle it without making a copy.

    Once we’ve shuffled the array, we can pick the first N items to get a subset of random elements.

    Method #3: order by Guid, then take N elements

    Before .NET 8, one of the most used approaches was to order the whole collection by a random value, usually a newly generated Guid, and then take the first N items.

    var randomItems = Items
        .OrderBy(_ => Guid.NewGuid()) // THIS!
        .Take(TotalItemsToBeRetrieved)
        .ToArray();
    

    This approach works fine but has the disadvantage that it instantiates a new Guid value for every item in the collection, which is expensive in terms of memory.

    Method #4: order by Number, then take N elements

    Another approach was to generate a random number to use as a discriminator for ordering the collection; then, again, take the first N items.

    var randomItems = Items
        .OrderBy(_ => Random.Shared.Next()) // THIS!
        .Take(TotalItemsToBeRetrieved)
        .ToArray();
    

    This approach is slightly better because generating a random integer is way faster than generating a new Guid.

    Benchmarks of the different operations

    It’s time to compare the approaches.

    I used BenchmarkDotNet to generate the reports and ChartBenchmark to represent the results visually.

    Let’s see how I structured the benchmark.

    [MemoryDiagnoser]
    public class RandomItemsBenchmark
    {
        [Params(100, 10_000, 1_000_000)]
        public int Size;
    
        private CustomRecord[] Items;
        private int TotalItemsToBeRetrieved;
        private CustomRecord[] Copy;
    
        [IterationSetup]
        public void Setup()
        {
            var ids = Enumerable.Range(0, Size).ToArray();
            Items = ids.Select(i => new CustomRecord(i, $"Name {i}")).ToArray();
            Copy = [.. Items];
    
            TotalItemsToBeRetrieved = Random.Shared.Next(Size);
        }
    
        [Benchmark(Baseline = true)]
        public void WithRandomGetItems()
        {
            CustomRecord[] randomItems = Random.Shared.GetItems(Items, TotalItemsToBeRetrieved);
            _ = randomItems.Length;
        }
    
        [Benchmark]
        public void WithRandomGuid()
        {
            CustomRecord[] randomItems = Items
                .OrderBy(_ => Guid.NewGuid())
                .Take(TotalItemsToBeRetrieved)
                .ToArray();
    
            _ = randomItems.Length;
        }
    
        [Benchmark]
        public void WithRandomNumber()
        {
            CustomRecord[] randomItems = Items
                .OrderBy(_ => Random.Shared.Next())
                .Take(TotalItemsToBeRetrieved)
                .ToArray();
    
            _ = randomItems.Length;
        }
    
        [Benchmark]
        public void WithShuffle()
        {
            CustomRecord[] copy = [.. Items];
    
            Random.Shared.Shuffle(copy);
            CustomRecord[] randomItems = copy.Take(TotalItemsToBeRetrieved).ToArray();
    
            _ = randomItems.Length;
        }
    
        [Benchmark]
        public void WithShuffleNoCopy()
        {
            Random.Shared.Shuffle(Copy);
            CustomRecord[] randomItems = Copy.Take(TotalItemsToBeRetrieved).ToArray();
    
            _ = randomItems.Length;
        }
    }
    

    We are going to run the benchmarks on arrays with different sizes. We will start with a smaller array with 100 items and move to a bigger one with one million items.

    We generate the initial array of CustomRecord instances for every iteration and store it in the Items property. Then, we randomly choose the number of items to get from the Items array and store it in the TotalItemsToBeRetrieved property.

    We also generate a copy of the initial array at every iteration; this way, we can run Random.Shuffle without modifying the original array.

    Finally, we define the body of the benchmarks using the implementations we saw before.

    Notice: I marked the benchmark for the GetItems method as a baseline, using [Benchmark(Baseline = true)]. This way, we can easily see the results ratio for the other methods compared to this specific method.

    When we run the benchmark, we can see this final result (for simplicity, I removed the Error, StdDev, and Median columns):

    Method              Size      Mean            Ratio   Allocated    Alloc Ratio
    WithRandomGetItems  100       6.442 us        1.00    424 B        1.00
    WithRandomGuid      100       39.481 us       6.64    3576 B       8.43
    WithRandomNumber    100       22.219 us       3.67    2256 B       5.32
    WithShuffle         100       7.038 us        1.16    1464 B       3.45
    WithShuffleNoCopy   100       4.254 us        0.73    624 B        1.47
    WithRandomGetItems  10000     58.401 us       1.00    5152 B       1.00
    WithRandomGuid      10000     2,369.693 us    65.73   305072 B     59.21
    WithRandomNumber    10000     1,828.325 us    56.47   217680 B     42.25
    WithShuffle         10000     180.978 us      4.74    84312 B      16.36
    WithShuffleNoCopy   10000     156.607 us      4.41    3472 B       0.67
    WithRandomGetItems  1000000   15,069.781 us   1.00    4391616 B    1.00
    WithRandomGuid      1000000   319,088.446 us  42.79   29434720 B   6.70
    WithRandomNumber    1000000   166,111.193 us  22.90   21512408 B   4.90
    WithShuffle         1000000   48,533.527 us   6.44    11575304 B   2.64
    WithShuffleNoCopy   1000000   37,166.068 us   4.57    6881080 B    1.57

    By looking at the numbers, we can notice that:

    • GetItems is overall the most performant method, both for execution time and memory allocation;
    • using Guid.NewGuid is the worst approach: depending on the array size, it’s roughly 7 to 66 times slower than GetItems, and it allocates up to ~59x the memory;
    • sorting by a random number is slightly better: it’s roughly 4 to 56 times slower than GetItems, and it allocates up to ~42x the memory;
    • shuffling the array in place and taking the first N elements is about 4 to 6 times slower than GetItems on the larger arrays; if you also have to preserve the original array, you’ll lose some memory allocation performance because you must allocate extra memory for the cloned array.

    Here’s the chart with the performance values. Notice that, for better readability, I used a Log10 scale.

    Results comparison for all executions

    If we move our focus to the array with one million items, we can better understand the impact of choosing one approach instead of the other. Notice that here I used a linear scale since values are on the same magnitude order.

    The purple line represents the memory allocation in bytes.

    Results comparison for one-million-items array

    So, should we use GetItems all over the place? Well, no! Let me tell you why.

    The problem with Random.GetItems: repeated elements

    There’s a huge problem with the GetItems method: it picks items with replacement, so the returned array can contain duplicates. If you need to get N items without duplicates, GetItems is not the right choice.

    Here’s how you can demonstrate it.

    First, create an array of 100 distinct items. Then, using Random.Shared.GetItems, retrieve 100 items.

    The final array will have 100 items; the array may or may not contain duplicates.

    int[] source = Enumerable.Range(0, 100).ToArray();
    
    StringBuilder sb = new StringBuilder();
    
    for (int i = 1; i <= 200; i++)
    {
        HashSet<int> ints = Random.Shared.GetItems(source, 100).ToHashSet();
        sb.AppendLine($"run-{i}, {ints.Count}");
    }
    
    var finalCsv = sb.ToString();
    

    To check the number of distinct elements, I put the resulting array in a HashSet<int>. The final size of the HashSet will give us the exact percentage of unique values.

    If the HashSet size is exactly 100, it means that GetItems retrieved each element from the original array exactly once.

    For simplicity, I formatted the result in CSV format so that I could generate plots with it.

    Unique values percentage returned by GetItems

    As you can see, on average we get about 65% unique items and 35% duplicates. This is in line with the theory: when drawing 100 times with replacement from 100 items, the expected share of distinct values is 1 − (1 − 1/100)^100, roughly 63%.

    Further readings

    I used the Enumerable.Range method to generate the initial items.

    I wrote an article to explain how to use it, which are some parts to consider when using it, and more.

    🔗 LINQ’s Enumerable.Range to generate a sequence of consecutive numbers | Code4IT

    This article first appeared on Code4IT 🐧

    Wrapping up

    You should not blindly replace your current way of getting random items with Random.GetItems. Well, unless you are okay with having duplicates.

    If you need unique values, you should rely on other methods, such as Random.Shuffle.
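
    For example, if shuffling the whole array feels wasteful just to pick N items, another option is a partial Fisher-Yates shuffle that stops after N swaps. Here’s a minimal sketch (my own illustration, not part of the benchmark code above):

    // Sketch: pick N unique random items via a partial Fisher-Yates shuffle.
    // Only the first n positions are shuffled, so it performs O(n) swaps.
    public static T[] TakeUniqueRandom<T>(T[] source, int n)
    {
        T[] copy = [.. source]; // keep the original array intact

        for (int i = 0; i < n; i++)
        {
            int j = Random.Shared.Next(i, copy.Length);
            (copy[i], copy[j]) = (copy[j], copy[i]);
        }

        return copy[..n];
    }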

    All in all, always remember to validate your assumptions by running experiments on the methods you are not sure you can trust!

    I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛

    Happy coding!

    🐧






  • The Quick Guide to Dijkstra's Algorithm
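
    The core idea of Dijkstra’s algorithm: starting from the source node, repeatedly settle the unvisited node with the smallest known distance and relax its outgoing edges. A minimal sketch in C# using .NET’s built-in PriorityQueue (illustrative only, not taken from the linked article):

    // Illustrative Dijkstra sketch: shortest distances from `source` over a
    // non-negative weighted adjacency list. Not the linked article's code.
    public static int[] Dijkstra(List<(int To, int Weight)>[] graph, int source)
    {
        var dist = new int[graph.Length];
        Array.Fill(dist, int.MaxValue);
        dist[source] = 0;

        var queue = new PriorityQueue<int, int>();
        queue.Enqueue(source, 0);

        while (queue.TryDequeue(out int node, out int d))
        {
            if (d > dist[node]) continue; // stale entry: a shorter path was already found

            foreach (var (to, weight) in graph[node])
            {
                int candidate = d + weight;
                if (candidate < dist[to])
                {
                    dist[to] = candidate;
                    queue.Enqueue(to, candidate);
                }
            }
        }

        return dist;
    }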







  • Building a Physics-Based Character Controller with the Help of AI



    Creating a third-person character controller involves more than just moving an object around a 3D scene. Realistic movement, grounded physics, responsive jumping, and animation blending are essential for a polished feel. This article explores how these elements can be assembled — not through traditional manual coding, but via AI-assisted development using Bolt.new, a browser-based tool that generates web code from natural language prompts, backed by Claude 3.7 Sonnet and Claude 3.5 Sonnet LLMs. It provides a lightweight environment where developers can focus on describing functionality rather than writing boilerplate.

    For this character controller, Bolt handled tasks like setting up physics, integrating animations, and managing input systems, making it easier to test ideas and iterate quickly without switching between tools or writing everything from scratch.

    If you’re curious to learn more, check out this article on Codrops, which also explores the platform’s capabilities and showcases another real-world project built entirely with AI.

    The final project is powered by React Three Fiber, Three.js, and Rapier, and showcases how a designer or developer can create complex, interactive 3D experiences by guiding AI — focusing on behavior and structure rather than syntax.

    Step 1: Setting Up Physics with a Capsule and Ground

    The character controller begins with a simple setup: a capsule collider for the player and a ground plane to interact with. Rapier, a fast and lightweight physics engine built in WebAssembly, handles gravity, rigid body dynamics, and collisions. This forms the foundation for player movement and world interaction.

    The capsule shape was chosen for its stability when sliding across surfaces and climbing over small obstacles — a common pattern in real-time games.

    Step 2: Real-Time Tuning with a GUI

    To enable rapid iteration and balance gameplay feel, a visual GUI was introduced (using Leva.js). This panel exposes parameters such as:

    • Player movement speed
    • Jump force
    • Gravity scale
    • Follow camera offset
    • Debug toggles

    By integrating this directly into the experience, developers can tune the controller live without needing to edit or recompile code, speeding up testing and design decisions.

    Step 3: Ground Detection with Raycasting

    A raycast is used to detect whether the player is grounded. This simple yet effective check prevents the character from jumping mid-air or triggering multiple jumps in sequence.

    The logic is executed on every frame, casting a ray downward from the base of the capsule collider. When contact is confirmed, the jump input is enabled. This technique also allows smooth transitions between grounded and falling states in the animation system.

    Step 4: Integrating a Rigged Character with Animation States

    The visual character uses a rigged GLB model via Mixamo, with three key animations: Idle, Run, and Fall. These are integrated as follows:

    • The GLB character is attached as a child of the capsule collider
    • The animation state switches dynamically based on velocity and grounded status
    • Transitions are handled via animation blending for a natural feel

    This setup keeps the visuals in sync with physics, while preserving modular control over the physical capsule.

    Step 5: World Building and Asset Integration

The environment was arranged in Blender, then exported as a single .glb file and imported into the Bolt.new project scene. This approach allows for efficient scene composition while keeping asset management simple.
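Loading that single exported file into the scene is a one-liner with drei. A sketch (the path and the choice of trimesh colliders are assumptions):

    // Sketch of loading the Blender-exported environment as one .glb.
    import { useGLTF } from "@react-three/drei";
    import { RigidBody } from "@react-three/rapier";

    export function World() {
      const { scene } = useGLTF("/world.glb"); // illustrative path
      return (
        // trimesh colliders make the static geometry solid for the capsule
        <RigidBody type="fixed" colliders="trimesh">
          <primitive object={scene} />
        </RigidBody>
      );
    }

    useGLTF.preload("/world.glb");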

For the web, using .glb keeps geometry and textures bundled together. To maintain performance, it’s recommended to keep textures at 1024×1024 resolution or another square power-of-two size (e.g. 256, 512, 2048); this keeps GPU memory usage efficient and load times fast across devices.

    Special thanks to KayLousberg for the low-poly 3D kit used for prototyping.

    Step 6: Cross-Platform Input Support

The controller was designed to work seamlessly across desktop, mobile, and gamepad input, all built using AI-generated logic through Bolt.

    Gamepad support was added using the Gamepad API, allowing players to plug in a controller and play with analog input.
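The Gamepad API boils down to polling navigator.getGamepads() each frame. A minimal sketch of reading analog input (the deadzone and button mapping are illustrative):

    // Minimal Gamepad API polling sketch; deadzone and mapping are assumptions.
    function readGamepad() {
      const gp = navigator.getGamepads()[0];
      if (!gp) return null;

      const deadzone = 0.15;
      const axis = (v) => (Math.abs(v) > deadzone ? v : 0);

      return {
        moveX: axis(gp.axes[0]),      // left stick, horizontal
        moveY: axis(gp.axes[1]),      // left stick, vertical
        jump: gp.buttons[0]?.pressed, // typically the A / cross button
      };
    }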

    On desktop, the controller uses standard keyboard input (WASD or arrow keys) and mouse movement for camera control.

    On mobile, AI-generated code enabled an on-screen joystick and jump button, making the game fully touch-compatible.

    All input types control the same physics-driven character, ensuring consistent behavior across devices — whether you’re playing on a laptop, touchscreen, or game controller.

    This cross-platform support was implemented entirely through natural language prompts, showcasing how AI can translate high-level intent into working input systems.

    The Role of AI in the Workflow

    What makes this controller unique isn’t the mechanics — it’s the process. Every system was generated by AI through descriptive prompts, allowing the developer to work more like a creative director than a traditional engineer.

    AI handled the boilerplate, the physics setup, the animation switching logic — all based on clear creative goals. This opens new doors for prototyping and interactive design, where iteration speed matters more than syntax.

    This character controller demo includes:

    • Capsule collider with physics
    • Grounded detection via raycast
    • State-driven animation blending
    • GUI controls for tuning
    • Environment interaction with static/dynamic objects
• Cross-platform input support

    It’s a strong starting point for creating browser-based games, interactive experiences, or prototyping new ideas — all with the help of AI.

    Check out the full game built using this setup as a base: 🎮 Demo Game

    Thanks for following along — have fun building 😊



    Source link

• IFormattable interface, to define different string formats for the same object | Code4IT

    IFormattable interface, to define different string formats for the same object | Code4IT



Even when the internal data is the same, sometimes you can represent it in different ways. Think of the DateTime structure: by using different format specifiers, you can represent the same date in different formats.

    DateTime dt = new DateTime(2024, 1, 1, 8, 53, 14);
    
    Console.WriteLine(dt.ToString("yyyy-MM-dddd")); //2024-01-Monday
    Console.WriteLine(dt.ToString("Y")); //January 2024
    

    Same datetime, different formats.

    You can further customise it by adding the CultureInfo:

    System.Globalization.CultureInfo italianCulture = new System.Globalization.CultureInfo("it-IT");
    
    Console.WriteLine(dt.ToString("yyyy-MM-dddd", italianCulture)); //2024-01-lunedì
    Console.WriteLine(dt.ToString("Y", italianCulture)); //gennaio 2024
    

    Now, how can we use this behaviour in our custom classes?

    IFormattable interface for custom ToString definition

    Take this simple POCO class:

    public class Person
    {
        public string FirstName { get; set; }
        public string LastName { get; set; }
        public DateTime BirthDate { get; set; }
    }
    

    We can make this class implement the IFormattable interface so that we can define and use the advanced ToString:

    public class Person : IFormattable
    {
        public string FirstName { get; set; }
        public string LastName { get; set; }
        public DateTime BirthDate { get; set; }
    
        public string ToString(string? format, IFormatProvider? formatProvider)
        {
            // Here, you define how to work with different formats
        }
    }
    

    Now, we can define the different formats. Since I like to keep the available formats close to the main class, I added a nested class that only exposes the names of the formats.

    public class Person : IFormattable
    {
        public string FirstName { get; set; }
        public string LastName { get; set; }
        public DateTime BirthDate { get; set; }
    
        public string ToString(string? format, IFormatProvider? formatProvider)
        {
            // Here, you define how to work with different formats
        }
    
        public static class StringFormats
        {
            public const string FirstAndLastName = "FL";
            public const string Mini = "Mini";
            public const string Full = "Full";
        }
    }
    

Finally, we can implement the ToString(string? format, IFormatProvider? formatProvider) method, taking care of all the different formats we support (remember to handle the case when the format is not recognised!).

    public string ToString(string? format, IFormatProvider? formatProvider)
    {
        switch (format)
        {
            case StringFormats.FirstAndLastName:
                return string.Format("{0} {1}", FirstName, LastName);
            case StringFormats.Full:
            {
                FormattableString fs = $"{FirstName} {LastName} ({BirthDate:D})";
                return fs.ToString(formatProvider);
            }
            case StringFormats.Mini:
                return $"{FirstName.Substring(0, 1)}.{LastName.Substring(0, 1)}";
            default:
                return this.ToString();
        }
    }
    

    A few things to notice:

1. I use a switch statement based on the values defined in the StringFormats subclass. If the format is empty or unrecognised, the method falls back to the default ToString implementation.
2. You can generate the string however you prefer: string interpolation, string.Format, or more complex approaches.
3. In the StringFormats.Full branch, I store the string in a FormattableString instance so that the input formatProvider is applied to the final result.

    Getting a custom string representation of an object

    We can try the different formatting options now that we have implemented them all.

Look at how the behaviour changes based on the format and the input culture (hint: venerdì is Italian for Friday).

    Person person = new Person
    {
        FirstName = "Albert",
        LastName = "Einstein",
        BirthDate = new DateTime(1879, 3, 14)
    };
    
    System.Globalization.CultureInfo italianCulture = new System.Globalization.CultureInfo("it-IT");
    
    Console.WriteLine(person.ToString(Person.StringFormats.FirstAndLastName, italianCulture)); //Albert Einstein
    
    Console.WriteLine(person.ToString(Person.StringFormats.Mini, italianCulture)); //A.E
    
    Console.WriteLine(person.ToString(Person.StringFormats.Full, italianCulture)); //Albert Einstein (venerdì 14 marzo 1879)
    
    Console.WriteLine(person.ToString(Person.StringFormats.Full, null)); //Albert Einstein (Friday, March 14, 1879)
    
    Console.WriteLine(person.ToString(Person.StringFormats.Full, CultureInfo.InvariantCulture)); //Albert Einstein (Friday, 14 March 1879)
    
    Console.WriteLine(person.ToString("INVALID FORMAT", CultureInfo.InvariantCulture)); //Scripts.General.IFormattableTest+Person
    
    Console.WriteLine(string.Format("I am {0:Mini}", person)); //I am A.E
    
    Console.WriteLine($"I am not {person:Full}"); //I am not Albert Einstein (Friday, March 14, 1879)
    

    Not only that, but now the result can also depend on the Culture related to the current thread:

// declare the German culture used in the second block below
System.Globalization.CultureInfo germanCulture = new System.Globalization.CultureInfo("de-DE");

using (new TemporaryThreadCulture(italianCulture))
{
    Console.WriteLine(person.ToString(Person.StringFormats.Full, CultureInfo.CurrentCulture)); //Albert Einstein (venerdì 14 marzo 1879)
}

using (new TemporaryThreadCulture(germanCulture))
{
    Console.WriteLine(person.ToString(Person.StringFormats.Full, CultureInfo.CurrentCulture)); //Albert Einstein (Freitag, 14. März 1879)
}
    

    (note: TemporaryThreadCulture is a custom class that I explained in a previous article – see below)

    Further readings

You might be thinking «wow, somebody still uses String.Format? Weird!»

Well, even though it may seem an old-style way to generate strings, it’s still valid, as I explain here:

    🔗How to use String.Format – and why you should care about it | Code4IT

    Also, how did I temporarily change the culture of the thread? Here’s how:
    🔗 C# Tip: How to temporarily change the CurrentCulture | Code4IT

    This article first appeared on Code4IT 🐧

    Wrapping up

    I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛

    Happy coding!

    🐧





    Source link

  • Bolt.new: Web Creation at the Speed of Thought

    Bolt.new: Web Creation at the Speed of Thought


    What Is Bolt.new?

    Bolt.new is a browser-based AI web development agent focused on speed and simplicity. It lets anyone prototype, test, and publish web apps instantly—without any dev experience required.

    Designed for anyone with an idea, Bolt empowers users to create fully functional websites and apps using just plain language. No coding experience? No problem. By combining real-time feedback with prompt-based development, Bolt turns your words into working code right in the browser. Whether you’re a designer, marketer, educator, or curious first-timer, Bolt.new offers an intuitive, AI-assisted playground where you can build, iterate, and launch at the speed of thought.

    Core Features:

    • Instantly live: Bolt creates your code as you type—no server setup needed.
    • Web-native: Write in HTML, CSS, and JavaScript; no frameworks required.
    • Live preview: Real-time output without reloads or delays.
    • One-click sharing: Publish your project with a single URL.

    A Lean Coding Playground

Bolt is a lightweight workspace that lets anyone build working software without writing code. It presents users with a simple, chat-based environment in which you can prompt your agent to create anything you can imagine. Features include:

    • Split view: Code editor and preview side by side.
    • Multiple files: Organize HTML, CSS, and JS independently.
    • ES module support: Structure your scripts cleanly and modularly.
    • Live interaction testing: Great for animations and frontend logic.

    Beyond the Frontend

    With integrated AI and full-stack support via WebContainers (from StackBlitz), Bolt.new can handle backend tasks right in the browser.

    • Full-stack ready: Run Node.js servers, install npm packages, and test APIs—all in-browser.
    • AI-assisted dev: Use natural-language prompts for setup and changes.
    • Quick deployment: Push to production with a single click, directly from the editor.

    Design-to-Code with Figma

For designers, Bolt.new is more than a dev tool: it’s a creative enabler. By eliminating the need to write code, it opens the door to hands-on prototyping, faster iteration, and tighter collaboration. With just a prompt, designers can bring interfaces to life, experiment with interactivity, and see their ideas in action without leaving the browser. Whether you’re translating a Figma file into responsive HTML or testing a new UX flow, Bolt gives you the freedom to move from concept to clickable with zero friction.

    Key Features:

• Connects directly with Figma, translating design components into working web code, ideal for fast iteration and designer-developer collaboration.
• Enables real-time collaboration between design and development teams.
• Works for prototyping, handoff, or production-ready builds.

    Trying it Out

    To put Bolt.new to the test, we set out to build a Daily Coding Challenge Planner. Here’s the prompt we used:

    Web App Request: Daily Frontend Coding Challenge Planner

    I’d like a web app that helps me plan and keep track of one coding challenge each day. The main part of the app should be a calendar that shows the whole month. I want to be able to click on a day and add a challenge to it — only one challenge per day.

    Each challenge should have:

    • A title (what the challenge is)
    • A category (like “CSS”, “JavaScript”, “React”, etc.)
    • A way to mark it as “completed” once I finish it
    • Optionally, a link to a tutorial or resource I’m using

    I want to be able to:

    • Move challenges from one day to another by dragging and dropping them
    • Add new categories or rename existing ones
    • Easily delete or edit a challenge if I need to

    There should also be a side panel or settings area to manage my list of categories.

    The app should:

    • Look clean and modern
    • Work well on both computer and mobile
    • Offer light/dark mode switch
    • Automatically save data—no login required

    This is a tool to help me stay consistent with daily practice and see my progress over time.

    Building with Bolt.new

    We handed the prompt to Bolt.new and watched it go to work.

    • Visual feedback while the app was being generated.
    • The initial result included key features: adding, editing, deleting challenges, and drag-and-drop.
    • Prompts like “fix dark mode switch” and “add category colors” helped refine the UI.

    Integrated shadcn/ui components gave the interface a polished finish.

    Screenshots

    The Daily Frontend Coding Challenge Planner app, built using just a few prompts
    Adding a new challenge to the planner

    With everything in place, we deployed the app in one click.

    👉 See the live version here
    👉 View the source code on GitHub

    Verdict

We were genuinely impressed by how quickly Bolt.new generated a working app from just a prompt. Minor tweaks were easy, and even a small bug was fixed with minimal guidance.

    Try it yourself—you might be surprised by how much you can build with so little effort.

    🔗 Try Bolt.new

    Final Thoughts

    The future of the web feels more accessible, creative, and immediate—and tools like Bolt.new are helping shape it. In a landscape full of complex tooling and steep learning curves, Bolt.new offers a refreshing alternative: an intelligent, intuitive space where ideas take form instantly.

Bolt lowers the barrier to building for the web. Its prompt-based interface, real-time feedback, and seamless deployment turn what used to be hours of setup into minutes of creativity. With support for full-stack workflows, Figma integration, and AI-assisted editing, Bolt.new isn’t just another code editor; it’s a glimpse into a more accessible, collaborative, and accelerated future for web creation.

    What will you create?



    Source link