Tag: Use

  • In Landmark Ruling, Court Declares Training AI Is Fair Use But Draws a Hard Line on Piracy




  • Use TestCase to run similar unit tests with NUnit | Code4IT


    In my opinion, unit tests should be well structured and written even better than production code.

    In fact, unit tests act as a first level of documentation of what your code does and, if written properly, can be the key to fixing bugs quickly and without introducing regressions.

    One way to improve readability is by grouping similar tests that only differ by the initial input but whose behaviour is the same.

    Let’s use a dummy example: some tests on a simple Calculator class that only performs sums on int values.

    public static class Calculator
    {
        public static int Sum(int first, int second) => first + second;
    }
    

    One way to create tests is by creating one test for each possible combination of values:

    public class SumTests
    {
    
        [Test]
        public void SumPositiveNumbers()
        {
            var result = Calculator.Sum(1, 5);
            Assert.That(result, Is.EqualTo(6));
        }
    
        [Test]
        public void SumNegativeNumbers()
        {
            var result = Calculator.Sum(-1, -5);
            Assert.That(result, Is.EqualTo(-6));
        }
    
        [Test]
        public void SumWithZero()
        {
            var result = Calculator.Sum(1, 0);
            Assert.That(result, Is.EqualTo(1));
        }
    }
    

    However, it’s not a good idea: you’ll end up with lots of nearly identical tests (DRY, remember?) that add little to no value to the test suite. Also, this approach forces you to add a new test method for every new kind of test that pops into your mind.

    When possible, we should generalize them. With NUnit, we can use the TestCase attribute to specify the sets of parameters passed to our test method, including the expected result.

    We can then simplify the whole test class by creating a single method that accepts the different cases as input and runs the assertions on those values.

    [Test]
    [TestCase(1, 5, 6)]
    [TestCase(-1, -5, -6)]
    [TestCase(1, 0, 1)]
    public void SumWorksCorrectly(int first, int second, int expected)
    {
        var result = Calculator.Sum(first, second);
        Assert.That(result, Is.EqualTo(expected));
    }
    

    By using TestCase, you can cover different cases by simply adding a new case without creating new methods.

    Clearly, don’t abuse it: use it only to group methods with similar behaviour – and don’t add if statements in the test method!
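    As a side note, TestCase also supports an ExpectedResult named parameter, which lets the test method simply return the computed value instead of asserting explicitly. A minimal sketch of the same tests written that way:

    [TestCase(1, 5, ExpectedResult = 6)]
    [TestCase(-1, -5, ExpectedResult = -6)]
    [TestCase(1, 0, ExpectedResult = 1)]
    public int SumWorksCorrectly_WithExpectedResult(int first, int second)
        => Calculator.Sum(first, second);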

    There is a more advanced way to create a TestCase in NUnit, named TestCaseSource – but we will talk about it in a future C# tip 😉

    Further readings

    If you are using NUnit, I suggest you read this article about custom equality checks – you might find it handy in your code!

    🔗 C# Tip: Use custom Equality comparers in NUnit tests | Code4IT

    This article first appeared on Code4IT 🐧

    Wrapping up

    I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛

    Happy coding!

    🐧





    Source link

  • 2 ways to use custom equality rules in a HashSet | Code4IT



    With HashSet, you can get a list of different items in a performant way. What if you need a custom way to define when two objects are equal?


    Sometimes, object instances can be considered equal even though some of their properties are different. Consider a movie translated into different languages: the Italian and French versions are different, but the movie is the same.

    If we want to store unique values in a collection, we can use a HashSet<T>. But how can we store items in a HashSet when we must follow a custom rule to define if two objects are equal?

    In this article, we will learn two ways to add custom equality checks when using a HashSet.

    Let’s start with a dummy class: Pirate.

    public class Pirate
    {
        public int Id { get; }
        public string Name { get; }
    
        public Pirate(int id, string username)
        {
            Id = id;
            Name = username;
        }
    }
    

    I’m going to add some instances of Pirate to a HashSet. Please note that there are two pirates whose Id is 4:

    List<Pirate> mugiwara = new List<Pirate>()
    {
        new Pirate(1, "Luffy"),
        new Pirate(2, "Zoro"),
        new Pirate(3, "Nami"),
        new Pirate(4, "Sanji"), // This ...
        new Pirate(5, "Chopper"),
        new Pirate(6, "Robin"),
        new Pirate(4, "Duval"), // ... and this
    };
    
    
    HashSet<Pirate> hashSet = new HashSet<Pirate>();
    
    
    foreach (var pirate in mugiwara)
    {
        hashSet.Add(pirate);
    }
    
    
    _output.WriteAsTable(hashSet);
    

    (I really hope you’ll get the reference 😂)

    Now, what will we print on the console? (PS: _output is just a wrapper around some functionality provided by Spectre.Console, which I used here to print a table.)

    HashSet result when no equality rule is defined

    As you can see, we have both Sanji and Duval: even though their Ids are the same, those are two distinct objects.

    That’s because we haven’t told the HashSet that the Id property must be used as a discriminator.

    Define a custom IEqualityComparer in a C# HashSet

    In order to add a custom way to tell the HashSet that two objects can be treated as equal, we can define a custom equality comparer: it’s nothing but a class that implements the IEqualityComparer<T> interface, where T is the type we are working with.

    public class PirateComparer : IEqualityComparer<Pirate>
    {
        bool IEqualityComparer<Pirate>.Equals(Pirate? x, Pirate? y)
        {
            Console.WriteLine($"Equals: {x.Name} vs {y.Name}");
            return x.Id == y.Id;
        }
    
        int IEqualityComparer<Pirate>.GetHashCode(Pirate obj)
        {
            Console.WriteLine("GetHashCode " + obj.Name);
            return obj.Id.GetHashCode();
        }
    }
    

    The first method, Equals, compares two instances of a class to tell if they are equal, following the custom rules we write.

    The second method, GetHashCode, defines a way to build an object’s hash code given its internal status. In this case, I’m saying that the hash code of a Pirate object is just the hash code of its Id property.

    To include this custom comparer, you must pass a new instance of PirateComparer to the HashSet constructor:

    HashSet<Pirate> hashSet = new HashSet<Pirate>(new PirateComparer());
    

    Let’s rerun the example, and admire the result:

    HashSet result with custom comparer

    As you can see, there is only one item whose Id is 4: Sanji.

    Let’s focus a bit on the messages printed when executing Equals and GetHashCode.

    GetHashCode Luffy
    GetHashCode Zoro
    GetHashCode Nami
    GetHashCode Sanji
    GetHashCode Chopper
    GetHashCode Robin
    GetHashCode Duval
    Equals: Sanji vs Duval
    

    Every time we insert an item, we call the GetHashCode method to generate an internal ID used by the HashSet to check if that item already exists.

    As stated by Microsoft’s documentation,

    Two objects that are equal return hash codes that are equal. However, the reverse is not true: equal hash codes do not imply object equality, because different (unequal) objects can have identical hash codes.

    This means that when two objects share a hash code, it’s not guaranteed that they are actually equal. That’s why we need to implement the Equals method (hint: do not just compare the hash codes of the two objects!).
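    To see this in action, here is a contrived sketch (not from the original example): a comparer whose GetHashCode always collides, forcing the HashSet to fall back on Equals for every candidate that lands in the same bucket. The class name is hypothetical.

    public class AlwaysCollidingPirateComparer : IEqualityComparer<Pirate>
    {
        // Every pirate lands in the same bucket...
        public int GetHashCode(Pirate obj) => 0;

        // ...so the HashSet must call Equals to tell the items apart.
        public bool Equals(Pirate? x, Pirate? y) => x?.Id == y?.Id;
    }

    The set still ends up with the correct items; it just calls Equals far more often. That is also why a poorly distributed GetHashCode hurts performance.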

    Is implementing a custom IEqualityComparer the best choice?

    As always, it depends.

    On the one hand, using a custom IEqualityComparer has the advantage of letting different HashSets behave differently depending on the comparer passed to the constructor; on the other hand, you are now forced to pass an instance of IEqualityComparer everywhere you use such a HashSet, and if you forget one, you’ll have a system with inconsistent behavior.

    There must be a way to ensure consistency throughout the whole codebase.

    Implement the IEquatable interface

    It makes sense to implement the equality checks directly inside the type passed as a generic type to the HashSet.

    To do that, you need to have that class implement the IEquatable<T> interface, where T is the class itself.

    Let’s rework the Pirate class, letting it implement the IEquatable<Pirate> interface.

    public class Pirate : IEquatable<Pirate>
    {
        public int Id { get; }
        public string Name { get; }
    
        public Pirate(int id, string username)
        {
            Id = id;
            Name = username;
        }
    
        bool IEquatable<Pirate>.Equals(Pirate? other)
        {
            Console.WriteLine($"IEquatable Equals: {this.Name} vs {other.Name}");
            return this.Id == other.Id;
        }
    
        public override bool Equals(object obj)
        {
            Console.WriteLine($"Override Equals {this.Name} vs {(obj as Pirate).Name}");
            return Equals(obj as Pirate);
        }
    
        public override int GetHashCode()
        {
            Console.WriteLine($"GetHashCode {this.Id}");
            return (Id).GetHashCode();
        }
    }
    

    The IEquatable interface forces you to implement the Equals method. So, now we have two implementations of Equals (the one for IEquatable and the one that overrides the default implementation). Which one is correct? Is the GetHashCode really used?

    Let’s see what happens in the next screenshot:

    HashSet result with a class that implements IEquatable

    As you could’ve imagined, the Equals method called in this case is the explicit implementation of the IEquatable interface.

    Please note that, as we don’t need to use the custom comparer, the HashSet initialization becomes:

    HashSet<Pirate> hashSet = new HashSet<Pirate>();
    

    What has the precedence: IEquatable or IEqualityComparer?

    What happens when we use both IEquatable and IEqualityComparer?

    Let’s quickly demonstrate it.

    First of all, keep the previous implementation of the Pirate class, where the equality check is based on the Id property:

    public class Pirate : IEquatable<Pirate>
    {
        public int Id { get; }
        public string Name { get; }
    
        public Pirate(int id, string username)
        {
            Id = id;
            Name = username;
        }
    
        bool IEquatable<Pirate>.Equals(Pirate? other)
        {
            Console.WriteLine($"IEquatable Equals: {this.Name} vs {other.Name}");
            return this.Id == other.Id;
        }
    
        public override int GetHashCode()
        {
            Console.WriteLine($"GetHashCode {this.Id}");
            return (Id).GetHashCode();
        }
    }
    

    Now, create a new IEqualityComparer where the equality is based on the Name property.

    public class PirateComparerByName : IEqualityComparer<Pirate>
    {
        bool IEqualityComparer<Pirate>.Equals(Pirate? x, Pirate? y)
        {
            Console.WriteLine($"Equals: {x.Name} vs {y.Name}");
            return x.Name == y.Name;
        }
        int IEqualityComparer<Pirate>.GetHashCode(Pirate obj)
        {
            Console.WriteLine("GetHashCode " + obj.Name);
            return obj.Name.GetHashCode();
        }
    }
    

    Now we have custom checks on both the Name and the Id.

    It’s time to add a new pirate to the list and initialize the HashSet by passing an instance of PirateComparerByName to the constructor.

    List<Pirate> mugiwara = new List<Pirate>()
    {
        new Pirate(1, "Luffy"),
        new Pirate(2, "Zoro"),
        new Pirate(3, "Nami"),
        new Pirate(4, "Sanji"), // Id = 4
        new Pirate(5, "Chopper"), // Name = Chopper
        new Pirate(6, "Robin"),
        new Pirate(4, "Duval"), // Id = 4
        new Pirate(7, "Chopper") // Name = Chopper
    };
    
    
    HashSet<Pirate> hashSet = new HashSet<Pirate>(new PirateComparerByName());
    
    
    foreach (var pirate in mugiwara)
    {
        hashSet.Add(pirate);
    }
    

    We now have two pirates with ID = 4 and two other pirates with Name = Chopper.

    Can you foresee what will happen?

    HashSet items when defining both IEqualityComparer and IEquatable

    The checks on the Id are totally ignored: in fact, the final result contains both Sanji and Duval, even though their Ids are the same. The custom IEqualityComparer takes precedence over the IEquatable interface.

    This article first appeared on Code4IT 🐧

    Wrapping up

    This started as a short article but turned out to be a more complex topic.

    There is actually more to discuss, like performance considerations, code readability, and more. Maybe we’ll tackle those topics in a future article.

    I hope you enjoyed this article! Let’s keep in touch on LinkedIn or Twitter! 🤜🤛

    Happy coding!

    🐧






  • ZTNA Use Cases and Benefits for BFSI Companies



    In an era of digital banking, cloud migration, and a growing cyber threat landscape, traditional perimeter-based security models are no longer sufficient for the Banking, Financial Services, and Insurance (BFSI) sector. Enter Zero Trust Network Access (ZTNA) — a modern security framework that aligns perfectly with the BFSI industry’s need for robust, scalable, and compliant cybersecurity practices.

    This blog explores the key use cases and benefits of ZTNA for BFSI organizations.

    ZTNA Use Cases for BFSI

    1. Secure Remote Access for Employees

    With hybrid and remote work becoming the norm, financial institutions must ensure secure access to critical applications and data outside corporate networks. ZTNA allows secure, identity-based access without exposing internal resources to the public internet. This ensures that only authenticated and authorized users can access specific resources, reducing attack surfaces and preventing lateral movement by malicious actors.

    2. Protect Customer Data Using Least Privileged Access

    ZTNA enforces the principle of least privilege, granting users access only to the resources necessary for their roles. This granular control is vital in BFSI, where customer financial data is highly sensitive. By limiting access based on contextual parameters such as user identity, device health, and location, ZTNA drastically reduces the chances of data leakage or internal misuse.

    3. Compliance with Regulatory Requirements

    The BFSI sector is governed by stringent regulations such as RBI guidelines, PCI DSS, GDPR, and more. ZTNA provides centralized visibility, detailed audit logs, and fine-grained access control—all critical for meeting regulatory requirements. It also helps institutions demonstrate proactive data protection measures during audits and assessments.

    4. Vendor and Third-Party Access Management

    Banks and insurers frequently engage with external vendors, consultants, and partners. Traditional VPNs provide broad access once a connection is established, posing a significant security risk. ZTNA addresses this by granting secure, time-bound, and purpose-specific access to third parties—without ever bringing them inside the trusted network perimeter.

    Key Benefits of ZTNA for BFSI

    1. Reduced Risk of Data Breaches

    By minimizing the attack surface and verifying every user and device before granting access, ZTNA significantly lowers the risk of unauthorized access and data breaches. Since applications are never directly exposed to the internet, ZTNA also protects against exploitation of vulnerabilities in public-facing assets.

    2. Improved Compliance Posture

    ZTNA simplifies compliance by offering audit-ready logs, consistent policy enforcement, and better visibility into user activity. BFSI firms can use these capabilities to ensure adherence to local and global regulations and quickly respond to compliance audits with accurate data.

    3. Enhanced Customer Trust and Loyalty

    Security breaches in financial institutions can erode customer trust instantly. By adopting a Zero Trust approach, organizations can demonstrate their commitment to customer data protection, thereby enhancing credibility, loyalty, and long-term customer relationships.

    4. Cost Savings on Legacy VPNs

    Legacy VPN solutions are often complex, expensive, and challenging to scale. ZTNA offers a modern alternative that is more efficient and cost-effective. It eliminates the need for dedicated hardware and reduces operational overhead by centralizing policy management in the cloud.

    5. Scalability for Digital Transformation

    As BFSI institutions embrace digital transformation—be it cloud adoption, mobile banking, or FinTech partnerships—ZTNA provides a scalable, cloud-native security model that grows with the business. It supports rapid onboarding of new users, apps, and services without compromising on security.

    Final Thoughts

    ZTNA is more than just a security upgrade—it’s a strategic enabler for BFSI organizations looking to build resilient, compliant, and customer-centric digital ecosystems. With its ability to secure access for employees, vendors, and partners while ensuring regulatory compliance and data privacy, ZTNA is fast becoming the cornerstone of modern cybersecurity strategies in the financial sector.

    Ready to embrace Zero Trust? Identify high-risk access points and gradually implement ZTNA for your most critical systems. The transformation may be phased, but the security gains are immediate and long-lasting.

    Seqrite’s Zero Trust Network Access (ZTNA) solution empowers BFSI organizations with secure, seamless, and policy-driven access control tailored for today’s hybrid and regulated environments. Partner with Seqrite to strengthen data protection, streamline compliance, and accelerate your digital transformation journey.




  • How to use IHttpClientFactory and WireMock.NET together using Moq



    WireMock.NET is a popular library used to simulate network communication through HTTP. But there is no simple way to integrate the generated in-memory server with an instance of IHttpClientFactory injected via constructor. Right? Wrong!


    Testing the integration with external HTTP clients can be a cumbersome task, but most of the time it is necessary to ensure that a method performs the correct operations – not only sending the right information, but also correctly reading the content returned by the called API.

    Instead of spinning up a real server (even if in the local environment), we can simulate a connection to a mock server. A good library for creating temporary in-memory servers is WireMock.NET.

    Many articles I read online focus on creating a simple HttpClient and using WireMock.NET to drive its behaviour. In this article, we are going to go a step further: we are going to use WireMock.NET to handle HttpClient instances generated, via Moq, through IHttpClientFactory.

    Explaining the dummy class used for the examples

    As per every practical article, we must start with a dummy example.

    For the sake of this article, I’ve created a dummy class with a single method that calls an external API to retrieve details of a book and then reads the returned content. If the call is successful, the method returns an instance of Book; otherwise, it throws a BookServiceException exception.

    Just for completeness, here’s the Book class:

    public class Book
    {
        public int Id { get; set; }
        public string Title { get; set; }
    }
    

    And here’s the BookServiceException definition:

    [Serializable]
    public class BookServiceException: Exception
    {
        public BookServiceException(string message, Exception inner) : base(message, inner) { }
        protected BookServiceException(
          System.Runtime.Serialization.SerializationInfo info,
          System.Runtime.Serialization.StreamingContext context) : base(info, context) { }
    }
    

    Finally, we have our main class:

    public class BookService
    {
        private readonly IHttpClientFactory _httpClientFactory;
    
        public BookService(IHttpClientFactory httpClientFactory)
        {
            _httpClientFactory = httpClientFactory;
        }
    
        public async Task<Book> GetBookById(int id)
        {
    
            string url = $"/api/books/{id}";
            HttpClient httpClient = _httpClientFactory.CreateClient("books_client");
    
        try
            {
                Book? book = await httpClient.GetFromJsonAsync<Book>(url);
                return book;
            }
            catch (Exception ex)
            {
                throw new BookServiceException($"There was an error while getting info about the book {id}", ex);
            }
        }
    }
    

    There are just two things to notice:

    • We are injecting an instance of IHttpClientFactory into the constructor.
    • We are generating an instance of HttpClient by passing a name to the CreateClient method of IHttpClientFactory.
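    For context, in a real application such a named client would typically be registered in the DI container. A minimal sketch of what that registration might look like (the base address is a placeholder, not from this article):

    // In Program.cs / Startup.cs: registers the named client that BookService asks for.
    builder.Services.AddHttpClient("books_client", client =>
    {
        client.BaseAddress = new Uri("https://books.example.com"); // hypothetical production URL
    });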

    Now that we have our cards on the table, we can start!

    WireMock.NET, a library to simulate HTTP calls

    WireMock is an open-source platform you can install locally to create a real mock server. You can even create a cloud environment to generate and test HTTP endpoints.

    However, for this article we are interested in the NuGet package that takes inspiration from the WireMock project, allowing .NET developers to generate disposable in-memory servers: WireMock.NET.

    To add the library, you must add the WireMock.NET NuGet package to your project, for example using dotnet add package WireMock.Net.

    Once the package is ready, you can generate a test server in your Unit Tests class:

    public class WireMockTests
    {
        private WireMockServer _server;
    
        [OneTimeSetUp]
        public void OneTimeSetUp()
        {
            _server = WireMockServer.Start();
        }
    
        [SetUp]
        public void Setup()
        {
            _server.Reset();
        }
    
        [OneTimeTearDown]
        public void OneTimeTearDown()
        {
            _server.Stop();
        }
    }
    

    You can instantiate a new instance of WireMockServer in the OneTimeSetUp step, store it in a private field, and make it accessible to every test in the test class.

    Before each test run, you can reset the internal state of the mock server by calling the Reset() method. I’d suggest you reset the server to avoid leftover state from previous tests, but it all depends on what you want to do with the server instance.

    Finally, remember to free up resources by calling the Stop() method in the OneTimeTearDown phase (but not during the TearDown phase: you still need the server to be on while running your tests!).

    Basic configuration of HTTP requests and responses with WireMock.NET

    The basic structure of the definition of a mock response using WireMock.NET is made of two parts:

    1. Within the Given method, you define the HTTP Verb and URL path whose response is going to be mocked.
    2. Using RespondWith you define what the mock server must return when the endpoint specified in the Given step is called.

    In the next example, you can see that the _server instance (the one I instantiated in the OneTimeSetUp phase, remember?) must return a specific body (responseBody) and the 200 HTTP Status Code when the /api/books/42 endpoint is called.

    string responseBody = @"
    {
        ""Id"": 42,
        ""Title"": ""Life, the Universe and Everything""
    }
    ";

    _server
        .Given(Request.Create().WithPath("/api/books/42").UsingGet())
        .RespondWith(
            Response.Create()
                .WithStatusCode(200)
                .WithBody(responseBody)
        );
    

    Similarly, you can define that an endpoint will return an error by changing its status code:

    _server
        .Given(Request.Create().WithPath("/api/books/42").UsingGet())
        .RespondWith(
            Response.Create()
                .WithStatusCode(404)
        );
    

    All in all, both the request and the response are highly customizable: you can add HTTP Headers, delays, cookies, and much more.
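    For instance, here is a hedged sketch of a more elaborate stub; WithHeader and WithDelay are part of the WireMock.NET builder API, while the specific values (and the responseBody variable from the earlier snippet) are just illustrative:

    _server
        .Given(Request.Create()
            .WithPath("/api/books/42")
            .UsingGet()
            .WithHeader("Accept", "application/json")) // match only requests that ask for JSON
        .RespondWith(Response.Create()
            .WithStatusCode(200)
            .WithHeader("Content-Type", "application/json")
            .WithDelay(TimeSpan.FromMilliseconds(500)) // simulate a slow server
            .WithBody(responseBody));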

    Look closely; there’s one part that is missing: what is the full URL? We have declared only the path (/api/books/42), but we have no info about the hostname and the port used to communicate.

    How to integrate WireMock.NET with a Moq-driven IHttpClientFactory

    In order to have WireMock.NET react to an HTTP call, we have to call the exact URL – even the hostname and port must match. But when we create a mocked HttpClient – like we did in this article – we don’t have a real hostname. So, how can we have WireMock.NET and HttpClient work together?

    The answer is easy: since WireMockServer.Start() automatically picks a free port in your localhost, you don’t have to guess the port number, but you can reference the current instance of _server.

    Once the WireMockServer is created, it internally stores one or more URLs it will use to listen for HTTP requests, intercepting the calls and replying in place of a real server. You can then use one of these URLs to configure the HttpClient generated by the HttpClientFactory.

    Let’s see the code:

    [Test]
    public async Task GetBookById_Should_HandleBadRequests()
    {
        string baseUrl = _server.Url;
    
        HttpClient myHttpClient = new HttpClient() { BaseAddress = new Uri(baseUrl) };
    
        Mock<IHttpClientFactory> mockFactory = new Mock<IHttpClientFactory>();
        mockFactory.Setup(_ => _.CreateClient("books_client")).Returns(myHttpClient);
    
        _server
            .Given(Request.Create().WithPath("/api/books/42").UsingGet())
            .RespondWith(
                Response.Create()
                .WithStatusCode(404)
            );
    
        BookService service = new BookService(mockFactory.Object);
    
        Assert.CatchAsync<BookServiceException>(() => service.GetBookById(42));
    }
    

    First, we access the base URL used by the mock server via _server.Url.

    We use that URL as a base address for the newly created instance of HttpClient.

    Then, we create a mock of IHttpClientFactory and configure it to return the local instance of HttpClient whenever we call the CreateClient method with the specified name.

    In the meanwhile, we define how the mock server must behave when an HTTP call to the specified path is intercepted.

    Finally, we can pass the instance of the mock IHttpClientFactory to the BookService.

    So, the key part to remember is that you can simply access the Url property (or, if you have configured the server to handle many URLs, the Urls property, which is an array of strings).

    Let WireMock.NET create the HttpClient for you

    As suggested by Stef in the comments to this post, there’s actually another way to generate the HttpClient with the correct URL: let WireMock.NET do it for you.

    Instead of doing

    string baseUrl = _server.Url;
    
    HttpClient myHttpClient = new HttpClient() { BaseAddress = new Uri(baseUrl) };
    

    you can simplify the process by calling the CreateClient method:

    HttpClient myHttpClient = _server.CreateClient();
    

    Of course, you will still have to pass the instance to the mock of IHttpClientFactory.
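    Putting it together, the factory setup from the earlier test then becomes just a couple of lines (same client name as before):

    Mock<IHttpClientFactory> mockFactory = new Mock<IHttpClientFactory>();
    mockFactory.Setup(f => f.CreateClient("books_client")).Returns(_server.CreateClient());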

    Further readings

    It’s important to notice that WireMock and WireMock.NET are two totally distinct things: one is a platform, and one is a library, owned by a different group of people, that mimics some functionalities from the platform to help developers write better tests.

    WireMock.NET is greatly integrated with many other libraries, such as xUnit, FluentAssertions, and .NET Aspire.

    You can find the official repository on GitHub:

    🔗 WireMock.Net | Github

    This article first appeared on Code4IT 🐧

    It’s important to remember that using an HttpClientFactory is generally more performant than instantiating a new HttpClient. Ever heard of socket exhaustion?

    🔗 Use IHttpClientFactory to generate HttpClient instances | Code4IT

    Finally, for the sake of this article, I’ve used Moq. However, there’s a similar library you can use: NSubstitute. The learning curve is quite gentle: in the most common scenarios, it’s just a matter of different syntax.

    🔗 Moq vs NSubstitute: syntax cheat sheet | Code4IT

    Wrapping up

    In this article, we almost skipped all the basic stuff about WireMock.NET and tried to go straight to the point of integrating WireMock.NET with IHttpClientFactory.

    There are lots of articles out there that explain how to use WireMock.NET – just remember that WireMock and WireMock.NET are not the same thing!

    I hope you enjoyed this article! Let’s keep in touch on LinkedIn or Twitter! 🤜🤛

    Happy coding!

    🐧






  • GPT Function Calling: 5 Underrated Use Cases | by Max Brodeur-Urbas


    OpenAI’s backend converting messy unstructured data to structured data via functions

    OpenAI’s “Function Calling” might be the most groundbreaking yet underappreciated feature released by any software company… ever.

    Functions allow you to turn unstructured data into structured data. This might not sound all that groundbreaking, but when you consider that 90% of data processing and data entry jobs worldwide exist for this exact reason, it’s quite a revolutionary feature that went somewhat unnoticed.

    Have you ever found yourself begging GPT (3.5 or 4) to spit out the answer you want and absolutely nothing else? No “Sure, here is your…” or any other useless fluff surrounding the core answer. GPT Functions are the solution you’ve been looking for.

    How are Functions meant to work?

    OpenAI’s docs on function calling are extremely limited. You’ll find yourself digging through their developer forum for examples of how to use them. I dug around the forum for you and have many examples coming up.

    Here’s one of the only examples you’ll be able to find in their docs:

    functions = [
        {
            "name": "get_current_weather",
            "description": "Get the current weather in a given location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, e.g. San Francisco, CA",
                    },
                    "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
                },
                "required": ["location"],
            },
        }
    ]

    A function definition is a rigid JSON format that defines a function name, description and parameters. In this case, the function is meant to get the current weather. Obviously GPT isn’t able to call this actual API (since it doesn’t exist), but with the structured response it produces, you could hypothetically call the real API yourself.

    At a high level, however, functions provide two layers of inference:

    Picking the function itself:

    You may notice that functions are passed into the OpenAI API call as an array. The reason you provide a name and description for each function is so GPT can decide which to use based on a given prompt. Providing multiple functions in your API call is like giving GPT a Swiss army knife and asking it to cut a piece of wood in half. It knows that even though it has a pair of pliers, scissors and a knife, it should use the saw!

    Function definitions contribute towards your token count. Passing in hundreds of functions would not only take up the majority of your token limit but also result in a drop in response quality. I often don’t even use this feature and only pass in one function that I force it to use. It is very nice to have in certain use cases, however.
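    To make “forcing” concrete, here is a minimal sketch that calls the Chat Completions REST endpoint directly (written in C# to stay self-contained); the function_call field is what forces the model to use get_current_weather, and the model name and prompt are just illustrative:

    using System;
    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Text;

    var http = new HttpClient();
    http.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue(
        "Bearer", Environment.GetEnvironmentVariable("OPENAI_API_KEY"));

    // The same get_current_weather definition as above, plus function_call to force its use.
    string body = """
    {
      "model": "gpt-3.5-turbo",
      "messages": [{"role": "user", "content": "What's the weather in San Francisco, CA?"}],
      "functions": [{
        "name": "get_current_weather",
        "description": "Get the current weather in a given location",
        "parameters": {
          "type": "object",
          "properties": {
            "location": {"type": "string", "description": "The city and state, e.g. San Francisco, CA"},
            "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]}
          },
          "required": ["location"]
        }
      }],
      "function_call": {"name": "get_current_weather"}
    }
    """;

    var response = await http.PostAsync(
        "https://api.openai.com/v1/chat/completions",
        new StringContent(body, Encoding.UTF8, "application/json"));
    // choices[0].message.function_call.arguments in the reply holds the structured JSON.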

    Picking the parameter values based on a prompt:

    This is the real magic, in my opinion. GPT being able to choose the tool in its toolkit is amazing and definitely the focus of their feature announcement, but I think parameter filling applies to even more use cases.

    You can imagine a function like handing GPT a form to fill out. It uses its reasoning, the context of the situation and field names/descriptions to decide how it will fill out each field. Designing the form and the additional information you pass in is where you can get creative.

    GPT filling out your custom form (function parameters)

    One of the most common things I use functions for is extracting specific values from a large chunk of text: the sender’s address from an email, a founder’s name from a blog post, a phone number from a landing page.

    I like to imagine I’m searching for a needle in a haystack except the LLM burns the haystack, leaving nothing but the needle(s).

    GPT Data Extraction Personified.

    Use case: Processing thousands of contest submissions

    I built an automation that iterated over thousands of contest submissions. Before storing these in a Google Sheet, I wanted to extract the email associated with each submission. Here’s the function call I used for extracting the email.

    {
        "name": "update_email",
        "description": "Updates email based on the content of their submission.",
        "parameters": {
            "type": "object",
            "properties": {
                "email": {
                    "type": "string",
                    "description": "The email provided in the submission"
                }
            },
            "required": ["email"]
        }
    }

    Assigning unstructured data a score based on dynamic, natural language criteria is a wonderful use case for functions. You could score comments during sentiment analysis, essays based on a custom grading rubric, or a loan application for risk based on key factors. A recent use case I applied scoring to was scoring sales leads from 0–100 based on their viability.

    Use Case: Scoring Sales leads

    We had hundreds of prospective leads in a single Google Sheet a few months ago that we wanted to tackle from most to least important. Each lead contained info like company size, contact name, position, industry, etc.

    Using the following function we scored each lead from 0–100 based on our needs and then sorted them from best to worst.

    {
        "name": "update_sales_lead_value_score",
        "description": "Updates the score of a sales lead and provides a justification",
        "parameters": {
            "type": "object",
            "properties": {
                "sales_lead_value_score": {
                    "type": "number",
                    "description": "An integer value ranging from 0 to 100 that represents the quality of a sales lead based on these criteria. 100 is a perfect lead, 0 is terrible. Ideal Lead Criteria:\n- Medium sized companies (300-500 employees is the best range)\n- Companies in primary resource heavy industries are best, ex. manufacturing, agriculture, etc. (this is the most important criteria)\n- The higher up the contact position, the better. VP or Executive level is preferred."
                },
                "score_justification": {
                    "type": "string",
                    "description": "A clear and concise justification for the score provided based on the custom criteria"
                }
            },
            "required": ["sales_lead_value_score", "score_justification"]
        }
    }

    Define custom buckets and have GPT thoughtfully consider each piece of data you give it and place it in the correct bucket. This can be used for labelling tasks, like selecting the category of YouTube videos, or for discrete scoring tasks, like assigning letter grades to homework assignments.

    Use Case: Labelling news articles.

    A very common first step in data processing workflows is separating incoming data into different streams. A recent automation I built did exactly this with news articles scraped from the web. I wanted to sort them based on the topic of the article and include a justification for the decision once again. Here’s the function I used:

    {
        "name": "categorize",
        "description": "Categorize the input data into user defined buckets.",
        "parameters": {
            "type": "object",
            "properties": {
                "category": {
                    "type": "string",
                    "enum": ["US Politics", "Pandemic", "Economy", "Pop culture", "Other"],
                    "description": "US Politics: Related to US politics or US politicians. Pandemic: Related to the Coronavirus pandemic. Economy: Related to the economy of a specific country or the world. Pop culture: Related to pop culture, celebrity media or entertainment. Other: Doesn't fit in any of the defined categories."
                },
                "justification": {
                    "type": "string",
                    "description": "A short justification explaining why the input data was categorized into the selected category."
                }
            },
            "required": ["category", "justification"]
        }
    }

    Oftentimes when processing data, I give GPT many possible options and want it to select the best one based on my needs. I only want the value it selects: no surrounding fluff or additional thoughts. Functions are perfect for this.

    Use Case: Finding the “most interesting AI news story” from Hacker News

    I wrote another Medium article here about how I automated my entire Twitter account with GPT. Part of that process involves selecting the most relevant posts from the front pages of Hacker News. This post selection step leverages functions!

    To summarize the functions portion of the use case: we would scrape the first n pages of Hacker News and ask GPT to select the post most relevant to “AI news or tech news”. GPT would return only the headline and the link, selected via functions, so that I could go on to scrape that website and generate a tweet from it.

    I would pass in the user defined query as part of the message and use the following function definition:

    {
        "name": "find_best_post",
        "description": "Determine the best post that most closely reflects the query.",
        "parameters": {
            "type": "object",
            "properties": {
                "best_post_title": {
                    "type": "string",
                    "description": "The title of the post that most closely reflects the query, stated exactly as it appears in the list of titles."
                }
            },
            "required": ["best_post_title"]
        }
    }

    Filtering is a subset of categorization where you categorize items as either true or false based on a natural language condition. A condition like “is Spanish” will be able to filter out all Spanish comments, articles, etc. using a simple function and a conditional statement immediately after.

    Use Case: Filtering contest submissions

    The same automation that I mentioned in the “Data Extraction” section used AI-powered filtering to weed out contest submissions that didn’t meet deal-breaking criteria. Things like “must use TypeScript” were absolutely mandatory for the coding contest at hand. We used functions to filter out submissions and trim the total set being processed down by 90%. Here is the function definition we used, followed by a sketch of the conditional check.

    {
        "name": "apply_condition",
        "description": "Used to decide whether the input meets the user provided condition.",
        "parameters": {
            "type": "object",
            "properties": {
                "decision": {
                    "type": "string",
                    "enum": ["True", "False"],
                    "description": "True if the input meets this condition 'Does submission meet the ALL these requirements (uses typescript, uses tailwindcss, functional demo)', False otherwise."
                }
            },
            "required": ["decision"]
        }
    }
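    And the “conditional statement immediately after” could look like this minimal C# sketch (using System.Text.Json; the response shape follows the Chat Completions API, and the helper names are hypothetical):

    using System.Text.Json;

    // argumentsJson is choices[0].message.function_call.arguments from the API response,
    // e.g. {"decision": "True"}
    static bool MeetsCondition(string argumentsJson)
    {
        using JsonDocument doc = JsonDocument.Parse(argumentsJson);
        string decision = doc.RootElement.GetProperty("decision").GetString() ?? "False";
        return decision == "True";
    }

    // Keep only the submissions that pass the filter, e.g.:
    // var kept = submissions.Where(s => MeetsCondition(AskGptToApplyCondition(s))).ToList();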

    If you’re curious why I love functions so much or what I’ve built with them you should check out AgentHub!

    AgentHub is the Y Combinator-backed startup I co-founded that lets you automate any repetitive or complex workflow with AI via a simple drag-and-drop no-code platform.

    “Imagine Zapier but AI-first and on crack.” — Me

    Automations are built with individual nodes called “Operators” that are linked together to create powerful AI pipelines. We have a catalogue of AI-powered operators that leverage functions under the hood.

    Our current AI-powered operators that use functions!

    Check out these templates to see examples of function use cases on AgentHub: Scoring, Categorization, and Option-Selection.

    If you want to start building, AgentHub is live and ready to use! We’re very active in our Discord community and are happy to help you build your automations if needed.

    Feel free to follow the official AgentHub twitter for updates and myself for AI-related content.






  • What Technology Do Marine Biologists Use?


    We are living in interesting times. Technology continues evolving at dizzying speeds in all industries, including the marine sector. Read on for more insights on the marine biology business and the technology marine biologists use.

    1. Submarines

    Marine biologists research animals living in water. They study what causes changes in marine populations and how those changes can be addressed. For this, they go to where the marine life lives: inside the ocean.

    They use submersibles to descend and reach the sea floor. Submersibles are built with a specially controlled internal environment to ensure the scientists’ safety inside. Imagine if these scientists tried diving to the bottom of the sea without these vessels: they would not make it even halfway down and would likely drown.

    Nebraska Department of Health and Human Services notes that drowning in natural waters accounts for a third of all deaths that occur due to unintentional drowning.

    Among the technological features of these submersibles are specially designed mechanical hands. The biologists manipulate these from inside the submersible, using them to pick up objects without leaving the vessel.

    2. Boats

    Marine biologists work using specially designed and equipped boats, and they have several boats for different tasks. Aluminum boats sail in shallow waters in areas such as estuaries, and inflatable boats are used for research along the shores.

    When venturing out as far as 40 feet offshore, biologists use trawlers. These boats come equipped with radar, radio, and GPS. They also carry a hydraulic winch, which helps when dredging, pulling, and using the bottom grab.

    3. Cameras

    Ever wondered how marine biologists capture majestic images of animal life undersea? They use waterproof video and still photo cameras to snap at these marine creatures.

    Digital cameras can capture clear images even in very low lighting. Special cameras attached to the drill machines allow the scientists to record videos of the seafloor. They can also use video cameras to pinpoint interesting areas of study, such as submarine volcanic eruptions.

    Digital cameras also capture marine snow. The marine biologists dispatch a digital camera to the seafloor and, within two hours, bring back hundreds of images of marine snow. While marine snow forms part of marine life’s food, the snow humans experience on land is a different story.

    Its weight can range from light to heavy and could damage your roof. FEMA snow load safety guide notes that one foot of fresh light snow may be as heavy as 3 pounds per square foot (psf). The wet snow may be as heavy as 21 psf and can stress your roof during winter. Have your roof inspected before the snow season starts.

    4. Buoy System

    The buoy is a floating instrument marine biologists send out to sea. It collects information about environmental conditions via the surface buoy, which gathers data such as the sea surface temperature, the humidity, the current speed and direction of the wind, and wave parameters.

    Marine biologists put in many months of work while at sea. Their careers generally involve long hours of research in marine ecosystems. Though their facilities, such as boats and submarines, are equipped to cater to their comfort at sea, they could require services that must be outsourced when they’re on land. One such service would be restroom facilities.

    They ideally need safe and ecologically sustainable restroom facilities to use when they are offshore for the better part of the day. According to IBISWorld, the market size, measured by revenue, of the portable toilet rental industry was $2.1 billion in 2022, which shows these services offer practical solutions.

    These are just some of the technologies marine biologists use. You can expect to see more innovations in the future. Be on the lookout.




  • Zero Trust Network Access Use Cases



    As organizations navigate the evolving threat landscape, traditional security models like VPNs and legacy access solutions are proving insufficient. Zero Trust Network Access (ZTNA) has emerged as a modern alternative that enhances security while improving user experience. Let’s explore some key use cases where ZTNA delivers significant value.

    Leveraging ZTNA as a VPN Alternative

    Virtual Private Networks (VPNs) have long been the go-to solution for secure remote access. However, they come with inherent challenges, such as excessive trust, lateral movement risks, and performance bottlenecks. ZTNA eliminates these issues by enforcing a least privilege access model, verifying every user and device before granting access to specific applications rather than entire networks. This approach minimizes attack surfaces and reduces the risk of breaches.

    ZTNA for Remote and Hybrid Workforce

    With the rise of remote and hybrid work, employees require seamless and secure access to corporate resources from anywhere. ZTNA ensures secure, identity-based access without relying on traditional perimeter defenses. By continuously validating users and devices, ZTNA provides a better security posture while offering faster, more reliable connectivity than conventional VPNs. Cloud-native ZTNA solutions can dynamically adapt to user locations, reducing latency and enhancing productivity.

    Securing BYOD Using ZTNA

    Bring Your Own Device (BYOD) policies introduce security risks due to the varied nature of personal devices connecting to corporate networks. ZTNA secures these endpoints by enforcing device posture assessments, ensuring that only compliant devices can access sensitive applications. Unlike VPNs, which expose entire networks, ZTNA grants granular access based on identity and device trust, significantly reducing the attack surface posed by unmanaged endpoints.

    Replacing Legacy VDI

    Virtual Desktop Infrastructure (VDI) has traditionally provided secure remote access. However, VDIs can be complex to manage, require significant resources, and often introduce performance challenges. ZTNA offers a lighter, more efficient alternative by providing direct, controlled access to applications without needing a full virtual desktop environment. This improves user experience, simplifies IT operations, and reduces costs.

    Secure Access to Vendors and Partners

    Third-party vendors and partners often require access to corporate applications, but providing them with excessive permissions can lead to security vulnerabilities. Zero Trust Network Access enables secure, policy-driven access for external users without exposing internal networks. By implementing identity-based controls and continuous monitoring, organizations can ensure that external users only access what they need, when they need it, reducing potential risks from supply chain attacks.

    Conclusion

    ZTNA is revolutionizing secure access by addressing the limitations of traditional VPNs and legacy security models. Whether securing remote workers, BYOD environments, or third-party access, ZTNA provides a scalable, flexible, and security-first approach. As cyber threats evolve, adopting ZTNA is a crucial step toward a Zero Trust architecture, ensuring robust protection without compromising user experience.

    Is your organization ready to embrace Zero Trust Network Access? Now is the time for a more secure, efficient, and scalable access solution. Contact us or visit our website for more information.




  • Use your own user @ domain for Mastodon discoverability with the WebFinger Protocol without hosting a server




    Mastodon is a free, open-source social networking service that is decentralized and distributed. It was created in 2016 as an alternative to centralized social media platforms such as Twitter and Facebook.

    One of the key features of Mastodon is the use of the WebFinger protocol, which allows users to discover and access information about other users on the Mastodon network. WebFinger is a simple HTTP-based protocol that enables a user to discover information about other users or resources on the internet by using their email address or other identifying information. The WebFinger protocol is important for Mastodon because it enables users to find and follow each other on the network, regardless of where they are hosted.

    WebFinger uses a “well-known” path structure when calling a domain. You may be familiar with the robots.txt convention: we all just agree that robots.txt will sit at the top path of everyone’s domain.

    The WebFinger protocol lets a user (or a search) discover information about other users or resources on the internet by using their email address or other identifying information. Mine is first name at last name dot com, so my personal WebFinger API endpoint is here: https://www.hanselman.com/.well-known/webfinger

    The idea is that…

    1. A user sends a WebFinger request to a server, using the email address or other identifying information of the user or resource they are trying to discover.

    2. The server looks up the requested information in its database and returns a JSON object containing the information about the user or resource. This JSON object is called a “resource descriptor.”

    3. The user’s client receives the resource descriptor and displays the information to the user.

    The resource descriptor contains various types of information about the user or resource, such as their name, profile picture, and links to their social media accounts or other online resources. It can also include other types of information, such as the user’s public key, which can be used to establish a secure connection with the user.

    There’s a great explainer here as well. From that page:

    When someone searches for you on Mastodon, your server will be queried for accounts using an endpoint that looks like this:

    GET https://${MASTODON_DOMAIN}/.well-known/webfinger?resource=acct:${MASTODON_USER}@${MASTODON_DOMAIN}

    Note that Mastodon user names start with @, so they are @username@someserver.com. Just like Twitter would be @shanselman@twitter.com, I can be @shanselman@hanselman.com now!

    Searching for me with Mastodon

    So perhaps https://www.hanselman.com/.well-known/webfinger?resource=acct:FRED@HANSELMAN.COM

    Mine returns

    {
        "subject": "acct:shanselman@hachyderm.io",
        "aliases": [
            "https://hachyderm.io/@shanselman",
            "https://hachyderm.io/users/shanselman"
        ],
        "links": [
            {
                "rel": "http://webfinger.net/rel/profile-page",
                "type": "text/html",
                "href": "https://hachyderm.io/@shanselman"
            },
            {
                "rel": "self",
                "type": "application/activity+json",
                "href": "https://hachyderm.io/users/shanselman"
            },
            {
                "rel": "http://ostatus.org/schema/1.0/subscribe",
                "template": "https://hachyderm.io/authorize_interaction?uri={uri}"
            }
        ]
    }

    This file should be returned with a MIME type of application/jrd+json.

    My site is an ASP.NET Razor Pages site, so I just did this in Startup.cs to map that well-known URL to a page/route that returns the JSON needed.

    services.AddRazorPages().AddRazorPagesOptions(options =>
    {
        options.Conventions.AddPageRoute("/robotstxt", "/Robots.Txt"); // I did this before, not needed here
        options.Conventions.AddPageRoute("/webfinger", "/.well-known/webfinger");
        options.Conventions.AddPageRoute("/webfinger", "/.well-known/webfinger/{val?}");
    });

    Then I made a webfinger.cshtml like this. Note I have to escape the @ signs as @@ because it’s Razor.

    @page
    @{
        Layout = null;
        this.Response.ContentType = "application/jrd+json";
    }
    {
        "subject": "acct:shanselman@hachyderm.io",
        "aliases": [
            "https://hachyderm.io/@@shanselman",
            "https://hachyderm.io/users/shanselman"
        ],
        "links": [
            {
                "rel": "http://webfinger.net/rel/profile-page",
                "type": "text/html",
                "href": "https://hachyderm.io/@@shanselman"
            },
            {
                "rel": "self",
                "type": "application/activity+json",
                "href": "https://hachyderm.io/users/shanselman"
            },
            {
                "rel": "http://ostatus.org/schema/1.0/subscribe",
                "template": "https://hachyderm.io/authorize_interaction?uri={uri}"
            }
        ]
    }

    This is a static response, but if I were hosting pages for more than one person, I’d want to take in the URL with the user’s name, then map it to their aliases and return those correctly. A sketch of that dynamic version follows.
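    Here is a minimal sketch of what that might look like (the lookup dictionary and page model name are hypothetical, and it returns only the subject for brevity; a full response would include the aliases and links shown above):

    using System.Collections.Generic;
    using Microsoft.AspNetCore.Mvc;
    using Microsoft.AspNetCore.Mvc.RazorPages;

    public class WebFingerModel : PageModel
    {
        // Hypothetical lookup: local user name -> canonical Mastodon account.
        private static readonly Dictionary<string, string> Accounts = new()
        {
            ["shanselman"] = "acct:shanselman@hachyderm.io",
        };

        public IActionResult OnGet()
        {
            // e.g. ?resource=acct:shanselman@hanselman.com
            string? resource = Request.Query["resource"];
            if (resource is null || !resource.StartsWith("acct:"))
                return BadRequest();

            string user = resource["acct:".Length..].Split('@')[0].ToLowerInvariant();
            if (!Accounts.TryGetValue(user, out string? subject))
                return NotFound();

            return new JsonResult(new { subject }) { ContentType = "application/jrd+json" };
        }
    }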

    Even easier, you can just take the JSON from your own Mastodon server’s WebFinger response, SAVE IT as a static JSON file, and copy it to your own server!

    As long as your server returns the right JSON from that well known URL then it’ll work.

    So this is my template https://hachyderm.io/.well-known/webfinger?resource=acct:shanselman@hachyderm.io from where I’m hosted now.

    If you want to get started with Mastodon, start here: https://github.com/joyeusenoelle/GuideToMastodon/ It feels like Twitter circa 2007, except it’s not owned by anyone and is based on web standards like ActivityPub.

    Hope this helps!



