In an era of digital banking, cloud migration, and a growing cyber threat landscape, traditional perimeter-based security models are no longer sufficient for the Banking, Financial Services, and Insurance (BFSI) sector. Enter Zero Trust Network Access (ZTNA) — a modern security framework that aligns perfectly with the BFSI industry’s need for robust, scalable, and compliant cybersecurity practices.
This blog explores the key use cases and benefits of ZTNA for BFSI organizations.
ZTNA Use Cases for BFSI
Secure Remote Access for Employees
With hybrid and remote work becoming the norm, financial institutions must ensure secure access to critical applications and data outside corporate networks. ZTNA allows secure, identity-based access without exposing internal resources to the public internet. This ensures that only authenticated and authorized users can access specific resources, reducing attack surfaces and preventing lateral movement by malicious actors.
Protect Customer Data Using Least Privileged Access
ZTNA enforces the principle of least privilege, granting users access only to the resources necessary for their roles. This granular control is vital in BFSI, where customer financial data is highly sensitive. By limiting access based on contextual parameters such as user identity, device health, and location, ZTNA drastically reduces the chances of data leakage or internal misuse.
Compliance with Regulatory Requirements
The BFSI sector is governed by stringent regulations such as RBI guidelines, PCI DSS, GDPR, and more. ZTNA provides centralized visibility, detailed audit logs, and fine-grained access control—all critical for meeting regulatory requirements. It also helps institutions demonstrate proactive data protection measures during audits and assessments.
Vendor and Third-Party Access Management
Banks and insurers frequently engage with external vendors, consultants, and partners. Traditional VPNs provide broad access once a connection is established, posing a significant security risk. ZTNA addresses this by granting secure, time-bound, and purpose-specific access to third parties—without ever bringing them inside the trusted network perimeter.
Key Benefits of ZTNA for BFSI
Reduced Risk of Data Breaches
By minimizing the attack surface and verifying every user and device before granting access, ZTNA significantly lowers the risk of unauthorized access and data breaches. Since applications are never directly exposed to the internet, ZTNA also protects against exploitation of vulnerabilities in public-facing assets.
Improved Compliance Posture
ZTNA simplifies compliance by offering audit-ready logs, consistent policy enforcement, and better visibility into user activity. BFSI firms can use these capabilities to ensure adherence to local and global regulations and quickly respond to compliance audits with accurate data.
Enhanced Customer Trust and Loyalty
Security breaches in financial institutions can erode customer trust instantly. By adopting a Zero Trust approach, organizations can demonstrate their commitment to customer data protection, thereby enhancing credibility, loyalty, and long-term customer relationships.
Cost Savings on Legacy VPNs
Legacy VPN solutions are often complex, expensive, and challenging to scale. ZTNA offers a modern alternative that is more efficient and cost-effective. It eliminates the need for dedicated hardware and reduces operational overhead by centralizing policy management in the cloud.
Scalability for Digital Transformation
As BFSI institutions embrace digital transformation—be it cloud adoption, mobile banking, or FinTech partnerships—ZTNA provides a scalable, cloud-native security model that grows with the business. It supports rapid onboarding of new users, apps, and services without compromising on security.
Final Thoughts
ZTNA is more than just a security upgrade—it’s a strategic enabler for BFSI organizations looking to build resilient, compliant, and customer-centric digital ecosystems. With its ability to secure access for employees, vendors, and partners while ensuring regulatory compliance and data privacy, ZTNA is fast becoming the cornerstone of modern cybersecurity strategies in the financial sector.
Ready to embrace Zero Trust? Identify high-risk access points and gradually implement ZTNA for your most critical systems. The transformation may be phased, but the security gains are immediate and long-lasting.
Seqrite’s Zero Trust Network Access (ZTNA) solution empowers BFSI organizations with secure, seamless, and policy-driven access control tailored for today’s hybrid and regulated environments. Partner with Seqrite to strengthen data protection, streamline compliance, and accelerate your digital transformation journey.
WireMock.NET is a popular library used to simulate network communication through HTTP. But there is no simple way to integrate the generated in-memory server with an instance of IHttpClientFactory injected via constructor. Right? Wrong!
Testing the integration with external HTTP APIs can be a cumbersome task, but most of the time it is necessary: we want to ensure that a method not only sends the right information but also correctly reads the content returned by the called API.
Instead of spinning up a real server (even if in the local environment), we can simulate a connection to a mock server. A good library for creating temporary in-memory servers is WireMock.NET.
Many articles I read online focus on creating a simple HttpClient and using WireMock.NET to drive its behaviour. In this article, we are going to go a step further: we are going to use WireMock.NET to handle HttpClients that are resolved from an IHttpClientFactory mocked with Moq.
Explaining the dummy class used for the examples
As per every practical article, we must start with a dummy example.
For the sake of this article, I’ve created a dummy class with a single method that calls an external API to retrieve details of a book and then reads the returned content. If the call is successful, the method returns an instance of Book; otherwise, it throws a BookServiceException exception.
Just for completeness, here’s the Book class:
public class Book
{
    public int Id { get; set; }
    public string Title { get; set; }
}
public class BookService
{
    private readonly IHttpClientFactory _httpClientFactory;

    public BookService(IHttpClientFactory httpClientFactory)
    {
        _httpClientFactory = httpClientFactory;
    }

    public async Task<Book> GetBookById(int id)
    {
        string url = $"/api/books/{id}";
        HttpClient httpClient = _httpClientFactory.CreateClient("books_client");

        try
        {
            Book? book = await httpClient.GetFromJsonAsync<Book>(url);
            return book;
        }
        catch (Exception ex)
        {
            throw new BookServiceException($"There was an error while getting info about the book {id}", ex);
        }
    }
}
There are just two things to notice:
We are injecting an instance of IHttpClientFactory into the constructor.
We are generating an instance of HttpClient by passing a name to the CreateClient method of IHttpClientFactory.
Now that we have our cards on the table, we can start!
WireMock.NET, a library to simulate HTTP calls
WireMock is an open-source platform you can install locally to create a real mock server. You can even create a cloud environment to generate and test HTTP endpoints.
However, for this article we are interested in the NuGet package that takes inspiration from the WireMock project, allowing .NET developers to generate disposable in-memory servers: WireMock.NET.
To get started, add the WireMock.Net NuGet package to your project, for example with dotnet add package WireMock.Net.
Once the package is ready, you can generate a test server in your Unit Tests class:
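A minimal sketch of that setup, assuming NUnit as the test framework (the _server field is the same one referenced later in this article):

using NUnit.Framework;
using WireMock.Server;

public class BookServiceTests
{
    private WireMockServer _server;

    [OneTimeSetUp]
    public void OneTimeSetUp()
    {
        // Spin up a disposable in-memory server on a free localhost port
        _server = WireMockServer.Start();
    }

    [SetUp]
    public void Setup()
    {
        // Clear all mappings and logged requests before each test
        _server.Reset();
    }

    [OneTimeTearDown]
    public void OneTimeTearDown()
    {
        // Shut the server down once all tests have completed
        _server.Stop();
    }
}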
You can instantiate a new instance of WireMockServer in the OneTimeSetUp step, store it in a private field, and make it accessible to every test in the test class.
Before each test run, you can reset the internal status of the mock server by running the Reset() method. I’d suggest resetting the server to avoid state leaking from one test to the next, but it all depends on what you want to do with the server instance.
Finally, remember to free up resources by calling the Stop() method in the OneTimeTearDown phase (but not during the TearDown phase: you still need the server to be on while running your tests!).
Basic configuration of HTTP requests and responses with WireMock.NET
The basic structure of the definition of a mock response using WireMock.NET is made of two parts:
Within the Given method, you define the HTTP Verb and URL path whose response is going to be mocked.
Using RespondWith you define what the mock server must return when the endpoint specified in the Given step is called.
In the next example, you can see that the _server instance (the one I instantiated in the OneTimeSetUp phase, remember?) must return a specific body (responseBody) and the 200 HTTP Status Code when the /api/books/42 endpoint is called.
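A sketch of what that mapping can look like (responseBody here is just an inline JSON string shaped like the Book class shown earlier):

string responseBody = "{ \"id\": 42, \"title\": \"The Hitchhiker's Guide to the Galaxy\" }";

_server
    .Given(Request.Create().WithPath("/api/books/42").UsingGet())
    .RespondWith(
        Response.Create()
            .WithStatusCode(200)
            // The Content-Type header lets GetFromJsonAsync treat the body as JSON
            .WithHeader("Content-Type", "application/json")
            .WithBody(responseBody)
    );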
All in all, both the request and the response are highly customizable: you can add HTTP Headers, delays, cookies, and much more.
Look closely; there’s one part that is missing: What is the full URL? We have declared only the path (/api/books/42) but have no info about the hostname and the port used to communicate.
How to integrate WireMock.NET with a Moq-driven IHttpClientFactory
In order to have WireMock.NET react to an HTTP call, we have to call the exact URL – even the hostname and port must match. But when we create a mocked HttpClient – like we did in this article – we don’t have a real hostname. So, how can we have WireMock.NET and HttpClient work together?
The answer is easy: since WireMockServer.Start() automatically picks a free port in your localhost, you don’t have to guess the port number, but you can reference the current instance of _server.
Once the WireMockServer is created, it internally keeps a reference to one or more URLs it will use to listen for HTTP requests, intercepting the calls and replying in place of a real server. You can then use one of these URLs to configure the HttpClient generated by the HttpClientFactory.
Let’s see the code:
[Test]
public async Task GetBookById_Should_HandleBadRequests()
{
    string baseUrl = _server.Url;
    HttpClient myHttpClient = new HttpClient() { BaseAddress = new Uri(baseUrl) };

    Mock<IHttpClientFactory> mockFactory = new Mock<IHttpClientFactory>();
    mockFactory.Setup(_ => _.CreateClient("books_client")).Returns(myHttpClient);

    _server
        .Given(Request.Create().WithPath("/api/books/42").UsingGet())
        .RespondWith(
            Response.Create()
                .WithStatusCode(404)
        );

    BookService service = new BookService(mockFactory.Object);

    Assert.CatchAsync<BookServiceException>(() => service.GetBookById(42));
}
First we access the base URL used by the mock server by accessing _server.Url.
We use that URL as a base address for the newly created instance of HttpClient.
Then, we create a mock of IHttpClientFactory and configure it to return the local instance of HttpClient whenever we call the CreateClient method with the specified name.
Meanwhile, we define how the mock server must behave when an HTTP call to the specified path is intercepted.
Finally, we can pass the instance of the mock IHttpClientFactory to the BookService.
So, the key part to remember is that you can simply access the Url property (or, if you have configured the server to handle many URLs, the Urls property, which is an array of strings).
Let WireMock.NET create the HttpClient for you
As suggested by Stef in the comments to this post, there’s actually another way to generate the HttpClient with the correct URL: let WireMock.NET do it for you.
Instead of doing
string baseUrl = _server.Url;
HttpClient myHttpClient = new HttpClient() { BaseAddress = new Uri(baseUrl) };
you can simplify the process by calling the CreateClient method:
HttpClient myHttpClient = _server.CreateClient();
Of course, you will still have to pass the instance to the mock of IHttpClientFactory.
Further readings
It’s important to notice that WireMock and WireMock.NET are two totally distinct things: one is a platform, and one is a library, owned by a different group of people, that mimics some functionalities from the platform to help developers write better tests.
WireMock.NET integrates nicely with many other libraries, such as xUnit, FluentAssertions, and .NET Aspire.
It’s important to remember that using an HttpClientFactory is generally more performant than instantiating a new HttpClient. Ever heard of socket exhaustion?
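For context, a named client like "books_client" is normally registered in the production code's DI container. Here is a minimal sketch, assuming the ASP.NET Core minimal hosting model and a placeholder base address:

var builder = WebApplication.CreateBuilder(args);

// Register the named client that BookService asks for via CreateClient("books_client")
builder.Services.AddHttpClient("books_client", client =>
{
    client.BaseAddress = new Uri("https://books.example.com");
});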
Finally, for the sake of this article I’ve used Moq. However, there’s a similar library you can use: NSubstitute. The learning curve is quite flat: in the most common scenarios, it’s just a matter of syntax usage.
In this article, we skipped almost all the basic stuff about WireMock.NET and went straight to the point of integrating WireMock.NET with IHttpClientFactory.
There are lots of articles out there that explain how to use WireMock.NET – just remember that WireMock and WireMock.NET are not the same thing!
I hope you enjoyed this article! Let’s keep in touch on LinkedIn or Twitter! 🤜🤛
OpenAI’s backend converting messy unstructured data to structured data via functions
OpenAI’s “Function Calling” might be the most groundbreaking yet underappreciated feature released by any software company… ever.
Functions allow you to turn unstructured data into structured data. This might not sound all that groundbreaking but when you consider that 90% of data processing and data entry jobs worldwide exist for this exact reason, it’s quite a revolutionary feature that went somewhat unnoticed.
Have you ever found yourself begging GPT (3.5 or 4) to spit out the answer you want and absolutely nothing else? No “Sure, here is your…” or any other useless fluff surrounding the core answer. GPT Functions are the solution you’ve been looking for.
How are Functions meant to work?
OpenAI’s docs on function calling are extremely limited. You’ll find yourself digging through their developer forum for examples of how to use them. I dug around the forum for you and have many examples coming up.
Here’s one of the only examples you’ll be able to find in their docs:
functions = [
    {
        "name": "get_current_weather",
        "description": "Get the current weather in a given location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "The city and state, e.g. San Francisco, CA",
                },
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["location"],
        },
    }
]
A function definition is a rigid JSON format that defines a function name, description, and parameters. In this case, the function is meant to get the current weather. Obviously GPT isn’t able to call this actual API (it doesn’t exist), but with the structured response it produces you could hypothetically connect a real weather API.
At a high level however, functions provide two layers of inference:
Picking the function itself:
You may notice that functions are passed into the OpenAI API call as an array. The reason you provide a name and description for each function is so GPT can decide which to use based on a given prompt. Providing multiple functions in your API call is like giving GPT a Swiss army knife and asking it to cut a piece of wood in half. It knows that even though it has a pair of pliers, scissors, and a knife, it should use the saw!
Function definitions contribute towards your token count. Passing in hundreds of functions would not only take up the majority of your token limit but also result in a drop in response quality. I often don’t even use this feature and only pass in 1 function that I force it to use. It is very nice to have in certain use cases however.
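As a rough sketch of what that looks like in practice (not from the original post; the model name and prompt are placeholders, using the openai Python package as it worked when function calling shipped):

import json
import openai

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=[{"role": "user", "content": "What's the weather like in Boston?"}],
    functions=functions,  # the list defined above
    function_call={"name": "get_current_weather"},  # force this single function
)

# The reply is a structured function call rather than free-form text
arguments = json.loads(response["choices"][0]["message"]["function_call"]["arguments"])
print(arguments["location"])  # e.g. "Boston, MA"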
Picking the parameter values based on a prompt:
This is the real magic, in my opinion. GPT being able to choose the tool in its toolkit is amazing and definitely the focus of their feature announcement, but I think function calling applies to many more use cases.
You can imagine a function like handing GPT a form to fill out. It uses its reasoning, the context of the situation and field names/descriptions to decide how it will fill out each field. Designing the form and the additional information you pass in is where you can get creative.
GPT filling out your custom form (function parameters)
One of the most common things I use functions for is extracting specific values from a large chunk of text: the sender’s address from an email, a founder’s name from a blog post, a phone number from a landing page.
I like to imagine I’m searching for a needle in a haystack except the LLM burns the haystack, leaving nothing but the needle(s).
GPT Data Extraction Personified.
Use case: Processing thousands of contest submissions
I built an automation that iterated over thousands of contest submissions. Before storing these in a Google Sheet, I wanted to extract the email associated with each submission. Here’s the function definition I used for extracting their email.
{
  "name": "update_email",
  "description": "Updates email based on the content of their submission.",
  "parameters": {
    "type": "object",
    "properties": {
      "email": {
        "type": "string",
        "description": "The email provided in the submission"
      }
    },
    "required": ["email"]
  }
}
Assigning unstructured data a score based on dynamic, natural language criteria is a wonderful use case for functions. You could score comments during sentiment analysis, essays based on a custom grading rubric, or a loan application for risk based on key factors. A recent use case I applied scoring to was scoring sales leads from 0–100 based on their viability.
Use Case: Scoring Sales leads
We had hundreds of prospective leads in a single Google Sheet a few months ago that we wanted to tackle from most to least important. Each lead contained info like company size, contact name, position, industry, etc.
Using the following function we scored each lead from 0–100 based on our needs and then sorted them from best to worst.
{
  "name": "update_sales_lead_value_score",
  "description": "Updates the score of a sales lead and provides a justification",
  "parameters": {
    "type": "object",
    "properties": {
      "sales_lead_value_score": {
        "type": "number",
        "description": "An integer value ranging from 0 to 100 that represents the quality of a sales lead based on these criteria. 100 is a perfect lead, 0 is terrible. Ideal Lead Criteria:\n- Medium sized companies (300-500 employees is the best range)\n- Companies in primary resource heavy industries are best, ex. manufacturing, agriculture, etc. (this is the most important criteria)\n- The higher up the contact position, the better. VP or Executive level is preferred."
      },
      "score_justification": {
        "type": "string",
        "description": "A clear and concise justification for the score provided based on the custom criteria"
      }
    },
    "required": ["sales_lead_value_score", "score_justification"]
  }
}
Define custom buckets and have GPT thoughtfully consider each piece of data you give it and place it in the correct bucket. This can be used for labelling tasks like selecting the category of YouTube videos, or for discrete scoring tasks like assigning letter grades to homework assignments.
Use Case: Labelling news articles.
A very common first step in data processing workflows is separating incoming data into different streams. A recent automation I built did exactly this with news articles scraped from the web. I wanted to sort them based on the topic of the article and include a justification for the decision once again. Here’s the function I used:
{
  "name": "categorize",
  "description": "Categorize the input data into user defined buckets.",
  "parameters": {
    "type": "object",
    "properties": {
      "category": {
        "type": "string",
        "enum": ["US Politics", "Pandemic", "Economy", "Pop culture", "Other"],
        "description": "US Politics: Related to US politics or US politicians. Pandemic: Related to the Coronavirus pandemic. Economy: Related to the economy of a specific country or the world. Pop culture: Related to pop culture, celebrity media or entertainment. Other: Doesn't fit in any of the defined categories."
      },
      "justification": {
        "type": "string",
        "description": "A short justification explaining why the input data was categorized into the selected category."
      }
    },
    "required": ["category", "justification"]
  }
}
Oftentimes when processing data, I give GPT many possible options and want it to select the best one based on my needs. I only want the value it selected, with no surrounding fluff or additional thoughts. Functions are perfect for this.
Use Case: Finding the “most interesting AI news story” from hacker news
I wrote another medium article here about how I automated my entire Twitter account with GPT. Part of that process involves selecting the most relevant posts from the front pages of hacker news. This post selection step leverages functions!
To summarize the functions portion of the use case, we would scrape the first n pages of hacker news and ask GPT to select the post most relevant to “AI news or tech news”. GPT would return only the headline and the link selected via functions so that I could go on to scrape that website and generate a tweet from it.
I would pass in the user defined query as part of the message and use the following function definition:
{
  "name": "find_best_post",
  "description": "Determine the best post that most closely reflects the query.",
  "parameters": {
    "type": "object",
    "properties": {
      "best_post_title": {
        "type": "string",
        "description": "The title of the post that most closely reflects the query, stated exactly as it appears in the list of titles."
      }
    },
    "required": ["best_post_title"]
  }
}
Filtering is a subset of categorization where you categorize items as either true or false based on a natural language condition. A condition like “is Spanish” will be able to filter out all Spanish comments, articles etc. using a simple function and conditional statement immediately after.
Use Case: Filtering contest submissions
The same automation that I mentioned in the “Data Extraction” section used AI-powered filtering to weed out contest submissions that didn’t meet the deal-breaking criteria. Things like “must use TypeScript” were absolutely mandatory for the coding contest at hand. We used functions to filter out submissions and trim the total set being processed down by 90%. Here is the function definition we used.
{
  "name": "apply_condition",
  "description": "Used to decide whether the input meets the user provided condition.",
  "parameters": {
    "type": "object",
    "properties": {
      "decision": {
        "type": "string",
        "enum": ["True", "False"],
        "description": "True if the input meets this condition 'Does the submission meet ALL these requirements (uses typescript, uses tailwindcss, functional demo)', False otherwise."
      }
    },
    "required": ["decision"]
  }
}
If you’re curious why I love functions so much or what I’ve built with them you should check out AgentHub!
AgentHub is the Y Combinator-backed startup I co-founded that lets you automate any repetitive or complex workflow with AI via a simple drag-and-drop, no-code platform.
“Imagine Zapier but AI-first and on crack.” — Me
Automations are built with individual nodes called “Operators” that are linked together to create powerful AI pipelines. We have a catalogue of AI-powered operators that leverage functions under the hood.
Our current AI-powered operators that use functions!
If you want to start building, AgentHub is live and ready to use! We’re very active in our Discord community and are happy to help you build your automations if needed.
We are living in interesting times. Technology continues evolving at dizzying speeds in all industries, including the marine sector. Read on for more insights on the marine biology business and the technology marine biologists use.
1. Submarines
The careers of marine biologists include researching animals living in water. They study what causes changes in marine populations and how they can improve it. For this, they go to where the marine life lives, inside the ocean.
They use submersibles to descend and reach the sea floor. The technology used to build submersibles includes a specially controlled internal environment that keeps the scientists safe inside. Imagine if these scientists tried diving to the bottom of the sea without these submersibles: they would not make it even halfway down and would likely drown.
Nebraska Department of Health and Human Services notes that drowning in natural waters accounts for a third of all deaths that occur due to unintentional drowning.
Among the technological features of these submersibles are the specially designed mechanical hands. The biologists manipulate these from inside the submersible, which enables the scientists to pick up objects while remaining inside the vessel.
2. Boats
Marine biologists work from specially designed and equipped boats, and they have several boats for different tasks. Aluminum boats sail in shallow waters, in areas such as estuaries, while inflatable boats are used for research along the shores.
When venturing out as far as 40 feet offshore, biologists use trawlers. These boats come equipped with radar, radio, and GPS. They also come with a hydraulic winch, which helps when dredging, pulling, and using the bottom grab.
3. Cameras
Ever wondered how marine biologists capture majestic images of animal life undersea? They use waterproof video and still photo cameras to snap at these marine creatures.
Digital cameras can capture great images with clarity, even in very low lighting. There are special cameras attached to the drill machines, and these allow the scientists to record videos of the seafloor. They can also use video cameras to pinpoint interesting areas of study, such as submarine volcanic eruptions.
Digital cameras also capture marine snow. The marine biologists dispatch a digital camera to the seafloor and, within two hours, bring back hundreds of images of marine snow. While marine snow forms part of the food supply for marine life, the snow humans experience on land is a different story, and we certainly can’t eat it.
Its weight can range from light to heavy and could damage your roof. FEMA snow load safety guide notes that one foot of fresh light snow may be as heavy as 3 pounds per square foot (psf). The wet snow may be as heavy as 21 psf and can stress your roof during winter. Have your roof inspected before the snow season starts.
4. Buoy System
The buoy is a floating instrument marine biologists send out to sea to collect information about environmental conditions. The surface buoy collects data such as the surface temperature of the sea, the humidity, the current speed and direction of the wind, and wave parameters.
Marine biologists put in many months of work while at sea. Their careers generally involve long hours of research in marine ecosystems. Though their facilities, such as boats and submarines, are equipped to cater to their comfort at sea, they could require services that must be outsourced when they’re on land. One such service would be restroom facilities.
They ideally need safe and ecologically sustainable restroom facilities to use when they are offshore for the better part of the day. According to IBISWorld, the market size, measured by revenue, of the portable toilet rental industry was $2.1 billion in 2022, which shows how widely these solutions are relied on.
These are just some of the technologies marine biologists use. You can expect to see more innovations in the future. Be on the lookout.
As organizations navigate the evolving threat landscape, traditional security models like VPNs and legacy access solutions are proving insufficient. Zero Trust Network Access (ZTNA) has emerged as a modern alternative that enhances security while improving user experience. Let’s explore some key use cases where ZTNA delivers significant value.
Leveraging ZTNA as a VPN Alternative
Virtual Private Networks (VPNs) have long been the go-to solution for secure remote access. However, they come with inherent challenges, such as excessive trust, lateral movement risks, and performance bottlenecks. ZTNA eliminates these issues by enforcing a least privilege access model, verifying every user and device before granting access to specific applications rather than entire networks. This approach minimizes attack surfaces and reduces the risk of breaches.
ZTNA for Remote and Hybrid Workforce
With the rise of remote and hybrid work, employees require seamless and secure access to corporate resources from anywhere. ZTNA ensures secure, identity-based access without relying on traditional perimeter defenses. By continuously validating users and devices, ZTNA provides a better security posture while offering faster, more reliable connectivity than conventional VPNs. Cloud-native ZTNA solutions can dynamically adapt to user locations, reducing latency and enhancing productivity.
Securing BYOD Using ZTNA
Bring Your Own Device (BYOD) policies introduce security risks due to the varied nature of personal devices connecting to corporate networks. ZTNA secures these endpoints by enforcing device posture assessments, ensuring that only compliant devices can access sensitive applications. Unlike VPNs, which expose entire networks, ZTNA grants granular access based on identity and device trust, significantly reducing the attack surface posed by unmanaged endpoints.
Replacing Legacy VDI
Virtual Desktop Infrastructure (VDI) has traditionally provided secure remote access. However, VDIs can be complex to manage, require significant resources, and often introduce performance challenges. ZTNA offers a lighter, more efficient alternative by providing direct, controlled access to applications without needing a full virtual desktop environment. This improves user experience, simplifies IT operations, and reduces costs.
Secure Access to Vendors and Partners
Third-party vendors and partners often require access to corporate applications, but providing them with excessive permissions can lead to security vulnerabilities. Zero Trust Network Access enables secure, policy-driven access for external users without exposing internal networks. By implementing identity-based controls and continuous monitoring, organizations can ensure that external users only access what they need, when they need it, reducing potential risks from supply chain attacks.
Conclusion
ZTNA is revolutionizing secure access by addressing the limitations of traditional VPNs and legacy security models. Whether securing remote workers, BYOD environments, or third-party access, ZTNA provides a scalable, flexible, and security-first approach. As cyber threats evolve, adopting ZTNA is a crucial step toward a Zero Trust architecture, ensuring robust protection without compromising user experience.
Is your organization ready to embrace Zero Trust Network Access? Now is the time for a more secure, efficient, and scalable access solution. Contact us or visit our website for more information.
Mastodon is a free, open-source social networking service that is decentralized and distributed. It was created in 2016 as an alternative to centralized social media platforms such as Twitter and Facebook.
One of the key features of Mastodon is the use of the WebFinger protocol, which allows users to discover and access information about other users on the Mastodon network. WebFinger is a simple HTTP-based protocol that enables a user to discover information about other users or resources on the internet by using their email address or other identifying information. The WebFinger protocol is important for Mastodon because it enables users to find and follow each other on the network, regardless of where they are hosted.
WebFinger uses a “well-known” path structure when calling a domain. You may be familiar with the robots.txt convention: we all just agree that robots.txt will sit at the top path of everyone’s domain.
The WebFinger protocol is a simple HTTP-based protocol that enables a user or a search engine to discover information about other users or resources on the internet by using their email address or other identifying information. Mine is my first name at my last name dot com, so my personal WebFinger API endpoint is here: https://www.hanselman.com/.well-known/webfinger
The idea is that…
A user sends a WebFinger request to a server, using the email address or other identifying information of the user or resource they are trying to discover.
The server looks up the requested information in its database and returns a JSON object containing the information about the user or resource. This JSON object is called a “resource descriptor.”
The user’s client receives the resource descriptor and displays the information to the user.
The resource descriptor contains various types of information about the user or resource, such as their name, profile picture, and links to their social media accounts or other online resources. It can also include other types of information, such as the user’s public key, which can be used to establish a secure connection with the user.
Note that Mastodon user names start with @, so they take the form @username@someserver.com. Just like Twitter would be @shanselman@twitter.com, I can be @shanselman@hanselman.com now!
So perhaps https://www.hanselman.com/.well-known/webfinger?resource=acct:FRED@HANSELMAN.COM
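The JSON that comes back is the resource descriptor mentioned earlier. A minimal Mastodon-style example looks roughly like this (the server and account values are made up for illustration):

{
  "subject": "acct:fred@hanselman.com",
  "aliases": [
    "https://mastodon.example/@fred"
  ],
  "links": [
    {
      "rel": "self",
      "type": "application/activity+json",
      "href": "https://mastodon.example/users/fred"
    }
  ]
}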
This is a static response, but if I was hosting pages for more than one person I’d want to take in the url with the user’s name, and then map it to their aliases and return those correctly.
Even easier, you can just take the JSON from your own Mastodon server’s WebFinger response, save it as a static JSON file, and copy it to your own server!
As long as your server returns the right JSON from that well known URL then it’ll work.
If you want to get started with Mastodon, start here: https://github.com/joyeusenoelle/GuideToMastodon/ It feels like Twitter circa 2007, except it’s not owned by anyone and is based on web standards like ActivityPub.
Hope this helps!
About Scott
Scott Hanselman is a former professor, former Chief Architect in finance, now speaker, consultant, father, diabetic, and Microsoft employee. He is a failed stand-up comic, a cornrower, and a book author.