بلاگ

  • Python – Reading Financial Data From Internet – Useful code

    Python – Reading Financial Data From Internet – Useful code


Reading financial data from the internet is sometimes challenging. In this short article with two Python snippets, I will show how to read it from Wikipedia and from an API that delivers it in JSON format.

This is how the financial JSON data from the API looks.

Reading the data from the API is actually not tough if you have experience reading JSON with nested lists. If not, simply proceed by trial and error and eventually you will succeed.
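
A minimal sketch of the idea, using the requests library with a hypothetical endpoint and placeholder keys (the real URL and nesting depend on the API you use):

    import requests

    # Hypothetical endpoint; swap in the real API URL and parameters
    url = "https://api.example.com/financials/AAPL"
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    payload = response.json()

    # Walk the nested lists/dicts step by step; these keys are placeholders
    for record in payload["data"]:
        print(record["date"], record["close"])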

Reading from Wikipedia is actually even easier – the site works flawlessly with pandas, and if you count the tables correctly, you get exactly what you want.
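
As a sketch, pandas can read every table from a Wikipedia page, and you pick the one you need by index (the “List of S&P 500 companies” page is just an example; the right index varies per page):

    import pandas as pd

    # read_html parses every <table> on the page (requires lxml or html5lib)
    url = "https://en.wikipedia.org/wiki/List_of_S%26P_500_companies"
    tables = pd.read_html(url)
    sp500 = tables[0]  # on this page, the first table holds the constituents
    print(sp500[["Symbol", "Security"]].head())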

You might want to combine both sources, just in case.
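
A toy sketch of such a combination, joining Wikipedia metadata with API prices on the ticker symbol (the column names and values are illustrative):

    import pandas as pd

    # Stand-ins for the two sources built above
    wiki = pd.DataFrame({"Symbol": ["AAPL", "MSFT"], "Security": ["Apple", "Microsoft"]})
    api = pd.DataFrame({"ticker": ["AAPL", "MSFT"], "close": [228.5, 415.2]})

    # A left join keeps every Wikipedia row, even when the API has no price for it
    combined = wiki.merge(api, left_on="Symbol", right_on="ticker", how="left")
    print(combined)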

    The YouTube video for this article is here:
    https://www.youtube.com/watch?v=Uj95BgimHa8
The GitHub code is here – GitHub

    Enjoy it! 🙂



    Source link

  • GPT Function Calling: 5 Underrated Use Cases | by Max Brodeur-Urbas


    OpenAI’s backend converting messy unstructured data to structured data via functions

OpenAI’s “Function Calling” might be the most groundbreaking yet underappreciated feature released by any software company… ever.

Functions allow you to turn unstructured data into structured data. This might not sound all that groundbreaking, but when you consider that 90% of data processing and data entry jobs worldwide exist for this exact reason, it’s quite a revolutionary feature that went somewhat unnoticed.

    Have you ever found yourself begging GPT (3.5 or 4) to spit out the answer you want and absolutely nothing else? No “Sure, here is your…” or any other useless fluff surrounding the core answer. GPT Functions are the solution you’ve been looking for.

    How are Functions meant to work?

OpenAI’s docs on function calling are extremely limited. You’ll find yourself digging through their developer forum for examples of how to use them. I dug around the forum for you and have many examples coming up.

    Here’s one of the only examples you’ll be able to find in their docs:

functions = [
    {
        "name": "get_current_weather",
        "description": "Get the current weather in a given location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "The city and state, e.g. San Francisco, CA",
                },
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["location"],
        },
    }
]

A function definition is a rigid JSON format that defines a function name, description, and parameters. In this case, the function is meant to get the current weather. Obviously GPT isn’t able to call this actual API (it doesn’t exist), but using the structured response, you could hypothetically wire up the real one.
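
For context, here is a minimal sketch of how that definition gets passed to the API, using the pre-1.0 OpenAI Python SDK that was current at the time (the prompt and model name are illustrative):

    import json
    import openai

    openai.api_key = "sk-..."  # your API key

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo-0613",
        messages=[{"role": "user", "content": "What's the weather like in San Francisco?"}],
        functions=functions,  # the list defined above
        function_call={"name": "get_current_weather"},  # force this specific function
    )

    # The model returns its chosen parameter values as a JSON string
    args = json.loads(response["choices"][0]["message"]["function_call"]["arguments"])
    print(args["location"], args.get("unit"))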

At a high level, however, functions provide two layers of inference:

    Picking the function itself:

You may notice that functions are passed into the OpenAI API call as an array. The reason you provide a name and description for each function is so GPT can decide which to use based on a given prompt. Providing multiple functions in your API call is like giving GPT a Swiss Army knife and asking it to cut a piece of wood in half. It knows that even though it has a pair of pliers, scissors, and a knife, it should use the saw!

Function definitions count towards your token limit. Passing in hundreds of functions would not only take up the majority of your token limit but also cause a drop in response quality. I often don’t even use this feature and only pass in one function that I force it to use. However, it’s very nice to have in certain use cases.

    Picking the parameter values based on a prompt:

This is the real magic, in my opinion. GPT being able to choose a tool from its toolkit is amazing, and it was definitely the focus of the feature announcement, but I think this second layer applies to even more use cases.

    You can imagine a function like handing GPT a form to fill out. It uses its reasoning, the context of the situation and field names/descriptions to decide how it will fill out each field. Designing the form and the additional information you pass in is where you can get creative.

    GPT filling out your custom form (function parameters)

One of the most common things I use functions for is extracting specific values from a large chunk of text: the sender’s address from an email, a founder’s name from a blog post, a phone number from a landing page.

    I like to imagine I’m searching for a needle in a haystack except the LLM burns the haystack, leaving nothing but the needle(s).

    GPT Data Extraction Personified.

Use Case: Processing thousands of contest submissions

I built an automation that iterated over thousands of contest submissions. Before storing these in a Google Sheet, I wanted to extract the email associated with each submission. Here’s the function call I used for extracting the email.

{
    "name": "update_email",
    "description": "Updates email based on the content of their submission.",
    "parameters": {
        "type": "object",
        "properties": {
            "email": {
                "type": "string",
                "description": "The email provided in the submission"
            }
        },
        "required": ["email"]
    }
}

Assigning unstructured data a score based on dynamic, natural-language criteria is a wonderful use case for functions. You could score comments during sentiment analysis, essays against a custom grading rubric, or a loan application for risk based on key factors. A recent use case I applied scoring to was rating sales leads from 0–100 based on their viability.

Use Case: Scoring sales leads

A few months ago, we had hundreds of prospective leads in a single Google Sheet that we wanted to tackle from most to least important. Each lead contained info like company size, contact name, position, industry, etc.

    Using the following function we scored each lead from 0–100 based on our needs and then sorted them from best to worst.

{
    "name": "update_sales_lead_value_score",
    "description": "Updates the score of a sales lead and provides a justification",
    "parameters": {
        "type": "object",
        "properties": {
            "sales_lead_value_score": {
                "type": "number",
                "description": "An integer value ranging from 0 to 100 that represents the quality of a sales lead based on these criteria. 100 is a perfect lead, 0 is terrible. Ideal Lead Criteria:\n- Medium sized companies (300-500 employees is the best range)\n- Companies in primary resource heavy industries are best, ex. manufacturing, agriculture, etc. (this is the most important criteria)\n- The higher up the contact position, the better. VP or Executive level is preferred."
            },
            "score_justification": {
                "type": "string",
                "description": "A clear and concise justification for the score provided based on the custom criteria"
            }
        },
        "required": ["sales_lead_value_score", "score_justification"]
    }
}

Define custom buckets and have GPT thoughtfully consider each piece of data you give it and place it in the correct bucket. This can be used for labelling tasks like selecting the category of YouTube videos or for discrete scoring tasks like assigning letter grades to homework assignments.

Use Case: Labelling news articles

    A very common first step in data processing workflows is separating incoming data into different streams. A recent automation I built did exactly this with news articles scraped from the web. I wanted to sort them based on the topic of the article and include a justification for the decision once again. Here’s the function I used:

{
    "name": "categorize",
    "description": "Categorize the input data into user defined buckets.",
    "parameters": {
        "type": "object",
        "properties": {
            "category": {
                "type": "string",
                "enum": ["US Politics", "Pandemic", "Economy", "Pop culture", "Other"],
                "description": "US Politics: Related to US politics or US politicians. Pandemic: Related to the Coronavirus pandemic. Economy: Related to the economy of a specific country or the world. Pop culture: Related to pop culture, celebrity media or entertainment. Other: Doesn't fit in any of the defined categories."
            },
            "justification": {
                "type": "string",
                "description": "A short justification explaining why the input data was categorized into the selected category."
            }
        },
        "required": ["category", "justification"]
    }
}

Oftentimes when processing data, I give GPT many possible options and want it to select the best one based on my needs. I only want the value it selected, with no surrounding fluff or additional thoughts. Functions are perfect for this.

Use Case: Finding the “most interesting AI news story” from Hacker News

I wrote another Medium article here about how I automated my entire Twitter account with GPT. Part of that process involves selecting the most relevant posts from the front pages of Hacker News. This post-selection step leverages functions!

To summarize the functions portion of the use case: we would scrape the first n pages of Hacker News and ask GPT to select the post most relevant to “AI news or tech news”. GPT would return only the headline and the link, selected via functions, so that I could go on to scrape that website and generate a tweet from it.

I would pass the user-defined query in as part of the message and use the following function definition:

{
    "name": "find_best_post",
    "description": "Determine the best post that most closely reflects the query.",
    "parameters": {
        "type": "object",
        "properties": {
            "best_post_title": {
                "type": "string",
                "description": "The title of the post that most closely reflects the query, stated exactly as it appears in the list of titles."
            }
        },
        "required": ["best_post_title"]
    }
}

Filtering is a subset of categorization where you categorize items as either true or false based on a natural-language condition. A condition like “is Spanish” can filter out all Spanish comments, articles, etc. using a simple function and a conditional statement immediately after.

Use Case: Filtering contest submissions

The same automation I mentioned in the “Data Extraction” section used AI-powered filtering to weed out contest submissions that didn’t meet deal-breaking criteria. Things like “must use TypeScript” were absolutely mandatory for the coding contest at hand. We used functions to filter out submissions and trim the total set being processed by 90%. Here is the function definition we used.

{
    "name": "apply_condition",
    "description": "Used to decide whether the input meets the user provided condition.",
    "parameters": {
        "type": "object",
        "properties": {
            "decision": {
                "type": "string",
                "enum": ["True", "False"],
                "description": "True if the input meets ALL of these requirements (uses typescript, uses tailwindcss, functional demo), False otherwise."
            }
        },
        "required": ["decision"]
    }
}
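
The conditional that follows is then trivial; here is a sketch, assuming the same pre-1.0 SDK response shape as the earlier weather example:

    import json

    def passes_condition(response) -> bool:
        # Parse the forced apply_condition call and turn the string enum into a bool
        args = json.loads(response["choices"][0]["message"]["function_call"]["arguments"])
        return args["decision"] == "True"

    # Keep only the submissions whose responses passed the filter, e.g.:
    # kept = [s for s, r in zip(submissions, responses) if passes_condition(r)]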

    If you’re curious why I love functions so much or what I’ve built with them you should check out AgentHub!

AgentHub is the Y Combinator-backed startup I co-founded that lets you automate any repetitive or complex workflow with AI via a simple drag-and-drop no-code platform.

    “Imagine Zapier but AI-first and on crack.” — Me

Automations are built with individual nodes called “Operators” that are linked together to create powerful AI pipelines. We have a catalogue of AI-powered operators that leverage functions under the hood.

    Our current AI-powered operators that use functions!

Check out these templates to see examples of function use cases on AgentHub: Scoring, Categorization, Option-Selection.

If you want to start building, AgentHub is live and ready to use! We’re very active in our Discord community and are happy to help you build your automations if needed.

Feel free to follow the official AgentHub Twitter for updates, and follow me for AI-related content.





    Source link

  • Easy logging management with Seq and ILogger in ASP.NET | Code4IT

    Easy logging management with Seq and ILogger in ASP.NET | Code4IT


Seq is one of the best log sinks out there: it’s easy to install and configure, and it can be added to an ASP.NET application with just a line of code.


    Logging is one of the most essential parts of any application.

    Wouldn’t it be great if we could scaffold and use a logging platform with just a few lines of code?

    In this article, we are going to learn how to install and use Seq as a destination for our logs, and how to make an ASP.NET 8 API application send its logs to Seq by using the native logging implementation.

    Seq: a sink and dashboard to manage your logs

    In the context of logging management, a “sink” is a receiver of the logs generated by one or many applications; it can be a cloud-based system, but it’s not mandatory: even a file on your local file system can be considered a sink.

    Seq is a Sink, and works by exposing a server that stores logs and events generated by an application. Clearly, other than just storing the logs, Seq allows you to view them, access their details, perform queries over the collection of logs, and much more.

    It’s free to use for individual usage, and comes with several pricing plans, depending on the usage and the size of the team.

    Let’s start small and install the free version.

    We have two options:

    1. Download it locally, using an installer (here’s the download page);
    2. Use Docker: pull the datalust/seq image locally and run the container on your Docker engine.

    Both ways will give you the same result.

    However, if you already have experience with Docker, I suggest you use the second approach.

    Once you have Docker installed and running locally, open a terminal.

    First, you have to pull the Seq image locally (I know, it’s not mandatory, but I prefer doing it in a separate step):
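
    docker pull datalust/seq:latest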

    Then, when you have it downloaded, you can start a new instance of Seq locally, exposing the UI on a specific port.

    docker run --name seq -d --restart unless-stopped -e ACCEPT_EULA=Y -p 5341:80 datalust/seq:latest
    

    Let’s break down the previous command:

    • docker run: This command is used to create and start a new Docker container.
    • --name seq: This option assigns the name seq to the container. Naming containers can make them easier to manage.
    • -d: This flag runs the container in detached mode, meaning it runs in the background.
    • --restart unless-stopped: This option ensures that the container will always restart unless it is explicitly stopped. This is useful for ensuring that the container remains running even after a reboot or if it crashes.
    • -e ACCEPT_EULA=Y: This sets an environment variable inside the container. In this case, it sets ACCEPT_EULA to Y, which likely indicates that you accept the End User License Agreement (EULA) for the software running in the container.
    • -p 5341:80: This maps port 5341 on your host machine to port 80 in the container. This allows you to access the service running on port 80 inside the container via port 5341 on your host.
    • datalust/seq:latest: This specifies the Docker image to use for the container. datalust/seq is the image name, and latest is the tag, indicating that you want to use the latest version of this image.

    So, this command runs a container named seq in the background, ensures it restarts unless stopped, sets an environment variable to accept the EULA, maps a host port to a container port, and uses the latest version of the datalust/seq image.

It’s important to pay attention to the port you use: by default, Seq uses port 5341 for the UI and the API. If you prefer to use another port, feel free to do that – just remember that you’ll need some additional configuration.

    Now that Seq is installed on your machine, you can access its UI. Guess what? It’s on localhost:5341!

    Seq brand new instance

However, Seq is “just” a container for our logs: we still have to produce them.

    A sample ASP.NET API project

    I’ve created a simple API project that exposes CRUD operations for a data model stored in memory (we don’t really care about the details).

    [ApiController]
    [Route("[controller]")]
public class BooksController : ControllerBase
{
    // In-memory catalogue of books; its initialization is omitted, since the details don't matter here
    private readonly List<Book> booksCatalogue = new();

    public BooksController()
    {
    }
    
        [HttpGet("{id}")]
        public ActionResult<Book> GetBook([FromRoute] int id)
        {
    
            Book? book = booksCatalogue.SingleOrDefault(x => x.Id == id);
            return book switch
            {
                null => NotFound(),
                _ => Ok(book)
            };
        }
    }
    

    As you can see, the details here are not important.

    Even the Main method is the default one:

    var builder = WebApplication.CreateBuilder(args);
    
    builder.Services.AddControllers();
    
    builder.Services.AddEndpointsApiExplorer();
    builder.Services.AddSwaggerGen();
    
    var app = builder.Build();
    
    if (app.Environment.IsDevelopment())
    {
        app.UseSwagger();
        app.UseSwaggerUI();
    }
    
    app.UseHttpsRedirection();
    
    app.MapControllers();
    
    app.Run();
    

    We have the Controllers, we have Swagger… well, nothing fancy.

    Let’s mix it all together.

    How to integrate Seq with an ASP.NET application

If you want to use Seq in an ASP.NET application (whether it’s an API application or anything else), you have to add it to the startup pipeline.

    First, you have to install the proper NuGet package: Seq.Extensions.Logging.

    The Seq.Extensions.Logging NuGet package

    Then, you have to add it to your Services, calling the AddSeq() method:

    var builder = WebApplication.CreateBuilder(args);
    
    builder.Services.AddControllers();
    
    builder.Services.AddEndpointsApiExplorer();
    builder.Services.AddSwaggerGen();
    
    + builder.Services.AddLogging(lb => lb.AddSeq());
    
    var app = builder.Build();
    

    Now, Seq is ready to intercept whatever kind of log arrives at the specified port (remember, in our case, we are using the default one: 5341).

    We can try it out by adding an ILogger to the BooksController constructor:

    private readonly ILogger<BooksController> _logger;
    
    public BooksController(ILogger<BooksController> logger)
    {
        _logger = logger;
    }
    

Now we can use the _logger instance to create logs as we want, using the appropriate log level:

    [HttpGet("{id}")]
    public ActionResult<Book> GetBook([FromRoute] int id)
    {
        _logger.LogInformation("I am Information");
        _logger.LogWarning("I am Warning");
        _logger.LogError("I am Error");
        _logger.LogCritical("I am Critical");
    
        Book? book = booksCatalogue.SingleOrDefault(x => x.Id == id);
        return book switch
        {
            null => NotFound(),
            _ => Ok(book)
        };
    }
    

    Log messages on Seq

    Using Structured Logging with ILogger and Seq

    One of the best things about Seq is that it automatically handles Structured Logging.

    [HttpGet("{id}")]
    public ActionResult<Book> GetBook([FromRoute] int id)
    {
        _logger.LogInformation("Looking if in my collection with {TotalBooksCount} books there is one with ID {SearchedId}"
     , booksCatalogue.Count, id);
    
        Book? book = booksCatalogue.SingleOrDefault(x => x.Id == id);
        return book switch
        {
            null => NotFound(),
            _ => Ok(book)
        };
    }
    

    Have a look at this line:

    _logger.LogInformation("Looking if in my collection with {TotalBooksCount} books there is one with ID {SearchedId}"
     , booksCatalogue.Count, id);
    

    This line generates a string message, replaces all the placeholders, and, on top of that, creates two properties, SearchedId and TotalBooksCount; you can now define queries using these values.

    Structured Logs in Seq allow you to view additional logging properties

    Further readings

    I have to admit it: logging management is one of my favourite topics.

    I’ve already written a sort of introduction to Seq in the past, but at that time, I did not use the native ILogger, but Serilog, a well-known logging library that added some more functionalities on top of the native logger.

    🔗 Logging with Serilog and Seq | Code4IT

    This article first appeared on Code4IT 🐧

    In particular, Serilog can be useful for propagating Correlation IDs across multiple services so that you can fetch all the logs generated by a specific operation, even though they belong to separate applications.

    🔗 How to log Correlation IDs in .NET APIs with Serilog

    Feel free to search through my blog all the articles related to logging – I’m sure you will find interesting stuff!

    Wrapping up

    I think Seq is the best tool for local development: it’s easy to download and install, supports structured logging, and can be easily added to an ASP.NET application with just a line of code.

I usually add it to my private projects, especially when the operations I run are complex enough to require well-structured logs.

Given how easy it is to install, I sometimes use it for my work projects too: when I have to fix a bug but don’t want to use the centralized logging platform (since it’s quite complex to use), I add Seq as a destination sink, run the application, and analyze the logs on my local machine. Then, of course, I remove its reference, as I want it to be just a discardable piece of configuration.

    I hope you enjoyed this article! Let’s keep in touch on LinkedIn, Twitter or BlueSky! 🤜🤛

    Happy coding!

    🐧





    Source link

  • Integrating Psychology into Software Development | by Ulas Can Cengiz


    14 min read

    Nov 10, 2023

    Photo by Bret Kavanaugh on Unsplash

    Imagine sitting down at your desk to untangle a particularly complex piece of software code. Your eyes scan lines packed with logical operations and function calls. Somewhere in this intricate weave, a bug lurks, derailing the application’s performance. This scenario, familiar to many developers, isn’t just a test of technical skill; it’s a psychological challenge. The frustration and cognitive fatigue that often accompany such tasks can cloud judgment and prolong resolution. It’s in moments like these that the intersection of psychology and software development comes into sharp focus.

    Cognitive load theory, originally applied to educational psychology, has profound implications for managing complexity in software projects. It posits that our working memory has a limited capacity for processing new information. In the context of software development, this translates to the need for clean, readable code and well-architected systems that minimize the cognitive load on developers. By understanding and applying this theory, we can create development environments that reduce unnecessary complexity and allow developers to allocate their cognitive resources…



    Source link

  • Weekend Sale! 🎁

    Weekend Sale! 🎁


    At Browserling and Online Tools we love sales.

    We just created a new automated Weekend Sale.

    Now each weekend, we show a 50% discount offer to all users who visit our site.

    🔥 onlinetools.com/pricing

    🔥 browserling.com/#pricing

    Buy a subscription now and see you next time!



    Source link

  • Python – Simple Stock Analysis with yfinance – Useful code

    Python – Simple Stock Analysis with yfinance – Useful code


Sometimes stock charts are useful; sometimes they are not. In general, do your own research – none of this is financial advice.

And while doing that, if you want to analyze stocks with just a few lines of Python, this article might help. This simple yet powerful script helps you spot potential buy and sell opportunities for Apple (AAPL) using two classic technical indicators: moving averages and RSI.

    Understanding the Strategy

    1. SMA Crossover: The Trend Following Signal

The script first calculates two Simple Moving Averages (SMA): a shorter, faster one and a longer, slower one.

The crossover strategy is simple: when the shorter SMA crosses above the longer one, it signals a potential buy; when it crosses below, a potential sell.

    This works because moving averages smooth out price noise, helping identify the overall trend direction.

    2. RSI: The Overbought/Oversold Indicator

The Relative Strength Index (RSI) measures whether a stock is overbought or oversold: readings above 70 are commonly treated as overbought, and readings below 30 as oversold.

    By combining SMA crossovers (trend confirmation) and RSI extremes (timing), we get stronger signals.

This plot is generated with fewer than 40 lines of Python code

The code looks like this.
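
A sketch of the script follows; I assume the common 20-day and 50-day SMA windows and a 14-day RSI, which may differ from the exact values used in the video:

    import matplotlib.pyplot as plt
    import yfinance as yf

    # Daily prices for Apple; two years gives the indicators enough history
    data = yf.Ticker("AAPL").history(period="2y")

    # Simple Moving Averages: a fast one and a slow one
    data["SMA20"] = data["Close"].rolling(window=20).mean()
    data["SMA50"] = data["Close"].rolling(window=50).mean()

    # RSI with the classic 14-day lookback
    delta = data["Close"].diff()
    gain = delta.clip(lower=0).rolling(window=14).mean()
    loss = -delta.clip(upper=0).rolling(window=14).mean()
    data["RSI"] = 100 - 100 / (1 + gain / loss)

    # Crossover signals: the fast SMA crossing above/below the slow SMA;
    # the RSI extremes (30/70) are plotted below as timing confirmation
    above = data["SMA20"] > data["SMA50"]
    prev = above.shift(1, fill_value=False)
    buys = data[above & ~prev]
    sells = data[~above & prev]

    fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(12, 8), sharex=True)
    ax1.plot(data.index, data["Close"], label="Close")
    ax1.plot(data.index, data["SMA20"], label="SMA20")
    ax1.plot(data.index, data["SMA50"], label="SMA50")
    ax1.scatter(buys.index, buys["Close"], marker="^", color="green", label="Buy")
    ax1.scatter(sells.index, sells["Close"], marker="v", color="red", label="Sell")
    ax1.legend()
    ax2.plot(data.index, data["RSI"], label="RSI(14)")
    ax2.axhline(70, color="grey", linestyle="--")
    ax2.axhline(30, color="grey", linestyle="--")
    ax2.legend()
    plt.tight_layout()
    plt.show()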

The code above is explained in much more detail in the YouTube video below:

    https://www.youtube.com/watch?v=m0ayASmrZmE

And it is available on GitHub as well.



    Source link

  • Developer Spotlight: Reksa Andhika | Codrops

    Developer Spotlight: Reksa Andhika | Codrops


    Hi, my name is Reksa Andhika. I’m an independent creative developer based in Indonesia, specializing in building websites with motion and interaction. I work with companies, agencies, studios, and individuals from all over the world.

    Selected Works

    TrueKind

    This e-commerce skincare project was a passion project, created in collaboration with Abhishek Jha, a talented designer from India.

    I truly enjoyed watching every step of progress we made. It was an exciting journey from start to finish. We discussed and brainstormed everything from design and functionality to motion, aiming to make the project as polished as possible and highlight its beauty. The challenge of this project was to create a floating layout and motion effects that were not distracting, but instead made the website come alive using parallax and motion. We crafted engaging micro-interactions to enhance its visual appeal. My favorite part was the seamless page transition when entering the product detail page.

We’re really happy that people’s response has been positive, and we received awards and recognition such as Awwwards Site of the Day & Developer Award, Awwwards E-commerce Honors Nominee, GSAP Site of the Day, Muz.li Picks Honor, and Made With GSAP.

    Tech stack: Nuxt 3, Prismic, GSAP, Lenis.

    ELEMENTIS

    Luxury health & wellness resorts and residences. I had the pleasure of working with Fleava, an agency based in Bali, Indonesia.

    I thoroughly enjoyed working on the ELEMENTIS project, especially with the stunning assets and content they provided. The motion elements were thoughtfully aligned with the brand logo, ensuring a cohesive and harmonious visual identity. One of the challenges of this project was creating a clean layout and minimalist motion, leveraging the brand identity to craft a unique motion experience.

    ELEMENTIS received several awards and recognition, including Awwwards Site of the Day & Developer Award (my first SOTD!), GSAP Site of the Week, and GSAP Showreel 2024.

    Tech stack: Nuxt 3, GSAP, Vold (Internal Fleava CMS), Lenis.

    FIFTYSEVEN

    This is the portfolio website for FIFTYSEVEN, a studio based in Serbia. What makes this project special is that each case study is custom and carefully crafted, making every project unique. The challenge was to build different layouts, colors, and motion effects for each case study, but it was a fun and rewarding experience.

    Tech Stack: React, GSAP, Lenis.

    Understanding Neurodiversity

    This is an interactive website about neurodivergence. The site was designed to tell a story through interactive scroll, hover, and click elements. We aimed to deliver the message beautifully, using colorful vector assets. The challenge of this project was controlling the animation timeline, ensuring it worked smoothly both forwards and backwards.

    Understanding Neurodiversity received recognition as CSSDA Site of the Day and was nominated for CSSDA Website of the Month.

    Tech Stack: Vanilla HTML, CSS, JS, GSAP, Lottie.

    About Me

    I’ve been passionate about programming since I was 15, starting with desktop software, mobile applications, and websites. Eventually, developing websites became the most comfortable and fulfilling part of my journey.

    I started freelancing by helping friends with their projects while constantly improving my programming skills. Back then, I was only familiar with traditional, static websites.

    In 2020, my partner, Ala, introduced me to Awwwards, and I discovered that websites could be incredibly creative, with dynamic layouts, rich motion, and interactive elements. That moment sparked my interest and curiosity in creative website development.

    I began creating experimental projects, recreating entire websites for learning purposes, and later started working on client projects that featured rich motion and interaction. I learned the most by working on real projects, whether for clients or personal experiments.

    Philosophy

    I draw a lot of inspiration from internet culture, especially memes. They can be funny, satirical, and often deeply relatable.

    “It Ain’t Much But It’s Honest Work” (from a simple meme farm guy) – I always want to keep these words in my portfolio and in my mindset. As long as I’m proud of the work and know that I put real effort into it, that’s what matters most. Even if a project feels small or insignificant, honest work has its own value.

    I believe there’s always someone out there who will appreciate it; we just never know who that might be.

    Tools & Workflow

    When exploring the tech stack for projects and their requirements, I mostly use and feel comfortable with Nuxt.js for the frontend, SCSS for styling, Prismic for headless CMS, GSAP for motion and interaction, Three.js / OGL for WebGL, Lenis for smooth scrolling, and Shopify for e-commerce projects.

    Currently Learning

    Currently diving deep into learning WebGL and motion concepts.

    Current Challenges

    Finding and crafting the most efficient workflow.

    Final Thoughts

    As a developer, I believe curiosity and consistency are key to thriving in the industry. It’s important to learn the fundamentals to build strong knowledge, stay focused, avoid distractions, and create something experimental or a passion project. Publishing it can be a powerful move.

    Thank you to Codrops and Manoela for the opportunity and spotlight – it’s truly an honor! Codrops is one of the best resources for creative designers and developers, and I’ve learned so much from the site.



    Source link

2 ways to generate realistic data using Bogus | Code4IT

    2 ways to generate realistic data using Bogus | Code4IT



    In a previous article, we delved into the creation of realistic data using Bogus, an open-source library that allows you to generate data with plausible values.

    Bogus contains several properties and methods that generate realistic data such as names, addresses, birthdays, and so on.

In this article, we will learn two ways to generate data with Bogus: both approaches produce the same result; the main difference lies in reusability and modularity. In my opinion, it’s largely a matter of preference: neither approach is absolutely better than the other, but each can be preferable in specific cases.

    For the sake of this article, we are going to use Bogus to generate instances of the Book class, defined like this:

    public class Book
    {
        public Guid Id { get; set; }
        public string Title { get; set; }
        public int PagesCount { get; set; }
        public Genre[] Genres { get; set; }
        public DateOnly PublicationDate { get; set; }
        public string AuthorFirstName { get; set; }
        public string AuthorLastName { get; set; }
    }
    
    public enum Genre
    {
        Thriller, Fantasy, Romance, Biography
    }
    

    Expose a Faker inline or with a method

    It is possible to create a specific object that, using a Builder approach, allows you to generate one or more items of a specified type.

    It all starts with the Faker<T> generic type, where T is the type you want to generate.

    Once you create it, you can define the rules to be used when initializing the properties of a Book by using methods such as RuleFor and RuleForType.

    public static class BogusBookGenerator
    {
        public static Faker<Book> CreateFaker()
        {
            Faker<Book> bookFaker = new Faker<Book>()
             .RuleFor(b => b.Id, f => f.Random.Guid())
             .RuleFor(b => b.Title, f => f.Lorem.Text())
             .RuleFor(b => b.Genres, f => f.Random.EnumValues<Genre>())
             .RuleFor(b => b.AuthorFirstName, f => f.Person.FirstName)
             .RuleFor(b => b.AuthorLastName, f => f.Person.LastName)
             .RuleFor(nameof(Book.PagesCount), f => f.Random.Number(100, 800))
             .RuleForType(typeof(DateOnly), f => f.Date.PastDateOnly());
    
            return bookFaker;
        }
    }
    

    In this way, thanks to the static method, you can simply create a new instance of Faker<Book>, ask it to generate one or more books, and enjoy the result:

    Faker<Book> generator = BogusBookGenerator.CreateFaker();
    var books = generator.Generate(10);
    

    Clearly, it’s not necessary for the class to be marked as static: it all depends on what you need to achieve!

    Expose a subtype of Faker, specific for the data type to be generated

    If you don’t want to use a method (static or not static, it doesn’t matter), you can define a subtype of Faker<Book> whose customization rules are all defined in the constructor.

    public class BookGenerator : Faker<Book>
    {
        public BookGenerator()
        {
            RuleFor(b => b.Id, f => f.Random.Guid());
            RuleFor(b => b.Title, f => f.Lorem.Text());
            RuleFor(b => b.Genres, f => f.Random.EnumValues<Genre>());
            RuleFor(b => b.AuthorFirstName, f => f.Person.FirstName);
            RuleFor(b => b.AuthorLastName, f => f.Person.LastName);
            RuleFor(nameof(Book.PagesCount), f => f.Random.Number(100, 800));
            RuleForType(typeof(DateOnly), f => f.Date.PastDateOnly());
        }
    }
    

This way, you can simply create a new instance of BookGenerator and, again, call the Generate method to create new book instances.

    var generator = new BookGenerator();
    var books = generator.Generate(10);
    

    Method vs Subclass: When should we use which?

    As we saw, both methods bring the same result, and their usage is almost identical.

    So, which way should I use?

    Use the method approach (the first one) when you need:

    • Simplicity: If you need to generate fake data quickly and your rules are straightforward, using a method is the easiest approach.
    • Ad-hoc Data Generation: Ideal for one-off or simple scenarios where you don’t need to reuse the same rules across your application.

    Or use the subclass (the second approach) when you need:

    • Reusability: If you need to generate the same type of fake data in multiple places, defining a subclass allows you to encapsulate the rules and reuse them easily.
    • Complex scenarios and extensibility: Better suited for more complex data generation scenarios where you might have many rules or need to extend the functionality.
    • Maintainability: Easier to maintain and update the rules in one place.

    Further readings

    If you want to learn a bit more about Bogus and use it to populate data used by Entity Framework, I recently published an article about this topic:

    🔗Seeding in-memory Entity Framework with realistic data with Bogus | Code4IT

    This article first appeared on Code4IT 🐧

But, clearly, the best place to learn about Bogus is the official documentation, which you can find on GitHub.

    🔗 Bogus repository | GitHub

    Wrapping up

    This article sort of complements the previous article about Bogus.

    I think Bogus is one of the best libraries in the .NET universe, as having realistic data can help you improve the intelligibility of the test cases you generate. Also, Bogus can be a great tool when you want to showcase demo values without accessing real data.

    I hope you enjoyed this article! Let’s keep in touch on LinkedIn, Twitter or BlueSky! 🤜🤛

    Happy coding!

    🐧





    Source link

  • 7 Useful Tips to Consider When Starting a Trucking Business


There are many business lines in the world that are not easy to manage, and trucking is one of them. Yet it is one of the booming industries in many countries.

Nowadays, many business owners are trying to enter this industry. Over the past years, this business has shown constant growth, which has made it a popular business line. If you are planning to start a trucking business, you have to understand the complex jargon of the field. Along with that, you need to get DOT authority to operate a business in your state.

    In this blog, you will find out how you can start and run your trucking business successfully.

    Do your research

To hit the jackpot, the first thing you need to do is crack the nut. This means you will have to research the market and its needs.

By doing in-depth research, you will be able to identify your business niche in the trucking industry. Are you interested in transporting goods, or in using a truck for mobile billboards? These are only two examples; when you research the market, you will definitely find more possibilities.

    After that, it will be easy for you to develop a business plan.

    Find your target market

Another leading business strategy is finding and understanding your target audience. Once you understand whom you will offer your services to and what their needs are, it becomes easy to deliver those services and make more sales.

It is a wise decision to develop your business strategy around your niche market. By following this approach, you can ensure that your operations are cohesive and on track. When you tailor your trucking services to the needs of your clients, your business will earn both reputation and revenue.

    Finance your fleet

Businesses require heavy investment, no matter the size or scale of your startup. When it comes to the trucking business, you will be surprised by the cost of buying trucks. When planning the finances for buying trucks, you will also have to budget for maintenance costs. You can find many financing options to start your business.

    You can also start your own company with new vehicles or can consider investing in offers for used commercial vehicles and construction machinery.

    Make it legal

It is crucial for business owners to meet all the legal requirements to operate their businesses in their state. Without legal recognition or approval, federal authorities can take action against you, and you could end up losing your business.

    Many people enter the trucking business without knowing that it is highly regulated. You will need to get a permit or authority to operate your business activities interstate. You will also need to file for a DOT MC Number in your State.

    Ensure that your business complies with the applicable laws for maintaining legitimacy.

Invest in technology

Technology is the future, and trucking startups in particular should realize its importance early. Technology is set to dominate services across industries, and adopting it will bring numerous benefits to your business.

When it comes to the transport business, you will have to track and manage orders. For this, it is crucial to use mobile applications or websites to promote your business and make it visible. If you cannot afford large technology investments, you can still add basics like GPS systems, smart cameras, and more.

    Learn your competition

While researching your market, you should also study your competitors. It will help you understand the threats and weaknesses that existing businesses face. This way, you can come up with innovative business strategies and fill gaps in what clients need.

You can also offer prices more competitive than other truckers and brokers while keeping reasonable margins, so a good number of clients will be attracted to your business.

    Pro tip:

You should always connect directly with consignors so you can pass the savings on to your clients through lower prices.

    Final note:

There is no doubt that the trucking business has been booming over the years and has brought gold to its owners. If you get the fundamentals right, you can harvest the jackpot even as a newcomer to the market.



    Source link

  • Automating Your DevOps: Writing Scripts that Save Time and Headaches | by Ulas Can Cengiz


    Or, how scripting revolutionized my workflow

    Photo by Stephen Dawson on Unsplash

    Imagine a time when factories were full of life, with gears turning and machines working together. It was a big change, like what’s happening today with computers. In the world of creating and managing software, we’re moving from doing things by hand to letting computers do the work. I’ve seen this change happen, and I can tell you, writing little programs, or “scripts,” is what’s making this change possible.

    Just like factories changed how things were made, these little programs are changing the way we handle software. They’re like a magic trick that turns long, boring tasks into quick and easy ones. In this article, I’m going to show you how these little programs fit into the bigger picture, how they make things better and faster, and the headaches they can take away.

    We’re going to go on a trip together. I’ll show you how things used to be done, talk about the different kinds of little programs and tools we use now, and share some of the tricks I’ve learned. I’ll tell you stories about times when these little programs really made a difference, give you tips, and show you some examples. So, buckle up, and let’s jump into this world where making and managing software is not just a job, but something really special.



    Source link