Author: post Bina

  • Advanced parsing using Int.TryParse in C# | Code4IT


    We all need to parse strings as integers. Most of the time, we use int.TryParse(string, out int). But there’s a more advanced overload that we can use for complex parsing.


    Just a second! 🫷
    If you are here, it means that you are a software developer.
    So, you know that storage, networking, and domain management have a cost.

    If you want to support this blog, please ensure that you have disabled the adblocker for this site.
    I configured Google AdSense to show as few ads as possible – I don’t want to bother you with lots of ads, but I still need some to pay for the resources for my site.

    Thank you for your understanding.
    Davide

    You have probably used the int.TryParse method with this signature:

    public static bool TryParse (string? s, out int result);
    

    This method accepts a string, s; if it can be parsed, its integer value is stored in the result out parameter, and the method returns true to signal that the parsing was successful.

    As an example, this snippet:

    if (int.TryParse("100", out int result))
    {
        Console.WriteLine(result + 2); // correctly parsed as an integer
    }
    else
    {
        Console.WriteLine("Failed");
    }
    

    prints 102.

    Does it work? Yes. Is this the best we can do? No!

    How to parse complex strings with int.TryParse

    What if you wanted to parse 100€? There is a lesser-known overload that does the job:

    public static bool TryParse (
        string? s,
        System.Globalization.NumberStyles style,
        IFormatProvider? provider,
        out int result);
    

    As you see, we have two more parameters: style and provider.

    IFormatProvider? provider allows you to specify the culture information: examples are CultureInfo.InvariantCulture and new CultureInfo("es-es").

    But the real king of this overload is the style parameter: it is a flags enum that allows you to specify the expected string format.

    style is of type System.Globalization.NumberStyles, which has several values:

    [Flags]
    public enum NumberStyles
    {
        None = 0x0,
        AllowLeadingWhite = 0x1,
        AllowTrailingWhite = 0x2,
        AllowLeadingSign = 0x4,
        AllowTrailingSign = 0x8,
        AllowParentheses = 0x10,
        AllowDecimalPoint = 0x20,
        AllowThousands = 0x40,
        AllowExponent = 0x80,
        AllowCurrencySymbol = 0x100,
        AllowHexSpecifier = 0x200,
        Integer = 0x7,
        HexNumber = 0x203,
        Number = 0x6F,
        Float = 0xA7,
        Currency = 0x17F,
        Any = 0x1FF
    }
    

    You can combine those values with the | symbol.

    Let’s see some examples.

    Parse as integer

    The simplest example is to parse a simple integer:

    [Fact]
    void CanParseInteger()
    {
        NumberStyles style = NumberStyles.Integer;
        var canParse = int.TryParse("100", style, new CultureInfo("it-it"), out int result);
    
        Assert.True(canParse);
        Assert.Equal(100, result);
    }
    

    Notice the NumberStyles style = NumberStyles.Integer;, used as a baseline.

    Parse parentheses as negative numbers

    In some cases, parentheses around a number indicate that the number is negative. So (100) is another way of writing -100.

    In this case, you can use the NumberStyles.AllowParentheses flag.

    [Fact]
    void ParseParenthesisAsNegativeNumber()
    {
        NumberStyles style = NumberStyles.Integer | NumberStyles.AllowParentheses;
        var canParse = int.TryParse("(100)", style, new CultureInfo("it-it"), out int result);
    
        Assert.True(canParse);
        Assert.Equal(-100, result);
    }
    

    Parse with currency

    What if the string represents a currency? You can use NumberStyles.AllowCurrencySymbol.

    [Fact]
    void ParseNumberAsCurrency()
    {
        NumberStyles style = NumberStyles.Integer | NumberStyles.AllowCurrencySymbol;
        var canParse = int.TryParse(
            "100€",
            style,
            new CultureInfo("it-it"),
            out int result);
    
        Assert.True(canParse);
        Assert.Equal(100, result);
    }
    

    But, remember: the only valid symbol is the one related to the CultureInfo instance you are passing to the method.

    Both

    var canParse = int.TryParse(
        "100€",
        style,
        new CultureInfo("en-gb"),
        out int result);
    

    and

    var canParse = int.TryParse(
        "100$",
        style,
        new CultureInfo("it-it"),
        out int result);
    

    are not valid. One because we are using English culture to parse Euros, the other because we are using Italian culture to parse Dollars.
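    A minimal check makes the mismatch concrete (a sketch; the only assumption is the same style flags used above):

    ```csharp
    using System;
    using System.Globalization;

    NumberStyles style = NumberStyles.Integer | NumberStyles.AllowCurrencySymbol;

    // "€" is the it-it currency symbol, so only the first parse succeeds
    bool euroOk   = int.TryParse("100€", style, new CultureInfo("it-it"), out int euros);
    bool dollarOk = int.TryParse("100$", style, new CultureInfo("it-it"), out _);

    Console.WriteLine($"{euroOk} {euros} {dollarOk}"); // True 100 False
    ```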

    Hint: how to get the currency symbol given a CultureInfo? You can use NumberFormat.CurrencySymbol, like this:

    new CultureInfo("it-it").NumberFormat.CurrencySymbol; // €
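    You can use that property to build the input string from the culture’s own symbol, keeping the two in sync (a small sketch):

    ```csharp
    using System;
    using System.Globalization;

    var culture = new CultureInfo("it-it");
    string symbol = culture.NumberFormat.CurrencySymbol; // "€" for it-it

    bool ok = int.TryParse(
        "100" + symbol,
        NumberStyles.Integer | NumberStyles.AllowCurrencySymbol,
        culture,
        out int amount);

    Console.WriteLine($"{ok} {amount}"); // True 100
    ```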
    

    Parse with thousands separator

    And what to do when the string contains the separator for thousands? 10.000 is a valid number, in the Italian notation.

    Well, you can specify the NumberStyles.AllowThousands flag.

    [Fact]
    void ParseThousands()
    {
        NumberStyles style = NumberStyles.Integer | NumberStyles.AllowThousands;
        var canParse = int.TryParse("10.000", style, new CultureInfo("it-it"), out int result);
    
        Assert.True(canParse);
        Assert.Equal(10000, result);
    }
    

    Parse hexadecimal values

    It’s a rare case, but it may happen: you receive a string in the Hexadecimal notation, but you need to parse it as an integer.

    In this case, NumberStyles.AllowHexSpecifier is the correct flag.

    [Fact]
    void ParseHexValue()
    {
        NumberStyles style = NumberStyles.AllowHexSpecifier;
        var canParse = int.TryParse("F", style, new CultureInfo("it-it"), out int result);
    
        Assert.True(canParse);
        Assert.Equal(15, result);
    }
    

    Notice that the input string does not contain the hexadecimal prefix (0x).
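    If your input may carry that prefix, you can strip it before parsing; this helper is a hypothetical sketch, not part of the original snippet:

    ```csharp
    using System;
    using System.Globalization;

    string input = "0xF";

    // AllowHexSpecifier rejects the "0x" prefix itself, so remove it first
    string digits = input.StartsWith("0x", StringComparison.OrdinalIgnoreCase)
        ? input.Substring(2)
        : input;

    bool ok = int.TryParse(digits, NumberStyles.AllowHexSpecifier,
        CultureInfo.InvariantCulture, out int value);

    Console.WriteLine($"{ok} {value}"); // True 15
    ```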

    Use multiple flags

    You can combine multiple flags to create a new value that represents the union of the specified styles.

    We can use this capability to parse, for example, a currency that contains the thousands separator:

    [Fact]
    void ParseThousandsCurrency()
    {
        NumberStyles style =
            NumberStyles.Integer
            | NumberStyles.AllowThousands
            | NumberStyles.AllowCurrencySymbol;
    
        var canParse = int.TryParse("10.000€", style, new CultureInfo("it-it"), out int result);
    
        Assert.True(canParse);
        Assert.Equal(10000, result);
    }
    

    NumberStyles.AllowThousands | NumberStyles.AllowCurrencySymbol does the trick.

    Conclusion

    We all use the simple int.TryParse method, but when parsing the input string requires more complex handling, we can rely on this overload. Of course, if it’s still not enough, you should create your own custom parsers (or, as a simpler approach, you can use regular expressions).

    Are there any methods that have overloads that nobody uses? Share them in the comments!

    Happy coding!

    🐧



    Source link

  • Not all comments are bad | Code4IT

    Not all comments are bad | Code4IT



    Many developers say that

    All comments are bad! 💢

    False! Most of the comments are bad!

    For example, look at this method, and look at the comments:

    /// <summary> Checks if the password is valid </summary>
    /// <param name="password">The password to be validated</param>
    /// <returns>True if the password is valid, false otherwise</returns>
    public bool IsPasswordValid(string password)
    {
        Regex regex = new Regex(@"[a-z]{2,7}[1-9]{3,4}");
        var hasMatch = regex.IsMatch(password);
        return hasMatch;
    }
    

    Here the comments are pointless – they just restate what you can infer by looking at the method signature: this method checks if the input string is a valid password.

    So, yes, those kinds of comments are totally meaningless, and they should be avoided.

    But still, there are cases when writing comments is pretty helpful.

    public bool IsPasswordValid(string password)
    {
        // 2 to 7 lowercase chars followed by 3 or 4 numbers
        // Valid:   kejix173
        //          aoe193
        // Invalid: a92881
        Regex regex = new Regex(@"[a-z]{2,7}[1-9]{3,4}");
        return regex.IsMatch(password);
    }
    

    Here the purpose of the comment is not to explain what the method does (it’s already pretty explicit), but it explains with examples the Regular Expression used to validate the password. Another way to explain it is by adding tests that validate some input strings. In this way, you make sure that the documentation (aka the tests) is always aligned with the production code.
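    For example, a few executable checks on the same pattern double as documentation (a sketch; the sample inputs come from the comments above):

    ```csharp
    using System;
    using System.Text.RegularExpressions;

    var regex = new Regex(@"[a-z]{2,7}[1-9]{3,4}");

    bool valid1  = regex.IsMatch("kejix173"); // 5 letters + 3 digits
    bool valid2  = regex.IsMatch("aoe193");   // 3 letters + 3 digits
    bool invalid = regex.IsMatch("a92881");   // only one leading letter

    Console.WriteLine($"{valid1} {valid2} {invalid}"); // True True False
    ```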

    By the way, for more complex calculations, adding comments that explain WHY (and not HOW or WHAT) a piece of code does something is a good way to help developers understand the code.

    Another reason to add comments is to explain why a specific piece of code exists: examples are legal regulations, related work items, or references to where you’ve found that particular solution.

    Conclusion

    Always pay attention when writing comments: yes, they often just clutter the code. But they can really add value to the code, in some cases.

    To read more about good and bad comments, here’s a well-detailed article you might like:

    🔗 Clean code tips – comments and formatting

    Happy coding!

    🐧



    Source link

  • How to perform CRUD operations with Entity Framework Core and PostgreSQL | Code4IT

    How to perform CRUD operations with Entity Framework Core and PostgreSQL | Code4IT


    With Entity Framework you can perform operations on relational databases without writing a single line of SQL. We will use EF to integrate PostgreSQL into our application.



    When working with relational databases, you often come across two tasks: writing SQL queries and mapping the results to some DTO objects.

    .NET developers are lucky to have an incredibly powerful tool that can speed up their development: Entity Framework. Entity Framework (in short: EF) is an ORM built with simplicity and readability in mind.

    In this article, we will perform CRUD operations with Entity Framework Core on a database table stored on PostgreSQL.

    Introduction to EF Core

    With Entity Framework you don’t have to write SQL queries in plain text: you write C# code that gets automatically translated into SQL commands. Then the result is automatically mapped to your C# classes.

    Entity Framework supports tons of database engines, such as SQL Server, MySQL, Azure CosmosDB, Oracle, and, of course, PostgreSQL.

    There are a lot of things you should know about EF if you’re new to it. In this case, the best resource is its official documentation.

    But the only way to learn it is by getting your hands dirty. Let’s go!

    How to set up EF Core

    For this article, we will reuse the same .NET Core repository and the same database table we used when we performed CRUD operations with Dapper (a lightweight ORM) and with NpgSql, the library that performs bare-metal operations.

    The first thing to do is, as usual, install the related NuGet package. Here we will need Npgsql.EntityFrameworkCore.PostgreSQL. Since I’ve used .NET 5, I have downloaded version 5.0.10.

    Npgsql.EntityFrameworkCore.PostgreSQL NuGet package

    Then, we need to define and configure the DB Context.

    Define and configure DbContext

    The idea behind Entity Framework is to create DB Context objects that map database tables to C# data sets. DB Contexts are the entry point to the tables, and the EF way to work with databases.

    So, the first thing to do is to define a class that inherits from DbContext:

    public class BoardGamesContext : DbContext
    {
    
    }
    

    Within this class we define one or more DbSets, which represent the collections of data rows in their related DB tables:

    public DbSet<BoardGame> Games { get; set; }
    

    Then we can configure this specific DbContext by overriding the OnConfiguring method and specifying some options; for example, you can specify the connection string:

    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        optionsBuilder.UseNpgsql(CONNECTION_STRING);
        base.OnConfiguring(optionsBuilder);
    }
    

    Remember to call base.OnConfiguring! Otherwise some configurations will not be applied, and the system may not work.

    Also, pay attention to the Port in the connection string! While with other libraries you can define it as

    private const string CONNECTION_STRING = "Host=localhost:5455;" +
        "Username=postgresUser;" +
        "Password=postgresPW;" +
        "Database=postgresDB";
    

    Entity Framework Core requires the port to be specified in a separate field:

    private const string CONNECTION_STRING = "Host=localhost;"+
                "Port=5455;" + // THIS!!!!!
                "Username=postgresUser;" +
                "Password=postgresPW;" +
                "Database=postgresDB";
    

    If you don’t explicitly define the Port, EF Core won’t recognize the destination host.

    Then, we can configure the models mapped to DB tables by overriding OnModelCreating:

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        modelBuilder.Entity<BoardGame>(e => e.ToTable("games"));
        base.OnModelCreating(modelBuilder);
    }
    

    Here we’re saying that the rows in the games table will be mapped to BoardGame objects. We will come back to it later.

    For now, we’re done; here’s the full BoardGamesContext class:

    public class BoardGamesContext : DbContext
    {
        public DbSet<BoardGame> Games { get; set; }
    
        protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
        {
            optionsBuilder.UseNpgsql(CONNECTION_STRING);
            base.OnConfiguring(optionsBuilder);
        }
        private const string CONNECTION_STRING = "Host=localhost;Port=5455;" +
                    "Username=postgresUser;" +
                    "Password=postgresPW;" +
                    "Database=postgresDB";
    
        protected override void OnModelCreating(ModelBuilder modelBuilder)
        {
            modelBuilder.Entity<BoardGame>(e => e.ToTable("games"));
            base.OnModelCreating(modelBuilder);
        }
    }
    

    Add the DbContext to Program

    Now that we have the BoardGamesContext ready, we have to reference it in the Startup class.

    In the ConfigureServices method, add the following instruction:

    services.AddDbContext<BoardGamesContext>();
    

    With this instruction, you make the BoardGamesContext context available across the whole application.

    You can further configure that context using an additional parameter of type Action<DbContextOptionsBuilder>. In this example, you can skip it, since we’ve already configured the BoardGamesContext using the OnConfiguring method. They are equivalent.

    If you don’t like

    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        optionsBuilder.UseNpgsql(CONNECTION_STRING);
        base.OnConfiguring(optionsBuilder);
    }
    

    you can do

    services.AddDbContext<BoardGamesContext>(
        optionsBuilder => optionsBuilder.UseNpgsql(CONNECTION_STRING)
    );
    

    The choice is yours!

    Define and customize the DB Model

    As we know, EF allows you to map DB rows to C# objects. So, we have to create a class and configure it in a way that allows EF Core to perform the mapping.

    Here we have the BoardGame class:

    public class BoardGame
    {
        [System.ComponentModel.DataAnnotations.Key]
        public int Id { get; set; }
    
        public string Name { get; set; }
    
        public int MinPlayers { get; set; }
    
        public int MaxPlayers { get; set; }
    
        public int AverageDuration { get; set; }
    }
    

    Notice that we’ve explicitly declared that Id is the primary key in the table.

    But it’s not enough! This way the code won’t work! 😣

    Have a look at the table on Postgres:

    Games table on Postgres

    Have you noticed it? Postgres uses lowercase names, but we are using PascalCase. The C# names must be 100% identical to those in the database!

    Now we have two ways:

    ➡ Rename all the C# properties to their lowercase equivalent

    public class BoardGame
    {
        [System.ComponentModel.DataAnnotations.Key]
        public int id { get; set; }
        public string name { get; set; }
        /// and so on
    }
    

    ➡ Decorate all the properties with the Column attribute:

    public class BoardGame
    {
        [System.ComponentModel.DataAnnotations.Key]
        [Column("id")]
        public int Id { get; set; }
    
        [Column("name")]
        public string Name { get; set; }
    
        [Column("minplayers")]
        public int MinPlayers { get; set; }
    
        [Column("maxplayers")]
        public int MaxPlayers { get; set; }
    
        [Column("averageduration")]
        public int AverageDuration { get; set; }
    }
    

    Using the Column attribute is also useful when the DB column names and the C# properties differ by more than just the casing, as in:

    [Column("averageduration")]
    public int AvgDuration { get; set; }
    

    Is it enough? Have a look again at the table definition:

    Games table on Postgres

    Noticed the table name? It’s “games”, not “BoardGame”!

    We need to tell EF which table contains the BoardGame objects.

    Again, we have two ways:

    ➡ Override the OnModelCreating method in the BoardGamesContext class, as we’ve seen before:

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        modelBuilder.Entity<BoardGame>(e => e.ToTable("games"));
        base.OnModelCreating(modelBuilder);
    }
    

    ➡ Add the Table attribute to the BoardGame class:

    [Table("games")]
    public class BoardGame
    {...}
    

    Again, the choice is yours.

    CRUD operations with Entity Framework

    Now that the setup is complete, we can perform our CRUD operations. Entity Framework greatly simplifies these kinds of operations, so we can move fast in this part.

    There are two main points to remember:

    1. To access the context, we have to create a new instance of BoardGamesContext, which should be placed in a using block.
    2. When performing operations that change the state of the DB (insert/update/delete rows), you have to explicitly call SaveChanges or SaveChangesAsync to apply those changes. This is useful when performing batch operations on one or more tables (for example, inserting an order in the Order table and updating the user address in the Users table).
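    Putting the two rules together, a typical write operation looks like this (a sketch reusing the BoardGamesContext defined earlier; the sample game values are made up):

    ```csharp
    // Sketch: all pending changes are persisted by the single SaveChangesAsync call
    using (var db = new BoardGamesContext())
    {
        db.Games.Add(new BoardGame { Name = "Chess", MinPlayers = 2, MaxPlayers = 2 });
        // ...more tracked changes on the same context...
        await db.SaveChangesAsync(); // nothing hits the DB before this line
    }
    ```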

    Create

    To add a new BoardGame, we have to initialize the BoardGamesContext context and add a new game to the Games DbSet.

    public async Task Add(BoardGame game)
    {
        using (var db = new BoardGamesContext())
        {
            await db.Games.AddAsync(game);
            await db.SaveChangesAsync();
        }
    }
    

    Read

    If you need a specific entity by its id you can use Find and FindAsync.

    public async Task<BoardGame> Get(int id)
    {
        using (var db = new BoardGamesContext())
        {
            return await db.Games.FindAsync(id);
        }
    }
    

    Or, if you need all the items, you can retrieve them by using ToListAsync

    public async Task<IEnumerable<BoardGame>> GetAll()
    {
        using (var db = new BoardGamesContext())
        {
            return await db.Games.ToListAsync();
        }
    }
    

    Update

    Updating an item is incredibly straightforward: you have to call the Update method, and then save your changes with SaveChangesAsync.

    public async Task Update(int id, BoardGame game)
    {
        using (var db = new BoardGamesContext())
        {
            db.Games.Update(game);
            await db.SaveChangesAsync();
    
        }
    }
    

    For some reason, EF does not provide an asynchronous way to update and remove items. I suppose it’s done to prevent or mitigate race conditions.

    Delete

    Finally, to delete an item you have to call the Remove method, passing it the game to be removed. Of course, you can retrieve that game using FindAsync.

    public async Task Delete(int id)
    {
        using (var db = new BoardGamesContext())
        {
            var game = await db.Games.FindAsync(id);
            if (game == null)
                return;
    
            db.Games.Remove(game);
            await db.SaveChangesAsync();
        }
    }
    

    Further readings

    Entity Framework is impressive, and you can integrate it with tons of database vendors. In the link below you can find the full list. But note that not all the libraries are implemented by the EF team; some are third-party libraries (like the one we used for Postgres):

    🔗 Database Providers | Microsoft docs

    If you want to start working with PostgreSQL, a good way is to download it as a Docker image:

    🔗 How to run PostgreSQL locally with Docker | Code4IT

    Then, if you don’t like Entity Framework, you can perform CRUD operations using the native library, NpgSql:

    🔗 CRUD operations on PostgreSQL using C# and Npgsql | Code4IT

    or, maybe, if you prefer Dapper:

    🔗 PostgreSQL CRUD operations with C# and Dapper | Code4IT

    Finally, you can have a look at the full repository here:

    🔗 Repository used for this article | GitHub

    This article first appeared on Code4IT 🐧

    Wrapping up

    This article concludes the series that explores 3 ways to perform CRUD operations on a Postgres database with C#.

    In the first article, we’ve seen how to perform bare-metal queries using NpgSql. In the second article, we’ve used Dapper, which helps map query results to C# DTOs. Finally, we’ve used Entity Framework to avoid writing SQL queries and have everything in place.

    Which one is your favorite way to query relational databases?

    What are the pros and cons of each approach?

    Happy coding!

    🐧



    Source link

  • Use Debug-Assert to break the debugging flow if a condition fails | Code4IT

    Use Debug-Assert to break the debugging flow if a condition fails | Code4IT


    It would be great if we could break the debugging flow if a condition is (not) met. Can we? Of course!




    Source link

  • Designer Spotlight: Andrés Briganti | Codrops

    Designer Spotlight: Andrés Briganti | Codrops


    My name is Andrés Briganti, and I’m an independent graphic designer based in Buenos Aires, Argentina. I collaborate with brands, institutions, and individuals from around the world.

    While my focus is on visual identity, my work spans various fields of design and visual communication, from physical to digital, from posters to wristwatches. My approach is refined and intentional, with the goal of distilling abstract or complex ideas into distinctive visual forms.

    Selected Projects

    Personal Site

    My most recent, and most visible, digital project is my portfolio website. After years of iterations and a stalled launch, the concept matured into a more cohesive direction. Earlier this year, I teamed up with Joyco Studio to bring it to life. The result has been well received and earned an FWA Site of the Day.

    AB Setima

    A few years ago, I began taking my interest in type design more seriously. I was even part of the team that designed the Geist typeface for Vercel while working at basement.studio.

    My latest exploration in this field is AB Setima, a sans-serif display typeface that blends Art Deco geometry with a modern sensibility. It has refined proportions and tapered inner angles in contrast with sharp outer ones. The typeface includes variants and discretionary ligatures, offering some flexibility in composition.

    Designed with restaurant identities, hotel graphics, and event communications in mind, AB Setima delivers a sense of distinction with some edge to it.

    Einstoffen

    During 2023, I led the rebrand and visual language refinement for Einstoffen, a Swiss brand founded in 2008 that creates distinctive eyewear and fashion for independent-minded individuals. The project focused on sharpening the brand’s visual identity to better reflect its bold, self-assured spirit and evolving product offering.

    Various Identities

    A selection of brand visual identities I’ve created over the years.

    © All rights to the trademarks and designs shown are reserved to their respective owners. Displayed here for presentation purposes only.

    Background

    Previously, I’ve worked as Lead Brand Designer for digital studios and as Design Director for fashion and lifestyle brands. Today, I split my time between my independent practice, where I focus on visual exploration, and Rampant Studio, a collaborative creative bureau I’m building as Creative Director.

    Design Philosophy

    I believe in design that is both conceptually grounded and thoughtfully constructed, work that distills complex ideas into clear, enduring visual forms. My process balances strategic thinking with formal expression, creating identities and systems that are purposeful, distinctive, and built to last. I aim for clarity over noise, and visual languages that reflect the true character of the brands they serve.

    Final Thoughts

    I believe a designer’s greatest asset is the ability to connect the dots across the span of human experience and culture. A lack of interest in history, culture, and seemingly unrelated subjects leads to work that is shallow and short-lived. It’s curiosity – not specific skills or tools – that truly sets us apart.

    Contact

    I’m always happy to connect, share ideas, and explore new projects. Drop me a line anytime.



    Source link

  • How to access the HttpContext in .NET API

    How to access the HttpContext in .NET API


    If your application is exposed on the Web, I guess that you get some values from the HTTP Requests, don’t you?



    If you are building an application that is exposed on the Web, you will probably need to read some data from the current HTTP Request or set some values on the HTTP Response.

    In a .NET API, all the info related to both HTTP Request and HTTP Response is stored in a global object called HttpContext. How can you access it?

    In this article, we will learn how to get rid of the old HttpContext.Current and what we can do to write more testable code.

    Why not HttpContext directly

    Years ago, we used to access the HttpContext directly in our code.

    For example, if we had to access the Cookies collection, we used to do

    var cookies = HttpContext.Current.Request.Cookies;
    

    It worked, right? But this approach has a big problem: it makes our tests hard to set up.

    In fact, we were using a static instance that added a direct dependency between the client class and the HttpContext.

    That’s why the .NET team has decided to abstract the retrieval of that class: we now need to use IHttpContextAccessor.

    Add IHttpContextAccessor

    Now, I have this .NET project that exposes an endpoint, /WeatherForecast, that returns the current weather for a particular city, whose name is stored in the HTTP Header “data-location”.

    The real calculation (well, real… everything’s fake, here 😅) is done by the WeatherService. In particular, by the GetCurrentWeather method.

    public WeatherForecast GetCurrentWeather()
    {
        string currentLocation = GetLocationFromContext();
    
        var rng = new Random();
    
        return new WeatherForecast
        {
            TemperatureC = rng.Next(-20, 55),
            Summary = Summaries[rng.Next(Summaries.Length)],
            Location = currentLocation
        };
    }
    

    We have to retrieve the current location.

    As we said, we can no longer rely on the old HttpContext.Current.Request.

    Instead, we need to inject IHttpContextAccessor in the constructor, and use it to access the Request object:

    public WeatherService(IHttpContextAccessor httpContextAccessor)
    {
        _httpContextAccessor = httpContextAccessor;
    }
    

    Once we have the instance of IHttpContextAccessor, we can use it to retrieve the info from the current HttpContext headers:

    string currentLocation = "";
    
    if (_httpContextAccessor.HttpContext.Request.Headers.TryGetValue("data-location", out StringValues locationHeaders) && locationHeaders.Any())
    {
        currentLocation = locationHeaders.First();
    }
    
    return currentLocation;
    

    Easy, right? We’re almost done.

    Configure Startup class

    If you run the application in this way, you will not be able to access the current HTTP request.

    That’s because we haven’t specified that we want to add IHttpContextAccessor as a service in our application.

    To do that, we have to update the ConfigureServices method by adding this instruction:

    services.AddHttpContextAccessor();
    

    This method comes from the Microsoft.Extensions.DependencyInjection namespace.

    Now we can run the project!

    If we call the endpoint specifying a City in the data-location header, we will see its value in the returned WeatherForecast object, in the Location field:

    Location is taken from the HTTP Headers

    Further improvements

    Is it enough?

    Is it really enough?

    If we use it this way, every class that needs to access the HTTP context will have tests that are quite difficult to set up, because you will need to mock several objects.

    In fact, for mocking HttpContext.Request.Headers, we need to create mocks for HttpContext, for Request, and for Headers.

    This makes our tests harder to write and understand.

    So, my suggestion is to wrap the HttpContext access in a separate class and expose only the methods you actually need.

    For instance, you could wrap the access to HTTP request headers in the GetValueFromRequestHeader method of an IHttpContextWrapper service:

    public interface IHttpContextWrapper
    {
        string GetValueFromRequestHeader(string key, string defaultValue);
    }
    

    That will be the only service that accesses the IHttpContextAccessor instance.

    public class HttpContextWrapper : IHttpContextWrapper
    {
        private readonly IHttpContextAccessor _httpContextAccessor;
    
        public HttpContextWrapper(IHttpContextAccessor httpContextAccessor)
        {
            _httpContextAccessor = httpContextAccessor;
        }
    
        public string GetValueFromRequestHeader(string key, string defaultValue)
        {
            if (_httpContextAccessor.HttpContext.Request.Headers.TryGetValue(key, out StringValues headerValues) && headerValues.Any())
            {
                return headerValues.First();
            }
    
            return defaultValue;
        }
    }
    

    In this way, you will be able to write better tests both for the HttpContextWrapper class, by focusing on the building of the HttpRequest, and for the WeatherService class, so that you can write tests without worrying about setting up complex structures just for retrieving a value.
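To make that benefit concrete, here is a sketch of a unit test that uses a hand-rolled fake instead of a mocking library. It assumes WeatherService has been refactored to depend on IHttpContextWrapper and exposes a GetCurrentCityName method — both names are illustrative, not taken from the snippets above:

```csharp
using System.Collections.Generic;
using Xunit;

// Hypothetical fake: no mocks for HttpContext, Request, or Headers are needed
public class FakeHttpContextWrapper : IHttpContextWrapper
{
    private readonly Dictionary<string, string> _headers;

    public FakeHttpContextWrapper(Dictionary<string, string> headers)
        => _headers = headers;

    public string GetValueFromRequestHeader(string key, string defaultValue)
        => _headers.TryGetValue(key, out string value) ? value : defaultValue;
}

public class WeatherServiceTests
{
    [Fact]
    public void GetCurrentCityName_ReturnsValueFromLocationHeader()
    {
        var wrapper = new FakeHttpContextWrapper(
            new Dictionary<string, string> { ["data-location"] = "Turin" });

        var sut = new WeatherService(wrapper);

        Assert.Equal("Turin", sut.GetCurrentCityName());
    }
}
```

The whole arrangement of HttpContext, Request, and Headers collapses into a single dictionary, which is exactly the point of the wrapper.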

    But pay attention to the dependency lifetime! HTTP Request info lives within – guess what? – its HTTP Request. So, when defining the dependencies in the Startup class, remember to register the IHttpContextWrapper as Transient or, even better, as Scoped. If you don’t remember the difference, I got you covered here!
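For reference, the registrations in ConfigureServices might look like this (a sketch; the Scoped lifetime follows the advice above):

```csharp
public void ConfigureServices(IServiceCollection services)
{
    services.AddControllers();

    // Needed to resolve IHttpContextAccessor (the framework registers it as a singleton)
    services.AddHttpContextAccessor();

    // Scoped: one wrapper instance per HTTP request
    services.AddScoped<IHttpContextWrapper, HttpContextWrapper>();
}
```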

    Wrapping up

    In this article, we’ve learned that you can access the current HTTP request by using IHttpContextAccessor. Of course, you can use it to update the Response too, for instance by adding an HTTP Header.

    Happy coding!

    🐧




  • Avoid using too many Imports in your classes | Code4IT




    This article is not about a tip for writing cleaner code: it aims at pointing out a code smell.

    Of course, once you find this code smell in your code, you can act to eliminate it and, as a consequence, end up with cleaner code.

    The code smell is easy to identify: open your classes and have a look at the list of imports (in C#, the using directives at the top of the file).

    A real example of too many imports

    Here’s a real-life example (I censored the names, of course):

    using MyCompany.CMS.Data;
    using MyCompany.CMS.Modules;
    using MyCompany.CMS.Rendering;
    using MyCompany.Witch.Distribution;
    using MyCompany.Witch.Distribution.Elements;
    using MyCompany.Witch.Distribution.Entities;
    using Microsoft.Extensions.Logging;
    using Newtonsoft.Json;
    using Newtonsoft.Json.Serialization;
    using MyProject.Controllers.VideoPlayer.v1.DataSource;
    using MyProject.Controllers.VideoPlayer.v1.Vod;
    using MyProject.Core;
    using MyProject.Helpers.Common;
    using MyProject.Helpers.DataExplorer;
    using MyProject.Helpers.Entities;
    using MyProject.Helpers.Extensions;
    using MyProject.Helpers.Metadata;
    using MyProject.Helpers.Roofline;
    using MyProject.ModelsEntities;
    using MyProject.Models.ViewEntities.Tags;
    using MyProject.Modules.EditorialDetail.Core;
    using MyProject.Modules.VideoPlayer.Models;
    using System;
    using System.Collections.Generic;
    using System.ComponentModel;
    using System.Linq;
    
    namespace MyProject.Modules.Video
    

    Sound familiar?

    If we exclude the imports necessary for basic C# functionality,

    using System;
    using System.Collections.Generic;
    using System.ComponentModel;
    using System.Linq;
    

    we still have lots of dependencies on external modules.

    This means that if something changes in one of the classes that are part of those namespaces, we may end up with code that is difficult to update.

    Class dependencies

    Also, guess what comes with all those imports? A constructor with too many parameters (in fact, this class has 11 dependencies injected in the constructor) and code that is too long and difficult to understand (in fact, this class has 500+ lines).

    A solution? Refactor your project to avoid scattering those dependencies.
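As a sketch of what that refactoring could look like, related low-level dependencies can be hidden behind a single facade, so consumer classes import one namespace and inject one service instead of many. All the interface and method names below are hypothetical, chosen only to echo the namespaces in the example above:

```csharp
using Microsoft.Extensions.Logging;

// Hypothetical facade grouping some of the scattered dependencies
public interface IVideoMetadataFacade
{
    string GetVideoTitle(string videoId);
}

public class VideoMetadataFacade : IVideoMetadataFacade
{
    private readonly IMetadataHelper _metadata;
    private readonly IDataExplorer _dataExplorer;
    private readonly ILogger<VideoMetadataFacade> _logger;

    public VideoMetadataFacade(
        IMetadataHelper metadata,
        IDataExplorer dataExplorer,
        ILogger<VideoMetadataFacade> logger)
    {
        _metadata = metadata;
        _dataExplorer = dataExplorer;
        _logger = logger;
    }

    public string GetVideoTitle(string videoId)
    {
        // Consumers no longer reference the Helpers namespaces directly
        _logger.LogDebug("Reading metadata for {VideoId}", videoId);
        return _metadata.GetTitle(_dataExplorer.Find(videoId));
    }
}
```

A controller then depends only on IVideoMetadataFacade, shrinking both its using list and its constructor.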

    Wrapping up

    Having all those imports (in C#, the using directives) is a good indicator that your code does too many things. You should focus on minimizing those imports without cheating (for example, by using global imports).

    Happy coding!

    🐧




  • Developer Spotlight: Ruud Luijten | Codrops



    I’m a 32-year-old freelance developer based in Antwerp, Belgium, with a degree in Multimedia Technology and over a decade of experience crafting digital experiences. Early in my career, I took a bold step and moved to New York City to join the creative agency Your Majesty as a front-end developer — a launchpad that allowed me to work on high-profile projects for global brands like BMW and Spotify.

    After a year in the U.S., I returned home to be closer to family and friends and continued refining my skills with renowned agencies such as Build in Amsterdam, Hello Monday, Watson DG, Exo Ape, and Studio 28K. Over the years, I’ve had the privilege of collaborating with top-tier creative teams on projects for clients including Amazon, Apple, Disney+, Mammut, Sony, WeTransfer, and more.

    Today, as an independent developer, I partner with agencies around the world to deliver design and motion-driven digital experiences. Outside of work, you’ll often find me trail running, playing golf, or exploring the outdoors through my passion for landscape and nature photography.

    Featured work

    WRK Timepieces – Advanced Horology Engineering

    WRK Timepieces is renowned for its commitment to crafting luxury timepieces. Their artisans craft each piece with exceptional care, combining traditional craftsmanship with cutting-edge innovation. Utilizing premium materials such as titanium and DLC (Diamond-Like Carbon), WRK Timepieces produces watches that embody precision engineering and timeless elegance. For their website, I worked with 28K Studio and their amazing team of designers.

    I developed a product page for WRK’s latest timepiece, using immersive 3D scroll-triggered animations to reveal features like the precision-engineered movement, Flying Tourbillon, and Double-wishbone System. The fluid, interactive sequences let users explore the watch in rich detail from every angle, while a seamless scrolling flow highlights its design, materials, and craftsmanship in a premium, minimalist style true to the WRK brand.

    Built with Vue/Nuxt, SCSS, TypeScript, GSAP, Storyblok for content, and Vercel for fast, reliable hosting.

    Mammut – Digital Flagship Store

    Mammut is a Swiss premium outdoor brand known for its high-performance gear and apparel, combining innovative design with over 160 years of alpine heritage to equip adventurers for the most demanding environments.

    From concept to launch, Build in Amsterdam collaborated closely with Mammut to create their new digital flagship store, united by a shared vision for quality. We set a fresh creative direction for lifestyle and product photography, shifting toward a more emotional shopping experience.

    The mobile-first design system meets WCAG 2.1 accessibility standards, ensuring usability for all. To highlight the story behind every product, we designed editorial modules that encourage deeper exploration without overshadowing commerce.

    Built with Next.js for responsive, high-performance interfaces, the platform integrates Contentful as a headless CMS, Algolia for lightning-fast search, and a custom PIM for real-time product data.

    Exo Ape – Portfolio

    Exo Ape is a digital design studio that plants the flag at the intersection of technology, digital branding and content creation. For over a decade their team has had the privilege of working with global brands and inspiring startups to tell brand-led stories and shape digital identities.

    I’ve collaborated with the Exo Ape team on numerous projects and had the privilege of contributing to their new studio website. Their focus consistently lies in strong design and bringing sites to life through engaging motion design. Working closely with Rob Smittenaar, I developed a robust foundation using Vue/Nuxt and created a backend-for-frontend API layer that streamlined data usage within the frontend app.

    The project uses Vue/Nuxt for a dynamic frontend, SCSS for modular styling, and GSAP for smooth animations and transitions. Content is managed with Storyblok, and the site is hosted on Vercel for fast, reliable delivery.

    Columbia Pictures – 100 Years Anniversary

    Celebrating a century of cinema – In honor of Columbia Pictures’ 100th anniversary, I worked with Sony Pictures, Exo Ape and Watson DG to create a century-filled digital experience and quiz. Through this, we took visitors on a journey, not only through entertainment history, but also through a personalized path of self-discovery helping fans uncover the films and television shows that have shaped them the most.

    In order to create a uniquely individual experience, we implemented a strategic backend development to keep the journey original, dynamic, and varied for every user – no matter how many times they return. This, along with a diverse mix of visuals and questions, created a fresh and engaging experience for everyone. The assets and questions are sourced from an extensive database of films, TV shows, and actors.

    Each run through the quiz presents the visitor with eight interactive questions that gradually change based on their responses. The creative team developed a design system featuring five interactive mechanics — Circles, This or That, Slider, Rotator, and Drag and Drop — to present quiz questions in an engaging way, encouraging users to select answers through fun, dynamic interactions.

    With titles, genres, and visuals from a hundred years of film and television, we were able to tailor the experience around each answer. For example, if the quiz finds that the user leans toward horror, more questions will be horror-themed. If they lean toward 80s films, more options will tap into nostalgia.

    After diving into Columbia’s 100 years and completing the quiz, visitors were presented with a personalized shareable through an automated asset generator. This asset — accompanied with a range of social sharing options — played an important role in maintaining engagement and enthusiasm throughout the campaign and beyond.

    The project’s frontend was built with Vue/Nuxt, SCSS, and GSAP for dynamic interfaces, modular styling, and smooth animations. The backend uses Node, Express, and Canvas to generate the shareable asset.

    General Electric – Innovation Barometer

    Every two years, General Electric surveys the world’s leading innovators to explore the future of innovation. The findings are presented in the Global Innovation Barometer, a biennial campaign created in collaboration with Little Red Robot.

    The creative team developed a futuristic visual system for the campaign, based on a modern abstract 3D design style. Photography and live-action video were combined with 3D animations and illustrations to create a rich visual experience.

    The project was built using Vue for a reactive and component-driven frontend, combined with Stylus for efficient and scalable styling. For smooth, performant animations, GSAP was integrated to handle complex transitions and scroll-triggered effects, enhancing the overall user experience.

    Extra: Photography Portfolio 2.0 sneak peek

    I’m excited to share that I’m currently collaborating with Barend Eijkenduijn and Clay Boan on the next version of my photography portfolio. This upcoming Version 2.0 will be more than just a refreshed collection of my work — it will also include a fully integrated print shop. Through this new platform, you’ll be able to browse my photographs, select your favorites, and customize your prints with a range of options, including various sizes, paper types, and framing styles. Our goal is to create a seamless and enjoyable experience, making it easier than ever to bring a piece of my photography into your home or workspace. We’re working towards an official launch by the end of this year, and I can’t wait to share it with you.

    Final Thoughts

    As a developer, I believe that curiosity and consistency are essential to growing in this field — but so is making time to explore, experiment, and create personal passion projects. Sharing that work can open unexpected doors.

    Many thanks to Codrops and Manoela for this opportunity — it’s a real honor. Codrops has long been one of my favorite resources, and it’s played a big role in my own learning journey.

    You can find more of my work on my portfolio, Twitter, and LinkedIn.




  • Exporting a Cloth Simulation from Blender to an Interactive Three.js Scene




    This tutorial walks through creating an interactive animation: starting in Blender by designing a button and simulating a cloth-like object that drops onto a surface and settles with a soft bounce.

    After baking the cloth simulation, the animation is exported and brought into a Three.js project, where it becomes an interactive scene that can be replayed on click.

    By the end, you’ll have a user-triggered animation that blends Blender’s physics simulations with Three.js rendering and interactivity.

    Let’s dive in!

    Step 1: Create a Cube and Add Subdivisions

    1. Start a New Project: Open Blender and delete the default cube (select it and press X, then confirm).
    2. Add a Cube: Press Shift + A > Mesh > Cube to create a new cube.
    3. Enter Edit Mode: Select the cube, then press Tab to switch to Edit Mode.
    4. Subdivide the Cube: Press Ctrl + R to add a loop cut, hover over the cube, and scroll your mouse wheel to increase the number of cuts.
    5. Apply Subdivision: With the cube still selected in Object Mode, go to the Modifiers panel (wrench icon), and click Add Modifier > Subdivision Surface. Set the Levels to 2 or 3 for a smoother result, then click Apply.

    Step 2: Add Cloth Physics and Adjust Settings

    1. Select the Cube: Ensure your subdivided cube is selected in Object Mode.
    2. Add Cloth Physics: Go to the Physics tab in the Properties panel. Click Cloth to enable cloth simulation.
    3. Pin the Edges (Optional): If you want parts of the cube to stay fixed (e.g., the top), switch to Edit Mode, select the vertices you want to pin, go back to the Physics tab, and under Cloth > Shape, click Pin to assign those vertices to a vertex group.
    4. Adjust Key Parameters:
      • Quality Steps: Set to 10-15 for smoother simulation (higher values increase accuracy but slow down computation).
      • Mass: Set to around 0.2-0.5 kg for a lighter, more flexible cloth.
      • Pressure: Under Cloth > Pressure, enable it and set a positive value (e.g., 2-5) to simulate inflation. This will make the cloth expand as if air is pushing it outward.
      • Stiffness: Adjust Tension and Compression (e.g., 10-15) to control how stiff or loose the cloth feels.
    5. Test the Simulation: Press the Spacebar to play the animation and see the cloth inflate. Tweak settings as needed.

    Step 3: Add a Ground Plane with a Collision

    1. Create a Ground Plane: Press Shift + A > Mesh > Plane. Scale it up by pressing S and dragging (e.g., scale it to 5-10x) so it’s large enough for the cloth to interact with.
    2. Position the Plane: Move the plane below the cube by pressing G > Z > -5 (or adjust as needed).
    3. Enable Collision: Select the plane, go to the Physics tab, and click Collision. Leave the default settings.
    4. Run the Simulation: Press the Spacebar again to see the cloth inflate and settle onto the ground plane.

    Step 4: Adjust Materials and Textures

    1. Select the Cube: In Object Mode, select the cloth (cube) object.
    2. Add a Material: Go to the Material tab, click New to create a material, and name it.
    3. Set Base Color/UV Map: In the Base Color slot, choose a fabric-like color (e.g., red or blue) or connect an image texture by clicking the yellow dot next to Base Color and selecting Image Texture. Load a texture file if you have one.
    4. Adjust Roughness and Specular: Set Roughness to 0.1-0.3 for a soft fabric look.
    5. Apply to Ground (Optional): Repeat the process for the plane, using a simple gray or textured material for contrast.

    Step 5: Export as MDD and Generate Shape Keys for Three.js

    To use the cloth animation in a Three.js project, we’ll export the physics simulation as an MDD file using the NewTek MDD plugin, then re-import it to create Shape Keys. Follow these steps:

    1. Enable the NewTek MDD Plugin:
      1. Go to Edit > Preferences > Add-ons.
      2. Search for “NewTek” or “MDD” and enable the “Import-Export: NewTek MDD format” add-on by checking the box. Close the Preferences window.
    2. Apply All Modifiers and All Transform:
      1. In Object Mode, select the cloth object.
      2. Go to the Modifiers panel (wrench icon). For each modifier (e.g., Subdivision Surface, Cloth), click the dropdown and select Apply. This “freezes” the mesh with its current shape and physics data.
      3. Ensure no unapplied deformations (e.g., scale) remain: Press Ctrl + A > All Transforms to apply location, rotation, and scale.
    3. Export as MDD:
      1. With the cloth object selected, go to File > Export > Lightwave Point Cache (.mdd).
      2. In the export settings (bottom left):
        • Set FPS (frames per second) to match your project (e.g., 24, 30, or 60).
        • Set the Start/End Frame of your animation.
      3. Choose a save location (e.g., “inflation.mdd”) and click Export MDD.
    4. Import the MDD:
      1. Go to File > Import > Lightwave Point Cache (.mdd), and load the “inflation.mdd” file.
      2. In the Physics and Modifiers panels, remove the cloth simulation settings, as the animation is now stored in Shape Keys.

    Step 6: Export the Cloth Simulation Object as GLB

    After importing the MDD, select the cube with the animation data.

    1. Export as glTF 2.0 (.glb/.gltf): Go to File > Export > glTF 2.0 (.glb/.gltf).
    2. Check Shape Keys and Animation
      1. Under the Data section, check Shape Keys to include the morph targets generated from the animation.
      2. Check Animations to export the animation data tied to the Shape Keys.
    3. Export: Choose a save location (e.g., “inflation.glb”) and click Export glTF 2.0. This file is now ready for use in Three.js.

    Step 7: Implement the Cloth Animation in Three.js

    In this step, we’ll use Three.js with React (via @react-three/fiber) to load and animate the cloth inflation effect from the inflation.glb file exported in Step 6. Below is the code with explanations:

    1. Set Up Imports and File Path:
      1. Import necessary libraries: THREE for core Three.js functionality, useRef, useState, useEffect from React for state and lifecycle management, and utilities from @react-three/fiber and @react-three/drei for rendering and controls.
      2. Import GLTFLoader from Three.js to load the .glb file.
      3. Define the model path: const modelPath = '/inflation.glb'; points to the exported file (adjust the path based on your project structure).
    2. Create the Model Component:
      1. Define the Model component to handle loading and animating the .glb file.
      2. Use state variables: model for the loaded 3D object, loading to track progress, and error for handling issues.
      3. Use useRef to store the AnimationMixer (mixerRef) and animation actions (actionsRef) for controlling playback.
    3. Load the Model with Animations:
      1. In a useEffect hook, instantiate GLTFLoader and load inflation.glb.
      2. On success (gltf callback):
        • Extract the scene (gltf.scene) and create an AnimationMixer to manage animations.
        • For each animation clip in gltf.animations:
          • Set duration to 6 seconds (clip.duration = 6).
          • Create an AnimationAction (mixer.clipAction(clip)).
          • Configure the action: clampWhenFinished = true stops at the last frame, loop = THREE.LoopOnce plays once, and setDuration(6) enforces the 6-second duration.
          • Reset and play the action immediately, storing it in actionsRef.current.
        • Update state with the loaded model and set loading to false.
      3. Log loading progress with the xhr callback.
      4. Handle errors in the error callback, updating error state.
      5. Clean up the mixer on component unmount.
    4. Animate the Model:
      1. Use useFrame to update the mixer each frame with mixerRef.current.update(delta), advancing the animation based on time.
      2. Add interactivity:
        • handleClick: Resets and replays all animations on click.
        • onPointerOver/onPointerOut: Changes the cursor to indicate clickability.
    5. Render the Model:
      1. Return null if still loading, an error occurs, or no model is loaded.
      2. Return a <primitive> element with the loaded model, enabling shadows and attaching event handlers.
    6. Create a Reflective Ground:
      1. Define MetalGround as a mesh with a plane geometry (args={[100, 100]}).
      2. Apply MeshReflectorMaterial with properties like metalness={0.5}, roughness={0.2}, and color="#151515" for a metallic, reflective look. Adjust blur, strength, and resolution as needed.
    7. Set Up the Scene:
      1. In the App component, create a <Canvas> with a camera positioned at [0, 35, 15] and a 25-degree FOV.
      2. Add a directionalLight at [0, 15, 0] with shadows enabled.
      3. Include an Environment preset (“studio”) for lighting, a Model at [0, 5, 0], ContactShadows for realism, and the MetalGround rotated and positioned below.
      4. Add OrbitControls for interactive camera movement.
    import * as THREE from 'three';
    import { useRef, useState, useEffect } from 'react';
    import { Canvas, useFrame } from '@react-three/fiber';
    import { OrbitControls, Environment, MeshReflectorMaterial, ContactShadows } from '@react-three/drei';
    import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';
    import { DRACOLoader } from 'three/examples/jsm/loaders/DRACOLoader.js';
    
    const modelPath = '/inflation.glb';
    
    function Model({ ...props }) {
      const [model, setModel] = useState<THREE.Group | null>(null);
      const [loading, setLoading] = useState(true);
      const [error, setError] = useState<unknown>(null);
      const mixerRef = useRef<THREE.AnimationMixer | null>(null);
      const actionsRef = useRef<THREE.AnimationAction[]>([]);
    
      const handleClick = () => {
        actionsRef.current.forEach((action) => {
          action.reset();
          action.play();
        });
      };
    
      const onPointerOver = () => {
        document.body.style.cursor = 'pointer';
      };
    
      const onPointerOut = () => {
        document.body.style.cursor = 'auto';
      };
    
      useEffect(() => {
        const loader = new GLTFLoader();
        const dracoLoader = new DRACOLoader();
        dracoLoader.setDecoderPath('https://www.gstatic.com/draco/v1/decoders/');
        loader.setDRACOLoader(dracoLoader);
    
        loader.load(
          modelPath,
          (gltf) => {
            const mesh = gltf.scene;
            const mixer = new THREE.AnimationMixer(mesh);
            mixerRef.current = mixer;
    
            if (gltf.animations && gltf.animations.length) {
              gltf.animations.forEach((clip) => {
                clip.duration = 6;
                const action = mixer.clipAction(clip);
                action.clampWhenFinished = true;
                action.loop = THREE.LoopOnce;
                action.setDuration(6);
                action.reset();
                action.play();
                actionsRef.current.push(action);
              });
            }
    
            setModel(mesh);
            setLoading(false);
          },
          (xhr) => {
            console.log(`Loading: ${(xhr.loaded / xhr.total) * 100}%`);
          },
          (error) => {
            console.error('An error happened loading the model:', error);
            setError(error);
            setLoading(false);
          }
        );
    
        return () => {
          if (mixerRef.current) {
            mixerRef.current.stopAllAction();
          }
        };
      }, []);
    
      useFrame((_, delta) => {
        if (mixerRef.current) {
          mixerRef.current.update(delta);
        }
      });
    
      if (loading || error || !model) {
        return null;
      }
    
      return (
        <primitive
          {...props}
          object={model}
          castShadow
          receiveShadow
          onClick={handleClick}
          onPointerOver={onPointerOver}
          onPointerOut={onPointerOut}
        />
      );
    }
    
    function MetalGround({ ...props }) {
      return (
        <mesh {...props} receiveShadow>
          <planeGeometry args={[100, 100]} />
          <MeshReflectorMaterial
            color="#151515"
            metalness={0.5}
            roughness={0.2}
            blur={[0, 0]}
            resolution={2048}
            mirror={0}
          />
        </mesh>
      );
    }
    
    export default function App() {
      return (
        <div id="content">
          <Canvas camera={{ position: [0, 35, 15], fov: 25 }}>
            <directionalLight position={[0, 15, 0]} intensity={1} shadow-mapSize={1024} />
    
            <Environment preset="studio" background={false} environmentRotation={[0, Math.PI / -2, 0]} />
            <Model position={[0, 5, 0]} />
            <ContactShadows opacity={0.5} scale={10} blur={5} far={10} resolution={512} color="#000000" />
            <MetalGround rotation-x={Math.PI / -2} position={[0, -0.01, 0]} />
    
            <OrbitControls
              enableZoom={false}
              enablePan={false}
              enableRotate={true}
              enableDamping={true}
              dampingFactor={0.05}
            />
          </Canvas>
        </div>
      );
    }

    And that’s it! Starting from a cloth simulation in Blender, we turned it into a button that drops into place and reacts with a bit of bounce inside a Three.js scene.

    This workflow shows how Blender’s physics simulations can be exported and combined with Three.js to create interactive, real-time experiences on the web.




  • How I automated my blogging workflow with GitHub, PowerShell, and Azure | Code4IT



    After 100 articles, I’ve found some neat ways to automate my blogging workflow. I will share my experience and the tools I use from the very beginning to the very end.


    This is my 100th article 🥳 To celebrate it, I want to share with you the full process I use for writing and publishing articles.

    In this article I will share all the automation and tools I use for writing, starting from the moment an idea for an article pops up in my mind to what happens weeks after an article has been published.

    I hope to give you some ideas to speed up your publishing process. Of course, I’m open to suggestions to improve my own flow: perhaps (well, certainly), you use better tools and processes, so feel free to share them.

    Introducing my blog architecture

    To better understand what’s going on, I need a very brief overview of the architecture of my blog.

    It is written in Gatsby, a framework based on ReactJS that, in short, allows you to transform Markdown files into blog posts (it does many other things, but they are not important for the purpose of this article).

    So, all my blog is stored in a private GitHub repository. Every time I push some changes on the master branch, a new deployment is triggered, and I can see my changes on my blog within a few minutes.

    As I said, I use Gatsby. But the key point here is that my blog is stored in a GitHub repo: this means that everything you’ll read here is valid for any Headless CMS based on Git, such as Gatsby, Hugo, NextJS, and Jekyll.

    Now that you know some general aspects, it’s time to deep dive into my writing process.

    Before writing: organizing ideas with GitHub

    My central source, as you might have already understood, is GitHub.

    There, I write all my notes and keep track of the status of my articles.

    Everything is quite well organized, and with the support of some automation, I can speed up my publishing process.

    GitHub Projects to track the status of the articles

    GitHub Projects is the part of GitHub that allows you to organize GitHub Issues and track their status.

    GitHub projects

    I’ve created 2 GitHub Projects: one for the main articles (like this one), and one for my C# and Clean Code Tips.

    In this way, I can use different columns and have more flexibility when handling the status of the tasks.

    GitHub issues templates

    As I said, to write my notes I use GitHub issues.

    When I add a new Issue, the first thing I do is define which type of article I want to write. And, since many weeks or months can pass between when I come up with the idea for an article and when I start writing it, I need to organize my ideas in a structured way.

    To do that, I use GitHub templates. When I create a new Issue, I choose which kind of article I’m going to write.

    The list of GitHub issues templates I use

    Based on the layout, I can add different info. For instance, when I want to write a new “main” article, I see this form

    Article creation form as generated by a template

    which is prepopulated with some fields:

    • Title: with a placeholder ([Article] )
    • Content: with some sections (the titles, translated from Italian, mean Topics, Links, General notes)
    • Labels: I automatically assign the Article label to the issue (you’ll see later why I do that)

    How can you create GitHub issue templates? All you need is a Markdown file under the .github/ISSUE_TEMPLATE folder with content similar to this one.

    ---
    name: New article
    about: New blog article
    title: "[Article] - "
    labels: Article
    assignees: bellons91
    ---
    
    ## Argomenti
    
    ## Link
    
    ## Appunti vari
    

    And you’re good to go!

    GitHub action to assign issues to a project

    Now I have GitHub Projects and different GitHub Issues Templates. How can I join the different parts? Well, with GitHub Actions!

    With GitHub Actions, you can automate almost everything that happens in GitHub (and outside) using YAML files.

    So, here’s mine:

    Auto-assign to project GitHub Action

    For better readability, you can find the Gist here.

    This action looks for opened and labeled issues and pull requests, and based on the value of the label it assigns the element to the correct project.
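My exact workflow is in the Gist linked above; as a minimal sketch of the same idea, you could use GitHub's actions/add-to-project action (the project URL, token secret, and version pin below are placeholders, not my actual configuration):

```yaml
# .github/workflows/add-to-project.yml — a simplified, hypothetical example
name: Add issues to project
on:
  issues:
    types: [opened, labeled]
jobs:
  add-to-project:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/add-to-project@v1.0.2
        with:
          # placeholder project URL — point it at your own board
          project-url: https://github.com/users/<your-user>/projects/1
          github-token: ${{ secrets.ADD_TO_PROJECT_PAT }}
          # only issues carrying this label get added
          labeled: Article
```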

In this way, after I choose a template, fill in the fields, and add additional labels (like C#, Docker, and so on), I can see my newly created issue directly in the Articles board. Neat 😎

    Writing

Now it’s time to write!

    As I said, I’m using Gatsby, so all my articles are stored in a GitHub repository and written in Markdown.

    For every article I write, I use a separate git branch: in this way, I’m free to update the content already online (in case of a typo) without publishing my drafts.

    But, of course, I automated it! 😎

PowerShell script to scaffold a new article

    Every article lives in its /content/posts/{year}/{folder-name}/article.md file. And they all have a cover image in a file named cover.png.

    Also, every MD file begins with a Frontmatter section, like this:

    ---
    title: "How I automated my publishing flow with Gatsby, GitHub, PowerShell and Azure"
    path: "/blog/automate-articles-creations-github-powershell-azure"
    tags: ["MainArticle"]
    featuredImage: "./cover.png"
    excerpt: "a description for 072-how-i-create-articles"
    created: 2021-11-20
    updated: 2021-11-20
    ---
    

But, you know, I was tired of creating everything from scratch. So I wrote a PowerShell script to do everything for me.

    PowerShell script to scaffold a new article

    You can find the code in this Gist.

    This script performs several actions:

    1. Switches to the Master branch and downloads the latest updates
    2. Asks for the article slug that will be used to create the folder name
    3. Creates a new branch using the article slug as a name
    4. Creates a new folder that will contain all the files I will be using for my article (markdown content and images)
    5. Creates the article file with the Frontmatter part populated with dummy values
    6. Copies a placeholder image into this folder; this image will be the temporary cover image

    In this way, with a single command, I can scaffold a new article with all the files I need to get started.
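The full script is in the Gist linked above; a simplified sketch of those steps (the folder layout matches my repo, but the placeholder-image path and dummy values are illustrative) could look like this:

```powershell
# Simplified sketch — the real script is in the Gist above.
# 1. Switch to Master and pull the latest updates
git checkout master
git pull

# 2-3. Ask for the article slug and create a branch named after it
$slug = Read-Host "Article slug"
git checkout -b $slug

# 4. Create the folder that will hold the markdown content and images
$folder = "content/posts/$((Get-Date).Year)/$slug"
New-Item -ItemType Directory -Path $folder | Out-Null

# 5. Create the article file with a dummy Frontmatter section
$today = Get-Date -Format "yyyy-MM-dd"
@"
---
title: "TODO"
path: "/blog/$slug"
tags: ["MainArticle"]
featuredImage: "./cover.png"
excerpt: "a description for $slug"
created: $today
updated: $today
---
"@ | Set-Content "$folder/article.md"

# 6. Copy a placeholder image as the temporary cover (path is illustrative)
Copy-Item "assets/placeholder-cover.png" "$folder/cover.png"
```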

Ok, but how can I run a PowerShell script in a Gatsby repository?

I added this script to the package.json file:

    "create-article": "@powershell -NoProfile -ExecutionPolicy Unrestricted -Command ./article-creator.ps1",
    

    where article-creator.ps1 is the name of the file that contains the script.

    Now I can simply run npm run create-article to have a new empty article in a new branch, already updated with everything published in the Master branch.

    Markdown preview on VS Code

I use Visual Studio Code to write my articles: I like it because it’s quite fast and has lots of functionality for writing in Markdown (you can pick your favorites in the Extensions store).

One of my favorites is the side preview. To see the result of your Markdown in a side panel, press CTRL+SHIFT+P and select Open Preview to the Side.

    Here’s what I can see right now while I’m writing:

    Markdown preview on the side with VS Code

    Grammar check with Grammarly

Then, it’s time for a grammar check. I use Grammarly, which helps me fix lots of errors (well, lately only a few: it means I’ve improved a lot! 😎).

    I copy the Markdown in their online editor, fix the issues, and copy it back into my repo.

    Fun fact: the online editor recognizes that you’re using Markdown and automatically checks only the actual text, ignoring all the symbols you use in Markdown (like brackets).

    Unprofessional, but fun, cover images

    One of the tasks I like the most is creating my cover images.

I don’t use stock images; I prefer less professional but more original cover images.

    Some of the cover images for my articles

    You can see all of them here.

    Creating and scheduling PR on GitHub with Templates and Actions

Now that my article is complete, I can mark it as ready to be scheduled.

To do that, I open a Pull Request to the Master branch and, again, add some automation!

    I have created a PR template in an MD file, which I use to create a draft of the PR content.

    Pull Request form on GitHub

    In this way, I can define which task (so, which article) is related to this PR, using the “Closes” formula (“Closes #111174” means that I’m closing the Issue with ID 111174).

    Also, I can define when this PR will be merged on Master, using the /schedule tag.
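A PR template is just a Markdown file placed at .github/PULL_REQUEST_TEMPLATE.md. A minimal sketch of such a template (the section names and the example date are illustrative, not my exact template) could be:

```markdown
<!-- .github/PULL_REQUEST_TEMPLATE.md — simplified example -->
## Related issue

Closes #

## Schedule

/schedule 2021-11-23
```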

It works because I have integrated a GitHub Action, merge-schedule, into my workflow: it reads the date from that field to understand when the PR must be merged.

    YAML of Merge Schedule action

    So, every Tuesday at 8 AM, this action runs to check if there are any PRs that can be merged. If so, the PR will be merged into master, and the CI/CD pipeline builds the site and publishes the new content.
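If you want to try it yourself, here is a minimal sketch of such a workflow, based on the public gr2m/merge-schedule-action (the version pin is an assumption; the cron expression matches the Tuesday-at-8 schedule):

```yaml
# .github/workflows/merge-schedule.yml — simplified example
name: Merge Schedule
on:
  pull_request:
    types: [opened, edited, synchronize]
  schedule:
    # every Tuesday at 8:00
    - cron: "0 8 * * 2"
jobs:
  merge_schedule:
    runs-on: ubuntu-latest
    steps:
      - uses: gr2m/merge-schedule-action@v2
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```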

As usual, you can find the code of this action here.

    After the PR is merged, I also receive an email that notifies me of the action.

    After publishing

    Once a new article is online, I like to give it some visibility.

    To do that, I heavily rely on Azure Logic Apps.

    Azure Logic App for sharing on Twitter

    My blog exposes an RSS feed. And, obviously, when a new article is created, a new item appears in the feed.

    I use it to trigger an Azure Logic App to publish a message on Twitter:

    Azure Logic App workflow for publishing on Twitter

    The Logic App reads the newly published feed item and uses its metadata to create a message that will be shared on Twitter.

    If you prefer, you can use a custom Azure Function! The choice is yours!

    Cross-post reminder with Azure Logic Apps

Similarly, I use an Azure Logic App to send myself an email reminding me to cross-post my articles to other platforms.

    Azure Logic App workflow for crosspost reminders

    I’ve added a delay so that my content lives longer, and I can repost it even after weeks or months.

Unluckily, I have to cross-post my articles manually. This is quite time-consuming, especially when there are lots of images: in my MD files I use relative paths, so when porting my content to different platforms I have to find the absolute URL for each image.

    And, my friends, this is everything that happens in the background of my blog!

    What I’m still missing

I’ve put a lot of effort into my blog, and I’m incredibly proud of it!

    But still, there are a few things I’d like to improve.

    SEO Tools/analysis

I’ve never considered SEO. Or, better, keywords.

    I write for the sake of writing, and because I love it. And I don’t like to stuff my content with keywords just to rank better on search engines.

I take care of things like alt texts, well-structured sections, and so on. But I’m not able to follow the “rules” to find the best keywords.

    Maybe I should use some SEO tools to find the best keywords for me. But I don’t want to bend to that way of creating content.

    Also, I should spend more time thinking of the correct title and section titles.

    Any idea?

    Easy upgrade of Gatsby/Migrate to other headless CMSs

    Lastly, I’d like to find another theme or platform and leave the one I’m currently using.

    Not because I don’t like it. But because many dependencies are outdated, and the theme I’m using hasn’t been updated since 2019.

    Wrapping up

    That’s it: in this article, I’ve explained everything that I do when writing a blog post.

    Feel free to take inspiration from my automation to improve your own workflow, and contact me if you have some nice improvements or ideas: I’m all ears!

    So, for now, happy coding!

    🐧


