
  • The Making of a Personal Project Platform: A Portfolio that Grew out of Process and Play




    This summer I created my Personal Project Platform. It wasn’t exactly intentional. When I realised where my process was going, I was already some way along.

    Speaking of process, I’m a big fan. When you’re ready to surrender, you’ll find yourself in places you wouldn’t expect. Anyway, two paths came together when I discovered I was working on my Personal Project Platform. Let’s talk about the first one.

    Path 1: A Necessary Happy Place

    As a designer, or as a human being for that matter, not every day is full of inspiration. Especially when the design-and-AI landscape changes as fast as it does now, it’s sometimes hard to see the big picture.

    As a remedy, I started building a moodboard that would serve as my Happy Place. Whenever I came across a reference that made me smile, I put it there. It had sections for my dream office; quotes and thoughts that resonated with me; and random image fragments that, together, felt like me ~ or at least a designer version of me. I started adding my own scribbles, notes and thoughts about purpose: why am I still doing this? What am I looking for as a designer?

    A section from my Happy Place. Snippets from MyMind, Bon Iver, Collins, Orchid, Kode, Daylight and other work from great designers.

    Path 2: Instagram Experiments

    One evening in December 2022, I had a drink with a designer friend. We were making random things just for fun. At work, I had shifted into more of a managerial role, and I missed designing. 

    Then I thought: why not throw it online? So I created an Instagram account and posted my first Processing sketch.

    The more I made, the more I wanted to make. Over time, this habit became part of me. Sketches became interactive, but it bothered me they only ran locally ~ I was the only one who could interact with them. I also started sharing quick tutorials, and was amazed by how many positive responses I got from people who felt inspired to make something of their own.

    Where the Two Paths Meet

    Meanwhile, my “Happy Place” notes grew longer and more intentional. I wanted more people to interact with my sketches. Since I was doing it all for fun, why not share the source code? Why not collect my resources for others to use?

    Slowly it became an idea for a platform: one where the intentional and the unexpected coexist, showing new designers ~ especially with AI replacing all the fun ~ that learning a craft, practising, and training your creative muscle still matter. 

    Now I just had to build it.

    I started with just a few basic components in Figma.

    Building the Platform

    Since we’re on Codrops, let’s talk code. I have a background in PHP and JavaScript ~ old-school, before ES6 or TypeScript, let alone Vue or React. I wanted to use this project to learn something new.

    After some research, I decided on Nuxt.js. From what I read, it’s easier to set up than Next.js. And since my platform isn’t likely to scale any time soon, I think it does the job. I had also played with Prismic CMS a few years back. Lightweight, not too many features, but fine for me. So I watched some Nuxt.js+Prismic tutorials, and off I went.

    The Hero

    I knew I wanted interactive components. Something that gave visitors an immediate sense of my work. Let’s start with the hero.

    With your mouse you draw objects onto the canvas, plain and simple. I wanted the objects to have a link with nature ~ something that grows and flourishes, the way you do when you take on lots of personal projects.

    In my first sketch the flowers scaled from small to big, literally growing. But then I thought: how many times had I got stuck on a sketch, frustrated over an idea that just wouldn’t work out? So I decided linear growth wouldn’t be honest. Most of the time when I work on my projects my head is all over the place. Things should scale randomly, they don’t even need to match in width and height. I like it like this, it mirrors the tension between control and chaos in my work. Below you’ll find the bit where this is happening.

    /**
     * Get a portion of the next image
     */
     public getPortion(): p5.Image | null {
       // Fetch original
       const original = this.getNext();
       if (!original) return null;
    
       // Source
       const ow = original.width;
       const oh = original.height;
       const sx = Math.random() * ow;
       const sy = Math.random() * oh;
    
       // Remaining part
       const loW = ow - sx;
       const loH = oh - sy;
    
       let sw = Math.round(loW * Math.random()) + 10;
       let sh = Math.round(loH * Math.random()) + 10;
    
       // Destination
       const dx = 0;
       const dy = 0;
       const dw = sw;
       const dh = sh;
        
       // Create new image
       const copy = this.p.createImage(dw, dh);
       copy.copy(original, sx, sy, sw, sh, dx, dy, dw, dh);
    
       return copy;
     }
    
     public getRandomSizedPortion(): p5.Image | null {
       // Get portion
       const img = this.getPortion();
       if (!img) return null;
    
       // Random size
       const maxSize = this.p.width * 0.1;
       img.resize(this.p.random(10, maxSize), this.p.random(10, maxSize));
    
       return img;
     }
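
    Stripped of p5.js, the cropping logic boils down to picking a random origin and then a random size from whatever remains. Here's a minimal sketch of that idea; randomPortion is a hypothetical stand-in for getPortion above, with the randomness injectable so it can be tested:

    ```javascript
    // Hypothetical stand-in for getPortion(), without p5: pick a random
    // origin inside a w×h image, then a random size from the area that
    // remains to the right of and below that origin (minimum 10px).
    function randomPortion(w, h, rand = Math.random) {
      const sx = rand() * w;
      const sy = rand() * h;
      const sw = Math.round((w - sx) * rand()) + 10;
      const sh = Math.round((h - sy) * rand()) + 10;
      return { sx, sy, sw, sh };
    }
    ```

    With rand pinned to 0.5 on a 100×100 image this yields a 35×35 crop at (50, 50); with real randomness, every crop differs in both position and size, which is exactly the controlled chaos described above.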

    The Footer

    To balance the hero, I also made the footer interactive. I used an older sketch as a base, adding depth and texture to make it feel a little like an abstract ocean.

    For me, it brings a sense of calm and focus ~ with subtle vertical movement and a tone that changes as you move the mouse along the x-axis. The snippet below should give you an idea of how it works, but the original sketch is available to download on the platform. So if you’re curious, go ahead and play.

    /**
     * Calculate all data
     */
     public update() {
    
       // Animation settings
       const duration: number = 128;
       const progress: number = this.p.frameCount % duration;
       if (progress === 0) this.iteration++;
    
       // Rows and height
       const numRowsDrawn: number = this.numRows + 1 + this.iteration;
       const colW: number = this.p.width / this.numCols;
       const rowH: number = this.p.height / this.numRows;
    
       // Mouse influence, smoothed once per frame (not once per row)
       const smoothing = 0.06;
       this.currentMouseX += (this.p.mouseX - this.currentMouseX) * smoothing;
       const mouseInfluence: number = this.p.map(this.currentMouseX, 0, this.p.width, 0.8, -0.3);
    
       let count = 0;
       // Loop through rows
       for (let y: number = this.iteration; y < numRowsDrawn; y++) {
    
         // Calculate y position (start at the bottom)
         const targetY: number = this.p.height - (y + 1) * rowH + this.iteration * rowH;
    
         // Where are we in the progress
         const posY: number = this.p.map(progress, 0, duration, targetY, targetY + rowH);
    
         // What is the influence based on the y position
         const yInfluence: number = this.p.map(posY / this.numRows, 0, rowH, 1, this.numRows + 1) * mouseInfluence;
         // Double columns each row
         const extraCols: number = Math.exp(yInfluence * Math.LN2);
         // Size and position
         const currentW: number = colW + extraCols * colW;
    
         // Loop through columns
         for (let x: number = 0; x < this.numCols; x++) {
           // Calculate x position
           const posX: number = x * currentW - (extraCols * yInfluence + 1) * colW;
    
           // Don't draw things out of screen x-axis
           if (posX > this.p.width) continue;
           if (posX + currentW < 0) continue;
    
           // Draw
           this.display(x, y, posX, posY, currentW, rowH);
           count++;
         }
       }
     }
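
    One detail worth calling out: the `Math.exp(yInfluence * Math.LN2)` in the "double columns each row" line is just another way of writing 2^yInfluence, so each whole unit of influence doubles the extra column count. Isolated for clarity (this is a reading of the snippet above, not platform code):

    ```javascript
    // exp(n · ln 2) === 2^n, so each unit of yInfluence doubles the
    // number of extra columns blended into the row.
    function extraCols(yInfluence) {
      return Math.exp(yInfluence * Math.LN2); // equivalent to 2 ** yInfluence
    }
    ```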

    The Masonry Grid

    I’ve always liked inspiration websites where a lot is going on. You get all sorts of images and videos that are strong on their own, but gain new purpose in a different context. That’s what I wanted for my case overview.

    Since I don’t aim for any particular graphical style, I like that it feels more like a collection of references. This is why I decided to go for a masonry grid. I didn’t want to use a plugin, so I built this little CSS/JavaScript thingy where I use CSS Grid rows to distribute the images, and JavaScript to calculate how many rows it should span, depending on the aspect ratio that is set in the CMS. I think there is still room for improvement, but to be honest, I ran low on patience on this one. I decided it does the job for now. Maybe I will get back to it someday to refactor. Below is the snippet where most of the work happens.

    function applyMasonry() {
       // Fetch grid and items
       const grid = document.querySelector('.masonry-grid');
       const items = grid?.querySelectorAll('.masonry-item');
    
       // Make sure they’re both loaded
       if (!grid || !items) return;
    
       // Get properties from CSS
       const rowHeight = parseInt(getComputedStyle(grid).getPropertyValue('grid-auto-rows'));
       const gap = parseInt(getComputedStyle(grid).getPropertyValue('gap') || '0');
        
       items.forEach(item => {
    
         // Fetch media and info container separately
         const media = item.querySelector('.masonry-item__image-container');
         const info = item.querySelector('.masonry-item__info-container');
    
         if (!media || !info) return;
    
         // Combine them to item height
         const mediaHeight = media.getBoundingClientRect().height;
         const infoHeight = info.getBoundingClientRect().height;
         const itemHeight = mediaHeight + infoHeight;
    
         // Calculate how many rows to span
         const rowSpan = Math.ceil((itemHeight + gap) / (rowHeight + gap));
    
         // Apply row span
         item.style.gridRowEnd = `span ${rowSpan}`;
         item.style.opacity = 1;
       });
     }
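
    The row-span formula deserves a word: an item spanning n rows occupies n × rowHeight plus (n − 1) × gap pixels, and solving that inequality for n is where the "+ gap" on both sides comes from. Pulled out as a pure function (same math as the snippet above, just isolated):

    ```javascript
    // An item spanning n grid rows fills n * rowHeight + (n - 1) * gap
    // pixels, so the smallest n that fits itemHeight is
    // ceil((itemHeight + gap) / (rowHeight + gap)).
    function rowSpan(itemHeight, rowHeight, gap) {
      return Math.ceil((itemHeight + gap) / (rowHeight + gap));
    }
    ```

    For example, a 310px item on a grid with 10px auto-rows and a 10px gap needs 16 rows: 16 × 10 + 15 × 10 = 310, while 15 rows would only cover 290px.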

    Resources & Code

    Since I truly want to encourage people to start their own journey with personal projects, I want to share resources and code examples to get them started.

    Of course with the launch of this platform I had to do this retrospectively for more than 20 projects, so in future I’ll probably share more process and behind-the-scenes. Who knows. Anyway, this component gives me a space for anything that might be useful to people who are interested.

    Two Weeks Without a Laptop

    Then the summer holiday arrived. France. Four days of Disneyland chaos, followed by some peace near the ocean. Days were simple: beach, pool, playgrounds. In between, I picked up a Bon Iver notebook I’d bought back home.

    At the time, the platform had a temporary wordmark with my initials “mvds”. But I felt I could spend a little more time and attention crafting something beautiful. So every day I doodled my initials in all sorts of forms. By the end of the holiday I had a pretty good idea of what my logomark should become. Back home, with two more weeks before I needed to get back to work, I started digitising my sketches and tweaking anchor points until I got it right. (Then tweaked a little more, you know how it goes.) This resulted in a logomark I’m quite proud of. So I figured it needed a place on the platform.

    P5.js vs Three.js

    For the launch of my logomark on Instagram, I created a Processing sketch that placed the logo in a pixelated 3D scene, rotating. I liked that it almost became a sculpture or building of sorts. Now I only needed to build a web version.

    Because my Hero and Footer components were both p5.js, this was my first choice. But it was slow ~ I mean like really slow. No matter how I tried to optimise it, the 3D workload killed the performance. I had only worked with Three.js once a few years back, but I remembered it handled 3D pretty well. Not sure you’re going to have the best performing website by using multiple libraries, but since it’s all just for fun, I decided to give it a go. With the Three.js version I could add far more detail to the structure, and it still performed flawlessly compared to the p5.js version. Below you’ll see me looping through all the voxels.

    let instanceId: number = 0;
    
    // Loop using voxel resolution (detail), not image resolution
    for (let z: number = 0; z < detail; z++) {
      for (let y: number = 0; y < detail; y++) {
        const flippedY: number = detail - 1 - y;
    
        for (let x: number = 0; x < detail; x++) {
          // Sample image using normalized coordinates
          const sampleX: number = Math.floor((x / detail) * imgDetail);
          const sampleY: number = Math.floor((flippedY / detail) * imgDetail);
          const sampleZ: number = Math.floor((z / detail) * imgDetail);
    
          const brightness1: number = getBrightnessAt(imgData, imgDetail, sampleX, sampleY);
          const brightness2: number = getBrightnessAt(imgData, imgDetail, sampleZ, sampleY);
    
          if (brightness1 < 100 && brightness2 < 100 && instanceId < maxInstances) {
            dummy.position.set(
              x * cellSize - (detail * cellSize) / 2,
              y * cellSize - (detail * cellSize) / 2,
              z * cellSize - (detail * cellSize) / 2
        );
            dummy.updateMatrix();
            mesh.setMatrixAt(instanceId, dummy.matrix);
            instanceId++;
          }
        }
      }
    }
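
    The getBrightnessAt helper isn’t shown above. Under the assumption that imgData is a flat, row-major array holding one grayscale value per pixel, with imgDetail pixels per row, it could look like this ~ an illustration of the sampling, not the actual implementation:

    ```javascript
    // Assumed layout (hypothetical): imgData is a flat, row-major array
    // of grayscale values, one per pixel, imgDetail pixels per row.
    function getBrightnessAt(imgData, imgDetail, x, y) {
      return imgData[y * imgDetail + x];
    }
    ```

    In the voxel loop, a cell is only filled when both the front-view and side-view samples are dark enough, which is what carves the flat logomark into a 3D structure.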

    Wrapping Up

    This platform isn’t finished ~ that’s the point. It’s a space to interact with my coded tools, for sketches to be shared for further exploration and for process itself to stay visible. If you’re a designer or coder, I hope it nudges you to start or continue your own side projects. That’s how creativity stays alive. Thank you for reading.


  • Integrating Rive into a React Project: Behind the Scenes of Valley Adventures



    Bringing new tools into a workflow is always exciting—curiosity bumps up against the comfort of familiar methods. But when our longtime client, Chumbi Valley, came to us with their Valley Adventures project, we saw the perfect opportunity to experiment with Rive and craft cartoon-style animations that matched the playful spirit of the brand.

    Rive is a powerful real-time interactive design tool with built-in support for interactivity through State Machines. In this guide, we’ll walk you through how we integrated a .riv file into a React environment and added mouse-responsive animations.

    We’ll also walk through a modernized integration method using Rive’s newer Data Binding feature—our current preferred approach for achieving the same animation with less complexity and greater flexibility.

    Animation Concept & File Preparation

    Valley Adventures is a gamified Chumbi NFT staking program, where magical creatures called Chumbi inhabit an enchanted world. The visual direction leans heavily into fairytale book illustrations—vibrant colors, playful characters, and a whimsical, cartoon-like aesthetic.

    To immediately immerse users in this world, we went with a full-section hero animation on the landing page. We split the animation into two parts:

    • an idle animation that brings the scene to life;
    • a cursor-triggered parallax effect, adding depth and interactivity.

    Several elements animate simultaneously—background layers like rustling leaves and flickering fireflies, along with foreground characters that react to movement. The result is a dynamic, storybook-like experience that invites users to explore.

    The most interesting—and trickiest—part of the integration was tying animations to mouse tracking. Rive provides a built-in way to handle this: by applying constraints with varying strengths to elements within a group that’s linked to Mouse Tracking, which itself responds to the cursor’s position.

    However, we encountered a limitation with this approach: the HTML buttons layered above the Rive asset were blocking the hover state, preventing it from triggering the animation beneath.

    To work around this, we used a more robust method that gave us finer control and avoided those problems altogether. 

    Here’s how we approached it:

    1. Create four separate timelines, each with a single keyframe representing an extreme position of the animation group:
      • Far left
      • Far right
      • Top
      • Bottom
    2. Add two animation layers, each responsible for blending between opposite keyframes:
      • Layer 1 blends the far-left and far-right timelines
      • Layer 2 blends the top and bottom timelines
    3. Tie each layer’s blend amount to a numeric input—one for the X axis, one for the Y axis.

    By adjusting the values of these inputs based on the cursor’s position, you can control how tightly the animation responds on each axis. This approach gives you a smoother, more customizable parallax effect—and prevents unexpected behavior caused by overlapping UI.
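
    Conceptually, each of those blend layers does a linear mix between its two extreme timelines, driven by a 0–100 input. A rough sketch of what one layer computes ~ the extreme values here are made up for illustration:

    ```javascript
    // One blend layer: linearly mix a property between the values of two
    // extreme timelines (e.g. far-left and far-right) by an input 0–100.
    function blendLayer(input, lowExtreme, highExtreme) {
      const t = input / 100;
      return lowExtreme + (highExtreme - lowExtreme) * t;
    }
    ```

    With hypothetical extremes of -40 and 40, an input of 50 lands the group at 0 ~ its rest position ~ while 0 and 100 push it to the far-left and far-right keyframes.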

    Once the animation is ready, simply export it as a .riv file—and leave the rest of the magic to the devs.

    How We Did It: Integrating a Rive File into a React Project

    Before we dive further, let’s clarify what a .riv file actually is.

    A .riv file is the export format from the Rive editor. It can include:

    • vector graphics,
    • timeline animations,
    • a State Machine with input parameters.

    In our case, we’re using a State Machine with two numeric inputs: Axis_X and Axis_Y. These inputs are tied to how we control animation in Rive, using values from the X and Y axes of the cursor’s position.

    These inputs drive the movement of different elements—like the swaying leaves, fluttering fireflies, and even subtle character reactions—creating a smooth, interactive experience that responds to the user’s mouse.

    Step-by-Step Integration

    Step 1: Install the Rive React runtime

    Install the official package:

    npm install @rive-app/react-canvas

    Step 2: Create an Animation Component

    Create a component called RiveBackground.tsx to handle loading and rendering the animation.

    Step 3: Connect animation

    const { rive, setCanvasRef, setContainerRef } = useRive({
      src: 'https://cdn.rive.app/animations/hero.riv',
      autoplay: true,
      layout: new Layout({ fit: Fit.Cover, alignment: Alignment.Center }),
      onLoad: () => setIsLoaded(true),
      enableRiveAssetCDN: true,
    });
    

    For a better understanding, let’s take a closer look at each prop you’ll typically use when working with Rive in React:

    What each option does:

    • src — path to your .riv file; can be local or hosted via CDN
    • autoplay — automatically starts the animation once it’s loaded
    • layout — controls how the animation fits into the canvas (we’re using Cover and Center)
    • onLoad — callback that fires when the animation is ready; useful for setting isLoaded
    • enableRiveAssetCDN — allows loading of external assets (like fonts or textures) from Rive’s CDN

    Step 4: Connect State Machine Inputs

    const numX = useStateMachineInput(rive, 'State Machine 1', 'Axis_X', 0);
    const numY = useStateMachineInput(rive, 'State Machine 1', 'Axis_Y', 0);

    This setup connects directly to the input values defined inside the State Machine, allowing us to update them dynamically in response to user interaction.

    • State Machine 1 — the name of your State Machine, exactly as defined in the Rive editor
    • Axis_X and Axis_Y — numeric inputs that control movement based on cursor position
    • 0 — the initial (default) value for each input

    ☝️ Important: Make sure your .riv file includes the exact names: Axis_X, Axis_Y, and State Machine 1. These must match what’s defined in the Rive editor — otherwise, the animation won’t respond as expected.

    Step 5: Handle Mouse Movement

    useEffect(() => {
      if (!numX || !numY) return;
    
      const handleMouseMove = (e: MouseEvent) => {
        const { innerWidth, innerHeight } = window;
        numX.value = (e.clientX / innerWidth) * 100;
        numY.value = 100 - (e.clientY / innerHeight) * 100;
      };
    
      window.addEventListener('mousemove', handleMouseMove);
      return () => window.removeEventListener('mousemove', handleMouseMove);
    }, [numX, numY]);

    What’s happening here:

    • We use clientX and clientY to track the mouse position within the browser window.
    • The values are normalized to a 0–100 range, matching what the animation expects.
    • These normalized values are then passed to the Axis_X and Axis_Y inputs in the Rive State Machine, driving the interactive animation.
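
    The normalization itself is worth pulling out: X maps the left edge to 0 and the right edge to 100, while Y is inverted so the top of the window maps to 100 and the bottom to 0. Extracted from the handler above as a pure function (the name is ours, for illustration):

    ```javascript
    // Normalize the cursor to the 0–100 range the State Machine inputs
    // expect. Y is inverted: top of the window → 100, bottom → 0.
    function normalizePointer(clientX, clientY, innerWidth, innerHeight) {
      return {
        axisX: (clientX / innerWidth) * 100,
        axisY: 100 - (clientY / innerHeight) * 100,
      };
    }
    ```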

    ⚠️ Important: Always remember to remove the event listener when the component unmounts to avoid memory leaks and unwanted behavior. 

    Step 6: Cleanup and Render the Component

    useEffect(() => {
      return () => rive?.cleanup();
    }, [rive]);

    And the render:

    return (
      <div
        ref={setContainerRef}
        className={`rive-container ${className ?? ''} ${isLoaded ? 'show' : 'hide'}`}
      >
        <canvas ref={setCanvasRef} />
      </div>
    );
    • cleanup() — frees up resources when the component unmounts. Always call this to prevent memory leaks.
    • setCanvasRef and setContainerRef — these must be connected to the correct DOM elements in order for Rive to render the animation properly.

    And here’s the complete code:

    import {
      useRive,
      useStateMachineInput,
      Layout,
      Fit,
      Alignment,
    } from '@rive-app/react-canvas';
    import { useEffect, useState } from 'react';
    
    export function RiveBackground({ className }: { className?: string }) {
      const [isLoaded, setIsLoaded] = useState(false);
    
      const { rive, setCanvasRef, setContainerRef } = useRive({
        src: 'https://cdn.rive.app/animations/hero.riv',
    animations: ['State Machine 1', 'Timeline 1', 'Timeline 2'],
        autoplay: true,
        layout: new Layout({ fit: Fit.Cover, alignment: Alignment.Center }),
        onLoad: () => setIsLoaded(true),
        enableRiveAssetCDN: true,
      });
    
      const numX = useStateMachineInput(rive, 'State Machine 1', 'Axis_X', 0);
      const numY = useStateMachineInput(rive, 'State Machine 1', 'Axis_Y', 0);
    
      useEffect(() => {
        if (!numX || !numY) return;
    
        const handleMouseMove = (e: MouseEvent) => {
          const { innerWidth, innerHeight } = window;
          numX.value = (e.clientX / innerWidth) * 100;
          numY.value = 100 - (e.clientY / innerHeight) * 100;
        };
    
        window.addEventListener('mousemove', handleMouseMove);
        return () => window.removeEventListener('mousemove', handleMouseMove);
      }, [numX, numY]);
    
      useEffect(() => {
        return () => {
          rive?.cleanup();
        };
      }, [rive]);
    
      return (
        <div
          ref={setContainerRef}
          className={`rive-container ${className ?? ''} ${isLoaded ? 'show' : 'hide'}`}
        >
          <canvas ref={setCanvasRef} />
        </div>
      );
    }
    

    Step 7: Use the Component

    Now you can use the RiveBackground like any other component:

    <RiveBackground className="hero-background" />

    Step 8: Preload the WASM File

    To avoid loading the .wasm file at runtime—which can delay the initial render—you can preload it in App.tsx:

    import riveWASMResource from '@rive-app/canvas/rive.wasm';
    
    <link
      rel="preload"
      href={riveWASMResource}
      as="fetch"
      crossOrigin="anonymous"
    />

    This is especially useful if you’re optimizing for first paint or overall performance.

    Simple Parallax: A New Approach with Data Binding

    In the first part of this article, we used a classic approach with a State Machine to create the parallax animation in Rive. We built four separate animations (top, bottom, left, right), controlled them using input variables, and blended their states to create smooth motion. This method made sense at the time, especially before Data Binding support was introduced.

    But now that Data Binding is available in Rive, achieving the same effect is much simpler—just a few steps. Data binding in Rive is a system that connects editor elements to dynamic data and code via view models, enabling reactive, runtime-driven updates and interactions between design and development.

    In this section, we’ll show how to refactor the original Rive file and code using the new approach.

    Updating the Rive File

    1. Remove the old setup:
      • Go to the State Machine.
      • Delete the input variables: top, bottom, left, right.
      • Remove the blending states and their associated animations.
    2. Group the parallax layers:
      • Wrap all the parallax layers into a new group—e.g., ParallaxGroup.
    3. Create binding parameters:
      • Select ParallaxGroup and add:
        • pointerX (Number)
        • pointerY (Number)
    4. Bind coordinates:
      • In the properties panel, set:
        • X → pointerX
        • Y → pointerY

    Now the group will move dynamically based on values passed from JavaScript.

    The Updated JS Code

    Before we dive into the updated JavaScript, let’s quickly define an important concept:

    When using Data Binding in Rive, viewModelInstance refers to the runtime object that links your Rive file’s bindable properties (like pointerX or pointerY) to your app’s logic. In the Rive editor, you assign these properties to elements like positions, scales, or rotations. At runtime, your code accesses and updates them through the viewModelInstance—allowing for real-time, declarative control without needing a State Machine.

    With that in mind, here’s how the new setup replaces the old input-driven logic:

    import { useRive } from '@rive-app/react-canvas';
    import { useEffect, useState } from 'react';
    
    export function ParallaxEffect({ className }: { className?: string }) {
      const [isLoaded, setIsLoaded] = useState(false);
    
      const { rive, setCanvasRef, setContainerRef } = useRive({
        src: 'https://cdn.rive.app/animations/hero.riv',
        autoplay: true,
        autoBind: true,
        onLoad: () => setIsLoaded(true),
      });
    
      useEffect(() => {
        if (!rive) return;
    
        const vmi = rive.viewModelInstance;
        const pointerX = vmi?.number('pointerX');
        const pointerY = vmi?.number('pointerY');
    
        if (!pointerX || !pointerY) return;
    
        const handleMouseMove = (e: MouseEvent) => {
          const { innerWidth, innerHeight } = window;
          const x = (e.clientX / innerWidth) * 100;
          const y = 100 - (e.clientY / innerHeight) * 100;
          pointerX.value = x;
          pointerY.value = y;
        };
    
        window.addEventListener('mousemove', handleMouseMove);
    
        return () => {
          window.removeEventListener('mousemove', handleMouseMove);
          rive.cleanup();
        };
      }, [rive]);
    
      return (
        <div
          ref={setContainerRef}
          className={`rive-container ${className ?? ''} ${isLoaded ? 'show' : 'hide'}`}
        >
          <canvas ref={setCanvasRef} />
        </div>
      );
    }

    The Result

    You get the same parallax effect, but:

    • without input variables or blending;
    • without a State Machine;
    • with simple control via the ViewModel.

    Official Live Example from Rive

    👉 CodeSandbox: Data Binding Parallax

    Conclusion

    Data Binding is a major step forward for interactive Rive animations. Effects like parallax can now be set up faster, more reliably, and with cleaner logic. We strongly recommend this approach for new projects.

    Final Thoughts

    So why did we choose Rive over Lottie for this project?

    • Interactivity: With Lottie, achieving the same level of interactivity would’ve required building a custom logic layer from scratch. With Rive, we got that behavior baked into the file—plug and play.
    • Optimization: Rive gives you more control over each asset inside the .riv file, and the output tends to be lighter overall.

    Our biggest takeaway? Don’t be afraid to experiment with new tools—especially when they feel like the right fit for your project’s concept. Rive matched the playful, interactive vibe of Valley Adventures perfectly, and we’re excited to keep exploring what it can do.


  • Upgrading a 20 year old University Project to .NET 6 with dotnet-upgrade-assistant




    I wrote a Tiny Virtual Operating System for a 300-level OS class in C# for college back in 2001 (?) and later moved it to VB.NET in 2002. This is all pre-.NET Core, and on early .NET 1.1 or 2.0 on Windows. I moved it to GitHub 5 years ago and ported it to .NET Core 2.0 at the time. At this point it was 15 years old, so it was cool to see this project running on Windows, Linux, in Docker, and on a Raspberry Pi…a machine that didn’t exist when the project was originally written.

    NOTE: If the timeline is confusing, I had already been working in industry for years at this point but was still plugging away at my 4 year degree at night. It eventually took 11 years to complete my BS in Software Engineering.

    This evening, as the children slept, I wanted to see if I could run the .NET Upgrade Assistant on this now 20 year old app and get it running on .NET 6.

    Let’s start:

    $ upgrade-assistant upgrade .\TinyOS.sln
    -----------------------------------------------------------------------------------------------------------------
    Microsoft .NET Upgrade Assistant v0.3.256001+3c4e05c787f588e940fe73bfa78d7eedfe0190bd

    We are interested in your feedback! Please use the following link to open a survey: https://aka.ms/DotNetUASurvey
    -----------------------------------------------------------------------------------------------------------------

    [22:58:01 INF] Loaded 5 extensions
    [22:58:02 INF] Using MSBuild from C:\Program Files\dotnet\sdk\6.0.100\
    [22:58:02 INF] Using Visual Studio install from C:\Program Files\Microsoft Visual Studio\2022\Preview [v17]
    [22:58:06 INF] Initializing upgrade step Select an entrypoint
    [22:58:07 INF] Setting entrypoint to only project in solution: C:\Users\scott\TinyOS\src\TinyOSCore\TinyOSCore.csproj
    [22:58:07 INF] Recommending executable TFM net6.0 because the project builds to an executable
    [22:58:07 INF] Initializing upgrade step Select project to upgrade
    [22:58:07 INF] Recommending executable TFM net6.0 because the project builds to an executable
    [22:58:07 INF] Recommending executable TFM net6.0 because the project builds to an executable
    [22:58:07 INF] Initializing upgrade step Back up project

    See how the process is interactive at the command line, with color prompts and a series of dynamic multiple-choice questions?

    Updating .NET project with the upgrade assistant

    Interestingly, it builds on the first try, no errors.

    When I manually look at the .csproj I can see some weird version numbers, likely from some not-quite-baked version of .NET Core 2 I used many years ago. My spidey sense says this is wrong, and I’m assuming the upgrade assistant didn’t understand it.

    <!-- <PackageReference Include="ILLink.Tasks" Version="0.1.4-preview-906439" /> -->
    <PackageReference Include="Microsoft.Extensions.Configuration" Version="2.0.0-preview2-final" />
    <PackageReference Include="Microsoft.Extensions.Configuration.Json" Version="2.0.0-preview2-final" />
    <PackageReference Include="Microsoft.Extensions.DependencyInjection" Version="2.0.0-preview2-final" />
    <PackageReference Include="Microsoft.Extensions.Options.ConfigurationExtensions" Version="2.0.0-preview2-final" />

    I also note a commented-out reference to ILLink.Tasks which was a preview feature in Mono’s Linker to reduce the final size of apps and tree-trim them. Some of that functionality is built into .NET 6 now so I’ll use that during the build and packaging process later. The reference is not needed today.

    I’m gonna blindly upgrade them to .NET 6 and see what happens. I could do this by just changing the numbers and seeing if it restores and builds, but I can also try dotnet outdated which remains a lovely tool in the upgrader’s toolkit.


    This “outdated” tool is nice as it talks to NuGet and confirms that there are newer versions of certain packages.

    In my tests – which were just batch files at this early time – I was calling my dotnet app like this:

    dotnet netcoreapp2.0/TinyOSCore.dll 512 scott13.txt  

    This will change to the modern form with just TinyOSCore.exe 512 scott13.txt with an exe and args and no ceremony.

    Publishing and trimming my TinyOS turns into just a 15 meg EXE. Nice considering that the .NET I need is in there with no separate install. I could turn this little synthetic OS into a microservice if I wanted to be totally extra.

    dotnet publish -r win-x64 --self-contained -p:PublishSingleFile=true -p:SuppressTrimAnalysisWarnings=true

    If I add

    -p:EnableCompressionInSingleFile=true

    Then it’s even smaller. No code changes. Run all my tests, looks good. My project from university from .NET 1.1 is now .NET 6.0, cross platform, self-contained in 11 megs in a single EXE. Sweet.






    About Scott

    Scott Hanselman is a former professor, former Chief Architect in finance, now speaker, consultant, father, diabetic, and Microsoft employee. He is a failed stand-up comic, a cornrower, and a book author.


