Tag: Objects

  • Clean code tips – Abstraction and objects | Code4IT


    Are Getters and Setters the correct way to think of abstraction? What are the pros and cons of OOP and Procedural programming? And, in the OOP world, how can you define objects?


    This is the third part of my series of tips about clean code.

    Here’s the list (in progress)

    1. names and function arguments
    2. comments and formatting
    3. abstraction and objects
    4. error handling
    5. tests

    In this article, I’m going to explain how to define classes in order to make your code extensible, more readable, and easier to understand. In particular, I’m going to explain how to use Abstraction effectively, what the difference is between pure OOP and Procedural programming, and how the Law of Demeter can help you structure your code.

    The real meaning of abstraction

    Some people think that abstraction is nothing but adding Getters and Setters to class properties, in order to (if necessary) manipulate the data before setting or retrieving it:

    interface IMixer_A
    {
    	void SetVolume(int value);
    	int GetVolume();
    	int GetMaxVolume();
    }
    
    class Mixer_A : IMixer_A
    {
    	private const int MAX_VOLUME = 100;
    	private int _volume = 0;

    	public void SetVolume(int value) { _volume = value; }
    	public int GetVolume() { return _volume; }
    	public int GetMaxVolume() { return MAX_VOLUME; }
    }
    

    This way of structuring the class does not hide the implementation details, because any client that interacts with the Mixer knows that internally it works with integer values. A client should only know about the operations that can be performed on a Mixer.

    Let’s see a better definition for an IMixer interface:

    interface IMixer_B
    {
    	void IncreaseVolume();
    	void DecreaseVolume();
    	void Mute();
    	void SetToMaxVolume();
    }
    
    class Mixer_B : IMixer_B
    {
    	private const int MAX_VOLUME = 100;
    	private int _volume = 0;

    	public void IncreaseVolume()
    	{
    		if (_volume < MAX_VOLUME) _volume++;
    	}
    	public void DecreaseVolume()
    	{
    		if (_volume > 0) _volume--;
    	}

    	public void Mute() { _volume = 0; }

    	public void SetToMaxVolume()
    	{
    		_volume = MAX_VOLUME;
    	}
    }
    

    With this version, we can perform all the available operations without knowing the internal details of the Mixer. Some advantages?

    • We can change the internal type for the _volume field, and store it as a ushort or a float, and change the other methods accordingly. And no one else will know it!
    • We can add more methods, for instance a SetVolumeToPercentage(float percentage), without the risk of affecting the already exposed methods (see the sketch after this list)
    • We can perform additional checks and validation before performing the internal operations
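
    For example, here’s a minimal sketch of those points (the switch to a float field and the SetVolumeToPercentage method are hypothetical additions, not part of the original example): the internals evolve, but IMixer_B and its clients stay untouched.

    class Mixer_B : IMixer_B
    {
    	// the internal representation changed from int to float:
    	// no client of IMixer_B will ever notice it
    	private const float MAX_VOLUME = 100f;
    	private float _volume = 0f;

    	public void IncreaseVolume()
    	{
    		if (_volume < MAX_VOLUME) _volume++;
    	}
    	public void DecreaseVolume()
    	{
    		if (_volume > 0) _volume--;
    	}

    	public void Mute() { _volume = 0f; }

    	public void SetToMaxVolume() { _volume = MAX_VOLUME; }

    	// a new operation with its own validation, added without
    	// touching the methods that are already exposed
    	public void SetVolumeToPercentage(float percentage)
    	{
    		if (percentage < 0f || percentage > 100f)
    			throw new ArgumentOutOfRangeException(nameof(percentage));

    		_volume = MAX_VOLUME * percentage / 100f;
    	}
    }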

    It can help to think of classes as if they were real objects you can interact with: if you have a stereo, you won’t manually set the volume inside its circuits; you’ll press a button that increases the volume and performs all the needed operations for you. At the same time, the volume value you see on the display is a “human” representation of the internal state, not the real value.

    Procedural vs OOP

    Object-oriented programming works best if you expose behaviors, so that clients don’t have to access any internal properties.

    Have a look at this statement from Wikipedia:

    The focus of procedural programming is to break down a programming task into a collection of variables, data structures, and subroutines, whereas in object-oriented programming it is to break down a programming task into objects that expose behavior (methods) and data (members or attributes) using interfaces. The most important distinction is that while procedural programming uses procedures to operate on data structures, object-oriented programming bundles the two together, so an “object”, which is an instance of a class, operates on its “own” data structure.

    To see the difference between OO and Procedural programming, let’s write the same functionality in two different ways. In this simple program, I’m going to generate the <a> tag for content coming from different sources: Twitter and YouTube.

    Procedural programming

    public class IContent
    {
    	public string Url { get; set; }
    }
    
    class Tweet : IContent
    {
    	public string Author { get; set; }
    }
    
    class YouTubeVideo : IContent
    {
    	public string ChannelName { get; set; }
    }
    

    Nice and easy: the classes don’t expose any behavior, but only their properties. So, a client class (I’ll call it LinkCreator) will use their properties to generate the HTML tag.

    public static class LinkCreator
    {
    	public static string CreateAnchorTag(IContent content)
    	{
    		switch (content)
    		{
    			case Tweet tweet: return $"<a href=\"{tweet.Url}\"> A post by {tweet.Author}</a>";
    			case YouTubeVideo yt: return $"<a href=\"{yt.Url}\"> A video by {yt.ChannelName}</a>";
    			default: return "";
    		}
    	}
    }
    

    We can notice that the Tweet and YouTubeVideo classes are really minimal, so they’re easy to read.
    But there are some downsides:

    • By only looking at the IContent classes, we don’t know what kind of operations the client can perform on them.
    • If we add a new class that inherits from IContent, we must update every client to handle it, replicating the operations that are already in place for the other types. If we forget about it, the CreateAnchorTag method will return an empty string (see the sketch after this list).
    • If we change the type of URL (it becomes a relative URL or an object of type System.Uri) we must update all the methods that reference that field to propagate the change.
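
    To make the second point concrete, here’s a minimal sketch (the InstagramPost class and the client code are hypothetical, not part of the original example): the new type compiles just fine, but LinkCreator silently falls into its default branch.

    class InstagramPost : IContent
    {
    	public string Username { get; set; }
    }

    // somewhere in a client
    var post = new InstagramPost { Url = "https://www.instagram.com/p/abc", Username = "code4it" };

    // LinkCreator.CreateAnchorTag has no case for InstagramPost yet,
    // so this returns an empty string instead of an anchor tag
    string tag = LinkCreator.CreateAnchorTag(post);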

    Object-oriented programming

    In Object-oriented programming, we declare the functionalities to expose and we implement them directly within the class:

    public interface IContent
    {
    	string CreateAnchorTag();
    }
    
    public class Tweet : IContent
    {
    	public string Url { get; }
    	public string Author { get; }
    
    	public string CreateAnchorTag()
    	{
    		return $"<a href=\"{Url}\"> A post by {Author}</a>";
    	}
    }
    
    public class YouTubeVideo : IContent
    {
    	public string Url { get; }
    	public string ChannelName { get; }
    
    	public string CreateAnchorTag()
    	{
    		return $"<a href=\"{Url}\"> A video by {ChannelName}</a>";
    	}
    
    }
    

    We can see that the classes are bulkier, but just by looking at a single class, we can see what functionalities it exposes and how.

    So, the LinkCreator class will be simplified, since it doesn’t have to worry about the implementations:

    public static class LinkCreator
    {
    	public static string CreateAnchorTag(IContent content)
    	{
    		return content.CreateAnchorTag();
    	}
    }
    

    But even here there are some downsides:

    • If we add a new IContent type, we must implement every method explicitly (or, at least, leave a dummy implementation)
    • If we expose a new method on IContent, we must implement it in every subclass, even when it’s not required (should I care about the total video duration for a Twitter channel? Of course not). See the sketch after this list.
    • It’s harder to create easy-to-maintain class hierarchies
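
    As a sketch of the second point (the GetVideoDuration method is a hypothetical addition, not part of the original example): once the interface grows, every implementation must follow, even when the new member makes no sense for that type.

    public interface IContent
    {
    	string CreateAnchorTag();
    	TimeSpan GetVideoDuration(); // meaningful for videos only
    }

    public class Tweet : IContent
    {
    	public string Url { get; }
    	public string Author { get; }

    	public string CreateAnchorTag()
    	{
    		return $"<a href=\"{Url}\"> A post by {Author}</a>";
    	}

    	// a tweet has no duration, but we still have to provide something
    	public TimeSpan GetVideoDuration()
    	{
    		return TimeSpan.Zero;
    	}
    }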

    So what?

    Luckily, we don’t live in a black-and-white world; there are other shades: it’s highly unlikely that you’ll use pure OO programming or pure procedural programming.

    So, don’t stick too closely to the theory; use whatever fits your project and your team best.

    Understand the pros and cons of each approach, and apply them wherever needed.

    Objects vs Data structures – according to Uncle Bob

    There’s a statement by Uncle Bob that is the starting point of all his following considerations:

    Objects hide their data behind abstractions and expose functions that operate on that data. Data structures expose their data and have no meaningful functions.

    Personally, I disagree with him. For me it’s the opposite: think of a linked list.

    A linked list is a data structure consisting of a collection of nodes linked together to form a sequence. You can perform some operations, such as insertBefore, insertAfter, removeBefore and so on. But it exposes only the operations, not the internals: you won’t know whether internally it is built with an array, a list, or some other structure.

    interface ILinkedList
    {
    	Node[] GetList();
    	void InsertBefore(Node node);
    	void InsertAfter(Node node);
    	void DeleteBefore(Node node);
    	void DeleteAfter(Node node);
    }
    

    On the contrary, a simple class used just as a DTO or a View Model defines objects, not data structures.

    class Person
    {
    	public String FirstName { get; set; }
    	public String LastName { get; set; }
    	public DateTime BirthDate { get; set; }
    }
    

    Regardless of the names, it’s important to know when one kind is preferable to the other. Ideally, you should not let the same class expose both properties and methods, like this one:

    class Person
    {
    	public String FirstName { get; set; }
    	public String LastName { get; set; }
    	public DateTime BirthDate { get; set; }
    
    	public string CalculateSlug()
    	{
    		return FirstName.ToLower() + "-" + LastName.ToLower() + "-" + BirthDate.ToString("yyyyMMdd");
    	}
    }
    

    An idea to avoid this kind of hybrid is to have a different class which manipulates the Person class:

    public static class PersonAttributesManager
    {
    	public static string CalculateSlug(Person p)
    	{
    		return p.FirstName.ToLower() + "-" + p.LastName.ToLower() + "-" + p.BirthDate.ToString("yyyyMMdd");
    	}
    }
    

    In this way, we decouple the data of a pure Person from the derived values that a specific client may need from it.
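
    A quick usage sketch (the values are made up): the Person class stays a plain data holder, and the client calls the manager only when it actually needs the derived value.

    var person = new Person
    {
    	FirstName = "Davide",
    	LastName = "Bellone",
    	BirthDate = new DateTime(1990, 1, 1)
    };

    // "davide-bellone-19900101"
    string slug = PersonAttributesManager.CalculateSlug(person);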

    The Law of Demeter

    The Law of Demeter is a programming law that says that a module should only talk to its friends, not to strangers. What does it mean?

    Say that you have a MyClass class that contains a MyFunction method, which can accept some arguments. The Law of Demeter says that MyFunction should only call the methods of

    1. MyClass itself
    2. an object created within MyFunction
    3. any object passed as a parameter to MyFunction
    4. any object stored within the current instance of MyClass

    This is strictly related to the fact that things (objects or data structures – depending on whether you agree with Uncle Bob’s definitions or not) should not expose their internals, but only the operations on them.

    Here’s an example of what not to do:

    class LinkedListClient
    {
    	ILinkedList linkedList;

    	public void AddTopic(Node nd)
    	{
    		// do something
    		linkedList.NodesList.Next = nd;
    		// do something else
    	}
    }
    

    What happens if the implementation changes or you find a bug in it? You’d have to update every client. Also, you are coupling the two classes too tightly.

    A problem with this rule is that, strictly applied, it also prevents you from calling the most common operations on base types:

    class LinkedListClient
    {
    	ILinkedList linkedList;

    	public int GetCount()
    	{
    		return linkedList.GetList().Length;
    	}
    }
    

    Here, the GetCount method is against the Law of Demeter, because it is performing operations on the array returned by GetList. To solve this problem, you have to add a GetCount() method to the ILinkedList interface and call that method from the client.
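
    As a sketch (reusing the ILinkedList and LinkedListClient types from the previous snippets), the fix could look like this:

    interface ILinkedList
    {
    	Node[] GetList();
    	int GetCount(); // the count is now an operation of the list itself
    	void InsertBefore(Node node);
    	void InsertAfter(Node node);
    	void DeleteBefore(Node node);
    	void DeleteAfter(Node node);
    }

    class LinkedListClient
    {
    	ILinkedList linkedList;

    	public int GetCount()
    	{
    		// the client only talks to its direct collaborator,
    		// not to whatever GetList() returns
    		return linkedList.GetCount();
    	}
    }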

    When it’s a single method, it’s acceptable. What about operations on strings or dates?

    Take the Person class. If we exposed the BirthDate property as a method (something like GetBirthDate), we could do something like this:

    class PersonExample
    {
    	void DoSomething(Person person)
    	{
    		var a = person.GetBirthDate().ToString("yyyy-MM-dd");
    		var b = person.GetBirthDate().AddDays(52);
    	}
    }
    

    which is perfectly reasonable. But it violates the Law of Demeter: you can’t call ToString and AddDays here, because you’re not using only methods exposed by the Person class, but also those exposed by DateTime.

    A solution could be to add new methods to the Person class to handle these operations; of course, it would make the class bigger and less readable.
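
    A minimal sketch of that idea (the method names are hypothetical): the Person class wraps the DateTime operations its clients actually need, so they never reach into DateTime directly.

    class Person
    {
    	private DateTime _birthDate;

    	public string GetBirthDateFormatted(string format)
    	{
    		return _birthDate.ToString(format);
    	}

    	public DateTime GetBirthDatePlusDays(int days)
    	{
    		return _birthDate.AddDays(days);
    	}
    }

    class PersonExample
    {
    	void DoSomething(Person person)
    	{
    		var a = person.GetBirthDateFormatted("yyyy-MM-dd");
    		var b = person.GetBirthDatePlusDays(52);
    	}
    }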

    Therefore, I think that the Law of Demeter is a good rule of thumb, but you should consider it only as a suggestion and not as a strict rule.

    If you want to read more, you can refer to this article by Carlos Caballero or to this one by Robert Brautigam.

    Wrapping up

    We’ve seen that it’s not so easy to define which behaviors a class should expose. Do we need pure data or objects with behavior? And how can abstraction help us hide the internals of a class?

    Also, we’ve seen that it’s perfectly fine not to stick strictly to OOP principles, because that’s a way of programming that can’t always be applied to our projects and our processes.

    Happy coding!






  • Three.js Instances: Rendering Multiple Objects Simultaneously



    When building the basement studio site, we wanted to add 3D characters without compromising performance. We used instancing to render all the characters simultaneously. This post introduces instances and how to use them with React Three Fiber.

    Introduction

    Instancing is a performance optimization that lets you render many objects that share the same geometry and material simultaneously. If you have to render a forest, you’d need tons of trees, rocks, and grass. If they share the same base mesh and material, you can render all of them in a single draw call.

    A draw call is a command from the CPU to the GPU to draw something, like a mesh. Each unique geometry or material usually needs its own call. Too many draw calls hurt performance. Instancing reduces that by batching many copies into one.

    Basic instancing

    As an example, let’s start by rendering a thousand boxes the traditional way, looping over an array and generating boxes with random positions, scales, and colors:

    const boxCount = 1000
    
    function Scene() {
      return (
        <>
          {Array.from({ length: boxCount }).map((_, index) => (
            <mesh
              key={index}
              position={getRandomPosition()}
              scale={getRandomScale()}
            >
              <boxGeometry />
              <meshBasicMaterial color={getRandomColor()} />
            </mesh>
          ))}
        </>
      )
    }
    Live | Source

    If we add a performance monitor to it, we’ll notice that the number of “calls” matches our boxCount.

    A quick way to implement instances in our project is to use drei/instances.

    The Instances component acts as a provider; it needs a geometry and a material as children, which will be used each time we add an instance to our scene.

    The Instance component places one of those instances at a particular position/rotation/scale. Every Instance will be rendered simultaneously, using the geometry and material configured on the provider.

    import { Instance, Instances } from "@react-three/drei"
    
    const boxCount = 1000
    
    function Scene() {
      return (
        <Instances limit={boxCount}>
          <boxGeometry />
          <meshBasicMaterial />
          {Array.from({ length: boxCount }).map((_, index) => (
            <Instance
              key={index}
              position={getRandomPosition()}
              scale={getRandomScale()}
              color={getRandomColor()}
            />
          ))}
        </Instances>
      )
    }

    Notice how “calls” is now reduced to 1, even though we are showing a thousand boxes.

    Live | Source

    What is happening here? We are sending the geometry and material of our box to the GPU just once, and telling it to reuse the same data a thousand times, so all boxes are drawn simultaneously.

    Notice that we can have a different color per instance even though they all use the same material, because Three.js supports per-instance colors. However, other properties, like the map, should be the same because all instances share the exact same material.

    We’ll see how we can hack Three.js to support multiple maps later in the article.

    Having multiple sets of instances

    If we are rendering a forest, we may need different kinds of instances: one for trees, another for rocks, and one for grass. However, the example from before only supports one geometry and material per provider. How can we handle that?

    The createInstances() function from drei allows us to create multiple sets of instances. It returns two React components: the first one is a provider that sets up our instanced mesh, and the second is a component that we can use to position one instance in our scene.

    Let’s see how we can set up a provider first:

    import { createInstances } from "@react-three/drei"
    
    const boxCount = 1000
    const sphereCount = 1000
    
    const [CubeInstances, Cube] = createInstances()
    const [SphereInstances, Sphere] = createInstances()
    
    function InstancesProvider({ children }: { children: React.ReactNode }) {
      return (
        <CubeInstances limit={boxCount}>
          <boxGeometry />
          <meshBasicMaterial />
          <SphereInstances limit={sphereCount}>
            <sphereGeometry />
            <meshBasicMaterial />
            {children}
          </SphereInstances>
        </CubeInstances>
      )
    }

    Once we have our instance provider, we can add lots of Cubes and Spheres to our scene:

    function Scene() {
      return (
        <InstancesProvider>
          {Array.from({ length: boxCount }).map((_, index) => (
            <Cube
              key={index}
              position={getRandomPosition()}
              color={getRandomColor()}
              scale={getRandomScale()}
            />
          ))}
    
          {Array.from({ length: sphereCount }).map((_, index) => (
            <Sphere
              key={index}
              position={getRandomPosition()}
              color={getRandomColor()}
              scale={getRandomScale()}
            />
          ))}
        </InstancesProvider>
      )
    }

    Notice how even though we are rendering two thousand objects, we are just running two draw calls on our GPU.

    Live | Source

    Instances with custom shaders

    Until now, all the examples have used Three.js’ built-in materials to add our meshes to the scene, but sometimes we need to create our own materials. How can we add support for instances to our shaders?

    Let’s first set up a very basic shader material:

    import * as THREE from "three"
    
    const baseMaterial = new THREE.RawShaderMaterial({
      vertexShader: /*glsl*/ `
        attribute vec3 position;
        attribute vec3 instanceColor;
        attribute vec3 normal;
        attribute vec2 uv;
        uniform mat4 modelMatrix;
        uniform mat4 viewMatrix;
        uniform mat4 projectionMatrix;
    
        void main() {
          vec4 modelPosition = modelMatrix * vec4(position, 1.0);
          vec4 viewPosition = viewMatrix * modelPosition;
          vec4 projectionPosition = projectionMatrix * viewPosition;
          gl_Position = projectionPosition;
        }
      `,
    	fragmentShader: /*glsl*/ `
    		// RawShaderMaterial does not inject a default precision, so declare one
    		precision highp float;

    		void main() {
    			gl_FragColor = vec4(1, 0, 0, 1);
    		}
    	`
    })
    
    export function Scene() {
      return (
        <mesh material={baseMaterial}>
          <sphereGeometry />
        </mesh>
      )
    }

    Now that we have our testing object in place, let’s add some movement to the vertices. We’ll shift them on the X axis using a time uniform and an amplitude uniform, turning the sphere into a blob shape:

    const baseMaterial = new THREE.RawShaderMaterial({
    	// some uniforms
      uniforms: {
        uTime: { value: 0 },
        uAmplitude: { value: 1 },
      },
      vertexShader: /*glsl*/ `
        attribute vec3 position;
        attribute vec3 instanceColor;
        attribute vec3 normal;
        attribute vec2 uv;
        uniform mat4 modelMatrix;
        uniform mat4 viewMatrix;
        uniform mat4 projectionMatrix;
    
        // Added this code to shift the vertices
        uniform float uTime;
        uniform float uAmplitude;
        vec3 movement(vec3 position) {
          vec3 pos = position;
          pos.x += sin(position.y + uTime) * uAmplitude;
          return pos;
        }
    
        void main() {
          vec3 blobShift = movement(position);
          vec4 modelPosition = modelMatrix * vec4(blobShift, 1.0);
          vec4 viewPosition = viewMatrix * modelPosition;
          vec4 projectionPosition = projectionMatrix * viewPosition;
          gl_Position = projectionPosition;
        }
      `,
    	fragmentShader: /*glsl*/ `
    		precision highp float;

    		void main() {
    			gl_FragColor = vec4(1, 0, 0, 1);
    		}
    	`,
    });
    
    export function Scene() {
      useFrame((state) => {
        // update the time uniform
        baseMaterial.uniforms.uTime.value = state.clock.elapsedTime;
      });
    
      return (
        <mesh material={baseMaterial}>
          <sphereGeometry args={[1, 32, 32]} />
        </mesh>
      );
    }
    

    Now, we can see the sphere moving around like a blob:

    Live | Source

    Now, let’s render a thousand blobs using instancing. First, we need to add the instance provider to our scene:

    import { createInstances } from '@react-three/drei';
    
    const [BlobInstances, Blob] = createInstances();
    
    function Scene() {
      useFrame((state) => {
        baseMaterial.uniforms.uTime.value = state.clock.elapsedTime;
      });
    
      return (
        <BlobInstances material={baseMaterial} limit={sphereCount}>
          <sphereGeometry args={[1, 32, 32]} />
          {Array.from({ length: sphereCount }).map((_, index) => (
            <Blob key={index} position={getRandomPosition()} />
          ))}
        </BlobInstances>
      );
    }
    

    The code runs successfully, but all spheres are in the same place, even though we added different positions.

    This is happening because the position we compute for each vertex in the vertexShader depends only on attributes and uniforms that are identical for every sphere, so all the instances end up in the same spot:

    vec3 blobShift = movement(position);
    vec4 modelPosition = modelMatrix * vec4(blobShift, 1.0);
    vec4 viewPosition = viewMatrix * modelPosition;
    vec4 projectionPosition = projectionMatrix * viewPosition;
    gl_Position = projectionPosition;

    To solve this issue, we need to use a new attribute called instanceMatrix. This attribute will be different for each instance that we are rendering.

      attribute vec3 position;
      attribute vec3 instanceColor;
      attribute vec3 normal;
      attribute vec2 uv;
      uniform mat4 modelMatrix;
      uniform mat4 viewMatrix;
      uniform mat4 projectionMatrix;
      // this attribute will change for each instance
      attribute mat4 instanceMatrix;
    
      uniform float uTime;
      uniform float uAmplitude;
    
      vec3 movement(vec3 position) {
        vec3 pos = position;
        pos.x += sin(position.y + uTime) * uAmplitude;
        return pos;
      }
    
      void main() {
        vec3 blobShift = movement(position);
    	// apply the per-instance transform first, then the mesh's own model matrix
    	vec4 modelPosition = modelMatrix * instanceMatrix * vec4(blobShift, 1.0);
        vec4 viewPosition = viewMatrix * modelPosition;
        vec4 projectionPosition = projectionMatrix * viewPosition;
        gl_Position = projectionPosition;
      }

    Now that we have used the instanceMatrix attribute, each blob is in its corresponding position, rotation, and scale.

    Live | Source

    Changing attributes per instance

    We managed to render all the blobs in different positions, but since the uniforms are shared across all instances, they all end up having the same animation.

    To solve this issue, we need a way to provide custom information for each instance. We actually did this before, when we used the instanceMatrix to move each instance to its corresponding location. Let’s look at the magic behind instanceMatrix, so we can learn how to create our own instanced attributes.

    Taking a look at the implementation of instanceMatrix, we can see that it is using something called InstancedBufferAttribute:

    https://github.com/mrdoob/three.js/blob/master/src/objects/InstancedMesh.js#L57

    InstancedBufferAttribute allows us to create variables that will change for each instance. Let’s use it to vary the animation of our blobs.

    Drei has a component to simplify this called InstancedAttribute that allows us to define custom attributes easily.

    // Tell typescript about our custom attribute
    const [BlobInstances, Blob] = createInstances<{ timeShift: number }>()
    
    function Scene() {
      useFrame((state) => {
        baseMaterial.uniforms.uTime.value = state.clock.elapsedTime
      })
    
      return (
        <BlobInstances material={baseMaterial} limit={sphereCount}>
          {/* Declare an instanced attribute with a default value */}
          <InstancedAttribute name="timeShift" defaultValue={0} />
          
          <sphereGeometry args={[1, 32, 32]} />
          {Array.from({ length: sphereCount }).map((_, index) => (
            <Blob
              key={index}
              position={getRandomPosition()}
              
              // Set the instanced attribute value for this instance
              timeShift={Math.random() * 10}
              
            />
          ))}
        </BlobInstances>
      )
    }

    We’ll use this time shift attribute in our shader material to change the blob animation:

    uniform float uTime;
    uniform float uAmplitude;
    // custom instanced attribute
    attribute float timeShift;
    
    vec3 movement(vec3 position) {
      vec3 pos = position;
      pos.x += sin(position.y + uTime + timeShift) * uAmplitude;
      return pos;
    }

    Now, each blob has its own animation:

    Live | Source

    Creating a forest

    Let’s create a forest using instanced meshes. I’m going to use a 3D model from SketchFab: Stylized Pine Tree Tree by Batuhan13.

    import { useGLTF } from "@react-three/drei"
    import * as THREE from "three"
    import { GLTF } from "three/examples/jsm/Addons.js"
    
    // I always like to type the models so that they are safer to work with
    interface TreeGltf extends GLTF {
      nodes: {
        tree_low001_StylizedTree_0: THREE.Mesh<
          THREE.BufferGeometry,
          THREE.MeshStandardMaterial
        >
      }
    }
    
    function Scene() {
    
      // Load the model
      const { nodes } = useGLTF(
        "/stylized_pine_tree_tree.glb"
      ) as unknown as TreeGltf
    
      return (
        <group>
          {/* add one tree to our scene */ }
          <mesh
            scale={0.02}
            geometry={nodes.tree_low001_StylizedTree_0.geometry}
            material={nodes.tree_low001_StylizedTree_0.material}
          />
        </group>
      )
    }
    

    (I added lights and a ground in a separate file.)

    Now that we have one tree, let’s apply instancing.

    const getRandomPosition = () => {
      return [
        (Math.random() - 0.5) * 10000,
        0,
        (Math.random() - 0.5) * 10000
      ] as const
    }
    
    const [TreeInstances, Tree] = createInstances()
    const treeCount = 1000
    
    function Scene() {
      const { scene, nodes } = useGLTF(
        "/stylized_pine_tree_tree.glb"
      ) as unknown as TreeGltf
    
      return (
        <group>
          <TreeInstances
            limit={treeCount}
            scale={0.02}
            geometry={nodes.tree_low001_StylizedTree_0.geometry}
            material={nodes.tree_low001_StylizedTree_0.material}
          >
            {Array.from({ length: treeCount }).map((_, index) => (
              <Tree key={index} position={getRandomPosition()} />
            ))}
          </TreeInstances>
        </group>
      )
    }

    Our entire forest is being rendered in only three draw calls: one for the skybox, another one for the ground plane, and a third one with all the trees.

    To make things more interesting, we can vary the height and rotation of each tree:

    const getRandomPosition = () => {
      return [
        (Math.random() - 0.5) * 10000,
        0,
        (Math.random() - 0.5) * 10000
      ] as const
    }
    
    function getRandomScale() {
      return Math.random() * 0.7 + 0.5
    }
    
    // ...
    <Tree
      key={index}
      position={getRandomPosition()}
      scale={getRandomScale()}
      rotation-y={Math.random() * Math.PI * 2}
    />
    // ...
    Live | Source

    Further reading

    There are some topics that I didn’t cover in this article, but I think they are worth mentioning:

    • Batched Meshes: Now, we can render one geometry multiple times, but using a batched mesh will allow you to render different geometries at the same time, sharing the same material. This way, you are not limited to rendering one tree geometry; you can vary the shape of each one.
    • Skeletons: they are not currently supported with instancing. To build the latest basement.studio site we hacked together our own implementation, and I invite you to read about it there.
    • Morphing with batched mesh: Morphing is supported with instances but not with batched meshes. If you want to implement it yourself, I’d suggest you read these notes.


