Tag: Process

  • How to kill a process running on a local port in Windows | Code4IT

    Now you can’t run your application because another process already uses the port. How can you find that process? How to kill it?

    Sometimes, when trying to run your ASP.NET application, there’s something stopping you.

    Have you ever found a message like this?

    Failed to bind to address https://127.0.0.1:7261: address already in use.

    You can try over and over again, you can also restart the application, but the port still appears to be used by another process.

    How can you find the process that is running on a local port? How can you kill it to free up the port and, eventually, be able to run your application?

    In this article, we will learn how to find the blocking port in Windows 10 and Windows 11, and then we will learn how to kill that process given its PID.

    How to find the process running on a port on Windows 11 using PowerShell

    Let’s see how to identify the process that is running on port 7261.

    Open a PowerShell and run the netstat command:

    NETSTAT is a command that shows info about the active TCP/IP network connections. It accepts several options. In this case, we will use:

    • -n: Displays addresses and port numbers in numerical form.
    • -o: Displays the owning process ID associated with each connection.
    • -a: Displays all connections and listening ports.
    • -p: Filters for a specific protocol (TCP or UDP).
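
    Putting these options together, the command used in this article is:

    netstat -noa -p TCP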

    Netstat command to show all active TCP connections

    Notice that the last column lists the PID (Process ID) bound to each connection.

    From here, we can use the findstr command to get only the rows with a specific string (the searched port number).

    netstat -noa -p TCP | findstr 7261
    

    Netstat info filtered by string

    Now, by looking at the last column, we can identify the Process ID: 19160.

    How to kill a process given its PID, using Task Manager or PowerShell

    Now that we have the Process ID (PID), we can open the Task Manager, paste the PID value in the topmost textbox, and find the related application.

    In our case, it was an instance of Visual Studio running an API application. We can now kill the process by hitting End Task.

    Using Task Manager on Windows 11 to find the process with the specified ID

    If you prefer working with PowerShell, you can find the details of the related process by using the Get-Process command:
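
    Get-Process -Id 19160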

    Process info found using PowerShell

    Then, you can use the taskkill command, specifying the PID with the /PID flag and adding the /F flag to force-kill the process:
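
    taskkill /PID 19160 /F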

    We have killed the process related to the running application. Visual Studio is still working, of course.

    Further readings

    Hey, what are these fancy colours on the PowerShell?

    It’s a customization I added to show the current folder and the info about the associated GIT repository. It’s incredibly useful while developing and navigating the file system with PowerShell.

    🔗 OhMyPosh and CascadiaCode not working on PowerShell? How to fix it in Windows 10 and Windows 11 Integrated Terminal

    This article first appeared on Code4IT 🐧

    Wrapping up

    As you can imagine, this article exists because I often forget how to find the process that stops my development.

    It’s always nice to delve into these topics to learn more about what you can do with PowerShell and which flags are available for a command.

    I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛

    Happy coding! 🐧


  • Deconstructing the 35mm Website: A Look at the Process and Technical Details

    The Idea Behind the Project

    This project primarily serves as a technical demo and learning material. It began when I decided to start learning Blender. I followed a few tutorials, then decided to do a small project using it—so I chose to create the Canon F-1 camera!

    After that, I decided to export the project to Three.js to add some cool post-processing shader effects. I wanted to create a sketch effect similar to what I had seen in some repair guides.

    After spending a few hours experimenting with it, I decided to integrate it into a fully functional website featuring some cool shaders and 3D effects!

    In this article, I’m going to walk through some of the key features of the site and provide a technical breakdown, assuming you already have a basic or beginner-level understanding of Three.js and shaders.

    1. The Edge Detection Shader

    Three.js includes a built-in edge detection shader called SobelOperatorShader. Basically, it detects edges based on color contrast—it draws a line between two areas with a strong enough difference in color.

    To make my effect work the way I want, I need to assign a unique color to each area I want to highlight on my model. This way, Three.js will draw a line around those areas.
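
    A minimal sketch of that idea — the mesh names and color table below are placeholders, not the author's actual values:

    import * as THREE from "three"

    // Give each area a flat, unique color so the Sobel pass sees a crisp
    // edge wherever two areas meet (names and colors are illustrative;
    // `model` stands for the loaded camera model, e.g. gltf.scene).
    const areaColors = {
      lens: 0xff4444,
      body: 0x44ff44,
      dial: 0x4444ff,
    }

    model.traverse((child) => {
      if (child.isMesh) {
        child.material = new THREE.MeshBasicMaterial({
          color: areaColors[child.name] ?? 0x888888,
        })
      }
    })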

    Here’s my model with all the materials applied:

    This way, Three.js can accurately detect each area I want to highlight!

    As you can see, the lines are not all the same intensity—some are white, while others are light gray. This is because, by default, line intensity depends on contrast: edges with lower contrast appear with lighter lines. To fix this, I manually modified the post-processing shader to make all lines fully white, regardless of contrast.

    The shader can be found in:

    node_modules/three/examples/jsm/shaders/SobelOperatorShader.js

    I copied the contents of the fragment shader into a separate file so I could freely modify it.

    uniform sampler2D tDiffuse;
    uniform vec2 resolution;
    varying vec2 vUv;
    
    float sobel(sampler2D tDiffuse,vec2 texel)
    {
        // kernel definition (in glsl matrices are filled in column-major order)
    
        const mat3 Gx = mat3( -1, -2, -1, 0, 0, 0, 1, 2, 1 ); // x direction kernel
        const mat3 Gy = mat3( -1, 0, 1, -2, 0, 2, -1, 0, 1 ); // y direction kernel
    
        // fetch the 3x3 neighbourhood of a fragment
    
        // first column
    
        float tx0y0 = texture2D( tDiffuse, vUv + texel * vec2( -1, -1 ) ).r;
        float tx0y1 = texture2D( tDiffuse, vUv + texel * vec2( -1,  0 ) ).r;
        float tx0y2 = texture2D( tDiffuse, vUv + texel * vec2( -1,  1 ) ).r;
    
        // second column
    
        float tx1y0 = texture2D( tDiffuse, vUv + texel * vec2(  0, -1 ) ).r;
        float tx1y1 = texture2D( tDiffuse, vUv + texel * vec2(  0,  0 ) ).r;
        float tx1y2 = texture2D( tDiffuse, vUv + texel * vec2(  0,  1 ) ).r;
    
        // third column
    
        float tx2y0 = texture2D( tDiffuse, vUv + texel * vec2(  1, -1 ) ).r;
        float tx2y1 = texture2D( tDiffuse, vUv + texel * vec2(  1,  0 ) ).r;
        float tx2y2 = texture2D( tDiffuse, vUv + texel * vec2(  1,  1 ) ).r;
    
        // gradient value in x direction
    
        float valueGx = Gx[0][0] * tx0y0 + Gx[1][0] * tx1y0 + Gx[2][0] * tx2y0 +
            Gx[0][1] * tx0y1 + Gx[1][1] * tx1y1 + Gx[2][1] * tx2y1 +
            Gx[0][2] * tx0y2 + Gx[1][2] * tx1y2 + Gx[2][2] * tx2y2;
    
        // gradient value in y direction
    
        float valueGy = Gy[0][0] * tx0y0 + Gy[1][0] * tx1y0 + Gy[2][0] * tx2y0 +
            Gy[0][1] * tx0y1 + Gy[1][1] * tx1y1 + Gy[2][1] * tx2y1 +
            Gy[0][2] * tx0y2 + Gy[1][2] * tx1y2 + Gy[2][2] * tx2y2;
    
        // magnitude of the total gradient
    
        float G = sqrt( ( valueGx * valueGx ) + ( valueGy * valueGy ) );
    
        return G;
    }
    
    
    void main() {
    
        vec2 texel = vec2( 1.0 / resolution.x, 1.0 / resolution.y );
        
        float G = sobel(tDiffuse, texel);
        G = G > 0.001 ? 1. : 0.;
            
        gl_FragColor = vec4(vec3(G),1.0);
    
        #include <colorspace_fragment>
    }

    What I’m doing here is moving all the edge detection logic into the Sobel function. Then, I pass the tDiffuse texture—which is the composer’s render—to this function.

    This way, I can modify the output of the edge detection shader before passing it back to the composer:

    float G = sobel(tDiffuse, texel);
    G = G > 0.001 ? 1. : 0.;

    G represents the intensity of the edge detection. It’s a single value because the lines are monochrome. G ranges from 0 to 1, where 0 means full black (no edge detected) and 1 means full white (strong contrast detected).

    As mentioned earlier, this value depends on the contrast. What I’m doing in the second line is forcing G to be 1 if it’s above a certain threshold (I chose 0.001, but you could pick a smaller value if you want).

    This way I can get all the edges to have the same intensity.

    Here’s how I’m applying the custom fragment shader to the Sobel Operator shader pass:

    import { SobelOperatorShader } from "three/addons/shaders/SobelOperatorShader.js"
    import { ShaderPass } from "three/addons/postprocessing/ShaderPass.js"
    // The customized fragment shader shown above, loaded as a string
    // (path and loading mechanism depend on your bundler; illustrative):
    import sobelFragment from "./sobelFragment.glsl?raw"
    
    
    export default class CannonF1 {
        constructor() {
            //....code
        }
    
        setupPostprocessing()
        {
    
            SobelOperatorShader.fragmentShader = sobelFragment
    
            this.effectSobel = new ShaderPass(SobelOperatorShader)
            this.effectSobel.uniforms["resolution"].value.x =
            window.innerWidth * Math.min(window.devicePixelRatio, 2)
            this.effectSobel.uniforms["resolution"].value.y =
            window.innerHeight * Math.min(window.devicePixelRatio, 2)
    
            this.composer.addPass(this.effectSobel)
        }
    }
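
    For context, here is a minimal sketch of the composer setup this pass plugs into; the article doesn't show it, so the RenderPass wiring below is an assumption:

    import { EffectComposer } from "three/addons/postprocessing/EffectComposer.js"
    import { RenderPass } from "three/addons/postprocessing/RenderPass.js"

    // Render the scene first, then run the customized Sobel pass on top.
    this.composer = new EffectComposer(this.renderer)
    this.composer.addPass(new RenderPass(this.scene, this.camera))
    this.setupPostprocessing() // adds this.effectSobel, as shown above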

    2. The Mesh Highlight on Hover Effect

    Next, let’s take a look at the lens parts section.

    This is mainly achieved using a Three.js utility called RenderTarget.

    A render target is a buffer where the GPU draws pixels for a scene being rendered off-screen. It’s commonly used in effects like post-processing, where the rendered image is processed before being displayed on the screen.

    Basically, this allows me to render my scene twice per frame: once with only the highlighted mesh, and once without it.

    First, I set up the render targets:

    /* 
      ....Code 
    */
    
    createRenderTargets() {
        const sizes = {
          width:
            window.innerWidth * Math.ceil(Math.min(2, window.devicePixelRatio)),
          height:
            window.innerHeight * Math.ceil(Math.min(2, window.devicePixelRatio)),
        }
    
        this.renderTargetA = new THREE.WebGLRenderTarget(
          sizes.width,
          sizes.height,
          rtParams
        )
    
        this.renderTargetB = new THREE.WebGLRenderTarget(
          sizes.width,
          sizes.height,
          rtParams
        )
      }
    
    /* 
      ...Code 
    */
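
    The rtParams options object isn't shown in the article; a plausible configuration (my assumption, not the author's exact settings) could be:

    const rtParams = {
      minFilter: THREE.LinearFilter,
      magFilter: THREE.LinearFilter,
      format: THREE.RGBAFormat,
    }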

    Then, using the Three.js Raycaster, I can retrieve the uuid of the mesh being hovered:

    onMouseMove(event: MouseEvent) {
        this.mouse.x = (event.clientX / window.innerWidth) * 2 - 1
        this.mouse.y = -(event.clientY / window.innerHeight) * 2 + 1
    
        this.raycaster.setFromCamera(this.mouse, this.camera)
        const intersects = this.raycaster.intersectObjects(this.scene.children)
        const target = intersects[0]
    
        if (target && "material" in target.object) {
          const targetMesh = intersects[0].object as THREE.Mesh
          this.cannonF1?.onSelectMesh(targetMesh.uuid)
        } else {
          this.cannonF1?.onSelectMesh()
        }
      }

    In the onSelectMesh method, I set the value of this.selectedMeshName to the name of the mesh group that contains the target mesh from the Raycaster (I’m using names to refer to groups of meshes).
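
    A minimal sketch of what that method could look like, reusing the mesheUuidToName lookup that appears in the render loop below (the exact implementation is my assumption):

    onSelectMesh(uuid) {
      // Map the hovered mesh's uuid to its group name; calling with no
      // argument clears the selection when nothing is hovered.
      this.selectedMeshName = uuid ? this.mesheUuidToName[uuid] : null
    }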

    This way, in my render loop, I can create two distinct renders:

    • One render (renderTargetA) with all the meshes except the hovered mesh
    • Another render (renderTargetB) with only the hovered mesh

    render() {
        // Render renderTargetA
        this.modelChildren.forEach((mesh) => {
          if (this.mesheUuidToName[mesh.uuid] === this.selectedMeshName) {
            mesh.visible = false
          } else {
            mesh.visible = true
          }
        })
    
        this.renderer.setRenderTarget(this.renderTargetA)
        this.renderer.render(this.scene, this.camera)
    
        // Render renderTargetB
        this.modelChildren.forEach((mesh) => {
          if (this.mesheUuidToName[mesh.uuid] === this.selectedMeshName) {
            mesh.visible = true
          } else {
            mesh.visible = false
          }
        })
        if (this.targetedMesh) {
          this.targetedMesh.children.forEach((child) => {
            child.visible = true
          })
        }
    
        this.renderer.setRenderTarget(this.renderTargetB)
        this.renderer.render(this.scene, this.camera)
    
        this.modelChildren.forEach((mesh) => {
          mesh.visible = false
        })    
    
        this.effectSobel.uniforms.tDiffuse1.value = this.renderTargetA.texture
        this.effectSobel.uniforms.tDiffuse2.value = this.renderTargetB.texture
    
        this.renderer.setRenderTarget(null)
      }

    This is what the renderTargetA render looks like:

    …and renderTargetB:

    As you can see, I’m sending both renders as texture uniforms to the effectSobel shader. The post-processing shader then “merges” these two renders into a single output.

    At this point, we have two renders of the scene, and the post-processing shader needs to decide which one to display. Initially, I thought of simply combining them by adding the two textures together, but that didn’t produce the correct result:

    What I needed was a way to hide the pixels of one render when they are “covered” by pixels from another render.

    To achieve this, I used the distance of each vertex from the camera. This meant I had to go through all the meshes in the model and modify their materials. However, since the mesh colors are important for the edge detection effect, I couldn’t change their colors.

    Instead, I used the alpha channel of each individual vertex to set the distance from the camera.

    #include <common>
    
    varying vec3 vPosition;
    uniform vec3 uColor;
    
    float normalizeRange(float value, float oldMin, float oldMax, float newMin, float newMax) {
        float normalized = (value - oldMin) / (oldMax - oldMin);
        
        return newMin + (newMax - newMin) * normalized;
    }
    
    void main()
    {
        float dist = distance(vPosition,cameraPosition);
    
        float l = luminance( uColor );
    
        gl_FragColor=vec4(vec3(l),normalizeRange(dist,0.,20.,0.,1.));
    
        #include <colorspace_fragment>
    }

    Here’s an explanation of this shader:

    • First, the luminance function is a built-in Three.js shader utility included via the <common> shader chunk. It’s recommended to use this function with the Sobel effect to improve edge detection results.
    • The uColor value represents the initial color of the mesh.
    • The dist value calculates the distance between the vertex position (passed from the vertex shader via a varying) and the camera, using the built-in cameraPosition variable in Three.js shaders.
    • Finally, I pass this distance through the alpha channel. Since the alpha value can’t exceed 1, I use a normalized version of the distance.
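
    For example, with the 0–20 range used above, a vertex 10 units from the camera ends up with alpha normalizeRange(10.0, 0.0, 20.0, 0.0, 1.0) = 0.5, while closer vertices get smaller alpha values.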

    And here is the updated logic for the postprocessing shader:

    uniform sampler2D tDiffuse;
    uniform sampler2D tDiffuse1;
    uniform sampler2D tDiffuse2;
    uniform vec2 resolution;
    varying vec2 vUv;
    
    float sobel(sampler2D tDiffuse,vec2 texel)
    {
        //sobel operator
    }
    
    
    void main() {
    
        vec2 texel = vec2( 1.0 / resolution.x, 1.0 / resolution.y );
        
        vec4 t1 = texture2D(tDiffuse1,vUv);
        vec4 t2 = texture2D(tDiffuse2,vUv);
    
        if(t1.a==0.)
        {
            t1.a = 1.;
        }
        if(t2.a==0.)
        {
            t2.a = 1.;
        }
    
    
        float G = sobel(tDiffuse1,texel);
        G= G > 0.001 ? 1. : 0.;
        float Gs = sobel(tDiffuse2,texel);
        Gs = Gs > 0.001 ? 1. : 0.;
        
        vec4 s1 = vec4(vec3(G),1.);
        
        vec4 s2 = vec4(vec3(Gs),1.);    
        
        vec4 sobelTexture = vec4(vec3(0.),1.);
    
    
        if(t1.a>t2.a)
        {
            sobelTexture = s2;       
        }    
        else{
            sobelTexture = s1;
        }    
    
            
        gl_FragColor = sobelTexture;
    
        #include <colorspace_fragment>
    }

    Now that the alpha channel of each texture encodes the distance to the camera, I can simply compare the two values and, for each pixel, display the render whose geometry is closer to the camera.

    3. The Film Roll Effect

    Next is the film roll component that moves and twists on scroll.

    This effect is achieved entirely with shaders: the component is a single plane mesh with a shader material.

    All the data is sent to the shader through uniforms:

    export default class Film {  
      constructor() {
        //...code
      }
    
      createGeometry() {
        this.geometry = new THREE.PlaneGeometry(
          60,
          2,
          100,
          10
        )
      }
    
      createMaterial() {
        this.material = new THREE.ShaderMaterial({
          vertexShader,
          fragmentShader,
          side: THREE.DoubleSide,
          transparent: true,
          depthWrite: false,
          blending: THREE.CustomBlending,
          blendEquation: THREE.MaxEquation,
          blendSrc: THREE.SrcAlphaFactor,
          blendDst: THREE.OneMinusSrcAlphaFactor,
          uniforms: {
            uPlaneWidth: new THREE.Uniform(this.geometry.parameters.width),
            uRadius: new THREE.Uniform(2),
            uXZfreq: new THREE.Uniform(3.525),
            uYfreq: new THREE.Uniform(2.155),
            uOffset: new THREE.Uniform(0),
            uAlphaMap: new THREE.Uniform(
              window.preloader.loadTexture(
                "./alpha-map.jpg",
                "film-alpha-map",
                (texture) => {
                  texture.wrapS = THREE.RepeatWrapping
                  const { width, height } = texture.image
                  this.material.uniforms.uAlphaMapResolution.value =
                    new THREE.Vector2(width, height)
                }
              )
            ),
            //uImages: new THREE.Uniform(new THREE.Vector4()),
            uImages: new THREE.Uniform(
              window.preloader.loadTexture(
                "/film-texture.png",
                "film-image-texture",
                (tex) => {
                  tex.wrapS = THREE.RepeatWrapping
                }
              )
            ),
            uRepeatFactor: new THREE.Uniform(this.repeatFactor),
            uImagesCount: new THREE.Uniform(this.images.length * this.repeatFactor),
            uAlphaMapResolution: new THREE.Uniform(new THREE.Vector2()),
            uFilmColor: new THREE.Uniform(window.colors.orange1),
          },
        })
      }  
    
      createMesh() {
        this.mesh = new THREE.Mesh(this.geometry, this.material)
        this.scene.add(this.mesh)
      }
    }

    The main vertex shader uniforms are:

    • uRadius is the radius of the cylinder shape
    • uXZfreq is the frequency of the twists on the (X,Z) plane
    • uYfreq is a cylinder height factor
    • uOffset is the vertical offset of the roll when you scroll up and down

    Here is how they are used in the vertex shader:

    #define PI 3.14159265359
    
    uniform float uPlaneWidth;
    uniform float uXZfreq;
    uniform float uYfreq;
    varying vec2 vUv;
    uniform float uOffset;
    varying vec3 vPosition;
    uniform float uRadius;
    
    void main()
    {
        vec3 np = position;
        float theta = -(PI*np.x)/(uPlaneWidth*0.5);
    
    
        np.x=cos(uXZfreq*theta+uOffset)*uRadius;
        np.y+=theta*uYfreq;
        np.z=sin(uXZfreq*theta+uOffset)*uRadius;
        
        vec4 modelPosition = modelMatrix * vec4(np, 1.0);
    
        
        vec4 viewPosition = viewMatrix * modelPosition;
        vec4 projectedPosition = projectionMatrix * viewPosition;
        gl_Position = projectedPosition;    
    
    
        vUv=uv;
        vPosition=np;
    }

    As you can see, these uniforms modify the initial position attribute to wrap the plane into a cylinder: theta sweeps a full 2π range across the plane’s width, so the cos/sin pair maps X and Z onto a circle of radius uRadius, while uYfreq stretches the shape vertically. The X and Z components use uOffset as a phase shift, and this uniform is linked to a ScrollTrigger timeline that produces the twist-on-scroll effect.

    import gsap from "gsap"
    import { ScrollTrigger } from "gsap/ScrollTrigger"

    gsap.registerPlugin(ScrollTrigger)

    const tl = gsap.timeline({
      scrollTrigger: {
        trigger: this.section,
        start: "top bottom",
        end: "bottom top",
        scrub: true,
        invalidateOnRefresh: true,        
      },
    })    
    
    tl.to(
      this.material.uniforms.uOffset,
      {
        value: 10,
        duration: 1,
      },
      0
    )

    Conclusion

    That’s it for the most part! Don’t feel frustrated if you don’t understand everything right away—I often got stuck for days on certain parts and didn’t know every technical detail before I started building.

    I learned so much from this project, and I hope you’ll find it just as useful!

    Thank you for reading, and thanks to Codrops for featuring me again!




  • How Has Medical Technology Impacted the Surrogacy Process?


    Advancements in medical technology have significantly transformed the surrogacy process, offering new opportunities and improving outcomes for all parties involved. From the initial application to post-birth care, technology plays a crucial role in making surrogacy a viable and successful option for many families. Let’s explore how these advancements have impacted the various stages of the surrogacy journey.

    Streamlining the Application Process

    Every year, thousands of women express their interest in becoming surrogate mothers. The process begins with a thorough application and screening to ensure candidates are suitable for the role. Medical technology has streamlined this initial stage, enabling agencies to efficiently process and review applications. Online platforms and databases allow for quick and secure submission of documents, while advanced screening tools help identify potential surrogates who meet the necessary health and psychological criteria.

    Ensuring Health and Compatibility

    The first three months of the surrogacy process involve a rigorous schedule of paperwork, legal formalities, and medical exams, as stated by Elevate Baby. Medical technology has enhanced these early stages by providing sophisticated diagnostic tools and tests. Surrogate mothers undergo comprehensive health evaluations to ensure they are physically capable of carrying a pregnancy to term. This includes blood tests, ultrasounds, and other imaging techniques that offer detailed insights into their health status. These exams help identify any potential issues early on, ensuring a smooth and safe journey ahead.

    Facilitating Legal and Ethical Compliance

    Legal aspects are a critical component of the surrogacy process. The initial months also involve meticulous legal work to protect the rights and responsibilities of all parties. Medical technology aids in this by ensuring accurate and secure documentation. Digital contracts and electronic signatures have replaced traditional paperwork, making the process more efficient and less prone to errors. Secure online portals allow for the easy sharing and storage of legal documents, ensuring compliance with local regulations and ethical standards.

    Enhancing Fertility Treatments

    One of the most significant impacts of medical technology on surrogacy is in the realm of fertility treatments. In vitro fertilization (IVF) is a cornerstone of the surrogacy process, and advancements in this field have greatly improved success rates. Technologies such as preimplantation genetic testing (PGT) allow for the screening of embryos for genetic abnormalities before implantation. This increases the likelihood of a healthy pregnancy and reduces the risk of complications. Additionally, innovations in cryopreservation enable the freezing and storage of eggs, sperm, and embryos, providing greater flexibility and options for intended parents and surrogates.

    Monitoring Pregnancy and Health

    Throughout the surrogacy journey, continuous monitoring of the surrogate’s health is paramount. Modern medical technology offers a range of tools to track the progress of the pregnancy and ensure the well-being of both the surrogate and the developing baby. Regular ultrasounds, non-invasive prenatal testing (NIPT), and wearable health devices provide real-time data on the surrogate’s condition. This information allows healthcare providers to promptly address any concerns and make informed decisions to support a healthy pregnancy.

    Supporting Emotional Well-being

    The surrogacy process can be emotionally taxing for all involved. Medical technology also plays a role in supporting the mental health of surrogate mothers. Telemedicine and virtual counseling services offer accessible support, allowing surrogates to connect with mental health professionals from the comfort of their homes. These resources help surrogates manage stress, anxiety, and other emotional challenges, ensuring a positive and fulfilling experience.

    Post-Birth Care and Follow-Up

    After the birth of the child, medical technology continues to be essential. Surrogates receive comprehensive post-birth care to ensure their physical and emotional recovery. Regular follow-up visits and check-ups are facilitated by advanced medical scheduling systems and electronic health records, ensuring continuity of care. It is recommended that individuals visit a doctor at least once a year to maintain their overall health, and this applies to surrogate mothers as well. Annual check-ups help monitor long-term health outcomes and provide ongoing support.


