Folks, I have a Node.js script running on my Windows machine that uses the dockerode npm package to talk to Docker on that box, starting and killing Docker containers.
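
For context, the container lifecycle in the script is roughly this (a minimal dockerode sketch; the image, command, and options are placeholders rather than my actual workload):

    const Docker = require('dockerode');

    // Docker Desktop on Windows exposes the engine over a named pipe
    const docker = new Docker({ socketPath: '//./pipe/docker_engine' });

    async function runOnce() {
      // Placeholder image and command
      const container = await docker.createContainer({
        Image: 'alpine',
        Cmd: ['sleep', '300'],
      });
      await container.start();

      // ... downstream work happens here ...

      await container.kill();
      await container.remove(); // the container is gone, but the RAM it used isn't handed back
    }

    runOnce().catch(console.error);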

However, after the containers have been killed, Docker still holds on to the memory it reserved for them, and downstream processes then fail due to lack of RAM.

To counter this, I have PowerShell scripts to start and kill Docker Desktop.

All of this is a horrid experience.

On my Mac, I just use Colima with Portainer and couldn’t be happier.

I’ve explored some options to replace Docker Desktop, and Rancher Desktop seems to be a drop-in replacement, including support for the Docker remote API.

  1. Is this true? Is Rancher Desktop that good of a drop-in replacement?
  2. Does Rancher Desktop better manage RAM for containers that have been killed off? Or does it do the same thing as Docker Desktop and hold on to the RAM?

Are there other options I’m not thinking of that might solve my problems? I’ve seen a few alternatives but haven’t tried them yet: Moby, containerd, Podman.

I don’t actually need the Docker Desktop interface, so pure CLI Docker would work just as well. How are you all running pure Docker on Windows boxes?

  • Dandroid@dandroid.app · 1 year ago

    I despise Docker Desktop. Before I knew anything about Docker or containers, all I knew was that it was on the required software list for building our software at work: if it wasn’t open, my build would fail, and if it was open, my laptop would slow to a crawl.

    Eventually I took classes on Docker for work and learned quite a bit about it. I learned that I could use Docker from the command line with no UI and not take anywhere near the same performance hit. I eventually pointed my IDE’s Docker runtime at Podman running on WSL2. Now I take pretty much no noticeable performance hit.

    TL;DR: you can replace Docker Desktop with the plain command line in WSL2 and have no UI at all.

    • damnthefilibuster@lemmy.world (OP) · 1 year ago

      thanks for that :)

      BTW, if I fire up a bunch of Docker containers in WSL2 using Podman or native Docker and then kill them, does WSL2 release the RAM it acquired to run those containers?

    • markr@lemmy.world · 1 year ago

      The integration of Docker for Windows with WSL2 is an abomination that breaks just about every time I update either DDW or Windows. The fact that it is tied to my user account (both DDW and WSL2) also means it’s not a great choice for persistent services. I still use it to provide monitoring agents for Prometheus and Portainer, but otherwise everything runs on Linux VMs on my homelab XenServer cluster.

      It is possible to install Docker without DDW. It’s documented for server versions of Windows, but that is basically only for running Windows containers, and the only use case I can see for those is Windows build agents.

      Docker can also be installed standalone inside WSL2, which would be more reliable.
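
      For the standalone route, the rough shape inside a WSL2 distro such as Ubuntu is something like this (a sketch using Docker’s convenience script; adjust for your distro):

          # install the engine and CLI inside the distro
          curl -fsSL https://get.docker.com | sudo sh

          # let your user talk to the daemon without sudo
          sudo usermod -aG docker $USER

          # start the daemon (or enable systemd in /etc/wsl.conf and use systemctl)
          sudo service docker start
          docker run --rm hello-world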

  • Rikudou_Sage@lemmings.world · 1 year ago

    When I had Windows, I ran WSL2 + standard Linux Docker and it worked flawlessly. If you keep all your files on the WSL volume, it’s also really fast compared to Docker Desktop on Windows or Mac. I found it almost as fast as native Linux.

    • damnthefilibuster@lemmy.world (OP) · 1 year ago

      I thought WSL2 made things slow because of some stupidity they did with the code? Maybe they fixed it.

      Anyway, can it take as many resources as it needs from the host? Unrestricted in terms of RAM and CPU?

      • Rikudou_Sage@lemmings.world · 1 year ago

        It’s slow when you go cross-filesystem, meaning accessing WSL2 files from Windows or Windows files from WSL2. If you keep all related files in WSL2, it’s really comparable to the native Linux experience (with a small penalty from running in a VM, but nothing you’d notice).

        As far as I know, yes, it can take all the resources it needs.

      • breadsmasher@lemmy.world · 1 year ago

        “made things slow”

        That’s probably referring to how the filesystems are handled. Going from WSL to the Windows filesystem is slower than using the “proper” mount point.

        “Unrestricted”

        Yes.
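
        If you ever do want to put a ceiling on it, WSL2 reads global limits from a .wslconfig file in your Windows user profile. A rough example (the values are placeholders, and autoMemoryReclaim is an experimental setting in newer WSL releases that hands idle RAM back to Windows):

            [wsl2]
            memory=8GB        # cap how much RAM the WSL2 VM can take
            processors=4      # cap the virtual CPUs
            swap=2GB

            [experimental]
            autoMemoryReclaim=gradual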