
KW
Posts: 2 · Comments: 13 · Joined: 6 mo. ago

  • Ah, I think you may have solved part of the problem. I tried to use a network with container name resolution, but it failed. That's why I went with pods and published ports directly to the host.

    I will try a dedicated network with DNS enabled, thanks!
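
    Roughly what I have in mind, as a minimal sketch driving the podman CLI from Python (network and container names are made up, and name resolution assumes a reasonably recent Podman with netavark/aardvark-dns):

    ```python
    # Sketch: create a user-defined network (these get DNS) and check that one
    # container can resolve another by name. "webnet" and "db" are placeholders.
    import subprocess

    def podman(*args):
        # Run a podman command and fail loudly if it errors.
        return subprocess.run(["podman", *args], check=True,
                              capture_output=True, text=True)

    podman("network", "create", "webnet")                    # rootless user-defined network
    podman("run", "-d", "--name", "db", "--network", "webnet",
           "docker.io/library/alpine", "sleep", "3600")      # stand-in for a real service
    # A second container on the same network should resolve "db" by name.
    out = podman("run", "--rm", "--network", "webnet",
                 "docker.io/library/alpine", "nslookup", "db")
    print(out.stdout)
    ```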

  • I wanted to do something similar, but I grouped some containers using pods and it seems to have broken the networking.

    Eventually I kept the pods and exposed everything to the host, where Caddy can reach the services. Not the cleanest way, especially as my firewall is turned off.

  • I switched at work because of the licensing changes Docker made. I noticed that for my work workflow, Podman was a drop-in replacement for Docker.

    For my homelab, I wanted to experiment with rootless containers, and I also prefer to have my services handled by systemd. I also really like Podman's built-in auto-update.

  • Yes, maybe. I will edit my post to better explain the issue I'm facing.

    I'm using pasta. I can see some weird behavior; for instance, some services can reach others through host.containers.internal, while for others I have to use 192.168.1.x.

  • I should have clarified this. It does not open the ports, but I have set up my firewall to allow a range of IPs and the traffic is still blocked.

    I have noticed some inconsistency in the behavior: the traffic would sometimes work right after enabling ufw, but never after a reboot. Knowing how Docker works, I assumed Podman would also mess with the firewall, but maybe the issue comes from something else.

  • Selfhosted @lemmy.world

    Podman rootless and ufw

  • I have a MacBook Pro M1 Pro with 16 GB of RAM. I closed a lot of things and managed to free up 10 GB, but that still doesn't seem to be enough to run the 7B model. As for the answer being truncated, it seems to be a frontend issue. I tried open-webui connected to llama-server and it seems to be working great, thank you!

  • I tried llama.cpp with llama-server and Qwen2.5 Coder 1.5B. Higher-parameter models just output garbage, and I can see an OutOfMemory error in the logs. With the 1.5B model, I have an issue where it just stops outputting the answer, cutting off mid-sentence or in the middle of a class. Is my hardware not performant enough, or is it something I can tweak with some parameters?
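
    In case it helps narrow it down, something like this is what I'd try against llama-server directly, with an explicit token budget (sketch only; it assumes the OpenAI-compatible endpoint on the default port 8080, and the model name is a placeholder):

    ```python
    # Sketch: ask llama-server for a completion with an explicit max_tokens,
    # so the reply is not cut off by a low client-side default.
    import json
    import urllib.request

    payload = {
        "model": "qwen2.5-coder-1.5b",   # placeholder; a single-model server ignores it
        "messages": [{"role": "user", "content": "Write a small Python stack class."}],
        "max_tokens": 1024,              # raise this if answers stop mid-sentence
    }
    req = urllib.request.Request(
        "http://127.0.0.1:8080/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    print(body["choices"][0]["message"]["content"])
    ```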

  • I'm new to this and was wondering why you don't recommend ollama. It's the first one I managed to run and it seemed decent, but if there are better alternatives I'm interested.

    Edit: it seems the two others don’t have an API. What would you recommend if you need an API?
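
    For context, this is the kind of API use I mean; ollama, for example, exposes an HTTP API on its default port 11434 (sketch only, the model name is just whichever one you have pulled):

    ```python
    # Sketch: call ollama's REST API for a one-shot, non-streaming generation.
    import json
    import urllib.request

    payload = {
        "model": "qwen2.5-coder:1.5b",   # placeholder; use a model you have pulled
        "prompt": "Explain what a reverse proxy does in one sentence.",
        "stream": False,                 # return the full answer in one JSON response
    }
    req = urllib.request.Request(
        "http://127.0.0.1:11434/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["response"])
    ```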

  • Selfhosted @lemmy.world

    Jellyfin burning subtitles for AndroidTV