The problem, I believe, is that stable diffusion presently only supports Python 3.10, but Arch ships 3.12, and some of the dependencies aren't compatible with the newer version. Here's what I did to get it working on Arch + an AMD 7800 XT GPU.
- Install python310 package from AUR
- Manually create the virtualenv for stable diffusion with
python3.10 -m venv venv
(in stable diffusion root directory)
This should be enough for the dependencies to install correctly. To get GPU acceleration to work, I also had to add this environment variable: HSA_OVERRIDE_GFX_VERSION=11.0.0
(Not sure if this is needed for the 7900 XTX, or if the value is the same.)
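For clarity, at launch time you just prefix the command with the override; this sketch assumes you're using the stock webui.sh launch script in the stable diffusion root directory (adjust if you start it some other way):
# assumption: standard webui.sh launcher, run from the stable diffusion root
HSA_OVERRIDE_GFX_VERSION=11.0.0 ./webui.sh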
This was my experience as well, as a developer trying to package an application as an AppImage. Creating an AppImage that works on your machine is easy. Creating one that actually works on other distros can be damn near impossible unless everything is statically linked and self-contained in the first place. In contrast, Flatpak's developer experience is much smoother, and if it runs, you can be pretty sure it runs elsewhere as well.
This somehow feels like it's straight out of some old sci-fi satire. We live in a world where work takes up an ever-growing share of our time. People are isolated from one another and told that they are rational individuals who only make independent decisions. Social interaction is replaced with transactional services and formal interfaces. Then we wonder why people are lonely, and the solution on offer is a chatbot you can talk to about your problems!
If I recall, Enlightenment used to have a rather vocal fan base at one time. The DE was a lot prettier than most of its contemporaries, and was relatively lightweight despite having animated effects and everything. I always thought EFL was one of the hidden gems of the Linux ecosystem, left in GTK's and Qt's shadow, but after reading the article (back when it was first published) I realized there was probably a good reason it never got popular. I thought the story was embellished, as thedailywtf articles typically are, with the "SPANK! SPANK! SPANK! Naughty programmer!" stuff, so I downloaded the EFL source code and checked. OMG, it was a real error message. (Though I believe it has since been removed.)
The company in question using EFL was (probably) Samsung, which apparently still uses it as the native graphical toolkit for Tizen.
Counterfeiting physical products requires so much personnel and infrastructure that it is surely in the hands of diversified organized crime. But I don't believe that will spread to digital piracy, where the people running the sites have usually been students or other computer hobbyists.
Exactly. When P2P networks emerged, digital mass distribution became so easy and cheap that you could do it by accident.
That is a good point to emphasize. A downside of a CLA is that it adds a bit of bureaucracy and may deter some contributors. If the primary concern is whether a GPL-licensed app is publishable on an app store, an alternative is to add an app store exception clause to the license. (The GPL allows optional extra clauses that make the license more permissive.) The catch is that while your code can be incorporated into other GPL-licensed applications, you can't take code from other GPL projects that don't have the same exception.
As others have already said, the prohibition of using the code in commercial applications would make the license not open source/free software (as defined by the Free Software Foundation and Open Source Initiative.)
These are some of the most commonly used licenses:
- MIT - a very permissive license. Roughly says "do anything with this as long as you give attribution"
- BSD - similar to MIT (note that there are multiple versions of the BSD license)
- ASL2 (Apache License 2.0) - another permissive license. The major difference is that it also includes a patent grant clause. (Mini rant: I often hear that GPL3's patent clause is the reason big companies don't like it. Yet ASL2 has the very same clause and it's Google's favored license.)
- GPL - the most popular copyleft license (family). Requires derived works to be licensed under the same terms.
- LGPL - a variant of the GPL that permits dynamic linking to differently licensed works. Mainly useful for libraries.
- AGPL - a variant of GPL that specifies that making the software available over a network counts as distribution. (Works around the SaaS loophole. Mainly used for server applications.)
- Mozilla - a hybrid permissive/copyleft license. I don't fully understand how this one works.
If you want to use a true FLOSS license and your goal is to discourage people from selling it, I'd say the GPL is your best bet. Legit vendors who don't want to give out their source code won't touch GPL code. The non-legit ones won't care no matter what license you choose. Also, iOS App Store terms are not compatible with the GPL so they can't release their stuff there, but you can as long as you hold full copyright to your application.
When I first set up my IKEA remotes, they would sometimes bind to a random other light in the room. This happened when I used the IKEA method of pushing the reset button and holding the remote up to the target light. Not sure if it was caused by touchlink having a much bigger range than it should, or if it has something to do with how the devices create zigbee groups. For me, this problem stopped happening once I switched to using zigbee2mqtt's own group and binding controls rather than trying to pair the remote using the IKEA hub method.
So, check the remote's binding section in the zigbee2mqtt console. Perhaps the other device is listed there, or you can try unbinding the default bind group to see if that does anything.
(I just checked, one of my remotes that is presently out of battery is bound to the default bind group according to z2m, even though it shouldn't be. Could be a bug with the IKEA remote where it resets back to a default binding and the plug happens to be listening on the same ID as well.)
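If you'd rather poke at the bindings over MQTT instead of the frontend, the bridge request looks roughly like this; the topic and the default_bind_group target are from my memory of the z2m docs (double-check against your version), and the remote's friendly name is a placeholder:
# unbind a remote (placeholder name "ikea_remote") from the default bind group
mosquitto_pub -t 'zigbee2mqtt/bridge/request/device/unbind' -m '{"from": "ikea_remote", "to": "default_bind_group"}'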
My impression of Matter, too, was that it is not "done" yet and that device support is poor. On the other hand, you read everywhere that it will be the future.
This is my impression as well. I'm keeping an eye on how this space develops and I'll probably buy a second dongle just for Thread when I need it (i.e. when some product I really want comes out that only supports Thread.) I believe most zigbee dongles are theoretically capable of supporting Thread, since they both share the same physical layer protocol.
I'm curious to hear people's experiences with Thread/Matter devices. Ideally, I'd like to use my HA box as the border router and configure it to not allow any external Internet connections. Will this break any functionality on devices with a Matter logo on them? Ideally it shouldn't, but given the track record of manufacturers so far, my expectations are low.
I use zigbee2mqtt myself and I've been very happy with it. I haven't tried ZHA, but I believe z2m supports more devices. (I use z2m's supported devices list to choose which ones to buy.) The downside is that it's a bit more work to set up initially, as you need an MQTT broker as well. But in return, I feel like z2m is more reliable since it runs (and is updated) separate from HA core. I use it with a zzh! dongle and even though I got one of the bad ones with a faulty amplifier chip, it's been rock solid.
As for Thread(+Matter), I'm waiting for things to settle down. Support in HA is still experimental and there are very few products out yet that use Thread. I'll probably prefer Zigbee for as long as they sell them so all my devices will share the same mesh. Also, unlike Zigbee, Thread devices are not guaranteed to be local-only, which is my biggest worry. Thread/Matter won't free us from having to check a device compatibility list before buying.
This is my chief worry with Thread. Zigbee is guaranteed to be local only, but if they switch over to Thread, the individual bulbs will be able to call home, even if they expose some of their functionality locally via Matter. With Home Assistant, one can probably configure the Thread border router to not allow internet access, but I suspect a lot of supposedly local Thread/Matter devices will be designed with the assumption that they have cloud access and won't function fully if firewalled.
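For what it's worth, if the border router (or the router in front of it) is a Linux box with nftables, the blocking itself is a one-liner. This is only a sketch with made-up interface names, and it assumes an existing inet filter table with a forward chain; I haven't tested it against Matter devices:
# drop anything forwarded from the Thread/IoT-facing interface toward the uplink
nft add rule inet filter forward iifname "iot0" oifname "wan0" drop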
I didn't actually have problems with proxmox, other than the potential compatibility issue with Frigate. I didn't test it, but I had read that getting iGPU passthrough for video acceleration working can be tricky. A couple of things did work better under proxmox: the ethernet adapter was more stable and the power button worked.
A few weeks ago I wrote about my experience migrating a HA installation from a Raspberry Pi to a NUC running proxmox. Since I can't help but to tinker, here's my experience transferring the installation to bare metal.
Reasons I had for making the switch:
- There are HAOS add-ons for pretty much every extra service I'm interested in running right now
- According to its documentation, Frigate works better when not run inside a VM (passing through the iGPU can be problematic)
- Proxmox nags me about a subscription every time I log in to the admin console
- Random crashes (that I misattributed to proxmox)
- Potentially lower idle power consumption (made no difference, it turned out)
I started the migration by making a full backup again. I installed Puppy Linux on a USB stick, booted the NUC with it, downloaded the HA image and wiped the boot drive:
dd if=hass.img of=/dev/nvme0n1 status=progress
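Before running a dd like that, it's worth double-checking the target device name, since drive naming under the live USB system may not be what you expect:
lsblk -d -o NAME,SIZE,MODEL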
(Fun sidenote: writing an image from RAM to an NVMe drive was so ridiculously fast, the USB stick felt like a floppy disk in comparison.) After rebooting, I had a fresh HA install once again. This time, I monitored the restoration progress by periodically checking the supervisor logs on the console (with the ha supervisor logs command). Running ha supervisor stats showed CPU usage at around 50% (one core at 100%). The restore took roughly 16 minutes to complete.
I had some connection trouble after restoring but at a quick glance, everything seemed to work after a reboot. After a closer look, I noticed most of my addons were missing. I ran a partial restore of everything except HA core, which appeared to fail due to not being able to fetch add-on images.
Before, I had proxmox seemingly crash on me a couple of times. Actually, it was losing its network connection and needed a reboot to recover. I had thought this might be a problem with proxmox, but it turns out it was even worse when running bare-metal HAOS! Every time the network cut out, I saw this message on the console:
rtl_rxtx_empty_cond == 0 (loop 42 delay 100)
There appears to be a bug in the RTL network chip or its driver. Intel doesn't list which chipset it uses in this NUC model, probably because they're ashamed of it. In proxmox the bug was triggered maybe once a week but in HAOS it was more like once every few minutes. Going back to proxmox wouldn't be an acceptable fix because I still couldn't trust the server to remain online.
I worked around the problem by running to a local computer shop and getting a USB ethernet adapter. Hopefully, a future kernel update will fix the issue and I no longer need to use it, but for now the USB adapter (with an AX88179 chip) has been working perfectly. After fixing the network problem, partial restore worked and all addons were reinstalled.
Finally, I wanted to add a second interface for my IoT VLAN. This was easy in proxmox, as I could simply add a second virtual adapter, but it can be done in plain HAOS just as easily. This feature doesn't seem to be mentioned in the documentation anywhere, but the ha command-line tool can configure VLANs for you:
ha network vlan enp0s20f0u1 200 --ipv4-method auto --ipv6-method auto
This adds a new virtual interface to the physical interface enp0s20f0u1 for VLAN tag 200. (This can also be done using NetworkManager.)
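For comparison, the rough NetworkManager equivalent would be something like this (the connection name is arbitrary; I used the ha command above instead):
nmcli connection add type vlan con-name vlan200 dev enp0s20f0u1 id 200 ipv4.method auto ipv6.method auto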
Having HA on two subnets simultaneously has worked well so far. Traffic to my IoT devices no longer needs to go through the router and, in the future, setting up Matter devices on the IoT subnet ought to be possible, as (to my understanding) they use link-local IPv6 addresses.
Lastly, I got a PoE camera and added Frigate. Configuring it was a bit of a chore and the documentation feels a bit fragmented, but I did get it working in a couple of hours. Some relevant notes:
- OpenVINO detector seems to work well enough on the NUC. I currently have just one camera and feel no need to get a Coral accelerator
- VAAPI acceleration for ffmpeg requires protected mode to be disabled ("full access" version of the add-on needed)
- I used go2rtc to restream the detect stream, since that stream is also good for live view. It can be viewed from Home Assistant's UI, even through nabu casa.
- Frigate-card supports casting locally! (I figured out how this works: the media_player.play_media action, with content id media-source://camera/CAMERA_ENTITY_ID and content type application/vnd.apple.mpegurl. See the example after this list.)
- Having Frigate continuously running doesn't have any measurable effect on the NUC's power consumption. Maybe there's something wrong with my power settings and it wasn't idling as much as it should have?
- Using Frigate's person detection as an occupancy sensor works really well. This might actually replace a PIR sensor once I move the camera to its final location.
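To make the casting bit concrete, here's roughly what the play_media call looks like when fired through HA's REST API; the media player entity, camera entity and long-lived access token are placeholders, and the same action can of course be triggered from an automation or script instead:
# placeholders: $HA_TOKEN, media_player.living_room_tv, camera.front_door
curl -X POST \
  -H "Authorization: Bearer $HA_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"entity_id": "media_player.living_room_tv", "media_content_id": "media-source://camera/camera.front_door", "media_content_type": "application/vnd.apple.mpegurl"}' \
  http://homeassistant.local:8123/api/services/media_player/play_media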
In the end, was it worth moving from proxmox to bare metal? Maybe. One less moving part to worry about, at least. It did not solve the random (network) crash issue, but I did figure out the root cause. There was no change in power consumption; the NUC still draws 10 watts (with or without Frigate running). If I come up with something that needs to run in a VM I might go back, but I'm also planning on building a NAS in the near future, which I could also use for running VMs and containers.
One problem still remains: the power button does nothing! I think HAOS is missing acpid or its configuration. This is not a showstopper, but it would be nice to be able to reboot the system gracefully when/if it loses networking.
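In the meantime, the graceful way to bounce the box when the network is gone seems to be plugging in a keyboard and using the CLI on the local console:
ha host reboot
# or, for a clean power-off before pulling the plug
ha host shutdown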
When Nokia started making Windows phones, I wondered how it would turn out, given that Microsoft's mobile partners had traditionally fared poorly. A photo of Elop and Ballmer shaking hands with huge grins on their faces has stuck in my mind. I thought that maybe I was just being too prejudiced, being the kind of Microsoft-hating Linux user that I am.
I still miss my old N900. Moving to Android felt like a huge downgrade. When the first Lumias came out, the N9 was, I believe, still selling better, despite Nokia seemingly doing everything it possibly could to not sell it.
Adding to this as I'm also interested. I'm currently looking at cameras recommended in the Frigate wiki, since any camera that works well in Frigate also ought to work well in HA. One interesting thing I've noted is that some of the Hikvision and Dahua models have onboard AI features for object recognition. Does anyone have experience with these? Can they report these events back to Home Assistant, and are they worth using?
One thing I'm curious about: did you measure the idle power consumption of your NUC, and does it really drop down to 6W? Because with a hypervisor installed, I would assume that it never really goes fully "idle" since resources are constantly bound.
I used a power metering plug to measure the consumption and it showed around 6W when no VMs were running. I think it's probably higher now with HA online, as my UPS is showing a 5W increase over when the Pi was plugged in. (The UPS always shows a higher number than the power meter though, so I'm not sure which one to trust.) If the new figures are correct, the NUC appears to be using 10 watts with HA on. I'll have to see if setting the CPU frequency governor to powersaving mode has any effect.
I considered bare metal HASSOS too and would have gone that route if HA were the only thing I was planning on running. Another option would have been to install a linux distro and run HA in docker, but having HA in its own separate VM means I don't need to worry about accidentally breaking it when I'm messing around with other services.
Now, having written this, I realize that there would have been some real advantages in running HA in docker on a bare metal OS. For one, it would have made running Frigate easier, as its documentation recommends against running it in a VM.
NUC is a brand of mini PCs from Intel (or now Asus, I suppose.) I haven't tried Lenovo mini PCs myself, but they fit the same niche.
I just finished migrating my Home Assistant installation from a Raspberry Pi to an Intel NUC and I thought I'd share my experience. All in all, it went well, but there were a couple of pain points I'd like to have known about in advance.
Here's what I did. First, I prepared the NUC for installation. Rather than going bare metal, I installed proxmox because I plan on running other stuff on it as well. Proxmox installation was very straightforward, but figuring out how to install HA on it wasn't, as I've never used proxmox before.
I first tried to use the HA image as a virtual installation medium, which did not work. I realized that, like with the RPi, it's not an installer but a ready to use disk image. I found a nice guide on how to install HA on proxmox with a handy helper script to set everything up for you.
Now I had a new HA instance running, ready for initial setup. Time for the switchover:
- I made a new full backup on the Raspberry Pi, then shut it down.
- I reassigned the Home Assistant IP address to the VM in my router's DHCP settings.
- I logged in to the new HA instance and uploaded the backup file using the restore from backup option on the setup screen.
This is where HA still has a pain point. There is no progress bar or anything to let you know the state of the restoration process. It took quite a while until the web UI came back up (and I'm not sure which log file I should have been monitoring in the console), and once it finally did, the add-ons were stuck in a weird state where some of them appeared to be running but were still shown as stopped. HA core was operational already, including all Wi-Fi-based integrations. Zigbee2mqtt wasn't up yet because I hadn't yet passed through the zigbee stick.
After I had grown tired of waiting, I rebooted the VM and now the add-ons started up properly. All the settings were migrated, including Mosquitto's state. Very nice!
The last things to fix were:
- Pass through the zigbee USB stick. I did this from the proxmox VM's hardware tab: Add USB device, chose the USB vendor/device ID option, and selected the one that said serial port. Zigbee2mqtt started working after doing this. (A CLI equivalent is sketched after this list.)
- Pass through bluetooth. The NUC's built-in bluetooth adapter was also visible as a USB device. There were only two devices in proxmox's USB device dropdown: the zigbee stick and an unnamed device. The unnamed one was the bluetooth adapter. In HA devices, I removed the old RPI bluetooth device and added the new one and immediately started receiving updates from my Ruuvi Tags.
- Deleted RPi power supply check device
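For anyone doing the USB passthrough over SSH instead of the web UI, the proxmox CLI equivalent is something like this; the VM ID and the vendor:product ID are placeholders (look the latter up with lsusb):
# attach the stick to VM 100 by its USB vendor:product ID
qm set 100 -usb0 host=1234:abcd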
All in all, a fairly smooth migration, with the only bump in the road being the lack of progress reporting when restoring from backup. Would recommend. The NUC (a NUC11 with a Celeron N4505 processor) plus memory and an NVMe drive was only about twice as expensive as an RPi4 with an SD card, but is a lot more powerful with a similar idle power consumption of around 6W.
there are new dimmable LEDs that automatically change to whiter when bright and warmer when dim
I love the idea of these bulbs. I'm using the adaptive lighting component so my bulbs' temperature and brightness are always correlated anyway. For light fixtures with more than one bulb, a single smart dimmer could replace a whole zigbee light group.
However, are there any bulbs on the market yet with a good temperature range? So far, the only ones I've been able to find are Philips Warm Glow lamps that only go from 2200K to 2700K, which is way too warm for daytime use.