  • If the package is popular then it is very likely already packaged by your distro. You should always go there first if you care that much. If the package is not popular enough to be packaged by a distro, then how does another centralized approach help? Either it is fully curated like a distro package list, and so likely also won't contain some random small project, or it is open for anyone to upload scripts to, and so becomes vulnerable to malicious scripts. Worse yet, people would be able to upload scripts for projects they don't control, as the developers of said projects likely won't.

    Basically, it is not really any safer than separate dev-owned websites if open, nor does it offer better package coverage than distro repos if curated.

    Maybe the server was hacked and the script was changed?

    Same thing can happen to any system though. What happens if your servers for this service are hacked? Being a central point makes you a bigger target, and with more people able to change things (assuming you are not going to be the only one curating packages) you have a bigger attack surface. And once hacked, they can compromise far more downloads than a single package.

    Your solution does not improve security - it just shuffles it around a bit. Sounds nice on paper, but when you look at it in more detail there are a lot more things you need to consider to create a system that is actually more secure than what we currently have.

  • Then how would you trust these scripts in a central repo? It seems to add no real value or safety over dev-managed scripts if you are not willing to go down the path of becoming yet another distro packaging system.

  • There is also no way to verify that the software being installed is not going to do anything bad. If you trust the software, then why not trust the installation scripts by the same authors? What would a third-party location bring to improve security?

    And generally, what you are describing is a software repo - you know, the one that comes with your distro.

  • Cannot remember if the study was stupid or if people's interpretations of it were. But when covered up elsewhere, you will lose a lot of heat through your head. More so than if just an arm or a leg were exposed, as with your arms and legs your body will slow down blood flow through them to try and conserve your core temperature - it cannot do that with your head.

  • once a developer enacts an end of life plan, their legal culpability is removed

    What legal culpability? If you are not hosting anything then you won't be liable for anything. It is not like you become liable when you create a painting and someone else defaces it... That would be insane.

  • Random programming certificates are generally worthless. The course to get them might teach you a lot and be worthwhile, but the certificate at the end is worthless. If it is free then it does not matter too much either way - it might be a good way to test yourself. But I would not rely on it to get you a job at all. For that you need other ways to prove you can do the job - typically the ability to talk about the subject and having written some real-world-like application. Which a course might help you do too.

  • The surge in cancer since the 1900s is also explainable by the surge in our ability to detect cancer and our overall understanding of it.

    One big reason papers always find these links is just that they are finding correlations, which are always there, even for unrelated things. When you are looking at loads of factors in an observational study you are almost bound to find some accidental correlation. It is very hard to tell if that is just random or if there is a true cause behind it.

    There are all sorts of spurious correlations if you look hard enough.
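
    To make the multiple-comparisons point concrete, here is a quick simulation sketch - plain Python, no real data, everything in it is made up for illustration (statistics.correlation needs Python 3.10+):

    ```python
    # Generate completely independent random variables, then count how many
    # pairs look "linked" anyway - this is what testing loads of factors in
    # an observational study does for free.
    import random
    import statistics

    random.seed(42)
    n_samples = 200   # people in our fake study
    n_factors = 40    # independent "lifestyle factors", no real relationships

    data = [[random.gauss(0, 1) for _ in range(n_samples)] for _ in range(n_factors)]

    # With 40 factors there are 40*39/2 = 780 pairs to test. A cutoff of
    # |r| > 0.14 is roughly p < 0.05 for n = 200, so ~5% of pairs should
    # clear it by pure chance.
    flagged, pairs = 0, 0
    for i in range(n_factors):
        for j in range(i + 1, n_factors):
            pairs += 1
            if abs(statistics.correlation(data[i], data[j])) > 0.14:
                flagged += 1

    print(f"{flagged} of {pairs} pairs look 'linked' despite zero real effect")
    ```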

  • The only things not linked to cancer are the things that have not yet been studied. Seems like everything at some point has been linked to cancer.

    The data showed that people who ate as little as one hot dog a day when it comes to processed meats had an 11% greater risk of type 2 diabetes and a 7% increased risk of colorectal cancer than those who didn’t eat any. And drinking the equivalent of about a 12-ounce soda per day was associated with an 8% increase in type 2 diabetes risk and a 2% increased risk of ischemic heart disease.

    Sounds like a correlation... someone who eats one hot dog and drinks one soda per day is probably doing a lot of other unhealthy things too.

    It’s also important to note that the studies included in the analysis were observational, meaning that the data can only show an association between eating habits and disease –– not prove that what people ate caused the disease.

    Yup, that is what it is. A correlation. So overall not really worth the effort involved IMO. Cutting processed meat out entirely is likely not a big win on its own - what matters is your overall diet and amount of exercise/lifestyle. I would highly suspect that even if you did eat one hot dog per day, but had an otherwise perfect diet for the rest of the day, did plenty of exercise, got good sleep and all the other things we know are good for you, then these negative effects would likely become negligible. But who the hell is going to do that? That's the problem with these observational studies - you cannot really tease the effect of one thing out of a whole bad lifestyle.

    I hate headlines like this as they make it sound like you can just do this one simple thing and get massive beneficial effects. You cannot. You need to change a whole bunch of things to see the kinds of risk reduction they always talk about. Instead they always make it sound like if you have even one hot dog YOU ARE GOING TO DIE.
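
    For a sense of scale, here is some back-of-the-envelope arithmetic. The 7% figure is from the quote above; the ~4% baseline lifetime risk of colorectal cancer is an assumption on my part, so treat the numbers as illustrative only:

    ```python
    # Relative risk increases sound scary; converting to absolute risk gives
    # perspective. The baseline figure is assumed for illustration - check
    # real epidemiology data before quoting it.
    baseline = 0.04              # assumed lifetime risk without the daily hot dog
    relative_increase = 0.07     # "7% increased risk" from the article

    with_hotdogs = baseline * (1 + relative_increase)
    print(f"without: {baseline:.1%}, with: {with_hotdogs:.2%}")
    # -> without: 4.0%, with: 4.28%
    # An absolute change of roughly 0.3 percentage points.
    ```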

  • YAML is not a good format for this. But any line-based or streamable format would be good enough for log data like this. Really easy to parse with any language, or even directly with shell scripts. No need to even know SQL - any text processing would work fine.

  • CSV would be fine. The big problem with the data as presented is that it is a YAML list, so the whole file needs to be read into memory and decoded before you can get any values out of it. Any line-based encoding would be vastly better and would allow line-based processing: CSV, JSON objects encoded one per line, or some other streaming binary format. It does not make much difference overall as long as it is line-based, or at least streamable.
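
    As a sketch of what line-based processing buys you (JSON Lines here, but CSV works the same way; the file name and fields are made up for illustration):

    ```python
    # Stream a JSON Lines file one record at a time - constant memory no
    # matter how big the file grows. A single YAML list, by contrast, has
    # to be fully parsed before you can touch the first element.
    import json

    count = 0
    total_ms = 0
    with open("tracking.jsonl") as f:    # hypothetical export, one object per line
        for line in f:
            event = json.loads(line)
            count += 1
            total_ms += event.get("duration_ms", 0)  # made-up field

    print(f"{count} events, {total_ms} ms tracked in total")
    ```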

  • Never said it had to be a text file. There are many binary serialization formats that could be used. But in a lot of situations the overhead you save is not worth the debugging effort of working with binary data. For something like this, which is likely not going to be more than a GB or so - probably much less - it really does not matter that much whether you use a binary or a text format. This is an export format that will likely just have one batch-processing layer on top. This type of thing is generally easiest for most people to work with in a plain text format. And if you really need efficient querying of the data then it is trivial and quick to load it into a DB of your choice, rather than being stuck with SQLite.
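
    A sketch of that "load it into a DB of your choice" step, reusing the hypothetical JSON Lines export from above (the table and column names are mine, purely illustrative):

    ```python
    # One-off load of a line-based export into SQLite so ad-hoc SQL
    # querying becomes available. File, fields and table are made up.
    import json
    import sqlite3

    conn = sqlite3.connect("tracking.db")
    conn.execute("CREATE TABLE IF NOT EXISTS events (ts TEXT, duration_ms INTEGER)")

    with open("tracking.jsonl") as f:
        rows = ((e.get("ts"), e.get("duration_ms")) for e in map(json.loads, f))
        conn.executemany("INSERT INTO events VALUES (?, ?)", rows)
    conn.commit()

    # Now the data is queryable with plain SQL:
    print(conn.execute("SELECT COUNT(*), SUM(duration_ms) FROM events").fetchone())
    conn.close()
    ```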

  • export tracking data to analyze later on

    That is log data, or essentially equivalent to it. Log data does not have to be human readable - it is just a series of events that happen over time. Most log data, even what you would think of as traditional messages from a program, is not parsed by humans manually but analyzed by code later on. It is really not that hard or slow to process log data line by line. I have done this with TBs of data before, which does require a lot more effort - but a simple file like this would take seconds to process at most, even if you were not very efficient about it. I also never said it needed to be stored as text, just that a simple file is enough - no need for a full database. That file could be binary if you really need it to be, but text serialization would also be good enough. Most of the web world is processed via text serialization.

    The biggest problem with YAML as used in the OP is the need to decode the whole file at once, since it is a single list. Line-by-line processing would be a lot easier to work with. But even then, if it is only a few hundred MBs, loading it all into memory once and analyzing it there would not take long at all - it just does not scale very well.
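
    If you are stuck with a single YAML list like that, a one-time conversion to a line-based file makes every later pass streamable. A sketch using PyYAML (file names are placeholders; note that yaml.safe_load still has to hold the whole list in memory this one time):

    ```python
    # Convert a single YAML list into JSON Lines. This pays the
    # whole-file-in-memory cost exactly once; afterwards every consumer
    # can stream the data line by line.
    import json
    import yaml   # PyYAML, third-party

    with open("tracking.yaml") as src:
        events = yaml.safe_load(src)   # the unavoidable full decode

    with open("tracking.jsonl", "w") as dst:
        for event in events:
            dst.write(json.dumps(event) + "\n")
    ```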

  • What is wrong with a file for this? Sounds more like a local log or debug output that a single thread in a single process would be creating. A file is fine for high-volume append-only data like this - see the sketch below. The only big issue is the format of that data.

    What benefit would a database bring here?
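
    For the writing side, an append-only line-based file is about as simple as it gets. A minimal sketch (the path and field names are made up):

    ```python
    # Append-only, line-based event logging from a single process. One
    # JSON object per line means a crash can corrupt at most the final
    # line, and readers can stream everything before it.
    import json
    import time

    def log_event(path, **fields):
        record = {"ts": time.time(), **fields}
        with open(path, "a") as f:   # reopening per event keeps the sketch simple
            f.write(json.dumps(record) + "\n")

    log_event("tracking.jsonl", action="window_focus", duration_ms=1200)
    ```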

  • There is in this case, which is why Linus did accept the patch in the end. Previous cases less so, though, which is why Linus is so pissed at this one.

    The reason for this new feature is to help fix data loss on users' systems - which is a fine line between a bug and a new feature, really. There is precedent for this type of thing in RC releases from other filesystems as well. So the issue in this instance is a lot less black and white.

    That doesn't excuse previous behaviour though.

  • The attack is known as the evil maid attack. It requires repeated access to the device. Basically, if you can compromise the bootloader you can inject a keylogger to sniff out the encryption key the next time someone unlocks the device. This is what secure boot is meant to help protect against (though I believe that has been compromised as well).

    But realistically, very few people need to worry about that type of attack. Encryption is good enough for most people. And if you don't have your system encrypted then it does not matter what bootloader you use, as anyone can boot any live USB to read your data.

  • I don't agree that Go is simpler to read. It is simpler to learn the syntax, but the syntax is only part of what makes a language. Having learnt both, and having spent more time actually writing Go, I still prefer writing Rust and find it far easier to work with than Go. Go has too many hidden gotchas - typed-nil interfaces, for example - that you need to trip up on to learn, and then remember forever or else trip up on them again.

  • There is not really one best distro out there - or else there would only be one distro. But for someone new, you will find basically any mainstream/popular distro good enough for your use case. The best one for you will come down to personal preference and will likely - at least at the start - be centered on which desktop environment you like the most. KDE will probably feel the most like Windows, though GNOME, I think, tends to be the default on most distros. You will find popular distros have multiple flavors with various desktop environments as well. Your best bet is to download a few, put them on a USB, and try them out before installing. That will give you a better idea of what you want. Or just pick one and go for it if you don't care that much - it will probably be good enough.