
Posts 38 · Comments 882 · Joined 2 yr. ago

  • here’s some interesting context on the class action:

    They wanted an expert who would state that 3D models aren't worth anything because they are so easy to make. Evidently Shmeta and an Ivy League school we will call "Schmarvard" had scraped data illegally from a certain company's online library and used it to train their AI...

    this fucking bizarro “your work is worthless so no we won’t stop using it” routine is something I keep seeing from both the companies involved in generative AI and their defenders. earlier on it was the claim that human creativity didn’t exist or was exhausted sometime in the glorious past, which got Altman & Co called fascists and made it hard for them to pretend they don’t hate artists. now the idea is that somehow the existence of easy creative work means that creative work in general (whether easy or hard) has no value and can be freely stolen (by corporations only, it’s still a crime when we do it).

    not that we need it around here, but consider this a reminder to never use generative AI anywhere in your creative workflow. not only is it trained using stolen work, but making a generative AI element part of your work proves to these companies that your work was created “easily” (in spite of all proof to the contrary) and itself deserves to be stolen.

  • definitely! that sounds like a great first Rust project.

  • Like many here on awful.systems I have a pretty thick skin, but reading the above put me in a really weird mood all day.

    same here. the thing is, I think a lot of us are on awful.systems because we’ve seen far too much of how fascism operates and spreads online. this is an antifascist place; it’s so core to the mission that we don’t publish it as a policy (because a policy can be argued against and twisted, and the fash kids love doing that), we just demonstrate it in a way that can’t be ignored. so seeing the first or second (I don’t keep track of these things) most popular social media platform publish a policy whose only purpose is to be used as a weapon against marginalized people, seeing it written in a matter-of-fact “this is just how it is” way, and seeing essentially nobody outside of the fediverse push back on it in any real way — that is shocking.

  • define a meta-format for specifying the variables, default values, allowed values, etc., for an arbitrary[0] program’s config file, and create a program that reads a meta-format file and presents a GUI for editing the config.

    I’d kinda love this even if I’m editing config files in a text editor. emacs could use this with a major-mode or LSP to provide suggestions, validity checking, various rendered versions of the config, and guarantee interoperability with graphical tools (so that changes you make in an editor don’t get trampled by the UI, and vice versa)
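
    to make the idea concrete, here’s a minimal sketch of the validation half in Rust. serde and toml are real crates; the meta-format layout and every field name below are invented for illustration:

      // meta-format: describes one config variable, plus a validator that
      // checks a parsed config against the spec. all names are hypothetical.
      use serde::Deserialize;
      use std::collections::HashMap;

      #[derive(Debug, Deserialize)]
      struct VarSpec {
          description: String,       // shown by the GUI next to the field
          default: toml::Value,      // used when the config omits the variable
          #[serde(default)]
          allowed: Vec<toml::Value>, // if non-empty, value must be one of these
      }

      #[derive(Debug, Deserialize)]
      struct MetaFormat {
          vars: HashMap<String, VarSpec>, // variable name -> its spec
      }

      fn validate(meta: &MetaFormat, config: &toml::Table) -> Vec<String> {
          let mut errors = Vec::new();
          for (name, spec) in &meta.vars {
              let value = config.get(name).unwrap_or(&spec.default);
              if !spec.allowed.is_empty() && !spec.allowed.contains(value) {
                  errors.push(format!("{name}: {value} is not an allowed value"));
              }
          }
          errors
      }

      fn main() {
          let meta: MetaFormat = toml::from_str(r#"
              [vars.log_level]
              description = "how chatty the log output is"
              default = "info"
              allowed = ["error", "warn", "info", "debug"]
          "#).expect("meta-format should parse");

          let config: toml::Table = toml::from_str(r#"log_level = "trace""#).unwrap();
          for err in validate(&meta, &config) {
              eprintln!("config error: {err}");
          }
      }

    the same VarSpec data could drive the GUI’s widgets, an editor’s completion, and the validity checking, which is what would keep the two from trampling each other.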

    • MS-DOS and Windows, of course…
    • but, and this will get some boos, Unix as a workstation OS compared with every other non-Windows workstation OS
  • a few people have been posting along the lines of “we all knew Facebook was evil, why are you surprised”, which seems to miss the point — this is happening so fast and so blatantly that we’re almost definitely seeing the early stages of a playbook being executed. and even if not: Facebook was already the fash pipeline targeting mom, dad, and grandma. should I feel good that the pipeline is getting much more efficient and pervasive? none of them are ever gonna be on something like the fediverse.

  • You are making assumptions about my stance on AI. I was making a general statement about tools.

    since apparently you decided to post about fucking nothing, you can take your pointless horseshit elsewhere

  • Obviously, if you are dealing with a novel problem, then the LLM can’t produce a meaningful answer.

    it doesn’t produce any meaningful answers for non-novel problems either

  • yep, original is still visible on mastodon

  • guess again

    what the locals are probably taking issue with is:

    If you want a more precise model, you need to make it larger.

    this shit doesn’t get more precise for its advertised purpose when you scale it up. LLMs are garbage technology that plateaued a long time ago and are extremely ill-suited for anything but generating spam; any claims of increased precision (like those that openai makes every time they need more money or attention) are marketing that falls apart the moment you dig deeper — unless you’re the kind of promptfondler who needs LLMs to be good and workable just because it’s technology and because you’re all-in on the grift

  • the worst Nix project has finally been announced: behold crypto NixOS

    I’m starting a paradigm shifting, open and new human / computer interface system that is global, multi device and privacy focused.

    […]

    DISCLAIMER: You are not my exit liquidity, I have the best performing long term spot crypto portfolio in the world - I’m a early adopter with 100% hit rate on geniuses. So, I don’t have to work, I’m not building this to become rich. I want to build something paradigm changing - truly mind-blowing, because now we have the tech and I’m annoyed how computer work. It is a lot of work, but it will reward us all.

    my “this isn’t a grift and I’m not a grifter” disclaimer is prompting a lot of questions already answered by the disclaimer. but speaking of prompting, what goes with crypto?

    ChatGPT-1o thinks, after some reinforced asking, that the MC of such a coin can reach 300-1000M; I think it could easily go higher - it solves so fundamental problems in a much more elegant way. In my opinion, it will be the same step as the command line to the windowed systems was. Or dump phone to smart phone. It will just span devices and span users while keeping the data under control and of companies.

    of course. after some reinforced asking, gpt told me you’re all haters if you don’t think I’m as important as Xerox PARC!

    there’s lots more in the OP to sneer at, but here’s the worst part of the thread:

    Mod note: I’m glad to see doubt and scepticism about crypto-based claims. However, that point has now been made; please avoid any further posts in that vein to avoid a pile-on dunkfest, and leave the thread for any potential on-topic discussion.

    thanks for nothing as usual NixOS discourse!

    e: via mastodon, archive

  • That’s some wildly disingenuous goal post moving

    check their recent post history. it’s all old school, Ayn Rand, bro you don’t even watch Bullshit?, internet libertarian takes and it’s fucking delicious, like eating a Big Mac straight from the time capsule where someone buried it

  • oh shit, it’s the only capitalist on lemmy and they’re in this thread! I’ve been waiting so long to tell you this:

    go fuck yourself

    e: holy fuck that post history, what did the objectivists do to you?

  • Can someone explain why I am being downvoted and attacked in this thread? I swear I am not sealioning. Genuinely confused.

    my god! let me fix that

  • if only I splurged on awfulsystems dotcom, I could’ve been one of the big boys

  • The dotcom bubble didn’t rid us of dotcoms.

    wait… is your definition of dotcom any corporation that owns a .com TLD domain? that’s so fucking precious, I love it

  • both those are related to information theory, but there are other things I legally can’t mention. shrug.

    hahahaha fuck off with this. no, the horseshit you’re fetishizing doesn’t fix LLMs. here’s what quantization gets you:

    • the LLM runs on shittier hardware
    • the LLM works worse too
    • that last one’s kinda bad when the technology already works like shit
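
    to put a number on “works worse”, here’s a toy sketch in Rust (values invented, and this is plain uniform rounding, not any production quantization scheme) of what squeezing weights into 3 bits does to them:

      // snap each weight to the nearest of 8 uniform levels (3 bits) in [-1, 1]
      // and print how far it moved. toy numbers, nothing real.
      fn main() {
          let weights = [0.137_f32, -0.562, 0.901, -0.044];
          let step = 2.0_f32 / 7.0; // 8 levels span [-1, 1] in 7 intervals
          for w in weights {
              let q = ((w + 1.0) / step).round() * step - 1.0;
              println!("{w:+.3} -> {q:+.3} (error {:+.4})", q - w);
          }
      }

    every weight can move by up to half a step (about 0.14 here), and that rounding error is the whole trade: cheaper hardware in exchange for a less faithful model.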

    anyway speaking of basic information theory:

    but the research showing that 3 bits is as good as 64 is intuitive once you tie the original inspiration for some of the AI designs.

    lol