Seriously, what's with all the Mozilla hate on Lemmy? People bitch about almost everything they do. Sometimes it feels like, because it's non-profit/open-source, people have this idealized vision of a monastery full of impoverished, but zealous, single-minded monks working feverishly and never deviating from a very tiny mission.
Cards on the table, I remain an AI skeptic, but I also recognize that it's not going anywhere anytime soon. I vastly prefer to see folks like Mozilla branching out into the space a little than to have them ignore it entirely and cede the space to corporate interests/advertisers.
Can you show some examples of where people complain about Mozilla taking Google money?
Because when I complain about Mozilla, it's because they fired employees while bloating their CEO's salary, and because Firefox languishes while they throw in privacy-invasive junk that nobody asked for.
That seems more aligned with their mission of fighting misinformation on the web. It looks like Fakespot was an acquisition, so hopefully efforts like the ones mentioned in this post help better align it with their other goals.
That observation is... very tangential to my comment. I'm not sure anyone asked Mozilla Corp to start violating people's privacy and purchasing data sets in order to allegedly fight misinformation (while showing ads in the same place, of course)...
What I'm saying is that Mozilla, from my understanding, didn't set out to do that, but instead acquired a business that was already doing it, in order to use its services to fight misinformation. We should pressure them to reform that new part of the business to better align with the rest of Mozilla's goals.
But it is not a feature I want. Not now, not ever. An inbuilt bullshit generator, now with less training and more bullshit, is not something I ever asked for.
Training one of these AIs requires huge datacenters, insanely large datasets, and millions of dollars in resources. And I'm supposed to believe one will be effectively trained by the pittance of data generated by browsing?
Fine-tuning is more feasible on end-user hardware. There are also projects like Hivemind and Petals working on distributed training and inference systems to deal with the concentration effects you described for base models. A rough sketch of what local fine-tuning can look like is below.
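For what it's worth, here's roughly what that looks like in practice with LoRA adapters via Hugging Face's peft library, which only trains a small fraction of the weights. This is just an illustrative sketch: the base model, the local corpus path, and the hyperparameters are placeholders I picked, not anything Mozilla or these projects actually ship.

```python
# Rough sketch: LoRA fine-tuning a small causal LM on consumer hardware.
# Assumes the transformers, peft, and datasets libraries; model name,
# data file, and hyperparameters below are placeholders for illustration.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base_model = "EleutherAI/pythia-160m"  # small enough for an end-user machine
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model)

# LoRA trains a few million adapter weights instead of the whole model,
# which is what makes this feasible without a datacenter.
lora = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05, task_type="CAUSAL_LM")
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # usually well under 1% of total params

# Tiny local text corpus (hypothetical path); in the browsing scenario this
# would be whatever small amount of text the user actually has on hand.
data = load_dataset("text", data_files={"train": "my_local_corpus.txt"})["train"]
data = data.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=256),
    batched=True,
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="lora-out",
        per_device_train_batch_size=4,
        num_train_epochs=1,
        learning_rate=2e-4,
    ),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

So the point isn't that your browser trains a frontier model from scratch, it's that adapting an already-trained base model to local data is a much smaller job.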