NASA's James Webb Space Telescope has found a carbon molecule in space. Let's take a closer look at the research findings.
> For the first time, scientists have detected the carbon molecule methyl cation (CH3+) in space. This molecule is significant because it promotes the formation of more complex carbon-based compounds.
Originally posted on https://emacs.ch/@yantar92/110571114222626270
> Please help collect statistics to optimize Emacs GC defaults
>
> Many of us know that the Emacs defaults for garbage collection are rather ancient and often cause significant slowdowns. However, it is hard to know which alternative defaults would be better.
>
> Emacs devs need help from users to obtain real-world data about Emacs garbage collection. See the discussion in https://yhetil.org/emacs-devel/87v8j6t3i9.fsf@localhost/
>
> Please install https://elpa.gnu.org/packages/emacs-gc-stats.html and send the generated statistics via email to emacs-gc-stats@gnu.org after several weeks.
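For reference, enabling the collector is a two-liner in your init file. This is a minimal sketch based on the package's setup instructions as published on GNU ELPA:

```elisp
;; Minimal sketch of enabling emacs-gc-stats, per its GNU ELPA description.
;; Install it first, e.g. M-x package-install RET emacs-gc-stats RET.
(require 'emacs-gc-stats)
;; Start recording GC statistics for this and future sessions.
(emacs-gc-stats-mode +1)
```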
> Early galaxies' stars allowed light to travel freely by heating and ionizing intergalactic gas, clearing vast regions around them.
>
> Cave divers equipped with brilliant headlamps often explore cavities in rock less than a mile beneath our feet. It’s easy to be wholly unaware of these cave systems – even if you sit in a meadow above them – because the rock between you and the spelunkers prevents light from their headlamps from disturbing the idyllic afternoon.
>
> Apply this vision to the conditions in the early universe, but switch from a focus on rock to gas. Only a few hundred million years after the big bang, the cosmos was brimming with opaque hydrogen gas that trapped light at some wavelengths from stars and galaxies. Over the first billion years, the gas became fully transparent – allowing the light to travel freely. Researchers have long sought definitive evidence to explain this flip.
>
> New data from the James Webb Space Telescope recently pinpointed the answer using a set of galaxies that existed when the universe was only 900 million years old. Stars in these galaxies emitted enough light to ionize and heat the gas around them, forming huge, transparent “bubbles.” Eventually, those bubbles met and merged, leading to today’s clear and expansive views.
More: https://eiger-jwst.github.io/index.html
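For a sense of the physics behind "emitted enough light to ionize": a photon must carry at least the binding energy of neutral hydrogen, 13.6 eV, which corresponds to extreme-ultraviolet light at the Lyman limit:

```latex
E_\gamma \;\ge\; E_{\mathrm{ion}} = 13.6~\mathrm{eV}
\quad\Longrightarrow\quad
\lambda \;\le\; \frac{hc}{E_{\mathrm{ion}}}
\approx \frac{1240~\mathrm{eV\,nm}}{13.6~\mathrm{eV}}
\approx 91.2~\mathrm{nm}
```

Only stars radiating strongly below roughly 91 nm can carve out the ionized, transparent bubbles the study describes.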
When I was packaging Flatpaks, the greatest downside was the lack of a built-in package manager.

There is a repository with shared dependencies, but it covers very few of them, so you have to package all the dependencies yourself (see the manifest sketch below)... So I personally am not interested in packaging for Flatpak except on very rare occasions. Nix and Guix are definitely better solutions (except for the isolation aspect, which they don't provide as a feature; you have to set that up manually), and they can be used on many distros; Nix even on macOS!
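To illustrate the pain point, here is a hypothetical flatpak-builder manifest; the app ID, library name, URL, and checksum are placeholders. Every dependency the shared runtime doesn't ship becomes a module you must declare and build by hand:

```yaml
# org.example.App.yaml – hypothetical manifest; IDs, URLs and hashes are placeholders.
app-id: org.example.App
runtime: org.freedesktop.Platform
runtime-version: '23.08'
sdk: org.freedesktop.Sdk
command: example-app
modules:
  # The runtime doesn't ship this library, so it has to be bundled by hand:
  - name: libfoo
    buildsystem: meson
    sources:
      - type: archive
        url: https://example.org/libfoo-1.0.tar.xz
        sha256: '<checksum here>'
  # ...and so on for every other missing dependency, then the app itself:
  - name: example-app
    buildsystem: meson
    sources:
      - type: dir
        path: .
```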
Some of them will detect whether you are running in a virtual machine. For example, Safe Exam Browser (http://safeexambrowser.org/) by ETH Zurich.
Ironically enough, it is free software https://github.com/SafeExamBrowser
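I don't know SEB's exact implementation, but a common heuristic such tools use on Linux is to look for hypervisor vendor strings in the DMI tables exposed under /sys. A minimal sketch:

```python
# Hedged sketch of a generic VM-detection heuristic (not SEB's actual code):
# scan DMI vendor/product strings for well-known hypervisor markers.
from pathlib import Path

VM_MARKERS = ("vmware", "virtualbox", "qemu", "kvm", "xen", "hyper-v")

def looks_virtualized() -> bool:
    for field in ("sys_vendor", "product_name"):
        path = Path("/sys/class/dmi/id") / field
        try:
            value = path.read_text().strip().lower()
        except OSError:
            continue  # field missing or unreadable; skip it
        if any(marker in value for marker in VM_MARKERS):
            return True
    return False

if __name__ == "__main__":
    print("Likely VM" if looks_virtualized() else "Likely bare metal")
```

Since such checks only match known vendor strings, they are easy to evade by spoofing the DMI data, which is why exam-lockdown tools layer several heuristics.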
The small, distant galaxy JD1 is typical of the kind that burned off hydrogen fog left over from the Big Bang.
The nature of an ultra-faint galaxy in the cosmic Dark Ages seen with JWST https://arxiv.org/abs/2210.15639
cross-posted from !softwareengineering@group.lt: https://group.lt/post/46385
> Adopting DevOps practices is nowadays a recurring task in the industry. DevOps is a set of practices intended to reduce the friction between software development (Dev) and IT operations (Ops), resulting in higher quality software and a shorter development lifecycle. Even though many resources talk about DevOps practices, they are often inconsistent with each other on the best DevOps practices. Furthermore, they lack the detail and structure that beginners to the DevOps field need to understand them quickly.
>
> In order to tackle this issue, this paper proposes four foundational DevOps patterns: Version Control Everything, Continuous Integration, Deployment Automation, and Monitoring. The patterns are both detailed and structured enough to be easily reused by practitioners and flexible enough to accommodate different needs and quirks that might arise from their actual usage context. Furthermore, the patterns are tuned to the DevOps principle of Continuous Improvement by containing metrics so that practitioners can improve their pattern implementations.
---
In addition to the four above, the paper actually identifies and includes two more pattern candidates (so six in total), which it does not describe in as much detail:
- Cloud Infrastructure, which includes cloud computing, scaling, infrastructure as code, ...
- Pipeline, "important for implementing Deployment Automation and Continuous Integration, and segregating it from the others allows us to make the solutions of these patterns easier to use, namely in contexts where a pipeline does not need to be present."
*Figure: Overview of the pattern candidates and their relation*
The paper is also interesting for the structure it uses to describe each pattern:
> - Name: An evocative name for the pattern.
> - Context: Contains the context for the pattern providing a background for the problem.
> - Problem: A question representing the problem that the pattern intends to solve.
> - Forces: A list of forces that the solution must balance out.
> - Solution: A detailed description of the solution for our pattern’s problem.
> - Consequences: The implications, advantages and trade-offs caused by using the pattern.
> - Related Patterns: Patterns which are connected somehow to the one being described.
> - Metrics: A set of metrics to measure the effectiveness of the pattern’s solution implementation.
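To make several of the patterns concrete, here is a hypothetical GitHub Actions workflow (the script names are placeholders, not from the paper): the repository triggers the run (Version Control Everything), a build-and-test stage implements Continuous Integration, a dependent deploy stage implements Deployment Automation, and the ordered stages themselves form the Pipeline:

```yaml
# Hypothetical CI/CD workflow illustrating four of the pattern candidates.
name: ci
on:
  push:
    branches: [main]        # Version Control Everything: the repo drives the pipeline
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build and test          # Continuous Integration
        run: ./build.sh && ./test.sh  # placeholder scripts
  deploy:
    needs: build                      # Pipeline: stages run in a defined order
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Deploy                  # Deployment Automation
        run: ./deploy.sh              # placeholder script
```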
If you want to know more about our current technical scaling issues: Rest...
cross-posted from c/softwareengineering@group.lt: https://group.lt/post/44632
> This kind of scaling issue is new to Codeberg (a nonprofit free software project), but not to the world. All projects on earth likely went through this at a certain point or will experience it in the future.
>
> When people like me talk about scaling... It's about increasing computing power, distributed storage, replicated databases and so on. There are all kinds of technology available to solve scaling issues. So why, damn, is Codeberg still having performance issues from time to time?
>
> ...we face the "worst" kind of scaling issue in my perception. That is, one you don't see coming (e.g. from the software getting slower day by day, or from watching the storage pool fill up). Instead, it appears out of the blue.
>
> The hardest scaling issue is: scaling human power.
>
> Configuration, Investigation, Maintenance, User Support, Communication – all require some effort, and it's not easy to automate. In many cases, automation would consume even more human resources to set up than we have.
>
> There are no paid night shifts, not even payment at all. Still, people have become used to the always-available guarantees, and demand the same from us: Occasional slowness in the evening of the CET timezone? Unbearable!
>
> I do understand the demand. We definitely aim for a better service than we sometimes provide. However, sometimes, the frustration of angry social-media-guys carries me away...
>
> ...two primary blockers prevent scaling human resources. The first one is: trust. Because we can't yet afford hiring employees that work on tasks for a defined amount of time, work naturally has to be distributed over many volunteers with limited time commitment... The second problem is in part technical. Unlike major players, which have nearly unlimited resources available to meet high demand, scaling Codeberg's systems...
TL;DR: Codeberg faces sustainability issues when scaling because, as a nonprofit, it has very limited resources, mainly human resources, in the face of high demand. Unpaid volunteers do all the work, so it needs more people volunteering, and more money.
Nix is a tool that takes a unique approach to package management and system configuration. Learn how to make reproducible, declarative and reliable systems.
cross-posted from: https://group.lt/post/30446
> 1652 contributors, who authored 30371 commits since the previous release.
>
> NixOS is already known as the most up to date distribution while also being the distribution with the most packages.
>
> This release saw 16678 new packages and 14680 updated packages in nixpkgs. We also removed 2812 packages in an effort to keep the package set maintainable and secure. In addition to packages, the NixOS distribution also features modules and tests that make it what it is. This release brought 91 new modules and removed 20. In that process we added 1322 options and removed 487.
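As a small taste of the declarative approach, here is a minimal `shell.nix` sketch (the package choices are illustrative, not from the release notes). Anyone who runs `nix-shell` in a directory containing it gets the same reproducible environment:

```nix
# shell.nix – minimal sketch of a reproducible, declarative dev environment.
{ pkgs ? import <nixpkgs> {} }:

pkgs.mkShell {
  # Everything the environment needs is declared here, not installed ad hoc.
  packages = [
    pkgs.git
    pkgs.python3
  ];
}
```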