I wonder when Firefox is going to add a configuration menu for keybinds.
In applied CS, it's common to talk about pure and impure functions instead of Turing machines.
Pure functions are, broadly speaking, equivalent to Turing machines. A pure function may only depend on its inputs (like a Turing machine) and has no outputs besides the return value (like the end state of a Turing machine's tape).
Impure functions cover algorithms that aren't Turing machines. For example, you might have a random number generator rand: 1 → N
that outputs different natural numbers on every invocation rand()
and is hence impure. Output functions are also impure, e.g., a function write: N → 1
that writes a number into a file can't be a Turing machine, because Turing machines have no concept of files.
Computer programs that consist entirely of pure functions aren't very useful, because they can't interact with the user or reality. The typical approach to solving this is to put the core program logic inside pure functions that interact with impure parts through a limited interface. That way, you can apply CS concepts to the important parts of the program and neatly separate out the impure parts.
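A minimal sketch of that split (Python, with purely illustrative names): the pure core depends only on its arguments and can be tested in isolation, while the file I/O is confined to a thin impure shell.

```python
# Pure core: output depends only on the input, no side effects.
def parse_total(lines):
    return sum(int(line) for line in lines)

# Impure shell: all interaction with reality (files, stdout) lives here.
def main():
    with open("numbers.txt") as f:
        total = parse_total(f.readlines())
    print(total)
```

You can unit-test `parse_total` without touching the filesystem, which is exactly the payoff the comment describes.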
Edit: Changed ∅ to 1 (singleton set) in function definitions. A function can't return ∅ and a function taking ∅ can't be called.
all of these are different ways to say “firefox account”?
Basically yes. The idea is that each suffix represents something that's typically a separate word in English, and you can mix and match suffixes just as you'd mix words.
- my account -> tilini
- to an account -> tilille
- to my account -> tililleni
- to my account too -> tilillenikin
As a Finnish speaker, I just can't see generic localisation systems like Fluent working with agglutinative languages. For instance, consider the Polish example for Firefox Account from Fluent's front page:
-sync-brand-name = {$case ->
*[nominative] Konto Firefox
[genitive] Konta Firefox
[accusative] Kontem Firefox
}
In Finnish, this would be called Firefox-tili. Tili means account and is a Kotus type 5 word, a class that determines how the word is inflected with various suffixes. Fluent doesn't support agglutination, so you'd have to specify every form of the word separately:
-sync-brand-name = {$case ->
*[nominative] Firefox-tili
[partitive] Firefox-tiliä
... 10ish more inflections
[nominative first person] Firefox-tilini
[partitive first person] Firefox-tiliäni
... 10ish more inflections
... the above inflections but for 2nd and 3rd persons
[nominative first person plural] Firefox-tilimme
[partitive first person plural] Firefox-tiliämme
... 10ish more inflections
... the above inflections but for 2nd and 3rd persons plural
[nominative first person questioning] Firefox-tilinikö
[no idea what this is even called] Firefox-tilittömänäkin
[no idea what this is even called 2] Firefox-tililleensäkään
... lots more
}
Ideally, you'd only specify that Firefox-tili is a type 5 word and the system generates all of that boilerplate.
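A toy sketch of that idea, using only the suffixes from the tili examples above. This is naive string concatenation (which happens to work for tili); a real system would also need consonant gradation and vowel harmony, derived from the Kotus word type.

```python
STEM = "tili"
CASES = {"nominative": "", "allative": "lle"}   # "to an X" = allative -lle
POSSESSIVES = {"": "", "my": "ni"}              # "my X" = -ni
CLITICS = {"": "", "too": "kin"}                # "X too" = clitic -kin

# Stack suffixes in the fixed order case -> possessive -> clitic.
def inflect(case, possessive="", clitic=""):
    return STEM + CASES[case] + POSSESSIVES[possessive] + CLITICS[clitic]

print(inflect("allative", "my", "too"))  # tilillenikin, "to my account too"
```

Even this toy version generates 2 × 2 × 2 forms from three suffix tables, which is the boilerplate reduction the comment is asking for.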
Teams does actually have a sound cue when people join calls: high-pitched repeated beeps that most people can't hear due to presbycusis, and the few who do hear it think they're going crazy.
The author should look into Koka. As I see it, Koka is at the bleeding edge of effect handling, which is why the async Rust team has taken some nibbles of inspiration from it. Alas, Rust as a whole is far too cemented to overhaul everything for generic effect support, but at least it's been beneficial for async.
Love it. Great lesson on why applicative exists, how it would/should look in C++ and discussion on prior/ongoing proposals.
I would call those language-specific. While they are useful in more than one language, they are also replaced by language features in many languages.
- Builder pattern can be simplified with keyword arguments (Python) or named arguments (C#), or with argument objects (JS, C++).
- Factory pattern can be simplified with first-class types (Lisp, Haskell).
- Visitor pattern is similarly simplified by first-class functions (supported by most languages nowadays).
- Dependency injection of concept X is generally simplified by first-class X. I think the least widely supported is dependency injection of effects (Koka).
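The builder case is the easiest to show concretely. A hypothetical sketch in Python (the `RequestBuilder`/`request` names are made up for illustration):

```python
# Java-style builder: what you write when the language lacks kwargs.
class RequestBuilder:
    def __init__(self):
        self._opts = {"timeout": 30, "retries": 0}
    def timeout(self, t):
        self._opts["timeout"] = t
        return self
    def retries(self, r):
        self._opts["retries"] = r
        return self
    def build(self):
        return dict(self._opts)

# With keyword arguments, the whole builder class disappears.
def request(url, *, timeout=30, retries=0):
    return {"url": url, "timeout": timeout, "retries": retries}
```

The builder's fluent chain `RequestBuilder().timeout(5).retries(2).build()` collapses to `request(url, timeout=5, retries=2)`: the "pattern" was just a workaround for missing named parameters.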
Design patterns are typically just workarounds for the limitations of the chosen programming language. What you might consider a pattern in C might just be a simple language feature in C++, and the patterns in C++ (as popularized by GoF) might just be language features in Lisp/Rust/whatever.
So rather than thinking about patterns, you should first choose the right language for the task. If you're working on a parser, you might prefer Haskell. If you need formal verification, there's C and Idris and little in between. If you need to hire lots of developers, something widely known like JS might be the choice.
After you've chosen a language, you can identify the things that the language is bad at and apply the appropriate design patterns. Sometimes the patterns can be found in books (C++, Java) and sometimes it's just tribal knowledge (D).
Regular expressions are great and can always be matched in linear time with respect to the input string length.
The problem is that JS standard library RegExps aren't actually regular expressions, but rather a much broader language, which is impossible to implement efficiently. If RegExp switched to proper regular expressions, they would match much faster but supporting backreferences like /(.*)x\1/ would be impossible.
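To illustrate with Python's `re` (which, like JS RegExp, is a backtracking engine): a backreference makes the pattern match a non-regular language, so no finite automaton can implement it, and backtracking engines pay for it with super-linear worst cases.

```python
import re

# (.*)x\1 matches strings of the form g + "x" + g -- a non-regular
# language, out of reach for any linear-time automaton-based engine.
m = re.fullmatch(r"(.*)x\1", "abcxabc")
print(m.group(1))  # abc

# A genuinely regular pattern compiles to an automaton, which is what
# linear-time engines like RE2 or Rust's regex crate run.
assert re.fullmatch(r"(a|b)*x", "ababx")
```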
I appreciate that the talk focuses more on automated approaches to safety this time. I feel the past years' talks were much more focused on goading developers into following the Core Guidelines, which doesn't really work in industry.
Very nice. I remember looking up some of these details from StackOverflow years ago, but this blows those answers away.
It would be much easier to read if it were an actual table, i.e., if the hex codes and the characters were separated into their own columns.
My workplace has the opposite problem.
The company has been in dire need of programmers for years, so they hired people (including myself) without tests. However, the work involves lots of custom iterators and the occasional handcrafted parser, which most of the company is incapable of writing. The bright side is that management has their metrics mostly right, so I'm getting lots of raises for solving fun problems.
I recommend Pyright over Mypy if you don't mind it being owned by Microsoft. It has far fewer bugs, and if you do stumble on one, you don't have to fix it yourself because Microsoft's paid devs will fix it in a couple of working days (at least for the small bugs I've reported).
I do very little coding, but it's because our workplace has an abundance of junior developers, not because I'm pressed for time. My work is essentially just turning emails into technical specifications that others can implement and tutoring juniors when there are problems. Few to no pointless meetings because I insist on using emails or tickets whenever possible.
Agreed. I think operator overloading is a necessary feature in any math-oriented or general-purpose language. Being able to write formulae the same way as they're written in the source paper is a huge boon to readability.
Different applications require different tests, so no measure is going to please everyone. If you're making embedded devices for an airplane, the buyer might ask you to provide a formal proof that the program works. In contrast, web apps tend to simply use end users as testers, since it's cheaper.
What is the median number of times you end up shuffling the array before it is sorted?
The answer is n!, where n is the length of the array.
I assume you meant mean instead of median. The median of a geometric distribution with parameter p is ceil(-1/log2(1-p)).
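A quick check of that formula for bogosort, where each shuffle is sorted with probability p = 1/n! (sketch in Python):

```python
import math

# Shuffles-until-sorted is geometric with p = 1/n!, so the mean is n!
# but the median is ceil(-1 / log2(1 - p)) -- noticeably smaller.
def median_shuffles(n):
    p = 1 / math.factorial(n)
    return math.ceil(-1 / math.log2(1 - p))

# For n = 3: p = 1/6, median = ceil(3.80...) = 4, versus a mean of 6.
print(median_shuffles(3))
```

The gap widens with n: the geometric distribution is heavily right-skewed, so the median sits well below the mean n!.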
The author wanted "a more powerful type of inference that can actually infer the whole type of a function by analyzing the body of the function", which Rust doesn't have by design.
P2881 proposes a new way to build iterators, similar to generators but with less performance overhead and better nesting (and hence composability):

```c++
struct generator123 {
    auto operator()(auto&& sink) const {
        std::control_flow flow;

        // With P2561: sink(1)??;
        flow = sink(1);
        if (!flow) return flow;

        // With P2561: sink(2)??;
        flow = sink(2);
        if (!flow) return flow;

        return sink(3);
    }
};

int main() {
    for (int x : generator123()) // x iterates 1, 2, 3
    {
        std::print("{}\n", x);
        if (x == 2) break;
    }
}
```
While the benefits are indisputable, it's quite verbose without P2561, and I feel C++ is starting to have too many different iterator models.
What do you think?
One of the authors has a nice presentation about pitfalls of ye olde for loops, which was probably a big motivator too: https://www.youtube.com/watch?v=95uT0RhMGwA