You may be fewer irritated by this with age
Misusing words like "setup" vs "set up", or "login" vs "log in". "Anytime" vs "any time" also steams my clams.
I use Fossil for all of my personal projects. Having a wiki and bug tracker built-in is really nice, and I like the way repositories sync. It's perfect for small teams that want everything, but don't want to rely on a host like GitHub or set up complicated software themselves.
I had this set up the day it was available in my area. Never got an alert. I find it difficult to believe I wasn't "exposed" during the pandemic, so I assume this didn't really provide much value.
Google cases always seem hit-or-miss. I just buy the same Spigen case for every phone. I know I like it.
This is a good reason to use Dvorak
ಠ_ಠ
But I'm bi-testual
Oh so now you're saying I have a big focal length?!?!?!? What is it with you people
Looks like he realized he left the oven on
If you (or anyone) want to send a message to try it: briar://acqmqrd2pmpm5nqaugnbkaby2na72glt72rjx3xkf25qtl4ruf5ss
It seems pretty neat.
Got it. So more for data at rest rather than handling the sending too?
SimpleX does file transfer pretty well, not sure about Briar now that I think about it.
Briar and SimpleX seemed decent the last time I looked into this.
I ended up using neither because I don't need privacy when talking to myself.
I have all of my important electronics (computers, entertainment center, network equipment) on a CP1500PFCLCD. They're scattered around the house, so there are multiple CP1500PFCLCDs.
...then there's a 22 kW gas generator that handles everything once it switches on.
I'll uhhhhhh take a large uhhhhhhhhhh frappe with uhhhhhh extra creme
I can recommend Kagi. Happy customer for over a year.
Some reading to get started:
Nice, I like this one. Disappointed the units converter doesn't have pounds and ounces as a target, though. It only does pounds or ounces, not both. I tend to see a lot of quantities specified as pounds and ounces at the same time in my day-to-day activities.
Playing devil's advocate, I'd be worried you'd avoid work you don't want to do but that is core work that needs to be done. Not all employers want or are set up to employ wildcards. You may have to make your own path here, too.
Kagi's verbatim search does this. You will actually get no results if nothing matches. It doesn't change your search and give you something you didn't ask for.
Quoting in a normal "All results" search works, too.
Tweet body, if you don't like le' Twitter:
>After 20 incredible years, I have decided to take a step back and work on the next chapter of my career. As I take a moment and think about all we have done together, I want to thank the millions of gamers around the world who have included me as part of their lives. (1/3)
>Also, thanks to Xbox team members for trusting me to have a direct dialogue with our customers. The future is bright for Xbox and as a gamer, I am excited to see the evolution.
>Thank you and I’ll see you online
>Larry Hryb (2/3)
>P.S. The official Xbox Podcast will be taking a hiatus this Summer and will come back in a new format. (3/3)
Poking around the network requests for ChatGPT, I've noticed the /backend-api/models response includes information for each model, including the maximum tokens.
For me:
- GPT-3.5: 8191
- GPT-4: 4095
- GPT-4 with Code Interpreter: 8192
- GPT-4 with Plugins: 8192
It seems to be accurate. I've had content that is too long for GPT-4, but is accepted by GPT-4 with Code Interpreter. The quality feels about the same, too.
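If you want to pull the list for your own account, here's a rough sketch of replaying the request outside the browser. This is an internal, undocumented endpoint, so treat the host, path, and headers as just what I saw in the network tab; you'd have to copy the bearer token from an existing request's Authorization header in dev tools, and bot protection may block it anyway (copying the response body straight out of the network tab works just as well):

```python
# Not an official API: this just replays the request the ChatGPT web app makes.
# The bearer token comes from an existing request's Authorization header in
# your browser's dev tools, and it expires after a while.
import requests

ACCESS_TOKEN = "eyJ..."  # paste your session token here

resp = requests.get(
    "https://chat.openai.com/backend-api/models",
    headers={
        "Authorization": f"Bearer {ACCESS_TOKEN}",
        "User-Agent": "Mozilla/5.0",  # the endpoint may reject requests without one
    },
    timeout=10,
)
resp.raise_for_status()

# Print slug -> max_tokens for each model in the response
for model in resp.json()["models"]:
    print(f'{model["slug"]}: {model["max_tokens"]}')
```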
Here's the response I get from /backend-api/models, as a Plus subscriber:
```json
{
  "models": [
    {
      "slug": "text-davinci-002-render-sha",
      "max_tokens": 8191,
      "title": "Default (GPT-3.5)",
      "description": "Our fastest model, great for most everyday tasks.",
      "tags": ["gpt3.5"],
      "capabilities": {}
    },
    {
      "slug": "gpt-4",
      "max_tokens": 4095,
      "title": "GPT-4",
      "description": "Our most capable model, great for tasks that require creativity and advanced reasoning.",
      "tags": ["gpt4"],
      "capabilities": {}
    },
    {
      "slug": "gpt-4-code-interpreter",
      "max_tokens": 8192,
      "title": "Code Interpreter",
      "description": "An experimental model that can solve tasks by generating Python code and executing it in a Jupyter notebook.\nYou can upload any kind of file, and ask model to analyse it, or produce a new file which you can download.",
      "tags": ["gpt4", "beta"],
      "capabilities": {},
      "enabled_tools": ["tools2"]
    },
    {
      "slug": "gpt-4-plugins",
      "max_tokens": 8192,
      "title": "Plugins",
      "description": "An experimental model that knows when and how to use plugins",
      "tags": ["gpt4", "beta"],
      "capabilities": {},
      "enabled_tools": ["tools3"]
    },
    {
      "slug": "text-davinci-002-render-sha-mobile",
      "max_tokens": 8191,
      "title": "Default (GPT-3.5) (Mobile)",
      "description": "Our fastest model, great for most everyday tasks.",
      "tags": ["mobile", "gpt3.5"],
      "capabilities": {}
    },
    {
      "slug": "gpt-4-mobile",
      "max_tokens": 4095,
      "title": "GPT-4 (Mobile, V2)",
      "description": "Our most capable model, great for tasks that require creativity and advanced reasoning.",
      "tags": ["gpt4", "mobile"],
      "capabilities": {}
    }
  ],
  "categories": [
    {
      "category": "gpt_3.5",
      "human_category_name": "GPT-3.5",
      "subscription_level": "free",
      "default_model": "text-davinci-002-render-sha",
      "code_interpreter_model": "text-davinci-002-render-sha-code-interpreter",
      "plugins_model": "text-davinci-002-render-sha-plugins"
    },
    {
      "category": "gpt_4",
      "human_category_name": "GPT-4",
      "subscription_level": "plus",
      "default_model": "gpt-4",
      "code_interpreter_model": "gpt-4-code-interpreter",
      "plugins_model": "gpt-4-plugins"
    }
  ]
}
```
Anyone seeing anything different? I haven't really seen this compared anywhere.
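If anyone does check, here's a quick way to diff two saved responses (the file names are hypothetical; save each /backend-api/models body to a file first):

```python
# Compare max_tokens per slug between two saved /backend-api/models responses,
# e.g. python compare_models.py mine.json theirs.json
import json
import sys

def max_tokens_by_slug(path):
    with open(path) as f:
        return {m["slug"]: m["max_tokens"] for m in json.load(f)["models"]}

mine = max_tokens_by_slug(sys.argv[1])
theirs = max_tokens_by_slug(sys.argv[2])

# Only print the slugs where the limits differ (or a model is missing entirely)
for slug in sorted(mine.keys() | theirs.keys()):
    if mine.get(slug) != theirs.get(slug):
        print(f"{slug}: {mine.get(slug)} vs {theirs.get(slug)}")
```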