You don't need to pirate OpenAI. I've built the AI Horde so y'all can use it without any workarounds or shenanigans, and you can use your PCs to help others as well.
Unfortunately I'm not an expert in LLMs, so I don't know. I suggest you contact the KoboldAI community; they should be able to point you in the right direction.
Kobold is a program for running local LLMs. Some seem on par with GPT-3, but normally you're going to need a very beefy system just to run them slowly.
The benefit is rather clear: less centralized and free from strict policies. But GPT-3 is also miles away from GPT-3.5. Exponential growth ftw. I have yet to see something as good and fast as ChatGPT.
Anonchatgpt should stop being recommended; it really sucks. It has a VERY strict character limit, immediately forgets/ignores the context, requires reCAPTCHA, and the "anon" part of the name is obviously fake if you read the privacy policy.
I'm curious what your area of expertise is. I'm interested in using AI as a programming assistant, but that seems like an entirely different skillset than, say, general language modeling. I assume some models will be good in one area and some in another.
Any news on how these tend to perform compared to GPT-4? I finally decided to toss OpenAI 20 quid to try it out for a month, and it's pretty impressive.
I've tinkered with a Discord bot using the official GPT-3.5 API. It's astonishingly cheap: using the 3.5-turbo model, I've never cracked $1 in a month and usually spend just a couple of cents a week. Obviously this would be different if you're running a business with it or something, but for personal use like answering questions, writing short blurbs, and entertaining us while drunk... it's not bad at all in my experience.
You're billed per token usage. GPT-3.5-Turbo price per 1K tokens is quite low now.
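Per-token billing is easy to reason about with a quick back-of-the-envelope script. The price constant below is an illustrative placeholder, not a quoted rate; check the provider's current pricing page.

```python
# Rough cost estimate for per-token API billing.
PRICE_PER_1K_TOKENS = 0.002  # assumed example rate in USD, not an actual quote

def estimate_cost(prompt_tokens: int, completion_tokens: int,
                  price_per_1k: float = PRICE_PER_1K_TOKENS) -> float:
    """Return the estimated USD cost for one request."""
    total_tokens = prompt_tokens + completion_tokens
    return total_tokens / 1000 * price_per_1k

# e.g. a chatbot reply using ~500 prompt + ~300 completion tokens
print(f"${estimate_cost(500, 300):.4f}")  # -> $0.0016
```

At rates like that, a few hundred casual requests a month stays in the cents range, which matches the experience above.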
I kinda made my own custom ChatGPT with Python (and LOTS of coding help from the web ChatGPT). It evolved from a shitty few-line script into a version that uses LangChain, has access to custom tools including custom data indexes, and has persistent memory.
What will ramp up the cost is things like how much context (memory) you want the chatbot to have. If you use something like a recursive summarizer, which summarizes a text chunk by chunk over and over until the text is below a set length, that also makes many API calls that consume tokens. Also, if you want your chatbot to use custom info you've provided, solutions like LlamaIndex are easy to use but burn quite a few tokens per query.
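The recursive summarizer idea can be sketched in a few lines. This is a minimal offline mock: `summarize_chunk` is a stand-in for the actual LLM API call (here it just keeps each chunk's first sentence so the example runs without a key), and the chunk sizes are arbitrary assumptions; a real version would count tokens, not characters.

```python
def summarize_chunk(chunk: str) -> str:
    # Placeholder for an LLM call: keep only the first sentence.
    return chunk.split(". ")[0].strip().rstrip(".") + "."

def chunks(text: str, size: int):
    # Naive fixed-width chunking; real code would split on token counts.
    for i in range(0, len(text), size):
        yield text[i:i + size]

def recursive_summarize(text: str, max_len: int = 200,
                        chunk_size: int = 400) -> str:
    # Summarize chunk by chunk, then repeat on the joined summaries
    # until the result fits under the length budget.
    while len(text) > max_len:
        summaries = [summarize_chunk(c) for c in chunks(text, chunk_size)]
        new_text = " ".join(summaries)
        if len(new_text) >= len(text):  # guard: input stopped shrinking
            break
        text = new_text
    return text
```

Note that each pass is one API call per chunk, which is exactly why this pattern eats tokens: a long document can take a dozen calls before it fits the budget.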
In my worst month, with lots of usage due to testing and before the latest price drop, I reached $70.
I’m working on a similar project right now with zero coding knowledge. I’ve been trying to find something like LangChain all day. I built (by which I mean I coached GPT into building) a web scraper script that can interact with the web to perform searches and then parse the results, but the outputs are getting too big to manage in a hacked-together terminal interface.
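For reference, the "parse the results" half of a scraper like that can be done with just the standard library. This is a minimal sketch that pulls (text, href) pairs out of anchor tags; the sample HTML and the assumption that results live in plain `<a>` tags are illustrative, since every search page structures its markup differently.

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect (link text, href) pairs from anchor tags."""
    def __init__(self):
        super().__init__()
        self.links = []    # collected (text, href) pairs
        self._href = None  # href of the <a> we are currently inside
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append(("".join(self._text).strip(), self._href))
            self._href = None

# Hypothetical snippet standing in for a fetched results page.
sample = '<div><a href="https://example.com">Example result</a></div>'
parser = LinkExtractor()
parser.feed(sample)
print(parser.links)  # [('Example result', 'https://example.com')]
```

Keeping the parsed output as a structured list like this, instead of raw page text, is also what keeps it manageable in a terminal interface.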
How are you doing the UI? That’s what I’m finding to be the biggest puzzle that isn’t fun to solve. I’ve been looking at react as a way to do it.
Loved the depth of this info, although it's over my head. But I kind of understood? I have a project to focus on for the next while. I hear that it's possible to do, and that's exciting.
Of the language models you can run locally, I've found them awkward to use, and they don't perform too well. If anyone knows of any newer ones that do a better job, I'd love to know.