Hi there,
I’m using llama3.3-70B as the LLM. IMO it’s very good and open-source.
Normally I use Cerebras/llama-3.3-70b, which is blazing fast and free (generous daily limits). When the internet connection goes down, the same LLM is served (slowly) by my old i7 notebook, giving the same functionality except for the delay, which I tolerate given the “emergency” state.
No problems while offline, but online it works only for short replies; otherwise it returns this error:
Is that “150” value tunable on my side?
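In case it matters, this is roughly how I call the API (a minimal sketch, not my exact code; it assumes Cerebras’s OpenAI-compatible Chat Completions endpoint, and that the “150” cap corresponds to a max-output-tokens setting I could override):

```python
# Sketch: calling Cerebras via the OpenAI-compatible endpoint with an
# explicit max_tokens, assuming the 150 cap is a tunable output limit.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.cerebras.ai/v1",  # Cerebras OpenAI-compatible endpoint
    api_key="YOUR_CEREBRAS_API_KEY",        # placeholder
)

response = client.chat.completions.create(
    model="llama-3.3-70b",
    messages=[{"role": "user", "content": "Write a long reply, please."}],
    max_tokens=1024,  # assumption: raising this would lift the 150-token cap
)
print(response.choices[0].message.content)
```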
Thank you,
Piero