Channel: Configuration - Home Assistant Community

Cerebras/llama-3.3: token length (`150`) exceeded. Increase maximum tokens to avoid the issue


Hi there,
I’m using llama3.3-70B as the LLM. IMO it’s very good and open-source.

Normally, I use Cerebras/llama-3.3-70b, which is blazing fast and free (generous daily limits). When the internet goes offline, the same LLM is served, slowly, by my old i7 notebook, giving the same functionality apart from the delay, which I tolerate given the “emergency” state.

There are no problems while offline, but online it only works for short replies; longer ones return this error:

Is that “150” value tunable on my side?
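For reference, with OpenAI-compatible APIs such as the one Cerebras exposes, the reply-length cap is usually a per-request `max_tokens` parameter rather than a fixed server limit. A minimal sketch of what raising it would look like (the model identifier and the default of 150 here are assumptions for illustration, not confirmed values from the integration):

```python
def build_chat_request(prompt: str, max_tokens: int = 150) -> dict:
    """Build an OpenAI-style chat-completion payload with an explicit token cap."""
    return {
        "model": "llama-3.3-70b",  # assumed Cerebras model identifier
        "messages": [{"role": "user", "content": prompt}],
        # Raising this value allows longer replies; 150 would explain
        # why only short answers succeed.
        "max_tokens": max_tokens,
    }

payload = build_chat_request("Turn off the living-room lights", max_tokens=1024)
print(payload["max_tokens"])
```

If the Home Assistant integration exposes a matching setting, increasing it there should have the same effect as changing `max_tokens` in this payload.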
Thank you,
Piero

2 posts - 2 participants

