Hi,
Thanks for working on something like this; it is a great idea.
I was curious: why does it ask the LLM directly instead of reading the logprob of each candidate token and taking the top-k with k set to 1? That way the LLM itself can find the next word.
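To make the suggestion concrete, here is a minimal sketch of what I mean, using made-up logits rather than a real model call (in practice these values would come from whatever API the project uses, e.g. a completions endpoint that exposes logprobs):

```python
import math

def logprobs_from_logits(logits):
    """Convert raw logits to logprobs via a numerically stable log-softmax."""
    m = max(logits.values())
    log_z = m + math.log(sum(math.exp(v - m) for v in logits.values()))
    return {tok: v - log_z for tok, v in logits.items()}

def next_token_from_logprobs(logprobs):
    """Greedy pick: top-k with k=1 over the per-token logprobs."""
    return max(logprobs, key=logprobs.get)

# Hypothetical logits for the next-token position (illustration only)
logits = {"cat": 2.1, "dog": 3.4, "fish": 0.7}
logprobs = logprobs_from_logits(logits)
print(next_token_from_logprobs(logprobs))  # dog
```

The point being: once the logprobs are available, the next word falls out of a single argmax instead of a full round-trip prompt to the model.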
KR.