connection error #350
Unanswered
marcorossi85
asked this question in Q&A
Replies: 1 comment 9 replies
- Can you give the terminal output when running with …?
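To capture that output, Newelle can be launched from a shell so its stdout/stderr stay visible. A minimal sketch, assuming the Flathub application ID io.github.qwersyk.Newelle:

```sh
# Launch the Flatpak build of Newelle from a terminal so any
# connection errors it prints show up in the shell.
flatpak run io.github.qwersyk.Newelle
```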
- I installed Newelle on two different PCs (both running Ubuntu) via Flatpak. In the settings, I chose to run local models. On one of the PCs, I rebuilt llama.cpp with hardware acceleration (by the way, I recommend improving your tutorials, because some important steps for compiling llama.cpp are not explained; if you want, I can contribute the text to add). I downloaded two local models suited to my hardware. On both PCs, I tried chatting with both models and always received a "connection error."
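For context, a hedged sketch of the kind of build steps the tutorial could spell out; the CUDA flag below comes from the upstream llama.cpp build docs, and other backends (Vulkan, HIP, SYCL) use their own flags:

```sh
# Build llama.cpp with CUDA acceleration (NVIDIA GPUs).
# For other hardware, swap the flag, e.g. -DGGML_VULKAN=ON.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build -DGGML_CUDA=ON
cmake --build build --config Release
```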
On the PC where I recompiled llama.cpp with hardware acceleration, I tried running the models from the terminal with the newly compiled llama.cpp binary, and there they work fine.
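For reference, a terminal check along those lines might look like the sketch below; since Newelle reports a connection error, it is also worth verifying that an HTTP endpoint is reachable at all. The model path and port are placeholders:

```sh
# Direct interactive test with the compiled binary.
./build/bin/llama-cli -m /path/to/model.gguf -p "Hello"

# Serve the model over HTTP, the way a GUI front end consumes it.
./build/bin/llama-server -m /path/to/model.gguf --host 127.0.0.1 --port 8080

# From another shell: if this fails, a front end would also
# report a connection error.
curl http://127.0.0.1:8080/health
```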
Can you help me understand where the problem is?
My understanding is that Newelle runs local models downloaded with llama.cpp out of the box, without any configuration, but on both installations it doesn't work.
Thank you so much.
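One lead worth ruling out (an assumption, not something confirmed in this thread): Flatpak applications run sandboxed, so a llama.cpp binary compiled on the host is not automatically visible to the Flatpak build of Newelle. A shell inside the sandbox shows what the app can actually see:

```sh
# Open a shell inside Newelle's sandbox
# (application ID assumed from Flathub).
flatpak run --command=sh io.github.qwersyk.Newelle

# Inside that shell, check whether the host-built binary exists;
# the path below is a placeholder for wherever llama.cpp was built.
ls /path/to/llama.cpp/build/bin/llama-server
```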