If you run llama.cpp on Windows with the `-hf` flag to download a model automatically and it fails with an SSL certificate verification error, the problem is usually not CUDA or llama.cpp itself. More often, the program cannot correctly access the system certificate chain in the current environment, so HTTPS verification fails.
From the log, ggml-rpc.dll and ggml-cpu-alderlake.dll were loaded successfully, so the runtime environment is mostly fine; the failure happens in the model download step.
The easiest workaround: download the model manually
If you just want to get it running quickly, downloading the model manually is usually the most stable option.
- Open the matching Hugging Face repository page.
- Download the required `.gguf` file from the **Files and versions** tab.
- After the download finishes, pass the local file path to llama.cpp with `-m` instead of `-hf`.
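As a sketch, assuming the binary is `llama-cli` and the file was saved under `C:\models\` (both the binary name and the model filename here are placeholders for your actual setup):

```powershell
# Load the model from the local path with -m instead of downloading with -hf.
# "llama-cli.exe" and the model path are example names; substitute your own.
.\llama-cli.exe -m C:\models\model.gguf -p "Hello"
```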
Because the model is loaded from disk, the `-hf` download (and its SSL verification) is skipped entirely; this is the quickest way to confirm that the model itself runs locally.
If you still want to use `-hf` automatic download
You can manually specify a certificate file path so the program can find a usable CA bundle in the current session.
cacert.pem can be obtained from the CA Extract page maintained by the curl project:
- Page: https://curl.se/docs/caextract.html
- Direct download: https://curl.se/ca/cacert.pem
If you download it in a browser, open the direct download link and save it as `cacert.pem`. You can also fetch it to a fixed directory with PowerShell's `Invoke-WebRequest`.
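For example, assuming `C:\certs` is the directory you want to keep the bundle in (any fixed path works):

```powershell
# Create a fixed directory for the CA bundle (the path is an example).
New-Item -ItemType Directory -Force -Path C:\certs | Out-Null
# Download the curl project's extracted CA bundle.
Invoke-WebRequest -Uri https://curl.se/ca/cacert.pem -OutFile C:\certs\cacert.pem
```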
After the download finishes, point the certificate-related environment variables at the file in the same command-line session.
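In `cmd.exe` this typically means setting `SSL_CERT_FILE` (read by OpenSSL) and `CURL_CA_BUNDLE`; which of the two a given llama.cpp build honors depends on how it was compiled, so setting both is the safe option. The path below assumes the bundle was saved to `C:\certs\cacert.pem`:

```bat
rem Point certificate lookups at the downloaded bundle (path is an example).
rem These variables only affect the current command-line session.
set SSL_CERT_FILE=C:\certs\cacert.pem
set CURL_CA_BUNDLE=C:\certs\cacert.pem
```

In PowerShell, the equivalents are `$env:SSL_CERT_FILE = "C:\certs\cacert.pem"` and `$env:CURL_CA_BUNDLE = "C:\certs\cacert.pem"`.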
Then run the original `-hf` command again in the same session.
If the issue really comes from the certificate chain, this usually fixes it.