Thanks for all the feedback! I went outside to take a walk after posting this and just came back, and went through the comments to summarize what needs to be improved. Basically it comes down to the following:

*customize features:* should not be difficult (will add feature flags)

*path:* customize the home directory (instead of automatically storing to $HOME)

*python:* some people are having issues with the python binary (since the package is essentially calling these shell commands). Maybe add a flag to specify the exact name of the python binary (such as "--python python3")

*avoid downloading files:* I have this issue too when I just want to install the code instead of downloading the full model, which takes a long time. Might add a flag to avoid downloading models in case you already have them (EDIT: actually, upon thinking about it, it's better to just set the source model folder, something like "--model")

*other flags:* the rest of the flags natively supported by the llama.cpp project, such as top_k, top_p, temp, batch_size, threads, seed, n_predict, etc. They are already in the code, just not exposed to the CLI and not documented. (A rough sketch of the pass-through idea follows this list.)

document the storage spec: how much space is used?

node version: which version of node.js is required?

python version: which version of python doesn't work?
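To make the pass-through idea concrete, here is a hypothetical sketch in C (the package itself is a Node.js tool, so this is purely illustrative): collect the known flags and hand them unchanged to the compiled llama.cpp binary. The `./main` path, the defaults, and the exact flag spellings (`--temp`, `--top_k`, `-n`) are assumptions based on llama.cpp's usage text, not this package's code.

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(int argc, char **argv) {
    const char *temp      = "0.8";   /* illustrative defaults only */
    const char *top_k     = "40";
    const char *n_predict = "128";

    /* Pick up any flags the user passed on our own command line. */
    for (int i = 1; i + 1 < argc; i++) {
        if      (strcmp(argv[i], "--temp")      == 0) temp      = argv[++i];
        else if (strcmp(argv[i], "--top_k")     == 0) top_k     = argv[++i];
        else if (strcmp(argv[i], "--n_predict") == 0) n_predict = argv[++i];
    }

    /* Nothing is reinterpreted: the values go straight through to the
     * llama.cpp binary, which is why exposing the remaining flags is
     * mostly a documentation task. */
    execl("./main", "./main",
          "--temp", temp, "--top_k", top_k, "-n", n_predict,
          (char *)NULL);
    perror("execl");  /* reached only if exec fails */
    return 1;
}
```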
Am I missing anything? Feel free to leave comments; will try to roll out some updates as soon as I can. To stay updated, feel free to follow me on twitter (or you could create issues on GitHub too!)

From the comments: "What exactly do I need to install prior to running that command?", along with this session log:

```
Exec: git clone /Users/rickg/llama.cpp in undefined
The default interactive shell is now zsh.
To update your account to use zsh, please run `chsh -s /bin/zsh`.
fatal: destination path '/Users/rickg/llama.cpp' already exists and is not an empty directory.
Exec: python3 -m venv /Users/rickg/llama.cpp/venv in undefined
bash-3.2$ python3 -m venv /Users/rickg/llama.cpp/venv
Exec: /Users/rickg/llama.cpp/venv/bin/pip install torch torchvision torchaudio sentencepiece numpy in undefined
bash-3.2$ /Users/rickg/llama.cpp/venv/bin/pip install torch torchvision torchaudio sentencepiece numpy
Requirement already satisfied: torchvision in /Users/rickg/llama.cpp/venv/lib/python3.10/site-packages
Requirement already satisfied: torchaudio in /Users/rickg/llama.cpp/venv/lib/python3.10/site-packages
Requirement already satisfied: sentencepiece in /Users/rickg/llama.cpp/venv/lib/python3.10/site-packages
Requirement already satisfied: typing-extensions in /Users/rickg/llama.cpp/venv/lib/python3.10/site-packages (from torch) (4.5.0)
Requirement already satisfied: pillow!=8.3.* in /Users/rickg/llama.cpp/venv/lib/python3.10/site-packages (from torchvision) (9.4.0)
Requirement already satisfied: requests in /Users/rickg/llama.cpp/venv/lib/python3.10/site-packages (from torchvision) (2.28.2)
Requirement already satisfied: charset-normalizer>=2 in /Users/rickg/llama.cpp/venv/lib/python3.10/site-packages (from requests->torchvision) (3.1.0)
Requirement already satisfied: urllib3>=1.21.1 in /Users/rickg/llama.cpp/venv/lib/python3.10/site-packages (from requests->torchvision) (1.26.15)
Requirement already satisfied: idna>=2.5 in /Users/rickg/llama.cpp/venv/lib/python3.10/site-packages (from requests->torchvision) (3.4)
Requirement already satisfied: certifi>=2017.4.17 in /Users/rickg/llama.cpp/venv/lib/python3.10/site-packages (from requests->torchvision) (2022.12.7)
A new release of pip available: 22.3.1 -> 23.0.1
To update, run: python3 -m pip install --upgrade pip
I CFLAGS:   -I.              -O3 -DNDEBUG -std=c11   -fPIC -pthread -DGGML_USE_ACCELERATE
I CXXFLAGS: -I. -I./examples -O3 -DNDEBUG -std=c++11 -fPIC -pthread
cc -I. -O3 -DNDEBUG -std=c11 -fPIC -pthread -DGGML_USE_ACCELERATE -c ggml.c -o ggml.o
ggml.c:1364:25: error: implicit declaration of function 'vdotq_s32' is invalid in C99
ggml.c:1364:19: error: initializing 'int32x4_t' (vector of 4 'int32_t' values) with an expression of incompatible type 'int'
ggml.c:1365:19: error: initializing 'int32x4_t' (vector of 4 'int32_t' values) with an expression of incompatible type 'int'
ggml.c:1367:13: error: assigning to 'int32x4_t' (vector of 4 'int32_t' values) from incompatible type 'int'
ggml.c:1368:13: error: assigning to 'int32x4_t' (vector of 4 'int32_t' values) from incompatible type 'int'
```
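The implicit-declaration error is the root cause here: `arm_neon.h` only declares `vdotq_s32` when the compiler targets ARM's dot-product extension, and once C99 falls back to an implicit `int`-returning declaration, every assignment of its result to an `int32x4_t` fails with exactly the type errors above. One fix is to build with a `-march` value that includes `+dotprod`; the other is a guarded software fallback. Below is a minimal sketch of the latter, close to what ggml itself later shipped; treat the exact code as illustrative rather than the project's verbatim source:

```c
#include <arm_neon.h>

#if !defined(__ARM_FEATURE_DOTPROD)
// Emulate the SDOT intrinsic: widen the i8 products to i16, pairwise-add
// down to i32, and accumulate. The per-lane grouping differs from the real
// instruction, but that is harmless when the vector is only ever reduced
// horizontally afterwards, as in ggml's quantized dot products.
inline static int32x4_t vdotq_s32(int32x4_t acc, int8x16_t a, int8x16_t b) {
    const int16x8_t p0 = vmull_s8(vget_low_s8 (a), vget_low_s8 (b));
    const int16x8_t p1 = vmull_s8(vget_high_s8(a), vget_high_s8(b));
    return vaddq_s32(acc, vaddq_s32(vpaddlq_s16(p0), vpaddlq_s16(p1)));
}
#endif
```

On a toolchain that does expose the extension (for example when building with a `+dotprod` architecture flag), the guard compiles to nothing and the hardware instruction is used instead.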