@flavionm, yes you are right! I have added them to provides. Thank you!
Shouldn't this package provide ggml/libggml, since that is basically fully included in this package?
Hi @Orion-zhen, b7652-1 builds successfully. I had problems with the previous 2-3 versions (b7589), though building from source worked, so I guess it doesn't matter now. Thanks anyway.
Hi, @nula. I couldn't reproduce the error. I'm building on both a Ryzen 9 7950X and a GitHub Actions runner.
Not able to build it anymore (Ryzen 9 7900):
[ 3%] No patch step for 'vulkan-shaders-gen'
/home/fd/Downloads/llama.cpp-vulkan/src/llama.cpp/ggml/src/ggml.c: In function ‘ggml_fp32_to_bf16_row’:
/home/fd/Downloads/llama.cpp-vulkan/src/llama.cpp/ggml/src/ggml.c:482:9: error: implicit declaration of function ‘_mm512_storeu_si512’ [-Wimplicit-function-declaration]
482 | _mm512_storeu_si512(
| ^~~~~~~~~~~~~~~~~~~
/home/fd/Downloads/llama.cpp-vulkan/src/llama.cpp/ggml/src/ggml.c:483:14: error: ‘__m512i’ undeclared (first use in this function); did you mean ‘m512i’?
483 | (__m512i *)(y + i),
| ^~~~~~~
| m512i
/home/fd/Downloads/llama.cpp-vulkan/src/llama.cpp/ggml/src/ggml.c:483:14: note: each undeclared identifier is reported only once for each function it appears in
/home/fd/Downloads/llama.cpp-vulkan/src/llama.cpp/ggml/src/ggml.c:483:23: error: expected expression before ‘)’ token
483 | (__m512i *)(y + i),
| ^
[ 4%] Performing configure step for 'vulkan-shaders-gen'
/home/fd/Downloads/llama.cpp-vulkan/src/llama.cpp/ggml/src/ggml.c:484:19: error: implicit declaration of function ‘_mm512_cvtne2ps_pbh’ [-Wimplicit-function-declaration]
484 | m512i(_mm512_cvtne2ps_pbh(_mm512_loadu_ps(x + i + 16),
| ^~~~~~~~~~~~~~~~~~~
/home/fd/Downloads/llama.cpp-vulkan/src/llama.cpp/ggml/src/ggml.c:61:28: note: in definition of macro ‘m512i’
61 | #define m512i(p) (__m512i)(p)
| ^
/home/fd/Downloads/llama.cpp-vulkan/src/llama.cpp/ggml/src/ggml.c:484:39: error: implicit declaration of function ‘_mm512_loadu_ps’ [-Wimplicit-function-declaration]
484 | m512i(_mm512_cvtne2ps_pbh(_mm512_loadu_ps(x + i + 16),
| ^~~~~~~~~~~~~~~
/home/fd/Downloads/llama.cpp-vulkan/src/llama.cpp/ggml/src/ggml.c:61:28: note: in definition of macro ‘m512i’
61 | #define m512i(p) (__m512i)(p)
| ^
make[2]: *** [ggml/src/CMakeFiles/ggml-base.dir/build.make:79: ggml/src/CMakeFiles/ggml-base.dir/ggml.c.o] Error 1
make[2]: *** Waiting for unfinished jobs....
-- The C compiler identification is GNU 15.2.1
[ 4%] Linking CXX executable ../../bin/llama-gemma3-cli
[ 4%] Linking CXX executable ../../bin/llama-minicpmv-cli
[ 4%] Linking CXX executable ../../bin/llama-qwen2vl-cli
[ 4%] Linking CXX executable ../../bin/llama-llava-cli
-- The CXX compiler identification is GNU 15.2.1
-- Detecting C compiler ABI info
[ 4%] Built target llama-gemma3-cli
[ 4%] Built target llama-minicpmv-cli
[ 4%] Built target llama-llava-cli
[ 4%] Built target llama-qwen2vl-cli
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
[ 4%] Built target xxhash
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
-- Found Threads: TRUE
-- Enabling coopmat glslc support
-- Enabling coopmat2 glslc support
-- Enabling dot glslc support
-- Enabling bfloat16 glslc support
-- Configuring done (0.4s)
-- Generating done (0.0s)
-- Build files have been written to: /home/fd/Downloads/llama.cpp-vulkan/src/build/ggml/src/ggml-vulkan/vulkan-shaders-gen-prefix/src/vulkan-shaders-gen-build
[ 4%] Performing build step for 'vulkan-shaders-gen'
[ 50%] Building CXX object CMakeFiles/vulkan-shaders-gen.dir/vulkan-shaders-gen.cpp.o
make[1]: *** [CMakeFiles/Makefile2:1267: ggml/src/CMakeFiles/ggml-base.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
[100%] Linking CXX executable vulkan-shaders-gen
[ 4%] Linking CXX static library libcpp-httplib.a
[ 4%] Built target cpp-httplib
[100%] Built target vulkan-shaders-gen
[ 4%] Performing install step for 'vulkan-shaders-gen'
-- Installing: /home/fd/Downloads/llama.cpp-vulkan/src/build/Release/./vulkan-shaders-gen
[ 4%] Completed 'vulkan-shaders-gen'
[ 4%] Built target vulkan-shaders-gen
make: *** [Makefile:136: all] Error 2
==> ERROR: A failure occurred in build().
Aborting...
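The errors above all come from AVX-512 BF16 intrinsics (`_mm512_storeu_si512`, `_mm512_cvtne2ps_pbh`) being compiled without the matching target support. One possible local workaround, untested here and with option names taken from llama.cpp's CMake (they have changed between versions, so verify against your checkout), is to disable the native/AVX-512 code paths:

```shell
# Hypothetical workaround sketch: turn off the native and AVX-512 BF16
# code paths that use _mm512_cvtne2ps_pbh. Option names may differ by version.
GGML_CMAKE_FLAGS="-DGGML_NATIVE=OFF -DGGML_AVX512=OFF -DGGML_AVX512_BF16=OFF"
# cmake -B build $GGML_CMAKE_FLAGS && cmake --build build
echo "$GGML_CMAKE_FLAGS"
```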
Hi, @Dominiquini. I have added that line, thank you for your advice.
@Orion-zhen: Add this line to the PKGBUILD to avoid replacing the config file after every update:
backup=("etc/conf.d/llama.cpp")
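For reference, a minimal sketch of where that line sits in a PKGBUILD (the version fields below are placeholders, only the backup= line is the actual suggestion):

```shell
# Illustrative PKGBUILD fragment; pkgver/pkgrel/arch are placeholders.
pkgname=llama.cpp-vulkan
pkgver=0
pkgrel=1
arch=('x86_64')
# Paths in backup= (relative, no leading slash) are treated as config files:
# pacman keeps local edits and writes updated versions as .pacnew
# instead of overwriting them on upgrade.
backup=("etc/conf.d/llama.cpp")
```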
Hi, @Basxto. I have added .service and .conf files. Now you can start llama-server more easily :)
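For anyone curious, a service unit like the one described might look roughly like this. This is a sketch: the unit name, binary path, and the LLAMA_ARGS variable are assumptions, not necessarily what the package actually ships.

```ini
# Hypothetical /usr/lib/systemd/system/llama.cpp.service
[Unit]
Description=llama.cpp server
After=network.target

[Service]
# Arguments would come from the editable config file, e.g. /etc/conf.d/llama.cpp
EnvironmentFile=/etc/conf.d/llama.cpp
ExecStart=/usr/bin/llama-server $LLAMA_ARGS
Restart=on-failure

[Install]
WantedBy=multi-user.target
```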