
URL: https://aur.archlinux.org/packages/llama.cpp-vulkan

AUR (en) - llama.cpp-vulkan


Arch Linux User Repository


Package Details: llama.cpp-vulkan b8665-1

Git Clone URL: https://aur.archlinux.org/llama.cpp-vulkan.git (read-only)
Package Base: llama.cpp-vulkan
Description: Port of Facebook's LLaMA model in C/C++ (with Vulkan GPU optimizations)
Upstream URL: https://github.com/ggml-org/llama.cpp
Licenses: MIT
Conflicts: ggml, libggml, llama.cpp, stable-diffusion.cpp
Provides: ggml, libggml, llama.cpp
Submitter: txtsd
Maintainer: Orion-zhen
Last Packager: Orion-zhen
Votes: 21
Popularity: 1.88
First Submitted: 2024-10-26 20:10 (UTC)
Last Updated: 2026-04-05 00:34 (UTC)

Required by (5)

Sources (3)

Pinned Comments

Orion-zhen commented on 2025-09-02 03:17 (UTC) (edited on 2025-09-02 13:20 (UTC) by Orion-zhen)

I can't receive AUR notifications in real time, so if you have a problem that requires immediate feedback or communication, please consider opening an issue in this GitHub repository, where I maintain all my AUR packages. Thank you for your understanding.

txtsd commented on 2024-10-26 20:15 (UTC) (edited on 2024-12-06 14:15 (UTC) by txtsd)

Alternate versions

llama.cpp
llama.cpp-vulkan
llama.cpp-sycl-fp16
llama.cpp-sycl-fp32
llama.cpp-cuda
llama.cpp-cuda-f16
llama.cpp-hip

Latest Comments


Orion-zhen commented on 2026-04-03 07:24 (UTC)

@flavionm, yes you are right! I have added them to provides. Thank you!

flavionm commented on 2026-03-31 01:13 (UTC)

Shouldn't this package provide ggml/libggml, since that is basically fully included in it?

nula commented on 2026-01-07 13:25 (UTC) (edited on 2026-01-07 13:26 (UTC) by nula)

Hi @Orion-zhen, b7652-1 builds successfully. I had problems with the previous 2-3 versions (b7589), though building from source worked, so I guess it doesn't matter now. Thanks anyway.

Orion-zhen commented on 2026-01-07 08:55 (UTC)

Hi, @nula. I couldn't reproduce the error. I'm building on both a Ryzen 9 7950X and a GitHub Actions runner.

nula commented on 2026-01-03 17:01 (UTC)

I'm not able to build it anymore (Ryzen 9 7900):

[ 3%] No patch step for 'vulkan-shaders-gen'
/home/fd/Downloads/llama.cpp-vulkan/src/llama.cpp/ggml/src/ggml.c: In function ‘ggml_fp32_to_bf16_row’:
/home/fd/Downloads/llama.cpp-vulkan/src/llama.cpp/ggml/src/ggml.c:482:9: error: implicit declaration of function ‘_mm512_storeu_si512’ [-Wimplicit-function-declaration]
 482 | _mm512_storeu_si512(
 | ^~~~~~~~~~~~~~~~~~~
/home/fd/Downloads/llama.cpp-vulkan/src/llama.cpp/ggml/src/ggml.c:483:14: error: ‘__m512i’ undeclared (first use in this function); did you mean ‘m512i’?
 483 | (__m512i *)(y + i),
 | ^~~~~~~
 | m512i
/home/fd/Downloads/llama.cpp-vulkan/src/llama.cpp/ggml/src/ggml.c:483:14: note: each undeclared identifier is reported only once for each function it appears in
/home/fd/Downloads/llama.cpp-vulkan/src/llama.cpp/ggml/src/ggml.c:483:23: error: expected expression before ‘)’ token
 483 | (__m512i *)(y + i),
 | ^
[ 4%] Performing configure step for 'vulkan-shaders-gen'
/home/fd/Downloads/llama.cpp-vulkan/src/llama.cpp/ggml/src/ggml.c:484:19: error: implicit declaration of function ‘_mm512_cvtne2ps_pbh’ [-Wimplicit-function-declaration]
 484 | m512i(_mm512_cvtne2ps_pbh(_mm512_loadu_ps(x + i + 16),
 | ^~~~~~~~~~~~~~~~~~~
/home/fd/Downloads/llama.cpp-vulkan/src/llama.cpp/ggml/src/ggml.c:61:28: note: in definition of macro ‘m512i’
 61 | #define m512i(p) (__m512i)(p)
 | ^
/home/fd/Downloads/llama.cpp-vulkan/src/llama.cpp/ggml/src/ggml.c:484:39: error: implicit declaration of function ‘_mm512_loadu_ps’ [-Wimplicit-function-declaration]
 484 | m512i(_mm512_cvtne2ps_pbh(_mm512_loadu_ps(x + i + 16),
 | ^~~~~~~~~~~~~~~
/home/fd/Downloads/llama.cpp-vulkan/src/llama.cpp/ggml/src/ggml.c:61:28: note: in definition of macro ‘m512i’
 61 | #define m512i(p) (__m512i)(p)
 | ^
make[2]: *** [ggml/src/CMakeFiles/ggml-base.dir/build.make:79: ggml/src/CMakeFiles/ggml-base.dir/ggml.c.o] Error 1
make[2]: *** Waiting for unfinished jobs....
-- The C compiler identification is GNU 15.2.1
[ 4%] Linking CXX executable ../../bin/llama-gemma3-cli
[ 4%] Linking CXX executable ../../bin/llama-minicpmv-cli
[ 4%] Linking CXX executable ../../bin/llama-qwen2vl-cli
[ 4%] Linking CXX executable ../../bin/llama-llava-cli
-- The CXX compiler identification is GNU 15.2.1
-- Detecting C compiler ABI info
[ 4%] Built target llama-gemma3-cli
[ 4%] Built target llama-minicpmv-cli
[ 4%] Built target llama-llava-cli
[ 4%] Built target llama-qwen2vl-cli
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
[ 4%] Built target xxhash
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
-- Found Threads: TRUE
-- Enabling coopmat glslc support
-- Enabling coopmat2 glslc support
-- Enabling dot glslc support
-- Enabling bfloat16 glslc support
-- Configuring done (0.4s)
-- Generating done (0.0s)
-- Build files have been written to: /home/fd/Downloads/llama.cpp-vulkan/src/build/ggml/src/ggml-vulkan/vulkan-shaders-gen-prefix/src/vulkan-shaders-gen-build
[ 4%] Performing build step for 'vulkan-shaders-gen'
[ 50%] Building CXX object CMakeFiles/vulkan-shaders-gen.dir/vulkan-shaders-gen.cpp.o
make[1]: *** [CMakeFiles/Makefile2:1267: ggml/src/CMakeFiles/ggml-base.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
[100%] Linking CXX executable vulkan-shaders-gen
[ 4%] Linking CXX static library libcpp-httplib.a
[ 4%] Built target cpp-httplib
[100%] Built target vulkan-shaders-gen
[ 4%] Performing install step for 'vulkan-shaders-gen'
-- Installing: /home/fd/Downloads/llama.cpp-vulkan/src/build/Release/./vulkan-shaders-gen
[ 4%] Completed 'vulkan-shaders-gen'
[ 4%] Built target vulkan-shaders-gen
make: *** [Makefile:136: all] Error 2
==> ERROR: A failure occurred in build().
 Aborting...
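
The failing lines above ('implicit declaration of function `_mm512_storeu_si512`', '`__m512i` undeclared') usually mean AVX-512 intrinsics were referenced while the compiler was not targeting AVX-512, e.g. a `-march`/CFLAGS mismatch between what the build selected and what makepkg passed. That cause is a guess, not confirmed in this thread. A quick way to compare the AVX-512 feature macros the compiler enables at its default target versus with `-march=native`:

```shell
# Diagnostic sketch (assumption: the failure stems from a -march/CFLAGS
# mismatch; not confirmed by the maintainer). Counts how many AVX512
# feature macros (e.g. __AVX512F__, needed for _mm512_* and __m512i)
# the compiler predefines at its default target vs. with -march=native.
default_macros=$(cc -dM -E -x c /dev/null | grep -c AVX512 || true)
native_macros=$(cc -march=native -dM -E -x c /dev/null | grep -c AVX512 || true)
echo "AVX-512 macro count: default=$default_macros native=$native_macros"
```

If the two counts differ, the code path being compiled and the flags being used disagree, which matches the symptoms in the log.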

Orion-zhen commented on 2025-12-17 05:20 (UTC)

Hi, @Dominiquini. I have added that line, thank you for your advice.

Dominiquini commented on 2025-12-17 03:13 (UTC)

@Orion-zhen: Add this line to the PKGBUILD to avoid replacing the config file after every update:

backup=("etc/conf.d/llama.cpp")
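
For context, a minimal PKGBUILD-style sketch of what that line does (the `backup=` entry comes from the comment above and `pkgname` from this page; everything else is omitted, so this is not the real llama.cpp-vulkan PKGBUILD):

```shell
# Minimal PKGBUILD-style sketch. backup= marks files that pacman must
# treat as user configuration: on upgrade, the incoming file is saved
# as <file>.pacnew instead of overwriting the user's edited copy.
# Paths are given relative to the filesystem root.
pkgname=llama.cpp-vulkan
backup=("etc/conf.d/llama.cpp")
```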

Orion-zhen commented on 2025-12-16 18:30 (UTC) (edited on 2025-12-16 18:30 (UTC) by Orion-zhen)

Hi, @Basxto. I have added .service and .conf files. Now you are able to start llama-server more easily :)
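
A hypothetical sketch of what such a conf.d file might contain; the variable name `LLAMA_ARGS` and its value are assumptions for illustration only, since the actual names used by the packaged .service file are not shown on this page:

```shell
# Hypothetical /etc/conf.d/llama.cpp -- LLAMA_ARGS is an assumed variable
# name, not confirmed by the package. A systemd unit would typically load
# a file like this via EnvironmentFile= and pass the value to llama-server.
LLAMA_ARGS="--model /path/to/model.gguf --port 8080"
```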


aurweb v6.3.4


Copyright © 2004-2026 aurweb Development Team.

AUR packages are user produced content. Any use of the provided files is at your own risk.