@mmmbuto/llama-cpp-termux-tensor
v0.8012.8-termux.338085c6-tensor
Prebuilt llama.cpp for Android Termux (arm64), Pixel/Tensor-optimized build with bundled ggml/llama shared libs.
# llama.cpp - Termux (Tensor / Pixel)
Upstream [llama.cpp](https://github.com/ggerganov/llama.cpp), built for Android Termux (arm64) with bundled `.so` libraries and tuned for Google Pixel / Tensor devices. The bundled shared libraries target maximum CPU performance on Pixel/Tensor.
## What This Is
- Prebuilt `llama.cpp` binaries for Termux on Android (arm64).
- Optimized build profile intended for Google Pixel / Tensor devices.
- Bundles `libllama`/`libggml*` shared libs under `lib/` and runs them via thin wrappers.
The exact upstream commit and build flags for this specific release are recorded in `docs/build_meta.txt`.
## Commands (No Clobbering Termux PKG)
This package intentionally installs namespaced commands so you can keep the Termux package `llama-cpp` installed for benchmarks:
- `llama-cli-tensor`
- `llama-bench-tensor`
- `llama-server-tensor`
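Because the names are distinct, both sets of binaries can sit on `PATH` at once. A quick way to see which variants are currently installed:

```shell
# List which pkg-provided and namespaced llama.cpp commands are on PATH.
for cmd in llama-cli llama-cli-tensor llama-bench llama-bench-tensor; do
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "$cmd: $(command -v "$cmd")"
  else
    echo "$cmd: not found"
  fi
done
```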
## Installation
```shell
pkg update && pkg upgrade -y
pkg install -y nodejs-lts openssl
npm install -g @mmmbuto/llama-cpp-termux-tensor
```

## Automatic Runtime Deps (postinstall)
On Termux, the npm postinstall step will attempt to install missing runtime packages automatically via pkg:
- `openssl` (for `libssl.so.3` / `libcrypto.so.3`)
- `libc++` (for `libc++_shared.so`)
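The gist of that postinstall check can be sketched as follows. This is an illustration only, not the actual script: it acts only under a Termux prefix and honors the documented skip variable.

```shell
# Illustrative sketch of the postinstall decision logic (not the shipped script).
should_install_deps() {
  # Honor the documented opt-out variable.
  [ -n "$LLAMA_CPP_TERMUX_SKIP_PKG_INSTALL" ] && return 1
  # Only act inside a Termux prefix.
  [ -d /data/data/com.termux/files/usr ] || return 1
  return 0
}

if should_install_deps; then
  pkg install -y openssl libc++
else
  echo "runtime deps: skipping automatic install"
fi
```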
To skip (CI/dry runs):

```shell
export LLAMA_CPP_TERMUX_SKIP_PKG_INSTALL=1
```

## Verify
```shell
llama-cli-tensor --version
llama-bench-tensor -h
```

## Benchmarks (Pixel 9 Pro)
Bench parameters: `threads=6 batch=256 ubatch=256 mmap=1 prompt=512 gen=256 reps=3`
Values are tokens/second as reported by `llama-bench` (higher is better).

| Model (Q4_K_M) | PKG pp512 | TENSOR pp512 | PKG tg256 | TENSOR tg256 |
|---|---:|---:|---:|---:|
| Llama 3.2 3B | 10.46 | 25.42 | 4.33 | 4.82 |
| Gemma3n E2B-it | 12.47 | 30.18 | 5.49 | 6.10 |
| Phi 3.5 mini | 6.19 | 17.27 | 4.31 | 4.83 |
| SmolLM2 1.7B | 16.62 | 48.79 | 6.98 | 7.88 |
| Qwen3 1.7B | 12.85 | 40.59 | 6.55 | 8.05 |
These numbers are provided as a real-world example; your results will vary with temperature/throttling, Android build, and Termux version.
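As a rough summary of the table above, the prompt-processing (pp512) speedup of the tensor build over the pkg build works out to roughly 2.4-3.2x; it can be computed directly from the table's numbers:

```shell
# pp512 speedup of the TENSOR build over the PKG build, from the table above.
awk 'BEGIN {
  split("Llama-3.2-3B Gemma3n-E2B Phi-3.5-mini SmolLM2-1.7B Qwen3-1.7B", m)
  split("10.46 12.47 6.19 16.62 12.85", pkg)   # PKG pp512
  split("25.42 30.18 17.27 48.79 40.59", ten)  # TENSOR pp512
  for (i = 1; i <= 5; i++) printf "%-13s %.2fx\n", m[i], ten[i] / pkg[i]
}'
```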
## Build (Repro)
See `docs/build_meta.txt` for the exact build metadata.
High-level build steps:
```shell
pkg install -y git cmake ninja clang make openssl
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
git checkout 338085c69e486b7155e5b03d7b5087e02c0e2528
cmake -S . -B build-android-tensor -DCMAKE_BUILD_TYPE=Release \
  -DGGML_OPENMP=ON -DLLAMA_OPENSSL=ON \
  -DLLAMA_BUILD_SERVER=ON -DLLAMA_BUILD_TOOLS=OFF -DLLAMA_BUILD_TESTS=OFF \
  -DGGML_BACKEND_DL=OFF \
  -DGGML_NATIVE=ON
cmake --build build-android-tensor -j"$(nproc)"
```

## Packaging Details (.so)
This package bundles these libraries under `lib/`:

- `libggml*.so*`
- `libllama*.so*`
- `libmtmd*.so*`

The wrappers in `bin/` set `LD_LIBRARY_PATH` so the binaries use the packaged `.so` files.
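The wrapper pattern is simple; a minimal sketch follows (the shipped wrappers may differ in detail, and `PKG_ROOT` here is a placeholder for wherever npm installed the package):

```shell
#!/bin/sh
# Sketch of a thin wrapper like bin/llama-cli-tensor (illustrative, not the
# shipped script). PKG_ROOT is a placeholder for the npm install location.
PKG_ROOT="${PKG_ROOT:-.}"
# Prepend the bundled lib/ dir so the packaged .so files win over system copies.
export LD_LIBRARY_PATH="$PKG_ROOT/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
# A real wrapper would now exec the bundled binary, e.g.:
# exec "$PKG_ROOT/lib/llama-cli" "$@"
```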
## License
Upstream llama.cpp is MIT licensed. This package redistributes compiled binaries from upstream and includes the upstream LICENSE.
