# GPULlama3.java powered by TornadoVM
Llama3 models written in native Java automatically accelerated on GPUs with TornadoVM.
Runs Llama3 inference efficiently using TornadoVM's GPU acceleration.
Currently supports Llama3, Mistral, Qwen2.5, Qwen3, Phi-3, IBM Granite 3.2+, and IBM Granite 4.0 models in the GGUF format. It is also used as a GPU inference engine in Quarkus and LangChain4J. Builds on Llama3.java by Alfonso² Peterssen. A previous integration of TornadoVM and Llama2 can be found in llama2.tornadovm.
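All supported models are loaded from GGUF files, which start with a small fixed header defined by the GGUF specification: the magic bytes `GGUF`, a `u32` format version, a `u64` tensor count, and a `u64` metadata key/value count, all little-endian. The sketch below is not the project's actual loader, just a minimal illustration of reading that header:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Minimal sketch (not GPULlama3.java's real loader): parse the fixed
// GGUF header fields per the GGUF spec, all little-endian.
public class GgufHeader {
    public static long[] parse(byte[] bytes) {
        ByteBuffer buf = ByteBuffer.wrap(bytes).order(ByteOrder.LITTLE_ENDIAN);
        int magic = buf.getInt();
        if (magic != 0x46554747) { // the bytes 'G','G','U','F' read as a LE u32
            throw new IllegalArgumentException("not a GGUF file");
        }
        long version = Integer.toUnsignedLong(buf.getInt());
        long tensorCount = buf.getLong();
        long metadataKvCount = buf.getLong();
        return new long[] { version, tensorCount, metadataKvCount };
    }

    public static void main(String[] args) {
        // Hand-built example header: version 3, 2 tensors, 5 metadata pairs.
        ByteBuffer buf = ByteBuffer.allocate(24).order(ByteOrder.LITTLE_ENDIAN);
        buf.putInt(0x46554747).putInt(3).putLong(2).putLong(5);
        long[] h = parse(buf.array());
        System.out.println(h[0] + " " + h[1] + " " + h[2]); // prints "3 2 5"
    }
}
```

After these 24 bytes, a real GGUF file continues with the metadata key/value pairs and tensor descriptors, which is where the model architecture and quantization details live.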