# kati-llama
**Repository Path**: dam520/kati-llama
## Basic Information
- **Project Name**: kati-llama
- **Description**: No description available
- **Primary Language**: Unknown
- **License**: LGPL-2.1
- **Default Branch**: main
- **Homepage**: None
- **GVP Project**: No
## Statistics
- **Stars**: 0
- **Forks**: 0
- **Created**: 2024-04-17
- **Last Updated**: 2024-04-17
## Categories & Tags
**Categories**: Uncategorized
**Tags**: None
## README
# KATI-LLAMA (local large language model chat)
[License](https://github.com/hswlab/kati-llama/blob/main/LICENSE)

[Latest release](https://github.com/hswlab/kati-llama/releases/latest)
KATI-LLAMA is an interface for chatting with large language models on a private PC. A language model can
be downloaded automatically in the settings and then used offline.
The KATI application lets the user communicate with an AI in a human-like manner: the AI's responses
can be read aloud with a natural voice, and the AI's avatar image changes appearance depending on the chatbot's
mood. Below is a summary of the features of KATI-LLAMA.
*Key features of KATI:*
- [X] Talk to AI without an internet connection
- [X] Optional voice output with a voice pre-installed in the operating system or a natural-sounding TikTok voice. (The TikTok voice requires an internet connection)
- [X] Voice input (System Speech or Whisper)
- [X] Dynamic avatar images to represent AI emotions.
- [X] Chat history with filter function and read-aloud function.
- [X] Rating function for AI responses as an aid to the filter function
- [X] Reduced wait times by streaming responses directly. (If the read-aloud function is active, output begins only once a sentence is complete.)
- [X] Text and code are formatted for better readability.
- [X] Multilingual user interface (DE, EN, FR, ES, PT, JA, KO)
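
The sentence-buffered streaming mentioned above (tokens stream in, but the read-aloud function only fires on complete sentences) can be sketched as follows. KATI-LLAMA itself is a .NET application, so this Python snippet is only an illustration of the idea; the function name and the simple punctuation-based sentence detection are assumptions, not project code:

```python
import re

def stream_sentences(token_stream):
    """Accumulate streamed tokens and yield only complete sentences.

    A sentence is treated as complete when it ends with '.', '!' or '?'
    followed by whitespace; whatever remains is flushed at end of stream.
    """
    buffer = ""
    for token in token_stream:
        buffer += token
        # Split off every complete sentence currently in the buffer.
        while True:
            match = re.search(r"[.!?]\s+", buffer)
            if not match:
                break
            yield buffer[:match.start() + 1]
            buffer = buffer[match.end():]
    leftover = buffer.strip()
    if leftover:
        yield leftover

# Tokens arrive in arbitrary chunks, sentences come out whole:
tokens = ["Hel", "lo wor", "ld. How", " are you? I", " am fine."]
print(list(stream_sentences(tokens)))
# → ['Hello world.', 'How are you?', 'I am fine.']
```

With audio output disabled, the UI can instead print each token as it arrives, which is why disabling read-aloud reduces perceived latency.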



# Demonstration Video
- [Demonstration of how to install and use KATI-LLAMA (German)](https://youtu.be/rMKaL4mhw5A)
- [Demonstration of how to use STT in KATI-LLAMA (German)](https://youtu.be/N8AAO0Dv5gc)
# NuGet packages and associated licenses used in KATI
- LLamaSharp `MIT License`
- ElectronNET.API `MIT License`
- Esprima `BSD 3-Clause License`
- LiteDB `MIT License`
- Microsoft.AspNetCore.SignalR.Client `MIT License`
- NAudio `MIT License`
- Newtonsoft.Json `MIT License`
- System.Data.SQLite `Public Domain`
- System.Linq.Async `MIT License`
- System.Speech `MIT License`
- SoundTouch `LGPL-2.1 License`
- WhisperNet `MPL-2.0 License`
# Next milestone (research)
- [ ] Add more Language Models in the settings for download.
# Known bugs that will be fixed soon
- [ ] Can't find any bugs yet :)
# Performance issues
- Depending on the configured model, more or less RAM and processor power is required, which can affect the speed of the AI's responses. Try a smaller model and see if the AI responds faster; keep in mind that the smaller the model, the lower the quality of the responses.
- Slow output may also be due to the configured processor setting. AVX is very slow but is supported by most older processors; with AVX2 the latency is significantly lower, but not all processors support it. Try chatting with AVX2 to see if it works for you.
- If the read-aloud function is enabled, the program waits until a complete sentence is available before producing output. To minimize response time, you can disable the audio output; the response text will then be streamed without interruption.
- The AI sometimes takes longer to answer if it finds little information about a question. In this case, you can cancel the chat session and rephrase the question.
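
To decide whether to try the AVX2 setting, you can check which instruction sets your CPU actually reports. KATI-LLAMA targets Windows/.NET, so the Python sketch below is only a hypothetical illustration for Linux-style `/proc/cpuinfo` output; the function names are assumptions, not part of the project:

```python
def cpu_flags(cpuinfo_text):
    """Parse the 'flags' line of /proc/cpuinfo text into a set of feature names."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            return set(line.split(":", 1)[1].split())
    return set()

def best_avx_setting(flags):
    """Pick the fastest instruction-set setting the CPU supports."""
    if "avx2" in flags:
        return "AVX2"
    if "avx" in flags:
        return "AVX"
    return "none"

# Example with a fabricated cpuinfo snippet:
sample = "processor\t: 0\nflags\t\t: fpu sse sse2 avx avx2\n"
print(best_avx_setting(cpu_flags(sample)))
# → AVX2
```

On a real Linux machine you would pass `open("/proc/cpuinfo").read()` instead of the sample string; on Windows, a tool such as CPU-Z shows the same flags.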