# phidata


A collection of AI Apps you can run with 1 command 🚀

⭐️ it for when you need to spin up an AI project quickly.


## ⭐ Features

- **Powerful:** Get a production-ready LLM App with 1 command.
- **Simple:** Built using a human-like `Conversation` interface to language models.
- **Production Ready:** Deploy your app to AWS with 1 command.

## 🚀 How it works

- Create your codebase using a template: `phi ws create`
- Run your app locally: `phi ws up dev:docker`
- Run your app on AWS: `phi ws up prd:aws`

## 💻 Example: Build a RAG LLM App

Let's build a **RAG LLM App** with GPT-4. We'll use PgVector for the knowledge base and storage, and serve the app using Streamlit and FastApi. Read the full tutorial here.

> Install Docker Desktop to run this app locally.

### Installation

Open the `Terminal` and create an `ai` directory with a python virtual environment.

```bash
mkdir ai && cd ai

python3 -m venv aienv
source aienv/bin/activate
```

Install phidata

```bash
pip install phidata
```

### Create your codebase

Create your codebase using the `llm-app` template, which comes pre-configured with FastApi, Streamlit, and PgVector. Use this codebase as a starting point for your LLM product.

```bash
phi ws create -t llm-app -n llm-app
```

This will create a folder named `llm-app`.

### Serve your LLM App using Streamlit

Streamlit lets us build micro front-ends for our LLM App and is extremely useful for building basic applications in pure Python. Start the `app` group using:

```bash
phi ws up --group app
```

**Press Enter** to confirm, then allow a few minutes for the image to download (only the first time). Verify the container status and view logs on the Docker dashboard.

### Example: Chat with PDFs

- Open localhost:8501 to view Streamlit apps that you can customize and make your own.
- Click on **Chat with PDFs** in the sidebar.
- Enter a username and wait for the knowledge base to load.
- Choose the `RAG` conversation type.
- Ask "How do I make chicken curry?"
- Upload PDFs and ask questions.

*(Screenshot: chat-with-pdf)*

### Serve your LLM App using FastApi

Streamlit is great for building micro front-ends, but any production application will be built using a front-end framework like `next.js` backed by a RestApi built with a framework like `FastApi`. Your LLM App comes ready-to-use with FastApi endpoints; start the `api` group using:

```bash
phi ws up --group api
```

**Press Enter** to confirm, then allow a few minutes for the image to download.

### View API Endpoints

- Open localhost:8000/docs to view the API endpoints.
- Load the knowledge base using `/v1/pdf/conversation/load-knowledge-base`.
- Test the `/v1/pdf/conversation/chat` endpoint with `{"message": "How do I make chicken curry?"}`.
- The LLM Api comes pre-built with endpoints that you can integrate with your front-end.

### Optional: Run Jupyterlab

A Jupyter notebook is a must-have for AI development, and your `llm-app` comes with a notebook pre-installed with the required dependencies. Enable it by updating the `workspace/settings.py` file:

```python
# workspace/settings.py
...
ws_settings = WorkspaceSettings(
    ...
    # Uncomment the following line
    dev_jupyter_enabled=True,
...
```

Start `jupyter` using:

```bash
phi ws up --group jupyter
```

**Press Enter** to confirm, then allow a few minutes for the image to download (only the first time). Verify the container status and view logs on the Docker dashboard.

### View Jupyterlab UI

- Open localhost:8888 to view the Jupyterlab UI. Password: **admin**
- Play around with cookbooks in the `notebooks` folder.

### Delete local resources

Play around, then stop the workspace using:

```bash
phi ws down
```

### Run your LLM App on AWS

Read how to run your LLM App on AWS here.

## 📚 More Information

- Read the documentation
- Chat with us on Discord
- Email us at help@phidata.com
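As a quick sanity check of the API walkthrough above, the chat endpoint can also be called from Python instead of the `/docs` UI. This is a minimal sketch, assuming the `api` group is running locally on port 8000; the endpoint path and request body come from the steps above, but the response format is not documented here, so the raw response body is returned as a string, and the helper names (`build_chat_request`, `chat`) are hypothetical.

```python
import json
import urllib.request

# Assumed base URL for the locally running FastApi `api` group.
BASE_URL = "http://localhost:8000"


def build_chat_request(message: str):
    """Return the URL and JSON payload for the pre-built chat endpoint."""
    url = f"{BASE_URL}/v1/pdf/conversation/chat"
    payload = {"message": message}
    return url, payload


def chat(message: str) -> str:
    """POST a message to the chat endpoint and return the raw response body."""
    url, payload = build_chat_request(message)
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode("utf-8")


# Usage (with the api group running):
#   chat("How do I make chicken curry?")
```

The same pattern works for `/v1/pdf/conversation/load-knowledge-base`: build the URL, POST, and read the body.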