# PentAGI
Penetration testing Artificial General Intelligence

> **Join the Community!** Connect with security researchers, AI enthusiasts, and fellow ethical hackers. Get support, share insights, and stay updated with the latest PentAGI developments. [![Discord](https://img.shields.io/badge/Discord-7289DA?logo=discord&logoColor=white)](https://discord.gg/2xrMh7qX6m)⠀[![Telegram](https://img.shields.io/badge/Telegram-2CA5E0?logo=telegram&logoColor=white)](https://t.me/+Ka9i6CNwe71hMWQy)
---

> [!NOTE]
> **License Compliance Audit in Progress**
>
> To avoid licensing issues and preserve our commitment to free distribution under the MIT License, we have decided to conduct a comprehensive code audit to ensure there are no copyleft license conflicts.
>
> **We will restore access to the source code within one week, once we are confident that all licensing requirements are met and the codebase is fully compliant.**
>
> Join our community to stay updated on the latest news:
> - [Discord](https://discord.gg/2xrMh7qX6m)
> - [Telegram](https://t.me/+Ka9i6CNwe71hMWQy)

---

## Table of Contents

- [Overview](#overview)
- [Features](#features)
- [Quick Start](#quick-start)
- [Credits](#credits)
- [License](#license)

## Overview

PentAGI is an innovative tool for automated security testing that leverages cutting-edge artificial intelligence technologies. The project is designed for information security professionals, researchers, and enthusiasts who need a powerful and flexible solution for conducting penetration tests.

You can watch the **PentAGI overview** video:

[![PentAGI Overview Video](https://github.com/user-attachments/assets/0828dc3e-15f1-4a1d-858e-9696a146e478)](https://youtu.be/R70x5Ddzs1o)

## Features

- **Secure & Isolated.** All operations are performed in a sandboxed Docker environment with complete isolation.
- **Fully Autonomous.** AI-powered agent that automatically determines and executes penetration testing steps, with optional execution monitoring and intelligent task planning for enhanced reliability.
- **Professional Pentesting Tools.** Built-in suite of 20+ professional security tools including nmap, metasploit, sqlmap, and more.
- **Smart Memory System.** Long-term storage of research results and successful approaches for future use.
- **Knowledge Graph Integration.** Graphiti-powered knowledge graph using Neo4j for semantic relationship tracking and advanced context understanding.
- **Web Intelligence.** Built-in browser via [scraper](https://hub.docker.com/r/vxcontrol/scraper) for gathering the latest information from web sources.
- **External Search Systems.** Integration with advanced search APIs including [Tavily](https://tavily.com), [Traversaal](https://traversaal.ai), [Perplexity](https://www.perplexity.ai), [DuckDuckGo](https://duckduckgo.com/), [Google Custom Search](https://programmablesearchengine.google.com/), [Sploitus Search](https://sploitus.com) and [Searxng](https://searxng.org) for comprehensive information gathering.
- **Team of Specialists.** Delegation system with specialized AI agents for research, development, and infrastructure tasks, enhanced with optional execution monitoring and intelligent task planning for optimal performance with smaller models.
- **Comprehensive Monitoring.** Detailed logging and integration with Grafana/Prometheus for real-time system observation.
- **Detailed Reporting.** Generation of thorough vulnerability reports with exploitation guides.
- **Smart Container Management.** Automatic Docker image selection based on specific task requirements.
- **Modern Interface.** Clean and intuitive web UI for system management and monitoring.
- **Comprehensive APIs.** Full-featured REST and GraphQL APIs with Bearer token authentication for automation and integration.
- **Persistent Storage.** All commands and outputs are stored in PostgreSQL with the [pgvector](https://hub.docker.com/r/vxcontrol/pgvector) extension.
- **Scalable Architecture.** Microservices-based design supporting horizontal scaling.
- **Self-Hosted Solution.** Complete control over your deployment and data.
- **Flexible LLM Support.** Support for 10+ LLM providers ([OpenAI](https://platform.openai.com/), [Anthropic](https://www.anthropic.com/), [Google AI/Gemini](https://ai.google.dev/), [AWS Bedrock](https://aws.amazon.com/bedrock/), [Ollama](https://ollama.com/), [DeepSeek](https://www.deepseek.com/en/), [GLM](https://z.ai/), [Kimi](https://platform.moonshot.ai/), [Qwen](https://www.alibabacloud.com/en/), Custom) plus aggregators ([OpenRouter](https://openrouter.ai/), [DeepInfra](https://deepinfra.com/)). For production local deployments, see our [vLLM + Qwen3.5-27B-FP8 guide](examples/guides/vllm-qwen35-27b-fp8.md).
- **API Token Authentication.** Secure Bearer token system for programmatic access to REST and GraphQL APIs.
- **Quick Deployment.** Easy setup through [Docker Compose](https://docs.docker.com/compose/) with comprehensive environment configuration.

## Quick Start

### System Requirements

- Docker and Docker Compose
- Minimum 2 vCPU
- Minimum 4GB RAM
- 20GB free disk space
- Internet access for downloading images and updates

### Using Installer (Recommended)

PentAGI provides an interactive installer with a terminal-based UI for streamlined configuration and deployment. The installer guides you through system checks, LLM provider setup, search engine configuration, and security hardening.
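If you want to pre-check the system requirements yourself before running the installer, here is a minimal sketch for Linux hosts (thresholds taken from the requirements list above; the exact checks the installer performs may differ):

```bash
# Rough pre-flight check of the system requirements listed above (Linux only).
command -v docker >/dev/null || echo "Docker is not installed"
docker compose version >/dev/null 2>&1 || echo "Docker Compose plugin is missing"

# At least 2 vCPUs
cpus=$(nproc)
[ "$cpus" -ge 2 ] || echo "Warning: fewer than 2 vCPUs (found $cpus)"

# At least 4GB RAM
mem_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
[ "$mem_kb" -ge $((4 * 1024 * 1024)) ] || echo "Warning: less than 4GB RAM"

# At least 20GB free disk space in the current directory
disk_gb=$(df -BG --output=avail . | tail -1 | tr -dc '0-9')
[ "$disk_gb" -ge 20 ] || echo "Warning: less than 20GB free disk space"
```

Each failed check only prints a warning, so you can run the whole block at once and review the output.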
**Supported Platforms:**

- **Linux**: amd64 [download](https://pentagi.com/downloads/linux/amd64/installer-latest.zip) | arm64 [download](https://pentagi.com/downloads/linux/arm64/installer-latest.zip)
- **Windows**: amd64 [download](https://pentagi.com/downloads/windows/amd64/installer-latest.zip)
- **macOS**: amd64 (Intel) [download](https://pentagi.com/downloads/darwin/amd64/installer-latest.zip) | arm64 (M-series) [download](https://pentagi.com/downloads/darwin/arm64/installer-latest.zip)

**Quick Installation (Linux amd64):**

```bash
# Create installation directory
mkdir -p pentagi && cd pentagi

# Download installer
wget -O installer.zip https://pentagi.com/downloads/linux/amd64/installer-latest.zip

# Extract
unzip installer.zip

# Run interactive installer
./installer
```

**Prerequisites & Permissions:**

The installer requires appropriate privileges to interact with the Docker API. By default, it uses the Docker socket (`/var/run/docker.sock`), which requires one of the following:

- **Option 1 (Recommended for production):** Run the installer as root:

  ```bash
  sudo ./installer
  ```

- **Option 2 (Development environments):** Grant your user access to the Docker socket by adding them to the `docker` group:

  ```bash
  # Add your user to the docker group
  sudo usermod -aG docker $USER

  # Log out and log back in, or activate the group immediately
  newgrp docker

  # Verify Docker access (should run without sudo)
  docker ps
  ```

⚠️ **Security Note:** Adding a user to the `docker` group grants root-equivalent privileges. Only do this for trusted users in controlled environments. For production deployments, consider using rootless Docker mode or running the installer with sudo.

The installer will:

1. **System Checks**: Verify Docker, network connectivity, and system requirements
2. **Environment Setup**: Create and configure a `.env` file with optimal defaults
3. **Provider Configuration**: Set up LLM providers (OpenAI, Anthropic, Gemini, Bedrock, Ollama, Custom)
4. **Search Engines**: Configure DuckDuckGo, Google, Tavily, Traversaal, Perplexity, Sploitus, Searxng
5. **Security Hardening**: Generate secure credentials and configure SSL certificates
6. **Deployment**: Start PentAGI with docker-compose

**For Production & Enhanced Security:**

For production deployments or security-sensitive environments, we **strongly recommend** using a distributed two-node architecture where worker operations are isolated on a separate server. This prevents untrusted code execution and network access issues on your main system.

**See detailed guide**: [Worker Node Setup](examples/guides/worker_node.md)

The two-node setup provides:

- **Isolated Execution**: Worker containers run on dedicated hardware
- **Network Isolation**: Separate network boundaries for penetration testing
- **Security Boundaries**: Docker-in-Docker with TLS authentication
- **OOB Attack Support**: Dedicated port ranges for out-of-band techniques

### Manual Installation

1. Create a working directory or clone the repository:

   ```bash
   mkdir pentagi && cd pentagi
   ```

2. Copy `.env.example` to `.env`, or download it:

   ```bash
   curl -o .env https://raw.githubusercontent.com/vxcontrol/pentagi/master/.env.example
   ```

3. Create the example provider files (`example.custom.provider.yml`, `example.ollama.provider.yml`), or download them:

   ```bash
   curl -o example.custom.provider.yml https://raw.githubusercontent.com/vxcontrol/pentagi/master/examples/configs/custom-openai.provider.yml
   curl -o example.ollama.provider.yml https://raw.githubusercontent.com/vxcontrol/pentagi/master/examples/configs/ollama-llama318b.provider.yml
   ```

4. Fill in the required API keys in the `.env` file.
   ```bash
   # Required: At least one of these LLM providers
   OPEN_AI_KEY=your_openai_key
   ANTHROPIC_API_KEY=your_anthropic_key
   GEMINI_API_KEY=your_gemini_key

   # Optional: AWS Bedrock provider (enterprise-grade models)
   BEDROCK_REGION=us-east-1
   # Choose one authentication method:
   BEDROCK_DEFAULT_AUTH=true                        # Option 1: Use AWS SDK default credential chain (recommended for EC2/ECS)
   # BEDROCK_BEARER_TOKEN=your_bearer_token         # Option 2: Bearer token authentication
   # BEDROCK_ACCESS_KEY_ID=your_aws_access_key      # Option 3: Static credentials
   # BEDROCK_SECRET_ACCESS_KEY=your_aws_secret_key

   # Optional: Ollama provider (local or cloud)
   # OLLAMA_SERVER_URL=http://ollama-server:11434   # Local server
   # OLLAMA_SERVER_URL=https://ollama.com           # Cloud service
   # OLLAMA_SERVER_API_KEY=your_ollama_cloud_key    # Required for cloud, empty for local

   # Optional: Chinese AI providers
   # DEEPSEEK_API_KEY=your_deepseek_key             # DeepSeek (strong reasoning)
   # GLM_API_KEY=your_glm_key                       # GLM (Zhipu AI)
   # KIMI_API_KEY=your_kimi_key                     # Kimi (Moonshot AI, ultra-long context)
   # QWEN_API_KEY=your_qwen_key                     # Qwen (Alibaba Cloud, multimodal)

   # Optional: Local LLM provider (zero-cost inference)
   OLLAMA_SERVER_URL=http://localhost:11434
   OLLAMA_SERVER_MODEL=your_model_name

   # Optional: Additional search capabilities
   DUCKDUCKGO_ENABLED=true
   DUCKDUCKGO_REGION=us-en
   DUCKDUCKGO_SAFESEARCH=
   DUCKDUCKGO_TIME_RANGE=
   SPLOITUS_ENABLED=true
   GOOGLE_API_KEY=your_google_key
   GOOGLE_CX_KEY=your_google_cx
   TAVILY_API_KEY=your_tavily_key
   TRAVERSAAL_API_KEY=your_traversaal_key
   PERPLEXITY_API_KEY=your_perplexity_key
   PERPLEXITY_MODEL=sonar-pro
   PERPLEXITY_CONTEXT_SIZE=medium

   # Searxng meta search engine (aggregates results from multiple sources)
   SEARXNG_URL=http://your-searxng-instance:8080
   SEARXNG_CATEGORIES=general
   SEARXNG_LANGUAGE=
   SEARXNG_SAFESEARCH=0
   SEARXNG_TIME_RANGE=
   SEARXNG_TIMEOUT=

   # Graphiti knowledge graph settings
   GRAPHITI_ENABLED=true
   GRAPHITI_TIMEOUT=30
   GRAPHITI_URL=http://graphiti:8000
   GRAPHITI_MODEL_NAME=gpt-5-mini

   # Neo4j settings (used by Graphiti stack)
   NEO4J_USER=neo4j
   NEO4J_DATABASE=neo4j
   NEO4J_PASSWORD=devpassword
   NEO4J_URI=bolt://neo4j:7687

   # Assistant configuration
   ASSISTANT_USE_AGENTS=false   # Default value for agent usage when creating new assistants
   ```

5. Change all security-related environment variables in the `.env` file to improve security.
   The security-related environment variables are:

   **Main Security Settings**

   - `COOKIE_SIGNING_SALT` - Salt for cookie signing; change it to a random value
   - `PUBLIC_URL` - Public URL of your server (e.g. `https://pentagi.example.com`)
   - `SERVER_SSL_CRT` and `SERVER_SSL_KEY` - Custom paths to your existing SSL certificate and key for HTTPS (these paths should be mounted as volumes in the `docker-compose.yml` file)

   **Scraper Access**

   - `SCRAPER_PUBLIC_URL` - Public URL for the scraper, if you want to use a different scraper server for public URLs
   - `SCRAPER_PRIVATE_URL` - Private URL for the scraper (the local scraper server in the `docker-compose.yml` file, used to access local URLs)

   **Access Credentials**

   - `PENTAGI_POSTGRES_USER` and `PENTAGI_POSTGRES_PASSWORD` - PostgreSQL credentials
   - `NEO4J_USER` and `NEO4J_PASSWORD` - Neo4j credentials (for the Graphiti knowledge graph)
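Generating strong values for these settings can be scripted; a minimal sketch using `openssl` (the echoed keys follow the `.env` settings described above):

```bash
# Generate strong random values for the security-related settings.
# Paste the output into your .env file (keys match those described above).
echo "COOKIE_SIGNING_SALT=$(openssl rand -hex 32)"
echo "PENTAGI_POSTGRES_PASSWORD=$(openssl rand -base64 24)"
echo "NEO4J_PASSWORD=$(openssl rand -base64 24)"
```

`openssl rand -hex 32` yields 64 hex characters, which is comfortably long for a signing salt; adjust the lengths to your own policy.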
6. Remove all inline comments from the `.env` file if you want to use it in VSCode or other IDEs as an `envFile` option:

   ```bash
   perl -i -pe 's/\s+#.*$//' .env
   ```

7. Run the PentAGI stack:

   ```bash
   curl -O https://raw.githubusercontent.com/vxcontrol/pentagi/master/docker-compose.yml
   docker compose up -d
   ```

Visit [localhost:8443](https://localhost:8443) to access the PentAGI Web UI (default credentials: `admin@pentagi.com` / `admin`).

> [!NOTE]
> If you encounter an error about `pentagi-network`, `observability-network`, or `langfuse-network`, run `docker-compose.yml` first to create these networks, and only then run `docker-compose-langfuse.yml`, `docker-compose-graphiti.yml`, and `docker-compose-observability.yml` to use the Langfuse, Graphiti, and Observability services.
>
> You have to set at least one language model provider (OpenAI, Anthropic, Gemini, AWS Bedrock, or Ollama) to use PentAGI. AWS Bedrock provides enterprise-grade access to multiple foundation models from leading AI companies, while Ollama provides zero-cost local inference if you have sufficient computational resources. Additional API keys for search engines are optional but recommended for better results.
>
> **For fully local deployment with advanced models**: See our comprehensive guide on [Running PentAGI with vLLM and Qwen3.5-27B-FP8](examples/guides/vllm-qwen35-27b-fp8.md) for a production-grade local LLM setup. This configuration achieves ~13,000 TPS for prompt processing and ~650 TPS for completion on 4× RTX 5090 GPUs, supporting 12+ concurrent flows with complete independence from cloud providers.
>
> The `LLM_SERVER_*` environment variables are an experimental feature and will change in the future. Right now you can use them to specify a custom LLM server URL and a single model for all agent types.
>
> `PROXY_URL` is a global proxy URL for all LLM providers and external search systems. You can use it for isolation from external networks.
> The `docker-compose.yml` file runs the PentAGI service as the root user because it needs access to `docker.sock` for container management. If you're using a TCP/IP network connection to Docker instead of the socket file, you can remove root privileges and use the default `pentagi` user for better security.

### Accessing PentAGI from External Networks

By default, PentAGI binds to `127.0.0.1` (localhost only) for security. To access PentAGI from other machines on your network, you need to configure external access.

#### Configuration Steps

1. **Update `.env` file** with your server's IP address:

   ```bash
   # Network binding - allow external connections
   PENTAGI_LISTEN_IP=0.0.0.0
   PENTAGI_LISTEN_PORT=8443

   # Public URL - use your actual server IP or hostname
   # Replace 192.168.1.100 with your server's IP address
   PUBLIC_URL=https://192.168.1.100:8443

   # CORS origins - list all URLs that will access PentAGI
   # Include localhost for local access AND your server IP for external access
   CORS_ORIGINS=https://localhost:8443,https://192.168.1.100:8443
   ```

   > [!IMPORTANT]
   > - Replace `192.168.1.100` with your actual server's IP address
   > - Do NOT use `0.0.0.0` in `PUBLIC_URL` or `CORS_ORIGINS` - use the actual IP address
   > - Include both localhost and your server IP in `CORS_ORIGINS` for flexibility

2. **Recreate containers** to apply the changes:

   ```bash
   docker compose down
   docker compose up -d --force-recreate
   ```

3. **Verify port binding:**

   ```bash
   docker ps | grep pentagi
   ```

   You should see `0.0.0.0:8443->8443/tcp` or `:::8443->8443/tcp`. If you see `127.0.0.1:8443->8443/tcp`, the environment variable wasn't picked up. In this case, directly edit `docker-compose.yml` line 31:

   ```yaml
   ports:
     - "0.0.0.0:8443:8443"
   ```

   Then recreate the containers again.

4. **Configure firewall** to allow incoming connections on port 8443:

   ```bash
   # Ubuntu/Debian with UFW
   sudo ufw allow 8443/tcp
   sudo ufw reload

   # CentOS/RHEL with firewalld
   sudo firewall-cmd --permanent --add-port=8443/tcp
   sudo firewall-cmd --reload
   ```

5. **Access PentAGI:**

   - **Local access:** `https://localhost:8443`
   - **Network access:** `https://your-server-ip:8443`

> [!NOTE]
> You'll need to accept the self-signed SSL certificate warning in your browser when accessing via IP address.

## Credits

This project is made possible thanks to the following research and developments:

- [LLM Powered Autonomous Agents](https://lilianweng.github.io/posts/2023-06-23-agent) by Lilian Weng
- [A Survey of Autonomous LLM Agents](https://arxiv.org/abs/2403.08299)
- [Codel](https://github.com/semanser/codel) by Andriy Semenets - initial architectural inspiration for agent-based automation

## License

### PentAGI Core License

**PentAGI Core**: Licensed under the [MIT License](LICENSE)

Copyright (c) 2025 PentAGI Development Team

### VXControl Cloud SDK Integration

This repository integrates the [VXControl Cloud SDK](https://github.com/vxcontrol/cloud) under a **special licensing exception** that applies **ONLY** to the official PentAGI project.

#### Official PentAGI Project

- This official repository: `https://github.com/vxcontrol/pentagi`
- Official releases distributed by VXControl LLC-FZ
- Code used under direct authorization from VXControl LLC-FZ

#### ⚠️ Important for Forks and Third-Party Use

If you fork this project or create derivative works, the VXControl SDK components are subject to the **AGPL-3.0** license terms. You must either:

1. **Remove VXControl SDK integration**
2. **Open source your entire application** (comply with AGPL-3.0 copyleft terms)
3. **Obtain a commercial license** from VXControl LLC

#### Commercial Licensing

For commercial use of VXControl Cloud SDK in proprietary applications, contact:

- **Email**: info@vxcontrol.com
- **Subject**: "VXControl Cloud SDK Commercial License"