# Triton Model Navigator

The NVIDIA [Triton Inference Server](https://github.com/triton-inference-server) provides a robust and configurable solution for deploying and managing AI models. The [Triton Model Navigator](https://github.com/triton-inference-server/model_navigator) is a tool that automates the process of deploying models on the Triton Inference Server. It optimizes models by converting them to the available formats and applying additional Triton backend optimizations. Finally, it uses the [Triton Model Analyzer](https://github.com/triton-inference-server/model_analyzer) to find the best Triton model configuration that matches the provided constraints and optimizes performance.

## Documentation

* [Overview](docs/overview.md)
* [Support Matrix](docs/support_matrix.md)
* [Quick Start](docs/quick_start.md)
* [Installation](docs/installation.md)
* [Running the Triton Model Navigator](docs/run.md)
* [Model Conversions](docs/conversion.md)
* [Triton Model Configurator](docs/triton_model_configurator.md)
* [Models Profiling](docs/profiling.md)
* [Models Analysis](docs/analysis.md)
* [Helm Charts](docs/helm_charts.md)
* [Changelog](CHANGELOG.md)
* [Known Issues](docs/known_issues.md)
* [Contributing](CONTRIBUTING.md)