# Efficientdet-Pytorch2tTensorRT

**Repository Path**: sportversion/efficientdet-pytorch2t-tensor-rt

## Basic Information

- **Project Name**: Efficientdet-Pytorch2tTensorRT
- **Description**: This project converts EfficientDet from PyTorch to TensorRT
- **Primary Language**: Python
- **License**: Not specified
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 3
- **Forks**: 1
- **Created**: 2021-04-24
- **Last Updated**: 2025-04-30

## Categories & Tags

**Categories**: Uncategorized
**Tags**: None

## README

## Background Introduction

The purpose of this project is to provide a faster target tracking system. We chose EfficientDet and DeepSort as the project algorithms, and we use the TensorRT API to convert the PyTorch models of EfficientDet and DeepSort into TensorRT engines for acceleration. The PyTorch source code for this project comes from [Yet-Another-EfficientDet-Pytorch](https://github.com/zylo117/Yet-Another-EfficientDet-Pytorch) and [Yolov5_DeepSort_Pytorch](https://github.com/mikel-brostrom/Yolov5_DeepSort_Pytorch).

**Our motivation for completing this project was the TensorRT Hackathon competition held by NVIDIA and Alibaba. Many thanks to NVIDIA and Alibaba for the training and hardware environment they provided.**

## Now Our Work

- [x] EfficientDet-D3: convert the model with the TensorRT API
- [x] EfficientDet-D3: int8 quantization
- [x] DeepSort: convert the model via ONNX
- [ ] DeepSort: int8 quantization
- [ ] C++ demo

Later we will complete the conversion of all EfficientDet-D0~D7 models.

## Test Results
Figure Notes (click to expand)

* GPU Speed measures model processing time per image, averaged over 1000 images on a 1080 Ti GPU; it does not include image preprocessing or postprocessing
* TensorRT version 7.2.3.4
* **Reproduce** by `python effientdet_trt_test.py --img_path test/images/img.png --engine_file_path tensorrt_engine/efficientdet.engine --batch_size=1`
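The measurement methodology above (per-image latency averaged over many runs, excluding pre/postprocessing) can be sketched as follows. This is a minimal illustration, not the project's benchmark script; `infer` is a placeholder standing in for the real engine call, and the warm-up count is an assumption:

```python
import time

def infer(image):
    # Placeholder for the actual model/engine call (assumption, not a real API).
    # On a GPU you must also synchronize the device (e.g. a CUDA stream sync)
    # before reading the clock, or the measured time is meaningless.
    time.sleep(0.001)

def mean_latency_ms(images, warmup=10):
    """Average per-image latency in ms, excluding pre/postprocessing."""
    for img in images[:warmup]:          # warm-up runs are not timed
        infer(img)
    start = time.perf_counter()
    for img in images:
        infer(img)
    return (time.perf_counter() - start) * 1000 / len(images)
```

Warm-up iterations matter because the first few inferences pay one-time costs (kernel autotuning, memory allocation) that would skew the average.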
## EfficientDet-D3 Performance

Model |Batchsize |Latency<br>(ms) |Throughput<br>(1000/latency*batchsize) |Latency Speedup<br>(original latency / TRT latency) |Throughput Speedup<br>(TRT throughput / original throughput)
--- |--- |--- |--- |--- |---
PyTorch |1 |- |- |- |-
PyTorch |4 |- |- |- |-
PyTorch |8 |- |- |- |-
PyTorch |16 |- |- |- |-
| | | | | |
TensorRT |1 |- |- |- |-
TensorRT |4 |- |- |- |-
TensorRT |8 |- |- |- |-
TensorRT |16 |- |- |- |-

Model |Latency<br>(fp32, ms) |Latency<br>(fp16, ms) |Latency<br>(int8, ms)
--- |--- |--- |---
PyTorch |60 |52 |-
TensorRT |36 |32 |24

## Environments

This project may be run in any of the following up-to-date verified environments (with all dependencies including [CUDA](https://developer.nvidia.com/cuda)/[CUDNN](https://developer.nvidia.com/cudnn), [Python](https://www.python.org/) and [PyTorch](https://pytorch.org/) preinstalled):

- **Operating System**: our test code runs on Ubuntu 20.04.1 LTS; we think it can also run normally on Ubuntu 18.04
- **CUDA**: our NVIDIA driver version is 455.23.05 and the CUDA version is 455.23.05

## Requirements

* Python 3.8 or later with all [requirements.txt](https://github.com/ultralytics/yolov5/blob/master/requirements.txt) dependencies installed, including `torch>=1.7`.

We recommend using conda to create a virtual environment:

```bash
$ conda create -n pytorch2trt python=3.8
$ conda activate pytorch2trt
```

Update your pip and setuptools:

```bash
$ pip install --upgrade setuptools pip
```

Then install the requirements:

```bash
$ pip install -r requirements.txt
```

## How to run?

First, download the Yet-Another-EfficientDet-Pytorch project to your own directory:

```bash
$ git clone https://github.com/zylo117/Yet-Another-EfficientDet-Pytorch.git && cd Yet-Another-EfficientDet-Pytorch
```

Then download our project:

```bash
$ git clone https://gitee.com/sportversion/efficientdet-pytorch2t-tensor-rt.git && cd efficientdet-pytorch2t-tensor-rt
```
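The throughput and speedup columns in the performance tables above follow simple formulas (throughput = 1000/latency × batchsize; a speedup is the baseline value divided by the optimized one). A minimal sketch, using the fp32 numbers reported in the table:

```python
def throughput(latency_ms, batch_size):
    """Images per second: batch_size images processed every latency_ms milliseconds."""
    return 1000.0 / latency_ms * batch_size

def speedup(baseline, optimized):
    """Ratio > 1 means the optimized variant is better (lower latency, higher throughput)."""
    return baseline / optimized

# fp32 latencies from the table: PyTorch 60 ms vs TensorRT 36 ms
print(speedup(60, 36))       # latency speedup, about 1.67x
print(throughput(36, 1))     # about 27.8 images/s at batch size 1
```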
## Conversion EfficientDet model

`conver2trt.py` converts the PyTorch model of EfficientDet to a TensorRT model and produces the engine file. Download the EfficientDet-D3 weights from [EfficientDet-D3](https://github.com/zylo117/Yet-Another-Efficient-Pytorch/releases/download/1.0/efficientdet-d3.pth) and put them in `weights`:

```bash
# --weight_path: pytorch models path; --engine_file_path: engine save path
$ python conver2trt.py --weight_path weights --engine_file_path tensorrt_engine --precision fp32 --batch_size 1
```

For int8 quantization, you must download the calibration data from [BaiduYun, l5dp](https://pan.baidu.com/s/1yZYYXgKd0r5Au6wMO0zJyg) and put it in the `data/tensorrtx-int8calib-data/coco_calib` folder, then:

```bash
$ python conver2trt.py --weight_path weights --engine_file_path tensorrt_engine --precision int8 --batch_size 1
```

## EfficientDet Test

To run the EfficientDet test on the example images in `test/images`:

```bash
$ python effientdet_trt_test.py --img_path test/images/img.png --engine_file_path tensorrt_engine/efficientdet.engine --batch_size 1
```

## Conversion DeepSort model

[REQUIRE] [TensorRT 7](https://developer.nvidia.com/nvidia-tensorrt-download)
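Int8 quantization needs calibration images because the converter must pick a scale that maps the fp32 activation range onto the 127 int8 levels. The core idea can be sketched in plain Python with a simple max calibration (this is only a conceptual illustration; TensorRT's actual entropy calibrator chooses the range differently):

```python
def int8_scale(calib_values):
    """Scale from max calibration: fp32 value ~= int8 value * scale."""
    amax = max(abs(v) for v in calib_values)
    return amax / 127.0

def quantize(x, scale):
    """Round to the nearest int8 step and clamp to [-127, 127]."""
    q = round(x / scale)
    return max(-127, min(127, q))

calib = [0.1, -2.54, 1.3, 0.8]   # stand-in for activations from calibration images
s = int8_scale(calib)            # 2.54 / 127 = 0.02
print(quantize(1.3, s))          # 1.3 / 0.02 = 65
```

This is why the calibration set matters: a value larger than anything seen during calibration saturates at ±127 instead of being represented exactly.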

Download this PyTorch project and install it:

```
$ git clone git@github.com:ZQPei/deep_sort_pytorch.git
```

After configuring the deep_sort_pytorch project, copy `DeepSortExportONNX.py` into the project directory and run it; `deepsort.onnx` will be generated:

```
$ python DeepSortExportONNX.py
```

Alternatively, **click [BaiduYun, osig](https://pan.baidu.com/s/15No4HL6tujQZNMoqRibytA) to download the model converted by us.**

With deepsort.onnx as input, run the following command:

```
$ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:~/TensorRT-7.2.3.4/lib
$ trtexec --explicitBatch --onnx=deepsort.onnx --saveEngine=deepsort.trt
```

## DeepSort Test

Copy `deepsort.trt` to the `tensorrt_engine` directory, then use the following command to run the demo:

```
$ python deep_trt_test.py --batch_size 1
```

## Contact

**Issues should be raised directly in the repository.** For business inquiries or professional support requests, please email ZhangQi at xiaoer_qi@live.com.