# [ICCV2021] TransReID: Transformer-based Object Re-Identification [[pdf]](https://openaccess.thecvf.com/content/ICCV2021/papers/He_TransReID_Transformer-Based_Object_Re-Identification_ICCV_2021_paper.pdf)

The *official* repository for [TransReID: Transformer-based Object Re-Identification](https://arxiv.org/abs/2102.04378), which achieves state-of-the-art performance on object re-ID, including person re-ID and vehicle re-ID.

## News

- 🌟 2023.11 [VGSG](https://arxiv.org/abs/2311.07514) for Text-based Person Search is accepted to TIP.
- 🌟 2023.9 [RGANet](https://arxiv.org/abs/2309.03558) for Occluded Person Re-identification is accepted to TIFS.
- 2023.3 [SOLIDER](https://github.com/tinyvision/SOLIDER), a general human representation pre-training model, is released.
- 2021.12 We improve TransReID via self-supervised pre-training; please refer to [TransReID-SSL](https://github.com/michuanhaohao/TransReID-SSL).
- 2021.3 We release the code of TransReID.

## Pipeline

*(figure: overall TransReID pipeline)*

## Ablation Study of Transformer-based Strong Baseline

*(figure: ablation results)*

## Requirements

### Installation

```bash
pip install -r requirements.txt
```

We use torch 1.6.0 / torchvision 0.7.0 / timm 0.3.2 / CUDA 10.1 with 16G or 32G V100 GPUs for training and evaluation. Note that we use `torch.cuda.amp` to accelerate training, which requires PyTorch >= 1.6 (a minimal AMP sketch appears at the end of the Training section below).

### Prepare Datasets

```bash
mkdir data
```

Download the person datasets [Market-1501](https://drive.google.com/file/d/0B8-rUzbwVRk0c054eEozWG9COHM/view), [MSMT17](https://arxiv.org/abs/1711.08565), [DukeMTMC-reID](https://arxiv.org/abs/1609.01775), [Occluded-Duke](https://github.com/lightas/Occluded-DukeMTMC-Dataset), and the vehicle datasets [VehicleID](https://www.pkuml.org/resources/pku-vehicleid.html) and [VeRi-776](https://github.com/JDAI-CV/VeRidataset). Then unzip and rename them under the `data` directory like:

```
data
├── market1501
│   └── images ..
├── MSMT17
│   └── images ..
├── dukemtmcreid
│   └── images ..
├── Occluded_Duke
│   └── images ..
├── VehicleID_V1.0
│   └── images ..
└── VeRi
    └── images ..
```

### Prepare DeiT or ViT Pre-trained Models

You need to download the ImageNet-pretrained transformer models: [ViT-Base](https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-vitjx/jx_vit_base_p16_224-80ecf9dd.pth), [ViT-Small](https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/vit_small_p16_224-15ec54c9.pth), [DeiT-Small](https://dl.fbaipublicfiles.com/deit/deit_small_distilled_patch16_224-649709d9.pth), [DeiT-Base](https://dl.fbaipublicfiles.com/deit/deit_base_distilled_patch16_224-df68dfff.pth).

## Training

We utilize 1 GPU for training.

```bash
python train.py --config_file configs/transformer_base.yml MODEL.DEVICE_ID "('your device id')" MODEL.STRIDE_SIZE ${1} MODEL.SIE_CAMERA ${2} MODEL.SIE_VIEW ${3} MODEL.JPM ${4} MODEL.TRANSFORMER_TYPE ${5} OUTPUT_DIR ${OUTPUT_DIR} DATASETS.NAMES "('your dataset name')"
```

#### Arguments

- `${1}`: stride size for the pure transformer, e.g. [16, 16], [14, 14], [12, 12].
- `${2}`: whether to use SIE with camera, True or False.
- `${3}`: whether to use SIE with view, True or False.
- `${4}`: whether to use JPM, True or False.
- `${5}`: transformer type, one of `'vit_base_patch16_224_TransReID'`, `'vit_small_patch16_224_TransReID'`, `'deit_small_patch16_224_TransReID'`. (The DeiT structure is identical to ViT; only the ImageNet-pretrained weights need to change.)
- `${OUTPUT_DIR}`: folder for saving logs and checkpoints, e.g. `../logs/market1501`.
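As noted under Installation, `train.py` uses `torch.cuda.amp` for mixed-precision training. For readers unfamiliar with AMP, here is a minimal sketch of such a training step; every object below (the linear "model", the optimizer, the fake batch) is an illustrative stand-in, not the repository's actual code:

```python
import torch

model = torch.nn.Linear(768, 751).cuda()      # stand-in for the TransReID model
optimizer = torch.optim.SGD(model.parameters(), lr=0.008)
scaler = torch.cuda.amp.GradScaler()          # scales the loss to avoid fp16 underflow

# one fake batch stands in for the re-ID dataloader
images = torch.randn(8, 768).cuda()
labels = torch.randint(0, 751, (8,)).cuda()

optimizer.zero_grad()
with torch.cuda.amp.autocast():               # forward pass runs in mixed precision
    logits = model(images)
    loss = torch.nn.functional.cross_entropy(logits, labels)
scaler.scale(loss).backward()                 # backward on the scaled loss
scaler.step(optimizer)                        # unscale gradients, then optimizer step
scaler.update()                               # adapt the loss scale for the next step
```

The three `scaler` calls replace the usual `loss.backward(); optimizer.step()` pair in any PyTorch >= 1.6 training loop.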
**Or you can directly train with the following ymls and commands:**

```bash
# DukeMTMC transformer-based baseline
python train.py --config_file configs/DukeMTMC/vit_base.yml MODEL.DEVICE_ID "('0')"
# DukeMTMC baseline + JPM
python train.py --config_file configs/DukeMTMC/vit_jpm.yml MODEL.DEVICE_ID "('0')"
# DukeMTMC baseline + SIE
python train.py --config_file configs/DukeMTMC/vit_sie.yml MODEL.DEVICE_ID "('0')"
# DukeMTMC TransReID (baseline + SIE + JPM)
python train.py --config_file configs/DukeMTMC/vit_transreid.yml MODEL.DEVICE_ID "('0')"
# DukeMTMC TransReID with stride size [12, 12]
python train.py --config_file configs/DukeMTMC/vit_transreid_stride.yml MODEL.DEVICE_ID "('0')"
# MSMT17
python train.py --config_file configs/MSMT17/vit_transreid_stride.yml MODEL.DEVICE_ID "('0')"
# OCC_Duke
python train.py --config_file configs/OCC_Duke/vit_transreid_stride.yml MODEL.DEVICE_ID "('0')"
# Market
python train.py --config_file configs/Market/vit_transreid_stride.yml MODEL.DEVICE_ID "('0')"
# VeRi
python train.py --config_file configs/VeRi/vit_transreid_stride.yml MODEL.DEVICE_ID "('0')"
# VehicleID (the dataset is large, so we utilize 4 V100 GPUs for training)
CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node=4 --master_port 66666 train.py --config_file configs/VehicleID/vit_transreid_stride.yml MODEL.DIST_TRAIN True
# or use the following script:
bash dist_train.sh
```

Tips: For person datasets with size 256x128, TransReID with stride occupies 12GB of GPU memory and TransReID occupies 7GB.

## Evaluation

```bash
python test.py --config_file 'choose which config to test' MODEL.DEVICE_ID "('your device id')" TEST.WEIGHT "('your path of trained checkpoints')"
```

**Some examples:**

```bash
# DukeMTMC
python test.py --config_file configs/DukeMTMC/vit_transreid_stride.yml MODEL.DEVICE_ID "('0')" TEST.WEIGHT '../logs/duke_vit_transreid_stride/transformer_120.pth'
# MSMT17
python test.py --config_file configs/MSMT17/vit_transreid_stride.yml MODEL.DEVICE_ID "('0')" TEST.WEIGHT '../logs/msmt17_vit_transreid_stride/transformer_120.pth'
# OCC_Duke
python test.py --config_file configs/OCC_Duke/vit_transreid_stride.yml MODEL.DEVICE_ID "('0')" TEST.WEIGHT '../logs/occ_duke_vit_transreid_stride/transformer_120.pth'
# Market
python test.py --config_file configs/Market/vit_transreid_stride.yml MODEL.DEVICE_ID "('0')" TEST.WEIGHT '../logs/market_vit_transreid_stride/transformer_120.pth'
# VeRi
python test.py --config_file configs/VeRi/vit_transreid_stride.yml MODEL.DEVICE_ID "('0')" TEST.WEIGHT '../logs/veri_vit_transreid_stride/transformer_120.pth'
# VehicleID (we test 10 times and report the average score to avoid randomness)
python test.py --config_file configs/VehicleID/vit_transreid_stride.yml MODEL.DEVICE_ID "('0')" TEST.WEIGHT '../logs/vehicleID_vit_transreid_stride/transformer_120.pth'
```
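For VehicleID, the gallery is drawn at random (one image per identity), so a single evaluation run is noisy; that is why `test.py` is run 10 times and the scores are averaged. A self-contained sketch of this averaging protocol, with random features standing in for the embeddings a trained model would produce:

```python
import numpy as np

rng = np.random.default_rng(0)
num_ids, imgs_per_id, dim = 50, 4, 128        # toy sizes, not the real dataset
ids = np.repeat(np.arange(num_ids), imgs_per_id)
feats = rng.standard_normal((num_ids * imgs_per_id, dim))  # stand-in embeddings

r1_trials = []
for _ in range(10):                            # 10 random gallery splits
    # sample one gallery image per identity; the rest become queries
    gallery = np.array([rng.choice(np.flatnonzero(ids == i)) for i in range(num_ids)])
    query = np.setdiff1d(np.arange(len(ids)), gallery)
    dist = np.linalg.norm(feats[query][:, None] - feats[gallery][None], axis=-1)
    top1 = gallery[np.argmin(dist, axis=1)]    # nearest gallery image per query
    r1_trials.append(np.mean(ids[top1] == ids[query]))

print(f"Rank-1 averaged over 10 trials: {np.mean(r1_trials):.3f}")
```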
## Trained Models and logs (Size 256)

| Model | MSMT17 (mAP / R1) | Market (mAP / R1) | Duke (mAP / R1) | OCC_Duke (mAP / R1) | VeRi (mAP / R1) | VehicleID (R1 / R5) |
|---|---|---|---|---|---|---|
| Baseline(ViT) | 61.8 / 81.8 | 87.1 / 94.6 | 79.6 / 89.0 | 53.8 / 61.1 | 79.0 / 96.6 | 83.5 / 96.7 |
| | model / log | model / log | model / log | model / log | model / log | model / test |
| TransReID*(ViT) | 67.8 / 85.3 | 89.0 / 95.1 | 82.2 / 90.7 | 59.5 / 67.4 | 82.1 / 97.4 | 85.2 / 97.4 |
| | model / log | model / log | model / log | model / log | model / log | model / test |
| TransReID*(DeiT) | 66.3 / 84.0 | 88.5 / 95.1 | 81.9 / 90.7 | 57.7 / 65.2 | 82.4 / 97.1 | 86.0 / 97.6 |
| | model / log | model / log | model / log | model / log | model / log | model / test |
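To reuse one of the checkpoints above outside `test.py`, something like the following sketch applies. Two assumptions here should be checked against the code: the `.pth` files are assumed to hold a plain `state_dict` saved with `torch.save`, and `build_model()` is a hypothetical stand-in for the repository's actual model factory:

```python
import torch

def build_model():
    # hypothetical stand-in; the real model is TransReID built from a yml config
    return torch.nn.Linear(768, 702)

model = build_model()
state = torch.load('../logs/duke_vit_transreid_stride/transformer_120.pth',
                   map_location='cpu')         # load onto CPU first
model.load_state_dict(state, strict=False)     # strict=False tolerates key mismatches
model.eval()                                   # inference mode for feature extraction
```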