# SDT

**Repository Path**: killjsj/SDT

## Basic Information

- **Project Name**: SDT
- **Description**: No description available
- **Primary Language**: Unknown
- **License**: MIT
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 0
- **Created**: 2024-03-18
- **Last Updated**: 2024-03-18

## Categories & Tags

**Categories**: Uncategorized
**Tags**: None

## README

# 🔥 Disentangling Writer and Character Styles for Handwriting Generation
## 📢 Introduction

- The proposed style-disentangled Transformer (SDT) generates online handwriting conditioned on both content and style. Existing RNN-based methods mainly capture a person's overall writing style, neglecting subtle style inconsistencies between characters written by the same person. In light of this, SDT disentangles writer-wise and character-wise style representations from individual handwriting samples to enhance imitation performance.
- We extend SDT with an offline-to-offline framework that improves the generation quality of offline Chinese handwriting.

## 📺 Handwriting generation results

- **Online Chinese handwriting generation**
- **Applications to various scripts**
- **Extension to offline Chinese handwriting generation**

## 🔨 Requirements

```
python 3.8
pytorch >=1.8
easydict 1.9
einops 0.4.1
```

## 📂 Folder Structure

```
SDT/
│
├── train.py - main script to start training
├── test.py - generate characters via trained model
├── evaluate.py - evaluation of generated samples
│
├── configs/*.yml - holds configuration for training
├── parse_config.py - class to handle config file
│
├── data_loader/ - anything about data loading goes here
│   └── loader.py
│
├── model_zoo/ - pre-trained content encoder model
│
├── data/ - default directory for storing experimental datasets
│
├── model/ - networks, models and losses
│   ├── encoder.py
│   ├── gmm.py
│   ├── loss.py
│   ├── model.py
│   └── transformer.py
│
├── saved/
│   ├── models/ - trained models are saved here
│   ├── tborad/ - tensorboard visualization
│   └── samples/ - visualization samples in the training process
│
├── trainer/ - trainers
│   └── trainer.py
│
└── utils/ - small utility functions
    ├── util.py
    └── logger.py - set log dir for tensorboard and logging output
```

## 💿 Datasets

We provide Chinese, Japanese and English datasets in [Google Drive](https://drive.google.com/drive/folders/17Ju2chVwlNvoX7HCKrhJOqySK-Y-hU8K?usp=share_link) | [Baidu Netdisk](https://pan.baidu.com/s/1RNQSRhBAEFPe2kFXsHZfLA) PW:xu9u. Please download the datasets, unzip them, and move the extracted files to /data.

## 🐳 Pre-trained model

- We provide the pre-trained content encoder model in [Google Drive](https://drive.google.com/drive/folders/1N-MGRnXEZmxAW-98Hz2f-o80oHrNaN_a?usp=share_link) | [Baidu Netdisk](https://pan.baidu.com/s/1RNQSRhBAEFPe2kFXsHZfLA) PW:xu9u. Please download it and put it into /model_zoo.
- We provide the well-trained SDT model in [Google Drive](https://drive.google.com/drive/folders/1LendizOwcNXlyY946ThS8HQ4wJX--YL7?usp=sharing) | [Baidu Netdisk](https://pan.baidu.com/s/1RNQSRhBAEFPe2kFXsHZfLA) PW:xu9u, so that users can skip training and use it right away.

## 🚀 Training & Test

**Training**

- To train SDT on the Chinese dataset, run this command:

```
python train.py --cfg configs/CHINESE_CASIA.yml --log Chinese_log
```

- To train SDT on the Japanese dataset, run this command:

```
python train.py --cfg configs/Japanese_TUATHANDS.yml --log Japanese_log
```

- To train SDT on the English dataset, run this command:

```
python train.py --cfg configs/English_CASIA.yml --log English_log
```

**Qualitative Test**

- To generate Chinese handwriting with SDT, run this command:

```
python test.py --pretrained_model checkpoint_path --store_type online --sample_size 500 --dir Generated/Chinese
```

- To generate Japanese handwriting with SDT, run this command:

```
python test.py --pretrained_model checkpoint_path --store_type online --sample_size 500 --dir Generated/Japanese
```

- To generate English handwriting with SDT, run this command:

```
python test.py --pretrained_model checkpoint_path --store_type online --sample_size 500 --dir Generated/English
```

**Quantitative Evaluation**

- To evaluate the generated handwriting, set `data_path` to the path of the generated samples (e.g., Generated/Chinese) and run this command:

```
python evaluate.py --data_path
Generated/Chinese
```

## ❤️ Citation

If you find our work inspiring or use our codebase in your research, please cite our work:

```
@inproceedings{dai2023disentangling,
  title={Disentangling Writer and Character Styles for Handwriting Generation},
  author={Dai, Gang and Zhang, Yifan and Wang, Qingfeng and Du, Qing and Yu, Zhuliang and Liu, Zhuoman and Huang, Shuangping},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={5977--5986},
  year={2023}
}
```

## ⭐ StarGraph

[](https://star-history.com/#dailenson/SDT&Timeline)
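For readers exploring `model/gmm.py`: online handwriting generators in this family typically parameterize each predicted pen offset as a bivariate Gaussian mixture and sample from it at inference time. Below is a generic, self-contained NumPy sketch of sampling one (dx, dy) offset from such a mixture. This is an illustration of the general technique, not code from SDT; the function name and all parameter values are assumptions made up for the example.

```python
import numpy as np

def sample_offset(pi, mu, sigma, rho, rng):
    """Sample a 2-D pen offset (dx, dy) from a bivariate Gaussian mixture.

    pi:    (K,) mixture weights, summing to 1
    mu:    (K, 2) component means
    sigma: (K, 2) per-axis standard deviations
    rho:   (K,) correlation coefficients in (-1, 1)
    """
    k = rng.choice(len(pi), p=pi)  # pick one mixture component
    # Build the 2x2 covariance matrix of the chosen component.
    cov = np.array([
        [sigma[k, 0] ** 2,                    rho[k] * sigma[k, 0] * sigma[k, 1]],
        [rho[k] * sigma[k, 0] * sigma[k, 1],  sigma[k, 1] ** 2],
    ])
    return rng.multivariate_normal(mu[k], cov)

# Illustrative parameters for a two-component mixture (not from SDT):
rng = np.random.default_rng(0)
pi = np.array([0.7, 0.3])
mu = np.array([[0.0, 0.0], [1.0, 1.0]])
sigma = np.array([[0.1, 0.1], [0.2, 0.2]])
rho = np.array([0.0, 0.5])
offset = sample_offset(pi, mu, sigma, rho, rng)
```

In a trained model, the network head would emit `pi`, `mu`, `sigma`, and `rho` at every timestep, and repeatedly sampling offsets like this traces out the generated stroke trajectory.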