# PI-REC

**Repository Path**: KafurTan/PI-REC

## Basic Information

- **Project Name**: PI-REC
- **Description**: :fire: PI-REC: Progressive Image Reconstruction Network With Edge and Color Domain. :fire: Image translation, conditional GAN, AI painting
- **Primary Language**: Python
- **License**: Not specified
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 0
- **Created**: 2020-03-08
- **Last Updated**: 2024-10-12

## Categories & Tags

**Categories**: Uncategorized
**Tags**: None

## README

PI-REC
------


**Progressive Image Reconstruction Network With Edge and Color Domain**
### [Paper on arXiv](https://arxiv.org/abs/1903.10146) | [Paper Read Online](https://www.arxiv-vanity.com/papers/1903.10146/) | [BibTex](#citation)

-----

When I was a schoolchild,

I dreamed about becoming a painter.

With PI-REC, we realize it nowadays.

For you, for everyone.

-----

English | 中文版


🏳️‍🌈 Demo show time 🏳️‍🌈
------

#### Draft2Painting

#### Tool operation



Introduction
-----

We propose a universal image reconstruction method that generates detailed images purely from a binary sparse edge map and a flat color domain. This repository contains the open-source code and the drawing tool.
*\*The training code is not yet complete for release; it is also awaiting a release license from our lab.*
**Find more details in our paper: [Paper on arXiv](https://arxiv.org/abs/1903.10146)**
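The two inputs named above, a binary sparse edge map and a flat color domain, can be approximated with simple image operations. Below is a minimal NumPy sketch; the function names and the operations (gradient thresholding, block averaging) are illustrative stand-ins, not the preprocessing pipeline the authors actually use, which is described in the paper:

```python
import numpy as np

def edge_map(gray, thresh=0.2):
    """Binary edge map from gradient magnitude (a crude stand-in for Canny)."""
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    return (mag > thresh * mag.max()).astype(np.uint8)

def color_domain(img, block=8):
    """Flat color domain: replace each block x block patch by its mean color
    (a crude stand-in for the median-filter/quantization step in the paper)."""
    h, w = img.shape[:2]
    out = img.astype(float).copy()
    for y in range(0, h, block):
        for x in range(0, w, block):
            patch = out[y:y+block, x:x+block]
            patch[...] = patch.mean(axis=(0, 1))
    return out.astype(img.dtype)

# toy example: a 32x32 image split into a black and a white half
img = np.zeros((32, 32, 3), np.uint8)
img[:, 16:] = 255
edges = edge_map(img.mean(axis=2))   # 1s along the vertical boundary
flat = color_domain(img)             # piecewise-constant color blocks
```

In the real pipeline these two sparse inputs are what the generator conditions on to reconstruct the full image.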

Quick Overview of the Paper
-----

### What can we do?

- Figure (a): Image reconstruction from extreme sparse inputs.
- Figure (b): Hand drawn draft translation.
- Figure (c): User-defined edge-to-image **(E2I)** translation.

### Model Architecture

We strongly recommend understanding our model architecture before running the drawing tool. Refer to the paper for more details.

## Prerequisites

- Python 3+
- PyTorch `1.0` (`0.4` is not supported)
- NVIDIA GPU + CUDA cuDNN

## Installation

- Clone this repo
- Install PyTorch and dependencies from http://pytorch.org
- Install python requirements:

```bash
pip install -r requirements.txt
```

## Usage

#### We provide two modes:

- **Basic command line mode** for batch testing
- **Drawing tool GUI mode** for creation

First, follow the steps below to prepare the pre-trained models:

1. Download the pre-trained models you want here: Google Drive | Baidu (Extraction Code: 9qn1)
2. Unzip the `.7z` archive and put it under `./models/`, so that your path is `./models/celeba/`
3. Complete the [Prerequisites](#pre) and [Installation](#ins) above

#### Files are ready now! Read the [User Manual](USAGE.md) to get started.
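The setup steps above can be sketched as a shell session. The clone URL is an assumption (this page is a mirror; substitute the address you are actually cloning from):

```bash
# clone the repository (URL assumed -- use the actual repo address)
git clone https://github.com/youyuge34/PI-REC.git
cd PI-REC

# install PyTorch 1.0 first (see http://pytorch.org for the command matching
# your CUDA version), then the remaining dependencies
pip install -r requirements.txt

# after downloading and unzipping the pre-trained models, the path should be:
#   ./models/celeba/
```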


Chinese Introduction :mahjong:
-----

Demo
-----

See the demo section above~

Introduction
-----

We propose PI-REC, a GAN-based progressive training method that reconstructs realistic images from ultra-sparse binary edges and flat color blocks. This lies at the cutting-edge intersection of *image reconstruction, image translation, conditional image generation, and automatic AI painting*, and is not simple image retrieval. See the Related Work section of the paper for more.
This repository contains the test code and the interactive drawing tool.
*\*Because the training process is quite complex, the release version of the training code is not finished yet.*
**You can find much more information in our paper (strongly recommended): [Paper on arXiv](https://arxiv.org/abs/1903.10146)**

Overview of the Paper
-----

### What can PI-REC do?

- Figure (a): Image reconstruction from ultra-sparse input.
- Figure (b): Hand-drawn draft translation.
- Figure (c): User-defined edge-to-image **(E2I)** translation.

### Model Architecture

We strongly recommend carefully reading the paper first to familiarize yourself with our model architecture; it helps greatly when running and using the tool.

## Prerequisites

- Python 3
- PyTorch `1.0` (`0.4` will raise errors)
- NVIDIA GPU + CUDA cuDNN (the current version can also run on CPU; set `DEVICE` in `config.yml`)

## Installation

- Clone this repo
- Install PyTorch and torchvision --> http://pytorch.org
- Install python requirements:

```bash
pip install -r requirements.txt
```

## Usage

#### We provide two modes:

- **Basic command line mode** for batch testing a whole folder of images
- **Drawing tool GUI mode** for creation

First, prepare the pre-trained models patiently, step by step:

1. Download the pre-trained model files you want here: Google Drive | Baidu (Extraction Code: 9qn1)
2. Unzip the archive and put it under `./models`, so that your path is `./models/celeba/`
3. Complete the Prerequisites and Installation steps above

#### All set! Read the [User Manual](USAGE.md#jump_zh) to run the program~

Acknowledgment
-----

Code structure is modified from [Anime-InPainting](https://github.com/youyuge34/Anime-InPainting), which is based on [Edge-Connect](https://github.com/knazeri/edge-connect).

BibTex
-----

```
@article{you2019pirec,
  title={PI-REC: Progressive Image Reconstruction Network With Edge and Color Domain},
  author={You, Sheng and You, Ning and Pan, Minxue},
  journal={arXiv preprint arXiv:1903.10146},
  year={2019}
}
```