# Transformer_f-CLSWGAN
**Repository Path**: buleli/Transformer_f-CLSWGAN
## Basic Information
- **Project Name**: Transformer_f-CLSWGAN
- **Description**: Advanced Feature Generating Networks for Zero-Shot Learning with Axial Attention transformer
- **Primary Language**: Unknown
- **License**: Not specified
- **Default Branch**: main
- **Homepage**: None
- **GVP Project**: No
## Statistics
- **Stars**: 0
- **Forks**: 0
- **Created**: 2021-10-17
- **Last Updated**: 2021-10-17
## Categories & Tags
**Categories**: Uncategorized
**Tags**: None
## README
# f-CLSWGAN
# Introduction
This work improves the performance of the model proposed in "Feature Generating Networks for Zero-Shot Learning" (CVPR 2018) by Yongqin Xian, Tobias Lorenz, Bernt Schiele, and Zeynep Akata. To strengthen both the generator and the discriminator, I use an axial attention transformer: a simple but powerful technique that attends to multi-dimensional data efficiently by applying attention along one axis at a time instead of over all positions jointly.
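The core idea can be sketched in a few lines of PyTorch. This is a minimal illustration of axial attention, not the code used in this repository: the module names (`AxialAttention1D`, `AxialAttention2D`) are my own, and it assumes PyTorch >= 1.9 for `batch_first` in `nn.MultiheadAttention`.

```python
import torch
import torch.nn as nn


class AxialAttention1D(nn.Module):
    """Multi-head self-attention along a single axis (illustrative sketch)."""

    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):
        # x: (batch, length, dim) -- one row or one column at a time
        out, _ = self.attn(x, x, x)
        return out


class AxialAttention2D(nn.Module):
    """Attend along height, then along width, instead of full H*W attention."""

    def __init__(self, dim, heads=4):
        super().__init__()
        self.row_attn = AxialAttention1D(dim, heads)
        self.col_attn = AxialAttention1D(dim, heads)

    def forward(self, x):
        # x: (batch, height, width, dim)
        b, h, w, d = x.shape
        # Attend along the width axis: fold height into the batch dimension.
        x = self.row_attn(x.reshape(b * h, w, d)).reshape(b, h, w, d)
        # Attend along the height axis: fold width into the batch dimension.
        x = x.permute(0, 2, 1, 3).reshape(b * w, h, d)
        x = self.col_attn(x).reshape(b, w, h, d).permute(0, 2, 1, 3)
        return x
```

Each 1-D attention costs O(H·W·max(H, W)) rather than the O((H·W)²) of full 2-D self-attention, which is why axial attention scales well to multi-dimensional inputs.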
# Environment
* Python 3.7
* PyTorch 1.2
* SciPy
## Dataset
The datasets can be downloaded from here. Each dataset provides 2048-dimensional image features extracted with ResNet-101.
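As a rough sketch of how such features might be loaded, assuming the download is a MATLAB `.mat` file with `features` and `labels` arrays (the file name and keys here are assumptions; check the actual download):

```python
import numpy as np
from scipy import io


def load_features(path):
    """Load a .mat file holding a (2048, N) feature matrix and N labels.

    Returns features as (N, 2048) and labels as a flat array.
    The 'features'/'labels' keys are assumptions about the file layout.
    """
    data = io.loadmat(path)
    features = np.asarray(data['features']).T  # (N, 2048)
    labels = np.asarray(data['labels']).ravel()
    return features, labels
```

With SciPy listed in the environment, `scipy.io.loadmat` is the natural entry point for this format.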
## Acknowledgement
The Axial Attention code is taken from this amazing repository.