# LSAB

**Repository Path**: kyle-liao/lsab

## Basic Information

- **Project Name**: LSAB
- **Description**: LSAB: User Behavioral Pattern Modeling in Sequential Recommendation by Learning Self-Attention Bias
- **Primary Language**: Python
- **License**: Not specified
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 4
- **Forks**: 0
- **Created**: 2022-09-05
- **Last Updated**: 2023-12-10

## Categories & Tags

**Categories**: Uncategorized

**Tags**: None

## README

# LSAB

## Setup

Install the required packages into your Python environment:

```
pip install -r requirements.txt
```

This code was tested with `python 3.7.13` on Ubuntu with CUDA 10.1 and various types of GPUs.

## QuickStart

To train LSAB, run `run.py` as follows:

```
python run.py --templates train_sas --dataset_code ml-100k --Biasmode learn
```

This applies all the options specified in `templates/train_sas.yaml` and trains **LSAB** on the **MovieLens 100K** dataset. You can also apply a different bias mode. For example,

```
python run.py --templates train_sas --dataset_code ml-100k --Biasmode lognormal
```

You can also apply other templates from the `templates` folder. For example,

```
python run.py --templates train_bert --dataset_code ml-100k --Biasmode learn
```

will train the **BERT4Rec** model instead of **LSAB**.

## Training

Here is a more detailed explanation of how to train a model, described at two levels ('Big' choices and 'Small' choices). Remember that you can always study the template files in `./templates` to learn about sensible choices.

### 'Big' Choices

This project is highly modularized, so that any (valid) combination of `model`, `Biasmode`, `dataset`, `dataloader`, `negative_sampler` and `trainer` will run.

#### Model

Currently, this repository provides several baseline models in addition to LSAB.
Choose one of these for the `--model_code` option:

* meantime
* sas
* tisas
* bert

#### Biasmode

Choose one of these for the `--Biasmode` option:

* learn
* abs
* normal
* lognormal

#### Privacy

Choose one of these for the `--use_epsilon` option:

* 1 (when `use_epsilon==1`, the privacy noise parameterized by `epsilon` is applied)
* 0 (no privacy noise)

The privacy parameters can be adjusted:

* sensitivity: 1
* epsilon: 0.1

#### Dataset

We experimented with four datasets: **MovieLens 100K**, **MovieLens 1M**, **Amazon Beauty** and **Amazon Game**.

Choose one of these for the `--dataset_code` option:

* ml-100k
* ml-1m
* beauty
* game

The raw data of these datasets will be automatically downloaded to `./Data` the first time they are required. They will be preprocessed according to the related hyperparameters and also saved to `./Data` for later re-use. Note that downloading/preprocessing is done only once per setting to save time.

If you want to change the data folder's path from `./Data` to somewhere else (e.g. a shared folder), modify the `LOCAL_DATA_FOLDER` variable in `meantime/config.py`.

#### Negative Sampler

There are two types of negative samplers:

* random (sample uniformly at random)
* popular (sample according to item popularity)

Choose one for `--train_negative_sampler` (used for training) and `--test_negative_sampler` (used for evaluation).

### Results

In the result folder, you will find:

```
.
├── config.json
├── models
│   ├── best_model.pth
│   ├── recent_checkpoint.pth
│   └── recent_checkpoint.pth.final
├── status.txt
└── tables
    ├── test_log.csv
    ├── train_log.csv
    └── val_log.csv
```

Below are the descriptions for the contents of each file.
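As an illustration, the per-epoch CSV logs described below can be inspected programmatically. The following is only a minimal sketch: the `epoch` column and the metric header (`NDCG@10` here) are assumptions — check the actual headers in your `tables/val_log.csv`.

```python
import csv

def best_epoch(csv_path, metric):
    """Return (epoch, value) for the row with the highest `metric`.

    Assumes the CSV has an `epoch` column and one column per metric;
    the real headers depend on the metrics configured for the run.
    """
    best = None
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            value = float(row[metric])
            if best is None or value > best[1]:
                best = (int(row["epoch"]), value)
    return best
```

For example, `best_epoch("tables/val_log.csv", "NDCG@10")` would report which epoch produced `best_model.pth`, assuming the best model is selected on that validation metric.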
#### Models

* `best_model.pth`: state_dict of the best model
* `recent_checkpoint.pth`: state_dict of the model at the latest epoch
* `recent_checkpoint.pth.final`: state_dict of the model at the end of training

#### Tables

* `train_log.csv`: training loss at every epoch
* `val_log.csv`: evaluation metrics on the validation data at every epoch
* `test_log.csv`: evaluation metrics on the test data at the best epoch

## References

The baseline codes were translated to PyTorch from the following repositories:

* **MARank**: [https://github.com/voladorlu/MARank](https://github.com/voladorlu/MARank)
* **SASRec**: [https://github.com/kang205/SASRec](https://github.com/kang205/SASRec)
* **TiSASRec**: [https://github.com/JiachengLi1995/TiSASRec](https://github.com/JiachengLi1995/TiSASRec)
* **BERT4Rec**: [https://github.com/FeiSun/BERT4Rec](https://github.com/FeiSun/BERT4Rec)
* **MEANTIME**: [https://github.com/SungMinCho/MEANTIME](https://github.com/SungMinCho/MEANTIME)
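As a side note on the Privacy options above: a `sensitivity`/`epsilon` pair is how the classic Laplace mechanism from differential privacy is parameterized. This README does not document the exact mechanism used when `--use_epsilon 1` is set, so the sketch below only illustrates how the two parameters interact; `laplace_scale` and `privatize` are hypothetical names, not part of this codebase.

```python
import numpy as np

def laplace_scale(sensitivity, epsilon):
    """Scale b of the Laplace distribution in the standard Laplace mechanism."""
    return sensitivity / epsilon

def privatize(value, sensitivity=1.0, epsilon=0.1, rng=None):
    """Add Laplace(0, sensitivity/epsilon) noise to a scalar or array.

    With the defaults from the Privacy section (sensitivity=1, epsilon=0.1),
    the noise scale is 10: a smaller epsilon means stronger privacy and
    proportionally larger noise.
    """
    rng = np.random.default_rng() if rng is None else rng
    return value + rng.laplace(0.0, laplace_scale(sensitivity, epsilon), np.shape(value))
```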