# Human-Path-Prediction
**Repository Path**: bf19983205/Human-Path-Prediction
## Basic Information
- **Project Name**: Human-Path-Prediction
- **Description**: No description available
- **Primary Language**: Unknown
- **License**: MIT
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No
## Statistics
- **Stars**: 0
- **Forks**: 0
- **Created**: 2025-04-30
- **Last Updated**: 2025-04-30
## Categories & Tags
**Categories**: Uncategorized
**Tags**: None
## README
# Human Path Prediction
This repository contains the code for the papers:
**It is Not the Journey but the Destination: Endpoint Conditioned Trajectory Prediction**
Karttikeya Mangalam,
Harshayu Girase,
Shreyas Agarwal,
Kuan-Hui Lee,
Ehsan Adeli,
Jitendra Malik,
Adrien Gaidon
Accepted at [ECCV 2020](https://eccv2020.eu/) (Oral)
**From Goals, Waypoints & Paths To Long Term Human Trajectory Forecasting**
Karttikeya Mangalam*,
Yang An*,
Harshayu Girase,
Jitendra Malik
Accepted to [ICCV 2021](https://iccv2021.thecvf.com/)
This repository supports several state-of-the-art pedestrian trajectory forecasting models on both short-term (3.2 seconds of input, 4.8 seconds of output) and long-term (up to a minute into the future) prediction horizons. To train/test models, please visit the PECNet and Ynet folders for model-specific code.
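For these benchmarks, the 3.2 s / 4.8 s horizons are conventionally sampled at one frame every 0.4 s, i.e. 8 observed frames and 12 future frames. A minimal sketch of how a raw track could be sliced into (observation, future) windows under that assumption (`make_windows` and the constants are illustrative, not part of the repository's dataloaders):

```python
# Minimal sketch: slicing one pedestrian track into short-term
# benchmark windows. Assumes the standard 0.4 s frame interval,
# so 3.2 s -> 8 observed frames and 4.8 s -> 12 future frames.
from typing import List, Tuple

Point = Tuple[float, float]

OBS_LEN = 8    # 3.2 s of input at 0.4 s per frame
PRED_LEN = 12  # 4.8 s of output at 0.4 s per frame

def make_windows(track: List[Point]) -> List[Tuple[List[Point], List[Point]]]:
    """Split one (x, y) track into overlapping (observed, future) pairs."""
    total = OBS_LEN + PRED_LEN
    windows = []
    for start in range(len(track) - total + 1):
        obs = track[start:start + OBS_LEN]
        fut = track[start + OBS_LEN:start + total]
        windows.append((obs, fut))
    return windows

# A 25-frame track yields 25 - 20 + 1 = 6 windows.
```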
Keywords: human path prediction, human trajectory prediction, human path forecasting, pedestrian location forecasting, location prediction, position forecasting, future path forecasting, long term prediction, instantaneous prediction, next second location, multi-agent forecasting, behavior prediction
## Datasets
**Stanford Drone Dataset**:
- Dataloaders for both short- and long-term prediction horizons
- Hand-annotated segmentation maps
- Pretrained state-of-the-art prediction models, plus training/evaluation code
**ETH/UCY Dataset**:
- Short-term prediction dataloaders
- Pretrained SOTA trajectory prediction models for all five scenes
- Training/evaluation code for the SOTA model and the baselines
**InD Dataset**:
- Long-term prediction dataloaders
- Hand-annotated segmentation maps
- Pretrained SOTA trajectory prediction models, plus training/evaluation code
We hope this allows easy benchmarking of several baselines as well as state-of-the-art path prediction models across several datasets and settings. If you find this repository or any code thereof useful in your work, kindly cite:
```
@inproceedings{mangalam2020pecnet,
title={It is Not the Journey but the Destination: Endpoint Conditioned Trajectory Prediction},
author={Mangalam, Karttikeya and Girase, Harshayu and Agarwal, Shreyas and Lee, Kuan-Hui and Adeli, Ehsan and Malik, Jitendra and Gaidon, Adrien},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
month = {August},
year={2020}
}
```
```
@inproceedings{mangalam2021goals,
author = {Mangalam, Karttikeya and An, Yang and Girase, Harshayu and Malik, Jitendra},
title = {From Goals, Waypoints \& Paths To Long Term Human Trajectory Forecasting},
booktitle = {Proc. International Conference on Computer Vision (ICCV)},
year = {2021},
month = {October}
}
```
## Paper Summaries
**It is Not the Journey but the Destination: Endpoint Conditioned Trajectory Prediction**
Published at [ECCV 2020](https://eccv2020.eu/) (Oral)
**Abstract**: Human trajectory forecasting with multiple socially interacting agents is of critical importance for autonomous navigation in human environments, e.g., for self-driving cars and social robots. In this work, we present the Predicted Endpoint Conditioned Network (PECNet) for flexible human trajectory prediction. PECNet infers distant trajectory endpoints to assist in long-range multi-modal trajectory prediction. A novel non-local social pooling layer enables PECNet to infer diverse yet socially compliant trajectories. Additionally, we present a simple "truncation trick" for improving diversity and multi-modal trajectory prediction performance.
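The two-stage idea above (first sample where the pedestrian ends up, then fill in how they get there) can be sketched as follows. This is a toy illustration only, not the PECNet implementation: `sample_endpoint` (naive extrapolation plus Gaussian noise) stands in for the paper's learned CVAE endpoint sampler, and the linear interpolation in `decode_conditioned` stands in for its learned, socially pooled trajectory decoder.

```python
# Illustrative two-stage, endpoint-conditioned inference loop.
# Sketch of the idea only; the heuristics below replace PECNet's
# learned CVAE sampler and conditioned trajectory decoder.
import random
from typing import List, Tuple

Point = Tuple[float, float]

def sample_endpoint(past: List[Point], noise: float = 1.0) -> Point:
    """Stage 1: guess a distant endpoint by extrapolating the last
    observed step and perturbing it (stand-in for a CVAE sample)."""
    (x0, y0), (x1, y1) = past[-2], past[-1]
    dx, dy = x1 - x0, y1 - y0
    steps = 12  # 4.8 s horizon at 0.4 s per frame
    return (x1 + steps * dx + random.gauss(0.0, noise),
            y1 + steps * dy + random.gauss(0.0, noise))

def decode_conditioned(past: List[Point], endpoint: Point,
                       pred_len: int = 12) -> List[Point]:
    """Stage 2: fill in the trajectory conditioned on the endpoint
    (here plain linear interpolation; PECNet learns this decoder)."""
    x1, y1 = past[-1]
    ex, ey = endpoint
    return [(x1 + (ex - x1) * t / pred_len,
             y1 + (ey - y1) * t / pred_len)
            for t in range(1, pred_len + 1)]

def predict_multimodal(past: List[Point], k: int = 5) -> List[List[Point]]:
    """Sample K endpoints and decode one trajectory per endpoint,
    giving a diverse multi-modal set of futures."""
    return [decode_conditioned(past, sample_endpoint(past))
            for _ in range(k)]
```

Because every sampled endpoint yields its own full trajectory, diversity in the endpoint distribution translates directly into diverse predicted paths, which is the core of the endpoint-conditioned formulation.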
Below is an example of pedestrian trajectories predicted by our model and the corresponding ground truth. The left pane shows future trajectories for 9.6 seconds, predicted in a recurrent input fashion. The right pane shows the predicted trajectories for the next 4.8 seconds at an intersection. Solid circles represent the past input & stars represent the future ground truth. Predicted multi-modal trajectories are shown as translucent circles jointly for all present pedestrians. The animation is best viewed in Adobe Acrobat Reader. More video visualizations are available at the project homepage: https://karttikeya.github.io/publication/htf/