# SpecForge
[Documentation](https://docs.sglang.ai/SpecForge/) | [Hugging Face Collection](https://huggingface.co/collections/lmsys/specbundle) | [DeepWiki](https://deepwiki.com/sgl-project/SpecForge) | [Blog](https://lmsys.org/blog/2025-07-25-spec-forge/) | [Slack](https://sgl-fru7574.slack.com/archives/C09784E3EN6) | [License](./LICENSE)
## Acknowledgement

We would like to express our sincere gratitude to the official EAGLE team, especially Hongyang Zhang and Yuhui Li, for their invaluable contributions and support. Our thanks also go to the NVIDIA team, particularly Avery H and Izzy Putterman, and to the Google team, especially Ying Wang, for their insightful discussions and generous assistance throughout the project.
We are especially grateful to Meituan for their strong backing and meaningful contributions, which played a vital role in driving this project forward.
This project has also been inspired by many outstanding open-source projects from the LLM community, including [EAGLE](https://github.com/SafeAILab/EAGLE), [BaldEagle](https://github.com/NickL77/BaldEagle), and [TensorRT-Model-Optimizer](https://github.com/NVIDIA/TensorRT-Model-Optimizer), among others. Their contributions and shared knowledge have greatly benefited our work.
## Special Thanks to Voltage Park
We would like to extend our sincere thanks to [Voltage Park](https://www.voltagepark.com/), our official infrastructure partner. As part of a formal collaboration with the SGLang team, Voltage Park provided critical GPU resources that empowered us to train and evaluate large-scale speculative decoding models efficiently and reliably. This partnership was instrumental in making SpecForge possible. We deeply appreciate Voltage Park's mission to make cutting-edge AI infrastructure more accessible, and we look forward to continued collaboration as we push the boundaries of open-source LLM serving and optimization.
## Citation
```bibtex
@misc{specforge2025,
  title        = {SpecForge: Train speculative decoding models effortlessly},
  author       = {Shenggui Li and Yikai Zhu and Chao Wang and Fan Yin and Shuai Shi and Yubo Wang and Yi Zhang and Yingyi Huang and Haoshuai Zheng and Yineng Zhang},
  year         = {2025},
  publisher    = {GitHub},
  howpublished = {\url{https://github.com/sgl-project/specforge}},
}
```