HuggingFace conversion and training library for Megatron-based models
DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling
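The "fine-grained scaling" in the DeepGEMM entry refers to scaling small blocks of a tensor independently when quantizing to FP8, rather than using one scale for the whole tensor. A minimal pure-Python sketch of the idea (not DeepGEMM's API; block size and the FP8 E4M3 max of 448 are illustrative assumptions):

```python
# Illustrative sketch of per-block ("fine-grained") scaling for FP8
# quantization. Not DeepGEMM's actual API -- just the core idea.
FP8_E4M3_MAX = 448.0  # largest magnitude representable in FP8 E4M3

def quantize_per_block(values, block_size=4):
    """Scale each block so its largest magnitude maps to FP8_E4M3_MAX.

    Returns (scaled_blocks, scales); dequantize by multiplying each
    block's elements by its scale.
    """
    blocks, scales = [], []
    for i in range(0, len(values), block_size):
        block = values[i:i + block_size]
        amax = max(abs(v) for v in block) or 1.0  # avoid divide-by-zero
        scale = amax / FP8_E4M3_MAX
        blocks.append([v / scale for v in block])
        scales.append(scale)
    return blocks, scales

def dequantize(blocks, scales):
    return [v * s for block, s in zip(blocks, scales) for v in block]

# A block with a large outlier no longer forces tiny values in other
# blocks to underflow -- each block gets its own scale.
x = [0.5, -2.0, 1000.0, 3.0, 0.01, 0.02, -0.04, 0.03]
blocks, scales = quantize_per_block(x)
y = dequantize(blocks, scales)
```

Per-tensor scaling would have to pick one scale for the whole of `x`, so the 1000.0 outlier would crush the 0.01-scale values toward zero; per-block scaling keeps each block within FP8 range independently.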
A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper, Ada and Blackwell GPUs, to provide better performance with lower memory utilization in both training and inference.