# Upsample-Anything_Pytorch

**Repository Path**: gengumeng/Upsample-Anything_Pytorch

## Basic Information

- **Project Name**: Upsample-Anything_Pytorch
- **Description**: No description available
- **Primary Language**: Unknown
- **License**: Not specified
- **Default Branch**: main
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 0
- **Created**: 2026-01-30
- **Last Updated**: 2026-01-30

## Categories & Tags

**Categories**: Uncategorized

**Tags**: None

## README

**[2025-12-01]**: Initial application code release.

**[2025-11-24]**: Initial code release. (Given the high number of requests, we have decided to release the code in its current state before further cleanup.)

:warning: The current version is slightly slower than the one reported in the paper, because the speed-optimized code has not yet been fully integrated due to compatibility issues across different versions.

# Upsample Anything

**KAIST, MIT, Microsoft**

Minseok Seo, Mark Hamilton, Changick Kim

[:scroll: [`Paper`](https://arxiv.org/html/2511.16301v1)] [:globe_with_meridians: [`Website`](https://seominseok0429.github.io/Upsample-Anything/)] [:book: [`BibTeX`](#-)]

## Overview
Our method performs lightweight test-time optimization (≈0.419 s/image) without requiring any dataset-level training. It generalizes seamlessly across domains while maintaining consistent reconstruction quality for every image. (All examples are randomly selected, without cherry-picking.)
Our method not only upsamples features but also denoises them and reinforces coherent object-level grouping.
Our method can operate not only with RGB guidance but also with *any* modality as guidance (with only minimal code changes). The depth sample below was generated using **[Depth Pro](https://github.com/apple/ml-depth-pro)**.
Our method is not limited to feature-map upsampling: it can also upsample probability maps, depth maps, and other modalities.
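One reason probability maps are a natural fit: any upsampler that forms each output pixel as a convex combination of input pixels (non-negative kernel weights that sum to one, as with normalized Gaussian or bilateral kernels) maps valid probability maps to valid probability maps. A minimal self-contained check of this property, using random stand-in weights rather than the method's learned kernels:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out, n_cls = 16, 64, 5        # low-res pixels, high-res pixels, classes

# A low-res per-pixel class distribution: each row sums to 1.
probs_lr = rng.random((n_in, n_cls))
probs_lr /= probs_lr.sum(axis=1, keepdims=True)

# Stand-in for upsampling kernels: non-negative, normalized per output pixel.
weights = rng.random((n_out, n_in))
weights /= weights.sum(axis=1, keepdims=True)

# Upsampled probabilities remain non-negative and sum to 1 per pixel,
# because each output row is a convex combination of valid distributions.
probs_hr = weights @ probs_lr
```

The same argument applies to depth maps: a convex combination of plausible depth values stays within the range of its inputs, so no out-of-range values are introduced.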
By applying our method to SegEarth-OV, which has made significant contributions to open-vocabulary (OV) segmentation for satellite imagery, we can achieve even sharper and more refined results.