SatDiff: A Stable Diffusion Framework for Satellite Imagery Inpainting 🌍

Teerapong Panboonyuen

AGL (Advancing Geoscience Laboratory), Chulalongkorn University

MIT License

SatDiff leverages the power of Stable Diffusion Models to intelligently inpaint missing or corrupted regions in very high-resolution satellite imagery. Designed specifically for Earth observation, it preserves both texture and geospatial semantics, making it ideal for applications where visual and structural integrity matter most.

Built at the Advancing Geoscience Laboratory (AGL), our model incorporates a custom dual-branch architecture with attention-enhanced modules, tailored for aerial and satellite domains. It significantly outperforms classical and deep-learning inpainting approaches in PSNR, SSIM, and perceptual realism.

  • PSNR: 35.72 dB
  • SSIM: 0.952
  • Perceptual Realism: High structural fidelity & semantic awareness

πŸ“„ IEEE 2025 Publication

SatDiff Inpainting Result

πŸ—οΈ Architecture Overview

Figures: SatDiff overview · dual-branch attention design · diffusion model structure

πŸ”§ Installation

git clone https://github.com/kaopanboonyuen/SatDiff.git
cd SatDiff
pip install -r requirements.txt

πŸš€ Usage

1. Prepare Configuration

Edit the config.yaml file with dataset paths and hyperparameters.
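The actual keys in config.yaml are defined by the repository; the layout below is only an illustrative assumption of what such a file typically contains, not the repo's real schema:

```yaml
# Hypothetical layout — check the repository's config.yaml for the real keys.
data:
  train_dir: data/train        # directory of clean satellite tiles
  mask_dir: data/masks         # binary masks marking corrupted regions
train:
  batch_size: 8
  learning_rate: 1.0e-4
  epochs: 100
```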

2. Train Model

python train.py
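The exact training recipe lives in train.py; as a reminder of the standard noise-prediction objective that Stable Diffusion models are trained with (per DDPM), here is a toy NumPy sketch — the linear beta schedule and the zero-output stand-in denoiser are illustrative assumptions, not SatDiff's actual settings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Standard DDPM forward process: x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps
T = 1000
betas = np.linspace(1e-4, 0.02, T)         # illustrative linear noise schedule
alphas_bar = np.cumprod(1.0 - betas)

x0 = rng.standard_normal((8, 8))           # a toy "clean image"
t = 500                                    # an arbitrary diffusion timestep
eps = rng.standard_normal(x0.shape)        # the noise the denoiser must predict
x_t = np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * eps

eps_pred = np.zeros_like(eps)              # stand-in for the denoiser's output
loss = np.mean((eps - eps_pred) ** 2)      # noise-prediction MSE objective
print(loss)
```

A real run would replace the zero predictor with the dual-branch U-Net and sample t per batch element.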

3. Evaluate Results

python evaluate.py
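Evaluation reports PSNR and SSIM; the exact implementation in evaluate.py isn't shown here, but PSNR itself is a simple fixed formula. A minimal NumPy sketch for 8-bit images:

```python
import numpy as np

def psnr(reference: np.ndarray, restored: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB: 10 * log10(MAX^2 / MSE)."""
    mse = np.mean((reference.astype(np.float64) - restored.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

a = np.full((64, 64), 128, dtype=np.uint8)
b = a.copy()
b[0, 0] += 10                # corrupt a single pixel
print(psnr(a, a))            # inf — no error at all
print(psnr(a, b))            # finite dB score
```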

4. Run Inference

python inference.py --image_path path/to/image.png
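Inpainting needs to know which pixels are missing. The repo's mask convention isn't documented here, but a common approach is to treat a constant fill value (e.g. zeroed sensor dropout) as the corrupted region; a hypothetical NumPy sketch of deriving such a binary mask:

```python
import numpy as np

def corruption_mask(image: np.ndarray, fill_value: int = 0) -> np.ndarray:
    """Binary mask: 1 where every channel equals fill_value (treated as missing)."""
    return np.all(image == fill_value, axis=-1).astype(np.uint8)

img = np.full((4, 4, 3), 200, dtype=np.uint8)
img[1:3, 1:3] = 0            # simulate a dropped 2x2 sensor region
mask = corruption_mask(img)
print(mask.sum())            # 4 missing pixels
```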

πŸ“Š Example Output

SatDiff Output Example

πŸ§‘β€πŸ”¬ About the Author

Author Bio

πŸ“– Citation

If you use SatDiff in your research, please cite the following paper:

@article{panboonyuen2025satdiff,
  title={SatDiff: A Stable Diffusion Framework for Inpainting Very High-Resolution Satellite Imagery},
  author={Panboonyuen, Teerapong and others},
  journal={IEEE Access},
  year={2025},
  publisher={IEEE}
}

🌐 Project Repository

View the source code and contribute at: πŸ”— https://github.com/kaopanboonyuen/SatDiff