FusionNetGeoLabel is a research-driven deep learning framework for semantic segmentation of remote sensing imagery. Built upon my doctoral research, it introduces a novel architecture that integrates:
This unified design, named HR-GCN-FF-DA, delivers state-of-the-art performance on benchmarks such as ISPRS Vaihingen, Potsdam, Landsat-8, and Massachusetts Roads, consistently surpassing existing baselines in IoU, F1-score, and overall accuracy.
Beyond academic evaluation, FusionNetGeoLabel is designed with practical deployment in mind, featuring modular utilities, dataset download scripts, pretrained models, and Docker support for seamless reproducibility. The goal is to provide both a research-grade contribution to the remote sensing community and a production-ready tool for real-world geospatial applications such as urban planning, agriculture monitoring, and map updating.
I received my Ph.D. in Computer Engineering from Chulalongkorn University (2018-2020), supported by two prestigious scholarships:
Prior to this, I received my Master of Engineering in Computer Engineering from Chulalongkorn University (2016-2017), supported by:
FusionNetGeoLabel: A Deep Learning Framework for Semantic Segmentation in Remote Sensing.
Teerapong Panboonyuen
Chulalongkorn University, 2020
High-Resolution Road Extraction Using Deep Convolutional Neural Networks and CRFs.
Teerapong Panboonyuen
Chulalongkorn University, 2017
Panboonyuen, T., et al.
Transformer-Based Decoder Designs for Semantic Segmentation on Remotely Sensed Images
Remote Sensing, 2021
Panboonyuen, T., et al.
Feature Fusion-Based Enhanced Global Convolutional Network with Channel Attention for Remote Sensing
Remote Sensing, 2020
Panboonyuen, T., et al.
Semantic Segmentation on Remotely Sensed Images Using an Enhanced Global Convolutional Network with Channel Attention and Domain Specific Transfer Learning
Remote Sensing, 2019
Panboonyuen, T., et al.
Road Segmentation on Aerial Imagery Using Deep CNNs and Conditional Random Fields
Remote Sensing, 2017
My research contributes to the advancement of intelligent systems in geospatial analysis, supporting smart cities, environmental monitoring, disaster response, and geospatial intelligence with more robust and accurate semantic segmentation models.
Semantic segmentation plays a crucial role in remote sensing, impacting fields such as agriculture, map updating, and navigation.
While Deep Convolutional Encoder-Decoder networks are widely used, they often struggle to accurately identify fine low-level features such as rivers and vegetation, due to architectural limitations and the scarcity of domain-specific training data.
This dissertation proposes an advanced semantic segmentation framework designed specifically for remote sensing imagery, featuring five key innovations:
Experiments on Landsat-8 datasets and the ISPRS Vaihingen benchmark demonstrate significant performance improvements over baseline models.
Explore the core assets underpinning my research and contributions to the field of semantic segmentation on remote sensing imagery:
These resources highlight the rigor, reproducibility, and impact of my work within the computer vision and remote sensing communities.
The FusionNetGeoLabel framework provides a full semantic segmentation pipeline built on HR-Backbone + Feature Fusion + Depthwise Atrous Convolution. Below are quick steps to run training, inference, and evaluation.
git clone https://github.com/kaopanboonyuen/FusionNetGeoLabel.git
cd FusionNetGeoLabel
pip install -r requirements.txt
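Optionally, isolate the dependencies in a virtual environment before installing. This is a minimal sketch using Python's built-in venv; the .venv name is just an example:

# Create and activate an isolated environment, then install the requirements into it
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt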
Prepare your dataset (ISPRS Vaihingen, Potsdam, Massachusetts Roads, or Landsat-8) using our dataset download scripts:
bash scripts/download_isprs.sh
bash scripts/download_potsdam.sh
bash scripts/download_massachusetts.sh
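As a quick sanity check after downloading, you can inspect the size of each extracted dataset. The data/ output path below is an assumption; check each script for where it actually writes:

# Show the size of each downloaded dataset (adjust the path to match the scripts' output)
du -sh data/*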
Modify config.json (e.g., dataset paths, hyperparameters), then start training:
python train.py --config config.json
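For orientation, a minimal config.json sketch is shown below. The field names here are illustrative assumptions rather than the framework's exact schema, so match them against the keys shipped in the repository's config.json:

{
  "dataset": "isprs_vaihingen",
  "data_root": "data/vaihingen",
  "batch_size": 8,
  "learning_rate": 0.0001,
  "epochs": 100,
  "checkpoint_dir": "checkpoints/"
}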
Run inference on single images or folders using pretrained checkpoints:
python inference.py --model checkpoints/hrgcn_ff_da.pth --image sample.png
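To process an entire folder with the same checkpoint, one simple option is a shell loop over the documented single-image flag; the data/test_images path below is just an example:

# Run the single-image inference command over every PNG in a folder
for img in data/test_images/*.png; do
  python inference.py --model checkpoints/hrgcn_ff_da.pth --image "$img"
done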
Test your model against benchmark datasets with built-in metrics (IoU, F1-score, Accuracy):
python test.py --config config.json --model checkpoints/hrgcn_ff_da.pth
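To keep a record of benchmark runs, you can capture the printed metrics with standard shell redirection; the results/ path and log name are just examples:

# Save the evaluation output to a log file while still printing it to the console
mkdir -p results
python test.py --config config.json --model checkpoints/hrgcn_ff_da.pth | tee results/vaihingen_eval.log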
Build and run inside a container for reproducibility:
docker build -t fusionnetgeolabel .
docker run --gpus all -it fusionnetgeolabel
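To reuse local datasets and checkpoints inside the container, mount them as volumes. The /workspace target paths are assumptions, so adjust them to the image's actual working directory:

# Mount local data and checkpoints into the container before starting an interactive session
docker run --gpus all \
  -v "$PWD/data:/workspace/data" \
  -v "$PWD/checkpoints:/workspace/checkpoints" \
  -it fusionnetgeolabel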
Some highlights of our model's performance:
I design and develop advanced deep learning architectures for semantic segmentation of aerial and satellite imagery, enabling machines to interpret complex geospatial scenes, from roads and vegetation to urban structures, with high precision.
My latest framework, FusionNetGeoLabel (HR-GCN-FF-DA), pushes beyond the state of the art by introducing three key innovations:
Validated on leading benchmarks such as ISPRS Vaihingen and Landsat-8, FusionNetGeoLabel consistently achieves 90%+ F1-scores, surpassing previous baselines and setting new standards in remote sensing segmentation.
Beyond research impact, my work powers practical applications in urban planning, environmental monitoring, disaster management, and navigation systems, directly contributing to smarter, data-driven decision-making at scale.
This project is licensed under the MIT License.
If you use this framework, please cite the following thesis:
@phdthesis{panboonyuen2019semantic,
title = {Semantic segmentation on remotely sensed images using deep convolutional encoder-decoder neural network},
author = {Teerapong Panboonyuen},
year = {2019},
school = {Chulalongkorn University},
type = {Ph.D. thesis},
doi = {10.58837/CHULA.THE.2019.158},
address = {Faculty of Engineering},
note = {Doctor of Philosophy}
}