My research focuses on Learning Representations: developing cutting-edge algorithms grounded in optimization theory to push AI's limits. I work with advanced models such as GANs and Diffusion Models, leverage Self-Supervised Learning, and delve into Adversarial Attacks on Large Language Models (LLMs) to advance AI capabilities.

I am currently a Senior AI Research Scientist at MARS (Motor AI Recognition Solution) and a Postdoctoral Fellow at Chulalongkorn University. I earned my Ph.D. in Computer Engineering from Chulalongkorn University, where I specialized in AI.

My passion is Cognitive Intelligence as a means to unlock human potential. I am keenly interested in Remote Sensing, where LLMs reveal transformative insights and redefine how we perceive and interact with our environment.

You can find summaries of my academic, industry, and teaching experience in my CV, and explore more about my personal life on my blog. Additionally, check out some of my music on SoundCloud.

Call me Teerapong Panboonyuen, or just Kao (เก้า) in Thai: ธีรพงศ์ ปานบุญยืน.

Download my CV. Download my Thai CV.

Interests
  • Applied Earth Observations
  • Geoscience and Remote Sensing
  • Computer Vision
  • Semantic Distillation
  • Human-AI Interaction
  • Learning Representations
Education
  • PostDoc Fellow in AI, 2025

    Chulalongkorn University

  • PhD in Computer Engineering, 2020

    Chulalongkorn University

  • MEng in Computer Engineering, 2017

    Chulalongkorn University

  • BEng in Computer Engineering, 2015

    KMUTNB (Top 1% in University Mathematics)

  • Pre-Engineering School (PET21), 2012

    KMUTNB (Senior High School, 10th - 12th Grade)

Selected Awards

Reviewer for International Journals/Conferences

Selected Press

  • The Leader Asia: Dr. Teerapong and his team introduced their advanced AI for car damage detection at ICIAP 2023 in Udine, setting new accuracy standards with their innovative MARS model.
  • Techsauce: Highlighted their AI technology for automatic car damage assessment, earning recognition for excellence at ICIAP 2023 in Italy.
  • LINE TODAY: Showcased the MARS model at ICIAP 2023, noted for its high accuracy and setting new global standards in car damage detection.
  • Moneychat: Reported the award-winning innovation in AI for car damage estimation presented at ICIAP 2023.
  • Kaohoon: Celebrated the award-winning success of MARS at ICIAP 2023.
  • Mitistock: Introduced the MARS model, featuring advanced self-attention mechanisms for vehicle damage assessment in Thailand.
  • The Story Thailand: Presented cutting-edge AI techniques in car damage detection, achieving high accuracy and setting international benchmarks.
  • Media of Thailand: Unveiled the MARS model at ICIAP 2023, recognized globally for its precision in car damage detection.
  • Thailand Insurance News: Featured Dr. Teerapong’s MARS model at ICIAP 2023 for its groundbreaking accuracy in car damage detection.
  • Chulalongkorn University: Published a study on semantic road segmentation using deep convolutional neural networks.

Publications

To find relevant content, try searching publications, filtering using the buttons below, or exploring popular topics. A * denotes equal contribution.

* Semantic Segmentation on Remotely Sensed Images Using an Enhanced Global Convolutional Network with Channel Attention and Domain Specific Transfer Learning
In the remote sensing domain, it is crucial to perform semantic segmentation on raster images, e.g., of rivers, buildings, and forests. A deep convolutional encoder–decoder (DCED) network is the state-of-the-art semantic segmentation method for remotely sensed images. However, accuracy is still limited, since the network is not designed for remotely sensed images and training data in this domain is deficient. In this paper, we propose a novel CNN for semantic segmentation tailored to remote sensing corpora, with three main contributions. First, we apply a recent CNN called a global convolutional network (GCN), since it can capture different resolutions by extracting multi-scale features from different stages of the network. We further enhance the network by improving its backbone with a larger number of layers, which is suitable for medium-resolution remotely sensed images. Second, "channel attention" is introduced into our network to select the most discriminative filters (features). Third, "domain-specific transfer learning" is introduced to alleviate the data-scarcity issue by using other remotely sensed corpora with different resolutions as pre-training data. Experiments were conducted on two datasets: (i) medium-resolution data collected from the Landsat-8 satellite and (ii) very-high-resolution data from the ISPRS Vaihingen Challenge Dataset. The results show that our networks outperformed DCED in terms of F1 by 17.48% and 2.49% on the medium- and very-high-resolution corpora, respectively.
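The abstract above does not give the exact channel-attention formulation, but the idea of selecting the most discriminative filters can be sketched with a squeeze-and-excite-style gate: pool each channel to a scalar, pass the descriptors through a small bottleneck MLP, and rescale the feature maps by the resulting sigmoid weights. The weights `w1`/`w2` below are random placeholders for what would be learned parameters.

```python
import numpy as np

def channel_attention(feature_maps, reduction=4, rng=None):
    """Squeeze-excite-style channel gating over a (C, H, W) feature tensor.

    Illustrative sketch only; w1/w2 stand in for a learned bottleneck MLP.
    """
    rng = rng or np.random.default_rng(0)
    c = feature_maps.shape[0]
    # Squeeze: global average pool -> one descriptor per channel.
    z = feature_maps.mean(axis=(1, 2))                 # shape (C,)
    # Excite: bottleneck MLP with ReLU, then a sigmoid gate per channel.
    w1 = rng.standard_normal((c // reduction, c))
    w2 = rng.standard_normal((c, c // reduction))
    s = 1.0 / (1.0 + np.exp(-(w2 @ np.maximum(w1 @ z, 0.0))))
    # Rescale each channel by its attention weight.
    return feature_maps * s[:, None, None]

x = np.ones((8, 4, 4))      # toy feature maps: 8 channels of 4x4
y = channel_attention(x)    # each channel scaled by its gate in (0, 1)
```

In a trained network the gate learns to suppress channels that are uninformative for the segmentation classes, which is the "most discriminative filters" behaviour the abstract describes.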
Real-Time Polyps Segmentation for Colonoscopy Video Frames Using Compressed Fully Convolutional Network
Colorectal cancer is one of the leading causes of cancer death worldwide. Colonoscopy is currently the most effective screening tool for diagnosing colorectal cancer by searching for polyps, which can develop into colon cancer. The drawback of the manual colonoscopy process is its high polyp miss rate; therefore, polyp detection is a crucial issue in the development of colonoscopy applications. Despite their high evaluation scores, recently published methods based on fully convolutional networks (FCNs) require a very long inference (testing) time and cannot be applied in a real clinical process due to the large number of parameters in the network. In this paper, we propose a compressed fully convolutional network, obtained by modifying the FCN-8s network, so that our network can detect and segment polyps in video frames within the real-time constraint of a practical screening routine. Furthermore, our customized loss function makes our network more robust than the traditional cross-entropy loss function. The experiment was conducted on the CVC-EndoSceneStill database, which consists of 912 video frames from 36 patients. Our proposed framework obtains state-of-the-art results while running more than 7 times faster and requiring more than 9 times fewer weight parameters. The experimental results show that our system has the potential to support clinicians during the analysis of colonoscopy video by automatically indicating suspicious polyp locations.
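The parameter savings that make such compression possible come largely from shrinking convolution channel widths, since a conv layer's weight count grows with the product of its input and output channels. The layer widths below are hypothetical, not the paper's actual FCN-8s configuration; the arithmetic just illustrates why halving every channel width cuts each layer's parameters roughly fourfold.

```python
def conv_params(k, c_in, c_out):
    """Weights plus biases for one k x k convolution layer."""
    return k * k * c_in * c_out + c_out

# Hypothetical widths: a 3x3 conv with 512 channels in and out,
# versus the same layer with both widths halved.
full = conv_params(3, 512, 512)        # 2,359,808 parameters
compressed = conv_params(3, 256, 256)  #   590,080 parameters
ratio = full / compressed              # roughly 4x fewer weights per layer
```

Compounding reductions like this across a deep network is how an FCN-style model can shed the bulk of its weights, and with them most of its inference cost.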
Semantic Segmentation On Medium-Resolution Satellite Images Using Deep Convolutional Networks With Remote Sensing Derived Indices
Semantic segmentation is a fundamental task in computer vision and remote sensing imagery. Many applications, such as urban planning, change detection, and environmental monitoring, require accurate segmentation; hence, most segmentation tasks are still performed by humans. With the growth of deep convolutional neural networks (DCNNs), many works aim to find the network architecture best fitting this task. However, these studies are based on very-high-resolution satellite images and, surprisingly, none of them are implemented on medium-resolution satellite images. Moreover, no research has applied geoinformatics knowledge. Therefore, we propose to compare semantic segmentation models, namely FCN, SegNet, and GSN, using medium-resolution images from the Landsat-8 satellite. In addition, we propose a modified SegNet model that can be used with remote-sensing derived indices. The results show that the model achieving the highest accuracy on the RGB bands of medium-resolution imagery is SegNet, and its overall accuracy increases when the Near Infrared (NIR) and Short-Wave Infrared (SWIR) bands are included. Our proposed method (the modified SegNet model, named RGB-IR-IDX-MSN) outperforms all baselines in terms of mean F1 score.
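The abstract does not list which derived indices were used, but a standard example of a remote-sensing derived index computed from the NIR and Red bands is NDVI, the Normalized Difference Vegetation Index. A sketch of computing it from Landsat-8-style reflectance bands and treating it as an extra input channel:

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).

    NDVI is one common derived index; the paper may use others as well.
    """
    nir = np.asarray(nir, dtype=np.float64)
    red = np.asarray(red, dtype=np.float64)
    return (nir - red) / (nir + red + eps)   # eps avoids division by zero

# Toy 2x2 reflectance patches: vegetation reflects strongly in NIR.
nir = np.array([[0.6, 0.5], [0.1, 0.4]])
red = np.array([[0.1, 0.1], [0.1, 0.3]])
index_band = ndvi(nir, red)
# index_band can be stacked with the RGB bands as an extra network input.
```

Feeding such an index to the network hands it a physically meaningful band ratio directly, rather than requiring the early convolution layers to rediscover it.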
Road segmentation of remotely-sensed images using deep convolutional neural networks with landscape metrics and conditional random fields
Semantic segmentation of remotely-sensed aerial (very-high-resolution, VHR) and satellite (high-resolution, HR) images has numerous application domains, particularly road extraction, where the segmented objects serve as essential layers in geospatial databases. Despite several efforts to use deep convolutional neural networks (DCNNs) for road extraction from remote sensing images, accuracy remains a challenge. This paper introduces an enhanced DCNN framework specifically designed for road extraction from remote sensing images by incorporating landscape metrics (LMs) and conditional random fields (CRFs). Our framework employs the exponential linear unit (ELU) activation function to improve the DCNN, leading to more complete and more accurate road extraction. Additionally, to minimize false classifications of road objects, we propose a solution based on the integration of LMs. To further refine the extracted roads, a CRF method is incorporated into our framework. Experiments conducted on the Massachusetts road aerial imagery and Thailand Earth Observation System (THEOS) satellite imagery datasets demonstrate that our proposed framework outperforms SegNet, a state-of-the-art object segmentation technique, in most cases in terms of precision, recall, and F1 score across various types of remote sensing imagery.
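The CRF refinement step smooths the network's per-pixel predictions by trading off each pixel's own evidence against agreement with its neighbours. As a simplified stand-in for the paper's CRF method (not its exact formulation), an iterated-conditional-modes pass with a Potts pairwise term shows the effect: an isolated mislabelled pixel surrounded by "road" gets flipped back.

```python
import numpy as np

def icm_smooth(unary, beta=1.0, iters=5):
    """Iterated-conditional-modes refinement of a label grid.

    unary: (H, W, L) per-pixel label costs (e.g. negative log-probabilities
    from a DCNN). A Potts pairwise term with weight beta penalizes labels
    that disagree with their 4-neighbours. A simplified CRF-style sketch.
    """
    labels = unary.argmin(axis=2)
    h, w, n_labels = unary.shape
    for _ in range(iters):
        for y in range(h):
            for x in range(w):
                costs = unary[y, x].copy()
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        # Add beta to every label differing from this neighbour.
                        costs += beta * (np.arange(n_labels) != labels[ny, nx])
                labels[y, x] = costs.argmin()
    return labels

# A lone "background" pixel in a 3x3 road patch gets smoothed away.
unary = np.zeros((3, 3, 2))
unary[:, :, 1] = 0.5          # label 1 (background) slightly costlier...
unary[1, 1, 0] = 2.0          # ...except the centre, which prefers label 1.
unary[1, 1, 1] = 0.0
refined = icm_smooth(unary, beta=1.0)
```

Dense-CRF implementations used in practice add appearance-dependent pairwise kernels and efficient mean-field inference, but the unary-versus-pairwise trade-off is the same.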

Featured Talks

Research Communities