MARS

🚌 MARS: Mask Attention Refinement with Sequential Quadtree Nodes for Car Damage Instance Segmentation

License: MIT

Welcome to the official repository for MARS, a deep learning model tailored for precise car damage instance segmentation. By leveraging self-attention over sequential quadtree nodes, MARS produces sharper segmentation masks and surpasses state-of-the-art methods such as Mask R-CNN, PointRend, and Mask Transfiner.

MARS in Action

🛠️ Project Overview

In the realm of car insurance, accurately assessing vehicle damage is crucial. Traditional models often struggle with complex images and fine segmentation tasks. MARS (Mask Attention Refinement with Sequential Quadtree Nodes) addresses these challenges by recalibrating channel weights using a quadtree transformer, enhancing segmentation accuracy.
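
To make the overview concrete, here is a minimal, illustrative PyTorch sketch of channel recalibration driven by self-attention over a sequence of sampled node features. It is not the official MARS implementation: the quadtree node selection is simplified to picking the most uncertain feature-map locations, and the module name ChannelRecalibration, its inputs, and its shapes are assumptions made for illustration only.

    # Illustrative sketch only -- NOT the official MARS code. It shows one way to
    # recalibrate channel weights with self-attention over sampled "node" features.
    import torch
    import torch.nn as nn

    class ChannelRecalibration(nn.Module):
        """Rescale feature channels using self-attention over sampled node features."""

        def __init__(self, channels: int, num_nodes: int = 64, num_heads: int = 4):
            super().__init__()
            self.num_nodes = num_nodes
            self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
            self.to_weights = nn.Sequential(nn.Linear(channels, channels), nn.Sigmoid())

        def forward(self, feats: torch.Tensor, uncertainty: torch.Tensor) -> torch.Tensor:
            # feats: (B, C, H, W) mask features; uncertainty: (B, 1, H, W) per-pixel scores
            b, c, h, w = feats.shape
            flat = feats.flatten(2).transpose(1, 2)             # (B, H*W, C)
            scores = uncertainty.flatten(2).squeeze(1)          # (B, H*W)
            idx = scores.topk(self.num_nodes, dim=1).indices    # most uncertain locations
            nodes = torch.gather(flat, 1, idx.unsqueeze(-1).expand(-1, -1, c))  # (B, K, C)
            refined, _ = self.attn(nodes, nodes, nodes)         # self-attention over the node sequence
            weights = self.to_weights(refined.mean(dim=1))      # (B, C) channel weights in [0, 1]
            return feats * weights.view(b, c, 1, 1)             # recalibrated feature map

    if __name__ == "__main__":
        x = torch.randn(2, 256, 28, 28)   # dummy mask features
        u = torch.rand(2, 1, 28, 28)      # dummy uncertainty map
        print(ChannelRecalibration(256)(x, u).shape)  # torch.Size([2, 256, 28, 28])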

Key Achievements:

  • MARS was showcased at the International Conference on Image Analysis and Processing 2023 (ICIAP 2023) in Udine, Italy.

👥 Author

Teerapong Panboonyuen (GitHub: kaopanboonyuen)

📄 Publications

If you’re interested in exploring the academic work behind MARS, please check out the following publication:

  • Panboonyuen, T., Nithisopa, N., Pienroj, P., Jirachuphun, L., Watthanasirikrit, C., and Pornwiriyakul, N. "MARS: Mask Attention Refinement with Sequential Quadtree Nodes for Car Damage Instance Segmentation." In International Conference on Image Analysis and Processing (ICIAP 2023), pp. 28–38. Springer, 2023.

🚀 Quick Start

Requirements

Python 3 and the Python packages listed in requirements.txt (installed in step 3 of the installation below).

Installation

  1. Clone the Repository:
    git clone https://github.com/kaopanboonyuen/MARS.git
    cd MARS
    
  2. Set Up a Virtual Environment:
    python3 -m venv mars-env
    source mars-env/bin/activate  # For Windows: `mars-env\Scripts\activate`
    
  3. Install Dependencies:
    pip install -r requirements.txt
    
  4. Download Datasets:
    • Public Dataset (CarDD): Download here and place it in the data/ directory.
    • Private Dataset: Access restricted due to licensing with THAIVIVAT INSURANCE PCL.
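
After completing the steps above, you can optionally confirm that the deep learning backend is ready. The snippet below assumes the dependencies in requirements.txt include PyTorch (suggested by the .pth checkpoint used later in this README); adjust it if your setup differs.

    # Optional sanity check (assumes PyTorch is among the installed dependencies).
    import torch

    print("PyTorch version:", torch.__version__)
    print("CUDA available:", torch.cuda.is_available())  # False means inference runs on CPU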

🎯 How to Use

  1. Train the Model:
    python train.py --config configs/mars_config.yaml
    
  2. Evaluate the Model:
    python evaluate.py --checkpoint checkpoints/mars_best_model.pth --data data/test/
    
  3. Run Inference:
    python inference.py --image_path images/sample.jpg --output_dir results/
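
To segment many images at once, you can script the inference step. The sketch below is a hypothetical helper that loops over a folder and invokes the documented command with the same flags shown above; the images/ and results/ folder names are taken from the example paths and can be changed freely.

    # Hypothetical batch helper: calls the inference CLI shown above for every image
    # in a folder, using only the documented flags (--image_path, --output_dir).
    import subprocess
    from pathlib import Path

    IMAGE_DIR = Path("images")    # input folder (from the example path above)
    OUTPUT_DIR = Path("results")  # output folder (from the example path above)

    for image in sorted(IMAGE_DIR.glob("*.jpg")):
        subprocess.run(
            ["python", "inference.py",
             "--image_path", str(image),
             "--output_dir", str(OUTPUT_DIR)],
            check=True,  # stop if inference fails for any image
        )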
    

🌐 Live Demos

Experience MARS in action: Visit GitHub Pages

📂 Datasets

Our models were trained on both public and private datasets:

  • Public: CarDD (Car Damage Detection), 4,000 high-resolution images with over 9,000 annotated instances across six damage categories (see the Citation section below).
  • Private: a proprietary dataset from THAIVIVAT INSURANCE PCL; access is restricted due to licensing.
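
When post-processing predictions on the public dataset, it can help to map class IDs back to readable labels. The minimal lookup below uses the six CarDD category names listed in the Citation section; the numeric IDs are placeholders, so check the dataset's annotation files for the IDs it actually uses.

    # Illustrative label map for the six CarDD damage categories; the IDs are
    # placeholders and may not match the dataset's annotation files.
    CARDD_CLASSES = {
        0: "dent",
        1: "scratch",
        2: "crack",
        3: "glass shatter",
        4: "lamp broken",
        5: "tire flat",
    }

    def class_name(class_id: int) -> str:
        """Return a readable name for a predicted class ID."""
        return CARDD_CLASSES.get(class_id, "unknown")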

🔍 Citation

If you find our work helpful, please consider citing it:

@inproceedings{panboonyuen2023mars,
  title={MARS: Mask Attention Refinement with Sequential Quadtree Nodes for Car Damage Instance Segmentation},
  author={Panboonyuen, Teerapong and Nithisopa, Naphat and Pienroj, Panin and Jirachuphun, Laphonchai and Watthanasirikrit, Chaiwasut and Pornwiriyakul, Naruepon},
  booktitle={International Conference on Image Analysis and Processing},
  pages={28--38},
  year={2023},
  organization={Springer}
}

If you’re utilizing the public dataset Car Damage Detection (CarDD), which includes 4,000 high-resolution images and over 9,000 well-annotated instances across six damage categories (dent, scratch, crack, glass shatter, lamp broken, and tire flat), please make sure to cite the following paper:

@article{wang2023cardd,
  title={CarDD: A New Dataset for Vision-Based Car Damage Detection},
  author={Wang, Xinkuang and Li, Wenjing and Wu, Zhongcheng},
  journal={IEEE Transactions on Intelligent Transportation Systems},
  volume={24},
  number={7},
  pages={7202--7214},
  year={2023},
  publisher={IEEE}
}

📜 License

This project is licensed under the MIT License. For more details, see the LICENSE file.

📧 Contact

For inquiries or collaborations, feel free to reach out:

Demo gallery: MARS Demo 1–5.