Welcome to the official repository for MARS, an innovative deep learning model tailored for precise car damage instance segmentation. Leveraging advanced self-attention mechanisms with sequential quadtree nodes, MARS delivers superior segmentation masks, surpassing state-of-the-art methods like Mask R-CNN, PointRend, and Mask Transfiner.
In the realm of car insurance, accurately assessing vehicle damage is crucial. Traditional models often struggle with complex images and fine segmentation tasks. MARS (Mask Attention Refinement with Sequential Quadtree Nodes) addresses these challenges by recalibrating channel weights using a quadtree transformer, enhancing segmentation accuracy.
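For intuition only, here is a minimal sketch (assuming a PyTorch-style implementation) of how channel weights of a coarse mask feature map could be recalibrated by self-attention over a sequence of quadtree node features. The module name, tensor shapes, and the sigmoid gating used below are illustrative assumptions, not the actual MARS code in this repository.

```python
import torch
import torch.nn as nn


class QuadtreeChannelRecalibration(nn.Module):
    """Illustrative sketch: self-attention over sequential quadtree node
    features, whose output recalibrates the channel weights of a coarse
    mask feature map. Shapes and layers are assumptions, not MARS itself."""

    def __init__(self, channels: int = 256, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)
        # Per-channel gate in [0, 1] derived from the attended node features.
        self.gate = nn.Sequential(nn.Linear(channels, channels), nn.Sigmoid())

    def forward(self, node_feats, mask_feats):
        # node_feats: (B, N, C) features of N quadtree nodes in sequence order
        # mask_feats: (B, C, H, W) coarse mask features to refine
        attended, _ = self.attn(node_feats, node_feats, node_feats)
        attended = self.norm(attended + node_feats)       # residual + norm
        weights = self.gate(attended.mean(dim=1))         # (B, C) channel weights
        return mask_feats * weights[:, :, None, None]     # recalibrated features


# Toy usage with random tensors
module = QuadtreeChannelRecalibration(channels=256)
nodes = torch.randn(2, 64, 256)       # 64 quadtree nodes per image
coarse = torch.randn(2, 256, 28, 28)  # coarse mask feature map
print(module(nodes, coarse).shape)    # torch.Size([2, 256, 28, 28])
```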
MARS was showcased at the International Conference on Image Analysis and Processing 2023 (ICIAP 2023) in Udine, Italy.
If you're interested in exploring the academic work behind MARS, please check out the publication cited below.
To get started, clone the repository, create and activate a virtual environment, and install the dependencies listed in requirements.txt:
git clone https://github.com/kaopanboonyuen/MARS.git
cd MARS
python3 -m venv mars-env
source mars-env/bin/activate # For Windows: `mars-env\Scripts\activate`
pip install -r requirements.txt
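After installation, it can be worth verifying that the environment is usable for training, e.g. that PyTorch can see a GPU. The snippet below assumes PyTorch is among the packages installed from requirements.txt:

```python
# Quick sanity check of the new environment (assumes PyTorch is among the
# dependencies installed from requirements.txt).
import torch

print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```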
Place your dataset in the `data/` directory, then start training:

python train.py --config configs/mars_config.yaml
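The keys inside configs/mars_config.yaml are defined by this repository and are not reproduced here; if you prefer to inspect or tweak the configuration programmatically before launching training, a small wrapper along these lines can help (PyYAML and the subprocess call are assumptions for illustration):

```python
# Inspect the training configuration, then launch train.py with it.
# The keys inside mars_config.yaml are defined by the repository; nothing
# about their names is assumed here.
import subprocess

import yaml  # PyYAML, assumed to be available

with open("configs/mars_config.yaml") as f:
    cfg = yaml.safe_load(f)
print(cfg)  # the configuration as a plain Python dict

subprocess.run(
    ["python", "train.py", "--config", "configs/mars_config.yaml"],
    check=True,
)
```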
Evaluate a trained checkpoint on the test split:

python evaluate.py --checkpoint checkpoints/mars_best_model.pth --data data/test/
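The evaluation script reports the metrics implemented in this repository. Purely as a reminder of what instance-mask quality boils down to, here is a generic mask-IoU computation; it is not the metric code used by evaluate.py:

```python
import numpy as np


def mask_iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """IoU between two boolean instance masks of the same shape."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(intersection) / float(union) if union > 0 else 0.0


# Toy example: two overlapping 4x4 masks -> IoU = 2 / 6 ≈ 0.33
a = np.zeros((4, 4), dtype=bool); a[:2, :2] = True
b = np.zeros((4, 4), dtype=bool); b[:2, 1:3] = True
print(mask_iou(a, b))
```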
Run inference on a single image:

python inference.py --image_path images/sample.jpg --output_dir results/
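To apply the same inference command to an entire folder of images, a small wrapper like the one below works; the images/ folder, the .jpg extension, and the use of subprocess are assumptions, while the --image_path and --output_dir flags come from the command above:

```python
# Run inference.py (as invoked above) on every .jpg image in a folder and
# collect the results in results/. Folder name and extension are assumptions.
import pathlib
import subprocess

image_dir = pathlib.Path("images")
for image_path in sorted(image_dir.glob("*.jpg")):
    subprocess.run(
        ["python", "inference.py",
         "--image_path", str(image_path),
         "--output_dir", "results/"],
        check=True,
    )
```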
Experience MARS in action: Visit GitHub Pages
Our models were trained on both public and private datasets; the public CarDD dataset is described in the citation section below.
If you find our work helpful, please consider citing it:
@inproceedings{panboonyuen2023mars,
title={MARS: Mask Attention Refinement with Sequential Quadtree Nodes for Car Damage Instance Segmentation},
author={Panboonyuen, Teerapong and Nithisopa, Naphat and Pienroj, Panin and Jirachuphun, Laphonchai and Watthanasirikrit, Chaiwasut and Pornwiriyakul, Naruepon},
booktitle={International Conference on Image Analysis and Processing},
pages={28--38},
year={2023},
organization={Springer}
}
If you're utilizing the public dataset Car Damage Detection (CarDD), which includes 4,000 high-resolution images and over 9,000 well-annotated instances across six damage categories (dent, scratch, crack, glass shatter, lamp broken, and tire flat), please make sure to cite the following paper:
@article{wang2023cardd,
title={CarDD: A New Dataset for Vision-Based Car Damage Detection},
author={Wang, Xinkuang and Li, Wenjing and Wu, Zhongcheng},
journal={IEEE Transactions on Intelligent Transportation Systems},
volume={24},
number={7},
pages={7202--7214},
year={2023},
publisher={IEEE}
}
This project is licensed under the MIT License. For more details, see the LICENSE file.
For inquiries or collaborations, feel free to reach out: