Vehicle Re-identification System using Residual Network with Instance-Batch Normalization


Wei Jie Low
Kah-Ong Michael Goh
Check-Yee Law
Connie Tee
Yong-Wee Sek
Md Ismail Hossen

Abstract

Vehicle Re-identification (Re-ID) has become increasingly important due to the growing number of vehicles on the road and its potential to address traffic-related challenges. As a result, there is a growing need for efficient methods to track and identify vehicles across multiple traffic cameras. One of the biggest challenges of this task is the variation in vehicle appearance across cameras, because the same vehicle can look significantly different when captured from different angles and viewpoints. Furthermore, current vehicle Re-ID solutions typically require extensive coding knowledge, making them inaccessible to many potential users. Therefore, we focus on developing a user-friendly software application that simplifies the entire Re-ID workflow, including dataset preparation and data preprocessing with YOLO, model training with ResNet-IBN, performance evaluation, and visualization of results. The application provides a complete pipeline that enables users to perform vehicle Re-ID tasks without requiring advanced programming skills. The experimental results show that the ResNet-IBN model achieved the best performance on the custom MMUVD_1500 dataset, with an mAP of 87.63% and a Rank@1 accuracy of 84.68%. Through the application interface, users can submit query vehicle images and receive matched gallery images captured from different camera viewpoints, which makes it easier to track vehicles across multiple locations, enhances usability, and broadens the accessibility of vehicle Re-ID. The final outcome is a complete software solution with a user-friendly interface that allows users to perform vehicle Re-ID tasks effortlessly.
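
To make the evaluation step above concrete, the following minimal sketch shows how Rank@1 and mAP are commonly computed for a Re-ID split once query and gallery embeddings are available. This is an illustrative NumPy sketch rather than the authors' implementation: the embedding matrices are assumed to come from a trained backbone such as the ResNet-IBN model described above, and the function name reid_metrics and the synthetic data in the usage lines are hypothetical.

import numpy as np

def reid_metrics(query_feats, query_ids, gallery_feats, gallery_ids):
    # query_feats / gallery_feats: L2-normalised embeddings of shape (Nq, D) / (Ng, D).
    # query_ids / gallery_ids: integer vehicle identity labels.
    sims = query_feats @ gallery_feats.T               # cosine similarity, shape (Nq, Ng)
    ranks = np.argsort(-sims, axis=1)                  # gallery indices sorted best-first per query
    rank1_hits, average_precisions, valid = 0, [], 0
    for q in range(len(query_ids)):
        matches = gallery_ids[ranks[q]] == query_ids[q]    # true-match flags in ranked order
        if not matches.any():                              # no true match in the gallery: skip query
            continue
        valid += 1
        rank1_hits += int(matches[0])
        hit_positions = np.where(matches)[0]               # 0-based ranks of the true matches
        precisions = (np.arange(len(hit_positions)) + 1) / (hit_positions + 1)
        average_precisions.append(precisions.mean())       # average precision for this query
    return rank1_hits / valid, float(np.mean(average_precisions))

# Usage with synthetic embeddings standing in for features from the trained backbone.
rng = np.random.default_rng(0)
dim = 2048                                             # typical ResNet-50 feature dimension
gallery_ids = np.repeat(np.arange(10), 5)              # 10 identities, 5 gallery images each
query_ids = np.arange(10)
gallery_feats = rng.normal(size=(len(gallery_ids), dim))
query_feats = gallery_feats[::5] + 0.1 * rng.normal(size=(10, dim))   # each query near one gallery image
gallery_feats /= np.linalg.norm(gallery_feats, axis=1, keepdims=True)
query_feats /= np.linalg.norm(query_feats, axis=1, keepdims=True)
rank1, mean_ap = reid_metrics(query_feats, query_ids, gallery_feats, gallery_ids)
print(f"Rank@1 = {rank1:.2%}, mAP = {mean_ap:.2%}")

In the actual application, the same ranked gallery indices would also drive the visualization step, returning the matched gallery images for each query through the interface.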

Article Details

How to Cite
Low, W. J., Goh, K.-O. M., Law, C.-Y., Tee, C., Sek, Y.-W., & Md Ismail Hossen. (2026). Vehicle Re-identification System using Residual Network with Instance-Batch Normalization. Journal of Informatics and Web Engineering, 5(1), 267–282. https://doi.org/10.33093/jiwe.2026.5.1.17
Section
Regular issue
