Algorithm for detection and classification of objects against an inhomogeneous background for optoelectronic systems


DOI: 10.34759/trd-2023-129-26

Authors

Chernikov A. A.

Scientific Research Institute of Electronic Devices, Novosibirsk, Russia

e-mail: ancher1994@gmail.com

Abstract

This article considers the automatic detection and classification of unmanned aerial vehicles (UAVs) and armored vehicles in the video stream of an optoelectronic system. The algorithm detects and classifies objects in real time against a non-uniform background. Although a fairly large number of methods for detecting and localizing objects in images already exist, solving this problem in full remains laborious: as a rule, it requires manual work by expert operators, which is time-consuming and can reduce the efficiency of detection.

The purpose of the presented work is to improve the efficiency of detecting complex objects of interest against the background for their subsequent classification. To detect the probable location of an object, a two-dimensional wavelet transform and the DBSCAN spatial clustering algorithm are used. A convolutional neural network was trained to classify the detected objects; the training set was composed of real and simulated images of the objects. The algorithm for automatic detection and classification of objects was developed in Python using the OpenCV library. An illustrative sketch of the detection stage is given below.
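
A minimal sketch of the detection stage described above, assuming PyWavelets for the two-dimensional wavelet transform and scikit-learn for DBSCAN; the Haar wavelet, the thresholding rule, and the clustering parameters are illustrative choices rather than the authors' exact implementation (the abstract only states that the algorithm was written in Python with OpenCV).

import numpy as np
import pywt
from sklearn.cluster import DBSCAN

def detect_candidates(frame_gray, wavelet="haar", k_sigma=3.0,
                      eps_px=6.0, min_samples=5):
    """Return bounding boxes (x0, y0, x1, y1) of candidate objects."""
    # Single-level 2D wavelet decomposition: the slowly varying background
    # concentrates in the approximation band, while small objects and edges
    # respond in the detail sub-bands.
    _, (cH, cV, cD) = pywt.dwt2(frame_gray.astype(np.float32), wavelet)
    detail = np.abs(cH) + np.abs(cV) + np.abs(cD)

    # Adaptive threshold on the detail energy; k_sigma controls sensitivity.
    mask = detail > detail.mean() + k_sigma * detail.std()
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return []

    # Detail sub-bands are half-resolution, so scale coordinates back
    # to full-resolution pixel coordinates.
    scale = frame_gray.shape[0] / detail.shape[0]
    points = np.column_stack((xs, ys)) * scale

    # DBSCAN groups the responding pixels into spatially compact clusters;
    # label -1 marks isolated noise responses.
    labels = DBSCAN(eps=eps_px, min_samples=min_samples).fit_predict(points)
    boxes = []
    for label in set(labels) - {-1}:
        pts = points[labels == label]
        x0, y0 = pts.min(axis=0)
        x1, y1 = pts.max(axis=0)
        boxes.append((int(x0), int(y0), int(x1), int(y1)))
    return boxes

if __name__ == "__main__":
    # Synthetic demonstration: a smooth horizontal gradient (non-uniform
    # background) with one small bright object, standing in for an IR frame.
    frame = np.tile(np.linspace(40.0, 80.0, 256, dtype=np.float32), (256, 1))
    frame[121:127, 121:127] += 120.0
    print(detect_candidates(frame))

In this toy example the detail sub-bands respond only at the boundary of the small bright object, so DBSCAN returns a single cluster whose bounding box localizes it; in the article's setting the same idea is applied to infrared video frames.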

The paper concludes with the results of an experimental study of the developed algorithm on simulated and real images in the infrared range. The study combines video image processing methods for object detection with a convolutional neural network for object classification; an illustrative classifier sketch is given below. The proposed algorithm can be used by an optoelectronic system operating in the infrared range to detect objects against an inhomogeneous background in real time.
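
The abstract does not specify the classifier architecture, so the following is a minimal, assumption-laden sketch of a convolutional classifier for the detected regions: the TensorFlow/Keras framework, the 64x64 single-channel (infrared) input crops, the two classes (UAV and armored vehicle), and all layer sizes are illustrative choices, not the network reported in the article.

import tensorflow as tf
from tensorflow.keras import layers, models

def build_classifier(input_shape=(64, 64, 1), num_classes=2):
    # Small convolutional network: two conv/pooling stages followed by a
    # dense head; dropout is used for regularization (cf. reference [3]).
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(16, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Hypothetical usage with a labeled set of real and simulated object crops:
# model = build_classifier()
# model.fit(train_crops, train_labels, validation_split=0.2, epochs=20)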

Keywords:

optoelectronic system, unmanned aerial vehicle, armored vehicles, neural networks

References

  1. Alpatov B.A., Blokhin A.N., Murav’ev V.S. Tsifrovaya obrabotka signalov, 2010, no. 4, pp. 12–17.
  2. Krizhevsky A., Sutskever I., Hinton G.E. ImageNet Classification with Deep Convolutional Neural Networks, Advances in Neural Information Processing Systems, 2012, vol. 25 (2). DOI:10.1145/3065386
  3. Srivastava N., Hinton G., Krizhevsky A., Sutskever I., Salakhutdinov R. Dropout: A Simple Way to Prevent Neural Networks from Overfitting, Journal of Machine Learning Research, 2014, vol. 15 (1), pp. 1929–1958.
  4. Guzenko O.B., Katulev A.N., Khramichev A.A., Yagol’nikov S.V. Avtomaticheskoe obnaruzhenie i soprovozhdenie dinamicheskikh obektov na izobrazheniyakh, formiruemykh optiko-elektronnymi priborami v usloviyakh apriornoi neopredelennosti. Metody i algoritmy (Automatic detection and tracking of dynamic objects in images generated by optoelectronic devices under conditions of a priori uncertainty. Methods and algorithms): monografiya. Moscow, Radiotekhnika, 2015, 280 p.
  5. Apollonov D.V., Bibikova K.I., Shibaev V.M., Efimova I.E. Trudy MAI, 2022, no. 122. URL: https://trudymai.ru/eng/published.php?ID=164299. DOI: 10.34759/trd-2022-122-23
  6. Trusfus M.V., Abdullin I.N. Trudy MAI, 2021, no. 116. URL: https://trudymai.ru/eng/published.php?ID=121099. DOI: 10.34759/trd-2021-116-13
  7. Gainanov D.N., Chernavin N.P., Chernavin P.F., Chernavin F.P., Rasskazova V.A. Trudy MAI, 2019, no. 109. URL: https://trudymai.ru/eng/published.php?ID=111419. DOI: 10.34759/trd-2019-109-20
  8. Wang K. et al. Detection of infrared small targets using feature fusion convolutional network, IEEE Access, 2019, pp. 146081–146092. DOI:10.1109/ACCESS.2019.2944661
  9. Shuang-Chen Wu, Zheng-Rong Zuo. Small target detection in infrared images using deep convolutional neural networks, Journal Infrared Millimeter Waves, 2019, vol. 38, issue 3. URL: https://www.researching.cn/articles/OJ89ba8c9b029a4965
  10. LeCun Y., Bengio Y., Hinton G. Deep learning, Nature, 2015, vol. 521, pp. 436–444. DOI:10.1038/nature14539
  11. Nguyen A., Yosinski J., Clune J. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images, In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015, pp. 427–436. DOI:10.1109/CVPR.2015.7298640
  12. He K., Zhang X., Ren S., Sun J. Deep residual learning for image recognition, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, 2016, pp. 770–778. DOI:10.1109/CVPR.2016.90
  13. Zhou B., Khosla A., Lapedriza A., Oliva A., Torralba A. Learning Deep Features for Discriminative Localization, In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June—1 July 2016; pp. 2921–2929. DOI:10.1109/CVPR.2016.319
  14. Boikov V.A., Kolyuchkin V.Ya. Vestnik MGTU im. N.E. Baumana. Seriya: Priborostroenie, 2017, no. 5, pp. 4–13. DOI: 10.18698/0236-3933-2017-5-4-13
  15. Zhao B. et al. Object detection based on multi-channel deep CNN, 14th International Conference on Computational Intelligence and Security (CIS), IEEE Computer Society, 2018, pp. 164–168. DOI:10.1109/CIS2018.2018.00043
  16. Druki A.A., Spitsyn V.G., Boltova Yu.A., Bashlykov A.A. Semantic segmentation of earth remote sensing data using neural network algorithms, Bulletin of the Tomsk Polytechnic University. Series: Engineering of Georesources, 2018, vol. 329 (1), pp. 59–68.
  17. Voulodimos A., Doulamis N., Doulamis A., Protopapadakis E. Deep Learning for Computer Vision: A Brief Review, Computational Intelligence and Neuroscience, 2018, pp. 1–13. DOI:10.1155/2018/7068349
  18. Valdenegro-Toro M. Learning Objectness from Sonar Images for Class-Independent Object Detection, 2019 European Conference on Mobile Robots (ECMR), Prague, Czech Republic, 4–6 September 2019, pp. 1–6. DOI:10.1109/ECMR.2019.8870959
  19. Rashid T. Sozdaem neironnuyu set’ (Creating a neural network), Saint Petersburg, Al’fa-kniga, 2017, 274 p.
  20. Huu Thu Nguyen et al. Multiple Object Detection Based on Clustering and Deep Learning Methods, Sensors, 2020, vol. 20 (16). DOI:10.3390/s20164424
  21. Antoine d’Acremont et al. CNN-Based Target Recognition and Identification for Infrared Imaging in Defense Systems, Sensors, 2019, vol. 19 (9), 2040. DOI:10.3390/s19092040
  22. Khismatov I.F. Trudy MAI, 2019, no. 108. URL: https://trudymai.ru/eng/published.php?ID=109572. DOI: 10.34759/trd-2019-108-18
  23. Chernikov A.A., Purtov A.I., Prokof’ev I.V., Yushchenko V.P. Izvestiya vysshikh uchebnykh zavedenii. Povolzhskii region. Tekhnicheskie nauki, 2020, no. 4 (56), pp. 38–45. DOI: 10.21685/2072-3059-2020-4-4
  24. Chernikov A.A., Legkii V.N. Gagarinskie chteniya — 2020: tezisy dokladov, Moscow, Izd-vo MAI, 2020, p. 848.
  25. Chernikov A.A., Legkii V.N. XIX Vserossiiskaya nauchno-tekhnicheskaya konferentsiya studentov, magistrantov, aspirantov i molodykh uchenykh «Tekhnika XXI veka glazami molodykh uchenykh i spetsialistov»: sbornik statei, Tula, Izd-vo TulGU, 2021, 335 p.
  26. Chernikov A.A. XIV Vserossiiskii mezhotraslevoi molodezhnyi konkurs nauchno-tekhnicheskikh rabot i proektov «Molodezh’ i budushchee aviatsii i kosmonavtiki»: sbornik trudov, Moscow, Izd-vo Pero, 2022, p. 151.
