Comparative analysis of neural network architectures in the task of detection and identification of target and velocity jammer signals


Authors

Koval N. A.

Moscow Aviation Institute (National Research University), 4, Volokolamskoe shosse, Moscow, A-80, GSP-3, 125993, Russia

e-mail: niki-kov@yandex.ru

Abstract

As is known, radar signals are subject to various kinds of interference. A special place is occupied by interference that imitates the target signal in order to deceive automatic tracking systems (leading-astray interference). The means of creating leading-astray interference are capable of generating signals that smoothly introduce false information about the target motion parameters (such as Doppler frequency or delay time), which ultimately leads to automatic tracking failure [1-2].

Velocity leading-astray jammers (VJ) pose the greatest danger to onboard Doppler radar stations. A possible solution to the problem of countering a VJ is detecting the jamming effect and distinguishing the jamming and target signals at the initial stage of the jammer operation, which would prevent tracking failure and ensure that reliable information about the target is obtained. The presented article studies the capabilities of artificial neural networks (deep learning) for solving this problem. The idea is that, in the process of learning, the neural networks educe regularities in the characteristic dynamics of the spectrum of the signal received by the radar system under the impact of the VJ.

To educe the characteristic dynamics, the temporal interrelation of the spectra obtained at successive radar operating cycles must be accounted for, which leads explicitly to a time-sequence processing task. Since the VJ signal detection task is understood as educing the very fact of the interference impact, in the context of machine learning it can be represented as a spectrum classification task (transforming the spectrum of the signal received at every cycle of the radar operation into a VJ presence/absence mark). The signal frequency estimation, in its turn, is reduced to determining the index of the Doppler filter; the filters in essence perform a sampling of the frequency domain, i.e. each filter is assigned a certain frequency range of the signal being analyzed at a given time instant. Thus, this task can be represented as spectrum regression (converting the spectrum into the number of the Doppler filter of the target signal).
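The mapping between a Doppler frequency and a filter index can be sketched as a uniform partition of the frequency domain. This is a minimal illustration, not the article's implementation: the filter count, the pulse-repetition frequency (PRF) and the assumption of uniform, non-overlapping filter bands are hypothetical parameters introduced here for the example.

```python
# Hypothetical sketch: a Doppler filter bank as uniform frequency-domain sampling.
# Assumed (not from the article): n_filters uniform bands covering one PRF interval.

def doppler_filter_index(f_doppler_hz: float, prf_hz: float, n_filters: int) -> int:
    """Return the index of the filter whose frequency band contains f_doppler_hz."""
    bin_width = prf_hz / n_filters          # frequency range covered by one filter
    return int(f_doppler_hz // bin_width)   # index of the containing band

# Example: 64 filters over a 10 kHz PRF -> each filter covers 156.25 Hz,
# so a 2000 Hz Doppler shift falls into filter number 12.
idx = doppler_filter_index(2000.0, 10000.0, 64)
```

Under this framing, the regression network learns to output such an index directly from the received spectrum, without an explicit frequency measurement.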

The following architectures intended for time-sequence processing are studied within the framework of this article: a classical convolutional network, CNN (layers of one-dimensional convolution are employed for working with time sequences) [7]; a temporal convolutional network, TCN [8]; recurrent networks based on long short-term memory (LSTM) layers [9]; and networks based on gated recurrent units (GRU) [10]. Several models of each architecture were trained with different numbers of layers, layer sizes, etc. To assess the quality of the trained models, the root mean square error (RMSE) was applied for the regression problem and the F-measure (F1-score) for the classification problem.
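For reference, the two quality metrics named above can be computed as follows. This is a plain definition of standard RMSE and binary F1, not the article's evaluation code; the toy label vectors in the test are illustrative only.

```python
import math

def rmse(y_true, y_pred):
    """Root of the mean squared error, used for the filter-index regression task."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def f1_score(y_true, y_pred):
    """F-measure (harmonic mean of precision and recall) for binary
    VJ presence (1) / absence (0) marks."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0
```

An RMSE of 1.23, as reported below for LSTM, thus means the predicted filter number deviates from the true one by slightly more than one filter on average.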

The comparison of the accuracy of the considered architectures revealed that CNN displayed the worst result: 6.3 RMSE and 0.986 F1-score. LSTM and GRU appeared to be the most accurate in both tasks (1.23 RMSE and 0.997 F1-score, and 1.27 RMSE and 0.995 F1-score, respectively), and in the classification task they apparently reached the accuracy limit. TCN performed slightly worse (1.45 RMSE and 0.994 F1-score); moreover, the network size required to achieve results comparable to LSTM and GRU makes the use of TCN impractical.

For the considered task, the author recommends employing an LSTM or GRU network with two layers of 100 hidden units for regression, and an LSTM with two layers of 25 hidden units for classification. The choice between the architectures is stipulated by the required level of accuracy and the hardware limitations of the counteraction algorithm being developed: the LSTM is slightly more accurate, but due to its structure it has more trainable parameters for the same number of hidden units, which leads to higher memory use and lower computing speed compared to GRU.
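The parameter-count difference between LSTM and GRU follows from their gate structure: an LSTM cell has 4 gate blocks, a GRU cell has 3, each block combining input weights, recurrent weights and a bias. The sketch below counts parameters under a single-bias-vector convention (some frameworks, e.g. PyTorch, store two bias vectors per gate, which scales the bias term but not the 4:3 ratio); the input size of 1 in the example is an assumption, not a figure from the article.

```python
def lstm_params(input_size: int, hidden: int, layers: int) -> int:
    """Trainable parameters of a stacked LSTM: 4 gate blocks per cell,
    each with input weights, recurrent weights and one bias vector."""
    total, x = 0, input_size
    for _ in range(layers):
        total += 4 * (hidden * x + hidden * hidden + hidden)
        x = hidden  # subsequent layers take the previous layer's output
    return total

def gru_params(input_size: int, hidden: int, layers: int) -> int:
    """Same count for a GRU, which has 3 gate blocks instead of 4."""
    total, x = 0, input_size
    for _ in range(layers):
        total += 3 * (hidden * x + hidden * hidden + hidden)
        x = hidden
    return total

# Two layers of 100 hidden units (scalar input assumed for illustration):
# the LSTM carries roughly 4/3 the parameters of the equivalent GRU.
lstm_n = lstm_params(1, 100, 2)
gru_n = gru_params(1, 100, 2)
```

This ratio is what drives the memory and speed trade-off mentioned above when accuracy requirements allow either architecture.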

Thus, the article demonstrates that neural networks are able to solve the said problems quite accurately. In the course of the study, a comparative analysis of neural network architectures designed for processing time sequences was performed, and architectures suitable for further integration into onboard digital signal processing algorithms were identified.

The author further intends to study the selected architectures in a more complicated jamming situation.

Keywords:

velocity jammer, radar signal, deep learning, neural networks, spectrum

References

  1. Leonov A.I., Fomichev K.I. Monoimpul'snaya radiolokatsiya. (Monopulse radiolocation), Moscow, Sovetskoe Radio, 1970, 392 p.

  2. Berikashvili V.Sh., Cherepanov A.K. Radiotekhnicheskie sistemy izvlechenii i obrabotki informatsii (Radio engineering systems of information retrieval and processing), Moscow, MGTU MIREA, 2011, 272 p.

  3. Bogdanov A.V., Zakomoldin D.V., Dokuchaev Ya.S., Novichënok V.A., Kochetov I.V. Zhurnal Sibirskogo federal'nogo universiteta. Tekhnika i tekhnologii, 2019, vol. 12, no. 1, pp. 30-40. DOI: 10.17516/1999-494X-0103

  4. Xiong W., Wang X., Zhang G. Cognitive waveform design for anti-velocity deception jamming with adaptive initial phases, 2016 IEEE Radar Conference (RadarConf), Philadelphia, PA, USA, 2016, pp. 1-5. DOI: 10.1109/RADAR.2016.7485306

  5. Ya Yang, Jian Wu, Guolong Cui, Liang Li, Lingjiang Kong and Yulin Huang. Optimized phase-coded waveform design against velocity deception, 2015 IEEE Radar Conference (RadarCon), Arlington, VA, 2015, pp. 0400-0404. DOI: 10.1109/RADAR.2015.7131032

  6. Liu Z., Sui J., Wei Z., Li X. A Sparse-Driven Anti-Velocity Deception Jamming Strategy Based on Pulse-Doppler Radar with Random Pulse Initial Phases, Sensors, 2018, vol. 18, pp. 1249. DOI: 10.3390/s18041249

  7. Podstrigaev A.S., Smolyakov A.V. Trudy MAI, 2020, no. 114. URL: https://trudymai.ru/eng/published.php?ID=118984. DOI: 10.34759/trd-2020-114-11

  8. Malygin I.V., Bel'kov S.A., Tarasov A.D., Usvyatsov M.R. Trudy MAI, 2017, no. 96. URL: https://trudymai.ru/eng/published.php?ID=85797

  9. Efimov E.N., Shevgunov T.Ya. Trudy MAI, 2015, no. 82. URL: https://trudymai.ru/eng/published.php?ID=58786

  10. Efimov E.N., Shevgunov T.Ya. Trudy MAI, 2012, no. 51. URL: https://trudymai.ru/eng/published.php?ID=29159

  11. Nikolenko S., Kadurin A., Arkhangel'skaya E. Glubokoe obuchenie (Deep Learning), Sankt-Petersburg, Piter, 2018, 480 p.

  12. Bai Shaojie, J. Zico Kolter, and Vladlen Koltun. An Empirical Evaluation of Generic Convolutional and Recurrent Networks for Sequence Modeling, Computer Science, 2018. URL: https://doi.org/10.48550/arXiv.1803.01271

  13. Hochreiter S., Schmidhuber J. Long short-term memory, Neural computation, 1997, vol. 9 (8), pp. 1735–1780. DOI: 10.1162/neco.1997.9.8.1735

  14. Cho Kyunghyun, Bart Van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation, Computer Science, 2014. URL: https://doi.org/10.48550/arXiv.1406.1078

  15. Bishop C.M. Pattern Recognition and Machine Learning, Springer, New York, NY, 2006.

  16. Kingma Diederik, Jimmy Ba. Adam: A method for stochastic optimization, Computer Science, 2014. URL: https://doi.org/10.48550/arXiv.1412.6980

  17. Murphy K.P. Machine Learning: A Probabilistic Perspective. The MIT Press, Cambridge, Massachusetts, 2012.

  18. Pascanu R., Mikolov T., Bengio Y. On the difficulty of training recurrent neural networks, Proceedings of the 30th International Conference on Machine Learning, 2013, vol. 28 (3), pp. 1310–1318.

  19. Bottou Léon, Bousquet Olivier. The Tradeoffs of Large Scale Learning, Conference: Advances in Neural Information Processing Systems 20, Proceedings of the Twenty-First Annual Conference on Neural Information Processing Systems, Vancouver, British Columbia, Canada, December 3-6, 2007.

  20. Bottou Léon. Online Algorithms and Stochastic Approximations. Online Learning and Neural Networks, Cambridge University Press, Cambridge, UK, 1998.
