Machine learning methods in classification of radio signals
Radio engineering, including TV systems and devices
Authors
1*, 1**, 1***, 2****
1. Ural Federal University named after the first President of Russia B.N. Yeltsin, 19, Mira str., Ekaterinburg, 620002, Russia
2. Moscow Institute of Physics and Technology (National Research University), 9, Institutskiy per., Dolgoprudny, Moscow region, 141701, Russia
*e-mail: pit_pit2@mail.ru
**e-mail: buf2@mail.ru
***e-mail: alex@chrns.com
****e-mail: m.usvyatsov@gmail.com
Abstract
This paper addresses the problem of recognizing received encoded radio signal sequences. Traditionally, correlators or matched filters are used in communication systems to detect and process noise-like signals; both of these models rely on a threshold detection parameter. The use of a neural network is proposed to improve the quality of signal recognition when the noise characteristics of the environment are unknown. The neural network is a non-linear model with a relatively large number of parameters that examines training examples during the learning phase and attempts to reproduce the relationship between the examples and their responses. To use the neural network, the original problem must be reformulated as an optimization problem: a quality functional to be optimized is introduced, and an optimization method for this functional is chosen. The back propagation algorithm is used to select the neural network parameters that achieve the optimum of the quality functional, and the stochastic gradient descent algorithm, driven by the quality functional, is used to compute the parameter updates of the neural network. For convenience, only digital signals are considered in this paper; nevertheless, the described method can also be applied to a continuous signal, since digitizing a continuous signal reduces the task to the signal detection problem formulated above. In each cycle of the receiver's operation, a signal arrives at its input.
The signal is a fixed-length sequence that can either be similar to the protocol-defined signal or differ from it significantly. Therefore, the task can be formulated as binary classification of the signal received in each cycle of the receiver's operation as either a valid signal or noise.
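The traditional baseline described above can be summarized in a short sketch. The following Python fragment is an illustrative assumption rather than code from the paper: it correlates a received fixed-length sequence with a known reference code and compares the normalized correlation against a threshold. The choice of the 13-element Barker code as the reference, the threshold value, and the noise level are hypothetical parameters introduced only for the example.

```python
import numpy as np

# Hypothetical reference code: the 13-element Barker code (see Barker, 1953).
BARKER_13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], dtype=float)

def matched_filter_detect(received, reference=BARKER_13, threshold=0.7):
    """Classify a fixed-length received sequence as signal (True) or noise (False).

    The decision statistic is the normalized correlation between the received
    sequence and the reference code; the threshold is an assumed tuning
    parameter of the detector.
    """
    r = np.asarray(received, dtype=float)
    # Normalized cross-correlation at zero lag (sequences are aligned per cycle).
    statistic = np.dot(r, reference) / (np.linalg.norm(r) * np.linalg.norm(reference) + 1e-12)
    return statistic > threshold

# Example: a noisy copy of the code versus pure noise (assumed noise level).
rng = np.random.default_rng(0)
noisy_signal = BARKER_13 + 0.5 * rng.standard_normal(13)
pure_noise = rng.standard_normal(13)
print(matched_filter_detect(noisy_signal))  # typically True
print(matched_filter_detect(pure_noise))    # typically False
```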
It is assumed that the quality of recognition will be better than with the traditional methods because, during training, the neural network is able to learn the specific features of the noise and then exploit the obtained model at the signal classification stage.
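As a hedged illustration of this formulation, and not the authors' implementation, the sketch below trains a small fully connected network to classify fixed-length sequences as signal or noise: the quality functional is the binary cross-entropy, its gradient is obtained by backpropagation, and the parameters are updated by stochastic gradient descent. The network size, learning rate, and synthetic data generator are assumptions made only for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
CODE = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], dtype=float)  # assumed reference code

def make_batch(size=64, noise_std=0.8):
    """Generate labelled examples: noisy copies of the code (label 1) or pure noise (label 0)."""
    labels = rng.integers(0, 2, size)
    x = rng.standard_normal((size, CODE.size)) * noise_std
    x[labels == 1] += CODE
    return x, labels.astype(float)

# One hidden layer; parameters initialized with small random values (assumed architecture).
W1 = rng.standard_normal((CODE.size, 16)) * 0.1
b1 = np.zeros(16)
W2 = rng.standard_normal(16) * 0.1
b2 = 0.0
lr = 0.05  # assumed learning rate

for step in range(2000):
    x, y = make_batch()
    # Forward pass.
    h = np.tanh(x @ W1 + b1)
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))        # predicted probability of "signal"
    # Quality functional: mean binary cross-entropy.
    loss = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    # Backpropagation of the gradient of the quality functional.
    dz2 = (p - y) / y.size
    dW2 = h.T @ dz2
    db2 = dz2.sum()
    dh = np.outer(dz2, W2) * (1 - h ** 2)
    dW1 = x.T @ dh
    db1 = dh.sum(axis=0)
    # Stochastic gradient descent parameter update.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

# Evaluate the trained classifier on a fresh batch.
x_test, y_test = make_batch(512)
pred = (1.0 / (1.0 + np.exp(-(np.tanh(x_test @ W1 + b1) @ W2 + b2)))) > 0.5
print("accuracy:", np.mean(pred == (y_test == 1)))
```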
A diagram of an experimental setup that would allow this assumption to be verified is also provided in this paper.
Keywords:
neural network, signal processing, m-sequence, Barker codes, correlator
References
- Barker R.H. Group synchronizing of binary digital sequences, Communication theory, Butterworth, London, 1953, pp. 273-287.
- Harris D.M., Harris S.L. Digital Design and Computer Architecture, 2nd ed., Morgan Kaufmann, 2012, 712 p.
- Forney G. Generalized minimum distance decoding, IEEE Transactions on Information Theory, 1966, vol. 12, no. 2, pp. 125-131.
- Rüschendorf L. The Wasserstein distance and approximation theorems, Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete, 1985, vol. 70, no. 1, pp. 117-129.
- Welch L. Lower bounds on the maximum cross correlation of signals, IEEE Transactions on Information Theory, 1974, vol. 20, no. 3, pp. 397-399.
- Amari S. Backpropagation and stochastic gradient descent method, Neurocomputing, 1993, vol. 5, no. 4-5, pp. 185-196.
- Vorontsov K.V. Matematicheskie metody obucheniya po pretsedentam (teoriya obucheniya mashin) [Mathematical methods of learning from precedents (machine learning theory)], available at: http://www.machinelearning.ru/wiki/images/6/6d/Voron-ML-1
- Chen T., Chen H. Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems, IEEE Transactions on Neural Networks, 1995, vol. 6, no. 4, pp. 911-917.
- Shore J., Johnson R. Axiomatic derivation of the principle of maximum entropy and the principle of minimum cross-entropy, IEEE Transactions on Information Theory, 1980, vol. 26, no. 1, pp. 26-37.
- Gurakov M.A., Krivonosov E.O., Kostyuchenko E.Yu. Trudy MAI, 2016, no. 86, available at: http://trudymai.ru/eng/published.php?ID=67851
- Efimov E.N., Shevgunov T.Ya. Trudy MAI, 2015, no. 82, available at: http://trudymai.ru/eng/published.php?ID=58786
- Filatov V.I. Trudy MAI, 2015, no. 81, available at: http://trudymai.ru/eng/published.php?ID=57889
- Sukhanov N.V. Trudy MAI, 2013, no. 65, available at: http://trudymai.ru/eng/published.php?ID=36013
- Tyumentsev Yu.V., Kozlov D.S. Trudy MAI, 2012, no. 52, available at: http://trudymai.ru/eng/published.php?ID=29421
- Efimov E.N., Shevgunov T.Ya. Trudy MAI, 2012, no. 51, available at: http://trudymai.ru/eng/published.php?ID=29159