Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits
Abstract: Growing communication needs demand a higher data transmission rate in 5G NR. In the 3rd Generation Partnership Project (3GPP) standard, frames are used to schedule the data transferred between the cellular base station (gNB) and the user equipment (UE). These frames are divided into slots, and a fixed number of slots is used for uplink and downlink. In the downlink, several slots are used for the CSI-RS report, which contains the best narrow beams and their power (RSRP). In this thesis, cell downlink capacity is improved by using supervised learning algorithms. The narrow beam is selected using machine learning, so the scheduled downlink slots are no longer needed for reporting and can instead be used for data transmission, resulting in improved cell capacity. Three supervised learning algorithms, Support Vector Machines (SVM), k-Nearest Neighbor (k-NN), and Logistic Regression (LR), are compared by collecting data with the 5G simulator at Ericsson AB, Lund, and training them to classify narrow beams. The SVM algorithm is found to outperform the other algorithms, with an accuracy of 78.5% plus 19.6% neighbor-beam selection. The accuracy of the algorithm varies depending on the scenario and the quantity of training data used. Plugging the SVM algorithm into the simulator, the average throughput of multiple users (2, 5, 10, 20, 30, and 40) is collected while varying the user speed (1 m/s, 5 m/s, and 10 m/s) and the SSB interval (20 ms and 40 ms). For a 40 ms SSB interval, 40 users, and a user speed of 10 m/s, the average throughput gain is found to be 46.6%. Similarly, for a 20 ms SSB interval, 30 users, and a user speed of 10 m/s, the average throughput gain is 21.15%.
Keywords: 5G NR, 3GPP, Beamforming, Supervised learning, Machine learning, SVM, Multi-class classification (MCC).
Aim and Objective: The goal of this thesis is to investigate the use of machine learning algorithms for optimizing the beam tracking process. Machine learning can make beam tracking more intelligent, robust, and less resource-demanding. Different machine learning methods are explored to find the best possible beam for a user, and the performance of each is compared with the baseline (3GPP) algorithm to find the most suitable one. The machine learning algorithms are trained using the beam selection result of the baseline algorithm as input, with a moving UE used to extract the measurements. Each algorithm is then evaluated against the performance of the baseline algorithm. The best performing algorithm is plugged into the Ericsson simulator to select the best narrow beam and to collect the KPIs. Finally, the gain in total throughput is evaluated for different numbers of users moving at different speeds.
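The training setup described above can be sketched as follows. This is a hypothetical illustration, not the thesis's actual data pipeline: the feature dimensions, RSRP ranges, and the way the baseline beam choice is encoded are all assumptions made for the sketch, with synthetic random values standing in for simulator measurements.

```python
import numpy as np

# Hypothetical sketch of building a supervised dataset from the baseline
# (3GPP) beam selection. Each row holds RSRP-style measurements for a UE
# position (features, synthetic here); the label is the index of the narrow
# beam that the baseline algorithm selected (one of 12 classes).
rng = np.random.default_rng(1)
n_measurements = 100   # samples collected from a moving UE (assumed count)
n_features = 8         # measurement values per sample (assumed dimension)
n_narrow_beams = 12    # number of narrow-beam classes, as in the thesis

rsrp = rng.uniform(-120.0, -70.0, size=(n_measurements, n_features))   # dBm-like values
baseline_choice = rng.integers(0, n_narrow_beams, size=n_measurements)  # stand-in labels

X_train, y_train = rsrp, baseline_choice
print(X_train.shape, y_train.shape)  # (100, 8) (100,)
```

With such a dataset, any multi-class classifier can be trained to imitate the baseline's beam choice and then evaluated against it.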
Methodology: A systematic literature review (SLR) and an experiment are conducted.
Results: The three algorithms, SVM, k-NN, and Logistic Regression, were trained on the same data to classify 12 different classes, enabling a fair comparison of their performance. The results are divided into several sections. First, the shortlisted algorithms are compared using 10-fold cross-validation. Second, the performance of single-UE and multiple-UE scenarios is shown separately. The best performance of each algorithm is presented as a confusion matrix showing the number of classified narrow beams: the x-axis represents the labels predicted by the machine learning algorithms, and the y-axis represents the true labels (selected per the 3GPP algorithm).
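The comparison procedure above, 10-fold cross-validation of the three classifiers followed by a confusion matrix for the best one, can be sketched with scikit-learn. This is a minimal illustration under stated assumptions: the features and labels are random stand-ins for the simulator data, and the hyperparameters (RBF kernel, k = 5, etc.) are illustrative defaults, not the thesis's tuned settings.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, cross_val_predict
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
n_samples, n_features, n_beams = 600, 8, 12
X = rng.normal(size=(n_samples, n_features))   # stand-in measurement features
y = rng.integers(0, n_beams, size=n_samples)   # 12 narrow-beam classes

models = {
    "SVM": SVC(kernel="rbf"),
    "k-NN": KNeighborsClassifier(n_neighbors=5),
    "LR": LogisticRegression(max_iter=1000),
}

# 10-fold cross-validation for a fair comparison on identical data.
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=10)
    print(f"{name}: mean accuracy {scores.mean():.3f}")

# Confusion matrix for one model: rows are the true beam labels
# (baseline 3GPP selection), columns are the predicted beam labels.
pred = cross_val_predict(SVC(kernel="rbf"), X, y, cv=10)
cm = confusion_matrix(y, pred)
print(cm.shape)  # (12, 12)
```

On real measurement data the accuracies would differ per model (the thesis reports SVM ahead at 78.5%); with random labels as here, all three hover near chance.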
Conclusion: In this thesis, the use of a machine learning algorithm for selecting a narrow beam inside a wide beam is examined. Three algorithms were considered, Support Vector Machines (SVM), k-Nearest Neighbor (k-NN), and Logistic Regression (LR), following the SLR methodology. The SVM algorithm is found to perform best, with an accuracy of 78.5% when tested offline. Plugged into the Ericsson simulator, it performs acceptably, with an accuracy of 78.3%, and 19.6% of the time the UE selects a neighboring narrow beam. Side lobes in a beam tend to mislead the machine learning model, resulting in misclassification; thus, most of the inaccurate narrow beam classifications are found in side lobes. Using SVM significantly improved the average downlink throughput in the multiple-user scenario by not taking CSI-RS measurements on the shared channel, which frees more slots for data transmission. Compared to the baseline algorithm, an increase of 29.40% in average downlink throughput is found for the 40 ms CSI-RS reporting interval with 40 UEs, and an increase of 21.15% for the 20 ms CSI-RS reporting interval with 30 UEs. The average downlink throughput increase was greater for the 20 ms interval than for the 40 ms CSI-RS reporting interval, showing that updating the narrow beam more frequently with machine learning leads to better throughput and hence increased cell capacity. Finally, all the aims and objectives of the study are achieved, and the research questions posed in the thesis are answered and justified.
2021, p. 79