An Adaptive Framework for Resource Allocation Management in 5G Vehicular Networks
DOI: https://doi.org/10.9781/ijimai.2025.04.002
Keywords: 5G, Deep Learning, Energy Efficiency, Machine Learning, Resource Allocation
Abstract
Vehicle-to-everything (V2X) communication is crucial in vehicular networks for enhancing traffic safety through dependable, low-latency services. However, interference significantly degrades V2X communication when channel states change in high-mobility environments. Integrating next-generation cellular networks such as 5G into V2X communication can mitigate this issue, and effective resource allocation among users achieves better interference control under high mobility. This work proposes a novel resource allocation strategy for 5G cellular V2X communication based on a clustering technique and Deep Reinforcement Learning (DRL), with the aim of maximising the system's energy efficiency and the MVNO's profit. DRL distributes communication resources to achieve the best interference control in high-mobility scenarios, and to reduce the signalling overhead of DRL deployments the proposed method adopts RRH grouping and vehicle clustering. The overall architecture is implemented in two phases. The first phase performs RRH grouping and vehicle clustering with the objective of maximising the energy efficiency of the system. The second phase employs DRL in conjunction with bidding to optimise the MVNO's profit, and carries out resource allocation in two levels: in the first level, resources are bid for at the BS using bidding and DRL techniques, and in the second level resources are allocated to users using the Dueling DQN technique. Through simulations, the proposed algorithm's performance is compared with existing algorithms, and the results show the improved performance of the proposed system.
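The second-level allocator relies on a Dueling DQN. As a minimal sketch of the core idea only (the paper's actual network architecture, state space, and hyperparameters are not specified on this page), a dueling network splits its output head into a state-value stream V(s) and an advantage stream A(s, a), then recombines them with a mean-subtracted aggregation so the two streams are identifiable:

```python
# Hypothetical illustration of the Dueling DQN aggregation step:
#   Q(s, a) = V(s) + A(s, a) - mean over a' of A(s, a')
# Here V and A are assumed to come from a trained network; plain
# Python numbers stand in for those network outputs.

def dueling_q_values(state_value, advantages):
    """Combine V(s) and the per-action advantages A(s, .) into Q(s, .)."""
    mean_adv = sum(advantages) / len(advantages)
    return [state_value + a - mean_adv for a in advantages]

# Toy example: one state, three candidate resource blocks (actions).
v = 2.0                 # estimated value of the current state
adv = [0.5, -0.5, 0.0]  # per-action advantage estimates
q = dueling_q_values(v, adv)
best_action = max(range(len(q)), key=lambda i: q[i])
print(q, best_action)   # → [2.5, 1.5, 2.0] 0
```

The mean subtraction is the standard identifiability trick from the dueling-architecture literature: without it, any constant could be shifted between V and A while leaving Q unchanged, which destabilises learning.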
References
X. You, C.-X. Wang, J. Huang, et al., “Towards 6G wireless communication networks: vision, enabling technologies, and new paradigm shifts,” Science China Information Sciences 64, 110301 (2021). https://doi.org/10.1007/s11432-020-2955-6.
L. Feng, W. Li, Y. Lin, L. Zhu, S. Guo and Z. Zhen, “Joint computation offloading and URLLC resource allocation for collaborative MEC assisted cellular-V2X networks,” IEEE Access, vol. 8, pp. 24914-24926, 2020.
J. -W. Ke, R. -H. Hwang, C. -Y. Wang, J. -J. Kuo and W. -Y. Chen, “Efficient RRH activation management for 5G V2X,” in IEEE Transactions on Mobile Computing, vol. 23, no. 2, pp. 1215-1229, Feb. 2024, doi: 10.1109/TMC.2022.3232547.
M.A. Thanedar and S.K. Panda, “A dynamic resource management algorithm for maximizing service capability in fog-empowered vehicular ad-hoc networks,” Peer-to-Peer Networking and Applications 16, 932–946 (2023). https://doi.org/10.1007/s12083-023-01451-7
B. Fu, Z. Wei, X. Yan, K. Zhang, Z. Feng and Q. Zhang, “A game-theoretic approach for bandwidth allocation and pricing in heterogeneous wireless networks,” 2015 IEEE Wireless Communications and Networking Conference (WCNC), New Orleans, LA, USA, 2015, pp. 1684-1689, doi: 10.1109/WCNC.2015.7127721.
A. R. Elsherif, W. -P. Chen, A. Ito and Z. Ding, “Resource allocation and inter-cell interference management for dual-access small cells,” in IEEE Journal on Selected Areas in Communications, vol. 33, no. 6, pp. 1082-1096, June 2015, doi: 10.1109/JSAC.2015.2416990.
S. Tang, Z. Pan, G. Hu, Y. Wu and Y. Li, “Deep reinforcement learning-based resource allocation for satellite internet of things with diverse QoS guarantee,” Sensors 22 (2022).
N. C. Luong, D. T. Hoang, S. Gong, et al., “Applications of deep reinforcement learning in communications and networking: A survey,” in IEEE Communications Surveys & Tutorials, vol. 21, no. 4, pp. 3133-3174, Fourthquarter 2019, doi: 10.1109/COMST.2019.2916583.
D. Bega, M. Gramaglia, M. Fiore, A. Banchs, and X. Costa-Perez, “DeepCog: cognitive network management in sliced 5G networks with deep learning,” in Proceedings of the IEEE INFOCOM 2019-IEEE Conference on Computer Communications, pp. 280–288, IEEE, Paris, France, July 2019.
J. Gante, G. Falcão, and L. Sousa, “Deep learning architectures for accurate millimeter wave positioning in 5G,” Neural Processing Letters, vol. 51, no. 1, pp. 487–514, 2020.
D. Huang, Y. Gao, Y. Li, et al., “Deep learning based cooperative resource allocation in 5G wireless networks,” Mobile Networks and Applications, pp. 1–8, 2018.
P. Yu, F. Zhou, X. Zhang, X. Qiu, M. Kadoch, and M. Cheriet, “Deep learning-based resource allocation for 5G broadband TV service,” IEEE Transactions on Broadcasting, vol. 66, no. 4, pp. 800–813, 2020.
A. Pradhan and S. Das, “Reinforcement learning-based resource allocation for adaptive transmission and retransmission scheme for URLLC in 5G,” in Advances in Machine Learning and Computational Intelligence, Springer, Singapore, 2020.
G. Zhao, M. Wen, J. Hao, and T. Hai, “Application of dynamic management of 5G network slice resource based on reinforcement Learning in Smart Grid,” in International Conference on Computer Engineering and Networks, pp. 1485–1492, Springer, Singapore, 2020.
Y. Abiko, D. Mochizuki, T. Saito, D. Ikeda, T. Mizuno, and H. Mineno, “Proposal of allocating radio resources to multiple slices in 5G using deep reinforcement learning,” in Proceedings of the 2019 IEEE 8th Global Conference on Consumer Electronics (GCCE), pp. 1-2, IEEE, Osaka, Japan, October 2019.
P. Yu, J. Guo, Y. Huo, X. Shi, J. Wu, and Y. Ding, “Three-dimensional aerial base station location for sudden traffic with deep reinforcement learning in 5G mmWave networks,” International Journal of Distributed Sensor Networks, vol. 16, no. 5, Article ID 1550147720926374, 2020.
M. A. Salahuddin, A. Al-Fuqaha and M. Guizani, “Reinforcement learning for resource provisioning in the vehicular cloud,” IEEE Wireless Communications 23, 128–135 (2016).
Z. Li, C. Wang and C. J. Jiang, “User association for load balancing in vehicular networks: an online reinforcement learning approach,” IEEE Transactions on Intelligent Transportation Systems 18, 2217–2228 (2017).
Z. Khan, P. Fan, F. Abbas, H. Chen and S. Fang, “Two-level cluster based routing scheme for 5G V2X communication,” IEEE Access 7, 16194–16205 (2019).
X. Zhang, M. Peng, S. Yan and Y. Sun, “Deep-reinforcement-learning-based mode selection and resource allocation for cellular V2X communications,” IEEE Internet Things Journal 7, 6380–6391 (2020).
H. D. R. Albonda and J. Pérez-Romero, “Reinforcement learning-based radio access network slicing for a 5G System with support for cellular V2X,” Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 291, 262–276 (2019).
H. D. R. Albonda and J. Pérez-Romero, “An efficient RAN slicing strategy for a heterogeneous network with eMBB and V2X services,” IEEE Access 7, 44771–44782 (2019).
T. Sanguanpuak, N. Rajatheva, D. Niyato and M. Latva-Aho, “Network slicing with mobile edge computing for micro-operator networks in beyond 5G,” International Symposium on Wireless Personal Multimedia Communications, WPMC 2018-November, 352–357 (2018).
B. Kartal, P. Hernandez-Leal and M. E. Taylor, “Using Monte Carlo tree search as a demonstrator within asynchronous deep RL,” 2018. doi:10.48550/arXiv.1812.00045.
N. Khumalo, O. Oyerinde and L. Mfupe, “Reinforcement learning-based computation resource allocation scheme for 5G fog-radio access network,” 2020 Fifth International Conference on Fog and Mobile Edge Computing (FMEC), Paris, France, 2020, pp. 353-355, doi: 10.1109/FMEC49853.2020.9144787.
G. Sun, Z. T. Gebrekidan, G. O. Boateng, D. Ayepah-Mensah and W. Jiang, “Dynamic reservation and deep reinforcement learning based autonomous resource slicing for virtualized radio access networks,” IEEE Access 7, 45758–45772 (2019).
G. Sun, G. T. Zemuy and K. Xiong, “Dynamic reservation and deep reinforcement learning based autonomous resource management for wireless virtual networks,” 2018 IEEE 37th International Performance Computing and Communications Conference (IPCCC), Orlando, FL, USA, 2018, pp. 1-4, doi: 10.1109/PCCC.2018.8710960.
G. Sun, H. Al-Ward, G. O. Boateng and G. Liu, “Autonomous cache resource slicing and content placement at virtualized mobile edge network,” IEEE Access 7, 84727–84743 (2019).
Y. Liu, J. Ding, Z. L. Zhang and X. Liu, “CLARA: A constrained reinforcement learning based resource allocation framework for network slicing,” Proc. - 2021 IEEE International Conference on Big Data, Big Data 2021 1427–1437 (2021).
Y. Hua, R. Li, Z. Zhao, X. Chen and H. Zhang, “GAN-powered deep distributional reinforcement learning for resource management in network slicing,” IEEE Journal on Selected Areas in Communications 38, 334–349 (2020).
A. Gupta and S. Namasudra, “A Novel Technique for Accelerating Live Migration in Cloud Computing,” Automated Software Engineering 29, 34 (2022). https://doi.org/10.1007/s10515-022-00332-2.
D. Choudhary and R. Pahuja, “Improvement in quality of service against doppelganger attacks for connected network,” International Journal of Interactive Multimedia and Artificial Intelligence, vol. 7, p. 51. doi: 10.9781/ijimai.2022.08.003.
M. B. Ahmad, M. A. Shehu and D. E. Sylvanus, “Enhancing phishing awareness strategy through embedded learning tools: a simulation approach,” v1, OpenAlex, Dec. 2023, doi:10.60692/Q9Y25-7W438.
J. D. C. Little, “A proof for the queuing formula: L = λW,” Operations Research, vol. 9, no. 3, pp. 383–387, 1961. https://doi.org/10.1287/opre.9.3.383.
A. Ghosh, L. Cottatellucci and E. Altman, “Nash Equilibrium for Femto-Cell Power Allocation in HetNets with Channel Uncertainty,” 2015 IEEE Global Communications Conference (GLOBECOM), San Diego, CA, USA, 2015, pp. 1-7, doi: 10.1109/GLOCOM.2015.7417510.
G. Auer, V. Giannini, C. Desset, et al., “How much energy is needed to run a wireless network?,” in IEEE Wireless Communications, vol. 18, no. 5, pp. 40-49, October 2011, doi: 10.1109/MWC.2011.6056691.
S. K. Sharma and X. Wang, “Toward massive machine type communications in ultra-dense cellular IoT networks: current issues and machine learning-assisted solutions,” IEEE Communications Surveys & Tutorials 22, 426–471 (2020).
D. Sempere-García, M. Sepulcre and J. Gozalvez, “LTE-V2X mode 3 scheduling based on adaptive spatial reuse of radio resources,” Ad Hoc Networks 113, 102351 (2021).
3GPP Specification #23.303. Available at: https://portal.3gpp.org/desktopmodules/Specifications/SpecificationDetails.aspx?specificationId=840.
T. T. Nguyen, N. D. Nguyen and S. Nahavandi, “Deep reinforcement learning for multiagent systems: a review of challenges, solutions, and applications,” IEEE Transactions on Cybernetics 50, 3826–3839 (2020).
K. Arulkumaran, M. P. Deisenroth, M. Brundage and A. A. Bharath, “Deep reinforcement learning: A brief survey,” IEEE Signal Processing Magazine 34, 26–38 (2017).
W. Qiang and Z. Zhongli, “Reinforcement learning model, algorithms and its application,” 2011 International Conference on Mechatronic Science, Electric Engineering and Computer (MEC), Jilin, China, 2011, pp. 1143-1146, doi: 10.1109/MEC.2011.6025669.
I. H. Sarker, “Machine learning: algorithms, real-world applications and research directions,” SN Computer Science 2, 1–21 (2021).
R. Li, Z. Zhao, Q. Sun, et al., “Deep reinforcement learning for resource management in network slicing,” in IEEE Access, vol. 6, pp. 74429-74441, 2018, doi: 10.1109/ACCESS.2018.2881964.
R. S. Sutton and A. G. Barto, “Reinforcement learning: an introduction,” in IEEE Transactions on Neural Networks, vol. 9, no. 5, pp. 1054-1054, Sept. 1998, doi: 10.1109/TNN.1998.712192.
H. Jiang, R. Gui, Z. Chen, L. Wu, J. Dang and J. Zhou, “An improved Sarsa(λ) reinforcement learning algorithm for wireless communication systems,” in IEEE Access, vol. 7, pp. 115418-115427, 2019, doi: 10.1109/ACCESS.2019.2935255.
J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov, “Proximal Policy Optimization Algorithms,” ArXiv, abs/1707.06347, 2017.
C. J. C. H. Watkins and P. Dayan, “Q-learning,” Machine Learning 8, 279–292 (1992).
A. Haydari and Y. Yilmaz, “Deep reinforcement learning for intelligent transportation systems: a survey,” IEEE Transactions on Intelligent Transportation Systems 23, 11–32 (2022).
Q. Mao, F. Hu and Q. Hao, “Deep learning for intelligent wireless networks: a comprehensive survey,” IEEE Communications Surveys & Tutorials 20, 2595–2621 (2018).
T. -W. Ban, “An autonomous transmission scheme using dueling DQN for d2d communication networks,” in IEEE Transactions on Vehicular Technology, vol. 69, no. 12, pp. 16348-16352, Dec. 2020, doi: 10.1109/TVT.2020.3041458.
P. A. Lopez, M. Behrisch, L. Bieker-Walz, et al., “Microscopic Traffic Simulation using SUMO,” 2018 21st International Conference on Intelligent Transportation Systems (ITSC), Maui, HI, USA, 2018, pp. 2575-2582, doi: 10.1109/ITSC.2018.8569938.
3GPP Specification #38.885. Available at: https://portal.3gpp.org/desktopmodules/Specifications/SpecificationDetails.aspx?specificationId=3497.
Y. S. Song and H. K. Choi, “Analysis of V2V broadcast performance limit for WAVE communication systems using two-ray path loss model,” ETRI J. 39, 213–221 (2017).
L. Codeca, R. Frank, S. Faye and T. Engel, “Luxembourg SUMO Traffic (LuST) Scenario: Traffic Demand Evaluation,” in IEEE Intelligent Transportation Systems Magazine, vol. 9, no. 2, pp. 52-63, Summer 2017, doi: 10.1109/MITS.2017.2666585.
R. Monga and D. Mehta, “Sumo (Simulation of Urban Mobility) and OSM (Open Street Map) implementation,” 2022 11th International Conference on System Modeling & Advancement in Research Trends (SMART), Moradabad, India, 2022, pp. 534-538, doi: 10.1109/SMART55829.2022.10046720.