12 results on "Liu, Ren Ping"
Search Results
2. GNN-Based Network Traffic Analysis for the Detection of Sequential Attacks in IoT.
- Author
- Altaf, Tanzeela; Wang, Xu; Ni, Wei; Yu, Guangsheng; Liu, Ren Ping; Braun, Robin
- Subjects
- COMPUTER network traffic; BOTNETS; CONVOLUTIONAL neural networks; GRAPH neural networks; SEQUENTIAL analysis; INTERNET of things
- Abstract
This research introduces a novel framework utilizing a sequential gated graph convolutional neural network (GGCN) designed specifically for botnet detection within Internet of Things (IoT) network environments. By capitalizing on the strengths of graph neural networks (GNNs) to represent network traffic as complex graph structures, our approach adeptly handles the temporal dynamics inherent to botnet attacks. Key to our approach is the development of a time-stamped multi-edge graph structure that uncovers subtle temporal patterns and hidden relationships in network flows, critical for recognizing botnet behaviors. Moreover, our sequential graph learning framework incorporates time-sequenced edges and multi-edged structures into a two-layered gated graph model, which is optimized with specialized message-passing layers and aggregation functions to address the challenges of time-series traffic data effectively. Our comparative analysis with the state of the art reveals that our sequential GGCN achieves substantial improvements in detecting IoT botnets. The proposed GGCN model consistently outperforms the conventional model, achieving improvements in accuracy ranging from marginal to substantial: 0.01% for BoT-IoT and up to 25% for Mirai. Furthermore, our empirical analysis underscores the GGCN's enhanced capabilities, particularly in binary classification tasks, on imbalanced datasets. These findings highlight the model's ability to effectively navigate and manage the varying complexity and characteristics of IoT security threats across different datasets. [ABSTRACT FROM AUTHOR]
- Published
- 2024
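The abstract above mentions time-stamped multi-edge graphs and gated message passing without reproducing the model. As general background only, here is a minimal single-step sketch of gated message passing over such a graph; the weight matrices `w_msg` and `w_gate` and the GRU-like gate are illustrative assumptions, not the paper's actual GGCN layers, aggregation functions, or training procedure.

```python
import numpy as np

def gated_message_passing(node_feats, edges, w_msg, w_gate):
    """One gated message-passing step over a time-stamped multi-edge graph.

    node_feats: dict node -> feature vector (np.ndarray of shape (d,))
    edges: list of (src, dst, timestamp); multiple edges between the same
           pair are kept, and messages are aggregated in timestamp order.
    """
    # Aggregate incoming messages per node, oldest edge first.
    agg = {n: np.zeros_like(v) for n, v in node_feats.items()}
    for src, dst, _t in sorted(edges, key=lambda e: e[2]):
        agg[dst] += w_msg @ node_feats[src]
    # GRU-like update gate blends the old state with aggregated messages.
    out = {}
    for n, h in node_feats.items():
        z = 1.0 / (1.0 + np.exp(-(w_gate @ agg[n])))  # sigmoid gate
        out[n] = (1.0 - z) * h + z * np.tanh(agg[n])
    return out
```

A node with no incoming edges sees a zero aggregate, so its gate sits at 0.5 and its state is simply halved; nodes with in-edges blend their state with the gated, squashed message sum.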
3. A High-Performance Hybrid Blockchain System for Traceable IoT Applications
- Author
- Wang, Xu; Yu, Ping; Yu, Guangsheng; Zha, Xuan; Ni, Wei; Liu, Ren Ping; Guo, Y. Jay; Liu, Joseph K. (editor); Huang, Xinyi (editor)
- Published
- 2019
4. Secure and Differentiated Fog-Assisted Data Access for Internet of Things.
- Author
- Yu, Ping; Ni, Wei; Zhang, Hua; Liu, Ren Ping; Wen, Qiaoyan; Li, Wenmin; Gao, Fei
- Subjects
- INTERNET access; INTERNET of things; DATA encryption; PARALLEL programming; TRUST; FOG
- Abstract
The ability of Fog computing to admit and process huge volumes of heterogeneous data is the catalyst for the fast expansion of the Internet of things (IoT). The critical challenge is secure and differentiated access to the data, given the limited computation capability of typical IoT devices and the limited trustworthiness of Fog servers. This paper designs and develops a new approach for secure, efficient and differentiated data access. Secret sharing is decoupled to allow the Fog servers to assist the IoT devices with attribute-based encryption of data while preventing the Fog servers from tampering with the data and the access structure. The proposed encryption supports direct revocation and can be decoupled among multiple Fog servers for acceleration. Based on the decisional q-parallel bilinear Diffie–Hellman exponent assumption, we propose a new extended q-parallel bilinear Diffie–Hellman exponent (Eq-PBDHE) assumption and prove that the proposed approach provides data access that is indistinguishable under chosen-plaintext attack (IND-CPA secure) for legitimate data subscribers. As numerically and experimentally verified, the proposed approach is able to reduce the encryption time by 20% at the IoT devices and by 50% at the Fog network using parallel computing as compared to the state of the art. [ABSTRACT FROM AUTHOR]
- Published
- 2022
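The abstract states that secret sharing is decoupled so that Fog servers can assist devices with attribute-based encryption, without detailing the construction. As background only, here is a minimal Shamir (t, n) secret-sharing sketch over a prime field; the field size, seed, and parameters are illustrative and this is not the paper's scheme.

```python
import random

P = 2**127 - 1  # a Mersenne prime; illustrative field for share arithmetic

def split_secret(secret, n, t, seed=0):
    """Split `secret` into n shares; any t of them reconstruct it."""
    rng = random.Random(seed)
    coeffs = [secret] + [rng.randrange(P) for _ in range(t - 1)]
    shares = []
    for x in range(1, n + 1):
        y = 0
        for c in reversed(coeffs):  # Horner evaluation of the polynomial at x
            y = (y * x + c) % P
        shares.append((x, y))
    return shares

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret
```

Any t-subset of the shares suffices; fewer than t shares reveal nothing about the secret, which is the property that lets untrusted helpers hold partial material.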
5. Nested Hybrid Cylindrical Array Design and DoA Estimation for Massive IoT Networks.
- Author
- Lin, Zhipeng; Lv, Tiejun; Ni, Wei; Zhang, J. Andrew; Liu, Ren Ping
- Subjects
- INTERNET of things; MIMO systems; DEGREES of freedom; ELECTRICITY pricing; COMPUTATIONAL complexity; RADIO frequency; CHANNEL estimation; SIGNAL processing
- Abstract
Reducing cost and power consumption while maintaining high network access capability is a key physical-layer requirement of massive Internet of Things (mIoT) networks. Deploying a hybrid array is a cost- and energy-efficient way to meet the requirement, but would penalize system degree of freedom (DoF) and channel estimation accuracy. This is because signals from multiple antennas are combined by a radio frequency (RF) network of the hybrid array. This article presents a novel hybrid uniform circular cylindrical array (UCyA) for mIoT networks. We design a nested hybrid beamforming structure based on sparse array techniques and propose the corresponding channel estimation method based on the second-order channel statistics. As a result, only a small number of RF chains are required to preserve the DoF of the UCyA. We also propose a new tensor-based two-dimensional (2-D) direction-of-arrival (DoA) estimation algorithm tailored for the proposed hybrid array. The algorithm suppresses the noise components in all tensor modes and operates on the signal data model directly, hence improving estimation accuracy with an affordable computational complexity. Corroborated by a Cramér-Rao lower bound (CRLB) analysis, simulation results show that the proposed hybrid UCyA array and the DoA estimation algorithm can accurately estimate the 2-D DoAs of a large number of IoT devices. [ABSTRACT FROM AUTHOR]
- Published
- 2021
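The paper's tensor-based 2-D DoA method for the hybrid UCyA is not reproduced in the abstract. For contrast with the conventional baseline it improves upon, here is a classic 1-D MUSIC estimator for a uniform linear array with half-wavelength spacing; the array size, grid, and scenario are hypothetical, and this is not the proposed algorithm.

```python
import numpy as np

def music_doa_1d(snapshots, n_sources, n_grid=181):
    """Classic 1-D MUSIC for a uniform linear array (half-wavelength
    spacing). snapshots: complex array of shape (sensors, samples)."""
    m = snapshots.shape[0]
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]  # sample covariance
    _, vecs = np.linalg.eigh(R)           # eigenvalues in ascending order
    En = vecs[:, : m - n_sources]         # noise subspace
    angles = np.linspace(-90, 90, n_grid)
    spec = []
    for th in angles:
        a = np.exp(1j * np.pi * np.arange(m) * np.sin(np.deg2rad(th)))
        # MUSIC pseudo-spectrum: large when a(th) is orthogonal to the noise subspace
        spec.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return angles[int(np.argmax(spec))]
```

The pseudo-spectrum peaks where the steering vector falls in the signal subspace, i.e. at the true arrival angles.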
6. Online Learning of Optimal Proactive Schedule Based on Outdated Knowledge for Energy Harvesting Powered Internet-of-Things.
- Author
- Lyu, Xinchen; Ren, Chenshan; Ni, Wei; Tian, Hui; Cui, Qimei; Liu, Ren Ping
- Abstract
This paper aims to produce an effective online scheduling technique, where a base station (BS) schedules the transmissions of energy harvesting-powered Internet-of-Things (IoT) devices based only on the (differently outdated) in-band reports of the devices on their states. We establish a new primal-dual learning framework, which learns online the optimal proactive schedules to maximize the time-average throughput of all the devices. Batch gradient descent is designed to enable stochastic gradient descent (SGD)-based dual learning to learn the network dynamics from the outdated reports. Replay memory is deployed to allow online convex optimization (OCO)-based primal learning to predict channel conditions and prevent over-fitting. We also decentralize the online learning between the BS and devices, and speed up learning by leveraging the instantaneous knowledge of the devices on their states. We prove that the proposed framework asymptotically converges to the global optimum, and that the impact of the outdated knowledge of the BS diminishes. Simulation results confirm that the proposed approach increasingly outperforms the state of the art as the number of devices grows. [ABSTRACT FROM AUTHOR]
- Published
- 2021
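The primal-dual OCO machinery the abstract describes is not shown there. Purely to illustrate the replay-memory idea it mentions, here is a toy SGD loop that draws updates from a bounded replay memory; the scalar regression target, learning rate, and buffer size are all hypothetical, not the paper's scheduler.

```python
import random

def sgd_with_replay(samples, lr=0.05, epochs=200, buffer_size=50, seed=0):
    """Fit y ~ w*x by SGD, drawing each update from a bounded replay
    memory of recently seen (x, y) pairs rather than only the newest one."""
    rng = random.Random(seed)
    memory, w = [], 0.0
    for _ in range(epochs):
        for x, y in samples:
            memory.append((x, y))
            if len(memory) > buffer_size:
                memory.pop(0)                      # drop the oldest sample
            xb, yb = memory[rng.randrange(len(memory))]  # sample from replay
            w -= lr * 2 * (w * xb - yb) * xb       # squared-loss gradient step
    return w
```

Sampling from the buffer decorrelates consecutive updates, which is the usual motivation for replay memory in online learning.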
7. Enabling Attribute Revocation for Fine-Grained Access Control in Blockchain-IoT Systems.
- Author
- Yu, Guangsheng; Zha, Xuan; Wang, Xu; Ni, Wei; Yu, Kan; Yu, Ping; Zhang, J. Andrew; Liu, Ren Ping; Guo, Y. Jay
- Subjects
- ACCESS control; REVOCATION; ALGORITHMS; INTERNET of things; BLOCKCHAINS
- Abstract
Attribute-based encryption (ABE) has drawn a lot of attention for fine-grained access control in blockchains, especially in blockchain-enabled tamper-resistant Internet-of-Things (IoT) systems. However, its adoption has been severely hindered by the incompatibility between the immutability of typical blockchains and the attribute updates/revocations of ABE. In this article, we propose a new blockchain-based IoT system, which is compatible with the ABE technique and implements fine-grained access control with attribute updates, enabled by integrating Chameleon Hash algorithms into the blockchains. We design and implement a new verification scheme over a multilayer blockchain architecture to guarantee resistance against malicious and abusive tampering. The system can provide update-oriented access control, where historical on-chain data can only be accessible to new members and inaccessible to the revoked members. This is distinctively different from existing solutions, which are threatened by data leakage toward the revoked members. We also provide analysis and simulations showing that our system outperforms other solutions in terms of overhead, searching complexity, security, and compatibility. [ABSTRACT FROM AUTHOR]
- Published
- 2020
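The abstract attributes the attribute-update capability to integrating Chameleon Hash algorithms into the blockchain, without reproducing the construction. Below is a minimal discrete-log chameleon hash sketch with toy, deliberately insecure parameters, showing only the core trapdoor-collision property that makes controlled on-chain rewrites possible; it is not the paper's scheme.

```python
# Toy group parameters: far too small for real use, chosen for clarity.
q = 1019        # prime order of the subgroup
p = 2 * q + 1   # 2039, also prime (safe-prime setting)
g = 4           # generator of the order-q subgroup of Z_p*

def ch_hash(m, r, h):
    """Chameleon hash H(m, r) = g^m * h^r mod p, with public key h = g^x."""
    return (pow(g, m % q, p) * pow(h, r % q, p)) % p

def ch_collide(m, r, m_new, x):
    """Trapdoor holder (who knows x) finds r' with H(m_new, r') == H(m, r).
    Since H = g^(m + x*r), solve m_new + x*r' = m + x*r (mod q)."""
    return (r + (m - m_new) * pow(x, -1, q)) % q
```

Without the trapdoor x the hash is collision-resistant (under discrete log); with it, an authorized party can swap the hashed message while every stored hash stays valid.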
8. Cooperative Computing Anytime, Anywhere: Ubiquitous Fog Services.
- Author
- Lyu, Xinchen; Ren, Chenshan; Ni, Wei; Tian, Hui; Liu, Ren Ping
- Abstract
IoT provides ubiquitous connectivity and pervasive intelligence. Challenges arise from the data-intensive applications pertaining to IoT. The major contribution of this article is a new asymptotically optimal, fully decentralized, real-time framework which seamlessly integrates wireless computation offloading and fog computing in IoT networks with random traffic variations over space and time. Other enabling techniques are discussed to address practical challenges, such as outdated incentives for the cooperation of multiple service providers, massive access requests within limited system bandwidth, and computing acceleration. The new framework is able to provide ubiquitous computing for continuously increasing IoT services. Numerical experiments indicate that ubiquitous fog computing can substantially improve throughput, reduce computing latency, and cut signaling overhead, without compromising the optimality of network operations. [ABSTRACT FROM AUTHOR]
- Published
- 2020
9. Ensuring Max–Min Fairness of UL SIMO-NOMA: A Rate Splitting Approach.
- Author
- Zeng, Jie; Lv, Tiejun; Ni, Wei; Liu, Ren Ping; Beaulieu, Norman C.; Guo, Y. Jay
- Subjects
- MULTIPLE access protocols (Computer network protocols); FAIRNESS; INTERNET of things; TRANSMITTING antennas; RECEIVING antennas; RATES
- Abstract
Single-input multiple-output non-orthogonal multiple access (SIMO-NOMA) is a promising technology for the uplink (UL) of the Internet of things (IoT). A rate splitting scheme to guarantee max-min fairness (MMF) of SIMO-NOMA is studied. First, the achievable data rate of minimum mean square error successive interference cancellation (MMSE-SIC) detection is formulated. An upper bound for the minimum data rate of UL users is derived, and it is proved that the upper bound can be achieved by rate splitting with MMSE-SIC detection. Then, an exhaustive-search rate splitting algorithm to maximize the minimum data rate is designed. The existence of a lower bound for the minimum data rate with two-layer SIMO-NOMA is proved. A low-complexity two-layer rate splitting algorithm is designed with a suitable detection order to exceed the lower bound. Numerical results verify that rate splitting SIMO-NOMA achieves a higher minimum data rate and lower transmission latency than single-input multiple-output orthogonal multiple access (SIMO-OMA) and un-layered SIMO-NOMA. [ABSTRACT FROM AUTHOR]
- Published
- 2019
10. Downlink MIMO-NOMA for Ultra-Reliable Low-Latency Communications.
- Author
- Xiao, Chiyang; Zeng, Jie; Ni, Wei; Su, Xin; Liu, Ren Ping; Lv, Tiejun; Wang, Jing
- Subjects
- MELLIN transform; INTEGRAL transforms; BINOMIAL theorem; INTERNET of things; QUEUING theory; RELIABILITY in engineering
- Abstract
With the emergence of mission-critical Internet of Things applications, ultra-reliable low-latency communications are attracting a lot of attention. Non-orthogonal multiple access (NOMA) with multiple-input multiple-output (MIMO) is one of the promising candidates to enhance connectivity, reliability, and latency performance of the emerging applications. In this paper, we derive a closed-form upper bound for the delay target violation probability in the downlink MIMO-NOMA, by applying stochastic network calculus to the Mellin transforms of service processes. A key contribution is that we prove that the infinite-length Mellin transforms resulting from the non-negligible interferences of NOMA are Cauchy convergent and can be asymptotically approached by a finite truncated binomial series in the closed form. By exploiting the asymptotically accurate truncated binomial series, another important contribution is that we identify the critical condition for the optimal power allocation of MIMO-NOMA to achieve consistent latency and reliability between the receivers. The condition is employed to minimize the total transmit power, given a latency and reliability requirement of the receivers. It is also used to prove that the minimal total transmit power needs to change linearly with the path losses, to maintain latency and reliability at the receivers. This enables the power allocation for mobile MIMO-NOMA receivers to be effectively tracked. The extensive simulations corroborate the accuracy and effectiveness of the proposed model and the identified critical condition. [ABSTRACT FROM AUTHOR]
- Published
- 2019
11. Optimal Schedule of Mobile Edge Computing for Internet of Things Using Partial Information.
- Author
- Lyu, Xinchen; Ni, Wei; Tian, Hui; Liu, Ren Ping; Wang, Xin; Giannakis, Georgios B.; Paulraj, Arogyaswami
- Subjects
- COMPUTER networks; ARTIFICIAL intelligence
- Abstract
Mobile edge computing is of particular interest to the Internet of Things (IoT), where inexpensive simple devices can get complex tasks offloaded to and processed at powerful infrastructure. Scheduling is challenging due to stochastic task arrivals and wireless channels, a congested air interface, and, more prominently, prohibitive feedback from thousands of devices. In this paper, we generate asymptotically optimal schedules tolerant to out-of-date network knowledge, thereby relieving stringent requirements on feedback. A perturbed Lyapunov function is designed to stochastically maximize a network utility balancing throughput and fairness. A knapsack problem is solved per slot for the optimal schedule, provided up-to-date knowledge on the data and energy backlogs of all devices. The knapsack problem is relaxed to accommodate out-of-date network states. Encapsulating the optimal schedule under up-to-date network knowledge, the solution under partially out-of-date knowledge preserves asymptotic optimality and allows devices to self-nominate for feedback. Corroborated by simulations, our approach is able to dramatically reduce feedback at no cost to optimality. The number of devices that need to feed back is reduced to fewer than 60 out of a total of 5000 IoT devices. [ABSTRACT FROM PUBLISHER]
- Published
- 2017
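The abstract states that a knapsack problem is solved per slot for the optimal schedule; the actual values and weights come from the paper's Lyapunov drift terms and backlogs, which are not reproduced here. Shown below is only the standard 0/1 knapsack dynamic program that such a per-slot subproblem reduces to, with hypothetical integer weights and values.

```python
def knapsack(values, weights, capacity):
    """0/1 knapsack DP: pick a subset of items (e.g. devices to schedule)
    maximizing total value under a capacity budget (e.g. slot resources).
    Returns (best total value, indices of chosen items)."""
    n = len(values)
    dp = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for c in range(capacity + 1):
            dp[i][c] = dp[i - 1][c]                      # skip item i-1
            if weights[i - 1] <= c:                      # or take it
                dp[i][c] = max(dp[i][c],
                               dp[i - 1][c - weights[i - 1]] + values[i - 1])
    chosen, c = [], capacity                             # backtrack the choice
    for i in range(n, 0, -1):
        if dp[i][c] != dp[i - 1][c]:
            chosen.append(i - 1)
            c -= weights[i - 1]
    return dp[n][capacity], chosen[::-1]
```

The DP runs in O(n * capacity) time per slot, which is what makes a per-slot exact solution tractable for integer resource budgets.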
12. Harmonising Coexistence of Machine Type Communications With Wi-Fi Data Traffic Under Frame-Based LBT.
- Author
- Sutton, Gordon J.; Liu, Ren Ping; Guo, Y. Jay
- Subjects
- WIRELESS Internet; LONG-Term Evolution (Telecommunications); INTERNET of things; MARKOV processes; DATA packeting
- Abstract
The existence of relatively long LTE data blocks within the licensed-assisted access (LAA) framework results in bursty machine-type communications (MTC) packet arrivals, which cause system performance degradation and present new challenges in Markov modeling. We develop an embedded Markov chain to characterize the dynamic behavior of the contention arising from bursty MTC and Wi-Fi data traffic in the LAA framework. Our theoretical model reveals a high-contention phenomenon caused by the bursty MTC traffic, and quantifies the resulting performance degradation for both MTC and Wi-Fi data traffic. The Markov model is further developed to evaluate three potential solutions aiming to alleviate the contention. Our analysis shows that simply expanding the contention window, although successful in reducing congestion, may cause unacceptable MTC data loss. A TDMA scheme instead achieves better MTC packet delivery and overall throughput, but requires centralized coordination. We propose a distributed scheme that randomly spreads the MTC access processes through the available time period. Our model results, validated by simulations, demonstrate that the random spreading solution achieves a near TDMA performance, while preserving the distributed nature of the Wi-Fi protocol. It alleviates the MTC traffic contention and improves the overall throughput by up to 10%. [ABSTRACT FROM AUTHOR]
- Published
- 2017
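The abstract's proposed solution randomly spreads MTC access attempts through the available time period. Here is a seeded toy comparison of burst arrivals versus uniform random spreading; the device and slot counts are hypothetical, and contention is proxied simply by the peak per-slot load rather than by the paper's embedded Markov model.

```python
import random
from collections import Counter

def max_slot_load(arrival_slots):
    """Peak number of simultaneous access attempts in any one slot."""
    return max(Counter(arrival_slots).values())

def burst_arrivals(n_devices):
    # Bursty MTC: all devices attempt access at the start of the period.
    return [0] * n_devices

def spread_arrivals(n_devices, n_slots, rng):
    # Distributed random spreading: each device independently picks a
    # uniform random slot, with no central coordination needed.
    return [rng.randrange(n_slots) for _ in range(n_devices)]
```

With 200 devices and 100 slots, the burst puts all 200 attempts in one slot, while uniform spreading keeps the expected per-slot load near 2, which is the intuition behind the near-TDMA performance reported in the abstract.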
Discovery Service for Jio Institute Digital Library