17 results for "Zhou Shijie"
Search Results
2. Deep Learning-Based Dynamic Stable Cluster Head Selection in VANET
- Author
-
Amarah Maqbool, Maham Tariq, Zhou Shijie, Tanveer Ahmad, Muhammad Asim Saleem, Casper Shikali Shivachi, and Muhammad Umer Sarwar
- Subjects
Economics and Econometrics, Article Subject, Computer science, Strategy and Management, Stability (learning theory), Throughput, Robustness (computer science), Network performance, Cluster analysis, Wireless network, Network packet, Mechanical Engineering, Computer Science Applications, Transportation engineering, Automotive Engineering, Metric (mathematics), Data mining, Transportation and communications - Abstract
A VANET is a spontaneously evolving wireless network, and clustering in these networks is a challenging task due to rapidly changing topology and frequent disconnections. Cluster head (CH) stability plays a prominent role in the robustness and scalability of the network. A stable CH ensures minimal intra- and intercluster communication, thereby reducing overhead. These challenges lead the authors to search for a CH selection method based on a weighted amalgamation of four metrics: befit factor, community neighborhood, eccentricity, and trust. CH stability depends on the vehicle's speed, distance, velocity, and change in acceleration, all of which are included in the befit factor. Accurate vehicle location under a changing mobility model is also vital, so the location predicted with the help of a Kalman filter is used to evaluate CH stability. The results have shown better performance than the existing state of the art for the befit factor. Changes in dynamics and frequent disconnections in communication links due to the vehicles' high speed are inevitable. To address this problem, a graphing approach is used to evaluate the eccentricity and the community neighborhood, and link reliability is calculated using the eigengap heuristic. The last metric is trust, a concept that, per the literature, has not previously been included in the weighted approach. An adaptive spectrum sensing scheme is designed to evaluate trust values, specifically for the primary users. A deep recurrent learning network, commonly known as long short-term memory (LSTM), is trained for the probability of detection under various signal and noise conditions; the false-alarm rate is drastically reduced with the use of LSTM. The proposed scheme is tested on a real map of Chengdu, in southwestern China's Sichuan province, with different vehicular mobilities. The comparative study with individual and weighted metrics shows significant improvement in cluster head stability during high vehicular density, as well as a considerable increase in network performance in terms of energy, packet delay, packet delivery ratio, and throughput.
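As a rough illustration of the weighted amalgamation described above, the sketch below combines the four normalized metrics into a single CH score; the weights, normalization, and scoring rule are assumptions for illustration, not the paper's exact formulation.

```python
# Illustrative sketch (not the authors' exact formulation): a weighted
# combination of the four normalized metrics used to score CH candidates.
# The weights w_* and the scoring rule are assumptions.

def cluster_head_score(befit, neighborhood, eccentricity, trust,
                       w_befit=0.4, w_neigh=0.3, w_ecc=0.2, w_trust=0.1):
    """Each metric is assumed normalized to [0, 1]; lower eccentricity
    is better, so it is inverted before weighting."""
    return (w_befit * befit
            + w_neigh * neighborhood
            + w_ecc * (1.0 - eccentricity)
            + w_trust * trust)

# The vehicle with the highest combined score in a cluster is chosen as CH.
candidates = {"v1": (0.8, 0.6, 0.3, 0.9), "v2": (0.7, 0.9, 0.2, 0.8)}
ch = max(candidates, key=lambda v: cluster_head_score(*candidates[v]))
print(ch)
```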
- Published
- 2021
- Full Text
- View/download PDF
3. Learning Syllables Using Conv-LSTM Model for Swahili Word Representation and Part-of-speech Tagging
- Author
-
Casper Shikali Shivachi, Liu Qi-he, Refuoe Mokhosi, and Zhou Shijie
- Subjects
Agglutinative language, General Computer Science, Computer science, Deep learning, Convolutional neural network, Morpheme, Syllabic verse, Language model, Artificial intelligence, Syllable, Word (computer architecture), Natural language processing - Abstract
The need to capture intra-word information in natural language processing (NLP) tasks has inspired research into learning word representations at the word, character, or morpheme level, but little attention has been given to syllables from a syllabic alphabet. Motivated by the success of compositional models in morphological languages, we present a convolutional long short-term memory (Conv-LSTM) model for constructing Swahili word representation vectors from syllables. The unified architecture addresses the agglutinative and polysemous nature of Swahili by extracting high-level syllable features with a convolutional neural network (CNN) and then composing quality word embeddings with a long short-term memory (LSTM). The word embeddings are validated using a syllable-aware language model (31.267) and a part-of-speech (POS) tagging task (98.78), both yielding very competitive results against state-of-the-art models in their respective domains. We further validate the language model using Xhosa and Shona, which are syllabic-based languages. The novelty of the study lies in its ability to construct quality word embeddings from syllables using a hybrid model that does not use the max pooling common in CNNs, and in the exploitation of these embeddings for POS tagging. The study therefore plays a crucial role in the processing of agglutinative and syllabic-based languages by contributing quality word embeddings built from syllable embeddings and a robust Conv-LSTM model that learns syllables not only for language modeling and POS tagging but also for other downstream NLP tasks.
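A minimal sketch of this composition, assuming a PyTorch implementation: syllable embeddings pass through a 1-D convolution (with no max pooling) and then an LSTM whose final hidden state serves as the word embedding. The layer sizes and dimensions are illustrative assumptions, not the paper's reported configuration.

```python
import torch
import torch.nn as nn

class SyllableConvLSTM(nn.Module):
    def __init__(self, n_syllables, syl_dim=32, conv_dim=64, word_dim=128):
        super().__init__()
        self.embed = nn.Embedding(n_syllables, syl_dim)
        self.conv = nn.Conv1d(syl_dim, conv_dim, kernel_size=3, padding=1)
        self.lstm = nn.LSTM(conv_dim, word_dim, batch_first=True)

    def forward(self, syllable_ids):                   # (batch, n_syl)
        x = self.embed(syllable_ids)                   # (batch, n_syl, syl_dim)
        x = torch.relu(self.conv(x.transpose(1, 2)))   # (batch, conv_dim, n_syl), no pooling
        _, (h, _) = self.lstm(x.transpose(1, 2))       # h: (1, batch, word_dim)
        return h.squeeze(0)                            # one embedding per input word

# Hypothetical syllable ids for a three-syllable word
word_vec = SyllableConvLSTM(n_syllables=300)(torch.tensor([[4, 17, 9]]))
```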
- Published
- 2021
- Full Text
- View/download PDF
4. Survey on security issues of routing and anomaly detection for space information networks
- Author
-
Zhuo, Ming, Liu, Leyuan, Zhou, Shijie, and Tian, Zhiwen
- Subjects
Multidisciplinary, Process (engineering), Emerging technologies, Computer science, Network security, Science, Deep learning, Information technology, Space (commercial competition), Computer security, Article, Aerospace engineering, Deep space exploration, Medicine, Anomaly detection, Artificial intelligence - Abstract
Space information networks (SINs) are network systems that can receive, transmit, and process spatial information in real time, using satellites, stratospheric airships, unmanned aerial vehicles, and other platforms as carriers. They support highly dynamic, real-time broadband transmission of earth observations and ultra-long-distance, long-delay reliable transmission for deep space exploration. The deeper the network integration, the greater the system's security concerns and the more likely SINs are to be controlled or destroyed from a cybersecurity standpoint. How to integrate new IT technologies such as artificial intelligence, digital twins, and blockchain into the diverse application scenarios of SINs while maintaining SIN cybersecurity will be a long-term critical technical issue. This study is a review of the security issues of space information networks. First, the paper examines the security issues of space information networks and clarifies the relationships among the main security threats, services, and mechanisms. Then, it selects secure routing and anomaly detection from among the many security technologies and gives a detailed overview from the two perspectives of traditional methods and artificial intelligence. Subsequently, the paper surveys anomaly detection schemes for space information networks and proposes a deep learning-based anomaly detection scheme. Finally, we suggest potential research directions and open problems in space information network security. Overall, this paper aims to give readers an overview of the newly emerging technologies around space information networks' security issues and to provide inspiration for future exploration.
- Published
- 2021
- Full Text
- View/download PDF
5. The Asynchronous Training Algorithm Based on Sampling and Mean Fusion for Distributed RNN
- Author
-
Tao Cai, Zhou Shijie, Tianquan Liu, and Dejiao Niu
- Subjects
General Computer Science, Computer science, Node (networking), Asynchronous training, General Engineering, Sampling (statistics), Mean fusion, Bottleneck, Synchronization, Distributed recurrent neural network, Recurrent neural network, Asynchronous communication, Benchmark (computing), Overhead (computing), General Materials Science, Algorithm - Abstract
Training large-scale deep neural networks with distributed implementations is an effective way to improve efficiency. However, the high network communication cost of synchronizing gradients and parameters is a major bottleneck in distributed training. In this work, we propose an asynchronous training algorithm based on sampling and mean fusion for distributed recurrent neural networks (RNNs). In a distributed RNN, multiple distributed neuron nodes and an interaction node work together to implement the training. The synchronization overhead is reduced by a unique asynchronous sampling strategy among the distributed neuron nodes. Then, to make up for the accuracy loss caused by asynchronous parameter updates, a mean fusion algorithm is proposed in which the interaction node averages all local parameters from the distributed neurons. We mathematically prove the convergence of the proposed algorithm. Experimental verification is performed on two language modeling benchmark datasets. The results demonstrate significant speed gains for the distributed RNN, while the accuracy loss is less than 1% on average.
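A minimal sketch of the mean fusion step described above, assuming for illustration that each neuron node reports its local parameters as NumPy arrays keyed by name:

```python
import numpy as np

def mean_fusion(local_params):
    """local_params: list of {param_name: np.ndarray} dicts, one per neuron node.
    Returns the element-wise average to be broadcast back to all nodes."""
    fused = {}
    for name in local_params[0]:
        fused[name] = np.mean([p[name] for p in local_params], axis=0)
    return fused

node_a = {"W_hh": np.ones((2, 2)), "b_h": np.zeros(2)}
node_b = {"W_hh": 3 * np.ones((2, 2)), "b_h": np.ones(2)}
print(mean_fusion([node_a, node_b])["W_hh"])  # 2.0 everywhere
```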
- Published
- 2020
- Full Text
- View/download PDF
6. The Design of the Challenge Experimental Course of 'Information Security System R & D' under the Background of Emerging Engineering Education
- Author
-
Jin Wu, Ruijin Wang, Zhou Shijie, Mengjie Zhang, Fengli Zhang, and Xuchenghao Luo
- Subjects
Engineering management, Complete information, Computer science, Engineering education, Subject (documents), Context (language use), Plan (drawing), Information security, Security system, Course (navigation) - Abstract
Currently, there is considerable emphasis on emerging engineering education. Against this background, we present a novel challenge-based experimental course, Information Security System R & D. Specifically, in this paper we discuss the course construction background, propose the teaching system and implementation plan of the course, and analyze its effectiveness and innovation. Built around subject competitions, the teaching philosophy of this course embodies student-centric, interest-driven, and project-oriented learning. The proposed course builds a practical teaching platform for the development of information security systems, which provides a valuable reference for the construction of cross-disciplinary teaching platforms in the context of emerging engineering education. More importantly, the course gradually guides students in developing a practical, innovative, and complete information security system, thereby exploring the potential of implementing "the plan for educating and training outstanding engineers 2.0".
- Published
- 2020
- Full Text
- View/download PDF
7. Data Transmission Using IoT in Vehicular Ad-Hoc Networks in Smart City Congestion
- Author
-
Abida Sharif, Zhou Shijie, and Muhammad Asim Saleem
- Subjects
Vehicular ad hoc network, Computer Networks and Communications, Computer science, Network packet, Wireless ad hoc network, Vehicle-to-vehicle, Bluetooth, Hardware and Architecture, Smart city, Wireless, Routing (electronic design automation), Software, Information Systems - Abstract
The development of the Internet of Things (IoT) enables smart-city advancement throughout the world. The increasing number of vehicles has brought focus to road safety precautions and in-vehicle communication, so this is the right time to focus on developing new applications and services for vehicular environments. Vehicular Ad-hoc Networks (VANETs) are a class of Mobile Ad-hoc Networks (MANETs) in which vehicle-to-vehicle (V2V) and vehicle-to-roadside transmission is possible. The V2V scheme is novel in combining Wireless Fidelity (Wi-Fi), Bluetooth, and various other communication standards. Because an immense number of nodes operate in these networks and undergo large displacements, the feasibility of routing protocols remains under analysis. Evaluation of conventional MANET routing protocols shows that they perform poorly in VANETs. The intention here is to use mediators for routing in an effort to address the issues described above. The mediators are responsible for gathering routing-related data and identifying optimal paths for forwarding data packets. The routing scheme is based on group routing protocols and a data clustering framework for locating the best possible routes. In this paper, we analyze the development of smart-city vehicle communication through IoT, and we discuss ways to minimize the limitations of IoT deployment and implementation in a smart-city environment using a multi-mediator scheme.
- Published
- 2019
- Full Text
- View/download PDF
8. Spark Performance Optimization Analysis In Memory Management with Deploy Mode In Standalone Cluster Computing
- Author
-
Deleli Mesay Adinew, Zhou Shijie, and Yongjian Liao
- Subjects
Relation (database), Computer science, Distributed computing, Big data, Bottleneck, Memory management, Computer cluster, Spark (mathematics) - Abstract
As data grows along different dimensions, it is difficult to find appropriate data analytics tools. Spark is a high-speed, in-memory big data analytics tool designed to improve the efficiency of both batch and real-time data analytics. Spark suffers from a memory-bottleneck problem that degrades application performance, because computation as well as intermediate and output results are all kept in memory. Our primary goal is to investigate how performance changes with the configuration of executor memory, number of executors, number of cores, and deploy mode in a standalone cluster model. Three representative Spark applications are used as workloads to evaluate performance as these parameter values change. Experimental results show that jobs submitted in cluster deploy mode finish faster than jobs submitted in client deploy mode under two of the workloads. This implies that Spark performance does not depend on deploy mode alone but rather on the type of application. However, increasing the number of executors per worker, the number of cores per executor, and the memory fraction increases Spark performance under all workloads in either deploy mode.
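For concreteness, a hedged PySpark sketch of the kind of standalone-cluster configuration the study varies is shown below; the master URL and parameter values are placeholders rather than the paper's tested settings, and the client/cluster deploy mode is normally chosen at submission time (e.g. `spark-submit --deploy-mode cluster app.py`).

```python
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .master("spark://master-host:7077")      # standalone cluster manager (placeholder URL)
         .appName("spark-tuning-sketch")
         .config("spark.executor.memory", "4g")   # memory per executor
         .config("spark.executor.cores", "2")     # cores per executor
         .config("spark.cores.max", "8")          # caps total cores, bounding executor count in standalone mode
         .config("spark.memory.fraction", "0.6")  # fraction of heap shared by execution and storage
         .getOrCreate())
```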
- Published
- 2020
- Full Text
- View/download PDF
9. Spark Performance Optimization Analysis in Memory Tuning On GC Overhead for Big Data Analytics
- Author
-
Deleli Mesay Adinew, Zhou Shijie, and Yongjian Liao
- Subjects
Data retrieval, Computer science, Spark (mathematics), Volume (computing), Process (computing), Overhead (computing), Parallel computing, Throughput (business), Garbage collection, Heap (data structure) - Abstract
Apache Spark is a high-speed in-memory computing framework that runs on the JVM. As data volume grows, it needs a performance optimization mechanism that requires management of JVM heap space, which in turn requires managing garbage collector pause time, since GC pauses affect application performance. Different parameters can be passed to Spark to control JVM heap space and GC time overhead and thereby increase application performance. Passing an appropriate heap size together with an appropriate type of garbage collector is one such optimization, known as Spark garbage collection tuning. To reduce GC overhead, an experiment was conducted in which certain parameters were adjusted for the data loading, DataFrame creation, and data retrieval processes. The results show a 3.23% improvement in latency and a 1.62% improvement in throughput compared with the default parameter configuration under the garbage collection tuning approach.
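As an illustration of what such tuning can look like, the sketch below passes a heap size and a specific collector through Spark's executor JVM options; the collector, flags, and input file are assumptions for illustration, not the configuration reported in the paper.

```python
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("gc-tuning-sketch")
         .config("spark.executor.memory", "6g")   # executor heap size
         .config("spark.executor.extraJavaOptions",
                 "-XX:+UseG1GC -XX:MaxGCPauseMillis=200 -verbose:gc")  # collector choice + GC logging
         .getOrCreate())

df = spark.read.json("events.json")   # hypothetical input: the loading / DataFrame creation stage
df.count()                            # forces the retrieval work whose GC cost would be measured
```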
- Published
- 2019
- Full Text
- View/download PDF
10. A Robust Method for Averting Attack Scenarios in Location Based Services
- Author
-
Zhou Shijie, Ashir Javeed, Abdullah Aman Khan, and Saifullah Tumrani
- Subjects
Information privacy, Information sensitivity, Computer science, Location-based service, State (computer science), Service provider, Computer security, Private information retrieval - Abstract
The development of Location-Based Services (LBS) poses new challenges for the protection of users' privacy. Users have to send their current location information to the service provider, and a user's current location can expose critical information such as home or work addresses and other sensitive data; it is therefore important to protect users' private and sensitive information. As a solution to this problem, different techniques have been proposed over the past few years, among which one of the most widely used is dummy location generation. However, current dummy location generation methods hardly account for an attacker who has prior knowledge and spatiotemporal information about the user. In this paper, we identify shortcomings and vulnerabilities of the existing techniques and provide a robust solution to maintain the user's data privacy. Furthermore, we propose a robust dummy location generation method capable of averting the negative effects of an attacker with prior knowledge and spatiotemporal information. Additionally, we present some attacker strategies and remedies to avert such attacks. Experimental results show that our proposed method successfully preserves the user's private information where other state-of-the-art techniques fail to do so.
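A purely illustrative sketch of dummy-location generation follows: k-1 dummies are drawn only from cells the user could plausibly occupy at the query time (a stand-in for resisting an attacker's prior and spatiotemporal knowledge), so the real location does not stand out. The candidate model, value of k, and cell representation are assumptions, not the paper's method.

```python
import random

def generate_dummies(real_cell, plausible_cells, k=5):
    """plausible_cells: cells with non-negligible prior probability of the user
    being there at this time. Returns the anonymity set sent to the LBS provider."""
    candidates = [c for c in plausible_cells if c != real_cell]
    dummies = random.sample(candidates, k - 1)
    anonymity_set = dummies + [real_cell]
    random.shuffle(anonymity_set)          # provider cannot tell which location is real
    return anonymity_set

cells = [(10, 4), (10, 5), (11, 4), (11, 5), (12, 5), (12, 6)]
print(generate_dummies((11, 5), cells, k=4))
```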
- Published
- 2019
- Full Text
- View/download PDF
11. A Novel Distributed Duration-Aware LSTM for Large Scale Sequential Data Analysis
- Author
-
Yawen Liu, Zhou Shijie, Tao Cai, Tianquan Liu, Xia Zheng, and Dejiao Niu
- Subjects
Sequence, Adaptive memory, Scale (ratio), Computer science, Computation, Machine learning, Bottleneck, Recurrent neural network, Overhead (computing), Artificial intelligence, Duration (project management) - Abstract
Long short-term memory (LSTM) is an important model for sequential data processing. However, the large amount of matrix computation in the LSTM unit seriously slows training as the model grows larger and deeper and as more data become available. In this work, we propose an efficient distributed duration-aware LSTM (D-LSTM) for large-scale sequential data analysis. We improve LSTM training performance from two aspects. First, the duration of each sequence item is exploited to design a computationally efficient cell, the duration-aware LSTM (D-LSTM) unit. With an additional mask gate, the D-LSTM cell is able to perceive the duration of a sequence item and adopt an adaptive memory update accordingly. Second, on the basis of the D-LSTM unit, a novel distributed training algorithm is proposed in which the D-LSTM network is divided logically and multiple distributed neurons are introduced to perform the simpler, concurrent linear calculations in parallel. Unlike the physical division used in model parallelism, the logical split based on hidden neurons greatly reduces the communication overhead, which is a major bottleneck in distributed training. We evaluate the effectiveness of the proposed method on two video datasets. The experimental results show that our distributed D-LSTM greatly reduces training time and improves training efficiency for large-scale sequence analysis.
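One plausible reading of the mask gate, sketched below in PyTorch purely for illustration: a gate derived from an item's duration scales how strongly the cell state is updated, so short-lived items perturb memory less. This is an interpretation under stated assumptions, not the paper's exact equations.

```python
import torch
import torch.nn as nn

class DurationAwareCell(nn.Module):
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.cell = nn.LSTMCell(in_dim, hid_dim)
        self.mask_gate = nn.Linear(1, hid_dim)   # maps an item's duration to a per-unit mask

    def forward(self, x, duration, state):
        h_prev, c_prev = state
        h_new, c_new = self.cell(x, (h_prev, c_prev))
        m = torch.sigmoid(self.mask_gate(duration))   # (batch, hid_dim) values in (0, 1)
        c = m * c_new + (1 - m) * c_prev              # duration-adaptive memory update
        return h_new, c

cell = DurationAwareCell(8, 16)
h0, c0 = torch.zeros(1, 16), torch.zeros(1, 16)
h, c = cell(torch.randn(1, 8), torch.tensor([[0.3]]), (h0, c0))
```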
- Published
- 2019
- Full Text
- View/download PDF
12. An Improved TLD Tracking Algorithm for Fast-moving Object
- Author
-
Kecheng Gong, Zhou Shijie, Shu Leizhi, and Peng Yuanxi
- Subjects
Computer science ,business.industry ,Video tracking ,Computer vision ,Thermoluminescent dosimeter ,Artificial intelligence ,Object (computer science) ,Tracking (particle physics) ,business - Published
- 2018
- Full Text
- View/download PDF
13. Research on the error of the Fourier algorithm with DC filtering in fault current calculation of power system
- Author
-
Chen Li'an, Zhou Shijie, Wen Weihong, Cui Jian, Chen Benbin, Tian Hong, Zhang Shijia, Li Hongguang, and Li Guanfa
- Subjects
Computer science, Sampling (statistics), Filter (signal processing), Fault (power engineering), Electric power system, Fourier transform, Control theory, Initial phase, Range (statistics), Algorithm design, Algorithm, DC bias - Abstract
DC filtering is commonly used together with the Fourier algorithm to calculate root-mean-square (RMS) values of power-system fault currents, in order to eliminate the errors produced by the decaying DC component in the fault current. In this paper, a theoretical derivation shows that sampling errors can be amplified by the DC filtering itself, which can lower accuracy, so the effect of DC filtering should be considered comprehensively when it is applied in such circumstances. Numerical simulations with different numbers of sampling points and different initial phase angles of the fault current were also conducted to verify the conclusion. The results indicate that, within a certain range of the initial phase angle, DC filtering maximizes the amplification of the sampling error, weakening the improvement in accuracy brought by eliminating the DC component, or even worsening accuracy when too few sampling points are used for the DC filtering calculation.
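To make the underlying issue concrete, the short numerical sketch below shows how a decaying DC component biases the full-cycle Fourier estimate of the fundamental RMS; the waveform parameters are arbitrary, and the paper's specific DC-filtering algorithm and error analysis are not reproduced here.

```python
import numpy as np

f, N = 50.0, 24                      # fundamental frequency, samples per cycle
t = np.arange(N) / (f * N)           # one full cycle of samples
i_ac = 10.0 * np.sin(2 * np.pi * f * t + 0.5)   # true fundamental, 10 A peak
i_dc = 8.0 * np.exp(-t / 0.05)                  # decaying DC offset, tau = 50 ms
samples = i_ac + i_dc

# Full-cycle Fourier (DFT) estimate of the fundamental phasor magnitude
re = (2.0 / N) * np.sum(samples * np.cos(2 * np.pi * np.arange(N) / N))
im = (2.0 / N) * np.sum(samples * np.sin(2 * np.pi * np.arange(N) / N))
rms_est = np.hypot(re, im) / np.sqrt(2)

print(f"true RMS = {10.0 / np.sqrt(2):.3f} A, estimated RMS = {rms_est:.3f} A")
# The gap between the two values is the error the DC component introduces,
# which DC filtering aims to remove (at the cost analysed in the paper).
```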
- Published
- 2017
- Full Text
- View/download PDF
14. Adaptive Flooding Routing Algorithm in Unstructured P2P
- Author
-
Zhou Shijie, Luo Jia-Qing, Wu Chunjiang, Yang Xiao-qian, and Deng Yi-yi
- Subjects
Computer science, Distributed computing, Routing algorithm, Peer-to-peer, Flooding (computer networking), Relay, Peer-to-peer computing, Computer network - Abstract
Flooding is widely used in unstructured peer-to-peer (P2P) systems such as Gnutella. Although it is effective for content search, flooding among peers or super-peers causes a large volume of unnecessary traffic. To address this problem, we propose an efficient and adaptive search mechanism, the Adaptive Flooding Routing Algorithm (AFRA). AFRA provides the flexibility to adaptively adjust the number of relay neighbors and the TTL value to meet different performance requirements. The effectiveness of AFRA is demonstrated through simulation studies. Preliminary experimental results show that our AFRA solution reduces the flooding messages by about 65% while maintaining acceptably high search quality.
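A hedged sketch of the adaptive-flooding idea: instead of forwarding a query to every neighbor with a fixed TTL, each peer relays to only a subset of neighbors and shrinks that subset once results start appearing. The Peer structure and the adaptation rule below are illustrative assumptions, not AFRA's exact policy.

```python
import random
from dataclasses import dataclass, field

@dataclass
class Peer:
    pid: int
    content: set = field(default_factory=set)
    neighbors: list = field(default_factory=list)

def adaptive_flood(peer, query, ttl, relay_fraction=0.5, visited=None):
    visited = set() if visited is None else visited
    if ttl <= 0 or peer.pid in visited:
        return []
    visited.add(peer.pid)
    hits = [peer.pid] if query in peer.content else []
    if hits:
        relay_fraction /= 2           # results found nearby: relay to fewer neighbors
    k = max(1, int(len(peer.neighbors) * relay_fraction)) if peer.neighbors else 0
    for nxt in random.sample(peer.neighbors, k):
        hits += adaptive_flood(nxt, query, ttl - 1, relay_fraction, visited)
    return hits

a, b, c = Peer(1), Peer(2, {"song.mp3"}), Peer(3)
a.neighbors, b.neighbors, c.neighbors = [b, c], [a, c], [a, b]
print(adaptive_flood(a, "song.mp3", ttl=3))   # pids of peers that answered the query
```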
- Published
- 2006
- Full Text
- View/download PDF
15. Interconnected Peer-to-Peer Network: A Community Based Scheme
- Author
-
Qin Zhi-guang, Luo Xucheng, Zhao Xiaomei, and Zhou Shijie
- Subjects
Scheme (programming language), Computer science, Distributed computing, Joins, Construct (python library), Peer-to-peer, Task (computing), Order (exchange), Algorithm design, Routing (electronic design automation), Computer network - Abstract
Constructing an interconnected P2P network is an important task in a dynamic environment. This paper presents a new community-based construction scheme that can build an interconnected random peer-to-peer network (RP2P) in an efficient and correct way. The basic unit of RP2P is the logical "community", which one or more peers may join freely. Peers in different communities collaborate to link all the isolated communities into an interconnected network, which in turn allows the peers to communicate with each other directly or indirectly. In RP2P, each peer can enter or leave communities at will, so no extra tasks are imposed on the peer. The RP2P model is also formally described in this paper. To show how to construct such an RP2P, a community-based routing algorithm is advanced and analyzed. According to the formula used to compute the connectivity of RP2P, three factors affect connectivity: the number of peers, the number of communities, and the average number of communities each peer joins. Moreover, the paper argues that, according to the formula, connectivity can be improved by increasing the number of peers and the average number of communities each peer joins; conversely, connectivity decreases as the number of communities grows. The theoretical analysis shows that the proposed scheme can provide an interconnected P2P network.
- Published
- 2006
- Full Text
- View/download PDF
16. Cost-based intelligent intrusion detection and response: design and implement
- Author
-
Zhang Feng, Zhang xian-feng, Luo Xucheng, Zhou Shijie, Liu Jin-de, and Qin Zhi-guang
- Subjects
Guard (information security), Network security, Computer science, Intrusion detection system, Computer security model, Data flow diagram, Host-based intrusion detection system, Security service, Operating system, Host (network) - Abstract
A flexible intrusion detection and response system (ID&R) needs to maximize security while minimizing cost and responding automatically. CI²D&R, the cost-based intelligent intrusion detection and response system, is proposed; it was originally developed as a facility to deal with network-based attacks and to make effective responses automatically and intelligently. The networking environment deployed with CI²D&R consists of two major parts: the guard, which runs on the specific guarded host (GH), and the spy, which runs in the guarded network (GN). The components of CI²D&R are introduced, including intrusion detection, attack classification, damage analysis, attack path rebuilding, automatic resource safeguarding, calamity recovery, and the security officer. The several kinds of data flow in CI²D&R are discussed as well. While CI²D&R is only a prototype, some experimental results are also presented.
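To illustrate the cost-based idea, the sketch below takes an automatic response only when the expected damage prevented outweighs the cost of the response itself; the cost values and decision rule are assumptions for illustration, not CI²D&R's actual decision model.

```python
def choose_response(attack_severity, responses):
    """responses: list of (name, response_cost, fraction_of_damage_prevented) tuples."""
    best = None
    for name, cost, prevented in responses:
        benefit = attack_severity * prevented - cost   # expected damage avoided minus response cost
        if benefit > 0 and (best is None or benefit > best[1]):
            best = (name, benefit)
    return best[0] if best else "log-only"   # acting would cost more than it saves: just record it

options = [("drop-connection", 2.0, 0.6), ("isolate-host", 10.0, 0.95)]
print(choose_response(attack_severity=8.0, responses=options))
```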
- Published
- 2004
- Full Text
- View/download PDF
17. CI²D&R: Cost-based Intelligent Intrusion Detection and Response System
- Author
-
Luo Xucheng, Lu Qin, Liu Jin-de, Qin Zhi-guang, and Zhou Shijie
- Subjects
Data flow diagram, Host-based intrusion detection system, Computer science, Operating system, Intrusion detection system, Response system - Abstract
An intrusion detection and response system (ID&R) needs to maximize security while minimizing cost and taking responses automatically. CI²D&R, the Cost-based Intelligent Intrusion Detection and Response System, was originally developed as a facility for dealing with network-based attacks automatically and intelligently. This paper provides an overview of the CI²D&R architecture. The primary components of CI²D&R and their functions are also examined. Moreover, the data flow within CI²D&R is discussed and compared. Finally, related work is touched on and conclusions are drawn.
- Published
- 2003
- Full Text
- View/download PDF