22 results for "Honghui Chen"
Search Results
2. A Novel Data Placement and Retrieval Service for Cooperative Edge Clouds
- Author
-
Deke Guo, Ge Wang, Xin Li, Junjie Xie, Honghui Chen, and Chen Qian
- Subjects
Mobile edge computing, Computer Networks and Communications, Computer science, Routing table, Distributed computing, Cloud computing, Computer Science Applications, Data retrieval, Hardware and Architecture, Enhanced Data Rates for GSM Evolution, Data as a service, Routing (electronic design automation), Software, Edge computing, Information Systems
- Abstract
Mobile edge computing is a new paradigm in which computing and storage resources are placed at the edge of the Internet. Data placement and retrieval are fundamental services of mobile edge computing when a network of edge clouds collaboratively provides data services. However, existing methods such as distributed hash tables (DHTs) are insufficient for efficient data placement and retrieval across cooperative edge clouds. This paper presents GRED, a novel data placement and retrieval service for mobile edge computing that is efficient not only in load balance but also in routing path lengths and forwarding table sizes. GRED utilizes programmable switches to support a virtual-space-based DHT with only one overlay hop. Data location services can be easily implemented on top of GRED. We implement GRED in a P4 prototype, which provides a simple and efficient solution. Results from theoretical analysis, simulations, and experiments show that GRED can efficiently balance the load of edge clouds and can answer data queries quickly due to its low routing stretch. (An illustrative sketch of the virtual-space placement idea follows this record.)
- Published
- 2023
- Full Text
- View/download PDF
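The virtual-space placement idea mentioned in the abstract can be approximated with a minimal sketch: hash a data key to a point in a one-dimensional virtual space and store it at the edge cloud that owns the containing segment, so any node can locate data in a single logical hop. The class, names, and segment scheme below are illustrative assumptions, not the authors' P4 implementation.

```python
import hashlib
from bisect import bisect_left

class VirtualSpaceDHT:
    """Minimal sketch: each edge cloud owns a segment of a 1-D virtual space [0, 1)."""

    def __init__(self, edge_clouds):
        # Assign each edge cloud an equal-sized segment of the virtual space.
        self.edge_clouds = list(edge_clouds)
        n = len(self.edge_clouds)
        self.boundaries = [(i + 1) / n for i in range(n)]  # segment upper bounds

    def _to_point(self, key: str) -> float:
        # Hash the data key to a point in [0, 1).
        digest = hashlib.sha256(key.encode()).digest()
        return int.from_bytes(digest[:8], "big") / 2**64

    def locate(self, key: str) -> str:
        # One-hop lookup: the segment containing the hashed point identifies the owner.
        point = self._to_point(key)
        idx = bisect_left(self.boundaries, point)
        return self.edge_clouds[min(idx, len(self.edge_clouds) - 1)]

dht = VirtualSpaceDHT(["edge-A", "edge-B", "edge-C", "edge-D"])
print(dht.locate("sensor-42/frame-7"))   # deterministic owner for this key
```

Because the owner is computed directly from the key, no multi-hop overlay routing is needed; a real design would additionally rebalance segments to keep the load even.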
3. COIN: An Efficient Indexing Mechanism for Unstructured Data Sharing Systems
- Author
-
Deke Guo, Minmei Wang, Honghui Chen, Junjie Xie, Ge Wang, and Chen Qian
- Subjects
Computer Networks and Communications, Computer science, Search engine indexing, Cloud computing, Unstructured data, Computer Science Applications, Data sharing, Server, Bandwidth (computing), Electrical and Electronic Engineering, Software-defined networking, Software, Edge computing, Computer network
- Published
- 2022
- Full Text
- View/download PDF
4. Joint Optimization of Speed and Voltage Trajectories for Hybrid Electric Trams
- Author
-
Qingyuan Wang, Pengfei Sun, Zhuang Xiao, Jinsong Guo, Xiaoyun Feng, and Honghui Chen
- Subjects
Vehicle dynamics, Dynamic programming, Control and Systems Engineering, Control theory, Computer science, Energy consumption, Electrical and Electronic Engineering, Optimal control, Industrial and Manufacturing Engineering, Energy storage, Efficient energy use, Voltage, Power (physics)
- Abstract
A tram with an on-board energy storage system is a promising candidate for urban traffic systems. This paper presents the co-optimization of speed and voltage trajectories for a catenary-supercapacitor hybrid electric tram to minimize the energy drawn from traction substations. A source-catenary-load-storage integrated optimization model is proposed to capture the coupling between vehicle dynamics and power sources. Two methods are designed to address this problem. First, a hierarchical method based on dynamic programming is proposed, in which the optimal control problem is recast into two subproblems: speed trajectory optimization and supercapacitor voltage trajectory optimization. Efforts are made to reduce the computational burden. To further investigate the optimality of the hierarchical method, a coupled optimization method is designed using three-dimensional dynamic programming. The two methods are evaluated under different track conditions and compared with a method that optimizes only the speed profile, showing that the proposed methods improve energy efficiency and reduce catenary voltage fluctuation. The coupled method achieves lower energy consumption than the hierarchical method but requires considerably more computation time. Finally, the influences of tram weight and dynamic traffic constraints are investigated.
- Published
- 2021
- Full Text
- View/download PDF
5. Exploiting Reliable and Scalable Multicast Services in IaaS Datacenters
- Author
-
Junjie Xie, Bangbang Ren, Deke Guo, Jie Wu, Honghui Chen, and Tao Chen
- Subjects
Service (systems architecture), Information Systems and Management, Multicast, Computer Networks and Communications, Computer science, Quality of service, Computer Science Applications, Hardware and Architecture, Server, Scalability, Reliable multicast, Unicast, REMUS, Computer network
- Abstract
A large number of servers are interconnected by a specific datacenter network to deliver infrastructure as a service (IaaS). Multicast can jointly utilize network resources and reduce bandwidth consumption compared with individual unicasts. The source of a multicast service, however, does not need to be at a specific location as long as certain constraints are satisfied. This means a multicast can have uncertain sources, which can reduce network resource consumption beyond a traditional multicast service and further improve quality of service. In this paper, we propose ReMUS, a novel reliable multicast service with uncertain sources. The goal is to minimize the sum of the transfer cost and the recovery cost, although finding such a multicast is very challenging. We therefore design a source-based multicast method that exploits the flexibility of sources when no recovery nodes exist in the network. Furthermore, we design a general multicast method that jointly exploits the benefits of uncertain sources and recovery nodes to minimize the total cost of ReMUS. We conduct extensive evaluations on Internet2 and datacenter networks. The results indicate that our methods can efficiently realize reliable and scalable multicast with uncertain sources, irrespective of the network and multicast settings. To the best of our knowledge, we are the first to study reliable multicast under uncertain sources. (A simplified source-selection sketch follows this record.)
- Published
- 2021
- Full Text
- View/download PDF
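A toy illustration of "uncertain sources": among several candidate source servers, pick the one whose shortest paths to the receivers have the lowest total cost. This approximates the multicast transfer cost by summed unicast path costs (it over-counts shared links) and is only a rough stand-in for the paper's source-based multicast method; the topology and names are made up.

```python
import heapq

def shortest_path_costs(graph, src):
    """Dijkstra over an adjacency dict {node: {neighbor: cost}}."""
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u].items():
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

def pick_source(graph, candidate_sources, receivers):
    """Choose the candidate source minimizing the summed path cost to all receivers."""
    best_src, best_cost = None, float("inf")
    for src in candidate_sources:
        dist = shortest_path_costs(graph, src)
        cost = sum(dist.get(r, float("inf")) for r in receivers)
        if cost < best_cost:
            best_src, best_cost = src, cost
    return best_src, best_cost

# Hypothetical topology: switches s1..s3, candidate sources a and b, receivers r1 and r2.
graph = {
    "a": {"s1": 1}, "b": {"s3": 1},
    "s1": {"a": 1, "s2": 1}, "s2": {"s1": 1, "s3": 1, "r1": 1},
    "s3": {"b": 1, "s2": 1, "r2": 1}, "r1": {"s2": 1}, "r2": {"s3": 1},
}
print(pick_source(graph, ["a", "b"], ["r1", "r2"]))  # ('b', 5): b is the cheaper source overall
```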
6. HDS: A Fast Hybrid Data Location Service for Hierarchical Mobile Edge Computing
- Author
-
Junjie Xie, Deke Guo, Haofan Cai, Honghui Chen, Xiaofeng Shi, and Chen Qian
- Subjects
Mobile edge computing, Computer Networks and Communications, Computer science, Distributed computing, Geographic routing, Computer Science Applications, Data sharing, Data access, Scalability, Overhead (computing), Enhanced Data Rates for GSM Evolution, Electrical and Electronic Engineering, Software, Edge computing
- Abstract
Hierarchical mobile edge computing satisfies the stringent latency requirements of data access and processing for emerging edge applications. The data location service is a basic function that provides data storage and retrieval for these applications. However, a scalable, low-latency data location service for this environment remains under-explored; existing solutions such as DNS and DHT fail to meet the requirements of latency-sensitive applications. Therefore, in this article, we present HDS, a low-latency hybrid data-sharing framework. HDS divides the data location service into two parts: intra-region and inter-region. More precisely, we design a data-sharing protocol called Cuckoo Summary to achieve fast data localization within a region. For inter-region data sharing, we develop a geographic-routing-based scheme that achieves efficient data localization with only one overlay hop. The advantages of HDS include short response latency, low implementation overhead, and few false positives. We implement the HDS framework on a P4 prototype. The experimental results show that, compared to state-of-the-art solutions, our design achieves 50.21% shorter lookup paths and 92.75% fewer false positives. (A minimal cuckoo-hashing sketch follows this record.)
- Published
- 2021
- Full Text
- View/download PDF
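The intra-region idea can be pictured with a tiny cuckoo-hashing table that maps a data key to the identifier of the edge server holding it, using two candidate buckets and displacement on collisions. The structure, constants, and names below are illustrative assumptions, not the paper's Cuckoo Summary design.

```python
import hashlib

class TinyCuckooTable:
    """Two-choice cuckoo table mapping data keys -> edge server IDs (illustrative only)."""

    def __init__(self, capacity=16, max_kicks=32):
        self.capacity = capacity
        self.max_kicks = max_kicks
        self.slots = [None] * capacity  # each slot holds (key, server_id) or None

    def _bucket(self, key, seed):
        h = hashlib.blake2b(key.encode(), digest_size=8, salt=bytes([seed]) * 8)
        return int.from_bytes(h.digest(), "big") % self.capacity

    def insert(self, key, server_id):
        entry = (key, server_id)
        for _ in range(self.max_kicks):
            for seed in (1, 2):
                idx = self._bucket(entry[0], seed)
                if self.slots[idx] is None:
                    self.slots[idx] = entry
                    return True
            # Both buckets occupied: evict the occupant of the first bucket and retry with it.
            idx = self._bucket(entry[0], 1)
            self.slots[idx], entry = entry, self.slots[idx]
        return False  # table too full; a real design would resize or rebuild

    def lookup(self, key):
        for seed in (1, 2):
            slot = self.slots[self._bucket(key, seed)]
            if slot is not None and slot[0] == key:
                return slot[1]
        return None

table = TinyCuckooTable()
table.insert("video/clip-001", "edge-server-3")
print(table.lookup("video/clip-001"))  # -> 'edge-server-3'
```

Lookups touch at most two buckets, which is why cuckoo-style summaries give constant-time localization inside a region.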
7. Efficient Risk-Averse Request Allocation for Multi-Access Edge Computing
- Author
-
Honghui Chen, Yawei Zhao, Deke Guo, Xiaofeng Cao, and Yan Li
- Subjects
Base station, Mathematical optimization, Computer science, Modeling and Simulation, Resource management, Variance (accounting), Electrical and Electronic Engineering, Edge computing, 5G, Computer Science Applications
- Abstract
With the evolution of multi-access edge computing (MEC) and 5G communications, diverse services can be flexibly offered by multiple cache-enabled base stations (BSs) in dense deployments. This leads to an important request allocation problem: how to direct user requests among multiple BSs. Efficient request allocation faces several challenges arising from the risk associated with significant uncertainty in the MEC system. We therefore formulate request allocation in a risk-averse learning framework that minimizes the expected user response delay while controlling the risk, measured by the variance of the uncertain return. However, solving this risk-averse optimization can be very difficult due to the high computation cost and the limited computing resources in the MEC system. This motivates us to convert the proposed model into a finite-sum composition optimization problem and to propose a new variant of the composition stochastic variance-reduced gradient (C-SVRG) algorithm that accelerates parameter training by estimating the inner function via its linearization. Theoretical analysis proves a linear convergence rate and a significant complexity reduction for C-SVRG, and simulation results confirm its efficacy. (A plain SVRG sketch illustrating the variance-reduction idea follows this record.)
- Published
- 2021
- Full Text
- View/download PDF
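The variance-reduction mechanism that C-SVRG builds on can be illustrated with a plain SVRG loop on a finite-sum objective; the compositional inner-function estimation from the paper is omitted, and the least-squares problem below is only an assumed example, not the request-allocation model.

```python
import numpy as np

def svrg(grad_i, x0, n, step=0.2, epochs=30, inner=200, rng=None):
    """Plain SVRG for (1/n) * sum_i f_i(x); grad_i(i, x) is the i-th component gradient."""
    rng = rng or np.random.default_rng(0)
    x = x0.copy()
    for _ in range(epochs):
        snapshot = x.copy()
        full_grad = np.mean([grad_i(i, snapshot) for i in range(n)], axis=0)
        for _ in range(inner):
            i = rng.integers(n)
            # Variance-reduced direction: stochastic gradient corrected at the snapshot.
            direction = grad_i(i, x) - grad_i(i, snapshot) + full_grad
            x -= step * direction
    return x

# Assumed toy problem: least squares (1/n) * sum_i (a_i . x - b_i)^2 / 2 with unit-norm rows.
rng = np.random.default_rng(42)
A = rng.normal(size=(200, 5))
A /= np.linalg.norm(A, axis=1, keepdims=True)
x_true = rng.normal(size=5)
b = A @ x_true
grad_i = lambda i, x: (A[i] @ x - b[i]) * A[i]
x_hat = svrg(grad_i, np.zeros(5), n=200)
print(np.linalg.norm(x_hat - x_true))  # small residual: the estimate recovers x_true
```

The periodically refreshed full gradient at the snapshot is what removes the variance of the per-sample gradients and yields the linear convergence rate claimed for such methods.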
8. Minimal Fault-Tolerant Coverage of Controllers in IaaS Datacenters
- Author
-
Junjie Xie, Bangbang Ren, Honghui Chen, Xiaomin Zhu, and Deke Guo
- Subjects
Information Systems and Management, Computer Networks and Communications, Computer science, Distributed computing, Network virtualization, Cloud computing, Network topology, Networking hardware, Computer Science Applications, Hardware and Architecture, Server, Scalability, Overhead (computing), Software-defined networking
- Abstract
Large-scale datacenters are the key infrastructure of cloud computing. Inside a datacenter, a large number of servers are interconnected by a specific datacenter network to deliver infrastructure as a service (IaaS) to tenants. To realize novel cloud applications such as network virtualization and network isolation among tenants, the principle of software-defined networking (SDN) has been applied to datacenters. In this setting, multiple distributed controllers are deployed to offer a control plane over the entire datacenter and efficiently manage network usage. Despite such efforts, cloud datacenters still lack a scalable and resilient control plane. This paper therefore systematically studies the coverage problem of controllers, i.e., covering all network devices with the fewest controllers. More precisely, we tackle this essential problem from three aspects: minimal coverage, minimal fault-tolerant coverage, and minimal communication overhead among controllers. After modeling and analyzing these three problems, we design efficient approaches to approximate the optimal solution for each. Extensive evaluation results indicate that our approaches can significantly reduce the number of required controllers, improve the fault tolerance of the control plane, and reduce the communication overhead of state synchronization among controllers. The design methodologies proposed in this paper can be applied to cloud datacenters with other networking structures after minimal modification. (A greedy set-cover sketch of the coverage subproblem follows this record.)
- Published
- 2020
- Full Text
- View/download PDF
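The minimal-coverage subproblem is a set-cover instance, for which the standard greedy heuristic gives a logarithmic approximation. The sketch below assumes a candidate-placement model in which each candidate controller site covers a known set of switches; it is a generic baseline, not the paper's specific algorithm.

```python
def greedy_controller_cover(devices, coverage):
    """Greedy set cover: pick candidate controllers until every device is covered.

    coverage maps each candidate controller -> set of devices it can cover.
    """
    uncovered = set(devices)
    chosen = []
    while uncovered:
        # Pick the candidate covering the most still-uncovered devices.
        best = max(coverage, key=lambda c: len(coverage[c] & uncovered))
        gained = coverage[best] & uncovered
        if not gained:
            raise ValueError("some devices cannot be covered by any candidate")
        chosen.append(best)
        uncovered -= gained
    return chosen

# Hypothetical example: candidate controller sites and the switches each can reach.
devices = {"s1", "s2", "s3", "s4", "s5", "s6"}
coverage = {
    "c1": {"s1", "s2", "s3"},
    "c2": {"s3", "s4"},
    "c3": {"s4", "s5", "s6"},
    "c4": {"s2", "s5"},
}
print(greedy_controller_cover(devices, coverage))  # ['c1', 'c3']
```

A fault-tolerant variant would require every device to be covered by at least two chosen controllers, which can be handled by the same greedy loop over remaining coverage demands.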
9. Task Scheduling With UAV-Assisted Vehicular Cloud for Road Detection in Highway Scenario
- Author
-
Jiangfan Li, Cao Xiaofeng, Honghui Chen, Deke Guo, and Junjie Xie
- Subjects
Optimization problem, Computer Networks and Communications, Computer science, Real-time computing, Approximation algorithm, Cloud computing, Computer Science Applications, Scheduling (computing), Base station, Hardware and Architecture, Signal Processing, Task analysis, Information Systems
- Abstract
Vehicular cloud computing (VCC) has been utilized to enhance traffic management and road safety. By connecting with base stations (BSes), VCC can provide real-time dynamics to smart vehicles (SVs). However, areas outside the coverage of BSes become blind areas, where SVs cannot obtain real-time safety guarantees, especially on highways. In this article, we utilize unmanned aerial vehicles (UAVs) to assist the communication between SVs and BSes and solve this problem. In particular, we study interdependent task scheduling for highway driving-environment detection, where SVs, BSes, and UAVs cooperatively collect environmental data, schedule tasks, and feed back results. Two main problems arise in this scenario: 1) scheduling within the coverage of BSes and 2) rescheduling between BS coverage areas. We model both processes as constrained numerical optimization problems that minimize the request-response time. To this end, we propose a systematic scheduling scheme named Teso, which consists of two stages: 1) a designed approximation algorithm for scheduling and 2) an offloading algorithm for rescheduling. Extensive experiments show that Teso can significantly reduce the overall response time and improve system stability.
- Published
- 2020
- Full Text
- View/download PDF
10. An approach to measuring business-IT alignment maturity via DoDAF2.0
- Author
-
Yi Mao, Honghui Chen, Mengmeng Zhang, and Aimin Luo
- Subjects
Process management, Computer science, Business-IT alignment, Maturity (finance)
- Published
- 2020
- Full Text
- View/download PDF
11. A Hybrid-Preference Neural Model for Basket-Sensitive Item Recommendation
- Author
-
Wanyu Chen, Zhiqiang Pan, and Honghui Chen
- Subjects
Information retrieval, General Computer Science, Computer science, General Engineering, Recommender system, Session (web analytics), Field (computer science), Preference, Task (project management), Recurrent neural network, Sequential recommendation, Task analysis, Recurrent neural networks, General Materials Science, Basket-sensitive item recommendation, Electrical and Electronic Engineering, Attention mechanism
- Abstract
Basket-Sensitive Item Recommendation (BSIR) is a challenging task that aims to recommend an item to add to the current basket given a user's historical behaviors. The recommended item is supposed to be relevant to the items in the current basket. Previous works mainly produce recommendations based on the user's current basket, ignoring the inherent preference revealed by the user's long-term behaviors and failing to accurately distinguish the importance of items in the basket when detecting user intent. To tackle these issues, we propose a hybrid model, the Hybrid-Preference Neural Model (HPNM), where a user's inherent preference is captured by modeling the historical sequence of baskets and the recent preference is identified by focusing on the current basket. In detail, we apply an attention mechanism to distinguish the importance of items in a basket and generate an accurate basket representation. A GRU models the basket-level sequential information to obtain the user's long-term preference, and the representation of the current session is regarded as the user's short-term preference. We evaluate our proposals against state-of-the-art baselines for BSIR on two public datasets, TaFeng and Foursquare. The experimental results show that HPNM achieves clear improvements over the baselines in terms of HLU and Recall. In addition, we find that the attention mechanism yields a larger improvement over the baseline on test baskets with relatively few items. (A minimal attention-weighting sketch follows this record.)
- Published
- 2020
- Full Text
- View/download PDF
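The attention step described above (weighting items in the current basket to form a basket representation) can be sketched in a few lines of NumPy. The dot-product scoring, dimensions, and the use of a GRU state as the query are assumptions for illustration, not the HPNM architecture.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def basket_representation(item_embeddings, query):
    """Dot-product attention over basket items (illustrative).

    item_embeddings: (num_items, dim) embeddings of items in the current basket.
    query: (dim,) context vector, e.g. a user's long-term preference state.
    Returns a (dim,) weighted basket representation.
    """
    scores = item_embeddings @ query    # relevance of each item to the context
    weights = softmax(scores)           # attention weights over basket items
    return weights @ item_embeddings    # weighted sum = basket representation

rng = np.random.default_rng(0)
basket = rng.normal(size=(4, 8))   # 4 items in the current basket, 8-dim embeddings
user_state = rng.normal(size=8)    # e.g. last hidden state of a GRU over past baskets
print(basket_representation(basket, user_state).shape)  # (8,)
```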
12. A Time-Aware Graph Neural Network for Session-Based Recommendation
- Author
-
Honghui Chen, Yupu Guo, and Yanxiang Ling
- Subjects
Sequence recommendation, Theoretical computer science, General Computer Science, Graph neural networks, Computer science, General Engineering, Construct (python library), Session (web analytics), Field (computer science), Session-based recommendation, Collaborative filtering, Interval (graph theory), General Materials Science, Representation (mathematics), Generator (mathematics)
- Abstract
Recently, Graph Neural Networks (GNNs) have attracted increasing attention in the field of session-based recommendation due to their strong ability to capture complex interactive transitions within sessions. However, existing GNN-based models either ignore the user's long-term historical behaviors or fail to account for the impact of collaborative-filtering information from neighboring users on the current session, both of which are important for recommendation. In addition, previous work focuses only on the sequential relations between interactions and neglects the time interval information, which can reveal the correlations between different interactions. To tackle these problems, we propose a Time-Aware Graph Neural Network (TA-GNN) for session-based recommendation. Specifically, we first construct a user behavior graph by linking the items interacted with by the same user in time order. A time-aware generator is designed to model the correlations between different nodes of the user behavior graph by considering the time interval information. Moreover, items from the neighbor sessions of the current session are selected to build a neighborhood graph. The two graphs are then processed by two different modules to learn the representation of the current session, which is used to produce the final recommendation list. Comprehensive experiments show that our model outperforms state-of-the-art baselines on three real-world datasets. We also investigate the performance of TA-GNN for different numbers of historical interactions and different session lengths, finding that our model shows consistent advantages under different conditions. (A sketch of the user behavior graph construction follows this record.)
- Published
- 2020
- Full Text
- View/download PDF
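A minimal sketch of the user behavior graph construction: consecutive interactions of the same user are linked in time order, and the time gap is kept as an edge attribute so a downstream model could weight edges by the interval. The input layout is assumed for illustration, not the TA-GNN code.

```python
from collections import defaultdict

def build_user_behavior_graph(interactions):
    """Link each user's consecutively interacted items and record the time interval.

    interactions: iterable of (user_id, item_id, timestamp).
    Returns {(item_a, item_b): [time gaps]} aggregated over all users.
    """
    by_user = defaultdict(list)
    for user, item, ts in interactions:
        by_user[user].append((ts, item))

    edges = defaultdict(list)
    for events in by_user.values():
        events.sort()  # time order within each user
        for (t1, a), (t2, b) in zip(events, events[1:]):
            edges[(a, b)].append(t2 - t1)  # edge a -> b annotated with the interval
    return dict(edges)

logs = [
    ("u1", "item_3", 10), ("u1", "item_7", 40), ("u1", "item_2", 45),
    ("u2", "item_7", 12), ("u2", "item_2", 90),
]
print(build_user_behavior_graph(logs))
# {('item_3', 'item_7'): [30], ('item_7', 'item_2'): [5, 78]}
```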
13. Fully Projection-Free Proximal Stochastic Gradient Method With Optimal Convergence Rates
- Author
-
Cao Xiaofeng, Yan Li, and Honghui Chen
- Subjects
General Computer Science, Proximal stochastic gradient, Convergence (routing), Projection-free optimization, General Engineering, Applied mathematics, General Materials Science, Stochastic gradient method, Projection (set theory), Convergence optimization, Mathematics
- Abstract
Proximal stochastic gradient methods play an important role in large-scale machine learning and big data analysis. They iteratively update models within a feasible set until convergence, and the computational cost is usually high due to the projection onto the feasible set. To reduce this cost, many projection-free methods such as Frank-Wolfe have been proposed. However, these projection-free methods must solve a linear programming problem for every model update, which still incurs a high computational cost for a complex feasible set and can be prohibitive in practice. Motivated by this problem, we propose a fully projection-free proximal stochastic gradient method with two advantages over previous methods. First, it is highly efficient: instead of projecting directly, it finds an approximately correct projection point at very low computational cost. Second, it achieves tight and optimal convergence rates. Our theoretical analysis shows that the proposed method achieves convergence rates of O(1/√T) and O(log T/T) for convex and strongly convex functions, respectively, matching the known lower bounds. This paper therefore provides the insight that some loss of projection accuracy can significantly improve efficiency without impairing convergence rates. Finally, empirical studies show that the proposed method achieves more than a 5× speedup over previous methods. (A Frank-Wolfe baseline sketch follows this record.)
- Published
- 2020
- Full Text
- View/download PDF
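To make "projection-free" concrete, the sketch below shows the classic Frank-Wolfe approach that the abstract contrasts with: the projection of projected gradient descent is replaced by a linear-minimization oracle, here over an l1-ball. This is the generic baseline technique, not the authors' proposed approximate-projection method, and the toy objective is an assumption.

```python
import numpy as np

def frank_wolfe_l1(grad, x0, radius=1.0, steps=200):
    """Frank-Wolfe on the l1-ball: stays feasible via a linear oracle, never projecting."""
    x = x0.copy()
    for t in range(steps):
        g = grad(x)
        # Linear minimization oracle: argmin over the l1-ball of <g, s> is a signed vertex.
        i = np.argmax(np.abs(g))
        s = np.zeros_like(x)
        s[i] = -radius * np.sign(g[i])
        gamma = 2.0 / (t + 2)            # standard step-size schedule
        x = (1 - gamma) * x + gamma * s  # convex combination keeps x inside the ball
    return x

# Assumed toy objective: least squares restricted to the l1-ball (promotes sparsity).
rng = np.random.default_rng(1)
A = rng.normal(size=(50, 20))
b = A @ np.concatenate([np.array([0.6, -0.4]), np.zeros(18)])
grad = lambda x: A.T @ (A @ x - b) / len(b)
print(np.round(frank_wolfe_l1(grad, np.zeros(20)), 2))  # mass concentrates on the first two coordinates
```

The oracle here is trivial for an l1-ball; the paper's point is that for complex feasible sets even this oracle is expensive, motivating an approximate projection instead.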
14. A Joint Neural Network for Session-Aware Recommendation
- Author
-
Yanxiang Ling, Yupu Guo, Honghui Chen, and Duolong Zhang
- Subjects
Scheme (programming language), General Computer Science, Artificial neural network, Process (engineering), Computer science, Session-aware recommendation, General Engineering, Machine learning, Convolutional neural network, Recurrent neural network, Sequential recommendation, Mean reciprocal rank, General Materials Science, Artificial intelligence, Session (computer science), Representation (mathematics)
- Abstract
Session-aware recommendation is a special form of sequential recommendation in which a user's interactions before the current session are available. Recently, Recurrent Neural Network (RNN) based models have been widely and successfully used in sequential recommendation tasks. Previous works mainly focus on the interaction sequence of the current session without analyzing a user's long-term preferences. In this paper, we propose a joint neural network (JNN) for session-aware recommendation, which employs a Convolutional Neural Network (CNN) and an RNN to process the long-term historical interactions and the short-term sequential interactions, respectively. We then apply a fully connected neural network to learn the complex relationship between these two types of features and generate a unified representation of the current session. Finally, a recommendation score for a given item is produced by a bilinear scheme over the session representation. We conduct experiments on three public datasets, showing that JNN outperforms state-of-the-art baselines on all datasets in terms of Recall and Mean Reciprocal Rank (MRR). These results indicate that proper handling of historical interactions can improve recommendation effectiveness. The experimental results also show that JNN's advantage is most pronounced on samples with a short current session or long historical interactions.
- Published
- 2020
- Full Text
- View/download PDF
15. Validation of Distributed SDN Control Plane Under Uncertain Failures
- Author
-
Lei Liu, Chen Qian, Junjie Xie, Deke Guo, Honghui Chen, and Bangbang Ren
- Subjects
Scheme (programming language), Computer Networks and Communications, Computer science, Network topology, Computer Science Applications, Reliability engineering, Exponential function, Process control, Electrical and Electronic Engineering, Routing (electronic design automation), Throughput (business), Software
- Abstract
The design of a distributed control plane is an essential part of SDN. While there is an urgent need to verify the control plane, little is known about how to validate that it offers assurable performance, especially across various failures. Such validation is hard due to two fundamental challenges. First, the number of potential failure scenarios can be exponential or even non-enumerable. Second, modeling the performance change when the control plane employs different failure recovery strategies remains an open problem. In this paper, we first characterize the validation of the distributed control plane as a robust optimization problem and propose a robust validation framework to verify whether a control plane provides assurable performance across various failure scenarios and multiple failure recovery strategies. We then prove that identifying an optimal recovery strategy is NP-hard after developing an optimization model of failure recovery. Accordingly, we design two efficient failure recovery strategies that closely approximate the optimal strategy and exhibit good performance against potential failures. Furthermore, we design a capacity augmentation scheme for the case where the control plane cannot accommodate the worst failure scenario even with the optimal failure recovery strategy. We have conducted extensive evaluations on an SDN testbed and large-scale simulations over real network topologies. The evaluation results show the efficiency and effectiveness of the proposed validation framework.
- Published
- 2019
- Full Text
- View/download PDF
16. Collaborative Learning for Answer Selection in Question Answering
- Author
-
Pengfei Zhang, Honghui Chen, Xiaoyan Kui, and Taihua Shao
- Subjects
General Computer Science, Computer science, Collaborative learning, Convolutional neural network, Knowledge extraction, Answer selection, Question answering, Selection (linguistics), General Materials Science, Natural language processing, Artificial neural network, Deep learning, General Engineering, Task analysis, Embedding, Artificial intelligence, Sentence
- Abstract
Answer selection is an essential step in a question answering (QA) system. Traditional methods for this task mainly focus on developing linguistic features, which are limited in practice. With the great success of deep learning in distributed text representation, deep learning-based answer selection has been well investigated, but existing approaches mainly employ only one neural network, i.e., a convolutional neural network (CNN) or long short-term memory (LSTM), and thus fail to extract some rich sentence features. In this paper, we propose a collaborative learning-based answer selection model (QA-CL), which uses a parallel training architecture in which a CNN and a bidirectional LSTM (BiLSTM) collaboratively learn from the initial word vector matrix of the sentence at the same time. In addition, we extend our model by incorporating the sentence embedding generated by QA-CL into a joint distributed sentence representation using a strong unsupervised baseline, weight removal (WR), yielding the QA-CLWR model. We evaluate our proposals on a popular QA dataset, InsuranceQA. The experimental results indicate that our answer selection methods outperform several strong baselines. Finally, we investigate the models' performance with respect to different question types and find that question types with a medium number of questions achieve better and more stable performance than those with very many or very few questions.
- Published
- 2019
- Full Text
- View/download PDF
17. Modeling the dynamic alignment of business and information systems via the lens of human-centered architecture evolution
- Author
-
Mengmeng Zhang, Shuanghui Yi, Honghui Chen, Aimin Luo, Junxian Liu, and Xiaoxue Zhang
- Published
- 2021
- Full Text
- View/download PDF
18. A Systematic Review of Business-IT Alignment Research With Enterprise Architecture
- Author
-
Honghui Chen, Mengmeng Zhang, and Aimin Luo
- Subjects
Business-IT alignment, Knowledge management, General Computer Science, Status quo, Computer science, Perspective (graphical), Review, General Engineering, Enterprise architecture, 5W1H, Market research, Systematic review, General Materials Science
- Abstract
Because of the dynamic environments of business and IT, achieving alignment between the two fields has become challenging. With its multiple viewpoints and artifacts, the discipline of enterprise architecture (EA) is often regarded as an effective methodology for dealing with business-IT alignment (BITA) issues, and it has therefore attracted considerable research. This article conducts a systematic literature review of BITA research using EA. Six questions are answered through a 5W1H (When, Who, What, Why, Where, How) analysis; these questions aim to develop a thorough understanding of BITA from the perspective of EA, to identify weak points in the status quo, and to outline future research directions.
- Published
- 2018
- Full Text
- View/download PDF
19. A coevolutionary framework of business-IT alignment via the lens of enterprise architecture
- Author
-
Menglong Lin, Shuanghui Yi, Mengmeng Zhang, Tao Chen, Honghui Chen, and Xiaoxue Zhang
- Published
- 2020
- Full Text
- View/download PDF
20. Resource allocation approach to associate business-IT alignment to enterprise architecture design
- Author
-
Junxian Liu, Honghui Chen, and Mengmeng Zhang
- Subjects
Process management, Computer science, Corporate governance, Enterprise architecture, Business architecture, Metric (mathematics), Systems architecture, Resource allocation, Design process, Business-IT alignment
- Abstract
Enterprise architecture (EA) development is a well-established way to address the business-IT alignment (BITA) issue. However, most EA design frameworks are inadequate for allocating IT resources, which is an important metric of BITA maturity. In this work, the idea of IT resource allocation is combined with the EA design process in order to extend prior EA research on BITA and to demonstrate EA's capability for implementing IT governance. As an effective resource allocation method, portfolio decision analysis (PDA) is used to align the business functions of the business architecture with the applications of the system architecture. Finally, this paper presents an illustrative case using the proposed framework.
- Published
- 2019
- Full Text
- View/download PDF
21. The Dynamic Bloom Filters
- Author
-
Xueshan Luo, Honghui Chen, Jie Wu, Ye Yuan, and Deke Guo
- Subjects
Theoretical computer science, Computer science, Dynamic data, Cardinal number, Bloom filter, Data structure, Upper and lower bounds, Computer Science Applications, Set (abstract data type), Cardinality, Computational Theory and Mathematics, Filter (video), Algorithm design, Algorithm, Information Systems
- Abstract
A Bloom filter is an effective, space-efficient data structure for concisely representing a set and supporting approximate membership queries. Traditionally, the Bloom filter and its variants focus on representing a static set and decreasing the false positive probability to a sufficiently low level. By investigating mainstream applications based on the Bloom filter, we observe that dynamic data sets are more common and important than static sets. However, existing variants of the Bloom filter cannot support dynamic data sets well. To address this issue, we propose dynamic Bloom filters to represent dynamic sets as well as static sets, and design the necessary item insertion, membership query, item deletion, and filter union algorithms. The dynamic Bloom filter controls the false positive probability at a low level by expanding its capacity as the set cardinality increases. Through comprehensive mathematical analysis, we show that the dynamic Bloom filter uses less expected memory than the Bloom filter when representing dynamic sets with an upper bound on set cardinality, and that it is more stable than the Bloom filter, owing to infrequent reconstruction, when addressing dynamic sets without an upper bound on set cardinality. Moreover, the analysis results hold for stand-alone applications as well as distributed applications. (A minimal capacity-expansion sketch follows this record.)
- Published
- 2010
- Full Text
- View/download PDF
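The core mechanism described above, appending a fresh fixed-size Bloom filter once the active one fills up so the false-positive rate stays controlled as the set grows, can be sketched as follows. The sizing constants and hash scheme are illustrative choices, not the paper's exact construction.

```python
import hashlib, math

class BloomFilter:
    def __init__(self, capacity, fp_rate=0.01):
        self.m = max(1, int(-capacity * math.log(fp_rate) / math.log(2) ** 2))  # bits
        self.k = max(1, int(round(self.m / capacity * math.log(2))))            # hash count
        self.bits = bytearray((self.m + 7) // 8)
        self.count = 0

    def _positions(self, item):
        d = hashlib.sha256(item.encode()).digest()
        h1 = int.from_bytes(d[:8], "big")
        h2 = int.from_bytes(d[8:16], "big") | 1
        return [(h1 + i * h2) % self.m for i in range(self.k)]  # double hashing

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)
        self.count += 1

    def __contains__(self, item):
        return all(self.bits[p // 8] >> (p % 8) & 1 for p in self._positions(item))

class DynamicBloomFilter:
    """Append a new fixed-size Bloom filter whenever the active one reaches capacity,
    keeping the false-positive probability controlled as the set grows."""

    def __init__(self, slice_capacity=1000, fp_rate=0.01):
        self.slice_capacity, self.fp_rate = slice_capacity, fp_rate
        self.filters = [BloomFilter(slice_capacity, fp_rate)]

    def add(self, item):
        if self.filters[-1].count >= self.slice_capacity:
            self.filters.append(BloomFilter(self.slice_capacity, self.fp_rate))
        self.filters[-1].add(item)

    def __contains__(self, item):
        return any(item in f for f in self.filters)

dbf = DynamicBloomFilter(slice_capacity=3)
for name in ["a", "b", "c", "d", "e"]:
    dbf.add(name)
print("d" in dbf, "zzz" in dbf)  # True False (the second is a false positive only with small probability)
```

Membership queries probe every slice, so the overall false-positive probability is roughly the union over slices; keeping each slice within its design capacity is what bounds it.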
22. Improving the Efficiency of Localization-Oriented Network Adjustment in Wireless Sensor Networks
- Author
-
Deke Guo, Honghui Chen, Tao Chen, Zheng Yang, and Xueshan Luo
- Subjects
Computer science, Wireless ad hoc network, Distributed computing, Topology (electrical circuits), Network topology, Computer Science Applications, Key distribution in wireless sensor networks, Software deployment, Modeling and Simulation, Wireless, Electrical and Electronic Engineering, Focus (optics), Wireless sensor network
- Abstract
In real-world deployments, some wireless sensor networks are not entirely localizable. A remedy for such a situation is to adjust the network so that it is well prepared for localization. Previous studies mainly focus on adding measurable edges to the network to enhance localizability, but they introduce redundant distance measurements. This letter presents a new theoretical finding on 2-connected graphs and proposes an accompanying network adjustment approach that significantly reduces the number of added edges from (d-1)2n to only 2n. Simulation results show that our approach outperforms previous work in adjustment efficiency.
- Published
- 2011
- Full Text
- View/download PDF