2,472 results on '"caching"'
Search Results
2. A Novel Approach for Improving XML Querying over Wireless Broadcast Channels.
- Author
-
Ahlawat, Vinay Kumar, Agarwal, Gaurav, Goel, Vikas, Sanghi, Akash, Choi, Sun Young, Hui, Kueh Lee, and Sain, Mangal
- Abstract
The querying of large XML data over wireless broadcast channels can lead to inefficient bandwidth utilization, significant latency, and wasteful energy usage. This paper proposes a scheme to improve XML querying over wireless broadcast channels in order to address these issues. Several techniques, including partitioning, load balancing, and query routing, are combined into one approach. The proposed scheme partitions the XML data stream into several partitions based on criteria such as document size, type, or content. Each partition is routed to a separate channel to balance the load across the wireless broadcast channels. A query routing mechanism directs queries to the channel, or combination of channels, that holds the relevant XML data partition. This study simulates, evaluates, and compares the proposed scheme's performance. The comparison with existing schemes demonstrates a considerable reduction in access time for XML querying via wireless broadcast channels. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
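The partition-and-route idea in entry 2 can be sketched in a few lines. The hash-based assignment, field names, and two-channel setup below are illustrative assumptions, not the paper's actual scheme:

```python
# Hypothetical sketch: partition documents across broadcast channels by a
# criterion (here, document type), then route queries to the matching channel.
# Within one Python process, hash() of equal strings is consistent, so the
# router lands on the same channel the partitioner used.

def partition_by_type(documents, num_channels):
    """Assign each document to a channel by hashing its type."""
    channels = {c: [] for c in range(num_channels)}
    for doc in documents:
        channels[hash(doc["type"]) % num_channels].append(doc)
    return channels

def route_query(query_type, num_channels):
    """Direct a query to the channel holding documents of its type."""
    return hash(query_type) % num_channels

docs = [{"id": 1, "type": "invoice"}, {"id": 2, "type": "report"},
        {"id": 3, "type": "invoice"}]
channels = partition_by_type(docs, num_channels=2)
hits = channels[route_query("invoice", num_channels=2)]
assert [d["id"] for d in hits if d["type"] == "invoice"] == [1, 3]
```

A real scheme would partition by size or content as well and balance load across channels; this only shows the routing invariant.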
3. Caching Historical Embeddings in Conversational Search.
- Author
-
Frieder, Ophir, Mele, Ida, Muntean, Cristina Ioana, Nardini, Franco Maria, Perego, Raffaele, and Tonellotto, Nicola
- Subjects
SEMANTICS, MOTIVATION (Psychology), CONFERENCES & conventions
- Abstract
Rapid response, namely, low latency, is fundamental in search applications; it is particularly so in interactive search sessions, such as those encountered in conversational settings. An observation with a potential to reduce latency asserts that conversational queries exhibit a temporal locality in the lists of documents retrieved. Motivated by this observation, we propose and evaluate a client-side document embedding cache, improving the responsiveness of conversational search systems. By leveraging state-of-the-art dense retrieval models to abstract document and query semantics, we cache the embeddings of documents retrieved for a topic introduced in the conversation, as they are likely relevant to successive queries. Our document embedding cache implements an efficient metric index, answering nearest-neighbor similarity queries by estimating the approximate result sets returned. We demonstrate the efficiency achieved using our cache via reproducible experiments based on Text Retrieval Conference Conversational Assistant Track datasets, achieving a hit rate of up to 75% without degrading answer quality. Our achieved high cache hit rates significantly improve the responsiveness of conversational systems while likewise reducing the number of queries managed on the search back-end. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
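The client-side embedding cache in entry 3 answers nearest-neighbour similarity queries over cached embeddings. A minimal sketch of that idea, with an exhaustive cosine scan standing in for the paper's metric index (the class name and threshold are assumptions):

```python
import math

class EmbeddingCache:
    """Hypothetical client-side cache keyed by embedding similarity: a query
    hits if it is close enough to the embedding of a previously cached topic."""

    def __init__(self, threshold=0.9):
        self.threshold = threshold
        self.entries = []  # list of (embedding, cached documents)

    @staticmethod
    def _cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb)

    def get(self, embedding):
        # Nearest-neighbour lookup: return the best entry above the threshold.
        best, best_sim = None, self.threshold
        for emb, docs in self.entries:
            sim = self._cosine(embedding, emb)
            if sim >= best_sim:
                best, best_sim = docs, sim
        return best

    def put(self, embedding, docs):
        self.entries.append((embedding, docs))

cache = EmbeddingCache(threshold=0.9)
cache.put([1.0, 0.0], ["doc-a", "doc-b"])
assert cache.get([0.99, 0.05]) == ["doc-a", "doc-b"]  # near-duplicate query: hit
assert cache.get([0.0, 1.0]) is None                  # unrelated topic: miss
```

This captures why conversational queries cache well: successive turns on one topic produce near-identical embeddings, so the second turn is served locally.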
4. The hidden truth: unexpected acorn caching sites by Eurasian Jays (Garrulus glandarius L.) re-examined.
- Author
-
Wróbel, Aleksandra, Kurek, Przemysław, and Bobiec, Andrzej
- Subjects
-
COMPULSIVE hoarding, ENGLISH oak, SCOTS pine, OAK, SEEDS
- Abstract
Eurasian Jays (Garrulus glandarius) typically store seeds on the ground in shallow caches, promoting tree recruitment. However, there is speculation that Eurasian Jays occasionally store a portion of seeds in microhabitats unsuitable for proper germination. Here, we report that unexpected caching sites used by Eurasian Jays can be much more widespread than previously considered, and despite their accidental character they appear to be a durable aspect of the Eurasian Jay's hoarding behavior. Out of 259 removed acorns of Pedunculate Oak (Quercus robur), we localized 31 consumed and 222 stored acorns. Six experimental acorns (3% of stored acorns) were found stored by jays in unexpected caching sites: (i) above the ground on individuals of Scots Pine (Pinus sylvestris), (ii) inside the woody stems of Reynoutria sp. individuals, (iii) in a rotten trunk, and (iv) among ruin debris. Our findings suggest the need to revise our understanding of so-called unexpected caching in Eurasian Jays. This highlights a previously overlooked aspect of oak-jay interactions, offering a valuable piece to the puzzle. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
5. Analytical modeling of cache-enabled heterogeneous networks using Poisson cluster processes
- Author
-
Junhui Zhao, Lihua Yang, Xiaoting Ma, and Ziyang Zhang
- Subjects
Heterogeneous networks, Millimeter wave, Poisson cluster processes, Caching, Stochastic geometry, Information technology, T58.5-58.64
- Abstract
The dual-frequency Heterogeneous Network (HetNet), combining sub-6 GHz networks with Millimeter Wave (mmWave), achieves high user data rates in networks with hotspots. Cache-enabled HetNets with hotspots are investigated using an analytical framework in which Macro Base Stations (MBSs) and hotspot centers are treated as two independent homogeneous Poisson Point Processes (PPPs), while the locations of Small Base Stations (SBSs) and users are modeled as two Poisson Cluster Processes (PCPs). Under this PCP-based model and the Most Popular Caching (MPC) scheme, we propose a cache-enabled association strategy for HetNets with limited storage capacity. The association probability and coverage probability are derived explicitly, and Monte Carlo simulation verifies the results. The simulations show the influence of the antenna configuration and of the cache capacities of MBSs and SBSs on network performance. Our analysis also enables numerical optimization of the association probability with respect to the ratio of the standard deviations of the SBS and user distributions.
- Published
- 2024
- Full Text
- View/download PDF
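Entry 5 models base stations as point processes and verifies its derivations by Monte Carlo simulation. A small sketch of how such models are typically simulated, using Knuth's Poisson sampler and a Thomas-style cluster process (all parameters are illustrative, not the paper's):

```python
import math
import random

random.seed(0)

def poisson(lam):
    """Knuth's method for a Poisson-distributed count (fine for moderate lam)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

def sample_ppp(intensity, side):
    """Homogeneous PPP on [0, side]^2: Poisson count, i.i.d. uniform points."""
    n = poisson(intensity * side * side)
    return [(random.uniform(0, side), random.uniform(0, side)) for _ in range(n)]

def sample_pcp(parent_intensity, mean_offspring, sigma, side):
    """Poisson cluster process (Thomas-like): PPP parents, each with a
    Poisson number of Gaussian-scattered offspring around it."""
    parents = sample_ppp(parent_intensity, side)
    offspring = []
    for (px, py) in parents:
        for _ in range(poisson(mean_offspring)):
            offspring.append((random.gauss(px, sigma), random.gauss(py, sigma)))
    return parents, offspring

# Parents play the role of hotspot centers; offspring the role of SBSs/users.
parents, sbs = sample_pcp(parent_intensity=0.01, mean_offspring=5,
                          sigma=1.0, side=100)
# Expected offspring count: 0.01 * 100^2 * 5 = 500 on average.
```

Coverage and association probabilities are then estimated by averaging indicator functions over many such realizations.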
6. The Impact of Federated Learning on Improving the IoT-Based Network in a Sustainable Smart Cities.
- Author
-
Naeem, Muhammad Ali, Meng, Yahui, and Chaudhary, Sushank
- Subjects
FEDERATED learning, SMART cities, SUSTAINABLE urban development, INTERNET of things, ENERGY consumption
- Abstract
The caching mechanism of federated learning in smart cities is vital for improving data handling and communication in IoT environments. Because it facilitates learning among separately connected devices, federated learning makes it possible to quickly update caching strategies in response to data usage without invading users' privacy. Federated-learning-based caching promotes the dynamism, effectiveness, and data reachability that smart city services need to function properly. In this paper, a new caching strategy for Named Data Networking (NDN) based on federated learning in smart cities' IoT contexts is proposed and described. The proposed strategy applies a federated learning technique to cache content more effectively based on its popularity, thereby improving network performance. The proposed strategy was compared against benchmark schemes in terms of cache hit ratio, content retrieval delay, and energy utilization. The results show that the suggested caching strategy performs far better than its counterparts on all three metrics. These enhancements result in smarter and more efficient smart city networks, a clear indication of how federated learning can revolutionize content caching in NDN-based IoT. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
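Entry 6 does not spell out its algorithm, but the core idea of federated popularity-based caching can be sketched: nodes share only aggregate popularity statistics rather than raw request logs, and the aggregator averages them FedAvg-style to drive cache decisions. Everything below (function names, normalization, equal node weighting) is an assumption:

```python
from collections import Counter

def local_update(requests):
    """Per-node popularity shares, normalized so nodes are weighted equally."""
    counts = Counter(requests)
    total = sum(counts.values())
    return {name: c / total for name, c in counts.items()}

def federated_aggregate(updates):
    """Average the per-node popularity distributions (FedAvg-style)."""
    merged = Counter()
    for update in updates:
        for name, share in update.items():
            merged[name] += share / len(updates)
    return merged

def cache_decision(popularity, cache_size):
    """Cache the globally most popular contents."""
    return [name for name, _ in popularity.most_common(cache_size)]

node_a = local_update(["video1", "video1", "video2"])  # raw logs stay local
node_b = local_update(["video1", "video3", "video3"])
cached = cache_decision(federated_aggregate([node_a, node_b]), cache_size=2)
assert cached[0] == "video1"  # popular at both nodes, so cached first
```

The privacy argument in the abstract corresponds to the fact that only the normalized count dictionaries leave each node.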
7. FlexpushdownDB: rethinking computation pushdown for cloud OLAP DBMSs.
- Author
-
Yang, Yifei, Yu, Xiangyao, Serafini, Marco, Aboulnaga, Ashraf, and Stonebraker, Michael
- Abstract
Modern cloud-native OLAP databases adopt a storage-disaggregation architecture that separates the management of computation and storage. A major bottleneck in such an architecture is the network connecting the computation and storage layers. Computation pushdown is a promising solution to tackle this issue, which offloads some computation tasks to the storage layer to reduce network traffic. This paper presents FlexPushdownDB (FPDB), where we revisit the design of computation pushdown in a storage-disaggregation architecture, and then introduce several optimizations to further accelerate query processing. First, FPDB supports hybrid query execution, which combines local computation on cached data and computation pushdown to cloud storage at a fine granularity. Within the cache, FPDB uses a novel Weighted-LFU cache replacement policy that takes into account the cost of pushdown computation. Second, we design adaptive pushdown as a new mechanism to avoid throttling the storage-layer computation during pushdown, which pushes the request back to the computation layer at runtime if the storage-layer computational resource is insufficient. Finally, we derive a general principle to identify pushdown-amenable computational tasks, by summarizing common patterns of pushdown capabilities in existing systems, and further propose two new pushdown operators, namely, selection bitmap and distributed data shuffle. Evaluation on SSB and TPC-H shows each optimization can improve the performance by 2.2×, 1.9×, and 3×, respectively. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
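FPDB's Weighted-LFU policy weights access frequency by the cost of recomputing an entry via pushdown. A toy sketch of that scoring idea (the real policy's bookkeeping is surely more involved; class and parameter names are invented):

```python
class WeightedLFU:
    """Hypothetical sketch of a Weighted-LFU policy: an entry's eviction score
    is access_count * weight, where the weight models the cost of re-deriving
    the entry via pushdown; the lowest-scoring entry is evicted first."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = {}  # key -> [access_count, weight]

    def access(self, key, weight=1.0):
        if key in self.entries:
            self.entries[key][0] += 1
            return True  # hit
        if len(self.entries) >= self.capacity:
            victim = min(self.entries,
                         key=lambda k: self.entries[k][0] * self.entries[k][1])
            del self.entries[victim]
        self.entries[key] = [1, weight]
        return False  # miss

cache = WeightedLFU(capacity=2)
cache.access("cheap", weight=1.0)    # miss, inserted
cache.access("costly", weight=10.0)  # miss, inserted
cache.access("cheap", weight=1.0)    # hit: count 2, score 2.0
cache.access("new", weight=1.0)      # full: evicts "cheap" (score 2.0 < 10.0)
assert "costly" in cache.entries and "cheap" not in cache.entries
```

Plain LFU would have evicted the less-frequently-accessed "costly" entry; the cost weight keeps expensive-to-recompute data cached, which is the point of the policy.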
8. Software-Defined Named Data Networking in Literature: A Review.
- Author
-
Alhawas, Albatool and Belghith, Abdelfettah
- Subjects
NETWORK performance, RESEARCH questions, SCALABILITY, OPEN-ended questions, INTERNET of things
- Abstract
This paper presents an in-depth review of software-defined named data networking (SD-NDN), a transformative approach in network architectures poised to deliver substantial benefits. By addressing the limitations inherent in traditional host-centric network architectures, SD-NDN offers improvements in network performance, scalability, and efficiency. The paper commences with an overview of named data networking (NDN) and software-defined networking (SDN), the two fundamental building blocks of SD-NDN. It then explores the specifics of integrating NDN with SDN, illustrating examples of various SD-NDN models. These models are designed to leverage SDN for NDN routing, caching, and forwarding. The paper concludes by proposing potential strategies for further integration of SDN and NDN and some open research questions. These proposed strategies aim to stimulate further exploration and innovation in the field of SD-NDN. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
9. To Cache or Not to Cache.
- Author
-
Lyons Jr., Steven and Rangaswami, Raju
- Subjects
-
ALGORITHMS, INSTITUTIONAL repositories, STORAGE, PERCENTILES, CACHE memory
- Abstract
Unlike conventional CPU caches, non-datapath caches, such as host-side flash caches which are extensively used as storage caches, have distinct requirements. While every cache miss results in a cache update in a conventional cache, non-datapath caches allow for the flexibility of selective caching, i.e., the option of not having to update the cache on each miss. We propose a new, generalized, bimodal caching algorithm, Fear Of Missing Out (FOMO), for managing non-datapath caches. Being generalized has the benefit of allowing any datapath cache replacement policy, such as LRU, ARC, or LIRS, to be augmented by FOMO to make these datapath caching algorithms better suited for non-datapath caches. Operating in two states, FOMO is selective—it selectively disables cache insertion and replacement depending on the learned behavior of the workload. FOMO is lightweight and tracks inexpensive metrics in order to identify these workload behaviors effectively. FOMO is evaluated using three different cache replacement policies against the current state-of-the-art non-datapath caching algorithms, using five different storage system workload repositories (totaling 176 workloads) for six different cache size configurations, each sized as a percentage of each workload's footprint. Our extensive experimental analysis reveals that FOMO can improve upon other non-datapath caching algorithms across a range of production storage workloads, while also reducing the write rate. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
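FOMO's two-state selective insertion can be illustrated with a toy wrapper around an LRU: insertion is enabled only when recently missed keys are being re-missed, i.e. when the cache is actually "missing out". This is a loose, hypothetical rendering of the idea, not the published algorithm:

```python
from collections import OrderedDict

class FOMOLike:
    """Hypothetical two-state selective-insertion wrapper around an LRU cache.
    On a miss, the key is inserted only if it was itself missed recently,
    so one-touch items never pollute the cache (unlike a datapath cache,
    a non-datapath cache is free to skip the update)."""

    def __init__(self, capacity, window=8):
        self.lru = OrderedDict()
        self.capacity = capacity
        self.recent_misses = OrderedDict()  # ghost list of missed keys
        self.window = window

    def access(self, key):
        if key in self.lru:
            self.lru.move_to_end(key)
            return True  # hit
        # Miss: a re-miss of a recently missed key suggests insertion pays off.
        inserting = key in self.recent_misses
        self.recent_misses[key] = True
        if len(self.recent_misses) > self.window:
            self.recent_misses.popitem(last=False)
        if inserting:
            if len(self.lru) >= self.capacity:
                self.lru.popitem(last=False)
            self.lru[key] = True
        return False

cache = FOMOLike(capacity=2)
cache.access("a")              # miss; "a" not re-missed yet, so not inserted
assert "a" not in cache.lru
cache.access("a")              # missed again recently: now inserted
assert "a" in cache.lru
```

Skipping the first-touch insertion is also what reduces the write rate mentioned in the abstract, since every avoided insertion is an avoided flash write.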
10. Upscaling Current Data Caching and Prefetching Strategies for Online Databases in Nigeria.
- Author
-
Kehinde, Olatunji Austine, Zulkifli, Zahidah, Salwana, Ely, Putri Junurham, Nur Leyni Nilam, Mahmud, Murni, and Bahmaid, Salem
- Subjects
ONLINE databases, PUBLIC libraries, PUBLIC institutions, COMPUTER science, SAMPLING (Process)
- Abstract
This study investigated upscaling current data caching and prefetching strategies for online databases in Nigeria, with a focus on practical implications. The research design adopted was the descriptive survey. The population comprised all undergraduate library students in public tertiary institutions in Ekiti State. A simple random sampling technique was used to select 200 library students from Ekiti State University in the study area. The instrument used for data collection was a structured four-point Likert-type questionnaire, distributed to respondents to determine the effectiveness of caching and prefetching techniques in online databases. The instrument was face- and content-validated by two experts from the Department of Library Studies, Ekiti State University, Ado-Ekiti. Reliability was established using the Pearson Product Moment Correlation formula, which yielded a coefficient of 0.99. The data collected were analyzed using descriptive statistics such as the mean and standard deviation. The results showed that the current caching and prefetching techniques employed in online databases are highly effective; that different access patterns affect the effectiveness of caching and prefetching in online databases; and that cache coherence mechanisms have an impact on the efficiency of these techniques. It was therefore recommended that caching and prefetching be included in curricula across all educational levels in Nigeria, with a clear emphasis on practical implications. In addition, caching and prefetching have come under criticism for focusing mostly on computer science. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
11. Content Delivery Networks in the Modern Age: Analyzing Trends, Overcoming Challenges, and Pioneering Developments
- Author
-
Vagmi, Gupta, Rohit Kumar, Howlett, Robert J., Series Editor, Jain, Lakhmi C., Series Editor, Somani, Arun K., editor, Mundra, Ankit, editor, Gupta, Rohit Kumar, editor, Bhattacharya, Subhajit, editor, and Mazumdar, Arka Prokash, editor
- Published
- 2024
- Full Text
- View/download PDF
12. Analyzing the Performance: B-trees vs. Red-Black Trees with Caching Strategies
- Author
-
Wayawahare, Medha, Awale, Chinmayee, Deshkahire, Aditya, Barabadekar, Ashwinee, Filipe, Joaquim, Editorial Board Member, Ghosh, Ashish, Editorial Board Member, Prates, Raquel Oliveira, Editorial Board Member, Zhou, Lizhu, Editorial Board Member, Rajagopal, Sridaran, editor, Popat, Kalpesh, editor, Meva, Divyakant, editor, and Bajeja, Sunil, editor
- Published
- 2024
- Full Text
- View/download PDF
13. Blockchain and Reputation Based Secure Service Provision in Edge-Cloud Environments
- Author
-
Chanyour, Tarik, El Kasmi Alaoui, Seddiq, Kaddari, Abdelhak, Hmimz, Youssef, Chiba, Zouhair, Kacprzyk, Janusz, Series Editor, Gomide, Fernando, Advisory Editor, Kaynak, Okyay, Advisory Editor, Liu, Derong, Advisory Editor, Pedrycz, Witold, Advisory Editor, Polycarpou, Marios M., Advisory Editor, Rudas, Imre J., Advisory Editor, Wang, Jun, Advisory Editor, Farhaoui, Yousef, editor, Hussain, Amir, editor, Saba, Tanzila, editor, Taherdoost, Hamed, editor, and Verma, Anshul, editor
- Published
- 2024
- Full Text
- View/download PDF
14. Caching Contents with Varying Popularity Using Restless Bandits
- Author
-
Pavamana, K. J., Singh, Chandramani, Akan, Ozgur, Editorial Board Member, Bellavista, Paolo, Editorial Board Member, Cao, Jiannong, Editorial Board Member, Coulson, Geoffrey, Editorial Board Member, Dressler, Falko, Editorial Board Member, Ferrari, Domenico, Editorial Board Member, Gerla, Mario, Editorial Board Member, Kobayashi, Hisashi, Editorial Board Member, Palazzo, Sergio, Editorial Board Member, Sahni, Sartaj, Editorial Board Member, Shen, Xuemin, Editorial Board Member, Stan, Mircea, Editorial Board Member, Jia, Xiaohua, Editorial Board Member, Zomaya, Albert Y., Editorial Board Member, Kalyvianaki, Evangelia, editor, and Paolieri, Marco, editor
- Published
- 2024
- Full Text
- View/download PDF
15. Cloud Industry Trends
- Author
-
Kingsley, M. Scott, El-Bawab, Tarek S., Series Editor, and Kingsley, M. Scott
- Published
- 2024
- Full Text
- View/download PDF
16. Elevating Database Performance: Current Caching and Prefetching Strategies for Online Databases in Nigeria
- Author
-
Kehinde, Olatunji Austine, Zulkifli, Zahidah, Surin, Ely Salwana Mat, Junurham, Nur Leyni Nilam Putri, Mahmud, Murni, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Badioze Zaman, Halimah, editor, Robinson, Peter, editor, Smeaton, Alan F., editor, De Oliveira, Renato Lima, editor, Jørgensen, Bo Nørregaard, editor, K. Shih, Timothy, editor, Abdul Kadir, Rabiah, editor, Mohamad, Ummul Hanan, editor, and Ahmad, Mohammad Nazir, editor
- Published
- 2024
- Full Text
- View/download PDF
17. Online Paging with Heterogeneous Cache Slots
- Author
-
Chrobak, Marek, Haney, Samuel, Liaee, Mehraneh, Panigrahi, Debmalya, Rajaraman, Rajmohan, Sundaram, Ravi, and Young, Neal E.
- Published
- 2024
- Full Text
- View/download PDF
18. Digital twin driven and intelligence enabled content delivery in end-edge-cloud collaborative 5G networks
- Author
-
Bo Yi, Jianhui Lv, Xingwei Wang, Lianbo Ma, and Min Huang
- Subjects
Digital twin, IoE, Content delivery, Caching, Routing, Information technology, T58.5-58.64
- Abstract
The rapid development of 5G/6G and AI enables an Internet of Everything (IoE) environment that can support millions of connected mobile devices and applications operating smoothly at high speed and low delay. However, these massive numbers of devices will lead to explosive traffic growth, which in turn places a great burden on data transmission and content delivery. This challenge can be eased by sinking some critical content from the cloud to the edge. In this case, how to determine the critical content, where to sink it, and how to access it correctly and efficiently become new challenges. This work focuses on establishing a highly efficient content delivery framework in the IoE environment. In particular, the IoE environment is reconstructed as an end-edge-cloud collaborative system, in which the concept of the digital twin is applied to promote collaboration. Based on the digital assets obtained by digital twins from end users, a content popularity prediction scheme is first proposed to decide the critical content, using a Temporal Pattern Attention (TPA) enabled Long Short-Term Memory (LSTM) model. The prediction results then feed the proposed caching scheme, which decides where to sink the critical content using Reinforcement Learning (RL). Finally, a collaborative routing scheme determines how to access the content with the objective of minimizing overhead. The experimental results indicate that the proposed schemes outperform state-of-the-art benchmarks in terms of caching hit rate, average throughput, successful content delivery rate, and average routing overhead.
- Published
- 2024
- Full Text
- View/download PDF
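Entry 18's pipeline (predict content popularity, then sink the predicted-critical content to the edge) can be sketched with a simple exponential moving average standing in for the TPA-LSTM predictor; all names and numbers below are illustrative:

```python
# Hypothetical, much-simplified stand-in for the paper's pipeline: predict
# next-period popularity from per-content request history, then plan to cache
# the top-k predicted contents at the edge.

def ema_predict(history, alpha=0.5):
    """Exponentially weighted average of past request counts, so recent
    periods dominate (a crude substitute for a learned sequence model)."""
    pred = 0.0
    for count in history:
        pred = alpha * count + (1 - alpha) * pred
    return pred

def plan_edge_cache(histories, k):
    """Pick the k contents with the highest predicted popularity."""
    scored = {name: ema_predict(h) for name, h in histories.items()}
    return sorted(scored, key=scored.get, reverse=True)[:k]

histories = {
    "clip-a": [1, 2, 8],  # trending up: likely critical next period
    "clip-b": [9, 3, 1],  # fading
    "clip-c": [0, 0, 2],
}
assert plan_edge_cache(histories, k=1) == ["clip-a"]
```

The recency weighting is what lets the rising "clip-a" beat "clip-b" despite a lower total count, which is the behaviour a temporal predictor is there to provide.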
19. Limiting the memory consumption of caching for detecting subproblem dominance in constraint problems.
- Author
-
Medema, Michel, Breeman, Luc, and Lazovik, Alexander
- Abstract
Solving constraint satisfaction problems often involves a large amount of redundant exploration stemming from the existence of subproblems whose information can be reused for other subproblems. Subproblem dominance is a general notion of reusability that arises when one subproblem imposes more constraints on the remaining part of the search than another subproblem and allows the search to reuse the solutions of the dominating subproblem for the dominated subproblem. The search can exploit subproblem dominance by storing the subproblems that have been explored in a cache and abandoning the current subproblem whenever the cache contains a subproblem that dominates it. While using caching makes it possible to solve problems where subproblem dominance arises orders of magnitude faster, storing all of these subproblems can require a substantial amount of memory, making it impractical in many cases. This paper analyses the dominance between different subproblems for various constraint problems, revealing that only a relatively small number of subproblems dominate other subproblems. Based on these findings, two types of strategies are proposed for reducing the number of subproblems stored in the cache: limiting the number of subproblems that can be stored in the cache and periodically cleaning up the cache. An experimental evaluation demonstrates that these strategies provide an effective instrument for reducing the memory consumption of caching, allowing it to be used on a larger scale. However, there is a trade-off between saving memory and reducing redundant exploration, as removing subproblems from the cache may prevent dominance from being detected for certain subproblems. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
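The two memory-limiting strategies of entry 19 (capping the number of stored subproblems, and periodically cleaning up the cache) can be sketched as follows; the exact-match dominance test and the drop-half cleanup rule are simplifying assumptions:

```python
from collections import OrderedDict

class BoundedDominanceCache:
    """Hypothetical sketch of the two strategies: (1) cap the number of
    stored subproblems, (2) periodically clean up entries that have not
    produced a dominance hit."""

    def __init__(self, max_entries, cleanup_every=100):
        self.store = OrderedDict()  # subproblem key -> dominance-hit count
        self.max_entries = max_entries
        self.cleanup_every = cleanup_every
        self.ops = 0

    def _maybe_cleanup(self):
        self.ops += 1
        if self.ops % self.cleanup_every == 0:
            # Drop the half of the cache with the fewest dominance hits.
            keep = sorted(self.store, key=self.store.get, reverse=True)
            self.store = OrderedDict((k, self.store[k])
                                     for k in keep[: len(keep) // 2])

    def add(self, key):
        if len(self.store) >= self.max_entries:
            self.store.popitem(last=False)  # evict oldest when at the cap
        self.store[key] = self.store.get(key, 0)
        self._maybe_cleanup()

    def dominates(self, key):
        """Stand-in for a real dominance test; here: exact-match lookup."""
        if key in self.store:
            self.store[key] += 1
            return True
        return False

cache = BoundedDominanceCache(max_entries=2)
cache.add("sub1"); cache.add("sub2"); cache.add("sub3")  # cap hit: sub1 evicted
assert not cache.dominates("sub1") and cache.dominates("sub2")
```

The failed lookup for "sub1" illustrates the trade-off the paper measures: evicted subproblems can no longer be detected as dominating, so memory savings cost some redundant exploration.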
20. Consistent individual differences give rise to 'caching syndromes' in a food-storing passerine.
- Author
-
Vámos, Tas I.F. and Shaw, Rachael C.
- Subjects
-
INDIVIDUAL differences, SYNDROMES, FOOD handling
- Abstract
Caching food for later retrieval is vital for many animals' survival, but little is known about how this behaviour varies among individuals in the wild. If individuals consistently differ in where, when and how they store food, it could indicate that caching is subject to selection. In this study, we experimentally quantified the repeatability and relationship between different aspects of caching behaviour over two winter seasons in a wild population of toutouwai, Petroica longipes. Individuals were repeatable both within and between years in the number of cache sites they created, the distance they travelled to cache and food item handling time prior to caching. All three of these caching behavioural measures were positively correlated with one another, suggesting that toutouwai exhibit a caching syndrome analogous to a behavioural syndrome, with individuals ranging between 'clump caching' and 'scatter caching'. The number of food items that birds ate prior to caching and latency to begin cache retrieval were also consistently correlated, indicating a second 'fast - slow' syndrome linking cache retrieval to satiation level. In both syndromes most individuals were normally distributed between the behavioural extremes, indicating that birds tended to partake in intermediate caching rather than clustering at either end of the continuum. We posit that this distribution may be the result of stabilizing selection that balances the costs and benefits of each extreme. • Toutouwai are repeatable in numerous aspects of food-storing behaviour. • Different measures of caching behaviour consistently correlate with one another. • Individuals fall along an axis between 'scattering' and 'clumping' caching types. • An intermediate caching type may be maintained by stabilizing selection. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
21. An efficient fuzzy hyper-edge clustering and popularity-based caching scheme for CCN-enabled IoT networks.
- Author
-
Kumar, Sumit and Bathla, Gourav
- Subjects
FUZZY algorithms, INTERNET of things, UNITS of time, ACCESS to information
- Abstract
Content-Centric Networking (CCN) has emerged as the most convenient architecture for efficient traffic management in contrast to the IP-based Internet. The in-network caching characteristic of CCN reduces server load and traffic in the network. Furthermore, it enhances end-user Quality-of-Service (QoS) by reducing content retrieval delay. Towards this, the proposed research focuses on the in-network caching capability of CCN-enabled IoT networks to improve content distribution and reduction of load from servers. In CCN-enabled IoT networks, content caching can be performed in any node of the fog layer that exists between the cloud server and IoT devices. To effectively utilize the available caching resources, it is crucial to determine the suitable fog node during content placement decisions. In this direction, a novel fuzzy hyper-edge clustering and content popularity-based caching scheme is proposed for CCN-based IoT networks. The proposed fuzzy clustering scheme dynamically partitions the network into overlapping clusters based on node connectivity. The proposed scheme overcomes the limitations of existing techniques where the number of clusters needs to be fixed initially. The proposed scheme considers the cluster information and content access frequency parameters for content placement decisions. Using the proposed heuristics, the scheme cooperatively caches the popular contents in the fog nodes near IoT devices. The performance of the proposed strategy is examined using extensive simulations on a realistic network configuration. Experiments are performed on the standard Abilene topology, and performance is measured using metrics such as cache hit ratio, average network hop count, and average network delay on cache sizes 50 and 100. The simulation results are recorded at 1, 250, 500, 750, 1000, 1250, 1500, 1750, and 2000 Simulation Time Units (STU). 
The results show that the proposed caching solution outperforms recent state-of-the-art techniques such as LCE, PDC, CPNDD, HPHC, and CSDD, making it suitable for CCN-enabled IoT networks. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
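Entry 21 places popular content on the fog node best positioned for its requesters. A minimal sketch of that placement decision, with cluster membership given up front (standing in for the fuzzy hyper-edge clustering) and an arbitrary popularity threshold:

```python
# Hypothetical sketch of popularity-driven placement: cache a content at the
# fog node whose cluster covers most of its requesters, once its access
# frequency crosses a threshold. All names and numbers are illustrative.

def place_content(content, requesters, clusters, freq, threshold=3):
    """Return the fog node to cache `content` on, or None if it is unpopular.
    clusters: fog node -> set of IoT devices it serves (may overlap)."""
    if freq[content] < threshold:
        return None  # not popular enough to spend cache space on
    coverage = {node: len(members & requesters)
                for node, members in clusters.items()}
    return max(coverage, key=coverage.get)

clusters = {"fog1": {"d1", "d2", "d3"}, "fog2": {"d3", "d4"}}  # overlapping
freq = {"temp-feed": 5, "rare-log": 1}
assert place_content("temp-feed", {"d1", "d2"}, clusters, freq) == "fog1"
assert place_content("rare-log", {"d4"}, clusters, freq) is None
```

Overlapping clusters matter here: device "d3" belongs to both fog nodes, which is exactly the flexibility the fuzzy clustering in the paper is meant to exploit.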
22. Blockchain Based Decentralized and Proactive Caching Strategy in Mobile Edge Computing Environment.
- Author
-
Bai, Jingpan, Zhu, Silei, and Ji, Houling
- Subjects
-
MOBILE computing, EDGE computing, BLOCKCHAINS, CONSENSUS (Social sciences), INTERIOR-point methods
- Abstract
In the mobile edge computing (MEC) environment, edge caching can provide timely data response services for intelligent scenarios. However, given the limited storage capacity of edge nodes and the possibility of malicious node behavior, how to select the contents to cache and how to realize decentralized, secure data caching remain challenging. In this paper, a blockchain-based decentralized and proactive caching strategy for an MEC environment is proposed to address this problem. The novelty is that a blockchain is adopted in an MEC environment together with a proactive caching strategy based on node utility, and the corresponding optimization problem is formulated. The blockchain builds a secure and reliable service environment. Methodologically, the optimal caching strategy is obtained via linear relaxation and the interior-point method. In a content caching system there is a trade-off between cache space and node utility, which the proposed caching strategy addresses. There is also a trade-off between the consensus delay of the blockchain and the caching latency of content; an offline consensus authentication method is adopted to reduce the influence of the consensus delay on content caching. The key finding is that the proposed algorithm reduces latency and ensures secure data caching in an IoT environment. Finally, simulation experiments showed that the proposed algorithm achieves up to 49.32%, 43.11%, and 34.85% improvements in cache hit rate, average content response latency, and average system utility, respectively, compared to a random content caching algorithm, and up to 9.67%, 8.11%, and 5.95% improvements, respectively, compared to a greedy content caching algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
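Entry 22 solves its caching problem via linear relaxation. For a single storage constraint, the relaxed problem is a fractional knapsack, whose optimum a greedy pass by utility density also attains, which makes for a compact sketch (utilities and sizes below are invented; the paper itself uses an interior-point solver):

```python
def relaxed_caching(contents, capacity):
    """Linear relaxation of the cache-or-not decision: allow fractional
    caching and fill by utility density (utility per unit of storage).
    contents: {name: (utility, size)} -> {name: cached fraction in [0, 1]}."""
    plan = {name: 0.0 for name in contents}
    remaining = capacity
    by_density = sorted(contents,
                        key=lambda n: contents[n][0] / contents[n][1],
                        reverse=True)
    for name in by_density:
        utility, size = contents[name]
        take = min(1.0, remaining / size) if remaining > 0 else 0.0
        plan[name] = take
        remaining -= take * size
    return plan

contents = {"a": (10.0, 2.0), "b": (6.0, 3.0), "c": (1.0, 1.0)}
plan = relaxed_caching(contents, capacity=4.0)
assert plan["a"] == 1.0                    # highest density: fully cached
assert abs(plan["b"] - 2.0 / 3.0) < 1e-9   # fractional under the relaxation
assert plan["c"] < 1e-9                    # no capacity left
```

An integral caching decision is then recovered by rounding the fractional plan, which is the usual final step after solving a relaxation.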
23. An efficient edge caching approach for SDN-based IoT environments utilizing the moth flame clustering algorithm.
- Author
-
Jazaeri, Seyedeh Shabnam, Jabbehdari, Sam, Asghari, Parvaneh, and Javadi, Hamid Haj Seyyed
- Subjects
-
INTERNET of things, PROCESS capability, FLAME, DATA warehousing, SOFTWARE-defined networking, ALGORITHMS
- Abstract
IoT networks can provide many benefits and opportunities, although their implementation poses challenges. Cloud-only storage of IoT data would be very costly and time-consuming. In this paper, a new scheme is proposed for caching IoT content at the edge with SDN-based processing capability. The proposed scheme uses a global SDN controller, which coordinates cache decisions across the entire IoT network. The Moth-Flame Optimization-Cluster Head Selection (MFO-CHS) algorithm clusters devices, and the selected cluster heads send IoT data to the edge nodes for caching. In addition, by exploiting edge caching capabilities and using MFO to select and cache the appropriate contents on edge nodes, the proposed Moth-Flame Optimization-Edge Caching (MFO-EC) algorithm can serve upcoming requests with lower latency. Caching also helps ensure reliability and availability, since IoT devices are affected by intermittent connections and power limitations. Default caching schemes do not make caching decisions intelligently with respect to IoT characteristics, such as maximizing device longevity and handling the possibility that content producers become unreachable. The overall scheme, called MFO-SDN-EC (Moth-Flame Optimization-Software-Defined Networking-Edge Caching), considers several metrics and uses the nature-inspired MFO-CHS and MFO-EC algorithms to select the best options with respect to these criteria and to improve QoS in the SDN-based IoT environment. Based on simulations, the proposed caching method reduces energy consumption by 45%, decreases the average response time by 49%, and increases cache hit rates. Furthermore, the results demonstrate the algorithm's superiority over several current approaches in terms of the assessment measures. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
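Entry 23 uses moth-flame optimization to choose which contents to cache on an edge node. As a hypothetical stand-in, the same kind of objective can be attacked with plain random search, which shows the shape of the problem (a capacity-constrained subset minimizing expected retrieval delay) without reproducing MFO itself:

```python
import random

random.seed(1)

def expected_delay(cached, contents, edge_delay=1.0, cloud_delay=10.0):
    """Request-weighted average delay: cached contents come from the edge,
    everything else from the cloud. Delay values are illustrative."""
    total = sum(c["rate"] for c in contents.values())
    return sum(c["rate"] / total *
               (edge_delay if name in cached else cloud_delay)
               for name, c in contents.items())

def optimize_cache(contents, capacity, iterations=200):
    """Random-search stand-in for a metaheuristic such as MFO."""
    names = list(contents)
    best, best_delay = frozenset(), float("inf")
    for _ in range(iterations):
        candidate = frozenset(random.sample(names, capacity))
        d = expected_delay(candidate, contents)
        if d < best_delay:
            best, best_delay = candidate, d
    return best

contents = {"a": {"rate": 50}, "b": {"rate": 30},
            "c": {"rate": 15}, "d": {"rate": 5}}
assert optimize_cache(contents, capacity=2) == frozenset({"a", "b"})
```

Any population-based metaheuristic, MFO included, replaces the blind sampling loop with guided exploration; the objective and constraint stay the same.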
24. QWLCPM: A Method for QoS-Aware Forwarding and Caching Using Simple Weighted Linear Combination and Proximity for Named Data Vehicular Sensor Network.
- Author
-
Dhakal, Dependra and Sharma, Kalpana
- Subjects
SENSOR networks, VEHICULAR ad hoc networks, AD hoc computer networks, GLOBAL Positioning System, DATA transmission systems, QUALITY of service
- Abstract
The named data vehicular sensor network (NDVSN) has become an increasingly important area of research because of the increasing demand for data transmission in vehicular networks. In such networks, ensuring the quality of service (QoS) of data transmission is essential. The NDVSN is a mobile ad hoc network that uses vehicles equipped with sensors to collect and disseminate data. QoS is critical in vehicular networks, as the data transmission must be reliable, efficient, and timely to support various applications. This paper proposes a QoS-aware forwarding and caching algorithm for NDVSNs, called QWLCPM (QoS-aware Forwarding and Caching using Weighted Linear Combination and Proximity Method). QWLCPM utilizes a weighted linear combination and proximity method to determine stable nodes and the best next-hop forwarding path based on various metrics, including priority, signal strength, vehicle speed, global positioning system data, and vehicle ID. Additionally, it incorporates a weighted linear combination method for the caching mechanism to store frequently accessed data based on zone ID, stability, and priority. The performance of QWLCPM is evaluated through simulations and compared with other forwarding and caching algorithms. QWLCPM's efficacy stems from its holistic decision-making process, incorporating spatial and temporal elements for efficient cache management. Zone-based caching, showcased in different scenarios, enhances content delivery by utilizing stable nodes. QWLCPM's proximity considerations significantly improve cache hits, reduce delay, and optimize hop count, especially in scenarios with sparse traffic. Additionally, its priority-based caching mechanism enhances hit ratios and content diversity, emphasizing QWLCPM's substantial network-improvement potential in vehicular environments. These findings suggest that QWLCPM has the potential to greatly enhance QoS in NDVSNs and serve as a promising solution for future vehicular sensor networks. 
Future research could focus on refining the details of its implementation, scalability in larger networks, and conducting real-world trials to validate its performance under dynamic conditions. [ABSTRACT FROM AUTHOR]
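The weighted-linear-combination step this abstract describes can be sketched as follows; the metric names, normalization, and weight values here are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch of QWLCPM-style next-hop selection: each candidate
# node is scored by a weighted linear combination of normalized metrics,
# and the highest-scoring node is chosen as next hop. Metric names and
# weights are illustrative, not the paper's exact values.

def score(node, weights):
    """Weighted linear combination of a node's normalized metrics."""
    return sum(weights[m] * node[m] for m in weights)

def select_next_hop(candidates, weights):
    """Pick the candidate with the highest combined score."""
    return max(candidates, key=lambda n: score(n, weights))

weights = {"signal_strength": 0.4, "priority": 0.3, "stability": 0.2, "proximity": 0.1}
candidates = [
    {"id": "A", "signal_strength": 0.9, "priority": 0.5, "stability": 0.8, "proximity": 0.7},
    {"id": "B", "signal_strength": 0.6, "priority": 0.9, "stability": 0.4, "proximity": 0.9},
]
best = select_next_hop(candidates, weights)
```

The same scoring function can serve the caching side by substituting zone ID, stability, and priority as the metrics.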
- Published
- 2024
- Full Text
- View/download PDF
25. Enhancing Edge-Linked Caching in Information-Centric Networking for Internet of Things With Deep Reinforcement Learning
- Author
-
Hamid Asmat, Ikram Ud Din, Ahmad Almogren, Ayman Altameem, and Muhammad Yasar Khan
- Subjects
Information centric network ,caching ,Internet of Things ,content update ,machine learning ,reinforcement learning ,Electrical engineering. Electronics. Nuclear engineering ,TK1-9971 - Abstract
This paper proposes an Enhanced Edge-Linked Caching (EELC) scheme for Internet of Things (IoT) environments under Information-Centric Networking (ICN), employing Proximal Policy Optimization (PPO), a form of deep reinforcement learning, to inform the caching decisions of edge nodes. The rapid proliferation of IoT devices has led to significant challenges in managing content efficiently within networks, particularly in terms of scalability, latency, and energy consumption. Traditional IP-based architectures are inadequate for addressing these challenges, necessitating a shift towards the content-centric approach provided by ICN. By leveraging PPO, EELC dynamically adapts to changing IoT network conditions, optimizing caching strategies to enhance energy efficiency and improve network responsiveness. In our simulations, we compare the performance of EELC with the Edge-Linked Caching (ELC) and Leave Copy Everywhere (LCE) approaches under different cache sizes and Zipf-distributed content requests. EELC performs far better than ELC in terms of energy efficiency, cache hit ratio, and server hit reduction in all tested scenarios. This indicates that EELC is a promising approach for significantly improving network efficiency and responsiveness in IoT networks.
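The Zipf-distributed workload and cache-hit-ratio measurement used in evaluations like this can be sketched as follows; the skew value and the plain LRU policy are illustrative assumptions, not the paper's setup.

```python
# Minimal sketch of a Zipf-distributed request workload of the kind used
# to evaluate caching schemes such as EELC/ELC/LCE, measured against a
# plain LRU cache. Skew, item count, and cache size are assumptions.
import random
from collections import OrderedDict

def zipf_requests(n_items, n_requests, skew, seed=0):
    """Draw requests where item rank i has weight 1 / (i + 1)^skew."""
    rng = random.Random(seed)
    weights = [1.0 / (i + 1) ** skew for i in range(n_items)]
    return rng.choices(range(n_items), weights=weights, k=n_requests)

def lru_hit_ratio(requests, cache_size):
    """Replay requests against an LRU cache and return the hit ratio."""
    cache, hits = OrderedDict(), 0
    for item in requests:
        if item in cache:
            hits += 1
            cache.move_to_end(item)          # promote on hit
        else:
            cache[item] = True
            if len(cache) > cache_size:
                cache.popitem(last=False)    # evict least recently used
    return hits / len(requests)

reqs = zipf_requests(n_items=100, n_requests=5000, skew=0.8)
ratio = lru_hit_ratio(reqs, cache_size=10)
```

Because LRU satisfies the inclusion property, the hit ratio is non-decreasing in cache size, which is why such studies sweep cache sizes.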
- Published
- 2024
- Full Text
- View/download PDF
26. Modeling and Evaluating a Cache System in ICN Routers Using a Programmable Switch and Computers
- Author
-
Junji Takemasa, Yuki Koizumi, and Toru Hasegawa
- Subjects
Caching ,information-centric networking ,programmable switch ,Electrical engineering. Electronics. Nuclear engineering ,TK1-9971 - Abstract
Information-centric networking (ICN) is one of the promising networking architectures to replace IP because its notable feature, in-network caching, is expected to reduce about one-third of the ever-increasing Internet traffic. However, existing ICN router implementations with caches cannot keep up with the rapid increase of terabit-scale network bandwidth. An implementation on hardware-based router platforms like programmable switches is promising, but due to their limited memory capacity, cache stores should be located at external devices such as computers. This requires packets to be sent from and to the switch to cache data at the computers, which increases the number of ports, called external ports, used for connecting them. That is, the number of ports for connecting the switch to external networks is reduced, and as a result, the packet forwarding rate is degraded. We analytically and experimentally show that a naive implementation decreases both the packet forwarding rate and the number of external ports to almost half when the cache hit ratio is about 30%. To remove this bottleneck, this paper proposes two algorithms, cache admission and lazy response, to reduce the number of packets sent between the switch and the computers. To validate them, we evaluate the forwarding rate improvement by developing an analytical model and a prototype ICN router implementation on a programmable switch with computers. The evaluation shows that the proposed algorithms almost double the forwarding rate compared to a naive implementation without them under realistic request patterns for data packets. The paper's contributions are two-fold: a realistic implementation and a comprehensive performance evaluation. First, we implement a terabit-class ICN router with cache functionality on a Tofino switch, which provides a 3.2 terabit/s (Tbps) forwarding rate with two pipelines. The implemented router achieves a 1.075 Tbps forwarding rate when just one pipeline of the Tofino switch is used and the cache hit ratio is about 30%. As far as we know, this prototype implementation is one of the first full-fledged ICN routers on a Tofino switch. Second, the performance of the proposed router is evaluated both analytically and experimentally, and both results show similar packet forwarding rates under realistic request patterns following the Zipf distribution, which Internet traffic is believed to follow. Finally, this paper extends the conference version by evaluating the proposed algorithms both analytically and experimentally, whereas they were mainly evaluated by simulation in that paper.
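A cache-admission filter of the general kind described, one that avoids shipping single-access content to the external cache computer, can be sketched as follows; the "admit on second request" rule is an assumption for illustration, not the paper's published algorithm.

```python
# Illustrative cache-admission filter in the spirit of reducing packets
# exchanged between the switch and the cache computer: content is only
# admitted to the external cache once it has been requested before.
# The "admit on second request" rule is an assumption, not the paper's
# exact algorithm.

class AdmitOnSecondRequest:
    def __init__(self):
        self.seen = set()        # names requested at least once
        self.cache = set()       # names stored at the cache computer
        self.external_io = 0     # packets sent to/from the cache computer

    def request(self, name):
        if name in self.cache:
            self.external_io += 1    # fetch cached data from the computer
            return "hit"
        if name in self.seen:
            self.cache.add(name)
            self.external_io += 1    # ship data to the computer for caching
        else:
            self.seen.add(name)      # first sight: skip the external cache
        return "miss"

router = AdmitOnSecondRequest()
results = [router.request(n) for n in ["a", "b", "a", "a", "c", "b"]]
```

With six requests, only three packets cross to the cache computer, versus six under naive admit-everything caching, which is the kind of saving that frees external ports for forwarding.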
- Published
- 2024
- Full Text
- View/download PDF
27. Comparative Analysis of Popularity Aware Caching Strategies in Wireless-Based NDN Based IoT Environment
- Author
-
Yahui Meng, Amran Bin Ahmad, and Muhammad Ali Naeem
- Subjects
Internet of Things ,Named Data Network ,caching ,Electrical engineering. Electronics. Nuclear engineering ,TK1-9971 - Abstract
The fundamental objective of the Internet of Things (IoT) and Named Data Networking (NDN) architectures is to facilitate the provision of communication services. The existing Internet infrastructure presents various challenges associated with its location-based architecture, including those related to latency, bandwidth, and power consumption. This paper explains the NDN-based IoT caching architecture and discusses the caching module, focusing on the selection of an optimal caching approach to address the issues above. This study selected two distinct caching categories, namely centrality-based and probability-based approaches, chosen for their prominence within the research community. To determine the optimal caching strategies, the Icarus network simulator is employed to conduct a comprehensive evaluation of the selected strategies. Performance is evaluated on several key metrics, including the hit ratio, content retrieval latency, and average hop count. The Popularity-Aware Closeness Centrality strategy and the Efficient Popularity-Aware Probabilistic Caching strategy demonstrated superior performance in the centrality-based and probability-based categories, respectively.
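The probability-based family compared here can be sketched with a toy on-path caching simulation; the topology, workload, FIFO eviction, and probability value are illustrative assumptions, not the strategies evaluated in the paper.

```python
# Toy sketch contrasting two flavors of in-network caching: Leave Copy
# Everywhere (p = 1.0) versus probability-based caching, where each
# on-path node stores a passing item with probability p. Topology,
# workload, and FIFO eviction are illustrative assumptions.
import random
from collections import deque

def simulate(requests, path_len, cache_size, p, seed=1):
    rng = random.Random(seed)
    caches = [deque(maxlen=cache_size) for _ in range(path_len)]  # node 0 nearest client
    hits = 0
    for item in requests:
        hit_at = next((i for i, c in enumerate(caches) if item in c), None)
        if hit_at is not None:
            hits += 1
        stop = hit_at if hit_at is not None else path_len
        for c in caches[:stop]:                  # nodes on the delivery path
            if item not in c and rng.random() < p:
                c.append(item)                   # FIFO eviction via maxlen
    return hits / len(requests)

rng = random.Random(0)
reqs = [rng.choice("abcde") for _ in range(200)]
lce = simulate(reqs, path_len=3, cache_size=2, p=1.0)   # Leave Copy Everywhere
prob = simulate(reqs, path_len=3, cache_size=2, p=0.3)  # probabilistic caching
```

Simulators such as Icarus run essentially this loop at scale, over real topologies and Zipf workloads, to produce the hit-ratio, latency, and hop-count metrics the abstract mentions.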
- Published
- 2024
- Full Text
- View/download PDF
28. Enhancement of Fog Caching Using Nature Inspiration Optimization Technique Based on Cloud Computing
- Author
-
Mohamed R. Elnagar, Ahmed Awad Mohamed, Benbella S. Tawfik, and Hosam E. Refaat
- Subjects
Cloud ,fog computing ,information centric network (ICN) ,ICN-fog ,caching ,multi-objective optimization ,Electrical engineering. Electronics. Nuclear engineering ,TK1-9971 - Abstract
Caching plays an important role in reducing latency and increasing the overall performance of fog computing systems. With the rise of the Internet of Things (IoT) and edge computing, fog computing has become an essential paradigm for addressing the challenges related to latency-sensitive and bandwidth-intensive applications. The integration of Information-Centric Networking (ICN) and Fog Computing (ICN-Fog) has emerged as a promising solution for meeting the demands of low-latency and high-throughput applications across rapidly growing IoT devices. ICN-Fog is a step toward reducing latency and achieving higher data communication and information-gathering performance for fog computing. Drawing on Artificial Intelligence (AI) methods, the Firefly Optimization Technique is introduced as an effective optimization algorithm to enhance caching. In this study, we aim to improve several performance metrics, such as the cache hit ratio, internal link load, and average query duration. To achieve these enhancements, we propose a unique solution inspired by the Firefly Optimization Technique, applied to the ICN-Fog caching model to intelligently determine cache placement and network topology adaptations. CloudSim was employed to simulate the proposed strategy. The experimental results suggest that the Multi-Objective Firefly Algorithm (MOFA) outperforms the compared algorithms in both efficiency and effectiveness in identifying the optimal caching technique.
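The firefly machinery behind an approach like MOFA can be sketched in its simplest single-objective form; here it merely minimizes a toy quadratic cost standing in for a caching objective, and all parameter values are illustrative assumptions.

```python
# Minimal single-objective firefly-algorithm sketch: dimmer fireflies move
# toward brighter (lower-cost) ones, with attractiveness decaying with
# distance plus a small random step. The toy cost stands in for a caching
# objective; every parameter value is an illustrative assumption.
import math
import random

def firefly_minimize(cost, dim=2, n=8, iters=100, beta0=1.0, gamma=1.0,
                     alpha=0.1, seed=3):
    rng = random.Random(seed)
    pop = [[rng.uniform(-2, 2) for _ in range(dim)] for _ in range(n)]
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if cost(pop[j]) < cost(pop[i]):    # j is brighter, i moves toward j
                    r2 = sum((a - b) ** 2 for a, b in zip(pop[i], pop[j]))
                    beta = beta0 * math.exp(-gamma * r2)   # attractiveness decays with distance
                    pop[i] = [a + beta * (b - a) + alpha * (rng.random() - 0.5)
                              for a, b in zip(pop[i], pop[j])]
    return min(pop, key=cost)

sphere = lambda x: sum(v * v for v in x)   # toy stand-in for a caching cost
best = firefly_minimize(sphere)
```

A multi-objective variant like MOFA replaces the single cost comparison with Pareto dominance over metrics such as hit ratio, link load, and query duration.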
- Published
- 2024
- Full Text
- View/download PDF
29. Smart data harvesting in cache-enabled MANETs: UAVs, future position prediction, and autonomous path planning
- Author
-
Umair B. Chaudhry and Chris Ian Phillips
- Subjects
UAV ,caching ,evolutionary algorithms ,MANETs ,WSN ,metaheuristic algorithms ,Motor vehicles. Aeronautics. Astronautics ,TL1-4050 - Abstract
The task of gathering data from nodes within mobile ad hoc wireless sensor networks presents an enduring challenge. Conventional strategies employ customized routing protocols tailored to these environments, with research focused on refining them for improved efficiency in terms of throughput and energy utilization. However, these elements are interconnected, and enhancements in one often come at the expense of the other. An alternative data collection approach involves the use of unmanned aerial vehicles (UAVs). In contrast to traditional methods, UAVs directly collect data from mobile nodes, bypassing the need for routing. While existing research predominantly addresses static nodes, this paper proposes an innovative, evolutionary path-selection approach based on future position prediction of caching-enabled mobile ad hoc wireless sensor network nodes for UAV data collection, aimed at maximizing node encounters and gathering the most valuable information in a single trip. The proposed technique is evaluated for different movement models and parameter configurations.
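The future-position-prediction ingredient can be sketched with simple linear extrapolation; the constant-velocity model is an assumption, not the paper's predictor.

```python
# Sketch of future position prediction for a mobile node: given two
# timestamped (x, y) fixes, linearly extrapolate to the time the UAV is
# expected to arrive. The constant-velocity model is an assumption.

def predict_position(p0, t0, p1, t1, t_future):
    """Linearly extrapolate a node's (x, y) position to time t_future."""
    dt = t1 - t0
    vx = (p1[0] - p0[0]) / dt
    vy = (p1[1] - p0[1]) / dt
    return (p1[0] + vx * (t_future - t1), p1[1] + vy * (t_future - t1))

# Node moved from (0, 0) to (10, 5) over 10 s; where is it at t = 20 s?
pos = predict_position((0.0, 0.0), 0.0, (10.0, 5.0), 10.0, 20.0)
```

An evolutionary planner would then score candidate UAV paths by how many such predicted positions they intercept within the flight budget.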
- Published
- 2024
- Full Text
- View/download PDF
30. A Deep Learning Cache Framework for Privacy Security on Heterogeneous IoT Networks
- Author
-
Jian Li, Meng Feng, and Shuai Li
- Subjects
Heterogeneous networks ,deep learning ,caching ,differential privacy ,Electrical engineering. Electronics. Nuclear engineering ,TK1-9971 - Abstract
Caching technology is essential for enhancing content transmission rates and reducing data transmission delays in heterogeneous networks, making it a crucial component of the Internet of Things (IoT). However, during data transmission and caching model training, the security of information can be compromised by untrusted third parties. In addition, the flexibility of storage locations presents another bottleneck in heterogeneous network caching technology. Deep learning (DL) is an important method for improving caching performance due to its powerful learning capabilities. Nonetheless, the DL process is vulnerable to various attacks, including white-box and black-box attacks, which can disclose private information. Therefore, this study proposes a DL-based caching framework aimed at enhancing security in heterogeneous networks using differential privacy-preserving technology. Moreover, we utilize an integrated boosting method to improve caching accuracy. Simulated experiments demonstrate that the proposed framework ensures security and accuracy in the heterogeneous network caching process, outperforming existing solutions.
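One standard differential-privacy building block compatible with this setting is the Laplace mechanism applied to per-item request counts before they feed a caching model; the epsilon, sensitivity, and clipping below are assumptions for illustration, not the paper's mechanism.

```python
# Illustrative differential-privacy step: Laplace noise is added to
# per-item request counts before a popularity model is trained on them,
# bounding any single record's influence. Epsilon, sensitivity, and the
# clipping to non-negative values are assumptions for illustration.
import math
import random

def privatize_counts(counts, epsilon, sensitivity=1.0, seed=7):
    rng = random.Random(seed)
    scale = sensitivity / epsilon
    noisy = {}
    for item, c in counts.items():
        u = rng.random() - 0.5                        # uniform in [-0.5, 0.5)
        noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
        noisy[item] = max(0.0, c + noise)             # clip to keep counts usable
    return noisy

counts = {"a": 120, "b": 30, "c": 5}
noisy = privatize_counts(counts, epsilon=1.0)
```

Smaller epsilon means a larger noise scale and stronger privacy at the cost of a less accurate popularity signal, which is the trade-off such frameworks tune.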
- Published
- 2024
- Full Text
- View/download PDF
31. A Comprehensive Survey on Revolutionizing Connectivity Through Artificial Intelligence-Enabled Digital Twin Network in 6G
- Author
-
Muhammad Sheraz, Teong Chee Chuah, Ying Loong Lee, Muhammad Mahtab Alam, Ala'a Al-Habashna, and Zhu Han
- Subjects
Digital twin networks (DTNs) ,6G ,artificial intelligence (AI) ,caching ,resource allocation ,data offloading ,Electrical engineering. Electronics. Nuclear engineering ,TK1-9971 - Abstract
The deployment of 5G has exposed capacity constraints in realizing the key vision of the Internet of Everything (IoE). Therefore, researchers are exploring the potential of the Digital Twin Network (DTN) in wireless networks. DTN is a novel technology that creates virtual replicas of the physical environment for testing, optimizing, and managing wireless networks. The integration of Artificial Intelligence (AI) and DTN appears to be a promising approach to addressing the challenges of communication systems by providing an efficient environment for testing and improving AI models before deployment in real networks, enabling effective network management, optimal resource allocation, and precise behavior prediction. Therefore, AI-enabled DTN in 6G represents a compelling avenue for addressing the multifaceted challenges faced by wireless networks. In this comprehensive work, we offer a holistic survey that delves into the state-of-the-art approaches for AI-enabled DTNs in 6G. Firstly, we discuss the evolution of wireless networks and the concept of AI-enabled DTN in 6G. Secondly, we discuss the role of AI-enabled DTN in 6G in driving advancements in fundamental components of 6G, including resource allocation, caching, data offloading, and data security. Thirdly, we conduct a detailed discussion on key enabling technologies for realizing the capabilities of AI-enabled DTN in 6G. Fourthly, several applications of AI-enabled DTN in 6G are discussed for their practical relevance and significance in various industries such as smart cities, healthcare, and transportation. Finally, we provide lessons learned and highlight existing challenges and research directions to guide further research efforts in the realm of AI-enabled DTN in 6G.
- Published
- 2024
- Full Text
- View/download PDF
32. Efficient Vehicular Edge Computing: A Novel Approach With Asynchronous Federated and Deep Reinforcement Learning for Content Caching in VEC
- Author
-
Wentao Yang and Zhibin Liu
- Subjects
Caching ,asynchronous federated learning ,vehicular edge computing ,deep reinforcement learning ,Electrical engineering. Electronics. Nuclear engineering ,TK1-9971 - Abstract
Vehicular Edge Computing (VEC) technology holds great promise, but also poses significant challenges to the limited computing power of in-vehicle devices and the capacity of Roadside Units (RSUs). At the same time, the highly mobile nature of vehicles and the frequent changes in the content of vehicle requests make it critical to offload applications to edge servers and to effectively predict and cache the most popular content, so that it can be cached in advance in the RSU. Also, to protect the privacy of vehicular users (VUs), traditional data-sharing methods may not be suitable for this work, so we use an asynchronous Federated Learning (FL) approach that updates the global model in a timely manner while protecting the personal privacy of VUs. Unlike traditional synchronous FL, asynchronous FL no longer needs to wait for all vehicles to finish training and uploading local models before updating the global model, which avoids the problem of long training times. In this paper, we propose an in-vehicle edge computing caching scheme based on asynchronous federated learning and deep reinforcement learning (AFLR), which prefetches likely popular content in advance and caches it in edge nodes or vehicle nodes according to the vehicle's location and moving direction, reducing the latency of content requests. After extensive experimental comparisons, the AFLR scheme outperforms other benchmark caching schemes.
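The asynchronous-FL server step described above can be sketched as follows; the staleness-discounted blending rule is a common pattern assumed here for illustration, not the paper's exact aggregation.

```python
# Sketch of asynchronous federated aggregation: the server folds in each
# vehicle's local model as soon as it arrives, down-weighting stale
# updates instead of waiting for every vehicle. The staleness discount
# used here is an illustrative assumption.

def async_update(global_w, local_w, staleness, base_lr=0.5):
    """Blend one vehicle's local weights into the global model."""
    alpha = base_lr / (1 + staleness)   # older updates count for less
    return [(1 - alpha) * g + alpha * l for g, l in zip(global_w, local_w)]

w = [0.0, 0.0]
w = async_update(w, [1.0, 1.0], staleness=0)   # fresh update, alpha = 0.5
w = async_update(w, [1.0, 1.0], staleness=4)   # stale update, alpha = 0.1
```

Because each arrival is applied immediately, a slow vehicle delays nothing; it simply contributes with a smaller alpha when its update finally lands.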
- Published
- 2024
- Full Text
- View/download PDF
33. Optimized Two-Tier Caching with Hybrid Millimeter-Wave and Microwave Communications for 6G Networks.
- Author
-
Sheraz, Muhammad, Chuah, Teong Chee, Roslee, Mardeni Bin, Ahmed, Manzoor, Iqbal, Amjad, and Al-Habashna, Ala'a
- Subjects
MICROWAVE communication systems ,TELECOMMUNICATION systems ,CACHE memory ,HYBRID systems ,COMPUTER network traffic ,REINFORCEMENT learning ,FEMTOCELLS ,DATA transmission systems - Abstract
Data caching is a promising technique to alleviate the data traffic burden on the backhaul and minimize data access delay. However, the cache capacity constraint poses a significant challenge to obtaining content through the cache resource, which degrades caching performance. In this paper, we propose a novel two-tier caching mechanism for data caching at the mobile user equipment (UE) and small base station (SBS) levels in ultra-dense 6G heterogeneous networks, reducing data access failure via cache resources. The two-tier caching enables users to retrieve their desired content from cache resources through device-to-device (D2D) communications from neighboring users or the serving SBS. The cache-enabled UE exploits millimeter-wave (mmWave)-based D2D communications, utilizing line-of-sight (LoS) links for high-speed data transmission to content-demanding mobile UE within a limited connection time. In the event of D2D communication failures, a dual-mode hybrid system, combining mmWave and microwave (μWave) technologies, is utilized to ensure effective data transmission between the SBS and UE to fulfill users' data demands. In the proposed framework, the data transmission speed is optimized through mmWave signals in LoS conditions. In non-LoS scenarios, the system switches to μWave mode for obstacle-penetrating signal transmission. Subsequently, we propose a reinforcement learning (RL) approach to optimize cache decisions through approximation of the Q action-value function. The proposed technique undergoes iterative learning, adapting to dynamic network conditions to enhance the content placement policy and minimize delay. Extensive simulations demonstrate the efficiency of our proposed approach in significantly reducing network delay compared with benchmark schemes. [ABSTRACT FROM AUTHOR]
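The Q-value-driven cache decision can be sketched in its simplest tabular form; the two-state model and the reward values below are illustrative assumptions, not the paper's Q action-value formulation.

```python
# Tabular Q-learning sketch of an RL cache decision: the state is whether
# the requested content is popular, the action is cache or skip. The
# reward model (caching popular content pays off, caching unpopular
# content wastes space) is an illustrative assumption.
import random

def train(episodes=2000, lr=0.1, eps=0.2, seed=5):
    rng = random.Random(seed)
    actions = ("cache", "skip")
    Q = {(s, a): 0.0 for s in ("popular", "unpopular") for a in actions}
    rewards = {"popular": {"cache": 1.0, "skip": 0.0},
               "unpopular": {"cache": -1.0, "skip": 0.0}}
    for _ in range(episodes):
        state = rng.choice(["popular", "unpopular"])
        if rng.random() < eps:                  # epsilon-greedy exploration
            action = rng.choice(actions)
        else:
            action = max(actions, key=lambda a: Q[(state, a)])
        reward = rewards[state][action]
        Q[(state, action)] += lr * (reward - Q[(state, action)])  # one-step update
    return Q

Q = train()
```

A deployed scheme would replace the toy table with a function approximator over richer state (channel mode, popularity estimates, cache occupancy), which is what "approximation of the Q action-value function" refers to.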
- Published
- 2024
- Full Text
- View/download PDF
34. Improving web-oriented information systems efficiency using Redis caching mechanisms.
- Author
-
Privalov, Maksim Vladimirovich and Stupina, Mariya Valerevna
- Subjects
INFORMATION storage & retrieval systems ,RELATIONAL databases ,DATABASES ,WEB-based user interfaces ,WEBSITES ,ELECTRONIC data processing - Abstract
The responsiveness of a web application, with minimal latency and maximum page-loading speed, is determined by its overall performance. When dealing with a large number of users and a large amount of data, the performance of web applications is significantly affected by the ways data are processed, stored, and accessed. This paper identifies the significance of the data caching process in speeding up access to a relational database. The study examines approaches to improving the performance of web applications through the joint use of the MySQL relational database management system (DBMS) and the Redis NoSQL DBMS. The practical part of the study presents a description of a web application built with Java and the Spring Boot framework. The paper proposes an implementation of caching strategies that takes into account the principles of aspect-oriented programming. Experiments on performance testing of the developed web application with and without caching are presented. The results of the study allowed us to conclude that it is possible to improve the performance of web applications through the optimal use of caching strategies when performing database queries. [ABSTRACT FROM AUTHOR]
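The Redis-in-front-of-MySQL pattern evaluated here is the classic cache-aside strategy, which can be sketched as follows; an in-memory dict stands in for Redis so the example stays self-contained, and all names and the TTL are illustrative assumptions.

```python
# Cache-aside sketch: read through the cache, fall back to the database on
# a miss, and store the result with a TTL. A dict stands in for Redis so
# the example is self-contained; names and TTL are illustrative.
import time

class CacheAside:
    def __init__(self, db, ttl=60.0):
        self.db, self.ttl = db, ttl
        self.store = {}            # stand-in for Redis: key -> (value, expiry)
        self.db_reads = 0          # how often the "MySQL" layer was hit

    def get(self, key):
        entry = self.store.get(key)
        if entry and entry[1] > time.monotonic():
            return entry[0]                          # cache hit
        value = self.db[key]                         # cache miss: query the DB
        self.db_reads += 1
        self.store[key] = (value, time.monotonic() + self.ttl)
        return value

db = {"user:1": "Alice"}
cache = CacheAside(db)
first, second = cache.get("user:1"), cache.get("user:1")
```

In the paper's setting the same read-through logic would live in an aspect woven around repository methods, so service code never touches the cache directly.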
- Published
- 2024
- Full Text
- View/download PDF
35. A Hierarchical Data Caching Method in Named Data Networking.
- Author
-
侯睿, 沙莫, and 金继欢
- Abstract
Copyright of Journal of South-Central Minzu University (Natural Science Edition) is the property of Journal of South-Central Minzu University (Natural Science Edition) Editorial Office and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2024
- Full Text
- View/download PDF
36. Enhancing Energy Efficiency in IoT-NDN via Parameter Optimization.
- Author
-
Papenfuß, Dennis, Gerlach, Bennet, Fischer, Stefan, and Hail, Mohamed Ahmed
- Subjects
ENERGY consumption ,DATA compression ,DATA packeting ,INTERNET of things - Abstract
The IoT encompasses objects, sensors, and everyday items not typically considered computers. IoT devices are subject to severe energy, memory, and computation power constraints. Employing NDN for the IoT is a recent approach to accommodating these issues. To gain deeper insight into how different network parameters affect energy consumption, this work analyzes a range of parameters using hyperparameter optimization. The experiments from this work's ndnSIM-based hyperparameter setup indicate that the data packet size has the most significant impact on energy consumption, followed by the caching scheme, the caching strategy, and finally the forwarding strategy. The energy footprints of these parameters are orders of magnitude apart. Surprisingly, the packet request sequence influences the caching parameters' energy footprint more than the graph size and topology. Regarding energy consumption, the results indicate that data compression may be more relevant than expected, and caching may be more significant than the forwarding strategy. The framework for ndnSIM developed in this work can be used to simulate NDN networks more efficiently. Furthermore, the work presents a valuable basis for further research on the effect of specific parameter combinations not examined before. [ABSTRACT FROM AUTHOR]
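The search setup can be sketched as an exhaustive sweep over parameter combinations; here a dummy cost function stands in for an actual ndnSIM energy measurement, and all names and values are illustrative assumptions.

```python
# Sketch of a hyperparameter sweep: every parameter combination is scored
# and the cheapest kept. The cost function is a dummy stand-in for an
# ndnSIM energy measurement; parameter names and values are illustrative.
import itertools

def energy_cost(packet_size, caching_scheme):
    """Stand-in for one simulation run; returns a dummy energy figure."""
    scheme_penalty = {"lru": 1.0, "fifo": 1.2, "none": 2.0}[caching_scheme]
    return packet_size * 0.01 * scheme_penalty

grid = itertools.product([256, 1024, 4096], ["lru", "fifo", "none"])
best = min(grid, key=lambda cfg: energy_cost(*cfg))
```

Real hyperparameter optimizers prune this grid (random search, Bayesian optimization) because each "cost evaluation" is a full simulation run rather than an arithmetic expression.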
- Published
- 2024
- Full Text
- View/download PDF
37. Performance Optimization Techniques in Object Oriented Programming in PHP.
- Author
-
Vayadande, Kuldeep, Telsang, Shreyash, Thakare, Manthan, Thigale, Om, Thenge, Aniket, and Tarade, Shreya
- Subjects
OBJECT-oriented programming ,MATHEMATICAL optimization - Abstract
Object-Oriented Programming (OOP) in PHP is gaining traction due to its flexibility and control. This research paper delves into the best practices for optimizing OOP in PHP. The primary focus is on strategies such as designing classes, leveraging inheritance, and implementing caching to expedite processes. The paper also emphasizes the importance of other optimization techniques such as polymorphism, method access optimization, property access optimization, and memory management. Polymorphism allows objects of different classes to be treated as objects of a common superclass, enabling more flexible and dynamic code. Method and property access optimization involve using the most efficient ways to access methods and properties of objects, while memory management ensures that resources are used effectively. Overall, this research provides a comprehensive guide to optimizing PHP OOP, covering a wide range of techniques and strategies. It serves as a valuable resource for PHP developers looking to improve the performance and efficiency of their object-oriented applications. By employing tactics like class design, inheritance, caching, and profiling, developers can harness the full potential of OOP in PHP and build robust, high-performing applications. This paper is a must-read for anyone interested in PHP OOP and its optimization. [ABSTRACT FROM AUTHOR]
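The caching tactic the paper recommends for expensive methods is essentially memoization; the pattern is language-agnostic, so it is sketched here in Python (the class and method names are illustrative, not from the paper).

```python
# Memoization sketch of the method-level caching idea: results of an
# expensive method are cached per argument, so repeat calls skip the work.
# Class and method names are illustrative assumptions.
import functools

class ReportService:
    def __init__(self):
        self.calls = 0            # how many times the real work actually ran

    @functools.lru_cache(maxsize=128)   # keyed on (self, n)
    def report(self, n):
        self.calls += 1
        return sum(i * i for i in range(n))   # stand-in for expensive work

svc = ReportService()
a, b = svc.report(1000), svc.report(1000)     # second call is a cache hit
```

In PHP the same effect is typically achieved with a private result array checked at the top of the method, or an external store for results that must survive the request.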
- Published
- 2024
38. Effective Cache Placement for Content Delivery Networks in Fog Computing.
- Author
-
Kumari, Priti, Dubey, Vandana, Patel, Kavita, Shrivastava, Sarika, and Kaur, Parmeet
- Subjects
CONTENT delivery networks ,CACHE memory ,DOMINATING set ,ENERGY consumption ,USER experience - Abstract
Fog computing (FC) is a concept that extends cloud-like amenities to the edge of the network. The fog layer that complements the cloud-based concept aids in strengthening the quality of service (QoS) in time-sensitive activities. One such application of FC arises in the deployment of content delivery networks for reducing the latency of transferring content to customers and enhancing the user experience. This paper presents a fog-based system where fog nodes act as caches for content storage. Appropriate placement of cache nodes is important for fast content distribution to users and maintaining QoS. Another challenge is to minimize energy expenditure while deploying caches at scale. To deal with these issues, a Connected Dominating Set (CDS) and popularity-based caching strategy for content distribution fog networks is suggested in this work. The content is sorted based on its past popularity, and CDS-based rules have been developed to recommend the most suitable fog nodes (FNs) for placing popular content or files. The selection of fog nodes as caches is performed based on storage capacity, energy consumption, and user density (i.e., the number of neighbors of a fog node). Experimental outcomes confirm the efficiency of the proposed scheme against no caching and random cache placement. In comparison with the no-caching and random methods, the proposed method selects better fog nodes as cache nodes, incurs lower energy consumption, and consumes less bandwidth. [ABSTRACT FROM AUTHOR]
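The Connected Dominating Set idea at the heart of the placement rule can be sketched with a greedy construction; the toy topology is an assumption, and the paper's actual selection additionally weighs storage capacity, energy, and user density.

```python
# Greedy Connected Dominating Set sketch for picking fog nodes as caches:
# starting from one node, repeatedly add the neighbor of the current set
# that covers the most uncovered nodes, keeping the set connected. The
# toy topology is an assumption; a real selection would also weigh
# storage, energy, and user density.

def greedy_cds(graph, start):
    cds, covered = {start}, {start} | graph[start]
    while covered != set(graph):
        # candidates that keep the set connected: neighbors of the current CDS
        frontier = {n for c in cds for n in graph[c]} - cds
        best = max(frontier, key=lambda n: len(graph[n] - covered))
        cds.add(best)
        covered |= graph[best] | {best}
    return cds

graph = {
    "f1": {"f2", "f3"}, "f2": {"f1", "f4"}, "f3": {"f1"},
    "f4": {"f2", "f5"}, "f5": {"f4"},
}
caches = greedy_cds(graph, "f1")
```

Every fog node is then either a cache or one hop from a cache, which is what keeps content distribution fast while the cache count stays small.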
- Published
- 2024
- Full Text
- View/download PDF
39. An Age-of-Information-Based UAV Caching and Trajectory Optimization Algorithm.
- Author
-
周晓雅 and 朱 琦
- Abstract
Copyright of Journal of Data Acquisition & Processing / Shu Ju Cai Ji Yu Chu Li is the property of Editorial Department of Journal of Nanjing University of Aeronautics & Astronautics and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2024
- Full Text
- View/download PDF
40. Incentive-Aware Blockchain-Assisted Intelligent Edge Caching and Computation Offloading for IoT
- Author
-
Qian Wang, Siguang Chen, and Meng Wu
- Subjects
Computation offloading ,Caching ,Incentive ,Blockchain ,Federated deep reinforcement learning ,Engineering (General). Civil engineering (General) ,TA1-2040 - Abstract
The rapid development of artificial intelligence has pushed the Internet of Things (IoT) into a new stage. Facing the explosive growth of data and the higher quality of service required by users, edge computing and caching are regarded as promising solutions. However, the resources in edge nodes (ENs) are not inexhaustible. In this paper, we propose an incentive-aware blockchain-assisted intelligent edge caching and computation offloading scheme for IoT, dedicated to providing a secure and intelligent solution for collaborative ENs in resource optimization and control. Specifically, we jointly optimize offloading and caching decisions, as well as computing and communication resource allocation, to minimize the total cost of task completion in the EN. Furthermore, a blockchain incentive and contribution co-aware federated deep reinforcement learning algorithm is designed to solve this optimization problem. In this algorithm, we construct an incentive-aware blockchain-assisted collaboration mechanism which operates during local training, with the aim of strengthening the willingness of ENs to participate in collaboration with a security guarantee. Meanwhile, a contribution-based federated aggregation method is developed, in which the aggregation weights of EN gradients are based on their contributions, thereby improving the training effect. Finally, compared with other baseline schemes, the numerical results show that our scheme uses resources efficiently, with significant advantages in total cost reduction and caching performance.
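The contribution-based aggregation step can be sketched as a weighted average of gradients; the contribution scores and gradient values here are illustrative assumptions, and how contributions are measured is the scheme's own business.

```python
# Sketch of contribution-based federated aggregation: each edge node's
# gradient is weighted in proportion to its (assumed, pre-computed)
# contribution score rather than uniformly. Scores and gradients are
# illustrative values.

def aggregate(gradients, contributions):
    """Weighted average of gradients, weights proportional to contributions."""
    total = sum(contributions)
    weights = [c / total for c in contributions]
    dim = len(gradients[0])
    return [sum(w * g[i] for w, g in zip(weights, gradients)) for i in range(dim)]

grads = [[1.0, 0.0], [0.0, 1.0]]           # gradients from two edge nodes
agg = aggregate(grads, contributions=[3.0, 1.0])
```

A node contributing three times as much pulls the global update three times as hard, which is the incentive the blockchain layer is there to account for honestly.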
- Published
- 2023
- Full Text
- View/download PDF
41. Cache in fog computing design, concepts, contributions, and security issues in machine learning prospective
- Author
-
Muhammad Ali Naeem, Yousaf Bin Zikria, Rashid Ali, Usman Tariq, Yahui Meng, and Ali Kashif Bashir
- Subjects
Internet of things ,Cloud computing ,Fog computing ,Caching ,Latency ,Information technology ,T58.5-58.64 - Abstract
The massive growth of diversified smart devices and continuous data generation poses a challenge to communication architectures. To deal with this problem, communication networks consider fog computing one of the promising technologies that can improve overall communication performance. It brings on-demand services proximate to the end devices and delivers the requested data in a short time. Fog computing faces several issues, such as latency, bandwidth, and link utilization, due to limited resources and the high processing demands of end devices. To this end, fog caching plays an imperative role in addressing data dissemination issues. This study provides a comprehensive discussion of fog computing, the Internet of Things (IoT), and the critical issues related to data security and dissemination in fog computing. Moreover, we examine fog-based caching schemes and their contributions to dealing with the existing issues of fog computing. In addition, this paper presents a number of caching schemes with their contributions, benefits, and challenges in overcoming the problems and limitations of fog computing. We also identify machine learning-based approaches for cache security and management in fog computing, as well as several prospective future research directions in caching, fog computing, and machine learning.
- Published
- 2023
- Full Text
- View/download PDF
42. Software-Defined Named Data Networking in Literature: A Review
- Author
-
Albatool Alhawas and Abdelfettah Belghith
- Subjects
SDN ,NDN ,IOT ,caching ,forwarding ,routing ,Information technology ,T58.5-58.64 - Abstract
This paper presents an in-depth review of software-defined named data networking (SD-NDN), a transformative approach in network architectures poised to deliver substantial benefits. By addressing the limitations inherent in traditional host-centric network architectures, SD-NDN offers improvements in network performance, scalability, and efficiency. The paper commences with an overview of named data networking (NDN) and software-defined networking (SDN), the two fundamental building blocks of SD-NDN. It then explores the specifics of integrating NDN with SDN, illustrating examples of various SD-NDN models. These models are designed to leverage SDN for NDN routing, caching, and forwarding. The paper concludes by proposing potential strategies for further integration of SDN and NDN and some open research questions. These proposed strategies aim to stimulate further exploration and innovation in the field of SD-NDN.
- Published
- 2024
- Full Text
- View/download PDF
43. To Cache or Not to Cache
- Author
-
Steven Lyons and Raju Rangaswami
- Subjects
non-datapath caching ,storage ,caching ,algorithms ,analysis ,Industrial engineering. Management engineering ,T55.4-60.8 ,Electronic computers. Computer science ,QA75.5-76.95 - Abstract
Unlike conventional CPU caches, non-datapath caches, such as host-side flash caches which are extensively used as storage caches, have distinct requirements. While every cache miss results in a cache update in a conventional cache, non-datapath caches allow for the flexibility of selective caching, i.e., the option of not having to update the cache on each miss. We propose a new, generalized, bimodal caching algorithm, Fear Of Missing Out (FOMO), for managing non-datapath caches. Being generalized has the benefit of allowing any datapath cache replacement policy, such as LRU, ARC, or LIRS, to be augmented by FOMO to make these datapath caching algorithms better suited for non-datapath caches. Operating in two states, FOMO is selective: it disables cache insertion and replacement depending on the learned behavior of the workload. FOMO is lightweight and tracks inexpensive metrics in order to identify these workload behaviors effectively. FOMO is evaluated using three different cache replacement policies against the current state-of-the-art non-datapath caching algorithms, using five different storage system workload repositories (totaling 176 workloads) for six different cache size configurations, each sized as a percentage of each workload's footprint. Our extensive experimental analysis reveals that FOMO can improve upon other non-datapath caching algorithms across a range of production storage workloads, while also reducing the write rate.
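The bimodal selective-insertion idea can be sketched as a gate wrapped around plain LRU; the reuse test, window, and threshold below are illustrative assumptions, much simplified from the published FOMO algorithm.

```python
# Much-simplified bimodal selective cache in the spirit of FOMO: an LRU
# cache whose insertions are disabled while recent misses show no reuse,
# so scan-like workloads do not pollute the cache (or generate writes).
# The reuse test, window, and threshold are illustrative assumptions.
from collections import OrderedDict, deque

class SelectiveLRU:
    def __init__(self, size, window=32, threshold=0.1):
        self.size = size
        self.threshold = threshold
        self.cache = OrderedDict()
        self.recent = deque(maxlen=window)   # addresses of recent misses
        self.hist = deque(maxlen=window)     # was each recent miss a re-miss?

    def access(self, addr):
        if addr in self.cache:
            self.cache.move_to_end(addr)     # plain LRU promotion on hit
            return True
        repeated = addr in self.recent       # missed again soon: reuse exists
        self.recent.append(addr)
        self.hist.append(repeated)
        inserting = sum(self.hist) / len(self.hist) >= self.threshold
        if inserting:                        # bimodal gate on insertion
            self.cache[addr] = True
            if len(self.cache) > self.size:
                self.cache.popitem(last=False)
        return False

scan = SelectiveLRU(4)
for i in range(100):                         # no-reuse scan: nothing cached
    scan.access(i)

loop = SelectiveLRU(4)
hits = [loop.access(a) for a in [1, 2, 1, 2, 1, 2]]
```

Because the gate sits in front of the policy rather than inside it, any datapath replacement algorithm (LRU here, but equally ARC or LIRS) can be dropped in unchanged, which mirrors FOMO's generalized design.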
- Published
- 2024
- Full Text
- View/download PDF
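The bimodal, selective-insertion idea described in the FOMO abstract above can be sketched in a few lines of Python. This is a simplified illustration, not the authors' algorithm: the sliding-window reuse estimate, the threshold, and all parameter values are assumptions made here for clarity, whereas FOMO's actual states and metrics are defined in the paper.

```python
from collections import OrderedDict, deque

class SelectiveLRU:
    """Toy bimodal cache: insertion on miss is enabled only when recently
    missed keys show reuse, so one-shot scans do not pollute the cache."""

    def __init__(self, capacity, window=64, threshold=0.25):
        self.capacity = capacity
        self.cache = OrderedDict()                 # underlying LRU policy
        self.recent_misses = deque(maxlen=window)  # sliding miss history
        self.threshold = threshold                 # reuse rate that enables insertion
        self.inserting = True                      # bimodal state

    def access(self, key):
        if key in self.cache:
            self.cache.move_to_end(key)            # LRU promotion on hit
            return True
        self.recent_misses.append(key)
        # fraction of the miss window occupied by keys missed more than once
        counts = {}
        for k in self.recent_misses:
            counts[k] = counts.get(k, 0) + 1
        repeats = sum(c for c in counts.values() if c > 1)
        self.inserting = repeats / len(self.recent_misses) >= self.threshold
        if self.inserting:                         # selective insertion
            if len(self.cache) >= self.capacity:
                self.cache.popitem(last=False)     # evict LRU victim
            self.cache[key] = True
        return False
```

With a plain LRU, a long scan of unique keys would evict everything; here the cache stays untouched until missed keys start repeating, which is the kind of workload-adaptive behavior a non-datapath cache can afford.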
44. Research on Dynamic Revenue-Based Edge Caching Strategy
- Author
-
Si, Mingyu, Liu, Qiang, Angrisani, Leopoldo, Series Editor, Arteaga, Marco, Series Editor, Chakraborty, Samarjit, Series Editor, Chen, Jiming, Series Editor, Chen, Shanben, Series Editor, Chen, Tan Kay, Series Editor, Dillmann, Rüdiger, Series Editor, Duan, Haibin, Series Editor, Ferrari, Gianluigi, Series Editor, Ferre, Manuel, Series Editor, Jabbari, Faryar, Series Editor, Jia, Limin, Series Editor, Kacprzyk, Janusz, Series Editor, Khamis, Alaa, Series Editor, Kroeger, Torsten, Series Editor, Li, Yong, Series Editor, Liang, Qilian, Series Editor, Martín, Ferran, Series Editor, Ming, Tan Cher, Series Editor, Minker, Wolfgang, Series Editor, Misra, Pradeep, Series Editor, Mukhopadhyay, Subhas, Series Editor, Ning, Cun-Zheng, Series Editor, Nishida, Toyoaki, Series Editor, Oneto, Luca, Series Editor, Panigrahi, Bijaya Ketan, Series Editor, Pascucci, Federica, Series Editor, Qin, Yong, Series Editor, Seng, Gan Woon, Series Editor, Speidel, Joachim, Series Editor, Veiga, Germano, Series Editor, Wu, Haitao, Series Editor, Zamboni, Walter, Series Editor, Zhang, Junjie James, Series Editor, Tan, Kay Chen, Series Editor, Qian, Zhihong, editor, Jabbar, M.A., editor, Cheung, Simon K. S., editor, and Li, Xiaolong, editor
- Published
- 2023
- Full Text
- View/download PDF
45. Serverless Cloud Computing: State of the Art and Challenges
- Author
-
Lannurien, Vincent, D’Orazio, Laurent, Barais, Olivier, Boukhobza, Jalil, Xhafa, Fatos, Series Editor, Krishnamurthi, Rajalakshmi, editor, Kumar, Adarsh, editor, Gill, Sukhpal Singh, editor, and Buyya, Rajkumar, editor
- Published
- 2023
- Full Text
- View/download PDF
46. Intelligent Cache Placement in Multi-cache and One-Tenant Networks
- Author
-
Ameghchouche, Sara, Mami, Mohammed Amine, Khelfi, Mohamed Fayçal, Filipe, Joaquim, Editorial Board Member, Ghosh, Ashish, Editorial Board Member, Prates, Raquel Oliveira, Editorial Board Member, Zhou, Lizhu, Editorial Board Member, Salem, Mohammed, editor, Merelo, Juan Julián, editor, Siarry, Patrick, editor, Bachir Bouiadjra, Rochdi, editor, Debakla, Mohamed, editor, and Debbat, Fatima, editor
- Published
- 2023
- Full Text
- View/download PDF
47. OntoCA: Ontology-Aware Caching for Distributed Subgraph Matching
- Author
-
Qin, Yuzhou, Wang, Xin, Hao, Wenqi, Liu, Pengkai, Song, Yanyan, Zhang, Qingpeng, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Li, Bohan, editor, Yue, Lin, editor, Tao, Chuanqi, editor, Han, Xuming, editor, Calvanese, Diego, editor, and Amagasa, Toshiyuki, editor
- Published
- 2023
- Full Text
- View/download PDF
48. Delay Efficient Caching in Fog Computing
- Author
-
Gupta, Divya, Wadhwa, Shivani, Rani, Shalli, Sharma, Parth, Bansal, Jagdish Chand, Series Editor, Deep, Kusum, Series Editor, Nagar, Atulya K., Series Editor, Goyal, Dinesh, editor, Kumar, Anil, editor, Piuri, Vincenzo, editor, and Paprzycki, Marcin, editor
- Published
- 2023
- Full Text
- View/download PDF
49. A Machine Learning-Based Attack Detection and Prevention System in Vehicular Named Data Networking.
- Author
-
Hussain Magsi, Arif, Ghulam, Ali, Memon, Saifullah, Javeed, Khalid, Alhussein, Musaed, and Rida, Imad
- Subjects
RANDOM forest algorithms ,K-nearest neighbor classification ,DECISION trees ,REPUTATION ,DATA privacy ,BLOCKCHAINS ,MACHINE learning - Abstract
Named Data Networking (NDN) is gaining significant attention in Vehicular Ad-hoc Networks (VANET) due to its in-network content caching, name-based routing, and mobility-supporting characteristics. Nevertheless, existing NDN faces three significant challenges: security, privacy, and routing. In particular, security attacks such as Content Poisoning Attacks (CPA) can jeopardize legitimate vehicles with malicious content. For instance, attacker host vehicles can serve consumers with invalid information, which has dire consequences, including road accidents. In such a situation, establishing trust in content-providing vehicles becomes a new challenge. On the other hand, ensuring privacy and preventing unauthorized access in Vehicular NDN (VNDN) is another challenge. Moreover, NDN's pull-based content retrieval mechanism is inefficient for delivering emergency messages in VNDN. Our contribution is therefore threefold. First, unlike existing rule-based reputation evaluation, we propose a Machine Learning (ML)-based reputation evaluation mechanism that distinguishes CPA attackers from legitimate nodes; based on the ML evaluation results, vehicles accept or discard served content. Second, we exploit a decentralized blockchain system to ensure vehicles' privacy by maintaining their information in a secure digital ledger. Finally, we improve the default routing mechanism of VNDN from pull-based to push-based content dissemination using a Publish-Subscribe (Pub-Sub) approach. We implemented and evaluated our ML-based classification model on the publicly accessible BurST Australian Dataset for Misbehavior Detection (BurST-ADMA). We used five ML classifiers: Logistic Regression, Decision Tree, K-Nearest Neighbors, Random Forest, and Gaussian Naive Bayes. The results indicate that Random Forest achieved the highest average accuracy rate of 100%. Our proposed research offers an accurate solution to detect CPA in VNDN for safe, secure, and reliable vehicle communication. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
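The accept-or-discard decision described in the abstract above can be sketched without any ML library. The paper trains a Random Forest on the BurST-ADMA dataset; the stdlib-only stand-in below uses a majority vote over hand-written decision stumps to mimic the forest's ensemble vote. All feature names and thresholds here are hypothetical, not taken from the paper.

```python
# Hypothetical per-content feature vector a consuming vehicle might see:
# signature validity, producer reputation score, and freshness of the data.

def stump_signature(x):
    return 1 if x["signature_valid"] else 0             # signature checks out

def stump_reputation(x):
    return 1 if x["producer_reputation"] >= 0.5 else 0  # producer's track record

def stump_freshness(x):
    return 1 if x["freshness_ok"] else 0                # content not stale/replayed

STUMPS = [stump_signature, stump_reputation, stump_freshness]

def classify(features):
    """Majority vote of the stumps, standing in for ensemble voting."""
    votes = sum(stump(features) for stump in STUMPS)
    return "legitimate" if votes >= 2 else "attacker"

def accept_content(features):
    # a consuming vehicle accepts or discards the served content
    return classify(features) == "legitimate"
```

In practice each stump would be replaced by a trained tree (e.g. scikit-learn's `RandomForestClassifier`), but the voting structure of the decision is the same.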
50. User plane acceleration service for next-generation cellular networks.
- Author
-
Zeydan, Engin and Turk, Yekta
- Subjects
NEXT generation networks ,SOFTWARE maintenance ,RADIO access networks ,SOFTWARE radio ,TRANSPORT planes ,QUALITY of service ,INFRASTRUCTURE (Economics) ,RAILROAD tunnels - Abstract
Reducing end-to-end latency is a key requirement for the efficient and reliable new services offered by next-generation mobile networks. In this context, it is critical for mobile network operators (MNOs) to enable faster communications over backhaul transport networks between next-generation base stations and core networks. However, MNOs will need to make new investments and optimize many points of their current transport infrastructure to serve next-generation services well, and even with these investments, faults and performance degradation in transport networks can never be ruled out. This paper presents a new approach that reduces the dependence of MNOs' services on transport network quality, relying only on software updates to radio access network and core network components. A Hypertext Transfer Protocol (HTTP)-based user plane that can be cached and accelerated is proposed, making it an ideal solution to combat transport problems in next-generation mobile networks. Numerical tests validate our proposed approach and underscore the significant improvements in transfer time, throughput, and overall performance achieved by leveraging HTTP caching and acceleration techniques. More specifically, GPRS tunneling protocol-user plane (GTP-U) is, on average, 35% slower than HTTP, with the performance difference increasing as the data size grows, primarily due to additional overhead and GTP-U encapsulation time. Additionally, HTTP caching with a size of 20 MB provides a 9.5% acceleration in data transfer time, with an average increase of approximately 9% when the data size exceeds 20 MB. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
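The core mechanism the abstract above relies on, a size-bounded HTTP response cache in the user plane, can be sketched as follows. The 20 MB budget matches the configuration evaluated in the paper, but the class name, its API, and the LRU admission/eviction policy are assumptions made for this illustration.

```python
from collections import OrderedDict

class HttpResponseCache:
    """Sketch of a size-bounded user-plane HTTP cache with LRU eviction."""

    def __init__(self, max_bytes=20 * 1024 * 1024):  # assumed 20 MB budget
        self.max_bytes = max_bytes
        self.used = 0
        self.entries = OrderedDict()  # url -> response body, LRU order

    def get(self, url):
        body = self.entries.get(url)
        if body is not None:
            self.entries.move_to_end(url)  # cache hit: refresh recency
        return body

    def put(self, url, body):
        if len(body) > self.max_bytes:
            return  # object larger than the whole cache: do not admit
        if url in self.entries:
            self.used -= len(self.entries.pop(url))
        while self.used + len(body) > self.max_bytes:
            _, evicted = self.entries.popitem(last=False)  # evict LRU victim
            self.used -= len(evicted)
        self.entries[url] = body
        self.used += len(body)
```

A hit served from such a cache skips the backhaul transfer entirely, which is where the transfer-time acceleration reported in the abstract comes from.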