1,606 results
Search Results
2. Node Insertion Algorithm and Ant Colony Optimization: Performance Review on Travelling Salesman Problem Paper for the Optimality of Network Topology.
- Author
-
Juhana, Tutun, Lukman, Selvi, Rachmana, Nana, Husni, Faizal, Adhiluhung, Gelar Pambudi, and Harjono, Hafizh Mulya
- Subjects
- *ANT algorithms, *TRAVELING salesman problem, *POLYNOMIAL time algorithms, *TOPOLOGY, *APPROXIMATION algorithms, *INDUSTRIAL controls manufacturing, *DETERMINISTIC algorithms, *GRAPH theory - Abstract
Integrated industrial control systems require reliable communication among all parts of the control system, and constraints on the network topology include the need for low-latency data transmission. Topology-construction algorithms sometimes fail to deliver the fault-tolerant network design they promise. This work therefore investigates a Node Insertion Algorithm (NIA) and Ant Colony Optimization (ACO) for solving the Travelling Salesman Problem (TSP) that arises when designing network topologies for a communication network. Because TSP is NP-hard (non-deterministic polynomial-time hard), no polynomial-time algorithm can be expected to handle an exponentially growing number of communication addresses, so the problem is tackled with approximation algorithms from graph theory. This work presents a comparison between NIA and ACO on network topologies of 200 nodes. The results show that NIA is superior to ACO in terms of both the resulting ring's cost and the computation time. Three topologies are tested, and NIA requires only 14.88 seconds to finish the computation, with a resulting ring length of 164,019 units. The NIA can therefore be applied to logical and physical nodes for a reliable connection in an industrial network topology. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
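The insertion idea behind an NIA-style construction can be illustrated with the standard cheapest-insertion heuristic for TSP: grow a ring by repeatedly inserting the node whose insertion causes the smallest increase in ring length. This is a generic textbook sketch, not a reconstruction of the authors' exact algorithm, and the coordinates are made-up toy data.

```python
import math

def cheapest_insertion_tour(points):
    """Build a TSP ring by repeatedly inserting the node whose insertion
    between two adjacent ring nodes increases the ring length the least."""
    dist = lambda a, b: math.dist(points[a], points[b])
    n = len(points)
    tour = [0, 1, 2][:n]                  # seed the ring with the first nodes
    remaining = set(range(n)) - set(tour)
    while remaining:
        best = None                       # (cost increase, node, position)
        for v in remaining:
            for i in range(len(tour)):
                a, b = tour[i], tour[(i + 1) % len(tour)]
                inc = dist(a, v) + dist(v, b) - dist(a, b)
                if best is None or inc < best[0]:
                    best = (inc, v, i + 1)
        _, v, pos = best
        tour.insert(pos, v)               # splice the node into the ring
        remaining.discard(v)
    return tour

pts = [(0, 0), (4, 0), (4, 3), (0, 3), (2, 1)]
tour = cheapest_insertion_tour(pts)
length = sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
             for i in range(len(tour)))
```

Like the NIA described in the abstract, the heuristic runs in polynomial time and returns an approximate ring rather than an exact optimum.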
3. Recursive Probability Trees for Bayesian Networks
- Author
-
Cano, Andrés, Gómez-Olmedo, Manuel, Moral, Serafín, Pérez-Ariza, Cora B., Hutchison, David, Series editor, Kanade, Takeo, Series editor, Kittler, Josef, Series editor, Kleinberg, Jon M., Series editor, Mattern, Friedemann, Series editor, Mitchell, John C., Series editor, Naor, Moni, Series editor, Nierstrasz, Oscar, Series editor, Pandu Rangan, C., Series editor, Steffen, Bernhard, Series editor, Sudan, Madhu, Series editor, Terzopoulos, Demetri, Series editor, Tygar, Doug, Series editor, Vardi, Moshe Y., Series editor, Weikum, Gerhard, Series editor, Goebel, Randy, editor, Siekmann, Jörg, editor, Wahlster, Wolfgang, editor, Meseguer, Pedro, editor, Mandow, Lawrence, editor, and Gasca, Rafael M., editor
- Published
- 2010
- Full Text
- View/download PDF
4. An innovative deterministic algorithm for optimal placement of micro phasor measurement units in radial electricity distribution systems
- Author
-
Gholizadeh Manghutay, Aref, Salay Naderi, Mehdi, and Fathi, Seyed Hamid
- Published
- 2022
- Full Text
- View/download PDF
5. Robust optimal power flow considering uncertainty in wind power probability distribution.
- Author
-
Dai, Leisi, Xiao, Huangqing, Yang, Ping, Pan, Guangsheng, and Sun, Kaiqi
- Subjects
DISTRIBUTION (Probability theory) ,ELECTRICAL load ,WIND power ,OPTIMIZATION algorithms ,LINEAR matrix inequalities ,DETERMINISTIC algorithms ,TRAJECTORY optimization - Abstract
This paper proposes an optimal power flow model that takes into account the uncertainty in the probability distribution of wind power. The model can schedule controllable generators under any possible distribution of wind power to ensure the safe and economic operation of the system. Firstly, considering the incompleteness of historical wind power data, the paper models the uncertainty of wind power using the second-order moments of the probability distribution and their fluctuation intervals. Subsequently, a robust optimal power flow model based on the probability distribution model and joint chance constraints is established. The Lagrangian duality theorem is then employed to eliminate random variables from the optimization model, transforming the uncertainty model into a deterministic linear matrix inequality problem. Finally, a convex optimization algorithm is used to solve the deterministic problem, and the results are compared with a traditional chance-constrained optimal power flow model. The feasibility and effectiveness of the proposed method are validated through case study simulations. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
6. A new filled function method based on global search for solving unconstrained optimization problems.
- Author
-
Jia Li, Yuelin Gao, Tiantian Chen, and Xiaohua Ma
- Subjects
LOGARITHMIC functions ,DETERMINISTIC algorithms ,EXPONENTIAL functions ,CONTINUOUS functions ,CONJUGATE gradient methods ,GLOBAL optimization - Abstract
The filled function method is a deterministic algorithm for finding a global minimizer of global optimization problems, and its effectiveness is closely related to the form of the constructed filled function. Currently, filled functions mainly have three drawbacks in form: the need for parameter adjustment and control (where parameters exist), the inclusion of exponential or logarithmic functions, and discontinuity or non-differentiability. To overcome these limitations, this paper proposes a parameter-free filled function that contains no exponential or logarithmic functions and is continuous and differentiable. Based on the new filled function, a filled function method for solving unconstrained global optimization problems is designed. The algorithm selects points in the feasible domain that are far from the global minimum point as initial points, and improves the step-size setting in the stage of minimizing the filled function to enhance the algorithm's global optimization capability. In addition, tests were conducted on 14 benchmark functions and compared with existing filled function algorithms. The numerical results show that the new algorithm is feasible and effective. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
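The two-phase structure common to filled function methods (a local search on f, then a local search on an auxiliary "filled" function to escape the current basin) can be sketched as follows. The auxiliary function used here is a simple classical discontinuous form, chosen for brevity; it exhibits exactly the kind of drawback (a jump at the basin boundary) that the paper's new filled function is designed to remove. The test function, minimizer, and probe offsets are illustrative only.

```python
def local_min(func, x, lo=-4.0, hi=4.0, step=0.5, tol=1e-8):
    """Naive derivative-free descent on [lo, hi]: step left/right,
    halve the step when neither direction improves."""
    while step > tol:
        moved = False
        for cand in (x - step, x + step):
            cand = min(max(cand, lo), hi)
            if func(cand) < func(x):
                x, moved = cand, True
                break
        if not moved:
            step /= 2.0
    return x

def filled_function_method(f, x0, rounds=10):
    xstar = local_min(f, x0)              # phase 1: local minimization of f
    for _ in range(rounds):
        fstar = f(xstar)
        # Phase 2: a classical (discontinuous) filled function around xstar.
        # It decreases away from xstar while f >= f(xstar), and keeps
        # decreasing once a lower basin is entered.
        def F(x):
            d2 = (x - xstar) ** 2
            return 1.0 / (1.0 + d2) if f(x) >= fstar else -d2
        improved = False
        for delta in (-0.5, 0.5):         # probe both sides of xstar
            y = local_min(F, xstar + delta)
            z = local_min(f, y)           # phase 1 again from the escape point
            if f(z) < fstar - 1e-12:
                xstar, improved = z, True
                break
        if not improved:                  # no lower basin found: stop
            break
    return xstar

f = lambda x: x**4 - 8*x**2 + 3*x         # two basins; global minimum near -2.09
x = filled_function_method(f, 2.0)        # start in the worse (right) basin
```

Starting from the local minimizer near x = 1.9, minimizing the filled function carries the iterate across the barrier into the lower basin, after which plain local search finds the global minimizer.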
7. Modeling and Analysis of Dekker-Based Mutual Exclusion Algorithms.
- Author
-
Nigro, Libero, Cicirelli, Franco, and Pupo, Francesco
- Subjects
DETERMINISTIC algorithms ,ALGORITHMS ,SCALABILITY ,TIME management - Abstract
Mutual exclusion is a fundamental problem in concurrent/parallel/distributed systems. The first pure-software solution to this problem for two processes, not based on hardware instructions like test-and-set, was proposed in 1965 by Th. J. Dekker and communicated by E. W. Dijkstra. The correctness of this algorithm has generally been studied under the strong memory model, where the read and write operations on a memory cell are atomic or indivisible. In recent years, some variants of the algorithm have been proposed to make it RW-safe under the weak memory model, which makes it possible, e.g., for multiple read operations to occur simultaneously with a write operation on the same variable, with the read operations returning (flickering) a non-deterministic value. This paper proposes a novel approach to formal modeling of and reasoning on a mutual exclusion algorithm using Timed Automata and the Uppaal tool, and it applies this approach through exhaustive model checking to conduct a thorough analysis of Dekker's algorithm and some of its variants proposed in the literature. This paper aims to demonstrate that model checking, although necessarily limited in scalability in the number N of processes due to the state-explosion problem, is an effective and powerful way to reason about concurrency and process action interleaving, and it can provide significant results about the correctness and robustness of the basic version and variants of Dekker's algorithm under both the strong and weak memory models. In addition, the properties of these algorithms are carefully studied in the context of a tournament-based binary tree for N ≥ 2 processes. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
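For reference, Dekker's two-process algorithm that the abstract analyzes looks as follows in pure Python. Under CPython's GIL the interleaving is effectively sequentially consistent, so this is a didactic model of the strong-memory setting only, not of the weak-memory (flickering-read) variants; the iteration count is arbitrary.

```python
import sys
import threading

sys.setswitchinterval(1e-4)  # switch threads often so the busy-waits interleave

# Shared state of Dekker's algorithm for two processes with ids 0 and 1.
flag = [False, False]        # flag[i]: process i wants to enter
turn = 0                     # tie-breaker when both want to enter
counter = 0                  # the shared resource the lock protects

def lock(i):
    other = 1 - i
    flag[i] = True
    while flag[other]:       # the other process also wants in
        if turn == other:    # not our turn: withdraw and wait
            flag[i] = False
            while turn == other:
                pass         # busy-wait until the turn is handed over
            flag[i] = True
    # here: flag[i] is set and the other side is not contending

def unlock(i):
    global turn
    turn = 1 - i             # hand priority to the other process
    flag[i] = False

def worker(i, iters):
    global counter
    for _ in range(iters):
        lock(i)
        counter += 1         # critical section: a non-atomic read-modify-write
        unlock(i)

ITERS = 2000
threads = [threading.Thread(target=worker, args=(i, ITERS)) for i in (0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Under sequential consistency the lock guarantees mutual exclusion, so the final counter equals exactly 2 × ITERS; it is precisely this guarantee that the flickering reads of the weak memory model put at risk.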
8. Sublinear Algorithms in T-Interval Dynamic Networks.
- Author
-
Jahja, Irvan and Yu, Haifeng
- Subjects
TIME complexity ,ALGORITHMS ,DISTRIBUTED computing ,UNDIRECTED graphs ,DETERMINISTIC algorithms ,DISTRIBUTED algorithms ,SPANNING trees ,TOPOLOGY - Abstract
We consider standard T-interval dynamic networks, under the synchronous timing model and the broadcast CONGEST model. In a T-interval dynamic network, the set of nodes is always fixed and there are no node failures. The edges in the network are always undirected, but the set of edges in the topology may change arbitrarily from round to round, as determined by some adversary and subject to the following constraint: for every T consecutive rounds, the topologies in those rounds must contain a common connected spanning subgraph. Let H_r be the maximum (in terms of number of edges) such subgraph for rounds r through r + T - 1. We define the backbone diameter d of a T-interval dynamic network to be the maximum diameter of all such H_r's, for r ≥ 1. We use n to denote the number of nodes in the network. Within such a context, we consider a range of fundamental distributed computing problems including Count/Max/Median/Sum/LeaderElect/Consensus/ConfirmedFlood. Existing algorithms for these problems all have time complexity of Ω(n) rounds, even for T = ∞ and even when d is as small as O(1). This paper presents a novel approach/framework, based on the idea of massively parallel aggregation. Following this approach, we develop a novel deterministic Count algorithm with O(d^3 log^2 n) complexity, for T-interval dynamic networks with T ≥ c · d^2 log^2 n. Here c is a (sufficiently large) constant independent of d, n, and T. To our knowledge, our algorithm is the very first such algorithm whose complexity does not contain a Θ(n) term. This paper further develops novel algorithms for solving Max/Median/Sum/LeaderElect/Consensus/ConfirmedFlood, while incurring O(d^3 polylog(n)) complexity. Again, for all these problems, our algorithms are the first ones whose time complexity does not contain a Θ(n) term. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
9. Approximate and Randomized Algorithms for Computing a Second Hamiltonian Cycle.
- Author
-
Deligkas, Argyrios, Mertzios, George B., Spirakis, Paul G., and Zamaraev, Viktor
- Subjects
HAMILTONIAN graph theory ,DETERMINISTIC algorithms ,ALGORITHMS ,APPROXIMATION algorithms - Abstract
In this paper we consider the following problem: given a Hamiltonian graph G and a Hamiltonian cycle C of G, can we compute a second Hamiltonian cycle C′ ≠ C of G, and if yes, how quickly? If the input graph G satisfies certain conditions (e.g. if every vertex of G has odd degree, or if the minimum degree is large enough), it is known that such a second Hamiltonian cycle always exists. Despite substantial efforts, no subexponential-time algorithm is known for this problem. In this paper we relax the problem of computing a second Hamiltonian cycle in two ways. First, we consider approximating the length of a second longest cycle on n-vertex graphs with minimum degree δ and maximum degree Δ. We provide a linear-time algorithm for computing a cycle C′ ≠ C of length at least n − 4α(√n + 2α) + 8, where α = (Δ − 2)/(δ − 2). This result provides a constructive proof of a recent result by Girão, Kittipassorn, and Narayanan in the regime Δ/δ = o(√n). Our second relaxation of the problem is probabilistic. We propose a randomized algorithm which computes a second Hamiltonian cycle with high probability, given that the input graph G has a large enough minimum degree. More specifically, we prove that for every 0 < p ≤ 0.02, if the minimum degree of G is at least (8/p) log(8n) + 4, then a second Hamiltonian cycle can be computed with probability at least 1 − 1/n^(50p^4 + 1) in poly(n) · 2^(4pn) time. This result implies that, when the minimum degree δ is sufficiently large, we can compute with high probability a second Hamiltonian cycle faster than any known deterministic algorithm. In particular, when δ = ω(log n), our probabilistic algorithm runs in 2^(o(n)) time. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
10. Holistic Review of UAV-Centric Situational Awareness: Applications, Limitations, and Algorithmic Challenges.
- Author
-
MahmoudZadeh, Somaiyeh, Yazdani, Amirmehdi, Kalantari, Yashar, Ciftler, Bekir, Aidarus, Fathi, and Al Kadri, Mhd Omar
- Subjects
SUPERVISED learning ,ARTIFICIAL intelligence ,REAL-time computing ,SITUATIONAL awareness ,DETERMINISTIC algorithms - Abstract
This paper presents a comprehensive survey of UAV-centric situational awareness (SA), delineating its applications, limitations, and underlying algorithmic challenges. It highlights the pivotal role of advanced algorithmic and strategic insights, including sensor integration, robust communication frameworks, and sophisticated data processing methodologies. The paper critically analyzes multifaceted challenges such as real-time data processing demands, adaptability in dynamic environments, and complexities introduced by advanced AI and machine learning techniques. Key contributions include a detailed exploration of the transformative potential of UAV-centric SA in industries such as precision agriculture, disaster management, and urban infrastructure monitoring, supported by case studies. In addition, the paper delves into algorithmic approaches for path planning and control, as well as strategies for multi-agent cooperative SA, addressing their respective challenges and future directions. Moreover, this paper discusses forthcoming technological advancements, such as energy-efficient AI solutions, aimed at overcoming current limitations. This holistic review provides valuable insights into UAV-centric SA, establishing a foundation for future research and practical applications in this domain. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
11. Home energy management strategy to schedule multiple types of loads and energy storage device with consideration of user comfort: a deep reinforcement learning based approach.
- Author
-
Tingzhe Pan, Zean Zhu, Hongxuan Luo, Chao Li, Xin Jin, Zijie Meng, and Xinlei Cai
- Subjects
DEEP reinforcement learning ,MACHINE learning ,DETERMINISTIC algorithms ,REINFORCEMENT learning ,ENERGY storage ,ENERGY management ,PRODUCTION scheduling ,POWER plants - Abstract
With the increase in the integration of renewable sources, the home energy management system (HEMS) has become a promising approach to improve grid energy efficiency and relieve network stress. In this context, this paper proposes an optimization dispatching strategy for HEMS to reduce the total cost with full consideration of uncertainties, while ensuring the users' comfort. Firstly, a HEMS dispatching model is constructed to reasonably schedule the start/stop times of the dispatchable appliances and the energy storage system so as to minimize the total cost for home users. In addition, the dispatching strategy also controls the switching times of temperature-controlled loads such as air conditioning to reduce energy consumption while maintaining the indoor temperature at a comfortable level. Then, the optimal dispatching problem of HEMS is modeled as a Markov decision process (MDP) and solved by a deep reinforcement learning algorithm called deep deterministic policy gradient. The example results verify the effectiveness and superiority of the proposed method: the energy cost is reduced by at least 21.9% compared with the other benchmarks, and the indoor temperature is well maintained. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
12. Enhancing Regular Expression Processing through Field-Programmable Gate Array-Based Multi-Character Non-Deterministic Finite Automata.
- Author
-
Zhang, Chuang, Tang, Xuebin, and Peng, Yuanxi
- Subjects
FINITE state machines ,DATABASE security ,GATE array circuits ,COMPUTER network security ,DETERMINISTIC algorithms ,HETEROGENEOUS computing ,ELECTRONIC data processing - Abstract
This work investigates the advantages of FPGA-based Multi-Character Non-Deterministic Finite Automata (MC-NFA) for enhancing regular expression processing over traditional software-based methods. By integrating Field-Programmable Gate Arrays (FPGAs) within a data processing framework, our study showcases significant improvements in processing efficiency, accuracy, and resource utilization for complex pattern matching tasks. We present a novel approach that not only accelerates database and network security applications, but also contributes to the evolving landscape of computational efficiency and hardware acceleration. The findings illustrate that the FPGA's coherent access to main memory and its efficient use of resources lead to considerable gains in processing times and throughput for handling regular expressions, unaffected by expression complexity and driven primarily by dataset size and match location. Our research further introduces a phase shift compensation technique that elevates match accuracy to optimal levels, highlighting the FPGA's potential for real-time, accurate data processing. The study confirms that the benefits of using FPGA for these tasks do not linearly correlate with an increase in resource consumption, underscoring the technology's efficiency. This paper not only solidifies the case for adopting FPGA technology in complex data processing tasks, but also lays the groundwork for future explorations into optimizing hardware accelerators for broader applications. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
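The multi-character idea is easy to state in software: square the NFA's transition relation so that one step consumes two input symbols, halving the number of steps, which is the gain the FPGA design exploits in hardware. A minimal sketch follows; the toy pattern, states, and alphabet are made up for illustration, and odd-length tails (which would take one single-character step) are omitted.

```python
from collections import defaultdict

def step(nfa, states, ch):
    """One single-character NFA step: union of all reachable states."""
    out = set()
    for s in states:
        out |= nfa.get(s, {}).get(ch, set())
    return out

def square(nfa, alphabet):
    """Build a two-character NFA: each transition consumes a pair of
    symbols, halving the step count -- the software analogue of an MC-NFA."""
    nfa2 = defaultdict(dict)
    for s in nfa:
        for a in alphabet:
            mid = step(nfa, {s}, a)
            for b in alphabet:
                dst = step(nfa, mid, b)
                if dst:
                    nfa2[s][(a, b)] = dst
    return dict(nfa2)

def match2(nfa2, text, start=(0,), accept=(2,)):
    """Run the squared NFA over even-length text, two symbols per step."""
    states = set(start)
    for i in range(0, len(text) - 1, 2):
        pair = (text[i], text[i + 1])
        nxt = set()
        for s in states:
            nxt |= nfa2.get(s, {}).get(pair, set())
        states = nxt
    return bool(states & set(accept))

# Toy NFA for "the input contains 'ab'": states 0 -> 1 -> 2 (accepting).
nfa = {0: {'a': {0, 1}, 'b': {0}}, 1: {'b': {2}}, 2: {'a': {2}, 'b': {2}}}
nfa2 = square(nfa, 'ab')
```

In hardware the squared transitions are evaluated in parallel per clock cycle; in this software sketch the payoff is simply half as many loop iterations per input.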
13. D3-TD3: Deep Dense Dueling Architectures in TD3 Algorithm for Robot Path Planning Based on 3D Point Cloud.
- Author
-
Gu, Yuwan, Zhu, Zhitao, Chu, Yongtao, Lv, Jidong, Wang, Xueyuan, and Xu, Shoukun
- Subjects
ROBOTIC path planning ,POINT cloud ,POTENTIAL field method (Robotics) ,ALGORITHMS ,DETERMINISTIC algorithms ,ITERATIVE learning control - Abstract
According to our experiments, the twin delayed deep deterministic (TD3) policy gradient algorithm has several limitations when applied to path planning in environments with many dilemmas: owing to the complexity of the robot path-planning task, the convergence rate of the TD3 algorithm is slow and the collision rate is high. To address this problem, a deep dense dueling twin delayed deep deterministic (D3-TD3) architecture is proposed, a method that preserves important information from cross-layer inputs through dense connections and splits the network into a value function and an advantage function, thus allowing faster convergence on complex tasks. Finally, a spatial model based on three-dimensional (3D) point clouds is built, and simulation results show that in a static environment the proposed algorithm has 40.6% fewer collisions than TD3, 30% fewer than TD3-BC, 19.2% fewer than Dueling TD3 and 17.4% fewer than deep dense TD3. In a combined dynamic and static environment, it has 34.4% fewer collisions than TD3, 24% fewer than TD3-BC, 6% fewer than Dueling TD3 and 25% fewer than deep dense TD3. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
14. An energy-aware approach for resources allocating in the internet of things using a forest optimization algorithm.
- Author
-
Wu, Minning, Zhang, Feng, and Rui, X.
- Subjects
OPTIMIZATION algorithms ,INTERNET of things ,PARTICLE swarm optimization ,SERVICE level agreements ,RESOURCE allocation ,DETERMINISTIC algorithms - Abstract
Purpose: The Internet of things (IoT) is essential in technical, social and economic domains, but there are many challenges that researchers are continuously trying to solve. Traditional resource allocation methods in IoT have focused on the optimal resource selection process, but the energy consumed in allocating resources is not sufficiently considered. This paper proposes a resource allocation technique aimed at reducing energy consumption and delay in resource allocation. Because of the non-deterministic polynomial-time (NP)-hard nature of the resource allocation problem and the forest optimization algorithm's success on complex problems, the authors use this algorithm to allocate resources in IoT. Design/methodology/approach: For the vast majority of IoT applications, energy-efficient communication, a sustainable energy supply and reduced latency have been major goals in resource allocation, making operating systems and applications more efficient. One of the most critical challenges in this field is efficient resource allocation, and this paper provides a new technique to solve it using the forest optimization algorithm. The proposed technique is simulated and analyzed in the MATLAB software environment, and the results are compared with particle swarm optimization (PSO), a genetic algorithm (GA) and a distance-based algorithm. Findings: Simulation results confirm the proper performance of the proposed technique, which outperforms the others (GA, PSO and the distance-based algorithm) in terms of both energy and delay. Practical implications: The paper presents a useful method for improving resource allocation, with higher efficiency than previous methods. The MATLAB-based simulation results indicate that energy consumption and delay are improved compared to the other algorithms, which supports the use of this method in practical projects. In the future, the focus will be on resource failure and on reducing the service level agreement violation rate with respect to the number of resources. Originality/value: The proposed technique can solve the stated problem in the IoT with the best resource utilization, low delay and reduced energy consumption. The proposed forest-optimization-based algorithm is a promising technique to help enterprises participate in IoT initiatives and develop their business. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
15. Multi-Objective Advantage Actor-Critic Algorithm for Hybrid Disassembly Line Balancing with Multi-Skilled Workers.
- Author
-
Wang, Jiacun, Xi, Guipeng, Guo, Xiwang, Qin, Shujin, and Han, Henry
- Subjects
ALGORITHMS ,REINFORCEMENT learning ,DETERMINISTIC algorithms ,CARBON emissions ,GENETIC algorithms ,REINFORCEMENT (Psychology) - Abstract
The scheduling of disassembly lines is of great importance to achieve optimized productivity. In this paper, we address the Hybrid Disassembly Line Balancing Problem that combines linear disassembly lines and U-shaped disassembly lines, considering multi-skilled workers, and targeting profit and carbon emissions. In contrast to common approaches in reinforcement learning that typically employ weighting strategies to solve multi-objective problems, our approach innovatively incorporates non-dominated ranking directly into the reward function. The exploration of Pareto frontier solutions or better solutions is moderated by comparing performance between solutions and dynamically adjusting rewards based on the occurrence of repeated solutions. The experimental results show that the multi-objective Advantage Actor-Critic algorithm based on Pareto optimization exhibits superior performance across six experimental cases of different scales, with an excellent metrics comparison rate of 70%. In some of the experimental cases in this paper, the solutions produced by the multi-objective Advantage Actor-Critic algorithm show advantages over other popular algorithms such as the Deep Deterministic Policy Gradient algorithm, the Soft Actor-Critic algorithm, and the Non-Dominated Sorting Genetic Algorithm II. This further corroborates the effectiveness of our proposed solution. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
16. Quality of services in software defined networking: challenges and controller placement problems.
- Author
-
Aouad, Siham, El Meghrouni, Issam, Sabri, Yassine, Hilmani, Adil, and Maizate, Abderrahim
- Subjects
COMPUTER software quality control ,QUALITY of service ,SOFTWARE as a service ,NETWORK performance ,SCALABILITY ,DETERMINISTIC algorithms - Abstract
Quality of service (QoS) is pivotal for ensuring effective and reliable network performance, yet achieving end-to-end QoS within current network architectures remains a persistent challenge. The emergence of software-defined networking (SDN) addresses limitations in traditional networking by offering a centralized control plane, which allows network administrators to manage resources dynamically and enforce QoS policies efficiently. However, the controller placement problem (CPP) within SDN poses a significant challenge, as identifying the optimal placement of controllers is NP-hard (non-deterministic polynomial-time hard). Researchers are actively working on solutions to address this challenge, especially in large-scale networks where deploying controllers becomes complex. Additionally, maintaining QoS in terms of controller management presents another hurdle. This paper explores these challenges, delving into the literature and providing a comprehensive analysis of controller performance metrics related to QoS parameters such as load balancing, reliability, consistency, and scalability. By addressing these challenges, the research aims to enhance QoS within the SDN framework. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
17. Online Geometric Covering and Piercing.
- Author
-
De, Minati, Jain, Saksham, Kallepalli, Sarat Varma, and Singh, Satyam
- Subjects
ONLINE algorithms ,DETERMINISTIC algorithms ,HYPERCUBES ,PROBLEM solving ,DECISION making - Abstract
We consider the online version of the piercing set problem, where geometric objects arrive one by one, and the online algorithm must maintain a valid piercing set for the already arrived objects by making irrevocable decisions. It is easy to observe that any deterministic algorithm solving this problem for intervals in R has a competitive ratio of at least Ω(n). This paper considers the piercing set problem for similarly sized objects. We propose a deterministic online algorithm for similarly sized fat objects in R^d. For homothetic hypercubes in R^d with side length in the range [1, k], we propose a deterministic algorithm having a competitive ratio of at most 3^d⌈log_2 k⌉ + 2^d. In the end, we show deterministic lower bounds on the competitive ratio for similarly sized α-fat objects in R^2 and homothetic hypercubes in R^d. Note that piercing translated copies of a convex object is equivalent to the unit covering problem, which is well studied in the online setup. Surprisingly, no upper bound on the competitive ratio was known for the unit covering problem when the corresponding object is anything other than a ball or a hypercube. Our result yields an upper bound on the competitive ratio for the unit covering problem when the corresponding object is any convex object in R^d. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
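In one dimension, the flavor of such online piercing algorithms can be sketched with a simple deterministic rule for intervals of length in [1, k]: pierce every unpierced arrival at an integer grid point inside it (every interval of length at least 1 contains an integer). A short packing argument charges at most 2k + 1 grid points to each offline pierce point, so the rule is O(k)-competitive; this is a cruder guarantee than the abstract's hypercube bound and is meant only as an illustration, with made-up input intervals.

```python
import math

class OnlineIntervalPiercer:
    """Online piercing for intervals of length in [1, k]: maintain pierce
    points on the integer grid, adding one only when an arriving interval
    is not yet pierced. All decisions are irrevocable, as the online
    model requires."""
    def __init__(self):
        self.points = set()

    def arrive(self, lo, hi):
        # Similarly sized objects: length assumed to be at least 1.
        assert hi - lo >= 1.0
        if any(lo <= p <= hi for p in self.points):
            return None                  # already pierced: do nothing
        p = math.ceil(lo)                # an integer guaranteed to lie in [lo, hi]
        self.points.add(p)
        return p

piercer = OnlineIntervalPiercer()
for iv in [(0.2, 1.4), (0.8, 2.0), (3.5, 5.0), (1.3, 2.6)]:
    piercer.arrive(*iv)
```

Here the second interval arrives already pierced by the point added for the first, so no new point is placed for it.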
18. Distributed Independent Sets in Interval and Segment Intersection Graphs.
- Author
-
Bhatt, Nirmala, Gorain, Barun, Mondal, Kaushik, and Pandit, Supantha
- Subjects
- *INDEPENDENT sets, *INTERSECTION graph theory, *GRAPH theory, *DETERMINISTIC algorithms, *DISTRIBUTED algorithms, *LOCAL mass media, *COMMUNICATION models - Abstract
The Maximum Independent Set problem is well studied in graph theory and related areas. An independent set of a graph is a subset of pairwise non-adjacent vertices, and a maximum independent set is an independent set of maximum size. This paper studies the Maximum Independent Set problem on some classes of geometric intersection graphs in a distributed setting. More precisely, we study the problem on two geometric intersection graphs, interval and axis-parallel segment intersection graphs, and present deterministic distributed algorithms in a model that is similar to, but slightly weaker than, the local communication model. We compute the maximum independent set on interval graphs in O(k) rounds and O(n) messages, where k is the size of the maximum independent set and n is the number of nodes in the graph. We provide a matching lower bound of Ω(k) on the number of rounds, whereas Ω(n) is a trivial lower bound on message complexity. Thus, our algorithm is both time- and message-optimal. We also study the Maximum Independent Set problem on interval count l graphs, a special case of interval graphs in which the intervals have exactly l different lengths, and propose a 1/2-approximation algorithm that runs in O(l) rounds. For axis-parallel segment intersection graphs, we design a 1/2-approximation algorithm that obtains a solution in O(D) rounds. The results in this paper extend the results of Molla et al. [J. Parallel Distrib. Comput. 2019]. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
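For intuition, the sequential analogue of the interval-graph case is the classical right-endpoint greedy sweep, which computes a maximum independent set of intervals exactly; a distributed O(k)-round algorithm essentially carries out such a sweep via message passing. A sketch with toy intervals:

```python
def max_independent_set(intervals):
    """Greedy MIS on an interval graph: repeatedly keep the interval with
    the smallest right endpoint and discard everything overlapping it.
    Exact for interval graphs."""
    chosen = []
    last_end = float('-inf')
    for lo, hi in sorted(intervals, key=lambda iv: iv[1]):
        if lo > last_end:            # disjoint from everything chosen so far
            chosen.append((lo, hi))
            last_end = hi
    return chosen

ivs = [(0, 3), (2, 5), (4, 7), (6, 9), (8, 11)]
mis = max_independent_set(ivs)
```

Each greedy pick is one step of the sweep, which is why the round complexity of the distributed version scales with the solution size k rather than with n.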
19. A good balance for an online scheduling problem considering both learning and deteriorating effects in the handicraft process.
- Author
-
Ma, Ran, Yang, Xiaohua, Han, Wenwen, and Liu, Aijun
- Subjects
ONLINE algorithms ,HANDICRAFT ,DETERMINISTIC algorithms ,PROCESS optimization ,SCHEDULING ,ONLINE education ,SCHOOL schedules - Abstract
The paper considers an online scheduling problem with the effects of both learning and deterioration to minimize the total completion time. More specifically, we assume that the actual processing length of job J_j is p_{jr} = p_j · r^b + a·t, where p_j is the initial processing time of J_j, t is the starting time of J_j, r is the seating arrangement position of J_j, b is the learning factor and a is the deterioration factor. For this problem, we show that the competitive ratio of any deterministic online algorithm is not less than 2 and provide a best possible online algorithm, DSBPT, with a competitive ratio of 2. Furthermore, we also present a concise computational simulation study to verify the effectiveness and efficiency of the proposed algorithm DSBPT, as well as the management implications provided for decision-makers in production optimization. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
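The processing-time model quoted in the abstract combines a position-based learning effect (the factor r^b with b < 0) with a time-based deterioration effect (the additive term a·t). A quick numeric illustration, with made-up parameter values:

```python
def actual_processing_time(p, r, t, b, a):
    """Actual processing length p_jr = p_j * r**b + a*t under combined
    learning (b < 0 shortens later positions) and deterioration
    (a > 0 lengthens later start times) effects."""
    return p * r ** b + a * t

# A job with base time 10, seated in position 3, starting at time 5,
# learning factor -0.3 and deterioration factor 0.1 (illustrative values).
length = actual_processing_time(p=10.0, r=3, t=5.0, b=-0.3, a=0.1)
```

With r = 1 and t = 0 the actual length reduces to the base time p, as the model requires; later positions shrink it and later start times stretch it.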
20. Action space noise optimization as exploration in deterministic policy gradient for locomotion tasks.
- Author
-
Nobakht, Hesan and Liu, Yong
- Subjects
REINFORCEMENT learning ,DISTRIBUTION (Probability theory) ,DETERMINISTIC algorithms ,NOISE ,SPACE exploration ,MARKOV processes - Abstract
Reinforcement learning (RL) algorithms with deterministic actors (policies) commonly apply noise to the action space for exploration. These exploration methods are either undirected or require extra knowledge of the environment. To address these fundamental limitations, this paper introduces a parameterized stochastic action-noise policy (as a probability distribution) that is aligned with the objective of the RL algorithm. This policy is optimized based on the state-action values of predicted future states; consequently, the optimization does not rely on the explicit definition of the reward function, which improves the adaptability of this exploration strategy across environments and algorithms. Moreover, this paper presents a predictive model of the system dynamics (transition probabilities) that captures the uncertainty of the environment with an optimized design and fewer parameters, significantly reducing model complexity while maintaining the same level of accuracy as current methods. This research evaluates and analyzes the proposed method and models, demonstrating a significant increase in performance and reliability across various locomotion and control tasks in comparison with current methods. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
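For context on the undirected baseline the abstract above sets out to improve, here is a minimal sketch of fixed-variance Gaussian action noise on a deterministic policy. The toy policy, bounds, and sigma are illustrative assumptions, not taken from the paper:

```python
import random

def noisy_action(policy, state, sigma, low=-1.0, high=1.0, rng=None):
    """Undirected exploration baseline: perturb the deterministic policy's
    action with fixed-variance Gaussian noise, then clip to the action
    bounds. (The paper instead *learns* the noise distribution; this is
    the simple variant such work improves upon.)"""
    rng = rng or random.Random(0)
    return [max(low, min(high, a + rng.gauss(0.0, sigma)))
            for a in policy(state)]

# Toy deterministic policy: a linear map from state to action.
policy = lambda s: [0.5 * x for x in s]
print(noisy_action(policy, [0.2, -0.4], sigma=0.1))
```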
21. A Review of Optimal Design for Large-Scale Micro-Irrigation Pipe Network Systems.
- Author
-
Wang, Yafei, Zhang, Yangkai, Wang, Wenjuan, Liu, Zhengguang, Yu, Xingjiao, Li, Henan, Wang, Wene, and Hu, Xiaotao
- Subjects
MICROIRRIGATION ,DETERMINISTIC algorithms ,WATER distribution ,HEURISTIC algorithms ,WATER conservation ,MATHEMATICAL optimization ,PIPE - Abstract
Micro-irrigation pipe network systems are commonly utilized for water transmission and distribution in agricultural irrigation. They effectively transport and distribute water to crops, aiming to achieve water and energy conservation, increased yield, and improved quality. This paper presents a model for the scaled micro-irrigation pipeline network system and provides a comprehensive review of the fundamental concepts and practical applications of optimization techniques in the field of pipeline network design. This paper is divided into four main sections: Firstly, it covers the background and theoretical foundations of optimal design for scaled micro-irrigation pipeline network systems. Secondly, the paper presents an optimal design model specifically tailored for scaled micro-irrigation pipeline networks. Then, it discusses various optimization solution techniques employed for addressing the design challenges of scaled micro-irrigation pipeline networks, along with real-world case studies. Finally, this paper concludes with an outlook on the ongoing research and development efforts in the field of scaled micro-irrigation pipeline network systems. In addition, this paper establishes a fundamental model for optimizing pipeline networks that minimizes total cost while maintaining safe operation. It considers constraints such as pipeline pressure-bearing capacity, maximum flow rate, and diameter. The decision-making variables include pipeline diameter, length, internal roughness, node pressure, future demand, and valve placement. Additionally, this paper provides an extensive overview of deterministic methods and heuristic algorithms utilized in the optimal design of micro-irrigation pipeline networks. Finally, this paper presents future research directions for pipeline network optimization and explores the potential for algorithmic improvements, integration of machine learning techniques, and wider adoption of EPANET 2.0 software.
These endeavors aim to lay a strong foundation for effectively solving complex and challenging optimization problems in micro-irrigation pipeline network systems in the future. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
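To make the flavour of the design models surveyed above concrete, here is a deliberately tiny sketch: choosing pipe diameters for a serial line to minimize cost under a head-loss bound. The catalogue, loss coefficient, and flow values are invented, and real scaled networks require the LP and heuristic machinery the review covers rather than brute-force enumeration:

```python
from itertools import product

# Hypothetical catalogue: diameter (m) -> unit cost per metre.
CATALOGUE = {0.05: 4.0, 0.075: 7.0, 0.10: 11.0}

def head_loss(length, flow, diameter, k=0.0826 * 0.02):
    """Darcy-Weisbach-style loss h = 8 f L Q^2 / (g pi^2 D^5), with the
    constants (and an assumed friction factor f = 0.02) folded into k."""
    return k * length * flow ** 2 / diameter ** 5

def cheapest_design(segments, flow, max_head_loss):
    """Enumerate diameter assignments for a serial pipeline and return the
    cheapest (cost, diameters) whose total head loss stays within bound."""
    best = None
    for diams in product(CATALOGUE, repeat=len(segments)):
        cost = sum(CATALOGUE[d] * L for d, L in zip(diams, segments))
        loss = sum(head_loss(L, flow, d) for d, L in zip(diams, segments))
        if loss <= max_head_loss and (best is None or cost < best[0]):
            best = (cost, diams)
    return best

print(cheapest_design(segments=[100.0, 50.0], flow=0.002, max_head_loss=5.0))
```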
22. Acceleration for Efficient Automated Generation of Operational Amplifiers.
- Author
-
Zhao, Zhenxin, Liu, Jun, and Zhang, Lihong
- Subjects
OPTIMIZATION algorithms ,DETERMINISTIC algorithms ,DIFFERENTIAL evolution ,SIGNAL processing ,BOOSTING algorithms ,OPERATIONAL amplifiers ,ALGORITHMS - Abstract
Operational amplifiers (Op-Amps) are critical to sensor systems because they enable precise, reliable, and flexible signal processing. Current automated Op-Amp generation methods suffer from extremely low efficiency because time-consuming SPICE-in-the-loop sizing is normally involved as their inner loop. In this paper, we propose an efficient automated Op-Amp generation tool using a hybrid sizing method, which combines the merits of a deterministic optimization algorithm and a differential evolution algorithm. Thus, it can not only quickly find a decent local optimum, but also eventually converge to a global optimum. This makes it well suited to serve as an acute filter in the circuit structure evaluation flow, efficiently eliminating undesirable circuit structures in advance of detailed sizing. Our experimental results demonstrate its superiority over traditional sizing approaches and show its efficacy in substantially boosting the efficiency of automated Op-Amp structure generation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
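The global phase of the hybrid sizing described above is differential evolution. A minimal DE/rand/1/bin sketch on a toy objective (this is the textbook algorithm, not the authors' tool; all hyperparameters are illustrative):

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.7, CR=0.9,
                           generations=100, seed=1):
    """Textbook DE/rand/1/bin: mutate with a scaled difference of two
    random members, binomial crossover, greedy one-to-one selection."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    cost = [f(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)   # guarantees one mutated coordinate
            trial = [pop[a][k] + F * (pop[b][k] - pop[c][k])
                     if (rng.random() < CR or k == jrand) else pop[i][k]
                     for k in range(dim)]
            trial = [min(max(v, lo), hi) for v, (lo, hi) in zip(trial, bounds)]
            fc = f(trial)
            if fc < cost[i]:             # greedy replacement
                pop[i], cost[i] = trial, fc
    best = min(range(pop_size), key=cost.__getitem__)
    return pop[best], cost[best]

sphere = lambda x: sum(v * v for v in x)
x, fx = differential_evolution(sphere, bounds=[(-5, 5)] * 3)
print(x, fx)
```

A deterministic local optimizer would then polish the returned point, which is the division of labour the abstract describes.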
23. Fractional B-Spline Wavelets and U-Net Architecture for Robust and Reliable Vehicle Detection in Snowy Conditions.
- Author
-
Mokayed, Hamam, Ulehla, Christián, Shurdhaj, Elda, Nayebiastaneh, Amirhossein, Alkhaled, Lama, Hagner, Olle, and Hum, Yan Chai
- Subjects
DETERMINISTIC algorithms ,WEATHER ,DRONE aircraft ,SEVERE storms ,RASPBERRY Pi - Abstract
This paper addresses the critical need for advanced real-time vehicle detection methodologies in Vehicle Intelligence Systems (VIS), especially in the context of using Unmanned Aerial Vehicles (UAVs) for data acquisition in severe weather conditions, such as heavy snowfall typical of the Nordic region. Traditional vehicle detection techniques, which often rely on custom-engineered features and deterministic algorithms, fall short in adapting to diverse environmental challenges, leading to a demand for more precise and sophisticated methods. The limitations of current architectures, particularly when deployed in real-time on edge devices with restricted computational capabilities, are highlighted as significant hurdles in the development of efficient vehicle detection systems. To bridge this gap, our research focuses on the formulation of an innovative approach that combines the fractional B-spline wavelet transform with a tailored U-Net architecture, operational on a Raspberry Pi 4. This method aims to enhance vehicle detection and localization by leveraging the unique attributes of the NVD dataset, which comprises drone-captured imagery under the harsh winter conditions of northern Sweden. The dataset, featuring 8450 annotated frames with 26,313 vehicles, serves as the foundation for evaluating the proposed technique. The comparative analysis of the proposed method against state-of-the-art detectors, such as YOLO and Faster RCNN, in both accuracy and efficiency on constrained devices, emphasizes the capability of our method to balance the trade-off between speed and accuracy, thereby broadening its utility across various domains. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
24. A pluralist hybrid model for moral AIs.
- Author
-
Song, Fei and Yeung, Shing Hay Felix
- Subjects
ARTIFICIAL intelligence ,ETHICAL decision making ,HYBRID systems ,ETHICS ,MACHINE learning ,DETERMINISTIC algorithms - Abstract
With the increasing capabilities of AIs and machines, the need to implement ethics in AIs is pressing. In this paper, we first survey current approaches to moral AIs and their inherent limitations. Then we propose the pluralist hybrid approach and show how these limitations of moral AIs can be partly alleviated by the pluralist hybrid approach. The core ethical decision-making capacity of an AI based on the pluralist hybrid approach consists of two systems. The first is a deterministic algorithm system that embraces different moral rules for making explicit moral decisions. The second is a machine learning system that calculates the values of the variables required by the application of moral principles. The pluralist hybrid system is better than the existing proposals as it better addresses the moral disagreement problem of the top-down approach by including distinct moral principles. Besides, the pluralist hybrid system reduces the opacity of ethical decision-making by implementing explicit moral principles for moral decision-making. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
25. A Review of Collaborative Trajectory Planning for Multiple Unmanned Aerial Vehicles.
- Author
-
Wang, Li, Huang, Weicheng, Li, Haoxin, Li, Weijie, Chen, Junjie, and Wu, Weibin
- Subjects
FLIGHT planning (Aeronautics) ,HEURISTIC algorithms ,EMERGENCY management ,DETERMINISTIC algorithms ,TECHNOLOGICAL progress ,DRONE aircraft - Abstract
In recent years, the collaborative operation of multiple unmanned aerial vehicles (UAVs) has been an important advancement in drone technology. The research on multi-UAV collaborative flight path planning has garnered widespread attention in the drone field, demonstrating unique advantages in complex task execution, large-scale monitoring, and disaster response. As one of the core technologies of multi-UAV collaborative operations, the research and technological progress in trajectory planning algorithms directly impact the efficiency and safety of UAV collaborative operations. This paper first reviews the application and research progress of path-planning algorithms based on centralized and distributed control, as well as heuristic algorithms in multi-UAV collaborative trajectory planning. It then summarizes the main technical challenges in multi-UAV path planning and proposes countermeasures for multi-UAV collaborative planning in government, business, and academia. Finally, it looks to future research directions, providing ideas for subsequent studies in multi-UAV collaborative trajectory planning technology. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
26. On bias reduction in parametric estimation in stage structured development models.
- Author
-
Pham, Hoa, Pham, Huong T. T., and Yow, Kai Siong
- Subjects
PLANT life cycles ,PLANT development ,STATISTICAL models ,PARAMETER estimation ,DETERMINISTIC algorithms ,ESTIMATION bias - Abstract
Multi-stage models for cohort data are popular statistical models in several fields such as disease progression, biological development of plants and animals, and laboratory studies of life cycle development. A Bayesian approach adopting deterministic transformations in the Metropolis–Hastings (MH) algorithm was used to estimate parameters for these stage structured models. However, bias in the later stages limits this methodology, especially the accuracy of estimates for models with more than three stages. The main aim of this paper is to reduce these biases in parameter estimation. In particular, we conjoin insignificant previous stages or negligible later stages to estimate parameters of a desired stage, while an adjusted MH algorithm based on deterministic transformations is applied for the non-hazard rate models. This means that current stage parameters are estimated separately from the information of their later stages. The proposed method is validated in simulation studies and applied to a case study of the incubation period of COVID-19. The results show that the proposed methods can reduce the biases in later stages for estimates in stage structured models, and the results of the case study can be regarded as a valuable contribution to pandemic prevention. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
27. Exploration of High-Dimensional Grids by Finite State Machines.
- Author
-
Dobrev, Stefan, Narayanan, Lata, Opatrny, Jaroslav, and Pankratov, Denis
- Subjects
FINITE state machines ,TIME complexity ,GRID computing ,DETERMINISTIC algorithms ,DISTRIBUTED computing ,POLYNOMIAL time algorithms - Abstract
We consider the problem of finding a "treasure" at an unknown point of an n-dimensional infinite grid, n ≥ 3, by initially collocated finite automaton (FA) agents. Recently, the problem has been well characterized for 2 dimensions for deterministic as well as randomized FA agents, both in synchronous and semi-synchronous models (Brandt et al. in Proceedings of 32nd International Symposium on Distributed Computing (DISC) LIPIcs 121:13:1–13:17, 2018; Emek et al. in Theor Comput Sci 608:255–267, 2015). It has been conjectured that n + 1 randomized FA agents are necessary to solve this problem in the n-dimensional grid (Cohen et al. in Proceedings of the 28th SODA, SODA '17, pp 207–224, 2017). In this paper we disprove the conjecture in a strong sense: we show that three randomized synchronous FA agents suffice to explore an n-dimensional grid for any n. Our algorithm is optimal in terms of the number of the agents. Our key insight is that a constant number of FA agents can, by their positions and movements, implement a stack, which can store the path being explored. We also show how to implement our algorithm using: four randomized semi-synchronous FA agents; four deterministic synchronous FA agents; or five deterministic semi-synchronous FA agents. We give a different, no-stack algorithm that uses 4 deterministic semi-synchronous FA agents for the 3-dimensional grid. This is provably optimal in the number of agents and the exploration cost, and surprisingly, matches the result for 2 dimensions. For n ≥ 4, the time complexity of the stack-based algorithms mentioned above is exponential in the distance D of the treasure from the starting point of the agents. We show that in the deterministic case, one additional finite automaton agent brings the time down to a polynomial. We also show that any algorithm using 3 synchronous deterministic FA agents in 3 dimensions must travel beyond Ω(D^{3/2}) from the origin.
Finally, we show that all the above algorithms can be generalized to unoriented grids. More specifically, six deterministic semi-synchronous FA agents are sufficient to locate the treasure in an unoriented n-dimensional grid. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
28. A hybrid deterministic–deterministic approach for high-dimensional Bayesian variable selection with a default prior.
- Author
-
Lee, Jieun and Goh, Gyuhyeong
- Subjects
MATRIX inversion ,LAURENCE-Moon-Biedl syndrome ,DEFAULT (Finance) ,SEARCH algorithms ,DETERMINISTIC algorithms ,GREEDY algorithms - Abstract
Identifying relevant variables among numerous potential predictors has been of primary interest in modern regression analysis. While stochastic search algorithms have surged as a dominant tool for Bayesian variable selection, when the number of potential predictors is large, their practicality is constantly challenged due to high computational cost as well as slow convergence. In this paper, we propose a new Bayesian variable selection scheme using a hybrid deterministic–deterministic variable selection (HD-DVS) algorithm that asymptotically ensures rapid convergence to the global mode of the posterior model distribution. A key feature of HD-DVS is that it allows us to circumvent the iterative computation of inverse matrices, which is a common computational bottleneck in Bayesian variable selection. A simulation study is conducted to demonstrate that our proposed method outperforms existing Bayesian and frequentist methods. An analysis of the Bardet–Biedl syndrome gene expression data is presented to illustrate the applicability of HD-DVS to real data. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
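The flavour of deterministic (stochastic-search-free) model search can be illustrated with a much simpler cousin: greedy stagewise selection that never inverts a matrix. This is an illustrative sketch only, not the HD-DVS algorithm or its posterior-based scoring; the data are made up:

```python
def greedy_forward_selection(X, y, max_vars):
    """Deterministic stagewise selection: repeatedly add the predictor
    whose univariate fit to the current residual reduces RSS most.
    Only inner products are used, so no matrix inversion is needed."""
    p = len(X[0])
    residual = list(y)
    selected = []
    for _ in range(max_vars):
        best_j, best_drop, best_beta = None, 0.0, 0.0
        for j in range(p):
            if j in selected:
                continue
            xj = [row[j] for row in X]
            sxx = sum(v * v for v in xj)
            if sxx == 0:
                continue
            beta = sum(a * b for a, b in zip(xj, residual)) / sxx
            drop = beta * beta * sxx   # RSS reduction of this univariate fit
            if drop > best_drop:
                best_j, best_drop, best_beta = j, drop, beta
        if best_j is None:
            break
        selected.append(best_j)
        xj = [row[best_j] for row in X]
        residual = [r - best_beta * v for r, v in zip(residual, xj)]
    return selected

# Toy data: y depends only on columns 0 and 2 of X.
X = [[1, 0, 2], [2, 1, 0], [0, 3, 1], [1, 1, 1], [3, 0, 0]]
y = [row[0] + 2 * row[2] for row in X]
print(greedy_forward_selection(X, y, max_vars=2))
```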
29. Sport in an Algorithmic Age: Michel Serres on Bodily Metamorphosis.
- Author
-
Houterman, Aldo
- Subjects
DETERMINISTIC algorithms ,METAMORPHOSIS ,HUMAN behavior ,PHYSICAL activity ,COMPUTER software ,SPORTS participation - Abstract
The algorithm has become an increasingly important concept in understanding human behavior in recent years. In the case of sport, human bodies are seen as superficial to the driving force of the algorithm, whether it be genetic, behavioral or surveillance-technological algorithms (Harari 2015, 2020; Zuboff 2019). However, the French mathematician and philosopher Michel Serres (1930–2019) structurally relates algorithms to sports and bodily experience at multiple places in his oeuvre. According to Serres, sport actually enables us to reprogram and rewrite our behavior, moods and thoughts, and therefore modifies our algorithmic status (Serres 1999, 2019). Against deterministic conceptions of the body, Serres argues that human bodies possess a 'metamorphic' nature that is similar to the procedural character of computer programs. This paper investigates the relationship between the algorithm and bodily existence through a reading of both Harari and Serres on sport. It will be argued that, against deterministic conceptions of the algorithm, the algorithm can actually broaden an understanding of the role of sports and physical activity. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
30. A Non-Parameter Filled Function Method for Unconstrained Global Optimization Problems.
- Author
-
Liu, Yingchun, Gao, Yuelin, Ma, Suxia, and Guo, Eryang
- Subjects
GLOBAL optimization ,INVERSE functions ,COSINE function ,DETERMINISTIC algorithms - Abstract
In this paper, we give a new parameter-free filled function method for finding global minimizers of unconstrained global optimization problems. The filled function consists of an inverse cosine function and a logarithm function and contains no parameters. Its theoretical properties are proved. A new filled function algorithm is given based on the proposed parameter-free filled function, and numerical results on ten test experiments verify the efficiency and reliability of the algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
31. Software-Defined Time-Sensitive Networking for Cross-Domain Deterministic Transmission.
- Author
-
Guo, Mengjie, Shou, Guochu, Liu, Yaqiong, and Hu, Yihong
- Subjects
SOFTWARE-defined networking ,INDUSTRIALIZATION ,ETHERNET ,DETERMINISTIC algorithms ,SCALABILITY - Abstract
With the rapid development of Industry 4.0, massive emerging time-critical applications pose new demands on industrial networks, such as strict latency bounds, ultra-reliable transmission, and so on. Time-Sensitive Networking (TSN) is well suited for these demanding scenarios due to its support for deterministic transmission based on Ethernet. However, many use cases in real industrial scenarios involve non-TSN domains, and, thus, the network must provide deterministic transmission across non-TSN domains to support them. To this end, this paper proposes a novel Software-Defined Time-Sensitive Networking (SD-TSN) framework that integrates the determinism of TSN and the scalability of Software-Defined Networking (SDN). SD-TSN exploits the coordination between the coordinated controller and different domain controllers to bound non-deterministic queuing delays in a way that guarantees determinism. In particular, we designed a multi-domain time-aware traffic scheduling model, which harmonizes the time-aware shaper (TAS) in TSN and time-slots in non-TSN domains based on a global view, generating transmission schedules for each domain. A prototype testbed was built to conduct the experiments. The evaluation results demonstrate that the proposed method can efficiently achieve bounded delay and low delay variations in transmission across non-TSN domains. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
32. Three‐dimensional scan‐based algorithm for directional neighbor discovery in ad hoc networks.
- Author
-
Lan, Haiwen, Liu, Gang, Wang, Wei, Liang, Chengchao, and Ma, Zheng
- Subjects
AD hoc computer networks ,DETERMINISTIC algorithms ,DRONE aircraft ,DIRECTIONAL antennas ,ALGORITHMS - Abstract
Summary: In recent years, neighbor discovery techniques using directional antennas have attracted widespread attention. Most currently available directional neighbor discovery techniques are designed based on two‐dimensional (2D) space. Although these algorithms can accomplish the discovery between nodes, the algorithm portability is poor for three‐dimensional (3D) platforms, and the application scenarios are limited since new communication scenarios such as unmanned aerial vehicle (UAV) groups and maritime fleets emerge where information needs to be delivered in real time and nodes are located in 3D space. This paper proposes a new deterministic directional neighbor discovery algorithm in 3D space named 3D scan‐based algorithm (3D‐SBA) to meet the needs of the line‐of‐sight (LOS) scenes. Four existing neighbor discovery algorithms, namely, SBA‐based random mode selection (SBA‐R), quorum, complete random algorithm (CRA), and SBA‐based leader election algorithm (LE), have been extended to our proposed 3D algorithm model for simulation and comparative analysis with 3D‐SBA. The simulation results show that the 3D‐SBA algorithm consumes 51.15%, 180.20%, 9.48%, and 17.49% of the time slots compared with the four existing algorithms mentioned above. However, the quorum algorithm has a very high node collision rate, up to 92.11%. Ultimately, the 3D‐SBA algorithm has the best performance considering the conflicts and the density of network nodes. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
33. Finite Difference Method for Intuitionistic Fuzzy Partial Differential Equations.
- Author
-
Man, Sushanta, Saw, Bidhan Chandra, Bairagi, Anupama, and Hazra, Subhendu Bikash
- Subjects
PARTIAL differential equations ,INTUITIONISTIC mathematics ,FINITE difference method ,MEMBERSHIP functions (Fuzzy logic) ,DETERMINISTIC algorithms - Abstract
In this paper, we investigate an intuitionistic fuzzy Poisson equation with uncertain parameters, considering the parameters as intuitionistic fuzzy numbers. We apply a finite difference method to solve the 'Intuitionistic fuzzy Poisson equation'. The continuity of the membership and non-membership functions (which imply the continuity of the hesitancy function) is used to obtain qualitative properties on regular α-cut and β-cut of the intuitionistic fuzzy solution. The fuzzification of the deterministic α-cut and β-cut solutions obtained lead to the intuitionistic fuzzy solution. Finally, an example is presented to illustrate the proposed methodology, as well as to show a graphical representation of its corresponding intuitionistic fuzzy solution. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
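The abstract above solves deterministic α-cut problems and then fuzzifies the results. A minimal sketch of that scheme, reduced to a 1D Poisson problem with an interval-valued source term (the grid size, interval, and 1D reduction are illustrative assumptions, not the paper's 2D setting):

```python
def poisson_1d(f_values, h):
    """Solve -u'' = f on a uniform grid with u = 0 at both ends, using the
    standard second-order stencil and the Thomas (tridiagonal) algorithm."""
    n = len(f_values)                  # number of interior points
    a, b, c = -1.0, 2.0, -1.0          # -u_{i-1} + 2u_i - u_{i+1} = h^2 f_i
    d = [h * h * fv for fv in f_values]
    cp, dp = [0.0] * n, [0.0] * n      # forward elimination
    cp[0], dp[0] = c / b, d[0] / b
    for i in range(1, n):
        m = b - a * cp[i - 1]
        cp[i] = c / m
        dp[i] = (d[i] - a * dp[i - 1]) / m
    u = [0.0] * n                      # back substitution
    u[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        u[i] = dp[i] - cp[i] * u[i + 1]
    return u

def alpha_cut_poisson(f_lo, f_hi, n=99):
    """One alpha-cut of a fuzzy Poisson problem: the uncertain source term
    is the interval [f_lo, f_hi]; since the solution here grows monotonically
    with f, two deterministic solves bound the fuzzy solution."""
    h = 1.0 / (n + 1)
    return poisson_1d([f_lo] * n, h), poisson_1d([f_hi] * n, h)

lo, hi = alpha_cut_poisson(0.8, 1.2)
mid = len(lo) // 2
print(lo[mid], hi[mid])   # bounds on u(0.5); for constant f, u(0.5) = f/8
```

Sweeping α from 0 to 1 (shrinking the interval toward its core) and stacking the resulting envelopes reconstructs the fuzzy solution, which is the fuzzification step the abstract describes.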
34. LOCAL VS NONLOCAL MODELS FOR MITOCHONDRIA SWELLING.
- Author
-
EFENDIEV, MESSOUD A., MURADOVA, ANTIGA, MURADOV, NIJAT, and ZISCHKA, HANS
- Subjects
MITOCHONDRIA ,CHEMOTAXIS ,CHEMOTACTIC factors ,DETERMINISTIC processes ,DETERMINISTIC algorithms - Abstract
In this paper, we consider deterministic, continuous, nonlocal models for the mitochondrial permeability transition, i.e. mitochondrial swelling. Based on seminal papers [1], [2], [3], [5] and the book [4], in which ODE-PDE and PDE-PDE local models for the swelling of mitochondria have been considered, we suggest here new nonlocal models for this process. This new nonlocal deterministic continuous model for mitochondrial swelling scenario contains nonlocal diffusion, nonlocal chemotaxis, as well as nonlocal source term. We would like to especially emphasize that some of the new nonlocal models that we consider in this paper do not have local counterparts in the literature. [ABSTRACT FROM AUTHOR]
- Published
- 2021
35. Component stability in low-space massively parallel computation.
- Author
-
Czumaj, Artur, Davies-Peck, Peter, and Parter, Merav
- Subjects
- *
DETERMINISTIC algorithms , *ALGORITHMS - Abstract
In this paper, we study the power and limitations of component-stable algorithms in the low-space model of massively parallel computation (MPC). Recently, Ghaffari, Kuhn and Uitto (FOCS 2019) introduced the class of component-stable low-space MPC algorithms, which are, informally, those algorithms for which the outputs reported by the nodes in different connected components are required to be independent. This very natural notion was introduced to capture most (if not all) of the known efficient MPC algorithms to date, and it was the first general class of MPC algorithms for which one can show non-trivial conditional lower bounds. In this paper we enhance the framework of component-stable algorithms and investigate its effect on the complexity of randomized and deterministic low-space MPC. Our key contributions include: 1. We revise and formalize the lifting approach of Ghaffari, Kuhn and Uitto. This requires a very delicate amendment of the notion of component stability, which allows us to fill in gaps in the earlier arguments. 2. We also extend the framework to obtain conditional lower bounds for deterministic algorithms and fine-grained lower bounds that depend on the maximum degree Δ. 3. We demonstrate a collection of natural graph problems for which deterministic component-unstable algorithms break the conditional lower bound obtained for component-stable algorithms. This implies that, in the context of deterministic algorithms, component-stable algorithms are conditionally weaker than the component-unstable ones. 4. We also show that the restriction to component-stable algorithms has an impact in the randomized setting. We present a natural problem which can be solved in O(1) rounds by a component-unstable MPC algorithm, but requires Ω(log log* n) rounds for any component-stable algorithm, conditioned on the connectivity conjecture.
Altogether our results imply that component-stability might limit the computational power of the low-space MPC model, at least in certain contexts, paving the way for improved upper bounds that escape the conditional lower bound setting of Ghaffari, Kuhn, and Uitto. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
36. Deterministic K-Identification for Future Communication Networks: The Binary Symmetric Channel Results †.
- Author
-
Salariseddigh, Mohammad Javad, Dabbabi, Ons, Deppe, Christian, and Boche, Holger
- Subjects
TELECOMMUNICATION systems ,HAMMING weight ,INTERNET of things ,SYSTEM identification ,COMMUNICATION models ,DETERMINISTIC algorithms ,MULTICASTING (Computer networks) ,VIDEO coding - Abstract
Numerous applications of the Internet of Things (IoT) feature an event recognition behavior where the established Shannon capacity is not the appropriate central performance measure. Instead, the identification capacity for such systems is considered to be an alternative metric, and has been developed in the literature. In this paper, we develop deterministic K-identification (DKI) for the binary symmetric channel (BSC) with and without a Hamming weight constraint imposed on the codewords. This channel may be of use for IoT in the context of smart system technologies, where sophisticated communication models can be reduced to a BSC for the aim of studying basic information theoretical properties. We derive inner and outer bounds on the DKI capacity of the BSC when the size of the goal message set K may grow in the codeword length n. As a major observation, we find that, for deterministic encoding, assuming that K grows exponentially in n, i.e., K = 2^{nκ}, where κ is the identification goal rate, then the number of messages that can be accurately identified grows exponentially in n, i.e., 2^{nR}, where R is the DKI coding rate. Furthermore, the established inner and outer bound regions reflect the impact of the input constraint (Hamming weight) and the channel statistics, i.e., the cross-over probability. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
37. Hardness estimates of the code equivalence problem in the rank metric.
- Author
-
Reijnders, Krijn, Samardjiska, Simona, and Trimoska, Monika
- Subjects
TWO-dimensional bar codes ,LINEAR operators ,HARDNESS ,DETERMINISTIC algorithms ,ISOMORPHISM (Mathematics) ,POLYNOMIALS - Abstract
In this paper, we analyze the hardness of the Matrix Code Equivalence (MCE) problem for matrix codes endowed with the rank metric, and provide the first algorithms for solving it. We do this by making a connection to another well-known equivalence problem from multivariate cryptography—the Isomorphism of Polynomials (IP). Under mild assumptions, we give tight reductions from MCE to the homogeneous version of the Quadratic Maps Linear Equivalence (QMLE) problem, and vice versa. Furthermore, we present reductions to and from similar problems in the sum-rank metric, showing that MCE is at the core of code equivalence problems. On the practical side, using birthday techniques known for IP, we present two algorithms: a probabilistic algorithm for MCE running in time q^{(2/3)(n+m)} up to a polynomial factor, and a deterministic algorithm for MCE with roots, running in time q^{min{m,n,k}} up to a polynomial factor. Lastly, to confirm these findings, we solve randomly generated instances of MCE using these two algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
38. Deploying IIoT Systems for Long-Term Planning in Underground Mining: A Focus on the Monitoring of Explosive Atmospheres.
- Author
-
Medina, Fabian, Ruiz, Hugo, Espíndola, Jorge, and Avendaño, Eduardo
- Subjects
MINES & mineral resources ,WIRELESS sensor nodes ,COAL mining ,WIRELESS sensor networks ,LOCATION data ,EXPLOSIVES ,DETERMINISTIC algorithms - Abstract
This paper presents a novel methodology for deploying wireless sensor nodes in the Industrial Internet of Things (IIoT) to address the safety and efficiency challenges in underground coal mining. The methodology is intended to support long-term planning on mitigating the risks in occupational health and safety policies. To ensure realistic and accurate deployment, we propose a software tool that generates mine models based on geolocation data or blueprints in image format, allowing precise adaptation to the specific conditions of each mine. Furthermore, the process is based on sensing and communication range values obtained through simulations and on-site experiments. The deployment strategy is articulated in two complementary steps: a deterministic deployment, where nodes are strategically placed according to the structure of the tunnels, followed by a random stage to include additional nodes that ensure optimal coverage and connectivity inside the mine. Different methodologies for deploying sensor networks are compared using coverage density as a performance metric. We analyze coverage and connectivity based on three probability density functions (PDFs) for the random deployment of nodes: uniform, normal, and exponential, evaluating both the degree of coverage (k-coverage) and the degree of connectivity (k-connectivity). The results show that our proposed methodology stands out for its lower density of sensors per square meter, which translates into a reduction in the number of sensors of 20.81% and 23.46% for the uniform and exponential PDFs, respectively, compared to the analyzed methodologies. In this way, it is possible to determine which distribution is suitable to cover the elongated area with the smallest number of nodes, considering the coverage and connectivity requirements, to reduce the deployment cost. 
The uniform PDF minimizes the number of sensors needed by 44.70% in small mines and 46.27% in medium ones compared to the exponential PDF. These findings provide valuable information to optimize node deployment regarding cost and efficiency; the uniform distribution can be a good option depending on sensor prices. The exponential distribution reached the highest values of k-coverage and k-connectivity for small and medium-sized mines; in addition, it has greater robustness and tolerance to faults such as network signal intermittence. This methodology not only improves the collection of critical information for the mining operation but also plays a vital role in reducing the risks to the health and safety of workers by providing a more robust and adaptive monitoring system. The approach can be used to plan IIoT systems based on Wireless Sensor Networks (WSN) for underground mining exploitation, offering a more reliable and adaptable strategy for monitoring and managing complex work environments. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
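The k-coverage comparison in the abstract above can be sketched in a toy form: deploy sensors along a 1D tunnel from a uniform or truncated exponential PDF and measure what fraction of the tunnel is covered by at least k sensors. All parameters (tunnel length, sensing range, node count, exponential rate) are invented for illustration:

```python
import random

def k_coverage(positions, length, sensing_range, k, samples=1001):
    """Fraction of evenly spaced test points along a tunnel of the given
    length that lie within sensing_range of at least k sensors."""
    covered = 0
    for i in range(samples):
        x = length * i / (samples - 1)
        if sum(abs(x - p) <= sensing_range for p in positions) >= k:
            covered += 1
    return covered / samples

def deploy(n, length, pdf="uniform", seed=0):
    """Toy random deployment stage: draw sensor positions from a uniform
    or truncated exponential PDF, mirroring the abstract's comparison of
    placement distributions (a 1D tunnel instead of a full mine model)."""
    rng = random.Random(seed)
    if pdf == "uniform":
        return [rng.uniform(0, length) for _ in range(n)]
    # Exponential, clipped to the tunnel: denser near the entrance.
    return [min(rng.expovariate(4.0 / length), length) for _ in range(n)]

tunnel, r = 100.0, 10.0
for pdf in ("uniform", "exponential"):
    pos = deploy(40, tunnel, pdf)
    print(pdf, k_coverage(pos, tunnel, r, k=1), k_coverage(pos, tunnel, r, k=3))
```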
39. Process Algebraic Approach for Probabilistic Verification of Safety and Security Requirements of Smart IoT (Internet of Things) Systems in Digital Twin.
- Author
-
Song, Junsup, Lee, Sunghyun, Karagiannis, Dimitris, and Lee, Moonkun
- Subjects
DIGITAL twins ,INTERNET of things ,INTERNET safety ,EMERGENCY medical services ,DETERMINISTIC algorithms - Abstract
Process algebra can be considered one of the most practical formal methods for modeling Smart IoT Systems in Digital Twin, since each IoT device in such systems can be considered a process. Further, some of the algebras are applied to predict the behavior of the systems. For example, the PALOMA (Process Algebra for Located Markovian Agents) and PACSR (Probabilistic Algebra of Communicating Shared Resources) process algebras are designed to predict the behavior of IoT systems with probability on choice operations. However, these algebras lack analytical methods to predict the nondeterministic behavior of the systems. Further, there is no control mechanism to handle undesirable nondeterministic behavior of the systems. In order to overcome these limitations, this paper proposes a new process algebra, called dTP-Calculus, which can be used (1) to specify the nondeterministic behavior of the systems with static probability, (2) to verify the safety and security requirements of the nondeterministic behavior against probability requirements, and (3) to control undesirable nondeterministic behavior with dynamic probability. To demonstrate the feasibility and practicality of the approach, the SAVE (Specification, Analysis, Verification, Evaluation) tool has been developed on the ADOxx Meta-Modeling Platform and applied to a SEMS (Smart Emergency Medical Service) example. In addition, a miniature digital twin system for the SEMS example was constructed and applied to the SAVE tool as a proof of concept for Digital Twin. It shows that the approach with dTP-Calculus on the tool can be very efficient and effective for Smart IoT Systems in Digital Twin. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
40. DEVELOPMENT OF FUZZY SEARCH METHOD FOR CREATING AN EFFICIENT INFORMATION SEARCH SYSTEM IN TEXT DATA.
- Author
-
Kleshch, Kyrylo
- Subjects
INFORMATION storage & retrieval systems ,FINITE state machines ,DETERMINISTIC algorithms ,DYNAMIC programming ,FUZZY systems - Abstract
The object of this research is the process of effectively searching for information in a set of textual data. The subject of the research is a fuzzy search method that effectively solves the problem of searching for information in a set of textual data. The paper considers the process of developing a fuzzy search method, which consists of 9 consecutive steps and enables a quick search for matches in a large set of text data. Based on this method, it is proposed to create a fuzzy search system that solves the problem of finding the most relevant documents from a set of such documents. The proposed fuzzy search method combines the advantages of algorithms based on deterministic finite automata and algorithms based on dynamic programming for calculating the Damerau-Levenshtein distance. Such a combination allows the symbol similarity table to be implemented in an optimal way. As part of the work, an approach for creating a symbol similarity table was proposed, and an example of such a table was created for symbols of the English alphabet; the table makes it possible to find the degree of similarity between two symbols in constant time and to convert the current symbol into its basic counterpart. For document filtering, a metric was developed to evaluate the correspondence of text data to a search phrase, which simultaneously takes into account the number of found and not-found characters and the number of found and not-found words. The Damerau-Levenshtein algorithm finds the edit distance between two words, taking into account the following types of errors: substitution, addition, deletion, and transposition of characters. The work proposes a modification of this algorithm that uses the similarity table to estimate the editing distance between two words more accurately.
The developed method makes it possible to create a fuzzy search system that helps find the desired results faster and increases the relevance of the obtained results by sorting them according to the values of the proposed text-data similarity metric. [ABSTRACT FROM AUTHOR]
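The distance this abstract builds on can be sketched as a short dynamic program. The version below is the restricted (optimal string alignment) Damerau-Levenshtein distance, covering substitution, insertion, deletion, and adjacent transposition; the optional `sim` hook is a hypothetical stand-in for the paper's symbol similarity table, turning substitution cost into 1 − similarity.

```python
def damerau_levenshtein(a: str, b: str, sim=None) -> float:
    """Restricted (optimal-string-alignment) Damerau-Levenshtein distance.

    sim is an optional symbol-similarity lookup: sim(x, y) in [0, 1],
    where 1 means identical; substitution cost becomes 1 - sim(x, y).
    """
    if sim is not None:
        sub = lambda x, y: 0.0 if x == y else 1.0 - sim(x, y)
    else:
        sub = lambda x, y: 0.0 if x == y else 1.0
    m, n = len(a), len(b)
    d = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = float(i)
    for j in range(n + 1):
        d[0][j] = float(j)
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion (addition)
                          d[i - 1][j - 1] + sub(a[i - 1], b[j - 1]))
            if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition

    return d[m][n]

print(damerau_levenshtein("ca", "ac"))           # transposition counts as 1.0
print(damerau_levenshtein("kitten", "sitting"))  # classic example: 3.0
```

With a similarity table supplied, visually or phonetically close symbols would contribute a fractional substitution cost, which is the spirit of the modification the abstract describes.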
- Published
- 2024
- Full Text
- View/download PDF
41. Stochastic inversion of time‐lapse electrical resistivity tomography data by means of an adaptive ensemble‐based approach.
- Author
-
Vinciguerra, Alessandro, Aleardi, Mattia, Madsen, Line Meldgaard, Bording, Thue Sylvester, Christiansen, Anders Vest, and Stucchi, Eusebio
- Subjects
- *
ELECTRICAL resistivity , *TOMOGRAPHY , *ELECTRICAL resistance tomography , *DETERMINISTIC algorithms , *PROBABILITY density function , *ERROR functions , *INVERSE problems - Abstract
Inversion of time‐lapse electrical resistivity tomography is an extension of the conventional electrical resistivity tomography inversion that aims to reconstruct resistivity variations in time. This method is widely used in monitoring subsurface processes such as groundwater evolution. The inverse problem is usually solved through deterministic algorithms, which usually guarantee a fast solution convergence. However, the electrical resistivity tomography inverse problem is ill‐posed and non‐linear, and it could exist more than one resistivity model that explains the observed data. This paper explores a Bayesian approach based on data assimilation, the ensemble smoother multiple data assimilation. In particular, we apply an adaptive approach in which the inflation coefficient is chosen based on the error function, that is the ensemble smoother multiple data assimilation restricted step. Our inversion approach aims to invert the data acquired at two different times simultaneously, estimating the resistivity model and its variation. In addition, the Bayesian approach allows for the assessment of the posterior probability density function needed for quantifying the uncertainties associated with the results. To test the method, we first apply the algorithm to synthetic data generated from realistic resistivity models; then, we invert field data from the Pillemark landfill monitoring station (Samsø, Denmark). Inversion results show that the ensemble smoother multiple data assimilation restricted step can correctly detect the resistivity variation both in the synthetic and in the field case, with an affordable computational burden. In addition, assessing the uncertainties allows us to interpret the reconstructed resistivity model correctly. This paper demonstrates the potential of the data assimilation approach in Bayesian time‐lapse electrical resistivity tomography inversion. [ABSTRACT FROM AUTHOR]
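The ensemble smoother with multiple data assimilation mentioned here updates an ensemble of models several times with an inflated data-error covariance, where the reciprocals of the inflation coefficients sum to one. A minimal sketch of one analysis step on a toy linear forward model follows; the model, covariances, ensemble size, and fixed inflation schedule are illustrative assumptions, not the paper's ERT setup or its adaptive restricted-step rule.

```python
import numpy as np

rng = np.random.default_rng(0)

def esmda_step(M, D, d_obs, C_e, alpha):
    """One ES-MDA analysis step.

    M:     (n_m, n_e) ensemble of model parameters
    D:     (n_d, n_e) ensemble of predicted data g(m_j)
    d_obs: (n_d,) observed data
    C_e:   (n_d, n_d) data-error covariance
    alpha: inflation coefficient for this assimilation step
    """
    n_e = M.shape[1]
    dM = M - M.mean(axis=1, keepdims=True)
    dD = D - D.mean(axis=1, keepdims=True)
    C_md = dM @ dD.T / (n_e - 1)          # cross-covariance model/data
    C_dd = dD @ dD.T / (n_e - 1)          # data auto-covariance
    # Perturb observations with inflated noise, then Kalman-like update
    E = rng.multivariate_normal(np.zeros(len(d_obs)), alpha * C_e, n_e).T
    K = C_md @ np.linalg.inv(C_dd + alpha * C_e)
    return M + K @ (d_obs[:, None] + E - D)

# Toy linear forward model g(m) = G m with true parameters [1, 2]
G = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
m_true = np.array([1.0, 2.0])
C_e = 0.01 * np.eye(3)
d_obs = G @ m_true
M = rng.normal(0, 2, (2, 200))            # prior ensemble
for a in (4.0, 4.0, 4.0, 4.0):            # sum(1/alpha) = 1
    M = esmda_step(M, G @ M, d_obs, C_e, a)
print(M.mean(axis=1))                     # posterior mean, close to [1, 2]
```

The spread of the final ensemble is what gives the posterior uncertainty estimate the abstract refers to.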
- Published
- 2024
- Full Text
- View/download PDF
42. Model Identification of E. coli Cultivation Process Applying Hybrid Crow Search Algorithm.
- Author
-
Roeva, Olympia and Zoteva, Dafina
- Subjects
ESCHERICHIA coli ,SEARCH algorithms ,DETERMINISTIC algorithms ,GENETIC algorithms ,QUADRATIC programming ,METAHEURISTIC algorithms - Abstract
Cultivation process (CP) modeling and optimization are ambitious tasks due to the nonlinear nature of the models and interdependent parameters. The identification procedures for such models are challenging. Metaheuristic algorithms exhibit promising performance for such complex problems since a near-optimal solution can be found in an acceptable time. The present research explores a new hybrid metaheuristic algorithm built upon the good exploration of the genetic algorithm (GA) and the exploitation of the crow search algorithm (CSA). The efficiency of the proposed GA-CSA hybrid is studied with the model parameter identification procedure of the E. coli BL21(DE3)pPhyt109 fed-batch cultivation process. The results are compared with those of the pure GA and pure CSA applied to the same problem. A comparison with two deterministic algorithms, i.e., sequential quadratic programming (SQP) and the Quasi-Newton (Q-N) method, is also provided. A more accurate model is obtained by the GA-CSA hybrid with fewer computational resources. Although SQP and Q-N find a solution for a smaller number of function evaluations, the resulting models are not as accurate as the models generated by the three metaheuristic algorithms. The InterCriteria analysis, a mathematical approach to revealing certain relations between given criteria, and a series of statistical tests are employed to prove that there is a statistically significant difference between the results of the three stochastic algorithms. The obtained mathematical models are then successfully verified with a different set of experimental data, in which, again, the closest one is the GA-CSA model. The GA-CSA hybrid proposed in this paper is proven to be successful in the collaborative hybridization of GA and CSA with outstanding performance. [ABSTRACT FROM AUTHOR]
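For context, the crow search component of the hybrid can be sketched in its commonly cited basic form: each crow either follows another crow's memorized best position or flies to a random position, governed by an awareness probability. The toy version below minimizes a sphere function; the flight length, awareness probability, swarm size, and objective are illustrative choices, not the paper's E. coli model identification setup.

```python
import numpy as np

rng = np.random.default_rng(1)

def sphere(x):
    """Toy objective to minimize: sum of squares, optimum at the origin."""
    return float(np.sum(x ** 2))

def csa(obj, dim=2, n_crows=20, iters=200, fl=2.0, ap=0.1,
        lo=-5.0, hi=5.0):
    """Basic crow search algorithm: positions X, memories M of best finds."""
    X = rng.uniform(lo, hi, (n_crows, dim))   # current positions
    M = X.copy()                              # memorized best positions
    fit = np.array([obj(m) for m in M])
    for _ in range(iters):
        for i in range(n_crows):
            j = rng.integers(n_crows)         # crow i picks crow j to follow
            if rng.random() >= ap:            # crow j unaware: follow its memory
                new = X[i] + rng.random() * fl * (M[j] - X[i])
            else:                             # crow j aware: fly randomly
                new = rng.uniform(lo, hi, dim)
            new = np.clip(new, lo, hi)
            X[i] = new
            f = obj(new)
            if f < fit[i]:                    # update memory if improved
                M[i], fit[i] = new, f
    return M[fit.argmin()], float(fit.min())

best_x, best_f = csa(sphere)
print(best_f)  # near 0 for the sphere function
```

In the hybrid the abstract describes, a genetic algorithm would supply the exploration while a CSA-style exploitation phase like this refines candidate parameter sets.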
- Published
- 2024
- Full Text
- View/download PDF
43. Probabilistic state synthesis based on optimal convex approximation.
- Author
-
Akibue, Seiseki, Kato, Go, and Tani, Seiichiro
- Subjects
APPROXIMATION error ,DETERMINISTIC algorithms ,QUANTUM states ,CIRCUIT complexity ,SAMPLING errors - Abstract
When preparing a pure state with a quantum circuit, there is an unavoidable approximation error due to the compilation error in fault-tolerant implementation. A recently proposed approach called probabilistic state synthesis, where the circuit is probabilistically sampled, is able to reduce the approximation error compared to conventional deterministic synthesis. In this paper, we demonstrate that the optimal probabilistic synthesis quadratically reduces the approximation error. Moreover, we show that a deterministic synthesis algorithm can be efficiently converted into a probabilistic one that achieves this quadratic error reduction. We also numerically demonstrate how this conversion reduces the T-count and analytically prove that this conversion halves an information-theoretic lower bound on the circuit size. In order to derive these results, we prove general theorems about the optimal convex approximation of a quantum state. Furthermore, we demonstrate that this theorem can be used to analyze an entanglement measure. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
44. Distributed Data-Driven Learning-Based Optimal Dynamic Resource Allocation for Multi-RIS-Assisted Multi-User Ad-Hoc Network.
- Author
-
Zhang, Yuzhu and Xu, Hao
- Subjects
RESOURCE allocation ,MACHINE learning ,MATHEMATICAL optimization ,GLOBAL optimization ,DETERMINISTIC algorithms ,TELECOMMUNICATION systems ,REINFORCEMENT learning - Abstract
This study investigates the problem of decentralized dynamic resource allocation optimization for ad-hoc network communication with the support of reconfigurable intelligent surfaces (RIS), leveraging a reinforcement learning framework. In the present context of cellular networks, device-to-device (D2D) communication stands out as a promising technique to enhance the spectrum efficiency. Simultaneously, RIS have gained considerable attention due to their ability to enhance the quality of dynamic wireless networks by maximizing the spectrum efficiency without increasing the power consumption. However, prevalent centralized D2D transmission schemes require global information, leading to a significant signaling overhead. Conversely, existing distributed schemes, while avoiding the need for global information, often demand frequent information exchange among D2D users, falling short of achieving global optimization. This paper introduces a framework comprising an outer loop and inner loop. In the outer loop, decentralized dynamic resource allocation optimization has been developed for self-organizing network communication aided by RIS. This is accomplished through the application of a multi-player multi-armed bandit approach, yielding strategies for RIS and resource block selection. Notably, these strategies operate without requiring signal interaction during execution. Meanwhile, in the inner loop, the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm has been adopted for cooperative learning with neural networks (NNs) to obtain optimal transmit power control and RIS phase shift control for multiple users, with a specified RIS and resource block selection policy from the outer loop. Through the utilization of optimization theory, distributed optimal resource allocation can be attained as the outer and inner reinforcement learning algorithms converge over time.
Finally, a series of numerical simulations are presented to validate and illustrate the effectiveness of the proposed scheme. [ABSTRACT FROM AUTHOR]
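The outer-loop idea of learning RIS and resource-block selections as bandit arms can be illustrated with a standard UCB1 learner. This is a generic single-player sketch: the arm count, Bernoulli rewards, and horizon below are hypothetical, and the paper's multi-player formulation is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(7)

class UCB1:
    """UCB1 bandit, e.g. for RIS / resource-block selection (illustrative)."""
    def __init__(self, n_arms):
        self.n = np.zeros(n_arms)      # pull counts per arm
        self.mean = np.zeros(n_arms)   # empirical mean rewards
        self.t = 0

    def select(self):
        self.t += 1
        if (self.n == 0).any():
            return int(np.argmin(self.n))            # try each arm once
        ucb = self.mean + np.sqrt(2 * np.log(self.t) / self.n)
        return int(np.argmax(ucb))                   # optimism in uncertainty

    def update(self, arm, reward):
        self.n[arm] += 1
        self.mean[arm] += (reward - self.mean[arm]) / self.n[arm]

true_rates = [0.2, 0.5, 0.8]                         # arm 2 is best
bandit = UCB1(3)
for _ in range(2000):
    a = bandit.select()
    bandit.update(a, float(rng.random() < true_rates[a]))
print(int(np.argmax(bandit.n)))                      # most-pulled arm
```

Over time the learner concentrates its pulls on the best arm, which is the convergence behavior the framework relies on in its outer loop.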
- Published
- 2024
- Full Text
- View/download PDF
45. User satisfaction-based energy-saving computation offloading in fog computing networks.
- Author
-
Li, Qun, Tang, Bei, Li, Jianxin, and Chen, Siguang
- Subjects
EDGE computing ,ENERGY consumption ,SATISFACTION ,QUALITY of service ,RESOURCE allocation ,DETERMINISTIC algorithms - Abstract
In order to enhance resource allocation in fog computing networks and establish an energy-aware service, this paper proposes a user satisfaction-based energy-saving computation offloading mechanism that jointly optimizes service decision, task offloading ratio, uplink bandwidth resource ratio, and computing resource ratio. Specifically, the proposed mechanism takes user satisfaction as a priority. It constructs a novel satisfaction function that considers the historical energy consumption distribution to capture the user's subjective perception of the service quality. Then, we develop a user satisfaction-based service decision (US-SD) algorithm to select unique service nodes for the users. Furthermore, to minimize the processing energy consumption, a subtask partition and resource allocation-based intelligent computation offloading (SPRA-ICO) algorithm is proposed. In such an algorithm, we design an innovative actor-critic network structure and add noise to the continuous output action to guarantee the randomness of deterministic policy exploration. Meanwhile, the experience replay buffer mechanism and parameter soft update operation are comprehensively employed to reduce the mutual guidance of training samples and improve the function convergence performance. Finally, the simulation results show that compared with other benchmark schemes, the proposed mechanism can realize good convergence speed and user retention rate while effectively mitigating the total energy consumption. [ABSTRACT FROM AUTHOR]
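Two ingredients this abstract names, noise added to a deterministic continuous action to keep exploration random, and soft updates of target parameters, can be sketched generically. The update rate, noise scale, and action bounds below are illustrative values, not the paper's configuration.

```python
import numpy as np

TAU = 0.005  # soft-update rate (illustrative value)

def soft_update(target_params, online_params, tau=TAU):
    """theta_target <- tau * theta_online + (1 - tau) * theta_target."""
    return [(1 - tau) * t + tau * o
            for t, o in zip(target_params, online_params)]

def noisy_action(deterministic_action, sigma=0.1, low=-1.0, high=1.0):
    """Add Gaussian noise to a deterministic policy's continuous output,
    preserving exploration randomness, then clip to the valid range."""
    a = deterministic_action + np.random.normal(
        0, sigma, size=np.shape(deterministic_action))
    return np.clip(a, low, high)

# Soft update drives the target slowly toward the online parameters
target = [np.zeros(3)]
online = [np.ones(3)]
for _ in range(1000):
    target = soft_update(target, online)
print(np.round(target[0], 3))  # close to 1 after many small steps
```

The slow drift of the target network is what stabilizes training in actor-critic schemes of this kind, while the clipped noise keeps the deterministic policy from collapsing onto a single action during exploration.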
- Published
- 2024
- Full Text
- View/download PDF
46. A survey on cloud computing scheduling algorithms.
- Author
-
Malekimajd, Marzieh and Safarpoor-Dehkordi, Ali
- Subjects
DETERMINISTIC algorithms ,ALGORITHMS ,METAHEURISTIC algorithms ,CLOUD computing ,MACHINE learning ,RESOURCE allocation - Abstract
Cloud computing has emerged as one of the hottest topics in technology and has quickly become a widely used information and communication technology model. Performance is a critical component in the cloud environment concerning constraints like economic, time, and hardware issues. Various characteristics and conditions for providing solutions and designing strategies must be dealt with in different situations to perform better. For example, task scheduling and resource allocation are significant challenges in cloud management. Adopting proper techniques in such conditions leads to performance improvement. This paper surveys existing scheduling algorithms concerning the macro design idea. We classify these algorithms into four main categories: deterministic algorithms, metaheuristic algorithms, learning algorithms, and algorithms based on game theory. Each category is discussed by citing appropriate studies, and the MapReduce review is addressed as an example. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
47. Autonomous Drone Electronics Amplified with Pontryagin-Based Optimization.
- Author
-
Xu, Jiahao and Sands, Timothy
- Subjects
ARTIFICIAL intelligence ,SYSTEM identification ,LEAST squares ,MOVING average process ,DETERMINISTIC algorithms ,SYSTEMS engineering - Abstract
In the era of electrification and artificial intelligence, direct current motors are widely utilized with numerous innovative adaptive and learning methods. Traditional methods utilize model-based algebraic techniques with system identification, such as recursive least squares, extended least squares, and autoregressive moving averages. The new method known as deterministic artificial intelligence employs physical-based process dynamics to achieve target trajectory tracking. There are two common autonomous trajectory-generation algorithms: sinusoidal function- and Pontryagin-based generation algorithms. The Pontryagin-based optimal trajectory with deterministic artificial intelligence for DC motors is proposed and its performance compared for the first time in this paper. This paper aims to simulate model following and deterministic artificial intelligence methods using the sinusoidal and Pontryagin methods and to compare the differences in their performance when following the challenging step function slew maneuver. [ABSTRACT FROM AUTHOR]
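The recursive least squares baseline listed among the traditional model-based techniques can be sketched on a toy first-order motor model. The model structure, forgetting factor, and noise level are illustrative assumptions, not the paper's identification setup.

```python
import numpy as np

rng = np.random.default_rng(3)

def rls(phi, y, lam=0.99, delta=1e3):
    """Recursive least squares with forgetting factor lam.

    phi: (T, p) regressor rows; y: (T,) measurements.
    Returns the final parameter estimate theta.
    """
    p = phi.shape[1]
    theta = np.zeros(p)
    P = delta * np.eye(p)                    # large initial covariance
    for t in range(len(y)):
        x = phi[t]
        k = P @ x / (lam + x @ P @ x)        # gain vector
        theta = theta + k * (y[t] - x @ theta)
        P = (P - np.outer(k, x @ P)) / lam
    return theta

# Toy first-order DC-motor-like model: y[t+1] = a*y[t] + b*u[t] + noise
a_true, b_true = 0.9, 0.5
u = rng.normal(size=500)
y = np.zeros(501)
for t in range(500):
    y[t + 1] = a_true * y[t] + b_true * u[t] + 0.01 * rng.normal()

phi = np.column_stack([y[:-1], u])
theta = rls(phi, y[1:])
print(np.round(theta, 2))  # close to [0.9, 0.5]
```

Recovering the coefficients from input-output data is the system-identification step that the deterministic artificial intelligence approach described here replaces with physics-based process dynamics.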
- Published
- 2023
- Full Text
- View/download PDF
48. Auto Sizing of CANDU Nuclear Reactor Fuel Channel Flaws from UT Scans.
- Author
-
Hammad, Issam, Poloni, Matthew, Isherwood, Andrew, and Simpson, Ryan
- Subjects
NUCLEAR fuels ,CANDU reactors ,NUCLEAR reactors ,ULTRASONIC testing ,NUCLEAR power plants ,DETERMINISTIC algorithms - Abstract
The inspection of nuclear power plants is an essential process that occurs during plant outages. During this process, various systems are inspected, including the reactor's fuel channels, to ensure that they are safe and reliable for the plant's operation. The inspection of Canada Deuterium Uranium (CANDU®) reactor pressure tubes, which are the core component of the fuel channels and house the reactor fuel bundles, is performed using Ultrasonic Testing (UT). Based on the current process that is followed by Canadian nuclear operators, the UT scans are manually examined by analysts to locate, measure, and characterize pressure tube flaws. This paper proposes solutions for the auto-detection and sizing of pressure tube flaws using two deterministic algorithms: the first uses segmented linear regression, while the second uses the average time of flight (ToF) within ±σ of µ. When compared against a manual analysis stream, the linear regression algorithm and the average ToF achieved an average depth difference of 0.0180 mm and 0.0206 mm, respectively. These results are very close to the depth difference of 0.0156 mm when comparing two manual streams. Therefore, the proposed algorithms can be adopted in production, which can lead to significant cost savings in terms of time and labor. [ABSTRACT FROM AUTHOR]
- Published
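The second algorithm, averaging the time of flight within ±σ of µ, amounts to a simple outlier-rejecting mean. A minimal sketch follows; the readings, units, and wave-speed value are hypothetical illustrations, not plant-calibrated data.

```python
import numpy as np

def avg_tof_depth(tof, wave_speed=5.9):
    """Estimate flaw depth from UT time-of-flight samples by averaging only
    the samples within one standard deviation of the mean.

    tof: time-of-flight readings (microseconds, illustrative units)
    wave_speed: assumed sound speed in mm/us (hypothetical value)
    """
    tof = np.asarray(tof, dtype=float)
    mu, sigma = tof.mean(), tof.std()
    inliers = tof[np.abs(tof - mu) <= sigma]   # reject outlier echoes
    return inliers.mean() * wave_speed / 2     # round trip -> one-way depth

readings = [1.00, 1.02, 0.98, 1.01, 1.60]      # last value is an outlier echo
print(round(avg_tof_depth(readings), 3))
```

The ±σ window discards stray echoes before averaging, which is why the method stays close to the manual analysis despite noisy scans.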
- 2023
- Full Text
- View/download PDF
49. Shallow buried improvised explosive device detection via convolutional neural networks.
- Author
-
Colreavy-Donnelly, Simon, Caraffini, Fabio, Kuhn, Stefan, Gongora, Mario, Florez-Lozano, Johana, and Parra, Carlos
- Subjects
CONVOLUTIONAL neural networks ,IMPROVISED explosive devices ,LAND mines ,DETERMINISTIC algorithms ,SENSOR arrays - Abstract
The issue of detecting improvised explosive devices, henceforth IEDs, in rural or built-up urban environments is a persistent and serious concern for governments in the developing world. In many cases, such devices are plastic, or varied metallic objects containing rudimentary explosives, which are not visible to the naked eye and are difficult to detect autonomously. The most effective strategy for detecting land mines also happens to be the most dangerous. This paper intends to leverage the use of a Convolutional Neural Network (CNN) to aid in the discovery of such IEDs. As part of a related project, an autonomous sensor array was used to detect the devices in terrains too hazardous for a human to survey. This paper presents a CNN and its training methodology, suitable to make use of the sensor system. This convolutional neural network can accurately distinguish between a potential IED and surrounding undergrowth and natural features of the environment in real-time. The training methodology enabled the CNN to successfully recognise the IEDs with an accuracy of 98.7%, in well-lit conditions. The results are evaluated against other convolutional neural systems as well as against a deterministic algorithm, showing that the proposed CNN outperforms its competitors including the deterministic method. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
50. Advanced stochastic approaches for option pricing based on Sobol sequence.
- Author
-
Todorov, V., Dimov, I., Georgieva, R., Dimitrov, Yu., Apostolov, S., and Stoenchev, M.
- Subjects
PRICES ,MULTIDIMENSIONAL databases ,IMAGE encryption ,DETERMINISTIC algorithms ,FINANCIAL engineering ,COMPUTATIONAL complexity ,FINANCIAL markets - Abstract
Recently, Monte Carlo and quasi-Monte Carlo approaches have become very attractive and necessary computational tools in finance. The field of computational finance is becoming more complicated, with an increasing number of applications. The pricing of options is very important in financial markets today and especially difficult when the dimension of the problem grows. Monte Carlo and quasi-Monte Carlo methods are appropriate for solving multidimensional problems, since their computational complexity increases polynomially, but not exponentially, with the dimensionality. A comprehensive experimental study based on scrambling of the Sobol sequence is applied for the first time to evaluate European-style options. The Sobol scrambling method is not only one of the best available algorithms for high-dimensional integrals but also one of the few feasible methods, because, as this work shows, deterministic algorithms need a huge amount of time to evaluate the multidimensional integral. The numerical tests show that the method is efficient for multidimensional integration and especially for computing multidimensional integrals of very high dimension. [ABSTRACT FROM AUTHOR]
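Scrambled Sobol integration of the kind this abstract studies is available in SciPy's `scipy.stats.qmc` module. The sketch below estimates a multidimensional integral with a known closed form rather than an option payoff; the dimension, sample count, and integrand are illustrative choices.

```python
import numpy as np
from scipy.stats import qmc

d = 5                                            # dimension of the integral
sampler = qmc.Sobol(d=d, scramble=True, seed=0)  # scrambled Sobol sequence
x = sampler.random_base2(m=12)                   # 2**12 = 4096 points in [0,1)^d

# Test integrand with a known value: integral over [0,1]^d of
# prod_i exp(x_i) dx equals (e - 1)^d
f = np.exp(x).prod(axis=1)
estimate = f.mean()
exact = (np.e - 1) ** d
print(estimate, exact)
```

For an option-pricing application, the integrand would be the discounted payoff mapped from the unit cube to the model's risk factors; the low-discrepancy points are what give quasi-Monte Carlo its faster convergence over plain Monte Carlo on smooth integrands.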
- Published
- 2022
- Full Text
- View/download PDF