4,789 results for "probabilistic method"
Search Results
2. The Threshold of Existence of δ-Temporal Cliques in Random Simple Temporal Graphs
- Author
-
Mertzios, George B., Nikoletseas, Sotiris, Raptopoulos, Christoforos, Spirakis, Paul G., Bramas, Quentin, editor, Casteigts, Arnaud, editor, and Meeks, Kitty, editor
- Published
- 2025
- Full Text
- View/download PDF
3. Application of Main Controlling Factors for Quantitative Evaluation of a Favorable Carbonate Oil- and Gas-Bearing Area in the Pre-exploration Stage: Lianglitage Formation in the Central Uplift Belt of the Tarim Basin.
- Author
-
Li, Bin, Ran, Junshuai, Tang, Tao, Deng, Taiyu, Yang, Suju, and Lv, Haitao
- Subjects
GEOLOGICAL modeling, EVALUATION methodology, MODEL validation, QUANTITATIVE research, DATA modeling - Abstract
The evaluation of oil- and gas-bearing areas (OGBAs) during the pre-exploration stage has always posed challenges due to the lack of an effective geological evaluation model and validation data. This paper introduces a novel quantitative evaluation method based on the vectorization of key geological factors related to hydrocarbon accumulation. In this study, we focused on the Lianglitage Formation in the Central Uplift Belt and aimed to evaluate the application of the proposed method to the OGBA in the Tarim Basin. First, the reservoir-forming parameters were quantified based on geological analysis and expert experience. Second, the weights of the main parameters were determined using a combination of the gray correlation method and expert knowledge. Finally, the OGBA was evaluated using a multifactor fusion method. The comprehensive evaluation results indicate that the platform margin in the northeastern part of the Katake Uplift shows promise for exploration, while the southern region shows good potential for future exploration. This study emphasizes the significance of selecting key factors and vectorizing evaluation parameter mapping for accurate and quantitative evaluation of an OGBA. The results provide a valuable foundation for evaluating the OGBAs in the Lianglitage Formation within the Tarim Basin and offer a useful reference for OGBAs in similar regions during the pre-exploration stage. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
4. COVERING PERFECT HASH FAMILIES AND COVERING ARRAYS OF HIGHER INDEX.
- Author
-
COLBOURN, CHARLES J.
- Subjects
FINITE fields, FAMILIES - Abstract
By exploiting symmetries of finite fields, covering perfect hash families provide a succinct representation for covering arrays of index one. For certain parameters, this connection has led to both the best current asymptotic existence results and the best known efficient construction algorithms for covering arrays. The connection generalizes in a straightforward manner to arrays in which every t-way interaction is covered λ > 1 times, i.e., to covering arrays of index more than one. Using this framework, we focus on easily computed, explicit upper bounds on numbers of rows for various parameters with higher index. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
5. A simplified vine copula-based probabilistic method for quantifying multi-dimensional ecological niches and niche overlap: take a three-dimensional case as an example.
- Author
-
Zhou, Qi and Huang, Shaoqian
- Subjects
ECOLOGICAL niche, PROBLEM solving, CLIMBING plants, FAMILIES - Abstract
For quantifying m-dimensional (m ≥ 3) niche regions and niche overlaps using a copula-based approach, commonly used copulas, including Archimedean and elliptical copula families, are unsatisfactory alternatives in characterizing a complex dependence structure among multiple variables, especially when bi-variate copulas characterizing dependency structures of two-dimensional sub-variables differ. To solve the problem, we improve the copula-based niche space modeling approach using simplified vine copulas, a powerful tool containing various bi-variate dependence structures in one multivariate copula. Using four simulated data sets, we then check the performance of simplified vine copula approximation when the simplifying assumption is invalid. Finally, we apply the improved copula-based approach to quantifying a three-dimensional niche space of a real case of Swanson et al. (Ecology 96(2):318–324, 2015. https://doi.org/10.1890/14-0235.1) and discover that among various simplified vine and other flexible multi-dimensional copulas, non-parametric simplified vine copula approximation performs best in fitting the data set. In the discussion, to analyze differences in calculating niche overlaps caused by using different copulas, we compare non-parametric simplified vine copula approximation with non-parametric and parametric simplified vine copula approximation, elliptical copula, Hierarchical Archimedean copula estimation, and empirical beta copula and give some comments on the results. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
6. A probabilistic approach to chronic effects assessments for listed species in a vernal pool case study.
- Author
-
Oliver, Leah, Sinnathamby, Sumathy, Purucker, Steven, and Raimondo, Sandy
- Subjects
ECOLOGICAL risk assessment, VERNAL pools, ENDANGERED species, ENVIRONMENTAL exposure, SPECIES pools, PESTICIDES - Abstract
Ecological risk assessments for potential pesticide impacts on species listed as threatened or endangered must ensure that decisions to grant registration or establish water quality standards will not jeopardize species or their critical habitats. Pesticides are designed to affect pest species via physiological pathways that may be shared by some nontarget species for which toxicity data are usually unavailable, creating a need for robust methods to estimate acute and chronic toxicity with minimal data. We used a unique probabilistic approach to estimate the risk of chronic effects of two organophosphate (OP) pesticides on the vernal pool fairy shrimp Branchinecta lynchi. Acute toxicity estimates were derived from Monte Carlo (MC) sampling of acute toxicity distributions developed from interspecies relationships using surrogate species. Within each MC draw, acute values were divided by an acute-to-chronic ratio (ACR) sampled from a distribution of ACRs for OP pesticides and invertebrates, producing a distribution of chronic effects concentrations. The estimated exposure concentrations (EECs) were sampled from distributions representing different environmental conditions. Risk was characterized using probability distributions of acute toxicity, ACRs, and EECs in a fully probabilistic analysis, as well as partial probabilistic variations in which only some variables were drawn from distributions and the rest were treated deterministically. A deterministic risk quotient (RQ) was compared with the results of the probabilistic methods. Risk varied across exposure scenarios and with the number of variables handled probabilistically, increasing as more variables were drawn from distributions. The magnitude of RQs was not correlated with the probability that EECs would exceed chronic thresholds, and the comparison demonstrates the limited interpretability of RQs. Our novel probabilistic approach to estimating chronic risk with minimal data incorporates uncertainty underlying both exposure and effects assessments for listed species. Integr Environ Assess Manag 2024;20:1654–1666. Published 2024. This article is a U.S. Government work and is in the public domain in the USA. Key Points:
- Chronic risk estimates are important for ecological risk assessments (ERAs), and we present a novel approach to evaluating chronic risk using minimal data, incorporating the distributions of effects and environmental exposure concentrations.
- Ecological risk assessment for pesticides potentially affecting a listed vernal pool fairy shrimp species is used as a case study to demonstrate our approach, which is applicable to ERAs in general.
- Probabilistic approaches incorporate real-world uncertainty into the effects and exposure assessments for ERAs required by different regulatory authorities.
- Protection of listed species from jeopardy under the Endangered Species Act will benefit from the application of probabilistic approaches to ERAs that consider variability and uncertainty in effects and environmental exposure concentrations. [ABSTRACT FROM AUTHOR] A minimal Monte Carlo sketch of the acute-to-chronic sampling chain follows this record.
- Published
- 2024
- Full Text
- View/download PDF
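The abstract's core computation chains three sampled distributions: acute toxicity, an acute-to-chronic ratio (ACR), and exposure concentrations (EECs). The sketch below reproduces that chain with NumPy; all distribution parameters are illustrative placeholders, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000  # Monte Carlo draws

# Hypothetical lognormal parameters; the paper derives these from
# interspecies relationships with surrogate species.
acute_lc50 = rng.lognormal(np.log(5.0), 0.8, N)   # acute toxicity, ug/L
acr = rng.lognormal(np.log(10.0), 0.5, N)         # acute-to-chronic ratio
eec = rng.lognormal(np.log(0.8), 1.0, N)          # exposure concentration, ug/L

chronic = acute_lc50 / acr          # distribution of chronic effect concentrations
p_exceed = np.mean(eec > chronic)   # probability that exposure exceeds the threshold

print(f"P(EEC > chronic effect concentration) = {p_exceed:.3f}")
```

The same loop with one or two of the three distributions replaced by point values reproduces the paper's "partial probabilistic" variations.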
7. Influence of input uncertainty on the 1-D hygrothermal simulation of composite walls in China
- Author
-
Gan, Weinan, Li, Xiaolong, Fang, Jinzhong, and Feng, Chi
- Published
- 2025
- Full Text
- View/download PDF
8. Coloring lines and Delaunay graphs with respect to boxes.
- Author
-
Tomon, István
- Subjects
INTERSECTION graph theory, CHROMATIC polynomial - Abstract
The goal of this paper is to show the existence (using probabilistic tools) of configurations of lines, boxes, and points with certain interesting combinatorial properties. (i) First, we construct a family of $n$ lines in $\mathbb{R}^3$ whose intersection graph is triangle-free of chromatic number $\Omega(n^{1/15})$. This improves the previously best known bound $\Omega(\log\log n)$ by Norin, and is also the first construction of a triangle-free intersection graph of simple geometric objects with polynomial chromatic number. (ii) Second, we construct a set of $n$ points in $\mathbb{R}^d$ whose Delaunay graph with respect to axis-parallel boxes has independence number at most $n \cdot (\log n)^{-(d-1)/2+o(1)}$. This extends the planar case considered by Chen, Pach, Szegedy, and Tardos. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
9. A novel hybrid model for bridge dynamic early warning using LSTM-EM-GMM
- Author
-
Shuangjiang Li, Jingzhou Xin, Yan Jiang, Changxi Yang, Xiaochen Wang, and Bingchuan Ran
- Subjects
Bridge, Structural health monitoring, Probabilistic method, Dynamic early warning, Uncertain factors, Bridge engineering, TG1-470 - Abstract
Early warning for existing bridges is currently dominated by deterministic methods. However, these methods struggle to express uncertain factors (such as wind load, temperature load, and other variables), which directly impacts the timeliness and accuracy of bridge early warning. This study develops an innovative method for bridge dynamic early warning with high versatility and accuracy, employing a long short-term memory network (LSTM), expectation maximization (EM), and a Gaussian mixture model (GMM). Firstly, the LSTM model predicts the measured monitoring data (such as deflection, strain, and cable force) in real time. Next, the number of clusters for the EM-GMM model is determined using the Calinski-Harabasz (CH) index, which accounts for the internal cohesion of the clustering and ensures accurate and reliable clustering results. Then, the EM-GMM model clusters the random influence errors together with the predicted values, yielding a probabilistic prediction for each corresponding random influence error. On this basis, a dynamic early warning interval at the 95% confidence level is constructed, facilitating early warning and decision-making for potential structural abnormalities. Finally, the accuracy and practicability of the method are verified by comparison with engineering applications and existing specifications. The results demonstrate that the probabilistic early warning method, by considering the uncertain factors of the complex service environment, can accurately achieve dynamic early warning for bridges. A minimal sketch of the GMM-based warning interval follows this record.
- Published
- 2024
- Full Text
- View/download PDF
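The interval-construction step described above (CH index to pick the cluster count, EM-fitted GMM on prediction errors, 95% interval) can be sketched with scikit-learn. The residuals here are synthetic stand-ins for the difference between LSTM predictions and measured responses.

```python
import numpy as np
from sklearn.metrics import calinski_harabasz_score
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic stand-in for LSTM prediction errors on monitoring data.
residuals = np.concatenate([rng.normal(0.0, 0.5, 600),
                            rng.normal(1.2, 0.3, 400)]).reshape(-1, 1)

# Choose the number of Gaussian components with the Calinski-Harabasz index.
best_k, best_ch = 2, -np.inf
for k in range(2, 6):
    labels = GaussianMixture(n_components=k, random_state=0).fit_predict(residuals)
    ch = calinski_harabasz_score(residuals, labels)
    if ch > best_ch:
        best_k, best_ch = k, ch

# EM fitting happens inside GaussianMixture.fit.
gmm = GaussianMixture(n_components=best_k, random_state=0).fit(residuals)

# 95% dynamic warning interval around a (hypothetical) next LSTM prediction.
err_samples = gmm.sample(100_000)[0].ravel()
lo, hi = np.percentile(err_samples, [2.5, 97.5])
y_pred = 3.0  # placeholder prediction for the next time step
print(f"warning interval: [{y_pred + lo:.2f}, {y_pred + hi:.2f}]")
```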
11. Reliability analysis and risk assessment of a landfill slope failure in spatially variable municipal solid waste.
- Author
-
Ghasemian, A., Karimpour-Fard, M., and Nadi, B.
- Abstract
Quantitative assessment of landfill slope failure risk provides valuable information for slope design and risk reduction. This study presents a reliability-based analysis in which slope failure risk is assessed using the stochastic finite difference method. The method incorporates the spatial variability of municipal solid waste (MSW) properties through anisotropic autocorrelation structures and evaluates the consequence associated with each failure separately. It was evaluated using data from the Saravan landfill (Rasht, Iran) together with a parametric analysis. Several Monte Carlo simulations were conducted to represent the heterogeneity of the MSW, treating its shear strength and unit weight as random variables. Finally, the safety factor, probability of failure, and risk were assessed for different analysis cases. Deterministic analyses were also performed for all modes using mean values of the MSW properties. The results show that the spatial variability of MSW parameters and the autocorrelation structures significantly affect the safety factor, probability of failure, and risk. Comparison of the results also revealed that, for the given slope, the safety factors from the deterministic analyses are overestimated relative to those of the probabilistic analyses, whereas risk shows the opposite behavior. [ABSTRACT FROM AUTHOR] A minimal Monte Carlo sketch follows this record.
- Published
- 2024
- Full Text
- View/download PDF
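A stripped-down version of the Monte Carlo step: random strength and unit-weight draws feed a factor-of-safety model, and the failure fraction estimates the probability of failure. An infinite-slope expression stands in for the paper's stochastic finite difference analysis, and all statistics are placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 50_000

# Hypothetical MSW property statistics (the study uses spatially variable
# random fields with anisotropic autocorrelation; independent draws are a
# deliberate simplification here).
cohesion = rng.lognormal(np.log(15.0), 0.4, N)   # kPa
phi = np.radians(rng.normal(25.0, 4.0, N))       # friction angle, rad
gamma = rng.normal(10.5, 1.0, N)                 # unit weight, kN/m^3

H, beta = 20.0, np.radians(18.0)                 # slope height and angle

# Infinite-slope factor of safety as a stand-in for the full model.
fs = (cohesion + gamma * H * np.cos(beta)**2 * np.tan(phi)) / \
     (gamma * H * np.sin(beta) * np.cos(beta))

print(f"mean FS = {fs.mean():.2f}, P(failure) = {np.mean(fs < 1.0):.4f}")
```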
12. Fatigue strength evaluation of case‐hardened components combining heat‐treatment simulation and probabilistic approaches.
- Author
-
Iss, Valérian, Meis, Jean‐André, Rajaei, Ali, Hallstedt, Bengt, and Broeckmann, Christoph
- Subjects
FATIGUE limit, STEEL fatigue, RESIDUAL stresses, HEAT treatment of steel, NOTCH effect, HEAT treatment, HARDNESS, FATIGUE testing machines - Abstract
In order to raise the hardness and strength of the surface layer of mechanical components and to induce favorable residual compressive stresses, case-hardening procedures have become established in the heat treatment of steel. In this work, a calculation concept is developed for the fatigue strength of components case-hardened through carburizing heat treatment. The residual stresses and the load stresses in complex-shaped, carburized components are determined using a finite element (FE) model. The fatigue limit of the components is derived using probabilistic methods, taking into account hardness gradients, residual stresses, and non-metallic inclusions. The model is validated with available axial bending fatigue test data and then used to predict the rotating bending fatigue limit of samples with various geometries and heat-treatment conditions. This work demonstrates the capability of combining probabilistic and FE-based modeling to represent complex interactions between the variables that affect the fatigue of heat-treated components, such as steel cleanliness, notch shape, case-hardening depth, and loading conditions. Highlights:
- Combined FE-based and probabilistic methods can predict fatigue strength accurately.
- The interplay of heat-treatment output, geometry, load, and material is considered.
- The crack initiation position is shifted by increased case-hardening depth.
- The fatigue strength reduction due to defects in steel depends on load concentration. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
13. Predicting Consumer Behavior Based on Big Data of User-Generated Online Content in Retail Marketing.
- Author
-
Karpushkin, Gleb
- Subjects
CONSUMER behavior, MARKETING, BIG data, ELECTRONIC commerce, USER-generated content, RETAIL industry - Abstract
The purpose of this study is to develop a prediction approach for consumer behavior using big data from user-generated content in retail marketing. Prediction rests on two key components. The first is accurate big data analytics of user-generated material, extracting the essential information and monitoring changes in user behavior (posting and purchasing) as value proposition variables shift. The second is the involvement of specialists and experienced marketers in sociological surveys that use large samples of respondents and the probabilistic method. Using a stratification approach, the value proposition structure was broken down into ten components that influence the rhetoric of user content. The competitive advantages and business objectives of stores, in turn, made clear the essential categories of user content. The study focuses on Russia's most widely used digital trading platforms. Using the sociological technique, the study developed an approach to expert prediction of consumer behavior following changes in content quality and highlighted efficient digital tools for doing so. The methodology for expert forecasting of consumer behavior amid changes in the quality of user content was developed using empirical, probabilistic, and sociological methods. The competitive advantage or goal of an online store was shown to be the most important element in altering consumer behavior. The study's scientific contribution is the proposed expert prediction methodology based on a likelihood matrix of decline in customer conversion rates due to user content degradation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
14. A lower bound for set‐coloring Ramsey numbers.
- Author
-
Aragão, Lucas, Collares, Maurício, Marciano, João Pedro, Martins, Taísa, and Morris, Robert
- Subjects
RAMSEY numbers, COMPLETE graphs, RAMSEY theory, RANDOM graphs - Abstract
The set-coloring Ramsey number $R_{r,s}(k)$ is defined to be the minimum $n$ such that if each edge of the complete graph $K_n$ is assigned a set of $s$ colors from $\{1, \dots, r\}$, then one of the colors contains a monochromatic clique of size $k$. The case $s = 1$ is the usual $r$-color Ramsey number, and the case $s = r - 1$ was studied by Erdős, Hajnal and Rado in 1965, and by Erdős and Szemerédi in 1972. The first significant results for general $s$ were obtained only recently, by Conlon, Fox, He, Mubayi, Suk and Verstraëte, who showed that $R_{r,s}(k) = 2^{\Theta(kr)}$ if $s/r$ is bounded away from 0 and 1. In the range $s = r - o(r)$, however, their upper and lower bounds diverge significantly. In this note we introduce a new (random) coloring, and use it to determine $R_{r,s}(k)$ up to polylogarithmic factors in the exponent for essentially all $r$, $s$, and $k$. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
15. A special case of Vu's conjecture: colouring nearly disjoint graphs of bounded maximum degree.
- Author
-
Kelly, Tom, Kühn, Daniela, and Osthus, Deryk
- Subjects
LOGICAL prediction, COLOR, HYPERGRAPHS, APPROXIMATION algorithms - Abstract
A collection of graphs is nearly disjoint if every pair of them intersects in at most one vertex. We prove that if $G_1, \dots, G_m$ are nearly disjoint graphs of maximum degree at most $D$ , then the following holds. For every fixed $C$ , if each vertex $v \in \bigcup _{i=1}^m V(G_i)$ is contained in at most $C$ of the graphs $G_1, \dots, G_m$ , then the (list) chromatic number of $\bigcup _{i=1}^m G_i$ is at most $D + o(D)$. This result confirms a special case of a conjecture of Vu and generalizes Kahn's bound on the list chromatic index of linear uniform hypergraphs of bounded maximum degree. In fact, this result holds for the correspondence (or DP) chromatic number and thus implies a recent result of Molloy and Postle, and we derive this result from a more general list colouring result in the setting of 'colour degrees' that also implies a result of Reed and Sudakov. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
16. Method for Determining Design Heating Load of Rural Residential Buildings Considering Indoor Temperature Uncertainty.
- Author
-
Meng, Haiyan, Tian, Zhe, Wu, Xia, Lu, Yakai, and Mai, Haoran
- Subjects
HEATING load, RESIDENTIAL heating systems, DWELLINGS, DISTRIBUTION (Probability theory), DEBYE temperatures - Abstract
In rural locations, the application of clean heating technologies requires an appropriate design heating load. However, the variation characteristics of indoor temperatures in rural residential buildings are rarely taken into account by traditional techniques for calculating the design heating load, which may result in over- or under-design. Consequently, a new method that accounts for indoor temperature uncertainty was presented to calculate the design heating load for rural residential buildings. First, an indoor temperature stochastic model was established for the "part-time, part-space" heating mode of rural residential buildings to generate multiple indoor temperature scenarios; on this basis, heating loads under these scenarios were simulated and their probability distributions compiled; lastly, the design heating load was selected from the load probability distribution at a predetermined confidence level. Comparison of the new and traditional methods showed that the new method offers a more thorough guide to determining the design load for heating systems in rural residential buildings, while the traditional method's result might not satisfy the reliability requirements. [ABSTRACT FROM AUTHOR] A minimal sketch of the confidence-level selection follows this record.
- Published
- 2024
- Full Text
- View/download PDF
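The selection step is a quantile of the simulated load distribution at the chosen confidence level. In this sketch, a trivial steady-state load model and all parameter values are assumptions standing in for the paper's building simulation and stochastic indoor-temperature model.

```python
import numpy as np

rng = np.random.default_rng(7)
n_scenarios = 10_000

# Stand-in for the indoor temperature stochastic model ("part-time,
# part-space" heating): one sampled temperature per scenario.
t_indoor = rng.normal(14.0, 2.0, n_scenarios)        # degC
t_outdoor_design = -9.0                              # degC, design condition
UA = 180.0                                           # W/K, hypothetical heat-loss coefficient

loads = UA * (t_indoor - t_outdoor_design) / 1000.0  # kW, simulated heating loads

confidence = 0.95
design_load = np.quantile(loads, confidence)         # load met in 95% of scenarios
print(f"design heating load at {confidence:.0%} confidence: {design_load:.1f} kW")
```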
17. Stochastic modelling of material variability in structural dynamics: A threefold comparison of Monte Carlo, polynomial chaos, and random sampling techniques
- Author
-
Rakesh Kumar
- Subjects
Kosambi-Karhunen-Loève expansion, Monte Carlo, polynomial chaos, probabilistic method, stochastic, vibration, Science - Abstract
This article investigates the influence of a random elastic modulus on beam eigenfrequencies using multiple simulation techniques: Monte Carlo simulation (employing Cholesky decomposition (MCS-CD) and the Kosambi-Karhunen-Loève expansion (MCS-KKL)), polynomial chaos expansion (PCE), and a proposed random sampling method (RSM). Anomalies in the Monte Carlo simulations, where a normally distributed elastic modulus led to negative values and imaginary eigenfrequencies, were effectively addressed by adopting a log-normal distribution. Comparative analyses focused on the covariance variation of the first three eigenfrequencies with the correlation length and standard deviation of the random field, highlighting nuanced differences between normal and log-normal distributions. PCE exhibited distinct responses, showing variations in covariance with different distributions. The study culminates in eigenfrequency estimation using the proposed RSM, wherein the beam is discretised into n elements with randomly assigned elastic moduli. The mean and variance of the eigenfrequencies are compared with existing methods, offering an alternative route to similar outcomes. These comparative studies provide a comprehensive understanding of how different statistical treatments and simulation methods affect the reliability and accuracy of eigenfrequency predictions in beams with random elastic properties, contributing valuable insights for structural analysis and design under uncertainty. A minimal sketch of the random sampling idea follows this record.
- Published
- 2024
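A compact version of the random sampling idea: assign a log-normal elastic modulus to each element of a discretised member and solve the generalized eigenproblem per draw. An axial fixed-free rod replaces the paper's beam to keep the assembly short; all parameters are illustrative.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(3)
n = 50                        # elements in a discretised fixed-free rod
L, A, rho = 1.0, 1e-4, 7800.0
le = L / n

mean_E, cov = 2.1e11, 0.15    # hypothetical mean modulus and coefficient of variation
# Log-normal sampling avoids the negative moduli (and imaginary
# eigenfrequencies) that normally distributed draws can produce.
sig = np.sqrt(np.log(1.0 + cov**2))
mu = np.log(mean_E) - 0.5 * sig**2

def eigenfrequencies(E):
    """Assemble axial stiffness and lumped mass, solve K x = w^2 M x."""
    k = E * A / le                               # element stiffnesses
    K = (np.diag(np.append(k[:-1] + k[1:], k[-1]))
         - np.diag(k[1:], 1) - np.diag(k[1:], -1))
    m = rho * A * le * np.ones(n)
    m[-1] *= 0.5                                 # half mass at the free tip
    w2 = eigh(K, np.diag(m), eigvals_only=True)
    return np.sqrt(w2[:3]) / (2 * np.pi)         # first three frequencies, Hz

samples = np.array([eigenfrequencies(rng.lognormal(mu, sig, n)) for _ in range(500)])
print("mean f1..f3 [Hz]:", samples.mean(axis=0).round(1))
print("c.o.v. f1..f3:   ", (samples.std(axis=0) / samples.mean(axis=0)).round(3))
```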
18. Bounds on the higher degree Erdős–Ginzburg–Ziv constants over $\mathbb{F}_q^n$.
- Author
-
Costa, Simone and Della Fiore, Stefano
- Abstract
The classical Erdős–Ginzburg–Ziv constant of a group $G$ denotes the smallest positive integer $\ell$ such that any sequence $S$ of length at least $\ell$ contains a zero-sum subsequence of length $\exp(G)$. In the recent paper (Integers 22: Paper No. A102, 17 pp., 2022), Caro and Schmitt generalized this concept, using the $m$-th degree symmetric polynomial $e_m(S)$ instead of the sum of the elements of $S$ and considering subsequences of a given length $t$. In particular, they defined the higher degree Erdős–Ginzburg–Ziv constants $EGZ(t, R, m)$ of a finite commutative ring $R$ and presented several lower and upper bounds on these constants. This paper aims to provide lower and upper bounds for $EGZ(t, R, m)$ in the case $R = \mathbb{F}_q^n$. The lower bounds presented here have been obtained, respectively, using the Lovász local lemma and the expurgation method, and, for sufficiently large $n$, they beat the lower bound provided by Caro and Schmitt for the same kind of rings. Finally, we prove closed-form upper bounds derived from the Ellenberg–Gijswijt and Sauermann results for the cap-set problem assuming that $q = p^k$, $t = p$, and $m = p - 1$. Moreover, using the slice rank method, we derive a convex optimization problem that provides the best bounds for $q = 3^k$, $t = 3$, $m = 2$, and $k = 2, 3, 4, 5$. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
19. Seismic hazard analysis of Pardis New Town using the probabilistic method
- Author
-
Ali Beitollahi, Negar Soodmand, Fatemeh Dehghan Farouji, and Ghazaleh Razaghian
- Subjects
TOURISM, EARTHQUAKE hazard analysis, MOUNTAINS, GEOTECHNICAL engineering, WATERSHEDS - Abstract
Pardis New Town, located 17 km from Tehran, is bounded by the Alborz mountain range to the north, the Jajrud River to the west, the villages of Karasht, Siahsang, and Taherabad to the south, and Bumehen to the east. It lies in the catchment area of the Jajrud River and covers about 3600 hectares. It comprises 9 phases, 6 of which are residential, while the other three serve research, industrial, and tourism purposes. Since Pardis New Town is located in Tehran province, close to the Mosha and North Tehran faults, both of which have produced historical earthquakes, this study investigates its seismicity and performs a probabilistic seismic hazard analysis. After estimating the parameters of each seismic source, namely β, λ, and the expected maximum magnitude, the peak ground acceleration from the sources was obtained for a return period of 475 years. For an earthquake with a return period of 475 years, the maximum ground acceleration at bedrock was 0.34 g; when the soil effect is taken into account using existing boreholes that characterize the geotechnical properties of the site, a maximum acceleration of 0.37 g is obtained in the eastern area of Pardis. [ABSTRACT FROM AUTHOR] A minimal Monte Carlo hazard-curve sketch follows this record.
- Published
- 2023
- Full Text
- View/download PDF
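The probabilistic hazard computation the abstract summarizes (source parameters β and λ, maximum magnitude, 475-year PGA) can be mimicked with a toy Monte Carlo hazard curve. The source model and attenuation relation below are illustrative inventions, not the study's models.

```python
import numpy as np

rng = np.random.default_rng(11)
N = 200_000                          # simulated events

# Hypothetical single areal source near the site.
nu = 0.2                             # annual rate of events with M >= m_min
m_min, m_max, beta = 4.5, 7.5, 2.0   # truncated Gutenberg-Richter parameters

# Inverse-CDF sampling of the truncated exponential magnitude distribution.
u = rng.random(N)
mags = m_min - np.log(1 - u * (1 - np.exp(-beta * (m_max - m_min)))) / beta
dists = rng.uniform(5.0, 40.0, N)    # source-to-site distance, km

# Illustrative attenuation relation with lognormal scatter (not a published GMPE).
ln_pga = -3.5 + 0.9 * mags - 1.1 * np.log(dists + 10.0) + rng.normal(0, 0.6, N)
pga = np.exp(ln_pga)                 # g

# Find the acceleration whose annual exceedance rate matches a 475-year return period.
target = 1.0 / 475.0
grid = np.linspace(0.05, 1.0, 200)
rates = nu * np.array([(pga > a).mean() for a in grid])
pga_475 = grid[np.argmin(np.abs(rates - target))]
print(f"PGA with a 475-year return period ~ {pga_475:.2f} g")
```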
20. A Logarithmic Bound for Simultaneous Embeddings of Planar Graphs
- Author
-
Steiner, Raphael, Bekos, Michael A., editor, and Chimani, Markus, editor
- Published
- 2023
- Full Text
- View/download PDF
21. Probabilistic Method-Based Control Design for Attitude System of Hypersonic Vehicles
- Author
-
Yang, Xiaohong, Guo, Zongyi, Jiang, Ruimin, Guo, Jianguo, Yan, Liang, editor, and Deng, Yimin, editor
- Published
- 2023
- Full Text
- View/download PDF
22. Sparse Multi-Reference Alignment: Phase Retrieval, Uniform Uncertainty Principles and the Beltway Problem.
- Author
-
Ghosh, Subhroshekhar and Rigollet, Philippe
- Subjects
HARMONIC analysis (Mathematics), COMBINATORIAL optimization, APPLIED mathematics, FOURIER transforms, POWER spectra, HEISENBERG uncertainty principle, HOPFIELD networks, NOISE, QUANTUM measurement - Abstract
Motivated by cutting-edge applications like cryo-electron microscopy (cryo-EM), the Multi-Reference Alignment (MRA) model entails the learning of an unknown signal from repeated measurements of its images under the latent action of a group of isometries and additive noise of magnitude $\sigma$. Despite significant interest, a clear picture for understanding rates of estimation in this model has emerged only recently, particularly in the high-noise regime $\sigma \gg 1$ that is highly relevant in applications. Recent investigations have revealed a remarkable asymptotic sample complexity of order $\sigma^6$ for certain signals whose Fourier transforms have full support, in stark contrast to the traditional $\sigma^2$ that arises in regular models. Often prohibitively large in practice, these results have prompted the investigation of variations around the MRA model where better sample complexity may be achieved. In this paper, we show that sparse signals exhibit an intermediate $\sigma^4$ sample complexity even in the classical MRA model. Further, we characterize the dependence of the estimation rate on the support size $s$ as $O_p(1)$ and $O_p(s^{3.5})$ in the dilute and moderate regimes of sparsity, respectively. Our techniques have implications for the problem of crystallographic phase retrieval, indicating a certain local uniqueness for the recovery of sparse signals from their power spectrum. Our results explore and exploit connections of the MRA estimation problem with two classical topics in applied mathematics: the beltway problem from combinatorial optimization, and uniform uncertainty principles from harmonic analysis. Our techniques include a certain enhanced form of the probabilistic method, which might be of general interest in its own right. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
23. Hypergraph Ramsey numbers of cliques versus stars.
- Author
-
Conlon, David, Fox, Jacob, He, Xiaoyu, Mubayi, Dhruv, Suk, Andrew, and Verstraëte, Jacques
- Subjects
HYPERGRAPHS, RAMSEY numbers, RAMSEY theory, RECTANGLES - Abstract
Let $K_m^{(3)}$ denote the complete 3-uniform hypergraph on $m$ vertices and $S_n^{(3)}$ the 3-uniform hypergraph on $n+1$ vertices consisting of all $\binom{n}{2}$ edges incident to a given vertex. Whereas many hypergraph Ramsey numbers grow either at most polynomially or at least exponentially, we show that the off-diagonal Ramsey number $r(K_4^{(3)}, S_n^{(3)})$ exhibits an unusual intermediate growth rate, namely, $2^{c \log^2 n} \le r(K_4^{(3)}, S_n^{(3)}) \le 2^{c' n^{2/3} \log n}$ for some positive constants $c$ and $c'$. The proof of these bounds brings in a novel Ramsey problem on grid graphs which may be of independent interest: what is the minimum $N$ such that any 2-edge-coloring of the Cartesian product $K_N \square K_N$ contains either a red rectangle or a blue $K_n$? [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
24. Adaptable and conflict colouring multigraphs with no cycles of length three or four.
- Author
-
Aliaj, Jurgen and Molloy, Michael
- Subjects
MULTIGRAPH, INTEGERS - Abstract
The adaptable choosability of a multigraph $G$, denoted $\mathrm{ch}_a(G)$, is the smallest integer $k$ such that any edge labelling $\tau$ of $G$ and any assignment of lists of size $k$ to the vertices of $G$ permits a list colouring $\sigma$ of $G$ such that there is no edge $e = uv$ where $\tau(e) = \sigma(u) = \sigma(v)$. Here we show that for a multigraph $G$ with maximum degree $\Delta$ and no cycles of length 3 or 4, $\mathrm{ch}_a(G) \le (2\sqrt{2} + o(1))\sqrt{\Delta/\ln\Delta}$. Under natural restrictions we can show that the same bound holds for the conflict choosability of $G$, which is a closely related parameter recently defined by Dvořák, Esperet, Kang and Ozeki. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
25. Probabilistic estimation of earthquake source location and magnitude using inverse analysis of regional paleoliquefaction studies.
- Author
-
Kanth, Aparna, Bishoyi, Nitarani, and Kumar, Ritesh
- Subjects
SEISMIC event location, EARTHQUAKE hazard analysis, GROUND motion, EARTHQUAKES, EARTHQUAKE magnitude, EARTHQUAKE engineering, FAULT zones - Abstract
Liquefaction is one of the most significant causes of ground failure in geotechnical earthquake engineering. The phenomenon mostly occurs in saturated cohesionless soils subjected to seismic loading. Studies of past liquefaction evidence, known as paleoliquefaction studies, have helped several researchers predict a region's future vulnerabilities. It is always difficult to prepare human life for future devastation resulting from ground failures, but prior estimation of the magnitude and likelihood of earthquakes that may strike a location can reduce the risk factors involved. Several methods are available to back-calculate the strength of shaking and earthquake magnitude from seismic evidence such as paleoliquefaction, and knowing the origin of an earthquake aids in locating the fault zone. Such historical investigations provide information for seismic hazard analyses and ground motion forecasts for a particular region. The present study is designed along these lines. A total of nine sites were selected in the Roorkee region, which is vulnerable to earthquakes and, based on experimental evidence from past studies, prone to liquefaction. Standard Penetration Test data from all nine sites were used for site characterization. For probabilistic earthquake source characterization, magnitudes between 3.5 and 8.5 and PGA between 0.05 and 0.5 are considered. For interpreting the most likely source location and the corresponding likelihood of magnitude, both site and source data are used in the ground motion model. The findings show that as the source-to-site distance increases, the likelihood of source occurrence decreases, whereas the most likely magnitude increases. This framework thus illustrates a probabilistic method for determining seismic source parameters from paleoliquefaction inverse analyses. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
26. An Efficient Probabilistic Methodology to Evaluate Web Sources as Data Source for Warehousing
- Author
-
Hariom Sharan Sinha, Saket Kumar Choudhary, and Vijender Kumar Solanki
- Subjects
mean-standard-deviation (MSD) method, multi-criteria decision method (MCDM), probabilistic method, standard deviation of score, web source, Technology - Abstract
The Internet is the largest source of data, and the requirements of data analytics have pushed data warehousing from structured conventional Data Warehouses toward complex Web Data Warehouses. The dynamic and complex nature of the web introduces various complexities when synthesizing web data into a conventional warehouse. Multi-Criteria Decision Making (MCDM) is a prominent mechanism for selecting the best data to store in the data warehouse. In this article, a method based on probabilistic analysis of the SAW and TOPSIS methods is proposed for selecting web data sources as data sources for a web data warehouse. The method deals more efficiently with the dynamic and complex nature of the web. The selection uses the analysis of both methods (SAW and TOPSIS) to evaluate the probability of selection of each score (1-9) for each feature; with these probability values, the probability of selection of the next web sources can then be determined. Moreover, using the same probability values, the mean score and standard deviation of the scores of the respective features of selected web sources are deduced and used to fix a standard score for each feature. The standard score is the parameter of the proposed Mean-Standard-Deviation (MSD) method for checking the suitability of web sources individually, whereas other methods do so on a comparative basis. The proposed method cuts down the cost of repetitive comparison: once the standard score has been computed from the mean and standard deviation of each feature, only the candidate source's respective feature scores need to be compared against it, which reduces computation and selects web sources faster. A minimal sketch of the MSD acceptance test follows this record.
- Published
- 2023
- Full Text
- View/download PDF
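One reading of the MSD acceptance test, sketched below: compute per-feature means and standard deviations over already-selected sources, fix a standard score per feature, and check each candidate against it independently. The scores, the number of features, and the mean-minus-one-standard-deviation threshold are all assumptions for illustration.

```python
import numpy as np

# Scores (1-9) of previously selected web sources; rows = sources,
# columns = evaluation features (placeholder values).
scores = np.array([[7, 8, 6, 9],
                   [6, 7, 7, 8],
                   [8, 6, 5, 9],
                   [7, 7, 6, 8]], dtype=float)

# Standard score per feature from the mean and standard deviation of the
# selected sources' scores (the threshold choice is an assumption).
mean, std = scores.mean(axis=0), scores.std(axis=0, ddof=1)
standard_score = mean - std

def acceptable(candidate):
    """Accept a candidate if every feature meets its standard score."""
    return bool(np.all(np.asarray(candidate, dtype=float) >= standard_score))

print("standard scores:", standard_score.round(2))
print("candidate [7, 6, 6, 8]:", acceptable([7, 6, 6, 8]))
print("candidate [4, 6, 6, 8]:", acceptable([4, 6, 6, 8]))
```

Because each candidate is checked against fixed thresholds rather than re-ranked against all previously selected sources, the per-candidate cost stays constant.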
27. Separability within alternating groups and randomness
- Author
-
Buran, Michal and Wilton, Henry
- Subjects
Geometric group theory, Free groups, Residual properties, Probabilistic method - Abstract
This thesis promotes known residual properties of free groups, surface groups, right angled Coxeter groups and right angled Artin groups to the situation where the quotient is only allowed to be an alternating group. The proofs follow two related threads of ideas. The first thread leads to 'alternating' analogues of extended residual finiteness in surface groups, right angled Artin groups and right angled Coxeter groups.
- Published
- 2020
- Full Text
- View/download PDF
28. Covering Codes for the Fixed Length Levenshtein Metric.
- Author
-
Vorobyev, I. V.
- Subjects
SPHERE packings - Abstract
A covering code, or a covering, is a set of codewords such that the union of balls centered at these codewords covers the entire space. As a rule, the problem consists in finding the minimum cardinality of a covering code. For the classical Hamming metric, the size of the smallest covering code of a fixed radius is known up to a constant factor. A similar result has recently been obtained for codes with insertions and for codes with deletions. In the present paper we study coverings of a space for the fixed length Levenshtein metric, i.e., for insertions and deletions. For fixed radius, we prove new lower and upper bounds on the minimum cardinality of a covering code, which differ by a constant factor only. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
29. ON UPPER BOUNDS FOR TOTAL K-DOMINATION NUMBER VIA THE PROBABILISTIC METHOD.
- Author
-
SIGARRETA, SAYLÍ, SIGARRETA, SAYLÉ, and CRUZ-SUÁREZ, HUGO
- Subjects
DOMINATING set, GRAPH connectivity, INTEGERS - Abstract
For a fixed positive integer k and G = (V, E) a connected graph of order n whose minimum vertex degree is at least k, a set S ⊆ V is a total k-dominating set, also known as a k-tuple total dominating set, if every vertex v ∈ V has at least k neighbors in S. The minimum size of a total k-dominating set of G is called the total k-domination number of G, denoted by γ_kt(G). The total k-domination problem is to determine a minimum total k-dominating set of G. Since the exact problem is in general quite difficult to solve, it is also of interest to have good upper bounds on the total k-domination number. In this paper, we present a probabilistic approach to computing an upper bound for the total k-domination number that improves on some previous results. [ABSTRACT FROM AUTHOR] A minimal randomized-construction sketch follows this record.
- Published
- 2023
- Full Text
- View/download PDF
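The standard probabilistic-method argument behind such bounds doubles as an algorithm: sample each vertex into S independently with probability p, then repair deficiencies. The sketch below runs that construction on a random graph; the choice of p is a generic heuristic of the right order, not the optimized probability from the paper.

```python
import numpy as np

rng = np.random.default_rng(5)
n, p_edge, k = 300, 0.15, 2

# Random graph with minimum degree >= k (resample until satisfied; sketch only).
while True:
    A = np.triu((rng.random((n, n)) < p_edge).astype(int), 1)
    A = A + A.T
    d_min = A.sum(axis=1).min()
    if d_min >= k:
        break

# Probabilistic-method construction: put each vertex in S with probability p,
# then repair every vertex with fewer than k neighbours in S.
p = min(1.0, (np.log(d_min) + k * np.log(np.log(d_min))) / d_min)
S = rng.random(n) < p
for v in range(n):
    nbrs = np.flatnonzero(A[v])
    short = k - int(S[nbrs].sum())
    if short > 0:   # adding vertices never breaks previously satisfied vertices
        S[rng.choice(nbrs[~S[nbrs]], size=short, replace=False)] = True

assert (A @ S >= k).all()          # S is a total k-dominating set
print(f"|S| = {int(S.sum())} of n = {n}, p = {p:.3f}")
```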
31. Edge Constrained Eulerian Extensions
- Author
-
Ganesan, Ghurumuruhan, Rushi Kumar, B., editor, Ponnusamy, S., editor, Giri, Debasis, editor, Thuraisingham, Bhavani, editor, Clifton, Christopher W., editor, and Carminati, Barbara, editor
- Published
- 2022
- Full Text
- View/download PDF
32. Study and Application of Probabilistic Estimation Method for Reserves Parameters in Uncertainty Reserves Calculation
- Author
-
Li, Jun, Guo, Yuan-ling, Zhang, Ya-xiong, Liu, Li-qiong, Xiao, Yu-ru, and Lin, Jia'en, editor
- Published
- 2022
- Full Text
- View/download PDF
33. Model Selection for Machine Learning
- Author
-
Ghosh, Shyamasree and Dasgupta, Rathi
- Published
- 2022
- Full Text
- View/download PDF
34. Forecasting Infrastructure Performance
- Author
-
Mohammadi, Alireza and Amador Jimenez, Luis
- Published
- 2022
- Full Text
- View/download PDF
35. Application of Bayesian theory in seismology, case study: northern and northwestern Iran
- Author
-
Mehdi Maleki and Zohreh Sadat Riazi Rad
- Abstract
In probability theory and statistics, Bayes' theorem describes the probability of an event based on prior knowledge of conditions that might be related to the event. One of its many applications is Bayesian inference, a particular approach to statistical inference. When applied, the probabilities involved in the theorem may have different interpretations; under the Bayesian interpretation, the theorem expresses how a degree of belief, expressed as a probability, should rationally change to account for the availability of related evidence. Bayesian inference is fundamental to Bayesian statistics. The purpose of this study is a hierarchical modeling method for prioritizing and providing appropriate solutions to reduce seismic hazards in the Alborz-Azerbaijan region. The work rests on natural data and Bayesian statistics, a powerful tool for modeling both uncertainty and randomness. The method can correctly estimate peak ground acceleration (PGA) values along with the quantities of its distribution function in the region. The inputs are a seismic catalog from 1900 to 2020 and a suitable ground motion attenuation law. It should be noted that the Iranian strong motion network had limited data, so there was a gap in recordings of large earthquakes. The modeling comprises a set of processes and rules for specifying variables and the relationships between them. Based on the results of this study, conducted in the northern and north-western parts of Iran using Iranian Seismological Center data (http://irsc.ut.ac.ir) covering 11 stations in the study area, hazard maps were drawn for PGA for the next 50 and 475 years, with the highest accelerations in the Alborz region (including Tehran and Zanjan) and in the Azerbaijan region (including Tabriz and Rasht). The correlation between acceleration values estimated by the Bayesian method and those observed by the accelerometer network of the Iran Road, Housing and Urban Development Research Center was α = 0.95, indicating that the estimated maximum acceleration agrees well with the observed maximum acceleration. According to the results, the southern part of Alborz (Tehran) and the north-western part of Iran (Tabriz) had the highest PGA, while the area west of the Caspian Sea (the city of Rasht) had the lowest. The Bayesian method, with advantages such as accounting for relationships between variables, handling conditions of uncertainty, and high flexibility, is therefore well suited to seismic hazard analysis in other parts of Iran and can also be used in construction projects. In such applications, step-by-step protocols and guidance for specifying variables and their relationships are needed when designing and correcting the model. [ABSTRACT FROM AUTHOR] A compact conjugate-updating sketch of the Bayesian machinery follows this record.
- Published
- 2023
- Full Text
- View/download PDF
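As a compact illustration of Bayesian updating in a seismicity setting, the sketch below performs conjugate Gamma-Poisson updating of an annual event rate from catalogue counts. The prior, the counts, and the horizon are hypothetical; the study itself estimates PGA with full distribution functions rather than this toy rate model.

```python
import numpy as np
from scipy import stats

# Prior belief about the annual rate of events in one source zone
# (Gamma prior with mean alpha0/beta0 = 0.2 events/yr; hypothetical).
alpha0, beta0 = 2.0, 10.0
n_events, years = 31, 120        # hypothetical catalogue counts

# Gamma-Poisson conjugacy: posterior parameters are simple sums.
alpha1, beta1 = alpha0 + n_events, beta0 + years
posterior = stats.gamma(a=alpha1, scale=1.0 / beta1)

print(f"posterior mean rate: {posterior.mean():.3f} events/yr")
print(f"95% credible interval: {posterior.ppf([0.025, 0.975]).round(3)}")

# Probability of at least one event in the next 50 years, averaging over
# the posterior uncertainty in the rate.
lam = posterior.rvs(100_000, random_state=1)
print(f"P(>=1 event in 50 yr) = {np.mean(1 - np.exp(-lam * 50)):.3f}")
```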
36. Distinct degrees and homogeneous sets.
- Author
-
Long, Eoin and Ploscaru, Laurenţiu
- Subjects
RANDOM graphs, LOGICAL prediction - Abstract
In this paper we investigate the extremal relationship between two well-studied graph parameters: the order of the largest homogeneous set in a graph $G$ and the maximal number of distinct degrees appearing in an induced subgraph of $G$, denoted respectively by $\hom(G)$ and $f(G)$. Our main theorem improves estimates due to several earlier researchers and shows that if $G$ is an $n$-vertex graph with $\hom(G) \ge n^{1/2}$ then $f(G) \ge (n/\hom(G))^{1-o(1)}$. The bound here is sharp up to the $o(1)$ term, and asymptotically solves a conjecture of Narayanan and Tomon. In particular, this implies that $\max\{\hom(G), f(G)\} \ge n^{1/2-o(1)}$ for any $n$-vertex graph $G$, which is also sharp. The above relationship between $\hom(G)$ and $f(G)$ breaks down in the regime where $\hom(G) < n^{1/2}$. Our second result provides a sharp bound for distinct degrees in biased random graphs, i.e., on $f(G(n,p))$. We believe that the behaviour here determines the extremal relationship between $\hom(G)$ and $f(G)$ in this second regime. Our approach to lower bounding $f(G)$ proceeds via a translation into an (almost) equivalent probabilistic problem, and it can be shown to be effective for arbitrary graphs. It may be of independent interest. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
37. Probabilistic Analyses of Root-Reinforced Slopes Using Monte Carlo Simulation.
- Author
-
Pisano, Marilene and Cardile, Giuseppe
- Subjects
SLOPE stability, MONTE Carlo method, SAFETY factor in engineering, SUSTAINABILITY, BIOENGINEERING - Abstract
Among the measures used to prevent the triggering of shallow landslides and to control erosion, root reinforcement has spread widely because of its high contribution to environmental sustainability. Although in recent years reliability-based design (RBD) has been applied increasingly to slope stability assessment to address the shortcomings of the deterministic approach (which does not consider geotechnical uncertainties explicitly), there is still a lack of applications of this method to root reinforcement. Plants carry high inherent uncertainty, so the level of reliability of these soil-bioengineering techniques needs to be investigated. In this context, to determine whether root-reinforced slopes designed according to the Eurocodes (that is, by applying their statistical partial factors) and providing satisfactory factors of safety may nevertheless have an unacceptable probability of failure, the authors carried out several probabilistic analyses using Monte Carlo simulation (MCS). MCS was applied to the simplified Bishop method, modified to account for pseudo-static forces representing earthquake loading. To take into account the mechanical effect provided by roots, an apparent root cohesion was added to the Mohr–Coulomb failure criterion. Results showed that not every slope configuration that satisfies the safety criterion has acceptable levels of reliability, a consequence of the high variability of the design parameters. [ABSTRACT FROM AUTHOR] A minimal Monte Carlo sketch with the same ingredients follows this record.
- Published
- 2023
- Full Text
- View/download PDF
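The sketch below keeps the abstract's ingredients (random soil parameters, apparent root cohesion added to the Mohr–Coulomb criterion, a pseudo-static seismic coefficient) but substitutes an infinite-slope model for the simplified Bishop method to stay short; every statistic is a placeholder.

```python
import numpy as np

rng = np.random.default_rng(9)
N = 100_000

# Hypothetical parameter statistics for a shallow root-reinforced layer.
c_soil = rng.lognormal(np.log(5.0), 0.3, N)      # soil cohesion, kPa
c_root = rng.lognormal(np.log(3.0), 0.5, N)      # apparent root cohesion, kPa
phi = np.radians(rng.normal(32.0, 3.0, N))       # friction angle, rad
gamma, z = 18.0, 1.5                             # unit weight kN/m^3, depth m
beta = np.radians(30.0)                          # slope angle
kh = 0.15                                        # pseudo-static coefficient

W = gamma * z                                    # weight per unit area
# Infinite-slope stresses with a horizontal pseudo-static force kh*W.
tau_d = W * (np.sin(beta) * np.cos(beta) + kh * np.cos(beta)**2)
sigma_n = W * (np.cos(beta)**2 - kh * np.sin(beta) * np.cos(beta))

# Root cohesion enters the Mohr-Coulomb strength additively.
fs = (c_soil + c_root + sigma_n * np.tan(phi)) / tau_d

print(f"mean FS = {fs.mean():.2f}, P(failure) = {np.mean(fs < 1):.4f}")
```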
39. On the Density of Critical Graphs with No Large Cliques.
- Author
-
Kelly, Tom and Postle, Luke
- Subjects
DENSITY - Abstract
A graph $G$ is $k$-critical if $\chi(G) = k$ and every proper subgraph of $G$ is $(k-1)$-colorable, and if $L$ is a list assignment for $G$, then $G$ is $L$-critical if $G$ is not $L$-colorable but every proper subgraph of $G$ is. In 2014, Kostochka and Yancey proved a lower bound on the average degree of an $n$-vertex $k$-critical graph tending to $k - \frac{2}{k-1}$ for large $n$ that is tight for infinitely many values of $n$, and they asked how their bound may be improved for graphs not containing a large clique. Answering this question, we prove that there exists some $\varepsilon > 0$ for which the following holds. If $k$ is sufficiently large and $G$ is a $K_{\omega+1}$-free $L$-critical graph where $\omega \le k - \log^{10} k$ and $L$ is a list assignment for $G$ such that $|L(v)| = k - 1$ for all $v \in V(G)$, then the average degree of $G$ is at least $(1 + \varepsilon)(k - 1) - \varepsilon(\omega - 1)$. This result implies that for some $\varepsilon > 0$, for every graph $G$ satisfying $\omega(G) \le \mathrm{mad}(G) - \log^{10} \mathrm{mad}(G)$, where $\omega(G)$ is the size of the largest clique in $G$ and $\mathrm{mad}(G)$ is the maximum average degree of $G$, the list-chromatic number of $G$ is at most. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
40. Estimating the expected value of multiple prospects in bidding blocks
- Author
-
Jun Li and Xuebin Huang
- Subjects
Expected value (EV), Multiple prospects, Bidding block, Probabilistic method, Production of electric energy or power. Powerplants. Central stations, TK1001-1841 - Abstract
In recent years, China has accelerated the reform of its oil and gas management system, especially in the competitive transfer of mining rights. Evaluating the expected value (EV) of lease blocks is crucial for the bidding decisions of oil companies. When bidding for a block with several individual prospects, simply adding the EVs of the prospects usually leads to overly high and optimistic resource volume and value estimates. For assessing the EV of a multi-prospect block, two factors should be considered. The first is the geological setting of the prospects, including their relative spatial relationship, their chance of geological success, their resources, and their geological dependency. The second is the exploration strategy of the oil company, including its dry-hole tolerance, the committed wells, and the drilling priorities for the prospects. A probabilistic method to assess the EV of a multi-prospect block is proposed, which proves useful for formulating a bidding strategy. In addition, a case study on two specific blocks with several prospects illustrates the effect of the above factors on the EV. A minimal simulation sketch follows this record.
- Published
- 2022
- Full Text
- View/download PDF
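A toy version of the two factors the abstract names: geological dependency (modeled here by a shared play-level event) and exploration strategy (priority order plus a dry-hole tolerance that stops drilling). All probabilities, values, and costs are invented for illustration, and the dependency model is deliberately crude.

```python
import numpy as np

rng = np.random.default_rng(13)
N = 20_000

# Three hypothetical prospects drilled in priority order.
p_success = np.array([0.35, 0.25, 0.20])   # marginal chance of success
value = np.array([300.0, 220.0, 180.0])    # value if successful (money units)
well_cost = 20.0
dry_hole_tolerance = 2                     # stop after this many consecutive dry holes

# Crude geological dependency: a shared "play works" event scales success odds.
p_play = 0.6
cond_up = np.minimum(p_success / p_play, 1.0)   # success prob. given the play works
cond_down = 0.05                                # success prob. given the play fails

total = 0.0
for _ in range(N):
    p_cond = cond_up if rng.random() < p_play else np.full(3, cond_down)
    payoff, dry_run = 0.0, 0
    for i in range(3):
        payoff -= well_cost
        if rng.random() < p_cond[i]:
            payoff += value[i]
            dry_run = 0
        else:
            dry_run += 1
            if dry_run >= dry_hole_tolerance:
                break                      # exploration strategy: abandon the block
    total += payoff

naive = np.sum(p_success * value) - 3 * well_cost
print(f"simulated block EV ~ {total / N:.1f} vs naive prospect-EV sum {naive:.1f}")
```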
41. An estimation method for hydrocarbon expected value of exploration target group in mature exploration area
- Author
-
Jun Li and Xiangbin Yan
- Subjects
expected value, exploration target, probabilistic method, block evaluation, Geophysics. Cosmic physics, QC801-809, Geology, QE1-996.5 - Abstract
China is comprehensively advancing the reform of its mining rights transfer system, and the rational evaluation of the expected value of blocks is crucial to oil companies' decisions on bidding for and exiting blocks. For a mature exploration area containing multiple exploration targets, simply summing the expected values of the individual targets usually overestimates the expected value of the block. Two factors should be considered together. The first is the geological conditions, including relative spatial relations, drilling success rates, resource scale, and geological dependencies. The second is the exploration strategy of the oil company, including its tolerance for dry wells, the number of commitment wells, and the drilling order of the exploration targets. Therefore, a set of estimation methods for exploration target groups based on probabilistic theory is proposed, grounded in oil companies' practical bidding and exit decisions. The effect of the above factors on the expected value of blocks is also discussed using a specific block as an example.
- Published
- 2022
- Full Text
- View/download PDF
42. Prospective Fault Displacement Hazard Assessment for Leech River Valley Fault Using Stochastic Source Modeling and Okada Fault Displacement Equations
- Author
-
Katsuichiro Goda and Parva Shoaeifar
- Subjects
fault displacement ,stochastic source modeling ,Okada equations ,probabilistic method ,Leech River Valley Fault ,Environmental sciences ,GE1-350 - Abstract
In this study, an alternative method for conducting probabilistic fault displacement hazard analysis is developed based on stochastic source modeling and analytical formulae for evaluating the elastic dislocation due to an earthquake rupture. It characterizes the uncertainty of fault-rupture occurrence in terms of position, geometry, and slip distribution and adopts the so-called Okada equations to calculate fault displacement at the ground surface. The method is compatible with fault-source-based probabilistic seismic hazard analysis and can be implemented via Monte Carlo simulations (a schematic sketch follows this record); it is useful for evaluating the differential displacements caused by a fault rupture at multiple locations simultaneously. The proposed method is applied to the Leech River Valley Fault located in the vicinity of Victoria, British Columbia, Canada. Site-specific fault displacement and differential fault displacement hazard curves are assessed for multiple sites within the fault-rupture zone. The hazard results indicate that relatively large displacements (~0.5 m vertical uplift) can be expected at low probability levels of 10⁻⁴. For critical infrastructures, such as bridges and pipelines, quantifying the uncertainty of fault displacement hazard is essential to manage potential damage and loss effectively.
- Published
- 2022
- Full Text
- View/download PDF
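As a schematic of the workflow described above, the sketch below samples rupture scenarios and builds a site-specific hazard curve by Monte Carlo simulation. The displacement function is a crude exponential placeholder, not the Okada (1985) elastic-dislocation equations used in the study, and the rupture rate and distributions are hypothetical:

```python
import math, random

RATE = 0.005  # assumed annual rate of surface-rupturing events (hypothetical)

def sample_scenario():
    """Sample a rupture position (km along the fault) and average slip (m)."""
    position = random.uniform(0.0, 30.0)
    slip = random.lognormvariate(math.log(0.5), 0.5)
    return position, slip

def surface_displacement(site_km: float, position: float, slip: float) -> float:
    """Placeholder attenuation of vertical displacement with distance."""
    return slip * math.exp(-abs(site_km - position) / 5.0)

def hazard_curve(site_km, thresholds, n=200_000):
    exceed = [0] * len(thresholds)
    for _ in range(n):
        d = surface_displacement(site_km, *sample_scenario())
        for i, t in enumerate(thresholds):
            if d >= t:
                exceed[i] += 1
    # annual exceedance probability = rupture rate x P(exceed | rupture)
    return [RATE * c / n for c in exceed]

thresholds = [0.1, 0.3, 0.5, 1.0]
print(list(zip(thresholds, hazard_curve(15.0, thresholds))))
```

Running the same loop at several sites at once would yield the differential-displacement hazard the abstract highlights.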
43. Number of components of polynomial lemniscates: A problem of Erdős, Herzog, and Piranian.
- Author
-
Ghosh, Subhajit and Ramachandran, Koushik
- Published
- 2024
- Full Text
- View/download PDF
44. Determination of Dynamic Characteristics for Predicting Electrical Load Curves of Mining Enterprises
- Author
-
Denis Anatolievich Ustinov and Konstantin Alekseevich Khomiakov
- Subjects
calculation of electrical loads ,utilization factor ,normalized correlation function ,probabilistic method ,individual load curves ,group load curves ,Electricity ,QC501-721 - Abstract
The calculation of electrical loads is the first and most significant stage in the design of a power supply system. It is essential for correctly selecting the power equipment: transformers, power lines, and switching devices. Underestimating or overestimating the calculated values can lead to large losses and increased capital costs, so the reliability of the results plays a key role. The use of energy-saving technologies and energy-efficient electrical equipment changes the nature and level of power consumption, which must be taken into account when determining the electrical loads. Existing methods leave out the dynamic characteristics of electrical load curves, so the calculated values are overestimated by up to 40%. This study presents a load calculation method based on normalized correlation functions and their parameters at the level of individual and group electricity consumers (a minimal sketch follows this record). As a result, the difference between the calculated and experimental values does not exceed 5%.
- Published
- 2022
- Full Text
- View/download PDF
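A minimal sketch of the general idea, on synthetic data: estimate a normalized correlation function from an individual load curve, read off a decorrelation time, and form a calculated load as the mean plus a quantile factor times the standard deviation over that averaging interval. The data, the 1/e rule, and the factor β = 2.5 are placeholders, not the authors' calibrated parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic 1-minute load samples over two days with ~30 min correlation (kW)
noise = rng.standard_normal(2880)
load = 100 + 15 * np.convolve(noise, np.ones(30) / 30, mode="same")

def normalized_acf(x: np.ndarray, max_lag: int) -> np.ndarray:
    """Normalized correlation function of a load curve (biased estimator)."""
    x = x - x.mean()
    var = float(np.dot(x, x)) / len(x)
    return np.array([float(np.dot(x[: len(x) - k], x[k:])) / ((len(x) - k) * var)
                     for k in range(max_lag + 1)])

acf = normalized_acf(load, max_lag=120)
tau = max(int(np.argmax(acf < np.exp(-1.0))), 1)  # decorrelation time, minutes

# average over tau-minute windows, then take mean + beta * sigma as the
# calculated (design) load; beta = 2.5 is a placeholder quantile factor
windows = load[: len(load) // tau * tau].reshape(-1, tau).mean(axis=1)
design_load = windows.mean() + 2.5 * windows.std()
print(f"tau = {tau} min, calculated load = {design_load:.1f} kW")
```

Ignoring the correlation structure (averaging over an arbitrary interval) inflates the variance term, which is the mechanism behind the overestimation the abstract mentions.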
45. Variations on the Erdős distinct-sums problem.
- Author
-
Costa, Simone, Dalai, Marco, and Della Fiore, Stefano
- Subjects
- *
LOGICAL prediction , *POLYNOMIALS - Abstract
Let {a_1, ..., a_n} be a set of positive integers with a_1 < ⋯ < a_n such that all 2^n subset sums are distinct. A famous conjecture by Erdős states that a_n > c·2^n for some constant c, while the best result known to date is of the form a_n > c·2^n/√n. In this paper, inspired by an information-theoretic interpretation, we extend the study to vector-valued elements a_i ∈ Z^k and we weaken the condition by requiring that only sums corresponding to subsets of size smaller than or equal to λn be distinct. For this case, we derive lower and upper bounds on the smallest possible value of a_n. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
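The two conditions contrasted in the abstract above are easy to check by brute force for small sets. In the sketch below, the `max_size` argument models the weakened requirement that only subsets of size at most λn have distinct sums; the seven-element set is the classical Conway–Guy example:

```python
from itertools import combinations

def distinct_sums(a, max_size=None) -> bool:
    """Check that all subsets of size <= max_size have pairwise distinct sums."""
    max_size = len(a) if max_size is None else max_size
    seen = set()
    for r in range(max_size + 1):
        for subset in combinations(a, r):
            s = sum(subset)
            if s in seen:
                return False
            seen.add(s)
    return True

conway_guy = [20, 31, 37, 40, 42, 43, 44]  # 7 elements, all 2^7 sums distinct
print(distinct_sums(conway_guy))                    # True
print(distinct_sums([1, 2, 3, 4, 5]))               # False: 1 + 4 = 2 + 3
print(distinct_sums([1, 2, 3, 4, 5], max_size=1))   # True under the weakening
```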
46. The Parameterized Complexity of the k-Biclique Problem.
- Author
-
Lin, Bingkai
- Subjects
BIPARTITE graphs ,COMPUTATIONAL complexity ,PATHS & cycles in graph theory ,SUBGRAPHS ,COMPLETE graphs ,ALGORITHMS - Abstract
Given a graph G and an integer k, the k-Biclique problem asks whether G contains a complete bipartite subgraph with k vertices on each side. Whether there is an f(k)·|G|^O(1)-time algorithm solving k-Biclique for some computable function f has been a long-standing open problem. We show that k-Biclique is W[1]-hard, which implies that such an f(k)·|G|^O(1)-time algorithm does not exist under the hypothesis W[1] ≠ FPT from parameterized complexity theory. To prove this result, we give a reduction which, for every n-vertex graph G and small integer k, constructs a bipartite graph H = (L ⊎ R, E) in time polynomial in n such that if G contains a clique with k vertices, then there are k(k − 1)/2 vertices in L with n^Θ(1/k) common neighbors; otherwise, any k(k − 1)/2 vertices in L have at most (k + 1)! common neighbors. An additional feature of this reduction is that it creates a gap on the right side of the biclique. Such a gap might have further applications in proving hardness-of-approximation results. Assuming a randomized version of the Exponential Time Hypothesis, we establish an f(k)·|G|^o(√k)-time lower bound for k-Biclique for any computable function f. Combining our result with the work of Bulatov and Marx [2014], we obtain a dichotomy classification of the parameterized complexity of cardinality constraint satisfaction problems. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
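For reference, the decision problem itself admits a simple exponential-time brute-force check (the reduction in the paper is, of course, far more involved). A small sketch:

```python
from itertools import combinations

def has_k_biclique(adj: dict, k: int) -> bool:
    """Brute force: does the graph contain K_{k,k} as a (not necessarily
    induced) subgraph? adj maps each vertex to its set of neighbors."""
    nodes = list(adj)
    for left in combinations(nodes, k):
        # common neighbors of the left side, excluding the left side itself
        common = set(nodes) - set(left)
        for u in left:
            common &= adj[u]
        if len(common) >= k:  # any k of them complete a K_{k,k}
            return True
    return False

# the 6-cycle contains K_{1,1} (an edge) but no K_{2,2} (no 4-cycle)
C6 = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
print(has_k_biclique(C6, 1), has_k_biclique(C6, 2))  # True False
```

The hardness result above says that, in essence, no algorithm can beat this combinatorial explosion by more than reshuffling it into the f(k) factor.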
47. Salmoniformes: A Marine or Freshwater Origin?
- Author
-
Dolganov, V. N.
- Abstract
The long-standing discussion about the marine or freshwater genesis of Salmoniformes remains unsettled, as no convincing evidence for either point of view has been found so far. The arguments for the freshwater origin of salmoniforms are analyzed using the "probabilistic" method of phylogenetic reconstruction and a genetic approach. The analysis shows that the arguments set forth cannot be considered evidence for the freshwater genesis of this taxon. The lack of factual and theoretical support for an ancient freshwater residence of Salmoniformes makes it difficult to argue that they acquired a freshwater lifestyle in the Cretaceous–Paleocene only to become anadromous again in the Eocene. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
48. Bounding χ by a fraction of Δ for graphs without large cliques.
- Author
-
Bonamy, Marthe, Kelly, Tom, Nelson, Peter, and Postle, Luke
- Subjects
- *
CHARTS, diagrams, etc. , *GREEDY algorithms , *GRAPH coloring - Abstract
The greedy coloring algorithm shows that a graph of maximum degree at most Δ has chromatic number at most Δ + 1, and this is tight for cliques. Much attention has been devoted to improving this "greedy bound" for graphs without large cliques. Brooks famously proved that this bound can be improved by one if Δ ≥ 3 and the graph contains no clique of size Δ + 1. Reed's Conjecture states that the "greedy bound" can be improved by k if the graph contains no clique of size Δ + 1 − 2k. Johansson proved that the "greedy bound" can be improved by a factor of Ω(1/ln(Δ)) or Ω(ln(ln(Δ))/ln(Δ)) for graphs with no triangles or no cliques of any fixed size, respectively. Notably missing is a linear improvement on the "greedy bound" for graphs without large cliques. In this paper, we prove that for sufficiently large Δ, if G is a graph with maximum degree at most Δ and no clique of size ω, then χ(G) ≤ 72Δ√(ln(ω)/ln(Δ)). This implies that for sufficiently large Δ, if ω^((72c)²) ≤ Δ then χ(G) ≤ Δ/c. This bound actually holds for the list-chromatic and even the correspondence chromatic number (also known as DP-chromatic number). In fact, we prove what we call a "local version" of it, a result implying the existence of a coloring when the number of available colors for each vertex depends on local parameters, like the degree and the clique number of its neighborhood. We prove that for sufficiently large Δ, if G is a graph of maximum degree at most Δ and minimum degree at least ln²(Δ) with list-assignment L, then G is L-colorable if for each v ∈ V(G), |L(v)| ≥ 72 deg(v) · min{ √(ln(ω(v))/ln(deg(v))), ω(v)·ln(ln(deg(v)))/ln(deg(v)), log₂(χ(v) + 1)/ln(deg(v)) }, where χ(v) denotes the chromatic number of the neighborhood of v and ω(v) denotes the size of a largest clique containing v. This simultaneously implies the linear improvement over the "greedy bound" and the two aforementioned results of Johansson. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
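The three terms inside the minimum of the local list-size bound quoted above can be compared numerically to see which regime governs a vertex. The parameter values below are hypothetical illustrations:

```python
import math

def list_size_terms(deg: int, omega: int, chi: int):
    """The three competing terms of the local bound, as quoted above."""
    t1 = math.sqrt(math.log(omega) / math.log(deg))       # sqrt(ln w / ln d)
    t2 = omega * math.log(math.log(deg)) / math.log(deg)  # w ln ln d / ln d
    t3 = math.log2(chi + 1) / math.log(deg)               # log2(chi+1) / ln d
    return t1, t2, t3

deg = 10**6
for omega, chi in [(3, 3), (50, 50), (10**4, 10**4)]:
    terms = list_size_terms(deg, omega, chi)
    print(omega, [round(t, 3) for t in terms],
          "list size >=", round(72 * deg * min(terms)))
```

Whichever term is smallest determines which of the abstract's three improvements (the linear one, or either of Johansson's) is active at that vertex.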
49. Repeated Distances and Dot Products in Finite Fields
- Author
-
Iosevich, Alex, Wolf, Charles, Karapetyants, Alexey N., editor, Kravchenko, Vladislav V., editor, Liflyand, Elijah, editor, and Malonek, Helmuth R., editor
- Published
- 2021
- Full Text
- View/download PDF
50. Size of Local Finite Field Kakeya Sets
- Author
-
Ganesan, Ghurumuruhan, Nešetřil, Jaroslav, editor, Perarnau, Guillem, editor, Rué, Juanjo, editor, and Serra, Oriol, editor
- Published
- 2021
- Full Text
- View/download PDF