46 results for "SUBROUTINES (Computer programs)"
Search Results
2. A novel technique of schedule tracker for parabolic dish concentrator.
- Author
- Malviya, Rajkumar, Patel, Akash, Singh, Ayush, Jagadev, Santosh, Baredar, Prashant, and Kumar, Anil
- Subjects
- PARABOLIC reflectors, SUBROUTINES (Computer programs), PHOTOVOLTAIC power systems, SOLAR energy, OBJECT tracking (Computer vision), AZIMUTH
- Abstract
Tracking is important in a system that harnesses solar energy. A single-axis tracking mechanism is cheaper and simpler to develop, but because it tracks along only one axis it is less efficient than a dual-axis system. Dual-axis tracking systems require considerable equipment, sensors, motors, and a lengthy computer program to function properly. Therefore, in the present study, a novel method of solar tracking has been discussed where each tracking point has the impact of both the azimuth and altitude angle at a single point. This method is an average axis tracking method (AATM). HelioScope software was used to extract the hourly solar altitude and azimuth angles for each day and month for the site of Bhopal, India. The average method was then used to get the hourly average solar tracking angle (ASTA) for each month. The parabolic dish concentrator was designed in SolidWorks to apply and simulate the newly developed tracking points in SolTrace software. The graphical analysis was presented along with proper validation of the proposed method, and the single and dual axes were compared with AATM. The graphical study shows that the average axis tracking points have a smoother wave slope than the single axis. From June to September, the proposed method's error was estimated between 0.85 and 0.95. It can be concluded that by making slight adjustments to the seasonal angle, this error could be minimized and the concept could be successfully applied to a parabolic dish or a solar PV system. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
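The averaging step described in the abstract can be sketched in a few lines. The function below is purely illustrative: its name, and the use of a simple arithmetic mean of azimuth and altitude per hour, are assumptions inferred from the abstract, not the authors' code.

```python
# Illustrative sketch of an "average axis tracking" computation:
# combine hourly (azimuth, altitude) pairs into one tracking angle per
# hour, then average each hour across all days of a month (the ASTA).

def average_tracking_angles(daily_angles):
    """daily_angles: list of days; each day is a list of
    (azimuth_deg, altitude_deg) tuples, one per daylight hour."""
    hours = len(daily_angles[0])
    monthly = []
    for h in range(hours):
        # one tracking point per hour: mean of azimuth and altitude
        per_day = [(az + alt) / 2.0 for az, alt in (day[h] for day in daily_angles)]
        # hourly average over all days of the month
        monthly.append(sum(per_day) / len(per_day))
    return monthly
```

A two-day, one-hour example: `average_tracking_angles([[(100, 40)], [(110, 50)]])` averages the per-day tracking points 70 and 80 into a single hourly angle of 75.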
3. Subdomain separability in global optimization.
- Author
- Deussen, Jens and Naumann, Uwe
- Subjects
- GLOBAL optimization, SUBROUTINES (Computer programs), AUTOMATIC differentiation, DIFFERENTIABLE functions
- Abstract
We introduce a generalization of separability for global optimization, presented in the context of a simple branch and bound method. Our results apply to continuously differentiable objective functions implemented as computer programs. A significant search space reduction can be expected to yield an acceleration of any global optimization method. We show how to utilize interval derivatives calculated by adjoint algorithmic differentiation to examine the monotonicity of the objective with respect to so-called structural separators, and how to verify the latter automatically. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
4. Design of MDOF structure with damping enhanced inerter systems.
- Author
- Zhang, Ruifu, Wu, Minjun, Pan, Chao, Wang, Chao, and Hao, Linfei
- Subjects
- *EARTHQUAKE resistant design, *SEISMIC response, *SUBROUTINES (Computer programs), *SHEARING force
- Abstract
An inerter is a two-terminal inertial element that can produce an amplified inertance and enhanced damping when operating with spring and damping elements. Its superior vibration mitigation effect has been proved by previous studies. Although the H∞ optimal design of an inerter-controlled single-degree-of-freedom structure can be derived with the fixed-point method, rational and theoretical design methods for inerter-controlled multi-degree-of-freedom (MDOF) structures have yet to be developed. In this study, a practical and semi-analytical method based on the damping enhancement principle is proposed for the design of inerter-controlled MDOF structures under earthquakes. To improve the vibration mitigation efficiency, the parameters of inerter systems are distributed based on structural responses, and the required damping coefficient in the inerter systems is minimized to fully utilize the damping enhancement effect of inerter systems. The response mitigation ratio is taken as the targeted performance index to fulfill the demand-oriented design philosophy presented in this study. The stochastic response of the structure is obtained by conducting complex mode superposition. A detailed design procedure and a corresponding computer program are developed. Three benchmark structures are employed to exemplify the effectiveness of the proposed design. The analysis results show that the story drifts and story shear forces of the designed structure are effectively mitigated to the target value. In comparison with an existing method (the fixed-point method), the proposed design strategy efficiently exploits the damping enhancement effect, resulting in a reduced damping coefficient and damping force while satisfying the performance demand, thereby producing a rational and economical design. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
5. Simulation of Thermal and Electric Fields in Electrodes with Allowance for Melting and Fritting of Films.
- Author
- Arutyunyan, R. V.
- Subjects
- *ELECTRIC fields, *SUBROUTINES (Computer programs), *FINITE difference method, *SWITCHING circuits, *ELECTRODES
- Abstract
A mathematical model is formulated, and a finite-difference method and computer programs have been developed that allow effective computer simulation of the process of thermal breakdown of contact films. The system of equations includes the Stefan problem, the equations of electric transfer and of a switched circuit. The axial symmetry of the boundary-value problem conditions is assumed. A finite-difference method in a spherical coordinate system is used for the solution. During the development of the process, the radius of the conducting bridge made of a molten material changes by several orders of magnitude. An effective solution of this problem is the use of a "moving" grid obtained by a corresponding change of variables. The results of the work can be applied in the practice of research and design of electrical apparatuses and other electrotechnical devices. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
6. Identifying routines in the discourse of undergraduate students when defining.
- Author
- Fernández-León, Aurora, Gavilán-Izquierdo, José María, González-Regaña, Alfonso J., Martín-Molina, Verónica, and Toscano, Rocío
- Subjects
- MATHEMATICS education, UNDERGRADUATES, GEOMETRY education, MATHEMATICS students, SUBROUTINES (Computer programs)
- Abstract
In this paper, we study how undergraduate students define 3D geometrical solids. With this aim, we have identified the routines that are present in the discourse of the students when describing and defining these solids. These routines are one of the properties that characterise the mathematical discourse in the theory of commognition (Sfard 2008). Our results show three different types of routines. The first type is related to the process of describing the solids, the second one to the process of defining the solids and the rest of the routines have a transversal nature. All of them together give us a global vision of the mathematical practice of defining of these undergraduate students. For instance, it seems that some of these students do not have a clear idea of what a definition is. Moreover, there are also differences between the discourse of students when defining 2D figures and the discourse of students when defining 3D solids. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
7. Simulation and experimental investigation of a ballistic compression soft recovery system.
- Author
- Mathur, Girijesh and Tiwari, Nachiketa
- Subjects
- *EULER equations (Rigid dynamics), *SUBROUTINES (Computer programs), *TURBULENT boundary layer, *FINITE difference method, *EULER equations, *NUMERICAL analysis, *COMPRESSIBLE flow
- Abstract
A soft recovery system is used to arrest a supersonic object over a limited distance in a controlled manner. This may be achieved through ballistic compression of gas. This work explains the motion of a supersonic object passing through a ballistic compression decelerator, i.e., a pressurized gas column initially sandwiched between two diaphragms. The accompanying mechanics is complex and includes diverse effects such as separation of shock from the supersonic object, travelling shocks, shock reflections, creation of a new shock, emergence and dissolution of contact discontinuities and expansion waves, and shock-shock interactions. In this work, these phenomena have been numerically and experimentally studied. While the method of characteristics was used to solve Euler's equations in continuous regions, jump conditions derived from control volume considerations were used to obtain solutions across discontinuities. In this way, a duly validated finite difference method computer program was developed to analyze the problem. Finally, simulation predictions were validated by conducting experiments on a 7.62 mm soft recovery system (SRS) tube. Our results showed that an object having an entry velocity of 880 m/s left the SRS with a velocity that was lower by 47% than simulation predictions. Further analysis showed that friction between the object and tube was a major contributor to this gap. After accounting for friction, the difference between numerical analysis and experimental data was reduced to about 5% at most locations, and to 17% at the end of the SRS. We attribute this residual difference between observations and simulations to a buildup of pressure at a location after the passage of a shock. Our 2-D finite volume study results, which are consistent with earlier research, as well as with our experimental data, show that such a phenomenon is prominent particularly in narrow tubes due to development of significantly thick turbulent boundary layers. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
8. Enumeration in time is irresistibly event-based.
- Author
- Ongchoco, Joan Danielle K. and Scholl, Brian J.
- Subjects
- *SUBROUTINES (Computer programs), *AUDITORY perception
- Abstract
One of the most fundamental questions that can be asked about any process concerns the underlying units over which it operates. And this is true not just for artificial processes (such as functions in a computer program that only take specific kinds of arguments) but for mental processes. Over what units does the process of enumeration operate? Recent work has demonstrated that in visuospatial arrays, these units are often irresistibly discrete objects. When enumerating the number of discs in a display, for example, observers underestimate to a greater degree when the discs are spatially segmented (e.g., by connecting pairs of discs with lines): you try to enumerate discs, but your mind can't help enumerating dumbbells. This phenomenon has previously been limited to static displays, but of course our experience of the world is inherently dynamic. Is enumeration in time similarly based on discrete events? To find out, we had observers enumerate the number of notes in quick musical sequences. Observers underestimated to a greater degree when the notes were temporally segmented (into discrete musical phrases, based on pitch-range shifts), even while carefully controlling for both duration and the overall range and heterogeneity of pitches. Observers tried to enumerate notes, but their minds couldn't help enumerating musical phrases – since those are the events they experienced. These results thus demonstrate how discrete events are prominent in our mental lives, and how the units that constitute discrete events are not entirely under our conscious, intentional control. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
9. Performance Analysis of a Thermoelectric Generator with a Segmented Leg.
- Author
- Xiao, Xuejiao, Kim, Chang Nyung, Luo, Yang, Fan, Xiaowen, and Deng, Yanxi
- Subjects
- THERMOELECTRIC generators, FINITE volume method, SUBROUTINES (Computer programs), HEAT, ELECTRIC charge, WASTE heat
- Abstract
With the development of thermoelectric (TE) materials with high thermoelectric figure of merit (ZT) values, improved thermoelectric devices have been fabricated recently. Although the performance of thermoelectric generators (TEGs) with segmented legs has been evaluated experimentally, detailed numerical prediction of TEG performance is rarely considered. In this study, a new numerical solution method is devised to analyze a TE couple. We employ the finite volume method, which allows the conservation of thermal energy and electric charge. A computer program based on the numerical method is developed to evaluate the performance of a TEG with a segmented p-leg. Temperature-dependent thermoelectric properties are employed, without simplifying or neglecting Joule heating, Thomson heating, or Peltier heating. The numerical method is discussed in detail, including the derivation of algebraic equations from the integral forms of the governing equations for thermal energy and electric potential. The numerical results obtained with the computer program based on the method are compared with a mathematical solution for validation. Compared with the performance without segmentation, the results show an improvement for the TEG with a segmented leg. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
10. ESR statement on new approaches to undergraduate teaching in Radiology.
- Author
- European Society of Radiology (ESR), Oleaga, Laura, Dewey, Marc, Iezzi, Roberto, Kainberger, Franz, Nyhsen, Christiane M., Catalano, Carlo, Válek, Vlastimil, Szczerbo-Trojanowska, Malgorzata, Messina, Carmelo, and Seker, Fatih
- Subjects
- *EDUCATIONAL technology, *MEDICAL school curriculum, *SUBROUTINES (Computer programs), *DECISION support systems, *INTERDISCIPLINARY education, *OPERATIVE ultrasonography, *PICTURE archiving & communication systems
- Abstract
Medical education is evolving and electronic learning (e-Learning) strategies have now become an essential asset in radiology education. Radiology education is a significant part of the undergraduate medical curriculum and the use of e-Learning in radiology teaching in medical schools is on the rise. If coupled with clinical decision support systems, e-Learning can be a practical way of teaching students clinical decision making, such as selecting the diagnostic imaging tests that are best suited in certain clinical scenarios. The innovative concept of flipped classroom learning encourages students to work independently and maximises the application of learnt contents in interactive classroom sessions. For integrated curricula with their student-centred, problem-based, and community-based design, an approach to systematically integrate radiology may be to define diagnostic reasoning as one of the core goals. Radiologists as teachers and scholars may understand themselves as experts in diagnostic reasoning and in mentoring how to make medical decisions. Computer programs simulating the routine work are available and can be used to teach the recognition of anatomical structures and pathological patterns, and also to teach ultrasonography and interventional radiology, maximising patient safety. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
11. Joint routing and scheduling for transmission service in software-defined full-duplex wireless networks.
- Author
- Li, Zhuo, Chen, Xin, Li, Lulu, and Wang, Xiangkun
- Subjects
- SOFTWARE-defined networking, ROUTING algorithms, OPL (Computer program language), SUBROUTINES (Computer programs)
- Abstract
In recent years, full-duplex communication has been investigated in wireless networks to improve the quality of transmission service. Most existing work has focused on the physical layer, with little consideration of the upper layers. In this paper, we address the issue of joint routing and link scheduling in software-defined full-duplex wireless networks, where an exclusive SDN controller node is involved. Firstly, we formulate the problem as an optimization problem, which tries to maximize the total throughput of the network. Due to the NP-hardness of this problem, we propose a heuristic algorithm that jointly considers routing selection and link scheduling. For the routing-selection subroutine, we propose the minimum-cost routing algorithm (MinCostRo for short). We evaluate the performance in MATLAB, and compare it with the DRPA routing algorithm and the minimal maximal interference routing algorithm (MinMaxRo for short). It can be found that MinCostRo performs better than the two existing methods. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
12. Turbocharging Treewidth Heuristics.
- Author
- Gaspers, Serge, Gudmundsson, Joachim, Jones, Mitchell, Mestre, Julián, and Rümmele, Stefan
- Subjects
- *TURBOCHARGERS, *HEURISTIC algorithms, *PERMUTATIONS, *SUBROUTINES (Computer programs), *GREEDY algorithms
- Abstract
A widely used class of algorithms for computing tree decompositions of graphs are heuristics that compute an elimination order, i.e., a permutation of the vertex set. In this paper, we propose to turbocharge these heuristics. For a target treewidth k, suppose the heuristic has already computed a partial elimination order of width at most k, but extending it by one more vertex exceeds the target width k. At this moment of regret, we solve a subproblem which is to recompute the last c positions of the partial elimination order such that it can be extended without exceeding width k. We show that this subproblem is fixed-parameter tractable when parameterized by k and c, but it is para-NP-hard and W[1]-hard when parameterized by only k or c, respectively. Our experimental evaluation of the FPT algorithm shows that we can trade a reasonable increase of the running time for the quality of the solution. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
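For context on the quantity these heuristics control: eliminating a vertex connects its remaining neighbours (fill-in), and the width of an elimination order is the largest neighbourhood seen at elimination time. The sketch below is a minimal illustrative baseline for evaluating that width; it is not the paper's turbocharging or FPT algorithm, and the names are assumptions.

```python
# Minimal sketch: the width of an elimination order on a graph.
# Eliminating vertex v records |N(v)| among the not-yet-eliminated
# vertices, then turns N(v) into a clique (fill-in edges).

def elimination_width(adj, order):
    """adj: dict vertex -> set of neighbours; order: full elimination order.
    Returns the width of the order (treewidth is the minimum over orders)."""
    adj = {v: set(ns) for v, ns in adj.items()}  # work on a copy
    width = 0
    for v in order:
        nbrs = adj[v]
        width = max(width, len(nbrs))
        for u in nbrs:
            adj[u] |= nbrs - {u}  # fill-in: clique on the neighbourhood
            adj[u].discard(v)
        del adj[v]
    return width
```

On a path a-b-c any order has width 1; on a 4-cycle the best achievable width is 2, matching its treewidth.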
13. Terminating distributed construction of shapes and patterns in a fair solution of automata.
- Author
- Michail, Othon
- Subjects
- *MACHINE theory, *AUTOPOIESIS, *SELF-organizing systems, *FAIRNESS, *SUBROUTINES (Computer programs)
- Abstract
In this work, we consider a solution of automata (or nodes) that move passively in a well-mixed solution without being capable of controlling their movement. Nodes can cooperate by interacting in pairs and every such interaction may result in an update of their local states. Additionally, the nodes may also choose to connect to each other in order to start forming some required structure. Such nodes can be thought of as small programmable pieces of matter, like tiny nanorobots or programmable molecules. The model that we introduce here is a more applied version of network constructors, imposing physical (or geometric) constraints on the connections that the nodes are allowed to form. Each node can connect to other nodes only via a very limited number of local ports. Connections are always made at unit distance and are perpendicular to connections of neighboring ports, which makes the model capable of forming 2D or 3D shapes. We provide direct constructors for some basic shape construction problems, like spanning line, spanning square, and self-replication. We then develop new techniques for determining the computational and constructive capabilities of our model. One of the main novelties of our approach is that of exploiting the assumptions that the system is well-mixed and has a unique leader, in order to give terminating protocols that are correct with high probability. This allows us to develop terminating subroutines that can be sequentially composed to form larger modular protocols. One of our main results is a terminating protocol counting the size n of the system with high probability. We then use this protocol as a subroutine in order to develop our universal constructors, establishing that it is possible for the nodes to become self-organized with high probability into arbitrarily complex shapes while still detecting termination of the construction. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
14. Recent developments in the texture analysis program ANAELU.
- Author
- Burciaga-Valencia, Diana C., Villalobos-Portillo, Edgar E., Marín-Romero, José A., del Río, Manuel Sánchez, Montero-Cabrera, María E., Fuentes-Cobas, Luis E., and Fuentes-Montero, Luis
- Subjects
- TEXTURE analysis (Image processing), DIFFRACTION patterns, CRYSTAL structure, SPHERICAL harmonics, SUBROUTINES (Computer programs), FORTRAN
- Abstract
The ANAELU program is part of the current trend towards 2D diffraction pattern processing. ANAELU is open source, distributed under the MPL license. The basic conception of the program is that the user proposes the crystalline structure of the phase under study and the inverse pole figure of the considered texture. With these data, using the tools of mathematical texture analysis, the program simulates and graphically represents the 2D-XRD pattern of the model sample. An important feature of the considered patterns is the distribution of intensities along the Debye rings. The visual comparison between observed and calculated patterns is the criterion of correctness of the proposed model. The program has been successfully used in the characterization of materials for electronic applications, alloys and minerals. Some limitations that have been detected in the use of ANAELU are the limited number of input formats that it is able to read, the program's relative slowness, the non-consideration of the diffraction background and the poor portability. The present update consists of improvements to the aspects raised. ANAELU-2.0 presents the following innovations. (a) A new GUI has been created, in WxPython, associated with a system for reading experimental patterns through the FabIO library. The current system reads patterns in the most internationally used formats. (b) The calculation of diffraction patterns, from the generation of the unit cell to the diffracted intensities, has been translated to FORTRAN 2003 with systematic use of the CRYSFML library. This change reduces the running time by one order of magnitude. (c) Various routines (Laplacian softening, spherical harmonics) have been introduced to model the two-dimensional background. (d) The current version, ANAELU-2.0, can be distributed by means of stable executable packages in Windows, LINUX and IOS, wrapped by MiniConda. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
15. O2iJoin: An Efficient Index-Based Algorithm for Overlap Interval Join.
- Author
- Luo, Ji-Zhou, Shi, Sheng-Fei, Yang, Guang, Wang, Hong-Zhi, and Li, Jian-Zhong
- Subjects
- ALGORITHMS, MATHEMATICAL functions software, XML (Extensible Markup Language), SUBROUTINES (Computer programs)
- Abstract
Time intervals are often associated with tuples to represent their valid time in temporal relations, where overlap join is crucial for various kinds of queries. Many existing overlap join algorithms use indices based on tree structures such as the quad-tree, B+-tree and interval tree. These algorithms usually have high CPU cost since deep path traversals are unavoidable, which makes them less competitive than data-partition or plane-sweep based algorithms. This paper proposes an efficient overlap join algorithm based on a new two-layer flat index named the Overlap Interval Inverted Index (i.e., O2i Index). It uses an array to record the end points of intervals and approximates the nesting structures of intervals via two functions in the first layer, and the second layer uses inverted lists to trace all intervals satisfying the approximated nesting structures. With the help of the new index, the join algorithm only visits the must-be-scanned lists and skips all others. Analyses and experiments on both real and synthetic datasets show that the proposed algorithm is as competitive as the state-of-the-art algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
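For readers unfamiliar with the problem itself: an overlap join reports all cross pairs of intervals that intersect. The sketch below is the naive nested-loop baseline that index-based algorithms like O2iJoin are designed to beat; it is illustrative only and not the paper's algorithm.

```python
# Naive overlap interval join: report every pair (i, j) such that
# half-open interval r[i] = [a1, b1) overlaps s[j] = [a2, b2).
# Two half-open intervals overlap iff a1 < b2 and a2 < b1.

def overlap_join(r, s):
    """r, s: lists of (start, end) tuples with start < end.
    Returns overlapping cross pairs as (index_in_r, index_in_s)."""
    out = []
    for i, (a1, b1) in enumerate(r):
        for j, (a2, b2) in enumerate(s):
            if a1 < b2 and a2 < b1:
                out.append((i, j))
    return out
```

This runs in O(|r|·|s|) time; the point of O2iJoin's two-layer inverted index is to visit only the lists that must contribute results.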
16. “Compacted” procedures for adults’ simple addition: A review and critique of the evidence.
- Author
- Chen, Yalin and Campbell, Jamie I. D.
- Subjects
- *ADDITION reactions, *SUBROUTINES (Computer programs), *COUNTING, *INTERFERENCE (Linguistics), *DIGITAL counters
- Abstract
We review recent empirical findings and arguments proffered as evidence that educated adults solve elementary addition problems (3 + 2, 4 + 1) using so-called compacted procedures (e.g., unconscious, automatic counting); a conclusion that could have significant pedagogical implications. We begin with the large-sample experiment reported by Uittenhove, Thevenot and Barrouillet (2016, Cognition, 146, 289-303), which tested 90 adults on the 81 single-digit addition problems from 1 + 1 to 9 + 9. They identified the 12 very-small addition problems with different operands both ≤ 4 (e.g., 4 + 3) as a distinct subgroup of problems solved by unconscious, automatic counting: These items yielded a near-perfectly linear increase in answer response time (RT) yoked to the sum of the operands. Using the data reported in the article, however, we show that there are clear violations of the sum-counting model's predictions among the very-small addition problems, and that there is no real RT boundary associated with addends ≤ 4. Furthermore, we show that a well-known associative retrieval model of addition facts—the network interference theory (Campbell, 1995)—predicts the results observed for these problems with high precision. We also review the other types of evidence adduced for the compacted procedure theory of simple addition and conclude that these findings are unconvincing in their own right and only distantly consistent with automatic counting. We conclude that the cumulative evidence for fast compacted procedures for adults' simple addition does not justify revision of the long-standing assumption that direct memory retrieval is ultimately the most efficient process of simple addition for nonzero problems, let alone sufficient to recommend significant changes to basic addition pedagogy. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
17. Activity in Boolean networks.
- Author
- Adiga, Abhijin, Galyean, Hilton, Kuhlman, Chris, Levet, Michael, Mortveit, Henning, and Wu, Sichao
- Subjects
- *BOOLEAN functions, *GRAPH theory, *PROBABILITY theory, *SUBROUTINES (Computer programs), *CELLULAR automata
- Abstract
In this paper we extend the notion of activity for Boolean networks introduced by Shmulevich and Kauffman (Phys Rev Lett 93(4):48701:1-4, 2004). In contrast to existing theory, we take into account the actual graph structure of the Boolean network. The notion of activity measures the probability that a perturbation in an initial state produces a different successor state than that of the original unperturbed state. It captures the notion of sensitive dependence on initial conditions, and provides a way to rank vertices in terms of how they may impact predictions. We give basic results that aid in the computation of activity and apply this to Boolean networks with threshold and NOR functions for elementary cellular automata, d-regular trees, square lattices, triangular lattices, and the Erdős–Rényi random graph model. We conclude with some open questions and thoughts on directions for future research related to activity, including long-term activity. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
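The activity notion in this abstract has a direct computational reading: the activity of vertex i is the probability that flipping bit i of a state changes the network's successor state. The exhaustive sketch below (feasible only for small n) is an illustration of that definition under the assumption of a uniform distribution over states; it is not the authors' code.

```python
# Sketch: activity of vertex i in a Boolean network, computed
# exhaustively over all 2^n states. update maps a 0/1 state tuple
# to its successor tuple under the network's update functions.
from itertools import product

def activity(update, n, i):
    """Fraction of states whose successor changes when bit i is flipped."""
    changed = 0
    for state in product((0, 1), repeat=n):
        flipped = list(state)
        flipped[i] ^= 1  # perturb vertex i
        if update(state) != update(tuple(flipped)):
            changed += 1
    return changed / 2 ** n
```

For the 2-vertex swap network `lambda s: (s[1], s[0])`, flipping either bit always changes the successor, so each vertex has activity 1.0; a constant network gives activity 0.0.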
18. Photonic Side-Channel Analysis of Arbiter PUFs.
- Author
- Tajik, Shahin, Dietz, Enrico, Frohmann, Sven, Dittrich, Helmar, Nedospasov, Dmitry, Helfmeier, Clemens, Seifert, Jean-Pierre, Boit, Christian, and Hübers, Heinz-Wilhelm
- Subjects
- PHOTONICS, INFORMATION storage & retrieval systems, SUBROUTINES (Computer programs), LOGIC devices, COMPUTER architecture
- Abstract
As the name suggests, physically unclonable functions (PUFs) are considered an ultimate solution for dealing with insecure storage, hardware counterfeiting, and many other security problems. However, many different successful attacks have already revealed vulnerabilities of certain digital intrinsic PUFs. This paper demonstrates that the legacy arbiter PUF and its popular extended versions (i.e., feed-forward and XOR-enhanced) can be completely and linearly characterized by means of photonic emission analysis. Our experimental setup is capable of measuring every PUF internal delay with a resolution of 6 ps. Due to this resolution, we indeed require only the theoretical minimum number of linearly independent equations (i.e., physical measurements) to directly solve the underlying inhomogeneous linear system. Moreover, it is not required to know the actual PUF responses for our physical delay extraction. We present our practical results for an arbiter PUF implementation on a complex programmable logic device manufactured with a 180 nm process. Finally, we give insight into photonic emission analysis of arbiter PUFs on smaller chip architectures by performing experiments on a field programmable gate array manufactured with a 60 nm process. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
19. Single-projection procedure for linear optimization.
- Author
- Nurminski, E.
- Subjects
- LINEAR programming, ORTHOGRAPHIC projection, LINEAR complementarity problem, CONSTRAINT programming, CONES (Operator theory), SUBROUTINES (Computer programs)
- Abstract
It is shown in this paper that under strict complementarity condition, a linear programming problem can be solved by a single orthogonal projection operation onto the cone generated by rows of constraint matrix and corresponding right-hand sides. The efficient projection procedure with the finite termination is provided and computational experiments are reported. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
20. Interactive function computation via polar coding.
- Author
- Gülcü, T. and Barg, A.
- Subjects
- *INTERACTIVE computer systems, *SUBROUTINES (Computer programs), *INFORMATION storage & retrieval systems, *COMPUTER terminals, *MATHEMATICAL models
- Abstract
In a series of papers (2011-2013) N. Ma and P. Ishwar considered a range of distributed source coding problems that arise in the context of interactive computation of functions, characterizing the region of achievable communication rates. We consider the problems of interactive computation of functions by two terminals and interactive computation in a collocated network, showing that the rate regions for both these problems can be achieved using several rounds of polar-coded transmissions. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
21. Exact Sublinear Binomial Sampling.
- Author
- Farach-Colton, Martín and Tsai, Meng-Tsung
- Subjects
- *BINOMIAL distribution, *SUBROUTINES (Computer programs), *ALGORITHM research, *SIMULATION methods & models, *VARIATE difference method
- Abstract
Drawing a random variate from a given binomial distribution B(n, p) is an important subroutine in many large-scale simulations. The naive algorithm takes O(n) time w.h.p. in the WordRAM model, which is too slow in many settings, though to its credit, it does not suffer from precision loss. The problem of sampling from a binomial distribution in sublinear time has been extensively studied and implemented in such packages as R and the GNU Scientific Library; however, all previous sublinear-time algorithms involve precision loss, which introduces artifacts such as discontinuities into the sampling. In this paper, we present the first algorithm, to the best of our knowledge, that samples binomial distributions in sublinear time with no precision loss. We assume that each bit of p can be obtained in O(1) time. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
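The naive O(n) baseline that the abstract above contrasts against can be sketched as a sum of Bernoulli trials (a minimal sketch; the paper's contribution, a sublinear sampler with no precision loss, is far more involved and is not reproduced here):

```python
import random

def naive_binomial(n, p, rng=None):
    """Draw one variate from B(n, p) by summing n Bernoulli(p) trials.

    This is the O(n) baseline: slow for large n, but it loses no precision
    beyond that already present in p itself.
    """
    rng = rng or random.Random()
    return sum(1 for _ in range(n) if rng.random() < p)
```

The sublinear algorithms the abstract surveys avoid this linear scan entirely, which is where the precision-loss trade-off historically entered.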
22. A distributed protocol for privacy preserving aggregation with non-permanent participants.
- Author
-
Benkaouz, Yahya and Erradi, Mohammed
- Subjects
- *
ACQUISITION of data , *PRIVACY , *COMPUTER simulation , *SUBROUTINES (Computer programs) , *PROBABILITY theory - Abstract
Recent advances in techniques that combine and analyze data collected from multiple partners have led to many promising distributed collaborative applications. Such collaborative computations could occur between trusted partners, between partially trusted partners, or between competitors; preserving privacy is therefore an important issue in this context. This paper presents a distributed protocol for privacy-preserving aggregation that enables computing a class of aggregation functions, namely those expressible as the operation of an Abelian group. The proposed protocol is based on an overlay structure that enables secret sharing without the need for any central authority or heavyweight cryptography. It preserves data privacy in that each participant's data is known only to its owner with a given probability. The aggregation result is computed by the participants themselves without interacting with a specific aggregator, and it is accurate when there is no data loss. A strategy to handle node failures is given, along with a study of the privacy ensured by the suggested protocol. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
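The Abelian-group secret sharing that underlies such aggregation protocols can be illustrated with additive shares over the integers modulo a fixed modulus (a minimal sketch with names of my own choosing; the paper's overlay structure, non-permanent participants, and probabilistic privacy guarantees are not modeled here):

```python
import random

GROUP_ORDER = 2 ** 32  # illustrative choice; any finite Abelian group works

def make_shares(value, k):
    """Split a private value into k additive shares over Z_GROUP_ORDER.

    Any k-1 shares are uniformly random; only the sum of all k reveals value.
    """
    shares = [random.randrange(GROUP_ORDER) for _ in range(k - 1)]
    shares.append((value - sum(shares)) % GROUP_ORDER)
    return shares

def aggregate(all_shares):
    """Sum every share from every participant: equals the sum of the inputs,
    without any single participant's value ever being exposed."""
    return sum(s for shares in all_shares for s in shares) % GROUP_ORDER
```

No central aggregator appears above: each participant could sum the shares it receives and publish only that partial sum, which is the spirit of the protocol's aggregator-free design.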
23. Graver basis and proximity techniques for block-structured separable convex integer minimization problems.
- Author
-
Hemmecke, Raymond, Köppe, Matthias, and Weismantel, Robert
- Subjects
- *
INTEGER programming , *STOCHASTIC analysis , *COMBINATORIAL optimization , *POLYNOMIAL time algorithms , *SUBROUTINES (Computer programs) - Abstract
We consider $$N$$-fold $$4$$-block decomposable integer programs, which simultaneously generalize $$N$$-fold integer programs and two-stage stochastic integer programs with $$N$$ scenarios. In previous work (Hemmecke et al. in Integer programming and combinatorial optimization. Springer, Berlin), it was proved that for fixed blocks but variable $$N$$, these integer programs are polynomial-time solvable for any linear objective. We extend this result to the minimization of separable convex objective functions. Our algorithm combines Graver basis techniques with a proximity result (Hochbaum and Shanthikumar in J. ACM 37:843-862), which allows us to use convex continuous optimization as a subroutine. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
24. Empirical Installation of Linear Algebra Shared-Memory Subroutines for Auto-Tuning.
- Author
-
Cámara, Jesús, Cuenca, Javier, Giménez, Domingo, García, Luis, and Vidal, Antonio
- Subjects
- *
SUBROUTINES (Computer programs) , *EMPIRICAL research , *COMPUTER software installation , *MATRIX multiplications , *SIMULTANEOUS multithreading processors , *NUMERICAL solutions for linear algebra - Abstract
The introduction of auto-tuning techniques in linear algebra shared-memory routines is analyzed. Information obtained during the installation of the routines is used at run time to take decisions that reduce the total execution time. The study is carried out with routines at different levels (matrix multiplication, LU and Cholesky factorizations, and routines for symmetric or general linear systems) and with calls to multithreaded routines in the LAPACK and PLASMA libraries. Medium NUMA and large cc-NUMA systems are used in the experiments. This variety of routines, libraries and systems allows us to draw general conclusions about the methodology to use for auto-tuning linear algebra shared-memory routines. Satisfactory execution times are obtained with the proposed methodology. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
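The install-then-decide idea can be sketched as a toy auto-tuner that times candidate routines at installation and records the fastest choice per problem size for later lookup; the function names and table layout below are illustrative, not the authors' framework:

```python
import time

def timed(fn, x):
    """Wall-clock time of a single call to fn(x)."""
    start = time.perf_counter()
    fn(x)
    return time.perf_counter() - start

def tune(candidates, sample_inputs, reps=3):
    """Installation phase: benchmark each candidate routine on sample inputs
    and record the fastest per input size. At run time, the caller looks up
    the nearest size in the returned table instead of re-benchmarking."""
    table = {}
    for x in sample_inputs:
        timings = {name: min(timed(fn, x) for _ in range(reps))
                   for name, fn in candidates.items()}
        table[len(x)] = min(timings, key=timings.get)
    return table
```

Taking the minimum over repetitions rather than the mean is a common auto-tuning choice, since it filters out scheduling noise on shared-memory machines.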
25. Object joint detection and tracking using adaptive multiple motion models.
- Author
-
Wang, Zhijie, Ben Salah, Mohamed, and Zhang, Hong
- Subjects
- *
PREDICTION models , *SUBROUTINES (Computer programs) , *KINEMATICS , *JOINTS (Engineering) , *MOTION , *EXPERIMENTAL design - Abstract
This paper deals with the problem of detecting objects that may switch between different motion models. In order to accurately detect these moving objects taking into account possible changing motion models, we propose an adaptive multi-motion model in the joint detection and tracking (JDT) framework. The proposed technique differs from the existing JDT-based methods mainly in two ways. First we express the solution in the JDT framework via a formulation in the multiple motion model setting. Second, we introduce a new motion model prediction function which exploits the correlation between the motion model and object kinematic state. Experiments on both synthetic and real videos demonstrate that the JDT method employing the proposed adaptive multi-motion model can detect objects more accurately than the existing peer methods when objects change their motion models. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
26. Swallow swarm optimization algorithm: a new method to optimization.
- Author
-
Neshat, Mehdi, Sepidnam, Ghodrat, and Sargolzaei, Mehdi
- Subjects
- *
PARTICLE swarm optimization , *PARTICLES , *SUBROUTINES (Computer programs) , *COMPUTATIONAL intelligence , *ARTIFICIAL intelligence , *COMPUTER algorithms - Abstract
This paper presents a new swarm intelligence-based optimization method that models the movement and other behaviors of swallow swarms. There are three kinds of particles in this method: explorer particles, aimless particles, and leader particles. Each particle has its own characteristics, but all of them fly within a central colony. Each particle behaves intelligently, perpetually exploring its surroundings with an adaptive radius; the positions of neighboring particles, the local leader, and the public leader are considered before a move is made. The swallow swarm optimization (SSO) algorithm has proved highly efficient: it moves quickly across flat areas (regions where there is no hope of finding food and the derivative is zero), avoids getting stuck at local extrema, converges fast, and has particles that participate intelligently in the different groups. The SSO algorithm has been tested on 19 benchmark functions and achieved good results on multimodal, rotated and shifted functions. Its results have been compared to standard PSO, the FSO algorithm, and ten different variants of PSO. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
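A heavily simplified reading of the explorer-particle idea is sketched below: each particle drifts toward both its own best position and the swarm's public leader. The coefficients, and the omission of aimless particles, sub-colonies, and adaptive radii, are simplifications of mine, not the paper's algorithm:

```python
import random

def sso_sketch(f, dim=2, particles=20, iters=200, seed=1):
    """Minimize f with a toy explorer-particle swarm (illustrative only)."""
    rng = random.Random(seed)
    xs = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(particles)]
    local = [x[:] for x in xs]            # each particle's best-so-far
    leader = min(xs, key=f)[:]            # public leader: swarm's best-so-far
    for _ in range(iters):
        for i, x in enumerate(xs):
            for d in range(dim):
                # random pull toward the local best and the public leader
                x[d] += (rng.random() * (local[i][d] - x[d])
                         + rng.random() * (leader[d] - x[d]))
            if f(x) < f(local[i]):
                local[i] = x[:]
        leader = min(local + [leader], key=f)[:]
    return leader
```

Because the leader is always the best position seen so far, the returned objective value never exceeds that of the best initial particle.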
27. Coverage-based search result diversification.
- Author
-
Zheng, Wei, Wang, Xuanhui, Fang, Hui, and Cheng, Hong
- Subjects
- *
INFORMATION retrieval , *QUERYING (Computer science) , *SEARCH algorithms , *PROGRAM transformation , *SUBROUTINES (Computer programs) - Abstract
Traditional retrieval models may provide users with a less satisfactory search experience because documents are scored independently and the top-ranked documents often contain excessively redundant information. Intuitively, it is more desirable to diversify search results so that the top-ranked documents can cover different query subtopics, i.e., different pieces of relevant information. In this paper, we study the problem of search result diversification in an optimization framework whose objective is to maximize a coverage-based diversity function. We first define the diversity score of a set of search results by measuring the coverage of query subtopics in the result set, and then discuss how to use such scores to derive diversification methods. The key challenge here is how to define an appropriate coverage function given a query and a set of search results. To address this challenge, we propose and systematically study three different strategies for defining coverage functions, based on summations, loss functions and evaluation measures respectively. Each of these coverage functions leads to a result diversification method. We show that the proposed coverage-based diversification methods not only cover several state-of-the-art methods but also allow us to derive new ones. We compare these methods both analytically and empirically. Experiment results on two standard TREC collections show that all the methods are effective for diversification and that the new methods can outperform existing ones. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
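A summation-style coverage function combined with greedy selection can be sketched as follows. The document ids and subtopic sets are hypothetical, and this sketch ignores relevance weights and the loss-function- and evaluation-measure-based coverage variants the paper also studies:

```python
def greedy_diversify(candidates, subtopics, k):
    """Greedy re-ranking: repeatedly pick the document whose subtopics add
    the most not-yet-covered pieces of relevant information.

    `subtopics` maps each doc id to the set of query subtopics it covers.
    """
    chosen, covered = [], set()
    pool = list(candidates)
    while pool and len(chosen) < k:
        best = max(pool, key=lambda d: len(subtopics[d] - covered))
        chosen.append(best)
        covered |= subtopics[best]
        pool.remove(best)
    return chosen
```

The marginal-gain criterion is what pushes redundant documents (those covering already-seen subtopics) down the ranking.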
28. On the complexity of the herding attack and some related attacks on hash functions.
- Author
-
Blackburn, Simon, Stinson, Douglas, and Upadhyay, Jalaj
- Subjects
HASHING ,SUBROUTINES (Computer programs) ,RANDOM graphs ,GRAPH theory ,TECHNOLOGICAL complexity - Abstract
In this article, we analyze the complexity of the construction of the 2-diamond structure proposed by Kelsey and Kohno (LNCS, Vol 4004, pp 183-200). We point out a flaw in their analysis and show that their construction may not produce the desired diamond structure. We then give a more rigorous and detailed complexity analysis of the construction of a diamond structure. For this, we appeal to random graph theory (in particular, to the theory of random intersection graphs), which allows us to determine sharp necessary and sufficient conditions for the message complexity (i.e., the number of hash computations required to build the required structure). We also analyze the computational complexity of constructing a diamond structure, which has not been previously studied in the literature. Finally, we study the impact of our analysis on herding and other attacks that use the diamond structure as a subroutine. Precisely, our results show the following: due to the above two results, the herding attack (Kelsey and Kohno, LNCS, Vol 4004, pp 183-200) and the second preimage attack (Andreeva et al., LNCS, Vol 4965, pp 270-288) on iterated hash functions have increased complexity. We also show that the message complexity of herding and second preimage attacks on 'hash twice' is n times the complexity claimed by Andreeva et al. (LNCS, Vol 5867, pp 393-414), by giving a more detailed analysis of the attack. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
29. The interaction of contracts and laziness.
- Author
-
Degen, Markus, Thiemann, Peter, and Wehr, Stefan
- Subjects
FUNCTIONAL programming (Computer science) ,SUBROUTINES (Computer programs) ,PREDICATE (Logic) ,ASSERTIONS (Logic) ,PROGRAMMING language semantics ,COMPUTER science - Abstract
Contract monitoring for strict higher-order functional languages has an intuitive meaning, an established theoretical basis, and a standard implementation. For lazy functional languages, the situation is less clear-cut. There is no agreed-upon intended meaning or theory, and there are competing implementations with subtle semantic differences. This paper proposes meaning preservation and completeness as formally defined properties for evaluating implementations of contract monitoring. Both properties have definitions that can be checked by straightforward inductive proof. A survey of existing suggestions for lazy contract systems reveals that some are meaning preserving, some are complete, and some have neither property. The main result is that contract monitoring for lazy functional languages cannot be complete and meaning preserving at the same time, although each property can be achieved in isolation. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
30. Adaptive Zero-Knowledge Proofs and Adaptively Secure Oblivious Transfer.
- Author
-
Lindell, Yehuda and Zarosim, Hila
- Subjects
COMPUTER security ,COMPUTER science ,COMPUTER network protocols ,DATA security ,CRYPTOGRAPHY ,SUBROUTINES (Computer programs) ,DATA protection - Abstract
In the setting of secure computation, a set of parties wish to securely compute some function of their inputs, in the presence of an adversary. The adversary in question may be static (meaning that it controls a predetermined subset of the parties) or adaptive (meaning that it can choose to corrupt parties during the protocol execution and based on what it sees). In this paper, we study two fundamental questions relating to the basic zero-knowledge and oblivious transfer protocol problems: We provide surprising answers to the above questions, showing that achieving adaptive security is sometimes harder than achieving static security, and sometimes not. First, we show that assuming the existence of one-way functions only, there exist adaptive zero-knowledge proofs for all languages in $\mathcal{NP}$. In order to prove this, we overcome the problem that all adaptive zero-knowledge protocols known until now used equivocal commitments (which would enable an all-powerful prover to cheat). Second, we prove a black-box separation between adaptively secure oblivious transfer and enhanced trapdoor permutations. As a corollary, we derive a black-box separation between adaptively and statically secure oblivious transfer. This is the first black-box separation to relate to adaptive security and thus the first evidence that it is indeed harder to achieve security in the presence of adaptive adversaries than in the presence of static adversaries. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
31. Refined typing to localize the impact of forced strictness on free theorems.
- Author
-
Seidel, Daniel and Voigtländer, Janis
- Subjects
- *
PROGRAMMING languages , *PROOF theory , *SUBROUTINES (Computer programs) , *COMPUTER algorithms , *COMPUTER programming , *FUNCTIONAL programming (Computer science) , *HASKELL (Computer program language) - Abstract
Free theorems establish interesting properties of parametrically polymorphic functions, solely from their types, and serve as a nice proof tool. For pure and lazy functional programming languages, they can be used with very few preconditions. Unfortunately, in the presence of selective strictness, as provided in languages like Haskell, their original strength is reduced. In this paper we present an approach for overcoming this weakness in specific situations. Employing a refined type system which tracks the use of enforced strict evaluation, we rule out unnecessary restrictions that otherwise emerge. Additionally, we provide (and implement) an algorithm determining all refined types for a given term. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
32. Efficient algorithms for ranking with SVMs.
- Author
-
Chapelle, O. and Keerthi, S.
- Subjects
- *
INFORMATION retrieval , *SUPPORT vector machines , *MATHEMATICAL optimization , *RANKING , *ALGORITHM research , *NEWTON-Raphson method , *MAGNITUDE estimation , *RANKINGS of websites , *SUBROUTINES (Computer programs) , *MANAGEMENT - Abstract
RankSVM (Herbrich et al. in Advances in large margin classifiers. MIT Press, Cambridge, MA, 2000; Joachims in Proceedings of the ACM conference on knowledge discovery and data mining (KDD), 2002) is a pairwise method for designing ranking models. SVMLight is the only publicly available software for RankSVM. It is slow, and previous evaluations, based on incomplete training with it, show RankSVM to have inferior ranking performance. We propose new methods based on the primal Newton method to speed up RankSVM training and show that they are 5 orders of magnitude faster than SVMLight. Evaluation on the Letor benchmark datasets after complete training with such methods shows that the performance of RankSVM is excellent. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
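The pairwise objective that RankSVM minimizes can be sketched as follows. A squared hinge is used here because it makes the primal differentiable and hence amenable to Newton steps; taking this exact loss is an assumption of mine, not a claim about the paper's code:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def ranksvm_objective(w, X, pairs, C=1.0):
    """Primal RankSVM objective over preference pairs (i, j), meaning
    document i should score above document j:

        0.5 * ||w||^2 + C * sum max(0, 1 - w . (x_i - x_j))^2
    """
    margin_loss = sum(
        max(0.0, 1.0 - dot(w, [a - b for a, b in zip(X[i], X[j])])) ** 2
        for i, j in pairs)
    return 0.5 * dot(w, w) + C * margin_loss
```

The number of preference pairs can be quadratic in the number of documents; the paper's speedups come from evaluating this sum (and its derivatives) without materializing every pair.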
33. On hash functions using checksums.
- Author
-
Gauravaram, Praveen, Kelsey, John, Knudsen, Lars, and Thomsen, Søren
- Subjects
- *
INFORMATION technology security , *HASHING , *SUBROUTINES (Computer programs) , *COMPUTER crimes , *COMPUTER security - Abstract
We analyse the security of iterated hash functions that compute an input dependent checksum which is processed as part of the hash computation. We show that a large class of such schemes, including those using non-linear or even one-way checksum functions, is not secure against the second preimage attack of Kelsey and Schneier, the herding attack of Kelsey and Kohno and the multicollision attack of Joux. Our attacks also apply to a large class of cascaded hash functions. Our second preimage attacks on the cascaded hash functions improve the results of Joux presented at Crypto’04. We also apply our attacks to the MD2 and GOST hash functions. Our second preimage attacks on the MD2 and GOST hash functions improve the previous best known short-cut second preimage attacks on these hash functions by factors of at least 2^26 and 2^54, respectively. Our herding and multicollision attacks on the hash functions based on generic checksum functions (e.g., one-way) are a special case of the attacks on the cascaded iterated hash functions previously analysed by Dunkelman and Preneel and are not better than their attacks. On hash functions with easily invertible checksums, our multicollision and herding attacks (if the hash value is short as in MD2) are more efficient than those of Dunkelman and Preneel. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
34. Efficient Algorithms for the Problems of Enumerating Cuts by Non-decreasing Weights.
- Author
-
Yeh, Li-Pu, Wang, Biing-Feng, and Su, Hsin-Hao
- Subjects
- *
ALGORITHMS , *GRAPH theory , *DIRECTED graphs , *SUBROUTINES (Computer programs) , *FACTORS (Algebra) , *MATHEMATICAL analysis - Abstract
In this paper, we study the problems of enumerating cuts of a graph by non-decreasing weights. There are four problems, depending on whether the graph is directed or undirected, and on whether we consider all cuts of the graph or only s-t cuts for a given pair of vertices s, t. Efficient algorithms for these problems with delay between two successive outputs have been known since 1992, due to Vazirani and Yannakakis. In this paper, improved algorithms are presented; the delays of the presented algorithms are O(nm log(n^2/m)). Vazirani and Yannakakis's algorithms have been used as basic subroutines in the solutions of many problems, so our improvement immediately reduces the running time of these solutions. For example, for the minimum k-cut problem, the upper bound is immediately reduced by a factor of for k=3,4,5,6. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
35. On the Autoreducibility of Functions.
- Author
-
Faliszewski, Piotr and Ogihara, Mitsunori
- Subjects
- *
SUBROUTINES (Computer programs) , *PROGRAMMING languages , *COMPUTER systems , *ELECTRONIC data processing , *C (Computer program language) - Abstract
This paper studies the notions of self-reducibility and autoreducibility. Our main result regarding length-decreasing self-reducibility is that any complexity class $\mathcal{C}$ that has a (logspace) complete language and is closed under polynomial-time (logspace) padding has the property that if all $\mathcal{C}$-complete languages are length-decreasing (logspace) self-reducible then $\mathcal{C}\subseteq\mathrm{P}$ ($\mathcal{C}\subseteq\mathrm{L}$). In particular, this result applies to NL, NP and PSPACE. We also prove an equivalent of this theorem for function classes (for example, for #P). We also show that for several hard function classes, in particular for #P, all their complete functions are deterministically autoreducible. In particular, we show the following result. Let f be a #P parsimonious function with two preimages of 0. We show that there are two FP functions h and t such that for all inputs x we have f(x) = t(x) + f(h(x)), h(x) ≠ x, and t(x) ∈ {0, 1}. Our results regarding single-query autoreducibility of #P functions can be contrasted with random self-reducibility, for which it is known that if a #P-complete function were random self-reducible with one query then the polynomial hierarchy would collapse. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
36. HOL-Boogie—An Interactive Prover-Backend for the Verifying C Compiler.
- Author
-
Sascha Böhme, Michał Moskal, Wolfram Schulte, and Burkhart Wolff
- Subjects
PROGRAMMING languages ,COMPILERS (Computer programs) ,C (Computer program language) ,C# (Computer program language) ,SOFTWARE verification ,SUBROUTINES (Computer programs) - Abstract
Boogie is a verification condition generator for an imperative core language. It has front-ends for the programming languages C# and C enriched by annotations in first-order logic, i.e. pre- and postconditions, assertions, and loop invariants. Moreover, concepts like ghost fields, ghost variables, ghost code and specification functions have been introduced to support a specific modeling methodology. Boogie’s verification conditions—constructed via a wp calculus from annotated programs—are usually transferred to automated theorem provers such as Simplify or Z3. This also comprises the expansion of language-specific modeling constructs in terms of a theory describing memory and elementary operations on it; this theory is called a machine/memory model. In this paper, we present a proof environment, HOL-Boogie, that combines Boogie with the interactive theorem prover Isabelle/HOL, for a specific C front-end and a machine/memory model. In particular, we present specific techniques combining automated and interactive proof methods for code verification. The main goal of our environment is to help program verification engineers in their task to “debug” annotations and to find combined proofs where purely automatic proof attempts fail. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
37. Notes on the value function.
- Author
-
Zihui Liu and Wende Chen
- Subjects
MATHEMATICAL functions software ,SUBROUTINES (Computer programs) ,CIPHERS ,COMPUTER networks ,CRYPTOGRAPHY ,CODING theory ,GENERALIZABILITY theory - Abstract
The properties satisfied by the value function are given. These properties can be used to study the generalized Hamming weight and the relative generalized Hamming weight of certain linear codes. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
38. Circuit Complexity of Regular Languages.
- Author
-
Koucký, Michal
- Subjects
- *
COMPUTATIONAL complexity , *MATHEMATICS , *COMPUTER science , *SUFFICIENT statistics , *SUBROUTINES (Computer programs) , *COMPUTABLE functions - Abstract
We survey the current state of knowledge on the circuit complexity of regular languages and we prove that regular languages that are in AC0 and ACC0 are all computable by almost linear size circuits, extending the result of Chandra et al. (J. Comput. Syst. Sci. 30:222–234). As a consequence we obtain that in order to separate ACC0 from NC1 it suffices to prove, for some ε > 0, an Ω(n^{1+ε}) lower bound on the size of ACC0 circuits computing certain NC1-complete functions. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
39. Formalization of interrelations between operators and data within the framework of an extended algebra of algorithms.
- Author
-
Akulovsky, V. G.
- Subjects
- *
ALGORITHM research , *SYSTEM analysis software , *SYNTAX in programming languages , *SUBROUTINES (Computer programs) , *MODULAR design , *PROGRAMMING languages - Abstract
A system of algorithmic algebras is considered whose basic concepts are newly interpreted to formalize the interrelation between operators and data of such a system. A modified formal instrument is constructed that extends the possibilities of design and transformation of regular schemes of algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
40. Standardization and testing of implementations of mathematical functions in floating point numbers.
- Author
-
V. Kuliamin
- Subjects
- *
FLOATING-point arithmetic , *COMPUTER arithmetic , *SUBROUTINES (Computer programs) , *MATHEMATICAL functions , *COMPUTER network protocols , *STANDARDIZATION - Abstract
Requirements definition and test suites development for implementations of mathematical functions in floating point arithmetic in the framework of the IEEE 754 standard are considered. A method based on this standard is proposed for defining requirements for such functions. This method can be used for the standardization of implementations of such functions; this kind of standardization extends IEEE 754. A method for designing test suites for the verification of those requirements is presented. The proposed methods are based on specific properties of the representation of floating point numbers and on some features of the functions under examination. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
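One ingredient of such test-suite design is probing an implementation at the representable neighbours of critical inputs, which IEEE 754 arithmetic makes easy to enumerate. The sketch below (using Python's standard `math.nextafter`) shows only the boundary-value idea; the paper's point-selection rules are considerably richer:

```python
import math

def boundary_points(x):
    """Test inputs clustered around x: the value itself and its immediate
    IEEE 754 double-precision neighbours, where a math-function
    implementation's rounding is most likely to go wrong."""
    return [math.nextafter(x, -math.inf), x, math.nextafter(x, math.inf)]
```

Around 1.0, for example, the gap to the next double above is 2^-52 while the gap to the next double below is 2^-53, a representation asymmetry that such test suites deliberately exercise.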
41. A Reducibility Concept for Problems Defined in Terms of Ordered Binary Decision Diagrams.
- Author
-
Meinel, Ch. and Slobodová, A.
- Subjects
- *
COMPUTATIONAL complexity , *SUBROUTINES (Computer programs) - Abstract
Reducibility concepts are fundamental in complexity theory. Usually, they are defined as follows: a problem Π is reducible to a problem Σ if Π can be computed using a program or device for Σ as a subroutine. However, this approach has its limitations if restricted computational models are considered. In the case of ordered binary decision diagrams (OBDDs), it essentially allows only the almost unmodified original program to be used as the subroutine. Here we propose a new reducibility for OBDDs: we say that Π is reducible to Σ if an OBDD for Π can be constructed by applying a sequence of elementary operations to an OBDD for Σ. In contrast to traditional reducibility notions, the newly introduced reduction is able to reflect the real needs of a reducibility concept in the context of OBDD-based complexity classes: it allows the reduction of problems to others which are computable with the same amount of OBDD resources, and it gives a tool to carry over lower and upper bounds. [ABSTRACT FROM AUTHOR]
- Published
- 1997
- Full Text
- View/download PDF
42. LIST OF SUBROUTINES.
- Subjects
SUBROUTINES (Computer programs) - Abstract
Presents a list of computer subroutines published in the April 1988 issue of the journal "Annals of Operations Research."
- Published
- 1988
43. Using T-SQL functions and summarizing results: analyzing and summarizing data in SQL Server
- Author
-
Apress (Firm), production company. and Deardurff, John, speaker.
- Published
- 2019
44. Statistical copolymerization of N-vinyl-pyrrolidone and alkyl methacrylates via RAFT: reactivity ratios and thermal analysis.
- Author
-
Mitsoni, Eleftheria, Roka, Nikoletta, and Pitsikalis, Marinos
- Subjects
- *
DIBLOCK copolymers , *METHACRYLATES , *THERMAL analysis , *SUBROUTINES (Computer programs) , *COPOLYMERIZATION , *RATIO analysis - Abstract
The synthesis of statistical copolymers of N-vinylpyrrolidone (NVP) with the alkyl methacrylates: hexyl methacrylate (HMA) and stearyl methacrylate (SMA), is conducted by reversible addition-fragmentation chain transfer polymerization (RAFT), employing [(O-ethylxanthyl) methyl]benzene and [1-(O-ethylxanthyl) ethyl]benzene as the RAFT agents. The reactivity ratios are estimated using the Fineman-Ross, inverted Fineman-Ross, Kelen-Tudos and Barson-Fenn graphical methods as well as the computer program COPOINT, modified to both the terminal and the penultimate model. In all cases, the NVP reactivity ratio is significantly lower than that of the methacrylates. Structural parameters of the copolymers are obtained by calculating the dyad and triad sequence fractions and the mean sequence length. The thermal properties of the copolymers are studied by Differential Scanning Calorimetry and Thermogravimetric Analysis, and the results are compared with those of the respective homopolymers. In spite of the relatively small amount of NVP in the copolymers, copolymer thermal properties are influenced by both monomers. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
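The Fineman-Ross graphical method named in the abstract above is a simple linearization that can be reproduced in a few lines (a textbook sketch, not the authors' COPOINT program; the variable names are mine):

```python
def fineman_ross(feed_ratios, copolymer_ratios):
    """Estimate reactivity ratios r1, r2 via the Fineman-Ross linearization.

    With x = [M1]/[M2] in the feed and y = d[M1]/d[M2] in the copolymer,
    G = x(y-1)/y and H = x^2/y satisfy G = r1*H - r2, so a least-squares
    line through the points (H, G) gives r1 as the slope and r2 as minus
    the intercept.
    """
    G = [x * (y - 1.0) / y for x, y in zip(feed_ratios, copolymer_ratios)]
    H = [x * x / y for x, y in zip(feed_ratios, copolymer_ratios)]
    n = len(G)
    mh, mg = sum(H) / n, sum(G) / n
    slope = (sum((h - mh) * (g - mg) for h, g in zip(H, G))
             / sum((h - mh) ** 2 for h in H))
    intercept = mg - slope * mh
    return slope, -intercept  # (r1, r2)
```

On data generated exactly from the terminal-model copolymer equation the fit recovers the ratios exactly; on real data, the different graphical methods (inverted Fineman-Ross, Kelen-Tudos, Barson-Fenn) weight experimental error differently, which is why the paper compares several.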
45. Multi-criteria ranking of voice transmission carriers of a telecommunication company using PROMETHEE.
- Author
-
Boatemaa, Beatrice, Appati, Justice K., and Darkwah, Kwaku F.
- Subjects
MULTIPLE criteria decision making ,GAUSSIAN processes ,INFORMATION & communication technologies ,TELECOMMUNICATION ,SUBROUTINES (Computer programs) - Abstract
This paper ranks the voice transmission carriers used by mobile telecommunication companies. The versatility of the PROMETHEE method makes it a useful tool for this ranking process. In our approach, however, the logistic preference function, a recently proposed preference function, was adopted in the ranking procedure in place of the Gaussian preference function. The results obtained with the logistic preference function are similar to those of the Gaussian preference function, with both reaching an optimal ranking at the complete ranking stage. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
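The two preference-function shapes being compared can be sketched as follows. The Gaussian form is the standard PROMETHEE type-VI criterion; the logistic form below is an illustrative guess at a logistic-shaped alternative rescaled to [0, 1), not necessarily the paper's exact definition:

```python
import math

def gaussian_pref(d, s=1.0):
    """PROMETHEE type-VI (Gaussian) preference: 0 for a non-positive
    pairwise difference d, rising smoothly toward 1 with scale s."""
    return 0.0 if d <= 0 else 1.0 - math.exp(-(d * d) / (2.0 * s * s))

def logistic_pref(d, s=1.0):
    """A logistic-shaped alternative (illustrative form), also 0 for
    non-positive d and increasing toward 1."""
    return 0.0 if d <= 0 else 2.0 / (1.0 + math.exp(-d / s)) - 1.0
```

Both map a pairwise criterion difference to a preference degree in [0, 1), which is why swapping one for the other leaves the overall PROMETHEE outranking flows, and often the final ranking, largely unchanged.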
46. Element distinctness revisited.
- Author
-
Portugal, Renato
- Subjects
- *
COMPUTER science , *SUBROUTINES (Computer programs) , *QUERY languages (Computer science) , *COMPUTER algorithms , *PROBABILITY theory - Abstract
The element distinctness problem is the problem of determining whether the elements of a list are distinct: that is, if x = (x_1, …, x_N) is a list with N elements, we ask whether the elements of x are distinct or not. The solution on a classical computer requires N queries because it uses sorting to check whether there are equal elements. In the quantum case, it is possible to solve the problem in O(N^{2/3}) queries. There is an extension which asks whether there are k colliding elements, known as the element k-distinctness problem. This work obtains optimal values of two critical parameters of Ambainis’ seminal quantum algorithm (SIAM J Comput 37(1):210-239, 2007). The first critical parameter is the number of repetitions of the algorithm’s main block, which inverts the phase of the marked elements and calls a subroutine. The second parameter is the number of quantum walk steps interlaced by oracle queries. We show that, when the optimal values of the parameters are used, the algorithm’s success probability is 1 - O(N^{-1/(k+1)}), quickly approaching 1. The specification of the exact running time and success probability is important in practical applications of this algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
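The classical side of the problem is easy to state in code (a minimal sketch; the quantum-walk algorithm the paper analyzes is of course entirely different, and the classical query bound above refers to reading each of the N elements once):

```python
from collections import Counter

def is_distinct(xs):
    """Classical element distinctness: one pass with a hash set, reading
    each input element exactly once."""
    seen = set()
    for x in xs:
        if x in seen:
            return False
        seen.add(x)
    return True

def has_k_collision(xs, k):
    """Element k-distinctness: does some value occur at least k times?"""
    return any(c >= k for c in Counter(xs).values())
```

Element 2-distinctness in this formulation is exactly the negation of `is_distinct`, which is the k=2 special case the quantum O(N^{2/3}) bound addresses.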