16,474 results on '"Programming paradigm"'
Search Results
2. Beyond Map-Reduce: LATNODE – A New Programming Paradigm for Big Data Systems
- Author
-
Sheng, Chai Yit, Keong, Phang Keat, Kim, Kuinam, editor, and Joukov, Nikolai, editor
- Published
- 2017
- Full Text
- View/download PDF
3. Preparing to Take the First Step
- Author
-
McDonough, James E.
- Published
- 2017
- Full Text
- View/download PDF
4. Exploiting Hardware-Based Data-Parallel and Multithreading Models for Smart Edge Computing in Reconfigurable FPGAs
- Author
-
Eduardo de la Torre, Alfonso Rodriguez, Marco Platzner, and Andres Otero
- Subjects
business.industry ,Data parallelism ,Computer science ,Reconfigurable computing ,Theoretical Computer Science ,Software ,Computational Theory and Mathematics ,Hardware and Architecture ,Multithreading ,Programming paradigm ,Enhanced Data Rates for GSM Evolution ,business ,Evolvable hardware ,Computer hardware ,Edge computing - Abstract
Current edge computing systems are deployed in highly complex application scenarios with dynamically changing requirements. In order to provide the expected performance and energy efficiency values in these situations, the use of heterogeneous hardware/software platforms at the edge has become widespread. However, these computing platforms still suffer from the lack of unified software-driven programming models to efficiently deploy multi-purpose hardware-accelerated solutions. In parallel, edge computing systems also face another huge challenge: operating under multiple conditions that were not taken into account during any of the design stages. Moreover, these conditions may change over time, forcing self-adaptation mechanisms to become a must. This paper presents an integrated architecture to exploit hardware-accelerated data-parallel models and transparent hardware/software multithreading. In particular, the proposed architecture leverages the ARTICo framework and ReconOS to allow developers to select the most suitable programming model to deploy their edge computing applications onto run-time reconfigurable hardware devices. An evolvable hardware system is used as an additional architectural component during validation, providing support for continuous lifelong learning in smart edge computing scenarios. In particular, the proposed setup exhibits online learning capabilities that include learning by imitation from software-based reference algorithms.
- Published
- 2022
5. Chaotic Architectures: a New Trend in Computers.
- Author
-
Palagin, A. V., Semotiuk, M. V., and Ustenko, S. V.
- Subjects
- *
COMPUTER architecture , *COMPUTERS , *INFORMATION technology , *SYSTEMS development - Abstract
Information technologies are analyzed and their components are identified as virtualization technologies, quantitative technologies, data technologies, and knowledge technologies. Based on the analysis, it is determined that chaotic architectures of computer systems are a new trend in the development of these systems. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
6. Chaotic Architectures: A New Direction in the Development of Computer Technology
- Author
-
ПАЛАГIН, O. В., СЕМОТЮК, М. В., and УСТЕНКО, С. В.
- Abstract
Copyright of Cybernetics & Systems Analysis / Kibernetiki i Sistemnyj Analiz is the property of V.M. Glushkov Institute of Cybernetics of NAS of Ukraine and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2020
7. Using Classification Methods to Reinforce the Impact of Social Factors on Software Success
- Author
-
Anjos, Eudisley, Brasileiro, Jansepetrus, Silva, Danielle, Zenha-Rela, Mário, Hutchison, David, Series editor, Kanade, Takeo, Series editor, Kittler, Josef, Series editor, Kleinberg, Jon M., Series editor, Mattern, Friedemann, Series editor, Mitchell, John C., Series editor, Naor, Moni, Series editor, Pandu Rangan, C., Series editor, Steffen, Bernhard, Series editor, Terzopoulos, Demetri, Series editor, Tygar, Doug, Series editor, Weikum, Gerhard, Series editor, Gervasi, Osvaldo, editor, Murgante, Beniamino, editor, Misra, Sanjay, editor, Rocha, Ana Maria A.C., editor, Torre, Carmelo M., editor, Taniar, David, editor, Apduhan, Bernady O., editor, Stankova, Elena, editor, and Wang, Shangguang, editor
- Published
- 2016
- Full Text
- View/download PDF
8. Evaluation of Logic-Based Smart Contracts for Blockchain Systems
- Author
-
Idelberger, Florian, Governatori, Guido, Riveret, Régis, Sartor, Giovanni, Hutchison, David, Series editor, Kanade, Takeo, Series editor, Kittler, Josef, Series editor, Kleinberg, Jon M., Series editor, Mattern, Friedemann, Series editor, Mitchell, John C., Series editor, Naor, Moni, Series editor, Pandu Rangan, C., Series editor, Steffen, Bernhard, Series editor, Terzopoulos, Demetri, Series editor, Tygar, Doug, Series editor, Weikum, Gerhard, Series editor, Alferes, Jose Julio, editor, Bertossi, Leopoldo, editor, Governatori, Guido, editor, Fodor, Paul, editor, and Roman, Dumitru, editor
- Published
- 2016
- Full Text
- View/download PDF
9. A Formal Model of the Safety-Critical Java Level 2 Paradigm
- Author
-
Luckcuck, Matt, Cavalcanti, Ana, Wellings, Andy, Hutchison, David, Series editor, Kanade, Takeo, Series editor, Kittler, Josef, Series editor, Kleinberg, Jon M., Series editor, Mattern, Friedemann, Series editor, Mitchell, John C., Series editor, Naor, Moni, Series editor, Pandu Rangan, C., Series editor, Steffen, Bernhard, Series editor, Terzopoulos, Demetri, Series editor, Tygar, Doug, Series editor, Weikum, Gerhard, Series editor, Ábrahám, Erika, editor, and Huisman, Marieke, editor
- Published
- 2016
- Full Text
- View/download PDF
10. Heterogeneous Semantics and Unifying Theories
- Author
-
Woodcock, Jim, Foster, Simon, Butterfield, Andrew, Hutchison, David, Series editor, Kanade, Takeo, Series editor, Kittler, Josef, Series editor, Kleinberg, Jon M., Series editor, Mattern, Friedemann, Series editor, Mitchell, John C., Series editor, Naor, Moni, Series editor, Pandu Rangan, C., Series editor, Steffen, Bernhard, Series editor, Terzopoulos, Demetri, Series editor, Tygar, Doug, Series editor, Weikum, Gerhard, Series editor, and Margaria, Tiziana, editor
- Published
- 2016
- Full Text
- View/download PDF
11. An Introduction to Functional Programming Languages
- Author
-
Torra, Vicenç, Hutchison, David, Series editor, Kanade, Takeo, Series editor, Kittler, Josef, Series editor, Kleinberg, Jon M., Series editor, Mattern, Friedemann, Series editor, Mitchell, John C., Series editor, Naor, Moni, Series editor, Pandu Rangan, C., Series editor, Steffen, Bernhard, Series editor, Terzopoulos, Demetri, Series editor, Tygar, Doug, Series editor, Weikum, Gerhard, Series editor, and Torra, Vicenç
- Published
- 2016
- Full Text
- View/download PDF
12. Make Web3.0 Connected
- Author
-
Jian Shi, Zhuotao Liu, Yih-Chun Hu, Yangxi Xiang, Haoyu Wang, Peng Gao, Bihan Wen, Xusheng Xiao, and Qi Li
- Subjects
Source lines of code ,Computer science ,business.industry ,Interoperability ,Cryptography ,computer.software_genre ,Software framework ,Scalability ,Key (cryptography) ,Programming paradigm ,Electrical and Electronic Engineering ,Software engineering ,business ,Protocol (object-oriented programming) ,computer - Abstract
Web3.0, often predicted to drastically reshape our lives, is a ubiquitous topic. However, little literature has discussed the crucial differentiators that separate Web3.0 from the era we are currently living in. Via a thorough analysis of the recent blockchain infrastructure evolution, we capture a key invariant featuring the evolution, based on which we provide the first academic definition for Web3.0. Our definition is not the only way of understanding Web3.0; yet it captures the fundamental and defining trait of Web3.0, and meanwhile it has two desirable properties. Under this definition, we articulate three key infrastructural enablers for Web3.0: individual blockchains, federated or centralized platforms capable of publishing verifiable states, and an interoperability platform to hyperconnect those state publishers. While innovations in all categories are necessary to fully enable Web3.0, in this paper, we present a design for the third enabler, namely HyperService, that delivers interoperability and programmability across heterogeneous blockchains and state publishers. HyperService is powered by two innovative designs: a developer-facing programming framework that allows developers to build cross-chain applications in a unified programming model; and a secure blockchain-facing cryptography protocol that provably realizes those applications on blockchains. We implement a prototype of HyperService in 62,000 lines of code to demonstrate its practicality, usability, and scalability.
- Published
- 2022
13. Robust mechanism design and production structure for assembly systems with asymmetric cost information
- Author
-
Jennifer K. Ryan, Cheng Qian, Daewon Sun, and Zhaolin Li
- Subjects
Mechanism design ,Mathematical optimization ,Information Systems and Management ,General Computer Science ,Heuristic (computer science) ,Computer science ,Variance (accounting) ,Management Science and Operations Research ,Industrial and Manufacturing Engineering ,Procurement ,Modeling and Simulation ,Component (UML) ,Prior probability ,Programming paradigm ,Production (economics) - Abstract
We consider the design of a robust procurement mechanism for an assembler who must procure multiple complementary components under limited information regarding the unit production costs of the potential suppliers. We abandon the standard assumption that there exists a common prior distribution for the suppliers’ costs and we employ the max-min criterion in which we design a mechanism to maximize the assembler’s worst-case expected profit when only the means of the suppliers’ costs are known. We formulate this problem using a linear semi-infinite programming model and we apply a primal-dual approach to reduce the problem to a single joint optimization model. We characterize the optimal procurement mechanism (payments and ordering quantities) under the assumption of balanced ordering and uncertain end-customer demand. We use these results to compare the performance of component production, under which the components are sourced from separate suppliers, and integrated production, under which the components are sourced from the same supplier. We find that component production is preferred when the production costs of the components are heterogeneous. We also propose two heuristic mechanisms which are easier to compute and implement in practice, particularly when the number of components is large. Finally, we extend our analysis to the case in which the average and variance of the suppliers’ costs are known.
- Published
- 2022
14. SHIP - A Logic-Based Language and Tool to Program Smart Environments
- Author
-
Autexier, Serge, Hutter, Dieter, Hutchison, David, Series editor, Kanade, Takeo, Series editor, Kittler, Josef, Series editor, Kleinberg, Jon M., Series editor, Mattern, Friedemann, Series editor, Mitchell, John C., Series editor, Naor, Moni, Series editor, Pandu Rangan, C., Series editor, Steffen, Bernhard, Series editor, Terzopoulos, Demetri, Series editor, Tygar, Doug, Series editor, Weikum, Gerhard, Series editor, and Falaschi, Moreno, editor
- Published
- 2015
- Full Text
- View/download PDF
15. Coarse Grained FPGA Overlay for Rapid Just-In-Time Accelerator Compilation
- Author
-
Abhishek Kumar Jain, Suhaib A. Fahmy, and Douglas L. Maskell
- Subjects
business.industry ,Dataflow ,Computer science ,Design flow ,computer.software_genre ,Software portability ,Computational Theory and Mathematics ,Hardware and Architecture ,Embedded system ,Signal Processing ,Scalability ,Programming paradigm ,Place and route ,Compiler ,Field-programmable gate array ,business ,computer - Abstract
Coarse-grained FPGA overlays built around the runtime programmable DSP blocks in modern FPGAs can achieve high throughput and improved scalability compared to traditional overlays built without detailed consideration of FPGA architecture. These overlays can be targeted by higher-level compilers, achieving fast compilation, software-like programmability and run-time management, and high-level design abstraction. OpenCL allows programs running on a host computer to launch accelerator kernels which can be compiled at run-time for a specific architecture, thus enabling portability. However, prohibitive hardware compilation times in traditional design flows mean that the tools cannot effectively use just-in-time (JIT) compilation or runtime performance scaling on FPGAs. We present a methodology for runtime compilation of dataflow graphs expressed as OpenCL kernels onto coarse-grained overlays. The methodology benefits from the high level of abstraction afforded by using the OpenCL programming model, while the mapping to the overlay significantly reduces compilation and load times. Key characteristics of this work include highly performant DSP-optimized functional units that scale to large overlays on modern devices and the ability to perform automatic resource-aware kernel replication up to the size of the overlay. We demonstrate place and route times orders of magnitude better than traditional HLS flows, even when running on an embedded processor in the Xilinx Zynq.
- Published
- 2022
16. Taskflow: A Lightweight Parallel and Heterogeneous Task Graph Computing System
- Author
-
Dian-Lun Lin, Yibo Lin, Tsung-Wei Huang, and Chun-Xun Lin
- Subjects
FOS: Computer and information sciences ,D.1.3 ,D.4.0 ,B.7.2 ,Computer Science - Artificial Intelligence ,Computer science ,Distributed computing ,Symmetric multiprocessor system ,Task (project management) ,Scheduling (computing) ,Artificial Intelligence (cs.AI) ,Control flow ,Computer Science - Distributed, Parallel, and Cluster Computing ,Computational Theory and Mathematics ,Hardware and Architecture ,Signal Processing ,Programming paradigm ,Graph (abstract data type) ,Distributed, Parallel, and Cluster Computing (cs.DC) ,Throughput (business) ,Efficient energy use - Abstract
Taskflow aims to streamline the building of parallel and heterogeneous applications using a lightweight task graph-based approach. Taskflow introduces an expressive task graph programming model to assist developers in the implementation of parallel and heterogeneous decomposition strategies on a heterogeneous computing platform. Our programming model distinguishes itself as a very general class of task graph parallelism with in-graph control flow to enable end-to-end parallel optimization. To support our model with high performance, we design an efficient system runtime that solves many of the new scheduling challenges arising out of our models and optimizes the performance across latency, energy efficiency, and throughput. We have demonstrated the promising performance of Taskflow in real-world applications. As an example, Taskflow solves a large-scale machine learning workload up to 29% faster, with 1.5× less memory and 1.9× higher throughput than the industrial system, oneTBB, on a machine of 40 CPUs and 4 GPUs. We have opened the source of Taskflow and deployed it to large numbers of users in the open-source community.
- Published
- 2022
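The in-graph dependency idea the Taskflow abstract describes can be illustrated with a minimal, language-agnostic sketch. The Python executor below runs tasks in topological order so every task fires only after its prerequisites; it is an illustration of the task-graph model only, not the Taskflow C++ API, and the task names and `run_task_graph` helper are invented for the example:

```python
from collections import defaultdict, deque

def run_task_graph(tasks, deps):
    """Run each callable in `tasks` after all of its prerequisites in `deps`.

    tasks: dict name -> zero-argument callable
    deps:  dict name -> list of prerequisite task names
    """
    indegree = {name: len(deps.get(name, [])) for name in tasks}
    dependents = defaultdict(list)
    for name, prereqs in deps.items():
        for p in prereqs:
            dependents[p].append(name)
    ready = deque(n for n, d in indegree.items() if d == 0)
    order = []
    while ready:
        n = ready.popleft()
        tasks[n]()          # execute the task body
        order.append(n)
        for m in dependents[n]:
            indegree[m] -= 1
            if indegree[m] == 0:
                ready.append(m)
    if len(order) != len(tasks):
        raise ValueError("cycle in task graph")
    return order

log = []
order = run_task_graph(
    {"A": lambda: log.append("A"),
     "B": lambda: log.append("B"),
     "C": lambda: log.append("C")},
    {"B": ["A"], "C": ["A", "B"]})   # C waits for both A and B
```

A real runtime would additionally dispatch independent ready tasks to worker threads or GPUs; the queue of ready tasks is exactly where that parallelism enters.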
17. Risk-averse multi-stage stochastic optimization for surveillance and operations planning of a forest insect infestation
- Author
-
Robert G. Haight, I. Esra Büyüktahtakın, and Sabah Bushaj
- Subjects
Information Systems and Management ,General Computer Science ,Operations research ,biology ,CVAR ,Computer science ,Risk measure ,Maximization ,Management Science and Operations Research ,biology.organism_classification ,Industrial and Manufacturing Engineering ,Emerald ash borer ,Dominance (economics) ,Modeling and Simulation ,Programming paradigm ,Stochastic optimization ,Integer programming - Abstract
We derive a nested risk measure for a maximization problem and implement it in a scenario-based formulation of a multi-stage stochastic mixed-integer programming problem. We apply the risk-averse formulation to the surveillance and control of a non-native forest insect, the emerald ash borer (EAB), a wood-boring beetle native to Asia and recently discovered in North America. Spreading across the eastern United States and Canada, EAB has killed millions of ash trees and cost homeowners and local governments billions of dollars. We present a mean-Conditional Value-at-Risk (CVaR), multi-stage, stochastic mixed-integer programming model to optimize a manager’s decisions about surveillance and control of EAB. The objective is to maximize the benefits of healthy ash trees in forests and urban environments under a fixed budget. Combining the risk-neutral objective with a risk measure allows for a trade-off between the weighted expected benefits from ash trees and the expected risks associated with experiencing extremely damaging scenarios. We define scenario dominance cuts (sdc) for the maximization problem under decision-dependent uncertainty. We then solve the model using the sdc cutting plane algorithm for varying risk parameters. Computational results demonstrate that scenario dominance cuts significantly improve the solution performance relative to that of CPLEX. Our CVaR risk-averse approach also raises the objective value of the least-benefit scenarios compared to the risk-neutral model. Results show a shift in the optimal strategy from applying less expensive insecticide treatment to more costly tree removal as the manager becomes more risk-averse. We also find that risk-averse managers survey more often to reduce the risk of experiencing adverse outcomes.
- Published
- 2022
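The mean-CVaR trade-off in the abstract above can be sketched for the simplest case of equally likely scenario benefits. The `mean_cvar` helper and its weighting parameter `lam` are illustrative assumptions, not the paper's formulation, but they show how a risk-averse objective pulls weight toward the worst scenarios:

```python
import math

def mean_cvar(benefits, alpha=0.9, lam=0.5):
    """Risk-adjusted objective for a maximization problem.

    benefits: equally likely scenario benefits (assumption for this sketch).
    CVaR here averages the worst ceil((1 - alpha) * n) scenarios, so the
    result lam * mean + (1 - lam) * cvar trades expected benefit against
    the tail of damaging outcomes.
    """
    n = len(benefits)
    mean = sum(benefits) / n
    k = max(1, math.ceil((1 - alpha) * n))
    worst = sorted(benefits)[:k]           # least-benefit scenarios
    cvar = sum(worst) / k
    return lam * mean + (1 - lam) * cvar

# ten equiprobable scenarios; alpha=0.9 keeps only the single worst one
risk_adjusted = mean_cvar([10, 8, 9, 1, 7, 6, 5, 4, 3, 2])
# 0.5 * 5.5 + 0.5 * 1.0 = 3.25
```

Raising `lam` toward 1 recovers the risk-neutral expected value; lowering it makes the manager increasingly averse to the least-benefit scenarios, mirroring the shift in strategy the abstract reports.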
18. Taskflow: A General-Purpose Parallel and Heterogeneous Task Programming System
- Author
-
Yibo Lin, Chun-Xun Lin, Dian-Lun Lin, and Tsung-Wei Huang
- Subjects
Control flow ,Computer science ,Distributed computing ,Programming paradigm ,Graph (abstract data type) ,CAD ,Electrical and Electronic Engineering ,Computer Graphics and Computer-Aided Design ,Throughput (business) ,Software ,Scheduling (computing) ,Efficient energy use ,Task (project management) - Abstract
Taskflow tackles the long-standing question: How can we make it easier for developers to program parallel and heterogeneous computer-aided design (CAD) applications with high performance and simultaneous high productivity? Taskflow introduces a new powerful task graph programming model to assist developers in the implementation of parallel and heterogeneous algorithms with complex control flow. We develop an efficient system runtime to solve many of the new scheduling challenges arising out of our models and optimize the performance across latency, energy efficiency, and throughput. Taskflow has demonstrated promising performance on both micro-benchmarks and real-world applications. As an example, Taskflow solved a large-scale circuit placement problem up to 17% faster, with 1.3× less memory, 2.1× less power consumption, and 2.9× higher throughput than two industrial-strength systems, oneTBB and StarPU, on a machine of 40 CPUs and 4 GPUs.
- Published
- 2022
19. A Low-Power Transprecision Floating-Point Cluster for Efficient Near-Sensor Data Analytics
- Author
-
Davide Rossi, Stefan Mach, Luca Benini, Simone Benatti, Angelo Garofalo, Fabio Montagna, Gianmarco Ottavi, Giuseppe Tagliavini, Montagna F., Mach S., Benatti S., Garofalo A., Ottavi G., Benini L., Rossi D., and Tagliavini G.
- Subjects
020203 distributed computing ,sub-word vectorization ,Floating point ,parallel computing ,Computer science ,near-sensor computing ,RISC-V ,transprecision ,02 engineering and technology ,Energy consumption ,Power budget ,Toolchain ,Computational Theory and Mathematics ,Computer engineering ,Hardware and Architecture ,Computer cluster ,Signal Processing ,Vectorization (mathematics) ,0202 electrical engineering, electronic engineering, information engineering ,Programming paradigm ,FPU interconnect - Abstract
Recent applications in low-power (1-20 mW) near-sensor computing require the adoption of floating-point arithmetic to reconcile high precision results with a wide dynamic range. In this article, we propose a low-power multi-core computing cluster that leverages the fine-grained tunable principles of transprecision computing to provide support to near-sensor applications at a minimum power budget. Our solution - based on the open-source RISC-V architecture - combines parallelization and sub-word vectorization with a dedicated interconnect design capable of sharing floating-point units (FPUs) among the cores. On top of this architecture, we provide a full-fledged software stack support, including a parallel low-level runtime, a compilation toolchain, and a high-level programming model, with the aim to support the development of end-to-end applications. We performed an exhaustive exploration of the design space of the transprecision cluster on a cycle-accurate FPGA emulator, varying the number of cores and FPUs to maximize performance. Orthogonally, we performed a vertical exploration to identify the most efficient solutions in terms of non-functional requirements (operating frequency, power, and area). We conducted an experimental assessment on a set of benchmarks representative of the near-sensor processing domain, complementing the timing results with a post place-&-route analysis of the power consumption. A comparison with the state-of-the-art shows that our solution outperforms the competitors in energy efficiency, reaching a peak of 97 Gflop/s/W on single-precision scalars and 162 Gflop/s/W on half-precision vectors. Finally, a real-life use case demonstrates the effectiveness of our approach in fulfilling accuracy constraints.
- Published
- 2022
20. Comparing Block-Based Programming Models for Two-Armed Robots
- Author
-
Vladimir Kovalenko, Ronald Garcia, Nico Ritschel, Reid Holmes, and David C. Shepherd
- Subjects
business.industry ,Computer science ,020207 software engineering ,Robotics ,02 engineering and technology ,Task (computing) ,Work (electrical) ,Human–computer interaction ,Block (programming) ,0202 electrical engineering, electronic engineering, information engineering ,Programming paradigm ,Robot ,Factory (object-oriented programming) ,Artificial intelligence ,business ,Software ,Visual programming language - Abstract
Modern industrial robots can work alongside human workers and coordinate with other robots. This means they can perform complex tasks, but doing so requires complex programming. Therefore, robots are typically programmed by experts, but there are not enough experts to meet the growing demand for robots. To reduce the need for experts, researchers have tried to make robot programming accessible to factory workers without programming experience. However, none of that previous work supports coordinating multiple robot arms that work on the same task. In this paper, we present four block-based programming language designs that enable end-users to program two-armed robots. We analyze the benefits and trade-offs of each design on expressiveness and user cognition, and evaluate the designs based on a survey of 273 professional participants, of whom 110 had no previous programming experience. We further present an interactive experiment based on a prototype implementation of the design we deem best. This experiment confirmed that novices can successfully use our prototype to complete realistic robotics tasks. This work contributes to making coordinated programming of robots accessible to end-users. It further explores how visual programming elements can make traditionally challenging programming tasks more beginner-friendly.
- Published
- 2022
21. Improved l-diversity: Scalable anonymization approach for Privacy Preserving Big Data Publishing
- Author
-
Brijesh B. Mehta and Udai Pratap Rao
- Subjects
Information privacy ,General Computer Science ,Data anonymization ,business.industry ,Computer science ,Data_MISCELLANEOUS ,Big data ,020206 networking & telecommunications ,02 engineering and technology ,Data publishing ,computer.software_genre ,Scalability ,Spark (mathematics) ,0202 electrical engineering, electronic engineering, information engineering ,Programming paradigm ,020201 artificial intelligence & image processing ,Data mining ,business ,computer ,Equivalence class - Abstract
In the era of big data analytics, data owners are increasingly concerned about data privacy. Data anonymization approaches such as k-anonymity, l-diversity, and t-closeness have long been used to preserve privacy in published data. However, these approaches are not directly applicable to large amounts of data. Distributed programming frameworks such as MapReduce and Spark are used for big data analytics, which adds further challenges to privacy preserving data publishing. Recently, we identified a few scalable approaches for Privacy Preserving Big Data Publishing in the literature, the majority of which are based on k-anonymity and l-diversity. However, these approaches require significant improvement to reach the level of existing privacy preserving data publishing approaches; therefore, in this paper we propose the Improved Scalable l-Diversity (ImSLD) approach, an extension of Improved Scalable k-Anonymity (ImSKA), for scalable anonymization. Our approaches are based on scalable k-anonymization that uses MapReduce as the programming paradigm. We use the poker dataset and synthesize big data versions of it to test our approaches. The result analysis shows significant improvement in running time due to the smaller number of MapReduce iterations, and also exhibits lower information loss compared to existing approaches while providing the same level of privacy, due to the tight arrangement of records in the initial equivalence class.
- Published
- 2022
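The distinct l-diversity condition that approaches like ImSLD build on can be stated in a few lines: every equivalence class (records sharing quasi-identifier values) must contain at least l distinct sensitive values. The sketch below is single-machine Python, not the MapReduce/Spark implementation the abstract discusses, and the field names are invented for illustration:

```python
from collections import defaultdict

def satisfies_l_diversity(records, quasi_ids, sensitive, l):
    """Check distinct l-diversity: each equivalence class must expose
    at least l distinct values of the sensitive attribute."""
    classes = defaultdict(set)
    for r in records:
        key = tuple(r[q] for q in quasi_ids)   # equivalence-class key
        classes[key].add(r[sensitive])
    return all(len(values) >= l for values in classes.values())

# generalized quasi-identifiers ("130**", "<30") as in k-anonymized data
rows = [
    {"zip": "130**", "age": "<30", "disease": "flu"},
    {"zip": "130**", "age": "<30", "disease": "cancer"},
    {"zip": "148**", "age": "30+", "disease": "flu"},
    {"zip": "148**", "age": "30+", "disease": "flu"},
]
```

Here the second class exposes only one distinct disease, so the table is 2-anonymous but not 2-diverse; a MapReduce version would compute the same per-class distinct counts with the class key as the map output key.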
22. A Uniform Quantum Computing Model Based on Virtual Quantum Processors
- Author
-
Georg Gesek
- Subjects
FOS: Computer and information sciences ,Computer science ,Computer Science - Artificial Intelligence ,Distributed computing ,B.5.1 ,FOS: Physical sciences ,Cloud computing ,Turing machine ,symbols.namesake ,C.1.2 ,Hardware Architecture (cs.AR) ,Programmer ,Computer Science - Hardware Architecture ,Quantum computer ,Virtual Processor ,Quantum Physics ,business.industry ,Software development ,Quantum machine ,C.0 ,F.1.1 ,Artificial Intelligence (cs.AI) ,Programming paradigm ,symbols ,business ,Quantum Physics (quant-ph) - Abstract
Quantum computers, once fully realized, can represent an exponential boost in computing power. However, the computational power of current quantum computers, referred to as Noisy Intermediate-Scale Quantum (NISQ) devices, is severely limited because of environmental and intrinsic noise, as well as the very low connectivity between qubits compared to their total number. We propose a virtual quantum processor that emulates a generic hybrid quantum machine which can serve as a logical version of quantum computing hardware. This hybrid classical-quantum machine powers quantum-logical computations which are substitutable by future native quantum processors. (IEEE peer reviewed, published September 2021.)
- Published
- 2023
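The idea of emulating quantum hardware classically, as the abstract above proposes at the architectural level, can be sketched at its very smallest scale: a single-qubit statevector and one gate application. This is a toy illustration of statevector emulation in general, not the paper's virtual processor design:

```python
import math

def apply_gate(state, gate):
    """Apply a 2x2 single-qubit gate to a 1-qubit statevector (a, b)."""
    a, b = state
    return (gate[0][0] * a + gate[0][1] * b,
            gate[1][0] * a + gate[1][1] * b)

# Hadamard gate: maps |0> to the equal superposition (|0> + |1>) / sqrt(2)
H = ((1 / math.sqrt(2),  1 / math.sqrt(2)),
     (1 / math.sqrt(2), -1 / math.sqrt(2)))

state = apply_gate((1.0, 0.0), H)        # start in |0>
probs = [abs(amp) ** 2 for amp in state] # Born-rule measurement probabilities
```

An n-qubit emulation generalizes this to a vector of 2^n amplitudes, which is exactly why classical emulation is exponentially costly and why the paper treats it as a substitutable logical layer above future native hardware.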
23. Message-Passing Programming
- Author
-
Gudula Rünger and Thomas Rauber
- Subjects
Symbolic programming ,Software portability ,business.industry ,Address space ,Computer science ,Data exchange ,Transfer (computing) ,Message passing ,Programming paradigm ,Reactive programming ,business ,Computer network - Abstract
The message-passing programming model is based on the abstraction of a parallel computer with a distributed address space where each processor has a local memory to which it has exclusive access, see Sect. 2.3.1. There is no global memory. Data exchange must be performed by message passing: to transfer data from the local memory of one processor A to the local memory of another processor B, A must send a message containing the data to B, and B must receive the data in a buffer in its local memory. To guarantee portability of programs, no assumptions on the topology of the interconnection network are made. Instead, it is assumed that each processor can send a message to any other processor.
- Published
- 2023
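The send/receive discipline described above can be sketched with per-processor mailboxes standing in for local memories. This is a toy emulation using threads, not MPI or the book's API; the `send` and `recv` helpers are invented for the example:

```python
import threading
import queue

# Each "processor" owns a local mailbox; there is no shared global memory
# in the model, only messages deposited into the receiver's buffer.
mailboxes = {"A": queue.Queue(), "B": queue.Queue()}

def send(dest, data):
    mailboxes[dest].put(data)    # A sends: data lands in B's local buffer

def recv(me):
    return mailboxes[me].get()   # B receives: blocks until a message arrives

result = []

def processor_b():
    result.append(recv("B"))     # B waits for A's data

t = threading.Thread(target=processor_b)
t.start()
send("B", [1, 2, 3])             # A transfers its local data to B
t.join()
```

In a real message-passing library the same pattern appears as matched point-to-point calls (e.g., MPI's `MPI_Send`/`MPI_Recv`), with no assumption about how A and B are physically interconnected.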
24. The software package MUSCOP
- Author
-
Potschka, Andreas, Bock, Hans Georg, Series editor, Hackbusch, Wolfgang, Series editor, Luskin, Mitchell, Series editor, Rannacher, Rolf, Series editor, and Potschka, Andreas
- Published
- 2014
- Full Text
- View/download PDF
25. A Dataflow Inspired Programming Paradigm for Coarse-Grained Reconfigurable Arrays
- Author
-
Niedermeier, A., Kuper, Jan, Smit, Gerard J. M., Hutchison, David, editor, Kanade, Takeo, editor, Kittler, Josef, editor, Kleinberg, Jon M., editor, Kobsa, Alfred, editor, Mattern, Friedemann, editor, Mitchell, John C., editor, Naor, Moni, editor, Nierstrasz, Oscar, editor, Pandu Rangan, C., editor, Steffen, Bernhard, editor, Terzopoulos, Demetri, editor, Tygar, Doug, editor, Weikum, Gerhard, editor, Goehringer, Diana, editor, Santambrogio, Marco Domenico, editor, Cardoso, João M. P., editor, and Bertels, Koen, editor
- Published
- 2014
- Full Text
- View/download PDF
26. The Future
- Author
-
Soukup, Jiri and Macháček, Petr
- Published
- 2014
- Full Text
- View/download PDF
27. Programming Concepts
- Author
-
Sharan, Kishori
- Published
- 2014
- Full Text
- View/download PDF
28. Template Programming
- Author
-
Sutherland, Bruce
- Published
- 2014
- Full Text
- View/download PDF
29. Beginning C++
- Author
-
Sutherland, Bruce
- Published
- 2014
- Full Text
- View/download PDF
30. Types of Questions in Computer Science Education
- Author
-
Hazzan, Orit, Lapidot, Tami, and Ragonis, Noa
- Published
- 2014
- Full Text
- View/download PDF
31. Research in Computer Science Education 4
- Author
-
Hazzan, Orit, Lapidot, Tami, and Ragonis, Noa
- Published
- 2014
- Full Text
- View/download PDF
32. Overview of the Discipline of Computer Science
- Author
-
Hazzan, Orit, Lapidot, Tami, and Ragonis, Noa
- Published
- 2014
- Full Text
- View/download PDF
33. Frequency competition among airlines on coordinated airports network
- Author
-
Wenzhu Zhang, Chun-Han Wang, Yu-Ching Lee, and Yue Dai
- Subjects
050210 logistics & transportation ,021103 operations research ,Information Systems and Management ,General Computer Science ,Computer science ,05 social sciences ,0211 other engineering and technologies ,02 engineering and technology ,Management Science and Operations Research ,Industrial and Manufacturing Engineering ,Competition (economics) ,Microeconomics ,symbols.namesake ,Strategy ,Nash equilibrium ,Modeling and Simulation ,0502 economics and business ,symbols ,Programming paradigm ,Profitability index ,Sensitivity (control systems) ,Relaxation (approximation) ,Market share - Abstract
Frequency competition is critical for a full-service airline in gaining market share, and adopting a proper strategy can improve an airline’s profits. This study proposes a new equilibrium programming model with flow balance to address frequency competition on airports network with time slot constraints. We first show that a pure-strategy Nash equilibrium may not always exist, and thus forming a pure strategy profile in frequency competition among airlines may naturally lead to deviation from current frequency. Therefore, we formulate the problem as a programming model with a mixed-strategy Nash equilibrium. To avoid shocks from dramatic frequency changes across the network, airlines tend to fine-tune frequencies on select segments during each adjustment. We propose a procedure to generate a computationally tractable amount of representative strategies from a finite set of feasible strategies to demonstrate mixed-strategy Nash equilibrium. We conduct an empirical analysis using an example in which industry profitability increased by as much as 7.89%. We then extend the model to formulate frequency competition among metal-neutral alliances. The results show that forming metal-neutral alliances can improve total industry profits by 10.59%. In particular, a sensitivity analysis with real data on the tolerance of flow imbalance demonstrates that deducting the potential costs due to the relaxation of flow balance between congested airports may earn additional total industry profits in a frequency competition.
- Published
- 2022
34. Modeling green supply chain games with governmental interventions and risk preferences under fuzzy uncertainties
- Author
-
Juntao Li and Pengfei Liu
- Subjects
Numerical Analysis ,Supply chain management ,General Computer Science ,Applied Mathematics ,Supply chain ,Psychological intervention ,Fuzzy logic ,Manufacturing cost ,Theoretical Computer Science ,Product (business) ,Microeconomics ,Demand curve ,Modeling and Simulation ,Programming paradigm ,Business - Abstract
The aggravation of environmental issues makes green manufacturing inevitable, while fuzzy uncertainties prevail in supply chain management. To further promote the development of green supply chains, this paper considers a two-echelon green supply chain model with governmental interventions, composed of one supplier and one retailer, under fuzzy uncertainties, in which the parameters of the demand function and the manufacturing cost are all characterized as fuzzy variables. The equilibrium decisions of the expected-value and chance-constrained programming models are then derived, considering the different risk preferences of the supplier and the retailer. Finally, numerical examples are presented to demonstrate the theoretical underpinnings of the proposed models. Analytical results indicate that the supplier and the retailer can obtain different equilibrium decisions by adjusting their confidence levels. The equilibrium decisions reflect the risk attitudes of the supplier and the retailer toward the uncertainties in the supply chain system and their differing predictions of the possibility levels. Moreover, whether the supplier, as the leader, enjoys a first-mover advantage depends on the risk attitudes of the supplier and the retailer when dealing with the uncertainties of the green supply chain system. Strong governmental interventions can coordinate not only the conflict between pricing and green-level decisions but also the contradiction between the consumers and the supplier. In addition, the retailer may be the main driver of the green supply chain in developing green products.
- Published
- 2022
35. Joint energy capacity and production planning optimization in flow-shop systems
- Author
-
Taha Arbaoui, Alice Yalaoui, Melek Rodoplu, Département Sciences de la Fabrication et Logistique (SFL-ENSMSE), École des Mines de Saint-Étienne (Mines Saint-Étienne MSE), Institut Mines-Télécom [Paris] (IMT)-Institut Mines-Télécom [Paris] (IMT)-CMP-GC, Laboratoire d'Optimisation des Systèmes Industriels (LOSI), Laboratoire Informatique et Société Numérique (LIST3N), and Université de Technologie de Troyes (UTT)-Université de Technologie de Troyes (UTT)
- Subjects
Mathematical optimization ,business.industry ,Computer science ,Applied Mathematics ,Reliability (computer networking) ,Probabilistic logic ,Flow shop scheduling ,Sizing ,Renewable energy ,Production planning ,Modeling and Simulation ,Programming paradigm ,[INFO]Computer Science [cs] ,business ,Energy source ,ComputingMilieux_MISCELLANEOUS - Abstract
This study introduces new probabilistic constraints and objective functions to manage the uncertain nature of renewable energy sources in the single-item capacitated lot-sizing problem for flow-shop configurations, integrating the capacity contract selection problem with multiple energy sources. The probabilistic models, built from different combinations of probabilistic constraints and objective functions, aim to provide a decision-making tool and to promote the use of renewable energy sources in the manufacturing industry despite their stochastic nature. Mixed-Integer Non-Linear Programming models are proposed that integrate the uncertainty of the renewable energy sources based on different features. The developed models are tested on a small-size instance, and their results are compared in terms of economic, ecological, and reliability aspects.
- Published
- 2022
36. Multiverse Optimization Algorithm for Stochastic Biobjective Disassembly Sequence Planning Subject to Operation Failures
- Author
-
Liang Qi, Yaping Fu, Xiwang Guo, MengChu Zhou, and Khaled Sedraoui
- Subjects
Mathematical optimization ,Sequence ,Process (engineering) ,Computer science ,Stochastic process ,02 engineering and technology ,Energy consumption ,010501 environmental sciences ,01 natural sciences ,Computer Science Applications ,Human-Computer Interaction ,Resource (project management) ,Control and Systems Engineering ,Stochastic simulation ,0202 electrical engineering, electronic engineering, information engineering ,Programming paradigm ,020201 artificial intelligence & image processing ,Electrical and Electronic Engineering ,Hardware_REGISTER-TRANSFER-LEVELIMPLEMENTATION ,Remanufacturing ,Software ,0105 earth and related environmental sciences - Abstract
Disassembly is an essential step in a remanufacturing process via which valuable parts and material of end-of-life (EOL) products can be reused and resource waste is reduced. Disassembly sequence planning (DSP) focuses on finding the best disassembly sequence for a given EOL product by considering economic and environmental performance. In a practical disassembly process, one may face a risk of disassembly operation failure because the exact condition of EOL products is difficult to know in advance. Despite its importance to disassembly outcomes, existing work fails to consider this risk comprehensively. This work therefore proposes a stochastic biobjective DSP problem with the objectives of maximizing disassembly profit and minimizing energy consumption. A chance-constrained programming model is established, in which a chance constraint bounds the probability of disassembly failure at a fixed confidence level. To solve it efficiently, a multiobjective multiverse optimization algorithm with stochastic simulation is proposed. Experiments are carried out on four products. The results demonstrate that it outperforms some state-of-the-art algorithms in terms of solution performance.
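A minimal sketch of the chance-constraint idea described here, assuming hypothetical per-operation failure probabilities (the multiverse optimizer itself is not reproduced): stochastic simulation estimates the probability that a candidate sequence completes, and the chance constraint retains only sequences that succeed with at least the required confidence.

```python
import random

def success_probability(seq_fail_probs, trials=20000, rng=None):
    """Monte-Carlo estimate of the probability that every operation in a
    disassembly sequence succeeds, given per-operation failure probabilities."""
    rng = rng or random.Random(42)
    ok = 0
    for _ in range(trials):
        if all(rng.random() >= p for p in seq_fail_probs):
            ok += 1
    return ok / trials

def satisfies_chance_constraint(seq_fail_probs, alpha, trials=20000):
    """Chance constraint: the sequence must succeed with probability >= alpha."""
    return success_probability(seq_fail_probs, trials) >= alpha
```

In a chance-constrained model, only sequences passing this check would be candidates for the profit/energy objectives.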
- Published
- 2022
37. Interval type-2 fuzzy programming method for risky multicriteria decision-making with heterogeneous relationship
- Author
-
Xiaowei Gu, Francisco Chiclana, Peide Liu, Jianpeng Long, Guolin Tang, and Fubin Wang
- Subjects
QA75 ,Regret theory ,Mathematical optimization ,Information Systems and Management ,Computer science ,Regret ,Evolutionary computation ,Interval (mathematics) ,Multiple-criteria decision analysis ,Fuzzy logic ,Bounded rationality ,Computer Science Applications ,Theoretical Computer Science ,2-additive fuzzy measure ,Ranking ,Artificial Intelligence ,Control and Systems Engineering ,Risky multicriteria decision making ,Programming paradigm ,Interval type-2 fuzzy set ,Heterogeneous relationship ,Pairwise comparison ,Software - Abstract
We propose a new interval type-2 fuzzy (IT2F) programming method for risky multicriteria decision-making (MCDM) problems with IT2F truth degrees, where the criteria exhibit a heterogeneous relationship and decision-makers behave according to bounded rationality. First, we develop a technique to calculate the Banzhaf-based overall perceived utility values of alternatives based on 2-additive fuzzy measures and regret theory. Subsequently, considering pairwise comparisons of alternatives with IT2F truth degrees, we define the Banzhaf-based IT2F risky consistency index (BIT2FRCI) and the Banzhaf-based IT2F risky inconsistency index (BIT2FRII). Next, to identify the optimal weights, an IT2F programming model is established based on the concept that BIT2FRII must be minimized and must not exceed the BIT2FRCI using a fixed IT2F set. Furthermore, we design an effective algorithm using an external archive-based constrained state transition algorithm to solve the established model. Accordingly, the ranking order of alternatives is derived using the Banzhaf-based overall perceived utility values. Experimental studies pertaining to investment selection problems demonstrate the state-of-the-art performance of the proposed method, that is, its strong capability in addressing risky MCDM problems.
- Published
- 2022
38. Long-Term Voltage Stability-Constrained Coordinated Scheduling for Gas and Power Grids With Uncertain Wind Power
- Author
-
Chong Wang, Ping Ju, Feng Wu, Shunbo Lei, and Xueping Pan
- Subjects
Electric power system ,Wind power ,Renewable Energy, Sustainability and the Environment ,business.industry ,Control theory ,Computer science ,Scheduling (production processes) ,Programming paradigm ,Piecewise ,Relaxation (approximation) ,business ,Power (physics) ,Voltage - Abstract
As demand grows and gas systems and wind power introduce new challenges, power systems increasingly operate close to their bounds. This paper therefore investigates long-term voltage stability-constrained optimal scheduling of integrated electric and gas systems with wind energy integration. A sufficient condition, represented as an explicit function of voltage and injected power, is used to constrain power system long-term voltage stability. Because of bilinear terms in this condition, a tightening piecewise McCormick envelope relaxation is used to convert it into convex constraints. A second-order cone programming (SOCP) formulation represents the operational constraints of the integrated electric and gas system. The loss-of-wind-power probability, which captures wind power uncertainty, is handled by a chance-constrained programming model that is transformed into a deterministic optimization model by means of the star-inequality-based extended formulation of sample average approximation. Two test systems, the 9-bus electric system with the 6-node gas system and the IEEE 118-bus electric system with the 40-node gas system, are used to validate the proposed model.
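The piecewise McCormick step in this abstract relies on the classic McCormick envelope for a bilinear term w = x·y over a box; a small sketch (bounds and variable names are illustrative, not the paper's):

```python
def mccormick_bounds(x, y, xl, xu, yl, yu):
    """McCormick envelope for the bilinear term w = x*y on [xl,xu] x [yl,yu]:
    returns (lo, hi) such that lo <= x*y <= hi everywhere in the box, the
    standard convex relaxation of the bilinear constraint."""
    lo = max(xl * y + x * yl - xl * yl,   # two underestimators
             xu * y + x * yu - xu * yu)
    hi = min(xu * y + x * yl - xu * yl,   # two overestimators
             xl * y + x * yu - xl * yu)
    return lo, hi
```

Partitioning [xl, xu] into pieces and applying the envelope on each piece, as the paper's tightening piecewise relaxation does, shrinks the gap between lo and hi.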
- Published
- 2022
39. An optimal Islamic investment decision in two-region economy: The case of Indonesia and Malaysia
- Author
-
Ferry Syarifuddin, Ali Sakti, and Toni Bakhtiar
- Subjects
QA299.6-433 ,General Decision Sciences ,Islam ,Investment (macroeconomics) ,HF5691-5716 ,Market liquidity ,Business mathematics. Commercial arithmetic. Including tables, etc ,Investment decisions ,Economy ,Work (electrical) ,Programming paradigm ,Economics ,Expected utility maximization ,Analysis ,Budget constraint - Abstract
In this work, the possibility of cross-border activities between two regions in the framework of an investment contract is viewed as an optimal allocation problem. The problem of determining the optimal proportion of funds to be invested in liquidity and technology is analyzed in two different environments. In the first case, we consider a two-region, two-technology economy in which both regions possess the same productive technology or project but different streams of return. In the second case, we examine an economy in which two regions (Indonesia and Malaysia) hold different Islamic productive projects with identical returns. Allocation models are formulated as investors’ expected utility maximization problems under budget constraints with respect to regional and sectoral shocks. It is revealed that the optimal parameters for the liquidity ratio, the technological investment profile, and bank repayment are analytically characterized by the return of the more productive project and the proportions of impatient and patient investors in the region. Even though the two cases employ different assumptions, they yield the same expressions for the optimal parameters. The model suggests that cross-border Islamic investment activities between two regions can be realized provided both regions hold productive projects with an identical stream of return. This paper also shows that raising the lower project return toward the higher one creates room for inter-regional investment. An analytical framework for an investment contract in terms of an optimal allocation model is provided.
- Published
- 2022
40. Solving Partially Observable Environments with Universal Search Using Dataflow Graph-Based Programming Model
- Author
-
Swarna Kamal Paul and Parama Bhaumik
- Subjects
Theoretical computer science ,Dataflow ,Computer science ,Graph based ,Programming paradigm ,Observable ,Electrical and Electronic Engineering ,Computer Science Applications ,Theoretical Computer Science - Published
- 2021
41. Logically Parallel Communication for Fast MPI+Threads Applications
- Author
-
Hui Zhou, Martin Berzins, Damodar Sahasrabudhe, Rohit Zambre, Aparna Chandramowlishwaran, and Pavan Balaji
- Subjects
Computer science ,Semantics (computer science) ,Distributed computing ,010103 numerical & computational mathematics ,02 engineering and technology ,Supercomputer ,01 natural sciences ,Bottleneck ,Domain (software engineering) ,Computational Theory and Mathematics ,Parallel processing (DSP implementation) ,Hardware and Architecture ,Parallel communication ,Signal Processing ,0202 electrical engineering, electronic engineering, information engineering ,Programming paradigm ,Parallelism (grammar) ,020201 artificial intelligence & image processing ,0101 mathematics - Abstract
Supercomputing applications are increasingly adopting the MPI+threads programming model over the traditional “MPI everywhere” approach to better handle the disproportionate increase in the number of cores compared with other on-node resources. In practice, however, most applications observe slower performance with MPI+threads, primarily because of poor communication performance. Recent research efforts on MPI libraries address this bottleneck by mapping logically parallel communication, that is, operations that are not subject to MPI’s ordering constraints, to the underlying network parallelism. Domain scientists, however, typically do not expose such communication-independence information because the existing MPI-3.1 standard’s semantics can be limiting. Researchers had initially proposed user-visible endpoints to combat this issue, but such a solution requires intrusive changes to the standard (new APIs). The upcoming MPI-4.0 standard, on the other hand, allows applications to relax unneeded semantics and provides them with many opportunities to express logical communication parallelism. In this article, we show how MPI+threads applications can achieve high performance with logically parallel communication. Through application case studies, we compare the capabilities of the new MPI-4.0 standard with those of the existing one and of user-visible endpoints (an upper bound). Logical communication parallelism can boost the overall performance of an application by over 2×.
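As a toy analogy of the logically parallel communication this abstract describes, illustrating only the idea and no MPI API, the sketch below gives each thread its own channel, so FIFO ordering is enforced within a channel but never across channels, which is what lets independent operations map onto parallel network resources:

```python
import threading
import queue

def run(num_threads=4, msgs=50):
    """Toy model: one private channel per thread. Messages on different
    channels carry no mutual ordering constraint, so channels can progress
    independently, while each channel keeps its own FIFO order."""
    channels = [queue.Queue() for _ in range(num_threads)]

    def worker(tid):
        for i in range(msgs):
            channels[tid].put(i)  # ordered only relative to this channel

    threads = [threading.Thread(target=worker, args=(t,)) for t in range(num_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # Drain each channel; per-channel order survives arbitrary interleaving.
    return [[channels[t].get() for _ in range(msgs)] for t in range(num_threads)]
```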
- Published
- 2021
42. Integrated Resource Assignment and Scheduling Optimization With Limited Critical Equipment Constraints at an Automated Container Terminal
- Author
-
Jinlin Wan, Jianbiao Peng, Xi Wang, and Hui Li
- Subjects
Job shop scheduling ,business.industry ,Computer science ,Mechanical Engineering ,Scheduling (production processes) ,ComputerApplications_COMPUTERSINOTHERSYSTEMS ,Automation ,Computer Science Applications ,Reliability engineering ,Automotive Engineering ,Programming paradigm ,Operation time ,business ,Resource assignment - Abstract
With the advancement of automation in transportation, the need to improve the operating efficiency of container terminals has increased. The most important determinant of container-handling efficiency is the productivity of equipment such as quay cranes, automated lifting vehicles, storage yards, and yard cranes. Most previous studies have sought to optimize equipment assignment and scheduling independently and have considered only a loading or an unloading process. Because loading and unloading occur simultaneously and the equipment operations are highly interrelated, it is important to direct the operations in an integrated manner that reflects the characteristics of automated container terminals. This paper presents a new mixed-integer programming model for the integrated problem of assigning resources and scheduling, which also accounts for the limited quantity of critical equipment. To solve the integrated optimization model, a genetic algorithm (GA) is developed. Because critical equipment such as yard cranes is limited and thus restricts terminal efficiency, a sharing policy is proposed that improves the GA and shortens the operation time of both the loading and unloading processes. Experiments show that the improved GA can obtain optimal or near-optimal solutions in short CPU times and is therefore efficient in solving the integrated equipment assignment and scheduling problem. The results obtained with the sharing policy are superior to those obtained with a non-sharing approach.
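A GA of the kind this abstract relies on can be sketched in miniature. The toy below evolves a job permutation to minimize total completion time on one machine, a deliberately simplified stand-in for the paper's integrated assignment-and-scheduling objective (all parameters are invented):

```python
import random

def total_completion_time(order, ptimes):
    """Sum of job completion times for a given processing order."""
    t = 0
    total = 0
    for j in order:
        t += ptimes[j]
        total += t
    return total

def genetic_algorithm(ptimes, pop=30, gens=60, seed=1):
    """Tiny permutation GA: binary tournament selection, order crossover
    (OX), swap mutation, and elitist survival."""
    rng = random.Random(seed)
    n = len(ptimes)
    fitness = lambda ind: total_completion_time(ind, ptimes)
    population = [rng.sample(range(n), n) for _ in range(pop)]
    for _ in range(gens):
        children = []
        for _ in range(pop):
            a = min(rng.sample(population, 2), key=fitness)  # tournament
            b = min(rng.sample(population, 2), key=fitness)
            cut1, cut2 = sorted(rng.sample(range(n), 2))
            middle = a[cut1:cut2]                            # OX crossover
            rest = [g for g in b if g not in middle]
            child = rest[:cut1] + middle + rest[cut1:]
            if rng.random() < 0.2:                           # swap mutation
                i, j = rng.sample(range(n), 2)
                child[i], child[j] = child[j], child[i]
            children.append(child)
        population = sorted(population + children, key=fitness)[:pop]  # elitism
    return min(population, key=fitness)
```

On a single machine the shortest-processing-time order is provably optimal, which gives a convenient sanity check for the GA's output.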
- Published
- 2021
43. RWRoute: An Open-source Timing-driven Router for Commercial FPGAs
- Author
-
Christopher Lavin, Dirk Stroobandt, Pongstorn Maidee, Yun Zhou, and Alireza S. Kaviani
- Subjects
Router ,General Computer Science ,business.industry ,Computer science ,Embedded system ,Key (cryptography) ,Programming paradigm ,Electronic design automation ,Routing (electronic design automation) ,business ,Field-programmable gate array ,Compile time ,Domain (software engineering) - Abstract
One of the key obstacles to pervasive deployment of FPGA accelerators in data centers is their cumbersome programming model. Open-source tooling is suggested as a way to develop alternative EDA tools to remedy this issue. Open-source FPGA CAD tools have traditionally targeted academic hypothetical architectures, making them impractical for commercial devices. Recently, there have been efforts to develop open-source back-end tools targeting commercial devices. These tools claim to follow an alternate data-driven approach that allows them to be more adaptable to domain requirements such as faster compile time. In this paper, we present RWRoute, the first open-source timing-driven router for UltraScale+ devices. RWRoute is built on the RapidWright framework and includes the essential and pragmatic features found in commercial FPGA routers that are often missing from open-source tools. Another valuable contribution of this work is an open-source lightweight timing model with high-fidelity timing approximations. By leveraging a combination of architectural knowledge, repeating patterns, and extensive analysis of Vivado timing reports, we obtain a slightly pessimistic, lumped delay model within 2% average accuracy of Vivado for UltraScale+ devices. Compared to Vivado, RWRoute delivers a 4.9× compile-time improvement at the expense of a 10% Quality of Results (QoR) loss for 665 synthetic and six real designs. A main benefit of our router is enabling fast partial routing at the back end of a domain-specific flow. Our initial results indicate that more than a 9× compile-time improvement is achievable for partial routing. The results of this paper show how such a router can be beneficial for a low-touch flow to reduce dependency on commercial tools.
- Published
- 2021
44. Mathematical model for the scheduling of real-time applications in IoT using Dew computing
- Author
-
Ghazaleh Javadzadeh, Morteza Saberi Kamarposhti, and Amir Masoud Rahmani
- Subjects
business.product_category ,Computer science ,business.industry ,Distributed computing ,Cloud computing ,Internet traffic ,Theoretical Computer Science ,Scheduling (computing) ,Hardware and Architecture ,Scalability ,Internet access ,Programming paradigm ,The Internet ,business ,Resource management (computing) ,Software ,Information Systems - Abstract
The dew computing paradigm is emerging as a complement to cloud computing that covers its limitations. Independence is one of the essential features of dew computing: it can continue to provide services without an Internet connection. This characteristic allows dew computing to find a niche in real-time applications, whose importance in daily life keeps growing with the development of the Internet of Things. In this paper, a hierarchical cloud-fog-dew architecture is presented to overcome the limitations of cloud computing in real-time applications, such as latency and resource management. A Mixed Integer Non-Linear Programming model is also presented for the scheduling of real-time applications in the proposed architecture, aiming to reduce power consumption and Internet traffic. In addition, the proposed model is paired with the Non-dominated Sorting Genetic Algorithm II to provide scalability. The simulation results demonstrate that completing tasks in the dew computing layer can reduce Internet dependency while also reducing power consumption and traffic. As a result, under the proposed paradigm, the number of tasks missed due to outages or Internet disturbances is reduced.
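The core ranking step of NSGA-II, non-dominated sorting, can be sketched compactly (minimization assumed in every objective; this is an illustrative O(n²)-per-front version, not the paper's implementation):

```python
def non_dominated_fronts(points):
    """Partition objective vectors into Pareto fronts: front 0 holds the
    non-dominated points, front 1 those dominated only by front 0, etc.
    All objectives (e.g., power consumption and Internet traffic) minimized."""
    def dominates(p, q):
        return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

    remaining = list(range(len(points)))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i])
                            for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts
```

NSGA-II then prefers solutions from earlier fronts and breaks ties by crowding distance to keep the front spread out.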
- Published
- 2021
45. Soft Computing Methodology to Optimize the Integrated Dynamic Models of Cellular Manufacturing Systems in a Robust Environment
- Author
-
Amir-Mohammad Golmohammadi, Zaynab Akhoundpour Amiri, Hasan Rasay, Negar Balajeh, and Maryam Solgi
- Subjects
Soft computing ,Mathematical optimization ,Article Subject ,Artificial neural network ,Computer science ,General Mathematics ,Cellular manufacturing ,General Engineering ,Time horizon ,Engineering (General). Civil engineering (General) ,Nonlinear programming ,Genetic algorithm ,QA1-939 ,Programming paradigm ,TA1-2040 ,Metaheuristic ,Mathematics - Abstract
Machine learning, neural networks, and metaheuristic algorithms are relatively new, closely related subjects: learning is an intrinsic part of each. Meanwhile, cell formation (CF) and facility layout design are the two fundamental steps in implementing a cellular manufacturing system (CMS). For a successful CMS design, the interrelated decisions must be addressed simultaneously. In this article, a new nonlinear mixed-integer programming model is presented that comprehensively solves the integrated dynamic cell formation and inter/intracell layout problem in continuous space. In the proposed model, cells are configured in flexible shapes during the planning horizon, considering the cell capacity in each period. The study uses exact information about the facility layout design and material handling cost. The proposed model is an NP-hard mixed-integer nonlinear program. To optimize it, three metaheuristic algorithms, the Genetic Algorithm (GA), Keshtel Algorithm (KA), and Red Deer Algorithm (RDA), are first employed. Then, to further improve solution quality, a new metaheuristic algorithm is proposed that uses machine learning approaches to combine the results of the aforementioned algorithms. Numerical examples, sensitivity analyses, and comparisons of the algorithms' performance are conducted.
- Published
- 2021
46. Pricing policy in green supply chain design: the impact of consumer environmental awareness and green subsidies
- Author
-
Donya Rahmani, Maryam Shoaeinaeini, and Kannan Govindan
- Subjects
Numerical Analysis ,Operations research ,Computer science ,Strategy and Management ,Supply chain ,Particle swarm optimization ,Computational intelligence ,Subsidy ,Management Science and Operations Research ,Computational Theory and Mathematics ,Management of Technology and Innovation ,Modeling and Simulation ,Genetic algorithm ,Programming paradigm ,Benchmark (computing) ,Factory (object-oriented programming) ,Statistics, Probability and Uncertainty - Abstract
This paper presents a mixed-integer non-linear programming model to design a green closed-loop supply chain comprising hybrid plants, hybrid collection centers, customer zones, secondary markets, and disposal centers. To ensure a smooth reverse flow, our model determines a return rate for each customer zone with respect to the consumers’ environmental awareness and the optimal acquisition price offered for returned products. Considering the environmental awareness levels and optimal green levels, specific cost and price functions for green products are proposed. Further, a government subsidy as a financial incentive for manufacturers is considered to make the model more realistic and challenging. The effectiveness of the proposed model is analyzed with an illustrative example based on an Iranian straw factory, and several sensitivity analyses yield useful managerial insights. The illustrative example and several generated examples at all scales are then solved with the Sine Cosine Crow Search Algorithm, using an efficient solution representation based on priority-based encoding. As no benchmark is available in the related literature to validate the algorithm's results and evaluate its performance, Particle Swarm Optimization and a Genetic Algorithm are also used to solve the examples. Finally, the results of the different metaheuristic algorithms and the exact solution are compared in terms of CPU time and objective function value.
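The priority-based encoding mentioned in this abstract can be sketched on a toy transportation instance. The decoder below is a Gen-Cheng-style procedure with invented supplies, demands, and costs (not the paper's network): it turns a priority vector, the representation a metaheuristic would search over, into a feasible shipment plan.

```python
def decode_priority(priority, supply, demand, cost):
    """Decode a priority vector (one value per source node 0..m-1 and sink
    node m..m+n-1) into a feasible transportation plan: repeatedly take the
    highest-priority node with capacity left and ship on its cheapest
    feasible arc."""
    m, n = len(supply), len(demand)
    supply, demand = supply[:], demand[:]
    flows = {}
    while sum(supply) > 0 and sum(demand) > 0:
        live = [k for k in range(m + n)
                if (k < m and supply[k] > 0) or (k >= m and demand[k - m] > 0)]
        k = max(live, key=lambda k: priority[k])
        if k < m:   # source: ship to the cheapest sink with unmet demand
            i = k
            j = min((j for j in range(n) if demand[j] > 0), key=lambda j: cost[i][j])
        else:       # sink: receive from the cheapest source with stock left
            j = k - m
            i = min((i for i in range(m) if supply[i] > 0), key=lambda i: cost[i][j])
        q = min(supply[i], demand[j])
        flows[(i, j)] = flows.get((i, j), 0) + q
        supply[i] -= q
        demand[j] -= q
    return flows
```

A metaheuristic then only has to evolve the priority vector; every vector decodes to a feasible plan, which keeps the search space unconstrained.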
- Published
- 2021
47. Additional parallelization of existing MPI programs using SAPFOR
- Subjects
Multi-core processor ,business.industry ,Fortran ,Computer science ,Programming language ,Suite ,Maintainability ,Software development ,computer.software_genre ,Programming paradigm ,Code (cryptography) ,Parallelism (grammar) ,business ,computer ,computer.programming_language - Abstract
The SAPFOR and DVM systems are designed to simplify the development of parallel programs for scientific and technical computing. SAPFOR is a software development suite that aims to produce a parallel version of a sequential program in a semi-automatic way; fully automatic parallelization is also possible if the program is well-formed and satisfies certain requirements. SAPFOR uses the DVMH directive-based programming model to expose parallelism in the code. The DVMH model introduces the CDVMH and Fortran-DVMH (FDVMH) programming languages, which extend standard C and Fortran with parallelism specifications that are invisible to standard compilers.
We present an MPI-aware extension of the SAPFOR system that exploits the new features of the DVMH model to extend existing MPI programs with intra-node parallelism. This approach substantially reduces the effort of parallelizing MPI programs for accelerators and multicore processors while preserving the maintainability of the existing code. The extension has been implemented for both the Fortran and C programming languages. We use the NAS Parallel Benchmarks to evaluate the performance of the generated programs.
- Published
- 2021
48. Preemptive scheduling on unrelated machines with fractional precedence constraints
- Author
-
Tian Lan, Vaneet Aggarwal, and Dheeraj Peddireddy
- Subjects
Scheme (programming language) ,Mathematical optimization ,Schedule ,Job shop scheduling ,Computer Networks and Communications ,Computer science ,Preemption ,Theoretical Computer Science ,Matrix decomposition ,Set (abstract data type) ,Artificial Intelligence ,Hardware and Architecture ,Programming paradigm ,Computer Science::Operating Systems ,computer ,Computer Science::Distributed, Parallel, and Cluster Computing ,Software ,computer.programming_language - Abstract
Many programming models, e.g., MapReduce, introduce precedence constraints between jobs. This paper formalizes a notion of precedence constraints, called fractional precedence constraints, under which the progress of follower jobs only has to lag (fractionally) behind that of their leaders. For a general set of fractional precedence constraints between the jobs, this paper provides a new class of preemptive scheduling algorithms on unrelated machines with arbitrary processing speeds. In particular, for a given makespan, we establish both sufficient and necessary conditions for the existence of a feasible job schedule and, when the sufficient conditions are satisfied, propose an efficient scheduling algorithm based on a novel matrix decomposition method. The algorithm is shown to be a Polynomial-Time Approximation Scheme (PTAS), i.e., its solution achieves any feasible makespan within an approximation bound of 1 + ϵ for arbitrary ϵ > 0.
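A fractional precedence constraint of the kind formalized here can be checked directly on a preemptive schedule. In the sketch below (the data layout is my own, not the paper's), each job's cumulative progress is piecewise linear in time, so verifying the constraint at segment endpoints suffices:

```python
def feasible(schedule, speeds, constraints, horizon):
    """Check a preemptive schedule against fractional precedence constraints.
    schedule: {job: [(machine, start, end), ...]}, speeds: {machine: rate},
    constraints: [(leader, follower, f)] meaning that at every time t the
    follower's completed work may not exceed f * the leader's completed work."""
    def progress(job, t):
        # work done on `job` by time t, summed over its machine segments
        return sum(speeds[m] * max(0.0, min(t, e) - s)
                   for m, s, e in schedule[job])

    # progress curves are piecewise linear, so the constraint g(t) =
    # progress(follower) - f * progress(leader) attains its maximum at a
    # segment breakpoint; checking those event times is enough.
    events = sorted({t for segs in schedule.values()
                     for _, s, e in segs for t in (s, e)} | {horizon})
    return all(progress(fol, t) <= f * progress(led, t) + 1e-9
               for led, fol, f in constraints for t in events)
```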
- Published
- 2021
49. A heuristic-based simulated annealing algorithm for the scheduling of relief teams in natural disasters
- Author
-
Zeinab Sazvar, Jafar Heydari, Sina Nayeri, and Reza Tavakkoli-Moghaddam
- Subjects
Mathematical optimization ,Schedule ,Emergency management ,Heuristic (computer science) ,Computer science ,business.industry ,Computational intelligence ,Time horizon ,Theoretical Computer Science ,Scheduling (computing) ,Simulated annealing ,Programming paradigm ,Geometry and Topology ,business ,Software - Abstract
Natural disasters cause heavy casualties and financial losses every year. To reduce these damages, rescue teams need to be planned effectively. Accordingly, this research offers a mixed-integer programming model to allocate and schedule rescue teams in the response phase of disaster management under uncertainty. The objective function minimizes the incidents' total weighted completion time. The literature review shows that uncertain conditions and time windows have received little attention in previous studies. To cover these gaps, this paper investigates the problem under uncertainty and considers time windows for incidents. The fatigue effect is also considered, so incident processing times are not fixed within the planning horizon. Since the problem is NP-hard and exact methods cannot solve it within a reasonable amount of time, this research develops a heuristic-based simulated annealing algorithm. The presented model is solved using the developed algorithm and three well-known meta-heuristic algorithms, and the results are compared and analyzed. Finally, a sensitivity analysis is carried out on some crucial parameters of the model, and the related results are reported.
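A bare-bones version of such a heuristic-based simulated annealing algorithm might look as follows. The weights, processing times, and the WSPT dispatching rule inside the cost function are illustrative simplifications; the paper's full model also handles time windows and fatigue.

```python
import math
import random

def simulated_annealing(weights, ptimes, teams, seed=7, iters=4000):
    """Toy SA: assign incidents to rescue teams, minimizing total weighted
    completion time. The embedded heuristic serves each team's incidents in
    weighted-shortest-processing-time (WSPT) order."""
    rng = random.Random(seed)
    n = len(ptimes)

    def cost(assign):
        total = 0.0
        for team in range(teams):
            t = 0.0
            for j in sorted((j for j in range(n) if assign[j] == team),
                            key=lambda j: ptimes[j] / weights[j]):  # WSPT
                t += ptimes[j]
                total += weights[j] * t
        return total

    cur = [rng.randrange(teams) for _ in range(n)]
    cur_c = cost(cur)
    best, best_c = cur[:], cur_c
    temp = max(weights) * max(ptimes)          # initial temperature
    for _ in range(iters):
        cand = cur[:]
        cand[rng.randrange(n)] = rng.randrange(teams)  # reassign one incident
        c = cost(cand)
        # accept improvements always, worsenings with Boltzmann probability
        if c < cur_c or rng.random() < math.exp(-(c - cur_c) / max(temp, 1e-9)):
            cur, cur_c = cand, c
            if c < best_c:
                best, best_c = cand[:], c
        temp *= 0.999                          # geometric cooling
    return best, best_c
```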
- Published
- 2021
50. Intelligent transmission control layer for efficient node management in SDN
- Author
-
Hamza Aldabbas and Khalaf Khatatneh
- Subjects
Computer Networks and Communications ,Computer science ,Event (computing) ,business.industry ,Network packet ,Node (networking) ,Packet loss ,Control theory ,Programming paradigm ,Software-defined networking ,business ,Host (network) ,Software ,Computer network - Abstract
Software-Defined Networking (SDN) promises exciting new networking functionality. However, there always remains a chance of programming errors that result in unreliable data communication. The centralized programming model, in which a single controller manages the whole network, helps decrease the probability of bugs. Yet many real-time events occur at switches and end hosts that often disturb and delay the communication process. One such event is unannounced destination-host migration after flow rules have been installed and while data packets are being received. Such host movement results in packet loss because the controller is unaware of the recent event. An efficient approach is therefore needed to transmit packets without loss despite destination-host migration. This paper proposes a design that achieves this objective by defining a layer named the Intelligent Transmission Control Layer (ITCL). It monitors all the end hosts' connections at their specific locations and performs the necessary actions whenever the connection state changes for one or multiple hosts. The controller collects information on end nodes and state changes through ITCL using the A* search algorithm. It then updates the flow tables accordingly to accommodate a location-change scenario with a route-change policy. ITCL is developed as a prototype on the popular POX controller platform. Comparing ITCL with the existing solution, we conclude that the proposed approach performs efficiently in terms of packet loss, bandwidth usage, and network throughput.
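The A* search that ITCL reportedly uses for route recomputation can be sketched generically; the switch graph, link costs, and heuristic below are invented for illustration:

```python
import heapq

def a_star(graph, h, start, goal):
    """A* shortest path. graph: {node: {neighbor: link_cost}}, h: admissible
    heuristic {node: lower bound on remaining cost to goal}. Returns
    (total cost, path); with an admissible h the first goal pop is optimal."""
    open_heap = [(h[start], 0, start, [start])]
    closed = set()
    while open_heap:
        f, g, node, path = heapq.heappop(open_heap)
        if node == goal:
            return g, path
        if node in closed:
            continue
        closed.add(node)
        for nbr, w in graph[node].items():
            if nbr not in closed:
                heapq.heappush(open_heap, (g + w + h[nbr], g + w, nbr, path + [nbr]))
    return float("inf"), []
```

After a host migrates, the controller would rerun such a search from the ingress switch to the host's new attachment point and push the resulting path into the flow tables.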
- Published
- 2021