4,492 results for "Computing Systems"
Search Results
2. Investigating IoT-Enabled 6G Communications: Opportunities and Challenges
- Author
-
Belkeziz, Radia, Chefira, Reda, Tibssirte, Oumaima, Kacprzyk, Janusz, Series Editor, García Márquez, Fausto Pedro, editor, Jamil, Akhtar, editor, Ramirez, Isaac Segovia, editor, Eken, Süleyman, editor, and Hameed, Alaa Ali, editor
- Published
- 2024
- Full Text
- View/download PDF
3. Computing Diversity Paradigm for the Utilization of Unused Telephony and Marine Infrastructure
- Author
-
Periola, A and Obayiuwana, E.
- Published
- 2024
- Full Text
- View/download PDF
4. A Model of Design for Computing Systems: A Categorical Approach
- Author
-
Tage Mohammadat
- Subjects
Computing systems, computer-aided design (CAD), electronic design automation (EDA), embedded system design, architectural design, model-driven engineering, Electrical engineering. Electronics. Nuclear engineering, TK1-9971
- Abstract
This paper introduces the model of design (MoD), a framework that leverages category theory to study the design and development of computer-driven systems, to the academic and engineering communities dealing with computer systems. The model of design aims to offer a minimal framework for modelling the design and development of embedded computation across domains and abstractions, focusing on functional and extra-functional aspects as well as overarching concerns for automaticity, correctness and reuse. This nuanced approach provides insights into the theory and practice of computer systems design.
- Published
- 2023
- Full Text
- View/download PDF
5. Impact of Application Characteristics on Laser Energy Fluctuation in Integrated Photonic Switching Systems
- Author
-
Kang Wang, Huaxi Gu, Yintang Yang, Kun Wang, and Yue Wang
- Subjects
Application characteristics, laser energy fluctuation, photonic switch, computing systems, Applied optics. Photonics, TA1501-1820, Optics. Light, QC350-467
- Abstract
Laser energy cost is one of the primary energy budgets in integrated photonic switching systems. Traditional photonic switch testing injects random traffic into the switches, which only generates a "static" laser energy cost result. However, the laser's energy-per-bit performance fluctuates under different application process mapping scenarios. In this paper, we conducted experiments to study the influence of an application's process mapping on the photonic switches' injection traces. We then built a model that links the traces to the laser's energy-per-bit performance. To obtain the energy fluctuation results quickly and accurately, we propose a heuristic-based energy boundary searching methodology that incorporates this model, and we analyze its speedup and convergence. Two photonic switches are studied under five kinds of application traces. The study shows over 60% search speedup and 90% accuracy in most cases compared with the enumeration method, and the lasers' energy fluctuations vary from nearly 0% to over 150%. We further analyze the factors inducing such large variations in fluctuation, and we propose and discuss a qualitative criterion that predicts their magnitude.
- Published
- 2023
- Full Text
- View/download PDF
6. A method for documenting architectural solutions of computing platforms
- Author
-
Yaroslav G. Gorbachev
- Subjects
architecture, architectural description, computing systems, computing mechanisms, reconfigurable systems, Optics. Light, QC350-467, Electronic computers. Computer science, QA75.5-76.95
- Abstract
The article describes a method for documenting the principles of functioning and internal organization of computing platforms, including reconfigurable computing systems and non-standard processor architectures. The novelty lies in the use of unified tools to describe the design process and the computing process; hardware, software, and tools; and computing components of different granularity. The proposed approach is to describe a computing platform as an ideal model that represents abstract algorithms for fulfilling functional requirements without specifying how to implement them. Iterative model refinement then follows, including the selection of physical implementation options, the specification of technology stacks, and the addition of mechanisms that provide the specified system qualities. A feature of the method is a kernel used for structuring information and for classifying and describing the computational mechanisms the system consists of. The kernel includes elements common to different systems and is based on the analysis of a large number of computing architectures. The method describes the principles of organization of platforms that are usually not considered together: general-purpose processors with a classical architecture that evolved from the von Neumann principles; systems based on microcontrollers; operating systems; large- and small-grained reconfigurable systems; specialized processors and accelerators; and artificial neural networks. The proposed method can be used to structure information in both traditional and rapidly developing areas such as reconfigurable systems and specialized processors. Based on the method, it is possible to create a common database of computing mechanisms suitable for use in different functional units of a system and at different levels of granularity. The results of the work can be useful for system architects describing complex computing mechanisms consisting of software, hardware, and dynamically generated adaptive "intelligent" components, which will simplify their reuse and support the generation of new architectural solutions. The proposed method can also be used in training specialists, as a visual demonstration of the basic principles of computer technology.
- Published
- 2022
- Full Text
- View/download PDF
7. Journal on Interactive Systems
- Subjects
interactive systems, computing systems, interface evaluation, interaction design, applied computer science, Computer software, QA76.75-76.765, Computer engineering. Computer hardware, TK7885-7895
- Published
- 2023
8. Development of technology for controlling access to digital portals and platforms based on estimates of user reaction time built into the interface
- Author
-
S. G. Magomedov, P. V. Kolyasnikov, and E. V. Nikulchev
- Subjects
access control, digital platforms, big data processing, security event management, information security, computing systems, user behavior analysis, Information theory, Q350-390
- Abstract
The paper addresses the development of technology for controlling access to digital portals and platforms based on assessments of personal characteristics of user behavior built into the interface. In distributed digital platforms and portals that handle personal data, big data is collected and processed by specialized applications over computer networks. In accordance with the law, the data is stored on internal corporate servers and in data centers. Particular attention is paid to the tasks of differentiating and controlling access in modern information systems. Wide availability and mass-scale services must be accompanied by more careful control and user verification. Access control for such systems cannot be ensured through information security technologies and tools alone; effectiveness can be increased through software and hardware architectural solutions. The paper proposes to extend the currently developing SIEM (Security Information and Event Management) technology, which combines security event management and information security management, with user behavior analysis blocks. As a characteristic that can be measured without overloading communication channels and that is independent of the type of device used, psychomotor reaction time is proposed, measured as the time taken to perform actions in the interface. A technological solution has been developed for implementation in a wide range of digital platforms: banking, medical, educational, etc. The results of experimental research using a digital platform for mass psychological research are presented. For the research, data from a mass survey were used, in which respondents answered questions about their level of education by choosing from the available options. Analysis of the reaction time data showed the possibility of standardization and consistent indicators for specific users when answering different questions. (An illustrative reaction-time check follows this entry.)
- Published
- 2020
- Full Text
- View/download PDF
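The psychomotor reaction-time check described in entry 8 can be illustrated in a few lines. The sketch below is a hypothetical, minimal version (not the authors' system): it compares the mean reaction time of a session against a stored per-user profile using a z-score; function names, data, and the threshold are all illustrative assumptions.

    import statistics

    def reaction_time_check(profile_times_ms, session_times_ms, z_threshold=3.0):
        """Flag a session whose mean reaction time deviates strongly from the
        user's stored profile. Threshold is illustrative, not from the paper."""
        mu = statistics.mean(profile_times_ms)
        sigma = statistics.stdev(profile_times_ms)
        z = abs(statistics.mean(session_times_ms) - mu) / sigma if sigma else 0.0
        return z <= z_threshold   # True: consistent with the stored profile

    # Stored profile vs. a suspiciously fast session (synthetic numbers).
    profile = [640, 710, 680, 655, 699, 672, 705, 661]
    print(reaction_time_check(profile, [210, 190, 205]))   # False -> escalate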
9. Systemic Integrated Unmanned Aerial System.
- Author
-
Ragab, Ahmed Refaat, Peña, Pablo Flores, Luna, Marco A., and Isaac, Mohammad Sadeq Ale
- Subjects
COMPUTER systems, APPLICATION software, OPERATING costs, NETWORK PC (Computer)
- Abstract
A systemic integrated Unmanned Aerial System (UAS) is the result of gathering subsystems into one complete system. This integration is done in order to improve system performance, reduce operational costs, and improve the system's response time. Normally, such systems are integrated using techniques such as communication processes and computer networking. In this paper, a new integrated system is implemented by functionally linking computing systems and software applications together into one powerful system. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
10. Fast generation of sparse random kernel graphs
- Author
-
Du, Wen [Beihang Univ. (China)]
- Published
- 2015
- Full Text
- View/download PDF
11. A Survey of Techniques for Modeling and Improving Reliability of Computing Systems
- Author
-
Vetter, Jeffrey [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Future Technologies Group; Georgia Inst. of Technology, Atlanta, GA (United States)]
- Published
- 2015
- Full Text
- View/download PDF
12. The evolution of distributed computing systems: from fundamental to new frontiers.
- Author
-
Lindsay, Dominic, Gill, Sukhpal Singh, Smirnova, Daria, and Garraghan, Peter
- Subjects
DISTRIBUTED computing, COMPUTER systems, MOORE'S law, GRID computing, COMPUTER science, EDGE computing, CLOUD computing
- Abstract
Distributed systems have been an active field of research for over 60 years and have played a crucial role in computer science, enabling the invention of the Internet that underpins all facets of modern life. Through technological advancements and their changing role in society, distributed systems have undergone a perpetual evolution, with each change resulting in the formation of a new paradigm. Each new distributed system paradigm, of which prominent modern examples include cloud computing, Fog computing, and the Internet of Things (IoT), allows for new forms of commercial and artistic value, yet also ushers in new research challenges that must be addressed in order to realize and enhance their operation. However, it is necessary to precisely identify what factors drive the formation and growth of a paradigm, and how unique the research challenges within modern distributed systems are in comparison to prior generations of systems. The objective of this work is to study and evaluate the key factors that have influenced and driven the evolution of distributed system paradigms, from early mainframes and the inception of the global inter-network to contemporary systems such as edge computing, Fog computing and IoT. Our analysis highlights that the assumptions driving distributed systems appear to be changing, including (1) an accelerated fragmentation of paradigms driven by commercial interests and the physical limitations imposed by the end of Moore's law, (2) a transition away from generalized architectures and frameworks towards increasing specialization, and (3) each paradigm architecture pivoting in some form between centralized and decentralized coordination. Finally, we discuss present-day and future challenges of distributed systems research pertaining to studying complex phenomena at scale and the role of distributed systems research in the context of climate change. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
13. Large-Scale Computing Systems Workload Prediction Using Parallel Improved LSTM Neural Network
- Author
-
Xiaoyong Tang
- Subjects
Workload prediction, computing systems, LSTM, neural network, parallel, Electrical engineering. Electronics. Nuclear engineering, TK1-9971
- Abstract
In recent years, large-scale computing systems have become an important part of the computing infrastructure. Resource management based on workload prediction is an effective way to improve application efficiency. However, accuracy and real-time operation are the key challenges for workload prediction models. In this paper, we first investigate the dependence on historical workload in large-scale computing systems and build a two-dimensional (day and time) time-series workload model. We then design a two-dimensional long short-term memory (LSTM) neural network cell structure. Based on this, we propose an improved LSTM prediction model, providing its mathematical description and an error back-propagation method. Furthermore, to meet the real-time requirements of system resource management, we provide a parallel improved LSTM algorithm that uses hidden-layer week-based dependence and a weight parallelization algorithm. Comparative studies based on the actual workload of the Shanghai Supercomputer Center demonstrate that the proposed improved LSTM neural network prediction model achieves higher accuracy and better real-time performance in large-scale computing systems. (A toy LSTM forecaster sketch follows this entry.)
- Published
- 2019
- Full Text
- View/download PDF
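To make the forecasting task in entry 13 concrete, here is a minimal PyTorch sketch of an LSTM that maps a window of past workload samples to the next value. It is a toy stand-in on synthetic data; the paper's two-dimensional day/time cell structure and week-based weight parallelization are not reproduced.

    import torch
    import torch.nn as nn

    class WorkloadLSTM(nn.Module):
        """Plain LSTM forecaster: window of past samples -> next sample."""
        def __init__(self, hidden=32):
            super().__init__()
            self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
            self.head = nn.Linear(hidden, 1)

        def forward(self, x):                # x: (batch, window, 1)
            out, _ = self.lstm(x)
            return self.head(out[:, -1])     # predict the next workload sample

    model = WorkloadLSTM()
    windows = torch.randn(8, 24, 1)          # 8 synthetic 24-step windows
    print(model(windows).shape)              # torch.Size([8, 1])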
14. User-Driven Sampling Strategies in Image Exploitation
- Author
-
Porter, Reid [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)]
- Published
- 2013
- Full Text
- View/download PDF
15. Hybrid Distributed Computing Service Based on the DIRAC Interware
- Author
-
Gergel, Victor, Korenkov, Vladimir, Pelevanyuk, Igor, Sapunov, Matvey, Tsaregorodtsev, Andrei, Zrelov, Petr, Chen, Phoebe, Series editor, Du, Xiaoyong, Series editor, Filipe, Joaquim, Series editor, Kara, Orhun, Series editor, Kotenko, Igor, Series editor, Liu, Ting, Series editor, Sivalingam, Krishna M., Series editor, Washio, Takashi, Series editor, Kalinichenko, Leonid, editor, Kuznetsov, Sergei O., editor, and Manolopoulos, Yannis, editor
- Published
- 2017
- Full Text
- View/download PDF
16. Judging a book by its cover: significance of UX design in gamification and computing systems
- Author
-
Sushra, Tulasi, Iyengar, Nitya, Shah, Manan, and Kshirsagar, Ameya
- Published
- 2022
- Full Text
- View/download PDF
17. Commentary: Reichenbach’s Verbal Tenses in the Context of Discovery About Computing Systems
- Author
-
Tamburrini, Guglielmo, Magnani, Lorenzo, Series editor, and Santoianni, Flavia, editor
- Published
- 2016
- Full Text
- View/download PDF
18. A Model-of-Design for Computing Systems : A Categorical Approach
- Author
-
Mohammadat, Tage
- Abstract
This paper introduces the model of design (MoD), a framework that leverages category theory to study the design and development of computer-driven systems, to the academic and engineering communities dealing with computer systems. The model of design aims to offer a minimal framework for modelling the design and development of embedded computation across domains and abstractions, focusing on functional and extra-functional aspects as well as overarching concerns for automaticity, correctness and reuse. This nuanced approach provides insights into the theory and practice of computer systems design.
- Published
- 2023
- Full Text
- View/download PDF
19. Accurate and Robust Malware Detection: Running XGBoost on Runtime Data From Performance Counters
- Author
-
Shubham Mathur, Robert Wille, Wolfgang Ecker, Rana Elnaggar, Lorenzo Servadei, and Krishnendu Chakrabarty
- Subjects
Software_OPERATINGSYSTEMS, Computer science, Early detection, Time data, Computer security, Computer Graphics and Computer-Aided Design, Computing systems, ComputingMilieux_MANAGEMENTOFCOMPUTINGANDINFORMATIONSYSTEMS, Classifier (linguistics), Detection performance, Malware, Electrical and Electronic Engineering, Software, Vulnerability (computing)
- Abstract
Malware applications are one of the major threats that computing systems face today. While security researchers develop new defense mechanisms to detect malware, attackers continue to release new malware families that evade detection. New defense mechanisms must therefore be developed to effectively counter malware. Hardware Performance Counters (HPCs) have recently been proposed as a means to detect malware. However, recent work has also shown that malware detection is not effective when performance counters are sampled in realistic scenarios. We show how proper data pre-processing and the XGBoost classifier can improve the performance of malware detection using HPCs by at least 15%. We also show that the proposed method can detect malware early (shortly after its launch) by classifying HPC datastreams at short time intervals. In addition, we propose a multi-temporal classification model that ensures the early detection of a high percentage of malware while maintaining overall low false positive rates. Finally, we show that, through robust training, the XGBoost classifier becomes up to 50x less vulnerable to adversarial attacks intended to undermine its malware detection performance. (An illustrative XGBoost sketch follows this entry.)
- Published
- 2022
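A compact sketch of the classification step in entry 19, using the xgboost and scikit-learn packages on synthetic stand-in data; the paper's HPC datastream pre-processing, multi-temporal model, and robust training are not reproduced here.

    import numpy as np
    from xgboost import XGBClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    # Synthetic stand-in for windowed performance-counter readings.
    benign  = rng.normal(0.0, 1.0, size=(500, 6))    # 6 counters per window
    malware = rng.normal(0.7, 1.2, size=(500, 6))
    X = np.vstack([benign, malware])
    y = np.array([0] * 500 + [1] * 500)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = XGBClassifier(n_estimators=100, max_depth=4)
    clf.fit(X_tr, y_tr)
    print("held-out accuracy:", clf.score(X_te, y_te))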
20. Near Volatile and Non-Volatile Memory Processing in 3D Systems
- Author
-
Masoumeh Ebrahimi, Pooria M. Yaghini, Maryam S. Hosseini, and Nader Bagherzadeh
- Subjects
Computer science, Stacking, Computing systems, Bottleneck, Die (integrated circuit), Computer Science Applications, Human-Computer Interaction, Non-volatile memory, Embedded system, Computer Science (miscellaneous), Technology scaling, Energy (signal processing), DRAM, Information Systems
- Abstract
The cost of transferring data between the off-chip memory system and the compute unit is the fundamental energy and performance bottleneck in modern computing systems. With the advent of emerging data-intensive applications and technology scaling, this bottleneck has continued to grow. To overcome these difficulties, Near Memory Processing (NMP) based on 3D die stacking has become a potential technology for transforming computation-centric systems into memory-centric ones. In this work, we explore the feasibility and efficacy of an NMP architecture based on an emerging Non-Volatile Memory (NVM) technology for data-intensive applications and compare it with a conventional 3D-stacked NMP architecture based on DRAM. We demonstrate the effectiveness of our approach with experimental results.
- Published
- 2022
21. Organization of Onboard Digital Computer System with Reconfiguration
- Author
-
Kniga, Ekaterina, Shukalov, Anatoly, Paramonov, Pavel, Junqueira Barbosa, Simone Diniz, Series editor, Chen, Phoebe, Series editor, Cuzzocrea, Alfredo, Series editor, Du, Xiaoyong, Series editor, Filipe, Joaquim, Series editor, Kara, Orhun, Series editor, Kotenko, Igor, Series editor, Sivalingam, Krishna M., Series editor, Ślęzak, Dominik, Series editor, Washio, Takashi, Series editor, Yang, Xiaokang, Series editor, Dudin, Alexander, editor, Nazarov, Anatoly, editor, Yakupov, Rafael, editor, and Gortsev, Alexander, editor
- Published
- 2014
- Full Text
- View/download PDF
22. Fractional-order quantum particle swarm optimization.
- Author
-
Xu, Lai, Muhammad, Aamir, Pu, Yifei, Zhou, Jiliu, and Zhang, Yi
- Subjects
PARTICLE swarm optimization, FRACTIONAL calculus, QUANTUM mechanics, EVOLUTIONARY computation, DEFINITIONS, NANOELECTRONICS
- Abstract
Motivated by the concepts of quantum mechanics and particle swarm optimization (PSO), quantum-behaved particle swarm optimization (QPSO) was developed to achieve better global search ability. This paper proposes a new method to improve the global search ability of QPSO using fractional calculus (FC). Based on one of the most frequently used fractional differential definitions, the Grünwald-Letnikov (GL) definition, we introduce its discrete expression into the position updating of QPSO. Extensive experiments on well-known benchmark functions were performed to evaluate the performance of the proposed fractional-order quantum particle swarm optimization (FQPSO). The experimental results demonstrate its superior ability to reach optimal solutions on several different optimization problems. (A sketch of the GL coefficients follows this entry.) [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
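The Grünwald-Letnikov coefficients used in entry 22 follow a simple recurrence, shown below together with a discrete fractional "memory" term over a position history. How this term enters the QPSO position update is the paper's contribution and is deliberately not reproduced; the sketch only illustrates the GL machinery.

    def gl_coeffs(alpha, n_terms):
        """Coefficients (-1)^k * C(alpha, k) of the Grunwald-Letnikov
        expansion, via the recurrence c_k = c_{k-1} * -(alpha - k + 1) / k."""
        c = [1.0]
        for k in range(1, n_terms):
            c.append(c[-1] * -(alpha - k + 1) / k)
        return c

    def fractional_memory(history, alpha):
        """Discrete GL approximation of D^alpha x at the newest sample
        (unit step), weighting past positions by the GL coefficients."""
        coeffs = gl_coeffs(alpha, len(history))
        return sum(c * x for c, x in zip(coeffs, reversed(history)))

    print(gl_coeffs(0.5, 5))        # [1.0, -0.5, -0.125, -0.0625, -0.0390625]
    print(fractional_memory([1.0, 1.2, 1.3, 1.5], alpha=0.5))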
23. coreMRI: A high-performance, publicly available MR simulation platform on the cloud.
- Author
-
Xanthis, Christos G. and Aletras, Anthony H.
- Subjects
COMPUTER programming, BLOCH equations, GRAPHICS processing units, COMPUTER systems, HUMAN anatomical models, MODULAR design
- Abstract
Introduction: A Cloud-ORiented Engine for advanced MRI simulations (coreMRI) is presented in this study. The aim was to develop the first advanced MR simulation platform delivered as a web service through an on-demand, scalable, cloud-based and GPU-based infrastructure. We hypothesized that such an online MR simulation platform could be utilized as a virtual MRI scanner and also as a cloud-based, high-performance engine for advanced MR simulations in simulation-based quantitative MR (qMR) methods. Methods and results: The simulation framework of coreMRI was based on the solution of the Bloch equations and utilized a ground-up design based on principles already published in the literature. The development of a front-end environment allowed end-users to connect to the GPU-equipped instances on the cloud. The coreMRI simulation platform was based on a modular design in which individual modules (such as the Gadgetron reconstruction framework and a newly developed Pulse Sequence Designer) could be inserted into the main simulation framework. Different types and sources of pulse sequences and anatomical models were utilized in this study, revealing the flexibility that the coreMRI simulation platform offers to users. The performance and scalability of coreMRI were also examined on multi-GPU configurations on the cloud, showing that a multi-GPU cloud computer equipped with a newer generation of GPU cards could significantly mitigate the prolonged execution times that accompany more realistic MRI and qMR simulations. Conclusions: coreMRI is available to the entire MR community, and its high performance and scalability allow users to configure advanced MRI experiments without the constraints imposed by experimentation on a real MRI scanner (such as time constraints and limited scanner availability), without upfront investment in advanced computer systems, and without user expertise in computer programming or MR physics. coreMRI is available to users through the project webpage. (A toy Bloch-equation step follows this entry.) [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
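Entry 23's engine solves the Bloch equations; as a frame of reference, here is a textbook toy step for free precession with T1/T2 relaxation (standard MR physics, not coreMRI code; RF pulses, sequences, and GPU aspects are omitted).

    import numpy as np

    def free_precession(M, t_ms, df_hz, T1_ms=1000.0, T2_ms=80.0):
        """Rotate M about z by the off-resonance phase accrued over t_ms,
        then relax toward equilibrium M0 = (0, 0, 1)."""
        phi = 2 * np.pi * df_hz * (t_ms / 1000.0)
        E1, E2 = np.exp(-t_ms / T1_ms), np.exp(-t_ms / T2_ms)
        Rz = np.array([[np.cos(phi), -np.sin(phi), 0.0],
                       [np.sin(phi),  np.cos(phi), 0.0],
                       [0.0, 0.0, 1.0]])
        M = Rz @ M
        return np.array([M[0] * E2, M[1] * E2, 1.0 + (M[2] - 1.0) * E1])

    M = np.array([1.0, 0.0, 0.0])      # magnetization right after a 90° pulse
    for _ in range(10):                # 10 ms of free precession, 1 ms steps
        M = free_precession(M, 1.0, df_hz=50.0)
    print(M)                           # transverse decay and phase, Mz recovery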
24. A new quantum approach to binary classification.
- Author
-
Sergioli, Giuseppe, Giuntini, Roberto, and Freytes, Hector
- Subjects
QUANTUM measurement, TENSOR products, QUANTUM states, CLASSIFICATION, PHYSICAL sciences, DENSITY matrices
- Abstract
This paper proposes a new quantum-like method for binary classification applied to classical datasets. Inspired by the quantum Helstrom measurement, this approach defines a new classifier, called the Helstrom Quantum Centroid (HQC). This binary classifier (inspired by the concept of distinguishability between quantum states) acts on density matrices, called density patterns, that are the quantum encoding of classical patterns of a dataset. In this paper we compare the performance of HQC with twelve standard (linear and non-linear) classifiers over fourteen different datasets. The experimental results show that HQC outperforms the other classifiers on Balanced Accuracy and other statistical measures. Finally, we show that the performance of our classifier is positively correlated with the number of "quantum copies" of a pattern and the resulting tensor product thereof. (A minimal HQC-style sketch follows this entry.) [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
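A minimal sketch of the HQC idea from entry 24 under simplifying assumptions: patterns amplitude-encoded as pure-state density matrices, per-class centroids averaged, and the Helstrom projector built from the centroid difference. The encoding and the 0.5 decision threshold are illustrative choices, not the paper's exact construction (which also studies "quantum copies" via tensor products).

    import numpy as np

    def density_pattern(x):
        """One simple quantum encoding: normalize x and form |x><x|."""
        v = np.asarray(x, dtype=float)
        v = v / np.linalg.norm(v)
        return np.outer(v, v)

    def helstrom_projector(rho_a, rho_b, p_a=0.5, p_b=0.5):
        """Projector onto the non-negative eigenspace of p_a*rho_a - p_b*rho_b."""
        w, V = np.linalg.eigh(p_a * rho_a - p_b * rho_b)
        keep = V[:, w >= 0]
        return keep @ keep.T

    def hqc_predict(x, centroid_a, centroid_b):
        P = helstrom_projector(centroid_a, centroid_b)
        return "A" if np.trace(density_pattern(x) @ P) >= 0.5 else "B"

    # Toy 2-D data: two classes concentrated along different directions.
    A = [density_pattern(v) for v in ([1, 0.1], [0.9, 0.2], [1, -0.1])]
    B = [density_pattern(v) for v in ([0.1, 1], [0.2, 0.9], [-0.1, 1])]
    ca, cb = sum(A) / len(A), sum(B) / len(B)
    print(hqc_predict([0.95, 0.05], ca, cb), hqc_predict([0.0, 1.0], ca, cb))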
25. Quantitative and qualitative evaluation of the impact of the G2 enhancer, bead sizes and lysing tubes on the bacterial community composition during DNA extraction from recalcitrant soil core samples based on community sequencing and qPCR.
- Author
-
Gobbi, Alex, Santini, Rui G., Filippi, Elisa, Ellegaard-Jensen, Lea, Jacobsen, Carsten S., and Hansen, Lars H.
- Subjects
SOIL composition, BACTERIAL communities, DRILL core analysis, SOIL sampling, DNA, NUCLEIC acid isolation methods
- Abstract
Soil DNA extraction encounters numerous challenges that can affect both yield and purity of the recovered DNA. Clay particles lead to reduced DNA extraction efficiency, and PCR inhibitors from the soil matrix can negatively affect downstream analyses when applying DNA sequencing. Further, these effects impede molecular analysis of bacterial community compositions in lower biomass samples, as often observed in deeper soil layers. Many studies avoid these complications by using indirect DNA extraction with prior separation of the cells from the matrix, but such methods introduce other biases that influence the resulting microbial community composition. To address these issues, a direct DNA extraction method was applied in combination with the use of a commercial product, the G2 DNA/RNA Enhancer, marketed as being capable of improving the amount of DNA recovered after the lysis step. The results showed that application of G2 increased DNA yields from the studied clayey soils from layers from 1.00 to 2.20 m. Importantly, the use of G2 did not introduce bias, as it did not result in any significant differences in the biodiversity of the bacterial community measured in terms of alpha and beta diversity and taxonomical composition. Finally, this study considered a set of customised lysing tubes for evaluating possible influences on the DNA yield. Tubes customization included different bead sizes and amounts, along with lysing tubes coming from two suppliers. Results showed that the lysing tubes with mixed beads allowed greater DNA recovery compared to the use of either 0.1 or 1.4 mm beads, irrespective of the tube supplier. These outcomes may help to improve commercial products in DNA/RNA extraction kits, besides raising awareness about the optimal choice of additives, offering opportunities for acquiring a better understanding of topics such as vertical microbial characterisation and environmental DNA recovery in low biomass samples. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
26. A versatile quantum walk resonator with bright classical light.
- Author
-
Sephton, Bereneice, Dudley, Angela, Ruffato, Gianluca, Romanato, Filippo, Marrucci, Lorenzo, Padgett, Miles, Goyal, Sandeep, Roux, Filippus, Konrad, Thomas, and Forbes, Andrew
- Subjects
SUPERPOSITION principle (Physics), RESONATORS, QUANTUM superposition, RANDOM walks, QUANTUM computing, QUANTUM mechanics
- Abstract
In a quantum walk (QW) the "walker" follows all possible paths at once through the principle of quantum superposition, in contrast to classical random walks, where one random path is taken at a time. This allows problem solution spaces to be searched faster than with classical random walks, and holds promise for advances in dynamical quantum simulation, biological process modelling and quantum computation. Here we employ a versatile and scalable resonator configuration to realise quantum walks with bright classical light. We experimentally demonstrate the versatility of our approach by implementing a variety of QWs, all on the same experimental platform, while the use of a resonator allows an arbitrary number of steps without scaling the number of optics. This paves the way for future QW implementations with spatial modes of light in free space that are both versatile and scalable. (A coined-walk simulation sketch follows this entry.) [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
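The coined discrete-time quantum walk behind entry 26 is easy to simulate classically at small sizes. The numpy sketch below runs a textbook Hadamard walk on a line (not the paper's resonator implementation) and shows the characteristic ballistic spread away from the origin.

    import numpy as np

    steps, n = 50, 101                             # positions -50 .. 50
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard coin
    psi = np.zeros((n, 2), dtype=complex)          # psi[position, coin]
    psi[n // 2] = [1 / np.sqrt(2), 1j / np.sqrt(2)]  # balanced start at centre

    for _ in range(steps):
        psi = psi @ H.T                 # toss the coin on every site
        shifted = np.zeros_like(psi)
        shifted[1:, 0] = psi[:-1, 0]    # coin state 0 steps right
        shifted[:-1, 1] = psi[1:, 1]    # coin state 1 steps left
        psi = shifted

    prob = (np.abs(psi) ** 2).sum(axis=1)
    print(prob.argmax() - n // 2, round(prob.sum(), 6))   # far from 0; total 1.0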
27. Early experience with Watson for oncology in Korean patients with colorectal cancer.
- Author
-
Kim, Eui Joo, Woo, Hyun Sun, Cho, Jae Hee, Sym, Sun Jin, Baek, Jeong-Heum, Lee, Won-Suk, Kwon, Kwang An, Kim, Kyoung Oh, Chung, Jun-Won, Park, Dong Kyun, and Kim, Yoon Jae
- Subjects
ONCOLOGY, CANCER patients, DECISION support systems, COGNITIVE computing, COLON cancer treatment, HEALTH care teams, MEDICAL decision making
- Abstract
Background: Watson for oncology (WFO) is a cognitive computing system providing decision support. We evaluated the concordance rates between the treatment options determined by WFO and those determined by a multidisciplinary team (MDT). Methods: We reviewed the medical charts of patients diagnosed with colorectal cancer who visited the MDT at a single tertiary medical center from November 2016 to April 2017. WFO classified the treatment options for specific patients into three categories: ‘Recommended’, ‘For consideration’, and ‘Not recommended’. Concordance rates between the WFO- and MDT-determined chemotherapy options, and the factors that potentially influence the concordance rate, were analyzed. Results: Sixty-nine patients with colorectal cancer met with the MDT from Nov. 2016 to Feb. 2017. The mean age of the patients was 62 years (range: 34–86 years), and more patients were male (47/69) than female. Of the 69 patients, 51 (73.9%) were diagnosed with colon cancer, of whom 46.4% received the same regimen recommendation from WFO (‘Recommended’) as they did from the MDT. After inclusion of the ‘For consideration’ category from WFO, the concordance rate increased to 87.0%. The concordance rate between MDT and NCCN guidelines was 97.1%, and that between the WFO and NCCN guidelines was 88.4%. The concordance rates between WFO and MDT were significantly lower in patients with stage II, IIIC, or IV disease (P<0.001), and the colorectal cancer stage was the only statistically significant factor discriminating between WFO and MDT. Conclusions: The concordance rate between chemotherapy regimens for colorectal cancer determined by MDT versus WFO recommendations was 46.4%. After including the ‘For consideration’ category from WFO, the concordance rate increased to 88.4%. Further modification and improvement of the WFO prioritizing algorithm used to recommend treatment may increase the usefulness of WFO in the clinic. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
28. QuantumInformation.jl—A Julia package for numerical computation in quantum information theory.
- Author
-
Gawron, Piotr, Kurzyk, Dariusz, and Pawela, Łukasz
- Subjects
QUANTUM information theory, PROGRAMMING languages, NUMERICAL analysis, VECTOR algebra, MATRICES (Mathematics)
- Abstract
Numerical investigations are an important research tool in quantum information theory. There already exists a wide range of computational tools for quantum information theory implemented in various programming languages. However, there has been little effort to implement this kind of tool in Julia. Julia is a modern programming language designed for numerical computation, with excellent support for vector and matrix algebra, an extended type system that allows for the implementation of elegant application interfaces, and support for parallel and distributed computing. QuantumInformation.jl is a new quantum information theory library implemented in Julia that provides functions for creating and analyzing quantum states, and for creating quantum operations in various representations. An additional feature of the library is a collection of functions for sampling random quantum states and operations such as unitary operations and generic quantum channels. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
29. Open source software in quantum computing.
- Author
-
Fingerhuth, Mark, Babej, Tomáš, and Wittek, Peter
- Subjects
OPEN source software, QUANTUM computing, OPEN source intelligence, PROGRAMMING languages, QUANTUM information science
- Abstract
Open source software is becoming crucial in the design and testing of quantum algorithms. Many of the tools are backed by major commercial vendors with the goal to make it easier to develop quantum software: this mirrors how well-funded open machine learning frameworks enabled the development of complex models and their execution on equally complex hardware. We review a wide range of open source software for quantum computing, covering all stages of the quantum toolchain from quantum hardware interfaces through quantum compilers to implementations of quantum algorithms, as well as all quantum computing paradigms, including quantum annealing, and discrete and continuous-variable gate-model quantum computing. The evaluation of each project covers characteristics such as documentation, licence, the choice of programming language, compliance with norms of software engineering, and the culture of the project. We find that while the diversity of projects is mesmerizing, only a few attract external developers and even many commercially backed frameworks have shortcomings in software engineering. Based on these observations, we highlight the best practices that could foster a more active community around quantum computing software that welcomes newcomers to the field, but also ensures high-quality, well-documented code. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
30. Benchmarking treewidth as a practical component of tensor network simulations.
- Author
-
Dumitrescu, Eugene F., Fisher, Allison L., Goodrich, Timothy D., Humble, Travis S., Sullivan, Blair D., and Wright, Andrew L.
- Subjects
FACTORIZATION, COMPUTER simulation, MANY-body problem, COMPUTATIONAL complexity, GRAPH theory
- Abstract
Tensor networks are powerful factorization techniques which reduce resource requirements for numerically simulating principal quantum many-body systems and algorithms. The computational complexity of a tensor network simulation depends on the tensor ranks and the order in which they are contracted. Unfortunately, computing optimal contraction sequences (orderings) in general is known to be a computationally difficult (NP-complete) task. In 2005, Markov and Shi showed that optimal contraction sequences correspond to optimal (minimum width) tree decompositions of a tensor network's line graph, relating the contraction sequence problem to a rich literature in structural graph theory. While treewidth-based methods have largely been ignored in favor of dataset-specific algorithms in the prior tensor networks literature, we demonstrate their practical relevance for problems arising from two distinct methods used in quantum simulation: multi-scale entanglement renormalization ansatz (MERA) datasets and quantum circuits generated by the quantum approximate optimization algorithm (QAOA). We exhibit multiple regimes where treewidth-based algorithms outperform domain-specific algorithms, while demonstrating that the optimal choice of algorithm has a complex dependence on the network density, expected contraction complexity, and user run time requirements. We further provide an open source software framework designed with an emphasis on accessibility and extendability, enabling replicable experimental evaluations and future exploration of competing methods by practitioners. (An illustrative contraction-ordering sketch follows this entry.) [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
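Entry 30's central point, that contraction order drives cost, can be seen on a three-tensor chain; numpy's einsum_path also exposes the ordering search (finding the truly optimal order is the NP-complete problem the abstract mentions). Sizes below are illustrative.

    import numpy as np

    d, r = 200, 2                      # fat outer indices, skinny middle ones
    A = np.random.rand(d, r); B = np.random.rand(r, d); C = np.random.rand(d, r)

    out1 = (A @ B) @ C                 # builds a d x d intermediate: ~2*d*d*r work
    out2 = A @ (B @ C)                 # builds an r x r intermediate: ~2*d*r*r work
    print(np.allclose(out1, out2))     # same tensor, vastly different cost

    path, info = np.einsum_path('ij,jk,kl->il', A, B, C, optimize='optimal')
    print(path)                        # the contraction sequence numpy chose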
31. Nonnegative/Binary matrix factorization with a D-Wave quantum annealer.
- Author
-
O’Malley, Daniel, Vesselinov, Velimir V., Alexandrov, Boian S., and Alexandrov, Ludmil B.
- Subjects
QUANTUM annealing, MATRICES (Mathematics), ALGORITHMS, MEDICAL sciences, MACHINE learning
- Abstract
D-Wave quantum annealers represent a novel computational architecture and have attracted significant interest. Much of this interest has focused on the quantum behavior of D-Wave machines, and there have been few practical algorithms that use the D-Wave. Machine learning has been identified as an area where quantum annealing may be useful. Here, we show that the D-Wave 2X can be effectively used as part of an unsupervised machine learning method. This method takes a matrix as input and produces two low-rank matrices as output: one containing latent features in the data, and another describing how the features can be combined to approximately reproduce the input matrix. Despite the limited number of bits in the D-Wave hardware, this method is capable of handling a large input matrix; the D-Wave only limits the rank of the two output matrices. We apply this method to learn the features from a set of facial images and compare the performance of the D-Wave to two classical tools. This method is able to learn facial features and accurately reproduce the set of facial images. The performance of the D-Wave shows some promise, but has some limitations. It outperforms the two classical codes in a benchmark when only a short amount of computational time is allowed (200-20,000 microseconds), but these results suggest heuristics that would likely outperform the D-Wave in this benchmark. (A classical alternating-factorization sketch follows this entry.) [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
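A classical sketch of the alternating scheme in entry 31, with brute-force enumeration standing in for the QUBO that the D-Wave anneals (feasible here because the rank k is tiny). It assumes numpy and scipy; all sizes are illustrative.

    import itertools
    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(1)
    m, n, k = 30, 40, 4
    V = rng.random((m, k)) @ rng.integers(0, 2, (k, n))   # planted factors

    A = rng.random((m, k))
    B = rng.integers(0, 2, (k, n)).astype(float)
    candidates = np.array(list(itertools.product([0.0, 1.0], repeat=k)))

    for _ in range(20):
        for j in range(n):             # binary step: posed as a QUBO on the D-Wave
            errs = np.linalg.norm(A @ candidates.T - V[:, [j]], axis=0)
            B[:, j] = candidates[errs.argmin()]
        for i in range(m):             # nonnegative step: classical NNLS
            A[i], _ = nnls(B.T, V[i])

    print("relative error:", np.linalg.norm(V - A @ B) / np.linalg.norm(V))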
32. qTorch: The quantum tensor contraction handler.
- Author
-
Fried, E. Schuyler, Sawaya, Nicolas P. D., Cao, Yudong, Kivlichan, Ian D., Romero, Jhonathan, and Aspuru-Guzik, Alán
- Subjects
QUANTUM computing, TENSOR algebra, HILBERT space, DATA modeling, INFORMATION design
- Abstract
Classical simulation of quantum computation is necessary for studying the numerical behavior of quantum algorithms, as there does not yet exist a large viable quantum computer on which to perform numerical tests. Tensor network (TN) contraction is an algorithmic method that can efficiently simulate some quantum circuits, often greatly reducing the computational cost over methods that simulate the full Hilbert space. In this study we implement a tensor network contraction program for simulating quantum circuits using multi-core compute nodes. We show simulation results for the Max-Cut problem on 3- through 7-regular graphs using the quantum approximate optimization algorithm (QAOA), successfully simulating up to 100 qubits. We test two different methods for generating the ordering of tensor index contractions: one is based on the tree decomposition of the line graph, while the other generates ordering using a straight-forward stochastic scheme. Through studying instances of QAOA circuits, we show the expected result that as the treewidth of the quantum circuit’s line graph decreases, TN contraction becomes significantly more efficient than simulating the whole Hilbert space. The results in this work suggest that tensor contraction methods are superior only when simulating Max-Cut/QAOA with graphs of regularities approximately five and below. Insight into this point of equal computational cost helps one determine which simulation method will be more efficient for a given quantum circuit. The stochastic contraction method outperforms the line graph based method only when the time to calculate a reasonable tree decomposition is prohibitively expensive. Finally, we release our software package, qTorch (Quantum TensOR Contraction Handler), intended for general quantum circuit simulation. For a nontrivial subset of these quantum circuits, 50 to 100 qubits can easily be simulated on a single compute node. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
33. Validating quantum-classical programming models with tensor network simulations.
- Author
-
McCaskey, Alexander, Dumitrescu, Eugene, Chen, Mengsu, Lyakh, Dmitry, and Humble, Travis
- Subjects
WAVE functions, QUBITS, GRAPHICS processing units, ALGORITHMS, HARDWARE
- Abstract
The exploration of hybrid quantum-classical algorithms and programming models on noisy near-term quantum hardware has begun. As hybrid programs scale towards classical intractability, validation and benchmarking are critical to understanding the utility of the hybrid computational model. In this paper, we demonstrate a newly developed quantum circuit simulator based on tensor network theory that enables intermediate-scale verification and validation of hybrid quantum-classical computing frameworks and programming models. We present our tensor-network quantum virtual machine (TNQVM) simulator which stores a multi-qubit wavefunction in a compressed (factorized) form as a matrix product state, thus enabling single-node simulations of larger qubit registers, as compared to brute-force state-vector simulators. Our simulator is designed to be extensible in both the tensor network form and the classical hardware used to run the simulation (multicore, GPU, distributed). The extensibility of the TNQVM simulator with respect to the simulation hardware type is achieved via a pluggable interface for different numerical backends (e.g., ITensor and ExaTENSOR numerical libraries). We demonstrate the utility of our TNQVM quantum circuit simulator through the verification of randomized quantum circuits and the variational quantum eigensolver algorithm, both expressed within the eXtreme-scale ACCelerator (XACC) programming model. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
34. Quantum++: A modern C++ quantum computing library.
- Author
-
Gheorghiu, Vlad
- Subjects
C++, QUBITS, RANDOM access memory, LOGIC circuits, SOFTWARE compatibility
- Abstract
Quantum++ is a modern general-purpose multi-threaded quantum computing library written in C++11 and composed solely of header files. The library is not restricted to qubit systems or specific quantum information processing tasks, being capable of simulating arbitrary quantum processes. The main design factors taken into consideration were ease of use, portability, and performance. The library's simulation capabilities are only restricted by the amount of available physical memory. On a typical machine (Intel i5, 8 GB RAM) Quantum++ can successfully simulate the evolution of 25 qubits in a pure state or of 12 qubits in a mixed state reasonably fast. The library also includes support for classical reversible logic, being able to simulate classical reversible operations on billions of bits. This latter feature may be useful in testing quantum circuits composed solely of Toffoli gates, such as certain arithmetic circuits. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
35. Breaking the Interaction Wall: A DLPU-Centric Deep Learning Computing System
- Author
-
Qi Guo, Zhiwei Xu, Zeng Xi, Zidong Du, Zhao Yongwei, Yunji Chen, Limin Cheng, Li Ling, and Ninghui Sun
- Subjects
Computer science, Deep learning, Computation, Scalar (physics), Parallel computing, Computing systems, Theoretical Computer Science, Computational Theory and Mathematics, Hardware and Architecture, Task analysis, Process control, Artificial intelligence, Central processing unit, Field-programmable gate array, Software
- Abstract
Due to the broad successes of deep learning, many CPU-centric artificial intelligence computing systems employ specialized devices such as GPUs, FPGAs, and ASICs, which can be termed Deep Learning Processing Units (DLPUs), for processing computation-intensive deep learning tasks. The separation between the scalar control operations mapped onto CPUs and the vector computation operations mapped onto DLPUs causes frequent and costly interactions between CPUs and DLPUs, leading to the Interaction Wall. Moreover, increasing algorithm complexity and DLPU computation speed further aggravate the interaction wall substantially.
- Published
- 2022
36. DeBAM: Decoder-Based Approximate Multiplier for Low Power Applications
- Author
-
Syed Ershad Ahmed, Mythreye Venkatesan, Suresh Nambi, U. Anil Kumar, and Kavya Radhakrishnan
- Subjects
General Computer Science, Computer engineering, Hardware complexity, Control and Systems Engineering, Computer science, Power consumption, Data path, Word error rate, Multiplier, Sharpening, Computing systems, Data compression
- Abstract
Approximate computing is a promising method for designing power-efficient computing systems. Many image and compression algorithms are inherently error-tolerant and can accept errors up to a specific limit. In such algorithms, power savings can be achieved by approximating data-path units such as the multiplier. This paper presents a novel decoder-logic-based multiplier design intended to reduce the number of partial products generated, leading to lower hardware complexity and power consumption while maintaining a low error rate. In an 8-bit format, our proposed design achieves 40.96% and 22.30% power reductions compared with accurate and approximate multipliers, respectively. Comprehensive simulations on image sharpening and compression algorithms show that the proposed design obtains a better quality-effort trade-off than existing multipliers. (A generic truncation-based sketch follows this entry.)
- Published
- 2021
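Entry 36's decoder logic is not reproduced here; instead, the sketch below uses a generic operand-truncation approximation, purely to make the accuracy-versus-effort trade-off of approximate multiplication concrete.

    def approx_multiply(a, b, drop_bits=3):
        """Zero the low-order bits of both operands before multiplying.
        A generic approximation, unrelated to the paper's decoder design."""
        mask = ~((1 << drop_bits) - 1)
        return (a & mask) * (b & mask)

    # Exhaustive 8-bit sweep of the relative error this approximation causes.
    errors = [abs(a * b - approx_multiply(a, b)) / (a * b)
              for a in range(1, 256) for b in range(1, 256)]
    print("mean relative error: %.4f" % (sum(errors) / len(errors)))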
37. Emotion based video player
- Author
-
M Samyak, B S Shamitha, D Aditya, and R.G. Manvitha
- Subjects
Video player, Neural network, Mood, Human-computer interaction, Computer science, Computing systems
- Abstract
One's work can be done efficiently only when one's mood is good, and emotions are an index of mood. The model captures the user's image as input, predicts their mood, and plays a video of the opposite genre as output in order to change their mood, which is the main goal of this project, taking the user through an emotional roller coaster. The solution uses a CNN (convolutional neural network) to detect the user's mood and OpenCV (the open-source computer vision library) to capture the user's image from their web camera. The webbrowser and requests modules are imported in order to access YouTube and play videos accordingly. The average accuracy rate of the system increased to 98.53 percent, and eight primary emotion classes are classified effectively by the method. As a result, the proposed strategy is shown to be effective in recognising emotions. (A hedged sketch of the capture-predict-play loop follows this entry.)
- Published
- 2021
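A hedged sketch of entry 37's capture-predict-play loop, using OpenCV (the opencv-python package) for the webcam and the standard webbrowser module to open YouTube. The emotion classifier is a placeholder; the paper's CNN, its exact mood-to-genre mapping, and its use of the requests module are not reproduced.

    import webbrowser
    import cv2   # pip install opencv-python

    MOOD_TO_QUERY = {"sad": "feel good comedy", "angry": "calming music",
                     "happy": "upbeat music"}        # illustrative mapping

    def capture_frame():
        """Grab a single frame from the default webcam."""
        cam = cv2.VideoCapture(0)
        ok, frame = cam.read()
        cam.release()
        return frame if ok else None

    def predict_mood(frame):
        """Placeholder for the paper's CNN emotion classifier."""
        return "sad"                                 # a trained model goes here

    frame = capture_frame()
    if frame is not None:
        query = MOOD_TO_QUERY.get(predict_mood(frame), "relaxing video")
        webbrowser.open("https://www.youtube.com/results?search_query="
                        + query.replace(" ", "+"))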
38. Bodily Processing: What Progress Has Been Made in Understanding the Embodiment of Computing Systems?
- Author
-
Martina Properzi
- Subjects
Human-computer interaction, Computer science, Computing systems
- Abstract
" In this article I will address the issue of the embodiment of computing sys-tems from the point of view distinctive of the so-called Unconventional Computation, focusing on the paradigm known as Mor-phological Computation. As a first step, I will contextualize Morphological Computa-tion within the disciplinary field of Embod-ied Artificial Intelligence: broadly con-ceived, Embodied Artificial Intelligence may be characterized as embracing both conventional and unconventional ap-proaches to the artificial emulation of natu-ral intelligence. Morphological Computa-tion stands out from other paradigms of unconventional Embodied Artificial Intelli-gence in that it discloses a new, closer kind of connection between embodiment and computation. I will further my investigation by briefly reviewing the state-of-the-art in Morphological Computation: attention will be given to a very recent trend, whose core concept is that of “organic reconfigu-rability”. In this direction, as a final step, two advanced cases of study of organic or living morphological computers will be pre-sented and discussed. The prospect is to shed some light on our title question: what progress has been made in understanding the embodiment of computing systems? Keywords: Embodied Artificial Intelligence; Morphological Computation; Reservoir Compu-ting Systems; Organic Reconfigurability; 3D Bio-Printed Synthetic Corneas; Xenobots "
- Published
- 2021
39. The Impact of Device Uniformity on Functionality of Analog Passively-Integrated Memristive Circuits
- Author
-
Mohammad Reza Mahmoodi, Zahra Fahimi, Hussein Nili, Michael Klachko, and Dmitri B. Strukov
- Subjects
Neuromorphic engineering, Computer science, Electronic engineering, Deep neural networks, Memristor, Electrical and Electronic Engineering, Crossbar switch, Neuromorphic circuits, Computing systems, Electronic circuit
- Abstract
Passively-integrated memristors are the most prospective candidates for designing high-speed, energy-efficient, and compact neuromorphic circuits. Despite all the promising properties, experimental demonstrations of passive memristive crossbars have been limited to circuits with few thousands of devices until now, which stems from the strict uniformity requirements on the IV characteristics of memristors. This paper expands upon this vital challenge and investigates how uniformity impacts the computing accuracy of analog memristive circuits, focusing on neuromorphic applications. Specifically, the paper explores the tradeoffs between computing accuracy, crossbar size, switching threshold variations, and target precision. All-embracing simulations of matrix multipliers and deep neural networks on CIFAR-10 and ImageNet datasets have been carried out to evaluate the role of uniformity on the accuracy of computing systems. Further, we study three post-fabrication methods that increase the accuracy of nonuniform 0T1R neuromorphic circuits: hardware-aware training, improved tuning algorithm, and switching threshold modification. The application of these techniques allows us to implement advanced deep neural networks with almost no accuracy drop, using current state-of-the-art analog 0T1R technology.
- Published
- 2021
40. Hysteretic Optimality of Container Warming Control in Serverless Computing Systems
- Author
-
Yi-Han Chiang, Hai Lin, Yusheng Ji, and Chao Zhu
- Subjects
Mathematical optimization, Computer science, Control (management), Markov process, Computing systems, Submodular set function, Aerospace electronics, Systems management, Container (abstract data type), Markov decision process
- Abstract
Keeping containers warm for prompt service responses while reducing warm containers for lightweight system management is a fundamental tradeoff in serverless computing systems. In this letter, we investigate the problem of container warming control for serverless computing and formulate it as a Markov decision process (MDP). By observing that the value functions of the MDP are partially submodular, we show that the derived optimal policy is hysteretic and partially non-decreasing. Our numerical results show that the derived optimal policy exhibits a hysteretic structure, which can be realized via switching-up/-down thresholds in practice. (A toy threshold policy follows this entry.)
- Published
- 2021
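The switching-up/-down threshold structure in entry 40 can be mimicked with a hand-built toy rule. The thresholds below are made up; the paper instead derives the optimal hysteretic policy from the MDP rather than fixing thresholds by hand.

    def warm_target(queued, warm_now, up=8, down=3):
        """Toy hysteretic rule: scale up only above `up` queued requests,
        trim down only below `down`, and hold inside the band."""
        if queued > up:
            return queued                   # warm enough containers to catch up
        if queued < down and warm_now > queued + 1:
            return queued + 1               # shrink, keeping one spare container
        return warm_now                     # hysteresis band: do nothing

    warm = 5
    for load in [2, 2, 9, 12, 6, 1]:
        warm = warm_target(load, warm)
        print(f"load={load:2d} -> warm containers={warm}")

Note how the load of 6 leaves the pool untouched: it sits inside the band, which is exactly the behavior the switching thresholds are meant to produce.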
41. Toward a more empathic relationship between humans and computing systems
- Author
-
Mary Czerwinski, Xuhai Xu, Karan Ahuja, Jina Suh, Gonzalo Ramos, and Jasmine Lu
- Subjects
Group (mathematics), General Earth and Planetary Sciences, Empathy, Psychology, Social psychology, Computing systems, General Environmental Science, Hue
- Abstract
How might computing support us in becoming our better, more emotionally resilient selves? We explore this in an interview with the team from Microsoft Research's Human Understanding and Empathy group.
- Published
- 2021
42. A Fault Model Centered Modeling Framework for Self-healing Computing Systems
- Author
-
Lu, Wei, Zhu, Yian, Ma, Chunyan, Zhang, Longmei, Hutchison, David, Series editor, Kanade, Takeo, Series editor, Kittler, Josef, Series editor, Kleinberg, Jon M., Series editor, Mattern, Friedemann, Series editor, Mitchell, John C., Series editor, Naor, Moni, Series editor, Nierstrasz, Oscar, Series editor, Pandu Rangan, C., Series editor, Steffen, Bernhard, Series editor, Sudan, Madhu, Series editor, Terzopoulos, Demetri, Series editor, Tygar, Doug, Series editor, Vardi, Moshe Y., Series editor, Weikum, Gerhard, Series editor, Goebel, Randy, editor, Siekmann, Jörg, editor, Wahlster, Wolfgang, editor, Deng, Hepu, editor, Miao, Duoqian, editor, Lei, Jingsheng, editor, and Wang, Fu Lee, editor
- Published
- 2011
- Full Text
- View/download PDF
43. A novel image encryption scheme based on quantum dynamical spinning and rotations.
- Author
-
Khan, Majid and Waseem, Hafiz Muhammad
- Subjects
IMAGE encryption, QUANTUM theory, CRYPTOGRAPHY, INTERNET security, DATA encryption
- Abstract
Quantum information processing has made a tremendous and remarkable impact on a number of classical mechanics problems, and its impact does not stop at classical mechanics but extends to the cyber security paradigm. Quantum information and quantum cryptography are two classes of quantum information processing that use the idea of qubits instead of the bits of classical information security. The idea of fast computation at multiple complexity levels is becoming more realistic in the age of quantum information due to quantum parallelism, where a single quantum computer can perform the work of hundreds of classical computers with less effort and more accuracy. The evolution of quantum information processing is replacing a number of classical-mechanics aspects of computational and cyber security sciences. Our aim here is to introduce concepts of applied quantum dynamics in cryptography, which leads to an evolution of quantum cryptography. Quantum cryptography is one of the most astonishing applications of quantum information theory. It is not possible to measure the quantum state of a system without disturbing that system. Applying these facts of quantum mechanics to traditional cryptosystems leads to new protocols that achieve maximum remarkable security. The scope of this paper is to design an innovative encryption scheme for digital data based on quantum spinning and rotation operators. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
44. QFlow lite dataset: A machine-learning approach to the charge states in quantum dot experiments.
- Author
-
Zwolak, Justyna P., Kalantre, Sandesh S., Wu, Xingyao, Ragole, Stephen, and Taylor, Jacob M.
- Subjects
MACHINE learning, QUANTUM dots, DATA mining, INTERNET security, GEOMORPHOLOGY
- Abstract
Background: Over the past decade, machine learning techniques have revolutionized how research and science are done, from designing new materials and predicting their properties to data mining and analysis to assisting drug discovery to advancing cybersecurity. Recently, we added to this list by showing how a machine learning algorithm (a so-called learner) combined with an optimization routine can assist experimental efforts in the realm of tuning semiconductor quantum dot (QD) devices. Among other applications, semiconductor quantum dots are a candidate system for building quantum computers. In order to employ QDs, one needs to tune the devices into a desirable configuration suitable for quantum computing. While current experiments adjust the control parameters heuristically, such an approach does not scale with the increasing size of the quantum dot arrays required for even near-term quantum computing demonstrations. Establishing a reliable protocol for tuning QD devices that does not rely on the gross-scale heuristics developed by experimentalists is thus of great importance. Materials and methods: To implement the machine learning-based approach, we constructed a dataset of simulated QD device characteristics, such as the conductance and the charge sensor response versus the applied electrostatic gate voltages. The gate voltages are the experimental ‘knobs’ for tuning the device into useful regimes. Here, we describe the methodology for generating the dataset, as well as its validation in training convolutional neural networks. Results and discussion: From 200 training sets sampled randomly from the full dataset, we show that the learner’s accuracy in recognizing the state of a device is ≈ 96.5% when using either current-based or charge-sensor-based training. The spread in accuracy over our 200 training sets is 0.5% and 1.8% for current- and charge-sensor-based data, respectively. In addition, we also introduce a tool that enables other researchers to use this approach for further research: QFlow lite—a Python-based mini-software suite that uses the dataset to train neural networks to recognize the state of a device and differentiate between states in experimental data. This work gives the definitive reference for the new dataset that will help enable researchers to use it in their experiments or to develop new machine learning approaches and concepts. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
45. An improved memory-based collaborative filtering method based on the TOPSIS technique.
- Author
- Al-bashiri, Hael, Abdulgabber, Mansoor Abdullateef, Romli, Awanis, and Kahtan, Hasan
- Subjects
- NEUROSCIENCES, COMPUTER science, RECOMMENDER systems, TOPSIS method, LIFE sciences
- Abstract
This paper describes an approach for improving the accuracy of memory-based collaborative filtering, based on the technique for order of preference by similarity to ideal solution (TOPSIS) method. Recommender systems are used to filter the huge amount of data available online based on user-defined preferences. Collaborative filtering (CF) is a commonly used recommendation approach that generates recommendations based on correlations among user preferences. Although several enhancements have increased the accuracy of memory-based CF through the development of improved similarity measures for finding successful neighbors, there has been less investigation into prediction score methods, in which rating/preference scores are assigned to items that have not yet been selected by a user. A TOPSIS solution for evaluating multiple alternatives based on more than one criterion is proposed as an alternative to prediction score methods for evaluating and ranking items based on the results from similar users. The recommendation accuracy of the proposed TOPSIS technique is evaluated by applying it to various common CF baseline methods, which are then used to analyze the MovieLens 100K and 1M benchmark datasets. The results show that CF based on the TOPSIS method is more accurate than baseline CF methods across a number of common evaluation metrics. [ABSTRACT FROM AUTHOR]
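Since the proposal leans on the standard TOPSIS ranking procedure, a minimal NumPy sketch of TOPSIS is given below. Treating each column as a similar neighbour's ratings and each weight as that neighbour's similarity is our illustrative reading of the abstract, not necessarily the paper's exact formulation; all numbers are made up.

```python
import numpy as np

def topsis_rank(matrix: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Score alternatives (rows) against benefit criteria (columns) via TOPSIS."""
    # 1. Vector-normalize each criterion column.
    norm = matrix / np.linalg.norm(matrix, axis=0)
    # 2. Apply the criterion weights.
    v = norm * weights
    # 3. Ideal and anti-ideal solutions (all criteria treated as benefits).
    ideal, anti = v.max(axis=0), v.min(axis=0)
    # 4. Euclidean distances to both.
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    # 5. Closeness coefficient: 1 = ideal, 0 = anti-ideal.
    return d_neg / (d_pos + d_neg)

# Hypothetical example: 4 candidate items rated by 3 similar neighbours.
ratings = np.array([[4.0, 3.5, 5.0],
                    [2.0, 4.0, 3.0],
                    [5.0, 4.5, 4.0],
                    [3.0, 2.0, 2.5]])
weights = np.array([0.5, 0.3, 0.2])    # e.g. neighbour similarity weights
scores = topsis_rank(ratings, weights)
print(np.argsort(scores)[::-1])        # items ranked best-first
```

In the paper's setting, the closeness coefficient would replace a conventional prediction score when ranking unseen items for a target user.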
- Published
- 2018
- Full Text
- View/download PDF
46. All-sense-all networks are suboptimal for sensorimotor synchronization.
- Author
- van de Rijt, Arnout
- Subjects
- SENSORIMOTOR integration, MUSIC psychology, NEURAL circuitry, MUSIC conservatories, SENSORY perception, PSYCHOLOGY
- Abstract
In human groups that seek to synchronize to a common steady beat, every member can typically perceive every other member. We question whether this naturally occurring all-sense-all condition is optimal for temporal coordination. We consider alternative configurations represented by directed graphs, in which individuals can only hear or see a subset of others. We identify a trade-off in the topology of such networks: While denser graphs provide stronger coupling, improving synchrony, density increases sensitivity to early taps, which produces rushing. Results from an experimental study with music conservatory students show that networks that combine short path length with low density match all-sense-all networks in synchrony while yielding a steadier beat. These findings suggest that professional teams in arts, sports, industry, and the military may improve temporal coordination by employing technology that strategically configures who can track whom. [ABSTRACT FROM AUTHOR]
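To make the density-versus-path-length trade-off concrete, here is a small NetworkX sketch comparing an all-sense-all (complete directed) graph with a sparser ring-plus-shortcuts topology. The 12-member group and the particular shortcut pattern are arbitrary illustrative choices, not the networks used in the study.

```python
import networkx as nx

def coupling_stats(g):
    """Return (density, average shortest path length) for a directed graph."""
    return nx.density(g), nx.average_shortest_path_length(g)

n = 12
all_sense_all = nx.complete_graph(n, create_using=nx.DiGraph)

# A sparser alternative: a directed ring with a few longer-range shortcuts,
# keeping path lengths short while coupling each member to only 3 others.
ring = nx.DiGraph([(i, (i + k) % n) for i in range(n) for k in (1, 2, 5)])

for name, g in [("all-sense-all", all_sense_all),
                ("sparse ring+shortcuts", ring)]:
    dens, aspl = coupling_stats(g)
    print(f"{name}: density={dens:.2f}, avg path length={aspl:.2f}")
```

The complete graph has density 1.00 and path length 1, whereas the sparse graph trades a slightly longer path length for far fewer incoming influences per member, which is exactly the regime the abstract reports as reducing rushing.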
- Published
- 2018
- Full Text
- View/download PDF
47. Interdisciplinarity research based on NSFC-sponsored projects: A case study of mathematics in Chinese universities.
- Author
- Shao, Zhi-Yi, Li, Yong-Ming, Hui, Fen, Zheng, Yang, and Guo, Ying-Jie
- Subjects
- MATHEMATICAL research, MATHEMATICS education, INTERDISCIPLINARY education, HIGHER education, SCHOLARS
- Abstract
We investigate the interdisciplinarity of mathematics based on an analysis of projects sponsored by the NSFC (National Natural Science Foundation of China). The motivation of this study lies in obtaining an efficient method to quantify research interdisciplinarity, revealing the interdisciplinarity patterns of the mathematics discipline, giving mathematics scholars insights for improving their research, and providing empirical support for policy making. Our data set includes 6147 NSFC-sponsored projects implemented by 3225 mathematics professors in 177 Chinese universities with established mathematics departments. We propose the weighted-mean DIRD (diversity of individual research disciplines) to quantify interdisciplinarity. In addition, we introduce a matrix computation method, discover several properties of such a matrix, and make its computation cost significantly lower than that of the bitwise computation method. Finally, we develop an automatic DIRD computing system. The results indicate that mathematics professors at top normal universities in China exhibit strong interdisciplinarity, and that mathematics professors are most likely to conduct interdisciplinary research involving information science (research department), computer science (research area), computer application technology (research field), and power system bifurcation and chaos (research direction). [ABSTRACT FROM AUTHOR]
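The abstract does not spell out the DIRD formula, so the sketch below uses the Gini-Simpson diversity index as a stand-in per-researcher diversity measure and takes a project-count-weighted mean across researchers; both choices, and all the counts, are assumptions made purely for illustration.

```python
import numpy as np

def gini_simpson(counts: np.ndarray) -> float:
    """Gini-Simpson diversity: probability that two randomly chosen projects
    fall in different disciplines. A stand-in for DIRD, whose exact
    definition is given in the paper, not here."""
    p = counts / counts.sum()
    return 1.0 - float(np.sum(p ** 2))

# Hypothetical project-discipline counts for three professors.
professors = {
    "A": np.array([5, 0, 0, 0]),   # all projects in one discipline
    "B": np.array([2, 2, 1, 0]),   # spread over three disciplines
    "C": np.array([1, 1, 1, 1]),   # maximally spread
}
diversity = {k: gini_simpson(v) for k, v in professors.items()}
n_projects = {k: int(v.sum()) for k, v in professors.items()}

# Weighted mean across professors, weighting by project count.
total = sum(n_projects.values())
weighted_mean = sum(diversity[k] * n_projects[k] for k in professors) / total
print(diversity)        # A: 0.0, B: 0.64, C: 0.75
print(weighted_mean)    # ~0.44
```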
- Published
- 2018
- Full Text
- View/download PDF
48. Remaining capacity estimation of lithium-ion batteries based on the constant voltage charging profile.
- Author
- Wang, Zengkai, Zeng, Shengkui, Guo, Jianbin, and Qin, Taichun
- Subjects
- LITHIUM-ion batteries, ELECTRIC potential, PARTICLE swarm optimization, QUANTUM computing, REGRESSION analysis
- Abstract
Estimation of remaining capacity is essential for ensuring the safety and reliability of lithium-ion batteries. In actual operation, batteries are seldom fully discharged. Under a constant current-constant voltage charging mode, an incomplete discharging process affects not only the initial state but also the measured variables of the subsequent charging profile, which limits the applicability of many feature-based capacity estimation methods that rely on a complete cycling process. Since the charging information of the constant voltage profile is fully recorded whether or not the battery is completely discharged, in this work a geometrical feature of the constant voltage charging profile is extracted as a new aging feature of lithium-ion batteries under incomplete discharging. By introducing quantum computing theory into a classical machine learning technique, an integrated quantum particle swarm optimization-based support vector regression estimation framework, together with its application to characterizing the relationship between the extracted feature and the battery's remaining capacity, is presented and illustrated in detail. With lithium-ion battery data provided by NASA, experimental and comparative results demonstrate the effectiveness, accuracy, and superiority of the proposed capacity estimation framework under the incompletely discharged condition. [ABSTRACT FROM AUTHOR]
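As a loose illustration of the regression step only (not the paper's quantum-PSO-tuned model), the scikit-learn sketch below fits a support vector regression from a synthetic constant-voltage-phase feature to capacity, with an ordinary grid search standing in for the quantum particle swarm optimization of hyperparameters. The feature choice (constant-voltage phase duration, which tends to lengthen as a cell ages) and all data are synthetic assumptions.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVR

# Hypothetical aging feature: duration of the constant-voltage charging
# phase per cycle (seconds), paired with synthetic capacities (Ah).
rng = np.random.default_rng(0)
cv_duration = np.linspace(1200, 2400, 60)
capacity = 2.0 - 0.0004 * (cv_duration - 1200) + rng.normal(0, 0.01, 60)

X = cv_duration.reshape(-1, 1)

# Plain grid search stands in for the paper's quantum PSO hyperparameter tuning.
search = GridSearchCV(
    SVR(kernel="rbf"),
    param_grid={"C": [1, 10, 100],
                "gamma": [1e-6, 1e-5, 1e-4],
                "epsilon": [0.005, 0.01]},
    cv=5,
)
search.fit(X, capacity)
print(search.best_params_)
print(search.predict(np.array([[1800.0]])))  # estimated mid-life capacity
```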
- Published
- 2018
- Full Text
- View/download PDF
49. Examination of China’s performance and thematic evolution in quantum cryptography research using quantitative and computational techniques.
- Author
- Olijnyk, Nicholas V.
- Subjects
- CRYPTOGRAPHY, RESEARCH methodology, QUALITATIVE research, CLUSTER analysis (Statistics), MATHEMATICAL mappings, QUANTUM theory
- Abstract
This study performed two phases of analysis to shed light on the performance and thematic evolution of China’s quantum cryptography (QC) research. First, large-scale research publication metadata derived from QC research published from 2001–2017 was used to examine the research performance of China relative to that of global peers using established quantitative and qualitative measures. Second, this study identified the thematic evolution of China’s QC research using co-word cluster network analysis, a computational science mapping technique. The results from the first phase indicate that over the past 17 years, China’s performance has evolved dramatically, placing it in a leading position. Among the most significant findings is the exponential rate at which all of China’s performance indicators (i.e., Publication Frequency, citation score, H-index) are growing. China’s H-index (a normalized indicator) has surpassed all other countries’ over the last several years. The second phase of analysis shows how China’s main research focus has shifted among several QC themes, including quantum-key-distribution, photon-optical communication, network protocols, and quantum entanglement with an emphasis on applied research. Several themes were observed across time periods (e.g., photons, quantum-key-distribution, secret-messages, quantum-optics, quantum-signatures); some themes disappeared over time (e.g., computer-networks, attack-strategies, bell-state, polarization-state), while others emerged more recently (e.g., quantum-entanglement, decoy-state, unitary-operation). Findings from the first phase of analysis provide empirical evidence that China has emerged as the global driving force in QC. Considering China is the premier driving force in global QC research, findings from the second phase of analysis provide an understanding of China’s QC research themes, which can provide clarity into how QC technologies might take shape. QC and science and technology policy researchers can also use these findings to trace previous research directions and plan future lines of research. [ABSTRACT FROM AUTHOR]
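One of the performance indicators named above, the H-index, is simple enough to compute directly. The sketch below applies the standard definition to hypothetical citation counts; the numbers are invented for illustration.

```python
def h_index(citations):
    """Largest h such that at least h publications have >= h citations each."""
    h = 0
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# Hypothetical citation counts for a small set of QC publications.
print(h_index([25, 8, 5, 3, 3, 1]))   # -> 3
```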
- Published
- 2018
- Full Text
- View/download PDF
50. Comparison of computer systems and ranking criteria for automatic melanoma detection in dermoscopic images.
- Author
- Møllersen, Kajsa, Zortea, Maciel, Schopf, Thomas R., Kirchesch, Herbert, and Godtliebsen, Fred
- Subjects
- MELANOMA diagnosis, EARLY detection of cancer, SKIN cancer, IMAGE segmentation, COMPUTER systems
- Abstract
Melanoma is the deadliest form of skin cancer, and early detection is crucial for patient survival. Computer systems can assist in melanoma detection, but are not widespread in clinical practice. In 2016, an open challenge in classification of dermoscopic images of skin lesions was announced. A training set of 900 images with corresponding class labels and semi-automatic/manual segmentation masks was released for the challenge. An independent test set of 379 images, of which 75 were of melanomas, was used to rank the participants. This article demonstrates the impact of ranking criteria, segmentation method and classifier, and highlights the clinical perspective. We compare five different measures for diagnostic accuracy by analysing the resulting ranking of the computer systems in the challenge. Choice of performance measure had great impact on the ranking. Systems that were ranked among the top three for one measure, dropped to the bottom half when changing performance measure. Nevus Doctor, a computer system previously developed by the authors, was used to participate in the challenge, and investigate the impact of segmentation and classifier. The diagnostic accuracy when using an automatic versus the semi-automatic/manual segmentation is investigated. The unexpected small impact of segmentation method suggests that improvements of the automatic segmentation method w.r.t. resemblance to semi-automatic/manual segmentation will not improve diagnostic accuracy substantially. A small set of similar classification algorithms are used to investigate the impact of classifier on the diagnostic accuracy. The variability in diagnostic accuracy for different classifier algorithms was larger than the variability for segmentation methods, and suggests a focus for future investigations. From a clinical perspective, the misclassification of a melanoma as benign has far greater cost than the misclassification of a benign lesion. For computer systems to have clinical impact, their performance should be ranked by a high-sensitivity measure. [ABSTRACT FROM AUTHOR]
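The clinical point, that ranking systems by a high-sensitivity measure can reorder them relative to ranking by overall accuracy, can be shown numerically. The confusion counts below are invented for two hypothetical systems evaluated on a 75-melanoma / 304-benign split mirroring the challenge's 379-image test set.

```python
def sensitivity(tp, fn):
    """Fraction of melanomas correctly flagged (the clinically critical rate)."""
    return tp / (tp + fn)

def accuracy(tp, tn, fp, fn):
    """Fraction of all lesions classified correctly."""
    return (tp + tn) / (tp + tn + fp + fn)

# Hypothetical confusion counts for two systems (75 melanomas, 304 benign).
systems = {
    "A": dict(tp=70, fn=5,  fp=90, tn=214),   # cautious: flags many lesions
    "B": dict(tp=50, fn=25, fp=15, tn=289),   # conservative: few false alarms
}
for name, c in systems.items():
    print(name,
          f"sensitivity={sensitivity(c['tp'], c['fn']):.2f}",
          f"accuracy={accuracy(c['tp'], c['tn'], c['fp'], c['fn']):.2f}")
# A wins on sensitivity (0.93 vs 0.67); B wins on accuracy (0.89 vs 0.75),
# so the choice of ranking measure alone decides which system comes first.
```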
- Published
- 2017
- Full Text
- View/download PDF