33,384 results
Search Results
2. Automatic Test Paper Generation Technology for Mandarin Based on Hilbert Huang Algorithm.
- Author
-
Wang, Lei
- Subjects
ARTIFICIAL neural networks, ALGORITHMS, COMPUTER engineering, EMPLOYEE rights, HUMAN resources departments
- Abstract
With the development of computer technology, automatic test paper generation systems have gradually become an effective tool for assessment and for protecting the rights and interests of workers. This article achieves multi-level oral scoring for different question types through online scoring with artificial neural networks. Based on the specific situation and evaluation index requirements, an analysis module that is reasonable, efficient, and consistent with the required hierarchical structure has been designed to complete the research on automatic test paper generation technology, in order to better manage and allocate human resources and improve production efficiency. The article then conducted functional testing on the technical module; the test results showed that the scalability of the system was over 82%. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
3. THE ECONOMICS OF PAPER CONSUMPTION IN OFFICES.
- Author
-
SHAH, Iqtidar Ali, AMJED, Sohail, and ALKATHIRI, Nasser Alhamar
- Subjects
OFFICE equipment & supplies, PAPER products, COMPUTER engineering, OPERATING costs, ENERGY consumption & the environment
- Abstract
This paper explores the factors potentially responsible for the overconsumption of office paper and estimates its adverse environmental and economic impact. Data were collected from the employees of selected higher educational institutions in Oman. Technical factors, workplace environment, printing preferences, and lack of awareness were found to be the main causes of overconsumption. The environmental and economic impact was estimated from the actual amount of paper consumed, using standard formulas from the literature. The institutions used 5,200 reams (13 tons) of 80 gsm A4 paper in one year, at an economic cost of 7,800 OMR (20,280 US$). The estimated environmental impacts are: 312 trees cut, 73,970 lbs of CO2 emitted, 144,742 kWh of energy consumed, 29,614 lbs of solid waste produced, and 247,975 gallons of water wasted. By changing printing preferences, a significant share of these economic and environmental resources, to the tune of 44.8%, can be saved. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
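The consumption figures in the abstract above can be roughly reproduced from standard per-ream conversions. A minimal sketch; the per-ream price and trees-per-ton factor are assumptions inferred from the reported totals, not values stated by the authors:

```python
REAM_SHEETS = 500
A4_AREA_M2 = 0.210 * 0.297   # one A4 sheet in square metres
GSM = 80                     # paper weight, grams per square metre

def ream_mass_kg(gsm=GSM):
    # mass of one 500-sheet ream in kilograms (~2.5 kg for 80 gsm A4)
    return REAM_SHEETS * A4_AREA_M2 * gsm / 1000

def footprint(reams, price_per_ream_omr=1.5, trees_per_ton=24):
    # price and trees-per-ton are assumed factors, back-solved from the totals
    tons = reams * ream_mass_kg() / 1000
    return {
        "tons": round(tons, 1),
        "cost_omr": reams * price_per_ream_omr,
        "trees": round(tons * trees_per_ton),
    }
```

With `footprint(5200)` this recovers the 13-ton and 7,800 OMR figures and comes within one tree of the reported 312.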
4. ARTIFICIAL INTELLIGENCE FOR SHIP DESIGN PROCESS IMPROVEMENT: A CONCEPTUAL PAPER.
- Author
-
Maimun, A., Loon, S. C., and Khairuddin, J.
- Subjects
- ENGINEERING design, KNOWLEDGE graphs, ARTIFICIAL intelligence, MARITIME shipping, COMPUTER engineering
- Abstract
This paper explores the artificial intelligence (AI) concept for complex engineering design processes in the shipping industry. It is driven by advances in computer technology for fast and concurrent task processing, machine learnability, and data-centric approaches. While AI has been adopted in many industries, it still lacks structured approaches for practical implementation, especially regarding the generality of the methodologies, explaining AI to non-technical members, and their preparedness. Therefore, this work proposes a conceptual framework to systematically extract, represent, and visualize ship design knowledge, to develop and deploy machine learning (ML) models, and to demonstrate AI-based ship design processes. Comparisons to the generic ship design model were made and discussed to highlight the improvements observed. It was found that while the conventional algorithmic procedures were faster in execution time, the stepwise empirical models were often limited by the dataset and the design assumptions, with restricted estimation capabilities for solving nonlinear ship design problems. The findings show the impact of improving the existing processes and effectively reducing the design cycle. Additionally, the approach emphasizes validated ship design data and thus its generalization for fast and wide adoption at scale. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
5. Teaching Formal Methods: An Experience Report
- Author
-
Askarpour, Mehrnoosh, Bersani, Marcello M., Bruel, Jean-Michel, editor, Capozucca, Alfredo, editor, Mazzara, Manuel, editor, Meyer, Bertrand, editor, Naumchev, Alexandr, editor, and Sadovykh, Andrey, editor
- Published
- 2020
- Full Text
- View/download PDF
6. Autonomous Agents and Multiagent Systems. Best and Visionary Papers : AAMAS 2023 Workshops, London, UK, May 29–June 2, 2023, Revised Selected Papers
- Author
-
Francesco Amigoni and Arunesh Sinha
- Subjects
- Artificial intelligence, Computer engineering, Computer networks, Software engineering, Social sciences—Data processing, Numerical analysis
- Abstract
This book contains visionary and best papers from the workshops held at the International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2023, held in London, UK, during May 29–June 2, 2023. The 12 regular papers, 5 best papers, and 7 visionary papers presented were carefully reviewed and selected from more than 110 contributions to the workshops. They focus on emerging topics and new trends in the area of autonomous agents and multiagent systems and stem from the following workshops:
- Workshop on Autonomous Robots and Multirobot Systems (ARMS)
- Workshop on Adaptive and Learning Agents (ALA)
- Workshop on Interdisciplinary Design of Emotion Sensitive Agents (IDEA)
- Workshop on Rebellion and Disobedience in Artificial Intelligence (RaD-AI)
- Workshop on Neuro-symbolic AI for Agent and Multi-Agent Systems (NeSyMAS)
- Workshop on Multiagent Sequential Decision Making under Uncertainty (MSDM)
- Workshop on Citizen-Centric Multi-Agent Systems (C-MAS)
- Published
- 2024
7. Statistical Atlases and Computational Models of the Heart. Regular and CMRxRecon Challenge Papers : 14th International Workshop, STACOM 2023, Held in Conjunction with MICCAI 2023, Vancouver, BC, Canada, October 12, 2023, Revised Selected Papers
- Author
-
Oscar Camara, Esther Puyol-Antón, Maxime Sermesant, Avan Suinesiaputra, Qian Tao, Chengyan Wang, and Alistair Young
- Subjects
- Computer vision, Computer science—Mathematics, Mathematical statistics, Machine learning, Computer engineering, Computer networks, Social sciences—Data processing
- Abstract
This book constitutes the proceedings of the 14th International Workshop on Statistical Atlases and Computational Models of the Heart, STACOM 2023, as well as the Cardiac MRI Reconstruction Challenge, CMRxRecon Challenge. There was a total of 53 submissions to the workshop. The 24 regular workshop papers included in this volume were carefully reviewed and selected from 29 paper submissions. They deal with cardiac segmentation, modelling, strain quantification, registration, statistical shape analysis, and quality control. In addition, 21 papers from the CMRxRecon challenge are included in this volume. They focus on fast CMR image reconstruction and provide a benchmark dataset that enables the broader research community to promote advances in this area of research.
- Published
- 2024
8. Classification of Paper Values Based on Citation Rank and PageRank.
- Author
-
Souma, Wataru, Vodenska, Irena, and Chitkushev, Lou
- Subjects
CITATION networks, MOLECULAR biology, COMPUTER science, CITATION indexes, INFORMATION science, COMPUTER engineering
- Abstract
Purpose: The number of citations has been widely used to measure the significance of a paper. However, there is a need to introduce another index to determine the superiority or inferiority of papers with the same number of citations. We determine superiority or inferiority of papers by using rankings based on the number of citations and on PageRank. Design/methodology/approach: We show a positive linear correlation between Citation Rank (the ranking by number of citations) and PageRank. On this basis, we identify high-quality, prestige, emerging, and popular papers. Findings: We found that the high-quality papers belong to the subjects of biochemistry and molecular biology, chemistry, and multidisciplinary sciences. The prestige papers correspond to the subjects of computer science, engineering, and information science. The emerging papers are related to biochemistry and molecular biology, as well as those published in the journal "Cell." The popular papers belong to the subject of multidisciplinary sciences. Research limitations: We analyze the Science Citation Index Expanded (SCIE) from 1981 to 2015 to calculate Citation Rank and PageRank within a citation network consisting of 34,666,719 papers and 591,321,826 citations. Practical implications: Our method is applicable to forecasting emerging fields of research in science and helps policymakers consider science policy. Originality/value: We calculated PageRank for a giant citation network, far larger than the citation networks investigated by previous researchers. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
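The PageRank half of the method described above can be sketched as a simple power iteration over a toy citation graph (dict-based, damping factor d = 0.85; the node names are illustrative only, not from the study's SCIE dataset):

```python
def pagerank(cites, d=0.85, iters=100):
    """cites maps each paper to the papers it cites (edges point citing -> cited)."""
    nodes = set(cites) | {c for cs in cites.values() for c in cs}
    n = len(nodes)
    pr = dict.fromkeys(nodes, 1.0 / n)
    for _ in range(iters):
        new = dict.fromkeys(nodes, (1 - d) / n)   # teleport term
        for p in nodes:
            cited = cites.get(p, [])
            if cited:                              # split rank over outgoing citations
                for c in cited:
                    new[c] += d * pr[p] / len(cited)
            else:                                  # dangling paper: spread rank evenly
                for q in nodes:
                    new[q] += d * pr[p] / n
        pr = new
    return pr
```

Combined with plain citation counts, a paper whose PageRank outranks its Citation Rank would fall in the "prestige" category the abstract describes.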
9. Statistical Atlases and Computational Models of the Heart. Regular and CMRxMotion Challenge Papers : 13th International Workshop, STACOM 2022, Held in Conjunction with MICCAI 2022, Singapore, September 18, 2022, Revised Selected Papers
- Author
-
Oscar Camara, Esther Puyol-Antón, Chen Qin, Maxime Sermesant, Avan Suinesiaputra, Shuo Wang, and Alistair Young
- Subjects
- Computer vision, Computer science—Mathematics, Mathematical statistics, Machine learning, Computer engineering, Computer networks, Social sciences—Data processing
- Abstract
This book constitutes the proceedings of the 13th International Workshop on Statistical Atlases and Computational Models of the Heart, STACOM 2022, held in conjunction with the 25th MICCAI conference. The 34 regular workshop papers included in this volume were carefully reviewed, selected, and revised, and deal with topics ranging from common cardiac segmentation and modelling problems to more advanced generative modelling for ageing hearts, learning cardiac motion using biomechanical networks, physics-informed neural networks for left atrial appendage occlusion, biventricular mechanics for Tetralogy of Fallot, ventricular arrhythmia prediction using graph convolutional networks, and deeper analysis of racial and sex biases in machine learning-based cardiac segmentation. In addition, 14 papers from the CMRxMotion challenge are included in the proceedings, which aim to assess the effects of respiratory motion on cardiac MRI (CMR) imaging quality and examine the robustness of segmentation models in the face of respiratory motion artefacts. A total of 48 submissions to the workshop were received.
- Published
- 2023
10. Autonomous Agents and Multiagent Systems. Best and Visionary Papers : AAMAS 2022 Workshops, Virtual Event, May 9–13, 2022, Revised Selected Papers
- Author
-
Francisco S. Melo and Fei Fang
- Subjects
- Artificial intelligence, Computer engineering, Computer networks, Software engineering, Social sciences—Data processing, Numerical analysis
- Abstract
This book constitutes thoroughly refereed and revised selected best and visionary papers from the workshops held at the International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2022, which took place online during May 9–13, 2022. The 5 best papers and 4 visionary papers included in this book stem from the following workshops:
- 13th Workshop on Optimization and Learning in Multi-agent Systems (OptLearnMAS)
- 23rd Workshop on Multi-Agent Based Simulation (MABS)
- 6th Workshop on Agent-Based Modelling of Urban Systems (ABMUS)
- 10th Workshop on Engineering Multi-Agent Systems (EMAS)
- 1st Workshop on Rebellion and Disobedience in AI (RaD-AI)
There was a total of 59 submissions to these workshops.
- Published
- 2022
11. Traditional Paper-Cut Art and Cosmetic Packaging Design Research Based on Wireless Communication and Artificial Intelligence Technology.
- Author
-
Wu, Shilin
- Subjects
WIRELESS communications, PACKAGING design, ARTIFICIAL intelligence, DESIGN research, COMPUTER engineering, TECHNOLOGICAL revolution
- Abstract
After hundreds of years of change driven by the development of chemistry, human life has been transformed: chemical science has been used to manufacture many supplies, such as medicines and cosmetics of various types. In recent decades, electronic components and computer software technology have developed rapidly, and the fourth technological revolution now underway is benefiting many industries. Traditional paper-cut and cosmetic packaging design should likewise take advantage of wireless communication and artificial intelligence technology, combining with other traditional industries to carry out technological reform and help traditional craftsmanship endure. With the vigorous promotion of cosmetics, more and more products are on the market, and traditional paper-cut art is itself an artistic design method similar to cosmetic design; both industries stand to grow rapidly once artificial intelligence and wireless communication technology are applied. The purpose of this paper is therefore to combine wireless communication and artificial intelligence technology to transform traditional paper-cut art and cosmetic packaging design. After reviewing the reasons for the decline of the traditional handicraft industry and the disruption caused by modern technology, this paper presents a combined design of artificial intelligence technology, wireless communication technology, and cosmetic packaging; applies these technologies to traditional paper-cut design elements; and designs a matching system for traditional paper-cut art and cosmetic packaging. Professional practitioners were consulted for research, discussion, and multiple rounds of revision, yielding experimental analysis data.
After many experiments, it can be seen that the combination of wireless communication and artificial intelligence technology can transform traditional paper-cut art and cosmetic packaging design, improve their relevance, continue the inheritance of paper-cut art, and possibly improve the efficacy of cosmetics. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
12. HCI International 2022 – Late Breaking Papers: Ergonomics and Product Design : 24th International Conference on Human-Computer Interaction, HCII 2022, Virtual Event, June 26–July 1, 2022, Proceedings
- Author
-
Vincent G. Duffy and Pei-Luen Patrick Rau
- Subjects
- User interfaces (Computer systems), Human-computer interaction, Computer engineering, Computer networks, Information storage and retrieval systems, Artificial intelligence
- Abstract
Volume LNCS 13522 is part of the refereed proceedings of the 24th International Conference on Human-Computer Interaction, HCII 2022, which was held virtually during June 26 to July 1, 2022. A total of 5583 individuals from academia, research institutes, industry, and governmental agencies from 88 countries submitted contributions, and 1276 papers and 275 posters were included in the proceedings that were published just before the start of the conference. Additionally, 296 papers and 181 posters are included in the volumes of the proceedings published after the conference, as "Late Breaking Work" (papers and posters). The contributions thoroughly cover the entire field of human-computer interaction, addressing major advances in knowledge and effective use of computers in a variety of application areas.
- Published
- 2022
13. HCI International 2021 - Late Breaking Papers: Design and User Experience : 23rd HCI International Conference, HCII 2021, Virtual Event, July 24–29, 2021, Proceedings
- Author
-
Constantine Stephanidis, Marcelo M. Soares, Elizabeth Rosenzweig, Aaron Marcus, Sakae Yamamoto, Hirohiko Mori, Pei-Luen Patrick Rau, Gabriele Meiselwitz, Xiaowen Fang, and Abbas Moallem
- Subjects
- User interfaces (Computer systems), Human-computer interaction, Electronic commerce, Software engineering, Image processing—Digital techniques, Computer vision, Data mining, Computer engineering, Computer networks
- Abstract
This book constitutes late breaking papers from the 23rd International Conference on Human-Computer Interaction, HCII 2021, which was held in July 2021. The conference was planned to take place in Washington DC, USA but had to change to a virtual conference mode due to the COVID-19 pandemic. A total of 5222 individuals from academia, research institutes, industry, and governmental agencies from 81 countries submitted contributions, and 1276 papers and 241 posters were included in the volumes of the proceedings that were published before the start of the conference. Additionally, 174 papers and 146 posters are included in the volumes of the proceedings published after the conference, as “Late Breaking Work” (papers and posters). The contributions thoroughly cover the entire field of HCI, addressing major advances in knowledge and effective use of computers in a variety of application areas.
- Published
- 2021
14. HCI International 2020 - Late Breaking Papers: User Experience Design and Case Studies : 22nd HCI International Conference, HCII 2020, Copenhagen, Denmark, July 19–24, 2020, Proceedings
- Author
-
Constantine Stephanidis, Aaron Marcus, Elizabeth Rosenzweig, Pei-Luen Patrick Rau, Abbas Moallem, and Matthias Rauterberg
- Subjects
- User interfaces (Computer systems), Human-computer interaction, Computer engineering, Computer networks, Social sciences—Data processing, Education—Data processing, Data protection, Artificial intelligence
- Abstract
This book constitutes late breaking papers from the 22nd International Conference on Human-Computer Interaction, HCII 2020, which was held in July 2020. The conference was planned to take place in Copenhagen, Denmark, but had to change to a virtual conference mode due to the COVID-19 pandemic. From a total of 6326 submissions, 1439 papers and 238 posters were accepted for publication in the HCII 2020 proceedings before the conference took place. In addition, a total of 333 papers and 144 posters are included in the volumes of the proceedings published after the conference as "Late Breaking Work" (papers and posters). These contributions address the latest research and development efforts in the field and highlight the human aspects of design and use of computing systems. The 54 late breaking papers presented in this volume were organized in two topical sections named: User Experience Design and Evaluation Methods and Tools; Design Case Studies; User Experience Case Studies.
- Published
- 2020
15. HCI International 2019 – Late Breaking Papers : 21st HCI International Conference, HCII 2019, Orlando, FL, USA, July 26–31, 2019, Proceedings
- Author
-
Constantine Stephanidis
- Subjects
- User interfaces (Computer systems), Human-computer interaction, Computer vision, Artificial intelligence, Computer engineering, Computer networks, Education—Data processing, Social sciences—Data processing
- Abstract
The 21st International Conference on Human-Computer Interaction, HCII 2019, held in Orlando, Florida, USA, in July 2019, introduced the additional option of 'late-breaking work', which applied to both papers and posters, with corresponding volumes of the proceedings. The 47 late-breaking papers included in this volume were published after the conference had taken place. They are organized in the following topical sections: user experience design and evaluation; information, visualization, and decision making; virtual and augmented reality; learning and games; human and task models in HCI; and design and user experience case studies.
- Published
- 2019
16. Low Acceptance Rates of Conference Papers Considered Harmful.
- Author
-
Parhami, Behrooz
- Subjects
- CONFERENCE papers, ACQUISITION of manuscripts, COMPUTER science periodicals, COMPUTER engineering periodicals, COMPUTER engineering
- Abstract
A quantitative analysis supports the argument that very low acceptance rates of conference papers are more likely to impede publication of bold and innovative research results than to indicate the chosen papers' prestige and elite status. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
17. Chemometric challenges in development of paper-based analytical devices: Optimization and image processing
- Author
-
Paolo Oliveri, Riccardo Leardi, Daniel Citterio, and Vahid Hamedpour
- Subjects
Optimal design, Optimization, Image processing, D-optimal design, Design of experiments, Image processing algorithm, Mathematical morphology recognition (MMR), Microfluidic paper-based analytical devices (μPADs), Biochemistry, Commercialization, Analytical Chemistry, Digital image processing, Environmental Chemistry, Spectroscopy, Chemistry, Paper based, Multiple factors, Computer engineering, RGB color model
- Abstract
Although microfluidic paper-based analytical devices (μPADs) receive a lot of attention in the scientific literature, they rarely reach the level of commercialization. One possible reason is a lack of machine learning techniques supporting the design, optimization, and fabrication of such devices. This work demonstrates the potential of two chemometric techniques, design of experiments (DoE) and digital image processing, to support the production of μPADs. Using the example of a simple colorimetric assay for isoniazid relying on the protonation equilibrium of methyl orange, the experimental conditions were optimized with a D-optimal design (DO) and the impact of multiple factors on the μPAD response was investigated. In addition, this work demonstrates the impact of automatic image processing on accelerating color value analysis and on minimizing errors caused by manual detection area selection. The employed algorithm is based on morphological recognition and allows the analysis of RGB (red, green, and blue) values in a repeatable way. We believe DoE and digital image processing methodologies are key to overcoming some of the remaining weaknesses in μPAD development and facilitating their future market entry.
- Published
- 2020
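The automated detection-area step described above can be illustrated with a toy version: instead of true morphological recognition, a hypothetical intensity threshold separates the colored assay zone from the white paper background, and mean RGB values are computed only inside that zone (a pure-Python sketch, not the authors' algorithm):

```python
def mean_rgb_in_zone(image, threshold=200):
    # image: rows of (r, g, b) tuples; pixels darker than the white paper
    # background (mean channel value below threshold) form the detection zone.
    zone = [px for row in image for px in row if sum(px) / 3 < threshold]
    if not zone:
        return None           # no colorimetric zone found
    n = len(zone)
    # average each channel over the detected zone only
    return tuple(round(sum(px[i] for px in zone) / n, 1) for i in range(3))
```

Automating the zone selection this way removes the operator variability that manual area selection introduces into the RGB readout.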
18. Underlying data for research paper 'A novel deep learning method for recognizing texts printed with multiple different printing methods'
- Author
-
Koponen, Jarmo
- Subjects
Marketing, Computer Sciences, Computational Engineering, Life Sciences, F-Measure, Multi-Object Recognition, Character Recognition, Printing Methods, Precision, FOS: Economics and business, Engineering, Deep Learning, Regions with Convolutional Neural Networks, Medicine and Health Sciences, Physical Sciences and Mathematics, Recall, Tesseract OCR, R-CNN, Business, Computer Engineering, Accuracy
- Abstract
This registration contains information about my research article "A novel deep learning method for recognizing texts printed with multiple different printing methods" and its extended/underlying data, as required by the publisher (F1000Research).
- Published
- 2023
- Full Text
- View/download PDF
19. A review paper on the internet of things (IoT) & its modern application.
- Author
-
Zakaria, Jesmin, Kundu, Jhuma, and Rza, Hasnain
- Subjects
INTERNET of things, COMPUTER engineering, MACHINE-to-machine communications, TELECOMMUNICATION systems, SENSOR networks, HIGH technology
- Abstract
In today's world of advanced technology, we are entering a new era of computing: the Internet of Things (IoT). The Internet, a revolutionary invention, is constantly evolving through new kinds of hardware and software that have made it indispensable to everyone. The IoT can be seen as a kind of global neural network in the cloud connecting a great many things. Communication today is typically person-to-person or device-to-person, but the IoT promises a future for the Internet in which communication is machine-to-machine (M2M). The purpose of this study is to provide a comprehensive overview of the IoT, its enabling technologies, and its sensor networks. This paper covers the subject matter, features, basic requirements, and applications of IoT, and aims to deliver an idea of its development and use, its structure, and its advantages and disadvantages. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
20. electroMicroTransport v2107: Open-source toolbox for paper-based electromigrative separations
- Author
-
Gabriel S. Gerlero, Santiago Márquez Damián, and Pablo A. Kler
- Subjects
Open source, Computer engineering, Hardware and Architecture, Computer science, General Physics and Astronomy, Paper based, Toolbox
- Published
- 2021
21. Artificial Unintelligence: How Computers Misunderstand the World: by Meredith Broussard, Cambridge, MA, MIT Press, 2019, 248 pp., $15.95T/£12.99 (paper).
- Author
-
Schweizer, Karl W.
- Subjects
- COMPUTERS, ARTIFICIAL intelligence, HUMAN behavior, COMPUTER engineering, COMPUTER interfaces, INTELLECT
- Abstract
Though a computer professional herself, Meredith Broussard feels "that the way people talk about technology is out of touch with what digital technology can actually do", and that seeing the latter as a panacea has "resulted in a tremendous amount of poorly designed technology" (6), which needlessly complicates life instead of improving it. Supported by extensive research, Artificial Unintelligence cogently challenges the prevailing technophile hype extolling the unlimited ways in which technology supposedly can "change the world for the better" and create a digital utopia with infinite benefits in every area of life. [Extracted from the article]
- Published
- 2022
- Full Text
- View/download PDF
22. A review paper on memory fault models and test algorithms
- Author
-
Aiman Zakwan Jidin, Lee Weng Fook, Mohd Syafiq Mispan, and Razaidi Hussin
- Subjects
Control and Optimization, Design for testability, Computer Networks and Communications, Computer science, Design for testing, March test algorithm, Random access memory, Chip, Fault (power engineering), Fault detection and isolation, Test (assessment), Computer engineering, Built-in self-test, Hardware and Architecture, Control and Systems Engineering, Memory fault model, Fault coverage, Computer Science (miscellaneous), Electrical and Electronic Engineering, Instrumentation, Row, Information Systems
- Abstract
Testing embedded memories in a chip can be very challenging due to their high density and their manufacture in very deep submicron (VDSM) technologies. In this review paper, functional fault models which may exist in memory are described in terms of their definition and detection requirements. Several memory testing algorithms used in memory built-in self-test (BIST) are discussed in terms of test operation sequences, fault detection ability, and test complexity. The studies show that tests with 22N complexity, such as March SS and March AB, are needed to detect all static unlinked or simple faults within the memory cells; the N in the algorithm complexity refers to Nx*Ny*Nz, where Nx is the number of rows, Ny the number of columns, and Nz the number of banks. This paper also looks into optimization and further improvements that can be made to existing March test algorithms to increase fault coverage or reduce test complexity.
- Published
- 2021
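As an illustration of the March-test idea the abstract describes, the sketch below runs the classic 10N March C- algorithm (a simpler relative of the 22N March SS/AB tests discussed in the paper) against a simulated memory with a hypothetical stuck-at-0 cell:

```python
# March C-: {up(w0); up(r0,w1); up(r1,w0); down(r0,w1); down(r1,w0); up(r0)}

class FaultyMem:
    """Simulated bit-oriented memory with one hypothetical stuck-at-0 cell."""
    def __init__(self, size, stuck_at_0=None):
        self.size = size
        self.cells = [0] * size
        self.sa0 = stuck_at_0

    def write(self, addr, bit):
        self.cells[addr] = 0 if addr == self.sa0 else bit

    def read(self, addr):
        return self.cells[addr]

def march_c_minus(mem):
    """Run March C- and return (element_index, address) for every failing read."""
    faults = []

    def element(idx, addrs, ops):
        for a in addrs:
            for op, val in ops:
                if op == "w":
                    mem.write(a, val)
                elif mem.read(a) != val:       # read mismatch => fault detected
                    faults.append((idx, a))

    up = range(mem.size)
    down = range(mem.size - 1, -1, -1)
    element(0, up,   [("w", 0)])
    element(1, up,   [("r", 0), ("w", 1)])
    element(2, up,   [("r", 1), ("w", 0)])
    element(3, down, [("r", 0), ("w", 1)])
    element(4, down, [("r", 1), ("w", 0)])
    element(5, up,   [("r", 0)])
    return faults
```

A fault-free memory passes with no failing reads, while the stuck-at-0 cell is caught by the read-1 elements; each element touches every address a constant number of times, which is where the linear-in-N complexity figures come from.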
23. Publication guidelines for papers involving computational lithography
- Author
-
Harry J. Levinson
- Subjects
Set (abstract data type), Network architecture, Artificial neural network, Computer engineering, Computational lithography, Hardware_INTEGRATEDCIRCUITS, Photomask, Lithography, GeneralLiterature_MISCELLANEOUS
- Abstract
Editor-in-Chief Harry Levinson introduces a new set of guidelines for papers related to computational lithography.
- Published
- 2021
24. Application of molecular simulation in transformer oil–paper insulation.
- Author
-
Xiao, Xia, Yang, Wenyan, Li, Linduo, Zhong, Tingting, and Zhang, Xiaolin
- Subjects
ELECTRIC insulators & insulation, MOLECULAR dynamics, INSULATING oils, COMPUTER engineering
- Abstract
Molecular simulation is a new technology emerging with the rapid development of computer technology. It has the advantages of clear microscopic models and in-depth, precise calculation, and in recent years it has been widely used in transformer oil–paper insulation systems. Based on this application, this study elaborates in detail the current research status and progress of molecular simulation in the area, covering both transformer insulating oil and insulating paper according to the domestic and foreign literature, with particular analysis of molecular simulation applied to plant insulating oils and nano-modified insulating oils. The complementary research method of molecular simulation and experiment is pointed out, providing a reference for solving many of the problems in the field of transformer oil–paper insulation systems. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
25. A Survey Paper on Acceleration of Convolutional Neural Network using Field Programmable Gate Arrays
- Author
-
Ranjit Sadakale, R. A. Patil, and Jyoti Doifode
- Subjects
Computer engineering, Computer science, Logic gate, Deep learning, Key (cryptography), Process (computing), Artificial intelligence, Field-programmable gate array, Throughput (business), Convolutional neural network, Efficient energy use
- Abstract
Lately, deep learning has shown its capacity by effectively tackling complex learning problems that were not possible previously. In particular, Convolutional Neural Networks (CNNs) are the most widely used and have shown their effectiveness in image detection and recognition problems. CNNs can be used to perform many kinds of tasks; however, recognizing even a single image requires billions of calculations. For example, ResNet-50 takes 8 billion operations just to figure out what is in a single image, so we need ways to run inference very fast, which is why hardware accelerators are needed. This paper provides basic information about CNNs and the key operations involved, together with a brief introduction to Field Programmable Gate Arrays (FPGAs), which enables them to be used for accelerating the inference process of CNNs. Various techniques previously employed for accelerating CNNs are discussed. We focus on designing an energy-efficient FPGA accelerator for CNN inference with reduced latency and increased throughput.
- Published
- 2021
26. Binary Complex Neural Network Acceleration on FPGA : (Invited Paper)
- Author
-
Scott Weitze, Minghu Song, Shanglin Zhou, Sahidul Islam, Tong Geng, Hang Liu, Jiaxin Li, Ang Li, Hongwu Peng, Mimi Xie, Caiwen Ding, and Wei Zhang
- Subjects
Complex data type ,Signal processing ,Memory management ,Computer engineering ,Artificial neural network ,Edge device ,Computer science ,Pruning (decision trees) ,Complex network ,Throughput (business) - Abstract
Being able to learn from complex data with phase information is imperative for many signal processing applications. Today’s real-valued deep neural networks (DNNs) have shown efficiency in latent information analysis but fall short when applied to the complex domain. Deep complex networks (DCNs), in contrast, can learn from complex data but have high computational costs; therefore, they cannot satisfy the instant decision-making requirements of many deployable systems dealing with short observations or short signal bursts. Recently, the Binarized Complex Neural Network (BCNN), which integrates DCNs with binarized neural networks (BNNs), has shown great potential in classifying complex data in real time. In this paper, we propose a structural-pruning-based accelerator for BCNN, which is able to provide more than 5000 frames/s inference throughput on edge devices. The high performance comes from both the algorithm and hardware sides. On the algorithm side, we apply structural pruning to the original BCNN models and obtain 20× pruning rates with negligible accuracy loss; on the hardware side, we propose a novel 2D convolution operation accelerator for the binary complex neural network. Experimental results show that the proposed design works at over 90% utilization and achieves inference throughputs of 5882 frames/s and 4938 frames/s for complex NIN-Net and ResNet-18, respectively, using the CIFAR-10 dataset and an Alveo U280 board.
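As a rough illustration of what structural pruning means (removing whole filters so the savings map directly onto hardware), here is a minimal sketch; the L1-norm selection rule and the keep ratio are assumptions for illustration, not the paper's actual criterion:

```python
import numpy as np

def prune_channels(weights, keep_ratio):
    """weights: (c_out, c_in, k, k) conv tensor. Keep only the keep_ratio
    fraction of output channels with the largest L1 norm; whole filters are
    removed, so the pruned tensor stays dense and hardware-friendly."""
    norms = np.abs(weights).reshape(weights.shape[0], -1).sum(axis=1)
    n_keep = max(1, int(round(keep_ratio * weights.shape[0])))
    kept = np.sort(np.argsort(norms)[-n_keep:])   # indices of strongest filters
    return weights[kept], kept

rng = np.random.default_rng(0)
w = rng.standard_normal((64, 32, 3, 3))
pruned, kept = prune_channels(w, keep_ratio=0.05)  # roughly a 20x pruning rate
print(pruned.shape)  # (3, 32, 3, 3)
```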
- Published
- 2021
27. Low-Latency Distributed Inference at the Network Edge Using Rateless Codes (Invited Paper)
- Author
-
Anton Frigard, Alexandre Graell i Amat, Siddhartha Kumar, and Eirik Rosnes
- Subjects
Computer engineering ,Edge device ,Robustness (computer science) ,Computer science ,Server ,Code (cryptography) ,Latency (engineering) ,Antenna diversity ,Decoding methods - Abstract
We propose a coding scheme for low-latency distributed inference at the network edge that combines a rateless code with an irregular-repetition code. The rateless code provides robustness against straggling servers and serves the purpose of reducing the computation latency, while the irregular-repetition code provides spatial diversity to reduce the communication latency. We show that the proposed scheme yields significantly lower latency than a scheme based on maximum distance separable codes recently proposed by Zhang and Simeone.
- Published
- 2021
28. The first 25 years of the FPL conference: Significant papers
- Author
-
Jason H. Anderson, João M. P. Cardoso, Guy Gogniat, JunKyu Lee, Hayden K.-H. So, Tero Rissa, Patrick Lysaght, Hideharu Amano, Philip H. W. Leong, Oliver Diessel, Koen Bertels, Yu Wang, Wayne Luk, Michael D. Hutton, Marco Platzner, Cristina Silvano, Viktor K. Prasanna, Engineering & Physical Science Research Council (E, Commission of the European Communities, and Engineering & Physical Science Research Council (EPSRC)
- Subjects
Technology ,General Computer Science ,Field (physics) ,Computer science ,Significant papers ,02 engineering and technology ,25 years ,Reconfigurable logic and FPGA ,01 natural sciences ,INTRUSION DETECTION ,Reconfigurable computing ,Field-programmable logic and applications ,0103 physical sciences ,0202 electrical engineering, electronic engineering, information engineering ,Computer Science, Hardware & Architecture ,FPGA ,010302 applied physics ,ARCHITECTURE ,1006 Computer Hardware ,Science & Technology ,CHIP ,RECONFIGURABLE SYSTEMS ,PROGRAMMABLE GATE ARRAYS ,020202 computer hardware & architecture ,NETWORKS ,Computer engineering ,Computer Science ,Algorithm ,FPL ,HARDWARE - Abstract
A summary of contributions made by significant papers from the first 25 years of the Field-Programmable Logic and Applications conference (FPL) is presented. The 27 papers chosen represent those which have most strongly influenced theory and practice in the field.
- Published
- 2017
29. Review Paper: Error Detection and Correction Onboard Nanosatellites
- Author
-
Vipin Balyan and Caleb Hillier
- Subjects
Computer engineering ,Computer science ,Error detection and correction ,Space radiation ,Field-programmable gate array ,Geocentric orbit - Abstract
The work presented in this paper forms part of a literature review conducted during a study on error detection and correction systems. The research formed the foundation of understanding, touching on space radiation, glitches and upsets, geomagnetism, error detection and correction (EDAC) schemes, and implementing EDAC systems. EDAC systems have been around for quite some time, and certain EDAC schemes have been implemented and tested extensively. However, this work is a more focused study on understanding and finding the best-suited EDAC solution for nanosatellites in low earth orbits (LEO).
- Published
- 2021
30. Leveraging Different Types of Predictors for Online Optimization (Invited Paper)
- Author
-
Russell Lee, Jessica Maghakian, Jian Li, Zhenhua Liu, Ramesh K. Sitaraman, and Mohammad H. Hajiesmaili
- Subjects
Prediction algorithms ,Exploit ,Online optimization ,Computer engineering ,Computer science ,Bandwidth (computing) ,Ranging ,Minification ,Video streaming ,Strengths and weaknesses - Abstract
Predictions have a long and rich history in online optimization research, with applications ranging from video streaming to electrical vehicle charging. Traditionally, different algorithms are evaluated on their performance given access to the same type of predictions. However, motivated by the problem of bandwidth cost minimization in large distributed systems, we consider the benefits of using different types of predictions. We show that the two different types of predictors we consider have complementary strengths and weaknesses. Specifically, we show that one type of predictor has strong average-case performance but weak worst-case performance, while the other has weak average-case performance but strong worst-case performance. By using a learning-augmented meta-algorithm, we demonstrate that it is possible to exploit both types of predictors for strong performance in all scenarios.
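A minimal sketch of the learning-augmented idea is to hedge between the two predictors with multiplicative weights; this is a generic construction for illustration, not the paper's meta-algorithm:

```python
import math

def meta_combine(losses_a, losses_b, eta=0.5):
    """Multiplicative-weights hedging between two predictors. Returns, per
    round, the weight placed on predictor A before that round's losses."""
    wa = wb = 1.0
    weights_on_a = []
    for la, lb in zip(losses_a, losses_b):
        weights_on_a.append(wa / (wa + wb))
        wa *= math.exp(-eta * la)   # penalize A in proportion to its loss
        wb *= math.exp(-eta * lb)   # penalize B likewise
    return weights_on_a

# A is wrong early, B is wrong late; the meta-algorithm shifts weight accordingly.
w = meta_combine([1, 1, 0, 0], [0, 0, 1, 1])
print(w)  # starts at 0.5, dips away from A, then recovers after B's losses
```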
- Published
- 2021
31. Dirty paper via a relay with oblivious processing
- Author
-
Michael Peleg and Shlomo Shamai
- Subjects
Computer engineering ,Relay ,law ,Need to know ,Computer science ,Transmitter ,Error correcting ,Computer Science::Information Theory ,Channel use ,law.invention ,Dirty paper ,Coding (social sciences) - Abstract
The oblivious relay serves users without needing to know the users' error-correcting codes. We extend the oblivious relay concept to channels with interference that is known to the transmitter but not to the receiver. Our system uses structured modulation and coding based on lattices. We show that when the interference is known non-causally, its influence can be overcome entirely, and that in simpler causal schemes the performance is usually within the shaping loss of 0.254 bits/channel use of the optimal performance attainable with large lattices.
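The lattice-based scheme itself is beyond a short sketch, but the underlying modulo trick of dirty-paper precoding can be illustrated in the scalar case (a deliberate simplification; the interval width and the noiseless channel are assumptions for illustration):

```python
M = 8.0  # modulo interval width; symbols live in [-M/2, M/2)

def mod(x):
    return (x + M / 2) % M - M / 2

def transmit(symbol, interference):
    # Pre-subtract the interference known at the transmitter; the modulo
    # folds the result back into the interval, keeping transmit power bounded.
    return mod(symbol - interference)

def receive(channel_output):
    # The receiver applies the same modulo and needs no interference knowledge.
    return mod(channel_output)

s, i = 1.5, 13.2
y = transmit(s, i) + i             # noiseless channel: interference adds back on air
print(abs(receive(y) - s) < 1e-9)  # True: symbol recovered despite the interference
```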
- Published
- 2017
32. Towards Demand-Driven Optimization Algorithms in Electromagnetic Engineering (Invited Paper)
- Author
-
Maria Kovaleva, David Bulger, Karu P. Esselle, Wang, CF, Shen, Z, Liu, EX, and Tan, EL
- Subjects
Electromagnetics ,Cross entropy ,Optimization algorithm ,Computer engineering ,Computer science ,020208 electrical & electronic engineering ,0202 electrical engineering, electronic engineering, information engineering ,Demand driven ,020206 networking & telecommunications ,02 engineering and technology ,Antenna (radio) ,Popularity - Abstract
With the increasing popularity of optimization algorithms in electromagnetic engineering, it is clear that mixed-variable design problems prevail. This paper shows that the Cross-Entropy (CE) optimization method is intrinsically versatile enough to handle these and other types of problems. We provide implementation details of two antenna examples optimized by the CE method to demonstrate its elegance and efficiency.
- Published
- 2020
33. Towards Real-time CNN Inference from a Video Stream on a Mobile GPU (WiP Paper)
- Author
-
Do-Hee Kim, Gunju Park, Sumin Kim, Youngmin Yi, and Chanyoung Oh
- Subjects
010302 applied physics ,Speedup ,Computer science ,business.industry ,Deep learning ,Inference ,02 engineering and technology ,01 natural sciences ,020202 computer hardware & architecture ,Computer engineering ,Kernel (statistics) ,0103 physical sciences ,0202 electrical engineering, electronic engineering, information engineering ,Overhead (computing) ,Artificial intelligence ,Quantization (image processing) ,Face detection ,business ,Execution model - Abstract
While there are several frameworks for CNN inference on mobile GPUs, they do not achieve real-time processing for most CNNs that aim at reasonable accuracy, since they all employ a kernel-by-kernel execution model and do not yet effectively support INT8 quantization. In this paper, we reveal that mobile GPUs, unlike server GPUs, suffer from large kernel launch overhead, and we then propose an on-device deep learning inference framework that can achieve real-time inference of CNNs on mobile GPUs by removing kernel launch overhead and by effectively exploiting INT8 quantization. We have evaluated the proposed framework with a state-of-the-art CNN-based face detector (RetinaFace) and observed up to a 2.01X speedup compared to the ARM Compute Library (ACL) on a commodity smartphone.
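For readers unfamiliar with INT8 quantization, a minimal symmetric-quantization sketch follows; the per-tensor scaling rule is a common convention and an assumption here, not the framework's exact scheme:

```python
import numpy as np

def quantize_int8(x):
    """Symmetric per-tensor quantization: map [-max|x|, max|x|] onto [-127, 127]."""
    scale = float(np.abs(x).max()) / 127.0 or 1.0   # guard against all-zero input
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

x = np.array([0.5, -1.0, 0.25, 0.99], dtype=np.float32)
q, s = quantize_int8(x)
print(q)                                   # int8 codes, e.g. -127 for the -1.0 entry
print(np.abs(dequantize(q, s) - x).max())  # worst-case rounding error, below one step
```

Replacing 32-bit floats with these 8-bit codes is what lets mobile GPUs trade a small, bounded rounding error for much higher arithmetic throughput.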
- Published
- 2020
34. Focused Value Prediction: Concepts, techniques and implementations presented in this paper are subject matter of pending patent applications, which have been filed by Intel Corporation
- Author
-
Sumeet Bandishte, Zeev Sperber, Jayesh Gaur, Adi Yoaz, Lihu Rappoport, and Sreenivas Subramoney
- Subjects
010302 applied physics ,Out-of-order execution ,Speedup ,Computer science ,Contrast (statistics) ,02 engineering and technology ,01 natural sciences ,020202 computer hardware & architecture ,Subject matter ,Computer engineering ,0103 physical sciences ,0202 electrical engineering, electronic engineering, information engineering ,Baseline (configuration management) ,Instruction-level parallelism ,Implementation ,Value (mathematics) - Abstract
Value prediction was proposed to speculatively break true data dependencies, thereby allowing Out of Order (OOO) processors to achieve higher instruction level parallelism (ILP) and gain performance. State-of-the-art value predictors try to maximize the number of instructions that can be value predicted, with the belief that higher coverage will unlock more ILP and increase performance. Unfortunately, this comes at increased complexity, with implementations that require multiple different types of value predictors working in tandem, incurring substantial area and power costs. In this paper we argue for lower-coverage but focused value prediction. Instead of aggressively increasing coverage at the cost of higher area and power, we refocus value prediction as a mechanism for the early execution of instructions that frequently create performance bottlenecks in the OOO processor. Since we do not aim for high coverage, our implementation is lightweight, needing just 1.2 KB of storage. Simulation results on 60 diverse workloads show that we deliver a 3.3% performance gain over a baseline similar to the Intel Skylake processor. This gain increases substantially, to 8.6%, when we simulate a futuristic up-scaled version of Skylake. In contrast, for the same storage, state-of-the-art value predictors deliver much lower speedups of 1.7% and 4.7% respectively. Notably, our proposal matches these predictors in performance even when they are given nearly eight times the storage and have 60% more prediction coverage than our solution.
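By way of background, the simplest form of value prediction is a last-value predictor, which guesses that an instruction will produce the same result it produced last time; the sketch below is purely illustrative background and is not Intel's focused design:

```python
class LastValuePredictor:
    """Predict that an instruction (keyed by its PC) repeats its last value."""

    def __init__(self):
        self.table = {}

    def predict(self, pc):
        return self.table.get(pc)   # None means no prediction available yet

    def train(self, pc, actual):
        # After the instruction retires, record the value actually produced.
        self.table[pc] = actual

p = LastValuePredictor()
print(p.predict(0x40))   # None: no history for this instruction yet
p.train(0x40, 7)
print(p.predict(0x40))   # 7: repeats the last observed value
```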
- Published
- 2020
35. Introduction of Non-Volatile Computing In Memory (nvCIM) by 3D NAND Flash for Inference Accelerator of Deep Neural Network (DNN) and the Read Disturb Reliability Evaluation : (Invited Paper)
- Author
-
Po-Kai Hsu, Hang-Ting Lue, Keh-Chung Wang, and Chih-Yuan Lu
- Subjects
Optimal design ,Artificial neural network ,Computer science ,Calibration (statistics) ,Reliability (computer networking) ,010401 analytical chemistry ,Bandwidth (signal processing) ,NAND gate ,Inference ,02 engineering and technology ,021001 nanoscience & nanotechnology ,01 natural sciences ,0104 chemical sciences ,Computer engineering ,0210 nano-technology ,Efficient energy use - Abstract
In this paper, we introduce the optimal design methods of 3D NAND nvCIM [1] and then address the read disturb reliability issue. In recent years, CIM [2] has been widely considered a promising solution for accelerating DNN inference hardware. Theoretically, nvCIM can drastically reduce the power consumed by data movement, since the weights need not be moved during computation. 3D NAND has the advantage of an extremely low Icell (~nA), while its large ON/OFF ratio provides the capability to sum more than 10,000 cells together, improving performance bandwidth and energy efficiency. We think that 3D NAND nvCIM has the potential to serve as an inference accelerator for high-density fully-connected (FC) networks, which often require high-bandwidth inputs. The read disturb property is studied, and it is suggested that an "on-the-fly" calibration technique can maintain the inference accuracy over 10 years of usage.
- Published
- 2020
36. Industry Paper: On the Performance of Commodity Hardware for Low Latency and Low Jitter Packet Processing
- Author
-
Stylianopoulos, Charalampos, Almgren, Magnus, Landsiedel, Olaf, Papatriantafilou, Marina, Neish, Trevor, Gillander, Linus, Johansson, Bengt, and Bonnier, Staffan
- Subjects
packet processing ,jitter ,Communication Systems ,Telecommunications ,Computer Engineering ,Industry 4.0 ,latency - Abstract
With the introduction of Virtual Network Functions (VNF), network processing is no longer done solely on special-purpose hardware. Instead, deploying network functions on commodity servers increases flexibility and has been proven effective for many network applications. However, new industrial applications and the Internet of Things (IoT) call for event-based systems and middleware that can deliver ultra-low and predictable latency, which presents a challenge for the packet processing infrastructure they are deployed on. In this industry experience paper, we take a hands-on look at the performance of network functions on commodity servers to determine the feasibility of using them in existing and future latency-critical event-based applications. We identify sources of significant latency (delays in packet processing and forwarding) and jitter (variation in latency), and we propose application- and system-level improvements for removing them or keeping them within required limits. Our results show that network functions highly optimized for throughput perform sub-optimally under the very different requirements set by latency-critical applications, compared to latency-optimized versions that have up to 9.8X lower latency. We also show that hardware-aware, system-level configurations, such as disabling frequency-scaling technologies, greatly reduce jitter, by up to 2.4X, and lead to more predictable latency.
- Published
- 2020
37. LARP1 paper combined elife 7-2-20
- Author
-
Shelly C. Lu
- Subjects
Computer engineering ,Computer science - Published
- 2020
38. Editorial: Introduction to the Special Section on CVPR2019 Best Papers.
- Author
-
Hua, Gang, Hoiem, Derek, Gupta, Abhinav, and Tu, Zhuowen
- Subjects
ARTIFICIAL intelligence ,COGNITIVE science ,SCHOLARSHIPS ,GENERATIVE adversarial networks ,COMPUTER engineering - Abstract
An introduction is presented in which the editors discuss articles in the issue on topics including exciting results based on self-imitation learning within a cross-modal setting, and advances in controllable generation and interpolation across attributes.
- Published
- 2021
- Full Text
- View/download PDF
39. Writing on Dirty Paper with Feedback
- Author
-
Nicola Elia and Jialing Liu
- Subjects
Lossless compression ,Computer science ,Kalman filter ,Data_CODINGANDINFORMATIONTHEORY ,Dirty-paper coding ,Capacity-achieving coding scheme ,Single antenna interference cancellation ,Interference (communication) ,Computer engineering ,Control theory ,Interconnections among information, control, and estimation ,Electronic engineering ,Dirty paper coding ,Lossless interference cancelation ,Feedback communication ,Encoder ,Wireless sensor network ,Computer Science::Information Theory ,Communication channel ,Coding (social sciences) ,Block (data storage) - Abstract
“Writing on dirty paper” refers to the communication problem over a channel with both noise and interference, where the interference is known to the encoder non-causally and unknown to the decoder. This problem is regarded as a basic building block in both single-user and multiuser communications, and it has been extensively investigated by Costa and other researchers. However, little is known about the case in which the encoder has access to feedback from the decoder. In this paper, we study the dirty-paper coding problem for feedback Gaussian channels with or without memory. We provide the most power-efficient coding schemes for this problem, i.e., schemes that achieve lossless interference cancelation. These schemes are based on the Kalman filtering algorithm, extend the Schalkwijk-Kailath feedback codes, have low complexity and a doubly exponential reliability function, and reveal the interconnections among information, control, and estimation over dirty-paper channels with feedback. This research may prove useful for, for example, power-constrained sensor network communication.
- Published
- 2005
40. Guest Editorial: Advanced image restoration and enhancement in the wild.
- Author
-
Wang, Longguang, Li, Juncheng, Yokoya, Naoto, Timofte, Radu, and Guo, Yulan
- Subjects
IMAGE intensifiers ,IMAGE reconstruction ,COMPUTER vision ,SCHOLARSHIPS ,COMPUTER engineering ,IMAGE denoising ,DEEP learning ,VIDEO compression - Abstract
This document is a guest editorial from the journal IET Computer Vision, discussing the topic of advanced image restoration and enhancement. The editorial highlights the challenges faced in this field, such as the complexity of degradation models for real-world low-quality images and the difficulty of acquiring paired data. It also introduces a special issue of the journal that includes five accepted papers, which focus on video reconstruction and image super-resolution. The editorial concludes by providing brief summaries of each accepted paper. The guest editors of the special issue are also mentioned, along with their research interests and affiliations. [Extracted from the article]
- Published
- 2024
- Full Text
- View/download PDF
41. DEEPEYE: A Deeply Tensor-Compressed Neural Network Hardware Accelerator: Invited Paper
- Author
-
Guangya Li, Hao Yu, Hai-Bao Chen, Yuan Cheng, and Ngai Wong
- Subjects
Clustering high-dimensional data ,Artificial neural network ,Computer science ,Quantization (signal processing) ,Inference ,02 engineering and technology ,010501 environmental sciences ,01 natural sciences ,Object detection ,Computer engineering ,Terminal (electronics) ,Tensor (intrinsic definition) ,0202 electrical engineering, electronic engineering, information engineering ,Feature (machine learning) ,020201 artificial intelligence & image processing ,Quantization (image processing) ,0105 earth and related environmental sciences - Abstract
Video detection and classification constantly involve high-dimensional data that require a deep neural network (DNN) with a huge number of parameters. It is thereby quite challenging to deploy DNN video comprehension on terminal devices. In this paper, we introduce a deeply tensor-compressed video comprehension neural network called DEEPEYE for inference on terminal devices. Instead of building a Long Short-Term Memory (LSTM) network directly from raw video data, we build an LSTM-based spatio-temporal model from tensorized time-series features for object detection and action recognition. Moreover, deep compression is achieved by tensor decomposition and trained quantization of the time-series feature-based spatio-temporal model. We have implemented DEEPEYE on an ARM-core based IoT board with only 2.4W power consumption. Using the video datasets MOMENTS and UCF11 as benchmarks, DEEPEYE achieves a 228.1× model compression with only a 0.47% mAP reduction, as well as a 15k× parameter reduction with a 16.27% accuracy improvement.
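The flavour of such compression can be shown with a matrix low-rank sketch; the paper uses tensor decomposition plus trained quantization, so the truncated SVD below is a simplified stand-in, and the shapes are illustrative assumptions:

```python
import numpy as np

def compress(W, rank):
    """Return low-rank factors A (m x r) and B (r x n) with A @ B ≈ W,
    replacing m*n parameters with r*(m + n)."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    return U[:, :rank] * S[:rank], Vt[:rank]

W = np.random.randn(256, 512)
A, B = compress(W, rank=16)
ratio = W.size / (A.size + B.size)
print(f"parameter compression: {ratio:.1f}x")  # ~10.7x for these shapes
```

Stacking such factorizations across layers (and quantizing the factors) is how compression ratios in the hundreds become possible.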
- Published
- 2019
42. Facilitating Deployment Of A Wafer-Based Analytic Software Using Tensor Methods: Invited Paper
- Author
-
Li-C. Wang, Ahmed Wahba, and Chuanhe Jay Shan
- Subjects
Contextual image classification ,Artificial neural network ,Computer science ,business.industry ,02 engineering and technology ,010501 environmental sciences ,01 natural sciences ,020202 computer hardware & architecture ,Software ,Computer engineering ,Robustness (computer science) ,Software deployment ,0202 electrical engineering, electronic engineering, information engineering ,Wafer ,Tensor ,business ,0105 earth and related environmental sciences - Abstract
Robustness is a key requirement for deploying a machine learning (ML) based solution. When a solution involves a ML model whose robustness is not guaranteed, ensuring robustness of the solution might rely on continuous checking of the ML model for its validity after the solution is deployed in production. Using wafer image classification as an example, this paper introduces tensor-based methods that help improve robustness of a neural-network-based classification approach and facilitate its deployment. Experiment results based on data from a commercial product line are presented to explain the key ideas behind the tensor-based methods.
- Published
- 2019
43. Efficient Simulation of Electromigration Damage in Large Chip Power Grids Using Accurate Physical Models (Invited Paper)
- Author
-
Farid N. Najm and Valeriy Sukharev
- Subjects
LTI system theory ,Computer engineering ,Signoff ,Reliability (computer networking) ,ComputerApplications_COMPUTERSINOTHERSYSTEMS ,Node (circuits) ,Grid ,Chip ,Electromigration ,Power (physics) - Abstract
Due to continued technology scaling, electromigration (EM) signoff has become increasingly difficult, mainly because of the use of inaccurate methods for EM assessment, such as the empirical Black's model. In this paper, we review the state of the art for EM verification in on-die power/ground grids, with emphasis on a recent finite-difference based approach for power grid EM checking using physics-based models. The resulting model allows the EM damage across the power grid to be simulated based on a Linear Time Invariant (LTI) system formulation. The model also handles early failures and accounts for their impact on the grid lifetime. Our results, for a number of IBM power grid benchmarks, confirm that existing approaches for EM checking can be highly inaccurate. The lifetimes found using our physics-based approach are on average about 2X, or more, those based on the existing approaches, showing that existing EM design guidelines are overly pessimistic. The method is also quite fast, with a runtime of about 8 minutes for a 4M-node grid, and so it is suitable for large circuits.
- Published
- 2019
44. A Survey Paper on Leave Automation
- Author
-
Snehal Vijay Kamble and Amruta Shivaji Kamble
- Subjects
leave automation ,leave management ,Computer Engineering - Abstract
In the existing leave record system, every college department follows a manual procedure in which faculty enter information in a record book. At the end of each month, the administration department calculates the leaves of every faculty member, which is a time-consuming process with chances of losing data or introducing errors into the records. This module is a single leave-automation system that is critical for HR tasks and keeps a record of leave information. It adapts to the organization's HR policy and allows employees and their line managers to manage leaves and replacements if required. In this module, the Head of Department (HOD) has permission to view the data of every faculty member of their department; the HOD can approve leaves through the application and view the leave information of every individual. The system can be used in a college to reduce processing workload. The main idea of this project is to develop a centralized online application, connected to a database, that maintains faculty leaves and their replacements when needed. The proposed system will reduce paperwork and maintain records more efficiently and systematically. It will also help calculate the number of leaves taken monthly and annually, thereby helping the HR department compute leaves and working days. Snehal Vijay Kamble | Amruta Shivaji Kamble | Priyanka Sanjeev Babar | Sushmita Shrimant Kakmare | Mr. Nilesh D. Ghorpode "A Survey Paper on Leave Automation" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3 | Issue-2, February 2019, URL: https://www.ijtsrd.com/papers/ijtsrd21370.pdf
- Published
- 2019
45. An Efficient Quantum Circuits Optimizing Scheme Compared with QISKit (Short Paper)
- Author
-
Hong Xiang, Xin Zhang, and Tao Xiang
- Subjects
Scheme (programming language) ,Computer science ,Quantum Physics ,010502 geochemistry & geophysics ,01 natural sciences ,Computer Science::Hardware Architecture ,Quantum circuit ,Computer Science::Emerging Technologies ,Quantum gate ,Computer engineering ,Qubit ,0103 physical sciences ,Quantum algorithm ,Hardware_ARITHMETICANDLOGICSTRUCTURES ,010306 general physics ,Quantum ,computer ,0105 earth and related environmental sciences ,Quantum computer ,Electronic circuit ,computer.programming_language - Abstract
Recently, the development of quantum chips has made great progress: the number of qubits is increasing and fidelity is getting higher. However, the qubits of these chips are not always fully connected, which sets additional barriers to implementing quantum algorithms and programming quantum programs. In this paper, we introduce a general circuit optimizing scheme that can efficiently adjust and optimize quantum circuits for an arbitrary given qubit layout by adding additional quantum gates, exchanging qubits, and merging single-qubit gates. Compared with the optimizing algorithm of IBM's QISKit, our scheme consumes only 74.7% of the quantum gates and 12.9% of the execution time on average.
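One of the optimizations mentioned, merging single-qubit gates, can be sketched as collapsing consecutive same-axis rotations on the same qubit into one gate by summing their angles; the tuple-based gate representation here is an assumption for illustration, not the scheme's internal format:

```python
def merge_rotations(circuit):
    """circuit: list of (gate, qubit, angle) tuples in program order.
    Adjacent rotations with the same gate name on the same qubit merge
    into a single rotation by summing their angles."""
    out = []
    for gate, qubit, angle in circuit:
        if out and out[-1][0] == gate and out[-1][1] == qubit:
            _, _, prev_angle = out.pop()
            out.append((gate, qubit, prev_angle + angle))
        else:
            out.append((gate, qubit, angle))
    return out

ops = [("rz", 0, 0.25), ("rz", 0, 0.25), ("rx", 1, 0.1)]
print(merge_rotations(ops))  # [('rz', 0, 0.5), ('rx', 1, 0.1)]
```

Every merged pair is one fewer gate to execute on noisy hardware, which is where gate-count savings like the 74.7% figure come from.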
- Published
- 2019
46. Trellis shaping based dirty paper coding scheme for the overlay cognitive radio channel
- Author
-
Yuting Sun, Wenbo Xu, and Jiaru Lin
- Subjects
Cognitive radio ,Computer engineering ,business.industry ,Computer science ,Transmitter ,Code (cryptography) ,Bit error rate ,Dirty paper coding ,Data_CODINGANDINFORMATIONTHEORY ,Trellis (graph) ,business ,Decoding methods ,Computer network - Abstract
In this paper, we propose a dirty paper coding (DPC) scheme that uses trellis shaping for the overlay cognitive radio channel, where a cognitive user and a primary user transmit concurrently in the same spectrum. The interference of the primary user is assumed to be known non-causally at the cognitive transmitter. Based on this knowledge, shaping code selection, a key feature of the proposal, is introduced, which enables the constellation to change adaptively. The performance of the proposed scheme is compared in simulations with that of conventional trellis shaping, and it achieves an excellent tradeoff between performance and complexity.
- Published
- 2014
47. Co-Design of Embeddable Diagnostics using Reduced-Order Models (The paper has been supported by SFI grants 12/RC/2289 and 13/RC/2094)
- Author
-
Gregory Provan
- Subjects
Polynomial regression ,Co-design ,0209 industrial biotechnology ,Engineering ,business.industry ,Ode ,Inference ,Embedded processing ,02 engineering and technology ,Structural engineering ,01 natural sciences ,Reduced order ,010104 statistics & probability ,020901 industrial engineering & automation ,Computer engineering ,Control and Systems Engineering ,Benchmark (computing) ,Isolation (database systems) ,0101 mathematics ,business - Abstract
We develop a system for generating embedded diagnostics from an ODE model that can isolate faults within the memory and processing limitations of an embedded processor. The system trades off diagnosis isolation accuracy against inference time and/or memory in a principled manner. We use a polynomial regression approach to tune the performance of an ensemble of low-fidelity ODE diagnosis models so that we meet the embedded processing limits. We demonstrate our approach on a non-linear tank benchmark system.
- Published
- 2017
48. 65‐2: Invited Paper: Advanced Modelling of Field Emission.
- Author
-
Cárceles, Salvador Barranco, Kyritsakis, Andreas, and Underwood, Ian
- Subjects
FIELD emission ,INFORMATION display systems ,LIQUID crystals ,PRODUCT design ,COMPUTER engineering ,DECISION making - Abstract
After offering great promise and attracting substantial investment in technology development, field emission did not succeed in the competition against liquid crystal technology for computer screens and plasma technology for large flat-panel televisions. We review and analyze some of the challenges that prevented field emission from becoming a major information display technology. We report the development of a novel modelling tool that can be used to support the design of field emitters optimized to application requirements. The previous lack of a simulation tool incorporating the range of phenomena that affect field emission has been a limiting factor in emitter performance and data analysis. The new model enables more accurate and effective design of field emitters, including their emission dynamics and thermal behaviour, for a wide range of geometries and materials. It provides a powerful, highly accessible tool to help engineers make informed decisions as part of the product design process. This increases opportunities for field emission to be used in a range of non-display applications. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
49. MUQUT: Multi-Constraint Quantum Circuit Mapping on NISQ Computers: Invited Paper
- Author
-
Debjyoti Bhattacharjee, Swaroop Ghosh, Mahabubul Alam, Anupam Chattopadhyay, and Abdullah Ash Saki
- Subjects
Quantum technology ,Quantum circuit ,Gate count ,Computer engineering ,Computer science ,Qubit ,Logical depth ,Quantum ,AND gate ,Quantum computer - Abstract
Rapid advances in the domain of quantum technologies have opened up to researchers the real possibility of experimenting with quantum circuits and simulating small-scale quantum programs. Nevertheless, the quality of currently available qubits and environmental noise pose a challenge to the smooth execution of quantum circuits. Efficient design automation flows for mapping a given algorithm to a Noisy Intermediate Scale Quantum (NISQ) computer therefore become of utmost importance. State-of-the-art quantum design automation tools are primarily focused on reducing logical depth, gate count and qubit count, with recent emphasis on topology-aware (nearest-neighbour compliant) mapping. In this work, we extend technology mapping flows to simultaneously consider topology and gate fidelity constraints while keeping logical depth and gate count as optimization objectives. We provide a comprehensive problem formulation and a multi-tier approach to solving it. The proposed automation flow is compatible with commercial quantum computers, such as IBM QX and Rigetti. Our simulation results over 10 quantum circuit benchmarks show that circuit fidelity can be improved by up to 3.37×, with an average improvement of 1.87×.
- Published
- 2019
50. Intelligent Test Paper Generation Based on Dynamic Programming Algorithm
- Author
-
SuRong Wang and YiFei Wang
- Subjects
Dynamic programming ,History ,Computer engineering ,Computer science ,Computer Science Applications ,Education ,Test (assessment) - Abstract
This paper describes the problem of intelligent test paper generation and its mathematical model. By optimizing and improving the traditional dynamic programming algorithm, its space complexity is reduced from O(nb) to O(b). At the same time, the flexibility of the dynamic programming algorithm is increased by using a marker function and a tracking algorithm, and the composition of the result is tracked to obtain the optimal solution. Finally, through several experiments, the improved dynamic programming algorithm is compared with the greedy algorithm and the brute-force algorithm; the improved algorithm is found to produce very good results with high efficiency when applied to simple test papers, and it is the most recommended of the algorithms compared in this paper.
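The O(nb) to O(b) space reduction described here is the classic rolling-array trick for knapsack-style dynamic programming: the 2-D table dp[i][j] over n questions and target value b is replaced by a single row, scanned in reverse so each question is used at most once. The subset-sum scoring model below is an assumption for illustration, not the paper's exact formulation:

```python
def can_compose(scores, target):
    """dp[j] is True iff some subset of question scores sums to exactly j.
    A single O(b) row replaces the O(n*b) table."""
    dp = [True] + [False] * target
    for s in scores:
        for j in range(target, s - 1, -1):  # reverse scan: each question used once
            dp[j] = dp[j] or dp[j - s]
    return dp[target]

print(can_compose([5, 10, 15, 20], 35))  # True: e.g. 15 + 20
print(can_compose([5, 10, 15, 20], 12))  # False: all scores are multiples of 5
```

Recovering which questions were chosen, rather than just feasibility, is what the paper's marker function and tracking algorithm add on top of this row-based DP.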
- Published
- 2020