3,066 results for "Transaction processing"
Search Results
2. A Reputation-Based Mechanism for Transaction Processing in Blockchain Systems
- Author
-
Yukun Cheng, Xiaotie Deng, Bo Wang, Jan Xie, Mengqian Zhang, Yuanyuan Yang, and Jiarui Zhang
- Subjects
Blockchain, Transaction processing, Computer science, Computer security, Theoretical Computer Science, Computational Theory and Mathematics, Hardware and Architecture, Software, Mechanism (sociology), Reputation
- Published
- 2022
- Full Text
- View/download PDF
3. The Norwegian State Railway System GTL (1976)
- Author
-
Steine, Tor Olav, Rannenberg, Kai, Editor-in-chief, Sakarovitch, Jacques, Series editor, Goedicke, Michael, Series editor, Tatnall, Arthur, Series editor, Neuhold, Erich J., Series editor, Pras, Aiko, Series editor, Tröltzsch, Fredi, Series editor, Pries-Heje, Jan, Series editor, Whitehouse, Diane, Series editor, Reis, Ricardo, Series editor, Murayama, Yuko, Series editor, Dillon, Tharam, Series editor, Gulliksen, Jan, Series editor, Rauterberg, Matthias, Series editor, Gram, Christian, editor, Rasmussen, Per, editor, and Østergaard, Søren Duus, editor
- Published
- 2015
- Full Text
- View/download PDF
4. The evolution of the Arjuna transaction processing system
- Author
-
Santosh K. Shrivastava and Mark Cameron Little
- Subjects
Computer science, Transaction processing, Middleware (distributed applications), Key (cryptography), Transaction processing system, Cloud computing, Software engineering, Database transaction, Software
- Abstract
The Arjuna transaction system began life in the mid-1980s as an academic project to examine the use of object-oriented techniques in the development of fault-tolerant distributed systems. Twenty-five years later, it is an integral part of the JBoss application server middleware from Red Hat. This journey from an academic to a commercial environment has been neither easy nor smooth, but it has been interesting from many different perspectives. This paper gives an overview of this journey and discusses key lessons learned.
- Published
- 2021
- Full Text
- View/download PDF
5. Research and implementation of HTAP for distributed database
- Author
-
Ouya Pei, Jintao Gao, Wenjie Liu, and Changhong Jing
- Subjects
Decision support system, Distributed database, Transaction processing, Relational database, Computer science, Online analytical processing, Data analysis, HTAP, Pipeline (software), Data warehouse, OLAP, Online transaction processing
- Abstract
Data processing can be roughly divided into two categories: online transaction processing (OLTP) and online analytical processing (OLAP). OLTP is the main application of traditional relational databases and covers basic day-to-day transaction processing, such as bank teller transactions. OLAP is the main application of data warehouse systems; it supports more complex data analysis operations, focuses on decision support, and provides intuitive analysis results. As the amount of data processed by enterprises continues to increase, distributed databases have gradually replaced stand-alone databases and become mainstream. However, the workloads currently supported by distributed databases are mainly OLTP applications, with little OLAP support. This paper proposes an HTAP implementation method for the distributed database CBase, which provides OLAP analysis for CBase and can easily handle analysis of large volumes of data.
- Published
- 2021
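The OLTP/OLAP distinction the HTAP abstract above draws can be made concrete with two query shapes. This is a minimal illustration only: CBase is a distributed system, and `sqlite3` is used here purely to contrast a short keyed write transaction with a scan-heavy analytical aggregate; the table and values are hypothetical.

```python
import sqlite3

# In-memory database purely for illustrating the two workload shapes.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL)")
cur.executemany("INSERT INTO accounts VALUES (?, ?)",
                [(i, 100.0 * i) for i in range(1, 6)])
conn.commit()

# OLTP: a short, keyed read-modify-write transaction (e.g. one leg of a transfer).
cur.execute("UPDATE accounts SET balance = balance - 50 WHERE id = ?", (3,))
conn.commit()

# OLAP: a scan-heavy aggregate over many rows, for analysis and decision support.
cur.execute("SELECT COUNT(*), SUM(balance), AVG(balance) FROM accounts")
count, total, avg = cur.fetchone()
print(count, total, avg)  # 5 1450.0 290.0
```

An HTAP system such as the one proposed must serve both shapes against the same data without the analytical scans stalling the short update transactions.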
6. Research and application of hyperledger fabric based on VRF on electronic order
- Author
-
Tang Rui and Zhang Chengling
- Subjects
Process (engineering), Transaction processing, Computer science, Random function, Data security, Computer security, Order (exchange), Credibility, Verifiable secret sharing, Electronic data
- Abstract
With the rapid development of information digitization, the large volume of electronic orders has brought a series of problems, such as data security and trust. In the face of ever-increasing demand for electronic data storage, traditional storage methods have gradually exposed problems such as high cost, low efficiency, and difficulty of verification, so ensuring the security and credibility of electronic order data has become an urgent problem to solve. Addressing how to guarantee the authority and security of electronic documents, this paper studies the Hyperledger Fabric consensus mechanism and, to resolve the security risks and processing bottlenecks caused by using fixed endorsement nodes to handle all exchanges, proposes a non-interactive, verifiable random endorsement scheme. On the basis of the "endorsement-ordering-validation" consensus mechanism, a candidate set of endorsing nodes is introduced and a verifiable random function (VRF) is used to randomly select endorsing nodes for each order, realizing non-interactive, verifiable random selection of endorsing nodes and a parallel endorsement process. Analysis and experiments show that the optimized consensus mechanism provides higher security and faster transaction processing.
- Published
- 2021
- Full Text
- View/download PDF
7. Penyusunan Laporan Keuangan dan Perancangan Aplikasi Keuangan Untuk Usaha Kecil Menengah Studi Kasus pada D’Haus Cake (Preparation of Financial Statements and Design of a Financial Application for Small and Medium Enterprises: A Case Study of D’Haus Cake)
- Author
-
Elfitri Santi, Eka Rosalina, and Fitra Oliyan
- Subjects
Database, Computer science, Transaction processing, Process (engineering), System requirements, Software, Ledger, Position (finance), Engineering design process, Database transaction
- Abstract
This study aims to design accounting applications for the preparation of financial statements at D'Haus Cake's business. The approach used in this research is a case study. The design process begins with studying the transactions and reports the business needs, in the form of purchase and sale transaction forms and general journals. The required reports comprise all transaction journal records, ledgers, trial balances, profit and loss reports, and statements of financial position. After the system requirements are studied, development proceeds using Microsoft Access 2013. The next stage is to test the application to obtain adequate confidence in its transaction processing by comparing its results with manual calculations. The implementation process converts the initial data on the conversion date and inputs transactions up to the company's operating date. After implementation and conversion, the final stage is to train the users and to improve and adjust the application based on their feedback. The design and implementation process is considered successful once the users state that the application meets all their needs and operates well.
- Published
- 2020
- Full Text
- View/download PDF
8. A system design for elastically scaling transaction processing engines in virtualized servers
- Author
-
Eliezer Levy, Angelos-Christos G. Anadiotis, Hillel Avni, Raja Appuswamy, Shay Goikhman, Ilan Bronshtein, Anastasia Ailamaki, David Dominguez-Sal, Département d'informatique de l'École polytechnique (X-DEP-INFO), École polytechnique (X), Ecole Polytechnique Fédérale de Lausanne (EPFL), Rich Data Analytics at Cloud Scale (CEDAR), Laboratoire d'informatique de l'École polytechnique [Palaiseau] (LIX), École polytechnique (X)-Centre National de la Recherche Scientifique (CNRS)-École polytechnique (X)-Centre National de la Recherche Scientifique (CNRS)-Inria Saclay - Ile de France, Institut National de Recherche en Informatique et en Automatique (Inria)-Institut National de Recherche en Informatique et en Automatique (Inria), Eurecom [Sophia Antipolis], Huawei Research Center [Tel Aviv], Huawei Technologies Co., Ltd [Shenzhen], Huawei Research Center [Munich], Centre National de la Recherche Scientifique (CNRS)-École polytechnique (X)-Centre National de la Recherche Scientifique (CNRS)-École polytechnique (X)-Inria Saclay - Ile de France, and Huawei
- Subjects
Databases [cs.DB], Computer science, Transaction processing, Distributed computing, Hypervisor, Cloud computing, Virtualization, Networking and Internet Architecture [cs.NI], Virtual machine, Server, Transaction processing system, Online transaction processing, Operating Systems [cs.OS], Distributed, Parallel, and Cluster Computing [cs.DC], Scalability
- Abstract
Online Transaction Processing (OLTP) deployments are migrating from on-premise to cloud settings in order to exploit the elasticity of cloud infrastructure, which allows them to adapt to workload variations. However, cloud adaptation comes at the cost of redesigning the engine, which has led to the introduction of several new, cloud-based transaction processing systems focusing mainly on: (i) the transaction coordination protocol, (ii) the data partitioning strategy, and (iii) resource isolation across multiple tenants. As a result, standalone OLTP engines cannot be easily deployed in an elastic setting in the cloud and need to migrate to another, specialized deployment. In this paper, we focus on workload variations that can be addressed by modern multi-socket, multi-core servers and present a system design for providing fine-grained elasticity to multi-tenant, scale-up OLTP deployments. We introduce novel components to the virtualization software stack that enable on-demand addition and removal of computing and memory resources. We provide a bi-directional, low-overhead communication stack between the virtual machine and the hypervisor, which allows the former to adapt to variations coming both from the workload and from resource availability. We show that our system achieves NUMA-aware, millisecond-level, stateful and fine-grained elasticity, while not being intrusive to the design of state-of-the-art, in-memory OLTP engines. We evaluate our system through novel use cases demonstrating that scale-up elasticity increases resource utilization, while allowing tenants to pay for actual use of resources rather than just their reservation.
- Published
- 2020
- Full Text
- View/download PDF
9. Desain Aplikasi Sistem Informasi Akuntansi untuk Usaha Bengkel Studi Kasus pada AA Cempaka Auto Service (Design of an Accounting Information System Application for a Repair-Shop Business: A Case Study of AA Cempaka Auto Service)
- Author
-
Firman Surya, Armel Yentifa, Dedy Djefris, Elfitri Santi, and Rini Frima
- Subjects
Service (systems architecture), Database, Transaction processing, Computer science, Income statement, General ledger, Design process, Visual Basic for Applications, Database transaction, Purchasing
- Abstract
This study aims to design an accounting information system application for the AA Cempaka Auto Service workshop. A case study is used as the research approach. The application design process begins with studying the forms and reports the workshop business requires: purchase forms, sales forms, general journal forms, vehicle service records, and debt payments and receivables. The required reports consist of transaction journal records, the general ledger, the balance sheet, the income statement, the inventory position, and vehicle service records. The application is developed using Microsoft Access 2007 and VBA, with automated journal posting and inventory cost calculation based on the moving-average method. The next step is to test the application to gain sufficient confidence in its transaction processing by comparing its results with manual calculations. The implementation process converts the initial data on the conversion date and inputs transactions up to the company's operating date. After implementation and conversion, the final stage is to train the users and to improve and adjust the application based on their feedback. The system design and implementation process is deemed successful once the users state that the application meets all their needs and operates properly.
- Published
- 2020
- Full Text
- View/download PDF
10. Optimistic Transaction Processing in Deterministic Database
- Author
-
Zhaoguo Wang, Zhiyuan Dong, Haibo Chen, Binyu Zang, Jiachen Wang, and Chuzhe Tang
- Subjects
Database, Computer science, Transaction processing, Commit, Blocking (computing), Concurrency control, Scalability, Optimistic concurrency control, Database transaction, Software
- Abstract
Deterministic databases can improve the performance of distributed workloads by eliminating the distributed commit protocol and reducing contention cost. Unfortunately, current deterministic schemes do not consider performance scalability within a single machine. In this paper, we describe a scalable deterministic concurrency control, Deterministic and Optimistic Concurrency Control (DOCC), which is able to scale performance both within a single node and across multiple nodes. The performance improvement comes from enforcing determinism lazily and preventing read-only transactions from blocking execution. The evaluation shows that DOCC achieves an 8x performance improvement over the popular deterministic database system Calvin.
- Published
- 2020
- Full Text
- View/download PDF
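The "optimistic" half of DOCC builds on classic validate-at-commit concurrency control. The sketch below shows that underlying idea only (backward validation against versions seen at read time), not the paper's deterministic protocol; the `Store` class and its API are illustrative inventions.

```python
# Minimal single-node sketch of optimistic concurrency control (OCC):
# reads record the version they saw, writes are buffered, and commit
# validates that no read has been overwritten in the meantime.

class Store:
    def __init__(self):
        self.data = {}      # key -> committed value
        self.version = {}   # key -> commit version counter

    def begin(self):
        return {"reads": {}, "writes": {}}

    def read(self, txn, key):
        txn["reads"][key] = self.version.get(key, 0)  # remember version seen
        return self.data.get(key)

    def write(self, txn, key, value):
        txn["writes"][key] = value                    # buffer until commit

    def commit(self, txn):
        # Validation: abort if any key we read was committed since we read it.
        for key, seen in txn["reads"].items():
            if self.version.get(key, 0) != seen:
                return False
        # Write phase: install buffered writes and bump versions.
        for key, value in txn["writes"].items():
            self.data[key] = value
            self.version[key] = self.version.get(key, 0) + 1
        return True

store = Store()
t1, t2 = store.begin(), store.begin()
store.read(t1, "x"); store.write(t1, "x", 1)
store.read(t2, "x"); store.write(t2, "x", 2)
ok1 = store.commit(t1)   # first committer wins
ok2 = store.commit(t2)   # t2's read of "x" is now stale -> abort
print(ok1, ok2)          # True False
```

DOCC's contribution, per the abstract, is layering lazy determinism on top of this optimistic core so that the same conflict outcome is reached on every replica.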
11. Signature Verification Using Critical Segments for Securing Mobile Transactions
- Author
-
Jie Yang, Mooi Choo Chuah, Yingying Chen, Chen Wang, and Yanzhi Ren
- Subjects
Authentication, Exploit, Computer Networks and Communications, Transaction processing, Computer science, Feature extraction, Mobile computing, Digital signature, Handwriting recognition, Data mining, Mobile device, Software
- Abstract
The explosive usage of mobile devices enables conducting electronic transactions involving direct signature on such devices. Thus, user signature verification becomes critical to ensure the successful deployment of online transactions such as approving legal documents and authenticating financial transactions. Existing approaches mainly focus on user verification targeting the unlocking of mobile devices or on continuous verification based on a user's behavioral traits; few studies provide efficient real-time user signature verification. In this work, we propose a critical-segment-based online signature verification system to secure mobile transactions on multi-touch mobile devices. Our system identifies and exploits the segments that remain invariant within a user's signature to capture the intrinsic signing behavior embedded in it. It extracts features that describe both the geometric layout of the signature and the behavioral and physiological characteristics of the user's signing process. Given the input signatures for user enrollment, our system further designs a quality score to identify problematic signature sets and achieve robust user signature profile construction. Moreover, we develop signature normalization and interpolation methods to achieve robust verification in the presence of geometric distortions caused by different writing sizes, orientations, and locations on touch screens. Our experimental evaluation of 25 subjects over a six-month period shows that our system is highly accurate in signature verification and robust to signature-forging attacks.
- Published
- 2020
- Full Text
- View/download PDF
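The normalization and interpolation step that the signature-verification abstract describes can be sketched generically: translate a trace to its centroid, scale it to unit size, and resample it to a fixed number of points so signatures written at different sizes and screen locations become comparable. This is a standard preprocessing recipe under stated assumptions, not the authors' exact method; the function names are hypothetical.

```python
# Generic signature-trace preprocessing: centroid translation, size
# normalization, and linear-interpolation resampling to a fixed length.

def normalize(points):
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    cx, cy = sum(xs) / len(xs), sum(ys) / len(ys)          # centroid
    scale = max(max(xs) - min(xs), max(ys) - min(ys)) or 1.0
    return [((x - cx) / scale, (y - cy) / scale) for x, y in points]

def resample(points, n):
    # Linearly interpolate to n evenly spaced points along the index axis,
    # so traces of different lengths yield feature vectors of one size.
    out = []
    for i in range(n):
        t = i * (len(points) - 1) / (n - 1)
        lo = int(t)
        frac = t - lo
        hi = min(lo + 1, len(points) - 1)
        x = points[lo][0] + frac * (points[hi][0] - points[lo][0])
        y = points[lo][1] + frac * (points[hi][1] - points[lo][1])
        out.append((x, y))
    return out

trace = [(0, 0), (2, 0), (2, 2), (0, 2)]   # toy 4-point trace
norm = resample(normalize(trace), 7)
print(len(norm))                            # 7 points regardless of input length
```

After this step, two renditions of the same signature differ mainly in signing dynamics rather than in placement or size, which is what the verifier's feature extraction needs.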
12. Design and implement a new secure prototype structure of e-commerce system
- Author
-
Hala Bahjat Abdul Wahab, Abdul Monem S. Rahma, and Farah Tawfiq Abdul Hussien
- Subjects
Computer science, Transaction processing, Performance, E-commerce system management, Deadlock, E-commerce, Computer security, Secure e-commerce structure, Software agent, The Internet, Database transaction
- Abstract
The rapid development of internet technologies and the widespread adoption of modern, advanced devices have led to growth in the size and diversity of e-commerce systems. These developments increase the number of people who visit these sites for services and products, which intensifies competition in this field. Moreover, the expansion in the volume of currency traded makes transaction protection an essential issue. Providing security for every online client, especially for a huge number of clients at the same time, places a heavy load on the system server. This may lead to server deadlock, especially at rush times, which reduces system performance. To solve these security and performance problems, this research proposes a prototype design for agent software. The agent plays the role of a broker between clients and the electronic marketplace: it provides security inside the client device and converts the client's order into a special form, called a record form, to be sent to the commercial website. Experimental results show that this method increases system performance in terms of page-loading time and transaction processing and improves the utilization of system resources.
- Published
- 2022
13. Authentication in an internet banking environment : towards developing a strategy for fraud detection
- Author
-
Kane Baxter Bignell
- Subjects
Authentication, Computer science, Transaction processing, Intrusion detection system, Computer security, Message authentication code, Anomaly detection, The Internet, Database transaction, Financial fraud
- Abstract
This paper outlines a framework for Internet banking security using multi-layered, feed-forward artificial neural networks. Such applications utilise anomaly detection techniques which can be applied for transaction authentication and intrusion detection within Internet banking security architectures. Such fraud detection strategies have the potential to significantly limit present levels of financial fraud in comparison to existing fraud prevention techniques.
- Published
- 2022
- Full Text
- View/download PDF
14. Privacy preserving event based transaction system in a decentralized environment
- Author
-
Rabimba Karanjai, Mudabbir Kaleem, Lei Xu, Zhimin Gao, Lin Chen, and Weidong Shi
- Subjects
Computer science, Event (computing), Transaction processing, Complex event processing, Asset (computer security), Computer security, Data model, Asynchronous communication, Zero-knowledge proof, Database transaction
- Abstract
In this paper, we present the design and implementation of a privacy-preserving, event-based UTXO (Unspent Transaction Output) transaction system. Unlike existing approaches, which often depend on smart contracts where digital assets are first locked in a vault and then released according to event triggers, our event-based transaction system encodes the event outcome as part of the UTXO note and safeguards event privacy by shielding it with zero-knowledge-proof-based protocols, such that associations between UTXO notes and events are hidden from the validators. Without relying on any triggering mechanism, the proposed system separates event processing from transaction processing: confidential event-based UTXO notes (event-based or conditional UTXOs) can be transferred freely, with full privacy, in an asynchronous manner, with only their asset values conditional on the linked event outcomes. The main advantage of this design is that it enables free trade of event-based digital assets and prevents the assets from being locked. We implemented the proposed transaction system by extending the Zerocoin data model and protocols, and evaluated it using xJsnark.
- Published
- 2021
- Full Text
- View/download PDF
15. Cobots for FinTech
- Author
-
Guruprasad Hagari, Eshan Baranwal, and Mithun Sivan
- Subjects
Service (systems architecture), Computer science, Integrated project delivery, Transaction processing, Time to market, Computer security, Credit card, Embedded software, Software, Mobile phone
- Abstract
Embedded devices enabling payment transaction processing in the financial services industry cannot have any margin for error. These devices need to be tested and validated by replicating a production-like environment to the extent possible. This means literally handling payment-related events as in the real world: swiping a credit card, tapping a mobile phone, or pressing buttons, among many other things. Embedded software development is time-consuming, as it involves multiple man-machine interactions and dependencies, such as managing and handling embedded devices, operating them (pushing buttons, interpreting display panels, reading receipt printouts, etc.), and sharing devices for collaboration within a team. During the current pandemic, it was impossible for software teams to travel to the office, share devices, or even procure necessary devices on time for project tasks. This caused delays to project delivery and increased time to market. The paper describes how the team used Capgemini's flexible Robotics as a Service (RaaS) platform during the pandemic to automate feasible man-machine interactions using robotic arms. It details the work done by the team, involving the Internet of Things (IoT) and Artificial Intelligence (AI), to remotely handle and operate hardware and devices, thereby completing embedded software development life cycles faster and well within budget, while ensuring superior product quality and, importantly, the team's health and safety. This is novel in the financial services space.
- Published
- 2021
- Full Text
- View/download PDF
16. Adaptive layer-two dispute cutoffs in smart-contract blockchains
- Author
-
Naranker Dulay and Rami Khalil
- Subjects
Record locking, Smart contract, Transaction processing, Computer science, Throughput, Denial-of-service attack, Payment, Computer security, Virtual machine, Adaptive system
- Abstract
Second-layer or off-chain protocols aim to increase the throughput of permissionless blockchains by enabling parties to lock funds into smart contracts and perform payments through peer-to-peer communication, resorting to the smart contracts only for protection against fraud. Current protocols have fixed periods during which participants can dispute any fraud attempts. However, current blockchains have limited transaction processing capacity, so a fixed dispute period will not always suffice to deter all fraudulent behaviour in an off-chain protocol. In this paper we present a novel mechanism for adaptive dispute cutoffs (ADCs), which ensures that users retain the opportunity to dispute fraudulent behaviour despite blockchain congestion, while increasing second-layer protocol efficiency by reducing dispute period lengths when the number of disputes is low. We present a non-interactive argument system for setting adaptive dispute periods under the current Ethereum Virtual Machine, and describe how to efficiently integrate built-in support for adaptive dispute periods in any blockchain using binary-indexed trees. We empirically demonstrate that an ADC-enabled second-layer protocol can handle a larger number of disputes and prevent more fraud than its non-adaptive counterparts, even when users are slow to issue disputes due to denial of service or blockchain congestion.
- Published
- 2021
- Full Text
- View/download PDF
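The binary-indexed (Fenwick) tree mentioned in the ADC abstract supports the key query cheaply: "how many disputes were opened in the last W blocks?" The sketch below is an illustrative assumption about how such a counter could drive an adaptive cutoff; the `base`, `window`, and `threshold` parameters and the doubling rule are invented for the example, not taken from the paper.

```python
# Fenwick tree over block numbers: point-update when a dispute is opened,
# prefix-sum queries to count disputes in any block range in O(log n).

class Fenwick:
    def __init__(self, n):
        self.n = n
        self.tree = [0] * (n + 1)

    def add(self, i, delta=1):      # record a dispute opened at block i (1-based)
        while i <= self.n:
            self.tree[i] += delta
            i += i & -i

    def prefix(self, i):            # disputes in blocks 1..i
        s = 0
        while i > 0:
            s += self.tree[i]
            i -= i & -i
        return s

    def range_count(self, lo, hi):  # disputes in blocks lo..hi
        return self.prefix(hi) - self.prefix(lo - 1)

def dispute_cutoff(tree, block, base=10, window=5, threshold=3):
    # Illustrative policy: extend the dispute period when the recent
    # block window carries many open disputes (i.e. under congestion).
    recent = tree.range_count(max(1, block - window + 1), block)
    return base * 2 if recent >= threshold else base

f = Fenwick(100)
for blk in (7, 8, 8, 9):            # four disputes opened near block 10
    f.add(blk)
print(dispute_cutoff(f, 10))        # 4 disputes in blocks 6..10 -> extended to 20
```

The logarithmic update and query costs are what make on-chain bookkeeping of dispute counts plausible, which is presumably why the paper proposes this structure.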
17. Strict Serializable Multidatabase Certification with Out-of-Order Updates
- Author
-
Emil Koutanov
- Subjects
Consistency (database systems), Concurrency control, Transaction processing, Computer science, Distributed transaction, Isolation (database systems), Certification, Timestamp, Computer security, Optimistic concurrency control
- Abstract
Multi-phase atomic commitment protocols require long-lived resource locks on the participants and introduce blocking behaviour at the coordinator. They are also pessimistic in nature, preventing reads from executing concurrently with writes. Despite their known shortfalls, multi-phase protocols are the mainstay of transactional integration between autonomous, federated systems. This paper presents a novel atomic commitment protocol, STRIDE (Serializable Transactions in Decentralised Environments), that offers strict serializable certification of distributed transactions across autonomous, replicated sites. The protocol follows the principles of optimistic concurrency control, operating on the premise that conflicting transactions are infrequent. When they do occur, conflicting transactions are identified through antidependency testing on the certifier, which may be replicated for performance and availability. The majority of transactions can be certified entirely in memory. Unlike its multi-phase counterparts, STRIDE is nonblocking, decentralised and does not mandate the use of long-lived resource locks on the participants. It also offers a flexible isolation model for read-only transactions, which can be served directly from the participant sites without undergoing certification. Also, update transactions are Φ-serializable, making the certifier immune to the recently disclosed logical timestamp skew anomaly.
- Published
- 2021
- Full Text
- View/download PDF
18. The problem of transaction processing using microservice architecture
- Author
-
D.S. Fomin and A.V. Bal'zamov
- Subjects
Technology, Transaction, Computer science, Transaction processing, Microservice architecture, Service, Business process, Architecture, Database
- Abstract
Background. The object of the research is an e-commerce system built on the principle of microservice architecture. The subject of the research is methods of ensuring correct operation of transactions in a microservice architecture. The purpose of the work is to find an optimal method for solving the problem of processing transactions in a microservice architecture. Materials and methods. Research was carried out in the field of architectural solutions for the construction of high-load e-commerce systems. Two-phase commit methods and the compensating-transaction pattern "Saga" were used to process transactions. Results. The research analyzes the features of working with transactions and proposes methods for solving the problem of processing transactions in systems built using a microservice architecture. Conclusions. The approaches considered generally involve introducing additional services (a Transaction Coordinator or Saga Orchestrator) that manage the life cycle of transactions, which increases development cost and complexity. By applying the described methods, the system becomes more fault-tolerant and scalable.
- Published
- 2021
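The compensating-transaction "Saga" pattern named in the abstract above can be sketched in a few lines: each local step carries a compensation that is run in reverse order if a later step fails, so consistency is restored without a global lock. This is a generic orchestration sketch; the order-processing steps and their names are hypothetical.

```python
# Orchestrated saga: run steps in order; on failure, run the compensations
# of all completed steps in reverse order to undo their effects.

def run_saga(steps, state):
    """steps: list of (action, compensation) pairs; each mutates state or raises."""
    done = []
    try:
        for action, compensate in steps:
            action(state)
            done.append(compensate)
    except Exception:
        for compensate in reversed(done):   # undo completed steps, newest first
            compensate(state)
        return False
    return True

state = {"stock": 5, "charged": 0}

def reserve(s):   s["stock"] -= 1
def unreserve(s): s["stock"] += 1
def charge(s):    s["charged"] += 10
def refund(s):    s["charged"] -= 10
def ship(s):      raise RuntimeError("carrier unavailable")   # simulated failure

ok = run_saga([(reserve, unreserve), (charge, refund), (ship, lambda s: None)],
              state)
print(ok, state)   # False {'stock': 5, 'charged': 0}
```

Unlike two-phase commit, nothing is locked while waiting for a coordinator; the trade-off, as the article's conclusions note, is the extra orchestration service and the need to write a correct compensation for every step.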
19. TransKV: A Networking Support for Transaction Processing in Distributed Key-value Stores
- Author
-
Hebatalla Eldakiky and David H. C. Du
- Subjects
Distributed database, Transaction processing, Network packet, Computer science, Throughput, NoSQL, Concurrency control, Timestamp, Database transaction, Computer network
- Abstract
Through the massive use of mobile devices and data clouds, and the rise of the Internet of Things, enormous amounts of data have been generated and analyzed for the benefit of society. NoSQL databases, and especially key-value stores, have become the backbone of managing these large amounts of data. Most key-value stores forgo transactions because of their effect on performance. Meanwhile, programmable switches, together with software-defined networks and the Programming Protocol-Independent Packet Processor (P4), enable a programmable network where in-network computation can help accelerate applications. In this paper, we propose networking support for transaction processing in distributed key-value stores. Our system leverages the programmable switch as a transaction coordinator. Using a variation of the timestamp-ordering concurrency control approach, the programmable switch can decide to proceed with transaction processing or abort the transaction directly from the network. Experimental results on an initial prototype show that our approach, while supporting transactions, improves throughput by up to 4x and reduces latency by 35% compared to existing architectures.
- Published
- 2021
- Full Text
- View/download PDF
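TransKV's coordinator adapts timestamp-ordering (T/O) concurrency control, and the classic rule it varies is compact enough to sketch: each item tracks the largest read and write timestamps it has seen, and any operation arriving "out of order" relative to those is aborted. The class below shows textbook T/O for a single item, not the paper's switch-resident variant; its API is invented for the example.

```python
# Textbook timestamp-ordering concurrency control for one data item:
# reject reads older than the last write, and writes older than the
# last read or write, so the schedule is equivalent to timestamp order.

class TOItem:
    def __init__(self, value=None):
        self.value = value
        self.rts = 0   # largest timestamp that has read this item
        self.wts = 0   # largest timestamp that has written this item

    def read(self, ts):
        if ts < self.wts:            # would observe a "future" write
            return None, False       # abort the reading transaction
        self.rts = max(self.rts, ts)
        return self.value, True

    def write(self, ts, value):
        if ts < self.rts or ts < self.wts:   # a later txn already saw/wrote it
            return False                     # abort the writing transaction
        self.wts = ts
        self.value = value
        return True

item = TOItem("v0")
_, ok_read = item.read(ts=5)            # rts becomes 5
ok_w1 = item.write(ts=3, value="v1")    # ts 3 < rts 5 -> abort
ok_w2 = item.write(ts=7, value="v2")    # in timestamp order -> allowed
print(ok_read, ok_w1, ok_w2, item.value)   # True False True v2
```

Because the decision needs only two counters per key and a comparison, it is the kind of logic a P4 pipeline can plausibly evaluate per packet, which is what lets the switch abort doomed transactions "directly from the network."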
20. An Evaluation of Uncle Block Mechanism Effect on Ethereum Selfish and Stubborn Mining Combined With an Eclipse Attack
- Author
-
Yiming Hei, Jianwei Liu, Tongge Xu, and Yizhong Liu
- Subjects
Computer science, Interval (mathematics), Computer security, Ethereum, Eclipse attack, Order (exchange), Selfish mining, Block (data storage), Transaction processing, Process (computing), Uncle block, Stubborn mining, Database transaction
- Abstract
Ethereum accelerates the transaction process through a quicker block creation design. Since the time interval between the generation of blocks is very short (about 15 s), block propagation time in an inefficient network is not negligible compared with the block interval. This leads to the production of a large number of orphan blocks. In order to solve the security problems orphan blocks may cause and to improve transaction processing efficiency, Ethereum introduced the uncle block mechanism, i.e., an orphan block may receive part of the minting reward if it is referenced by a regular block. In this paper, we show the weaknesses of the uncle block mechanism. Firstly, we describe how Ethereum selfish and stubborn mining differ, state by state, from their Bitcoin counterparts. Secondly, we simulate possible attacks; the results show that the Ethereum selfish and stubborn mining strategies not only increase the attacker's reward but also decrease the security threshold, i.e., the proportion of computational power an attacker needs in order to obtain a higher reward than its power share warrants. At a practical network congestion rate, the security threshold is weakened to 0.129 and 0.216 against the Lead stubborn mining strategy and the original selfish mining strategy, respectively. As the congestion rate rises, the reward increases and the threshold decreases. Thirdly, possible strategies are evaluated to find the optimal one in different settings. Fourthly, we extend the evaluation by combining three eclipse attack strategies with selfish or stubborn mining; most combinations give the attacker more advantage than a single strategy.
- Published
- 2020
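The security threshold that the abstract above reports for Ethereum can be illustrated with the classical Bitcoin selfish-mining analysis of Eyal and Sirer; the Ethereum variants studied in the paper differ because of uncle rewards, so the formulas below are the Bitcoin baseline only. Here `alpha` is the attacker's hash-power share and `gamma` the fraction of honest miners that mine on the attacker's fork during a tie.

```python
def selfish_revenue(alpha, gamma):
    """Eyal-Sirer relative revenue of a selfish miner in Bitcoin."""
    num = alpha * (1 - alpha) ** 2 * (4 * alpha + gamma * (1 - 2 * alpha)) - alpha ** 3
    den = 1 - alpha * (1 + (2 - alpha) * alpha)
    return num / den

def threshold(gamma):
    """Smallest alpha at which selfish mining beats honest mining."""
    return (1 - gamma) / (3 - 2 * gamma)

assert abs(threshold(0.0) - 1 / 3) < 1e-9   # classical 1/3 bound
assert abs(threshold(1.0) - 0.0) < 1e-9     # attacker wins every tie
assert selfish_revenue(0.4, 0.5) > 0.4      # above threshold: profitable
```

The paper's contribution is that uncle rewards and congestion push these thresholds well below the Bitcoin values, down to 0.129 in the worst reported case.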
21. RandAdminSuite: A New Privacy-Enhancing Solution for Private Blockchains
- Author
-
Adam Mihai Gergely and Bogdan Crainicu
- Subjects
0209 industrial biotechnology ,Cryptocurrency ,Blockchain ,Transaction processing ,Computer science ,02 engineering and technology ,Computer security ,computer.software_genre ,Transparency (behavior) ,Industrial and Manufacturing Engineering ,020303 mechanical engineering & transports ,020901 industrial engineering & automation ,0203 mechanical engineering ,Artificial Intelligence ,Currency ,Architecture ,computer ,Database transaction ,Anonymity - Abstract
Blockchain is an innovative technology used by cryptocurrencies as a public, immutable ledger for recording transactions, while more recent versions of blockchain can also record smart contracts and other assets. Blockchain can be viewed as a distributed platform that holds transactional records without the involvement of a central authority, and where ensuring decentralization, transparency and security is of paramount importance. But the transparency requirement, absolutely necessary for improving trust among the blockchain's users, comes at a price: lack of privacy. In most blockchains based on Bitcoin's design, anyone can query the blockchain and see all the transactions. This introduces a privacy issue that needs to be addressed. Although there are a few solutions for mitigating the privacy concern, we consider that true anonymity must be built in, not added on through extensions to the base protocol. We propose a novel solution, called RandAdminSuite, that addresses the blockchain privacy problem through a comprehensive approach covering the blockchain architecture itself, the transaction mechanism and the cryptocurrency as well. RandAdminSuite also offers improvements over the concept of currency rewards for transaction-processing nodes.
- Published
- 2020
- Full Text
- View/download PDF
22. Design of a Control System for a Vending Machine
- Author
-
Khumbulani Mpofu, Solomon Sibanda, Vennan Sibanda, and Eriyeti Murena
- Subjects
Inventory control ,0209 industrial biotechnology ,Point of sale ,Transaction processing ,business.industry ,Computer science ,Payment system ,02 engineering and technology ,010501 environmental sciences ,computer.software_genre ,01 natural sciences ,020901 industrial engineering & automation ,User experience design ,Control system ,Microcomputer ,Operating system ,General Earth and Planetary Sciences ,User interface ,business ,computer ,0105 earth and related environmental sciences ,General Environmental Science - Abstract
Vending machines are found in many public places, selling items such as snacks, beverages, newspapers, tickets and cigarettes. A recently developed vending machine requires a control system to offer a variety of products to the general public. In this light, this paper is aimed at developing a control system for the developed vending machine by specifying the various inputs required to make the machine function efficiently. The system controls and monitors the vending machine's functions, namely the alarm system, product dispensing, refrigeration and the payment system. The microcomputer capitalises on the evolution of high-performance processors and stable operating systems to implement the control requirements. The project uses an intelligent vending machine input/output board to link the other machine peripherals. The control system enables the machine to handle coin, mobile and point-of-sale terminal payment options. Implementation of the control system enhances flexibility in payment, remote machine monitoring and inventory control, and improves the user experience through the integration of digital touch-screen user interfaces and high-speed transaction processing.
- Published
- 2020
- Full Text
- View/download PDF
23. Lessons in Persisting Object Data Using Object-Relational Mapping
- Author
-
Gregory Vial
- Subjects
Object-oriented programming ,Java ,Transaction processing ,Computer science ,Relational database ,business.industry ,InformationSystems_DATABASEMANAGEMENT ,Information technology ,020207 software engineering ,02 engineering and technology ,computer.software_genre ,Object (computer science) ,Software ,0202 electrical engineering, electronic engineering, information engineering ,Object-relational mapping ,business ,Software engineering ,computer ,computer.programming_language - Abstract
In this article, object-relational mapping (ORM) engines in object-oriented programming are introduced. The tradeoffs that must be considered when using ORM are discussed, and four lessons that help software developers take advantage of ORM in transaction processing scenarios are provided.
- Published
- 2019
- Full Text
- View/download PDF
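The object-to-row translation that an ORM engine automates can be sketched by hand as a data-mapper class over SQLite; `Account` and `AccountMapper` are hypothetical names for illustration, not from the article:

```python
import sqlite3

class Account:
    """Plain domain object the application works with."""
    def __init__(self, id, owner, balance):
        self.id, self.owner, self.balance = id, owner, balance

class AccountMapper:
    """Hand-rolled mapper: translates Account objects to/from table rows,
    which is the boilerplate an ORM engine generates automatically."""
    def __init__(self, conn):
        self.conn = conn
        conn.execute("CREATE TABLE IF NOT EXISTS account "
                     "(id INTEGER PRIMARY KEY, owner TEXT, balance REAL)")

    def save(self, acct):
        self.conn.execute("INSERT OR REPLACE INTO account VALUES (?, ?, ?)",
                          (acct.id, acct.owner, acct.balance))

    def find(self, id):
        row = self.conn.execute(
            "SELECT id, owner, balance FROM account WHERE id = ?",
            (id,)).fetchone()
        return Account(*row) if row else None

conn = sqlite3.connect(":memory:")
mapper = AccountMapper(conn)
mapper.save(Account(1, "alice", 100.0))
acct = mapper.find(1)
assert acct.owner == "alice" and acct.balance == 100.0
```

The transaction-processing tradeoff the article discusses shows up exactly here: every `save`/`find` hides SQL whose cost and locking behavior the developer no longer sees directly.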
24. Robotic Process Automation
- Author
-
Vinay Kommera
- Subjects
Transaction processing ,business.industry ,Computer science ,05 social sciences ,Digital transformation ,050301 education ,030206 dentistry ,Process automation system ,computer.software_genre ,Automation ,03 medical and health sciences ,0302 clinical medicine ,Software ,Scripting language ,General Earth and Planetary Sciences ,Robot ,User interface ,Software engineering ,business ,0503 education ,computer ,General Environmental Science - Abstract
RPA, or Robotic Process Automation, is software that mimics the steps a human takes to complete rules-based, repetitive tasks. The robot carries out work with speed and precision, utilizing the same applications your employees use every day. In traditional automation, all actions are based primarily on programming/scripting, APIs or other methods of integration with backend systems or internal applications. In contrast, RPA automation software migrates work from the human to the computer: it stops paying humans to do work ripe for automation, speeds up front- and back-office transaction processing, delivers near "Instant On" integration at the lowest cost, optimizes the user interface to drive long call/transaction times down, accelerates digital transformation objectives, and eliminates errors, thereby improving productivity by making workers smarter.
- Published
- 2019
- Full Text
- View/download PDF
25. Cloud-native database systems at Alibaba
- Author
-
Feifei Li
- Subjects
0106 biological sciences ,Database ,business.industry ,Transaction processing ,Computer science ,010604 marine biology & hydrobiology ,Online analytical processing ,Big data ,General Engineering ,020206 networking & telecommunications ,Cloud computing ,02 engineering and technology ,computer.software_genre ,01 natural sciences ,Autoscaling ,Scalability ,0202 electrical engineering, electronic engineering, information engineering ,business ,Database transaction ,computer ,Intelligent database - Abstract
Cloud-native databases have become increasingly important in the era of cloud computing, due to the need for elasticity and on-demand usage by various applications. These demands from cloud applications present new opportunities for cloud-native databases that cannot be fully addressed by traditional on-premise enterprise database systems. A cloud-native database leverages software-hardware co-design to exploit accelerations offered by new hardware such as RDMA and NVM, and by kernel-bypassing protocols such as DPDK. Meanwhile, new design architectures, such as shared storage, enable a cloud-native database to decouple computation from storage and provide excellent elasticity. For highly concurrent workloads that require horizontal scalability, a cloud-native database can leverage a shared-nothing layer to provide distributed query and transaction processing. Applications also require cloud-native databases to offer high availability through distributed consensus protocols. At Alibaba, we have explored a suite of technologies to design cloud-native database systems. Our storage layer, comprising X-Engine and PolarFS, improves both write and read throughput by using an LSM-tree design and self-adapted separation of hot and cold data records. Based on these efforts, we have designed and implemented POLARDB and its distributed version POLARDB-X, which successfully supported the extreme transaction workloads during the 2018 Global Shopping Festival on November 11, 2018, and achieved commercial success on Alibaba Cloud. We have also designed an OLAP system called AnalyticDB (ADB for short) to enable real-time interactive data analytics over big data. We have further explored a self-driving database platform to achieve autoscaling and intelligent database management. We report key technologies and lessons learned to highlight the technical challenges and opportunities for cloud-native database systems at Alibaba.
- Published
- 2019
- Full Text
- View/download PDF
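The LSM-tree write path that an engine like X-Engine builds on can be sketched as an in-memory memtable flushed into immutable sorted runs; the tiny size limit and linear-scan reads below are simplifications for illustration only:

```python
class TinyLSM:
    """Toy LSM-tree: writes buffer in a memtable; when it fills, the
    memtable is flushed as an immutable sorted run. Reads check the
    memtable first, then runs from newest to oldest."""

    def __init__(self, memtable_limit=2):
        self.memtable, self.runs, self.limit = {}, [], memtable_limit

    def put(self, key, value):
        self.memtable[key] = value
        if len(self.memtable) >= self.limit:
            self.runs.append(sorted(self.memtable.items()))  # flush
            self.memtable = {}

    def get(self, key):
        if key in self.memtable:
            return self.memtable[key]
        for run in reversed(self.runs):   # newest run wins
            for k, v in run:
                if k == key:
                    return v
        return None

db = TinyLSM()
db.put("a", 1); db.put("b", 2)   # second put triggers a flush
db.put("a", 3)                   # newer value shadows the flushed one
assert db.get("a") == 3 and db.get("b") == 2
```

A production engine adds background compaction of runs and, as the abstract notes, hot/cold separation, but the append-mostly write pattern shown here is what makes the design cloud-storage friendly.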
26. SolarDB
- Author
-
Tao Zhu, Zhuoyue Zhao, Dong Xie, Aoying Zhou, Feifei Li, Ryan Stutsman, Huiqi Hu, Haining Li, and Weining Qian
- Subjects
Concurrency control ,Data access ,Database ,Relational database management system ,Hardware and Architecture ,Computer science ,Transaction processing ,Key (cryptography) ,Distributed transaction ,Workload ,Layer (object-oriented design) ,computer.software_genre ,computer - Abstract
Efficient transaction processing over large databases is a key requirement for many mission-critical applications. Although modern databases have achieved good performance through horizontal partitioning, their performance deteriorates when cross-partition distributed transactions have to be executed. This article presents SolarDB, a distributed relational database system that has been successfully tested at a large commercial bank. The key features of SolarDB include (1) a shared-everything architecture based on a two-layer log-structured merge-tree; (2) a new concurrency control algorithm that works with the log-structured storage and ensures efficient, non-blocking transaction processing even while the storage layer is compacting data among nodes in the background; and (3) fine-grained data access to effectively minimize and balance network communication within the cluster. According to our empirical evaluations on TPC-C, SmallBank, and a real-world workload, SolarDB outperforms existing shared-nothing systems by up to 50x when close to or more than 5% of transactions are distributed.
- Published
- 2019
- Full Text
- View/download PDF
27. A Proof-of-Trust Consensus Protocol for Enhancing Accountability in Crowdsourcing Services
- Author
-
Bin Ye, Mehmet A. Orgun, Lei Li, Jun Zou, Yan Wang, and Lie Qu
- Subjects
Service (business) ,Information Systems and Management ,Computer Networks and Communications ,business.industry ,Computer science ,Transaction processing ,Crowdsourcing ,computer.software_genre ,Computer security ,Computer Science Applications ,Hardware and Architecture ,Paxos ,Trust management (information system) ,Web service ,business ,Byzantine fault tolerance ,computer ,Database transaction - Abstract
Incorporating accountability mechanisms in online services requires effective trust management and an immutable, traceable source of truth for transaction evidence. The emergence of blockchain technology brings high hopes of fulfilling most of these requirements. However, a major challenge is to find a proper consensus protocol that is applicable to crowdsourcing services in particular and online services in general. Building upon the idea of using blockchain as the underlying technology to enable tracing of transactions for service contracts and dispute arbitration, this paper proposes a novel consensus protocol that is suitable for crowdsourcing as well as the general online service industry. The new consensus protocol is called "Proof-of-Trust" (PoT) consensus; it selects transaction validators based on the service participants' trust values while leveraging RAFT leader election and Shamir's secret sharing algorithms. The PoT protocol avoids the low-throughput and resource-intensive pitfalls associated with Bitcoin's "Proof-of-Work" (PoW) mining, while addressing the scalability issue associated with traditional Paxos-based and Byzantine Fault Tolerance (BFT)-based algorithms. In addition, it addresses unfaithful behaviors that cannot be dealt with in traditional BFT algorithms. The paper demonstrates that our approach can provide a viable accountability solution for the online service industry.
- Published
- 2019
- Full Text
- View/download PDF
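One building block the PoT protocol leverages, Shamir's secret sharing, can be sketched over a small prime field; the prime and the (k, n) parameters below are illustrative, not the paper's:

```python
import random

P = 2087  # small prime field modulus (illustrative)

def make_shares(secret, k, n):
    """Split `secret` into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the polynomial's constant
    term, i.e. the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = make_shares(secret=1234, k=3, n=5)
assert reconstruct(shares[:3]) == 1234   # any 3 of 5 shares suffice
assert reconstruct(shares[1:4]) == 1234
```

In a PoT-style protocol such sharing lets no single validator hold the whole secret, while any quorum of trusted validators can act jointly.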
28. Automated Workflow Execution of Loan Transaction Processing for Distributed Environment
- Author
-
Emmanuel O. Oshoiribhor, A. M. John-Otumu, and R. E Imhanlahimi
- Subjects
Distributed Computing Environment ,Workflow ,Database ,Transaction processing ,Computer science ,Loan ,computer.software_genre ,computer - Published
- 2019
- Full Text
- View/download PDF
29. Utilizing IPFS and Private Blockchain to Secure Forensic Information
- Author
-
Saha Reno, Shovan Bhowmik, and Mamun Ahmed
- Subjects
File system ,Immutability ,Blockchain ,restrict ,Computer science ,Transaction processing ,Law enforcement ,Confidentiality ,computer.software_genre ,Computer security ,Database transaction ,computer - Abstract
Forensic science applies scientific methods to determine the actual cause of a crime and to bring justice to the victims. Forensic reports incorporate information regarding different crimes. These details are considered extremely valuable and confidential, as they help law enforcement agencies and prosecutors ensure punishment of the guilty. These reports require security that restricts access to authorized persons only. A blockchain stores every transaction occurring in the system, and these transactions cannot be removed or modified because of its immutability. In this work, the Inter-Planetary File System (IPFS) and a Hyperledger-based private blockchain are assembled to implement a secure forensic information storage system. Our system enables the tracing of any illegitimate entrance or data tampering by intruders. Our proposed hybrid approach surpasses classical public blockchain systems, i.e., Bitcoin and Ethereum, in transaction processing time, achieving an average of 11.99 seconds per transaction. The system also facilitates the storage of heavyweight files, which is not possible inside existing blockchain frameworks.
- Published
- 2021
- Full Text
- View/download PDF
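The core pattern above, keeping heavyweight files off-chain while anchoring only their content hashes in an immutable hash-linked chain so tampering is detectable, can be sketched generically (this is not Hyperledger's or IPFS's actual API):

```python
import hashlib, json

def h(data):
    return hashlib.sha256(data).hexdigest()

class EvidenceChain:
    """Each block links to the previous block's hash and records the
    content hash of an off-chain forensic report."""

    def __init__(self):
        self.blocks = [{"prev": "0" * 64, "doc_hash": None}]  # genesis

    def add_report(self, report_bytes):
        block = {"prev": h(json.dumps(self.blocks[-1]).encode()),
                 "doc_hash": h(report_bytes)}  # file off-chain, hash on-chain
        self.blocks.append(block)
        return block["doc_hash"]

    def verify(self, index, report_bytes):
        """Detect tampering: the stored hash must match the file's hash."""
        return self.blocks[index]["doc_hash"] == h(report_bytes)

chain = EvidenceChain()
chain.add_report(b"case 17: ballistic analysis")
assert chain.verify(1, b"case 17: ballistic analysis")
assert not chain.verify(1, b"case 17: tampered analysis")
```

This is why the hybrid design scales: the chain stays small regardless of how large the forensic files grow.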
30. Scale-down experiments on TPCx-HS
- Author
-
Tilmann Rabl and Maximilian Böther
- Subjects
Data processing ,Commodity hardware ,Transaction processing ,business.industry ,Computer science ,Big data ,computer.software_genre ,Raspberry pi ,Benchmark (computing) ,Operating system ,business ,Throughput (business) ,computer ,Scale down - Abstract
The Transaction Processing Performance Council's (TPC) benchmarks are the standard for evaluating data processing performance and are extensively used in academia and industry. Official TPC results are usually produced on high-end deployments, making transferability to commodity hardware difficult. Recent performance improvements in low-power ARM CPUs have made low-end computers, such as the Raspberry Pi, a candidate platform for distributed, low-scale data processing. In this paper, we conduct a feasibility study of executing scaled-down big data workloads on low-power ARM clusters. To this end, we run the TPCx-HS benchmark on two Raspberry Pi clusters. TPCx-HS is the ideal candidate for hardware comparisons and for understanding hardware characteristics of data processing workloads, because TPCx-HS results do not depend on specific software implementations and the benchmark has limited options for workload-specific tuning. Our evaluation shows that Pis exhibit behavior similar to large-scale big data systems in terms of price-performance and relative throughput. Current-generation Pi clusters are becoming a reasonable choice for GB-scale data processing due to the increasing amount of available memory, while older versions struggle with stable execution of high-load scenarios.
- Published
- 2021
- Full Text
- View/download PDF
31. HeuristicDB
- Author
-
David J. Lilja, Bingzhe Li, and Jinfeng Yang
- Subjects
020203 distributed computing ,Database ,business.industry ,Computer science ,Transaction processing ,Online analytical processing ,Big data ,Device file ,020206 networking & telecommunications ,02 engineering and technology ,computer.software_genre ,Non-volatile memory ,Relational database management system ,0202 electrical engineering, electronic engineering, information engineering ,Online transaction processing ,business ,computer ,Block (data storage) - Abstract
Hybrid storage systems are widely used in big data settings to balance system performance and cost. However, due to a poor understanding of the characteristics of database block requests, past studies in this area cannot fully exploit the performance gains of emerging storage devices. This study presents a hybrid storage database system, called HeuristicDB, which uses an emerging non-volatile memory (NVM) block device as an extension of the database buffer pool. To account for the unique performance behaviors of NVM block devices and the block-level characteristics of database requests, a set of heuristic rules is proposed that associates database block requests with the appropriate quality of service for caching-priority purposes. Using online analytical processing (OLAP) and online transaction processing (OLTP) benchmarks, both trace-based examination and a system implementation on MySQL are carried out to evaluate the effectiveness of the proposed design. The experimental results indicate that HeuristicDB provides up to 75% higher performance and migrates 18X less data between storage and the NVM block device than existing systems.
- Published
- 2021
- Full Text
- View/download PDF
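The idea of heuristic, priority-based admission into an NVM cache tier can be sketched as follows; the block classes, priorities, and eviction rule are invented placeholders for illustration, not HeuristicDB's actual rule set:

```python
# Hypothetical priority classes for database block requests.
PRIORITY = {"index": 3, "undo_log": 2, "data": 1, "sequential_scan": 0}

class NVMCache:
    """Admit a block into the NVM tier only if its class clears a
    priority threshold; evict strictly lower-priority victims."""

    def __init__(self, capacity, min_priority=1):
        self.capacity, self.min_priority = capacity, min_priority
        self.store = {}  # block_id -> (priority, payload)

    def admit(self, block_id, block_class, payload):
        prio = PRIORITY.get(block_class, 0)
        if prio < self.min_priority:
            return False                      # e.g. scan blocks bypass cache
        if len(self.store) >= self.capacity:
            victim = min(self.store, key=lambda b: self.store[b][0])
            if self.store[victim][0] >= prio:
                return False                  # nothing cheaper to evict
            del self.store[victim]
        self.store[block_id] = (prio, payload)
        return True

cache = NVMCache(capacity=1)
assert cache.admit(1, "data", b"...")
assert not cache.admit(2, "sequential_scan", b"...")  # filtered by heuristic
assert cache.admit(3, "index", b"...")                # higher class evicts data
```

The benefit reported in the abstract, far fewer migrations, comes precisely from filtering low-value blocks before they ever enter the NVM tier.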
32. Attaining Workload Scalability and Strong Consistency for Replicated Databases with Hihooi
- Author
-
Michael A. Georgiou, Lambros Odysseos, Michael Sirivianos, Herodotos Herodotou, Aristodemos Paphitis, and Michael Panayiotou
- Subjects
Computer and Information Sciences ,Workload scalability ,Database ,Computer science ,Transaction processing ,Replica ,Strong consistency ,computer.software_genre ,Replication (computing) ,Concurrency control ,Consistency (database systems) ,Scalability ,Natural Sciences ,Database replication ,computer ,Database transaction - Abstract
Database replication can be employed for scaling transactional workloads while maintaining strong consistency semantics. However, past approaches suffer from various issues such as limited scalability, performance versus consistency tradeoffs, and requirements for database or application modifications. Hihooi is a new replication-based master-slave middleware system that is able to overcome the aforementioned limitations. The novelty of Hihooi lies in its modern architecture as well as its replication and transaction routing algorithms. In particular, Hihooi replicates all write statements asynchronously and applies them in parallel at the replica nodes, while ensuring replica consistency. At the same time, a fine-grained transaction routing algorithm ensures that all read transactions are load balanced to the replicas consistently. This demonstration will showcase the key functionalities of Hihooi, including (i) practical management of system components and databases (e.g., add a new replica node), (ii) increased scalability compared to state-of-the-art approaches, and (iii) support for elasticity by suspending and resuming database replicas online without service interruption.
- Published
- 2021
- Full Text
- View/download PDF
33. EVM-Perf: High-Precision EVM Performance Analysis
- Author
-
Stefan Tai, Anselm Busse, and Jacob Eberhardt
- Subjects
Instruction set ,Smart contract ,Computer engineering ,Computer science ,Transaction processing ,Virtual machine ,Information system ,x86 ,computer.software_genre ,Dimensioning ,computer ,Block (data storage) - Abstract
Smart contract execution engines are a central part of transaction processing in blockchains. One of the most widely used execution engines is the Ethereum Virtual Machine (EVM). EVM performance is a key determinant of overall blockchain system performance. Being able to gather detailed insights into EVM performance characteristics is essential not only when implementing EVMs, but also when designing and dimensioning blockchain-based information systems. Today, there is no precise and easy-to-use solution for gathering such information. To address this issue, we introduce EVM-Perf, a high-precision EVM evaluation and analysis framework. Our framework allows detailed performance analysis of arbitrary EVM implementations. Unlike previous work, it leverages statistical analysis methods to achieve higher levels of accuracy. We provide an open-source implementation of EVM-Perf and use it to conduct extensive performance analyses on different physical machines, comprising both x86_64 and ARMv8 instruction set architectures. We discuss how EVM-Perf supports system architects with machine selection as well as configuration of Ethereum deployments, e.g., determining gas limits or block intervals.
- Published
- 2021
- Full Text
- View/download PDF
34. On Conditional Cryptocurrency With Privacy
- Author
-
Weidong Shi, Lin Chen, Lei Xu, Rabimba Karanjai, Mudabbir Kaleem, and Zhimin Gao
- Subjects
Information privacy ,Cryptocurrency ,Data model ,Transaction processing ,Asynchronous communication ,Computer science ,Complex event processing ,Computer security ,computer.software_genre ,Asset (computer security) ,computer ,Event (probability theory) - Abstract
In this paper, we present the design and implementation of a conditional cryptocurrency system with privacy protection. Unlike existing approaches, which often depend on smart contracts where cryptocurrencies are first locked in a vault and then released according to event triggers, the conditional cryptocurrency system encodes an event outcome as part of a cryptocurrency note in a UTXO-based system. Without relying on any triggering mechanism, the proposed system separates event processing from conditional coin transaction processing: conditional cryptocurrency notes can be transferred freely in an asynchronous manner, with only their asset values conditional on the linked event outcomes. The main advantage of this design is that it enables free trade of conditional assets and prevents assets from being locked. In this work, we demonstrate a method of confidential conditional coins by extending the Zerocoin data model and protocol. The system is implemented and evaluated using xJsnark.
- Published
- 2021
- Full Text
- View/download PDF
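The paper's central idea, a note whose redeemable value is resolved against a linked event outcome rather than being locked in a contract vault, can be sketched as follows; the field names are illustrative, and the real system adds Zerocoin-style privacy on top:

```python
class ConditionalNote:
    """UTXO-style note: freely transferable at any time, but its
    redeemable value depends on the outcome of a linked event."""

    def __init__(self, face_value, event_id, payoff):
        self.face_value = face_value
        self.event_id = event_id
        self.payoff = payoff  # outcome -> fraction of face value

    def redeemable_value(self, event_outcomes):
        outcome = event_outcomes.get(self.event_id)
        if outcome is None:
            return 0  # event unresolved: tradable, but not yet redeemable
        return self.face_value * self.payoff.get(outcome, 0)

note = ConditionalNote(100, "match-42",
                       {"team_a_wins": 1.0, "team_b_wins": 0.0})
assert note.redeemable_value({}) == 0                       # still tradable
assert note.redeemable_value({"match-42": "team_a_wins"}) == 100.0
```

Because value resolution happens only at redemption, the note never needs to be locked while the event is pending, which is the design's main advantage over trigger-based contracts.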
35. Talking Blockchains: The Perspective of a Database Researcher
- Author
-
Felix Martin Schuhknecht
- Subjects
Structure (mathematical logic) ,Blockchain ,Database ,Distributed database ,Transaction processing ,Computer science ,business.industry ,Existential quantification ,010102 general mathematics ,Cryptography ,0102 computer and information sciences ,computer.software_genre ,01 natural sciences ,Variety (cybernetics) ,Workflow ,010201 computation theory & mathematics ,0101 mathematics ,business ,computer - Abstract
There are few topics out there that seem to create as much confusion and discussion as blockchains. This has a multitude of reasons: (1) a large number of drastically different concepts and systems are unified under the very broad term "blockchain"; (2) the topic touches a variety of different fields, including databases, distributed processing, networks, cryptography, and even economics; (3) there exists a large number of different applications of the technology. The goal of this paper is to simplify and structure the discussion of blockchain technology. We first introduce a simple formalization of the basic components that appear again and again in a variety of blockchain systems. Second, we formalize four important execution models that express the workflow of a large number of blockchain systems. Third, along the way, we also discuss certain misconceptions that constantly reappear in discussions. We believe that a common formalization of the transaction processing behavior of blockchain systems both helps beginners get into the topic and can bring experts from different fields to a common denominator when discussing blockchain systems.
- Published
- 2021
- Full Text
- View/download PDF
36. Discriminative Admission Control for Shared-everything Database under Mixed OLTP Workloads
- Author
-
Aoying Zhou, Donghui Wang, Peng Cai, and Weining Qian
- Subjects
Concurrency control ,Database ,Transaction processing ,Computer science ,Concurrent computing ,Thrashing ,Online transaction processing ,Admission control ,computer.software_genre ,Database transaction ,computer ,Partition (database) - Abstract
Due to the variability of IT applications, back-end databases usually run mixed OLTP workloads comprising a variety of transactions. Some of these transactions are high-conflict and others are low-conflict; furthermore, high-conflict transactions may contend on different groups of data stored in the database. Without precise admission control, too many transactions conflicting on the same group of records are executed simultaneously by the OLTP engine, leading to the well-known problem of data-contention thrashing. Under mixed OLTP workloads, conflicting transactions can be blocked for a long time or eventually rolled back, while other transactions get too little opportunity to be processed. To achieve optimal performance for each kind of transaction, we design a discriminative admission control mechanism for shared-everything databases, referred to as DAC. DAC can quickly identify and classify high-conflict transactions according to the set of records they try to access, which is defined as a conflict zone. DAC performs admission control over OLTP transactions at the granularity of conflict zones. By adaptively adjusting the transaction concurrency level for each zone, blocking and waiting among the same kind of high-conflict transactions can be alleviated. Furthermore, thread resources are released so that the execution of low-conflict transactions is less affected. We evaluate DAC using a main-memory database prototype and a classical disk-based database system. Experimental results demonstrate that DAC helps an OLTP engine significantly improve performance under mixed OLTP workloads.
- Published
- 2021
- Full Text
- View/download PDF
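Per-conflict-zone admission control as described above can be sketched with one concurrency budget per zone, so high-conflict transactions throttle only their own zone while low-conflict work passes untouched; the zone names and limits are illustrative:

```python
import threading

class ZoneAdmission:
    """One semaphore per known conflict zone caps how many transactions
    touching that zone run concurrently; unknown (low-conflict) zones
    are admitted without gating."""

    def __init__(self, zone_limits):
        self.gates = {z: threading.Semaphore(n) for z, n in zone_limits.items()}

    def try_admit(self, zone):
        gate = self.gates.get(zone)
        if gate is None:
            return True                     # low-conflict: admit immediately
        return gate.acquire(blocking=False)

    def release(self, zone):
        if zone in self.gates:
            self.gates[zone].release()

ac = ZoneAdmission({"hot-account-range": 2})
assert ac.try_admit("hot-account-range")
assert ac.try_admit("hot-account-range")
assert not ac.try_admit("hot-account-range")  # zone budget exhausted
ac.release("hot-account-range")
assert ac.try_admit("cold-range")             # low-conflict path unaffected
```

DAC additionally classifies transactions into zones online and adapts each zone's limit; the fixed limits here stand in for that adaptive step.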
37. Design and Implementation of a Secure QR Payment System Based on Visual Cryptography
- Author
-
Rania Al-Sabha, Ali Al-Haj, and Lina Ahmad
- Subjects
business.industry ,Computer science ,Transaction processing ,media_common.quotation_subject ,Payment system ,020206 networking & telecommunications ,Cryptography ,02 engineering and technology ,Payment ,Computer security ,computer.software_genre ,Visual cryptography ,Server ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Payment gateway ,business ,Database transaction ,computer ,media_common - Abstract
In this paper, we describe the design and implementation of a secure payment system based on QR codes. QR codes have been extensively used in recent years since they speed up the payment process and provide users with ultimate convenience. However, as convenient as they may be, QR-based online payment systems are vulnerable to different types of attacks. Therefore, transaction processing needs to be secure enough to protect the integrity and confidentiality of every payment process. Moreover, the online payment system must provide authenticity for both the sender and receiver of each transaction. In this paper, the security of the proposed QR-based system is provided using visual cryptography. The proposed system consists of a mobile application and a payment gateway server that implements visual cryptography. The application provides a simple interface for users to carry out payment transactions in a user-friendly, secure environment.
- Published
- 2021
- Full Text
- View/download PDF
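A (2, 2) secret split in the spirit of visual cryptography can be sketched with XOR shares of the QR bitmap: either share alone is random noise, and combining both recovers the code. Classical visual cryptography uses subpixel expansion and physical stacking rather than XOR, so this is a simplified model of the principle only:

```python
import random

def split(bits):
    """Split a bitmap into a random share and its XOR complement."""
    share1 = [random.randint(0, 1) for _ in bits]
    share2 = [b ^ s for b, s in zip(bits, share1)]
    return share1, share2

def stack(share1, share2):
    """Combine the two shares to recover the original bitmap."""
    return [a ^ b for a, b in zip(share1, share2)]

qr_bits = [1, 0, 1, 1, 0, 0, 1, 0]  # stand-in for a QR code bitmap
s1, s2 = split(qr_bits)
assert stack(s1, s2) == qr_bits      # both shares together recover the code
```

In the proposed architecture, one share could live on the mobile application and the other on the payment gateway server, so neither party alone can reconstruct the payment QR code.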
38. Benchmarking MongoDB multi-document transactions in a sharded cluster
- Author
-
Tushar Panpaliya
- Subjects
Consistency (database systems) ,Atomicity ,Database ,Relational database ,Transaction processing ,Computer science ,Scalability ,Online transaction processing ,computer.software_genre ,NoSQL ,computer ,Database transaction - Abstract
Relational databases like Oracle, MySQL, and Microsoft SQL Server offer transaction processing as an integral part of their design. These databases have been a primary choice among developers for business-critical workloads that need the highest form of consistency. On the other hand, the distributed nature of NoSQL databases makes them suitable for scenarios needing scalability, faster data access, and flexible schema design. Recent developments in the NoSQL database community show that NoSQL databases have started to incorporate transactions in their drivers to let users work on business-critical scenarios without compromising the power of distributed NoSQL features [1]. MongoDB is a leading document store that has supported single-document atomicity since its first version. Sharding is the key technique supporting horizontal scalability in MongoDB. The latest version, MongoDB 4.2, enables multi-document transactions to run on sharded clusters, offering both scalability and multi-document ACID guarantees. Transaction processing is a novel feature in MongoDB, and benchmarking the performance of MongoDB multi-document transactions in sharded clusters can encourage developers to use ACID transactions for business-critical workloads. We have adapted the pytpcc framework to conduct a series of benchmarking experiments aimed at finding the impact of tunable consistency, database size, and design choices on multi-document transactions in MongoDB sharded clusters. We have used TPC's OLTP workload under a variety of experimental settings to measure business throughput. To the best of our understanding, this is the first attempt at benchmarking MongoDB multi-document transactions in a sharded cluster.
- Published
- 2021
- Full Text
- View/download PDF
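The abstract above notes that MongoDB exposes multi-document transactions through its drivers. A minimal sketch of the retry convention those drivers document (transient errors such as cross-shard write conflicts are safe to retry from the top) is shown below; `FakeSession`, `TransientTransactionError`, and the failure counts are stand-ins invented here so the example runs without a MongoDB deployment.

```python
class TransientTransactionError(Exception):
    """Stand-in for a driver error labeled TransientTransactionError."""

class FakeSession:
    """Toy session that fails commit a fixed number of times, then succeeds."""
    def __init__(self, fail_times=0):
        self.fail_times = fail_times
        self.committed = False
    def start_transaction(self):
        self.committed = False
    def commit_transaction(self):
        if self.fail_times > 0:
            self.fail_times -= 1
            raise TransientTransactionError("write conflict, retry")
        self.committed = True
    def abort_transaction(self):
        self.committed = False

def run_in_transaction(session, txn_fn, max_retries=5):
    """Retry loop mirroring the pattern MongoDB drivers recommend for
    multi-document transactions: abort on a transient error and rerun
    the whole transaction callback."""
    for attempt in range(max_retries):
        session.start_transaction()
        try:
            result = txn_fn(session)
            session.commit_transaction()
            return result
        except TransientTransactionError:
            session.abort_transaction()
    raise RuntimeError(f"transaction gave up after {max_retries} attempts")

session = FakeSession(fail_times=2)       # two simulated write conflicts
result = run_in_transaction(session, lambda s: "order-created")
```

With a real driver, the same shape appears as a callback passed to the session's transaction helper; the point is that the business logic must be safe to re-execute.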
39. Analysis of Benchmark Development Times in the Transaction Processing Performance Council and Ideas on How to Reduce It with a Domain Independent Benchmark Evolution Model
- Author
-
Meikel Poess
- Subjects
Development (topology) ,Database ,Computer science ,Transaction processing ,A domain ,Benchmark (computing) ,Verifiable secret sharing ,computer.software_genre ,Dissemination ,computer - Abstract
The Transaction Processing Performance Council (TPC) has a very successful history of disseminating objective and verifiable performance data to the industry. However, it lacks the ability to create new benchmarks in a timely fashion. In its initial years, the TPC defined benchmarks in about two years on average, while recently this number has increased to about eight years.
- Published
- 2021
- Full Text
- View/download PDF
40. A Communication-Induced Checkpointing Algorithm for Consistent-Transaction in Distributed Database Systems
- Author
-
Houssem Mansouri and Al-Sakib Khan Pathan
- Subjects
Scheme (programming language) ,Distributed database ,Computer science ,Transaction processing ,Rollback recovery ,020206 networking & telecommunications ,02 engineering and technology ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,State (computer science) ,Database transaction ,computer ,Algorithm ,Rollback ,computer.programming_language - Abstract
Two well-known techniques for better protection of distributed systems are checkpointing and rollback recovery. While failure protection is often considered a separate issue, it is crucial for building more secure distributed systems. This article proposes a new checkpointing algorithm for saving a consistent-transaction state in distributed databases, ensuring that database management systems are able to recover the state of the database after a failure. The proposed communication-induced algorithm does not hamper normal transaction processing and saves a global consistent-transaction state that records only fully completed transactions. Analysis and experimental results show that the proposed scheme saves a minimum number of forced checkpoints and achieves performance gains compared to alternative approaches.
- Published
- 2021
- Full Text
- View/download PDF
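The core idea of communication-induced checkpointing can be sketched in a few lines: each process piggybacks a checkpoint-interval index on every message, and a receiver takes a forced checkpoint before delivering a message from a later interval. The rule below follows classic index-based protocols (e.g. BCS); the paper's exact condition for transaction consistency may differ, and the class names here are illustrative.

```python
class Process:
    """Toy process model for index-based communication-induced checkpointing."""
    def __init__(self, pid):
        self.pid = pid
        self.sn = 0               # checkpoint-interval index
        self.delivered = []       # transactions delivered since start
        self.checkpoints = []     # (sn, state snapshot) pairs
        self.forced = 0           # count of forced checkpoints

    def basic_checkpoint(self):
        # Scheduled local checkpoint (e.g. after a batch of transactions).
        self.sn += 1
        self.checkpoints.append((self.sn, list(self.delivered)))

    def send(self, msg):
        return (msg, self.sn)     # piggyback the sender's interval index

    def receive(self, packet):
        msg, sender_sn = packet
        # Forced-checkpoint rule: never deliver a message from a later
        # interval into an older local interval, otherwise the recorded
        # global state could miss the transaction's effects on the sender.
        if sender_sn > self.sn:
            self.sn = sender_sn
            self.checkpoints.append((self.sn, list(self.delivered)))
            self.forced += 1
        self.delivered.append(msg)

p1, p2 = Process(1), Process(2)
p1.basic_checkpoint()             # p1 enters interval 1
packet = p1.send("commit T17")
p2.receive(packet)                # p2 must force a checkpoint first
```

Minimizing how often this forced branch fires is exactly the performance goal the abstract describes.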
41. Boosting OLTP Performance Using Write-Back Client-Side Caches
- Author
-
Hieu Nguyen, Haoyu Huang, and Shahram Ghandeharizadeh
- Subjects
Focus (computing) ,Hardware_MEMORYSTRUCTURES ,Boosting (machine learning) ,Computer science ,Transaction processing ,InformationSystems_DATABASEMANAGEMENT ,Client-side ,computer.software_genre ,Server ,Benchmark (computing) ,Operating system ,Online transaction processing ,Cache ,computer - Abstract
We present a comprehensive evaluation of client-side caches to enhance the performance of MySQL for online transaction processing (OLTP) workloads. We focus on the TPC-C benchmark and the write-back policy of the client-side cache. With this policy, the cache processes all application writes and propagates them to its data store asynchronously. We observe that with 1 TPC-C warehouse, the cache enhances the performance of MySQL InnoDB by 70%. The cache scales horizontally as a function of the number of warehouses and servers, boosting TPC-C's tpm-C by as much as 25-fold with 100 warehouses and 20 servers. The main limitation of the cache is its requirement for an abundant amount of memory. We extend the cache with a transaction processing storage manager (Berkeley DB, BDB) to minimize its required memory, quantifying the impact on TPC-C's tpm-C. We detail two variants, Async-BDB and Sync-BDB. The slower variant, Sync-BDB, continues to scale horizontally, boosting TPC-C performance by more than 13-fold with 100 TPC-C warehouses.
- Published
- 2021
- Full Text
- View/download PDF
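The write-back policy described above acknowledges a write as soon as it lands in the cache and drains it to the data store later. A minimal sketch of that policy follows; the class, the dict-backed "store", and the key naming are assumptions for illustration, not the paper's implementation.

```python
from collections import deque

class WriteBackCache:
    """Client-side cache with a write-back policy: writes are acknowledged
    once they hit the cache, and a background flush drains them to the
    (much slower) authoritative data store asynchronously."""
    def __init__(self, store):
        self.store = store           # authoritative data store (a dict here)
        self.cache = {}
        self.dirty = deque()         # keys pending propagation

    def write(self, key, value):
        self.cache[key] = value
        self.dirty.append(key)       # ack immediately; store updated later

    def read(self, key):
        if key in self.cache:        # cache hit: no store round-trip
            return self.cache[key]
        value = self.store.get(key)  # miss: fetch and populate
        self.cache[key] = value
        return value

    def flush(self, batch=10):
        """Asynchronous propagation step (would run on a background thread)."""
        for _ in range(min(batch, len(self.dirty))):
            key = self.dirty.popleft()
            self.store[key] = self.cache[key]

store = {}
c = WriteBackCache(store)
c.write("w1:d1:stock", 42)           # acknowledged before the store sees it
stale = store.get("w1:d1:stock")     # still None: propagation is pending
c.flush()
```

The memory pressure the abstract mentions comes from having to retain every dirty entry until it is flushed, which is what the BDB-backed variants address.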
42. Holistic Evaluation in Multi-Model Databases Benchmarking
- Author
-
Chao Zhang, Jiaheng Lu, Unified DataBase Management System research group / Jiaheng Lu, and Department of Computer Science
- Subjects
SQL ,Information Systems and Management ,Computer science ,Test data generation ,computer.internet_protocol ,02 engineering and technology ,computer.software_genre ,Data modeling ,020204 information systems ,0202 electrical engineering, electronic engineering, information engineering ,computer.programming_language ,Database ,Transaction processing ,SOCIAL COMMERCE ,Multi-model ,Data structure ,113 Computer and information sciences ,JSON ,Parameter curation ,Benchmarking ,Hardware and Architecture ,Graph (abstract data type) ,020201 artificial intelligence & image processing ,Data generation ,computer ,Software ,XML ,Information Systems - Abstract
A multi-model database (MMDB) is designed to support multiple data models against a single, integrated back-end. Examples of data models include document, graph, relational, and key-value. As more and more platforms are developed to deal with multi-model data, it has become crucial to establish a benchmark for evaluating the performance and usability of MMDBs. In this paper, we propose UniBench, a generic multi-model benchmark for a holistic evaluation of state-of-the-art MMDBs. UniBench consists of a set of mixed data models that mimics a social commerce application, covering the JSON, XML, key-value, tabular, and graph models. We propose a three-phase framework to simulate real-life distributions and develop a multi-model data generator to produce the benchmarking data. Furthermore, in order to generate a comprehensive and unbiased query set, we develop an efficient algorithm to solve a new problem called multi-model parameter curation, which judiciously controls query selectivity on diverse models. Finally, extensive experiments based on the proposed benchmark were performed on four representative MMDBs: ArangoDB, OrientDB, AgensGraph, and Spark SQL. We provide a comprehensive analysis with respect to internal data representations, multi-model query and transaction processing, and performance results for distributed execution.
- Published
- 2021
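Multi-model parameter curation, as described above, chooses query parameter values so that selectivity stays controlled across every data model at once. A toy version of that idea is sketched below (the scoring rule, candidate values, and selectivity numbers are my assumptions, not UniBench's algorithm): rank candidates by their worst-case deviation from a target selectivity over all models.

```python
def curate_parameters(candidates, target, k=2):
    """Pick the k parameter values whose selectivity deviates least from
    the target in the *worst* model, so no single model's query becomes
    trivially cheap or pathologically expensive."""
    def deviation(selectivities):
        return max(abs(s - target) for s in selectivities.values())
    ranked = sorted(candidates.items(), key=lambda kv: deviation(kv[1]))
    return [name for name, _ in ranked[:k]]

# Hypothetical per-model selectivities for three candidate predicate values.
candidates = {
    "brand=A": {"graph": 0.10, "json": 0.12, "relational": 0.11},
    "brand=B": {"graph": 0.50, "json": 0.05, "relational": 0.30},  # skewed
    "brand=C": {"graph": 0.09, "json": 0.10, "relational": 0.10},
}
picked = curate_parameters(candidates, target=0.10)
```

A candidate like `brand=B` is rejected even though its average selectivity is near the target, because it behaves very differently per model.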
43. A Literature Review on: Handwritten Character Recognition Using Machine Learning Algorithms
- Author
-
Debnath Bhattacharyya and Sachin S. Shinde
- Subjects
Transaction processing ,business.industry ,Computer science ,Character (computing) ,Deep learning ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Machine learning ,computer.software_genre ,Random forest ,Forms processing ,Support vector machine ,Naive Bayes classifier ,Statistical classification ,ComputingMethodologies_PATTERNRECOGNITION ,Artificial intelligence ,business ,Algorithm ,computer - Abstract
Due to a wide range of applications such as banking, postal services, and digital libraries, handwritten character recognition is particularly important. Mature application areas of handwritten character processing include application-form processing, digitization of old articles, postal code processing, and bank transaction processing, among many others. Handwritten recognition interprets the user's handwritten characters or phrases into a format understandable by the computer. For efficient recognition, many machine learning and deep learning approaches have been proposed. In this paper, we present a thorough study of the phases of handwritten character recognition and of various machine learning and deep learning strategies and methods. The primary objective of this paper is to identify efficient and reliable approaches to handwritten character recognition. For character recognition, several machine learning algorithms have been used, such as support vector machines, Naive Bayes, KNN, Bayes Net, random forest, logistic regression, linear regression, and random tree.
- Published
- 2021
- Full Text
- View/download PDF
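Of the classifiers the survey lists, KNN is the easiest to show end to end. The sketch below is a from-scratch k-nearest-neighbour classifier on toy two-dimensional feature vectors; the features and labels are invented for illustration (real systems would extract pixel or stroke features from scanned characters).

```python
from collections import Counter

def knn_classify(train, query, k=3):
    """Minimal k-nearest-neighbour classifier: label the query with the
    majority class among its k closest training samples."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))  # squared Euclidean
    nearest = sorted(train, key=lambda sample: dist(sample[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Toy training set: two hypothetical features per sample
# (e.g. ink density, aspect ratio) with the character as label.
train = [
    ((0.10, 0.90), "1"), ((0.20, 0.80), "1"), ((0.15, 0.85), "1"),
    ((0.90, 0.20), "8"), ((0.80, 0.10), "8"), ((0.85, 0.15), "8"),
]
pred = knn_classify(train, (0.12, 0.88))
```

KNN needs no training phase at all, which is why surveys like this one often use it as the baseline against SVMs and deep networks.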
44. BEI-TAB: Enabling Secure and Distributed Airport Baggage Tracking with Hybrid Blockchain-Edge System
- Author
-
Fei Wang, Su Yuzhao, Enchang Sun, and Pengbo Si
- Subjects
File system ,Technology ,Article Subject ,Computer Networks and Communications ,business.industry ,Transaction processing ,Computer science ,TK5101-6720 ,computer.software_genre ,Encryption ,Information leakage ,Scalability ,Telecommunication ,Radio-frequency identification ,Electrical and Electronic Engineering ,business ,computer ,Byzantine fault tolerance ,Edge computing ,Information Systems ,Computer network - Abstract
Global air transport carries about 4.3 billion pieces of baggage each year, and up to 56 percent of travellers prefer obtaining real-time baggage tracking information throughout their trip. However, traditional baggage tracking schemes are generally based on optical scanning and centralized storage systems, which suffer from low efficiency and information leakage. In this paper, a blockchain and edge computing-based Internet of Things (IoT) system for tracking of airport baggage (BEI-TAB) is proposed. Through the combination of radio frequency identification (RFID) technology and blockchain, real-time baggage processing information is automatically stored in the blockchain. In addition, we deploy the Interplanetary File System (IPFS) at edge nodes with ciphertext-policy attribute-based encryption (CP-ABE) to store basic baggage information. Only the hash values returned by the IPFS network are kept in the blockchain, enhancing the scalability of the system. Furthermore, a multichannel scheme is designed to realize the physical isolation of data and to rapidly process multiple types of data and business requirements in parallel. To the best of our knowledge, this is the first architecture that integrates RFID, IPFS, and CP-ABE with blockchain technologies to provide secure, decentralized, real-time storage and sharing of baggage tracking data. We have deployed a testbed with both software and hardware to evaluate the proposed system, considering transaction processing time and speed. In addition, based on the characteristics of consortium blockchains, we improved the practical Byzantine fault tolerance (PBFT) consensus protocol by introducing a node credit score mechanism combined with a simplified consistency protocol. Experimental results show that the credit score-based PBFT consensus (CSPBFT) can shorten transaction delay and improve the long-term running efficiency of the system.
- Published
- 2021
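The credit-score mechanism layered on PBFT can be sketched as follows. The scoring rules, initial values, and primary-selection policy below are assumptions for illustration (the abstract does not give CSPBFT's exact parameters): well-behaved replicas gain credit, misbehaving ones lose it faster, and only replicas above a threshold are eligible to act as primary.

```python
class CreditPBFT:
    """Toy credit-score bookkeeping in the spirit of CSPBFT: scores gate
    which replicas may serve as the consensus primary."""
    def __init__(self, nodes, threshold=50):
        self.scores = {n: 60 for n in nodes}   # hypothetical initial credit
        self.threshold = threshold

    def record(self, node, ok):
        # Reward good behaviour mildly, penalize faults harder, clamp to [0, 100].
        delta = 5 if ok else -20
        self.scores[node] = max(0, min(100, self.scores[node] + delta))

    def eligible_primaries(self):
        return sorted(n for n, s in self.scores.items() if s >= self.threshold)

    def pick_primary(self, view):
        # Round-robin over the eligible set instead of over all replicas,
        # so a low-credit node never leads a view.
        elig = self.eligible_primaries()
        return elig[view % len(elig)]

net = CreditPBFT(["n0", "n1", "n2", "n3"])
net.record("n2", ok=False)                     # n2 misbehaves: 60 -> 40
primary = net.pick_primary(view=0)
```

Keeping faulty replicas out of the primary rotation is one way such a scheme can shorten transaction delay, since fewer views are wasted on view changes.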
45. Promize - Blockchain and Self Sovereign Identity Empowered Mobile ATM Platform
- Author
-
Peter Foytik, Xueping Liang, Wee Keong Ng, Sachin Shetty, Eranga Bandara, Kasun De Zoysa, and Nalin Ranasinghe
- Subjects
Information privacy ,Smart contract ,End user ,Transaction processing ,media_common.quotation_subject ,Payment ,Computer security ,computer.software_genre ,Credit card ,Mobile payment ,Business ,Database transaction ,computer ,media_common - Abstract
Banks provide interactive money withdrawal and payment facilities, such as ATM, debit, and credit card systems. With these systems, customers can withdraw money and make payments without visiting a bank. However, traditional ATM, debit, and credit card systems have several weaknesses, such as limited ATM facilities in rural areas, the high initial cost of ATM deployment, potential security issues in ATM systems, and high inter-bank transaction fees. Through this research, we propose a blockchain-based peer-to-peer money transfer system, "Promize", to address these limitations. The Promize platform provides a blockchain-based, low-cost, peer-to-peer money transfer system as an alternative to traditional ATM and debit/credit card systems. Promize provides a self-sovereign-identity-empowered mobile wallet for its end users. With it, users can withdraw money from registered banking authorities (e.g., shops and outlets) or from their friends without going to an ATM. Any user on the Promize platform can act as an ATM, which we introduce as a mobile ATM. The Promize platform provides blockchain-based, low-cost inter-bank transaction processing, thereby reducing high inter-bank transaction fees. The platform guarantees data privacy, confidentiality, non-repudiation, integrity, authenticity, and availability when conducting electronic transactions using the blockchain.
- Published
- 2021
- Full Text
- View/download PDF
46. An Analysis of Transaction Handling in Bitcoin
- Author
-
Befekadu G. Gebraselase, Bjarne E. Helvik, and Yuming Jiang
- Subjects
FOS: Computer and information sciences ,Cryptocurrency ,Computer Science - Cryptography and Security ,Database ,Computer science ,Transaction processing ,computer.software_genre ,Data modeling ,Exploratory data analysis ,Key (cryptography) ,Predictability ,Database transaction ,computer ,Cryptography and Security (cs.CR) ,Block (data storage) - Abstract
Bitcoin has become the leading cryptocurrency system, but the limit on its transaction processing capacity has resulted in increased transaction fees and delayed transaction confirmation. As such, it is pertinent to understand and ideally predict how transactions are handled by Bitcoin, so that a user may adapt their transaction requests and a miner may adjust the block generation strategy and/or the mining pool to join. To this aim, the present paper introduces results from an analysis of transaction handling in Bitcoin. Specifically, the analysis consists of two parts. The first part is an exploratory data analysis revealing key characteristics of Bitcoin transaction handling. The second part is a predictability analysis intended to provide insights into transaction handling, namely (i) transaction confirmation time, (ii) block attributes, and (iii) who has created the block. The results show that some models do reasonably well for (ii), but surprisingly not for (i) or (iii).
- Published
- 2021
47. gStore-C: A Transactional RDF Store with Light-Weight Optimistic Lock
- Author
-
Lei Zou and Zhe Zhang
- Subjects
Record locking ,Database ,Computer science ,Transaction processing ,InformationSystems_INFORMATIONSTORAGEANDRETRIEVAL ,InformationSystems_DATABASEMANAGEMENT ,computer.file_format ,computer.software_genre ,Concurrency control ,Scalability ,Online transaction processing ,SPARQL ,RDF ,Semantic Web ,computer - Abstract
RDF systems are widely applied in many areas such as knowledge bases, the semantic web, and social networks. Traditional RDF systems focus on speeding up SPARQL queries over large RDF data while disregarding the performance of updates and transaction processing. In this demonstration, we propose a new transactional RDF system based on multi-versioning and MVCC. We introduce a lightweight optimistic lock built on atomic variables and operations that provides fine-grained locking and avoids scalability issues. The methods are fully implemented in the open-source RDF system gStore, and it outperforms other state-of-the-art RDF systems on transactional workloads.
- Published
- 2021
- Full Text
- View/download PDF
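The light-weight optimistic scheme described above can be illustrated with version-stamped records: a writer reads a record's version, and at commit time validates and bumps it with a compare-and-swap. The class below is a sketch under that assumption (gStore-C's actual locking details may differ), with a plain Python method standing in for the atomic CAS on the version word.

```python
class VersionedTriple:
    """An RDF triple guarded by a version counter, the basis of a
    light-weight optimistic lock: no lock is held while a transaction
    works, and conflicts surface only at commit."""
    def __init__(self, value):
        self.value = value
        self.version = 0

    def read(self):
        return self.value, self.version

    def compare_and_swap(self, expected_version, new_value):
        # Stands in for an atomic CAS instruction on the version word.
        if self.version != expected_version:
            return False            # another txn committed first: abort/retry
        self.value = new_value
        self.version += 1
        return True

t = VersionedTriple(("s", "p", "o1"))
_, v = t.read()                     # both txns read version 0
t.compare_and_swap(v, ("s", "p", "o2"))        # txn A commits
conflict = t.compare_and_swap(v, ("s", "p", "o3"))  # txn B: stale version
```

Because validation is per record, the scheme gives the fine-grained conflict detection the abstract claims without a central lock table.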
48. Running Oracle Machine Learning with Autonomous Database
- Author
-
Heli Helskyaho, Jean Yu, and Kai Yu
- Subjects
Oracle machine ,Database ,Transaction processing ,Computer science ,InformationSystems_DATABASEMANAGEMENT ,computer.software_genre ,computer ,Data warehouse ,Oracle - Abstract
The last chapter explored Oracle Machine Learning (OML), a built-in environment offered with Autonomous Database. We discussed how to add a database user account or grant an existing database user account the machine learning developer role. This role enables a database user to use the OML environment and Autonomous Database for a machine learning project. Oracle Machine Learning works with both types of Oracle Autonomous Database: Autonomous Data Warehouse (ADW) and Autonomous Transaction Processing (ATP).
- Published
- 2021
- Full Text
- View/download PDF
49. BLOCKEYE: Hunting For DeFi Attacks on Blockchain
- Author
-
Zhiqiang Yang, Chao Liu, Huixuan Zheng, Han Liu, Qian Ren, Hong Lei, and Bin Wang
- Subjects
FOS: Computer and information sciences ,Cryptocurrency ,Service (systems architecture) ,Security analysis ,Computer Science - Cryptography and Security ,Transaction processing ,business.industry ,Attack surface ,Asset (computer security) ,Computer security ,computer.software_genre ,Computer Science - Computers and Society ,Computers and Society (cs.CY) ,Business ,Database transaction ,computer ,Cryptography and Security (cs.CR) ,Financial services - Abstract
Decentralized finance, i.e., DeFi, has become the most popular type of application on many public blockchains (e.g., Ethereum) in recent years. Compared to traditional finance, DeFi allows customers to flexibly participate in diverse blockchain financial services (e.g., lending, borrowing, collateralizing, and exchanging) via smart contracts at a relatively low cost of trust. However, the open nature of DeFi inevitably introduces a large attack surface, which is a severe threat to the security of participants' funds. In this paper, we propose BlockEye, a real-time attack detection system for DeFi projects on the Ethereum blockchain. The key capabilities provided by BlockEye are twofold: (1) potentially vulnerable DeFi projects are identified by an automatic security analysis process, which performs symbolic reasoning on the data flow of important service states, e.g., asset price, and checks whether they can be externally manipulated; (2) a transaction monitor is then installed off-chain for a vulnerable DeFi project. Transactions sent not only to that project but also to associated projects are collected for further security analysis. A potential attack is flagged if a violation is detected on a critical invariant configured in BlockEye, e.g., a benefit achieved within a very short time that is far larger than the cost. We applied BlockEye to several popular DeFi projects and discovered previously unreported potential security attacks. A video of BlockEye is available at https://youtu.be/7DjsWBLdlQU.
- Published
- 2021
- Full Text
- View/download PDF
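The invariant the abstract gives as an example (a benefit far larger than the cost, achieved within a very short time) lends itself to a small sketch. The function below is a toy version of such an off-chain monitor check; the thresholds, transaction fields, and addresses are invented for illustration and are not BlockEye's configuration.

```python
def flag_suspicious(txs, max_ratio=10.0, window=3):
    """Flag addresses whose profit/cost ratio exceeds max_ratio within a
    short block window, a flash-loan-style attack signature. Thresholds
    here are illustrative assumptions."""
    flagged = []
    for tx in txs:
        short = tx["duration_blocks"] <= window
        if short and tx["cost"] > 0 and tx["profit"] / tx["cost"] > max_ratio:
            flagged.append(tx["addr"])
    return flagged

# Hypothetical aggregated transaction summaries collected by the monitor.
txs = [
    {"addr": "0xA", "cost": 1.0, "profit": 2.0,  "duration_blocks": 1},
    {"addr": "0xB", "cost": 0.5, "profit": 80.0, "duration_blocks": 1},
    {"addr": "0xC", "cost": 3.0, "profit": 90.0, "duration_blocks": 50},
]
suspects = flag_suspicious(txs)
```

Note that `0xC` has a high ratio but earns it slowly, so only the fast, outsized gain of `0xB` trips the invariant.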
50. Key Technologies of Distributed Transactional Database Storage Engine
- Author
-
Huang Wei, Chun Zhang, Haiyong Xu, Guo Chen, Jing Zhou, and Xiaoxin Gao
- Subjects
Database ,Distributed database ,business.industry ,Computer science ,Transaction processing ,Information technology ,computer.software_genre ,Consistency (database systems) ,Distributed data store ,The Internet ,business ,Consistent hashing ,computer ,Database transaction - Abstract
With the development of information technology at home and abroad and the rapid popularization of the mobile Internet, the data of business systems has increased exponentially. The processing and storage capabilities of traditional centralized databases are facing great challenges, and distributed databases have become a necessary path for enterprises' digital transformation. At the same time, in order to break the market monopoly of traditional database vendors, there is strong demand in the domestic database market for home-grown distributed database brands. This paper constructs a storage engine for distributed databases by studying consistent-hash-based distributed storage, data sharding, and transaction consistency. On this basis, it further discusses the layered design of distributed storage engines, distributed system scaling, and data rebalancing, providing a theoretical and practical basis for creating mature and stable distributed database products.
- Published
- 2020
- Full Text
- View/download PDF
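Consistent hashing, the sharding technique this paper studies, is worth showing concretely: each node owns many virtual points on a hash ring, a key is stored on the first node clockwise of its hash, and adding a node moves only the keys that land on the new node's points. The sketch below is a generic textbook implementation (node names and virtual-node count are illustrative), not the paper's engine.

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Minimal consistent-hash ring with virtual nodes for data sharding."""
    def __init__(self, nodes, vnodes=64):
        self.ring = []                      # sorted (point, node) pairs
        for node in nodes:
            for i in range(vnodes):
                self.ring.append((self._hash(f"{node}#{i}"), node))
        self.ring.sort()

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def node_for(self, key):
        # First virtual point clockwise of the key's hash (wrapping around).
        h = self._hash(key)
        idx = bisect.bisect(self.ring, (h, "")) % len(self.ring)
        return self.ring[idx][1]

ring3 = ConsistentHashRing(["store-1", "store-2", "store-3"])
ring4 = ConsistentHashRing(["store-1", "store-2", "store-3", "store-4"])
owner = ring3.node_for("order:42")

# Rebalancing cost of adding store-4: only keys whose arc now belongs to
# the new node change owner (roughly 1/4 of them), not all keys.
keys = [f"key-{i}" for i in range(1000)]
moved = sum(1 for k in keys if ring3.node_for(k) != ring4.node_for(k))
```

This bounded data movement is precisely what makes the expansion and rebalancing the paper discusses practical for a distributed storage engine.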