508 results
Search Results
2. Energy‐efficient full‐duplex UAV relaying networks: Trajectory design for channel‐model‐free scenarios.
- Author
-
Qi, Nan, Wang, Wei, Ye, Diliao, Wang, Mei, Tsiftsis, Theodoros A., and Yao, Rugui
- Subjects
GENETIC algorithms ,ENERGY consumption ,ALGORITHMS ,POTENTIAL energy - Abstract
In this paper, we propose an energy‐efficient unmanned aerial vehicle (UAV) relaying network. In this network, the channels between UAVs and ground transceivers are model‐free. A UAV acting as a flying relay explores better channels to assist in efficient data delivery between two ground nodes. The full‐duplex relaying mode is applied for potential energy efficiency (EE) improvements. With the genetic algorithm, we optimize the UAV trajectory for any arbitrary radio map scenario. Numerical results demonstrate that, compared to other schemes (e.g., fixed trajectory/speed policies), the proposed algorithm performs better in terms of EE. Additionally, the impact of self‐interference on average EE is investigated. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
3. Quantum‐based exact pattern matching algorithms for biological sequences.
- Author
-
Soni, Kapil Kumar and Rasool, Akhtar
- Subjects
PATTERN matching ,RANDOM access memory ,ALGORITHMS ,GRAPH algorithms - Abstract
In computational biology, desired patterns are searched in large text databases, and an exact match is preferable. Classical benchmark algorithms obtain competent solutions for pattern matching in O(N) time, whereas quantum algorithm design is based on Grover's method, which completes the search in O(√N) time. This paper briefly explains existing quantum algorithms and defines their processing limitations. Our initial work overcomes existing algorithmic constraints by proposing the quantum‐based combined exact (QBCE) algorithm for the pattern‐matching problem to process exact patterns. Next, quantum random access memory (QRAM) processing is discussed, and based on it, we propose the QRAM processing‐based exact (QPBE) pattern‐matching algorithm. We show that to find all t occurrences of a pattern, the best‐case time complexities of the QBCE and QPBE algorithms are O(t) and O(√N), and the exceptional worst case is bounded by O(t) and O(N). Thus, the proposed quantum algorithms achieve computational speedup. Our work is proved mathematically and validated with simulation, and complexity analysis demonstrates that our quantum algorithms are better than existing pattern‐matching methods. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
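For context on the baselines the abstract above compares against, a naive classical exact matcher that finds all t occurrences runs in worst-case O(N·m) time (linear-time classical benchmarks such as KMP improve this to O(N)). A minimal sketch, with illustrative function and data names:

```python
def find_all_occurrences(text: str, pattern: str) -> list[int]:
    """Naive exact pattern matching: return every index where
    `pattern` occurs in `text`. Worst-case O(N * m) comparisons,
    where N = len(text) and m = len(pattern)."""
    n, m = len(text), len(pattern)
    return [i for i in range(n - m + 1) if text[i:i + m] == pattern]

# Example on a short DNA-like string: t = 3 occurrences
hits = find_all_occurrences("ACGTACGTACG", "ACG")  # [0, 4, 8]
```

The quantum algorithms in the paper aim to beat this classical scan by Grover-style amplitude amplification; the sketch only fixes the baseline semantics (report all t match positions).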
4. Energy efficient watchman based flooding algorithm for IoT‐enabled underwater wireless sensor and actor networks.
- Author
-
Draz, Umar, Ali, Tariq, Ahmad Zafar, Nazir, Saeed Alwadie, Abdullah, Irfan, Muhammad, Yasin, Sana, Ali, Amjad, and Khan Khattak, Muazzam A.
- Subjects
WIRELESS sensor networks ,ALGORITHMS ,END-to-end delay ,ENERGY consumption ,TELECOMMUNICATION systems ,DATA packeting - Abstract
In the task of data routing in Internet of Things enabled underwater wireless sensor and actor networks (I‐UWSANs), where the underwater environment is volatile, providing better transmission and maximizing network communication performance are always challenging. Many network issues such as void holes and network isolation occur because of long routing distances between nodes. Void holes usually occur around the sink because nodes die early due to the high energy consumed to forward packets sent and received from other nodes. These void holes are a major challenge for I‐UWSANs and cause high end‐to‐end delay, data packet loss, and energy consumption. They also affect the data delivery ratio. Hence, this paper presents an energy efficient watchman based flooding algorithm to address void holes. First, the proposed technique is formally verified by the Z‐Eves toolbox to ensure its validity and correctness. Second, simulation is used to evaluate the energy consumption, packet loss, packet delivery ratio, and throughput of the network. The results are compared with well‐known algorithms like energy‐aware scalable reliable and void‐hole mitigation routing and angle based flooding. The extensive results show that the proposed algorithm performs better than the benchmark techniques. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
5. Hybrid genetic‐paired‐permutation algorithm for improved VLSI placement.
- Author
-
Ignatyev, Vladimir V., Kovalev, Andrey V., Spiridonov, Oleg B., Kureychik, Viktor M., Ignatyeva, Alexandra S., and Safronenkova, Irina B.
- Subjects
VERY large scale circuit integration ,EVOLUTIONARY algorithms ,ALGORITHMS ,SEARCH algorithms ,RANDOM forest algorithms ,SYSTEMS development ,GENETIC algorithms - Abstract
This paper addresses very large‐scale integration (VLSI) placement optimization, which is important because of the rapid development of VLSI design technologies. The goal of this study is to develop a hybrid algorithm for VLSI placement. The proposed algorithm includes a sequential combination of a genetic algorithm and an evolutionary algorithm. It is commonly known that local search algorithms, such as random forest, hill climbing, and variable neighborhoods, can be effectively applied to NP‐hard problem‐solving. They provide improved solutions, which are obtained after a global search. The scientific novelty of this research is based on the development of systems, principles, and methods for creating a hybrid (combined) placement algorithm. The principal difference in the proposed algorithm is that it obtains a set of alternative solutions in parallel and then selects the best one. Nonstandard genetic operators, based on problem knowledge, are used in the proposed algorithm. An investigational study shows an objective‐function improvement of 13%. The time complexity of the hybrid placement algorithm is O(N2). [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
6. Algorithm based on Byzantine agreement among decentralized agents (BADA).
- Author
-
Oh, Jintae, Park, Joonyoung, Kim, Youngchang, and Kim, Kiyoung
- Subjects
ALGORITHMS ,EVIDENCE - Abstract
Distributed consensus requires the consent of more than half of the congress to produce irreversible results, and the performance of the consensus algorithm deteriorates with the increase in the number of nodes. This problem can be addressed by delegating the agreement to a few selected nodes. Since the selected nodes must comply with the Byzantine node ratio criteria required by the algorithm, the result selected by any decentralized node cannot be trusted. However, some trusted nodes monopolize the consensus node selection process, thereby breaking decentralization and causing a trilemma. Therefore, a consensus node selection algorithm is required that can construct a congress that can withstand Byzantine faults with the decentralized method. In this paper, an algorithm based on the Byzantine agreement among decentralized agents to facilitate agreement between decentralization nodes is proposed. It selects a group of random consensus nodes per block by applying the proposed proof of nonce algorithm. By controlling the percentage of Byzantine included in the selected nodes, it solves the trilemma when an arbitrary node selects the consensus nodes. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
7. High‐throughput and low‐area implementation of orthogonal matching pursuit algorithm for compressive sensing reconstruction.
- Author
-
Nguyen, Vu Quan, Son, Woo Hyun, Parfieniuk, Marek, Trung, Luong Tran Nhat, and Park, Sang Yoon
- Subjects
ORTHOGONAL matching pursuit ,LOGIC design ,MATRIX multiplications ,ALGORITHMS ,LENGTH measurement - Abstract
Massive computation of the reconstruction algorithm for compressive sensing (CS) has been a major concern for its real‐time application. In this paper, we propose a novel high‐speed architecture for the orthogonal matching pursuit (OMP) algorithm, which is the most frequently used to reconstruct compressively sensed signals. The proposed design offers a very high throughput and includes an innovative pipeline architecture and scheduling algorithm. Least‐squares problem solving, which requires a huge amount of computations in the OMP, is implemented by using systolic arrays with four new processing elements. In addition, a distributed‐arithmetic‐based circuit for matrix multiplication is proposed to counterbalance the area overhead caused by the multi‐stage pipelining. The results of logic synthesis show that the proposed design reconstructs signals nearly 19 times faster while occupying an only 1.06 times larger area than the existing designs for N = 256, M = 64, and m = 16, where N is the number of the original samples, M is the length of the measurement vector, and m is the sparsity level of the signal. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
8. Enhanced technique for Arabic handwriting recognition using deep belief network and a morphological algorithm for solving ligature segmentation.
- Author
-
Essa, Nada, El‐Daydamony, Eman, and Mohamed, Ahmed Atwan
- Subjects
ARABIC language ,WRITING ,IMAGE segmentation ,PATTERN recognition systems ,COMPUTER programming ,ALGORITHMS - Abstract
Arabic handwriting segmentation and recognition is an area of research that has not yet been fully understood. Dealing with Arabic ligature segmentation, where the Arabic characters are connected and unconstrained naturally, is one of the fundamental problems when dealing with the Arabic script. Arabic character‐recognition techniques consider ligatures as new classes in addition to the classes of the Arabic characters. This paper introduces an enhanced technique for Arabic handwriting recognition using the deep belief network (DBN) and a new morphological algorithm for ligature segmentation. There are two main stages for the implementation of this technique. The first stage involves an enhanced technique of the Sari segmentation algorithm, where a new ligature segmentation algorithm is developed. The second stage involves the Arabic character recognition using DBNs and support vector machines (SVMs). The two stages are tested on the IFN/ENIT and HACDB databases, and the results obtained proved the effectiveness of the proposed algorithm compared with other existing systems. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
9. New Min-sum LDPC Decoding Algorithm Using SNR-Considered Adaptive Scaling Factors.
- Author
-
Yongmin Jung, Yunho Jung, Seongjoo Lee, and Jaeseok Kim
- Subjects
SIGNAL-to-noise ratio ,LOW density parity check codes ,EXTRINSIC information transfer charts ,ALGORITHMS ,COMPARATIVE studies - Abstract
This paper proposes a new min-sum algorithm for low-density parity-check decoding. In this paper, we first define the negative and positive effects of the received signal-to-noise ratio (SNR) in the min-sum decoding algorithm. To improve the performance of error correction by considering the negative and positive effects of the received SNR, the proposed algorithm applies adaptive scaling factors not only to extrinsic information but also to a received log-likelihood ratio. We also propose a combined variable and check node architecture to realize the proposed algorithm with low complexity. The simulation results show that the proposed algorithm achieves up to 0.4 dB coding gain with low complexity compared to existing min-sum-based algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
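The scaling the abstract above describes acts on the min-sum check-node update, where a factor below 1 damps min-sum's systematic magnitude overestimate. A minimal sketch with a fixed illustrative factor (the paper adapts the factors to the received SNR, which is not reproduced here):

```python
def check_node_update(llrs, alpha=0.75):
    """Scaled min-sum check-node update: for each edge, the outgoing
    magnitude is alpha times the minimum magnitude over the *other*
    incoming LLRs, and the sign is the product of the other signs."""
    out = []
    for i in range(len(llrs)):
        others = llrs[:i] + llrs[i + 1:]
        sign = 1.0
        for v in others:
            sign *= 1.0 if v >= 0 else -1.0
        out.append(alpha * sign * min(abs(v) for v in others))
    return out

# Three incoming LLRs on a degree-3 check node
msgs = check_node_update([2.0, -4.0, 1.0])  # [-0.75, 0.75, -1.5]
```

A fixed alpha is the classic normalized min-sum; the paper's contribution is choosing such factors adaptively per SNR, for both extrinsic information and the received LLRs.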
10. Optimal Power Control in Cooperative Relay Networks Based on a Differential Game.
- Author
-
Haitao Xu and Xianwei Zhou
- Subjects
DIFFERENTIAL games ,NASH equilibrium ,ENERGY consumption ,ALGORITHMS ,GAME theory ,DIFFERENTIAL equations ,ELECTRIC power transmission - Abstract
In this paper, the optimal power control problem in a cooperative relay network is investigated and a new power control scheme is proposed based on a non-cooperative differential game. Optimal power allocated to each node for a relay is formulated using the Nash equilibrium in this paper, considering both the throughput and energy efficiency together. It is proved that the non-cooperative differential game algorithm is applicable and the optimal power level can be achieved. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
11. Visual-Attention-Aware Progressive RoI Trick Mode Streaming in Interactive Panoramic Video Service.
- Author
-
Joo Myoung Seok and Yonghun Lee
- Subjects
VIDEOS ,STREAMING video & television ,VIDEO compression ,FEASIBILITY studies ,ALGORITHMS - Abstract
In the near future, traditional narrow and fixed viewpoint video services will be replaced by high-quality panorama video services. This paper proposes a visual-attention-aware progressive region of interest (RoI) trick mode streaming service (VA-PRTS) that prioritizes video data to transmit according to the visual attention and transmits prioritized video data progressively. VA-PRTS enables the receiver to speed up the time to display without degrading the perceptual quality. For the proposed VA-PRTS, this paper defines a cutoff visual attention metric algorithm to determine the quality of the encoded video slice based on the capability of visual attention and the progressive streaming method based on the priority of RoI video data. Compared to conventional methods, VA-PRTS increases the bitrate saving by over 57% and decreases the interactive delay by over 66%, while maintaining a level of perceptual video quality. The experiment results show that the proposed VA-PRTS improves the quality of the viewer experience for interactive panoramic video streaming services. The development results show that the VA-PRTS has highly practical real-field feasibility. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
12. Energy‐balance node‐selection algorithm for heterogeneous wireless sensor networks.
- Author
-
Khan, Imran and Singh, Dhananjay
- Subjects
WIRELESS sensor nodes ,WIRELESS sensor networks ,ALGORITHMS ,HETEROGENEOUS catalysis ,SENSOR networks - Abstract
To solve the problem of unbalanced loads and the short network lifetime of heterogeneous wireless sensor networks, this paper proposes a node‐selection algorithm based on energy balance and dynamic adjustment. The spacing and energy of the nodes are calculated according to the proximity to the network nodes and the characteristics of the link structure. The direction factor and the energy‐adjustment factor are introduced to optimize the node‐selection probability in order to realize the dynamic selection of network nodes. On this basis, the target path is selected by the relevance of the nodes, and nodes with insufficient energy values are excluded in real time by the establishment of the node‐selection mechanism, which guarantees the normal operation of the network and a balanced energy consumption. Simulation results show that this algorithm can effectively extend the network lifetime, and it has better stability, higher accuracy, and an enhanced data‐receiving rate in sufficient time. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
13. Dynamic threshold location algorithm based on fingerprinting method.
- Author
-
Ding, Xuxing, Wang, Bingbing, and Wang, Zaijian
- Subjects
HUMAN fingerprints ,K-nearest neighbor classification ,ALGORITHMS ,POSITIONING theory ,DYNAMICAL systems - Abstract
The weighted K‐nearest neighbor (WKNN) algorithm suffers from reduced positioning accuracy because it uses a fixed number of neighbors to estimate the position. In this paper, we propose a dynamic threshold location algorithm (DH‐KNN) to improve positioning accuracy. The proposed algorithm is designed based on a dynamic threshold to determine the number of neighbors and filter out singular reference points (RPs). We compare its performance with the WKNN and Enhanced K‐Nearest Neighbor (EKNN) algorithms in test spaces of networks with dimensions of 20 m × 20 m, 30 m × 30 m, 40 m × 40 m and 50 m × 50 m. Simulation results show that the maximum position accuracy of DH‐KNN improves by 31.1%, and its maximum position error decreases by 23.5%. The results demonstrate that our proposed method achieves better performance than other well‐known algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
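The fixed-neighbor WKNN baseline that the abstract above contrasts with can be sketched as follows; RSSI values and names are illustrative, and the paper's contribution replaces the fixed `k` with a dynamic threshold, which this sketch does not implement:

```python
def wknn_position(fingerprint, rps, k=3):
    """Weighted K-nearest-neighbor positioning: rank reference points
    (RPs) by RSSI-vector distance to the measured fingerprint, then
    average the k nearest RP coordinates weighted by inverse distance."""
    def dist(rssi):
        return sum((a - b) ** 2 for a, b in zip(fingerprint, rssi)) ** 0.5

    nearest = sorted(rps, key=lambda rp: dist(rp["rssi"]))[:k]
    weights = [1.0 / (dist(rp["rssi"]) + 1e-9) for rp in nearest]
    total = sum(weights)
    x = sum(w * rp["pos"][0] for w, rp in zip(weights, nearest)) / total
    y = sum(w * rp["pos"][1] for w, rp in zip(weights, nearest)) / total
    return (x, y)

# Illustrative two-access-point radio map with three reference points
rps = [
    {"rssi": [-40, -70], "pos": (0.0, 0.0)},
    {"rssi": [-42, -68], "pos": (1.0, 0.0)},
    {"rssi": [-60, -50], "pos": (5.0, 5.0)},
]
est = wknn_position([-41, -69], rps, k=2)  # near (0.5, 0.0)
```

Because `k` is fixed, a distant third RP would be pulled into the average when `k=3`; the dynamic threshold in DH-KNN is meant to exclude such singular RPs automatically.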
14. Analysis of Resource Assignment for Directional Multihop Communications in mm-Wave WPANs.
- Author
-
Meejoung Kim, Hong, Seung-Eun, Yongsun Kim, and Jinkyeong Kim
- Subjects
WIRELESS personal area networks ,MILLIMETER waves ,ANTENNA arrays ,NUMERICAL analysis ,ALGORITHMS ,DATA transmission systems - Abstract
This paper presents an analysis of resource assignment for multihop communications in millimeter-wave (mm-wave) wireless personal area networks. The purpose of this paper is to figure out the effect of using directional antennas and relaying devices (DEVs) in communications. The analysis is performed based on a grouping algorithm, categorization of the flows, and the relaying DEV selection policy. Three schemes are compared: direct and relaying concurrent transmission (DRCT), direct concurrent transmission (DCT), and direct nonconcurrent transmission (DNCT). Numerical results show that DRCT is better than DCT and DCT is better than DNCT for any antenna beamwidths under the proposed algorithm and policy. The results also show that using relaying DEVs increases the throughput up to 30% and that there is an optimal beamwidth that maximizes spatial reuse and depends on parameters such as the number of flows in the networks. This analysis can provide guidelines for improving the performance of mm-wave band communications with relaying DEVs. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
15. An Edge-Based Adaptive Method for Removing High-Density Impulsive Noise from an Image While Preserving Edges.
- Author
-
Dong-Ho Lee
- Subjects
ALGORITHMS ,IMAGE analysis ,MEDIAN filters (Electronics) ,ELECTRIC lines ,SIMULATION methods & models ,PROBLEM solving - Abstract
This paper presents an algorithm for removing high-density impulsive noise that generates some serious distortions in edge regions of an image. Although many works have been presented to reduce edge distortions, these existing methods cannot sufficiently restore distorted edges in images with large amounts of impulsive noise. To solve this problem, this paper proposes a method using connected lines extracted from a binarized image, which segments an image into uniform and edge regions. For uniform regions, the existing simple adaptive median filter is applied to remove impulsive noise, and, for edge regions, a prediction filter and a line-weighted median filter using the connected lines are proposed. Simulation results show that the proposed method provides much better performance in restoring distorted edges than existing methods provide. When noise content is more than 20 percent, existing algorithms result in severe edge distortions, while the proposed algorithm can reconstruct edge regions similar to those of the original image. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
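The "simple adaptive median filter" the abstract above applies to uniform regions is a standard component; a minimal per-pixel sketch in pure Python (illustrative only, not the paper's implementation) is:

```python
def adaptive_median(img, x, y, max_win=7):
    """Adaptive median filtering of pixel (x, y): grow the window
    until its median is not an impulse (not the window min or max),
    then replace the pixel only if the pixel itself is an impulse."""
    h, w = len(img), len(img[0])
    win = 3
    while win <= max_win:
        r = win // 2
        vals = sorted(
            img[j][i]
            for j in range(max(0, y - r), min(h, y + r + 1))
            for i in range(max(0, x - r), min(w, x + r + 1))
        )
        med = vals[len(vals) // 2]
        if vals[0] < med < vals[-1]:          # median is not an impulse
            return img[y][x] if vals[0] < img[y][x] < vals[-1] else med
        win += 2                              # enlarge window and retry
    return med

# A 255 impulse inside a nearly flat patch is replaced by the median
patch = [[9, 10, 11], [10, 255, 10], [11, 10, 9]]
fixed = adaptive_median(patch, 1, 1)  # 10
```

The paper's method keeps this filter for uniform regions and switches to prediction and line-weighted median filters along the connected lines it extracts in edge regions.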
16. An Efficient DPA Countermeasure for the EtaT Pairing Algorithm over GF(2^n) Based on Random Value Addition.
- Author
-
Seog Chung Seo, Dong-Guk Han, and Seokhie Hong
- Subjects
STATISTICAL power analysis ,ADDITION (Mathematics) ,ALGORITHMS ,SECURITY management ,HIGH performance processors - Abstract
This paper presents an efficient differential power analysis (DPA) countermeasure for the EtaT pairing algorithm over GF(2^n). The proposed algorithm is based on a random value addition (RVA) mechanism. An RVA-based DPA countermeasure for the EtaT pairing computation over GF(3^n) was proposed in 2008. This paper examines the security of this RVA-based DPA countermeasure and defines the design principles for making the countermeasure more secure. Finally, the paper proposes an efficient RVA-based DPA countermeasure for the secure computation of the EtaT pairing over GF(2^n). The proposed countermeasure not only overcomes the security flaws in the previous RVA-based method but also exhibits enhanced performance. On the 8-bit ATmega128L and 16-bit MSP430 processors, the proposed method achieves almost 39% and 43% performance improvements, respectively, compared with the best-known countermeasure. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
17. Efficient Masked Implementation for SEED Based on Combined Masking.
- Author
-
HeeSeok Kim, Young In Cho, Dooho Choi, Dong-Guk Han, and Seokhie Hong
- Subjects
ALGORITHMS ,CIPHERS ,CODE names ,CRYPTOGRAPHY ,DECODERS & decoding - Abstract
This paper proposes an efficient masking method for the block cipher SEED that is standardized in Korea. The nonlinear parts of SEED consist of two S-boxes and modular additions. However, the masked version of these nonlinear parts requires excessive RAM usage and a large number of operations. Protecting SEED by the general masking method requires 512 bytes of RAM corresponding to masked S-boxes and a large number of operations corresponding to the masked addition. This paper proposes a new-style masked S-box which can reduce the amount of operations of the masking addition process as well as the RAM usage. The proposed masked SEED, equipped with the new-style masked S-box, reduces the RAM requirements to 288 bytes, and it also reduces the processing time by 38% compared with the masked SEED using the general masked S-box. The proposed method also applies to other block ciphers with the same nonlinear operations. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
18. Edge-Preserving Algorithm for Block Artifact Reduction and Its Pipelined Architecture.
- Author
-
Truong Quang Vinh and Young-Chul Kim
- Subjects
ALGORITHMS ,FILTERS & filtration ,PIPELINES ,ARCHITECTURE ,VERY large scale circuit integration ,MANUFACTURING processes - Abstract
This paper presents a new edge-protection algorithm and its very large scale integration (VLSI) architecture for block artifact reduction. Unlike previous approaches using block classification, our algorithm utilizes pixel classification to categorize each pixel into one of two classes, namely smooth region and edge region, which are described by edge-protection maps. Based on these maps, a two-step adaptive filter which includes offset filtering and edge-preserving filtering is used to remove block artifacts. A pipelined VLSI architecture of the proposed deblocking algorithm for HD video processing is also presented in this paper. A memory-reduced architecture for a block buffer is used to optimize memory usage. The architecture of the proposed deblocking filter is verified on an FPGA Cyclone II and implemented using the ANAM 0.25 μm CMOS cell library. Our experimental results show that the proposed algorithm effectively reduces block artifacts while preserving the details. The PSNR performance of our algorithm using pixel classification is better than that of previous algorithms using block classification. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
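The pixel classification step described above can be sketched as a simple gradient threshold that produces an edge-protection map; the threshold and neighborhood used here are illustrative assumptions, not the paper's exact classifier:

```python
def classify_pixels(img, thresh=30):
    """Pixel classification for deblocking: mark a pixel as 'edge' if
    its horizontal or vertical neighbor difference exceeds a threshold,
    else 'smooth'. The boolean map is the edge-protection map that a
    two-step adaptive filter would consult."""
    h, w = len(img), len(img[0])
    edge = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            gx = abs(img[y][x] - img[y][x - 1]) if x > 0 else 0
            gy = abs(img[y][x] - img[y - 1][x]) if y > 0 else 0
            edge[y][x] = max(gx, gy) > thresh
    return edge

# A flat region next to a sharp vertical edge
img = [[10, 10, 200], [10, 10, 200]]
emap = classify_pixels(img)
```

Per-pixel maps like this let the deblocking filter smooth block-boundary artifacts in smooth regions while leaving genuine edges untouched, which is the property the abstract credits for its PSNR gain over block classification.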
19. Adaptive background estimation based on robust statistics.
- Author
-
Shimai, Hiroyuki, Kurita, Takio, Umeyama, Shinji, Tanaka, Masaru, and Mishima, Taketoshi
- Subjects
ALGORITHMS ,TECHNOLOGY ,COMPUTER science ,INFORMATION technology ,MATHEMATICAL statistics - Abstract
In this paper, the authors propose a robust background estimation algorithm, a technique for adaptively estimating the background in response to changes in the environment based on robust statistics, together with a technique for automatically adjusting the adaptation rate in response to those changes. In monitoring systems and the like using stationary cameras, it is necessary to detect moving objects from the captured image series, and to recognize and identify the detected subjects. The simplest and most representative method for moving object detection, used in many fields, is the background subtraction method, which detects moving objects by the difference between the background image and the current scene. This method requires a background image that does not include a moving object. However, it is difficult to acquire such a background image under conditions in which moving objects are continually present in the scene. Also, when the environment changes, the background image must be adaptively revised in response to those changes. Furthermore, in an environment where sections of the background change over time in different ways, it is important to determine the appropriate number of frames needed to estimate the background. In this paper, therefore, the authors propose a method for adaptive background estimation using M-estimation, a well-known robust statistics technique. Also, the authors propose a technique for adjusting the number of frames necessary for background estimation by varying the adaptation rate to the environment using robust template matching. © 2007 Wiley Periodicals, Inc. Syst Comp Jpn, 38(7): 98–108, 2007; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/scj.10612 [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
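The background subtraction baseline the abstract above builds on, with an explicit adaptation rate, can be sketched as a running average over flattened frames (the rate and threshold values are illustrative; the paper replaces this plain average with robust M-estimation):

```python
def update_background(bg, frame, alpha=0.05):
    """Adaptive background estimate: blend each new frame into the
    running background at adaptation rate alpha. A larger alpha adapts
    faster to environment changes but absorbs moving objects sooner."""
    return [(1 - alpha) * b + alpha * f for b, f in zip(bg, frame)]

def foreground_mask(bg, frame, thresh=20):
    """Background subtraction: flag pixels whose difference from the
    background estimate exceeds a threshold."""
    return [abs(f - b) > thresh for b, f in zip(bg, frame)]

bg = [100.0, 100.0, 100.0]            # current background estimate
frame = [102.0, 180.0, 99.0]          # middle pixel: a moving object
mask = foreground_mask(bg, frame)     # only the middle pixel flagged
bg = update_background(bg, frame)
```

Choosing `alpha` is exactly the adaptation-rate problem the paper addresses: the plain mean above is biased by moving objects, which is why an M-estimator and a tuned rate improve the estimate.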
20. Denoising of natural images using FREBAS transformation.
- Author
-
Ito, Satoshi and Yamada, Yoshifumi
- Subjects
ALGORITHMS ,STATISTICAL correlation ,NOISE ,BILINEAR transformation method ,LEAST squares ,MATHEMATICAL statistics ,IMAGE processing ,COMPUTER vision - Abstract
If two algorithms of Fresnel transformation are used with suitable parameters, an expansion of images modeled after multiresolution analysis (FREBAS transformation) is possible. This paper proposes a denoising technique for natural images using the FREBAS transformation. A constrained least-squares filter applied in the FREBAS-transformed space removes noise superimposed on signals very well; however, noise with specific patterns may remain in the images when the processed images have a small SN ratio. In this paper we focus on this noise, which appears isolated in the FREBAS-transformed space, and propose a new denoising technique that introduces nonlinear noise processing in the FREBAS-transformed space. Simulation experiments showed that the noise with specific patterns was favorably removed. In addition, we applied the proposed technique to natural images containing noise and compared it with other denoising techniques in terms of SNR improvement, image degradation, and residual noise. As a result, the proposed technique could remove noise while limiting image degradation, and it also obtained significant improvements in the average SNR. In particular, we confirmed that the proposed technique had excellent denoising performance for images with large variations in amplitude. © 2007 Wiley Periodicals, Inc. Syst Comp Jpn, 38(3): 1–11, 2007; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/scj.20675 [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
21. A pattern formation algorithm for a set of autonomous distributed robots with agreement on orientation along one axis.
- Author
-
Kasuya, Masao, Ito, Nobuhiro, Inuzuka, Nobuhiro, and Wada, Koichi
- Subjects
PATTERN formation (Physical sciences) ,CHAOS theory ,SYSTEMS theory ,AUTONOMOUS robots ,AUTOMATION ,ALGORITHMS - Abstract
A set of autonomous distributed robots is a group of robots that each function independently but cooperatively. In this paper we treat the pattern formation problem for a set of autonomous distributed robots. The pattern formation problem involves having a set of robots take up a certain pattern (a formation) given that their initial states place them in arbitrary locations. The robots used in this research function in an asynchronous manner, iterating through a cycle of four modes of behavior: wait, observe, compute, and move. The robots cannot record information relating to the previous cycle (the results of observations or computations). In addition, the robots cannot be distinguished by their external appearance, and all of the robots execute the same algorithm. In Ref. 1 it was shown that a pattern formation algorithm could be constructed for an odd number of robots with agreement on one axis if an observational constraint assumption was fulfilled. In this paper we show that a pattern formation algorithm can be constructed even when this observational constraint assumption does not hold. © 2006 Wiley Periodicals, Inc. Syst Comp Jpn, 37(10): 89–100, 2006; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/scj.20331 [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
22. On dynamic generation of business protocol in autonomous Web services.
- Author
-
Oya, Makoto, Kinoshita, Masahiro, and Kakazu, Yukinori
- Subjects
ONLINE information services ,USER interfaces ,COMMERCE ,ALGORITHMS ,COMPUTER network protocols - Abstract
One of the issues with autonomous Web services is that precise business protocols across systems are not always predefined. A likely solution to this problem is to dynamically generate business protocols by matching the external interface definitions and related information exposed by each system. Current Web services have a feature called a portType that describes the external interface of a system. This paper proposes adding a new concept called "behavior pattern" to the portType, and then provides an algorithm to dynamically generate business protocols. The algorithm compares the portTypes of the concerned system and the opposite system, verifies the possibility of their interaction, and automatically reduces the behavior patterns to the executable range. The paper also evaluates this algorithm by applying it to some use cases and shows that the method provides useful early results for the realization of autonomous Web services. © 2006 Wiley Periodicals, Inc. Syst Comp Jpn, 37(2): 37–45, 2005; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/scj.20352 [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
23. Image processing algorithm of computer-aided diagnosis in lung cancer screening by CT.
- Author
-
Yamamoto, Shinji
- Subjects
IMAGE processing ,ALGORITHMS ,LUNG cancer ,TOMOGRAPHY ,MEDICAL screening ,TISSUES ,TUMORS - Abstract
This paper presents a systematic overview of our work for more than a decade on image-processing algorithms for lung cancer screening by CT. Most of the images handled at the mass-screening level are normal cases, and the rate of detection of lung tumors is no more than a few percent. In order to detect such rare tumors with high accuracy, there must be a technique that can correctly detect small changes in the image. On the other hand, the problem can arise that a large number of normal tissues are incorrectly detected. This paper outlines the image-processing algorithm developed by us to correctly detect tumors while reducing overdetection. In particular, we describe in detail the principles and properties of quoit filtering, which was devised for accurate detection of tumors in the first stage. © 2005 Wiley Periodicals, Inc. Syst Comp Jpn, 36(7): 40–53, 2005; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/scj.20156 [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
24. Application of the parameter-free genetic algorithm to the fixed channel assignment problem.
- Author
-
Matsui, Shouichi, Watanabe, Isamu, and Tokoro, Ken-Ichi
- Subjects
GENETIC algorithms ,COMBINATORIAL optimization ,ALGORITHMS ,ALGEBRA ,FOUNDATIONS of arithmetic ,COMBINATORICS - Abstract
This paper is concerned with the application of the parameter-free genetic algorithm (PfGA) proposed by Sawai and colleagues, and of the parallel distributed PfGA, to the fixed channel assignment problem. The results of the investigation are presented. The PfGA does not include parameters such as the population size, the crossover rate, and the mutation rate, which have been indispensable in the conventional genetic algorithm (GA). This eliminates parameter tuning for each problem, which is a very useful property in practice. Although applications of the PfGA to function optimization in 5 to 10 dimensions have been reported, there has been no report on its performance when applied to combinatorial optimization with a larger search space, and it remains a practically important problem to examine the performance in such a case. This paper considers the application of the PfGA to the fixed channel assignment problem, which is a combinatorial optimization problem. The sequence representation by random keys is investigated, and it is shown that the PfGA and the parallel distributed PfGA achieve as high a performance on the fixed channel assignment problem as on the function optimization problem. © 2005 Wiley Periodicals, Inc. Syst Comp Jpn, 36(4): 71–81, 2005; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/scj.10328 [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
25. Parallel matrix-multiplication algorithm for distributed parallel computers.
- Author
-
Hattori, Masamitsu, Ito, Nobuhiro, Chen, Wei, and Wada, Koichi
- Subjects
MATRICES (Mathematics) ,ALGORITHMS ,ALGEBRA ,FOUNDATIONS of arithmetic ,PARALLEL computers ,COMPUTERS - Abstract
This paper proposes a matrix-multiplication algorithm suited to the distributed computer environment. The system is implemented using PVM and the execution time is evaluated. One of the conventional methods implemented using PVM is the systolic method. However, this method is limited with respect to the matrix size that can be calculated and the number of computers that can be used in the calculation. This paper first extends the algorithm so that these limitations are lifted. Then, an algorithm is proposed which reduces the communication complexity, although the computational complexity is slightly increased. These two algorithms are implemented using PVM, and the execution time is measured. In both algorithms, an acceleration rate exceeding the number of computers is obtained. This superlinear speedup is found to be due to the cache memory, and the speed improvement obtained by using cache memory is investigated. The optimal number of matrix divisions from the viewpoint of cache memory size is also discussed. © 2005 Wiley Periodicals, Inc. Syst Comp Jpn, 36(4): 48–59, 2005; Published online in Wiley InterScience (
www.interscience.wiley.com ). DOI 10.1002/scj.10551 [ABSTRACT FROM AUTHOR]- Published
- 2005
- Full Text
- View/download PDF
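The cache effect credited above for an acceleration rate exceeding the number of computers comes from multiplying in submatrix blocks, so that the operands of each partial product stay resident in fast memory. A minimal single-machine sketch of blocked multiplication (the block size and function name are illustrative; this is not the paper's PVM implementation):

```python
def blocked_matmul(A, B, bs=2):
    """Multiply square matrices A and B (lists of lists) in bs x bs blocks.

    Working on one block of the result at a time reuses each loaded block of
    A and B many times before eviction, which is the cache effect the paper
    investigates when choosing the number of matrix divisions.
    """
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, bs):          # block row of C
        for jj in range(0, n, bs):      # block column of C
            for kk in range(0, n, bs):  # block index of the inner product
                for i in range(ii, min(ii + bs, n)):
                    for k in range(kk, min(kk + bs, n)):
                        a = A[i][k]
                        for j in range(jj, min(jj + bs, n)):
                            C[i][j] += a * B[k][j]
    return C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
assert blocked_matmul(A, B, bs=1) == [[19.0, 22.0], [43.0, 50.0]]
```

The block size is the tunable that the paper's "optimal number of matrix divisions" discussion corresponds to: it should be chosen so that three bs x bs blocks fit in cache at once.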
26. Information geometry for turbo decoding.
- Author
-
Ikeda, Shiro, Tanaka, Toshiyuki, and Amari, Shun-ichi
- Subjects
INFORMATION theory ,ALGORITHMS ,STATISTICAL physics ,MATHEMATICAL statistics ,ARTIFICIAL intelligence ,PHYSICS - Abstract
Turbo codes are known as a class of error-correcting codes which achieve high error-correcting performance with an efficient decoding algorithm. Characteristics of the iterative decoding algorithm have been studied in detail through a variety of numerical experiments, but theoretical results are still insufficient. In this paper, this issue is addressed from the information geometrical viewpoint. As a result, a mathematical framework for analyzing turbo codes is obtained, and some of the fundamental properties of turbo decoding are elucidated based on this framework. Recently, it has been pointed out that the turbo decoding algorithm is related to the decoding algorithm of low-density parity check codes, the computation method of the Bethe approximation in statistical physics, and the belief propagation algorithm of Bayesian networks. The mathematical framework given in the present paper can also be used to analyze these wide classes of iterative computation methods, and hence represents a new analysis tool. © 2004 Wiley Periodicals, Inc. Syst Comp Jpn, 36(1): 79–87, 2005; Published online in Wiley InterScience (
www.interscience.wiley.com ). DOI 10.1002/scj.10359 [ABSTRACT FROM AUTHOR]- Published
- 2005
- Full Text
- View/download PDF
27. SVM that maximizes the margin in the input space.
- Author
-
Akaho, Shotaro
- Subjects
HYPERSPACE ,HILBERT space ,QUADRATIC programming ,NONLINEAR programming ,MATHEMATICAL optimization ,ALGORITHMS - Abstract
While the original SVM seeks the discriminative plane that maximizes the margin in the feature space (the Hilbert space), this paper investigates a framework that maximizes the margin in the input space. This framework is considered effective for cases in which a priori knowledge is embedded as input space estimates. In the approach taken in this paper, approximating the margin in the input space by Taylor expansion is essential. The resulting algorithm is a kind of alternating optimization comprising a step that obtains projections of sample points onto the discriminative plane by Newton's method and a step that determines the parameters of the discriminative plane by convex quadratic programming. The algorithm converges stably to a locally optimal solution under comparatively lenient conditions. In addition, the optimization problem to be solved includes the original SVM as a special case. However, since the amount of computation increases with the dimension of the input space, this paper also proposes a simplified algorithm that combines the original SVM with an abridged version of the proposed algorithm. © 2004 Wiley Periodicals, Inc. Syst Comp Jpn, 35(14): 78–86, 2004; Published online in Wiley InterScience (
www.interscience.wiley.com ). DOI 10.1002/scj.10631 [ABSTRACT FROM AUTHOR]- Published
- 2004
- Full Text
- View/download PDF
28. Generating derangements by interchanging at most four elements.
- Author
-
Mikawa, Kenji and Semba, Ichiro
- Subjects
SCIENCE ,PERMUTATIONS ,ALGORITHMS ,SEQUENTIAL analysis - Abstract
For the set S = {1,2,…,n}, a permutation π:S → S such that π(i)≠i is called a complete permutation, and the string π(1)π(2)…π(n) is called a derangement. In this paper, the authors consider the generation of a list that contains one instance of every derangement. Sequential generation, which produces each string from the previous one by interchanging characters, is more efficient when fewer characters are interchanged. If 𝒫(S) denotes the list of derangements and deg(𝒫(S)) denotes the maximum number of characters that are interchanged, no list of derangements such that deg(𝒫(S)) = O(1) is known to date. In this paper, the authors use the fact that a derangement can be decomposed into mutually disjoint cycles to show that there exists no list of derangements such that deg(𝒫(S)) = 2. In the second half of this paper, they present an algorithm for generating a list for which deg(𝒫(S)) = 4 for |S|≥4. With this algorithm, the average number of characters that are interchanged is roughly 2.980. © 2004 Wiley Periodicals, Inc. Syst Comp Jpn, 35(12): 25–31, 2004; Published online in Wiley InterScience (
www.interscience.wiley.com ). DOI 10.1002/scj.10531 [ABSTRACT FROM AUTHOR]- Published
- 2004
- Full Text
- View/download PDF
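The two facts the abstract above relies on, that a derangement has no fixed point and therefore decomposes into disjoint cycles of length at least 2, are easy to verify by brute force on small sets. This sketch illustrates the definitions only; it is not the authors' constant-interchange generation algorithm:

```python
from itertools import permutations

def is_derangement(p):
    """A permutation p of {0,...,n-1} (as a tuple) is a derangement iff p[i] != i."""
    return all(p[i] != i for i in range(len(p)))

def derangements(n):
    """Brute-force list of all derangements of {0, ..., n-1}."""
    return [p for p in permutations(range(n)) if is_derangement(p)]

def cycle_lengths(p):
    """Lengths of the mutually disjoint cycles into which p decomposes."""
    seen, lengths = set(), []
    for start in range(len(p)):
        if start in seen:
            continue
        length, i = 0, start
        while i not in seen:
            seen.add(i)
            i = p[i]
            length += 1
        lengths.append(length)
    return lengths

# No fixed points means no 1-cycles: every cycle has length >= 2.
assert all(min(cycle_lengths(p)) >= 2 for p in derangements(5))
# Derangement counts D(4) = 9 and D(5) = 44 match the known recurrence.
assert (len(derangements(4)), len(derangements(5))) == (9, 44)
```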
29. Robust object detection using a Radial Reach Filter (RRF).
- Author
-
Satoh, Yutaka, Kaneko, Shun'ichi, Niwa, Yoshinori, and Yamamoto, Kazuhiko
- Subjects
ALGORITHMS ,ROBUST statistics ,LIGHTING ,LIGHT sources ,ENVIRONMENTAL monitoring - Abstract
In this paper the authors report on a new algorithm used to separate an object from its background using a background image. In the past, simple background subtraction has been used because of its low processing costs and ease of implementation. However, because this method depends solely on brightness patterns in the object and shadows, it has problems such as an inability to deal with poor lighting conditions and an inability to detect regions in which the brightness levels of the object and shadows are similar. In order to resolve these problems, in this paper the authors propose a new filter process called a Radial Reach Filter (RRF). The authors define a new statistic called a Radial Reach Correlation (RRC) used to determine on a pixel-by-pixel basis the similar and dissimilar areas between a background image and a current scene. They then evaluate the local texture at pixel-level resolution while reducing the effects of variations in lighting. In addition, by introducing a mechanism to adjust the defined region adaptively based on local characteristics of the background image, the authors are able to work with various shadows and objects in the scene. The authors perform a theoretical evaluation and experiments using real images to demonstrate the validity of their proposed method. © 2004 Wiley Periodicals, Inc. Syst Comp Jpn, 35(10): 63–73, 2004; Published online in Wiley InterScience (
www.interscience.wiley.com ). DOI 10.1002/scj.10590 [ABSTRACT FROM AUTHOR]- Published
- 2004
- Full Text
- View/download PDF
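For reference, the simple background subtraction baseline that the RRF paper critiques can be sketched in a few lines. The threshold value and names here are illustrative; the paper's RRC statistic replaces this per-pixel brightness test with a sign comparison along radial reaches:

```python
def background_subtract(background, frame, threshold=30):
    """Per-pixel brightness difference against a stored background image.

    This is the low-cost baseline method: a pixel is flagged as foreground
    whenever |frame - background| exceeds a fixed threshold. Because it
    depends only on absolute brightness, it fails under lighting changes
    and where object and background brightness happen to be similar,
    which is exactly what motivates the Radial Reach Filter.
    """
    return [[abs(f - b) > threshold for f, b in zip(frow, brow)]
            for frow, brow in zip(frame, background)]

bg    = [[100, 100], [100, 100]]
frame = [[100, 180], [100, 100]]   # one bright object pixel
assert background_subtract(bg, frame) == [[False, True], [False, False]]
```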
30. Multiroute planning for aircraft using genetic algorithms.
- Author
-
Tanaka, Masaharu, Mizoguchi, Masanobu, Tange, Toshiaki, Takami, Isao, and Suzuki, Shinji
- Subjects
GENETIC algorithms ,ALGORITHMS ,MATHEMATICAL models ,COMPUTER simulation ,SIMULATION methods & models ,COMPUTER systems - Abstract
This paper proposes a scheme for generating multiple flight routes using genetic algorithms (GAs). When multiple routes are obtained simultaneously by GAs, similar routes may be generated, raising the problem of collisions or near-misses caused by interference between routes (routes that come close to or intersect one another). The proposed scheme searches for multiple flight routes without interference by correcting each route's fitness on the basis of a neighborhood density, which maintains variety, and an interference degree, which penalizes routes that come close to or intersect one another. In addition, this paper proposes a priority candidate preserving strategy that preserves multiple superior individuals in each generation in order to improve search performance. Computer simulation results confirm that the proposed scheme generates multiple flight routes without interference, indicating its efficacy. © 2004 Wiley Periodicals, Inc. Syst Comp Jpn, 35(2): 39–48, 2004; Published online in Wiley InterScience (
www.interscience.wiley.com ). DOI 10.1002/scj.10428 [ABSTRACT FROM AUTHOR]- Published
- 2004
- Full Text
- View/download PDF
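The density-based fitness correction mentioned above can be illustrated with a generic fitness-sharing rule: dividing raw fitness by a crowding count is the classic GA device for keeping a population diverse. The radius, penalty form, and point-based route representation here are standard-practice assumptions, not the paper's exact formula (which also includes an interference term):

```python
def shared_fitness(fitness, routes, distance, radius=1.0):
    """Penalize individuals that have many neighbors within `radius`.

    Each raw fitness value is divided by the number of routes inside its
    neighborhood, so crowded near-duplicates lose fitness while isolated
    routes keep theirs, pushing the GA toward a varied set of routes.
    """
    corrected = []
    for i, fi in enumerate(fitness):
        crowd = sum(1 for j in range(len(routes))
                    if distance(routes[i], routes[j]) < radius)
        corrected.append(fi / crowd)   # crowd >= 1: each route counts itself
    return corrected

# Routes reduced to points for illustration; two are crowded, one isolated.
routes = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0)]
dist = lambda a, b: ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
assert shared_fitness([10.0, 10.0, 10.0], routes, dist) == [5.0, 5.0, 10.0]
```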
31. A method of generating seamless texture using GA.
- Author
-
Yamada, Tatsumi, Hashimoto, Akihiko, Adachi, Fumio, Shimohara, Katsunori, and Tokunaga, Yukio
- Subjects
GENETIC algorithms ,ALGORITHMS ,COMBINATORIAL optimization ,COMPUTER graphics ,IMAGE processing ,DIGITAL image processing - Abstract
This paper proposes a method of generating seamless textures by means of a genetic algorithm (GA). When the texture of a large region is built up from textures in regions of finite size, textural discontinuities usually arise at the boundaries between these regions. A seamless texture is one in which such discontinuities do not occur. A fractal method has been proposed for the generation of seamless textures, but with that method it is difficult to predict what seamless texture will be generated, and pattern generation cannot be controlled so as to produce a texture similar to one already generated. This paper addresses the problem by applying the GA to texture generation, yielding new seamless textures that resemble previously generated ones. The method can also generate multiple seamless textures, so that a texture with a user-selected pattern can be chosen from simultaneously displayed candidates. In addition, to avoid the monotony caused by tiling textures of a single kind, a method is proposed for generating seamless textures that share the same boundary but have different internal patterns (group textures). © 2004 Wiley Periodicals, Inc. Syst Comp Jpn, 35(2): 91–99, 2004; Published online in Wiley InterScience (
www.interscience.wiley.com ). DOI 10.1002/scj.1229 [ABSTRACT FROM AUTHOR]- Published
- 2004
- Full Text
- View/download PDF
32. Dictation of multiparty conversation considering speaker individuality and turn taking.
- Author
-
Murai, Noriyuki and Kobayashi, Tetsunori
- Subjects
INTERPERSONAL communication ,TURN-taking (Communication) ,SPEECH audiometry ,SPEECH perception ,ALGORITHMS ,LANGUAGE & languages ,ORAL communication - Abstract
This paper discusses an algorithm that recognizes multiparty speech with complex turn taking. In recognition of the conversation of multiple speakers, it is necessary to know not only what is spoken, as in the conventional system, but also who spoke up to what point. The purpose of this paper is to find a method to solve this problem. The representation of the likelihood of turn taking is included in the language model in the continuous speech recognition system, and the speech properties of each speaker are represented by a statistical model. Using this approach, two algorithms are proposed that estimate simultaneously and in parallel the speaker and the speech content. Recognition experiments using conversation in TV sports news show that the proposed method can correct a maximum of 29.5% of the errors in the recognition of speech content and 93.0% of the errors in recognition of the speaker. © 2003 Wiley Periodicals, Inc. Syst Comp Jpn, 34(13): 103–111, 2003; Published online in Wiley InterScience (
www.interscience.wiley.com ). DOI 10.1002/scj.1223 [ABSTRACT FROM AUTHOR]- Published
- 2003
- Full Text
- View/download PDF
33. Parallel algorithms for selection on the BSP and BSP* models.
- Author
-
Ishimizu, Takashi, Fujiwara, Akihiro, Inoue, Michiko, Masuzawa, Toshimitsu, and Fujiwara, Hideo
- Subjects
PARALLEL algorithms ,ALGORITHMS ,PARALLEL processing ,ELECTRONIC data processing ,COMPUTER programming ,COMPUTATIONAL complexity - Abstract
In this paper, we propose parallel algorithms to solve the selection problem on the Bulk-Synchronous Parallel (BSP) model and the BSP* model. The BSP and BSP* models are recently proposed parallel computation models. They can represent the communication cost, a vital element in current parallel computation, in terms of the synchronization period L, the reciprocal g of the communication network bandwidth, and the packet size B. For selecting from n data with p processors and any integer d (1 ≤ d ≤ log n), we propose a parallel algorithm with internal computation time O((n/p) + d log p log log n + L(log p log log n)/(log d)) and communication time O(g(n/p) + (gd + L)(log p log log n)/(log d)) on the BSP model, and a parallel algorithm with the same internal computation time and communication time O(g((n/(pB)) + (n/p)^{1/7}(log p)^{6/7}) + (gd + L)(log p log log n)/(log d)) on the BSP* model. © 2002 Wiley Periodicals, Inc. Syst Comp Jpn, 33(12): 97–107, 2002; Published online in Wiley InterScience (www.interscience.wiley.com ). DOI 10.1002/scj.1170 [ABSTRACT FROM AUTHOR]- Published
- 2002
- Full Text
- View/download PDF
34. A color image compression scheme based on the adaptive orthogonalized transform, accelerated by improvement of the dictionary.
- Author
-
Miura, Takashi
- Subjects
IMAGE compression ,COLOR ,ALGORITHMS ,MATHEMATICAL transformations ,VECTOR analysis ,APPROXIMATION theory - Abstract
The paper proposes an improvement of coding performance for a color image compression scheme based on the adaptive orthogonalized transform (AOT). The coder is accelerated by reducing the dictionary size to 25% while maintaining the approximation gain of the LBG algorithm, exploiting the fact that the AOT increases the approximation gain of the dictionary. The effectiveness of the proposal is confirmed by numerical experiments. © 2002 Wiley Periodicals, Inc. Syst Comp Jpn, 33(8): 1–8, 2002; Published online in Wiley InterScience (
www.interscience.wiley.com ). DOI 10.1002/scj.10218 [ABSTRACT FROM AUTHOR]- Published
- 2002
- Full Text
- View/download PDF
35. A nonscan DFT method for controllers to provide complete fault efficiency.
- Author
-
Ohtake, Satoshi, Masuzawa, Toshimitsu, and Fujiwara, Hideo
- Subjects
INTEGRATED circuits ,TESTING equipment ,ELECTRIC network synthesis ,ALGORITHMS ,ELECTRIC controllers ,ELECTRONIC circuits - Abstract
In this paper, we propose a method for non-scan design-for-testability of controllers logically synthesized from a finite state machine (FSM), which provides complete fault efficiency. In the previously used scan method, a test pattern for the combinational circuits of a controller was generated to achieve complete fault efficiency and was applied by using scan flip-flops. However, in the scan method, the test sequence becomes large and various problems prevent the test from being performed at the actual operating speed. In this paper, the test pattern for the combinational circuits of the controller is applied not through scan flip-flops but through the state transitions of the FSM. The proposed method allows the test to be performed at the actual operating speed of the circuit, thus reducing the testing time compared to the conventional methods. Moreover, experimental results using benchmarks show that the area overhead is small. © 2002 Wiley Periodicals, Inc. Syst Comp Jpn, 33(5): 64–75, 2002; Published online in Wiley InterScience (
www.interscience.wiley.com ). DOI 10.1002/scj.1128 [ABSTRACT FROM AUTHOR]- Published
- 2002
- Full Text
- View/download PDF
36. Tree-based clustering for gaussian mixture HMMs.
- Author
-
Kato, Tsuneo, Kuroiwa, Shingo, Shimizu, Tohru, and Higuchi, Norio
- Subjects
ALGORITHMS ,AUTOMATIC speech recognition ,SPEECH processing systems ,GAUSSIAN distribution ,MIXTURE distributions (Probability theory) ,ACOUSTIC models - Abstract
Tree-based clustering is an effective method for sharing the states of an HMM, in which clustering is applied to a set of context-dependent models with the phoneme context as the splitting condition. In past papers, the method has been restricted to the single Gaussian HMM. The single Gaussian HMM, however, is insufficient for representing the acoustic features, and an adequate topology (sharing of HMM states) will not necessarily be realized. Furthermore, in order to arrive at a state-sharing model with the desired number of mixtures, the process of doubling the number of mixtures and the embedded training must be iterated after the tree-based clustering, which increases the training time. Consequently, this paper proposes a method in which the tree-based clustering algorithm for the single Gaussian HMM is extended to the clustering of the mixed Gaussian HMM. The proposed method reduces the training time to approximately one-third that of the conventional method based on the single Gaussian HMM. A recognition experiment using a phone typewriter and a continuous word recognition experiment demonstrate that the recognition rate is improved by one to two points. © 2002 Wiley Periodicals, Inc. Syst Comp Jpn, 33(4): 40–49, 2002; Published online in Wiley InterScience (
www.interscience.wiley.com ). DOI 10.1002/scj.1118 [ABSTRACT FROM AUTHOR]- Published
- 2002
- Full Text
- View/download PDF
37. Speech Recognition Using Acoustic Similarity-Based Primitives.
- Author
-
Hayashi, Takafumi, Mori, Hiroki, Suzuki, Motoyuki, Makino, Shozo, and Aso, Hirotomo
- Subjects
SPEECH perception ,AUDITORY perception ,PSYCHOLINGUISTICS ,ALGORITHMS ,WORD recognition ,VOCABULARY - Abstract
This paper proposes an algorithm that automatically acquires a new recognition primitive by splitting the training sample so that the likelihood of the whole training sample given the model is maximized. The primitive obtained by the proposed method integrates several temporally contiguous phonemes and is called an acoustic similarity-based primitive (ASP). The algorithm proposed in this paper performs ASP acquisition and ASP modeling by HMnet in parallel, optimizing the two simultaneously. In phoneme recognition experiments for six specified speakers, the recognition rate was improved by approximately 3.5% on average, compared to the conventional phoneme HMnet, by increasing the number of candidates by approximately 14.6% on average. A method is also proposed in which an ASP is built into a large-vocabulary continuous speech recognition system. In a recognition experiment with eight specified speakers, the accuracy of word recognition was improved by approximately 2.6%, compared to the phoneme HMnet. © 2001 Scripta Technica, Syst Comp Jpn, 33(1): 8–17, 2002 [ABSTRACT FROM AUTHOR]
- Published
- 2002
- Full Text
- View/download PDF
38. A causal broadcast protocol for distributed mobile systems.
- Author
-
Ohori, Chikara, Inoue, Michiko, Masuzawa, Toshimitsu, and Fujiwara, Hideo
- Subjects
BROADCASTING industry ,MOBILE communication systems ,CAUSALITY (Physics) ,ALGORITHMS ,TELECOMMUNICATION - Abstract
In this paper, we propose a causal broadcast protocol for distributed mobile systems. Since mobile hosts are in general an unspecified large number, and their computational and communication capabilities are considerably inferior to those of static hosts, an algorithm is desired in which the computation and traffic handled by the mobile hosts are small and, if possible, the complexity does not depend on the number of mobile hosts. An algorithm is also desired in which the processing for handoff accompanying movements of the mobile hosts is small. This paper proposes an efficient causal broadcast protocol that exploits the hierarchical structure of mobile hosts and mobile support stations. In the proposed technique, the message overhead (information added to each message) does not depend on the number of mobile hosts. Moreover, the message overhead is smaller than in known techniques, and the technique excels in that both the delay for handling handoff when mobile terminals move and the number of messages are small. © 2001 Scripta Technica, Syst Comp Jpn, 32(3): 65–75, 2001 [ABSTRACT FROM AUTHOR]
- Published
- 2001
- Full Text
- View/download PDF
39. Global motion segmentation method and application to progressive scanning conversion.
- Author
-
Komatsu, Takashi and Saito, Takahiro
- Subjects
SCANNING systems ,ALGORITHMS ,IMAGING systems ,IMAGE processing ,INFORMATION processing ,PHOTOGRAPHS - Abstract
The problem in which an image containing moving objects is to be segmented into multiple subimages, each of which follows a single set of motion parameters, is called the global motion segmentation problem. The segmentation and the estimation of the motion parameters are mutually dependent, and consequently, it is difficult to solve independently the segmentation problem and the estimation problem for the motion parameters. This paper proposes a new algorithm for global motion segmentation. The proposed method directly processes the input image sequence, and the segmentation and the motion parameters are simultaneously estimated. This paper presents the detailed procedure for the computation and the application of the proposed method to the conversion from interlaced scanning to progressive scanning. © 2000 Scripta Technica, Syst Comp Jpn, 31(11): 92–100, 2000 [ABSTRACT FROM AUTHOR]
- Published
- 2000
- Full Text
- View/download PDF
40. Load balancing algorithm using tape migration mechanisms for scalable tape archiver and its performance evaluation.
- Author
-
Nemoto, Toshihiro and Kitsuregawa, Masaru
- Subjects
COMPUTER algorithms ,ALGORITHMS ,MACHINE theory ,INFORMATION storage & retrieval systems ,MULTIMEDIA systems ,COMPUTER systems - Abstract
In this paper the authors describe a load balancing algorithm, and its validity, for a scalable tape archiver consisting of small-scale tape archivers used as single elements and a tape migration device connecting these elements, which enables the physical transfer of tapes. With the rapid development of small-scale archivers for multimedia applications in recent years, such archivers are expected to become commercial products in the near future. By connecting an arbitrary number of such inexpensive element archivers together, an archiver system of the desired scale can be created in a flexible and cost-efficient fashion. In this paper the authors describe their load balancing algorithm for use among the element archivers and then demonstrate its validity through simulations. By balancing biases in the access frequency through the use of the tape migration mechanism, the authors show that considerable performance improvements can be obtained. In addition, they show that the tape migration mechanism is useful for handling tape drive failures, and that performance improvements can be obtained even when using file striping. Finally, the authors perform simulations using the access history for a satellite image database and show that the tape migration mechanism is useful for real-world applications as well. © 2000 Scripta Technica, Syst Comp Jpn, 31(5): 31–47, 2000 [ABSTRACT FROM AUTHOR]
- Published
- 2000
- Full Text
- View/download PDF
41. Dynamic Probabilistic Caching Algorithm with Content Priorities for Content-Centric Networks.
- Author
-
Sirichotedumrong, Warit, Kumwilaisak, Wuttipong, Tarnoi, Saran, and Thatphitthukkul, Nattanun
- Subjects
DATA packeting ,NETWORK performance ,ALGORITHMS ,DATA quality ,PROBABILISTIC databases ,MULTIPLICATION - Abstract
This paper presents a caching algorithm that offers better reconstructed data quality to the requesters than a probabilistic caching scheme while maintaining comparable network performance. It decides whether an incoming data packet must be cached based on the dynamic caching probability, which is adjusted according to the priorities of content carried by the data packet, the uncertainty of content popularities, and the records of cache events in the router. The adaptation of caching probability depends on the priorities of content, the multiplication factor adaptation, and the addition factor adaptation. The multiplication factor adaptation is computed from an instantaneous cache-hit ratio, whereas the addition factor adaptation relies on a multiplication factor, popularities of requested contents, a cache-hit ratio, and a cache-miss ratio. We evaluate the performance of the caching algorithm by comparing it with previous caching schemes in network simulation. The simulation results indicate that our proposed caching algorithm surpasses previous schemes in terms of data quality and is comparable in terms of network performance. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
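The multiplicative and additive adjustments of the caching probability described above can be sketched with an AIMD-style update. The specific constants, threshold, and update form here are illustrative assumptions, not the paper's equations:

```python
def update_cache_probability(p, hit_ratio, priority,
                             mult=0.5, add=0.05, target=0.3):
    """Adjust a router's caching probability from observed cache events.

    A low instantaneous cache-hit ratio multiplicatively shrinks p (caching
    too aggressively churns the store), while an additive term scaled by
    the content priority grows it, so high-priority content is cached with
    higher probability. The result is clamped to [0, 1].
    """
    if hit_ratio < target:
        p *= mult              # multiplication factor adaptation
    p += add * priority        # addition factor adaptation
    return min(max(p, 0.0), 1.0)

p = 0.8
p = update_cache_probability(p, hit_ratio=0.1, priority=1.0)  # churn: shrink
assert abs(p - 0.45) < 1e-9
p = update_cache_probability(p, hit_ratio=0.5, priority=1.0)  # healthy: grow
assert abs(p - 0.5) < 1e-9
```

Each router would run such an update per incoming data packet, then cache the packet with probability p.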
42. ON THE CONSTRUCTION OF PHYSICAL SYSTEMS FROM SPECTRAL DATA.
- Author
-
Zimoch, R. Z.
- Subjects
ALGORITHMS ,NUMERICAL analysis ,MATRICES (Mathematics) ,DIFFERENTIAL equations ,INVERSE problems ,TAYLOR'S series - Abstract
This paper presents an algorithm for the solution of the inverse problem for a matrix differential equation of the form Ax″ + Bx′ + Cx = 0. The method makes use of the Taylor formula for a matrix function of many variables. It is shown that the algorithm is always convergent. The paper includes a description of the appropriate algorithms and numerical examples. [ABSTRACT FROM AUTHOR]
- Published
- 1990
- Full Text
- View/download PDF
43. COMMENTS ON A SIMPLE ALGORITHM FOR THE PLOTTING OF CONTOURS.
- Author
-
Singh, Chandan
- Subjects
ALGORITHMS ,CONTOURS (Cartography) ,FINITE element method ,QUADRATIC programming ,COORDINATES ,NUMERICAL analysis - Abstract
We consider a paper by Stelzer and Welzel (1987) about a new contour lines algorithm. Because of its exactness and its convincing straightforwardness we have tested it and incorporated it into our post-processing software. This paper describes some aspects we experienced during the numerical experimentation which may be helpful for people considering using the method. Although the cited paper deals with both linearly interpolated isoparametric elements and quadratic ones we have restricted ourselves to the linear type to keep the relationships as simple as possible. [ABSTRACT FROM AUTHOR]
- Published
- 1990
- Full Text
- View/download PDF
44. A FAST NUMERICAL ALGORITHM FOR CALCULATING ELECTROMAGNETIC WAVES IN AN INHOMOGENEOUS SLAB.
- Author
-
Winter, D. F. and Bosley, D. L.
- Subjects
ALGORITHMS ,ELECTROMAGNETIC waves ,ELECTROMAGNETIC fields ,CONSTRUCTION slabs ,INHOMOGENEOUS materials ,DIFFERENTIAL equations - Abstract
Consider a plane monochromatic electromagnetic wave normally incident on an absorbing dielectric slab, bounded by z = 0 and z = L, and whose electrical parameters ε, μ and σ are arbitrary functions of z. The media on either side of the slab have constant but generally different electrical properties. This paper describes an efficient numerical algorithm for calculating the electromagnetic field within the slab, as well as its absolute reflection and transmission coefficients. The waveform within the slab is represented by a new complex dependent variable satisfying a modified (complex) Helmholtz equation to be solved subject to mixed inhomogeneous conditions at the boundaries of the slab. If derivatives are replaced by nth-order centred differences, the difference equation set is linear, sparse and efficiently solved by direct recursion. In this paper, complex arithmetic is avoided by replacing the complex Helmholtz equation with two coupled, real, second-order ordinary differential equations. An algorithm of second-order accuracy is developed which leads to a bitridiagonal difference equation set. Calculations are carried out for a variable-property slab problem which can also be solved analytically. Numerical comparisons indicate that the method is accurate and efficient enough that a desk-top computer is a practical computational machine. [ABSTRACT FROM AUTHOR]
- Published
- 1989
- Full Text
- View/download PDF
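Replacing derivatives by centred differences, as above, turns a two-point boundary-value problem into a banded linear system solved by direct recursion. A sketch of that device for the plain tridiagonal case (the Thomas algorithm); the paper's actual system is bitridiagonal and coupled, so this is an illustration of the solution technique, not the paper's algorithm:

```python
def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system by forward elimination / back substitution.

    a: sub-diagonal (len n, a[0] unused), b: main diagonal, c: super-diagonal
    (c[n-1] unused), d: right-hand side. This O(n) direct recursion is what
    makes finite-difference boundary-value problems cheap to solve.
    """
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                       # forward elimination
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):              # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# -u'' = f on 3 interior points gives diagonal 2 with -1 off-diagonals.
x = thomas_solve([0, -1, -1], [2, 2, 2], [-1, -1, 0], [1, 1, 1])
assert all(abs(xi - ei) < 1e-9 for xi, ei in zip(x, [1.5, 2.0, 1.5]))
```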
45. TEMPORAL AND SPATIAL ADAPTIVE ALGORITHM FOR REACTING FLOWS.
- Author
-
Pervaiz, Mehtab M. and Baron, Judson R.
- Subjects
NUMERICAL analysis ,PHYSICAL & theoretical chemistry ,NUMERICAL solutions to equations ,SPATIAL analysis (Statistics) ,ALGORITHMS ,CONJUGATE gradient methods - Abstract
This paper discusses the numerical integration of quasi-one-dimensional unsteady flow problems which involve finite rate chemistry and are expressed in terms of conservative-form Euler and species conservation equations. The coupled behaviour between fluid flow and finite rate chemistry can introduce appreciable stiffness into numerical integration schemes, which then involve prohibitively long computation times. The use of globally fine grid resolution to ensure the capture of local flow features can also result in lengthy runs. The aim of the paper is to provide a description of a controlled grid resolution approach in both space and time, and to demonstrate its advantage in diminishing the stiffness constraint. The use of uniform spatial and temporal grids demands some form of equilibrium limit modelling for extremely fast kinetics. However, the retention of a fine grid resolution when approaching that limit is required only for small portions of the overall space/time domain. Typically, fine spatial resolution is desired in those regions in which a shock or species concentration produces very rapid local changes; similarly, high temporal resolution is needed where large non-equilibrium source terms produce large temporal gradients. We discuss an adaptive technique which refines the spatial and/or temporal grid whenever preselected gradients exceed certain threshold levels. In general, the resolved grid field is itself unsteady; its rate of change may vary from large values for certain unsteady problems to zero for steady-state solutions. For example, the adapted grid for a moving shock must track the discontinuity. The present algorithm involves periodic examination of the evolving solution, detection of those regions in which large spatial non-uniformities exceed a threshold limit, and subsequent subdivision of the corresponding grids. Reverse embedding (collapse) to a coarser mesh is allowed up to the initial (coarsest) global grid.
Consistent pre-embedding in accord with the initial flow field is appropriate. For unsteady flow the temporal gradients must be monitored so as to maintain sufficiently small time-steps for adequate local resolution and stability. However, temporal adaptation does allow for spatial variation of the cell time-steps, without which a global minimum time-step would apply and could be extremely costly. [ABSTRACT FROM AUTHOR]
- Published
- 1988
- Full Text
- View/download PDF
46. KNOWLEDGE ENGINEERING ENHANCEMENT OF FINITE ELEMENT ANALYSIS.
- Author
-
Breitkopf, Piotr and Kleiber, Michal
- Subjects
FINITE element method ,ALGORITHMS ,EXPERT systems ,PASCAL (Computer program language) ,EXPERTISE ,NONLINEAR statistical models - Abstract
This paper describes the concepts behind IQFEM, an experimental knowledge-based system for algorithm selection in nonlinear finite element analysis. Problems demanding human expertise in the FEM field are identified. The need to emulate this expertise by means of an appropriate expert system is emphasized. Knowledge base organization and symbolic inference principles, as employed in the system, are reviewed. IQFEM has been developed in PASCAL on a 16-bit personal computer. The paper describes work in progress, and no definite conclusions are stated. [ABSTRACT FROM AUTHOR]
- Published
- 1987
- Full Text
- View/download PDF
47. Building 3D models from unregistered multiple range images.
- Author
-
Higuchi, Kazunori, Hebert, Martial, and Ikeuchi, Katsushi
- Subjects
IMAGE processing ,THREE-dimensional imaging ,POSTURE ,ESTIMATION theory ,GEOMETRIC surfaces ,ALGORITHMS - Abstract
This paper proposes a method of constructing 3D models from multiple range data which have no information on position. This method can combine data of an arbitrarily curved surface sampled from an arbitrary direction, without preliminary knowledge of viewpoints and extraction of features. The procedures are: an object is represented by discrete meshes; a Gaussian curvature at each node of the mesh is calculated; the meshes are mapped on a unit sphere; and matching of viewpoints is carried out by comparing the unit sphere. The discrete meshes are repeatedly re-formed until they are sufficiently close to the surface of the object. The transformation matrix for range data is calculated based on the correspondence of nodes of the discrete meshes obtained from the results of matching of the unit sphere. This paper describes the matching algorithm, and examples of applications of the algorithm to the construction of a “complete model” from multiple range images. © 1998 Scripta Technica, Syst Comp Jpn, 29(6): 82–90, 1998 [ABSTRACT FROM AUTHOR]
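The Gaussian curvature computed at each mesh node can be approximated by the classical angle-deficit formula. A minimal sketch follows; the function name and the assumption of an ordered, closed 1-ring of neighbors are illustrative, not taken from the paper:

```python
import math

def angle_deficit_curvature(v, ring):
    """Discrete Gaussian curvature at vertex v, given an ordered closed
    1-ring of neighbor positions: K = (2*pi - sum of tip angles) / (A/3),
    where A is the total area of the incident triangle fan."""
    def sub(a, b):  return [a[i] - b[i] for i in range(3)]
    def dot(a, b):  return sum(x * y for x, y in zip(a, b))
    def norm(a):    return math.sqrt(dot(a, a))
    angle_sum, area = 0.0, 0.0
    n = len(ring)
    for i in range(n):
        e1, e2 = sub(ring[i], v), sub(ring[(i + 1) % n], v)
        cosang = dot(e1, e2) / (norm(e1) * norm(e2))
        angle_sum += math.acos(max(-1.0, min(1.0, cosang)))  # tip angle at v
        # triangle area = half the cross-product magnitude
        cx = e1[1] * e2[2] - e1[2] * e2[1]
        cy = e1[2] * e2[0] - e1[0] * e2[2]
        cz = e1[0] * e2[1] - e1[1] * e2[0]
        area += 0.5 * math.sqrt(cx * cx + cy * cy + cz * cz)
    return (2 * math.pi - angle_sum) / (area / 3.0)
```

A flat neighborhood gives zero curvature, while a pyramid apex (angles summing to less than 2π) gives a positive value, which is the per-node quantity the method maps onto the unit sphere for matching.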
- Published
- 1998
- Full Text
- View/download PDF
48. A genetic algorithm with deterministic mutation based on neural network learning.
- Author
-
Fukumi, Minoru and Akamatsu, Norio
- Subjects
GENETIC algorithms ,COMBINATORIAL optimization ,ALGORITHMS ,ARTIFICIAL neural networks ,ARTIFICIAL intelligence - Abstract
This paper presents a method for designing neural networks using a genetic algorithm (GA) with deterministic mutation (DM) based on learning. The GA presented in this paper has a large framework including DM, which is performed on the basis of the results from neural network learning. It can achieve better convergence properties than traditional GAs. This framework is an evolutional system based on mutual interaction between DM and traditional genetic operators including stochastic mutation. It is also a model of transcription and reverse transcription in DNA. We show that the present method is better than conventional GAs with respect to convergence in learning. © 1998 Scripta Technica, Syst Comp Jpn, 29(3): 10–17, 1998 [ABSTRACT FROM AUTHOR]
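The abstract does not specify how the deterministic mutation operates. One hypothetical reading, a mutation driven by learning (a gradient step) rather than pure chance, alongside ordinary stochastic mutation, can be sketched on a toy one-parameter model; every name, rate, and the linear model itself are assumptions for illustration only:

```python
import random

def ga_with_dm(xs, ys, pop_size=8, gens=30, lr=0.05, seed=0):
    """Toy GA fitting y = w*x.  Children receive either a stochastic
    mutation (Gaussian noise) or a 'deterministic mutation': one gradient
    step on the squared-error loss, i.e. mutation informed by learning."""
    rng = random.Random(seed)
    def loss(w):
        return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)
    def grad(w):
        return sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    pop = [rng.uniform(-1.0, 1.0) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=loss)                        # elitist selection
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = 0.5 * (a + b)                 # crossover: averaging
            if rng.random() < 0.5:
                child += rng.gauss(0.0, 0.1)      # stochastic mutation
            else:
                child -= lr * grad(child)         # deterministic mutation
            children.append(child)
        pop = parents + children
    return min(pop, key=loss)
```

The point of the sketch is the interaction the abstract describes: random operators keep exploring while the learning-based operator pulls individuals deterministically toward better fitness.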
- Published
- 1998
- Full Text
- View/download PDF
49. A simple and efficient branch and bound algorithm for finding a maximum clique with experimental evaluations.
- Author
-
Tomita, Etsuji, Imamatsu, Ken'ichi, Kohata, Yasuhiro, and Wakatsuki, Mitsuo
- Subjects
BRANCH & bound algorithms ,NETWORK analysis (Planning) ,MATHEMATICAL programming ,GRAPHIC methods ,ALGORITHMS ,ALGEBRA - Abstract
In this paper a simple and efficient branch and bound algorithm to extract a maximum clique from an undirected graph is proposed. In this method, the crucial point for improving the efficiency of the algorithm is how to shrink the search space using a tighter bound, while keeping the processing time needed to compute that bound small. Thus, in this paper a very simple and clever sequential approximate coloring and vertex-arranging method is developed, considering the trade-off between these two factors. Using this, an upper bound on the maximum clique size at each search step is computed and branches are pruned based on this bound. The total computation time has been successfully reduced using this method. It has been verified, by computational experiments and by comparison with published experimental results, that this algorithm is faster than other major algorithms on a wide range of random graphs with 600 or fewer vertices and on a number of random graphs with 1,000 vertices. © 1997 Scripta Technica, Inc. Syst Comp Jpn, 28(5): 60–67, 1997 [ABSTRACT FROM AUTHOR]
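The pruning idea, a greedy sequential coloring whose color count bounds how far the current clique can grow, can be sketched as a generic branch-and-bound skeleton. This is the standard form of the technique, not the authors' exact ordering heuristics:

```python
def max_clique(adj):
    """Branch and bound maximum clique.  adj maps each vertex to the set
    of its neighbours.  A greedy sequential coloring of the candidate set
    supplies the upper bound used for pruning."""
    best = []

    def greedy_color(cand):
        # assign each candidate the smallest color class it fits into;
        # a vertex with color k can extend the clique by at most k vertices
        classes, labeled = [], []
        for v in cand:
            for k, cls in enumerate(classes):
                if all(u not in adj[v] for u in cls):
                    cls.append(v)
                    labeled.append((v, k + 1))
                    break
            else:
                classes.append([v])
                labeled.append((v, len(classes)))
        labeled.sort(key=lambda t: t[1])          # ascending color order
        return labeled

    def expand(clique, cand):
        nonlocal best
        if not cand:
            if len(clique) > len(best):
                best = list(clique)
            return
        for v, color in reversed(greedy_color(cand)):
            if len(clique) + color <= len(best):
                return                            # bound: cannot beat best
            expand(clique + [v], [u for u in cand if u in adj[v]])
            cand = [u for u in cand if u != v]    # v fully explored

    expand([], list(adj))
    return best
```

Branching in decreasing color order makes the bound monotone along the loop, so a single failed check prunes all remaining candidates at that level, which is exactly the trade-off the abstract describes: a cheap bound that still cuts the search space sharply.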
- Published
- 1997
- Full Text
- View/download PDF
50. A new algorithm for resolving position reversal problems in stereo matching using dynamic programming.
- Author
-
Fujii, Minoru and Matsuyama, Yasuo
- Subjects
COMPUTER algorithms ,ALGORITHMS ,DYNAMIC programming ,ALGEBRA ,COMPUTER programming ,MATHEMATICAL optimization - Abstract
In binocular stereo vision, since color and distance information can be obtained simultaneously by a passive method which will not affect the environment, it is expected to be commonly used in the vision of future mobile robots. However, it is necessary to solve difficult problems such as finding corresponding points in the right and left images. Many stereo algorithms dealing with this problem have been proposed so far, all with their own weaknesses. This paper considers the position reversal problem, in which the positions of the corresponding points change places on the right and left scan lines. This is a subject in which there has been little progress despite its importance in obstacle detection by mobile robots. It has been considered that the position reversal problem cannot be processed correctly in plain dynamic programming for global matching unless the assumption that the positions of the corresponding points do not change places is forced. In the method proposed in this paper, four states are set in the search for the corresponding points. The search using dynamic programming is performed in two stages so that the position reversal problem can be solved while the above assumption is maintained. Then, matching also becomes possible in the case of objects for which the positions of the corresponding points change places. The proposed method is experimentally applied to the search problem in which the positions of the corresponding points change places inside scan lines, and its effectiveness is verified. © 1997 Scripta Technica, Inc. Syst Comp Jpn, 28(4): 25–35, 1997 [ABSTRACT FROM AUTHOR]
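The "plain dynamic programming for global matching" that the abstract builds on can be illustrated with a standard monotone scanline DP. The occlusion cost and the absolute-difference dissimilarity are illustrative assumptions; the monotone ordering baked into the recurrence is precisely the assumption that rules out position reversals:

```python
def scanline_match(left, right, occlusion=2.0):
    """Plain DP correspondence between two scanlines of intensities.
    Each left pixel is matched, or skipped at a fixed occlusion cost;
    matches must appear in the same order on both lines (no reversals)."""
    n, m = len(left), len(right)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(n + 1):
        for j in range(m + 1):
            if i and cost[i - 1][j] + occlusion < cost[i][j]:
                cost[i][j] = cost[i - 1][j] + occlusion   # left pixel occluded
            if j and cost[i][j - 1] + occlusion < cost[i][j]:
                cost[i][j] = cost[i][j - 1] + occlusion   # right pixel occluded
            if i and j:
                match = cost[i - 1][j - 1] + abs(left[i - 1] - right[j - 1])
                if match < cost[i][j]:
                    cost[i][j] = match                    # match the two pixels
    return cost[n][m]
```

Because the recurrence only ever advances both indices forward, corresponding points that change places on the two scanlines cannot be represented; the paper's two-stage, four-state search is addressed at exactly this limitation.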
- Published
- 1997
- Full Text
- View/download PDF