Search Results (25 results)
2. A new meta-optimizer for evolutionary algorithms: bipolar matching tendency.
- Author
- GENCAL, Mashar Cenk and ORAL, Mustafa
- Subjects
- PARTICLE swarm optimization, EVOLUTIONARY algorithms, SEARCH algorithms, GENETIC algorithms, METAHEURISTIC algorithms, ALGORITHMS
- Abstract
Recent studies show that the performance of Evolutionary Algorithms often depends on choosing appropriate parameter configurations. Researchers have therefore generally tuned these parameters either by consulting similar studies in the literature or manually, e.g. by Grid Search. Searching for parameters manually, however, is laborious and time-consuming, so meta-optimization techniques have become common methods for adjusting the parameters of an algorithm. These techniques fall into two broad classes: off-line, which tunes the parameters of an algorithm before it starts, and on-line, which tunes them while it is running. In this paper, the Bipolar Matching Tendency (BMT) algorithm is chosen as the selection method of a Genetic Algorithm (GA). The resulting algorithm, named GA-BMT, is used for the first time as an on-line meta-optimizer. In addition, the paper applies two search algorithms (Grid Search, Coarse-to-Fine Search) and three meta-optimization methods (Standard GA, Particle Swarm Optimization, GA-BMT) to find the best parameter settings of the Standard GA on 17 test functions, and offers a comparative study of their results. Furthermore, non-parametric statistical tests, Friedman and Wilcoxon Signed Rank, were performed to demonstrate the significance of the results. Based on all the results achieved, GA-BMT performs reasonably well. [ABSTRACT FROM AUTHOR]
- Published
- 2022
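As a point of reference for the off-line tuning the abstract contrasts with GA-BMT's on-line approach, here is a minimal grid-search sketch; run_ga, its parameters, and the toy fitness surface are invented for illustration and are not from the paper.

```python
import itertools

def run_ga(mutation_rate, crossover_rate):
    """Stand-in for a GA run; returns the best fitness found.
    (Hypothetical: the paper tunes the Standard GA on 17 test functions.)"""
    # Toy fitness surface with a known optimum near (0.05, 0.8).
    return -(mutation_rate - 0.05) ** 2 - (crossover_rate - 0.8) ** 2

# Off-line meta-optimization by exhaustive grid search over two parameters.
mutation_grid = [0.01, 0.05, 0.1, 0.2]
crossover_grid = [0.6, 0.7, 0.8, 0.9]

best = max(itertools.product(mutation_grid, crossover_grid),
           key=lambda cfg: run_ga(*cfg))
print("best (mutation, crossover):", best)
```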
3. The Algorithm is not My Boss Anymore: Technological appropriation and (new) media strategies in Riders x Derechos and Mensakas.
- Author
- FERNÀNDEZ, AINA and BARREIRO, MARÍA SOLIÑA
- Subjects
- EMPLOYEE rights, INFECTIOUS disease transmission, ALGORITHMS, TRIAL courts, DIGITAL technology, DIGITAL communications, SOCIAL media
- Abstract
This paper studies how a group of delivery workers in Barcelona organized a successful traditional and social media strategy to claim their rights as waged workers. They created a union, RidersxDerechos, and also decided to create a workers' cooperative, Mensakas, with their own application and algorithm. We study how they were able to re-appropriate technology and use digital communities to spread alternative discourses. We used several methodologies: traditional content analysis in media, debate analysis in social media, and qualitative ethnography. We found that RidersxDerechos's access to the media was very successful (300 news pieces analyzed) thanks to strikes and court trials, prompting a change of perspective in the media's treatment of the platform economy; until then, the media had followed the rhetoric of digital entrepreneurship. Alongside the traditional media strategy, they developed a diversified communicative pathway in social media (Twitter, Instagram, Facebook, and Goteo) that helped them establish alliances with riders from other cities and countries; we analyzed more than 25,000 tweets. Finally, they proposed a new way of using technology by creating their own app and algorithms for their workers' cooperative, Mensakas. Crowdfunding was also used to fund it and to spread an alternative storytelling of work to Silicon Valley's. [ABSTRACT FROM AUTHOR]
- Published
- 2020
4. Structured LDPC Codes for High-Density Recording: Large Girth and Low Error Floor.
- Author
- Lu, J. and Moura, J. M. F.
- Subjects
- MAGNETIC recorders & recording, RECORDING instruments, ALGORITHMS, INFORMATION storage & retrieval systems, ERROR-correcting codes, SIMULATION methods & models
- Abstract
High-rate low-density parity-check (LDPC) codes are the focus of intense research in magnetic recording because, when decoded by the iterative sum-product algorithm, they show decoding performance close to the Shannon capacity. However, cycles, especially short cycles, are harmful to LDPC codes. The paper describes partition-and-shift LDPC (PS-LDPC) codes, a new class of regular, structured LDPC codes that can be designed with large girth and arbitrarily large minimum distance. Large girth leads to more efficient iterative decoding and to codes with better error-floor properties than random LDPC codes. PS-LDPC codes can be designed for any desired column weight and with flexible code rates. The paper details the girth and distance properties of the codes and their systematic construction, and presents analytical and simulation results showing that, in the high signal-to-noise ratio region, PS-LDPC codes outperform random codes, alleviating the error floor phenomenon. [ABSTRACT FROM AUTHOR]
- Published
- 2006
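Since girth is the key design property here, the sketch below is a generic BFS girth check for the Tanner graph of a small parity-check matrix; it is illustrative only and is not the PS-LDPC construction itself.

```python
from collections import deque
import numpy as np

def tanner_girth(H):
    """Shortest cycle length in the Tanner graph of parity-check matrix H.
    Nodes 0..n-1 are variable nodes, n..n+m-1 are check nodes."""
    m, n = H.shape
    adj = [[] for _ in range(n + m)]
    for i in range(m):
        for j in range(n):
            if H[i, j]:
                adj[j].append(n + i)
                adj[n + i].append(j)
    girth = float("inf")
    for root in range(n + m):          # BFS from every node; the minimum
        dist = {root: 0}               # cycle estimate over all roots is
        parent = {root: -1}            # the girth
        q = deque([root])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v], parent[v] = dist[u] + 1, u
                    q.append(v)
                elif v != parent[u]:   # edge closing a cycle through root
                    girth = min(girth, dist[u] + dist[v] + 1)
    return girth

H = np.array([[1, 1, 0, 1], [0, 1, 1, 1], [1, 0, 1, 0]])
print(tanner_girth(H))  # Tanner-graph cycles are always even; here: 4
```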
5. Extraction of Timing Error Parameters From Readback Waveforms.
- Author
- Zeng, Wei, Kavcic, Aleksandar, and Motwani, Ravi
- Subjects
- MAGNETIC recorders & recording, RECORDING instruments, MARKOV processes, ALGORITHMS, STOCHASTIC processes, ALGEBRA
- Abstract
In this paper, we consider the problem of modeling the timing error process in magnetic recording systems. We propose a discrete-valued Markov model for the timing error process, and design two methods (data-aided and non-data-aided), based on the Baum-Welch algorithm, to extract the model parameters from the readback waveforms. The channel model we consider is an intersymbol interference (ISI) channel with additive Gaussian noise. The continuous-time readback signal at the output of the channel is sampled at baud rate. Simulation results show that the estimated parameters are close to the actual values and that convergence is attained in a few iterations of the Baum-Welch algorithm. We also demonstrate the usefulness of accurate model extraction by comparing a fine-tuned Markov timing recovery loop to the standard Mueller and Muller detector with a tuned second-order loop filter. [ABSTRACT FROM AUTHOR]
- Published
- 2006
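To make the model concrete, here is a sketch of simulating a discrete-valued Markov timing-error process; the states, step sizes, and transition matrix are invented, and the Baum-Welch estimation step itself is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-state model: the timing offset drifts by -1, 0, or +1
# sample quanta per symbol, with a "sticky" transition matrix.
steps = np.array([-1, 0, +1])
P = np.array([[0.90, 0.08, 0.02],
              [0.05, 0.90, 0.05],
              [0.02, 0.08, 0.90]])

state, offset, offsets = 1, 0, []
for _ in range(1000):
    state = rng.choice(3, p=P[state])
    offset += steps[state]          # accumulated timing error (in quanta)
    offsets.append(offset)

# Baum-Welch (as in the paper) would recover P from readback waveforms;
# here we only simulate the hidden process itself.
print(offsets[:10])
```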
6. Treatment algorithms in pulmonary hypertension in light of current guidelines.
- Author
- Okutucu, Sercan and Tokgözoğlu, Lale
- Subjects
- PULMONARY hypertension treatment, PULMONARY artery diseases, PULMONARY circulation, PULMONARY blood vessels, THERAPEUTICS, ALGORITHMS
- Published
- 2010
7. An empirical study on discretizing a continuous variable using multiple structural breaks.
- Author
- Özkoç, H. Hatice
- Subjects
- ALGORITHMS, ECONOMETRIC models, ECONOMETRICS, MATHEMATICAL models, SIMULATION methods & models
- Abstract
Many classification algorithms and econometric models require that training examples contain only discrete values. In order to use these algorithms when some variables have continuous values, the numeric variables must be converted into discrete ones. This paper describes a new way of discretizing numeric values using multiple structural changes. [ABSTRACT FROM AUTHOR]
- Published
- 2011
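The abstract does not spell out the discretization rule, but once break points have been estimated, the mapping itself is a digitize step; a sketch with hand-picked break points standing in for estimated structural breaks:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0, 1, 300),
                    rng.normal(4, 1, 300),
                    rng.normal(9, 1, 300)])   # three regimes

# Suppose a structural-break procedure has estimated these break points
# (hand-picked here for illustration).
breaks = [2.0, 6.5]

codes = np.digitize(x, breaks)   # 0, 1, 2 = discrete category per value
print(np.bincount(codes))
```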
8. Graeco-Latin Squares and a Mistaken Conjecture of Euler.
- Author
- Klyve, Dominic and Stemkoski, Lee
- Subjects
- MAGIC squares, NUMBER theory, ALGEBRA, ALGORITHMS, FACTOR tables, FACTORIZATION, RECREATIONAL mathematics, MATHEMATICS
- Abstract
The article presents information on the properties of Graeco-Latin squares enumerated by mathematician Leonhard Euler. Euler suggested that a Graeco-Latin square of size n could never exist for any n of the form 4k + 2, although he was not able to prove it. A Latin square is an n-by-n array of n distinct symbols in which each symbol appears exactly once in each row and column. A Graeco-Latin square, on the other hand, is an n-by-n array of ordered pairs from a set of n symbols such that in each row and each column of the array, each symbol appears exactly once in each coordinate. In one of his papers on Graeco-Latin squares, Euler used magic squares, which are closely related to Graeco-Latin squares: he constructed magic squares from Graeco-Latin squares of orders 3, 4, and 5, and showed that a Graeco-Latin square of order n can be converted into a magic square by an algorithm. Euler thus believed that one could construct Graeco-Latin squares of every order n except those whose prime factorization contains only a single factor of 2, the conjecture that later proved mistaken.
- Published
- 2006
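For intuition, one classical construction (not from the article) superimposes two orthogonal Latin squares of odd order:

```python
def graeco_latin(n):
    """Superimpose two orthogonal Latin squares of odd order n:
    L1[i][j] = i + j (mod n), L2[i][j] = i + 2j (mod n)."""
    assert n % 2 == 1, "this simple construction needs odd n"
    return [[((i + j) % n, (i + 2 * j) % n) for j in range(n)]
            for i in range(n)]

sq = graeco_latin(5)
pairs = {p for row in sq for p in row}
assert len(pairs) == 25      # every ordered pair appears exactly once
for row in sq:
    print(row)
```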
9. A comparative study of classification methods for fall detection
- Author
- Çatalbaş, Bahadır, Yücesoy, Burak, Seçer, G., and Aslan, Murat
- Subjects
- Signal processing, Support vector machines, Least squares support vector machines, Accelerometer, Comparative studies, Daily life activities, Triaxial accelerometer, Fall detection, Training and testing, Classification methods, Accelerometers, Neural networks, Algorithms, Rule-based classifier
- Abstract
Conference: 22nd Signal Processing and Communications Applications Conference (SIU), 2014; 23-25 April 2014. A comparative study of various fall detection algorithms based upon measurements of a wearable tri-axial accelerometer unit is presented in this paper. Least squares support vector machine, neural network, and rule-based classifiers are evaluated. The training and testing data sets needed to design and test the classifiers were collected from 7 subjects, each of whom performed simulated falls three times as well as other daily life activities such as walking and sitting. Among the three methods, the support vector machine based classifier is found to be superior in terms of both correct detection and false alarm ratio, with 87.76% detection accuracy and 89.47% specificity; the highest fall detection rate (sensitivity), 90.91%, is achieved with the rule-based classifier. © 2014 IEEE.
- Published
- 2014
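A sketch of the SVM branch of such a comparison on synthetic two-feature windows (scikit-learn assumed; the features and data below are invented stand-ins, not the paper's):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)

# Synthetic stand-in features per window: e.g. peak acceleration magnitude
# and a post-impact stillness score (real features would come from the
# wearable tri-axial accelerometer, as in the paper).
falls = rng.normal([3.5, 0.8], 0.4, size=(60, 2))
daily = rng.normal([1.2, 0.2], 0.4, size=(60, 2))
X = np.vstack([falls, daily])
y = np.array([1] * 60 + [0] * 60)      # 1 = fall, 0 = daily activity

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
clf = SVC(kernel="rbf").fit(Xtr, ytr)
print("accuracy:", clf.score(Xte, yte))
```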
10. Cluster analysis algorithms
- Author
- Altun, Muhammet and Ercengiz, Ali (Mühendislik Bilimleri Ana Bilim Dalı [Department of Engineering Sciences])
- Subjects
- System analysis, Engineering Sciences, Algorithms
- Abstract
Cluster analysis is a general name for a wide range of procedures that can be used to create a classification. These procedures empirically form clusters, or groups, of highly similar objects. More specifically, a cluster analysis method is a multivariate statistical procedure that starts with a data set containing information about a sample of objects and then attempts to reorganize these objects into relatively homogeneous groups. Many studies have been carried out and many algorithms produced for cluster analysis. The aim of these algorithms is to obtain the clusters of a given set X = {x1, x2, ..., xn}. The basic inputs for these algorithms are the set X and the relation among its elements (a distance or similarity measure); the outputs should, in principle, be the number of clusters and the clusters themselves, although this does not hold for all algorithms in the literature. The algorithms fall into two categories: in the first, the number of clusters is given as an input; in the second, the algorithms attempt to find the number of clusters. In this thesis, the cluster analysis algorithms in the literature are surveyed, the important and frequently used algorithms are presented, computer programs are written for them, and the results the algorithms produce are given as screen output.

SUMMARY: CLUSTER ANALYSIS ALGORITHMS. The practice of classifying objects according to perceived similarities is the basis for much of science. Organizing data into sensible groupings is one of the most fundamental modes of understanding and learning. Cluster analysis is the formal study of algorithms and methods for grouping, or classifying, objects. An object is described either by a set of measurements or by its relationships to other objects. Cluster analysis does not use category labels that tag objects with prior identifiers; the absence of category labels distinguishes cluster analysis from discriminant analysis (and from pattern recognition and decision analysis). The objective of cluster analysis is simply to find a convenient and valid organization of the data, not to establish rules for separating future data into categories. Clustering algorithms are geared toward finding structure in the data.

A cluster is comprised of a number of similar objects collected or grouped together. Everitt (1974) documents some of the following definitions of a cluster: 1. "A cluster is a set of entities which are alike, and entities from different clusters are not alike." 2. "A cluster is an aggregation of points in the test space such that the distance between any two points in the cluster is less than the distance between any point in the cluster and any point not in it." 3. "Clusters may be described as connected regions of a multi-dimensional space containing a relatively high density of points, separated from other such regions by a region containing a relatively low density of points." The last two definitions assume that the objects to be clustered are represented as points in the measurement space. We recognize a cluster when we see it in the plane, although it is not clear how we do so. While it is easy to give a functional definition of a cluster, it is very difficult to give an operational one, because objects can be grouped into clusters with different purposes in mind. Data can reveal clusters of differing shapes and sizes. To compound the problem further, cluster membership can change over time, as is the case with star clusters (Dewdney, 1986), and the number of clusters often depends on the resolution (fine versus coarse) with which we view the data. [Figure 1, omitted here, illustrates these concepts for two-dimensional point clusters.] How many clusters are there in Figure 1? At the global, or higher, level of similarity we perceive four clusters in these data, but at the local level, or a lower similarity threshold, we perceive twelve. Which answer is correct? Looking at the data at multiple scales may actually help in analyzing its structure. Thus the crucial problem in identifying clusters in data is to specify what proximity is and how to measure it; as is to be expected, the notion of proximity is problem dependent.

Clustering techniques offer several advantages over a manual grouping process. First, a clustering program can apply a specified objective criterion consistently to form the groups. Human beings are excellent cluster seekers in two, and often in three, dimensions, but different individuals do not always identify the same clusters in data: the proximity measure defining similarity among objects depends on an individual's educational and cultural background, so it is quite common for different human subjects to form different groups in the same data, especially when the groups are not well separated. Second, a clustering algorithm can form the groups in a fraction of the time required by a manual grouping, particularly if a long list of descriptors or features is associated with each object. The speed, reliability, and consistency of a clustering algorithm in organizing data together constitute an overwhelming reason to use it. A clustering algorithm relieves a scientist or data analyst of the treacherous job of "looking" at a pattern matrix or a similarity matrix to detect clusters; an analyst's time is better spent analyzing or interpreting the results the algorithm provides.

Clustering is also useful for implementing the "divide and conquer" strategy to reduce the computational complexity of various decision-making algorithms in pattern recognition. For example, the nearest-neighbor decision rule is a popular technique in pattern recognition (Duda and Hart, 1973), but finding the nearest neighbor of a test pattern can be very time consuming if the number of training patterns or prototypes is large. Fukunaga and Narendra (1975) used the well-known partitional clustering algorithm ISODATA to decompose the patterns and then, in conjunction with the branch-and-bound method, obtained an efficient algorithm to compute nearest neighbors. Similarly, Fukunaga and Short (1978) used clustering for problem localization, whereby a simple decision rule can be implemented in local regions or clusters of the pattern space. The applications of clustering continue to grow.

Consider the problem of grouping various colleges and universities in the United States to illustrate the factors in clustering problems. Schools can be clustered based on their geographical location, the size of the student body, the size of the campus, the tuition fee, or the offering of various professional graduate programs. The factors depend on the goal of the analysis, and the shapes and sizes of the clusters formed will depend on which particular attributes are used in defining the similarity between colleges. Interesting and challenging clustering problems arise when several attributes are taken together to construct clusters: one cluster could represent private, midwestern, primarily liberal arts colleges with fewer than 1000 students, and another large state universities. The features or attributes mentioned so far can easily be measured. What about attributes such as the quality of education, the quality of the faculty, and the quality of campus life, which cannot be measured easily? One can poll alumni or a panel of experts to get a numerical score (on a scale of, say, 1 to 10) for these factors, where scores or similarities must be averaged over all respondents because individual opinions differ. One can also measure subjective attributes indirectly; for example, faculty excellence in a graduate program can be estimated from the number of professional papers written and the number of Ph.D. degrees awarded.

The example above illustrates the difference between decision making and clustering. Suppose that we want to partition computer science graduate programs in the United States into two categories based on such attributes as the size of the faculty, computing resources, external research support, and faculty publications. In the decision-making paradigm, an "expert" must first define the two categories by identifying some computer science programs from each (these are the training samples, in pattern recognition terminology). The attributes of these training samples are used to construct decision boundaries (or simply thresholds on attribute values) that separate the two types of programs; once the decision boundary is available, the remaining programs (those not labeled by the expert) are assigned to one of the two categories. In the clustering paradigm, no expert is available to define the categories. The objective is to determine whether a two-category partition of the data, based on the given attributes, is reasonable, and if so, to determine the memberships of the two clusters. This can be achieved by forming similarities between all pairs of computer science graduate programs based on the given attributes, and then constructing groups such that the within-group similarities are larger than the between-group similarities.

Cluster analysis is one component of exploratory data analysis, which means sifting through data to make sense of measurements by whatever means are available. The information gained about a set of data from a cluster analysis should prod one's creativity, suggest new experiments, and provide fresh insight into the subject matter. The modern digital computer makes all this possible: cluster analysis is a child of the computer revolution and frees the analyst from time-honored statistical models and procedures conceived when the human brain was aided only by pencil and paper. The development of clustering methodology has been truly interdisciplinary, with contributions from researchers in almost every area of science: statisticians, social scientists, and engineers. I. J. Good (1977) has suggested the new name "botryology" for the discipline of cluster analysis, from the Greek word for a cluster of grapes.

In this thesis, the algorithms of cluster analysis and their mathematical bases are presented. The algorithms are examined in two steps. In the first step, we examine the algorithms in which the number of clusters is given as an input; these can be classified as hierarchical and nonhierarchical algorithms, where the goal of the nonhierarchical algorithms is to find the clusters that optimize given objective functions. Among these, the "hard c-means" and "fuzzy c-means" methods are the ones mostly used by researchers. [Figure 2, omitted here, shows the structure of this classification: hierarchical algorithms (single linkage, complete linkage, average linkage) and nonhierarchical algorithms (hard c-means, fuzzy c-means).] In the second phase of the research, algorithms for finding the number of clusters are examined; these can likewise be classified as hierarchical and nonhierarchical. As before, the hierarchical algorithms are examined via three methods: single linkage, complete linkage, and average linkage. The basis of the nonhierarchical algorithms, on the other hand, is the optimization of validity functionals; to have executable algorithms, fuzzy c-means is used as the first-step algorithm. Fuzzy sets and fuzzy partition spaces are studied in the thesis as well. After execution of all the algorithms, a fuzzy partition matrix is produced as the result.
- Published
- 1998
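Since fuzzy c-means is the workhorse algorithm of the thesis, here is a minimal numpy sketch of its alternating updates (fuzzifier m, iteration count, and test data are illustrative choices):

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=100, seed=0):
    """Minimal fuzzy c-means: returns cluster centers and the fuzzy
    partition matrix U (n x c)."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))     # random fuzzy partition
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = d ** (-2.0 / (m - 1.0))                # standard membership update
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

X = np.vstack([np.random.default_rng(1).normal(0, 0.5, (50, 2)),
               np.random.default_rng(2).normal(3, 0.5, (50, 2))])
centers, U = fuzzy_c_means(X, c=2)
print(centers.round(2))
```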
11. An examination of sixth-grade middle school students' process of constructing the division algorithm for fractions
- Author
- Yildirim, Büşra and Akkaya, Recai (İlköğretim Ana Bilim Dalı [Department of Primary Education])
- Subjects
- Student achievement, Secondary school students, Education and Training, Fractions, Division operations, Mathematics education, Algorithms, Mathematics
- Abstract
The purpose of this study is to design appropriate learning environments for students to conceptually construct the division algorithm for fractions, to apply the designed teaching, and then to report and examine the quality of the knowledge construction in this process. A case study design, one of the qualitative research methods, was used. The participating students were determined by criterion sampling, a purposive sampling method; they were sixth-grade students who had not yet learned division with fractions. A "Fraction Achievement Test", composed of State Boarding and Scholarship Examination questions covering the learning outcomes up to division with fractions, was used to determine whether the students had the prior knowledge needed for the activities designed for the knowledge construction process, and a "Mathematics Attitude Scale" was applied to measure attitudes toward mathematics. The study was conducted with 12 students, selected from 108 sixth-grade students by considering their scores on the applied tests, their fifth-grade end-of-term grades, the opinions of their mathematics teachers, and their willingness to participate. The study was carried out with six groups formed according to mathematics achievement levels. The "Constructing the Division Algorithm for Fractions Activity Paper" prepared by the researcher was applied to the student groups, and a reinforcement activity was carried out two weeks after the main application. The activities with the working groups were video-recorded and later transcribed, and a descriptive analysis of the data was performed using the RBC+C model as an analytical tool. At the end of the research, almost all of the student groups were found to construct the cross-multiplication rule for the fraction division algorithm. The duration and path of this construction varied by group, and the process was better internalized in the groups with higher achievement levels; the later construction of the algorithm at the other achievement levels is thought to stem from gaps in the required prior knowledge. When the students were asked about the meaning of division during the application, all of them mentioned the equal-sharing meaning and did not know the measurement meaning of division. Based on the data obtained, it can be said that the activities prepared for division with fractions contributed to the knowledge construction process, since they both covered the meanings of division and reflected the algorithm construction process.
- Published
- 2019
12. Robust set-membership filtering algorithms against impulsive noise
- Author
- Sayın, Muhammed Ö., Vanlı, N. Denizcan, and Kozat, Süleyman S.
- Subjects
- Signal processing, Absolute values, Convergence performance, Impulsive noise, Set membership filtering, Filtering algorithm, Absolute error, Adaptive filtering, Logarithmic error cost, Costs, Noise environments, Set-membership, Robust adaptive filtering, Absolute difference, Algorithms
- Abstract
Conference: 22nd Signal Processing and Communications Applications Conference (SIU), 2014; 23-25 April 2014. In this paper, we propose robust set-membership filtering algorithms against impulsive noise. First, we introduce the set-membership normalized least absolute difference algorithm (SM-NLAD), which provides robustness against impulsive noise by costing the absolute error instead of its square. Then, in order to achieve comparable convergence performance in impulse-free noise environments, we propose the set-membership normalized least logarithmic absolute difference algorithm (SM-NLLAD), built on the logarithmic cost framework: the logarithmic cost function intrinsically behaves like the absolute value of the error for large errors and like its square for small errors. Finally, in numerical examples, we show the robustness of our algorithms against impulsive noise and their comparable performance in impulse-free noise environments. © 2014 IEEE.
- Published
- 2014
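The exact SM-NLAD recursion is given in the paper; below is a generic sign-error, set-membership-gated update that captures the two ideas the abstract names (adapt only when the error exceeds a bound; cost the absolute error). Parameter names, values, and the noise model are illustrative.

```python
import numpy as np

def sm_nlad_step(w, x, d, gamma, mu=0.5, eps=1e-8):
    """One set-membership, sign-error update (illustrative form):
    adapt only when |e| exceeds the error bound gamma, and use sign(e)
    so an impulsive error cannot blow up the step size."""
    e = d - w @ x
    if abs(e) > gamma:                       # outside the membership set
        w = w + mu * np.sign(e) * x / (x @ x + eps)
    return w

rng = np.random.default_rng(3)
w_true, w = np.array([1.0, -0.5, 0.25]), np.zeros(3)
for _ in range(2000):
    x = rng.normal(size=3)
    noise = rng.standard_t(df=1.5) * 0.05    # heavy-tailed, impulsive noise
    w = sm_nlad_step(w, x, w_true @ x + noise, gamma=0.1)
print(w.round(2))                            # close to w_true
```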
13. Sequential nonlinear regression via context trees
- Author
- Vanlı, N. Denizcan and Kozat, Süleyman S.
- Subjects
- Signal processing, Context tree, Non-linear model, Nonlinear regression, Regressor space, Efficient learning, Adaptive, Regression algorithms, Regression analysis, Trees (mathematics), Sequential, Algorithms, Non-linear regression
- Abstract
Conference: 22nd Signal Processing and Communications Applications Conference (SIU), 2014; 23-25 April 2014. In this paper, we consider the problem of sequential nonlinear regression and introduce an efficient learning algorithm using context trees. Specifically, the regressor space is partitioned and the resulting regions are represented by a context tree. In each region, we assign an independent regression algorithm, and the outputs of all possible nonlinear models defined on the context tree are adaptively combined, with a computational complexity linear in the number of nodes. Upper bounds on the performance of the algorithm are derived without making any statistical assumptions on the data. A numerical example is provided to illustrate the theoretical results. © 2014 IEEE.
- Published
- 2014
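The context-tree algorithm adaptively mixes all partition-based models; the sketch below shows only the underlying idea of fitting independent linear regressors on a partitioned regressor space (a single fixed depth-1 partition, with invented data):

```python
import numpy as np

rng = np.random.default_rng(4)

# Piecewise-linear target on a partitioned regressor space. The paper's
# context tree would sequentially weight every partition-based model;
# here we fit one fixed partition (x < 0 vs x >= 0) by least squares.
x = rng.uniform(-1, 1, 500)
y = np.where(x < 0, -2 * x - 1, 3 * x + 0.5) + rng.normal(0, 0.1, 500)

models = {}
for name, mask in {"left": x < 0, "right": x >= 0}.items():
    A = np.column_stack([x[mask], np.ones(mask.sum())])   # slope + intercept
    models[name], *_ = np.linalg.lstsq(A, y[mask], rcond=None)
print({k: v.round(2) for k, v in models.items()})
```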
14. Competitive linear MMSE estimation under structured data uncertainties
- Author
- Vanlı, N. Denizcan, Sayın, Muhammed Ö., and Kozat, Süleyman S.
- Subjects
- Signal processing, Data uncertainties, Semi-definite programming, Mean square error, Relative performance, Convex optimization, Linear estimation, Competitive, Error analysis, Mean square error criterions, Robust, Data uncertainty, Bounded uncertainty, Algorithms
- Abstract
In this paper, we consider the linear estimation problem under structured data uncertainties. A robust algorithm under bounded uncertainties is presented for the mean square error (MSE) criterion. The performance of the linear estimator is defined relative to the performance of the linear minimum MSE (MMSE) estimator tuned to the underlying unknown data uncertainties, i.e., the introduced algorithm has a competitive framework. Using this relative performance measure, we find the estimator that minimizes the cost for the worst-case system model, and we show that finding this estimator can equivalently be cast as a semidefinite programming (SDP) problem. Numerical examples are provided to illustrate the theoretical results. © 2014 IEEE.
- Published
- 2014
15. Compressive sensing based flame detection in infrared videos
- Author
- Günay, Osman and Çetin, A. Enis
- Subjects
- Adaptive boosting, Temporal features, Computational costs, A-wavelet transform, Compressive sensing, Vectors, Wavelet transforms, Image processing, Feature extraction algorithms, Infra-red cameras, Flame detection, Compressed sensing, Wavelet transform, Hidden Markov models, Infrared, Spatial feature vector, Infrared radiation, Algorithms, Signal reconstruction
- Abstract
Conference: 24-26 April 2013. In this paper, a Compressive Sensing based feature extraction algorithm is proposed for flame detection using infrared cameras. First, bright and moving regions in videos are detected. The videos are then divided into spatio-temporal blocks, and spatial and temporal feature vectors are extracted from these blocks. Compressive Sensing is used to extract the spatial feature vectors: compressed measurements are obtained by multiplying the pixels in the block with the sensing matrix. A new method is also developed to generate the sensing matrix: a random vector generated according to a standard Gaussian distribution is passed through a wavelet transform, and the resulting matrix is used as the sensing matrix. Temporal features are obtained from the vector formed from the difference of the mean intensity values of the frames in two neighboring blocks. Spatial feature vectors are classified using AdaBoost; temporal feature vectors are classified using hidden Markov models. To reduce the computational cost, only moving and bright regions are classified, and classification is performed at specified intervals instead of at every frame. © 2013 IEEE.
- Published
- 2013
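The measurement step itself is a single matrix product; a sketch with a plain Gaussian sensing matrix (the paper builds its matrix by passing Gaussian vectors through a wavelet transform, which is omitted here, and the block and measurement sizes are invented):

```python
import numpy as np

rng = np.random.default_rng(5)

# A 16x16 image block, flattened; compressed measurements y = Phi @ x.
block = rng.integers(0, 256, size=(16, 16)).astype(float)
x = block.ravel()                      # n = 256 pixels
m = 32                                 # number of compressed measurements
Phi = rng.normal(size=(m, x.size)) / np.sqrt(m)   # Gaussian sensing matrix

y = Phi @ x                            # the spatial feature vector
print(y.shape)                         # (32,) -> classified by AdaBoost
```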
16. A heuristic approach to the cutting problem
- Author
- Bayir, Firat and Özdemir, Erhan (İşletme Anabilim Dalı [Department of Business Administration])
- Subjects
- Optimization, Travelling salesman problem, Industrial and Industrial Engineering, Computer Engineering and Computer Science and Control, Business Administration, Packaging, Stock cutting problem, Material cutting problem, Cutting problems, Two dimensional cutting, Algorithms
- Abstract
The cutting and packing problem is one of the most important research areas both for academics and for industries such as glass, metal, paper, and apparel. Assigning small parts to raw material sheets, or three-dimensional bin packing, are problems of this kind. This work deals with the Open Dimension Problem, the 1.5-dimensional version of the Cutting Stock Problem. The objective is to assign all of the parts from the bill of manufacturing to a rectangular sheet, one dimension of which is fixed while the other is open, so as to minimize the waste of material. There is no restriction that the parts be convex. The problem is approached as a variant of the Travelling Salesman Problem, with the insertion order of the parts and their rotation angles handled as the optimization criteria. The problem is solved using Genetic Algorithms: a new crossover operator is proposed, and the principle of giving priority to the larger part is added. For packing, the bottom-left-fill algorithm is used, extended with a rotation feature. The parts used in the experiments are taken from the article published by Anand, McCord and Sharma (1999), and the results are compared with the method proposed there. Keywords: Cutting and Packing Problem, Cutting Stock Problem, Open Dimension Problem, Genetic Algorithm, Travelling Salesman Problem, Optimisation, Irregular Shapes, Non-Convex, Concave, Fabric Cut, Metal Cut, Leather Cut, Rotating Bottom-Left Algorithm
- Published
- 2013
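A very simplified packing stand-in for intuition: shelf placement of axis-aligned rectangles with a larger-part-first order (echoing the thesis's priority rule). The thesis's bottom-left-fill with rotation and non-convex parts is substantially more involved; everything below is invented for illustration.

```python
def shelf_pack(parts, sheet_width):
    """Place (w, h) rectangles left-to-right on shelves, opening a new
    shelf when a part no longer fits; returns placements and the used
    sheet length (the 'open' dimension)."""
    x = y = shelf_h = 0
    placements = []
    for w, h in sorted(parts, key=lambda p: -p[0] * p[1]):  # larger first
        if x + w > sheet_width:        # open a new shelf
            x, y = 0, y + shelf_h
            shelf_h = 0
        placements.append((x, y, w, h))
        x += w
        shelf_h = max(shelf_h, h)
    return placements, y + shelf_h

parts = [(4, 3), (2, 2), (5, 1), (3, 3), (2, 4)]
print(shelf_pack(parts, sheet_width=8))
```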
17. Entropy functional based adaptive decision fusion framework
- Author
- Günay, Osman, Töreyin, B. U., Köse, Kıvanç, and Çetin, A. Enis
- Subjects
- Signal processing, Active fusion, Decision value, Compound algorithm, Decision fusion methods, Entropy, Human operator, Confidence levels, Decision-fusion algorithms, Wildfire detection, Computer vision applications, Decision fusion, Computer vision, Set theory, Real number, Algorithms, Entropy functional, Projections onto convex sets
- Abstract
Conference: 20th Signal Processing and Communications Applications Conference (SIU), 2012; 18-20 April 2012. In this paper, an entropy functional based online adaptive decision fusion framework is developed for image analysis and computer vision applications. In this framework, the compound algorithm consists of several sub-algorithms, each of which yields its own decision as a real number centered around zero, representing the confidence level of that particular sub-algorithm. Decision values are linearly combined with weights that are updated online according to an active fusion method based on performing entropic projections onto convex sets describing the sub-algorithms. It is assumed that there is an oracle, usually a human operator, providing feedback to the decision fusion method. A video based wildfire detection system was developed to evaluate the performance of the decision fusion algorithm.
- Published
- 2012
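A rough flavor of linear decision fusion with online weight updates: the multiplicative step below is a generic exponentiated-gradient update, not the paper's entropic-projection recursion, and all values are invented.

```python
import numpy as np

def fuse(decisions, weights):
    """Linear decision fusion: sign of the weighted sum of sub-algorithm
    confidence values (each a real number centered around zero)."""
    return np.sign(weights @ decisions)

def update_weights(weights, decisions, oracle, lr=0.1):
    """Illustrative multiplicative update toward the oracle's feedback
    (only the flavor of the paper's entropic projections onto convex sets)."""
    weights = weights * np.exp(lr * oracle * decisions)
    return weights / weights.sum()          # stay on the simplex

weights = np.ones(3) / 3                    # three sub-algorithms
decisions = np.array([0.8, -0.2, 0.5])      # their confidence values
oracle = 1.0                                # human operator: "wildfire"
weights = update_weights(weights, decisions, oracle)
print(weights.round(3), fuse(decisions, weights))
```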
18. Performance analysis of metaheuristic algorithms on multilevel image thresholding problems
- Author
- Pekdemir, Gökhan and Baykan, Ömer Kaan (Bilgisayar Mühendisliği Anabilim Dalı [Department of Computer Engineering])
- Subjects
- Optimization, Particle swarm optimization, Exhaustive search, Firefly algorithm, Multilevel image thresholding problem, Computer Engineering and Computer Science and Control, Otsu's method, Cuckoo optimization algorithm, Image, Optimization problem, Algorithms
- Abstract
Image segmentation is the process of separating an image into non-overlapping sub-image groups. Multilevel image thresholding is one of the most popular image segmentation techniques and is often cast as the optimization of an objective function. In this paper, Otsu's method, which uses the gray-level distribution (histogram) of the image, is used as the objective function for the multilevel image thresholding problem. To determine the optimal threshold values on the selected test images, experiments were performed in which Otsu's objective function was both minimized (within-class variance) and maximized (between-class variance) with Particle Swarm Optimization (PSO), the Firefly Algorithm (FA), the Cuckoo Optimization Algorithm (COA), and Exhaustive Search. The performances of the metaheuristic techniques were compared in terms of stability, solution quality, and convergence time. In general, COA was observed to be more successful with respect to solution quality and stability, and PSO with respect to convergence time.
- Published
- 2012
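Otsu's between-class variance and the exhaustive baseline translate directly into code; a sketch on a synthetic trimodal histogram (the metaheuristics in the paper would replace the exhaustive max below):

```python
import numpy as np
from itertools import combinations

def between_class_variance(hist, thresholds):
    """Otsu's between-class variance for a gray-level histogram split at
    the given thresholds (maximizing it is Otsu's criterion)."""
    p = hist / hist.sum()
    levels = np.arange(len(hist))
    mu_total = (p * levels).sum()
    var, edges = 0.0, [0, *thresholds, len(hist)]
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = p[lo:hi].sum()
        if w > 0:
            mu = (p[lo:hi] * levels[lo:hi]).sum() / w
            var += w * (mu - mu_total) ** 2
    return var

rng = np.random.default_rng(6)
img = np.concatenate([rng.normal(60, 10, 4000),
                      rng.normal(130, 10, 4000),
                      rng.normal(200, 10, 4000)]).clip(0, 255).astype(int)
hist = np.bincount(img, minlength=256)

# Exhaustive search over two thresholds.
best = max(combinations(range(1, 256), 2),
           key=lambda t: between_class_variance(hist, t))
print("thresholds:", best)
```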
19. Successive cancelation approach for Doppler frequency estimation in pulse Doppler radar systems
- Author
- Soğancı, Hamza and Gezici, Sinan
- Subjects
- Signal processing, Radar systems, Global minima, Iterative methods, Received signals, Doppler radar, Radar target recognition, Iterative algorithm, Waveform structure, Cost functions, Radar, Signal to noise ratio, Point targets, Doppler, Monte Carlo simulation, Pulse Doppler, Monte Carlo methods, Doppler frequency, Computer simulation, Matched filtering, Maximum likelihood estimation, Doppler effect, Particle swarm optimization (PSO), Frequency estimation, Doppler frequency estimation, Algorithms
- Abstract
Conference: 22-24 April 2010. In this paper, a successive cancelation approach is proposed to estimate the Doppler frequencies of targets in pulse Doppler radar systems. The technique utilizes the Doppler domain waveform structure of the received signal coming from a point target after the matched filtering and pulse Doppler processing steps. The proposed technique is an iterative algorithm: in each iteration, the target that minimizes a cost function is found, and the signal coming from that target is subtracted from the total received signal. These steps are repeated until no targets remain. The global minimum of the cost function in each iteration is found via particle swarm optimization (PSO). The performance of the technique is compared with the optimal maximum likelihood solution for various signal-to-noise ratio (SNR) values based on Monte Carlo simulations.
- Published
- 2010
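The successive-cancelation loop itself is easy to sketch; here FFT-peak picking stands in for the paper's PSO-based cost minimization, with invented frequencies and noise level:

```python
import numpy as np

rng = np.random.default_rng(7)

# Received signal: two Doppler tones plus noise (freq. in cycles/sample).
n = np.arange(256)
true_f = [0.12, 0.31]
r = sum(np.exp(2j * np.pi * f * n) for f in true_f)
r = r + 0.3 * (rng.normal(size=n.size) + 1j * rng.normal(size=n.size))

# Successive cancelation: estimate the strongest tone, subtract it, repeat.
estimates = []
for _ in range(len(true_f)):
    spec = np.fft.fft(r, 4096)
    k = np.argmax(np.abs(spec))
    f_hat = k / 4096
    amp = (r @ np.exp(-2j * np.pi * f_hat * n)) / n.size   # LS amplitude
    r = r - amp * np.exp(2j * np.pi * f_hat * n)            # cancel it
    estimates.append(f_hat)
print(sorted(estimates))   # approximately [0.12, 0.31]
```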
20. The effect of using visualization tools in programming instruction on student achievement and motivation
- Author
- Gülmez, Işil and Özdener Dönmez, Nesrin (Bilgisayar ve Öğretim Teknolojileri Eğitimi Anabilim Dalı [Department of Computer Education and Instructional Technologies])
- Subjects
- Motivation, Student achievement, Visual tools, Teaching, Programming, Education and Training, Primary education students, Teaching methods, Teaching aids, Algorithms
- Abstract
In this study, the effects of using visualization tools in programming instruction on student achievement and motivation were investigated. The study also tried to identify the school subjects that correlate with elementary school students' algorithm development achievement, as this is useful for predicting their programming achievement. The study, which used experimental and correlational survey models, was conducted with two experimental groups: one group used a flowchart-model tool, the other a tool that narrates the algorithm as a story. The groups' achievement in using variables, conditions, and loops was compared using two post-tests, one paper-based and one computer-based. The motivation scale developed by Özerbaş (2003) was used for comparing the groups' motivation, and correlation tests were used to determine the subjects that relate to algorithm development achievement. The results revealed a significant difference in the use of conditions and loops in favor of the narrative tool. There was no significant difference between the groups' motivation. The subjects that correlate significantly with students' algorithm development achievement are Turkish, mathematics, English, and computer technologies. Key Words: Flowchart-model tools, narrative tools, algorithm development achievement, motivation.
- Published
- 2009
21. Bar code localization by image processing
- Author
- Öktem, R. and Çetin, A. Enis
- Subjects
- Bar code recognition, Problem solving, Bar codes, Image processing, Time complexity, Object recognition, Binary edge maps, Algorithms, Binary subband decomposition
- Abstract
Conference: Proceedings of the IEEE 13th Signal Processing and Communications Applications Conference, 2005; 16-18 May 2005. This paper addresses the problem of bar code recognition by use of image processing. Bar codes are composed of parallel alternating dark-light stripes; hence they also appear as parallel lines connected at some orientation in a binary edge map. The proposed algorithms exploit this feature and use morphology and free angle thresholding for localization. The edge map is formed by the Sobel operator and by binary subband decomposition separately, and the two methods are compared in terms of time complexity and performance. A detailed discussion of test results is also presented. ©2005 IEEE.
- Published
- 2005
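A sketch of the edge-density idea (Sobel edge map, then morphology) using scipy.ndimage; the thresholds, structuring-element size, and synthetic test image are invented, and the paper's free-angle thresholding and subband variant are not reproduced.

```python
import numpy as np
from scipy import ndimage

def barcode_mask(image, edge_thresh=0.3):
    """Rough bar code localization: bar codes produce dense vertical
    edges, so threshold the horizontal Sobel response and use morphology
    to merge the neighboring stripes into one region."""
    gx = ndimage.sobel(image.astype(float), axis=1)   # horizontal gradient
    edges = np.abs(gx) > edge_thresh * np.abs(gx).max()
    # Close the horizontal gaps so the parallel stripes merge.
    region = ndimage.binary_closing(edges, structure=np.ones((1, 15)))
    labels, n = ndimage.label(region)
    if n == 0:
        return None
    sizes = ndimage.sum(region, labels, index=range(1, n + 1))
    return labels == (1 + int(np.argmax(sizes)))   # largest edge-dense blob

# Synthetic test: vertical stripes in the middle of a flat image.
img = np.zeros((64, 128))
img[20:44, 40:88:4] = 1.0
mask = barcode_mask(img)
print(mask.sum() > 0)
```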
22. SVD-based noise filtering using a rank estimation method
- Author
- Çek, Mehmet Emre and Savacı, Ferit Acar (Izmir Institute of Technology, Electronics and Communication Engineering)
- Subjects
- Noisy matrix, Singular value decomposition, Noise abatement, Data matrix, Algorithms
- Abstract
In this paper, an algorithm that performs the singular value decomposition of a noisy matrix is presented in order to achieve noise reduction by estimating the rank of the noise-free data matrix. The rank estimation is done by finding the ratios between all consecutive singular values and selecting the maximum of these ratios as the noise threshold.
- Published
- 2004
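The abstract's rank rule translates almost directly into code; a minimal sketch (numpy only, with synthetic low-rank data):

```python
import numpy as np

def svd_denoise(Y):
    """Estimate the rank of the noise-free matrix as the position of the
    largest ratio between consecutive singular values (the gap marks the
    noise threshold, as in the paper), then truncate the SVD there."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    ratios = s[:-1] / s[1:]
    r = int(np.argmax(ratios)) + 1          # estimated rank
    return (U[:, :r] * s[:r]) @ Vt[:r], r

rng = np.random.default_rng(8)
clean = rng.normal(size=(40, 3)) @ rng.normal(size=(3, 30))   # rank 3
noisy = clean + 0.05 * rng.normal(size=clean.shape)
denoised, r = svd_denoise(noisy)
print(r, np.linalg.norm(denoised - clean) < np.linalg.norm(noisy - clean))
```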
23. Finding product associations in market basket analysis
- Author
- Kaya, Yalçin, Göktürk, Mehmet, and others
- Subjects
- Basket analysis, Data mining, Computer Engineering and Computer Science and Control, Algorithms
- Abstract
The general goal of data mining is to extract interesting correlated information from a large collection of data; in other words, data mining is the analysis of large amounts of data in order to find meaningful patterns and rules. A key computationally intensive sub-problem of data mining involves finding frequent sets in order to help mine association rules for market basket analysis. Given a bag of sets and a probability, the frequent set problem is to determine which subsets occur in the bag with at least that minimum probability. Beginning with a simple but inefficient specification expressed in a functional language, a new and fast algorithm is calculated in a systematic manner from the specification by applying a sequence of known calculation techniques. In the application, a large accumulation of data consisting of subsets of items and item arrays is obtained, and the item sets that exceed a defined threshold are processed to find the relations among different items within the transactions. To accomplish this, a new algorithm was formulated based on the algorithm developed by Zhenjiang Hu [Zhe, 2001]. The items that exceed the threshold are the ones that occur most frequently.
- Published
- 2003
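A naive frequent-set counter for intuition; the thesis derives a faster algorithm from a functional specification, which this sketch does not reproduce (itemset size is capped at 3 here just to bound the enumeration):

```python
from itertools import combinations

def frequent_itemsets(baskets, min_support):
    """Count every itemset up to size 3 and keep those whose support
    (fraction of baskets containing them) meets the threshold.
    Real miners (e.g. Apriori) prune this search instead."""
    counts = {}
    for basket in baskets:
        for size in (1, 2, 3):
            for items in combinations(sorted(basket), size):
                counts[items] = counts.get(items, 0) + 1
    n = len(baskets)
    return {items: c / n for items, c in counts.items()
            if c / n >= min_support}

baskets = [{"bread", "milk"}, {"bread", "butter", "milk"},
           {"beer", "bread"}, {"bread", "milk"}]
print(frequent_itemsets(baskets, min_support=0.5))
```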
24. Two learning algorithms for cellular neural networks and their image processing applications
- Author
- Karamahmut, Sinan, Güzeliş, Cüneyt, and others
- Subjects
- Image processing, Artificial neural networks, Electrical and Electronics Engineering, Learning, Algorithms
- Abstract
ÖZET Bu tezde, tam kararlı Hücresel Yapay Sinir Ağlan ( HYSA ) için istenen kararlı hal çıkışlarının eğiticili öğretilmesi amacına yönelik olarak Dinamik Algılayıcı öğrenme Algoritması (`Recurrent Perceptron Learning Algorithm : RPLA `) ve Dinamik Geriye- Yayılım öğrenme Algoritması (` Recurrent Backpropagation Learning Algorithm : RBLA `) acunda iki farklı eğiticili öğrenme algoritması geüştirilmiştir. Bu algoritmaları kullanan, HYSA '1ar için şablon öğrenme programı olan SLAT (` Supervised Learning Algorithm Tool `) programı yazılarak, bir kişisel bilgisayar ortamında benzitim düzeni elde edilmiştir. Geliştirilen algoritmalar, HYSA 'nın eğiticili öğrenmesinin bir amaç ölçütünün tasarım kısıtlamaları altında enazını bulma sorununa dönüştürülmesine dayanmaktadır. Enazlanmak istenen amaç ölçütü, istenen kararlı-durum çıkışları ile gerçek çıkışlar arasındaki öklid uzaklığını veren hata işlevidir. Tasarım kısıtlamaları ise HYSA 'nın tam-kararlı olmasını, yani kararlı-durumda tüm yörüngelerinin denge noktalarından birine yerleşmesini sağlayan bağlantı ağırlıklarının simetrik olma koşulu ile karalı-durum çıkışlarının iki-kutuplu ( + 1) olmasını sağlayan çıkıştan öz-geribesleme bağlantı ağırlığının, durumdan öz- geribesleme katsayısından büyük olma koşulundan oluşmaktadır. Geliştirilen algoritmalardan RBLA, hata işlevinin kısıtlamalar altında yerel enaz noktalarından birinin yeterince küçük seçilen öğrenme oranlan durumunda veren, izdüşüm türünden bir eğim-düşme enazlama algoritmasıdır. RBLA, Hopfield ağı için verilmiş olan Geriye- Yayılım Algoritmasının tam-kararlı HYSA için bir uygulamasıdır. Tam-bağlantılı bir ağ olan Hopfield ağında gerekli bağlantı ağırlık katsayılarının sayısı HYSA 'daki yerel ve düzgün bağlantıyı belirleyen şablon katsayılarının sayışma göre çok fazla olduğundan, önerilen RBLA, Hopfield ağı için önerilen algoritmaya göre, bu açıdan daha etkindir. HYSA 'lan için tezde önerilen RBLA 'nın, çıkış işlevinin doymaya yalan bölgelerde çok düşük türevleri olması yüzünden yavaş yakınsama sorunları vardır. Bu sorunlar tezde geliştirilen çeşitli teknikler ile yenilmeye çalışılmıştır. RPLA ise, HYSA 'nın herbir hücresinin kararlı-durumda hücreye giren toplam girişe bağlı olarak çıkışında +1 ya da -1 veren Algılıyıcı (`Perceptron`) benzeri bir işlem elemanı olarak çalışması gözlemine dayanarak geliştirilmiştir. RPLA, tam-kararlı HYSA 'nın eğiticili öğrenmesi için geliştirilmiş bilinen en etkin öğrenme algoritmasıdır. vı SUMMARY TWO LEARNING ALGORITHMS FOR CELLULAR NEURAL NETWORKS AND THEIR IMAGE PROCESSING APPLICATIONS In this thesis, two supervised learning algorithms for obtaining the template coefficients in completely stable Cellular Neural Networks (CNNs) are presented. The Recurrent Backpropagation Learning Algorithm (RBLA) can be viewed as the CNN version of the well-known Recurrent Backpropagation Algorithm originally developed for a Hopfield type completely stable continuous-time dynamical neural network. The second algorithm presented is inspired by the analogy between the input-output relation of the well-known Perceptron and the steady-state behavior of the cells in CNNs. It is hence called as Recurrent Perceptron Learning Algorithm (RPLA) since applied to a dynamical network, CNN. From the advent of the first useful computer ( ENIAC ) in 1946 until the late 1980s, essentially all information processing applications used a single basic approach: programmed computing. 
Solving a problem using programmed computing involves devising an algorithm and/or a set of rules for solving the problem, correctly coding these in software, and making the necessary revisions and improvements. Clearly, programmed computing can be used only in those cases where the processing to be accomplished can be described in terms of a known procedure or a known set of rules. If the required algorithmic procedure and/or set of rules are not known, then they must first be developed, an undertaking that, in general, has been found to be costly and time consuming. In fact, if the algorithm required is not simple (which is frequently the case with the most desirable capabilities), the development process may have to await a flash of insight. Obviously, such an innovation process cannot be accurately planned or controlled. Even when the required algorithm or rule set can be devised, the problem of software development still must be faced. Because current computers operate on a totally logical basis, software must be virtually perfect if it is to work. The exhaustive design, testing, and iterative improvement that software development demands make it a lengthy and expensive process.

A new approach to information processing that does not require algorithm or rule development, and that often significantly reduces the quantity of software that must be developed, has recently become available. This approach, called neurocomputing, allows, for some types of problems (typically in areas such as sensor processing, image processing, pattern recognition, data analysis, and control), the development of information processing capabilities for which the algorithms or rules are not known (or where they might be known, but where the software to implement them would be too expensive, time consuming, or inconvenient to develop). For those information processing operations amenable to neurocomputing implementation, the software that must be developed is typically for relatively straightforward operations such as data file input and output, peripheral device interfacing, preprocessing, and postprocessing. The Computer Aided Software Engineering (CASE) tools often used with neurocomputing systems can frequently be utilized to build these routine software modules in a few hours. These properties make neurocomputing an interesting alternative to programmed computing, at least in those areas where it is applicable.

Formally, neurocomputing is the technological discipline concerned with parallel, distributed, adaptive information processing systems that develop information processing capabilities in adaptive response to an information environment. The primary information processing structures of interest in neurocomputing are artificial neural networks (although other classes of adaptive information processing structures are sometimes also considered, such as learning automata, genetic learning systems, data-adaptive content-addressable memories, simulated annealing systems, associative memories, and fuzzy learning systems). Artificial neural systems function as parallel distributed computing networks. Their most basic characteristic is their architecture. Only some of the networks provide instantaneous responses; other networks need time to respond and are characterized by their time-domain behavior, which we often refer to as dynamics. Neural networks also differ from each other in their learning modes: there are a variety of learning rules that establish when and how the connection weights change. Finally, networks exhibit different speeds and efficiencies of learning.
As a result, they also differ in their ability to accurately respond to the cues presented at the input. In contrast to conventional computers, which are programmed to perform specific tasks, most neural networks must be taught, or trained. Learning corresponds to parameter changes. Learning rules and algorithms used for experiential training of networks replace the programming required for conventional computation. Neural network users do not specify an algorithm to be executed by each computing node, as would programmers of a more traditional machine. Instead, they select what in their view is the best architecture, specify the characteristics of the neurons and the initial weights, and choose the training mode for the network. Appropriate inputs are then applied to the network so that it can acquire knowledge from the environment. As a result of such exposure, the network assimilates the information, which can later be recalled by the user.

In the past three decades a number of neural network architectures have been developed, inspired both by the principles governing biological neural systems and by the well-established theories of engineering and the fundamental sciences. Most of the widely applied neural networks fall into two main classes: 1) memoryless neural networks and 2) dynamical neural networks. From a circuit-theoretical point of view, memoryless neural networks are non-linear resistive circuits, while dynamical neural networks are non-linear R-L-C circuits. A memoryless neural network defines a non-linear transformation from the space of input signals into the space of output signals. Such networks have been successfully used in pattern recognition and in several problems which can be defined as a non-linear transformation between two spaces. As in the Hopfield network and the Cellular Neural Network, dynamical neural networks have usually been designed as dynamical systems where the inputs are set to some constant values and each trajectory approaches one of the stable equilibrium points depending upon the initial state. Useful applications of these networks include image processing, pattern recognition, and optimization. Because the grid topology of Cellular Neural Networks is tailor-made for image processing, this artificial neural network model is the one considered in this thesis.

A Cellular Neural Network is a 2-dimensional array of cells [1]. Each cell is made up of a linear resistive summing input unit, an R-C linear dynamical unit, and a 3-region, symmetrical, piecewise-linear resistive output unit. The cells in a CNN are connected only to the cells in their nearest neighborhood, defined by the metric d(i,j;\hat{i},\hat{j}) = \max\{|i-\hat{i}|, |j-\hat{j}|\}, where (i,j) is the pair of integers indexing the cell C(i,j) in the i-th row and j-th column of the 2-dimensional array. The system of equations describing a CNN with a neighborhood size of one is given in (1)-(2):

\dot{x}_{i,j} = -A x_{i,j} + \sum_{p,l \in \{-1,0,1\}} w_{p,l} \, y_{i+p,j+l} + \sum_{p,l \in \{-1,0,1\}} z_{p,l} \, u_{i+p,j+l} + I,   (1)

y_{i,j} = f(x_{i,j}) = \tfrac{1}{2} \bigl( |x_{i,j} + 1| - |x_{i,j} - 1| \bigr),   (2)

where A, I, w_{p,l}, and z_{p,l} \in R are constants. It is known from [1] that a CNN is completely stable if the feedback connection weights w_{p,l} are symmetric.
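As a rough illustration of equations (1)-(2), the following Python sketch integrates the CNN state equation with forward Euler until the outputs settle. The template values, step size, and zero boundary condition are illustrative assumptions of mine, not values taken from the thesis:

    import numpy as np

    def f(x):
        # Piecewise-linear output of eq. (2): f(x) = 0.5*(|x+1| - |x-1|)
        return 0.5 * (np.abs(x + 1.0) - np.abs(x - 1.0))

    def cnn_steady_state(u, x0, W, Z, I, A=1.0, dt=0.05, steps=4000):
        """Forward-Euler integration of eq. (1) on an m x n grid.

        u : external input image, x0 : initial state (both m x n)
        W : 3x3 feedback template (w_{p,l}), Z : 3x3 input template (z_{p,l})
        I : bias, A : state decay constant.
        """
        x = x0.copy()
        m, n = x.shape
        up = np.pad(u, 1)                    # zero boundary cells (an assumption)
        for _ in range(steps):
            yp = np.pad(f(x), 1)
            net = np.full((m, n), float(I))
            for p in (-1, 0, 1):             # sum over the 3x3 neighborhood
                for l in (-1, 0, 1):
                    net += W[p + 1, l + 1] * yp[1 + p:1 + p + m, 1 + l:1 + l + n]
                    net += Z[p + 1, l + 1] * up[1 + p:1 + p + m, 1 + l:1 + l + n]
            x += dt * (-A * x + net)         # eq. (1)
        return f(x)                          # steady-state (near-bipolar) output

    # Hypothetical edge-detection-style templates; note W is symmetric and
    # its center entry (a5 = 2.0) exceeds A, matching the stability and
    # bipolarity constraints discussed in this record.
    W = np.array([[0.0, 0.0, 0.0], [0.0, 2.0, 0.0], [0.0, 0.0, 0.0]])
    Z = np.array([[-1.0, -1.0, -1.0], [-1.0, 8.0, -1.0], [-1.0, -1.0, -1.0]])
    u = np.sign(np.random.randn(8, 8))
    y = cnn_steady_state(u, x0=u.copy(), W=W, Z=Z, I=-0.5)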
Throughout the thesis, the input connection weights z_{p,l} are chosen symmetric for reducing computational costs, while the feedback connection weights w_{p,l} are chosen symmetric for ensuring the complete stability, i.e.,

w_{-1,-1} = w_{1,1} = a_1,  w_{-1,0} = w_{1,0} = a_2,  w_{-1,1} = w_{1,-1} = a_3,  w_{0,-1} = w_{0,1} = a_4,  w_{0,0} = a_5;
z_{-1,-1} = z_{1,1} = b_1,  z_{-1,0} = z_{1,0} = b_2,  z_{-1,1} = z_{1,-1} = b_3,  z_{0,-1} = z_{0,1} = b_4,  z_{0,0} = b_5.

In this thesis, the learning is accomplished through modification of the following weight vector w \in R^{11}:

w = [a^T \; b^T \; I]^T = [a_1 \, a_2 \, a_3 \, a_4 \, a_5 \, b_1 \, b_2 \, b_3 \, b_4 \, b_5 \, I]^T.   (3)

Several design methods for determining the entries of w, i.e., the feedback template coefficients a_i, the input template coefficients b_i, and the threshold I, have been proposed in the literature [1], [3], [4]. The well-known relaxation methods for solving linear inequalities are used in [3]-[4] for finding one of the connection weight vectors such that the desired outputs are obtained as the actual outputs for the given external inputs and for properly chosen initial states. However, the methods in [3]-[4] do not specify which initial state vectors, except for the following trivial one, yield the desired output for the given external input and the weight vector found. The trivial solution in the determination of such a proper initial state vector is to take the desired output as the initial state; but this requires knowledge of the desired output, which is not available for external inputs outside the training set. For this reason, it is still desirable to develop new design methods and supervised learning algorithms for finding the connection weights yielding the desired output for the given external input and the chosen initial state. The supervised learning algorithms in this thesis are proposed for this purpose.

Recently, a number of supervised learning algorithms for CNNs have been given in the literature [2], [7]-[9], [11]-[14]. The well-known Backpropagation Through Time Algorithm is applied in [7] for learning desired trajectories in continuous-time CNNs. A modified Alternating Variable Method is used in [8] for learning the steady-state outputs in discrete-time CNNs. Both of these algorithms are proposed for use with any kind of CNN, and hence they do not take into account the constraints which need to be imposed on the connection weights for ensuring the complete stability and the bipolarity of the steady-state outputs. It is shown in [9] that the supervised learning of the steady-state outputs in completely stable generalized CNNs [10] is a constrained optimization problem, where the objective function is the error function and the constraints are due to some desired qualitative and quantitative design requirements such as the bipolarity of the steady-state outputs and the complete stability. The algorithm given in [9] is a gradient-descent algorithm and is indeed an extension of the recurrent backpropagation algorithm to generalized CNNs. Recurrent backpropagation is applied also in [11]-[12] to a modified version of CNNs differing from the original CNN model in the following respects: i) the cells are fully connected, ii) the output function is a differentiable sigmoidal one, and iii) the network is designed as a globally asymptotically stable network. In a very recent paper [13], modified versions of the backpropagation through time and the recurrent backpropagation algorithms are used for finding a minimum point of an error measure defined on the states instead of the outputs.
It is assumed in [13] that the steady-state values of the states are differentiable with respect to the connection weights. However, this assumption is true only if the magnitudes of the states are all strictly greater than one. Therefore, the gradient calculated in [13] does not describe the jumping of the steady-state values of the states from one saturation region to the other as a consequence of changes in the connection weights. The lack of a derivative of the error function prevents the use of gradient-based methods for finding the templates minimizing the error. In order to overcome this problem, the output function can be replaced [14] with a continuously differentiable one which is very close to the original piecewise-linear function. Whereas the gradient methods are then applicable, the error surfaces have almost flat portions, resulting in extremely slow convergence [14]. An alternative solution to this problem is to use methods not requiring the derivative of the error function. Such a method is given in [2] by introducing genetic optimization algorithms for supervised learning of the optimal template coefficients.

The Recurrent Backpropagation Learning Algorithm (RBLA) given in this thesis is a special case of the one given in [30] and can also be viewed as the CNN version of the Recurrent Backpropagation Algorithm. The algorithm finds the optimal template coefficients and the threshold minimizing the total error function while satisfying the constraints i) w_{0,0} = a_5 > A, and ii) the symmetry conditions on the feedback template coefficients, where the first constraint ensures the bipolarity of the steady-state outputs and the second ensures the complete stability of CNNs.

The Recurrent Perceptron Learning Algorithm (RPLA) proposed in this thesis does not need the derivative of the error function. It is developed for finding the template coefficients of a CNN so as to realize an input-(steady-state)output map described by a set of training samples. Here, the input consists of two parts: the first part is the external input and the second is the initial state. The algorithm resembles the well-known Perceptron learning algorithm [16] and is hence called the Recurrent Perceptron Learning Algorithm (RPLA) for CNNs. RPLA starts with an initial weight vector satisfying the constraint w_{0,0} = a_5 > A, ensuring the bipolarity of the steady-state outputs. The updated weight vector obtained in each step of the algorithm is projected onto the constraint set \Lambda = \{ w \in R^{11} : w_{0,0} = a_5 > A \}. It is shown in the thesis that if there is a weight vector which satisfies the bipolarity constraint and yields zero Output Mismatching Error as defined in (4), then RPLA finds this vector with a suitable time-varying learning rate:

e(w) = \sum_{(i,j) \in D^+ \cup D^-} | y_{i,j}(\infty) - d_{i,j} |,   (4)

where D^+ = \{ (i,j) : y_{i,j}(\infty) = -d_{i,j} = +1 \} and D^- = \{ (i,j) : y_{i,j}(\infty) = -d_{i,j} = -1 \} are the sets of mismatched cells. It is assumed here that the state vector is never chosen equal to one of the equilibrium points in the center region or the partial-saturation regions of the state space, and therefore any steady-state output is either +1 or -1.
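This record does not spell out RPLA's update rule, so the following is only a plausible perceptron-style reconstruction, not the thesis's actual algorithm: mismatched cells contribute a correction built from their symmetry-folded neighborhood values (matching the parameterization in (3)), and each update is projected back onto the bipolarity constraint a_5 > A. It reuses cnn_steady_state from the earlier sketch; the helpers fold and expand are my own names:

    import numpy as np

    def expand(w):
        # 11-vector [a1..a5, b1..b5, I] -> symmetric 3x3 templates, eq. (3)
        a1, a2, a3, a4, a5, b1, b2, b3, b4, b5, I = w
        W = np.array([[a1, a2, a3], [a4, a5, a4], [a3, a2, a1]])
        Z = np.array([[b1, b2, b3], [b4, b5, b4], [b3, b2, b1]])
        return W, Z, I

    def fold(T):
        # Collapse a 3x3 neighborhood patch to the 5 symmetric coefficients,
        # summing the two positions that share each template entry.
        return np.array([T[0, 0] + T[2, 2], T[0, 1] + T[2, 1],
                         T[0, 2] + T[2, 0], T[1, 0] + T[1, 2], T[1, 1]])

    def rpla_epoch(w, samples, lr, A=1.0, eps=1e-3):
        """One hypothetical RPLA pass over (u, x0, d) training samples."""
        for u, x0, d in samples:
            W, Z, I = expand(w)
            y = np.sign(cnn_steady_state(u, x0, W, Z, I, A))  # bipolar y(inf)
            yp, up = np.pad(y, 1), np.pad(u, 1)
            for i, j in zip(*np.nonzero(y != d)):             # mismatched cells
                # Perceptron-like correction: (d - y) times the cell's features
                phi = np.concatenate([fold(yp[i:i + 3, j:j + 3]),
                                      fold(up[i:i + 3, j:j + 3]), [1.0]])
                w = w + lr * (d[i, j] - y[i, j]) * phi
            w[4] = max(w[4], A + eps)          # project onto {w : a5 > A}
        return w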
The proposed algorithms RBLA and RPLA have been applied to several image processing problems. It has been observed that CNNs using the templates learned by RBLA and RPLA can learn edge detection, corner detection, and hole filling tasks very quickly. The structure of the thesis is as follows. Chapter 2 gives an introduction to artificial neural networks, and the general structure of CNNs is presented in Chapter 3. To put the work carried out in this thesis into context, general information on learning and on the different learning algorithms for artificial neural networks is given in Chapters 4 and 5, respectively. In Chapter 6, the Recurrent Backpropagation Learning Algorithm (RBLA) and the Recurrent Perceptron Learning Algorithm (RPLA) are proposed. Chapter 7 is a guide to the simulation tool SLAT (Supervised Learning Algorithm Tool), which uses RBLA and RPLA. The simulation results of SLAT are presented in Chapter 8.
- Published
- 1994
25. Design of encryption/decryption software using the RSA algorithm
- Author
-
Erhan, Metin, Örencik, Mehmet Bülent, and others
- Subjects
Design ,Software industry ,Encryption ,Rivest-Shamir-Adleman ,Computer Engineering and Computer Science and Control ,Algorithms - Abstract
ÖZET: Encryption/decryption is used to secure communications or files on a computer network or on personal computers. For this reason, the importance of encryption in computers and computer networks is increasing day by day. The public-key encryption software designed in this work was implemented to provide file security on personal computers using the RSA algorithm. The design contains two programs with different functions. The first performs the selection of the prime numbers that form the basis of the RSA encryption system and, from these, the computation of the keys. In selecting these secret primes, care was taken that the public number formed as their product cannot easily be factored. Various algorithms and probabilistic test methods were used in the selection of the primes. The program performing the encryption/decryption operation uses the keys determined by the first program and, with the help of various algorithms, reaches the result in a short time. Since the numbers used in these programs are very large, functions operating on binary representations were defined for all basic operations. The C programming language was used for the programs because of its well-known advantages. In addition, the DES algorithm, key management, other known public-key systems and their applications, digital signatures and hash functions, and cryptographic protocols are briefly described.

SUMMARY: DATA ENCRYPTION / DECRYPTION METHODS AND SOFTWARE DESIGN OF RSA ALGORITHM. Cryptography is a word that has been derived from the Greek words for "secret writing". It generally implies that information which is secret or sensitive may be converted from an intelligible form to an unintelligible form. The intelligible form of information or data is called "plaintext" and the unintelligible form is called "ciphertext". The process of converting from plaintext to ciphertext is called "encryption" and the reverse process is called "decryption". Most cryptographic algorithms make use of a secret value called the key. Encryption and decryption should be virtually impossible without the use of the correct key. The process of attempting to find a shortcut method, not envisioned by the designer, for decrypting the ciphertext when the key is unknown is called "cryptanalysis".

Computer communication systems, local-area networks, interconnected local-area networks, and electronic mail systems are playing an increasingly important role in office automation, telecommunications, and factory automation. A prerequisite for extensive usage of these services, with full or partial replacement of conventional paper mail by an electronic medium, is security. It must be possible to guarantee the secrecy of a message. Furthermore, the receiver of a message wants to verify that the indicated and the real sender are one and the same (i.e., there must be a provision for electronic (digital) signatures and signature verification). Two major cryptosystems are in use today: private key cryptosystems and public key cryptosystems. Two major encryption algorithms are related to these cryptosystems: DES and RSA, respectively. After the publication of the Data Encryption Standard in 1977, it quickly became clear that there was much more to the implementation of a secure cryptographic system than a high quality cryptographic algorithm.
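The ÖZET above notes that probabilistic primality tests were used in selecting the secret primes. The record does not name the specific tests, so the choice of Miller-Rabin in the following Python sketch is an assumption; it is, however, the standard probabilistic test for this purpose:

    import random

    def is_probable_prime(n, rounds=20):
        """Miller-Rabin probabilistic primality test (sketch)."""
        if n < 2:
            return False
        for p in (2, 3, 5, 7, 11, 13):      # quick trial division
            if n % p == 0:
                return n == p
        d, s = n - 1, 0                     # write n-1 = d * 2^s, d odd
        while d % 2 == 0:
            d //= 2
            s += 1
        for _ in range(rounds):
            a = random.randrange(2, n - 1)
            x = pow(a, d, n)
            if x in (1, n - 1):
                continue
            for _ in range(s - 1):
                x = pow(x, 2, n)
                if x == n - 1:
                    break
            else:
                return False                # composite witness found
        return True                         # probably prime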
It can be argued that the development of a secure cryptoalgorithm is an essential tool, but only one building block, of a secure data system. Well-known organizations have developed data security standards for security applications. Their goal was to achieve a common level of security and inter-operability. The efforts of the standards-making organizations have also served a purpose far beyond the actual standards that were developed. Standardization, validation, and certification programs greatly increased the public's interest in cryptography and raised the level of confidence that it could be a cost-effective solution to practical security problems. There is still much to decide about the best use of cryptography, but there is now no doubt that it will be used far beyond its original military applications.

Originally, standard data encryption algorithms were intended for the encryption and decryption of computer data. However, their application has been extended to data authentication as well. In automated data processing systems it is often not possible for humans to scan data to determine if it has been modified. Examination may be too time consuming for the vast quantities of data involved in modern data processing, or the data may have insufficient redundancy for error detection. Even if human scanning were possible, the data could have been modified in such a manner that it would be very difficult for the human to detect the modification. For example, "do" may have been changed to "do not", or "1900" may have been changed to "9100". Without additional information the human scanner could easily accept the altered data as authentic. These threats may still exist even when data encryption is used. It is therefore desirable to have an automated means of detecting both intentional and unintentional modifications of data. Ordinary error detecting codes are not adequate because, if the algorithm for generating the code is known, an adversary can generate the correct code after modifying the data. Intentional modification is undetectable with such codes. However, DES, for example, can be used to produce a cryptographic checksum which can protect against both accidental and intentional, but unauthorized, data modification.

When using a private key cryptosystem such as DES, both the sender and the receiver must know the key used to encrypt (and decrypt) the data. Therefore, you need a safe means of transmitting the key from one to the other. If you change the keys frequently, transmitting them becomes a major problem. Furthermore, it is impossible to communicate with someone new until you have safely exchanged keys, and this can take a long time. The most popular private key cryptosystem is DES. DES works on one 8-byte (64-bit) block at a time. The encryption process is controlled by a user-supplied 56-bit key. Every bit in the output is a complex function of every bit in the key. Decryption under DES is the reverse of encryption and is performed by working the algorithm backward. The encryption process consists of an initial permutation of the input block, followed by 16 rounds of encipherment, and finally an inverse of the initial permutation. After the initial permutation, the block being encrypted is divided into two parts, called L0 and R0. In each of the 16 rounds of encipherment, the new L part is the previous round's R part, and the new R is the previous round's L part XORed with the result of the cipher function f.
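The round structure just described (new L = old R; new R = old L XOR f(R, K)) is the classic Feistel scheme; the details of DES's f function follow below. A generic sketch with a stand-in round function, rather than DES's actual S-boxes and permutations:

    def feistel_encrypt(block_l, block_r, round_keys, f):
        """Generic Feistel network; f is the round function.

        DES additionally applies an initial permutation before, and its
        inverse after, these rounds (omitted here).
        """
        L, R = block_l, block_r
        for K in round_keys:
            L, R = R, L ^ f(R, K)   # new L = old R; new R = old L XOR f(R, K)
        return R, L                 # final swap, so decryption reuses the loop

    def feistel_decrypt(block_l, block_r, round_keys, f):
        # Same structure with the key schedule reversed undoes the rounds.
        return feistel_encrypt(block_l, block_r, list(reversed(round_keys)), f)

    # Toy round function and keys (illustrative only, not DES's f):
    f = lambda r, k: (r * 2654435761 ^ k) & 0xFFFFFFFF
    keys = list(range(1, 17))       # 16 rounds, as in DES
    c = feistel_encrypt(0x01234567, 0x89ABCDEF, keys, f)
    assert feistel_decrypt(*c, keys, f) == (0x01234567, 0x89ABCDEF)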
The cipher function f derives its output from the old R part and the current round's key K_i. The inputs are used to perform substitution via eight lookup tables called S-boxes, and the combined output of the S-boxes is then permuted to give the function's output. DES's biggest weakness is its limited key length; its critics claim that it might be possible to break DES with a brute-force attack (i.e., by trying every possible key).

Public key cryptosystems are designed to overcome the shortcomings of private key cryptosystems. Public key cryptosystems are based on the use of a trap-door one-way function. Such a function can easily be computed in one direction, which is used to encrypt the data. To compute the function in the other direction, used to decrypt the data, you must have certain secret information, hence the name trap-door. In a public key cryptosystem, each person A has two keys: one for encrypting, E_A, and one for decrypting, D_A. Decrypting with D_A a plaintext P that was encrypted using E_A restores the original plaintext; that is, D_A(E_A(P)) = P. Both E_A and D_A should be easy to compute, but knowing E_A does not reveal D_A. If you use a public key cryptosystem, you can publish your encrypting key E_A (the public key) in a public directory, while you keep D_A (the private key) secret. If someone wants to send you a message, all that person has to do is look up your public key E_A and use it to encrypt the message as E_A(P). Only you know the private key D_A, so only you can decrypt the message back to its original plaintext: D_A(E_A(P)) = P.

The most important public key cryptosystem today is RSA, named after its inventors Rivest, Shamir, and Adleman. To use RSA, you need to choose, at random, two large prime numbers, called p and q. Compute n as the product of the two primes: n = p*q. Then randomly choose a large number d such that d is relatively prime to (p-1)*(q-1); in other words, the greatest common divisor of d and (p-1)*(q-1) is 1. Finally, compute e so that (e*d) mod (p-1)*(q-1) = 1. The public key is the pair of numbers (e,n) and the private key is (d,n).

In addition to ensuring privacy, encryption can be used to verify authenticity. For instance, if you send a message to another user, how can that user prove that you did? Simply encrypting the message using a key known only to you and the other user does not solve the problem: the other user would be satisfied that you had sent the message. Public key cryptosystems can provide an elegant and simple solution by creating digital signatures. If you want to send a private message that can be authenticated to someone else, you encrypt D_A(P) with that person's public key, giving E_B(D_A(P)). Using the private key D_B, that person derives D_B(E_B(D_A(P))) = D_A(P) and then decrypts D_A(P) by using E_A(D_A(P)) = P. Thus, both privacy and authenticity have been achieved. To send a secret message M to a user B, user A obtains B's public key E_B, encrypts the plaintext message M as C = E_B(M), and transmits the ciphertext C to B. B's private transformation D_B is the inverse of E_B, so that B can decipher C and obtain M by computing D_B(C) = M. If the cryptosystem is secure, secrecy is possible under the following conditions: no other user knows D_B, and there is enough uncertainty about M. The encryption key E_B is public, so if only a few likely candidates M_1, M_2, ..., M_n exist for M, then M can be found by enciphering these candidates until one is found that enciphers to the same C; that is, E_B(M_j) = C, where M = M_j.
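A toy numeric walk-through of the key setup and encryption steps just described, following the summary's convention of choosing d first and deriving e. The primes here are tiny and purely illustrative; real keys use primes hundreds of digits long:

    from math import gcd

    p, q = 61, 53                   # toy primes; real ones are far larger
    n = p * q                       # n = 3233
    phi = (p - 1) * (q - 1)         # (p-1)*(q-1) = 3120

    d = 2753                        # chosen so that gcd(d, phi) == 1
    assert gcd(d, phi) == 1
    e = pow(d, -1, phi)             # (e*d) mod phi == 1; here e = 17

    M = 65                          # plaintext encoded as a number, M < n
    C = pow(M, e, n)                # encrypt with the public key (e, n)
    assert pow(C, d, n) == M        # decrypt with the private key (d, n)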
With encryption alone, B cannot be sure the received message is the one sent by A, because an active wiretapper could obtain E_B and alter A's message. He might even impersonate A. To give B this assurance, the message must be signed by A. To send a signed message M to B, user A applies the private transformation D_A to M. Ignoring the issue of secrecy for the moment, A computes and transmits to B the digital signature x = D_A(M). A's public transformation E_A is the inverse of D_A, so that B (or a judge) can validate A's signature on an alleged message M by checking whether E_A(x) = M.

Public key systems generally encrypt more slowly than conventional ciphers such as DES. Therefore it is usually not desirable to apply a digital signature directly to a long message. On the other hand, the entire message must be signed. The solution is to use hash functions. A hash function H accepts a variable-size message M as input and outputs a fixed-size representation H(M) of M, sometimes called a message digest. In general, H(M) is much smaller than M.

Regardless of whether a conventional or a public key cryptosystem is used, it is necessary for users to obtain other users' keys. In conventional cryptosystems this problem can be solved by using a courier service or a central authority; another solution is Merkle puzzles. In public key systems the key-management problem is simpler because of the public nature of the key material exchanged between users; the solution there is the exponential key exchange scheme. Several public key systems other than RSA have been proposed. One of them is knapsack systems; another is the ElGamal signature scheme. None of these systems rivals RSA if a combination of versatility, security, and practicality is the criterion. However, this does not preclude their use for specific applications such as digital signatures.

The essence of zero-knowledge is that one party can prove something to another without revealing any additional information. There are protocols related to zero-knowledge proofs; that is, protocols used for giving convincing proofs while disclosing no details. Furthermore, there are important protocols related to the partial disclosure of secrets and to sharing a secret. In modular arithmetic there are very useful theorems and algorithms for efficient computation; these are briefly explained in the thesis.

In this study, the RSA algorithm with large primes is realized in the C programming language. The choice of large primes is handled by another program, which uses some cryptographic theorems and algorithms. Efficient algorithms are used in the RSA program to minimize the run time. The numbers used by the programs can easily be enlarged because of the modular structure of the programs. For each mathematical operation, a standard function that works on large numbers has been written. As we move towards a society where automated information resources are increasingly shared, cryptography will continue to increase in importance as a security mechanism. Electronic networks for banking, shopping, inventory control, benefit and service delivery, information storage and retrieval, distributed processing, and government applications will need improved methods for access control and data security.
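Combining the two mechanisms described in this summary gives the usual hash-then-sign scheme. A minimal sketch, reusing the toy key (d, e, n) from the RSA example above; truncating the digest modulo n is purely an artifact of the toy key size and would not appear with real key lengths:

    import hashlib

    def sign(message: bytes, d: int, n: int) -> int:
        # Hash-then-sign: sign H(M), the fixed-size digest, not M itself.
        digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
        return pow(digest, d, n)          # x = D_A(H(M))

    def verify(message: bytes, x: int, e: int, n: int) -> bool:
        digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
        return pow(x, e, n) == digest     # check E_A(x) == H(M)

    x = sign(b"pay 100", d, n)            # uses the toy (d, e, n) from above
    assert verify(b"pay 100", x, e, n)
    # A tampered message is rejected (barring digest collisions, which the
    # tiny toy modulus makes possible in principle but real keys do not):
    assert not verify(b"pay 900", x, e, n)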
- Published
- 1993