19 results for "B. M. Sagar"
Search Results
2. Forecasting Crop Yield with Machine Learning Techniques and Deep Neural Network
- Author
- B. G. Chaitra, B. M. Sagar, N. K. Cauvery, and T. Padmashree
- Published
- 2023
3. Use of Super Absorbent Polymer with GGBS in Normal Concrete
- Author
- T. J. Rajeeth, B. M. Sagar, R. Arpitha, M. S. Shashank, and Manu S. Gowda
- Published
- 2022
4. Analysis and Prediction of Cotton Yield with Fertilizer Recommendation Using Gradient Boost Algorithm
- Author
- B. Pranava, Prashant Abbi, N. K. Cauvery, N. Vismita, Pranav A. Bhat, and B. M. Sagar
- Subjects
Yield (engineering), Agricultural engineering, Fertilizer, Mathematics
- Published
- 2021
5. Smart Health Care Implementation Using Naïve Bayes Algorithm
- Author
- Harshitha M and B M Sagar
- Subjects
Naive Bayes classifier, Computer science, Health care, Artificial intelligence, Machine learning
- Published
- 2019
6. Performance Analysis on Machine Learning Algorithms with Deep Learning Model for Crop Yield Prediction
- Author
- B. M. Sagar, N. K. Cauvery, Supreetha A. Shetty, and T. Padmashree
- Subjects
Artificial neural network, Mean squared error, Deep learning, Yield (finance), Crop yield, Statistics, Artificial intelligence, Perceptron, Hectare, Mathematics, Random forest
- Abstract
Crop yield prediction is the task of estimating the yield of a crop, in kilograms per hectare, from features such as weather conditions, soil properties, water level, location, previous years' yield, etc. A Multi-Layer Perceptron neural network model and a Random Forest regression model are trained on data collected for 4 major crops grown in the Karnataka region. Weather data and past yield data for 30 districts of Karnataka are collected; the weather data includes minimum, maximum and average values of temperature, humidity and pressure. The two datasets are then merged and pre-processed for training the models. The trained models are evaluated using mean absolute error (MAE), mean square error (MSE) and root mean square error (RMSE). The results show that the Multi-Layer Perceptron network and Random Forest regression obtained a mean absolute error of 12.3% and 12.4%, a mean square error of 3.4% and 2.9%, and a root mean square error of 18.55% and 17.12%, respectively. For real-time prediction, a basic web application is built using Flask, a Python web framework, and the trained model is called to predict the yield.
- Published
- 2021
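The evaluation metrics named in the abstract (MAE, MSE, RMSE) can be sketched in plain Python; the yield figures below are toy values for illustration, not the paper's data:

```python
import math

def mae(y_true, y_pred):
    # Mean absolute error: average magnitude of the prediction errors.
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def mse(y_true, y_pred):
    # Mean squared error: penalises large errors more heavily.
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    # Root mean squared error: MSE brought back to the original yield scale.
    return math.sqrt(mse(y_true, y_pred))

# Toy yields (kg/hectare), illustration only.
actual    = [1000.0, 1200.0, 900.0]
predicted = [1100.0, 1150.0, 950.0]
print(mae(actual, predicted))  # ≈ 66.67
```

For real experiments, `sklearn.metrics` provides equivalent `mean_absolute_error` and `mean_squared_error` functions.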
7. Distributed Representation of Words in Vector Space for Kannada Language
- Author
- Pandurang S Kambali, B M Sagar, and Sanjana Suri
- Subjects
Context model, Vocabulary, Sequence, Machine translation, Computer science, Automatic summarization, Bag-of-words model, Question answering, Artificial intelligence, Natural language processing, Word (computer architecture)
- Abstract
An objective of neural language modelling is to learn the joint probability function of sequences of words in a language. This is characteristically difficult due to the huge computation requirement and the curse of dimensionality: a word sequence the model encounters during testing is likely to differ from all the word sequences seen during training. Recent work on learning word vector representations has been successful in capturing semantic and syntactic relationships between the words of a language. These word embeddings have proven very efficient in various Natural Language Processing (NLP) tasks such as machine translation, question answering and text summarization. Training word embeddings with neural networks has become prevalent among NLP researchers. Two major models, Continuous Bag of Words (CBOW) and Skip-gram, have not only improved accuracy but also reduced training time. However, the vector space representation can still be improved using existing techniques that are rarely used together, such as the subword model, where a word is represented as a weighted average of its n-gram representations. Pre-trained word vectors are a key requirement in many NLP tasks, yet generating word vectors for Indian languages has drawn very little attention. This paper proposes a distributed representation for Kannada words using an optimal neural network model and a combination of various known techniques.
- Published
- 2018
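A minimal sketch of how the Skip-gram model's (target, context) training pairs are formed from a context window; the neural network and the Kannada corpus themselves are not reproduced here:

```python
def skipgram_pairs(tokens, window=2):
    # For each target word, emit (target, context) pairs for every
    # word within `window` positions on either side of it.
    pairs = []
    for i, target in enumerate(tokens):
        lo = max(0, i - window)
        hi = min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                pairs.append((target, tokens[j]))
    return pairs

sentence = "word vectors capture semantic relationships".split()
print(skipgram_pairs(sentence, window=1))
```

CBOW inverts the same pairing: the context words jointly predict the target instead of the target predicting each context word.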
8. Improving Crop Productivity Through A Crop Recommendation System Using Ensembling Technique
- Author
- G N Srinivasan, B. M. Sagar, N. K. Cauvery, and Nidhi H Kulkarni
- Subjects
Support vector machine, Naive Bayes classifier, Majority rule, Ensemble forecasting, Agriculture, Kharif crop, Agricultural engineering, Recommender system, Mathematics, Random forest
- Abstract
Agriculture plays a predominant role in the economic growth and development of the country. A major setback to crop productivity is that farmers do not always choose the right crop for cultivation. To improve crop productivity, a crop recommendation system is developed using the ensembling technique of machine learning. The ensembling technique builds a model that combines the predictions of multiple machine learning models to recommend the right crop, based on the soil's specific type and characteristics, with high accuracy. The independent base learners used in the ensemble model are Random Forest, Naive Bayes, and Linear SVM. Each classifier provides its own set of class labels with an acceptable accuracy, and the class labels of the individual base learners are combined using the majority voting technique. The crop recommendation system classifies the input soil dataset into the recommendable crop types, Kharif and Rabi. The dataset comprises soil-specific physical and chemical characteristics in addition to climatic conditions such as average rainfall and surface temperature samples. The average classification accuracy obtained by combining the independent base learners is 99.91%.
- Published
- 2018
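The majority-voting step described in the abstract can be sketched as follows; the per-sample predictions are hypothetical stand-ins for the Random Forest, Naive Bayes and Linear SVM outputs:

```python
from collections import Counter

def majority_vote(*label_lists):
    # Combine per-sample class labels from several base learners by
    # taking the most common label for each sample (hard voting).
    combined = []
    for labels in zip(*label_lists):
        combined.append(Counter(labels).most_common(1)[0][0])
    return combined

# Hypothetical predictions from the three base learners.
rf  = ["Kharif", "Rabi",   "Kharif"]
nb  = ["Kharif", "Kharif", "Kharif"]
svm = ["Rabi",   "Rabi",   "Kharif"]
print(majority_vote(rf, nb, svm))  # ['Kharif', 'Rabi', 'Kharif']
```

With three base learners and two classes there is always a strict majority, so no tie-breaking rule is needed in this sketch.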
9. Big Data Analysis with Apache Spark
- Author
- Pallavi Singh, B. M. Sagar, and Saurabh Anand
- Subjects
Database, Computer science, Big data, Spark (mathematics)
- Published
- 2017
10. Detection of Outliers Using Interquartile Range Technique from Intrusion Dataset
- Author
- B. M. Sagar, H. P. Vinutha, and B. Poornima
- Subjects
Computer science, Intrusion detection system, Filter (signal processing), Field (computer science), Intrusion, Quartile, Interquartile range, Outlier, Range (statistics), Data mining
- Abstract
Unpredictable usage of the Internet adds more problems to the network, and protecting the system from anomalous behavior is a major issue for a Network Intrusion Detection System (NIDS). Data mining approaches in the field of Intrusion Detection Systems (IDS) are becoming more popular. Outliers are a current problem faced by many data mining researchers: they are patterns that fall outside the range of normal behavior. Outliers in the dataset produce more false positive alarms, and these have to be reduced to increase the efficiency of the IDS. We have used the Interquartile Range technique to identify outliers in the NSL-KDD'99 dataset. The continuous range of input is divided into quartiles, and these quartiles are analyzed to target the range of outliers. The identified outliers are then removed using Weka's RemoveWithValues filter. The experiment is conducted using the Weka data mining tool.
- Published
- 2018
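The interquartile-range rule described in the abstract can be sketched in plain Python; the sample data is illustrative, not drawn from NSL-KDD, and the conventional 1.5 × IQR fence is assumed:

```python
import statistics

def iqr_outliers(values, k=1.5):
    # statistics.quantiles(n=4) returns the three quartiles [Q1, Q2, Q3].
    q1, _, q3 = statistics.quantiles(values, n=4)
    iqr = q3 - q1
    # Values beyond k * IQR outside the quartiles are flagged as outliers.
    lower, upper = q1 - k * iqr, q3 + k * iqr
    return [v for v in values if v < lower or v > upper]

data = [10, 12, 11, 13, 12, 95, 11, 10, 12, -40]
print(iqr_outliers(data))  # [95, -40]
```

In the paper's workflow the flagged records would then be dropped from the training set before building the IDS model.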
11. Working with Cassandra Database
- Author
- Saurabh Anand, B. M. Sagar, and Pallavi Singh
- Subjects
Data records, SQL, Database, Relational database, Computer science, NoSQL, Open source technology, Oracle, Lower cost
- Abstract
Traditional databases cannot handle huge amounts of data. A NoSQL database like Cassandra can handle large data with easier management and lower cost than SQL databases such as Oracle and other relational databases. Cassandra is an open-source technology. This paper explains how Cassandra handles large data efficiently, being almost 30–35% more efficient than a relational database like Oracle once the data size grows to ten thousand records and beyond. The paper describes an experiment carried out by varying the number of data records and comparing the performance of Cassandra and Oracle. As the data size was continuously increased, most Cassandra queries took almost 30–35% less time than Oracle. Although Oracle was more efficient at smaller data sizes (up to 40 × 10³ records), its performance dropped continuously for every further 10 × 10³ increase in data size.
- Published
- 2018
12. Study on machine translation approaches for Indian languages and their challenges
- Author
- B. M. Sagar and D. V. Sindhu
- Subjects
Machine translation, Computer science, Language barrier, Second-generation programming language, Pragmatics, Machine translation software usability, Linguistics, Example-based machine translation, Artificial intelligence, Computational linguistics, Language industry, Natural language processing
- Abstract
This survey focuses on the development of machine translation for the Indian languages, throwing light on rule-based, empirical and hybrid approaches; every approach has its own advantages and disadvantages. Machine Translation (MT) is the process of translating from one language to another. Due to rapid globalisation and the resulting increase of data over the web, machine translation plays a very important role in reducing the language barrier between different regions. A country like India, with 22 official languages, demands particular attention to translation. This paper focuses on the different MT systems for Indian languages and their challenges.
- Published
- 2016
13. A review on different methods of paraphrasing
- Author
- B. M. Sagar and Ashwini Gadag
- Subjects
Information retrieval, Grammar, Computer science, Semantics, Paraphrase, Identification (information), Knowledge extraction, Semantic equivalence, Artificial intelligence, Natural language processing, Natural language
- Abstract
This paper is a survey of computational approaches to paraphrasing. Paraphrasing methods cover the generation, identification and acquisition of phrases or sentences that convey the same information. Paraphrasing is the process of expressing the semantic content of a source using different words to achieve greater clarity. The task of generating or identifying semantic equivalence between different elements of language, such as words and sentences, is an essential part of natural language processing, and paraphrasing is used in various natural language applications. This paper discusses the impact of paraphrasing on a few applications as well as various paraphrasing methods.
- Published
- 2016
14. Paraphrase generator using dictionary lookup for Kannada language
- Author
- B. M. Sagar and Ashwini I Gadag
- Subjects
Phrase, Generator (computer programming), Computer science, Semantics (computer science), Speech recognition, Paraphrase, Inflection, Artificial intelligence, Suffix, Word (computer architecture), Sentence, Natural language processing
- Abstract
A paraphrase generator tool is an essential component of any NLP application. This paper presents a paraphrase generator tool for the Dravidian language Kannada using a dictionary lookup approach. The paraphrase generator, built from a morphological analyzer, a dictionary lookup and a morphological generator, generates paraphrases for a given sentence; paraphrasing is a task familiar to the speakers of all languages. The morphological analyzer gives the internal structure of words: it takes a complete word form as input and returns its syntactic and morphological properties. The dictionary lookup method performs synonym substitution, with the synonym occurring in the exact same position in the output sentence as the original phrase in the input sentence. The inflection of the root word is then generated by the morphological generator through suffix tables. Evaluation and analysis are carried out on various news domains.
- Published
- 2016
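A minimal sketch of the synonym-substitution step of the dictionary-lookup method, using hypothetical English entries in place of the paper's Kannada dictionary; the morphological analyzer and generator are omitted:

```python
# Toy synonym dictionary (hypothetical entries; the paper's dictionary maps
# Kannada root words, with inflection handled by a morphological generator
# and suffix tables).
SYNONYMS = {"big": "large", "quick": "fast"}

def paraphrase(sentence):
    # Substitute each word with its synonym, keeping the substitute in the
    # same position as the original word, as in the dictionary-lookup method.
    return " ".join(SYNONYMS.get(w, w) for w in sentence.split())

print(paraphrase("the quick dog saw a big cat"))
# the fast dog saw a large cat
```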
15. N-gram based paraphrase generator from large text document
- Author
- Ashwini I Gadag and B. M. Sagar
- Subjects
Information retrieval, Machine translation, Computer science, Paraphrase, Set (abstract data type), Range (mathematics), n-gram, Metric (mathematics), Trigram, Artificial intelligence, Natural language processing, Generator (mathematics)
- Abstract
This paper describes paraphrase generation based on an n-gram approach. N-grams are relevant word sequences of a text document that can be applied to a range of Natural Language Processing (NLP) applications. Candidate paraphrases are generated using a trigram approach, while the reference paraphrases (keyphrases) are the set of relevant paraphrases that act as the training data set for generating candidates. Since the task of paraphrase generation is similar to machine translation, machine translation evaluation metrics are used: the R-precision metric finds the number of common words between candidate and reference paraphrases.
- Published
- 2016
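The trigram extraction and word-overlap scoring described in the abstract can be sketched as follows; the R-precision formula here (shared words over reference words) is an assumption based on the abstract's wording:

```python
def ngrams(tokens, n=3):
    # Slide a window of size n over the token sequence; n=3 gives trigrams.
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def r_precision(candidate, reference):
    # Fraction of the reference paraphrase's words that also appear
    # in the candidate paraphrase.
    cand, ref = set(candidate.split()), set(reference.split())
    return len(cand & ref) / len(ref)

print(ngrams("paraphrases are generated from text".split()))
print(r_precision("the model translates text", "the system translates text"))  # 0.75
```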
16. Solving the Noun Phrase and Verb Phrase Agreement in Kannada Sentences
- Author
- G. Shobha, B. M. Sagar, and P. Ramakanth Kumar
- Subjects
Gerund, Computer science, Verb phrase, Nominative case, Noun phrase, Predicate (grammar), Verb phrase ellipsis, Noun, Determiner phrase, Artificial intelligence, Natural language processing
- Abstract
This paper proposes a way of producing a context-free grammar for solving noun and verb agreement in Kannada sentences. In most Indian languages, including Kannada, a verb ends with a token that indicates the gender of the subject (noun/pronoun). This paper shows the implementation of this agreement using a Context Free Grammar (CFG), parsed with a Recursive Descent Parser. Around 200 sample sentences were taken to test the agreement.
- Published
- 2009
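A toy sketch of the agreement check such a CFG enforces; the romanised lexicon entries and verb endings below are hypothetical stand-ins, not the paper's grammar:

```python
# Hypothetical lexicon: each noun and each verb ending carries a gender tag,
# standing in for the Kannada verb-ending tokens that mark the subject's gender.
NOUNS = {"raama": "m", "siite": "f"}
VERB_ENDINGS = {"-anu": "m", "-alu": "f"}

def agrees(noun, verb_ending):
    # A sentence is accepted only when the verb ending's gender tag
    # matches the subject noun's gender tag, as the CFG rules require.
    return NOUNS.get(noun) == VERB_ENDINGS.get(verb_ending)

print(agrees("raama", "-anu"))  # True
print(agrees("siite", "-anu"))  # False
```

A recursive descent parser would apply the same check while expanding the sentence rule, rejecting derivations whose noun and verb tags differ.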
17. Dictionary Based Machine Translation from Kannada to Telugu
- Author
- B M Sagar and D V Sindhu
- Subjects
Machine translation, Grammar, Computer science, Bilingual dictionary, Semantics, Telugu, Transliteration, Dictionary-based machine translation, Artificial intelligence, Suffix, Natural language processing
- Abstract
Machine Translation is the task of translating from one language to another. For languages with few linguistic resources, like Kannada and Telugu, a dictionary-based approach is the most practical, and this paper focuses on dictionary-based machine translation from Kannada to Telugu. The proposed methodology uses a dictionary to translate word by word, without much correlation of semantics between the words. The dictionary-based machine translation process has the following sub-processes: morph analyzer, dictionary, transliteration, transfer grammar and morph generator. As part of this work, a bilingual dictionary with 8000 entries was developed and a suffix mapping table at the tag level was built. The system was tested on children's stories; in the near future it can be further improved by defining transfer grammar rules.
- Published
- 2017
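The word-by-word lookup with a suffix mapping table can be sketched as below; the dictionary and suffix entries are invented romanised examples, not drawn from the paper's 8000-entry dictionary:

```python
# Hypothetical bilingual entries; the actual system uses a Kannada-Telugu
# dictionary of 8000 root words plus a suffix mapping table at the tag level.
DICTIONARY = {"mane": "illu", "neeru": "niiLLu"}
SUFFIX_MAP = {"-alli": "-lo"}

def translate(tokens):
    # Word-by-word lookup: translate each root via the bilingual dictionary
    # and each suffix via the suffix mapping table; unknown tokens pass
    # through unchanged (in the real system they would be transliterated).
    return [DICTIONARY.get(t) or SUFFIX_MAP.get(t) or t for t in tokens]

print(translate(["mane", "-alli", "neeru"]))  # ['illu', '-lo', 'niiLLu']
```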
18. Complete Kannada Optical Character Recognition with syntactical analysis of the script
- Author
- B. M. Sagar, P.R. Kumar, and G. Shobha
- Subjects
Parsing, Computer science, Speech recognition, Optical character recognition, Image segmentation, Kannada, Segmentation, Artificial intelligence, Natural language processing
- Abstract
In this paper, the development of Kannada optical character recognition (OCR) is discussed, detailing the pre-processing, segmentation, character recognition and post-processing modules. Since almost all the characters have sharp curves and are non-cursive, segmentation, character recognition and post-processing are not easy for the Kannada script. The post-processing technique uses a dictionary-based approach to improve the OCR output. At the end of the paper we also discuss the syntactical analysis of the Kannada script, that is, the analysis of grammatical errors in the language.
- Published
- 2008
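A minimal sketch of dictionary-based OCR post-processing, here using closest-match lookup via Python's `difflib` rather than the paper's (unspecified) matching method, with a tiny English stand-in dictionary instead of a Kannada word list:

```python
import difflib

# Small hypothetical dictionary; the paper uses a Kannada word list.
DICTIONARY = ["segment", "character", "script", "grammar"]

def correct(word, cutoff=0.7):
    # Replace a recognised token with its closest dictionary entry,
    # falling back to the raw OCR output when nothing is close enough.
    matches = difflib.get_close_matches(word, DICTIONARY, n=1, cutoff=cutoff)
    return matches[0] if matches else word

print(correct("charactar"))  # character
print(correct("xyz"))        # xyz
```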
19. Character segmentation algorithms for Kannada optical character recognition
- Author
- P.R. Kumar, B. M. Sagar, and G. Shobha
- Subjects
Pixel, Character (computing), Computer science, Pattern recognition, Optical character recognition, Image segmentation, Statistical classification, Scripting language, Segmentation, Algorithm design, Artificial intelligence, Algorithm, Natural language processing
- Abstract
In this paper we discuss various character segmentation algorithms for Kannada script. Kannada is a south Indian language with 16 vowels and 34 consonants as the basic alphabet. The segmentation algorithms are constructed after a deep study of the peculiarities of individual Kannada characters. This paper gives the characteristics of Kannada script, two surveyed segmentation algorithms, and one segmentation algorithm implemented using a brute-force approach.
- Published
- 2008
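A common baseline for character segmentation is a vertical projection profile, sketched below on a toy binary image; this is a generic illustration, not the paper's Kannada-specific algorithms:

```python
def column_profile(image):
    # Vertical projection profile: count foreground (1) pixels per column.
    return [sum(row[c] for row in image) for c in range(len(image[0]))]

def segment_columns(image):
    # Split the line image at empty columns -- a common baseline step
    # before the per-character heuristics a full Kannada segmenter needs.
    profile = column_profile(image)
    spans, start = [], None
    for c, count in enumerate(profile):
        if count and start is None:
            start = c
        elif not count and start is not None:
            spans.append((start, c))
            start = None
    if start is not None:
        spans.append((start, len(profile)))
    return spans

# Two "characters" separated by one empty column.
img = [[1, 1, 0, 1],
       [1, 0, 0, 1]]
print(segment_columns(img))  # [(0, 2), (3, 4)]
```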
Discovery Service for Jio Institute Digital Library