9 results for "Neelakandan, S."
Search Results
2. Deep Belief Network-Based User and Entity Behavior Analytics (UEBA) for Web Applications.
- Author
- Deepa, S., Umamageswari, A., Neelakandan, S., Bhukya, Hanumanthu, Sai Lakshmi Haritha, I. V., and Shanbhog, Manjula
- Subjects
- WEB-based user interfaces, SOCIAL media, ANOMALY detection (Computer security), MACHINE learning, DEEP learning, DEVIANT behavior, INTERNET security
- Abstract
Machine learning (ML) is currently a crucial tool in cyber security. By identifying patterns, mapping cybercrime in real time, and performing in-depth penetration tests, ML can counter cyber threats and strengthen security infrastructure. Security in any organization depends on monitoring and analyzing user actions and behaviors. Because anomalous user behavior frequently bypasses security precautions without triggering any alerts or flags, it is much harder to detect than traditional malicious network activity. ML is an important and rapidly developing field for anomaly detection; to protect user security and privacy, a wide range of applications, including various social media platforms, have incorporated cutting-edge techniques to detect anomalies. A social network is a platform where various social groups can interact, express themselves, and share pertinent content. The same platform also encourages deviant behavior through the spread of propaganda, unwelcome messages, false information, fake news, and rumours, as well as the posting of harmful links. In this research, we introduce a Deep Belief Network (DBN) with Triple DES, a hybrid approach to anomaly detection in unbalanced classification. The results show that the DBN-TDES model can typically detect anomalous user behaviors that other anomaly detection models cannot. [ABSTRACT FROM AUTHOR] (A hedged DBN-style sketch follows this record.)
- Published
- 2024
- Full Text
- View/download PDF
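The DBN-TDES implementation is not included in this record, so the following is only a minimal, hedged sketch of the general idea: stacked scikit-learn BernoulliRBM layers stand in for a Deep Belief Network, and a class-weighted logistic regression head stands in for the unbalanced anomaly classifier. The Triple DES component and all data, feature names, and hyper-parameters below are illustrative assumptions, not the authors' configuration.

```python
# Hedged sketch: DBN-style feature learning + imbalanced anomaly classification.
# Stacked BernoulliRBM layers approximate a Deep Belief Network; the paper's
# exact DBN-TDES architecture and its Triple DES step are not reproduced here.
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
# Synthetic user-behavior features (e.g. login counts, bytes sent, action rates);
# roughly 5% of rows are labeled anomalous to mimic the unbalanced setting.
X = rng.random((2000, 20))
y = (rng.random(2000) < 0.05).astype(int)
X[y == 1] += 0.7  # shift anomalous rows so they are detectable

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

model = Pipeline([
    ("scale", MinMaxScaler()),  # RBMs expect inputs in [0, 1]
    ("rbm1", BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=20, random_state=0)),
    ("rbm2", BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=20, random_state=0)),
    ("clf", LogisticRegression(class_weight="balanced", max_iter=1000)),
])
model.fit(X_tr, y_tr)
print(classification_report(y_te, model.predict(X_te), digits=3))
```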
3. A novel WGF-LN based edge driven intelligence for wearable devices in human activity recognition.
- Author
- Menaka, S. R., Prakash, M., Neelakandan, S., and Radhakrishnan, Arun
- Subjects
- SUPERVISED learning, FEATURE extraction, DEEP learning, MACHINE learning, HUMAN activity recognition, S-matrix theory, SCATTER diagrams, BINOMIAL distribution
- Abstract
Human activity recognition (HAR) is one of the key applications of health monitoring and requires continuous use of wearable devices to track daily activities. The most efficient supervised machine learning (ML)-based approaches for predicting human activity operate on a continuous stream of sensor data. Sensor data analysis for HAR using conventional algorithms and deep learning (DL) models shows promising results, but evaluating their ambiguity in decision-making remains challenging. To address these issues, the paper proposes a novel Wasserstein gradient flow LegoNet (WGF-LN)-based human activity recognition system. First, the input data is pre-processed. From the pre-processed data, features are extracted using Haar wavelet mother-Symlet wavelet coefficient scattering feature extraction (HS-WSFE). The features of interest are then selected from the extracted features using Binomial Distribution integrated Golden Eagle Optimization (BD-GEO). The selected features are post-processed using the scatter plot matrix method, and the post-processed features are finally fed into the WGF-LN to classify human activities. The experimental results show the efficacy of the proposed model. [ABSTRACT FROM AUTHOR] (A hedged pipeline sketch follows this record.)
- Published
- 2023
- Full Text
- View/download PDF
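The abstract above describes a wavelet-feature, feature-selection, classification pipeline. The sketch below mirrors only that shape under stated assumptions: PyWavelets Haar/Symlet decompositions stand in for HS-WSFE, mutual-information selection stands in for BD-GEO, and an MLP stands in for WGF-LN; the data, window size, and hyper-parameters are invented for illustration.

```python
# Hedged sketch of the HAR pipeline shape: wavelet features -> feature selection
# -> classifier. The paper's HS-WSFE, BD-GEO and WGF-LN components are replaced
# with simple stand-ins (PyWavelets statistics, mutual information, an MLP).
import numpy as np
import pywt
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

def wavelet_features(window, wavelets=("haar", "sym4"), level=3):
    """Mean/std/energy of Haar and Symlet coefficients for one sensor window."""
    feats = []
    for w in wavelets:
        for coeffs in pywt.wavedec(window, w, level=level):
            feats += [coeffs.mean(), coeffs.std(), float(np.sum(coeffs ** 2))]
    return np.array(feats)

rng = np.random.default_rng(1)
# Synthetic accelerometer windows for 3 activity classes (placeholder for real wearable data).
windows = rng.standard_normal((300, 128)) + np.repeat(np.arange(3), 100)[:, None] * 0.5
labels = np.repeat(np.arange(3), 100)
X = np.vstack([wavelet_features(w) for w in windows])

clf = make_pipeline(
    SelectKBest(mutual_info_classif, k=10),  # stand-in for BD-GEO feature selection
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=1),  # stand-in for WGF-LN
)
print("CV accuracy:", cross_val_score(clf, X, labels, cv=3).mean())
```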
4. Fuzzy adaptive learning control network (FALCN) for image clustering and content-based image retrieval on noisy dataset.
- Author
- Neelakandan, S., Easwaramoorthy, Sathishkumar Veerappampalayam, Chinnasamy, A., and Cho, Jaehyuk
- Subjects
- ADAPTIVE fuzzy control, CONTENT-based image retrieval, MACHINE learning, FUZZY neural networks, COLOR space
- Abstract
Fuzzy systems have been demonstrated to be beneficial for classification and regression, but they have mainly been utilized in controlled settings. An image clustering technique, essential for content-based image retrieval in large image datasets, is developed using color, texture and shape content. Because it is currently challenging to label a huge number of photos, the issue of unlabeled data is addressed with unsupervised learning. K-means is the most often used unsupervised learning algorithm; compared with fuzzy c-means clustering, K-means clustering offers lower-dimensional space resilience and initialization resistance. The dominant triple HSV space is a perceptual color space made of three components, H (hue), S (saturation) and V (value), referring to color qualities that are closely connected to how human eyes perceive colors. A deep learning technique for segmentation is built on the Gaussian function, the fuzzy adaptive learning control network (FALCN), clustering, and the radial basis neural network (RBNN). The segmented image and its critical information are fed into the radial basis neural network classifier. The proposed fuzzy adaptive learning control network (FALCN), an unsupervised fuzzy neural network, is very good at clustering images and can extract image properties. When a conventional fuzzy network system receives a noisy input, the number of output neurons grows needlessly. Finally, random convolutional weights extract features from unlabeled data. Uniting the proposed FALCN with the RBNN classifier, the proposed descriptor achieves performance comparable to the state of the art, with an improved accuracy of 96.547 and a reduced mean squared error of 36.028 on the JAFE, ORL, and UMIT datasets. [ABSTRACT FROM AUTHOR] (A hedged clustering sketch follows this record.)
- Published
- 2023
- Full Text
- View/download PDF
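The clustering stage in the abstract above works on HSV color content. The sketch below is a simplified stand-in under assumptions: HSV color histograms clustered with a minimal classic fuzzy c-means written in NumPy; it is not the FALCN network, and the RBNN classification stage is not shown. The image data is a random placeholder.

```python
# Hedged sketch: HSV colour histograms clustered with a minimal fuzzy c-means,
# a simplified stand-in for the FALCN image-clustering stage (RBNN not shown).
import numpy as np

def hsv_histogram(img_hsv, bins=8):
    """Concatenated H/S/V histograms for one image in HSV space (values in [0, 1])."""
    return np.concatenate([
        np.histogram(img_hsv[..., c], bins=bins, range=(0, 1), density=True)[0]
        for c in range(3)
    ])

def fuzzy_c_means(X, c, m=2.0, n_iter=100, tol=1e-6, seed=0):
    """Classic FCM: returns cluster centres and the fuzzy membership matrix U."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        Um = U ** m
        centres = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
        U_new = 1.0 / (d ** (2 / (m - 1)) * (d ** (-2 / (m - 1))).sum(axis=1, keepdims=True))
        if np.abs(U_new - U).max() < tol:
            return centres, U_new
        U = U_new
    return centres, U

rng = np.random.default_rng(2)
images = rng.random((40, 32, 32, 3))  # placeholder images already in HSV
X = np.vstack([hsv_histogram(im) for im in images])
centres, U = fuzzy_c_means(X, c=4)
print("hard cluster assignments:", U.argmax(axis=1))
```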
5. Admission control policy and key agreement based on anonymous identity in cloud computing.
- Author
- Paulraj, D., Neelakandan, S., Prakash, M., and Baburaj, E.
- Subjects
- MACHINE learning, INFORMATION technology outsourcing, REINFORCEMENT learning, CLOUD storage, CLOUD computing, FEATURE selection, DOWNLOADING
- Abstract
Cloud computing has completely revolutionized the concept of computing by providing users with always-accessible resources. In terms of computational, storage, bandwidth, and transmission costs, cloud technology offers its users an entirely new set of advantages and cost savings. Cross-cloud data migration, required whenever a user switches providers, is one of the most common issues users encounter. Because of smartphones' limited local storage and computational power, it is often difficult for users to back up all data from the original cloud servers to their mobile phones and then upload it to the new cloud provider. Additionally, the user must remember numerous tokens and passwords for different applications. In many instances, the anonymity of users who access any or all services provided by this architecture must be ensured. Outsourcing IT resources carries risks, particularly regarding security and privacy, because cloud service providers manage and control all data and resources stored in the cloud. However, cloud users would prefer that cloud service providers not know which services they use or how often they use them. Consequently, developing privacy protections takes considerable effort. We devised a system of binding agreements and anonymous identities to address this problem. Based on a binding contract and an admission control policy (ACP), the proposed model facilitates cross-cloud data migration by fostering trust in cloud providers. Finally, a Multi-Agent Reinforcement Learning (MARL) algorithm is applied to identify and classify anonymity in the cloud, supported by various pre-processing techniques, feature selection, and dimensionality reduction. [ABSTRACT FROM AUTHOR] (An illustrative pseudonymous-identity sketch follows this record.)
- Published
- 2023
- Full Text
- View/download PDF
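The abstract above does not specify the binding-contract or ACP protocol, so the sketch below shows only a generic, assumed pattern for the anonymous-identity idea: a registrar derives a per-service pseudonym with HMAC so a provider can check admission without learning the real identity. This is not the authors' scheme, the MARL stage is omitted, and all identifiers are invented.

```python
# Hedged sketch: a generic pseudonymous-identity pattern (NOT the paper's ACP /
# binding-contract protocol, which the abstract does not specify). A trusted
# registrar derives a per-service pseudonym with HMAC, so the cloud provider
# can check admission without learning the user's real identity.
import hmac
import hashlib
import secrets

REGISTRAR_KEY = secrets.token_bytes(32)  # held by the identity registrar only

def pseudonym(user_id: str, service: str) -> str:
    """Deterministic per-(user, service) pseudonym; unlinkable across services
    without the registrar key."""
    msg = f"{user_id}|{service}".encode()
    return hmac.new(REGISTRAR_KEY, msg, hashlib.sha256).hexdigest()

# Admission control list keyed by pseudonyms, not real identities.
allowed = {pseudonym("alice@example.com", "storage-service")}

def admit(user_id: str, service: str) -> bool:
    return pseudonym(user_id, service) in allowed

print(admit("alice@example.com", "storage-service"))  # True
print(admit("alice@example.com", "compute-service"))  # False
```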
6. Deep learning based sentiment analysis and offensive language identification on multilingual code-mixed data.
- Author
- Shanmugavadivel, Kogilavani, Sathishkumar, V. E., Raja, Sandhiya, Lingaiah, T. Bheema, Neelakandan, S., and Subramanian, Malliga
- Subjects
- SENTIMENT analysis, NATURAL language processing, DEEP learning, MACHINE learning
- Abstract
Sentiment analysis is a process in Natural Language Processing that involves detecting and classifying emotions in texts. The emotion is focused on a specific thing, an object, an incident, or an individual. Although some tasks are concerned with detecting the existence of emotion in text, others are concerned with finding the polarity of the text, classified as positive, negative, or neutral. The task of determining whether a comment contains inappropriate text that affects either an individual or a group is called offensive language identification. Existing research has concentrated more on sentiment analysis and offensive language identification in monolingual data sets than in code-mixed data. Code-mixed data is formed by combining words and phrases from two or more distinct languages in a single text. It is quite challenging to identify emotion or offensive terms in such comments because of the noise in code-mixed data. The majority of advancements in offensive language detection and sentiment analysis have been made on monolingual data for resource-rich languages. The proposed system attempts to perform both sentiment analysis and offensive language identification for low-resource code-mixed data in Tamil and English using machine learning, deep learning, and pre-trained models like BERT, RoBERTa, and adapter-BERT. The dataset used for this research work is taken from a shared task on multi-task learning at DravidianLangTech@ACL2022. Another challenge addressed by this work is the extraction of semantically meaningful information from code-mixed data using word embedding. The results show that the adapter-BERT model gives better accuracy, 65% for sentiment analysis and 79% for offensive language identification, when compared with the other trained models. [ABSTRACT FROM AUTHOR] (A hedged classification sketch follows this record.)
- Published
- 2022
- Full Text
- View/download PDF
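To show what classifying a code-mixed comment with a multilingual transformer looks like, here is a hedged interface-only sketch using Hugging Face transformers with the public bert-base-multilingual-cased checkpoint. The classification head is freshly initialised, so the outputs are meaningless until fine-tuned; the label set, example comment, and model choice are assumptions and are not the paper's adapter-BERT setup or its DravidianLangTech data.

```python
# Hedged sketch: scoring a code-mixed Tamil-English comment with a multilingual
# BERT sequence-classification head. The head below is randomly initialised, so
# this only illustrates the interface, not the paper's fine-tuned adapter-BERT.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

labels = ["negative", "neutral", "positive"]  # illustrative label set
tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=len(labels)
)

comment = "padam semma mass bro, vera level!"  # illustrative code-mixed comment
inputs = tok(comment, return_tensors="pt", truncation=True, max_length=128)
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.softmax(logits, dim=-1).squeeze()
for name, p in zip(labels, probs.tolist()):
    print(f"{name}: {p:.3f}")
```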
7. LSGDM with Biogeography-Based Optimization (BBO) Model for Healthcare Applications.
- Author
- Harshavardhan, A., Boyapati, Prasanthi, Neelakandan, S., Abdul-Rasheed Akeji, Alhassan Alolo, Singh Pundir, Aditya Kumar, and Walia, Ranjan
- Subjects
- GROUP decision making, HEALTH care industry, BIG data, MACHINE learning, INDUSTRY 4.0, DATA management, FUZZY algorithms
- Abstract
Several studies aimed at improving healthcare management have shown that the importance of healthcare has grown in recent years. In the healthcare industry, effective decision-making requires multicriteria group decision-making. At the same time, big data analytics can be used to help with disease detection and healthcare delivery. Only a few previous studies have focused on large-scale group decision-making (LSGDM) in the big-data-driven healthcare Industry 4.0. The goal of this work is to improve healthcare management decision-making by developing a new MapReduce-based LSGDM model (MR-LSGDM) for the healthcare Industry 4.0 context. Clustering decision-makers (DMs), modelling DM preferences, and classification are the three stages of the MR-LSGDM technique. The DMs are subdivided using a novel biogeography-based optimization (BBO) technique combined with fuzzy C-means (FCM). The subgroup preferences are then modelled using the two-tuple fuzzy linguistic representation (2TFLR) technique. The final classification method includes a feature extractor based on long short-term memory (LSTM) and a classifier based on an ideal extreme learning machine (ELM). MapReduce is a data management platform used to handle massive amounts of data. A thorough set of experimental analyses is carried out, and the results are analysed using a variety of metrics. [ABSTRACT FROM AUTHOR] (A small 2TFLR worked example follows this record.)
- Published
- 2022
- Full Text
- View/download PDF
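The 2TFLR stage mentioned above is commonly the standard 2-tuple linguistic model, where a numeric assessment β in [0, g] maps to a term index i = round(β) and a symbolic translation α = β - i in [-0.5, 0.5). The sketch below is a minimal worked example under that assumption; the five-term scale and the aggregated value 2.6 are illustrative, not taken from the paper.

```python
# Hedged sketch of the 2-tuple fuzzy linguistic representation (2TFLR) step:
# beta in [0, g] maps to (s_i, alpha) with i = round(beta), alpha = beta - i.
TERMS = ["very low", "low", "medium", "high", "very high"]  # s_0 .. s_4, g = 4

def to_two_tuple(beta: float):
    """Convert an aggregated assessment beta into a 2-tuple (term, alpha)."""
    g = len(TERMS) - 1
    if not 0 <= beta <= g:
        raise ValueError("beta must lie in [0, g]")
    i = int(round(beta))
    return TERMS[i], beta - i

def from_two_tuple(term: str, alpha: float) -> float:
    """Inverse transformation back to a number in [0, g]."""
    return TERMS.index(term) + alpha

# Example: decision-makers' preferences aggregated to beta = 2.6
term, alpha = to_two_tuple(2.6)
print(term, round(alpha, 2))        # high -0.4
print(from_two_tuple(term, alpha))  # 2.6
```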
8. Intelligent deep learning based ethnicity recognition and classification using facial images.
- Author
- Sunitha, Gurram, Geetha, K., Neelakandan, S., Pundir, Aditya Kumar Singh, Hemalatha, S., and Kumar, Vinay
- Subjects
- DEEP learning, ETHNICITY, FIREFLIES, PRINCIPAL components analysis, BIOMETRIC identification, MACHINE learning
- Abstract
Recently, computer vision-based face image analysis has sparked considerable interest in a variety of applications such as surveillance, security and biometrics. The goal of facial analysis is to derive facial soft biometrics such as identity, gender, age, ethnicity and expression. Among these, ethnicity recognition remains a hot study topic and a major aspect of society, with profound linkages to a variety of environmental and social concerns. The introduction of machine learning (ML) and deep learning (DL) technologies has proven advantageous for effective ethnicity recognition and classification. In this regard, the IDL-ERCFI technique, which is based on intelligent DL, is designed in this paper. The IDL-ERCFI technique's purpose is to distinguish and classify ethnicity based on facial images. The technique uses face landmarks to align images before sending them to the network, and it employs an Xception network as a feature extractor. Because the retrieved features are high-dimensional, the feature reduction step employs principal component analysis (PCA), which is effective in overcoming the "curse of dimensionality." The ethnicity classification step is carried out using an optimal kernel extreme learning machine (KELM), with parameter tuning of the KELM model performed using the glowworm swarm optimization (GSO) technique. A complete experimental analysis is carried out to demonstrate the superiority of the IDL-ERCFI technique over the other techniques.
• The IDL-ERCFI technique's purpose is to distinguish and classify ethnicity based on facial images.
• The IDL-ERCFI technique uses face landmarks to align images before sending them to the network.
• After feature retrieval, the feature reduction step employs the principal component analysis (PCA) model.
• The ethnicity classification step is carried out using KELM, with a parameter tuning process.
• Parameter tuning of the KELM model is carried out with the glowworm swarm optimization (GSO) technique.
[ABSTRACT FROM AUTHOR] (A hedged PCA + KELM sketch follows this record.)
- Published
- 2022
- Full Text
- View/download PDF
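The PCA-then-KELM part of the pipeline above has a well-known closed form, which the sketch below implements under assumptions: scikit-learn PCA plus a minimal RBF-kernel ELM with output weights β = (I/C + K)⁻¹T. The Xception feature extractor, face alignment, GSO tuning, and all data and hyper-parameters are replaced with synthetic placeholders.

```python
# Hedged sketch: PCA-reduced features classified with a minimal RBF-kernel ELM
# (closed form beta = (I/C + K)^-1 T). The Xception feature extractor, face
# alignment and GSO parameter tuning from the abstract are not reproduced.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split

def rbf_kernel(A, B, gamma=0.1):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

class KELM:
    def __init__(self, C=10.0, gamma=0.1):
        self.C, self.gamma = C, gamma

    def fit(self, X, y):
        self.X_ = X
        T = np.eye(int(y.max()) + 1)[y]  # one-hot targets
        K = rbf_kernel(X, X, self.gamma)
        self.beta_ = np.linalg.solve(np.eye(len(X)) / self.C + K, T)
        return self

    def predict(self, X):
        return (rbf_kernel(X, self.X_, self.gamma) @ self.beta_).argmax(1)

rng = np.random.default_rng(3)
# Synthetic "deep features" for 4 placeholder classes.
X = rng.standard_normal((400, 128)) + np.repeat(np.arange(4), 100)[:, None]
y = np.repeat(np.arange(4), 100)

X = PCA(n_components=20).fit_transform(X)  # dimensionality reduction
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=3)
acc = (KELM().fit(X_tr, y_tr).predict(X_te) == y_te).mean()
print(f"test accuracy: {acc:.3f}")
```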
9. Artificial intelligence based quality of transmission predictive model for cognitive optical networks.
- Author
- Singh, Harinder, Ramya, D., Saravanakumar, R., Sateesh, Nayani, Anand, Rohit, Singh, Swarnjit, and Neelakandan, S.
- Subjects
- ARTIFICIAL intelligence, PREDICTION models, TELECOMMUNICATION systems, OPTICAL communications, QUALITY of service, COGNITIVE computing, MACHINE learning
- Abstract
Due to advances in 5G technologies, high-definition video, and the Internet of Things (IoT), the capacity demand on optical networks has increased exponentially. Optical communication networks offer high transmission capacity, low transmission loss, good anti-interference properties and robustness, which open new opportunities for the communication field. To satisfy the increasing demands on optical networks, effective network resource utilization becomes essential, so planning tools with high accuracy for quality of transmission (QoT) estimation need to be designed for optical networks. Recently, artificial intelligence (AI) techniques have opened new opportunities for resolving these issues, and machine learning (ML) algorithms offer better performance than analytical approaches. With this motivation, this paper presents a novel AI-based cognitive QoT prediction (AI-CQoT) model for optical communication networks. The proposed AI-CQoT model aims to predict the QoT for quality of service (QoS) link setup using AI techniques with transmission-equation-based synthetic data generation. The proposed model uses a label weighting extreme learning machine (LW-ELM) for the prediction process, which takes link and signal characteristics as input features. The LW-ELM model is trained using the transmission equations. To improve the predictive performance of the LW-ELM model, its parameters, namely the weight matrix W and the penalty factor C, are optimally tuned by the shuffled shepherd optimization (SSO) algorithm. A detailed experimental validation is performed to highlight the improved performance of the AI-CQoT model, and the results are investigated in terms of different performance measures. [ABSTRACT FROM AUTHOR] (A hedged weighted-ELM sketch follows this record.)
- Published
- 2022
- Full Text
- View/download PDF
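As a rough stand-in for the LW-ELM predictor above, the sketch below trains a minimal weighted extreme learning machine in NumPy to decide whether a candidate lightpath meets a QoT threshold. The link features, the synthetic label rule, the tanh hidden layer, and the fixed (W, C) values are all assumptions; the paper's transmission-equation data generation and SSO tuning are not reproduced.

```python
# Hedged sketch: a minimal weighted ELM classifying whether a candidate lightpath
# meets a QoT threshold. Data, features and hyper-parameters are illustrative.
import numpy as np

rng = np.random.default_rng(4)

# Illustrative link/signal features: length (km), span count, launch power (dBm), spectral load.
n = 1500
X = np.column_stack([
    rng.uniform(50, 2000, n),
    rng.integers(1, 25, n).astype(float),
    rng.uniform(-4, 4, n),
    rng.uniform(0, 1, n),
])
# Synthetic "QoT adequate" label: shorter, lightly loaded links tend to pass.
y = ((X[:, 0] / 2000 + X[:, 3]) < 0.9).astype(int)
X = (X - X.mean(axis=0)) / X.std(axis=0)  # standardise before the tanh hidden layer

def train_weighted_elm(X, y, hidden=100, C=10.0, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], hidden))  # random input weights (SSO-tuned in the paper)
    b = rng.standard_normal(hidden)
    H = np.tanh(X @ W + b)                         # hidden-layer activations
    w = np.where(y == 1, 1.0 / (y == 1).sum(), 1.0 / (y == 0).sum())  # per-label sample weights
    T = np.eye(2)[y]
    beta = np.linalg.solve(H.T @ (w[:, None] * H) + np.eye(hidden) / C,
                           H.T @ (w[:, None] * T))
    return W, b, beta

def predict(X, W, b, beta):
    return (np.tanh(X @ W + b) @ beta).argmax(axis=1)

W, b, beta = train_weighted_elm(X[:1000], y[:1000])
acc = (predict(X[1000:], W, b, beta) == y[1000:]).mean()
print(f"holdout accuracy: {acc:.3f}")
```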