7 results for "Maglietta Rosalia"
Search Results
2. Automated hippocampal segmentation in 3D MRI using random undersampling with boosting algorithm
- Author
- Maglietta, Rosalia, Amoroso, Nicola, Boccardi, Marina, Bruno, Stefania, Chincarini, Andrea, Frisoni, Giovanni B., Inglese, Paolo, Redolfi, Alberto, Tangaro, Sabina, Tateo, Andrea, Bellotti, Roberto, and the Alzheimer's Disease Neuroimaging Initiative
- Published
- 2016
- Full Text
- View/download PDF
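The title above names random undersampling with boosting (the RUSBoost family of methods). The paper's actual MRI segmentation pipeline is not reproduced here; the following is a minimal stdlib-Python sketch of the general idea only, with every name (`rus_boost`, `stump_train`, the toy 1-D features) invented for illustration: each boosting round fits a decision stump on a subsample in which the majority class is randomly undersampled to match the minority class, then examples are reweighted AdaBoost-style.

```python
import math
import random

def stump_train(X, y, w):
    # exhaustive search for the weighted-error-minimising 1-D threshold rule
    best = None
    for j in range(len(X[0])):
        for t in sorted({x[j] for x in X}):
            for pol in (1, -1):
                err = sum(wi for xi, yi, wi in zip(X, y, w)
                          if (pol if xi[j] >= t else -pol) != yi)
                if best is None or err < best[0]:
                    best = (err, (j, t, pol))
    return best[1]

def stump_predict(model, x):
    j, t, pol = model
    return pol if x[j] >= t else -pol

def rus_boost(X, y, rounds=10, seed=0):
    # Each round: randomly undersample the majority class (-1) to balance
    # the training set, fit a stump, then reweight examples AdaBoost-style.
    rng = random.Random(seed)
    n = len(X)
    w = [1.0 / n] * n
    minority = [i for i in range(n) if y[i] == 1]
    majority = [i for i in range(n) if y[i] == -1]
    ensemble = []
    for _ in range(rounds):
        sub = minority + rng.sample(majority, len(minority))
        model = stump_train([X[i] for i in sub], [y[i] for i in sub],
                            [w[i] for i in sub])
        err = sum(w[i] for i in range(n) if stump_predict(model, X[i]) != y[i])
        err = min(max(err, 1e-10), 1 - 1e-10)    # guard against log(0)
        alpha = 0.5 * math.log((1 - err) / err)
        w = [wi * math.exp(-alpha * yi * stump_predict(model, xi))
             for wi, yi, xi in zip(w, y, X)]
        total = sum(w)
        w = [wi / total for wi in w]
        ensemble.append((alpha, model))
    return ensemble

def rus_predict(ensemble, x):
    return 1 if sum(a * stump_predict(m, x) for a, m in ensemble) >= 0 else -1
```

On toy data where one feature separates the classes this recovers the boundary; real hippocampal segmentation would of course use rich voxel-level features and stronger base learners.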
3. Learning Analytics: Analysis of Methods for Online Assessment.
- Author
- Renò, Vito, Stella, Ettore, Patruno, Cosimo, Capurso, Alessandro, Dimauro, Giovanni, and Maglietta, Rosalia
- Subjects
- DIGITAL learning, TEACHING methods, SUPERVISED learning, STATISTICAL learning, ONLINE education, LEARNING Management System
- Abstract
Assessment is a fundamental part of teaching and learning. With the advent of online learning platforms, the concept of assessment has changed. In classical teaching, assessment is performed by an assessor, while in an online learning environment it can also take place automatically. The main purpose of this paper is to carry out a study on Learning Analytics, focusing in particular on the development of methodologies useful for the evaluation of learners. The goal of this work is to define an effective learning model that uses Educational Data to predict the outcome of a learning process. Supervised statistical learning techniques were studied and developed for the analysis of the OULAD benchmark dataset. The evaluation of learners was performed by making binary predictions about passing or failing a course, using features related to the learner's intermediate performance as well as their interactions with the e-learning platform. The Random Forest classification algorithm and other ensemble strategies were used to perform the task. The models trained on the OULAD dataset performed excellently, reaching an accuracy of 95% in predicting the students' learning assessment. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
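The entry above reports a Random Forest reaching 95% accuracy on OULAD. Neither that dataset nor a real forest library is used below; this is a hedged stdlib sketch of the bagging idea behind Random Forests, with hypothetical engagement features (clicks on the platform, mean intermediate score) standing in for the actual OULAD attributes, and stumps standing in for full decision trees.

```python
import random

def fit_stump(X, y):
    # best axis-aligned threshold rule by misclassification count
    best = None
    for j in range(len(X[0])):
        for t in {x[j] for x in X}:
            for pol in (1, -1):
                err = sum((pol if x[j] >= t else -pol) != yi
                          for x, yi in zip(X, y))
                if best is None or err < best[0]:
                    best = (err, j, t, pol)
    _, j, t, pol = best
    return lambda x: pol if x[j] >= t else -pol

def bagged_ensemble(X, y, trees=25, seed=0):
    # Random-Forest-flavoured bagging: each "tree" (here just a stump)
    # is fit on a bootstrap resample; prediction is a majority vote.
    rng = random.Random(seed)
    models = []
    for _ in range(trees):
        idx = [rng.randrange(len(X)) for _ in range(len(X))]
        models.append(fit_stump([X[i] for i in idx], [y[i] for i in idx]))
    return lambda x: 1 if sum(m(x) for m in models) >= 0 else -1

# hypothetical features: [clicks on the e-learning platform, mean quiz score]
passed = [[200, 80], [150, 70], [300, 90], [180, 75]]   # label +1 (pass)
failed = [[20, 30], [50, 40], [10, 20], [40, 35]]       # label -1 (fail)
predict = bagged_ensemble(passed + failed, [1] * 4 + [-1] * 4)
```

A real Random Forest also subsamples features at each split and grows deep trees; the vote-of-bootstrapped-learners structure is the part sketched here.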
4. Parallel selective sampling method for imbalanced and large data classification.
- Author
- D’Addabbo, Annarita and Maglietta, Rosalia
- Subjects
- PARALLEL computers, STATISTICAL sampling, DATA analysis, CLASSIFICATION algorithms, SUPPORT vector machines
- Abstract
Several applications aim to identify rare events in very large data sets. Classification algorithms may face severe limitations on large data sets and show performance degradation due to class imbalance. Many solutions have been presented in the literature to deal with huge amounts of data or with class imbalance, but separately. In this paper we assessed the performance of a novel method, Parallel Selective Sampling (PSS), able to select data from the majority class to reduce imbalance in large data sets. PSS was combined with Support Vector Machine (SVM) classification. PSS-SVM showed excellent performance on synthetic data sets, much better than SVM alone. Moreover, we showed that on real data sets PSS-SVM classifiers performed slightly better than SVM and RUSBoost classifiers, with reduced processing times. In fact, the proposed strategy was conceived and designed for parallel and distributed computing. In conclusion, PSS-SVM is a valuable alternative to SVM and RUSBoost for the classification of huge and imbalanced data, owing to its accurate statistical predictions and low computational complexity. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
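The abstract says only that PSS selects majority-class points to reduce imbalance and was designed for parallel computing; it does not spell out the selection rule. One plausible, simplified reading, entirely an assumption and not the authors' algorithm, keeps the majority points nearest to the minority class (i.e., near the would-be decision boundary) and scores chunks in parallel:

```python
import math
from concurrent.futures import ThreadPoolExecutor

def selective_undersample(majority, minority, keep_ratio=0.5, workers=4):
    # Keep only the majority-class points closest to the minority class;
    # distance scoring is embarrassingly parallel, so chunks of the
    # majority class are farmed out to a thread pool.
    def dist_to_minority(p):
        return min(math.dist(p, q) for q in minority)

    def score_chunk(chunk):
        return [(dist_to_minority(p), p) for p in chunk]

    size = max(1, len(majority) // workers)
    chunks = [majority[i:i + size] for i in range(0, len(majority), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        scored = [s for part in pool.map(score_chunk, chunks) for s in part]
    scored.sort(key=lambda pair: pair[0])
    keep = max(1, int(len(majority) * keep_ratio))
    return [p for _, p in scored[:keep]]
```

The reduced, more balanced set would then be handed to an SVM, as in the PSS-SVM pipeline the abstract describes.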
5. On classification of signals represented with data-dependent overcomplete dictionaries.
- Author
- Maglietta, Rosalia and Ancona, Nicola
- Subjects
- ENCYCLOPEDIAS & dictionaries, KERNEL functions, SUPPORT vector machines, SUPERVISED learning, GENERALIZATION, SIGNAL detection (Psychology)
- Abstract
This paper focuses on the problem of how data representation influences the generalization error of kernel-based learning machines such as support vector machines (SVMs). We analyse the effects of sparse and dense data representations on the generalization error of SVMs. We show that, with sparse representations, the performance of classifiers belonging to hypothesis spaces induced by polynomial or Gaussian kernel functions reduces to that of linear classifiers. Sparse representations reduce the generalization error as long as the representation is not too sparse, as happens with very large dictionaries. Dense data representations reduce the generalization error even with very large dictionaries. We use two schemes for representing data in data-independent overcomplete Haar and Gabor dictionaries, and measure the generalization error of SVMs on benchmark datasets. We study sparse and dense representations in the case of data-dependent overcomplete dictionaries and show how this leads to principal component analysis. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
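The claim that sparse representations collapse kernel classifiers to linear behaviour can be illustrated with a toy numerical check (this is a flavour of the result, not the paper's proof): for maximally sparse, unit-norm one-hot codes, ||x − y||² = 2 − 2⟨x, y⟩, so the Gaussian kernel is a fixed monotone function of the linear kernel and orders points exactly as a linear classifier would.

```python
import math

def linear_k(x, y):
    return sum(a * b for a, b in zip(x, y))

def gaussian_k(x, y, sigma=1.0):
    d2 = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-d2 / (2 * sigma ** 2))

# Maximally sparse (one-hot) unit-norm codes: ||x - y||^2 = 2 - 2<x, y>,
# so with sigma = 1 the Gaussian kernel equals exp(-(2 - 2*linear)/2),
# a fixed, monotone increasing function of the linear kernel.
codes = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
for x in codes:
    for y in codes:
        lin = linear_k(x, y)
        gau = gaussian_k(x, y)
        assert abs(gau - math.exp(-(2 - 2 * lin) / 2)) < 1e-12
```

Dense codes break this correspondence, which is one intuition for why the two regimes behave differently in the paper's experiments.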
6. Selection of relevant genes in cancer diagnosis based on their prediction accuracy
- Author
- Maglietta, Rosalia, D’Addabbo, Annarita, Piepoli, Ada, Perri, Francesco, Liuni, Sabino, Pesole, Graziano, and Ancona, Nicola
- Subjects
- COLON cancer diagnosis, CANCER diagnosis, PATHOLOGY, GENES, ARTIFICIAL intelligence in medicine, TUMORS
- Abstract
Motivations: One of the main problems in cancer diagnosis using DNA microarray data is selecting genes relevant to the pathology by analyzing their expression profiles in tissues under two different phenotypic conditions. The question we pose is the following: how do we measure the relevance of a single gene for a given pathology? Methods: A gene is relevant for a particular disease if we are able to correctly predict the occurrence of the pathology in new patients on the basis of its expression level alone. In other words, a gene is informative for the disease if its expression levels are useful for training a classifier able to generalize, that is, able to correctly predict the status of new patients. In this paper we present a selection-bias-free, statistically well-founded method for finding relevant genes on the basis of their classification ability. Results: We applied the method to a colon cancer data set and produced a list of relevant genes, ranked by their prediction accuracy. Out of more than 6500 available genes, we found 54 overexpressed in normal tissues and 77 overexpressed in tumor tissues having prediction accuracy greater than with p-value . Conclusions: The relevance of the selected genes was assessed (a) statistically, evaluating the p-value of the estimated prediction accuracy of each gene; (b) biologically, confirming the involvement of many genes in generic carcinogenic processes and in particular in those of the colon; (c) comparatively, verifying the presence of these genes in other studies on the same data set. [Copyright Elsevier]
- Published
- 2007
- Full Text
- View/download PDF
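The method above ranks genes by how well each one alone predicts the phenotype in held-out patients. The authors' exact selection-bias-free protocol is not reproduced here; this is a simplified cross-validated sketch with synthetic expression values and a hypothetical midpoint-threshold classifier as the single-gene learner.

```python
import random

def gene_accuracy(expr, labels, folds=4, seed=0):
    # Cross-validated accuracy of a one-gene classifier: threshold at the
    # midpoint of the two class means computed on the training fold, then
    # predict the held-out patients.
    rng = random.Random(seed)
    idx = list(range(len(expr)))
    rng.shuffle(idx)
    hits = 0
    for f in range(folds):
        test = idx[f::folds]
        train = [i for i in idx if i not in test]
        def class_mean(c):
            members = [expr[i] for i in train if labels[i] == c]
            return sum(members) / max(1, len(members))
        mu0, mu1 = class_mean(0), class_mean(1)
        thr, up = (mu0 + mu1) / 2, mu1 >= mu0  # direction of over-expression
        for i in test:
            pred = 1 if (expr[i] >= thr) == up else 0
            hits += pred == labels[i]
    return hits / len(expr)

# synthetic gene over-expressed in tumor (label 1) versus normal (label 0)
expr = [5.1, 6.0, 5.5, 6.2, 1.0, 0.8, 1.2, 0.9]
labels = [1, 1, 1, 1, 0, 0, 0, 0]
```

Running `gene_accuracy` over every gene and sorting by the estimate gives the kind of ranked relevance list the abstract describes; the paper additionally attaches a p-value to each estimate.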
7. Data representations and generalization error in kernel based learning machines
- Author
- Ancona, Nicola, Maglietta, Rosalia, and Stella, Ettore
- Subjects
- KERNEL functions, MACHINE learning, GEOMETRIC function theory, MACHINE theory
- Abstract
This paper focuses on the problem of how data representation influences the generalization error of kernel-based learning machines such as support vector machines (SVMs) for classification. Frame theory provides a well-founded mathematical framework for representing data in many different ways. We analyze the effects of sparse and dense data representations on the generalization error of such learning machines, measured by the leave-one-out error given a finite amount of training data. We show that, in the case of sparse data representations, the generalization error of an SVM trained using polynomial or Gaussian kernel functions is equal to that of a linear SVM. This is equivalent to saying that the capacity for separating points of functions belonging to hypothesis spaces induced by polynomial or Gaussian kernel functions reduces to the capacity of a separating hyperplane in the input space. Moreover, we show that, in general, sparse data representations increase or leave unchanged the generalization error of kernel-based methods. Dense data representations, on the contrary, reduce the generalization error in the case of very large frames. We use two different schemes for representing data in overcomplete systems of Haar and Gabor functions, and measure SVM generalization error on benchmark data sets. [Copyright Elsevier]
- Published
- 2006
- Full Text
- View/download PDF
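The generalization error in this line of work is measured with leave-one-out: hold out each training point in turn, refit on the rest, and count mistakes on the held-out point. Below is a generic stdlib sketch of that estimator; the nearest-class-mean learner is merely a stand-in (the papers use SVMs), and all names are invented for illustration.

```python
def loo_error(X, y, fit_predict):
    # Leave-one-out error: train on all points but one, test on the
    # held-out point, and average the misclassifications.
    errs = 0
    for i in range(len(X)):
        Xtr = X[:i] + X[i + 1:]
        ytr = y[:i] + y[i + 1:]
        errs += fit_predict(Xtr, ytr, X[i]) != y[i]
    return errs / len(X)

def nearest_class_mean(Xtr, ytr, x):
    # simple stand-in learner: predict the label of the nearer class mean
    def mean(pts):
        return [sum(c) / len(pts) for c in zip(*pts)]
    def d2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    m1 = mean([p for p, lab in zip(Xtr, ytr) if lab == 1])
    m0 = mean([p for p, lab in zip(Xtr, ytr) if lab == -1])
    return 1 if d2(x, m1) <= d2(x, m0) else -1
```

Because `loo_error` takes the learner as a parameter, the same harness can compare, say, a linear and a Gaussian-kernel classifier under different data representations, which is the kind of measurement the paper reports.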
Discovery Service for Jio Institute Digital Library