44 results for '"Doulamis, Nikolaos"'
Search Results
2. Simultaneous Precise Localization And Classification of metal rust defects for robotic-driven maintenance and prefabrication using residual attention U-Net
- Author
- Katsamenis, Iason, Doulamis, Nikolaos, Doulamis, Anastasios, Protopapadakis, Eftychios, and Voulodimos, Athanasios
- Published
- 2022
- Full Text
- View/download PDF
3. The Plegma dataset: Domestic appliance-level and aggregate electricity demand with metadata from Greece.
- Author
- Athanasoulias, Sotirios, Guasselli, Fernanda, Doulamis, Nikolaos, Doulamis, Anastasios, Ipiotis, Nikolaos, Katsari, Athina, Stankovic, Lina, and Stankovic, Vladimir
- Subjects
- ELECTRIC power consumption, AGGREGATE demand, SMART meters, CONSUMPTION (Economics), SMART power grids, MACHINE learning, ENERGY consumption, METADATA, ACQUISITION of data
- Abstract
The growing availability of smart meter data has facilitated the development of energy-saving services like demand response, personalized energy feedback, and non-intrusive-load-monitoring applications, all of which heavily rely on advanced machine learning algorithms trained on energy consumption datasets. To ensure the accuracy and reliability of these services, real-world smart meter data collection is crucial. The Plegma dataset described in this paper addresses this need by providing whole-house aggregate loads and appliance-level consumption measurements at 10-second intervals from 13 different households over a period of one year. It also includes environmental data such as humidity and temperature, building characteristics, demographic information, and user practice routines to enable quantitative as well as qualitative analysis. Plegma is the first high-frequency electricity measurements dataset in Greece, capturing the consumption behavior of people in the Mediterranean area who use devices not commonly included in other datasets, such as AC and electric-water boilers. The dataset comprises 218 million readings from 88 installed meters and sensors. The collected data are available in CSV format. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
4. Spatio-temporal summarization of dance choreographies
- Author
- Rallis, Ioannis, Doulamis, Nikolaos, Doulamis, Anastasios, Voulodimos, Athanasios, and Vescoukis, Vassilios
- Published
- 2018
- Full Text
- View/download PDF
5. Automatic crack detection for tunnel inspection using deep learning and heuristic image post-processing
- Author
- Protopapadakis, Eftychios, Voulodimos, Athanasios, Doulamis, Anastasios, Doulamis, Nikolaos, and Stathaki, Tania
- Published
- 2019
- Full Text
- View/download PDF
6. Ensemble Learning for Blending Gridded Satellite and Gauge-Measured Precipitation Data.
- Author
- Papacharalampous, Georgia, Tyralis, Hristos, Doulamis, Nikolaos, and Doulamis, Anastasios
- Subjects
- BOOSTING algorithms, MACHINE learning, ARTIFICIAL neural networks, INDEPENDENT variables, RANDOM forest algorithms, DATABASES
- Abstract
Regression algorithms are regularly used for improving the accuracy of satellite precipitation products. In this context, satellite precipitation and topography data are the predictor variables, and gauge-measured precipitation data are the dependent variables. Alongside this, it is increasingly recognised in many fields that combinations of algorithms through ensemble learning can lead to substantial predictive performance improvements. Still, a sufficient number of ensemble learners for improving the accuracy of satellite precipitation products and their large-scale comparison are currently missing from the literature. In this study, we work towards filling in this specific gap by proposing 11 new ensemble learners in the field and by extensively comparing them. We apply the ensemble learners to monthly data from the PERSIANN (Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks) and IMERG (Integrated Multi-satellitE Retrievals for GPM) gridded datasets that span over a 15-year period and over the entire contiguous United States (CONUS). We also use gauge-measured precipitation data from the Global Historical Climatology Network monthly database, version 2 (GHCNm). The ensemble learners combine the predictions of six machine learning regression algorithms (base learners), namely the multivariate adaptive regression splines (MARS), multivariate adaptive polynomial splines (poly-MARS), random forests (RF), gradient boosting machines (GBM), extreme gradient boosting (XGBoost) and Bayesian regularized neural networks (BRNN), and each of them is based on a different combiner. The combiners include the equal-weight combiner, the median combiner, two best learners and seven variants of a sophisticated stacking method. The latter stacks a regression algorithm on top of the base learners to combine their independent predictions. Its seven variants are defined by seven different regression algorithms, specifically the linear regression (LR) algorithm and the six algorithms also used as base learners. The results suggest that sophisticated stacking performs significantly better than the base learners, especially when applied using the LR algorithm. It also beats the simpler combination methods. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
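The stacking idea described in the abstract above, where a regression algorithm is fitted on top of the base learners' predictions, can be sketched with scikit-learn. This is a minimal illustration on synthetic data standing in for the satellite predictors and gauge measurements, with the base-learner set trimmed to two; it is not the paper's implementation.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor, StackingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# Synthetic stand-in for satellite-precipitation predictors and gauge targets.
X, y = make_regression(n_samples=500, n_features=5, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

base_learners = [
    ("rf", RandomForestRegressor(n_estimators=50, random_state=0)),
    ("gbm", GradientBoostingRegressor(random_state=0)),
]

# Stacking: a linear-regression combiner fitted on the base learners' predictions.
stack = StackingRegressor(estimators=base_learners, final_estimator=LinearRegression())
stack.fit(X_train, y_train)

# Simple equal-weight (mean) combiner for comparison.
preds = np.column_stack([est.fit(X_train, y_train).predict(X_test)
                         for _, est in base_learners])
mean_combined = preds.mean(axis=1)

mse_stack = mean_squared_error(y_test, stack.predict(X_test))
mse_mean = mean_squared_error(y_test, mean_combined)
```

The `final_estimator` plays the role of the paper's "combiner"; swapping it for any of the base learners yields the other stacking variants described above.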
7. Comparison of Machine Learning Algorithms for Merging Gridded Satellite and Earth-Observed Precipitation Data.
- Author
- Papacharalampous, Georgia, Tyralis, Hristos, Doulamis, Anastasios, and Doulamis, Nikolaos
- Subjects
- MACHINE learning, ARTIFICIAL neural networks, INDEPENDENT variables, RANDOM forest algorithms, ERROR functions, BOOSTING algorithms
- Abstract
Gridded satellite precipitation datasets are useful in hydrological applications as they cover large regions with high density. However, they are not accurate in the sense that they do not agree with ground-based measurements. An established means for improving their accuracy is to correct them by adopting machine learning algorithms. This correction takes the form of a regression problem, in which the ground-based measurements have the role of the dependent variable and the satellite data are the predictor variables, together with topography factors (e.g., elevation). Most studies of this kind involve a limited number of machine learning algorithms and are conducted for a small region and for a limited time period. Thus, the results obtained through them are of local importance and do not provide more general guidance and best practices. To provide results that are generalizable and to contribute to the delivery of best practices, we here compare eight state-of-the-art machine learning algorithms in correcting satellite precipitation data for the entire contiguous United States and for a 15-year period. We use monthly data from the PERSIANN (Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks) gridded dataset, together with monthly earth-observed precipitation data from the Global Historical Climatology Network monthly database, version 2 (GHCNm). The results suggest that extreme gradient boosting (XGBoost) and random forests are the most accurate in terms of the squared error scoring function. The remaining algorithms can be ordered as follows, from the best to the worst: Bayesian regularized feed-forward neural networks, multivariate adaptive polynomial splines (poly-MARS), gradient boosting machines (gbm), multivariate adaptive regression splines (MARS), feed-forward neural networks and linear regression. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
8. A Low-Cost Gamified Urban Planning Methodology Enhanced with Co-Creation and Participatory Approaches.
- Author
- Kavouras, Ioannis, Sardis, Emmanuel, Protopapadakis, Eftychios, Rallis, Ioannis, Doulamis, Anastasios, and Doulamis, Nikolaos
- Abstract
Targeted nature-based small-scale interventions are an approach commonly adopted by urban developers. The public acceptance of their implementation could be improved through participation, emphasizing residents or shopkeepers located close to the areas of interest. In this work, we propose a methodology that combines 3D technology, based on open data sources, user-generated content, 3D software and game engines, for both minimizing the time and cost of the whole planning process and enhancing citizen participation. The proposed schemes are demonstrated in Piraeus (Greece) and Gladsaxe (Denmark). The core findings can be summarized as follows: (a) the time and cost are minimized by using online databases, (b) the gamification of the planning process enhances decision making and (c) the interactivity provided by the game engine inspired the participation of non-experts in the planning process (co-creation and co-evaluation), which decentralizes and democratizes the final planning solution. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
9. Comparison of Tree-Based Ensemble Algorithms for Merging Satellite and Earth-Observed Precipitation Data at the Daily Time Scale.
- Author
- Papacharalampous, Georgia, Tyralis, Hristos, Doulamis, Anastasios, and Doulamis, Nikolaos
- Subjects
- MACHINE learning, STATISTICAL learning, RANDOM forest algorithms, ALGORITHMS, DATABASES
- Abstract
Merging satellite products and ground-based measurements is often required for obtaining precipitation datasets that simultaneously cover large regions with high density and are more accurate than pure satellite precipitation products. Machine and statistical learning regression algorithms are regularly utilized in this endeavor. At the same time, tree-based ensemble algorithms are adopted in various fields for solving regression problems with high accuracy and low computational costs. Still, information on which tree-based ensemble algorithm to select for correcting satellite precipitation products for the contiguous United States (US) at the daily time scale is missing from the literature. In this study, we worked towards filling this methodological gap by conducting an extensive comparison between three algorithms of the category of interest, specifically between random forests, gradient boosting machines (gbm) and extreme gradient boosting (XGBoost). We used daily data from the PERSIANN (Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks) and the IMERG (Integrated Multi-satellitE Retrievals for GPM) gridded datasets. We also used earth-observed precipitation data from the Global Historical Climatology Network daily (GHCNd) database. The experiments referred to the entire contiguous US and additionally included the application of the linear regression algorithm for benchmarking purposes. The results suggest that XGBoost is the best-performing tree-based ensemble algorithm among those compared. Indeed, the mean relative improvements that it provided with respect to linear regression (for the case that the latter algorithm was run with the same predictors as XGBoost) are equal to 52.66%, 56.26% and 64.55% (for three different predictor sets), while the respective values are 37.57%, 53.99% and 54.39% for random forests, and 34.72%, 47.99% and 62.61% for gbm. Lastly, the results suggest that IMERG is more useful than PERSIANN in the context investigated. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
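The relative-improvement percentages quoted in the abstract above are naturally read as the reduction of a lower-is-better error score against the linear-regression benchmark. A small sketch of that reading (the exact formula is our assumption, not taken from the paper):

```python
def relative_improvement(score_model, score_baseline):
    """Percentage reduction of a lower-is-better error score vs. a baseline."""
    return 100.0 * (score_baseline - score_model) / score_baseline

# A model that halves the benchmark's error improves on it by 50%.
halved = relative_improvement(0.5, 1.0)

# A model matching the benchmark shows no improvement.
unchanged = relative_improvement(1.0, 1.0)
```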
10. A Prototype Machine Learning Tool Aiming to Support 3D Crowdsourced Cadastral Surveying of Self-Made Cities.
- Author
- Potsiou, Chryssy, Doulamis, Nikolaos, Bakalos, Nikolaos, Gkeli, Maria, Ioannidis, Charalabos, and Markouizou, Selena
- Subjects
- REAL property, MACHINE learning, INDOOR positioning systems, DATA modeling, MACHINE tools, BLUETOOTH technology
- Abstract
Land administration and management systems (LAMSs) have already made progress in the field of 3D Cadastre and the visualization of complex urban properties to support property markets and provide geospatial information for the sustainable management of smart cities. However, in less developed economies with informally developed urban areas—the so-called self-made cities—the 2D LAMSs are left behind. Usually, they are less effective and mainly incomplete, since a large number of informal constructions remain unregistered. This paper presents the latest results of innovative ongoing research aiming to structure, test and propose a low-cost yet sufficiently reliable methodology to support the simultaneous and fast implementation of both 2D land parcel and 3D property unit registration of informal, multi-story and unregistered constructions. An Indoor Positioning System (IPS) built upon low-cost Bluetooth technology, combined with an innovative machine learning algorithm and connected with a 3D LADM-based cadastral mapping mobile application, are the two key components of the technical solution under investigation. The proposed solution is tested on the first floor of a multi-room office building. The main conclusions concern the potential, usability and reliability of the method. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
11. A review of non-invasive sensors and artificial intelligence models for diabetic foot monitoring.
- Author
- Kaselimi, Maria, Protopapadakis, Eftychios, Doulamis, Anastasios, and Doulamis, Nikolaos
- Subjects
- DIABETIC foot, ARTIFICIAL intelligence, OPTICAL sensors, DETECTORS, TRUST
- Abstract
Diabetic foot complications have multiple adverse effects on a person's quality of life. Yet, efficient monitoring schemes can mitigate or postpone any disorders, mainly by detecting regions of interest early. Nowadays, optical sensors and artificial intelligence (AI) tools can contribute efficiently to such monitoring processes. In this work, we provide information on the adopted imaging schemes and related optical sensors on this topic. The analysis considers both the physiology of the patients and the characteristics of the sensors. Currently, there are multiple approaches considering both visible and infrared bands (multiple ranges), most of them coupled with various AI tools. The source of the data (sensor type) can support different monitoring strategies and imposes restrictions on the AI tools that should be used with it. This paper provides a comprehensive literature review of AI-assisted diabetic foot ulcer (DFU) monitoring methods, presenting the outcomes of a large number of recently published scholarly articles. Furthermore, the paper discusses the highlights of these methods and the challenges for transferring them into a practical and trustworthy framework for sufficient remote management of the patients. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
12. Tensor-Based Learning for Detecting Abnormalities on Digital Mammograms.
- Author
- Tzortzis, Ioannis N., Davradou, Agapi, Rallis, Ioannis, Kaselimi, Maria, Makantasis, Konstantinos, Doulamis, Anastasios, and Doulamis, Nikolaos
- Subjects
- MAMMOGRAMS, GENERAL Data Protection Regulation, 2016, ARTIFICIAL intelligence, DEEP learning, COMPUTER-aided diagnosis
- Abstract
In this study, we propose a tensor-based learning model to efficiently detect abnormalities on digital mammograms. Because the availability of medical data is limited and often restricted by GDPR (General Data Protection Regulation) compliance, the need for more sophisticated and less data-hungry approaches is urgent. Accordingly, our proposed artificial intelligence framework utilizes the canonical polyadic decomposition to decrease the trainable parameters of the wrapped Rank-R FNN model, leading to efficient learning using small amounts of data. Our model was evaluated on the open-source digital mammographic database INbreast and compared with state-of-the-art models in this domain. The experimental results show that the proposed solution performs well in comparison with other deep learning models, such as AlexNet and SqueezeNet, achieving 90% ± 4% accuracy and an F1 score of 84% ± 5%. Additionally, our framework tends to attain more robust performance with small amounts of data and is computationally lighter for inference purposes, due to the small number of trainable parameters. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
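The parameter saving that the abstract above attributes to the canonical polyadic (rank-1) decomposition can be illustrated numerically. The sketch below uses hypothetical patch dimensions and plain NumPy; it is not the Rank-R FNN itself, only the rank-1 weight factorization it builds on.

```python
import numpy as np

# Hypothetical (height, width, channels) dimensions of an input patch.
H, W, C = 16, 16, 3
rng = np.random.default_rng(0)

# Rank-1 CP factors: one weight vector per tensor mode.
w_h, w_w, w_c = rng.normal(size=H), rng.normal(size=W), rng.normal(size=C)

# The implied full weight tensor is the outer product of the three factors.
W_full = np.einsum("i,j,k->ijk", w_h, w_w, w_c)

dense_params = H * W * C   # parameters of an unconstrained weight tensor
rank1_params = H + W + C   # parameters under the rank-1 constraint

# The inner product <W_full, x> can be computed mode by mode,
# without ever materializing the full weight tensor.
x = rng.normal(size=(H, W, C))
full_response = np.sum(W_full * x)
factored_response = w_c @ (w_w @ (w_h @ x.reshape(H, -1)).reshape(W, C))
```

Here the constrained model needs 35 parameters instead of 768, which is the kind of reduction that makes learning from few samples feasible.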
13. Towards Trustworthy Energy Disaggregation: A Review of Challenges, Methods, and Perspectives for Non-Intrusive Load Monitoring.
- Author
- Kaselimi, Maria, Protopapadakis, Eftychios, Voulodimos, Athanasios, Doulamis, Nikolaos, and Doulamis, Anastasios
- Subjects
- TRUST, SIGNAL processing, MACHINE learning, SCIENTIFIC community, SCHOLARLY publishing
- Abstract
Non-intrusive load monitoring (NILM) is the task of disaggregating the total power consumption into its individual sub-components. Over the years, signal processing and machine learning algorithms have been combined to achieve this. Many publications and extensive research works on energy disaggregation, or NILM, have been devoted to bringing state-of-the-art methods to the desired performance. The initial interest of the scientific community in mathematically formulating and describing the NILM problem using machine learning tools has now shifted towards a more practical NILM. Currently, we are in the mature NILM period, where there is an attempt for NILM to be applied in real-life application scenarios. Thus, the complexity of the algorithms, transferability, reliability, practicality and, in general, trustworthiness are the main issues of interest. This review narrows the gap between the early, immature NILM era and the mature one. In particular, the paper provides a comprehensive literature review of NILM methods for residential appliances only. The paper analyzes, summarizes, and presents the outcomes of a large number of recently published scholarly articles. Furthermore, the paper discusses the highlights of these methods and introduces the research dilemmas that should be taken into consideration by researchers to apply NILM methods. Finally, we show the need for transferring the traditional disaggregation models into a practical and trustworthy framework. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
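The disaggregation setting that the review above addresses can be made concrete with a toy example: two synthetic appliance signals, an observed aggregate, and a deliberately naive threshold "disaggregator". Everything here is illustrative, not a method from the review.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 200  # time steps

# Synthetic appliance-level signals: the ground truth NILM tries to recover.
fridge = 120.0 * (rng.random(T) < 0.5)         # cycling low-power load
kettle = np.zeros(T)
kettle[50:60] = 2000.0                         # short high-power event

# Only the aggregate is observed in the non-intrusive setting.
aggregate = fridge + kettle + rng.normal(0, 5, T)  # plus metering noise

# A trivial event-based "disaggregator": flag kettle activity by thresholding.
kettle_estimate = np.where(aggregate > 1000.0, 2000.0, 0.0)
```

Real NILM models replace the threshold with learned sequence models, since most appliances are not separable by power level alone.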
14. Novel Insights in Spatial Epidemiology Utilizing Explainable AI (XAI) and Remote Sensing.
- Author
- Temenos, Anastasios, Tzortzis, Ioannis N., Kaselimi, Maria, Rallis, Ioannis, Doulamis, Anastasios, and Doulamis, Nikolaos
- Subjects
- REMOTE sensing, EPIDEMIOLOGY, VIRAL transmission, REMOTE-sensing images, COVID-19 pandemic, MULTIDIMENSIONAL databases, URBAN planning
- Abstract
The COVID-19 pandemic has affected many aspects of human life around the world, due to its tremendous outcomes on public health and socio-economic activities. Policy makers have tried to develop efficient responses based on technologies and advanced pandemic control methodologies, to limit the wide spreading of the virus in urban areas. However, techniques such as social isolation and lockdown are short-term solutions that minimize the spread of the pandemic in cities and do not invert long-term issues that derive from climate change, air pollution and urban planning challenges that enhance the spreading ability. Thus, it seems crucial to understand what kind of factors assist or prevent the wide spreading of the virus. Although AI frameworks have a very efficient predictive ability as data-driven procedures, they often struggle to identify strong correlations among multidimensional data and provide robust explanations. In this paper, we propose the fusion of a heterogeneous, spatio-temporal dataset that combines data from eight European cities spanning from 1 January 2020 to 31 December 2021 and describes atmospheric, socio-economic, health, mobility and environmental factors, all related to potential links with COVID-19. Remote sensing data are the key solution for monitoring the availability of public green spaces across cities in the study period. So, we evaluate the benefits of the NIR and RED bands of satellite images to calculate the NDVI and estimate the percentage of vegetation cover in each city for each week of our 2-year study. This novel dataset is evaluated by a tree-based machine learning algorithm that utilizes ensemble learning and is trained to make robust predictions on daily cases and deaths. Comparisons with other machine learning techniques justify its robustness on the regression metrics RMSE and MAE. Furthermore, the explainable frameworks SHAP and LIME are utilized to locate potential positive or negative influence of the factors at the global and local levels, with respect to our model's predictive ability. A variation of SHAP, namely treeSHAP, is utilized for our tree-based algorithm to make fast and accurate explanations. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
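The NDVI computation mentioned in the abstract above is standard: NDVI = (NIR - RED) / (NIR + RED). A small sketch with made-up reflectance values (the 0.4 "green" threshold is likewise an illustrative choice, not taken from the paper):

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index: (NIR - RED) / (NIR + RED)."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)  # eps guards against division by zero

# Dense vegetation reflects strongly in NIR and absorbs RED, so NDVI is high.
vegetated = ndvi(0.50, 0.08)
bare_soil = ndvi(0.30, 0.25)

# Fraction of "green" pixels, as used to track green space per city per week.
scene = ndvi(np.array([0.5, 0.3, 0.45]), np.array([0.08, 0.25, 0.10]))
green_fraction = float(np.mean(scene > 0.4))
```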
15. STAMINA: Bioinformatics Platform for Monitoring and Mitigating Pandemic Outbreaks.
- Author
- Bakalos, Nikolaos, Kaselimi, Maria, Doulamis, Nikolaos, Doulamis, Anastasios, Kalogeras, Dimitrios, Bimpas, Mathaios, Davradou, Agapi, Vlachostergiou, Aggeliki, Fotopoulos, Anaxagoras, Plakia, Maria, Karalis, Alexandros, Tsekeridou, Sofia, Anagnostopoulos, Themistoklis, Despotopoulou, Angela Maria, Bonavita, Ilaria, Petersen, Katrina, Pelepes, Leonidas, Voumvourakis, Lefteris, Anagnostou, Anastasia, and Groen, Derek
- Subjects
- PANDEMICS, DATA warehousing, BIOINFORMATICS, PREDICTION models, CUSTOMIZATION
- Abstract
This paper presents the components and integrated outcome of a system that aims to achieve early detection, monitoring and mitigation of pandemic outbreaks. The architecture of the platform aims at providing a number of pandemic-response-related services, on a modular basis, that allows for the easy customization of the platform to address users' needs per case. This customization is achieved through its ability to deploy only the necessary, loosely coupled services and tools for each case, and by providing a common authentication, data storage and data exchange infrastructure. This way, the platform can provide the necessary services without the burden of additional services that are not of use in the current deployment (e.g., predictive models for pathogens that are not endemic to the deployment area). All the decisions taken for the communication and integration of the tools that compose the platform adhere to this basic principle. The tools presented here, as well as their integration, are part of the STAMINA project. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
16. COVID-19 Spatio-Temporal Evolution Using Deep Learning at a European Level.
- Author
- Kavouras, Ioannis, Kaselimi, Maria, Protopapadakis, Eftychios, Bakalos, Nikolaos, Doulamis, Nikolaos, and Doulamis, Anastasios
- Subjects
- DEEP learning, SPATIOTEMPORAL processes, BOX-Jenkins forecasting, COVID-19, COVID-19 pandemic, VIRAL transmission
- Abstract
COVID-19 evolution imposes significant challenges for the European healthcare system. The heterogeneous spread of the pandemic within EU regions elicited a wide range of policies, such as school closures, transport restrictions, etc. However, these interventions are not accompanied by quantitative methods that would indicate their effectiveness. As a result, the efficacy of such policies on reducing the spread of the virus varies significantly. This paper investigates the effectiveness of using deep learning paradigms to accurately model the spread of COVID-19. The deep learning approaches proposed in this paper are able to effectively map the temporal evolution of a COVID-19 outbreak, while simultaneously taking policy interventions into account directly in the modelling process. Thus, our approach facilitates data-driven decision making by utilizing previous knowledge to train models that predict not only the spread of COVID-19, but also the effect of specific policy measures on minimizing this spread. Global models at the EU level are proposed, which can be successfully applied at the national level. These models use various inputs in order to successfully model the spatio-temporal variability of the phenomenon and obtain generalization abilities. The proposed models are compared against traditional epidemiological and Autoregressive Integrated Moving Average (ARIMA) models. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
17. ELECTRIcity: An Efficient Transformer for Non-Intrusive Load Monitoring.
- Author
- Sykiotis, Stavros, Kaselimi, Maria, Doulamis, Anastasios, and Doulamis, Nikolaos
- Subjects
- DEEP learning, VERNACULAR architecture, ELECTRIC transformers, HOUSEHOLD appliances, CONSUMPTION (Economics), ELECTRICITY
- Abstract
Non-Intrusive Load Monitoring (NILM) describes the process of inferring the consumption pattern of appliances by only having access to the aggregated household signal. Sequence-to-sequence deep learning models have been firmly established as state-of-the-art approaches for NILM, in an attempt to identify the pattern of the appliance power consumption signal within the aggregated power signal. Exceeding the limitations of recurrent models that have been widely used in sequential modeling, this paper proposes a transformer-based architecture for NILM. Our approach, called ELECTRIcity, utilizes transformer layers to accurately estimate the power signal of domestic appliances by relying entirely on attention mechanisms to extract global dependencies between the aggregate and the domestic appliance signals. Another added value of the proposed model is that ELECTRIcity works with minimal dataset pre-processing and without requiring data balancing. Furthermore, ELECTRIcity introduces an efficient training routine compared to other traditional transformer-based architectures. According to this routine, ELECTRIcity splits model training into unsupervised pre-training and downstream task fine-tuning, which yields increases in predictive accuracy and decreases in training time. Experimental results indicate ELECTRIcity's superiority compared to several state-of-the-art methods. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
18. Fall Detection Using Multi-Property Spatiotemporal Autoencoders in Maritime Environments.
- Author
- Katsamenis, Iason, Bakalos, Nikolaos, Karolou, Eleni Eirini, Doulamis, Anastasios, and Doulamis, Nikolaos
- Subjects
- DEEP learning, VIDEO surveillance, STREAMING video & television, ANOMALY detection (Computer security), COMPUTER vision
- Abstract
Man overboard is an emergency in which fast and efficient detection of the critical event is the key factor for the recovery of the victim. Its severity urges the utilization of intelligent video surveillance systems that monitor the ship's perimeter in real time and trigger the relevant alarms that initiate the rescue mission. In terms of deep learning analysis, since man overboard incidents occur rarely, they present a severe class imbalance problem, and thus, supervised classification methods are not suitable. To tackle this obstacle, we follow an alternative philosophy and present a novel deep learning framework that formulates man overboard identification as an anomaly detection task. The proposed system, in the absence of training data, utilizes a multi-property spatiotemporal convolutional autoencoder that is trained only on the normal situation. We explore the use of RGB video sequences to extract specific properties of the scene, such as gradient and saliency, and utilize the autoencoders to detect anomalies. To the best of our knowledge, this is the first time that man overboard detection is performed in a fully unsupervised manner while jointly learning the spatiotemporal features from RGB video streams. The algorithm achieved 97.30% accuracy and a 96.01% F1-score, surpassing the other state-of-the-art approaches significantly. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
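The reconstruction-error logic behind autoencoder-based anomaly detection, as described in the abstract above, can be sketched with a linear stand-in: PCA plays the role of the paper's convolutional spatiotemporal autoencoder, and the data are synthetic.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# "Normal" frames: low-dimensional structure plus noise (stand-in for deck video).
normal = rng.normal(size=(500, 2)) @ rng.normal(size=(2, 20)) \
         + 0.05 * rng.normal(size=(500, 20))

# Train the (linear) autoencoder on normal data only, as in the unsupervised setup.
ae = PCA(n_components=2).fit(normal)

def reconstruction_error(model, X):
    """Per-sample mean squared reconstruction error."""
    X_hat = model.inverse_transform(model.transform(X))
    return np.mean((X - X_hat) ** 2, axis=1)

# Threshold from the normal training distribution (e.g., the 99th percentile).
threshold = np.percentile(reconstruction_error(ae, normal), 99)

# An out-of-distribution "event" frame reconstructs poorly and is flagged.
event = 5.0 * rng.normal(size=(1, 20))
is_anomaly = bool(reconstruction_error(ae, event)[0] > threshold)
```

The paper's contribution lies in the learned spatiotemporal representation; the flagging step itself reduces to exactly this kind of thresholded reconstruction error.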
19. Deep Recurrent Neural Networks for Ionospheric Variations Estimation Using GNSS Measurements.
- Author
- Kaselimi, Maria, Voulodimos, Athanasios, Doulamis, Nikolaos, Doulamis, Anastasios, and Delikaraoglou, Demitris
- Subjects
- RECURRENT neural networks, DEEP learning, GLOBAL Positioning System, FEATURE extraction
- Abstract
Modeling ionospheric variability through proper total electron content (TEC) parameter estimation is a demanding, yet crucial, process for achieving better accuracy and rapid convergence in precise point positioning (PPP). In particular, the single-frequency PPP (SF-PPP) method lacks accuracy due to the difficulty of dealing adequately with the ionospheric error sources. In order to apply ionosphere corrections in techniques such as SF-PPP, external information from global ionosphere maps (GIMs) is crucial. In this article, we propose a deep learning model to efficiently predict TEC values and to replace the GIM-derived data, which inherently have a global character, with regional ones of equal or better accuracy. The proposed model is suitable for predicting the ionosphere delay at different locations of receiver stations. The model is tested during different periods of time, under different solar and geomagnetic conditions and for stations at various latitudes, providing robust estimations of the ionospheric activity at the regional level. Our proposed model is a hybrid model comprising a 1-D convolutional layer used for optimal feature extraction and stacked recurrent layers used for temporal time-series modeling. Thus, the model achieves good performance in TEC modeling compared to other state-of-the-art methods. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
20. Unsupervised 3D Motion Summarization Using Stacked Auto-Encoders.
- Author
- Protopapadakis, Eftychios, Rallis, Ioannis, Doulamis, Anastasios, Doulamis, Nikolaos, and Voulodimos, Athanasios
- Subjects
- MOTION capture (Human mechanics), ALGORITHMS, MOTION, STANDARD deviations
- Abstract
In this paper, a deep stacked auto-encoder (SAE) scheme followed by a hierarchical Sparse Modeling for Representative Selection (SMRS) algorithm is proposed to summarize dance video sequences, recorded using the VICON motion capture system. The SAE's main task is to reduce the redundant information embedded in the raw data and, thus, to improve summarization performance. This becomes apparent when two dancers are performing simultaneously and severe errors are encountered in the humans' joint points, due to dancers' occlusions in the 3D space. Four summarization algorithms are applied to extract the key frames: density-based, Kennard-Stone, conventional SMRS and its hierarchical scheme, called H-SMRS. Experiments have been carried out on real-life dance sequences of Greek traditional dances, while the results have been compared against ground-truth data selected by dance experts. The results indicate that H-SMRS, applied after the SAE information reduction module, extracts key frames which deviate in time by less than 0.3 s from the ones selected by the experts, with a standard deviation of 0.18 s. Thus, the proposed scheme can effectively represent the content of the dance sequence. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
21. Gaussian Process Regression Tuned by Bayesian Optimization for Seawater Intrusion Prediction.
- Author
- Kopsiaftis, George, Protopapadakis, Eftychios, Voulodimos, Athanasios, Doulamis, Nikolaos, and Mantoglou, Aristotelis
- Subjects
- SALTWATER encroachment, KRIGING, WELLHEAD protection, STANDARD deviations, GROUNDWATER management, LATIN hypercube sampling
- Abstract
Accurate prediction of the seawater intrusion extent is necessary for many applications, such as groundwater management or protection of coastal aquifers from water quality deterioration. However, most applications require a large number of simulations usually at the expense of prediction accuracy. In this study, the Gaussian process regression method is investigated as a potential surrogate model for the computationally expensive variable density model. Gaussian process regression is a nonparametric kernel-based probabilistic model able to handle complex relations between input and output. In this study, the extent of seawater intrusion is represented by the location of the 0.5 kg/m3 iso-chlore at the bottom of the aquifer (seawater intrusion toe). The initial position of the toe, expressed as the distance of the specific line from a number of observation points across the coastline, along with the pumping rates are the surrogate model inputs, whereas the final position of the toe constitutes the output variable set. The training sample of the surrogate model consists of 4000 variable density simulations, which differ not only in the pumping rate pattern but also in the initial concentration distribution. The Latin hypercube sampling method is used to obtain the pumping rate patterns. For comparison purposes, a number of widely used regression methods are employed, specifically regression trees and Support Vector Machine regression (linear and nonlinear). A Bayesian optimization method is applied to all the regressors, to maximize their efficiency in the prediction of seawater intrusion. The final results indicate that the Gaussian process regression method, albeit more time consuming, proved to be more efficient in terms of the mean absolute error (MAE), the root mean square error (RMSE), and the coefficient of determination (R²). [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
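The surrogate-modelling idea in the abstract above can be sketched with a minimal Gaussian process regressor. The code below is an illustration only: the inputs (toe distances at three observation points plus two pumping rates) and the smooth response function are synthetic stand-ins, not the paper's 4000 variable-density simulations.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=0.5):
    """Squared-exponential kernel between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def gpr_predict(X_train, y_train, X_test, noise=1e-4):
    """GP regression posterior mean at X_test (constant-mean prior)."""
    mu = y_train.mean()
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    alpha = np.linalg.solve(K, y_train - mu)
    return mu + rbf_kernel(X_test, X_train) @ alpha

rng = np.random.default_rng(0)
# Hypothetical inputs: initial toe distances at 3 points + 2 pumping rates.
X = rng.uniform(0.0, 1.0, size=(200, 5))
# Smooth nonlinear response standing in for the expensive variable-density model.
y = X[:, :3].sum(axis=1) - 0.5 * X[:, 3] ** 2 + 0.25 * np.sin(3 * X[:, 4])

pred = gpr_predict(X[:150], y[:150], X[150:])   # train on 150, test on 50
mae = np.abs(pred - y[150:]).mean()
print(f"surrogate MAE: {mae:.4f}")
```

Once fitted, the surrogate replaces each expensive simulation with a cheap kernel evaluation, which is exactly what makes the large Latin-hypercube ensembles in the abstract affordable.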
22. Tensor-Based Classification Models for Hyperspectral Data Analysis.
- Author
-
Makantasis, Konstantinos, Doulamis, Anastasios D., Doulamis, Nikolaos D., and Nikitakis, Antonis
- Subjects
HYPERSPECTRAL imaging systems ,TENSOR algebra ,FEEDFORWARD neural networks ,SUPPORT vector machines ,LOGISTIC regression analysis ,MACHINE learning - Abstract
In this paper, we present tensor-based linear and nonlinear models for hyperspectral data classification and analysis. By exploiting the principles of tensor algebra, we introduce new classification architectures, the weight parameters of which satisfy the rank-1 canonical decomposition property. Then, we propose learning algorithms to train both linear and nonlinear classifiers. The advantages of the proposed classification approach are that: 1) it significantly reduces the number of weight parameters required to train the model (and thus the respective number of training samples); 2) it provides a physical interpretation of model coefficients on the classification output; and 3) it retains the spatial and spectral coherency of the input samples. The linear tensor-based model exploits the principles of logistic regression, assuming the rank-1 canonical decomposition property among its weights. For the nonlinear classifier, we propose a modification of a feedforward neural network (FNN), called rank-1 FNN, since its weights again satisfy the rank-1 canonical decomposition property. An appropriate learning algorithm is also proposed to train the network. Experimental results and comparisons with state-of-the-art classification methods, either linear (e.g., linear support vector machine) or nonlinear (e.g., deep learning), indicate that the proposed scheme outperforms them, especially in cases where only a small number of training samples is available. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
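The rank-1 weight constraint at the heart of the linear tensor model can be illustrated in a few lines. The sketch below trains a logistic classifier whose weight matrix is the outer product of two vectors, on synthetic patch data invented for the example; the paper's actual training algorithm and hyperspectral data differ.

```python
import numpy as np

rng = np.random.default_rng(1)
S, B, n = 9, 8, 400          # spatial neighbours x spectral bands (toy patches)
X = rng.normal(size=(n, S, B))
u_true, v_true = rng.normal(size=S), rng.normal(size=B)
y = (np.einsum('nsb,s,b->n', X, u_true, v_true) > 0).astype(float)

# Rank-1 logistic regression: W = outer(u, v), so only S + B = 17 weights
# instead of S * B = 72 -- the parameter reduction claimed in the abstract.
u, v = np.full(S, 0.1), np.full(B, 0.1)
lr = 0.2
for _ in range(500):
    z = np.einsum('nsb,s,b->n', X, u, v)
    err = 1.0 / (1.0 + np.exp(-z)) - y        # dLoss/dz for the logistic loss
    u -= lr * np.einsum('n,nsb,b->s', err, X, v) / n
    v -= lr * np.einsum('n,nsb,s->b', err, X, u) / n

acc = ((np.einsum('nsb,s,b->n', X, u, v) > 0) == (y > 0.5)).mean()
print(f"training accuracy: {acc:.2f}")
```

Note the sign symmetry: (u, v) and (-u, -v) give the same weight matrix, so gradient descent can converge to either factorization.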
23. Data-Driven Background Subtraction Algorithm for In-Camera Acceleration in Thermal Imagery.
- Author
-
Makantasis, Konstantinos, Nikitakis, Antonios, Doulamis, Anastasios D., Doulamis, Nikolaos D., and Papaefstathiou, Ioannis
- Subjects
IMAGE processing ,MATHEMATICAL models ,INFRARED imaging ,ESTIMATION theory ,REAL-time computing ,HARDWARE - Abstract
Detection of moving objects in videos is a crucial step toward successful surveillance and monitoring applications. A key component for such tasks is called background subtraction and tries to extract regions of interest from the image background for further processing or action. For this reason, its accuracy and real-time performance are of great significance. Although effective background subtraction methods have been proposed, only a few of them take into consideration the special characteristics of thermal imagery. In this paper, we propose a background subtraction scheme, which models the thermal responses of each pixel as a mixture of Gaussians with an unknown number of components. Following a Bayesian approach, our method automatically estimates the mixture structure, while simultaneously it avoids over-/underfitting. The pixel density estimate is followed by an efficient and highly accurate updating mechanism, which permits our system to be automatically adapted to dynamically changing operation conditions. We propose a reference implementation of our method in reconfigurable hardware achieving both adequate performance and low power consumption. Adopting a high-level synthesis design flow, the demanding floating-point arithmetic operations are mapped onto reconfigurable hardware, demonstrating fast prototyping and on-field customization at the same time. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
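A minimal sketch of the per-pixel background modelling described above. For brevity it keeps a single running Gaussian per pixel instead of the paper's variable-component mixture, and the "thermal" frames are synthetic:

```python
import numpy as np

class RunningGaussianBG:
    """Per-pixel running Gaussian background model: a simplified stand-in
    for the paper's mixture with an unknown number of components."""
    def __init__(self, shape, lr=0.05, k=3.0):
        self.mean = np.zeros(shape)
        self.var = np.ones(shape)
        self.lr, self.k = lr, k
        self.initialised = False

    def apply(self, frame):
        if not self.initialised:
            self.mean[:] = frame
            self.initialised = True
        d2 = (frame - self.mean) ** 2
        fg = d2 > (self.k ** 2) * self.var        # pixels far from the model
        # online update keeps the model adapted to slowly drifting conditions
        self.mean += self.lr * (frame - self.mean)
        self.var += self.lr * (d2 - self.var)
        self.var = np.maximum(self.var, 1e-4)
        return fg

rng = np.random.default_rng(2)
bg = RunningGaussianBG((32, 32))
for _ in range(50):                                # static thermal background
    bg.apply(20.0 + 0.1 * rng.normal(size=(32, 32)))
frame = 20.0 + 0.1 * rng.normal(size=(32, 32))
frame[10:15, 10:15] += 5.0                         # warm moving object
mask = bg.apply(frame)
print(mask[10:15, 10:15].mean(), mask[:5, :5].mean())
```

The continuous update of mean and variance is what the abstract calls the updating mechanism; a full mixture adds per-component weights and a rule for spawning or pruning components.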
24. Dance Pose Identification from Motion Capture Data: A Comparison of Classifiers.
- Author
-
Protopapadakis, Eftychios, Voulodimos, Athanasios, Doulamis, Anastasios, Camarinopoulos, Stephanos, Doulamis, Nikolaos, and Miaoulis, Georgios
- Subjects
MOTION capture (Human mechanics) ,DANCE ,JOINTS (Anatomy) ,DETECTORS ,FOLK dancing - Abstract
In this paper, we scrutinize the effectiveness of classification techniques in recognizing dance types based on motion-captured human skeleton data. In particular, the goal is to identify poses which are characteristic for each dance performed, based on information on body joints, acquired by a Kinect sensor. The datasets used include sequences from six folk dances and their variations. Multiple pose identification schemes are applied using temporal constraints, spatial information, and feature space distributions for the creation of an adequate training dataset. The obtained results are evaluated and discussed. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
25. Deep Learning for Computer Vision: A Brief Review.
- Author
-
Voulodimos, Athanasios, Doulamis, Nikolaos, Doulamis, Anastasios, and Protopapadakis, Eftychios
- Subjects
- *
DEEP learning , *ARTIFICIAL neural networks , *COMPUTER vision , *SIGNAL denoising , *FACE perception , *BOLTZMANN machine - Abstract
Over the last few years, deep learning methods have been shown to outperform previous state-of-the-art machine learning techniques in several fields, with computer vision being one of the most prominent cases. This review paper provides a brief overview of some of the most significant deep learning schemes used in computer vision problems, that is, Convolutional Neural Networks, Deep Boltzmann Machines and Deep Belief Networks, and Stacked Denoising Autoencoders. A brief account of their history, structure, advantages, and limitations is given, followed by a description of their applications in various computer vision tasks, such as object detection, face recognition, action and activity recognition, and human pose estimation. Finally, a brief overview is given of future directions in designing deep learning schemes for computer vision problems and the challenges involved therein. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
26. Stacked Autoencoders for Outlier Detection in Over-the-Horizon Radar Signals.
- Author
-
Protopapadakis, Eftychios, Voulodimos, Athanasios, Doulamis, Anastasios, Doulamis, Nikolaos, Dres, Dimitrios, and Bimpas, Matthaios
- Subjects
OUTLIER detection ,RADAR signal processing ,SURFACE waves (Seismic waves) ,DEEP learning ,CLUSTER analysis (Statistics) - Abstract
Detection of outliers in radar signals is a considerable challenge in maritime surveillance applications. High-Frequency Surface-Wave (HFSW) radars have attracted significant interest as potential tools for long-range target identification and outlier detection at over-the-horizon (OTH) distances. However, a number of disadvantages, such as their low spatial resolution and presence of clutter, have a negative impact on their accuracy. In this paper, we explore the applicability of deep learning techniques for detecting deviations from the norm in behavioral patterns of vessels (outliers) as they are tracked from an OTH radar. The proposed methodology exploits the nonlinear mapping capabilities of deep stacked autoencoders in combination with density-based clustering. A comparative experimental evaluation shows promising results regarding the proposed methodology’s performance. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
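The reconstruction-based outlier idea can be sketched with a single linear autoencoder layer standing in for the paper's deep stacked autoencoders and density-based clustering; the "vessel track" features, network size, and threshold below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
# Toy data: normal samples lie on a 2-D subspace of a 10-D feature space,
# outliers do not (a crude proxy for anomalous vessel behaviour).
normal = rng.normal(size=(300, 2)) @ rng.normal(size=(2, 10))
outliers = rng.normal(size=(10, 10)) * 3.0
X = np.vstack([normal, outliers])

# One-hidden-layer linear autoencoder trained with plain gradient descent.
d, h = X.shape[1], 2
W1 = rng.normal(scale=0.1, size=(d, h))    # encoder weights
W2 = rng.normal(scale=0.1, size=(h, d))    # decoder weights
lr = 0.01
for _ in range(2000):
    Z = X[:300] @ W1                       # encode (train on normal data only)
    G = 2 * (Z @ W2 - X[:300]) / 300       # gradient of mean squared error
    W2 -= lr * Z.T @ G
    W1 -= lr * X[:300].T @ (G @ W2.T)

err = ((X @ W1 @ W2 - X) ** 2).sum(axis=1)   # reconstruction error per sample
thresh = np.percentile(err[:300], 99)        # tolerate ~1% of normal data
flagged = err > thresh
print(f"outliers detected: {flagged[300:].mean():.0%}")
```

Samples the autoencoder cannot reconstruct are flagged; the paper refines this by stacking nonlinear layers and clustering in the learned latent space.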
27. Event Detection in Twitter Microblogging.
- Author
-
Doulamis, Nikolaos D., Doulamis, Anastasios D., Kokkinos, Panagiotis, and Varvarigos, Emmanouel Manos
- Abstract
The millions of tweets submitted daily overwhelm users who find it difficult to identify content of interest, revealing the need for event detection algorithms in Twitter. Such algorithms are proposed in this paper covering both short (identifying what is currently happening) and long term periods (reviewing the most salient recently submitted events). For both scenarios, we propose fuzzy-represented, temporally evolving, tweet-based information-theoretic metrics to model Twitter dynamics. The Riemannian distance is also exploited with respect to words’ signatures to minimize temporal effects due to submission delays. Events are detected through a multiassignment graph partitioning algorithm that: 1) optimally retains maximum coherence within a cluster, while 2) allowing a word to belong to several clusters (events). Experimental results on real-life data demonstrate that our approach outperforms other methods. [ABSTRACT FROM PUBLISHER]
- Published
- 2016
- Full Text
- View/download PDF
28. TECHNICAL ASPECTS FOR THE CREATION OF A MULTI-DIMENSIONAL LAND INFORMATION SYSTEM.
- Author
-
Ioannidis, Charalabos, Potsiou, Chryssy, Soile, Sofia, Verykokou, Styliani, Mourafetis, George, and Doulamis, Nikolaos
- Subjects
URBAN land use ,DECISION making ,THREE-dimensional display systems - Abstract
The complexity of modern urban environments and civil demands for fast, reliable and affordable decision-making require not only a 3D Land Information System, which tends to replace traditional 2D LIS architectures, but also the handling of the time and scale parameters, that is, the 3D geometry of buildings at various time instances (4th dimension) and at various levels of detail (LoDs - 5th dimension). This paper describes and proposes solutions for technical aspects that need to be addressed for the 5D modelling pipeline. Such solutions include the creation of a 3D model, the application of a selective modelling procedure between various time instances and at various LoDs, enriched with cadastral and other spatial data, and a procedural modelling approach for the representation of the inner parts of the buildings. The methodology is based on automatic change detection algorithms for spatio-temporal analysis of the changes that took place in subsequent time periods, using dense image matching and structure from motion algorithms. The selective modelling approach allows a detailed modelling only for the areas where spatial changes are detected. The procedural modelling techniques use programming languages for the textual semantic description of a building; they require the modeller to describe its part-to-whole relationships. Finally, a 5D viewer is developed, in order to tackle existing limitations that accompany the use of global systems, such as Google Earth or Google Maps, as visualization software. An application based on the proposed methodology in an urban area is presented and it provides satisfactory results. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
29. 5D MODELLING: AN EFFICIENT APPROACH FOR CREATING SPATIOTEMPORAL PREDICTIVE 3D MAPS OF LARGE-SCALE CULTURAL RESOURCES.
- Author
-
Doulamis, Anastasios, Doulamis, Nikolaos, Ioannidis, Charalabos, Chrysouli, Christina, Grammalidis, Nikos, Dimitropoulos, Kosmas, Potsiou, Chryssy, Stathopoulou, Elisavet Konstantina, and Ioannides, Marinos
- Subjects
MODELING (Sculpture) ,GEOGRAPHY - Abstract
Outdoor large-scale cultural sites are mostly sensitive to environmental, natural and human-made factors, implying an imminent need for a spatio-temporal assessment to identify regions of potential cultural interest (material degradation, structuring, conservation). On the other hand, in Cultural Heritage research quite different actors are involved (archaeologists, curators, conservators, simple users), each with diverse needs. All these statements advocate that a 5D modelling approach (3D geometry plus time plus levels of detail) is ideally required for the preservation and assessment of outdoor large-scale cultural sites, which is currently implemented as a simple aggregation of 3D digital models at different times and levels of detail. The main bottleneck of such an approach is its complexity, making 5D modelling impossible to validate in real-life conditions. In this paper, a cost-effective and affordable framework for 5D modelling is proposed, based on a spatial-temporal dependent aggregation of 3D digital models, by incorporating a predictive assessment procedure to indicate which regions (surfaces) of an object should be reconstructed at higher levels of detail at the next time instances and which at lower ones. In this way, dynamic change history maps are created, indicating spatial probabilities of regions needing further 3D modelling at forthcoming instances. Using these maps, predictive assessment can be made, that is, surfaces can be localized within the objects where a high-accuracy reconstruction process needs to be activated at the forthcoming time instances. The proposed 5D Digital Cultural Heritage Model (5D-DCHM) is implemented using open interoperable standards based on the CityGML framework, which also allows the description of additional semantic metadata information. Visualization aspects are also supported to allow easy manipulation, interaction and representation of the 5D-DCHM geometry and the respective semantic information. The open source 3DCityDB incorporating a PostgreSQL geo-database is used to manage and manipulate the 3D data and their semantics. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
30. On the Exploration of Automatic Building Extraction from RGB Satellite Images Using Deep Learning Architectures Based on U-Net.
- Author
-
Temenos, Anastasios, Temenos, Nikos, Doulamis, Anastasios, and Doulamis, Nikolaos
- Subjects
DEEP learning ,REMOTE-sensing images ,CONVOLUTIONAL neural networks ,URBAN planning ,DATA mining ,FEATURE extraction - Abstract
Detecting and localizing buildings is of primary importance in urban planning tasks. Automating the building extraction process, however, has become attractive given the dominance of Convolutional Neural Networks (CNNs) in image classification tasks. In this work, we explore the effectiveness of the CNN-based architecture U-Net and its variations, namely, the Residual U-Net, the Attention U-Net, and the Attention Residual U-Net, in automatic building extraction. We showcase their robustness in feature extraction and information processing using exclusively RGB images, as they are a low-cost alternative to multi-spectral and LiDAR ones, selected from the SpaceNet 1 dataset. The experimental results show that U-Net achieves a 91.9% accuracy, whereas introducing residual blocks, attention gates, or a combination of both improves the accuracy of the vanilla U-Net to 93.6%, 94.0%, and 93.7%, respectively. Finally, the comparison between U-Net architectures and typical deep learning approaches from the literature highlights their increased performance in accurate building localization around corners and edges. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
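The encoder-decoder-with-skip-connection structure shared by all the U-Net variants above can be sketched shape-wise in plain NumPy. The filters are random and the skip is additive rather than concatenated, so this only illustrates the data flow, not a trained model:

```python
import numpy as np

def conv3x3_relu(x, w):
    """3x3 'same' convolution (zero padding) followed by ReLU; single channel."""
    H, W = x.shape
    xp = np.pad(x, 1)
    out = np.zeros_like(x)
    for i in range(3):
        for j in range(3):
            out += w[i, j] * xp[i:i + H, j:j + W]
    return np.maximum(out, 0.0)

def down(x):   # 2x2 max pooling halves the spatial resolution
    H, W = x.shape
    return x.reshape(H // 2, 2, W // 2, 2).max(axis=(1, 3))

def up(x):     # nearest-neighbour upsampling doubles it again
    return x.repeat(2, axis=0).repeat(2, axis=1)

rng = np.random.default_rng(4)
img = rng.random((64, 64))                 # toy single-band image patch
w = rng.normal(scale=0.3, size=(3, 3, 4))  # random filters, illustration only

e1 = conv3x3_relu(img, w[:, :, 0])         # encoder level 1 (64x64)
e2 = conv3x3_relu(down(e1), w[:, :, 1])    # encoder level 2 (32x32)
d1 = conv3x3_relu(up(e2) + e1, w[:, :, 2]) # decoder with additive skip connection
logits = conv3x3_relu(d1, w[:, :, 3])
mask = logits > logits.mean()              # toy building/background decision
print(img.shape, e2.shape, mask.shape)
```

The skip connection is what lets the decoder recover the sharp corners and edges the abstract highlights; the residual and attention variants modify the blocks but keep this overall topology.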
31. Multiclass Confusion Matrix Reduction Method and Its Application on Net Promoter Score Classification Problem.
- Author
-
Markoulidakis, Ioannis, Rallis, Ioannis, Georgoulas, Ioannis, Kopsiaftis, George, Doulamis, Anastasios, and Doulamis, Nikolaos
- Subjects
RECEIVER operating characteristic curves ,CLASSIFICATION algorithms ,CURVES ,MACHINE learning ,KEY performance indicators (Management) ,MATRICES (Mathematics) ,CUSTOMER experience - Abstract
The current paper presents a novel method for reducing a multiclass confusion matrix into a 2 × 2 version, enabling the exploitation of the relevant performance metrics and methods, such as the receiver operating characteristic and area under the curve, for the assessment of different classification algorithms. The reduction method is based on class grouping and leads to a special type of matrix called the reduced confusion matrix. The developed method is then exploited for the assessment of state-of-the-art machine learning algorithms applied on the net promoter score classification problem in the field of customer experience analytics, indicating the value of the proposed method in real-world classification problems. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
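The class-grouping reduction named in the abstract can be sketched as follows; the split into "positive" and "negative" class groups and the NPS labels below are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def reduce_confusion_matrix(cm, positive_classes):
    """Collapse a multiclass confusion matrix into a 2x2 one by grouping
    the given classes as 'positive' and all remaining classes as 'negative'."""
    cm = np.asarray(cm)
    pos = np.zeros(cm.shape[0], dtype=bool)
    pos[list(positive_classes)] = True
    tp = cm[np.ix_(pos, pos)].sum()      # positive truly, predicted positive
    fn = cm[np.ix_(pos, ~pos)].sum()     # positive truly, predicted negative
    fp = cm[np.ix_(~pos, pos)].sum()
    tn = cm[np.ix_(~pos, ~pos)].sum()
    return np.array([[tp, fn], [fp, tn]])

# NPS-style toy example: classes 0=detractor, 1=passive, 2=promoter;
# rows are true classes, columns are predicted classes.
cm = [[50, 10, 5],
      [8, 40, 12],
      [2, 9, 60]]
cm2 = reduce_confusion_matrix(cm, positive_classes=[2])
print(cm2)
```

Once reduced, binary tools such as ROC curves and AUC apply directly, which is the point of the reduction.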
32. A Few-Shot U-Net Deep Learning Model for COVID-19 Infected Area Segmentation in CT Images.
- Author
-
Voulodimos, Athanasios, Protopapadakis, Eftychios, Katsamenis, Iason, Doulamis, Anastasios, Doulamis, Nikolaos, and Ghamisi, Pedram
- Subjects
COMPUTED tomography ,COVID-19 ,IMAGE segmentation ,SUPERVISED learning ,WEIGHT training ,DEEP learning - Abstract
Recent studies indicate that detecting radiographic patterns on CT chest scans can yield high sensitivity and specificity for COVID-19 identification. In this paper, we scrutinize the effectiveness of deep learning models for semantic segmentation of pneumonia-infected areas in CT images for the detection of COVID-19. Traditional methods for CT scan segmentation exploit a supervised learning paradigm, so they (a) require large volumes of data for their training, and (b) assume fixed (static) network weights once the training procedure has been completed. Recently, to overcome these difficulties, few-shot learning (FSL) has been introduced as a general concept of network model training using a very small amount of samples. In this paper, we explore the efficacy of few-shot learning in U-Net architectures, allowing for a dynamic fine-tuning of the network weights as a few new samples are fed into the U-Net. Experimental results indicate improvement in the segmentation accuracy of identifying COVID-19 infected regions. In particular, using 4-fold cross-validation results of the different classifiers, we observed an improvement of 5.388 ± 3.046% for all test data regarding the IoU metric and a similar increment of 5.394 ± 3.015% for the F1 score. Moreover, the statistical significance of the improvement obtained using our proposed few-shot U-Net architecture compared with the traditional U-Net model was confirmed by applying the Kruskal-Wallis test (p-value = 0.026). [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
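The IoU and F1 metrics reported above can be computed for binary segmentation masks as follows (toy 8 × 8 masks, for illustration):

```python
import numpy as np

def iou_and_f1(pred, target):
    """IoU (Jaccard) and F1 (Dice) scores for binary segmentation masks."""
    pred, target = np.asarray(pred, bool), np.asarray(target, bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    total = pred.sum() + target.sum()
    iou = inter / union if union else 1.0       # empty masks count as perfect
    f1 = 2 * inter / total if total else 1.0
    return iou, f1

pred = np.zeros((8, 8), bool); pred[2:6, 2:6] = True      # predicted region
target = np.zeros((8, 8), bool); target[3:7, 3:7] = True  # ground-truth region
iou, f1 = iou_and_f1(pred, target)
print(round(iou, 3), round(f1, 3))
```

Both scores penalize the same mismatch differently: F1 is always at least as large as IoU for the same pair of masks, which is why the abstract reports the two side by side.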
33. Underwater Object Recognition Using Point-Features, Bayesian Estimation and Semantic Information.
- Author
-
Himri, Khadidja, Ridao, Pere, Gracias, Nuno, and Doulamis, Nikolaos
- Subjects
AUTONOMOUS underwater vehicles ,OBJECT recognition (Computer vision) ,POINT cloud ,TEXT recognition - Abstract
This paper proposes a 3D object recognition method for non-coloured point clouds using point features. The method is intended for application scenarios such as Inspection, Maintenance and Repair (IMR) of industrial sub-sea structures composed of pipes and connecting objects (such as valves, elbows and R-Tee connectors). The recognition algorithm uses a database of partial views of the objects, stored as point clouds, which is available a priori. The recognition pipeline has 5 stages: (1) Plane segmentation, (2) Pipe detection, (3) Semantic Object-segmentation and detection, (4) Feature based Object Recognition and (5) Bayesian estimation. To apply the Bayesian estimation, an object tracking method based on a new Interdistance Joint Compatibility Branch and Bound (IJCBB) algorithm is proposed. The paper studies the recognition performance depending on: (1) the point feature descriptor used, (2) the use (or not) of Bayesian estimation and (3) the inclusion of semantic information about the objects' connections. The methods are tested using an experimental dataset containing laser scans and Autonomous Underwater Vehicle (AUV) navigation data. The best results are obtained using the Clustered Viewpoint Feature Histogram (CVFH) descriptor, achieving recognition rates of 51.2%, 68.6% and 90%, respectively, clearly showing the advantages of using the Bayesian estimation (18% increase) and the inclusion of semantic information (21% further increase). [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
34. Accuracy Investigation of the Pose Determination of a VR System.
- Author
-
Bauer, Peter, Lienhart, Werner, Jost, Samuel, and Doulamis, Nikolaos
- Subjects
GYROSCOPES ,OPTICAL scanners ,INDUSTRIAL lasers ,SURFACE plates ,OPTICAL gyroscopes ,MIXED reality ,TRACKING algorithms - Abstract
The usage of VR gear in mixed reality applications demands a high position and orientation accuracy of all devices to achieve a satisfying user experience. This paper investigates the system behaviour of the VR system HTC Vive Pro at a testing facility that is designed for the calibration of highly accurate positioning instruments like geodetic total stations, tilt sensors, geodetic gyroscopes or industrial laser scanners. Although the experiments show a high reproducibility of the position readings within a few millimetres, the VR system has systematic effects with magnitudes of several centimetres. A tilt of about 0.4° of the reference plane with respect to the horizontal plane was detected. Moreover, our results demonstrate that the tracking algorithm faces problems when several lighthouses are used. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
35. Stacked Autoencoders Driven by Semi-Supervised Learning for Building Extraction from near Infrared Remote Sensing Imagery.
- Author
-
Protopapadakis, Eftychios, Doulamis, Anastasios, Doulamis, Nikolaos, and Maltezos, Evangelos
- Subjects
REMOTE sensing ,OPTICAL radar ,LIDAR ,INFRARED imaging ,SUPERVISED learning ,ESTIMATION theory ,SECURE Sockets Layer (Computer network protocol) - Abstract
In this paper, we propose a Stacked Autoencoder (SAE)-driven and Semi-Supervised Learning (SSL)-based Deep Neural Network (DNN) to extract buildings from relatively low-cost satellite near-infrared images. The novelty of our scheme is that we employ only an extremely small portion of labeled data for training the deep model, which constitutes less than 0.08% of the total data. This way, we significantly reduce the manual effort needed to complete an annotation process, and thus the time required for creating a reliable labeled dataset. On the contrary, we apply novel semi-supervised techniques to estimate soft labels (targets) for the vast amount of existing unlabeled data and then utilize these soft estimates to improve model training. Overall, four SSL schemes are employed: the Anchor Graph, the Safe Semi-Supervised Regression (SAFER), the Squared-loss Mutual Information Regularization (SMIR), and an equal-importance Weighted Average of them (WeiAve). To retain only the most meaningful information of the input data, labeled and unlabeled alike, we also employ a Stacked Autoencoder (SAE) trained in an unsupervised manner. This way, we handle noise in the input signals, attributed to dimensionality redundancy, without sacrificing meaningful information. Experimental results on the benchmark dataset of Vaihingen city in Germany indicate that our approach outperforms all state-of-the-art methods in the field using the same type of color orthoimages, despite the fact that a limited dataset is utilized (10 times less data or better, compared to other approaches), while our performance is close to the one achieved by highly expensive and much more precise input information, such as that derived from Light Detection and Ranging (LiDAR) sensors. In addition, the proposed approach can be easily expanded to handle any number of classes, including buildings, vegetation, and ground. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
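The soft-label idea, estimating targets for unlabeled samples from a tiny labeled set, can be sketched with standard graph-based label propagation; this is a generic stand-in, not the Anchor Graph, SAFER, or SMIR schemes the paper actually employs, and the data are synthetic blobs.

```python
import numpy as np

rng = np.random.default_rng(6)
# Two Gaussian blobs standing in for "building" / "non-building" samples.
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (100, 2))])
y_true = np.array([0] * 100 + [1] * 100)
labeled = np.array([0, 1, 100, 101])          # only 4 labeled samples (2%)

# RBF affinity graph; soft labels diffuse over it while known labels stay clamped.
d2 = ((X[:, None] - X[None]) ** 2).sum(-1)
W = np.exp(-d2 / 2.0)
np.fill_diagonal(W, 0.0)
P = W / W.sum(1, keepdims=True)               # row-stochastic transition matrix

F = np.full(len(X), 0.5)                      # soft label in [0, 1] per sample
F[labeled] = y_true[labeled]
for _ in range(100):
    F = P @ F                                 # propagate neighbours' soft labels
    F[labeled] = y_true[labeled]              # re-clamp the known labels

acc = ((F > 0.5) == y_true).mean()
print(f"soft-label accuracy with 4 labels: {acc:.2f}")
```

The resulting soft targets can then supervise a downstream network, mirroring the abstract's use of SSL estimates to train the DNN.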
36. A Machine Learning Based Classification Method for Customer Experience Survey Analysis.
- Author
-
Markoulidakis, Ioannis, Rallis, Ioannis, Georgoulas, Ioannis, Kopsiaftis, George, Doulamis, Anastasios, and Doulamis, Nikolaos
- Subjects
MACHINE learning ,QUALITY of service ,CUSTOMER satisfaction ,MARKET surveys ,CALL centers ,CLASSIFICATION - Abstract
Customer Experience (CX) is monitored through market research surveys, based on metrics like the Net Promoter Score (NPS) and the customer satisfaction for certain experience attributes (e.g., call center, website, billing, service quality, tariff plan). The objective of companies is to maximize NPS through the improvement of the most important CX attributes. However, statistical analysis suggests that there is a lack of clear and accurate association between NPS and the CX attributes' scores. In this paper, we address the aforementioned deficiency using a novel classification approach, which was developed based on logistic regression and tested with several state-of-the-art machine learning (ML) algorithms. The proposed method was applied on an extended data set from the telecommunication sector and the results were quite promising, showing a significant improvement in most statistical metrics. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
37. Automatic 3D Modeling and Reconstruction of Cultural Heritage Sites from Twitter Images.
- Author
-
Doulamis, Anastasios, Voulodimos, Athanasios, Protopapadakis, Eftychios, Doulamis, Nikolaos, and Makantasis, Konstantinos
- Abstract
This paper presents an approach for leveraging the abundance of images posted on social media like Twitter for large scale 3D reconstruction of cultural heritage landmarks. Twitter allows users to post short messages, including photos, describing a plethora of activities or events, e.g., tweets are used by travelers on vacation, capturing images from various cultural heritage assets. As such, a great number of images are available online, able to drive a successful 3D reconstruction process. However, reconstruction of any asset, based on images mined from Twitter, presents several challenges. There are three main steps that have to be considered: (i) tweets' content identification, (ii) image retrieval and filtering, and (iii) 3D reconstruction. The proposed approach first extracts key events from unstructured tweet messages and then identifies cultural activities and landmarks. The second stage is the application of a content-based filtering method so that only a small but representative portion of cultural images are selected to support fast 3D reconstruction. The proposed methods are experimentally evaluated using real-world data and comparisons verify the effectiveness of the proposed scheme. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
38. A Causal Long Short-Term Memory Sequence to Sequence Model for TEC Prediction Using GNSS Observations.
- Author
-
Kaselimi, Maria, Voulodimos, Athanasios, Doulamis, Nikolaos, Doulamis, Anastasios, and Delikaraoglou, Demitris
- Subjects
RECURRENT neural networks ,TIME delay estimation ,CAUSAL models ,PREDICTION models ,DEEP learning ,FORECASTING ,SCIENTIFIC community - Abstract
The necessity of predicting the spatio-temporal phenomenon of ionospheric variability is closely related to the requirement of many users to be able to obtain high accuracy positioning with low cost equipment. The Precise Point Positioning (PPP) technique is highly accepted by the scientific community as a means for providing a high level of position accuracy from a single receiver. However, its main drawback is the long convergence time to achieve centimeter-level accuracy in positioning. Hereby, we propose a deep learning-based approach for ionospheric modeling. This method exploits the advantages of Long Short-Term Memory (LSTM) Recurrent Neural Networks (RNN) for time series modeling and predicts the total electron content per satellite from a specific station by making use of a causal, supervised deep learning method. The scope of the proposed method is to compare and evaluate the between-satellites ionospheric delay estimation, and to aggregate the Total Electron Content (TEC) outcomes per satellite into a single solution over the station, thus constructing regional TEC models, in an attempt to replace Global Ionospheric Maps (GIM) data. Our proposed recurrent method for the prediction of vertical total electron content (VTEC) values is evaluated against the traditional Autoregressive (AR) and Autoregressive Moving Average (ARMA) methods, per satellite. The proposed model achieves an error lower than 1.5 TECU, which is slightly better than the accuracy of the current GIM products, currently about 2.0–3.0 TECU. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
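The AR baseline the LSTM is evaluated against can be sketched as a least-squares fit; the "TEC" series below is a synthetic diurnal-like signal, not real GNSS data.

```python
import numpy as np

def ar_design(series, p):
    """Design matrix of p lagged values (plus intercept) for an AR(p) model."""
    X = np.column_stack([series[p - i - 1: len(series) - i - 1] for i in range(p)])
    return np.column_stack([np.ones(len(X)), X])

def fit_ar(series, p):
    """Least-squares AR(p) fit: x_t ~ c + sum_i a_i * x_{t-i}."""
    coeffs, *_ = np.linalg.lstsq(ar_design(series, p), series[p:], rcond=None)
    return coeffs

t = np.arange(2000)
tec = 10 + 5 * np.sin(2 * np.pi * t / 96)            # diurnal-like cycle, toy TECU
tec += 0.2 * np.random.default_rng(5).normal(size=t.size)

p = 4
coeffs = fit_ar(tec[:1500], p)                       # fit on the first part
pred = ar_design(tec[1500 - p:], p) @ coeffs         # one-step-ahead on held-out part
mae = np.abs(pred - tec[1500:]).mean()
print(f"AR({p}) one-step MAE: {mae:.3f} TECU")
```

An LSTM replaces the fixed linear combination of lags with a learned nonlinear state, which is what lets it capture the longer-range dependencies the abstract targets.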
39. Choreographic Pattern Analysis from Heterogeneous Motion Capture Systems Using Dynamic Time Warping.
- Author
-
Rallis, Ioannis, Protopapadakis, Eftychios, Voulodimos, Athanasios, Doulamis, Nikolaos, Doulamis, Anastasios, and Bardis, Georgios
- Subjects
MOTION capture (Human mechanics) ,BINOCULAR vision ,DYNAMICAL systems ,TIME management ,DANCE companies ,CULTURAL property - Abstract
The convention for the safeguarding of Intangible Cultural Heritage (ICH) by UNESCO highlights the equal importance of intangible elements of cultural heritage to tangible ones. One of the most important domains of ICH is folkloric dances. A dance choreography is a time-varying 3D process (4D modelling), which includes dynamic co-interactions among different actors, emotional and style attributes, and supplementary elements, such as music tempo and costumes. Presently, research focuses on the use of depth acquisition sensors, to handle kinesiology issues. The extraction of skeleton data, in real time, contains a significant amount of information (data and metadata), allowing for various choreography-based analytics. In this paper, a trajectory interpretation method for Greek folkloric dances is presented. We focus on matching trajectories' patterns, existing in a choreographic database, to new ones originating from different sensor types such as VICON and Kinect II. Then, a Dynamic Time Warping (DTW) algorithm is proposed to find out similarities/dissimilarities among the choreographic trajectories. The goal is to evaluate the performance of the low-cost Kinect II sensor for dance choreography compared to the accurate but of high-cost VICON-based choreographies. Experimental results on real-life dances are carried out to show the effectiveness of the proposed DTW methodology and the ability of Kinect II to localize dances in 3D space. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
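The core DTW comparison can be sketched with the classic dynamic-programming recurrence; the trajectories below are toy 3-D curves, not captured skeleton data.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic Time Warping distance between two trajectories
    (rows = time steps, columns = coordinates)."""
    a, b = np.atleast_2d(a), np.atleast_2d(b)
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible alignments
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

t = np.linspace(0, 2 * np.pi, 50)
step = np.column_stack([np.cos(t), np.sin(t), 0 * t])       # toy joint trajectory
slow = np.column_stack([np.cos(t[::2]), np.sin(t[::2]), 0 * t[::2]])  # same move, half the frames
other = np.column_stack([t / 6, 0 * t, 0 * t])              # a different movement
print(dtw_distance(step, slow) < dtw_distance(step, other))
```

Because the warping path absorbs tempo differences, the same movement performed at half speed stays close under DTW, which is what makes it suitable for comparing VICON and Kinect captures of the same choreography.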
40. Recent Developments in Deep Learning for Engineering Applications.
- Author
-
Voulodimos, Athanasios, Doulamis, Nikolaos, Bebis, George, and Stathaki, Tania
- Subjects
- *
DEEP learning , *RECURRENT neural networks , *HIDDEN Markov models , *MATHEMATICAL convolutions , *ELECTROENCEPHALOGRAPHY
- Published
- 2018
- Full Text
- View/download PDF
41. Ionospheric modeling in GNSS positioning using deep learning models.
- Author
-
Kaselimi, Maria, Doulamis, Nikolaos, and Delikaraoglou, Demitris
- Subjects
- *
ARTIFICIAL neural networks , *DEEP learning , *SPEECH perception , *NETWORK performance , *TIME series analysis - Abstract
Deep learning techniques are used for capturing intricate structures of large-scale data by employing computational models of multiple processing layers that can learn and represent data with multiple levels of abstraction. Such methods include Convolutional Neural Networks, stacked auto-encoders and Long Short-Term Memory (LSTM) architectures. LSTM networks are suitable for dealing with time-dependent data through mapping input to output sequences as, for instance, in language modeling and speech recognition. One application that has recently attracted considerable attention within the geodetic community is the possibility of applying these techniques to account for the adverse effects of the ionospheric delays on the GNSS satellite signals. Ionospheric delay depends on three main factors: (i) the total electron content (TEC), (ii) the frequency of the GNSS signals, and (iii) the angle at which the signal enters the ionospheric layer. The combination of multi-frequency GNSS measurements allows us to remove most of the ionospheric effects. However, ionospheric prediction models have to be applied for single-frequency ionospheric estimation in order to remove this effect. This can be done using Global Ionospheric Maps (GIMs) available from the International GNSS Service (IGS) and evaluating the TEC effect via a mapping function involving the azimuth and elevation angles of the GNSS signals. This paper deals with a modeling approach suitable for predicting the ionospheric delay at different locations of the IGS network stations using the LSTM networks. We also incorporate a Bayesian optimization method for selecting the best configuration parameters of the LSTM network, thus improving the network's performance. The LSTM architecture models long-range dependencies in time series, making it appropriate for ionospheric modeling in GNSS positioning.
As experimental data we used actual GNSS observations from the global IGS network stations participating in the MGEX project, which provides various satellite signals from the currently available multiple navigation satellite systems. Slant TEC data (STEC) were obtained using the available GPS/GLONASS/Galileo signals after processing with various techniques, such as Precise Point Positioning (PPP). The combination of multi-GNSS signals in PPP allows us to investigate additional biases, such as Differential Code Biases (DCBs), that are closely related to the ionospheric delays. Consequently, a basic LSTM network structure is created, having as minimum inputs the following: time epoch, GNSS signal azimuth and elevation angle, and ionospheric delays for the previous and present observation epochs. The sequential GNSS observations are fed one by one in a chain-like structure, such that a single state is stored in a unique node with a single activation function at one timestep and then propagated to the next timestep. In this way, the LSTM algorithm is able to predict future ionospheric delay values. Topics discussed in the paper include the design of the LSTM network structure, the Bayesian optimization strategy, the training methods exploiting steepest-descent algorithms, data analysis, as well as preliminary testing results of the ionospheric delay predictions as compared, for instance, with the IGS ionosphere products and the widely used Klobuchar model. [ABSTRACT FROM AUTHOR]
- Published
- 2019
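The chain-like LSTM recurrence described in the abstract above (a single state stored at one timestep and propagated to the next) can be sketched in plain NumPy. This is a minimal illustration, not the authors' actual network: the input features (time epoch, azimuth, elevation, previous delay), hidden size, and random weights are all hypothetical stand-ins.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM timestep: gates computed from input x and previous hidden state h."""
    z = W @ x + U @ h + b        # stacked pre-activations for the four gates
    n = h.shape[0]
    i = sigmoid(z[0:n])          # input gate
    f = sigmoid(z[n:2*n])        # forget gate
    o = sigmoid(z[2*n:3*n])      # output gate
    g = np.tanh(z[3*n:4*n])      # candidate cell state
    c_new = f * c + i * g        # cell state carries the long-range memory
    h_new = o * np.tanh(c_new)   # hidden state is the timestep output
    return h_new, c_new

# Toy sequence: [epoch, azimuth, elevation, previous delay] per timestep (values illustrative)
rng = np.random.default_rng(0)
n_in, n_hid = 4, 8
W = rng.normal(0, 0.1, (4 * n_hid, n_in))
U = rng.normal(0, 0.1, (4 * n_hid, n_hid))
b = np.zeros(4 * n_hid)

h, c = np.zeros(n_hid), np.zeros(n_hid)
for t in range(5):               # feed observations one by one, chain-like
    x = rng.normal(size=n_in)
    h, c = lstm_step(x, h, c, W, U, b)

# A final linear readout would map h to the predicted ionospheric delay
w_out = rng.normal(0, 0.1, n_hid)
pred = float(w_out @ h)
```

In practice the weights would be trained with gradient descent (the abstract mentions steepest-descent training), and a Bayesian optimizer would tune hyperparameters such as `n_hid`.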
42. WaterSpy: A High Sensitivity, Portable Photonic Device for Pervasive Water Quality Analysis.
- Author
-
Doulamis, Nikolaos, Voulodimos, Athanasios, Doulamis, Anastasios, Bimpas, Matthaios, Angeli, Aikaterini, Bakalos, Nikolaos, Giusti, Alessandro, Philimis, Panayiotis, Varriale, Antonio, Ausili, Alessio, D'Auria, Sabato, Lampropoulos, George, Baer, Matthias, Schmauss, Bernhard, Freitag, Stephan, Lendl, Bernhard, Młynarczyk, Krzysztof, Sosna-Głębska, Aleksandra, Trajnerowicz, Artur, and Pawluczyk, Jarosław
- Subjects
- *WATER quality, *PHOTONICS, *QUANTUM cascade lasers, *DETECTORS, *ELECTRONICS
- Abstract
In this paper, we present WaterSpy, a project developing an innovative, compact, cost-effective photonic device for pervasive water quality sensing, operating in the mid-IR spectral range. The approach combines the use of advanced Quantum Cascade Lasers (QCLs) employing the Vernier effect, used as the light source, with novel, fibre-coupled, fast and sensitive Higher Operation Temperature (HOT) photodetectors, used as the sensors. These will be complemented by optimised laser driving and detector electronics, laser modulation, and signal conditioning technologies. The paper presents the WaterSpy concept, the requirements elicited, the preliminary architecture design of the device, and the use cases in which it will be validated, while highlighting the innovative technologies that contribute to the advancement of the current state of the art. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
43. A Vision Transformer Model for Convolution-Free Multilabel Classification of Satellite Imagery in Deforestation Monitoring.
- Author
-
Kaselimi M, Voulodimos A, Daskalopoulos I, Doulamis N, and Doulamis A
- Subjects
- Conservation of Natural Resources methods, Satellite Imagery, Neural Networks, Computer
- Abstract
Understanding the dynamics of deforestation and land uses of neighboring areas is of vital importance for the design and development of appropriate forest conservation and management policies. In this article, we approach deforestation as a multilabel classification (MLC) problem in an endeavor to capture the various relevant land uses from satellite images. To this end, we propose a multilabel vision transformer model, ForestViT, which leverages the benefits of the self-attention mechanism, obviating any convolution operations involved in commonly used deep learning models utilized for deforestation detection. Experimental evaluation in open satellite imagery datasets yields promising results in the case of MLC, particularly for imbalanced classes, and indicates ForestViT's superiority compared with well-established convolutional structures (ResNet, VGG, DenseNet, and MobileNet neural networks). This superiority is more evident for minority classes.
- Published
- 2023
- Full Text
- View/download PDF
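The multilabel formulation described in the ForestViT abstract above means each land-use label gets an independent sigmoid output, rather than a softmax over mutually exclusive classes, and per-class metrics are what expose minority-class behavior. A small NumPy sketch with hypothetical labels and logits (not the paper's data or model):

```python
import numpy as np

def multilabel_predict(logits, threshold=0.5):
    """MLC decision rule: one independent sigmoid per label, thresholded separately."""
    probs = 1.0 / (1.0 + np.exp(-logits))
    return (probs >= threshold).astype(int)

def per_class_f1(y_true, y_pred):
    """Per-label F1 makes minority-class performance visible (overall accuracy hides it)."""
    t, p = y_true.astype(bool), y_pred.astype(bool)
    tp = (t & p).sum(axis=0)
    fp = (~t & p).sum(axis=0)
    fn = (t & ~p).sum(axis=0)
    precision = tp / np.maximum(tp + fp, 1)
    recall = tp / np.maximum(tp + fn, 1)
    return 2 * precision * recall / np.maximum(precision + recall, 1e-12)

# Toy example: 4 images, 3 land-use labels per image (e.g. forest, agriculture, road)
y_true = np.array([[1, 0, 1],
                   [1, 1, 0],
                   [0, 0, 1],
                   [1, 0, 0]])
logits = np.array([[ 2.0, -1.0,  1.5],
                   [ 1.0,  0.8, -2.0],
                   [-1.5, -0.5,  2.2],
                   [ 0.9, -1.2, -0.3]])
y_pred = multilabel_predict(logits)
f1 = per_class_f1(y_true, y_pred)
```

Training such a model uses a per-label binary cross-entropy loss for the same reason: each label is an independent binary decision.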
44. WaterSpy: A High Sensitivity, Portable Photonic Device for Pervasive Water Quality Analysis.
- Author
-
Doulamis N, Voulodimos A, Doulamis A, Bimpas M, Angeli A, Bakalos N, Giusti A, Philimis P, Varriale A, Ausili A, D'Auria S, Lampropoulos G, Baer M, Schmauss B, Freitag S, Lendl B, Młynarczyk K, Sosna-Głębska A, Trajnerowicz A, Pawluczyk J, Żbik M, Kułakowski J, Georgiadis P, Blaser S, and Bazzurro N
- Abstract
In this paper, we present WaterSpy, a project developing an innovative, compact, cost-effective photonic device for pervasive water quality sensing, operating in the mid-IR spectral range. The approach combines the use of advanced Quantum Cascade Lasers (QCLs) employing the Vernier effect, used as the light source, with novel, fibre-coupled, fast and sensitive Higher Operation Temperature (HOT) photodetectors, used as the sensors. These will be complemented by optimised laser driving and detector electronics, laser modulation, and signal conditioning technologies. The paper presents the WaterSpy concept, the requirements elicited, the preliminary architecture design of the device, and the use cases in which it will be validated, while highlighting the innovative technologies that contribute to the advancement of the current state of the art. Competing Interests: The authors declare no conflict of interest.
- Published
- 2018
- Full Text
- View/download PDF