10 results for "Hassani, Hossein"
Search Results
2. Large Language Models as Benchmarks in Forecasting Practice.
- Author
- Hassani, Hossein and Silva, Emmanuel Sirimal
- Subjects
- MACHINE learning, GENERATIVE artificial intelligence, LANGUAGE models, STANDARD deviations, TIME series analysis
- Abstract
This article explores the use of large language models (LLMs) as tools for forecasting. The authors conducted a comparative analysis using LLMs, specifically ChatGPT and Microsoft Copilot, along with the forecast package in R to forecast three different datasets. While LLMs showed promise in generating accurate forecasts, there were also challenges and limitations to their use. The article emphasizes the need for caution and education when interpreting the results of LLM forecasts. The authors recommend further research and exploration to fully understand the potential and limitations of LLMs in forecasting. [Extracted from the article]
- Published
- 2024
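The benchmarking workflow summarized in record 2 can be sketched in a few lines. The study itself compares ChatGPT and Microsoft Copilot against the forecast package in R; the snippet below is only an illustrative stand-in in Python, assuming a synthetic series, a statsmodels ARIMA baseline, and a placeholder llm_forecast array that would in practice be copied from an LLM chat session.

```python
# Minimal sketch of an LLM-vs-classical forecasting comparison (illustrative only).
# The paper itself uses ChatGPT, Microsoft Copilot, and the R `forecast` package;
# here an ARIMA model from statsmodels stands in for the classical benchmark,
# and `llm_forecast` is a hypothetical forecast pasted from an LLM chat session.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(size=120)) + 50.0   # synthetic series, 120 points
train, test = series[:108], series[108:]          # hold out the last 12 points

arima_fit = ARIMA(train, order=(1, 1, 1)).fit()
arima_forecast = arima_fit.forecast(steps=len(test))

# Hypothetical values returned by an LLM when prompted with the training data.
llm_forecast = np.array([train[-1]] * len(test))  # placeholder: naive carry-forward

def rmse(actual, predicted):
    return float(np.sqrt(np.mean((np.asarray(actual) - np.asarray(predicted)) ** 2)))

print("ARIMA RMSE:", rmse(test, arima_forecast))
print("LLM   RMSE:", rmse(test, llm_forecast))
```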
3. Deep Learning and Implementations in Banking
- Author
- Hassani, Hossein, Huang, Xu, Silva, Emmanuel, and Ghodsi, Mansi
- Published
- 2020
- Full Text
- View/download PDF
4. Optimization of machine learning algorithms for remote alteration mapping.
- Author
- Bahrami, Yousef and Hassani, Hossein
- Subjects
- *MACHINE learning, *DISTANCE education, *PROSPECTING, *HYDROTHERMAL alteration, *COSINE function, *K-nearest neighbor classification, *PRINCIPAL components analysis
- Abstract
• This study focused on the applicability of optimized algorithms in alteration mapping.
• The ML algorithms considered for optimization included QDA, CKNN, and BDT.
• The optimization process employed various techniques, such as GS, RS, BO, and PCA.
• Using the optimized algorithms holds promise for enhancing the precision of models.
• The study highlights the implications of ML methods for mineral exploration.
World-class large to sub-economic small porphyry copper deposits (PCDs) are primarily found in the Kerman Cenozoic Magmatic Arc (KCMA), a fascinating area for geological remote sensing investigations because of its well-exposed rocks and roughly vegetated surfaces. Remote hydrothermal alteration mapping is a critical component of mineral exploration and resource assessment, vital for identifying PCDs. This study explored the application of ML techniques, such as quadratic discriminant analysis (QDA), cosine K-nearest neighbor (CKNN), and bagging decision tree (BDT), in remote hydrothermal alteration mapping. Moreover, the study highlights the transformative impact of optimization methods such as grid search (GS), random search (RS), Bayesian optimization (BO), and principal component analysis (PCA) in fine-tuning these algorithms to achieve superior results. These algorithms were found to be accurate and helpful in identifying PCD-related argillic, phyllic, propylitic, and iron oxide/hydroxide alteration-type zones based on field observations, petrographic studies, and XRD analysis. This research revealed evidence for widespread phyllic and silicic alteration zones, as well as confined argillic and iron oxide/hydroxide zones surrounded by wider regions of propylitic alteration. As the field of ML continues to advance, the future holds promise for even more refined and innovative approaches to hydrothermal alteration mapping. This study underscores the pivotal role that optimized ML algorithms play in revolutionizing mineral exploration practices and paving the way for a more sustainable and responsible resource assessment industry. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
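To make the optimization step in record 4 concrete, here is a minimal sketch of tuning a cosine K-nearest-neighbor classifier with PCA and grid search. It is not the authors' pipeline: the data are synthetic stand-ins for labeled pixel spectra, and the parameter grid, component counts, and neighbor counts are arbitrary illustrative choices.

```python
# Minimal sketch of grid-search optimization of a PCA + cosine KNN classifier
# (illustrative only; the actual study works on multispectral remote sensing data).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline

# Synthetic stand-in for pixel spectra labeled with alteration classes.
X, y = make_classification(n_samples=500, n_features=12, n_informative=6,
                           n_classes=4, random_state=0)

pipeline = Pipeline([
    ("pca", PCA()),
    ("cknn", KNeighborsClassifier(metric="cosine")),
])

param_grid = {
    "pca__n_components": [4, 6, 8],
    "cknn__n_neighbors": [3, 5, 9],
}

search = GridSearchCV(pipeline, param_grid, cv=5)
search.fit(X, y)
print("Best parameters:", search.best_params_)
print("Cross-validated accuracy:", round(search.best_score_, 3))
```

Random search or Bayesian optimization could replace GridSearchCV in the same pipeline; the structure of the experiment stays identical.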
5. Multicriteria Consensus Models to Support Intelligent Group Decision-Making
- Author
- Hassani, Hossein
- Subjects
- Computational intelligence, Group decision-making, Machine learning, Reinforcement learning, Intelligent systems, Electrical and Computer Engineering
- Abstract
The development of intelligent systems is progressing rapidly, thanks to advances in information technology that enable collective, automated, and effective decision-making based on information collected from diverse sources. Group decision-making (GDM) is a key part of intelligent decision-making (IDM), which has received considerable attention in recent years. IDM through GDM refers to a decision-making problem where a group of intelligent decision-makers (DMs) evaluate a set of alternatives with respect to specific attributes. Intelligent communication among DMs aims to establish an ordering of the available alternatives. However, GDM models developed for IDM must incorporate consensus support models to effectively integrate input from each DM into the final decision. Many efforts have been made to design consensus models to support IDM, depending on the decision problem or environment. Despite promising results, significant gaps remain in research on the design of such support models. One major drawback of existing consensus models is their dependence on the type of decision environment, making them less generalizable. Moreover, these models are often static and cannot respond to dynamic changes in the decision environment. Another limitation is that consensus models for large-scale decision environments lack an efficient communication regime to enable DM interactions. To address these challenges, this dissertation proposes developing consensus models to support IDM through GDM. To address the generalization issue of existing consensus models, reinforcement learning (RL) is proposed: RL agents can be built on the Markov decision process to enable IDM, potentially removing the generalization issue of consensus support models. Contrary to most consensus models, which assume static decision environments, this dissertation proposes a computationally efficient dynamic consensus model to support dynamic IDM. Finally, to facilitate secure and efficient interactions among intelligent DMs in large-scale problems, Blockchain technology is proposed to speed up the consensus process. The proposed communication regime also includes trust-building mechanisms that employ Blockchain protocols to remove enduring and limiting assumptions on opinion similarity among agents.
- Published
- 2023
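The reinforcement-learning direction proposed in record 5 rests on formulating consensus reaching as a Markov decision process. The sketch below is only a toy illustration of that idea: a tabular Q-learning agent on a hypothetical, discretized consensus-level environment, with made-up dynamics and rewards rather than anything from the dissertation.

```python
# Minimal tabular Q-learning sketch, assuming a toy consensus-reaching MDP
# (hypothetical; the dissertation's actual models, states, and rewards are richer).
# States are discretized consensus levels; actions nudge a decision-maker's opinion.
import numpy as np

n_states, n_actions = 10, 3        # consensus level bins; {lower, keep, raise} adjustments
alpha, gamma, epsilon = 0.1, 0.95, 0.1
Q = np.zeros((n_states, n_actions))
rng = np.random.default_rng(1)

def step(state, action):
    """Hypothetical dynamics: consensus-raising adjustments drift the state upward."""
    drift = {0: -1, 1: 0, 2: 1}[int(action)]
    next_state = int(np.clip(state + drift + rng.integers(-1, 2), 0, n_states - 1))
    reward = 1.0 if next_state == n_states - 1 else -0.01   # reward full consensus
    return next_state, reward, next_state == n_states - 1

for episode in range(500):
    state = 0
    for _ in range(200):                                    # cap episode length
        action = rng.integers(n_actions) if rng.random() < epsilon else int(Q[state].argmax())
        next_state, reward, done = step(state, action)
        # Standard Q-learning update on the Markov decision process.
        Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
        state = next_state
        if done:
            break

print("Greedy action per consensus level:", Q.argmax(axis=1))
```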
6. The Role of ChatGPT in Data Science: How AI-Assisted Conversational Interfaces Are Revolutionizing the Field.
- Author
- Hassani, Hossein and Silva, Emmanuel Sirimal
- Subjects
- CHATGPT, ARTIFICIAL intelligence, MACHINE learning, DATA science, COMPUTATIONAL linguistics, NATURAL language processing, CHATBOTS
- Abstract
ChatGPT, a conversational AI interface that utilizes natural language processing and machine learning algorithms, is taking the world by storm and is the buzzword across many sectors today. Given the likely impact of this model on data science, through this perspective article, we seek to provide an overview of the potential opportunities and challenges associated with using ChatGPT in data science, provide readers with a snapshot of its advantages, and stimulate interest in its use for data science projects. The paper discusses how ChatGPT can assist data scientists in automating various aspects of their workflow, including data cleaning and preprocessing, model training, and result interpretation. It also highlights how ChatGPT has the potential to provide new insights and improve decision-making processes by analyzing unstructured data. We then examine the advantages of ChatGPT's architecture, including its ability to be fine-tuned for a wide range of language-related tasks and generate synthetic data. Limitations and issues are also addressed, particularly around concerns about bias and plagiarism when using ChatGPT. Overall, the paper concludes that the benefits outweigh the costs, that ChatGPT has the potential to greatly enhance the productivity and accuracy of data science workflows, and that it is likely to become an increasingly important tool for intelligence augmentation in the field of data science. ChatGPT can assist with a wide range of natural language processing tasks in data science, including language translation, sentiment analysis, and text classification. However, while ChatGPT can save time and resources compared to training a model from scratch, and can be fine-tuned for specific use cases, it may not perform well on certain tasks if it has not been specifically trained for them. Additionally, the output of ChatGPT may be difficult to interpret, which could pose challenges for decision-making in data science applications. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
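As an illustration of the workflow-automation use case described in record 6, the sketch below asks a ChatGPT-style model to suggest cleaning steps for a small pandas DataFrame. It assumes the openai Python package (v1 client interface), an API key in the OPENAI_API_KEY environment variable, and a placeholder model name; none of this comes from the article itself.

```python
# Minimal sketch of asking a ChatGPT-style model for data-cleaning suggestions
# (illustrative; assumes the `openai` Python package v1 interface, an API key in
# the OPENAI_API_KEY environment variable, and a placeholder model name).
import pandas as pd
from openai import OpenAI

df = pd.DataFrame({"age": [25, None, 42, 130], "income": ["52k", "61,000", None, "48000"]})

summary = df.describe(include="all").to_string()
prompt = (
    "You are assisting a data scientist. Given this column summary, list the "
    "cleaning and preprocessing steps you would apply, one per line:\n" + summary
)

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",                      # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)    # suggested cleaning steps as text
```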
7. Improved exploration–exploitation trade-off through adaptive prioritized experience replay.
- Author
- Hassani, Hossein, Nikan, Soodeh, and Shami, Abdallah
- Subjects
- *DEEP reinforcement learning, *MACHINE learning, *DEEP learning, *SAMPLING errors, *ALGORITHMS
- Abstract
Experience replay is an indispensable part of deep reinforcement learning algorithms that enables the agent to revisit and reuse its past and recent experiences to update the network parameters. In many baseline off-policy algorithms, such as deep Q-networks (DQN), transitions in the replay buffer are typically sampled uniformly. This uniform sampling is not optimal for accelerating the agent's training towards learning the optimal policy. A more selective and prioritized approach to experience sampling can yield improved learning efficiency and performance. In this regard, this work is devoted to the design of a novel prioritizing strategy to adaptively adjust the sampling probabilities of stored transitions in the replay buffer. Unlike existing sampling methods, the proposed algorithm takes into consideration the exploration–exploitation trade-off (EET) to rank transitions, which is of utmost importance in learning an optimal policy. Specifically, this approach utilizes temporal difference and Bellman errors as criteria for sampling priorities. To maintain balance in EET throughout training, the weights associated with both criteria are dynamically adjusted when constructing the sampling priorities. Additionally, any bias introduced by this sample prioritization is mitigated by assigning an importance-sampling weight to each transition in the buffer. The efficacy of this prioritization scheme is assessed through training the DQN algorithm across various OpenAI Gym environments. The results obtained underscore the significance and superiority of our proposed algorithm over state-of-the-art methods, as evidenced by its accelerated learning pace, greater cumulative reward, and higher success rate.
• A novel sample prioritization is proposed for deep Q-networks.
• Temporal difference and Bellman errors are employed to construct the priority score.
• Weights of the augmented errors in the priority score are adaptively updated.
• The weighted priority balances the exploration–exploitation trade-off.
• This score yields significant improvement over baselines across Gym environments.
[ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
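The prioritization idea in record 7 can be illustrated with a simplified replay buffer whose priorities blend two error terms under an adaptive weight and whose samples carry importance-sampling corrections. This is a generic sketch, not the authors' formulation: the error definitions, weight-adaptation rule, and buffer bookkeeping are placeholder choices.

```python
# Simplified sketch of a prioritized replay buffer blending two error criteria
# with an adaptive weight, plus importance-sampling corrections (illustrative only).
import numpy as np

class AdaptivePrioritizedBuffer:
    def __init__(self, capacity, alpha=0.6, beta=0.4):
        self.capacity, self.alpha, self.beta = capacity, alpha, beta
        self.storage, self.priorities = [], []
        self.w = 0.5  # mixing weight between the two error criteria

    def add(self, transition, td_error, bellman_error):
        # Priority blends the two criteria; a small constant keeps it non-zero.
        priority = self.w * abs(td_error) + (1 - self.w) * abs(bellman_error) + 1e-6
        if len(self.storage) >= self.capacity:
            self.storage.pop(0)
            self.priorities.pop(0)
        self.storage.append(transition)
        self.priorities.append(priority)

    def sample(self, batch_size, rng=None):
        rng = rng or np.random.default_rng()
        p = np.asarray(self.priorities) ** self.alpha
        p = p / p.sum()
        idx = rng.choice(len(self.storage), size=batch_size, p=p)
        # Importance-sampling weights correct the bias introduced by prioritization.
        is_weights = (len(self.storage) * p[idx]) ** (-self.beta)
        is_weights = is_weights / is_weights.max()
        return [self.storage[i] for i in idx], is_weights

    def update_mixing_weight(self, exploration_ratio):
        # Toy adaptation rule: rely more on the first criterion as exploration decays.
        self.w = float(np.clip(1.0 - exploration_ratio, 0.1, 0.9))

buffer = AdaptivePrioritizedBuffer(capacity=1000)
for i in range(50):
    buffer.add(transition=("s", "a", 0.0, "s'"), td_error=0.1 * i, bellman_error=0.05 * i)
batch, weights = buffer.sample(batch_size=8)
print(len(batch), weights.round(3))
```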
8. Traffic navigation via reinforcement learning with episodic-guided prioritized experience replay.
- Author
- Hassani, Hossein, Nikan, Soodeh, and Shami, Abdallah
- Subjects
- *REINFORCEMENT learning, *DEEP reinforcement learning, *ARTIFICIAL intelligence, *MACHINE learning, *DEEP learning, *TRAFFIC circles
- Abstract
Deep Reinforcement Learning (DRL) models play a fundamental role in autonomous driving applications; however, they typically suffer from sample inefficiency because they often require many interactions with the environment to learn effective policies. This makes the training process time-consuming. To address this shortcoming, Prioritized Experience Replay (PER) has proven to be effective by prioritizing samples with high Temporal-Difference (TD) error for learning. In this context, this study contributes to artificial intelligence by proposing a sample-efficient DRL algorithm called Episodic-Guided Prioritized Experience Replay (EPER). The core innovation of EPER lies in the utilization of an episodic memory dedicated to storing successful training episodes. Within this memory, expected returns for each state–action pair are extracted. These returns, combined with TD error-based prioritization, form a novel objective function for deep Q-network training. To prevent excessive determinism, EPER introduces exploration into the learning process by incorporating a regularization term into the objective function that allows exploration of state-space regions with diverse Q-values. The proposed EPER algorithm is suitable for training a DRL agent to handle episodic tasks, and it can be integrated into off-policy DRL models. EPER is employed for traffic navigation through scenarios such as highway driving, merging, roundabouts, and intersections to showcase its application in engineering. The results indicate that, compared with PER and an additional state-of-the-art training technique, EPER is superior in expediting the training of the agent and learning a better policy that leads to lower collision rates within the constructed navigation scenarios.
• Proposed Episodic-Guided Prioritized Experience Replay algorithm.
• Proposed to integrate episodic information in deep Q-network training.
• Regularization of the DQN loss function for enhanced performance.
• Improving exploration–exploitation trade-off management.
• Enhancing vehicle autonomy with Episodic-Guided Prioritized Experience Replay.
[ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
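The episodic-memory component described in record 8 can likewise be sketched generically: successful episodes are replayed backwards to record the best observed return per (state, action) pair, and that return is later blended with the TD error when forming a priority. The blending weight, data structures, and toy episode below are illustrative assumptions, not the paper's objective function.

```python
# Simplified sketch of the episodic-memory idea behind EPER (illustrative only):
# successful episodes are replayed backwards to record the best observed return for
# each (state, action) pair, and that return is later blended with the TD error.
import numpy as np
from collections import defaultdict

episodic_memory = defaultdict(lambda: -np.inf)   # (state, action) -> best return seen

def store_successful_episode(episode, gamma=0.99):
    """episode: list of (state, action, reward) tuples from a successful run."""
    g = 0.0
    for state, action, reward in reversed(episode):
        g = reward + gamma * g                   # discounted return from this step on
        key = (state, action)
        episodic_memory[key] = max(episodic_memory[key], g)

def priority(state, action, td_error, lam=0.5):
    """Blend TD error with the episodic return when one is available."""
    episodic_return = episodic_memory[(state, action)]
    if np.isinf(episodic_return):                # pair never seen in a successful episode
        return abs(td_error)
    return lam * abs(td_error) + (1 - lam) * max(episodic_return, 0.0)

# Example: a short successful episode with hashable (discretized) states.
store_successful_episode([(0, 1, 0.0), (1, 1, 0.0), (2, 0, 1.0)])
print(priority(1, 1, td_error=0.2))
```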
9. A deep autoencoder network connected to geographical random forest for spatially aware geochemical anomaly detection.
- Author
- Soltani, Zeinab, Hassani, Hossein, and Esmaeiloghli, Saeid
- Subjects
- *RANDOM forest algorithms, *ANOMALY detection (Computer security), *LINEAR network coding, *DEEP learning, *RIVER sediments, *MACHINE learning
- Abstract
Machine learning (ML) and deep learning (DL) techniques have recently shown encouraging performance in recognizing metal-vectoring geochemical anomalies within complex Earth systems. However, the generalization of these techniques to detect subtle anomalies may be precluded due to overlooking non-stationary spatial structures and intra-pattern local dependencies contained in geochemical exploration data. Motivated by this, we conceptualize in this paper an innovative algorithm connecting a DL architecture to a spatial ML processor to account for local neighborhood information and spatial non-stationarities in support of spatially aware anomaly detection. A deep autoencoder network (DAN) is trained to abstract deep feature codings (DFCs) of multi-element input data. The encoded DFCs represent the typical performance of a nonlinear Earth system, i.e., multi-element signatures of geochemical background populations developed by different geo-processes. A local version of the random forest algorithm, geographical random forest (GRF), is then connected to the input and code layers of the DAN processor to establish nonlinear and spatially aware regressions between original geochemical signals (dependent variables) and DFCs (independent variables). After contributions of the latter to the former are determined, residuals of the GRF regressions are quantified and interpreted as spatially aware anomaly scores related to mineralization. The proposed algorithm (i.e., DAN-GRF) is implemented in the R language environment and examined in a case study with stream sediment geochemical data pertaining to the Takht-e-Soleyman district, Iran. The high-scored anomalies mapped by DAN-GRF, compared to those by the stand-alone DAN technique, indicated a stronger spatial correlation with locations of known metal occurrences, which was statistically confirmed by success-rate curves, the Student's t-statistic method, and prediction-area plots. The findings suggested that the proposed algorithm has an enhanced capability to recognize subtle multi-element geochemical anomalies and extract reliable insights into metal exploration targeting.
• A hybrid algorithm to recognize metal-vectoring geochemical anomaly patterns.
• A deep autoencoder network to learn deep feature codings of multi-element input data.
• A geographical random forest regression to quantify spatially aware anomaly scores.
• A comparative experiment on a case study from the Takht-e-Soleyman district, Iran.
• Success-rate curves, Student's t-statistic, and prediction-area plots for performance evaluation.
[ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
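A stripped-down version of the DAN-GRF pipeline in record 9 might look like the following: an autoencoder learns low-dimensional codings of multi-element data, a random forest regresses the original signals on those codings, and large residuals are read as anomaly scores. A plain RandomForestRegressor stands in for the geographical random forest (which fits location-aware local models), and the data are synthetic, so this is a sketch of the idea rather than the published implementation (which is in R).

```python
# Simplified sketch of the DAN-GRF idea (illustrative only): autoencoder codings,
# a random forest regression of originals on codings, residuals as anomaly scores.
import numpy as np
import torch
from torch import nn
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 8)).astype(np.float32)     # synthetic multi-element samples

class Autoencoder(nn.Module):
    def __init__(self, n_features=8, n_codes=3):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 16), nn.ReLU(),
                                     nn.Linear(16, n_codes))
        self.decoder = nn.Sequential(nn.Linear(n_codes, 16), nn.ReLU(),
                                     nn.Linear(16, n_features))
    def forward(self, x):
        return self.decoder(self.encoder(x))

model = Autoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
data = torch.from_numpy(X)
for _ in range(200):                                  # short reconstruction training loop
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(data), data)
    loss.backward()
    optimizer.step()

with torch.no_grad():
    codes = model.encoder(data).numpy()               # deep feature codings (DFCs)

forest = RandomForestRegressor(n_estimators=200, random_state=0)
forest.fit(codes, X)                                  # regress original signals on codes
residuals = np.abs(X - forest.predict(codes))
anomaly_score = residuals.sum(axis=1)                 # higher residual => more anomalous
print("Top-5 anomalous sample indices:", np.argsort(anomaly_score)[-5:])
```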
10. Investigating the capabilities of multispectral remote sensors data to map alteration zones in the Abhar area, NW Iran.
- Author
- Bahrami, Yousef, Hassani, Hossein, and Maghsoudi, Abbas
- Subjects
- HYDROTHERMAL alteration, REMOTE sensing, MULTISPECTRAL imaging, SUPPORT vector machines, MACHINE learning, ARTIFICIAL neural networks
- Abstract
Economic mineralization is often associated with alterations that are identifiable by remote sensing coupled with geological analysis. The present paper aims to investigate the capabilities of Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), Landsat-8, and Sentinel-2 data to map iron oxide and hydrothermal alteration zones in the Abhar area, NW Iran. To achieve this goal, principal component analysis (PCA) and two machine learning methods, support vector machine (SVM) and artificial neural network (ANN), were employed. The PCA method was carried out on four bands of each dataset, and the appropriate principal components were then selected to map alterations. Owing to the high precision of ASTER data within the short-wave infrared range, its results are more satisfactory than those of the Landsat-8 and Sentinel-2 sensors in detecting hydrothermal alterations through the PCA technique. Based on the obtained maps, the performance of all data types was approximately similar in the detection of iron oxide zones. The datasets were then classified using the SVM and ANN methods, and the results of these algorithms were presented as confusion matrices. According to these results, for hydrothermal alterations, ASTER data showed better performance in both SVM and ANN than the other datasets, attaining values greater than 90%. These data did not perform well in detecting iron oxide zones, whereas Landsat-8 and Sentinel-2 were more successful. For iron oxide, based on the confusion matrices, Landsat-8 data obtained values of 78% and 79% through the SVM and ANN algorithms, respectively, and Sentinel-2 achieved values of 88.11% and 90.55% via SVM and ANN, respectively. Therefore, to map iron oxide zones, Sentinel-2 data are more favorable than Landsat-8 data. In addition, the ANN algorithm on ASTER data yielded the highest overall accuracy and Kappa coefficient, with values of 88.73% and 0.8453, respectively. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
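The PCA-plus-classifier workflow in record 10 maps onto a short scikit-learn sketch: reduce the band values with PCA, classify with an SVM, and report a confusion matrix, overall accuracy, and Kappa coefficient. The synthetic "pixels", number of components, and SVM settings below are illustrative assumptions, not the study's configuration.

```python
# Minimal sketch of a PCA + SVM classification workflow with confusion-matrix
# evaluation (illustrative only; the study applies it to ASTER, Landsat-8, and
# Sentinel-2 bands with field-checked training areas).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.metrics import accuracy_score, cohen_kappa_score, confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

# Synthetic stand-in: 9 "bands" per pixel, 3 classes (e.g., altered / iron oxide / background).
X, y = make_classification(n_samples=600, n_features=9, n_informative=5,
                           n_classes=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

classifier = Pipeline([
    ("pca", PCA(n_components=4)),     # analogous to selecting informative principal components
    ("svm", SVC(kernel="rbf", C=10.0)),
])
classifier.fit(X_train, y_train)
y_pred = classifier.predict(X_test)

print("Confusion matrix:\n", confusion_matrix(y_test, y_pred))
print("Overall accuracy:", round(accuracy_score(y_test, y_pred), 4))
print("Kappa coefficient:", round(cohen_kappa_score(y_test, y_pred), 4))
```

Swapping the SVC step for a small neural network classifier would reproduce the ANN branch of the same comparison.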