31,371 results
Search Results
102. ITERATIVE ALGORITHMS FOR VARIATIONAL INCLUSIONS IN BANACH SPACES.
- Author
-
ANSARI, QAMRUL HASAN, BALOOEE, JAVAD, and PETRUŞEL, ADRIAN
- Subjects
BANACH spaces ,LIPSCHITZ continuity ,PAPER arts ,DIFFERENTIAL inclusions ,ALGORITHMS - Abstract
The present paper is in two parts. In the first part, we prove the Lipschitz continuity of the proximal mapping associated with a general strongly H-monotone mapping and compute an estimate of its Lipschitz constant under some mild assumptions on the mapping H involved in the proximal mapping. We provide two examples showing that a maximal monotone mapping need not be general H-monotone for a single-valued mapping H from a Banach space to its dual space. A class of multi-valued nonlinear variational inclusion problems is considered, and by using the notion of proximal mapping and Nadler's technique, an iterative algorithm with mixed errors is suggested to compute its solutions. Under appropriate hypotheses on the mappings and parameters involved in the multi-valued nonlinear variational inclusion problem, the strong convergence of the sequences generated by the proposed algorithm to a solution of the aforesaid problem is verified. The second part of this paper investigates and analyzes the notion of Cn-monotone mappings defined and studied in [S.Z. Nazemi, A new class of monotone mappings and a new class of variational inclusions in Banach spaces, J. Optim. Theory Appl. 155(3) (2012) 785-795]. Several comments are given on the results and the algorithm that appeared in the above-mentioned paper. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
103. Special issue "Discrete optimization: Theory, algorithms and new applications".
- Author
-
Werner, Frank
- Subjects
MATHEMATICAL optimization ,METAHEURISTIC algorithms ,ONLINE algorithms ,LINEAR matrix inequalities ,ALGORITHMS ,ROBUST stability analysis ,NONLINEAR integral equations - Abstract
This document is an editorial for a special issue of the journal AIMS Mathematics on the topic of discrete optimization. The issue includes 21 papers covering a range of subjects, including molecular trees, network systems, variational inequality problems, scheduling, image restoration, spectral clustering, integral equations, convex functions, graph products, optimization algorithms, air quality prediction, humanitarian planning, inertial methods, neural networks, transportation problems, emotion identification, fixed-point problems, structural engineering design, single machine scheduling, and ensemble learning. The papers present new theoretical results, algorithms, and applications in these areas. The guest editor expresses gratitude to the journal staff and reviewers and hopes that readers will find inspiration for their own research. [Extracted from the article]
- Published
- 2024
- Full Text
- View/download PDF
104. Measuring algorithmically infused societies.
- Author
-
Wagner C, Strohmaier M, Olteanu A, Kıcıman E, Contractor N, and Eliassi-Rad T
- Subjects
- Computer Simulation, Datasets as Topic, Guidelines as Topic, Humans, Politics, Social Conditions economics, Algorithms, Social Conditions statistics & numerical data, Social Sciences methods
- Abstract
It has been the historic responsibility of the social sciences to investigate human societies. Fulfilling this responsibility requires social theories, measurement models and social data. Most existing theories and measurement models in the social sciences were not developed with the deep societal reach of algorithms in mind. The emergence of 'algorithmically infused societies'-societies whose very fabric is co-shaped by algorithmic and human behaviour-raises three key challenges: the insufficient quality of measurements, the complex consequences of (mis)measurements, and the limits of existing social theories. Here we argue that tackling these challenges requires new social theories that account for the impact of algorithmic systems on social realities. To develop such theories, we need new methodologies for integrating data and measurements into theory construction. Given the scale at which measurements can be applied, we believe measurement models should be trustworthy, auditable and just. To achieve this, the development of measurements should be transparent and participatory, and include mechanisms to ensure measurement quality and identify possible harms. We argue that computational social scientists should rethink what aspects of algorithmically infused societies should be measured, how they should be measured, and the consequences of doing so.
- Published
- 2021
- Full Text
- View/download PDF
105. Using Paper Texture for Choosing a Suitable Algorithm for Scanned Document Image Binarization.
- Author
-
Lins, Rafael Dueire, Bernardino, Rodrigo, Barboza, Ricardo da Silva, and De Oliveira, Raimundo Correa
- Subjects
DOCUMENT imaging systems ,HISTORICAL source material ,TEXTURES ,ALGORITHMS - Abstract
The intrinsic features of documents, such as paper color, texture, aging, translucency, and the kind of printing, typing, or handwriting, are important with regard to how their images should be processed and enhanced. Image binarization, the process of producing a monochromatic image from its color version, is a key step in the document processing pipeline. The recent Quality-Time Binarization Competitions for documents have shown that no binarization algorithm is good for every kind of document image. This paper uses a sample of the texture of a scanned historical document as the main feature for selecting, among 63 widely used algorithms applied to five different versions of the input images (315 document image-binarization schemes in total), one that provides a reasonable quality-time trade-off. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
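The binarization step this record describes can be illustrated with one of the classic global algorithms such competitions evaluate. Below is a minimal NumPy sketch of Otsu's threshold (chosen here only as an assumption for illustration; the paper's 63-algorithm, texture-driven selection is not reproduced), applied to a synthetic "document" with dark ink on bright paper.

```python
import numpy as np

def otsu_threshold(gray):
    """Find the gray level that maximizes between-class variance (Otsu)."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    probs = hist / gray.size
    omega = np.cumsum(probs)                # class-0 probability up to each level
    mu = np.cumsum(probs * np.arange(256))  # cumulative mean
    mu_t = mu[-1]
    # Between-class variance; guard against division by zero at the extremes.
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    return int(np.argmax(np.nan_to_num(sigma_b)))

def binarize(gray):
    """Global binarization: white paper above the threshold, black ink below."""
    return (gray > otsu_threshold(gray)).astype(np.uint8) * 255

# Two-mode synthetic "document": dark ink region on bright paper.
rng = np.random.default_rng(0)
paper = rng.normal(200, 10, size=(64, 64))
paper[20:40, 20:40] = rng.normal(50, 10, size=(20, 20))  # "ink" region
img = np.clip(paper, 0, 255).astype(np.uint8)
bw = binarize(img)
```

A quality-time study like the paper's would run many such schemes per image and time them; this sketch shows only the single-algorithm building block.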
106. Enumeration optimization of open pit production scheduling based on mobile capacity search domain.
- Author
-
Xu X, Gu X, Wang Q, Zhao Y, Kong W, Zhu Z, and Wang F
- Subjects
- Algorithms, Mining
- Abstract
The optimization of open pit mine production scheduling is not only a multistage decision-making problem but also involves dynamic space-time interaction among multiple factors, which makes it difficult to optimize production capacity, mining sequence, mining life, and other factors simultaneously. In addition, production capacity tends to expand in a disorderly way, the calculation scale is large, and the optimization time is long. Therefore, this article designs a mobile capacity search domain method to improve computing efficiency without omitting the optimal production capacity. Taking the maximum net present value as the objective function, an enumeration method is used to optimize the possible paths in different capacity domains and to calculate the infrastructure investment and facility idle cost required to meet the maximum production capacity on each path, thereby controlling the disorderly expansion and violent fluctuation of production capacity. The research shows that the proposed open pit mine production scheduling optimization algorithm can not only optimize the three elements of production capacity, mining sequence, and mining life simultaneously but also improve computing efficiency by a factor of 200. Furthermore, the production capacity fluctuation is less than 1.4%, the mining life of the mine is extended by 13 years, and the overall economic benefit is increased by 18%., (© 2023. The Author(s).)
- Published
- 2023
- Full Text
- View/download PDF
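The enumeration idea in this record — searching capacity paths within restricted per-year domains and ranking them by net present value — can be sketched as follows. The cash-flow model, discount rate, and domains are invented for illustration; the paper's infrastructure-investment and facility-idle-cost terms are omitted.

```python
from itertools import product

def npv(cashflows, rate):
    """Net present value of a cash-flow sequence received in years 1..n."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows, start=1))

def best_capacity_path(domains, cash_per_unit, rate=0.08):
    """Enumerate every capacity path within the per-year search domains and
    keep the one with maximum NPV.  A toy stand-in for the paper's method:
    restricting each year to a small 'mobile' domain keeps the enumeration
    tractable compared with searching all capacities every year."""
    best = (float("-inf"), None)
    for path in product(*domains):
        value = npv([c * cash_per_unit for c in path], rate)
        best = max(best, (value, path))
    return best

# Three years; each year's capacity restricted to a small search domain.
domains = [(8, 10), (10, 12), (12, 14)]
value, path = best_capacity_path(domains, cash_per_unit=1.0)
```

With purely positive cash flows the maximum-capacity path wins; the paper's cost terms are what make smaller, smoother capacity paths competitive.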
107. Smart Random Walk Distributed Secured Edge Algorithm Using Multi-Regression for Green Network.
- Author
-
Saba, Tanzila, Haseeb, Khalid, Rehman, Amjad, Damaševičius, Robertas, and Bahaj, Saeed Ali
- Subjects
RANDOM walks ,ALGORITHMS ,ARTIFICIAL intelligence ,INTERNET of things ,ELECTRONIC paper ,INTERNET traffic - Abstract
Smart communication has significantly advanced with the integration of the Internet of Things (IoT). Many devices and online services are utilized in the network system to cope with data gathering and forwarding. Recently, many traffic-aware solutions have explored autonomous systems to attain intelligent routing and flow of internet traffic with the support of artificial intelligence. However, inefficient usage of nodes' batteries and long-range communication degrade the connectivity time between the deployed sensors and the end devices. Moreover, trustworthy route identification is another significant research challenge in formulating a smart system. Therefore, this paper presents a smart Random walk Distributed Secured Edge algorithm (RDSE), using a multi-regression model for IoT networks, which aims to enhance the stability of the chosen IoT network with the support of an optimal system. In addition, by using secured computing, the proposed architecture increases the trustworthiness of smart devices with the least node complexity. The proposed algorithm differs from other works in the following respects. Firstly, it uses a random walk to form the initial routes with certain probabilities, and later, by exploring a multi-variant function, it attains long-lasting communication with a high degree of network stability. This helps to improve the optimization criteria for the nodes' communication and efficiently utilizes energy in combination with mobile edges. Secondly, trust factors successfully identify the normal nodes even when the system is compromised; the proposed algorithm thus reduces data risks and offers a more reliable and private system. In addition, simulation-based testing reveals the significant performance of the proposed algorithm in comparison to existing work. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
108. Active learning for ordinal classification based on expected cost minimization.
- Author
-
He D
- Subjects
- Algorithms
- Abstract
To date, a large number of active learning algorithms have been proposed, but active learning methods for ordinal classification are under-researched. In ordinal classification, there is a total ordering among the data classes, and it is natural that the cost of misclassifying an instance as an adjacent class should be lower than that of misclassifying it as a more disparate class. However, existing active learning algorithms typically do not consider this ordering information in query selection. Thus, most of them do not perform satisfactorily in ordinal classification. This study proposes an active learning method for ordinal classification that considers the ordering information among classes. We design an expected cost minimization criterion that embeds the ordering information. Meanwhile, we combine it with an uncertainty sampling criterion to make the queried instances more informative. Furthermore, we introduce a candidate subset selection method based on the k-means algorithm to reduce the computational overhead caused by the calculation of expected cost. Extensive experiments on nine public ordinal classification datasets demonstrate that the proposed method outperforms several baseline methods., (© 2022. The Author(s).)
- Published
- 2022
- Full Text
- View/download PDF
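The expected cost minimization criterion described in this record can be illustrated with the common absolute-error ordinal cost C[k, j] = |k - j| (an assumed cost matrix; the paper's exact costs, its combination with uncertainty sampling, and the k-means candidate subset step are not reproduced here).

```python
import numpy as np

def expected_ordinal_cost(posteriors):
    """For each instance, the minimum expected cost over possible predicted
    labels, under the ordinal cost C[k, j] = |k - j|: misclassifying into an
    adjacent class is cheaper than into a distant one."""
    n_classes = posteriors.shape[1]
    cost = np.abs(np.arange(n_classes)[:, None] - np.arange(n_classes)[None, :])
    exp_cost = posteriors @ cost.T          # expected cost of predicting each k
    return exp_cost.min(axis=1)             # best achievable expected cost

def select_queries(posteriors, batch_size=2):
    """Query the unlabeled instances whose best achievable expected cost is
    largest -- expensive even under the optimal prediction."""
    scores = expected_ordinal_cost(posteriors)
    return np.argsort(scores)[::-1][:batch_size]

# Three unlabeled instances over 4 ordered classes.
P = np.array([
    [0.97, 0.01, 0.01, 0.01],   # confident -> low expected cost
    [0.25, 0.25, 0.25, 0.25],   # uniform   -> high expected cost
    [0.10, 0.40, 0.40, 0.10],   # mass on adjacent classes -> moderate cost
])
picks = select_queries(P, batch_size=1)
```

Note how the instance spread over adjacent classes scores lower than the uniform one, which is exactly the ordering information plain uncertainty sampling ignores.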
109. Quantum variational algorithms are swamped with traps.
- Author
-
Anschuetz ER and Kiani BT
- Subjects
- Algorithms, Neural Networks, Computer
- Abstract
One of the most important properties of classical neural networks is how surprisingly trainable they are, though their training algorithms typically rely on optimizing complicated, nonconvex loss functions. Previous results have shown that unlike the case in classical neural networks, variational quantum models are often not trainable. The most studied phenomenon is the onset of barren plateaus in the training landscape of these quantum models, typically when the models are very deep. This focus on barren plateaus has made the phenomenon almost synonymous with the trainability of quantum models. Here, we show that barren plateaus are only a part of the story. We prove that a wide class of variational quantum models-which are shallow, and exhibit no barren plateaus-have only a superpolynomially small fraction of local minima within any constant energy from the global minimum, rendering these models untrainable if no good initial guess of the optimal parameters is known. We also study the trainability of variational quantum algorithms from a statistical query framework, and show that noisy optimization of a wide variety of quantum models is impossible with a sub-exponential number of queries. Finally, we numerically confirm our results on a variety of problem instances. Though we exclude a wide variety of quantum algorithms here, we give reason for optimism for certain classes of variational algorithms and discuss potential ways forward in showing the practical utility of such algorithms., (© 2022. The Author(s).)
- Published
- 2022
- Full Text
- View/download PDF
110. Millimeter-Wave Radar-Based Identity Recognition Algorithm Built on Multimodal Fusion.
- Author
-
Guo, Jian, Wei, Jingpeng, Xiang, Yashan, and Han, Chong
- Subjects
FEATURE extraction ,HEART rate monitors ,ALGORITHMS ,SIGNAL-to-noise ratio - Abstract
Millimeter-wave radar-based identification technology has a wide range of applications in persistent identity verification, covering areas such as security production, healthcare, and personalized smart consumption systems. It has received extensive attention from the academic community due to its advantages of being non-invasive, environmentally insensitive, and privacy-preserving. Existing identification algorithms mainly rely on a single signal, such as breathing or heartbeat. The reliability and accuracy of these algorithms are limited due to the high similarity of breathing patterns and the low signal-to-noise ratio of heartbeat signals. To address these issues, this paper proposes a multimodal fusion algorithm for identity recognition. The algorithm extracts and fuses features derived from phase signals, respiratory signals, and heartbeat signals. The spatial features of signals with different modes are first extracted by a residual network (ResNet), after which these features are fused with a spatial-channel attention fusion module. On this basis, the temporal features are further extracted with a time series-based self-attention mechanism. Finally, the feature vectors of the user's vital sign modality are obtained to perform identity recognition. This method makes full use of the correlation and complementarity between different modal signals to improve the accuracy and reliability of identification. Simulation experiments show that the identity recognition algorithm proposed in this paper achieves an accuracy of 94.26% on a 20-subject self-test dataset, much higher than that of the traditional algorithm (about 85%). [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
111. Didactic Strategies for the Understanding of the Kalman Filter in Industrial Instrumentation Systems
- Author
-
Flórez C., Oscar D., Camargo L., Julián R., and Hurtado, Orlando García
- Abstract
This paper presents an application of the Kalman filter to signal processing in instrumentation systems when environmental conditions generate a large amount of interference in the acquisition of signals from measurement systems. These unwanted interferences consume significant instrumentation system resources and carry no useful information. A simulation is presented using the Matlab tool, which greatly facilitates the processing of the information so that the corresponding actions can be taken according to the information obtained, taking advantage of the resources currently offered by embedded systems; the required measurements are obtained with sufficient accuracy.
- Published
- 2022
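The didactic core of this record — the Kalman filter's predict/update cycle cleaning a noisy instrumentation signal — can be sketched in a few lines. This is a scalar random-walk model with hand-picked noise variances, not the authors' Matlab simulation.

```python
import numpy as np

def kalman_1d(measurements, q=1e-4, r=0.5, x0=0.0, p0=1.0):
    """Scalar Kalman filter for a slowly varying level in white noise.
    q: process-noise variance, r: measurement-noise variance."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        # Predict: random-walk model -- state unchanged, uncertainty grows.
        p = p + q
        # Update: blend prediction and measurement via the Kalman gain.
        k = p / (p + r)
        x = x + k * (z - x)
        p = (1.0 - k) * p
        estimates.append(x)
    return np.array(estimates)

# Noisy sensor readings around a true level of 5.0.
rng = np.random.default_rng(1)
z = 5.0 + rng.normal(0.0, 0.7, size=200)
xhat = kalman_1d(z)
```

The gain k starts large (trust the data while uncertain) and settles near sqrt(q/r), so the estimate converges to the true level with far less variance than the raw readings.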
112. Specialized Content Knowledge of Pre-Service Teachers on the Infinite Limit of a Sequence
- Author
-
Arnal-Palacián, Mónica and Claros-Mellado, Javier
- Abstract
This paper analyses how pre-service teachers approach the notion of the infinite limit of a sequence from two perspectives: Specialized Content Knowledge and Advanced Mathematical Thinking. The aim of this study is to identify the difficulties associated with this notion and to classify them. In order to achieve this, an exploratory qualitative approach was applied using a sample of 12 future teachers. Among the results, we can affirm that pre-service teachers mainly use algorithmic procedures to solve tasks in which this type of limit is implicit, although they would consider a resolution that specifically involves the notion with an intuitive approach if they had to explain it to their students.
- Published
- 2022
113. A Complicated Relationship: Examining the Relationship between Flexible Strategy Use and Accuracy
- Author
-
Garcia Coppersmith, Jeannette and Star, Jon R.
- Abstract
This study explores student flexibility in mathematics by examining the relationship between accuracy and strategy use for solving arithmetic and algebra problems. Core to procedural flexibility is the ability to select and accurately execute the most appropriate strategy for a given problem. Yet the relationship between strategy selection and accurate execution is nuanced and poorly understood. In this paper, this relationship was examined in the context of an assessment where students were asked to complete the same problem twice using different approaches. In particular, we explored (a) the extent to which students were more accurate when selecting standard or better-than-standard strategies, (b) whether this accuracy-strategy use relationship differed depending on whether the student solved a problem for the first time or the second time, and (c) the extent to which students were more accurate when solving algebraic versus arithmetic problems. Our results indicate significant associations between accuracy and all of these aspects--we found differences in accuracy based on strategy, problem type, and a significant interaction effect between strategy and assessment part. These findings have important implications both for researchers investigating procedural flexibility as well as secondary mathematics educators who seek to promote this capacity among their students.
- Published
- 2022
114. Analyzing Ranking Strategies to Characterize Competition in the Co-Operative Education Job Market
- Author
-
Chopra, Shivangi and Golab, Lukasz
- Abstract
Co-operative education is a form of work-integrated learning that includes academic study and paid work experience. This provides new learning opportunities for students and a talent pipeline for employers, but also requires participation in a competitive job market. This paper studies competition through a unique dataset from a large North American co-operative program, in which students and employers rank each other after a round of interviews, then a matching algorithm assigns students to jobs based on the ranks, and finally, they evaluate each other at the end of the work term. The results suggest that less experienced students and small employers are more strongly affected by competition and consider more options in their rankings, whereas senior students and large employers often only identify their top choice. Additionally, competition appears to affect satisfaction since students and employers give higher work term evaluations when matched with their top choice.
- Published
- 2022
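The rank-based matching step described in the record above can be illustrated with student-proposing deferred acceptance (Gale-Shapley). This is a standard stand-in chosen for illustration; the co-op program's actual matching rules are not specified in the abstract.

```python
def deferred_acceptance(student_prefs, employer_prefs):
    """Student-proposing deferred acceptance, one job per employer:
    students propose down their rank lists; each employer tentatively
    holds the best proposal seen so far and rejects the rest."""
    next_choice = {s: 0 for s in student_prefs}   # next position on each list
    held = {}                                     # employer -> held student
    free = list(student_prefs)
    while free:
        s = free.pop()
        prefs = student_prefs[s]
        if next_choice[s] >= len(prefs):
            continue                              # s has exhausted their list
        e = prefs[next_choice[s]]
        next_choice[s] += 1
        rank = employer_prefs[e]
        if e not in held:
            held[e] = s
        elif rank.index(s) < rank.index(held[e]):  # e prefers the newcomer
            free.append(held[e])
            held[e] = s
        else:
            free.append(s)                        # rejected; try next choice
    return {s: e for e, s in held.items()}

students = {"ana": ["acme", "byte"], "bo": ["acme", "byte"], "cy": ["byte", "acme"]}
employers = {"acme": ["bo", "ana", "cy"], "byte": ["ana", "cy", "bo"]}
match = deferred_acceptance(students, employers)
```

The outcome mirrors the paper's observation: shortlisted top choices go to the preferred candidates, and a student ranked low by both employers can go unmatched.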
115. Representation of Learning in the Post-Digital: Students' Dropout Predictive Models with Artificial Intelligence Algorithms
- Author
-
Zanellati, Andrea, Macauda, Anita, Panciroli, Chiara, and Gabbrielli, Maurizio
- Abstract
Within the scientific debate on the post-digital and education, we present a position paper describing a research project aimed at designing a predictive model for students' low achievement in mathematics in Italy. The model is based on the INVALSI data set, an Italian large-scale assessment test, and we use decision trees as the classification algorithm. In designing this tool, we aim to go beyond the use of economic, social, and cultural context indices as the main factors for predicting the occurrence of a learning gap. Indeed, we want to include a suitable representation of students' learning in the model by exploiting the data collected through the INVALSI tests. We resort to a knowledge-based approach to address this issue and, specifically, we try to understand what knowledge is introduced into the model through the representation of learning. In this sense, our proposal allows an encoding of students' learning that is transferable to different student cohorts. Furthermore, the encoding methods may be applied to other large-scale assessment tests. Hence, we aim to contribute to the debate on knowledge representation in AI tools for education.
- Published
- 2023
- Full Text
- View/download PDF
116. Application of Motion Capture Based on Digital Filtering Algorithm in Sports Dance Teaching.
- Author
-
Rao, Fan
- Subjects
MOTION capture (Human mechanics) ,INTELLIGENT sensors ,ELECTRONIC paper ,ALGORITHMS ,MOTION detectors ,SYSTEMS design - Abstract
In order to improve the teaching effect of sports dance, this paper analyzes motion capture in traditional dance teaching, uses sensor-based motion perception algorithms to capture sports dance movements, and designs an intelligent sensor system that can be used for sports dance motion capture. Moreover, this paper applies a digital filtering algorithm to design the hardware structure of the sports dance motion capture system and builds a motion capture system for sports dance teaching based on the digital filtering algorithm according to actual needs. In addition, this paper uses simulation tests to evaluate the performance of the designed system. The research results show that the proposed motion capture system for sports dance teaching, based on the digital filtering algorithm, can play an important role in sports dance teaching and effectively improve its efficiency. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
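The digital filtering at the heart of the system in this record can be illustrated with the simplest case: a causal moving-average low-pass filter smoothing a jittery joint-angle trace. The filter choice, window length, and noise model are illustrative assumptions, not the paper's design.

```python
import numpy as np

def moving_average(signal, window=5):
    """Simple FIR low-pass: each output sample is the mean of the last
    `window` input samples (causal; the start is padded with the first
    sample so the output has the same length as the input)."""
    kernel = np.ones(window) / window
    padded = np.concatenate([np.full(window - 1, signal[0]), signal])
    return np.convolve(padded, kernel, mode="valid")

# Jittery joint-angle trace: smooth motion plus sensor noise.
rng = np.random.default_rng(2)
t = np.linspace(0, 2 * np.pi, 400)
raw = np.sin(t) + rng.normal(0, 0.2, size=t.size)
smooth = moving_average(raw, window=9)
```

Averaging over 9 samples cuts the noise variance roughly ninefold at the cost of a small lag of (window - 1)/2 samples, the basic trade-off any motion-capture filter design must balance.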
117. Building Crack Detection Based on Digital Image Processing Technology and Multiscale Feature Analysis Automatic Detection Algorithm.
- Author
-
Liu, Chenguang
- Subjects
DIGITAL image processing ,SMART structures ,ENGINEERING personnel ,ELECTRONIC paper ,ALGORITHMS ,CRACKING of concrete - Abstract
At present, the monitoring of concrete cracks is still mainly carried out by engineering personnel using simple mechanical monitoring instruments. Human inspection is inevitably affected by the inspector's psychological and physical state and by external conditions, and may involve subjective judgment, so the quality and accuracy of the detection cannot be guaranteed. This paper combines digital image processing technology and a multiscale feature analysis automatic detection algorithm to construct an intelligent crack detection system for building structures. Moreover, this paper proposes an enrichment scheme for the unknown partially entangled states of building microparticles and utilizes the entanglement exchange process based on the Raman interaction of two building microparticles. The experimental results show that the automatic detection method for building cracks based on digital image processing technology and multiscale feature analysis has a good effect. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
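The image-processing side of the crack detection pipeline in this record can be reduced to its simplest ingredient: thresholding an image-gradient magnitude. The sketch below uses central differences and a mean-plus-k·std threshold as stand-ins; the paper's multiscale feature analysis is not reproduced.

```python
import numpy as np

def gradient_magnitude(img):
    """Gradient magnitude via central differences (interior pixels only) --
    a minimal stand-in for multiscale edge features."""
    gx = np.zeros_like(img, dtype=float)
    gy = np.zeros_like(img, dtype=float)
    gx[:, 1:-1] = (img[:, 2:] - img[:, :-2]) / 2.0
    gy[1:-1, :] = (img[2:, :] - img[:-2, :]) / 2.0
    return np.hypot(gx, gy)

def crack_mask(img, k=3.0):
    """Flag pixels whose gradient magnitude is an outlier
    (more than k standard deviations above the image mean)."""
    g = gradient_magnitude(img.astype(float))
    return g > g.mean() + k * g.std()

# Uniform concrete surface with one dark vertical "crack".
img = np.full((32, 32), 180.0)
img[:, 15] = 40.0
mask = crack_mask(img)
```

A real system would add multiscale smoothing and morphological cleanup; this shows only why intensity gradients localize crack edges.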
118. A Review on Federated Learning and Machine Learning Approaches: Categorization, Application Areas, and Blockchain Technology.
- Author
-
Ogundokun, Roseline Oluwaseun, Misra, Sanjay, Maskeliunas, Rytis, and Damasevicius, Robertas
- Subjects
BLOCKCHAINS ,ARTIFICIAL intelligence ,MACHINE learning ,CONFERENCE papers ,ALGORITHMS ,SCIENCE publishing - Abstract
Federated learning (FL) is a scheme in which several consumers work collectively to unravel machine learning (ML) problems, with a dominant collector synchronizing the procedure. This design correspondingly enables the training data to be distributed, guaranteeing that each individual device's data are secluded. The paper systematically reviewed the available literature using the Preferred Reporting Items for Systematic Review and Meta-analysis (PRISMA) guiding principle. The study presents a systematic review of applicable ML approaches for FL, reviews the categorization of FL, discusses the FL application areas, presents the relationship between FL and Blockchain Technology (BT), and discusses some existing literature that has used FL and ML approaches. The study also examined applicable machine learning models for federated learning. The inclusion criteria were (i) published between 2017 and 2021, (ii) written in English, (iii) published in a peer-reviewed scientific journal, and (iv) preprint papers. Excluded from the review were (i) unpublished studies, theses, and dissertations, (ii) conference papers, (iii) papers not in English, and (iv) papers that did not use artificial intelligence models and blockchain technology. In total, 84 eligible papers were finally examined in this study. In recent years, the amount of research on ML using FL has increased. Accuracy equivalent to standard feature-based techniques has been attained, and ensembles of many algorithms may yield even better results. We discovered that the best results were obtained from the hybrid design of an ML ensemble employing expert features. However, some additional difficulties and issues need to be overcome, such as efficiency, complexity, and smaller datasets. In addition, novel FL applications should be investigated from the standpoint of the datasets and methodologies. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
119. Reply to "Describing center of pressure movement in stabilometry by ellipse area approximation" from Agnieszka Gołąb concerning the paper "A Review of Center of Pressure (COP) Variables to Quantify Standing Balance in Elderly People: Algorithms and Open Access Code"
- Author
-
Quijoux, Flavien and Nicolaï, Alice
- Subjects
OLDER people ,EQUILIBRIUM testing ,ALGORITHMS ,WAVELETS (Mathematics) - Abstract
Letter to the Editor replying to "Describing center of pressure movement in stabilometry by ellipse area approximation" from Agnieszka Gołąb, concerning the paper "A Review of Center of Pressure (COP) Variables to Quantify Standing Balance in Elderly People: Algorithms and Open Access Code". Our choice was actually to present the formula of the prediction ellipse area in the article, as it indeed does not strongly depend on the sample size as the confidence ellipse area does. [Extracted from the article]
- Published
- 2022
- Full Text
- View/download PDF
120. Special Issue on papers from the 2019 Workshop on Models and Algorithms for Planning and Scheduling Problems.
- Author
-
Khuller, Samir
- Subjects
SCHEDULING ,ALGORITHMS ,ONLINE algorithms - Abstract
The 2019 Workshop on Models and Algorithms for Planning and Scheduling Problems was held in Renesse (The Netherlands). Among the papers in this issue, "Well-behaved Online Load Balancing Against Strategic Jobs" by Li, Li and Wu considers a truthful online load-balancing problem with the objective of makespan minimization on related machines. [Extracted from the article]
- Published
- 2023
- Full Text
- View/download PDF
121. Risk assessment of interstate pipelines using a fuzzy-clustering approach.
- Author
-
Osman A and Shehadeh M
- Subjects
- Cluster Analysis, Egypt, Risk Assessment, Algorithms, Fuzzy Logic
- Abstract
Interstate pipelines are the most efficient and feasible means of transporting crude oil and gas within borders. Assessing the risks of these pipelines is challenging despite the evolution of computational fuzzy inference systems (FIS). The computational intricacy increases with the dimensionality of the system variables, especially in the typical Takagi-Sugeno (T-S) fuzzy model. Typically, the number of rules rises exponentially as the number of system variables increases, and hence it is unfeasible to specify the rules entirely for pipeline risk assessments. This work proposes an indexing pipeline risk assessment approach integrated with subtractive clustering fuzzy logic to address the uncertainty of real-world circumstances. Hypothetical data are used to set up the subtractive clustering fuzzy model, using the fundamental rules and scores of the pipeline risk assessment indexing method. An interstate crude-oil pipeline in Egypt is used as a case study to demonstrate the proposed approach., (© 2022. The Author(s).)
- Published
- 2022
- Full Text
- View/download PDF
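The subtractive clustering step this record relies on can be sketched directly: every data point is a candidate cluster center whose "potential" measures local density, and each accepted center suppresses the potential of its neighbors (Chiu's formulation; the radii and stopping ratio below are illustrative choices, not the paper's settings).

```python
import numpy as np

def subtractive_clustering(X, ra=0.5, rb=None, stop_ratio=0.25):
    """Chiu's subtractive clustering.  ra is the neighborhood radius that
    defines a point's potential; rb (> ra) is the suppression radius around
    each accepted center; stop when the best remaining potential falls
    below stop_ratio times the first peak."""
    if rb is None:
        rb = 1.5 * ra
    alpha, beta = 4.0 / ra**2, 4.0 / rb**2
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise sq. dists
    potential = np.exp(-alpha * d2).sum(axis=1)
    first_peak = potential.max()
    centers = []
    while True:
        i = int(np.argmax(potential))
        if potential[i] < stop_ratio * first_peak:
            break
        centers.append(X[i])
        # Suppress potential around the accepted center.
        potential = potential - potential[i] * np.exp(-beta * d2[i])
    return np.array(centers)

# Two well-separated blobs should yield two cluster centers.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0.0, 0.1, (30, 2)), rng.normal((3.0, 3.0), 0.1, (30, 2))])
centers = subtractive_clustering(X, ra=1.0)
```

In a FIS pipeline each center then seeds one fuzzy rule, which is how the method sidesteps the exponential rule explosion the abstract mentions.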
122. A SMART RESOURCE UTILIZATION ALGORITHM FOR HIGH SPEED 5G COMMUNICATION NETWORKS BASED ON CLOUD SERVERS.
- Author
-
Ali, Syed Ibad, Jadhav, Jagannath, Arunkumar, R., and Kanagavalli, N.
- Subjects
TELECOMMUNICATION systems ,5G networks ,ELECTRONIC paper ,ALGORITHMS ,COMPUTER systems - Abstract
The 5G technology will become a truly integrated technology. It is becoming widely adopted because it optimizes computing complexity and integrates groups of individual network components. Adequate resources must be provided for the various devices that typically share the same network, and the various functions in the data system also require resource allocation suited to their needs. This ensures that the various devices on the network perform different types of work effectively. It also requires representing the functionality of each device so that its resources can be estimated in a timely manner; only then is it convenient to carry out a variety of jobs depending on their speed and selection. Data requirements vary according to the functionality of the various devices involved in the dynamic configuration of this network segment. In this paper, a smart resource utilization scheme is proposed. Its main purpose is to better manage the available off-the-shelf resources, provide them where they are needed, and streamline data delivery to users on the network, ensuring that all data reaches users correctly. The proposed method achieves 49% energy consumption, 90% resource utilization, 92% resource reservation, and 91% quality of service. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
123. Epidemiological Algorithm for Early Detection of COVID-19 Cases in a Mexican Oncologic Center.
- Author
-
González-Escamilla, Moisés, Pérez-Ibave, Diana Cristina, Burciaga-Flores, Carlos Horacio, Ortiz-Murillo, Vanessa Natali, Ramírez-Correa, Genaro A., Rodríguez-Niño, Patricia, Piñeiro-Retif, Rafael, Rodríguez-Gutiérrez, Hazyadee Frecia, Alcorta-Nuñez, Fernando, González-Guerrero, Juan Francisco, Vidal-Gutiérrez, Oscar, and Garza-Rodríguez, María Lourdes
- Subjects
COVID-19 pandemic ,COVID-19 ,LATENT infection ,ALGORITHMS ,ELECTRONIC paper - Abstract
An early detection tool for latent COVID-19 infections in oncology staff and patients is essential to prevent outbreaks in a cancer center. (1) Background: In this study, we developed and implemented two early detection tools for the radiotherapy area to identify COVID-19 cases opportunely. (2) Methods: Staff and patients answered a questionnaire (electronic and paper surveys, respectively) with clinical and epidemiological information. The data were collected through two online survey tools: Real-Time Tracking (R-Track) and Summary of Factors (S-Facts). Cut-off values were established according to the algorithm models. SARS-CoV-2 qRT-PCR tests confirmed the individuals flagged by the algorithms. (3) Results: Oncology staff members (n = 142) were tested, and 14% (n = 20) were positive for the R-Track algorithm; 75% (n = 15) of these were qRT-PCR positive. The S-Facts algorithm identified 7.75% (n = 11) positive oncology staff members, and 81.82% (n = 9) were qRT-PCR positive. Oncology patients (n = 369) were evaluated, and 1.36% (n = 5) were positive for the algorithm used. The five patients (100%) were confirmed by qRT-PCR. (4) Conclusions: The proposed early detection tools have proved to be low-cost and efficient in a country where qRT-PCR tests and vaccines are insufficient for the population. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
124. Intelligent algorithms and complex system for a smart parking for vaccine delivery center of COVID-19.
- Author
-
Jemmali, Mahdi
- Subjects
COVID-19 ,INTELLIGENT buildings ,ALGORITHMS ,HERD immunity ,SMART cities ,NP-hard problems ,ELECTRONIC paper - Abstract
Achieving community immunity against the coronavirus disease 2019 (COVID-19) depends on vaccinating the largest number of people within a specific period while taking all precautionary measures. To address this problem, this paper presents a smart parking system that will help the health crisis management committee vaccinate the largest number of people in the minimum period of time while ensuring that all precautionary measures are followed, through a set of algorithms. These algorithms seek to ensure a uniform distribution of persons in the parking area. This paper proposes a novel complex system for smart parking and nine algorithms to address the NP-hard problem. The experimental results demonstrate the performance of the proposed algorithms in terms of gap and time. Applying these algorithms in smart cities to enforce precautionary measures against COVID-19 can help fight this pandemic. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
125. The Augmented Synthetic Control Method
- Author
-
Ben-Michael, Eli, Feller, Avi, and Rothstein, Jesse
- Abstract
The synthetic control method (SCM) is a popular approach for estimating the impact of a treatment on a single unit in panel data settings. The "synthetic control" is a weighted average of control units that balances the treated unit's pre-treatment outcomes and other covariates as closely as possible. A critical feature of the original proposal is to use SCM only when the fit on pre-treatment outcomes is excellent. We propose Augmented SCM as an extension of SCM to settings where such pre-treatment fit is infeasible. Analogous to bias correction for inexact matching, Augmented SCM uses an outcome model to estimate the bias due to imperfect pretreatment fit and then de-biases the original SCM estimate. Our main proposal, which uses ridge regression as the outcome model, directly controls pre-treatment fit while minimizing extrapolation from the convex hull. This estimator can also be expressed as a solution to a modified synthetic controls problem that allows negative weights on some donor units. We bound the estimation error of this approach under different data generating processes, including a linear factor model, and show how regularization helps to avoid over-fitting to noise. We demonstrate gains from Augmented SCM with extensive simulation studies and apply this framework to estimate the impact of the 2012 Kansas tax cuts on economic growth. We implement the proposed method in the new augsynth R package. [This paper was published in "Journal of the American Statistical Association" v116 n536 2021.]
- Published
- 2021
- Full Text
- View/download PDF
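The ridge-augmentation idea in the abstract above can be sketched in a few lines of numpy. This is a simplified illustration under stated assumptions, not the augsynth implementation: the exact SCM weights solve a simplex-constrained program, which is approximated here by clipped, renormalised least squares, and the function name and interface are illustrative.

```python
import numpy as np

def augmented_scm(Y_pre_ctrl, Y_post_ctrl, y_pre_treat, alpha=1.0):
    """Ridge-augmented SCM sketch: estimate the treated unit's
    counterfactual post-treatment outcome.

    Y_pre_ctrl  : (J, T0) pre-treatment outcomes of J control units
    Y_post_ctrl : (J,)    post-treatment outcomes of the controls
    y_pre_treat : (T0,)   pre-treatment outcomes of the treated unit
    """
    J, T0 = Y_pre_ctrl.shape
    # 1) Crude SCM weights: least-squares fit of the treated unit's
    #    pre-period on the controls, clipped to be non-negative and
    #    renormalised (a stand-in for the exact simplex-constrained SCM).
    w, *_ = np.linalg.lstsq(Y_pre_ctrl.T, y_pre_treat, rcond=None)
    w = np.clip(w, 0.0, None)
    w = w / w.sum() if w.sum() > 0 else np.full(J, 1.0 / J)
    scm_est = w @ Y_post_ctrl
    # 2) Ridge outcome model fit on the controls only:
    #    pre-period outcomes -> post-treatment outcome.
    beta = np.linalg.solve(Y_pre_ctrl.T @ Y_pre_ctrl + alpha * np.eye(T0),
                           Y_pre_ctrl.T @ Y_post_ctrl)
    # 3) De-bias: add the outcome model's estimate of the imbalance
    #    left over after weighting (zero when pre-treatment fit is exact).
    return scm_est + beta @ (y_pre_treat - w @ Y_pre_ctrl)
```

When one control unit exactly matches the treated unit's pre-period, the weights collapse onto that unit and the correction term vanishes, recovering plain SCM.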
126. Special Issue "Scheduling: Algorithms and Applications".
- Author
-
Werner, Frank
- Subjects
METAHEURISTIC algorithms ,FLOW shop scheduling ,OPTIMIZATION algorithms ,ALGORITHMS ,ASSEMBLY line balancing ,JOB applications - Abstract
This special issue of Algorithms is dedicated to recent developments of scheduling algorithms and new applications. The paper [[10]] considers an assignment problem and some modifications which can be converted into routing, distribution, or scheduling problems. For this problem, a hybrid metaheuristic algorithm is presented which combines a genetic algorithm with a so-called spotted hyena optimization algorithm. [Extracted from the article]
- Published
- 2023
- Full Text
- View/download PDF
127. Digital Art Design Effectiveness Model System Based on K-Medoids Algorithm.
- Author
-
Luo, Xin
- Subjects
COMPUTER art ,DIGITAL communications ,DIGITAL technology ,ABSTRACT art ,ELECTRONIC paper ,COMPUTER networks ,ALGORITHMS - Abstract
With the development of the times, figurative expressions no longer meet the creative needs of artists and the aesthetic demands of the people. In order to express art in a more profound way, the perfect use of abstract graphics plays a crucial role in the success of the work. In recent years, there has been a surge in the creation of digital art, but there is relatively little theoretical literature on the combination of abstract graphics as a visual language and digital art. In addition, research on the theoretical aspects of digital art design is also relatively weak, so it is essential to analyse the formal aesthetics and innovative applications. In fact, digital art design has a very important role to play in promoting the development of creative cultural industries. In other words, the healthy development of digital art design can influence the future prospects of a country's creative and cultural industries. Digital art design is an integrated and complex production process and labour outcome. In addition to its human, aesthetic, and social value, digital art also has an economic value. Digital art is a new art form that combines digital technology and artistic aesthetics. As such, digital art is characterised by high technology, diverse forms, popularised art, and the advantages of high communication, interactivity, and influence, which can provide more assistance for the innovation and application of abstract graphics. Digital art is multifaceted and has an artistic expression that cannot be matched by other forms of technology. Abstract graphics, driven by digital art, are full of novelty and interest and can greatly enrich people's emotions and senses. Abstract graphics bring the experience of digital art to its fullest potential. The combination of digital art and abstract graphics offers more innovation and possibilities for the development of art and will bring great prosperity to art communication. 
With the widespread use of computer and network technology, the Internet has developed rapidly. In this context, digital art, as art created through digital means and concepts, has gained widespread attention. How to integrate existing computer resources in this new environment to build a model of digital art design effectiveness therefore directly influences the quality of digital art design with digital content innovation at its core. At the same time, as digital art becomes more popular, the demand for digital talent becomes urgent, and the cultivation of high-quality digital talent has become a major concern for society. Therefore, in order to explore the success of digital art design and the cultivation of digital art talent, and to better serve the innovation of digital art, this paper proposes a digital art effectiveness model based on the K-medoids algorithm. This model can provide a deeper and more comprehensive understanding of digital art and abstract graphics and provide theoretical support for professional design creation. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
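Since the model in the abstract above is built on K-medoids, a minimal alternating-update sketch of the algorithm may help; this is a simplified PAM-style variant, not the authors' implementation, and the function interface is illustrative.

```python
import numpy as np

def k_medoids(X, k, n_iter=100, seed=0):
    """Alternating k-medoids: returns (medoid indices, labels).

    Unlike k-means, each cluster centre is an actual data point, which
    makes the method robust to outliers and usable with any dissimilarity.
    """
    rng = np.random.default_rng(seed)
    n = len(X)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise distances
    medoids = rng.choice(n, size=k, replace=False)
    for _ in range(n_iter):
        labels = np.argmin(D[:, medoids], axis=1)   # assign to nearest medoid
        new_medoids = medoids.copy()
        for j in range(k):
            members = np.where(labels == j)[0]
            if len(members) == 0:
                continue
            # new medoid: the member minimising total distance to its cluster
            costs = D[np.ix_(members, members)].sum(axis=1)
            new_medoids[j] = members[np.argmin(costs)]
        if np.array_equal(new_medoids, medoids):    # converged
            break
        medoids = new_medoids
    return medoids, np.argmin(D[:, medoids], axis=1)
```

Because the centres are restricted to data points, the update step only needs the precomputed distance matrix `D`, not the raw features.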
128. Reasoning Algorithms on Feature Modeling—A Systematic Mapping Study.
- Author
-
Sepúlveda, Samuel and Cravero, Ania
- Subjects
ALGORITHMS ,COMPUTER software industry ,PRODUCT lines ,EMPIRICAL research - Abstract
Context: Software product lines (SPLs) have reached a considerable level of adoption in the software industry. The most commonly used models for managing the variability of SPLs are feature models (FMs). The analysis of FMs is an error-prone, tedious task, and it is not feasible to accomplish this task manually with large-scale FMs. In recent years, much effort has been devoted to developing reasoning algorithms for FMs. Aim: To synthesize the evidence on the use of reasoning algorithms for feature modeling. Method: We conducted a systematic mapping study, including six research questions. This study included 66 papers published from 2010 to 2020. Results: We found that most algorithms were used in the domain stage (70%). The most commonly used technologies were transformations (18%). As for the origins of the proposals, they were mainly rooted in academia (76%). The FODA model continued to be the most frequently used representation for feature modeling (70%). A large majority of the papers presented some empirical validation process (90%). Conclusion: We were able to respond to the RQs. The FODA model is consolidated as a reference within SPLs to manage variability. Responses to RQ2 and RQ6 require further review. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
129. Community Discovery Algorithm Based on Multi-Relationship Embedding.
- Author
-
Dongming Chen, Mingshuo Nie, Jie Wang, and Dongqi Wang
- Subjects
EMBEDDED computer systems ,ALGORITHMS ,MATRICES (Mathematics) ,CONVOLUTIONAL neural networks ,MACHINE learning - Abstract
Complex systems in the real world can often be modeled as network structures, and community discovery algorithms for complex networks enable researchers to understand the internal structure and implicit information of networks. Existing community discovery algorithms are usually designed for single-layer networks or single-interaction relationships and do not consider the attribute information of nodes. However, many real-world networks consist of multiple types of nodes and edges, and there may be rich semantic information on nodes and edges. Methods for single-layer networks cannot effectively handle multi-layer information, multi-relationship information, and attribute information. This paper proposes a community discovery algorithm based on multi-relationship embedding. The proposed algorithm first models the nodes in the network and generates a node embedding matrix for each specific relationship type via a node encoder. These per-relationship embedding matrices are then aggregated by a Graph Convolutional Network (GCN) to obtain the final node embedding matrix. This strategy allows rich structural and attribute information in multi-relational networks to be captured. Experiments were conducted on different datasets against baselines, and the results show that the proposed algorithm obtains significant performance improvements in community discovery, node clustering, and similarity search tasks; compared to the best-performing baseline, it achieves an average improvement of 3.1% in Macro-F1 and 4.7% in Micro-F1, which proves its effectiveness. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
130. Determining the Moho topography using an improved inversion algorithm: a case study from the South China Sea.
- Author
-
Zhang, Hui, Yu, Hangtao, Xu, Chuang, Li, Rui, Bie, Lu, He, Qingyin, Liu, Yiqi, Lu, Jinsong, Xiao, Yinan, Lyu, Yang, Eldosouky, Ahmed M., and Loureiro, Afonso
- Subjects
MOHOROVICIC discontinuity ,OPTIMIZATION algorithms ,TOPOGRAPHY ,ALGORITHMS - Abstract
The Parker-Oldenburg method, a classical frequency-domain algorithm, has been widely used in Moho topographic inversion. The method has two indispensable hyperparameters: the Moho density contrast and the average Moho depth. Accurate hyperparameters are important prerequisites for the inversion of fine Moho topography. However, limited by the nonlinear terms, the hyperparameters estimated by previous methods have obvious deviations. For this reason, this paper proposes a new method that improves the existing Parker-Oldenburg method by taking advantage of the invasive weed optimization algorithm for estimating the hyperparameters. The synthetic test results show that, compared with the trial-and-error method and the linear regression method, the new method estimates the hyperparameters more accurately and with excellent computational efficiency, which lays the foundation for the inversion of a more accurate Moho topography. In practice, the method is applied to Moho topographic inversion in the South China Sea. With the constraints of available seismic data, the crust-mantle density contrast and the average Moho depth in the South China Sea are determined to be 0.535 g/cm³ and 21.63 km, respectively, and the Moho topography of the South China Sea is inverted on this basis. The results show that the Moho depth in the study area ranges from 5.7 km to 32.3 km, with obvious undulations. The shallowest parts of the Moho topography are mainly located in the southern part of the Southwestern sub-basin and the southern part of the Manila Trench, at a depth of about 6 km. Compared with the CRUST 1.0 model and the model calculated by the improved Bott's method, the RMS difference between the Moho model in this paper and the seismic points is smaller, which proves that the method has some advantages in Moho topographic inversion. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
131. Research on a Recognition Algorithm for Traffic Signs in Foggy Environments Based on Image Defogging and Transformer.
- Author
-
Liu, Zhaohui, Yan, Jun, and Zhang, Jinzhao
- Subjects
TRAFFIC signs & signals ,TRAFFIC monitoring ,ALGORITHMS ,AUTONOMOUS vehicles - Abstract
The efficient and accurate identification of traffic signs is crucial to the safety and reliability of active driving assistance and driverless vehicles. However, the accurate detection of traffic signs under extreme cases remains challenging. Aiming at the problems of missing detection and false detection in traffic sign recognition in fog traffic scenes, this paper proposes a recognition algorithm for traffic signs based on pix2pixHD+YOLOv5-T. Firstly, the defogging model is generated by training the pix2pixHD network to meet the advanced visual task. Secondly, in order to better match the defogging algorithm with the target detection algorithm, the algorithm YOLOv5-Transformer is proposed by introducing a transformer module into the backbone of YOLOv5. Finally, the defogging algorithm pix2pixHD is combined with the improved YOLOv5 detection algorithm to complete the recognition of traffic signs in foggy environments. Comparative experiments proved that the traffic sign recognition algorithm proposed in this paper can effectively reduce the impact of a foggy environment on traffic sign recognition. Compared with the YOLOv5-T and YOLOv5 algorithms in moderate fog environments, the overall improvement of this algorithm is achieved. The precision of traffic sign recognition of the algorithm in the fog traffic scene reached 78.5%, the recall rate was 72.2%, and mAP@0.5 was 82.8%. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
132. Online Social Network Information Source Identification Algorithm Based on Multi-Attribute Topological Clustering.
- Author
-
Dong, Ming, Lu, Yujuan, Tan, Zhenhua, and Zhang, Bin
- Subjects
ONLINE social networks ,INFORMATION resources ,INFORMATION networks ,INFORMATION dissemination ,ALGORITHMS ,IDENTIFICATION - Abstract
This paper focuses on the problem of information source identification in online social networks (OSNs). By analyzing the research situation of source identification problems and challenges (such as the randomness of the information dissemination process and complexity of the underlying network topology), this paper studies the problem of multiple source diffusion and proposes a source identification algorithm based on multi-attribute topological clustering (MaTC). The basic idea of the algorithm is to decompose the multi-source problems into a series of single-source problems by using clustering partitioning to improve accuracy and efficiency. Firstly, it estimates the number of source nodes, which is also the number of network partitions, then characterizes the combination of multiple attribute structures as an attribute index of topological clustering, performs an analysis of the distribution of real source nodes in each partition to evaluate the accuracy of the clustering partition, and finally uses Jordan centrality within each partition for single-source identification. Through comparative experiments, it is verified that the proposed MaTC algorithm is superior to the comparison algorithms in evaluating indicators. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
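The single-source step that the MaTC algorithm above delegates to Jordan centrality can be sketched directly: within a partition, the estimated source is the node whose maximum shortest-path distance (eccentricity) to the observed infected nodes is smallest. Below is a minimal BFS-based sketch with an illustrative adjacency-list interface; the multi-attribute clustering partition itself is not reproduced.

```python
from collections import deque

def jordan_center(adj, infected):
    """Return the node minimising eccentricity w.r.t. the infected set.

    adj      : dict mapping node -> list of neighbour nodes (undirected)
    infected : iterable of nodes observed as infected
    """
    def bfs_distances(src):
        # unweighted shortest-path distances from src via BFS
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        return dist

    best_node, best_ecc = None, float("inf")
    for node in adj:
        dist = bfs_distances(node)
        # eccentricity: distance to the farthest infected node (inf if unreachable)
        ecc = max(dist.get(i, float("inf")) for i in infected)
        if ecc < best_ecc:
            best_node, best_ecc = node, ecc
    return best_node
```

On a path graph 0-1-2-3-4 with infected end nodes {0, 4}, the midpoint 2 minimises the eccentricity and is returned.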
133. A Lightweight Remote Sensing Small Target Image Detection Algorithm Based on Improved YOLOv8.
- Author
-
Nie, Haijiao, Pang, Huanli, Ma, Mingyang, and Zheng, Ruikai
- Subjects
OBJECT recognition (Computer vision) ,ALGORITHMS ,REMOTE-sensing images ,REMOTE sensing - Abstract
In response to the challenges posed by small objects in remote sensing images, such as low resolution, complex backgrounds, and severe occlusions, this paper proposes a lightweight improved model based on YOLOv8n. During the detection of small objects, the feature fusion part of the YOLOv8n algorithm retrieves relatively fewer features of small objects from the backbone network compared to large objects, resulting in low detection accuracy for small objects. To address this issue, firstly, this paper adds a dedicated small object detection layer in the feature fusion network to better integrate the features of small objects into the feature fusion part of the model. Secondly, the SSFF module is introduced to facilitate multi-scale feature fusion, enabling the model to capture more gradient paths and further improve accuracy while reducing model parameters. Finally, the HPANet structure is proposed, replacing the Path Aggregation Network with HPANet. Compared to the original YOLOv8n algorithm, the recognition accuracy of mAP@0.5 on the VisDrone data set and the AI-TOD data set has increased by 14.3% and 17.9%, respectively, while the recognition accuracy of mAP@0.5:0.95 has increased by 17.1% and 19.8%, respectively. The proposed method reduces the parameter count by 33% and the model size by 31.7% compared to the original model. Experimental results demonstrate that the proposed method can quickly and accurately identify small objects in complex backgrounds. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
134. Automatic selection of the number of clusters using Bayesian clustering and sparsity-inducing priors.
- Author
-
Valle D, Jameel Y, Betancourt B, Azeria ET, Attias N, and Cullen J
- Subjects
- Alberta, Bayes Theorem, Brazil, Cluster Analysis, Algorithms
- Abstract
Clustering is a ubiquitous task in ecological and environmental sciences and multiple methods have been developed for this purpose. Because these clustering methods typically require users to a priori specify the number of groups, the standard approach is to run the algorithm for different numbers of groups and then choose the optimal number using a criterion (e.g., AIC or BIC). The problem with this approach is that it can be computationally expensive to run these clustering algorithms multiple times (i.e., for different numbers of groups) and some of these information criteria can lead to an overestimation of the number of groups. To address these concerns, we advocate for the use of sparsity-inducing priors within a Bayesian clustering framework. In particular, we highlight how the truncated stick-breaking (TSB) prior, a prior commonly adopted in Bayesian nonparametrics, can be used to simultaneously determine the number of groups and estimate model parameters for a wide range of Bayesian clustering models without requiring the fitting of multiple models. We illustrate the ability of this prior to successfully recover the true number of groups for three clustering models (two types of mixture models, applied to GPS movement data and species occurrence data, as well as the species archetype model) using simulated data in the context of movement ecology and community ecology. We then apply these models to armadillo movement data in Brazil, plant occurrence data from Alberta (Canada), and bird occurrence data from North America. We believe that many ecological and environmental sciences applications will benefit from Bayesian clustering methods with sparsity-inducing priors given the ubiquity of clustering and the associated challenge of determining the number of groups. Two R packages, EcoCluster and bayesmove, are provided that enable the straightforward fitting of these models with the TSB prior., (© 2021 The Ecological Society of America.)
- Published
- 2022
- Full Text
- View/download PDF
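The truncated stick-breaking (TSB) prior highlighted in the abstract above admits a very short constructive sketch: draw Beta(1, α) stick fractions, truncate at level K, and convert them to mixture weights. This is a generic illustration of the construction, not the EcoCluster or bayesmove implementation.

```python
import numpy as np

def stick_breaking_weights(alpha, K, rng):
    """Draw K mixture weights from a truncated stick-breaking prior.

    alpha : concentration; small values induce sparsity, piling mass
            onto few sticks so unneeded clusters are switched off.
    K     : truncation level (upper bound on the number of clusters).
    """
    v = rng.beta(1.0, alpha, size=K)    # stick fractions v_k ~ Beta(1, alpha)
    v[-1] = 1.0                         # truncation: last stick takes the rest
    # weight_k = v_k * prod_{j<k} (1 - v_j): break off a fraction of what's left
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - v[:-1])])
    return v * remaining

rng = np.random.default_rng(0)
weights = stick_breaking_weights(alpha=0.5, K=10, rng=rng)  # non-negative, sums to 1
```

In a Bayesian clustering model these weights govern cluster assignment, and components whose weight shrinks toward zero are effectively pruned, so the number of occupied clusters is inferred in a single model fit.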
135. Feasibility of ultra-high-speed acquisition in xSPECT bone algorithm: a phantom study with advanced bone SPECT-specific phantom.
- Author
-
Ichikawa H, Miyaji N, Onoguchi M, Shibutani T, Nagaki A, Kato T, and Shimada H
- Subjects
- Feasibility Studies, Humans, Image Processing, Computer-Assisted methods, Lumbar Vertebrae diagnostic imaging, Male, Phantoms, Imaging, Algorithms, Tomography, Emission-Computed, Single-Photon methods
- Abstract
Objective: Although xSPECT Bone (xB) provides quantitative single-photon emission computed tomography (SPECT) high-resolution images, patients' burden remains high due to long acquisition time; therefore, this study aimed to investigate the feasibility of shortening the xB acquisition time using a custom-designed phantom., Methods: A custom-designed xSPECT bone-specific (xSB) phantom with simulated cortical and spongious bones was developed based on the thoracic bone phantom. Both standard- and ultra-high-speed (UHS) xB acquisitions were performed in a male patient with lung cancer. In this phantom study, SPECT was acquired for 3, 6, 9, 12, and 30 min. The clinical SPECT acquisition time per rotation was 9 and 3 min for standard and UHS, respectively. SPECT images were reconstructed using ordered subset expectation maximization with three-dimensional resolution recovery (Flash3D; F3D) and xB algorithms. Quantitative SPECT value (QSV) and coefficient of variation (CV) were measured using the volume of interests (VOIs) placed at the center of the vertebral body and hot sphere. A linear profile was plotted on the spinous process at the center of the xSB phantom; then, the full width at half maximum (FWHM) was measured. The standardized uptake value (SUV) and standard deviation from the first thoracic to the fifth lumbar vertebrae in clinical standard- and UHS-xB images were measured using a 1-cm
3 VOI., Results: The QSV of F3D images was underestimated even in large regions, whereas those of xB images were close to actual radioactivity concentration. The CV was similar or lower for xB images than that for F3D images but was not decreased with increasing acquisition time for both reconstruction images. The FWHM of xB images was lower than those of F3D images at all acquisition times. The mean SUV values from the first thoracic to fifth lumbar vertebrae for standard- and UHS-xB images were 6.73 ± 0.64 and 6.19 ± 0.87, respectively, showing a strong positive correlation., Conclusions: Results of this phantom study suggest that xB imaging can be obtained in only one-third of the acquisition time without compromising the image quality. The SUV of UHS-xB images can be similar to that of standard-xB images in terms of clinical interpretation., (© 2021. The Japanese Society of Nuclear Medicine.)- Published
- 2022
- Full Text
- View/download PDF
136. Design of a Learning Path Recommendation System Based on a Knowledge Graph
- Author
-
Liu, Chunhong, Zhang, Haoyang, Zhang, Jieyu, Zhang, Zhengling, and Yuan, Peiyan
- Abstract
Current learning platforms generally have problems such as fragmented knowledge, redundant information, and chaotic learning routes, which cannot meet learners' autonomous learning requirements. This paper designs a learning path recommendation system based on knowledge graphs by using the characteristics of knowledge graphs to structurally represent subject knowledge. The system uses the node centrality and node weight to expand the knowledge graph system, which can better express the structural relationship among knowledge. It applies the particle swarm fusion algorithm of multiple rounds of iterative simulated annealing to achieve the recommendation of learning paths. Furthermore, the system feeds back the students' learning situation to the teachers. Teachers check and fill in the gaps according to the performance of the learners in the teaching activities. Aiming at the weak links of students' knowledge points, the particle swarm intelligence algorithm is used to recommend learning paths and learning resources to fill in the gaps in a targeted manner.
- Published
- 2023
- Full Text
- View/download PDF
137. Artificial Intelligence in Intelligent Tutoring Systems toward Sustainable Education: A Systematic Review
- Author
-
Lin, Chien-Chang, Huang, Anna Y. Q., and Lu, Owen H. T.
- Abstract
Sustainable education is a crucial aspect of creating a sustainable future, yet it faces several key challenges, including inadequate infrastructure, limited resources, and a lack of awareness and engagement. Artificial intelligence (AI) has the potential to address these challenges and enhance sustainable education by improving access to quality education, creating personalized learning experiences, and supporting data-driven decision-making. One outcome of using AI and Information Technology (IT) systems in sustainable education is the ability to provide students with personalized learning experiences that cater to their unique learning styles and preferences. Additionally, AI systems can provide teachers with data-driven insights into student performance, emotions, and engagement levels, enabling them to tailor their teaching methods and approaches or provide assistance or intervention accordingly. However, the use of AI and IT systems in sustainable education also presents challenges, including issues related to privacy and data security, as well as potential biases in algorithms and machine learning models. Moreover, the deployment of these systems requires significant investments in technology and infrastructure, which can be a challenge for educators. In this review paper, we will provide different perspectives from educators and information technology solution architects to connect education and AI technology. The discussion areas include sustainable education concepts and challenges, technology coverage and outcomes, as well as future research directions. By addressing these challenges and pursuing further research, we can unlock the full potential of these technologies and support a more equitable and sustainable education system.
- Published
- 2023
- Full Text
- View/download PDF
138. An Operations Research-Based Teaching Unit for Grade 10: The ROAR Experience, Part I
- Author
-
Colajanni, Gabriella, Gobbi, Alessandro, Picchi, Marinella, Raffaele, Alice, and Taranto, Eugenia
- Abstract
We introduce "Ricerca Operativa Applicazioni Reali" (ROAR; in English, "Real Applications of Operations Research"), a three-year project for higher secondary schools. Its main aim is to improve students' interest, motivation, and skills related to Science, Technology, Engineering, and Mathematics disciplines by integrating mathematics and computer science through operations research. ROAR offers examples and problems closely connected with students' everyday life or with the industrial reality, balancing mathematical modeling and algorithmics. The project is composed of three teaching units, addressed to grades 10, 11, and 12. The implementation of the first teaching unit took place in Spring 2021 at the scientific high school IIS Antonietti in Iseo (Brescia, Italy). In particular, in this paper, we provide a full description of this first teaching unit in terms of objectives, prerequisites, topics and methods, organization of the lectures, and digital technologies used. Moreover, we analyze the feedback received from students and teachers involved in the experimentation, and we discuss advantages and disadvantages related to distance learning that we had to adopt because of the COVID-19 pandemic.
- Published
- 2023
- Full Text
- View/download PDF
139. Behavior Recognition of College Students Based on Improved Deep Learning Algorithm
- Author
-
Ning, Xiaoke
- Abstract
With the vigorous development of intelligent campus construction, great changes have taken place in the informatization of colleges and universities, from earlier digitization to intelligent development. In the teaching process, the analysis of students' classroom learning has also changed from manual observation to intelligent analysis. On this basis, this paper studies the behavior recognition of college students based on an improved deep learning algorithm. After a brief analysis of the research background of behavior recognition, a research framework for college students' behavior recognition is constructed. Finally, the authors designed an experiment to evaluate the accuracy of classroom student behavior recognition analysis. The results show that the improved deep-learning-based recognition of college students' behavior can improve recognition accuracy.
- Published
- 2023
- Full Text
- View/download PDF
140. Application of Machine Learning Technology in Classical Music Education
- Author
-
Wang, Dongfang
- Abstract
The goal is to promote the healthy and stable development of music education in China. The time-frequency sequence topology in the frequency domain can improve the effect of the convolution operation. Therefore, this paper applies the above algorithms to classical music education, including the recognition of classical instruments, the feature extraction and recognition of classical music, and the quality evaluation of classical music education. The quality of a music evaluation system can be judged by the correlation between its output and subjective evaluations: the higher the correlation, the better the music quality evaluation method. Relevant experiments show that DTW score alignment and end-to-end methods are more successful in extracting the features of classical music and more accurate in identifying classical instruments. The objective evaluation method of pronunciation teaching quality is more objective and accurate than the P.563-based music teaching quality evaluation.
- Published
- 2023
- Full Text
- View/download PDF
141. The Evaluation Algorithm of English Teaching Ability Based on Big Data Fuzzy K-Means Clustering
- Author
-
Lili Qin, Weixuan Zhong, and Hugh C. Davis
- Abstract
In response to the problem of inaccurate classification of big data information in traditional English teaching ability evaluation algorithms, this paper proposes an English teaching ability evaluation algorithm based on big data fuzzy K-means clustering. Firstly, the article establishes a constraint parameter index analysis model. Secondly, quantitative recursive analysis is used to evaluate the capabilities of big data information models and to achieve entropy feature extraction of the capability-constrained feature information. Finally, by integrating big data information fusion with the K-means clustering algorithm, the article achieves clustering and integration of the indicator parameters of English teaching ability, prepares corresponding teaching resource allocation plans, and evaluates English teaching ability. The experimental results show that using this method to evaluate English teaching ability provides good information fusion analysis capability and improves both the accuracy of teaching ability evaluation and the efficiency of teaching resource application.
- Published
- 2023
- Full Text
- View/download PDF
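The fuzzy K-means (fuzzy c-means) clustering step described in the abstract above can be sketched as alternating membership and centre updates; the interface, data, and fuzziness parameter m here are illustrative rather than the authors' configuration.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, n_iter=100, tol=1e-6, seed=0):
    """Fuzzy c-means sketch: returns soft memberships U (n x c) and centres.

    Each row of U sums to one; m > 1 controls fuzziness
    (m -> 1 approaches hard k-means assignments).
    """
    rng = np.random.default_rng(seed)
    n = len(X)
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)                    # valid soft memberships
    for _ in range(n_iter):
        Um = U ** m
        centres = (Um.T @ X) / Um.sum(axis=0)[:, None]   # membership-weighted means
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=-1) + 1e-12
        P = d ** (-2.0 / (m - 1.0))                      # standard FCM update
        U_new = P / P.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:                # converged
            U = U_new
            break
        U = U_new
    return U, centres
```

Hard cluster labels, when needed for tasks such as resource-allocation planning, follow from `U.argmax(axis=1)`.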
142. Application of a Short Video Caption Generation Algorithm in International Chinese Education and Teaching
- Author
-
Dai, Qianhui
- Abstract
With the continuous development of speech recognition technology, automatic subtitle generation has gradually attracted attention. However, the quality of short videos is uneven, and their cultural teaching is often one-sided, irregular, and unsystematic. In the self-media era, it is possible to apply short video subtitle generation algorithms to international Chinese education and teaching, although Chinese teachers should pay attention to the problems that can arise in self-media videos and adopt appropriate teaching strategies. This paper discusses the development of international Chinese education and teaching in the new media environment, including its characteristics, advantages, disadvantages, and existing problems. The short video subtitle generation algorithm provides a new way for international Chinese education and teaching, enhances the vitality of education, and expands educational channels.
- Published
- 2023
- Full Text
- View/download PDF
143. Avoiding the Digital Age is Hurting Research Efforts: A greater shift from paper records and physical assets is achievable.
- Author
-
HOLLAN, MIKE
- Subjects
DIGITAL technology ,ARTIFICIAL intelligence ,LIFE sciences ,AUTOMATIC data collection systems ,ELECTRONIC data interchange ,ELECTRONIC health records ,MACHINE learning ,DRUG development ,ALGORITHMS - Abstract
The article offers information on the importance of data in drug development and the life sciences industry. Topics include the use of new technologies like AI and machine learning for data collection and analysis, the persistence of paper-based processes in the industry, and challenges such as the "first-mile problem" in data collection and management.
- Published
- 2024
144. Rock-Paper-Scissors Play: Beyond the Win-Stay/Lose-Change Strategy.
- Author
-
Zhang, Hanshu, Moisan, Frederic, and Gonzalez, Cleotilde
- Subjects
COMPUTER algorithms ,ALGORITHMS ,COMPUTER engineering - Abstract
This research studied the strategies that players use in sequential adversarial games. We took the Rock-Paper-Scissors (RPS) game as an example and ran two experiments. The first experiment involved two humans, who played RPS together for 100 rounds. Importantly, our payoff design in the RPS allowed us to differentiate participants who used a random strategy from those who used a Nash strategy. We found that participants did not play in agreement with the Nash strategy; rather, their behavior was closer to random. Moreover, the analyses of the participants' sequential actions indicated heterogeneous cycle-based behaviors: some participants' actions were independent of their past outcomes, some followed the well-known win-stay/lose-change strategy, and others exhibited win-change/lose-stay behavior. To understand the sequential patterns of outcome-dependent actions, we designed probabilistic computer algorithms involving specific change actions (i.e., to downgrade or upgrade according to the immediate past outcome): the Win-Downgrade/Lose-Stay (WDLS) and Win-Stay/Lose-Upgrade (WSLU) strategies. Experiment 2 used these strategies against a human player. Our findings show that participants followed a win-stay strategy against the WDLS algorithm and a lose-change strategy against the WSLU algorithm, while they had difficulty exploiting the upgrade/downgrade direction, suggesting humans' limited ability to detect and counter the actions of the algorithm. Taken together, our two experiments showed a large diversity of sequential strategies, and the win-stay/lose-change strategy did not describe the majority of human players' dynamic behaviors in this adversarial situation. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
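The upgrade/downgrade cycle in entry 144 maps cleanly onto the RPS cycle rock → paper → scissors → rock. A minimal sketch of a WDLS-style opponent follows; the probability p of applying the rule and the tie-handling behavior are illustrative assumptions, not details from the paper.

```python
import random

ACTIONS = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}  # key beats value

def upgrade(action):
    """Move to the action that beats the previous action."""
    return next(a for a in ACTIONS if BEATS[a] == action)

def downgrade(action):
    """Move to the action the previous action beats."""
    return BEATS[action]

def wdls_move(prev_action, prev_outcome, p=0.9):
    """Win-Downgrade/Lose-Stay sketch: apply the rule with probability p,
    otherwise play uniformly at random (p and the tie rule are assumptions)."""
    if prev_action is None or random.random() > p:
        return random.choice(ACTIONS)
    if prev_outcome == "win":
        return downgrade(prev_action)   # downgrade after a win
    if prev_outcome == "lose":
        return prev_action              # stay after a loss
    return random.choice(ACTIONS)       # tie: no rule specified, randomize
```

A Win-Stay/Lose-Upgrade (WSLU) opponent would mirror this, returning `prev_action` after a win and `upgrade(prev_action)` after a loss.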
145. Zero-preserving imputation of single-cell RNA-seq data.
- Author
-
Linderman GC, Zhao J, Roulis M, Bielecki P, Flavell RA, Nadler B, and Kluger Y
- Subjects
- Animals, B-Lymphocytes cytology, B-Lymphocytes metabolism, Bronchi cytology, Bronchi metabolism, Datasets as Topic, Epithelial Cells cytology, Epithelial Cells metabolism, Humans, Killer Cells, Natural cytology, Killer Cells, Natural metabolism, Mice, Monocytes cytology, Monocytes metabolism, Primary Cell Culture, RNA metabolism, RNA-Seq, Single-Cell Analysis, T-Lymphocytes cytology, T-Lymphocytes metabolism, Algorithms, RNA genetics, Sequence Analysis, RNA statistics & numerical data
- Abstract
A key challenge in analyzing single cell RNA-sequencing data is the large number of false zeros, where genes actually expressed in a given cell are incorrectly measured as unexpressed. We present a method based on low-rank matrix approximation which imputes these values while preserving biologically non-expressed genes (true biological zeros) at zero expression levels. We provide theoretical justification for this denoising approach and demonstrate its advantages relative to other methods on simulated and biological datasets., (© 2022. The Author(s).)
- Published
- 2022
- Full Text
- View/download PDF
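The zero-preserving idea in entry 145 (low-rank reconstruction followed by a per-gene threshold that restores biological zeros) can be sketched as follows. This is a simplified illustration, not the authors' implementation: the rank and the threshold rule (the magnitude of the most negative reconstructed value per gene) are assumptions.

```python
import numpy as np

def zero_preserving_impute(X, k=5):
    """Sketch: rank-k SVD reconstruction of a cells x genes matrix, then
    per-gene thresholding so entries below the magnitude of the most negative
    reconstructed value in that gene's column are reset to zero (a simplified
    stand-in for the paper's zero-preserving rule)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    A = (U[:, :k] * s[:k]) @ Vt[:k]          # rank-k approximation
    thresh = np.abs(A.min(axis=0))           # per-gene (column) threshold
    A[A < thresh[None, :]] = 0.0             # restore biological zeros
    return A
```

The intuition: truly unexpressed genes reconstruct near zero (oscillating around it), so the most negative reconstructed value per gene bounds the noise band and everything inside it is set back to zero.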
146. A new method for roadheader pick arrangement based on meshing pick spatial position and rock cutting verification.
- Author
-
Zhang, Mengqi, Yan, Xianguo, and Qin, Guoqiang
- Subjects
GAUSSIAN distribution ,COMPRESSIVE strength ,PAPER arts ,CONSUMPTION (Economics) ,ALGORITHMS - Abstract
This paper proposes a cutting head optimization method based on meshing the spatial position of the picks. According to the expanded shape of the spatial mesh composed of four adjacent picks on the plane, a standard mesh shape analysis method can be established with mesh skewness, mesh symmetry, and mesh area ratio as the indicators. The traversal algorithm is used to calculate the theoretical meshing rate, pick rotation coefficient, and the variation of cutting load for longitudinal cutting heads with 2, 3, and 4 helices. The results show that the 3-helix longitudinal cutting head has better performance. By using the traversal result with the maximum theoretical meshing rate as the design parameter, the longitudinal cutting head CH51 with 51 picks was designed and analyzed. A prediction model of pick consumption is established based on cutting speed, direct rock cutting volume of each pick, pick rotation coefficient, uniaxial compressive strength, and the CERCHAR abrasivity index, and rock with normally distributed uniaxial compressive strength is used for the specific energy calculation. The artificial rock wall cutting test results show that the reduction in height loss suppresses the increase in pick equivalent loss caused by the increase in mass loss, and the pick consumption in this test is only 0.037–0.054 picks/m³. In addition, the correlation between the actual pick consumption and the prediction model, and the correlation between the actual cutting specific energy and the theoretical calculated value, are also analyzed. The research results show that the pick arrangement design method based on meshing pick tip spatial position can effectively reduce pick consumption and improve rock cutting performance. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
147. A Distributed Security SDN Cluster Architecture for Smart Grid Based on Blockchain Technology.
- Author
-
Xiong, Ao, Tian, Hongkang, He, Wenchen, Zhang, Jie, Meng, Huiping, Guo, Shaoyong, Wang, Xinyan, Wu, Xinyi, and Kadoch, Michel
- Subjects
BLOCKCHAINS ,SMART power grids ,TELECOMMUNICATION systems ,DENIAL of service attacks ,ALGORITHMS ,ELECTRONIC paper ,INFORMATION technology ,MULTICASTING (Computer networks) - Abstract
This paper proposes a smart grid distributed security architecture based on blockchain technology and an SDN cluster structure, referred to as the ClusterBlock model, which combines the advantages of two emerging technologies: blockchain and SDN. Blockchain technology enables distributed peer-to-peer networks in which untrusted nodes can interact in a trusted manner. At the same time, this article adopts a distributed SDN controller cluster design to avoid a single point of failure and balance the load between equipment and the controller. A cluster head is selected in each SDN cluster and used as a blockchain node to construct an SDN cluster-head blockchain. By combining blockchain technology, the security and privacy of the SDN communication network can be enhanced. This paper also designs a distributed control strategy and a network attack detection algorithm based on blockchain consensus and introduces the Jaccard similarity coefficient to detect network attacks. Finally, this paper evaluates the ClusterBlock model and an existing model based on the OpenFlow protocol through simulation experiments and compares their security performance. The evaluation results show that the ClusterBlock model has more stable bandwidth and stronger security performance in the face of DDoS attacks of the same scale. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
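The Jaccard similarity coefficient used for attack detection in entry 147 is simply |A ∩ B| / |A ∪ B| over two sets of traffic features. A minimal sketch follows; the use of source-address sets per time window is an illustrative assumption about how such features might be compared, not the paper's exact feature definition.

```python
def jaccard(a, b):
    """Jaccard similarity coefficient: |A ∩ B| / |A ∪ B|."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0                      # convention: two empty sets are identical
    return len(a & b) / len(a | b)

# hypothetical flows: source addresses seen in two time windows; a sharp
# drop in similarity between windows could flag an anomaly such as a DDoS flood
baseline = {"10.0.0.1", "10.0.0.2", "10.0.0.3"}
current = {"10.0.0.1", "198.51.100.7", "198.51.100.8", "198.51.100.9"}
score = jaccard(baseline, current)      # 1 shared address out of 6 total
```

A detector would compare `score` against a threshold learned from normal traffic; the threshold choice is outside this sketch.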
148. Two-stage algorithms for visually exploring spatio-temporal clustering of avian influenza virus outbreaks in poultry farms.
- Author
-
Wu HI and Chao DY
- Subjects
- Animals, Influenza in Birds diagnosis, Influenza in Birds virology, Poultry Diseases diagnosis, Poultry Diseases virology, Taiwan, Time Factors, Algorithms, Animal Husbandry, Influenza A Virus, H5N2 Subtype pathogenicity, Influenza A Virus, H5N8 Subtype pathogenicity, Influenza in Birds transmission, Poultry virology, Poultry Diseases transmission, Space-Time Clustering
- Abstract
The development of visual tools for the timely identification of spatio-temporal clusters will assist in implementing control measures to prevent further damage. From January 2015 to June 2020, a total of 1463 avian influenza outbreak farms were detected in Taiwan and further confirmed to be affected by highly pathogenic avian influenza subtype H5Nx. In this study, we adopted two common spatio-temporal clustering methods, the Knox test and scan statistics, with visual tools to explore the dynamic changes of clustering patterns. Since most (68.6%) of the outbreak farms were detected in 2015, only the data from 2015 were used in this study. The first two-stage algorithm performs the Knox test, which established a threshold of 7 days and identified 11 major clusters in the six counties of southwestern Taiwan, followed by the standard deviational ellipse (SDE) method implemented on each cluster to reveal the transmission direction. The second algorithm applies scan likelihood ratio statistics followed by the AGC index to visualize the dynamic changes of the local aggregation pattern of disease clusters at the regional level. Compared to the one-stage aggregation approach, Knox-based and AGC mapping were more sensitive to small-scale spatio-temporal clustering., (© 2021. The Author(s).)
- Published
- 2021
- Full Text
- View/download PDF
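The Knox test in entry 148 counts case pairs that are close in both space and time. A minimal sketch of the statistic follows; the coordinates and the specific thresholds here are illustrative (the study itself used a 7-day time threshold), and in practice significance is assessed by comparing the observed count against counts from Monte Carlo permutations of the event times.

```python
import numpy as np

def knox_statistic(coords, times, space_thresh, time_thresh):
    """Knox statistic sketch: count event pairs that are within space_thresh
    of each other AND within time_thresh of each other."""
    coords = np.asarray(coords, dtype=float)
    times = np.asarray(times, dtype=float)
    n = len(times)
    count = 0
    for i in range(n):
        for j in range(i + 1, n):
            dx, dy = coords[i] - coords[j]
            close_in_space = np.hypot(dx, dy) <= space_thresh
            close_in_time = abs(times[i] - times[j]) <= time_thresh
            if close_in_space and close_in_time:
                count += 1
    return count
```

An excess of such pairs relative to the permutation distribution indicates spatio-temporal interaction, i.e., outbreaks clustering in both dimensions at once.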
149. Comparative Dissemination of Aerosol and Splatter Using Suction Device during Ultrasonic Scaling: A Pilot Study.
- Author
-
Engsomboon, Nutthawadee, Pachimsawat, Praewpat, and Thanathornwong, Bhornsawan
- Subjects
AEROSOLS ,ULTRASONIC equipment ,CARDBOARD ,DENTAL equipment ,PILOT projects ,MASS spectrometers ,DENTAL scaling - Abstract
Objective: This study compared the aerosol and splatter diameters and count numbers produced with a dental mouth prop with a suction holder device versus a saliva ejector during ultrasonic scaling in a clinical setting. Methodology: Fluorescein dye was placed in the dental equipment irrigation reservoirs, a mannequin was used, and an ultrasonic scaler was employed. The procedures were performed three times per device. Upper and bottom board papers were placed on the laboratory platform. All processes used an ultrasonic scaler to generate aerosol and splatter. A dental mouth prop with a suction holder and a saliva ejector were each tested. Photographic analysis was used to examine the fluorescein samples, followed by image processing in Python and assessment of the diameter and count number. For device comparison, an independent t-test was used. Results: When the dental mouth prop with a suction holder was used, the aerosol particles retained on the upper board paper had a mean diameter of 1080 ± 662 µm (mean ± SD), compared with 1230 ± 1020 µm on the bottom board paper. When the saliva ejector was used, the diameter of the aerosol on the upper board paper was 900 ± 580 µm, and the diameter on the bottom board paper was 1000 ± 756 µm. Conclusion: There was a significant difference in aerosol and splatter particle diameter and count number between the dental mouth prop with a suction holder and the saliva ejector (p < 0.05). Furthermore, the results revealed a statistically significant difference between the two groups on the upper and bottom board papers. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
150. Tools and algorithms for the construction and analysis of systems: a special issue for TACAS 2020.
- Author
-
Biere, Armin and Parker, David
- Subjects
ALGORITHMS ,SOFTWARE verification ,TECHNOLOGY transfer ,SOFTWARE maintenance ,SOFTWARE engineering - Abstract
This special issue of Software Tools for Technology Transfer comprises extended versions of selected papers from the 26th edition of the International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS 2020). The focus of this conference series is tools and algorithms for the rigorous analysis of software and hardware systems, and the papers in this special issue cover the spectrum of current work in this field. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF