Search Results (12 results)
2. Profit maximization for security-aware task offloading in edge-cloud environment.
- Author: Li, Zhongjin, Chang, Victor, Hu, Haiyang, Yu, Dongjin, Ge, Jidong, and Huang, Binbin
- Subjects: PROFIT maximization; ARTIFICIAL intelligence; ALGORITHMS; DATA security; MOBILE apps; CLOUD computing
- Abstract
Mobile devices (MDs) and their applications enjoy extensive popularity and attract significant attention. Mobile applications, especially artificial intelligence (AI) applications, require powerful computational resources. Hence, running all AI applications on a single MD introduces high energy consumption and application delay, as the device has limited battery capacity and computational resources. Fortunately, the emerging edge-cloud computing (ECC) architecture pushes computation resources to both the network edge and the remote cloud to cope with challenging AI applications. Although the advantages of ECC greatly benefit various mobile applications, data security remains an important open issue in this scenario that has not been well studied. This paper focuses on the profit maximization (PM) problem for security-aware task offloading in an ECC environment, i.e., given tasks from MDs with different service demands, edge nodes must decide whether each task is processed on the edge node or the remote cloud, with a security guarantee. Specifically, we first construct a security model to measure the time overhead of each task under various scenarios. We then formulate the PM problem by jointly considering the security demands and deadline constraints of tasks. Finally, we propose a genetic algorithm-based PM (GA-PM) algorithm, whose coding strategy considers the task execution location and execution order. The crossover and mutation operations are implemented based on this coding strategy. Extensive simulation experiments with varying parameters demonstrate that GA-PM achieves better performance than all comparison algorithms.
• The security model is built to measure the execution time of tasks under different parameters.
• A genetic algorithm (GA)-based PM algorithm is proposed to implement task offloading and obtain optimal profit.
• A coding strategy is devised by considering the tasks' execution location and execution order.
• Extensive simulation results demonstrate that the GA-PM algorithm achieves the optimal profit.
[ABSTRACT FROM AUTHOR]
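The coding strategy described above (each gene pairs a task's execution location with a priority that induces the execution order) can be sketched as a minimal GA; all task times, deadlines, profits, and GA hyperparameters below are invented placeholders, not the paper's model:

```python
import random

# Hypothetical sketch: gene i = (location, priority) for task i,
# where location 0 = edge, 1 = cloud; priorities sort tasks into an
# execution order. Times/deadlines/profits are made-up.
TASKS = 6
random.seed(0)
exec_time = {0: [2, 3, 1, 4, 2, 3], 1: [1, 1, 1, 2, 1, 1]}  # per location
deadline = [4, 6, 3, 8, 5, 6]
profit = [5, 8, 3, 10, 6, 7]

def random_chromosome():
    return [(random.randint(0, 1), random.random()) for _ in range(TASKS)]

def fitness(ch):
    # Run tasks in priority order on a shared timeline per location;
    # a task earns its profit only if it finishes before its deadline.
    clock = {0: 0, 1: 0}
    total = 0
    for i in sorted(range(TASKS), key=lambda i: ch[i][1]):
        loc = ch[i][0]
        clock[loc] += exec_time[loc][i]
        if clock[loc] <= deadline[i]:
            total += profit[i]
    return total

def crossover(a, b):
    cut = random.randint(1, TASKS - 1)   # single-point crossover on genes
    return a[:cut] + b[cut:]

def mutate(ch, rate=0.2):
    # Flip a task's location (and redraw its priority) with probability rate.
    return [((1 - loc, random.random()) if random.random() < rate else (loc, pr))
            for loc, pr in ch]

pop = [random_chromosome() for _ in range(20)]
for _ in range(30):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]
    pop = elite + [mutate(crossover(*random.sample(elite, 2))) for _ in range(10)]
best = max(pop, key=fitness)
```

Because both the offloading decision and the ordering live in the chromosome, crossover and mutation explore location and schedule jointly, which is the point of the paper's encoding.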
- Published: 2021
3. The soft actor–critic algorithm for automatic mode-locked fiber lasers.
- Author: Li, Jin, Chang, Kun, Liu, Congcong, Ning, Yu, Ma, Yuansheng, He, Jiangyong, Liu, Yange, and Wang, Zhi
- Subjects: MODE-locked lasers; FIBER lasers; REINFORCEMENT learning; DEEP reinforcement learning; ALGORITHMS; ARTIFICIAL intelligence
- Abstract
With the development of artificial intelligence, deep reinforcement learning (DRL) has been applied to fiber lasers. In this paper, an intelligent passively mode-locked fiber laser (PMLFL) controlled by the Soft Actor–Critic (SAC) algorithm is reported. SAC is a DRL algorithm with a stochastic policy that combines the Actor–Critic framework with maximum-entropy regularization. The agent learns the logic of mode-locking by outputting actions and observing the laser's states. The maximum-entropy objective encourages exploration, so multiple reward-maximizing policies can be learned and robustness is enhanced accordingly. The results show that the logic learned by the agent is similar to that of a human operator. In 80 mode-locking tests starting from random initial states of polarization, 37 explorations are needed on average, and the frequency of reaching the mode-locked state within 60 explorations exceeds 0.8. Furthermore, the laser system can be monitored and controlled remotely, which expands its application scenarios.
• A passively mode-locked fiber laser based on the Soft Actor–Critic (SAC) algorithm is proposed.
• The SAC algorithm is more exploratory and thus more robust.
• The laser system based on the SAC algorithm can resist environmental changes and achieve mode-locking.
• The experimental results demonstrate the high stability of the SAC algorithm.
[ABSTRACT FROM AUTHOR]
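The maximum-entropy idea behind SAC can be illustrated on a toy one-step problem (this is not the paper's laser controller; the rewards and the temperature `alpha` are invented): the soft-optimal policy pi(a) proportional to exp(Q(a)/alpha) keeps probability on every near-optimal action instead of collapsing to a single argmax, which is what makes the learned behavior robust.

```python
import math

# Toy illustration of the maximum-entropy objective behind SAC:
# the agent maximizes reward plus alpha * entropy, so two nearly
# equally good actions both retain substantial probability.
rewards = [1.0, 0.95, 0.2]   # two near-equally good actions, one poor
alpha = 0.1                  # entropy temperature (assumed value)

def softmax(qs):
    m = max(qs)
    e = [math.exp(q - m) for q in qs]
    s = sum(e)
    return [x / s for x in e]

# One-step soft-optimal policy: pi(a) proportional to exp(Q(a)/alpha).
soft_pi = softmax([r / alpha for r in rewards])
entropy = -sum(p * math.log(p) for p in soft_pi if p > 0)
```

A greedy argmax policy would put all mass on the first action; the entropy-regularized policy spreads mass over both good actions, so a perturbation that degrades one of them does not break the agent.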
- Published: 2023
4. The methodology of studying fairness perceptions in Artificial Intelligence: Contrasting CHI and FAccT.
- Author: van Berkel, Niels, Sarsenbayeva, Zhanna, and Goncalves, Jorge
- Subjects: ARTIFICIAL intelligence; FAIRNESS; COMMUNITIES; HUMAN-computer interaction; SCIENTIFIC community; WESTERN countries
- Abstract
The topic of algorithmic fairness is of increasing importance to the Human–Computer Interaction research community following accumulating concerns regarding the use and deployment of Artificial Intelligence-based systems. How we conduct research on algorithmic fairness directly influences our inferences and conclusions about it. To better understand the methodological decisions of studies focused on people's perceptions of algorithmic fairness, we systematically analysed relevant papers from the CHI and FAccT conferences. We identified 200 relevant papers published between 1993 and 2022 and assessed their study design, participant sample, and the geographical location of participants and authors. Our results highlight that studies are predominantly cross-sectional, cover a wide range of participant roles, and that both authors and participants are primarily from the United States. Based on these findings, we reflect on potential pitfalls and shortcomings in how the community studies algorithmic fairness.
• Mapping of CHI and FAccT papers on algorithmic fairness published 1993–2022.
• Over one-third of papers miss details on participant locality and compensation.
• Algorithmic fairness research is largely restricted to the US and other Western countries.
• Limited cross-country collaboration; most study samples are from a single country.
• Remote studies are most common; longitudinal studies are relatively rare.
[ABSTRACT FROM AUTHOR]
- Published: 2023
5. Visual video evaluation association modeling based on chaotic pseudo-random multi-layer compressed sensing for visual privacy-protected keyframe extraction.
- Author: Liu, Jixin, Li, Yicong, Han, Guang, and Sun, Ning
- Subjects: ARTIFICIAL intelligence; VIDEO processing; OPTICAL information processing; PRIVACY; ALGORITHMS
- Abstract
In today's society, artificial intelligence processing technology offers convenient video monitoring but also raises the risk of privacy leakage. In principle, the data used by intelligent video processing methods may directly convey visual information containing private content. To address this problem, this paper uses a multi-layer visual privacy-protected (VPP) coding method to blur private content in the video at the visual level while avoiding, as much as possible, the loss of important visual features contained in the video; this guarantees the quality of the subsequent keyframe extraction step. A visual evaluation algorithm is then proposed for assessing the quality of VPP-encoded video privacy protection, and experiments show that its results are consistent with those of subjective evaluation. In addition, for VPP-encoded video, we propose an unsupervised two-layer clustering keyframe extraction method with a corresponding performance evaluation index. Finally, an association model is established to balance privacy protection quality and keyframe extraction performance. [ABSTRACT FROM AUTHOR]
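One plausible reading of an unsupervised two-layer clustering keyframe extractor can be sketched with invented 2-D stand-ins for frame descriptors (the paper's actual features and layer definitions may differ): a first layer clusters frames into scene groups, and a second layer selects the frame nearest each group centroid as that scene's keyframe.

```python
import random

random.seed(2)

def dist2(a, b):
    # Squared Euclidean distance between two feature vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mean(pts):
    n = len(pts)
    return tuple(sum(p[i] for p in pts) / n for i in range(len(pts[0])))

def kmeans(points, k, iters=20):
    # Plain Lloyd's k-means; empty clusters keep their old center.
    centers = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda c: dist2(p, centers[c]))
            clusters[j].append(p)
        centers = [mean(c) if c else centers[i] for i, c in enumerate(clusters)]
    return centers, clusters

# Toy "frames": three visual scenes, ten frames each (invented data).
frames = [(random.gauss(cx, 0.1), random.gauss(cy, 0.1))
          for cx, cy in [(0, 0), (5, 5), (10, 0)] for _ in range(10)]

# Layer 1: coarse scene clusters. Layer 2: keyframe = frame nearest centroid.
centers, clusters = kmeans(frames, 3)
keyframes = [min(c, key=lambda p: dist2(p, centers[i]))
             for i, c in enumerate(clusters) if c]
```

Since the clustering only needs distances between descriptors, it works the same on VPP-encoded features as on raw frames, which is why privacy coding and keyframe extraction can be balanced against each other.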
- Published: 2023
6. 3D dense reconstruction from 2D video sequence via 3D geometric segmentation
- Author: Han, Bing, Paulson, Christopher, and Wu, Dapeng
- Subjects: IMAGE reconstruction; COMPUTER vision; ARTIFICIAL intelligence; IMAGE processing; VIDEOS; GEOMETRY; ALGORITHMS; VISUAL communication
- Abstract
3D reconstruction is a major problem in computer vision. This paper considers the problem of reconstructing 3D structures from a given 2D video sequence. The problem is challenging because it is difficult to identify the trajectory of each object point/pixel over time. Traditional stereo and volumetric 3D reconstruction methods suffer from the blank-wall problem, and the estimated dense depth map is not smooth, resulting in the loss of actual geometric structures such as planes. To retain the geometric structures embedded in the 3D scene, this paper proposes a novel surface-fitting approach for 3D dense reconstruction. Specifically, we develop an expanded deterministic annealing algorithm to decompose a 3D point cloud into multiple geometric structures and estimate the parameters of each structure. In this paper we consider only plane structures, but our methodology can be extended to other parametric geometric structures such as spheres, cylinders, and cones. Experimental results show that the new approach is able to segment a 3D point cloud into appropriate geometric structures and generate an accurate 3D dense depth map. [Copyright Elsevier]
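The deterministic-annealing decomposition can be illustrated in a 2-D analog, with lines standing in for planes (the data, initial guesses, and cooling schedule below are all invented, and the paper's "expanded" variant is not reproduced): points are softly assigned to model hypotheses at temperature T, each model is refit by weighted least squares, and T is lowered until the soft assignments harden into a segmentation.

```python
import math

# Noiseless points drawn from two lines: y = 2x + 1 and y = -x + 20.
pts = [(x, 2 * x + 1) for x in range(10)] + [(x, -x + 20) for x in range(10)]
# Rough initial (slope, intercept) guesses for the two line hypotheses.
models = [(2.5, 0.0), (-1.5, 18.0)]

def residual(p, m):
    return (p[1] - (m[0] * p[0] + m[1])) ** 2

T = 50.0
while T > 0.01:
    # E-step: soft responsibilities via a Gibbs distribution at temperature T.
    W = []
    for p in pts:
        e = [math.exp(-residual(p, m) / T) for m in models]
        s = sum(e) or 1.0
        W.append([x / s for x in e])
    # M-step: weighted least-squares refit of each line hypothesis.
    new = []
    for j, m in enumerate(models):
        sw = sum(W[i][j] for i in range(len(pts)))
        sx = sum(W[i][j] * p[0] for i, p in enumerate(pts))
        sy = sum(W[i][j] * p[1] for i, p in enumerate(pts))
        sxx = sum(W[i][j] * p[0] ** 2 for i, p in enumerate(pts))
        sxy = sum(W[i][j] * p[0] * p[1] for i, p in enumerate(pts))
        det = sw * sxx - sx * sx
        if abs(det) > 1e-9:
            new.append(((sw * sxy - sx * sy) / det, (sxx * sy - sx * sxy) / det))
        else:
            new.append(m)
    models = new
    T *= 0.7   # annealing schedule (assumed cooling rate)
```

At high T every point influences every model (avoiding premature hard commitments); as T falls, each point locks onto the structure it truly belongs to, recovering the two lines exactly on this noiseless data.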
- Published: 2011
7. Intake monitoring in free-living conditions: Overview and lessons we have learned.
- Author: Diou, Christos, Kyritsis, Konstantinos, Papapanagiotou, Vasileios, and Sarafis, Ioannis
- Subjects: ARTIFICIAL intelligence; MACHINE learning; VIDEO recording; FOOD habits; BEHAVIORAL research; SMARTWATCHES; MEALS; ALGORITHMS; DIET
- Abstract
The progress in artificial intelligence and machine learning algorithms over the past decade has enabled the development of new methods for the objective measurement of eating, including both the measurement of eating episodes and the measurement of in-meal eating behavior. These allow the study of eating behavior outside the laboratory in free-living conditions, without the need for video recordings and laborious manual annotations. In this paper, we present a high-level overview of our recent work on intake monitoring using a smartwatch, as well as methods using an in-ear microphone. We also present evaluation results of these methods on challenging, real-world datasets. Furthermore, we discuss use-cases of such intake monitoring tools for advancing research in eating behavior, for improving dietary monitoring, and for developing evidence-based health policies. Our goal is to inform researchers and users of intake monitoring methods regarding (i) the development of new methods based on commercially available devices, (ii) what to expect in terms of effectiveness, and (iii) how these methods can be used in research as well as in practical applications. [ABSTRACT FROM AUTHOR]
- Published: 2022
8. Branching interval algebra: An almost complete picture.
- Author: Bertagnon, A., Gavanelli, M., Passantino, A., Sciavicco, G., and Trevisani, S.
- Subjects: ALGEBRA; PROBLEM solving; ARTIFICIAL intelligence; ALGORITHMS; BRANCHING processes
- Abstract
Branching Algebra is the natural branching-time generalization of Allen's Interval Algebra. As in the linear case, the consistency problem for Branching Algebra is NP-hard. Branching Algebra has many potential applications in different areas of Artificial Intelligence; therefore, being able to efficiently solve classical problems expressed in Branching Algebra is very important. This can be achieved in two steps: first, by identifying expressive yet tractable fragments of the whole algebra, and, second, by using such fragments to boost the performance of a backtracking algorithm for the whole language. In this paper we study the properties of several such fragments, from both the algebraic and the computational points of view, and give an almost complete picture of the tractable and non-tractable fragments of Branching Algebra. [ABSTRACT FROM AUTHOR]
- Published: 2021
9. Effective learning in the presence of adaptive counterparts
- Author: Burkov, Andriy and Chaib-draa, Brahim
- Subjects: ADAPTIVE computing systems; ALGORITHMS; INTELLIGENT agents; MACHINE learning; MATRICES (Mathematics); ARTIFICIAL intelligence
- Abstract
Adaptive learning algorithms (ALAs) are an important class of agents that learn the utilities of their strategies jointly with maintaining beliefs about their counterparts' future actions. In this paper, we propose an approach to learning in the presence of adaptive counterparts. Our Q-learning-based algorithm, called Adaptive Dynamics Learner (ADL), assigns Q-values to fixed-length interaction histories. This makes it capable of exploiting the strategy-update dynamics of adaptive learners. By doing so, ADL usually obtains higher utilities than equilibrium solutions. We tested our algorithm on a substantial, representative set of well-known and demonstrative matrix games. We observed that ADL is highly effective in the presence of such ALAs as Adaptive Play Q-learning, Infinitesimal Gradient Ascent, Policy Hill-Climbing, and Fictitious Play Q-learning. Further, in self-play ADL usually converges to a Pareto-efficient average utility. [Copyright Elsevier]
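The core ADL idea (indexing Q-values by a fixed-length interaction history so that an adaptive opponent's strategy-update dynamics become a predictable part of the state) can be sketched in a toy repeated game against a "copycat" opponent; the payoffs and the opponent are illustrative, not from the paper:

```python
import random

# Sketch: the learner's "state" is the previous joint action (history of
# length 1). The opponent is adaptive: it copies the learner's last move.
# The learner scores 1 for mismatching the opponent, so the optimal policy
# is to alternate actions -- discoverable only via history-indexed Q-values.
random.seed(3)
Q = {}                       # (history, action) -> value
alpha, gamma, eps = 0.2, 0.9, 0.1

def payoff(me, opp):
    return 1.0 if me != opp else 0.0

hist = (0, 0)                # (my last action, opponent's last action)
for step in range(5000):
    eps_t = max(0.01, eps * (1 - step / 5000))   # decaying exploration
    if random.random() < eps_t:
        a = random.randint(0, 1)
    else:
        a = max((0, 1), key=lambda x: Q.get((hist, x), 0.0))
    opp = hist[0]            # adaptive opponent: copies my previous action
    r = payoff(a, opp)
    nxt = (a, opp)
    best_next = max(Q.get((nxt, x), 0.0) for x in (0, 1))
    Q[(hist, a)] = Q.get((hist, a), 0.0) + alpha * (
        r + gamma * best_next - Q.get((hist, a), 0.0))
    hist = nxt

# Greedy evaluation: the learned policy should alternate and score ~1/step.
total = 0.0
hist = (0, 0)
for _ in range(100):
    a = max((0, 1), key=lambda x: Q.get((hist, x), 0.0))
    opp = hist[0]
    total += payoff(a, opp)
    hist = (a, opp)
```

A memoryless Q-learner cannot represent "alternate", since its best action depends on what it just played; keying Q on the interaction history is exactly what lets the learner exploit the opponent's update rule.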
- Published: 2009
10. The value of agreement: a new boosting algorithm
- Author: Leskes, Boaz and Torenvliet, Leen
- Subjects: COGNITIVE learning; MACHINE learning; ARTIFICIAL intelligence; ALGORITHMS
- Abstract
In the past few years, unlabeled examples and their potential advantage have received a lot of attention. In this paper a new boosting algorithm is presented in which unlabeled examples are used to enforce agreement between several different learning algorithms. The learning algorithms not only learn from the given training set but are supposed to do so while agreeing on the unlabeled examples. Similar ideas have been proposed before (for example, the Co-Training algorithm by Mitchell and Blum), but without a proof or under strong assumptions. In our setting, it is only assumed that all learning algorithms are equally adequate for the task. A new generalization bound is presented in which the use of unlabeled examples yields a better ratio between training-set size and the resulting classifier's quality, thus reducing the number of labeled examples necessary to achieve it. The extent of this improvement depends on the diversity of the learners: a more diverse group of learners results in a larger improvement, whereas using two copies of a single algorithm gives no advantage at all. As a proof of concept, the algorithm, named Agreement Boost, is applied to two test problems. In both cases, using Agreement Boost results in up to a 40% reduction in the number of labeled examples. [Copyright Elsevier]
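The agreement principle can be sketched as an objective (this is only the idea, not the paper's actual boosting loop; the data and the penalty weight `lam` are invented): two threshold classifiers on different feature views are chosen to minimize labeled error plus a penalty for disagreeing on unlabeled points, so the unlabeled data constrains which hypothesis pairs are acceptable.

```python
# Two-view toy data: feature 0 and feature 1 each separate the classes
# (feature 1 with reversed polarity). Labeled and unlabeled sets are invented.
labeled = [((1.0, 5.0), 0), ((2.0, 6.0), 0), ((7.0, 1.0), 1), ((8.0, 2.0), 1)]
unlabeled = [(1.5, 5.5), (7.5, 1.5), (2.5, 6.5), (6.5, 0.5)]
lam = 0.5   # weight of the disagreement penalty (assumed)

def h(thresh, feat, x, flip):
    # Threshold classifier on one feature, optionally with flipped polarity.
    v = 1 if x[feat] > thresh else 0
    return 1 - v if flip else v

def objective(t0, t1, flip1):
    # Labeled error of both classifiers plus lam * disagreement on unlabeled.
    err = sum(h(t0, 0, x, False) != y for x, y in labeled)
    err += sum(h(t1, 1, x, flip1) != y for x, y in labeled)
    dis = sum(h(t0, 0, x, False) != h(t1, 1, x, flip1) for x in unlabeled)
    return err + lam * dis

# Exhaustive search over integer thresholds and the second view's polarity.
candidates = [(t0, t1, f) for t0 in range(10) for t1 in range(10)
              for f in (False, True)]
best = min(candidates, key=lambda c: objective(*c))
```

With two copies of the same classifier the disagreement term is identically zero and adds nothing, which mirrors the paper's point that the improvement scales with learner diversity.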
- Published: 2008
11. The processing of verbs and nouns in neural networks: Insights from synthetic brain imaging
- Author: Cangelosi, Angelo and Parisi, Domenico
- Subjects: COGNITIVE neuroscience; BIOLOGICAL neural networks; ARTIFICIAL intelligence; ALGORITHMS
- Abstract
The paper presents a computational model of language in which linguistic abilities evolve in organisms that interact with an environment. Each individual's behavior is controlled by a neural network, and we study the consequences for the network's internal functional organization of learning to process different classes of words. Agents are selected for reproduction according to their ability to manipulate objects and to understand nouns (objects' names) and verbs (manipulation tasks). The weights of the agents' neural networks are evolved using a genetic algorithm. Synthetic brain imaging techniques are then used to examine the functional organization of the neural networks. Results show that nouns produce more integrated neural activity in the sensory-processing hidden layer, while verbs produce more integrated synaptic activity in the layer where sensory information is integrated with proprioceptive input. These findings are qualitatively compared with human brain imaging data indicating that nouns more strongly activate the posterior areas of the brain related to sensory and associative processing, while verbs more strongly activate the anterior motor areas. [Copyright Elsevier]
- Published: 2004
12. Suppression of impulse noise in MR images using artificial intelligent based neuro-fuzzy adaptive median filter
- Author: Toprak, Abdullah, Siraç Özerdem, Mehmet, and Güler, İnan
- Subjects: MAGNETIC resonance imaging; ARTIFICIAL intelligence; FUZZY systems; ALGORITHMS
- Abstract
This paper presents a new artificial-intelligence-based neuro-fuzzy rule-based adaptive median filter for removing heavy impulse noise. Since the filter is rule-based, it is called the neuro-fuzzy rule-based adaptive median (NFRBAM) filter. The NFRBAM filter is an improved version of the switch-mode fuzzy adaptive median filter (SMFAMF) and is presented for the purpose of reducing noise in images corrupted with additive impulse noise. The NFRBAM filter consists of a decision unit and three different types of filters. In the decision unit, the noisy input image is directed to the proper filter according to the noise density. A neuro-fuzzy rule-based approach is used in both the decision and filtering parts. For the neural component, a multilayer perceptron (MLP) trained with the backpropagation (BP) algorithm is used to detect noise and remove heavy impulse noise from corrupted MR images. For the fuzzy component, a bell-shaped membership function is employed to obtain better results. Experimental results indicate that the proposed filter can be improved by adding fuzzy rules to handle more heavily corrupted images, and that it preserves image details better than SMFAMF. [Copyright Elsevier]
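The classic adaptive median core that NFRBAM builds on can be sketched as follows (the neuro-fuzzy decision unit and MLP noise detector are omitted; the toy image is invented): the window around each pixel grows until its median is not itself an impulse extreme, and the pixel is replaced only if it looks like an impulse.

```python
import statistics

def adaptive_median(img, max_win=7):
    # img: 2-D list of gray levels. For each pixel, grow the window until the
    # window median lies strictly between the window min and max (i.e. the
    # median is not an impulse), then replace the pixel only if the pixel
    # itself is an impulse extreme.
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(w):
            win = 3
            while win <= max_win:
                r = win // 2
                vals = [img[j][i]
                        for j in range(max(0, y - r), min(h, y + r + 1))
                        for i in range(max(0, x - r), min(w, x + r + 1))]
                med, lo, hi = statistics.median(vals), min(vals), max(vals)
                if lo < med < hi:                  # median is reliable
                    if not (lo < img[y][x] < hi):  # pixel is an impulse
                        out[y][x] = med
                    break
                win += 2                           # enlarge window and retry
    return out

noisy = [[100, 100, 100],
         [100, 255, 100],   # salt impulse in the center
         [100, 100,   0]]   # pepper impulse in the corner
clean = adaptive_median(noisy)
```

NFRBAM's contribution sits on top of this: instead of one fixed filter, a neuro-fuzzy decision unit estimates the noise density and routes the image to one of three filters, which is why it degrades more gracefully at high corruption rates than a plain adaptive median.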
- Published: 2008
Discovery Service for Jio Institute Digital Library