Model-based prioritization for acquiring protection
- Publication Year :
- 2021
Abstract
- Protection, or the mitigation of harm, often involves the capacity to prospectively plan the actions needed to combat a threat. It remains unclear what computational architecture underlies decisions involving protection, and whether these decisions differ from other positive prospective actions. Here we examine the effects of valence and context by comparing protection to reward, which is also positively valenced but occurs in a different context, and to punishment, which likewise occurs in an aversive context but differs in valence. We applied computational modeling across three independent studies (total N = 600), using five iterations of a ‘two-step’ behavioral task, to examine model-based reinforcement learning for protection, reward, and punishment in humans. Decisions motivated by acquiring safety via protection evoked a higher degree of model-based control than acquiring reward or avoiding punishment, with no significant differences in learning rate. The context-valence asymmetry characteristic of protection increased deployment of flexible decision strategies, suggesting that model-based control depends on the context in which outcomes are encountered as well as on the valence of the outcome.
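The hybrid model-based/model-free learner commonly fit to two-step tasks can be sketched as follows. This is an illustrative example only, not the authors' actual model: the parameter values, class name, and the 70/30 transition structure are assumptions drawn from the standard two-step task design, and the key quantity corresponding to "degree of model-based control" is the weight `w` mixing model-based and model-free values.

```python
import math

# Illustrative sketch of a hybrid learner for a two-step task.
# Parameter names and values are assumptions, not the paper's fitted model.
class HybridAgent:
    def __init__(self, alpha=0.3, w=0.6, beta=5.0):
        self.alpha = alpha  # learning rate
        self.w = w          # model-based weight (0 = pure model-free)
        self.beta = beta    # softmax inverse temperature
        self.q_stage1 = [0.0, 0.0]                 # model-free values, stage 1
        self.q_stage2 = [[0.0, 0.0], [0.0, 0.0]]   # values per second-stage state
        # assumed known transition model: action 0 -> state 0 (70%), state 1 (30%)
        self.trans = [[0.7, 0.3], [0.3, 0.7]]

    def q_model_based(self, action):
        # expected best second-stage value under the transition model
        return sum(p * max(self.q_stage2[s])
                   for s, p in enumerate(self.trans[action]))

    def choice_probs(self):
        # softmax over the weighted mixture of model-based and model-free values
        q = [self.w * self.q_model_based(a) + (1 - self.w) * self.q_stage1[a]
             for a in range(2)]
        exps = [math.exp(self.beta * v) for v in q]
        z = sum(exps)
        return [e / z for e in exps]

    def update(self, action, state, action2, reward):
        # model-free TD updates at both stages
        self.q_stage2[state][action2] += self.alpha * (
            reward - self.q_stage2[state][action2])
        self.q_stage1[action] += self.alpha * (
            self.q_stage2[state][action2] - self.q_stage1[action])
```

Under this kind of model, the paper's central finding would correspond to a higher fitted `w` in the protection condition than in the reward or punishment conditions, with `alpha` comparable across conditions.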
Details
- Database :
- OAIster
- Notes :
- application/pdf, Model-based prioritization for acquiring protection, English
- Publication Type :
- Electronic Resource
- Accession number :
- edsoai.on1338287709
- Document Type :
- Electronic Resource