174 results for "Peter Dayan"
Search Results
2. Predicting the Future with Simple World Models.
3. Detecting and Deterring Manipulation in a Cognitive Hierarchy.
4. Characterising the Creative Process in Humans and Large Language Models.
5. Neural Network Poisson Models for Behavioural and Neural Spike Train Data.
6. Reinforcement Learning with Simple Sequence Priors.
7. Habits of Mind: Reusing Action Sequences for Efficient Planning.
8. Compositionality under time pressure.
9. Habits of Mind: Reusing Action Sequences for Efficient Planning.
10. Reinforcement Learning with Simple Sequence Priors.
11. The Inner Sentiments of a Thought.
12. Reframing dopamine: A controlled controller at the limbic-motor interface.
13. Two steps to risk sensitivity.
14. Static and Dynamic Values of Computation in MCTS.
15. Using Primary Reinforcement to Enhance Translatability of a Human Affect and Decision-Making Judgment Bias Task.
16. Disentangled behavioural representations.
17. Optimism and pessimism in optimised replay.
18. Biased belief priors versus biased belief updating: Differential correlates of depression and anxiety.
19. The pursuit of happiness: A reinforcement learning perspective on habituation and comparisons.
20. Correcting experience replay for multi-agent communication.
21. Tracking the Unknown: Modeling Long-Term Implicit Skill Acquisition as Non-Parametric Bayesian Sequence Learning.
22. Confidence in control: Metacognitive computations for information search.
23. Exploring learning trajectories with dynamic infinite hidden Markov models.
24. A Local Temporal Difference Code for Distributional Reinforcement Learning.
25. Integrated accounts of behavioral and neuroimaging data using flexible recurrent neural network models.
26. Fast Parametric Learning with Activation Memorization.
27. Catastrophe, Compounding & Consistency in Choice.
28. Two steps to risk sensitivity.
29. Dissecting the links between reward and loss, decision-making, and self-reported affect using a computational approach.
30. Internality and the internalisation of failure: Evidence from a novel task.
31. Static and Dynamic Values of Computation in MCTS.
32. Correcting Experience Replay for Multi-Agent Communication.
33. Combined model-free and model-sensitive reinforcement learning in non-human primates.
34. Tracking human skill learning with a hierarchical Bayesian sequence model.
35. Interactions between attributions and beliefs at trial-by-trial level: Evidence from a novel computer game task.
36. Vigilance, arousal, and acetylcholine: Optimal control of attention in a simple detection task.
37. Examining Workflow in a Pediatric Emergency Department to Develop a Clinical Decision Support for an Antimicrobial Stewardship Program.
38. Applying the RE-AIM Framework for the Evaluation of a Clinical Decision Support Tool for Pediatric Head Trauma: A Mixed-Methods Study.
39. Feudal Multi-Agent Hierarchies for Cooperative Reinforcement Learning.
40. Learning to use past evidence in a sophisticated world model.
41. Models that learn how humans learn: The case of decision-making and its disorders.
42. A computational account of threat-related attentional bias.
43. A model of structure learning, inference, and generation for scene understanding.
44. Clinical Decision Support for a Multicenter Trial of Pediatric Head Trauma.
45. The influence of contextual reward statistics on risk preference.
46. Multiple value signals in dopaminergic midbrain and their role in avoidance contexts.
47. The Dopaminergic Midbrain Mediates an Effect of Average Reward on Pavlovian Vigor.
48. Bayes-Adaptive Simulation-based Search with Value Function Approximation.
49. Probabilistic Meta-Representations Of Neural Networks.
50. Fast Parametric Learning with Activation Memorization.
Discovery Service for Jio Institute Digital Library