63 results for "Scott Sanner"
Search Results
2. Multi-modal Generative Models in Recommendation System.
3. Elaborative Subtopic Query Reformulation for Broad and Indirect Queries in Travel Destination Recommendation.
4. Recommendation with Generative Models.
5. Generalized Multi-hop Traffic Pressure for Heterogeneous Traffic Perimeter Control.
6. Tackling the Abstraction and Reasoning Corpus with Vision Transformers: the Importance of 2D Representation, Positions, and Objects.
7. Multi-Aspect Reviewed-Item Retrieval via LLM Query Decomposition and Aspect Fusion.
8. Large Language Model Driven Recommendation.
9. CR-LT-KGQA: A Knowledge Graph Question Answering Dataset Requiring Commonsense Reasoning and Long-Tail Knowledge.
10. Right for Right Reasons: Large Language Models for Verifiable Commonsense Knowledge Graph Question Answering.
11. A Review of Modern Recommender Systems Using Generative Models (Gen-RecSys).
12. Bayesian Optimization with LLM-Based Acquisition Functions for Natural Language Preference Elicitation.
13. Retrieval-Augmented Conversational Recommendation with Prompt-based Semi-Structured Natural Language State Tracking.
14. Constraint-Generation Policy Optimization (CGPO): Nonlinear Programming for Policy Optimization in Mixed Discrete-Continuous MDPs.
15. Self-Supervised Contrastive BERT Fine-tuning for Fusion-based Reviewed-Item Retrieval.
16. A Generalized Framework for Predictive Clustering and Optimization.
17. LogicRec: Recommendation with Users' Logical Requirements.
18. Bayesian Knowledge-driven Critiquing with Indirect Evidence.
19. LLMs and the Abstraction and Reasoning Corpus: Successes, Failures, and the Importance of Object-based Representations.
20. Safe MDP Planning by Learning Temporal Patterns of Undesirable Trajectories and Averting Negative Side Effects.
21. Large Language Models are Competitive Near Cold-start Recommenders for Language- and Item-based Preferences.
22. Revisiting Random Forests in a Comparative Evaluation of Graph Convolutional Neural Network Variants for Traffic Prediction.
23. Perimeter Control Using Deep Reinforcement Learning: A Model-free Approach towards Homogeneous Flow Rate Optimization.
24. Diffusion on the Probability Simplex.
25. DiffuDetox: A Mixed Diffusion Model for Text Detoxification.
26. TransCAM: Transformer Attention-based CAM Refinement for Weakly Supervised Semantic Segmentation.
27. Graphs, Constraints, and Search for the Abstraction and Reasoning Corpus.
28. Conservative Bayesian Model-Based Value Expansion for Offline Policy Optimization.
29. Unintended Bias in Language Model-driven Conversational Recommendation.
30. Learning to Follow Instructions in Text-Based Games.
31. A Critical Review of Traffic Signal Control and A Novel Unified View of Reinforcement Learning and Model Predictive Control Approaches for Adaptive Traffic Signal Control.
32. Sample-efficient Iterative Lower Bound Optimization of Deep Reactive Policies for Planning in Continuous MDPs.
33. pyRDDLGym: From RDDL to Gym Environments.
34. ExCon: Explanation-driven Supervised Contrastive Learning for Image Classification.
35. EDDA: Explanation-driven Data Augmentation to Improve Model and Explanation Alignment.
36. Multi-axis Attentive Prediction for Sparse Event Data: An Application to Crime Prediction.
37. Supervised Contrastive Replay: Revisiting the Nearest Class Mean Classifier in Online Class-Incremental Continual Learning.
38. Planning with Learned Binarized Neural Networks Benchmarks for MaxSAT Evaluation 2021.
39. RAPTOR: End-to-end Risk-Aware MDP Planning and Policy Learning by Backpropagation.
40. Online Continual Learning in Image Classification: An Empirical Survey.
41. Risk-Aware Transfer in Reinforcement Learning using Successor Features.
42. Batch-level Experience Replay with Review for Continual Learning.
43. Adversarial Shapley Value Experience Replay for Task-Free Continual Learning.
44. Contextual Policy Reuse using Deep Mixture Models.
45. Attentive Autoencoders for Multifaceted Preference Learning in One-class Collaborative Filtering.
46. Bayesian Experience Reuse for Learning from Multiple Demonstrators.
47. Noise Contrastive Estimation for Autoencoding-based One-Class Collaborative Filtering.
48. ε-BMC: A Bayesian Ensemble Approach to Epsilon-Greedy Exploration in Model-Free Reinforcement Learning.
49. Reward Potentials for Planning with Learned Neural Network Transition Models.
50. Optimizing Search API Queries for Twitter Topic Classifiers Using a Maximum Set Coverage Approach.