Invited Talks:
The EARL workshop proudly presents two invited talks, in which the speakers will provide in-depth insights into the latest developments in integrating large language models with recommendation systems.
- Prof. Scott Sanner (ssanner@mie.utoronto.ca), from the University of Toronto, whose research spans a broad range of topics, from the data-driven fields of Machine Learning and Information Retrieval to the decision-driven fields of Artificial Intelligence and Operations Research.
In this talk I will begin by discussing the need for personalized recommendation in conversational AI assistants and the fundamental challenges that make this problem difficult. I will next provide a review of recent developments in this field leading up to modern LLM-enhanced conversational recommendation systems. I will then discuss some of my group's own research on LLM-based conversational recommendation that highlights the significant advances and opportunities offered by LLMs, as well as some of the technical and societal challenges created by this fundamental shift to LLM-based architectures.
- Assoc. Prof. Yongfeng Zhang (yongfeng.zhang@rutgers.edu), from Rutgers University, whose research covers a wide range of subjects, including Machine Learning, Data Mining, Information Retrieval, Recommender Systems, and Explainable AI.
Generative AI driven by Foundation Models has brought a paradigm shift for recommender systems. Instead of traditional multi-stage filtering and matching-based recommendation, it now becomes possible to do straightforward single-stage recommendation by directly generating the recommended items based on users’ personalized inputs. This paradigm shift not only brings increased recommendation accuracy, but also improves the efficiency through single-stage recommendation, and enables better controllability for users based on natural language prompts. This talk will introduce generative recommendation from various perspectives, including foundation models for recommendation, item representation, textual ID learning, item indexing methods, multi-modal recommendation, prompt generation, as well as the explainability of foundation models for recommendation.
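To make the single-stage idea from this abstract concrete, the hypothetical Python sketch below prompts an LLM with a user's interaction history and parses the generated text directly into recommended items, with no separate retrieval or ranking stages. The names `build_prompt`, `generate_recommendations`, and the stand-in `call_llm` backend are illustrative assumptions, not code from the talk.

```python
# A minimal, hypothetical sketch of single-stage generative recommendation:
# the user's interaction history goes into a natural-language prompt and the
# LLM directly generates candidate items. `call_llm` is a placeholder for any
# chat-completion backend.

def build_prompt(user_history, n_items=5):
    """Turn a user's interaction history into a recommendation prompt."""
    history = "\n".join(f"- {item}" for item in user_history)
    return (
        "The user has recently interacted with the following items:\n"
        f"{history}\n"
        f"Directly recommend {n_items} new items the user is likely to enjoy, "
        "one per line, without explanations."
    )

def generate_recommendations(user_history, call_llm, n_items=5):
    """Single-stage recommendation: personalized prompt in, item list out."""
    raw = call_llm(build_prompt(user_history, n_items))
    # Parse the generated text back into a list of item titles.
    return [line.strip("-• ").strip() for line in raw.splitlines() if line.strip()]

if __name__ == "__main__":
    # Stand-in backend so the sketch runs without an API key; in practice this
    # would be a real LLM chat-completion call.
    def call_llm(prompt):
        return "The Matrix\nBlade Runner 2049\nGhost in the Shell"

    print(generate_recommendations(["Inception", "Interstellar", "Arrival"], call_llm))
```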
Coffee Break:
A central feature of the EARL workshop is the interactive session, coupled with a coffee break. This session is not merely an opportunity to delve into specific research topics, but also a perfect setting for attendees to network and engage in meaningful discussions, fostering a collaborative environment.
Accepted Papers:
- Leveraging LLMs to Enhance a Web-Scale Webpage Recommendation System
  Jaidev Shah, Iman Barjasteh, Amey Barapatre, Rana Forsati, Gang Luo, Fan Wu, Julie Fang, Xue Deng, Blake Shepard, Ronak Shah, Linjun Yang, Hongzhi Li
- GenRec: Generative Sequential Recommendation with Large Language Models
  Panfeng Cao, Pietro Liò
- A Practice-Friendly LLM-Enhanced Paradigm with Preference Parsing for Sequential Recommendation
  Dugang Liu, Shenxian Xian, Xiaolin Lin, Xiaolian Zhang, Hong Zhu, Yuan Fang, Zhen Chen, Zhong Ming
- Data Imputation Using Large Language Model to Accelerate Recommender System
  Jiahao Tian, Jinman Zhao, Zhenkai Wang, Zhicheng Ding, Siyang Li