We are thrilled to announce that Memorable's paper on video memorability has been accepted to CVPR 2023, the top AI and computer vision conference! This is great news: CVPR is one of the most recognized events in AI year after year, with one of the lowest acceptance rates and highest impact scores in the field.
The paper introduces a new framework for analyzing the features that impact memorability, as well as a novel prediction model that surpasses previous state-of-the-art results.
Titled "Modular Memorability: Tiered Representations for Video Memorability Prediction", our work reflects our continued commitment to truly understanding what makes creative effective and to sharing this knowledge with the community. The paper makes three main contributions:
1. Tiered categorization of visual features: we perform an in-depth analysis of the visual features that affect video memorability and propose a tiered categorization tied to the hurdles that visual information must clear before being stored in memory.
2. Combining feature tiers and contextual information: we develop a novel memorability model that combines information from the different feature tiers and contrasts it with contextual information (a toy sketch of the general idea follows this list). The model surpasses the state of the art on the two largest publicly available video memorability datasets.
3. In-depth model ablation study: we analyze the model part by part to understand how its modules differ in their internal representations. This allows us to extract interesting insights into the predictive power of features at different cognitive levels.
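To make the idea of tiered feature fusion a bit more concrete, here is a minimal, purely illustrative sketch in PyTorch. It is not the architecture from the paper: the `TieredMemorabilityNet` class, tier names, and feature dimensions are hypothetical, and the snippet only shows the general pattern of encoding each feature tier separately, fusing the encodings, and contrasting them with a context embedding to predict a memorability score.

```python
# Illustrative sketch only (hypothetical names and dimensions), not the model
# from the paper: each feature tier gets its own small encoder, the encodings
# are fused together with a context embedding, and a head maps the result to
# a memorability score in [0, 1].
import torch
import torch.nn as nn

class TieredMemorabilityNet(nn.Module):
    def __init__(self, tier_dims=(512, 256, 128), context_dim=128, hidden=128):
        super().__init__()
        # One encoder per feature tier (e.g. low-level, semantic, high-level).
        self.tier_encoders = nn.ModuleList(
            [nn.Sequential(nn.Linear(d, hidden), nn.ReLU()) for d in tier_dims]
        )
        self.context_encoder = nn.Sequential(nn.Linear(context_dim, hidden), nn.ReLU())
        # Head takes the concatenated tier encodings plus the context embedding.
        self.head = nn.Sequential(
            nn.Linear(hidden * len(tier_dims) + hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
            nn.Sigmoid(),  # memorability score in [0, 1]
        )

    def forward(self, tier_feats, context_feat):
        # tier_feats: list of tensors, one per tier, each of shape (batch, dim)
        encoded = [enc(f) for enc, f in zip(self.tier_encoders, tier_feats)]
        ctx = self.context_encoder(context_feat)
        fused = torch.cat(encoded + [ctx], dim=-1)
        return self.head(fused).squeeze(-1)

# Example usage with random features standing in for real video descriptors.
model = TieredMemorabilityNet()
feats = [torch.randn(4, 512), torch.randn(4, 256), torch.randn(4, 128)]
context = torch.randn(4, 128)
scores = model(feats, context)  # shape: (4,)
```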
Sounds exciting, right? 🤩
See the paper presented at CVPR here!