At the risk of oversaturating the blogosphere with RecSys 2012 summaries, I decided to write up some of my thoughts about the conference. Just as Xavier Amatriain wrote in his summary, I should make no secret of the fact that this is by far the conference I enjoy going to the most. This being my fourth recommender systems conference (having missed the first two events, as they occurred before and at the very beginning of my involvement with recommender systems), I've seen the conference develop and grow into what it is now: a close to perfect blend of academic research and industrial know-how, providing a great setting for new ideas, networking and inspiration for that next big paper.

Pre Conference Workshops

Sadly, I was not able to attend the first workshop day, meaning I missed the workshop on Context-Aware Recommender Systems, the workshop on Recommender Systems and the Social Web, the workshop on Human Decision Making in Recommender Systems, and the workshop on Recommender Utility Evaluation. From what I've heard and read, one of the highlights was the CARS panel, which seems to have followed last year's concept of each panelist having a different definition of context. For me, this reflects the importance of the context-aware research field in recommender systems: many of us work on or with context, but our perspectives seem to differ slightly (i.e. they are contextual). Another highlight I regret missing was the keynote by Netflix's Carlos Gomez-Uribe at the RUE workshop, on "Challenges and Limitations in the Offline and Online Evaluation of Recommender Systems: A Netflix Case Study" (I haven't been able to find the slides online, let me know if you have a URL). Additionally, in the same workshop, Gravity's Domonkos Tikk presented our paper "Recommender Systems Evaluation: A 3D Benchmark" (pdf/slides).

Tutorials

Having arrived on Sunday night, I was on time for the main conference program to start on Monday morning, which meant the first of two parallel tutorial sessions. I chose to attend Bart Knijnenburg's tutorial on "User Experiments in Recommender Systems" (slides). Bart is likely one of the most outspoken supporters of user studies in recommender systems, and I fully agree with him on their importance. In the 90-minute tutorial, Bart managed to present the basics of user studies as well as delve into details regarding experiments, analysis and evaluation. One of the problems discussed in the tutorial (and in the back-channel discussion on Twitter) was the lack of knowledge about proper evaluation of user studies. I fully agree with LinkedIn's Daniel Tunkelang in saying that a book on the topic would be a very nice resource for the community.

Running in parallel to Bart's was the tutorial on "Personality-based Recommender Systems: An Overview" (slides), which I consequently could not attend. The topic is relevant to recommender systems in several respects, as it merges multiple concepts (context-awareness, decision making, etc.) to provide better recommendations.

The second tutorial session was given on Tuesday morning. In this session, Xavier Amatriain held a tutorial on "Building Large-scale, Real-world Recommender Systems" (slides), where he presented many of the issues Netflix deals with in their recommender setting. I must say that I was a little disappointed when I learnt that our tutorial and panel on "Best Practices in Recommender System Challenges" (slides), which I organized together with Domonkos Tikk and Andreas Hotho, was scheduled in parallel to Xavier's, as I would not be able to attend his. Still, our tutorial, even with a much smaller crowd than the former one, raised issues which were discussed at length in the following panel session with Darren Vengroff, Yehuda Koren and Torben Brodt as panelists. The take-home message was to organize more challenges, but to reflect on the utility of releasing "yet another rating dataset" and on how to create interesting challenges which could potentially advance recommender systems research in a similar fashion to what the Netflix Prize did.

Keynotes and Selected Papers

Following the Monday morning tutorials was the first of several keynotes, this one given by Jure Leskovec on "How users evaluate things and each other in social media" (slides). In the keynote, Jure presented how people's attitudes and opinions influence the rate of success of recommender systems, as well as how social connections (direct and indirect) have changed the recommendation setting.

This year, the acceptance ratio was the most competitive in the history of the conference (20% for full papers and 31.8% for short papers), which was reflected in the overall quality of the presentations. I'm not going to list each and every paper presented; this is merely a short list of those I found most interesting:

User Effort vs. Accuracy in Rating-based Elicitation - Cremonesi et al. (doi)

This paper deals with cold start-related (so-called profile length) issues and evaluates several algorithms, both in a traditional offline setting and in a large-scale user study with almost 1,000 users. The authors find that between 5 and 20 ratings are necessary in order to be able to provide satisfactory recommendations.
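
To make the experimental setup concrete, here is a minimal sketch of a profile-length experiment of the kind the paper describes: reveal only each test user's first k ratings, predict the held-out rest, and watch how error changes as k grows. The `predict` callable and the ratings data layout are my assumptions, not the authors' actual setup.

```python
# Sketch of a profile-length (rating elicitation) experiment: truncate each
# test user's known ratings to k items and measure error on the rest.
# `predict(train, user, revealed, item)` is a hypothetical recommender hook.
import numpy as np

def rmse_at_profile_length(train, test_users, predict, k):
    """RMSE when each test user reveals only their first k ratings."""
    errors = []
    for user, ratings in test_users.items():       # ratings: [(item, value), ...]
        revealed, held_out = ratings[:k], ratings[k:]
        for item, true_value in held_out:
            pred = predict(train, user, revealed, item)
            errors.append((pred - true_value) ** 2)
    return np.sqrt(np.mean(errors))

# for k in (0, 5, 10, 20, 50):
#     print(k, rmse_at_profile_length(train, test_users, predict, k))
```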

How Many Bits per Rating? - Kluver et al. (doi)

Personally, I found this to be a very interesting paper, not only because it relates to our work on the magic barrier, but also because it deals with an often-overlooked topic in recommender systems evaluation: the fact that ratings are afflicted with noise and should perhaps not be treated as absolute truth in evaluation scenarios. The authors present a framework for measuring the level of noise in ratings in order to estimate how much information the ratings actually contain.
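
One back-of-the-envelope way to get at the same intuition (a simplification of mine, not the framework from the paper): if users re-rate the same items, treat the (first rating, re-rating) pairs as a noisy channel and compute their mutual information in bits.

```python
# Rough "bits per rating" intuition: how much does a rating tell us about
# a later re-rating of the same item? Assumes a 1..n_levels rating scale
# and a list of (original, re-rating) pairs; not the paper's actual method.
import numpy as np

def mutual_information_bits(pairs, n_levels=5):
    joint = np.zeros((n_levels, n_levels))
    for a, b in pairs:                      # a, b are ratings in 1..n_levels
        joint[a - 1, b - 1] += 1
    joint /= joint.sum()
    px, py = joint.sum(axis=1), joint.sum(axis=0)
    mi = 0.0
    for i in range(n_levels):
        for j in range(n_levels):
            if joint[i, j] > 0:
                mi += joint[i, j] * np.log2(joint[i, j] / (px[i] * py[j]))
    return mi  # bits a rating carries about its re-rating

# pairs = [(4, 4), (3, 4), (5, 5), ...]  # hypothetical re-rating data
# print(mutual_information_bits(pairs))
```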

BlurMe: Inferring and obfuscating user gender based on ratings - Weinsberg et al. (doi)

This paper, written by Technicolor's research team, presented interesting results on how much information rating profiles actually contain. The authors were able to identify the gender of users solely based on their rating profiles, without using any additional movie-related metadata. Following this, they present methods for obfuscating gender information by adding ratings to user profiles without too large an impact on recommendation accuracy.
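
A minimal sketch of the inference half of that idea: predict gender from nothing but which movies a user rated, using an off-the-shelf linear classifier. The file names and data layout are hypothetical, and the paper's actual models and obfuscation strategies are more involved than this.

```python
# Gender inference from rating profiles alone (a simplified sketch, not
# the BlurMe implementation). X is a binary users-by-items matrix where
# X[u, i] = 1 iff user u rated item i; y holds gender labels.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X = np.load("rated_matrix.npy")   # hypothetical preprocessed files
y = np.load("genders.npy")

clf = LogisticRegression(max_iter=1000)
print(cross_val_score(clf, X, y, cv=5).mean())  # inference accuracy

# Obfuscation, roughly: add ratings for items whose learned weights point
# toward the opposite gender, then re-check accuracy on both tasks.
```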

When Recommenders Fail: Predicting Recommender Failure for Algorithm Selection and Combination - Ekstrand and Riedl (doi)

This was a poster paper that really caught my eye. The authors look into which algorithms work in which scenarios, or as they express it: what causes particular recommenders to fail. They also present comparisons of recommendation algorithms in different settings.
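
A toy version of where this line of work points, under my own simplifying assumptions: a switching hybrid that, for each user, picks whichever recommender had the lower error on that user's held-out validation ratings.

```python
# Per-user algorithm switching (a simplified cousin of what the paper
# studies, not their method). `algos` maps names to predict(user, item)
# callables; `validation` holds held-out ratings -- both assumed here.

def pick_per_user(algos, validation):
    """validation: {user: [(item, true_rating), ...]}"""
    choice = {}
    for user, ratings in validation.items():
        def err(name):
            sq = [(algos[name](user, i) - r) ** 2 for i, r in ratings]
            return sum(sq) / len(sq)
        choice[user] = min(algos, key=err)   # lowest validation error wins
    return choice  # e.g. {"alice": "item-item", "bob": "svd"}
```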

Using Graph Partitioning Techniques for Neighbour Selection in User-Based Collaborative Filtering - Bellogín and Parapar (doi)

In this poster paper, the authors use Normalised Cut, a spectral clustering algorithm, to perform the neighbourhood selection in a nearest neighbour scenario. Their approach outperforms more standard memory-based collaborative filtering techniques. This paper was also awarded the Best Poster Award.
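
For the curious, here is a sketch of what such neighbour selection could look like: cluster users on a precomputed similarity matrix (scikit-learn's SpectralClustering solves a normalised-cut style relaxation) and restrict each user's top-k neighbourhood to their own cluster. The similarity matrix and cluster count are assumptions, not the authors' configuration.

```python
# Neighbour selection via spectral clustering, loosely following the
# paper's idea. S is a hypothetical symmetric, non-negative user-user
# similarity matrix.
import numpy as np
from sklearn.cluster import SpectralClustering

S = np.load("user_similarity.npy")
labels = SpectralClustering(
    n_clusters=50, affinity="precomputed"
).fit_predict(S)

def neighbourhood(u, k=20):
    same = np.flatnonzero(labels == labels[u])   # users in u's cluster
    same = same[same != u]
    return same[np.argsort(-S[u, same])][:k]     # top-k by similarity
```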

Industry Session

As usual, the industry session was filled with interesting talks from interesting people working at interesting companies. The two most interesting talks were given by Ron Kohavi from Microsoft and Paul Lamere from The Echo Nest.

Ron's keynote "Online Controlled Experiments: Introduction, Learnings, and Humbling Statistics" (slides) was an insightful talk about online testing and concepts such as the "HiPPO" (Highest Paid Person's Opinion) and A/A testing, i.e. testing the same settings/algorithms on different groups before going on with A/B tests. Ron collected some of the most interesting Twitter comments on his website; be sure to check it out.
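
The point of an A/A test can be shown in a few lines: run the same significance test you plan to use for A/B on two identically treated groups. Over many simulated runs it should come up "significant" about alpha of the time; much more than that signals a broken pipeline or biased assignment. A minimal simulation, with all numbers picked for illustration:

```python
# Simulated A/A tests: both groups draw the same metric distribution, so
# the t-test's false positive rate should stay near the chosen alpha.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
alpha, false_positives, runs = 0.05, 0, 1000
for _ in range(runs):
    metric = rng.normal(size=2000)          # same distribution for everyone
    a, b = metric[:1000], metric[1000:]     # arbitrary A/A split
    if ttest_ind(a, b).pvalue < alpha:
        false_positives += 1
print(false_positives / runs)  # should be close to alpha
```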

Paul Lamere, a well-known figure in the MIR (Music Information Retrieval) community, presented a talk titled "I've got 10 million songs in my pocket. Now what?" (slides), which focused on ten things one needs to consider when building a music recommender system. Most of them are relevant in other domains as well, e.g. a very large item space, large personal collections, contextual usage, etc. All ten concepts are listed in the slides linked above, and I would recommend everyone even remotely interested in (music) recommender systems to have a look.

Post Conference Workshops

Just like the first day of the conference, the last day consisted of workshops. This was the third year the conference had two workshop days, and as usual they offered a wide range of presentations and formats. One thing I like about the workshop format is that the organizers can decide not to have a standard "mini-conference" type of workshop. At RecSys, this happened in the PeMA workshop in 2011, and in this year's follow-up workshop on "Personalizing the Local Mobile Experience", which was a merger of the PeMA and Loca workshops. I could not attend the workshop as I was organizing another one myself, but from what I heard it was a great success. The other workshops this day were "Interfaces for Recommender Systems", "Recommender Technologies for Lifestyle Change" and the Recommender Systems Challenge, which I was involved in. As an organizer, you tend to miss what's going on in the other workshops, so I'll leave those to the people who attended them.

The Recommender Systems Challenge turned out to be more interesting than I had anticipated, with presentations from MovieLens, Mendeley, Mahout, MyMediaLite, Plista, Gravity R&D and more. There were three invited talks from industry: Domonkos Tikk from Gravity R&D talked about how Gravity went from a group participating in the Netflix Prize to a recommender systems vendor (slides), Kris Jack from Mendeley presented quite a lot of details on Mendeley's recommendation engine (slides), and Torben Brodt from Plista talked about the Plista contest (slides).

The workshop ended with a hands-on session where developers from Mahout, MyMediaLite and LensKit guided those who were interested in using the frameworks.


Other RecSys Summaries

There are a few other RecSys summaries; check them out for more details on what went on at the conference (and let me know if I've missed any and I'll add the link).
