

Numerous recommendation approaches are in use today. However, comparing their effectiveness is a challenging task because evaluation results are rarely reproducible. In this article, we examine the challenge of reproducibility in recommender-system research. We conduct experiments using Plista's news recommender system and Docear's research-paper recommender system. The experiments show that there are large discrepancies in the effectiveness of identical recommendation approaches in only slightly different scenarios, as well as large discrepancies for slightly different approaches in identical scenarios. For example, in one news-recommendation scenario, the performance of a content-based filtering approach was twice as high as that of the second-best approach, while in another scenario the same content-based filtering approach was the worst-performing approach.

We found several determinants that may contribute to the large discrepancies observed in recommendation effectiveness. Determinants we examined include user characteristics (gender and age), datasets, weighting schemes, the time at which recommendations were shown, and user-model size. Some of the determinants have interdependencies; for instance, the optimal size of an algorithm's user model depended on users' age. Since minor variations in approaches and scenarios can lead to significant changes in a recommendation approach's performance, ensuring reproducibility of experimental results is difficult. We discuss these findings and conclude that to ensure reproducibility, the recommender-system community needs to (1) survey other research fields and learn from them, (2) find a common understanding of reproducibility, (3) identify and understand the determinants that affect reproducibility, (4) conduct more comprehensive experiments, (5) modernize publication practices, (6) foster the development and use of recommendation frameworks, and (7) establish best-practice guidelines for recommender-systems research.

In the past six years, we built three research-article recommender systems for digital libraries and reference managers, and conducted research on these systems. Over the past three years, we have also been studying how automated evaluation of student mind maps, compared against an expert map, reflects student learning on a variety of metrics. In this paper, we share some of the experiences we gained during that time. Among others, we discuss the skills required to build recommender systems, and why the literature provides little help in identifying promising recommendation approaches. We explain the challenge of creating a randomization engine to run A/B tests, and how low data quality impacts the calculation of bibliometrics. We further discuss why several of our experiments delivered disappointing results, and provide statistics on how many researchers showed interest in our recommendation dataset.
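To make the notions of a content-based filtering approach and a user-model-size determinant concrete, the following is a minimal illustrative sketch, not the actual Plista or Docear implementation; the function names, the bag-of-words tokenization, and the default parameter values are assumptions chosen for brevity.

```python
# Minimal sketch of a content-based filtering approach with an explicit
# user-model-size parameter. Illustrative only; not the implementation
# used in the experiments described above.
import math
from collections import Counter

def build_user_model(clicked_texts, user_model_size=50):
    """Aggregate terms from a user's clicked items and keep the top-N terms.
    user_model_size is one of the determinants discussed above."""
    counts = Counter()
    for text in clicked_texts:
        counts.update(text.lower().split())
    return dict(counts.most_common(user_model_size))

def cosine_similarity(model, candidate_text):
    """Cosine similarity between the user model and a candidate item."""
    cand = Counter(candidate_text.lower().split())
    dot = sum(weight * cand[term] for term, weight in model.items())
    norm_model = math.sqrt(sum(w * w for w in model.values()))
    norm_cand = math.sqrt(sum(c * c for c in cand.values()))
    if norm_model == 0 or norm_cand == 0:
        return 0.0
    return dot / (norm_model * norm_cand)

def recommend(clicked_texts, candidates, k=5, user_model_size=50):
    """Rank candidate items against the user model and return the top k."""
    model = build_user_model(clicked_texts, user_model_size)
    ranked = sorted(candidates, key=lambda c: cosine_similarity(model, c), reverse=True)
    return ranked[:k]
```

In this sketch, user_model_size caps how many terms describe a user, and raw term counts stand in for a weighting scheme; swapping in TF-IDF or another scheme, or changing the cap, corresponds to the kinds of small variations whose effects on effectiveness we examine above.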
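Likewise, the randomization engine for A/B tests is easier to picture with a small sketch. The code below shows deterministic, hash-based assignment of users to recommendation approaches; the approach names, experiment identifier, and hash choice are hypothetical and do not describe our actual engine.

```python
# Minimal sketch of a randomization engine for A/B testing recommendation
# approaches. Hash-based assignment keeps a user's group stable across
# sessions. Approach names and experiment id are placeholders.
import hashlib

APPROACHES = ["content_based", "collaborative", "stereotype", "random_baseline"]

def assign_approach(user_id: str, experiment: str = "exp-001") -> str:
    """Map a user to one approach, stable for the same user and experiment."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % len(APPROACHES)
    return APPROACHES[bucket]

# The assignment is reproducible, which is what makes A/B results comparable.
assert assign_approach("user-42") == assign_approach("user-42")
```

Hash-based assignment avoids storing per-user state, but handling returning users, uneven group sizes, and several concurrent experiments is typically where building such an engine becomes more involved.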
Research on recommender systems is a challenging task, as is building and operating such systems. Major challenges include non-reproducible research results, dealing with noisy data, and answering many questions, such as how many recommendations to display, how often, and, of course, how to generate recommendations most effectively.
