Retrieving specific categories of images among billions of images usually requires an annotation step. Unfortunately, keyword-based techniques suffer from the semantic gap existing between a semantic concept and its digital representation. Content-Based Image Retrieval (CBIR) systems tackle this issue by assuming that semantic proximities can be mapped to similarities in the image feature space. Introducing relevance feedback involves the user in the task, but lengthens the annotation step.

To reduce annotation time, we aim to show that implicit relevance feedback can replace explicit feedback. In this study, we evaluate the robustness of an implicit relevance feedback system based only on eye-tracking features (a gaze-based interest estimator, GBIE). In [5], we showed that our GBIE was representative for any set of users on "neutral images". Here, we aim to show that it remains valid for more "subjective" categories such as food recipes.
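The core CBIR assumption mentioned above — that semantic proximity between images can be approximated by similarity between their feature vectors — can be sketched minimally as a nearest-neighbour ranking. The feature vectors, image names, and similarity measure (cosine) below are illustrative assumptions, not taken from the paper:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def rank_images(query_vec, database):
    """Rank (image_id, feature_vector) pairs by similarity to the query.

    Images whose features lie close to the query in feature space are
    assumed to be semantically close to it — the CBIR hypothesis.
    """
    scored = [(img_id, cosine_similarity(query_vec, vec))
              for img_id, vec in database]
    return sorted(scored, key=lambda t: t[1], reverse=True)

# Toy database of pre-extracted feature vectors (hypothetical values).
db = [("cake", [0.9, 0.1, 0.0]),
      ("car",  [0.0, 0.2, 0.9]),
      ("pie",  [0.8, 0.3, 0.1])]

results = rank_images([1.0, 0.2, 0.0], db)
# "cake" and "pie" rank above "car": feature-space similarity
# stands in for semantic similarity to the query image.
```

In a real relevance-feedback loop (explicit or, as in this study, implicit via gaze), the query vector would be iteratively re-weighted toward images the user found interesting.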