|Title||A reward-based approach for preference modeling: A case study|
|Publication Type||Journal Article|
|Year of Publication||2017|
|Authors||Armengol E, Puyol-Gruart J|
|Journal||Journal of Applied Logic|
|Keywords||Reward logics|
Abstract Most reasoning for decision making in daily life is based on preferences. As with other kinds of reasoning processes, many formalisms attempt to capture preferences; however, none of them captures all the subtleties of human reasoning. In this paper we analyze how to formalize the preferences expressed by humans and how to reason with them to produce rankings. In particular, we show that qualitative preferences are best represented with a combination of reward logics and conditional logics. We propose a new algorithm based on ideas of similarity between objects commonly used in case-based reasoning. We show that the new approach produces rankings close to those expressed by users.