Deadline: 
27 February 2020
Institution: 
Heriot-Watt University, Scotland
Department: 
Department of Learning Systems
Speaker: 
Fiona McNeill

One of the most difficult aspects of developing matching systems -
whether for matching ontologies or for other types of mismatched
data - is evaluation. The accuracy of matchers is usually evaluated by
measuring the results produced by the systems against reference sets,
but gold-standard reference sets are expensive and difficult to create.
In this talk I will describe CRPTR, which generates multiple variations of
different sorts of datasets, where the degree of variation is controlled,
so that they can be used to evaluate matchers in different contexts.
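To make the evaluation setup above concrete, here is a minimal Python sketch of scoring a matcher's output against a gold-standard reference set using precision, recall, and F1. The correspondences shown are hypothetical examples, not taken from the talk.

```python
# Minimal sketch of reference-set evaluation: score a matcher's found
# correspondences against a gold-standard reference alignment.

def evaluate(found: set, reference: set) -> dict:
    """Compute precision, recall, and F1 for a set of correspondences."""
    true_positives = found & reference  # matches the matcher got right
    precision = len(true_positives) / len(found) if found else 0.0
    recall = len(true_positives) / len(reference) if reference else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"precision": precision, "recall": recall, "f1": f1}

# Hypothetical correspondences between two ontologies
reference = {("Person", "Human"), ("hasName", "name"), ("Car", "Automobile")}
found = {("Person", "Human"), ("hasName", "label")}

print(evaluate(found, reference))
# {'precision': 0.5, 'recall': 0.333..., 'f1': 0.4}
```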
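And a sketch of the idea behind controlled variation, assuming simple character-level corruptions applied at a tunable rate; this illustrates the concept only, not CRPTR's actual implementation. Because each variant is derived from a known original, the true correspondences are known by construction, which is what makes the generated data usable as a reference set without manual gold-standard creation.

```python
# Sketch of controlled variation (illustrative only, not CRPTR itself):
# corrupt a schema's labels at a tunable rate so matchers can be tested
# against progressively harder variants with known ground truth.

import random

def corrupt_label(label: str, rng: random.Random) -> str:
    """Apply one simple corruption: drop, swap, or duplicate a character."""
    if len(label) < 2:
        return label
    i = rng.randrange(len(label) - 1)
    op = rng.choice(["drop", "swap", "dup"])
    if op == "drop":
        return label[:i] + label[i + 1:]
    if op == "swap":
        return label[:i] + label[i + 1] + label[i] + label[i + 2:]
    return label[:i] + label[i] + label[i:]

def make_variant(labels: list, rate: float, seed: int) -> list:
    """Corrupt each label with probability `rate` (the degree of variation)."""
    rng = random.Random(seed)
    return [corrupt_label(l, rng) if rng.random() < rate else l
            for l in labels]

schema = ["Person", "hasName", "dateOfBirth", "Address"]
for rate in (0.25, 0.5, 1.0):
    print(rate, make_variant(schema, rate, seed=42))
```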