Title: Results of the Ontology Alignment Evaluation Initiative 2020
Type: Conference paper
Authors: Mina Abd Nikooie Pour; Alsayed Algergawy; Reihaneh Amini; Daniel Faria; Irini Fundulaki; Ian Harrow; Sven Hertling; Ernesto Jiménez-Ruiz; Clement Jonquet; Naouel Karam; Abderrahmane Khiat; Amir Laadhar; Patrick Lambrix; Huanyu Li; Ying Li; Pascal Hitzler; Heiko Paulheim; Catia Pesquita; Tzanina Saveta; Pavel Shvaiko; Andrea Splendiani; Elodie Thiéblin; Cássia Trojahn; Jana Vatascinová; Beyza Yaman; Ondrej Zamazal; Lu Zhou
Year of publication: 2020
Record date: 2022-03-14
URL: https://publica.fraunhofer.de/handle/publica/410084
Language: English
Classification: 004; 005; 006; 629
Abstract: The Ontology Alignment Evaluation Initiative (OAEI) aims at comparing ontology matching systems on precisely defined test cases. These test cases can be based on ontologies of different levels of complexity and use different evaluation modalities (e.g., blind evaluation, open evaluation, or consensus). The OAEI 2020 campaign offered 12 tracks with 36 test cases and was attended by 19 participants. This paper is an overall presentation of that campaign.