World Scientific
AccTEF: A Transparency and Accountability Evaluation Framework for Ontology-Based Systems

    This paper proposes AccTEF, a new accountability and transparency evaluation framework for ontology-based systems (OSysts). AccTEF is based on an analysis of the relation between a set of widely accepted data governance principles, namely findable, accessible, interoperable, and reusable (FAIR), and the concepts of accountability and transparency. The accountability and transparency of the input ontologies and vocabularies of an OSyst are evaluated by analyzing the relation between vocabulary and ontology quality evaluation metrics, FAIR, and the accountability and transparency concepts. An ontology-based knowledge extraction pipeline serves as the use case in this study. Uncovering the relation between FAIR and accountability and transparency helps to identify and mitigate the risks associated with deploying OSysts, and yields design guidelines for embedding accountability and transparency in OSysts. We found that FAIR can be used as a transparency indicator. We also found that the studied vocabulary and ontology quality evaluation metrics do not cover FAIR, accountability, or transparency; accordingly, we suggest that these concepts be treated as vocabulary and ontology quality evaluation aspects. To the best of our knowledge, this is the first time the relation between FAIR and the accountability and transparency concepts has been identified and used for evaluation.
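    To make the idea of "FAIR as a transparency indicator" concrete, the sketch below scores an ontology's metadata against a handful of per-principle checks and maps the aggregate score to a coarse transparency signal. This is a minimal illustration in the spirit of the abstract, not the paper's actual AccTEF metrics: the indicator names, their grouping under F/A/I/R, and the 0.7 threshold are all illustrative assumptions.

    ```python
    # Hypothetical sketch: score an ontology's metadata against a few FAIR
    # indicators and treat a high score as a coarse transparency signal.
    # Indicator names and the threshold are illustrative, not from the paper.

    FAIR_INDICATORS = {
        "findable": ["has_persistent_identifier", "has_rich_metadata"],
        "accessible": ["retrievable_via_standard_protocol"],
        "interoperable": ["uses_standard_vocabulary"],
        "reusable": ["has_license", "has_provenance"],
    }

    def fair_score(metadata: dict) -> float:
        """Fraction of FAIR indicators the ontology's metadata satisfies."""
        checks = [c for group in FAIR_INDICATORS.values() for c in group]
        passed = sum(1 for c in checks if metadata.get(c, False))
        return passed / len(checks)

    def transparency_signal(metadata: dict, threshold: float = 0.7) -> str:
        """Map the FAIR score to a coarse transparency verdict."""
        return "transparent" if fair_score(metadata) >= threshold else "needs review"

    # Example metadata record for a hypothetical input ontology.
    example = {
        "has_persistent_identifier": True,
        "has_rich_metadata": True,
        "retrievable_via_standard_protocol": True,
        "uses_standard_vocabulary": True,
        "has_license": True,
        "has_provenance": False,
    }
    print(fair_score(example))           # 5 of 6 indicators pass
    print(transparency_signal(example))
    ```

    In a real assessment the boolean checks would be replaced by automated probes of the published ontology (resolvable IRIs, license statements, provenance metadata), along the lines of community FAIR assessment tooling.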
