Towards an AI-Centric Requirements Engineering Framework for Trustworthy AI
Format: Conference Proceeding
Language: English
Summary: Ethical guidelines are an asset for artificial intelligence (AI) development, and conforming to them will soon become a procedural requirement once the EU AI Act is ratified by the European Parliament. However, developers often lack explicit knowledge of how to apply these guidelines during the system development process. A literature review of ethical guidelines from various countries and organizations has revealed inconsistencies in the principles presented and in the terminology used to describe them. This research begins by identifying the limitations of existing ethical AI development frameworks in performing requirements engineering (RE) processes during the development of trustworthy AI. Recommendations to address these limitations will then be proposed to make the frameworks more applicable in the RE process and thereby foster the development of trustworthy AI. This could lead to wider adoption, greater productivity of AI systems, and a reduced workload on humans for non-cognitive tasks. Considering the impact of newer foundation models such as GitHub Copilot and ChatGPT, the vision for this research project is to work towards holistic, operationalisable RE guidelines for the development and implementation of trustworthy AI, not only at the product level but also at the process level.
ISSN: 0270-5257, 2574-1934
DOI: 10.1109/ICSE-Companion58688.2023.00075