Responsible AI Copilot

From Computer Laboratory Group Design Projects
Revision as of 15:25, 6 October 2022 by afb21

Marios Constantinides, Nokia Bell Labs <marios.constantinides@nokia-bell-labs.com>

Many AI-driven applications turn out to have built-in biases, or problems of trust and transparency. In principle, developers could be warned of such problems as they write code, with the tool suggesting additional code to address them, recommending a specific debiasing algorithm, or inserting inline comments or sticky notes that flag the need for future action. Your task is to provide such facilities in a modified version of Jupyter notebooks, perhaps using generative language models such as GPT-3 or OpenAI Codex to generate the relevant code and text output.
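As a starting point, the warning facility could be sketched without any language model at all: a simple audit pass over a notebook cell's source that flags references to sensitive attributes and prepends a sticky-note-style comment. The attribute list, function names, and warning format below are illustrative assumptions, not part of Jupyter or any existing fairness library; in the real project, a model such as GPT-3 or Codex might replace the keyword check and generate the suggested debiasing code.

```python
import re

# Hypothetical list of column names treated as sensitive attributes.
# A production tool would need a far richer, configurable taxonomy.
SENSITIVE_ATTRIBUTES = {"gender", "race", "age", "ethnicity", "religion"}

def audit_cell(source: str) -> list[str]:
    """Return warning comments for potential bias risks in a code cell."""
    warnings = []
    for attr in sorted(SENSITIVE_ATTRIBUTES):
        # Match the attribute as a whole word, e.g. a DataFrame column name.
        if re.search(rf"\b{attr}\b", source, flags=re.IGNORECASE):
            warnings.append(
                f"# BIAS-CHECK: '{attr}' looks like a sensitive attribute; "
                "consider a debiasing step before training on it."
            )
    return warnings

def annotate_cell(source: str) -> str:
    """Prepend any warnings to the cell, acting as an inline sticky note."""
    warnings = audit_cell(source)
    return "\n".join(warnings + [source]) if warnings else source
```

A cell such as `model.fit(df[["income", "gender"]], y)` would be returned with a `# BIAS-CHECK: 'gender' ...` comment prepended, while a cell that never touches a sensitive attribute passes through unchanged. Hooking this into Jupyter (for example via a cell pre-execution callback) and swapping the keyword check for model-generated suggestions is where the design work of the project lies.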