Responsible AI Copilot

From Computer Laboratory Group Design Projects

Marios Constantinides, Nokia Bell Labs <marios.constantinides@nokia-bell-labs.com>

Many AI-driven applications turn out to have built-in biases, or problems of trust and transparency. In principle, developers could be warned of such problems as they write code: a tool might add code to address the problem, recommend a specific debiasing algorithm, or insert inline comments or sticky notes flagging the need for future action. Your task is to provide such facilities in a modified version of Jupyter notebooks, perhaps using generative language models such as GPT-3 or OpenAI Codex to generate the relevant code and text output.
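To make this concrete, one kind of check such a copilot could suggest inserting after a model-training cell is a simple fairness metric. The sketch below (an illustrative example, not part of the project specification; the function name, data, and 0.2 threshold are all assumptions) computes the demographic parity difference, i.e. the gap in positive-prediction rates between two groups:

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Gap in positive-prediction rates between two groups (0 and 1).

    A debiasing copilot might auto-insert a check like this after a
    training cell and warn when the gap exceeds some threshold.
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_a = y_pred[group == 0].mean()  # positive rate for group 0
    rate_b = y_pred[group == 1].mean()  # positive rate for group 1
    return abs(rate_a - rate_b)

# Hypothetical predictions and a binary protected attribute
preds = [1, 0, 1, 1, 0, 0, 1, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]
gap = demographic_parity_difference(preds, groups)
if gap > 0.2:  # illustrative threshold, not a recommended value
    print(f"Warning: demographic parity gap {gap:.2f} exceeds 0.2")
```

A notebook copilot could render such a warning as a sticky note next to the offending cell, and a language model could then be prompted to generate candidate mitigation code (for example, reweighting the training data).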