Umbrella Analytics

Contact: John Pettigrew <johnp@umbrellaanalytics.net>

Algorithmic De-biasing

The past year has seen a significant rise in awareness of misogyny, racism and other discrimination in society and, in particular, in workplaces. In addition, there is growing awareness of how ‘algorithms’ can reinforce bias rather than remove it. Your task is to produce a system that helps businesses recruit more fairly by removing from their job adverts biased language that would put off many candidates. Your system should allow users to upload the text of a job ad, identify problems using natural-language processing and statistics, and recommend changes so that users can make iterative improvements. Ideally, your system would give each text an overall score as well as word-level feedback.
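To make the expected behaviour concrete, here is a minimal sketch of the word-level feedback and overall score described above. Everything in it is an illustrative assumption rather than part of the brief: the flagged-term lexicon, the `analyse` function and the scoring formula are placeholders, and a real system would use NLP (context, part-of-speech tagging) and corpus statistics rather than a hard-coded word list.

```python
import re
from dataclasses import dataclass

# Hypothetical, illustrative lexicon: a real system would derive such lists
# from research on gender-coded language and from corpus statistics,
# not from a hard-coded dictionary like this one.
FLAGGED_TERMS = {
    "ninja": "informal/exclusionary jargon; consider 'expert'",
    "rockstar": "informal/exclusionary jargon; consider 'skilled'",
    "aggressive": "masculine-coded; consider 'proactive'",
    "dominant": "masculine-coded; consider 'leading'",
    "competitive": "masculine-coded; consider 'ambitious'",
}


@dataclass
class Flag:
    word: str
    position: int      # character offset in the original text
    suggestion: str


def analyse(text: str) -> tuple[float, list[Flag]]:
    """Return an overall score in [0, 1] (1 = no flagged terms) and
    word-level flags with suggested replacements."""
    flags = []
    tokens = list(re.finditer(r"[A-Za-z']+", text))
    for match in tokens:
        word = match.group().lower()
        if word in FLAGGED_TERMS:
            flags.append(Flag(word, match.start(), FLAGGED_TERMS[word]))
    score = 1.0 - len(flags) / max(len(tokens), 1)
    return score, flags


if __name__ == "__main__":
    ad = "We need an aggressive sales ninja to join our competitive team."
    score, flags = analyse(ad)
    print(f"Overall score: {score:.2f}")
    for f in flags:
        print(f"  @{f.position}: '{f.word}' -> {f.suggestion}")
```

Returning character offsets alongside suggestions is one way to support the iterative, word-level editing loop the brief asks for; the scoring rule here is only a stand-in for a statistically grounded measure.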

Introduced via Ideaspace

Suggestion: might it be appropriate to build an experimental prototype of some kind of “bias alert” app that could integrate analysis of news coverage about a company with scanning of internal correspondence, and perhaps also whistleblower channels? To make this less commercially sensitive, it could focus on policy bias, health response, or political instability.
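If this “bias alert” variant were pursued, one way to frame it is as an aggregator over independent risk signals. The sketch below is purely illustrative: the `Signal` type, the placeholder sources, the averaging rule and the threshold are all assumptions, not taken from the suggestion, and whistleblower channels are omitted because integrating them raises design and privacy questions beyond a prototype.

```python
from dataclasses import dataclass
from statistics import mean
from typing import Callable


@dataclass
class Signal:
    source: str
    risk: float  # 0 = no concern, 1 = strong concern


def news_coverage(company: str) -> Signal:
    # Placeholder: a real source would query a news API and run sentiment /
    # topic analysis over recent articles mentioning the company.
    return Signal("news", 0.2)


def internal_correspondence(company: str) -> Signal:
    # Placeholder: scanning internal text raises privacy questions the
    # prototype would need to address explicitly.
    return Signal("internal", 0.1)


SOURCES: list[Callable[[str], Signal]] = [news_coverage, internal_correspondence]


def alert_level(company: str, threshold: float = 0.5) -> tuple[bool, float]:
    """Combine per-source risk estimates and decide whether to raise an alert."""
    risks = [source(company).risk for source in SOURCES]
    combined = mean(risks)
    return combined >= threshold, combined


if __name__ == "__main__":
    alert, combined = alert_level("ExampleCo")
    print(f"Combined risk {combined:.2f}; alert raised: {alert}")
```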