Umbrella Analytics

Contact: John Pettigrew <johnp@umbrellaanalytics.net>

[[Algorithmic De-biasing]]
2021 project; [[De-biasing the Employment Process]]


The past year has seen a significant rise in awareness of misogyny, racism and other discrimination in society and, in particular, in workplaces. In addition, there is growing awareness of how ‘algorithms’ can reinforce bias rather than remove it. Your task is to produce a system that helps businesses recruit more fairly by removing from their job adverts biased language that would put off many candidates. Your system should allow users to upload the text of a job ad, identify problems using natural-language processing and statistics, and recommend changes so that users can make iterative improvements. Ideally, the system would give each text an overall score as well as word-level feedback.
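
As a rough illustration of the word-level feedback and overall scoring the brief asks for, the sketch below flags terms from a small, purely illustrative word list and turns the flag rate into a score. The terms, suggested replacements and scoring formula are placeholder assumptions rather than a vetted lexicon; a real system would layer proper natural-language processing and statistical analysis on top.

<syntaxhighlight lang="python">
# Minimal sketch of word-level bias flagging for a job advert.
# The word list and suggestions below are illustrative placeholders only.
import re

FLAGGED_TERMS = {
    "aggressive": "masculine-coded; consider 'proactive'",
    "dominant": "masculine-coded; consider 'leading'",
    "ninja": "exclusionary jargon; consider 'expert'",
    "rockstar": "exclusionary jargon; consider 'skilled'",
    "manpower": "gendered; consider 'workforce'",
    "chairman": "gendered; consider 'chair'",
}

def review_advert(text: str) -> dict:
    """Return word-level feedback and an overall 0-100 score (higher = fewer flags)."""
    words = re.findall(r"[a-zA-Z']+", text.lower())
    feedback = [
        {"word": w, "suggestion": FLAGGED_TERMS[w]}
        for w in words
        if w in FLAGGED_TERMS
    ]
    # Placeholder scoring: proportion of un-flagged words, scaled to 0-100.
    score = 100 if not words else round(100 * (1 - len(feedback) / len(words)))
    return {"score": score, "feedback": feedback}

if __name__ == "__main__":
    advert = "We need an aggressive rockstar developer to lead our team."
    print(review_advert(advert))
</syntaxhighlight>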


Introduced via [[Ideaspace]]

Suggestion: might it be appropriate to build an experimental prototype of some kind of “bias alert” app that could integrate analysis of news coverage about a company with scanning of internal correspondence, and perhaps also whistleblower channels? To make this less commercially sensitive, it could focus on policy bias, health response, or political instability.
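
If that idea were pursued, one very simple starting point could look like the sketch below, which combines keyword-flag rates from several text sources into a coarse alert level. The source names, keywords, weights and thresholds are illustrative assumptions, not anything specified in the brief.

<syntaxhighlight lang="python">
# Rough sketch of the suggested "bias alert" idea: combine flag rates from
# several text sources (news coverage, internal correspondence, whistleblower
# reports) into one alert level. Keywords, weights and thresholds are
# purely illustrative assumptions.
from dataclasses import dataclass

ALERT_KEYWORDS = ("discrimination", "harassment", "bias", "unfair")

@dataclass
class SourceReport:
    name: str          # e.g. "news", "internal mail", "whistleblower channel"
    documents: list    # plain-text documents gathered from that source
    weight: float      # how strongly this source should influence the alert

def alert_level(reports: list) -> str:
    """Return a coarse alert level based on weighted keyword hits per document."""
    score = 0.0
    for report in reports:
        if not report.documents:
            continue
        hits = sum(
            any(k in doc.lower() for k in ALERT_KEYWORDS)
            for doc in report.documents
        )
        score += report.weight * hits / len(report.documents)
    if score > 0.5:
        return "high"
    if score > 0.1:
        return "medium"
    return "low"

if __name__ == "__main__":
    reports = [
        SourceReport("news", ["Regulator probes discrimination claims"], 1.0),
        SourceReport("internal mail", ["Minutes of weekly planning meeting"], 0.5),
    ]
    print(alert_level(reports))  # -> "high"
</syntaxhighlight>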