Personal Ambiguator

From Computer Laboratory Group Design Projects
Client: Eleanor Drage, [[Centre for Gender Studies]] <ed575@cam.ac.uk>


Many areas of life, such as housing, health, education and employment, are subject to gender, racial and other biases that machine-learning classification systems can detect. Your task is to identify potential bias in personal documents such as a CV, health record or university application, and then to use a generative neural network approach to tweak these documents (perhaps by adjusting text or selectively removing information) so that they cannot be clearly classified. The algorithm should be packaged in an interactive tool that lets people write ambiguated personal documents, and that also highlights and educates them about the sources of bias.
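The pipeline the brief describes — score a document for demographic signal, then rewrite it until it no longer classifies clearly — can be sketched in miniature. The sketch below is illustrative only: it uses a small hand-written lexicon of gendered terms where the real project would use a trained classifier and a generative model, and all names (`NEUTRAL`, `gender_signal`, `ambiguate`) are hypothetical.

```python
import re

# Hypothetical lexicon of gender-marked tokens and neutral replacements.
# In the real system these cues would be learned by a classifier,
# and the rewrites produced by a generative model, not hand-listed.
NEUTRAL = {
    "chairman": "chairperson",
    "chairwoman": "chairperson",
    "he": "they",
    "she": "they",
    "his": "their",
    "her": "their",
}

def gender_signal(text: str) -> int:
    """Count tokens a classifier might use as gender cues (a stand-in
    for a classifier's confidence score)."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return sum(1 for t in tokens if t in NEUTRAL)

def ambiguate(text: str) -> str:
    """Replace gender-marked tokens with neutral alternatives,
    preserving initial capitalisation."""
    def swap(match):
        word = match.group(0)
        replacement = NEUTRAL.get(word.lower())
        if replacement is None:
            return word
        return replacement.capitalize() if word[0].isupper() else replacement
    return re.sub(r"[A-Za-z']+", swap, text)

cv = "Chairwoman of the debating society; she led her team to a national award."
print(gender_signal(cv))    # cues detected before rewriting
print(ambiguate(cv))        # neutralised text
print(gender_signal(ambiguate(cv)))
```

A real tool would iterate: re-score the rewritten document and keep editing until the classifier can no longer assign a confident label, while highlighting each changed span to educate the user about where the bias signal came from.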

Latest revision as of 19:12, 12 November 2021
