Auto-Emoji

It's useful to include emojis in your messages as a quick indicator of emotional state, but why should you have to call up a special keyboard, or scroll through many alternatives, when your emotional state could be read off your face? The goal of this project is to augment the on-screen keyboard by using the front-facing camera to read the emotional state off the user's face (using a standard facial emotion classifier) and insert the matching emoji. The rear camera could also be used to capture suitable non-emotional emojis, for example by recognising a burger, a champagne glass or a cute puppy.
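As a rough sketch of the front-camera flow, the snippet below grabs a webcam frame with OpenCV and maps a classifier's emotion label to an emoji. The classify_emotion function and its label set are hypothetical stand-ins for whatever "standard facial emotion classifier" the team chooses; only the OpenCV capture calls are real API.

<syntaxhighlight lang="python">
import cv2  # OpenCV, used here only for webcam capture

# Illustrative label-to-emoji mapping; the label set is an assumption,
# not something fixed by the project brief.
EMOTION_TO_EMOJI = {
    "happy": "😊",
    "sad": "😢",
    "angry": "😠",
    "surprised": "😮",
    "neutral": "🙂",
}

def classify_emotion(frame) -> str:
    """Hypothetical stand-in for a standard facial emotion classifier."""
    raise NotImplementedError("plug in a real classifier here")

def emoji_from_camera() -> str:
    """Grab one frame from the camera and return a matching emoji."""
    cam = cv2.VideoCapture(0)   # device 0: typically the front camera
    ok, frame = cam.read()
    cam.release()
    if not ok:
        return ""               # no frame captured: insert nothing
    label = classify_emotion(frame)
    return EMOTION_TO_EMOJI.get(label, "")
</syntaxhighlight>

In the finished keyboard app, a call like this would run when the user taps an "auto-emoji" key, with the returned character inserted at the cursor.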

An earlier version of this project was offered (but did not proceed) with the following description:

Client: James Jillians, Sparx <James.Jillians@sparx.co.uk>

People increasingly rely on emoji to express the tone of digital communications. But as the set of emoji icons grows, it is frustrating and time-consuming to select the right one. The OpenFace library (originally developed in Cambridge) is an open-source facial behaviour analysis toolkit that can monitor a webcam to detect emotional state via action units, such as smiling or raised eyebrows. Your task is to build a custom keyboard app that can insert appropriate emojis directly into the text, based on "commands" directly received from the user's face.
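To give a flavour of the action-unit approach, here is a minimal sketch of turning OpenFace output into emoji "commands". It assumes OpenFace's FeatureExtraction tool has written its standard per-frame CSV, which includes action-unit intensity columns such as AU12_r; the intensity threshold and the AU-to-emoji rules below are illustrative assumptions, not part of OpenFace itself.

<syntaxhighlight lang="python">
import csv

AU_THRESHOLD = 2.0  # assumed cut-off on OpenFace's 0-5 intensity scale

def read_latest_aus(csv_path: str) -> dict:
    """Return {column: intensity} for the last frame of an OpenFace CSV."""
    with open(csv_path, newline="") as f:
        rows = list(csv.DictReader(f))
    last = rows[-1]
    # OpenFace pads its CSV headers with spaces, so strip them; keep
    # only the _r (intensity) action-unit columns.
    return {k.strip(): float(v) for k, v in last.items()
            if k.strip().startswith("AU") and k.strip().endswith("_r")}

def emoji_command(aus: dict) -> str:
    """Map active facial action units (FACS) to an emoji; rules are illustrative."""
    active = {name for name, value in aus.items() if value >= AU_THRESHOLD}
    if {"AU06_r", "AU12_r"} <= active:            # cheek raiser + lip corner puller
        return "😊"
    if {"AU01_r", "AU02_r", "AU26_r"} <= active:  # brows raised + jaw drop
        return "😮"
    if "AU04_r" in active:                        # brow lowerer
        return "😠"
    return ""                                     # no confident match: no emoji
</syntaxhighlight>

A real keyboard app would read these action units from a live OpenFace stream rather than a finished CSV, but the mapping step would look much the same.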