Empathetic Chatbot

From Computer Laboratory Group Design Projects
Latest revision as of 13:34, 13 October 2021

Client: Lee Wilson, Centre for Policy Futures <l.wilson7@uq.edu.au>

Chatbot dialog builders can be used to create systems that are very good at dispensing accurate information, such as medical advice, by following a configurable decision tree, but the output text needs to be standardised and impersonal. Language model-based text generators, on the other hand, produce text that is grammatical and possibly entertaining but often factually wrong. Your task is to integrate the two into a demonstrator for the World Health Organisation that can be configured by public health clinicians to give accurate advice when needed on problems like Covid infection, and can also support mental health with original, creative responses - but only when the question from the patient makes it safe to do so!
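The hybrid architecture implied by the brief could be sketched roughly as follows. This is only an illustrative outline, not a prescribed design: the function names, the keyword-based safety check, and the stub decision tree are all assumptions, and a real system would use a proper intent classifier and a genuine language-model backend.

```python
# Illustrative sketch of a hybrid chatbot: scripted decision-tree answers
# for safety-critical topics, with a generative fallback only when the
# query is judged safe. All names and data here are hypothetical.

SAFETY_CRITICAL = {"covid", "symptom", "vaccine", "medication"}

# Stand-in for a clinician-configured decision tree.
DECISION_TREE = {
    "covid": "If you have Covid symptoms, please follow official isolation guidance.",
}

def is_safety_critical(message: str) -> bool:
    # Naive keyword check standing in for a real safety classifier.
    words = set(message.lower().split())
    return not SAFETY_CRITICAL.isdisjoint(words)

def scripted_reply(message: str) -> str:
    # Walk the configured topics; fall back to a safe default.
    for topic, answer in DECISION_TREE.items():
        if topic in message.lower():
            return answer
    return "Please consult a clinician for accurate advice."

def generative_reply(message: str) -> str:
    # Placeholder for a language-model call (e.g. an external API).
    return f"That sounds lovely - tell me more about {message.split()[-1]}!"

def respond(message: str) -> str:
    if is_safety_critical(message):
        return scripted_reply(message)   # accurate, standardised advice
    return generative_reply(message)     # creative response when safe

print(respond("I think I have covid"))
print(respond("I enjoy gardening"))
```

The key design point the brief asks for is the routing step: the generative model is only ever reached once the safety check has passed, so factually unreliable text can never be emitted for a medical query.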