BeatGPT

From Computer Laboratory Group Design Projects
Revision as of 11:18, 26 September 2023 by afb21 (talk | contribs)


Suggestion by Bhasi Nair (needs some additional technical elements, and likely to name a colleague as client):

ChatGPT is reasonably good at answering general knowledge questions, perhaps far better than the average human. However, in the right context, we can expect the average human's capacity for theory of mind to far exceed ChatGPT's ability to imitate one. In contexts where the wisdom of the crowd fails, it has been shown that a follow-up question probing the crowd's expectations about the answers of others (a theory of mind question) can be used to arrive at the correct answer. Can this method, known as the "surprisingly popular algorithm," be used as a source of inspiration to engineer prompts that function as (noisy) Turing tests? And can such prompts be generated at scale?
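To make the idea concrete, here is a minimal sketch of the surprisingly popular algorithm: each respondent gives their own answer plus a prediction of how the rest of the crowd will answer, and the selected answer is the one whose actual vote share most exceeds its predicted share. The function name and data format are illustrative assumptions, not part of the project specification.

```python
from collections import Counter

def surprisingly_popular(votes, predictions):
    """Return the answer that is more popular than the crowd predicted.

    votes: list with each respondent's own answer.
    predictions: list of dicts, one per respondent, mapping each answer
        to that respondent's predicted fraction of the crowd choosing it.
    (Illustrative signature; any equivalent data format would do.)
    """
    n = len(votes)
    # Actual vote share for each answer.
    actual = {a: c / n for a, c in Counter(votes).items()}
    # Mean predicted vote share for each answer, averaged over respondents.
    predicted = {
        a: sum(p.get(a, 0.0) for p in predictions) / len(predictions)
        for a in actual
    }
    # The "surprisingly popular" answer maximises actual minus predicted share.
    return max(actual, key=lambda a: actual[a] - predicted[a])
```

In the classic example ("Is Philadelphia the capital of Pennsylvania?"), a majority answers "yes", but both groups predict a "yes" majority; "no" is therefore more popular than predicted, and the algorithm recovers the correct answer despite the crowd's majority being wrong.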

My original:

ChatGPT is very good at answering the kinds of question where everybody already knows the answer. It is terrible in situations where the “wisdom of crowds” fails, and the Internet (or Reddit) never had the right answer. There are known strategies to compensate for this. Your task is to build an online platform for question answering that does exactly what ChatGPT can’t, by using those social metrics to reliably identify the answers that are surprising but true.