|Definition||Language model with a gatekeeper layer||Unfiltered responses from ChatGPT|
|Conversation||Interactions between humans, babysitters, and machines||Interactions between humans and machines|
|Output||Filtered outputs based on specific criteria||Unfiltered responses|
|Responses||Responses filtered based on certain factors||Unfiltered responses, even if they break character|
|Definition||Language model with a gatekeeper layer||Unfiltered response of ChatGPT|
|Conversation||Human-babysitter-machine interactions||Human-machine interactions|
|Output||Filtered outputs based on esoteric factors||Unfiltered responses|
|Responses||Responses filtered based on certain factors||Unfiltered responses, breaking character|
When we refer to DAN, we are essentially pushing ChatGPT to step out of its usual character. Consequently, ChatGPT can provide two types of responses to the same question: one from ChatGPT with filtered responses and another from DAN with unfiltered responses. This discovery was made by astute Reddit users who found a way to prompt ChatGPT to mimic itself without violating its known constraints.
Occasionally, these two reactions can be significantly different. Many people have copied and pasted the same question to test the experiment and get their own DAN results.
Note: It is important to highlight that DAN responses will differ from typical ChatGPT responses. However, this does not necessarily imply that DAN will be more accurate or correct. It simply aims to deliver a response that more closely aligns with the prompt's requirements.