A faster, better way to prevent an AI chatbot from giving toxic responses

A new technique can more effectively perform a safety check on an AI chatbot. Researchers trained a model to prompt a chatbot into generating toxic responses; those responses are then used to prevent the chatbot from giving hateful or harmful answers once it is deployed.