Thank you for your article. I'm experimenting with GPT-3 as well. Indeed, toxic content is possible, but the thing is: GPT-3 represents a huge tract of humanity's intellectual heritage. Alas, this heritage is in many ways biased and full of stereotypes.
GPT-3 does nothing but reflect and activate such topics when they are triggered. As far as I can see, OpenAI is working hard on sensitivity and risk prevention regarding these negative effects. In the end it's up to users how they handle the system, so we have to speak not only about the ethics of AI, but also about the ethics of human-used AI. This is a very important topic, which often seems to be overshadowed by technical examinations of the implications.