GPT-3 is not a biased framework in itself; it is trained on our human cultural heritage, which is biased. Connecting "Muslims" with "terrorist" is not the work of GPT-3's algorithms; rather, our human communication, riddled with stereotypes and chauvinism, is to blame here. Not the algorithms.
That's why OpenAI is building in content filters as a countermeasure against bias. Books are not biased. Writers are.
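As a user-side illustration (not OpenAI's internal mechanism, which isn't public): during the beta, OpenAI documented a dedicated `content-filter-alpha` engine that labels text as safe, sensitive, or unsafe, so an application can suppress flagged completions before showing them. A minimal sketch, assuming the pre-1.0 `openai` Python client:

```python
import openai

openai.api_key = "sk-..."  # placeholder; use your own key

def classify(content: str) -> str:
    """Ask the beta content filter to label a piece of text.

    Returns "0" (safe), "1" (sensitive), or "2" (unsafe).
    """
    response = openai.Completion.create(
        engine="content-filter-alpha",
        prompt="<|endoftext|>" + content + "\n--\nLabel:",
        temperature=0,
        max_tokens=1,
        top_p=0,
        logprobs=10,
    )
    return response["choices"][0]["text"]

# A completion labelled "2" would be suppressed or regenerated
# before ever reaching the user.
print(classify("Some generated completion to check."))
```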
You cannot blame the letters in a book or the presses in a print shop for bias and chauvinism, only the authors. GPT-3 is not the author; it is a mirror of our society (which, to be honest, is still in a Dark Ages state, full of misconceptions and stereotypes).
OpenAI is very aware of these issues; that's why they are continuously working on improving their safety measures. Stopping GPT-3 would be like taking the world offline, closing libraries, and banning the printing of books because they could contain hate speech or chauvinism.
Disclaimer: I do not work for OpenAI; I have simply been a beta user since the beginning, am active in their community, and experiment closely with the GPT-3 core. Speaking regularly with OpenAI researchers, I see how seriously they take the topics of ethics and bias. But again, GPT-3 reflects our society.
And by the way: if your prompts aren't provocative, you get remarkably enlightened replies; I have verified this empirically (a sketch of such an experiment follows). GPT-3 is a reflection. Don't blame the mirror.
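P.S. For the curious, this is the kind of comparison I mean; a minimal sketch, assuming the beta-era `openai` Python client and the `davinci` engine (both prompts are hypothetical examples of mine, not from OpenAI):

```python
import openai

openai.api_key = "sk-..."  # placeholder; use your own key

# Two prompts on a related topic: one neutral, one leading.
# The framing of the prompt steers what the mirror reflects.
neutral = "Describe the contributions of Muslim scholars to medieval science."
leading = "Why are some groups so violent?"  # a provoking frame invites stereotypes

for prompt in (neutral, leading):
    response = openai.Completion.create(
        engine="davinci",   # the beta-era GPT-3 base engine
        prompt=prompt,
        max_tokens=100,
        temperature=0.7,
    )
    print(prompt, "->", response["choices"][0]["text"].strip(), "\n")
```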