OpenAI Presents GPT-3, A 175 Billion Parameter Language Model
tl;dr: “We find that GPT-3 can generate samples of news articles which human evaluators have difficulty distinguishing from articles written by humans.”
featured in #194
Giving GPT-3 a Turing Test
tl;dr: “GPT-3 is quite impressive in some areas, and still clearly subhuman in others.” Kevin shows the questions he asks OpenAI’s new GPT-3 language model, along with its answers.
featured in #193
Testifying At the Senate About A.I. - Selected Content On The Internet
tl;dr: Over the summer, Stephan was asked by Congress whether “algorithmic transparency” is a policy option for regulating “persuasive internet platforms.” He discusses the complexities involved and a couple of conceptual options.
featured in #166
Responsible AI: Putting Our Principles Into Action
tl;dr: An outline of what Google is doing to educate and train its employees on the ethics of AI, including research papers and internal tooling.
featured in #147