Artificial Intelligence

OpenAI Presents GPT-3, A 175-Billion-Parameter Language Model

tl;dr: “We find that GPT-3 can generate samples of news articles which human evaluators have difficulty distinguishing from articles written by humans.”

featured in #194

Giving GPT-3 a Turing Test

- Kevin Lacker tl;dr: "GPT-3 is quite impressive in some areas, and still clearly subhuman in others." Kevin shows the questions he asks OpenAI’s new GPT-3 language model, along with its answers.

featured in #193

Testifying At The Senate About A.I.-Selected Content On The Internet

- Stephen Wolfram tl;dr: Over the summer Stephen was asked by Congress whether "algorithmic transparency" is a policy option with regard to regulating "persuasive internet platforms." He discusses the complexities involved and a couple of conceptual options.

featured in #166

Responsible AI: Putting Our Principles Into Action

- Jeff Dean tl;dr: An outline of what Google is doing to educate and train its employees on the ethics of AI, including research papers and internal tooling.

featured in #147

Notes on AI Bias

- Benedict Evans tl;dr: "Machine learning finds patterns in data. ‘AI Bias’ means that it might find the wrong patterns." The article runs through examples and scenarios of bias, along with methods to troubleshoot and manage such issues.

featured in #138

The Bitter Lesson

- Richard S. Sutton tl;dr: AI research is often conducted as if available computation were constant, so researchers turn to human knowledge for leverage. However, due to Moore's Law, computational power is the defining factor in the long run.

featured in #133