Timnit Gebru’s Exit From Google Exposes a Crisis in AI

- Alex Hanna, Meredith Whittaker tl;dr: Google's firing of Timnit Gebru shows that "corporate-funded research can never be divorced from the realities of power." AI will reinforce discrimination unless action is taken: (1) Tech workers need to unionize as a "key lever for change." (2) We need protection and funding for research. (3) Regulation.

featured in #220

DALL·E: Creating Images from Text

tl;dr: "We’ve trained a neural network called DALL·E that creates images from text captions for a wide range of concepts expressible in natural language."

featured in #220

Experimenting With Automatic Video Creation From A Web Page

- Peggy Chi, Irfan Essa tl;dr: "we envision a future where creators focus on making high-level decisions and an ML model interactively suggests detailed temporal and graphical edits for a final video creation on multiple platforms."

featured in #215

OpenAI Presents GPT-3, A 175-Billion-Parameter Language Model

tl;dr: “We find that GPT-3 can generate samples of news articles which human evaluators have difficulty distinguishing from articles written by humans.”

featured in #194

Giving GPT-3 a Turing Test

- Kevin Lacker tl;dr: "GPT-3 is quite impressive in some areas, and still clearly subhuman in others." Kevin shows the questions he asks OpenAI’s new GPT-3 language model, along with its answers.

featured in #193

Testifying At The Senate About A.I.-Selected Content On The Internet

- Stephen Wolfram tl;dr: Over the summer Stephen was asked by Congress whether "algorithmic transparency" is a policy option with regard to regulating "persuasive internet platforms." He discusses the complexities involved and a couple of conceptual options.

featured in #166

Responsible AI: Putting Our Principles Into Action

- Jeff Dean tl;dr: An outline of what Google is doing to educate and train its employees on the ethics of AI, including research papers and internal tooling.

featured in #147

Notes on AI Bias

- Benedict Evans tl;dr: "Machine learning finds patterns in data. ‘AI Bias’ means that it might find the wrong patterns." The article runs through examples and scenarios of bias, along with methods to troubleshoot and manage such issues.

featured in #138

The Bitter Lesson

- Richard S. Sutton tl;dr: AI research is often conducted as if computational power were constant, so researchers lean on built-in human knowledge as leverage. However, due to Moore's Law, general methods that scale with computation win out in the long run.

featured in #133