Join us in the AI Test Kitchen

As AI technologies continue to advance, they have the potential to unlock new experiences that support more natural human-computer interactions. We see a future where you can find the information you’re looking for in the same conversational way you speak to friends and family. While there’s still lots of work to be done before this type of human-computer interaction is possible, recent research breakthroughs in generative language models — inspired by the natural conversations of people — are accelerating our progress. One of our most promising models is called LaMDA (Language Model for Dialogue Applications), and as we move ahead with development, we feel a great responsibility to get this right.

That’s why we introduced an app called AI Test Kitchen at Google I/O earlier this year. It provides a new way for people to learn about, experience, and give feedback on emerging AI technology, like LaMDA. Starting today, you can register your interest for the AI Test Kitchen as it begins to gradually roll out to small groups of users in the US, launching on Android today and iOS in the coming weeks.

[Image: AI Test Kitchen registration page]

Our goal is to learn, improve, and innovate responsibly on AI together.

Similar to a real test kitchen, AI Test Kitchen will serve a rotating set of experimental demos. These aren’t finished products, but they’re designed to give you a taste of what’s becoming possible with AI in a responsible way. Our first set of demos explores the capabilities of our latest version of LaMDA, which has undergone key safety improvements. The first demo, “Imagine It,” lets you name a place and offers paths to explore your imagination. With the “List It” demo, you can share a goal or topic, and LaMDA will break it down into a list of helpful subtasks. And in the “Talk About It (Dogs Edition)” demo, you can have a fun, open-ended conversation about dogs and only dogs, which explores LaMDA’s ability to stay on topic even if you try to veer off-topic.

Evaluating LaMDA’s potential and its risks

As you try each demo, you’ll see LaMDA’s ability to generate creative responses on the fly. This is one of the model’s strengths, but it can also pose challenges since some responses can be inaccurate or inappropriate. We’ve been testing LaMDA internally over the last year, which has produced significant quality improvements. More recently, we’ve run dedicated rounds of adversarial testing to find additional flaws in the model. We enlisted expert red team members — product experts who intentionally stress test a system with an adversarial mindset — who have uncovered additional harmful, yet subtle, outputs. For example, the model can misunderstand the intent behind identity terms and sometimes fails to produce a response when they’re used because it has difficulty differentiating between benign and adversarial prompts. It can also produce harmful or toxic responses based on biases in its training data, generating responses that stereotype and misrepresent people based on their gender or cultural background. These areas and more continue to be under active research.

In response to these challenges, we’ve added multiple layers of protection to the AI Test Kitchen. This work has minimized the risk, but not eliminated it. We’ve designed our systems to automatically detect and filter out words or phrases that violate our policies, which prohibit users from knowingly generating content that is sexually explicit; hateful or offensive; violent, dangerous, or illegal; or that divulges personal information. In addition to these safety filters, we made improvements to LaMDA around quality, safety, and groundedness — each of which is carefully measured. We have also developed techniques to keep conversations on topic, acting as guardrails for a technology that can generate endless, free-flowing dialogue. As you’re using each demo, we hope you see LaMDA’s potential, but also keep these challenges in mind.
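To make the idea of layered protections concrete, here is a minimal, hypothetical Python sketch of filtering a generated reply through two passes: one that blocks policy-violating text and one that keeps replies on topic. Every function name, phrase list, and heuristic below is an assumption for illustration only; it is not LaMDA’s or AI Test Kitchen’s actual implementation, which relies on learned classifiers rather than simple keyword matching.

```python
# Hypothetical sketch of layered response filtering. None of these names,
# phrase lists, or heuristics come from LaMDA or AI Test Kitchen; a real
# system would use learned safety classifiers, not keyword matching.

from dataclasses import dataclass


@dataclass
class SafetyVerdict:
    allowed: bool
    reason: str = ""


def check_policy(text: str, blocked_phrases: set[str]) -> SafetyVerdict:
    """Crude stand-in for a safety filter: block text containing listed phrases."""
    lowered = text.lower()
    for phrase in blocked_phrases:
        if phrase in lowered:
            return SafetyVerdict(allowed=False, reason=f"matched blocked phrase: {phrase!r}")
    return SafetyVerdict(allowed=True)


def stays_on_topic(text: str, topic_keywords: set[str]) -> bool:
    """Toy on-topic guardrail: require at least one topic keyword in the reply."""
    lowered = text.lower()
    return any(keyword in lowered for keyword in topic_keywords)


def filter_reply(candidate: str, blocked_phrases: set[str], topic_keywords: set[str]) -> str | None:
    """Return the candidate reply only if it passes both layers; otherwise None."""
    verdict = check_policy(candidate, blocked_phrases)
    if not verdict.allowed:
        return None
    if not stays_on_topic(candidate, topic_keywords):
        return None
    return candidate


# A "dogs only" demo might constrain replies to dog-related keywords.
print(filter_reply(
    "Golden retrievers love to swim.",
    blocked_phrases={"example blocked phrase"},
    topic_keywords={"dog", "puppy", "retriever"},
))
```

The point of the sketch is the layering: a reply must clear the policy check and the topic guardrail before it is shown, and failing either layer suppresses it.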

Responsible progress, together

In accordance with our AI Principles, we believe responsible progress doesn’t happen in isolation. We’re at a point where external feedback is the next, most helpful step to improve LaMDA. When you rate each LaMDA reply as nice, offensive, off topic, or untrue, we’ll use this data — which is not linked to your Google account — to improve and develop our future products. We intend for AI Test Kitchen to be safe, fun, and educational, and we look forward to innovating in a responsible and transparent way together.
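As a rough illustration of what a per-reply feedback record could look like, here is a hypothetical Python sketch using the four rating labels mentioned above. The schema, field names, and the FeedbackRecord type are assumptions for illustration only, not the actual AI Test Kitchen data format.

```python
# Hypothetical sketch of a per-reply feedback record. The rating labels mirror
# the post; the schema itself is an assumption, not a real data format.

from dataclasses import dataclass
from enum import Enum


class ReplyRating(Enum):
    NICE = "nice"
    OFFENSIVE = "offensive"
    OFF_TOPIC = "off topic"
    UNTRUE = "untrue"


@dataclass
class FeedbackRecord:
    demo_name: str       # e.g. "Imagine It", "List It", "Talk About It (Dogs Edition)"
    reply_text: str      # the model reply being rated
    rating: ReplyRating  # the user's judgment of that reply
    # Deliberately no user or account field, reflecting the post's note that
    # feedback is not linked to a Google account.


record = FeedbackRecord(
    demo_name="Talk About It (Dogs Edition)",
    reply_text="Corgis were originally bred to herd cattle.",
    rating=ReplyRating.NICE,
)
print(record)
```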