
Image: Newshounds by Squiz Kids in the classroom


This week, UNESCO’s Media Literacy Week is focused on nurturing trust in media and information. There’s no better time to educate and empower people to be confident consumers of media than at school, which is why Squiz Kids has partnered with the Google News Initiative to roll out its media literacy program ‘Newshounds’ to primary schools across New Zealand. 



Squiz Kids, a daily news podcast for 8- to 12-year-olds, has developed Newshounds by Squiz Kids as a plug-and-play media literacy teaching resource comprising eight 10-minute podcasts and accompanying in-classroom activities, packaged in an engaging board-game-style format. 



Squiz-E the Newshound takes primary-aged kids on a media literacy journey, teaching them to understand the myriad forms of media to which they’re exposed every day and recognise the multiple agendas that drive them. Underpinning it all are exercises that give kids the skills to identify misinformation and disinformation. 



“Kids today have more information coming at them on a daily basis than at any other time in history,” said Squiz Kids Director Bryce Corbett. “We created Newshounds to make kids critical consumers of media: to teach them to stop, think and check before believing everything they come across on the internet. Teachers and parents alike know it’s important to teach their children media literacy, but few know where to start. By partnering with Google, we hope Newshounds will start conversations with adults that help kids recognise online fact from fiction.”



The partnership with Google will allow classrooms across New Zealand to access the Newshounds media literacy program for free from this week.


The Manaiakalani Education schools in Tāmaki Makaurau have been piloting the programme in their classrooms over the past few months. They found that students were engaged by the content and, most importantly, were transferring these concepts to other areas of their learning when online.



Listeners, readers and viewers are incredibly powerful in the fight against misinformation: the more they demand quality information, the better the chance facts have of winning the battle. But those audiences need support. 


Understanding the many complex elements that go into deciding what is fact and what is falsehood starts at an early age, which is why we’re so proud to work with Squiz Kids to launch Newshounds in New Zealand schools. This partnership builds on our efforts to build a vibrant, diverse and innovative news industry in Aotearoa. In August we launched Google News Showcase and other Google News Initiative programmes that continue our long-term support: helping people find quality journalism and contributing to the sustainability of news organisations.



Teachers are invited to create a free account at newshounds.squizkids.com.au and start their class on the path to media literacy.


Image: A screenshot of a phone with the text "AI Test Kitchen" and an illustration of a stove


As AI technologies continue to advance, they have the potential to unlock new experiences that support more natural interactions with computers. We see a future where you can find the information you’re looking for in the same conversational way you speak to friends and family. While there’s still lots of work to be done before this type of human-computer interaction is possible, recent research breakthroughs in generative language models — inspired by the natural conversations of people — are accelerating our progress. One of our most promising models is called LaMDA (Language Model for Dialogue Applications), and as we move ahead with development, we feel a great responsibility to get this right.


That’s why we introduced an app called AI Test Kitchen at Google I/O earlier this year. It provides a new way for people to learn about, experience, and give feedback on emerging AI technology, like LaMDA. Starting today, you can register your interest for the AI Test Kitchen as it begins to gradually roll out in New Zealand on Android and iOS.


Image: AI Test Kitchen registration page

Our goal is to learn, improve and innovate responsibly on AI together.


Similar to a real test kitchen, AI Test Kitchen will serve a rotating set of experimental demos. These aren’t finished products, but they’re designed to give you a taste of what’s becoming possible with AI in a responsible way. Our first set of demos explores the capabilities of our latest version of LaMDA, which has undergone key safety improvements. The first demo, “Imagine It,” lets you name a place and offers paths to explore your imagination. With the “List It” demo, you can share a goal or topic, and LaMDA will break it down into a list of helpful subtasks. And in the “Talk About It (Dogs Edition)” demo, you can have a fun, open-ended conversation about dogs and only dogs, which explores LaMDA’s ability to stay on topic even if you try to veer off-topic.


Evaluating LaMDA’s potential and its risks

As you try each demo, you’ll see LaMDA’s ability to generate creative responses on the fly. This is one of the model’s strengths, but it can also pose challenges, since some responses can be inaccurate or inappropriate. We’ve been testing LaMDA internally over the last year, which has produced significant quality improvements. More recently, we’ve run dedicated rounds of adversarial testing to find additional flaws in the model. We enlisted expert red teamers — product experts who intentionally stress-test a system with an adversarial mindset — who uncovered additional harmful, yet subtle, outputs. For example, the model can misunderstand the intent behind certain terms and sometimes fails to produce a response when they’re used, because it has difficulty differentiating between benign and adversarial prompts. It can also produce harmful or toxic responses based on biases in its training data, generating responses that stereotype and misrepresent people based on their gender or cultural background. These areas and more continue to be under active research.


In response to these challenges, we’ve added multiple layers of protection to the AI Test Kitchen. This work has minimised the risk, but not eliminated it. We’ve designed our systems to automatically detect and filter out words or phrases that violate our policies, which prohibit users from knowingly generating content that is sexually explicit; hateful or offensive; violent, dangerous, or illegal; or divulges personal information. In addition to these safety filters, we made improvements to LaMDA around quality, safety, and groundedness — each of which is carefully measured. We have also developed techniques to keep conversations on topic, acting as guardrails for a technology that can generate endless, free-flowing dialogue. As you’re using each demo, we hope you see LaMDA’s potential, but also keep these challenges in mind.
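To make the idea of "layered protections" concrete, here is a deliberately simplified sketch of how a blocklist filter and an on-topic guardrail could be layered in front of a model's output. This is purely illustrative: the phrases, keywords, and function names below are invented for demonstration and do not reflect Google's actual systems.

```python
# Toy sketch of layered output protections (illustrative only).

# Layer 1: a blocklist of policy-violating phrases (hypothetical examples).
POLICY_BLOCKLIST = {"divulge personal information", "explicit content"}

def violates_policy(text: str) -> bool:
    """Return True if the text contains any blocklisted phrase."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in POLICY_BLOCKLIST)

# Layer 2: a crude on-topic guardrail, in the spirit of the dogs-only demo.
TOPIC_KEYWORDS = {"dog", "puppy", "breed", "bark"}

def on_topic(text: str) -> bool:
    """Return True if the text mentions at least one topic keyword."""
    return bool(set(text.lower().split()) & TOPIC_KEYWORDS)

def filter_response(candidate: str) -> str:
    """Apply both layers; fall back to a safe reply if any check fails."""
    if violates_policy(candidate) or not on_topic(candidate):
        return "Let's talk about dogs instead!"
    return candidate
```

Real systems would of course rely on learned classifiers rather than keyword matching, but the layered structure — each check independently able to veto a response — is the point being illustrated.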


Responsible progress, together

In accordance with our AI Principles, we believe responsible progress doesn’t happen in isolation. We’re at a point where external feedback is the next, most helpful step to improve LaMDA. When you rate each LaMDA reply as nice, offensive, off topic, or untrue, we’ll use this data — which is not linked to your Google account — to improve and develop our future products. We intend for AI Test Kitchen to be safe, fun, and educational, and we look forward to innovating in a responsible and transparent way together.

