At Google, we recognise the importance of working together to create a safer online world for everyone. In addition to the work we do to improve our systems and protect you from scams, including using the latest advancements in AI, we’ve been partnering closely with industry, community groups and the Government to further our work to protect Kiwis from scams.
Today, I'm pleased to share that we're working as part of the newly formed New Zealand Anti-Scam Alliance - a national effort to reduce the number of Kiwis falling victim to online financial scams. Recently, I had the pleasure of meeting with Minister Scott Simpson (Commerce and Consumer Affairs) to share the news that we’re committing to the Australian Online Scams Code (AOSC) in New Zealand. This builds on our existing work, including last year’s launch of Financial Services Verification to help combat scammers. To date, we are the only platform implementing financial services advertiser verification in New Zealand.
In an effort to keep Kiwis updated, we’re sharing three scam trends we’re tackling, alongside our collaboration with industry and government and how the latest AI is helping to improve online safety. By sharing the tactics commonly used by scammers, we can grow our collective understanding and become more vigilant in spotting and reporting threats:
Exploitation of major events: Scammers use AI to quickly adapt to topical moments, such as ‘end of financial year sales’ and breaking news, targeting everything from concerts and sporting events to natural disasters, when people feel pressured to act fast. We strengthen our monitoring and enforcement during major events and crises, and have dedicated sensitive events policies that prohibit products or services that exploit, dismiss or condone a sensitive event.
Safety tips: Be skeptical of “too good to be true” deals. Buy tickets and donate only through official channels. Verify charities and check URLs before clicking. Use the ‘About this result’ feature on Search for source information.
Malvertising: Increasingly, our teams have observed scammers targeting more sophisticated users — those with valuable assets like crypto wallets or individuals with significant online influence. Scammers may use malvertising as an initial step in their attack cycle, often trying to convince users their software is safe, even urging them to ignore security warnings or turn off antivirus protection. Google actively works with trusted advertisers and partners to help prevent malware in ads through a combination of AI and human review. Accounts that violate Google Ads malicious software policies are immediately suspended.
Safety tips: Download software from official sources and verify the URL. Be wary of offers of free versions of licensed software. Use Enhanced Safe Browsing to detect hidden risks when downloading encrypted files.
Package tracking and toll road scams: These scams send fraudulent messages that appear to come from legitimate sources, tricking users into paying additional "fees" related to the delivery of items or road tolls. In Google Messages, we launched Scam Detection, which uses powerful on-device AI to detect suspicious patterns in SMS, MMS and RCS messages and warn users.
Safety tips: Check website URLs before entering information and make sure the address is legitimate (look for "https" and no typos). Be mindful of what you share online, especially personal and financial details.
We're committed to protecting people from these threats, and we want to empower Kiwis with the knowledge and tools to stay safe too.
Remember, staying safe online is a shared responsibility. For more information and resources on how to stay safe online, visit Google’s Safety Centre.
AI presents a significant opportunity for economic growth and workforce transformation in Asia-Pacific. But many people don’t have the skills they need to make the most of this opportunity. While 58% of people in Asia-Pacific are excited about AI’s potential, only 15% have received AI training and most are unaware this type of training exists, according to a new report from AVPN, a network of social investors. This illustrates the critical need for the Google.org AI Opportunity Fund: Asia-Pacific. Our goal is to help everyone in the region unlock the opportunities of AI.
Today, we’re introducing 49 social impact organisations that will receive funding from the first phase of the Fund, selected by AVPN through their open call and supported by the Asian Development Bank (ADB), including Literacy Waitākere of Tāmaki Makaurau Auckland, to provide contextualized and localized AI training and resources to those who need it most.
Literacy Waitākere provides accessible programs for adult learners to develop critical literacy and digital skills, promoting lifelong learning and enabling pathways to employment and independence.
Sue West, CEO of Literacy Waitākere, said: "We see firsthand that many in our community risk being left behind as AI transforms our world. Our goal is to demystify AI for learners who are still building foundational digital literacy, and often have low or no formal qualifications and primarily use only their phones. This support from the Google.org AI Opportunity Fund: Asia-Pacific is crucial as it empowers us to extend our 'Digital Skills for Life' programme into essential AI literacy. We want to explicitly teach those in our community how to use AI tools effectively, how to craft good prompts, and, critically, how to evaluate AI outputs responsibly and safely.
"Many employers will soon expect these skills, and AI is already impacting recruitment. We aim to equip our learners to not just find jobs or switch careers, but to thrive in them. By building their confidence and competence with AI, we’re fostering digital equity and preparing them to be assets in their workplaces. We believe this knowledge will create a ripple effect, as participants share their learning with families and communities, ensuring more people can navigate and benefit from the AI-powered future, leading to increased prosperity and inclusion for all."
To help even more people across Asia-Pacific learn how to use AI, today Google.org is also announcing a $12M expansion of the Google.org AI Opportunity Fund: Asia-Pacific, helping workers, small businesses and nonprofits make the most of technology in their communities. Interested organisations can find out more information here. Together with the first phase, the Fund aims to train 720,000 workers, support 100,000 micro-, small- and medium-sized enterprises (MSMEs) and reach 10,000 nonprofits in the region.
A shift towards an AI-powered economy needs to be fair and inclusive, so all workers have the necessary knowledge and tools to participate. Through the Google.org AI Opportunity Fund: Asia-Pacific and our ongoing collaborations, we're committed to making sure everyone can benefit from the potential of AI.
Today, we're celebrating Waitangi Day in New Zealand with a special Google Doodle! Illustrated by guest artist Jordan Tuhura of Tuatahi Creative, the Doodle beautifully captures the essence of this important day.
It focuses on the themes of rangatiratanga (leadership) and partnership, reflecting the spirit of Te Tiriti o Waitangi (the Treaty of Waitangi) signed on February 6, 1840. It also symbolizes New Zealand's ongoing journey towards unity and understanding.
At Google, we recognize Waitangi Day's significance to Aotearoa’s history and its role as the foundation for the country's diverse landscape. We've been celebrating Waitangi Day through Google Doodles since 2018, with each year's artwork offering a unique interpretation of the positivity and unity that the Treaty represents.
At the heart of our 2025 Doodle, two figures representing Māori and the British Crown engage in a hongi, a traditional Māori greeting that signifies the sharing of life and connection. They are flanked by four powerful pou (carved figures) standing as guardians, representing the strength and resilience of the Māori people.
The Doodle seamlessly blends traditional Māori art forms with modern digital techniques. It serves as a powerful reminder of the partnership at the heart of Waitangi Day and New Zealand's commitment to building a future where different cultures coexist in harmony.
We're dedicated to celebrating and preserving Māori culture, language, and histories, and helping global audiences learn about them. From supporting te reo Māori in our products to showcasing the beauty of Aotearoa through initiatives like Google Doodle, we strive to honor the unique heritage of New Zealand and share it with the world.
We are also sharing the frontiers of our agentic research by showcasing prototypes enabled by Gemini 2.0’s native multimodal capabilities.
Gemini 2.0 Flash builds on the success of 1.5 Flash, our most popular model yet for developers, with enhanced performance at similarly fast response times. Notably, 2.0 Flash even outperforms 1.5 Pro on key benchmarks, at twice the speed. 2.0 Flash also comes with new capabilities. In addition to supporting multimodal inputs like images, video and audio, 2.0 Flash now supports multimodal output like natively generated images mixed with text and steerable text-to-speech (TTS) multilingual audio. It can also natively call tools such as Google Search and code execution, as well as third-party user-defined functions.
Our goal is to get our models into people’s hands safely and quickly. Over the past month, we’ve been sharing early, experimental versions of Gemini 2.0, getting great feedback from developers.
Gemini 2.0 Flash is available now as an experimental model to developers via the Gemini API in Google AI Studio and Vertex AI, with multimodal input and text output available to all developers, and text-to-speech and native image generation available to early-access partners. General availability will follow in January, along with more model sizes.
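As a rough illustration of what getting started might look like, here is a minimal sketch of calling the experimental model through the Gemini API with the google-genai Python SDK. The model identifier and API key placeholder are assumptions; check the developer documentation in Google AI Studio for the exact names.

```python
# Minimal sketch: text generation with Gemini 2.0 Flash via the Gemini API.
# Assumes the google-genai Python SDK (pip install google-genai) and an API key
# created in Google AI Studio. The experimental model identifier
# "gemini-2.0-flash-exp" is an assumption and may change at general availability.
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")

response = client.models.generate_content(
    model="gemini-2.0-flash-exp",
    contents="Summarise what an agentic AI assistant can do in two sentences.",
)
print(response.text)
```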
To help developers build dynamic and interactive applications, we’re also releasing a new Multimodal Live API that has real-time audio and video-streaming input and the ability to use multiple, combined tools. More information about 2.0 Flash and the Multimodal Live API can be found in our developer blog.
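For a sense of how a streaming session over the Multimodal Live API might be set up, here is a hedged sketch using the same google-genai Python SDK. The session methods shown (aio.live.connect, send, receive) follow the early quickstart and should be treated as assumptions to verify against the developer blog.

```python
# Hedged sketch: a single text turn over the Multimodal Live API.
# Assumes the async live client in the google-genai Python SDK; method names and
# the config format follow the early quickstart and may differ from the current API.
import asyncio
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")

async def main():
    config = {"response_modalities": ["TEXT"]}
    async with client.aio.live.connect(
        model="gemini-2.0-flash-exp", config=config
    ) as session:
        # Send one user turn, then stream the model's response as it arrives.
        await session.send(input="What tools can you call natively?", end_of_turn=True)
        async for message in session.receive():
            if message.text:
                print(message.text, end="")

asyncio.run(main())
```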
Also starting today, Gemini users globally can access a chat-optimized version of 2.0 Flash Experimental by selecting it in the model drop-down on desktop and mobile web, and it will be available in the Gemini mobile app soon. With this new model, users can experience an even more helpful Gemini assistant.
Early next year, we’ll expand Gemini 2.0 to more Google products.
Gemini 2.0 Flash’s native user-interface action capabilities, along with other improvements like multimodal reasoning, long-context understanding, complex instruction following and planning, compositional function-calling, native tool use and improved latency, all work in concert to enable a new class of agentic experiences.
The practical application of AI agents is a research area full of exciting possibilities. We’re exploring this new frontier with a series of prototypes that can help people accomplish tasks and get things done. These include an update to Project Astra, our research prototype exploring future capabilities of a universal AI assistant; the new Project Mariner, which explores the future of human-agent interaction, starting with your browser; and Jules, an AI-powered code agent that can help developers.
We’re still in the early stages of development, but we’re excited to see how trusted testers use these new capabilities and what lessons we can learn, so we can make them more widely available in products in the future.
Since we introduced Project Astra at I/O, we’ve been learning from trusted testers using it on Android phones. Their valuable feedback has helped us better understand how a universal AI assistant could work in practice, including implications for safety and ethics. Improvements in the latest version built with Gemini 2.0 include:
Better dialogue: Project Astra now has the ability to converse in multiple languages and in mixed languages, with a better understanding of accents and uncommon words.
New tool use: With Gemini 2.0, Project Astra can use Google Search, Lens and Maps, making it more useful as an assistant in your everyday life.
Better memory: We’ve improved Project Astra’s ability to remember things while keeping you in control. It now has up to 10 minutes of in-session memory and can remember more conversations you had with it in the past, so it is better personalized to you.
Improved latency: With new streaming capabilities and native audio understanding, the agent can understand language at about the latency of human conversation.
We’re working to bring these types of capabilities to Google products like the Gemini app, our AI assistant, and to other form factors like glasses. And we’re starting to expand our trusted tester program to more people, including a small group that will soon begin testing Project Astra on prototype glasses.
Project Mariner is an early research prototype built with Gemini 2.0 that explores the future of human-agent interaction, starting with your browser. As a research prototype, it’s able to understand and reason across information on your browser screen, including pixels and web elements like text, code, images and forms, and then use that information via an experimental Chrome extension to complete tasks for you.
When evaluated against the WebVoyager benchmark, which tests agent performance on end-to-end real-world web tasks, Project Mariner achieved a state-of-the-art result of 83.5% working as a single-agent setup.
It’s still early, but Project Mariner shows that it’s becoming technically possible to navigate within a browser, even though it’s not always accurate and can be slow to complete tasks today, both of which will improve rapidly over time.
To build this safely and responsibly, we’re conducting active research on new types of risks and mitigations, while keeping humans in the loop. For example, Project Mariner can only type, scroll or click in the active tab on your browser and it asks users for final confirmation before taking certain sensitive actions, like purchasing something.
Trusted testers are starting to test Project Mariner using an experimental Chrome extension now, and we’re beginning conversations with the web ecosystem in parallel.
Next, we’re exploring how AI agents can assist developers with Jules — an experimental AI-powered code agent that integrates directly into a GitHub workflow. It can tackle an issue, develop a plan and execute it, all under a developer’s direction and supervision. This effort is part of our long-term goal of building AI agents that are helpful in all domains, including coding.
More information about this ongoing experiment can be found in our developer blog post.
Google DeepMind has a long history of using games to help AI models become better at following rules, planning and logic. Just last week, for example, we introduced Genie 2, our AI model that can create an endless variety of playable 3D worlds — all from a single image. Building on this tradition, we’ve built agents using Gemini 2.0 that can help you navigate the virtual world of video games. They can reason about the game based solely on the action on the screen, and offer up suggestions for what to do next in real-time conversation.
We're collaborating with leading game developers like Supercell to explore how these agents work, testing their ability to interpret rules and challenges across a diverse range of games, from strategy titles like “Clash of Clans” to farming simulators like “Hay Day.”
Beyond acting as virtual gaming companions, these agents can even tap into Google Search to connect you with the wealth of gaming knowledge on the web.
In addition to exploring agentic capabilities in the virtual world, we’re experimenting with agents that can help in the physical world by applying Gemini 2.0's spatial reasoning capabilities to robotics. While it’s still early, we’re excited about the potential of agents that can assist in the physical environment.
You can learn more about these research prototypes and experiments at labs.google.
Gemini 2.0 Flash and our research prototypes allow us to test and iterate on new capabilities at the forefront of AI research that will eventually make Google products more helpful.
As we develop these new technologies, we recognize the responsibility they entail, and the many questions AI agents open up for safety and security. That is why we are taking an exploratory and gradual approach to development, conducting research on multiple prototypes, iteratively implementing safety training, working with trusted testers and external experts, and performing extensive risk assessments and safety and assurance evaluations.
For example:
As part of our safety process, we’ve worked with our Responsibility and Safety Committee (RSC), our longstanding internal review group, to identify and understand potential risks.
Gemini 2.0's reasoning capabilities have enabled major advancements in our AI-assisted red teaming approach, including the ability to go beyond simply detecting risks to now automatically generating evaluations and training data to mitigate them. This means we can more efficiently optimize the model for safety at scale.
As Gemini 2.0’s multimodality increases the complexity of potential outputs, we’ll continue to evaluate and train the model across image and audio input and output to help improve safety.
With Project Astra, we’re exploring potential mitigations against users unintentionally sharing sensitive information with the agent, and we’ve already built in privacy controls that make it easy for users to delete sessions. We’re also continuing to research ways to ensure AI agents act as reliable sources of information and don’t take unintended actions on your behalf.
With Project Mariner, we’re working to ensure the model learns to prioritize user instructions over third-party attempts at prompt injection, so it can identify potentially malicious instructions from external sources and prevent misuse. This helps protect users from fraud and phishing attempts delivered through things like malicious instructions hidden in emails, documents or websites.
We firmly believe that the only way to build AI is to be responsible from the start and we'll continue to prioritize making safety and responsibility a key element of our model development process as we advance our models and agents.
Gemini 2.0, AI agents and beyond
Today’s releases mark a new chapter for our Gemini model. With the release of Gemini 2.0 Flash, and the series of research prototypes exploring agentic possibilities, we have reached an exciting milestone in the Gemini era. And we’re looking forward to continuing to safely explore all the new possibilities within reach as we build towards AGI.
What was on Kiwis' minds?
Kiwis had their eyes fixed on the world stage in 2024, with the US election dominating trending search queries. Sports captivated the nation. From the UEFA European Football Championship and Cricket T20 World Cup to the All Blacks’ rugby clash against England and the Australian Open, Kiwis proved once again they are sports fanatics.
A curious trend emerged this year: a surge in searches for the humble flat white. Perhaps it was fuelled by a rekindled debate about its origins - was it invented in New Zealand or Australia? Whatever the reason, this iconic Kiwi beverage struck a chord, landing a spot in the Top 10.
The tragic passing of Liam Payne sent shockwaves through the nation, sparking a wave of searches driven by One Direction nostalgia and serving as a stark reminder of life's fragility. In a year marked by uncertainty, Kiwis sought escapism: flexing their vocabulary with the New York Times' Connections game or indulging their bargain-hunting instincts on Temu.
"Raygun", inspired by Australian Olympian Rachael Gunn's breakdancing, was a viral sensation in New Zealand, topping the memes chart. The "Demure" trend, with its emphasis on kindness and composure, resonated with Kiwis’ friendly and welcoming spirit. Classic memes like “What’s up brother” and “Knee surgery” also proved popular, showing that Kiwis ultimately appreciate a good laugh above all else.
How-to searches reveal a nation eager to learn
Kiwis' "how-to" searches in 2024 reveal a nation eager to learn, adapt and explore the digital world. "How to watch the Olympics in NZ" emerged as #1, showing a sporting nation keen to catch the action even from afar. Beyond sports, “How to lock Facebook profile" reflects a growing desire for online privacy.
“How to make human in Infinite Craft" and "How to say Happy Matariki in te reo” showcased a country embracing both digital innovation and cultural heritage. And who could forget the "How to mew" trend? This tongue-positioning technique, promising a sculpted jawline, demonstrates the unpredictable nature of online trends.
Finally, no Kiwi year would be complete without the America's Cup, with "How to watch America's Cup in NZ" rounding out the top searches.
This year's Google searches paint a vivid picture of New Zealand in 2024: a nation connected to the world yet proud of its unique identity, embracing both tradition and technology, with a keen interest in everything from sports and current events to quirky online trends.
To bring the year's top trending searches to life visually, we collaborated with the Kākano Youth Arts Collective, a programme that supports vulnerable young artists. Our collaborating artist has brilliantly used birds to show some of the key moments of 2024 – very cool and very Kiwi!