- Includes transparency markers to identify AI-generated content
- Moving forward with broad consultation on responsible use of AI
Artificial Intelligence (AI) is changing how we understand the world around us, how we solve problems, and how we govern. The trust, transparency, and authenticity of the information and media we consume are all affected by the use of AI tools.
The Sunday Morning discussed this changing landscape, and the responsibilities that come with it, with Meta Director of Public Policy – South and Central Asia Sarim Aziz.
Following are excerpts:
What role does Meta see itself playing in shaping international norms or standards for responsible AI development and deployment?
To realise the benefits of open source, models must be released responsibly. For each model we develop, we weigh the benefits of an open release against the potential risks, and we assess and mitigate those risks during development. We also help others do the same when they use our models in the context of their own specific use cases.
We work closely with policymakers, academia, civil society, and industry partners to address the issues people care about most, including safety, privacy, fairness, and ethical use. We also empower the AI community to develop and use our models responsibly through resources such as our Responsible Use Guide, which outlines best practices and considerations for developers, along with mitigation strategies and resources for addressing risks at various points in the system.
These resources include the evaluation and safety tools that we make openly available.
What principles guide Meta’s decisions around transparency – such as model explainability, data provenance, and system limitations – in consumer-facing AI products?
Transparency is a core principle in how we build and deploy AI at Meta. We are committed to helping people understand how our AI models work, what data they are trained on, and the limitations they have. For example, our Generative AI guide in the Privacy Centre explains how we build AI models, how AI features work, and the choices and data privacy rights people have.
For AI-generated content, we employ transparency measures including visible markers on images created or edited using Meta AI, as well as invisible watermarks and metadata embedded within image files. These help people understand when content is AI-generated.
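As an illustration of what embedded metadata can look like, the short Python sketch below writes and then checks a plain-text provenance note in an image's EXIF data using the Pillow library. This is a minimal toy example under its own assumptions, not Meta's actual mechanism: the AI_NOTE marker and file paths are hypothetical, and real transparency systems rely on richer, more tamper-resistant signals such as invisible watermarks and industry metadata standards.

```python
# Toy illustration only: a plain EXIF tag standing in for the richer
# provenance metadata described in the interview. Not Meta's mechanism.
from PIL import Image

AI_NOTE = "Generated with an AI tool"   # hypothetical marker text
IMAGE_DESCRIPTION = 0x010E              # standard EXIF ImageDescription tag

def tag_image(src_path: str, dst_path: str) -> None:
    """Copy an image, embedding a plain-text provenance note in its EXIF data."""
    img = Image.open(src_path)
    exif = img.getexif()
    exif[IMAGE_DESCRIPTION] = AI_NOTE
    img.save(dst_path, exif=exif)

def is_tagged(path: str) -> bool:
    """Return True if the image carries the provenance note."""
    return Image.open(path).getexif().get(IMAGE_DESCRIPTION) == AI_NOTE

tag_image("photo.jpg", "photo_tagged.jpg")  # hypothetical input file
print(is_tagged("photo_tagged.jpg"))        # True
```

Because plain EXIF fields like this are easy to strip or edit, production systems pair such metadata with invisible watermarks designed to survive cropping and re-encoding.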
How does Meta reconcile its commitment to openness in AI research (e.g. Llama models) with growing concerns about misuse and safety risks?
Meta believes that openness and safety are complementary goals in AI development. Our open-source approach allows the AI community to inspect, evaluate, and improve our models, supporting a global standard that prioritises safety, fairness, and accountability. Thanks to this openness, our Llama models have been downloaded and used over one billion times, empowering developers and companies to innovate and create jobs.
Before releasing any model, we conduct extensive risk assessments, red-teaming exercises, and safety fine-tuning to mitigate the risk of misuse or harmful outcomes. We also evaluate our models on an ongoing basis and engage with the community to address emerging risks proactively, recognising that not all actors intend to use AI responsibly.
How does Meta approach setting internal AI governance policies, especially in light of rapidly evolving global regulatory frameworks like the European Union’s (EU) AI Act?
Meta takes a proactive and comprehensive approach to AI governance, integrating evolving regulatory requirements with our internal standards for safety, privacy, and ethics. We have robust privacy review processes that assess risks related to data collection, use, and sharing in AI development. Our governance includes ongoing monitoring, mitigation strategies, and compliance with applicable laws.
We engage with regulators, policymakers, and experts globally to ensure our AI products meet high standards for safety and data protection. Regulatory overlap and overreach in the EU risk throttling AI development and deployment. Over 40 leading European companies, which together employ hundreds of thousands of people, have already publicly raised concerns about the EU’s AI regulatory regime and its impact on the EU’s AI competitiveness.
How is Meta working to ensure its AI systems do not unintentionally reinforce biases, especially across global, multicultural user bases?
Like all generative AI systems, our models can return inaccurate or inappropriate outputs, and we will continue to address these issues and improve these features as they evolve and as more people share their feedback.
Addressing bias is an ongoing effort, and we continuously improve our models based on research, feedback, and new data to ensure AI works well for diverse global communities.
With AI-generated content proliferating, how is Meta adapting its content moderation and authenticity policies to maintain platform trust and safety?
We label AI-generated content with visible ‘AI Info’ labels and embed invisible watermarks and metadata to promote transparency. Our moderation systems use a combination of human reviewers and automated technology to detect and reduce harmful or misleading AI-generated content.
We apply our Community Standards consistently, regardless of whether content is created by AI or people. We also invest in ongoing research and collaboration with industry partners to improve detection of manipulated media, deepfakes, scams, and other risks associated with AI content.
Our goal is to maintain platform trust and safety while enabling creative and positive uses of AI-generated content.
How is Meta collaborating with other tech companies, academia, and governments to develop shared guardrails for frontier AI models?
Meta actively collaborates with a broad ecosystem of partners to develop shared guardrails for frontier AI models. We work with other tech companies, academic institutions, civil society, and governments to address safety, ethical, and regulatory challenges.
Our open source Llama models are widely used by researchers, developers, and government agencies, including those focused on defence and national security. We partner with organisations including Accenture Federal Services and Booz Allen to support responsible AI adoption.
We participate in industry initiatives and policy discussions to promote standards of openness, transparency, and accountability. By sharing research, safety tools, and best practices, we aim to build a global AI ecosystem that balances innovation with robust safeguards.