---
title: "OECD report finds growing transparency efforts among leading AI developers"
type: "News"
locale: "zh-CN"
url: "https://longbridge.com/zh-CN/news/258979340.md"
description: "The OECD report highlights that leading AI developers are enhancing transparency and risk management practices. Companies like Google, Microsoft, and OpenAI are adopting advanced methods such as adversarial testing to improve AI reliability. The report emphasizes the importance of sharing risk management information to foster trust and innovation. However, tools for technical provenance remain limited. The OECD's voluntary reporting framework aims to standardize transparency expectations and support safe AI development, building on the Hiroshima AI Process initiated under Japan's G7 Presidency in 2023."
datetime: "2025-09-26T03:00:53.000Z"
locales:
  - [zh-CN](https://longbridge.com/zh-CN/news/258979340.md)
  - [en](https://longbridge.com/en/news/258979340.md)
  - [zh-HK](https://longbridge.com/zh-HK/news/258979340.md)
---

> Supported languages: [English](https://longbridge.com/en/news/258979340.md) | [Traditional Chinese](https://longbridge.com/zh-HK/news/258979340.md)

# OECD report finds growing transparency efforts among leading AI developers

PARIS: Leading AI developers are taking significant steps to make their systems more robust and secure, according to a new OECD report. *How are AI developers managing risks? Insights from responses to the reporting framework of the Hiroshima AI Process Code of Conduct* analyses voluntary transparency reporting under the G7 Hiroshima AI Process from technology and telecommunications companies as well as advisory, research, and educational institutions, including Anthropic, Google, Microsoft, NTT, OpenAI, Salesforce and Fujitsu.

The analysis shows that many organisations are developing increasingly sophisticated methods to evaluate and mitigate risks, including adversarial testing and AI-assisted tools to better understand model behaviour and improve reliability.
Larger technology firms tend to have more advanced practices, particularly in assessing systemic and society-wide risks. The report also finds that key AI actors increasingly recognise the importance of sharing information about risk management to build trust, enable peer learning and create more predictable environments for innovation and investment. However, technical provenance tools such as watermarking, cryptographic signatures, and content credentials remain limited beyond some large firms.

“Greater transparency is key to building trust in artificial intelligence and accelerating its adoption. By providing common reference points, voluntary reporting can help disseminate best practices, reduce regulatory fragmentation, and promote the uptake of AI across the economy, including by smaller firms,” said Jerry Sheehan, Director for Science, Technology and Innovation at the OECD.

“As we define common transparency expectations, the Hiroshima AI Process Reporting Framework can play a valuable role by streamlining the reporting process. Going forward, it could also help align organisations on emerging reporting expectations as AI technology and governance practices continue to advance,” said Amanda Craig, Senior Director, Responsible AI Public Policy at Microsoft.

Developed under the Italian G7 Presidency in 2024 with input from business, academia and civil society, the OECD's voluntary reporting framework provides a foundation for co-ordinated approaches to safe, secure and trustworthy AI. It supports the implementation of the Hiroshima AI Process initiated under Japan's 2023 G7 Presidency.

Disclaimer: The content of this article is syndicated or provided to this website from an external third-party provider. We are not responsible for, and do not control, such external websites, entities, applications or media publishers. The body of the text is provided on an “as is” and “as available” basis and has not been edited in any way.
Neither we nor our affiliates guarantee the accuracy of or endorse the views or opinions expressed in this article. Read our full disclaimer policy here.

### Related stocks

- [OpenAI (OpenAI.NA)](https://longbridge.com/zh-CN/quote/OpenAI.NA.md)
- [Alphabet (GOOGL.US)](https://longbridge.com/zh-CN/quote/GOOGL.US.md)

## Related news and research

- [OpenAI, not yet public, raises $3B from retail investors in monster $122B fund raise](https://longbridge.com/zh-CN/news/281250634.md)
- [OpenAI is now bringing in $2 billion a month - and 3 more highlights from its latest update](https://longbridge.com/zh-CN/news/281258023.md)
- [Mercor says it was hit by cyberattack tied to compromise of open-source LiteLLM project](https://longbridge.com/zh-CN/news/281282573.md)
- [OpenAI Launches Bug Bounty Program To Cripple AI Weaponization](https://longbridge.com/zh-CN/news/280830008.md)
- [OpenAI's Next Big Revenue Driver Is Here — ChatGPT Ads Hit $100M in Just Six Weeks](https://longbridge.com/zh-CN/news/280743264.md)