Warning: Researchers Highlight Deteriorating Transparency in AI Models

Major AI Foundation Models Becoming Less Transparent, Says Stanford Study

A recent study from Stanford University’s Center for Research on Foundation Models (CRFM) has found that major AI foundation models such as ChatGPT, Claude, Bard, and Llama 2 are becoming less transparent. This lack of transparency poses challenges for businesses, policymakers, and consumers alike.

Rishi Bommasani, Society Lead at CRFM, stated that “companies in the foundation model space are becoming less transparent.” The companies behind these models take different views on openness. OpenAI, for example, treats limited disclosure as a precautionary measure: it favors safely sharing access to and the benefits of its systems over releasing everything openly.

Anthropic, on the other hand, is committed to building AI systems that are transparent and interpretable. It focuses on transparency and procedural measures to ensure compliance with its commitments. Google also recently launched a Transparency Center to address the issue.

Why AI Transparency Matters

The lack of transparency in AI models has significant implications. It makes it harder for businesses to build applications that rely on these models, for academics to conduct research with them, for policymakers to craft meaningful policy, and for consumers to understand the models’ limitations or seek redress for harms they cause.

Introducing the Foundation Model Transparency Index (FMTI)

To address the issue of transparency, researchers from Stanford, MIT, and Princeton developed the Foundation Model Transparency Index (FMTI). The index evaluates various aspects of transparency in AI companies’ models, including how the models are built, the availability of their training data, their functionality, and their downstream usage.
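To make the scoring concrete, here is a minimal sketch of how an index like this could be aggregated: each model is graded against a checklist of yes/no indicators, and the score is the share of indicators satisfied, scaled to 0–100. The indicator names and grouping below are illustrative assumptions, not the official FMTI rubric.

```python
# A minimal sketch of an FMTI-style aggregate score.
# The indicator names below are hypothetical examples,
# not the official FMTI checklist.

from typing import Dict

def transparency_score(indicators: Dict[str, bool]) -> float:
    """Return the share of satisfied binary indicators, scaled to 0-100."""
    if not indicators:
        return 0.0
    satisfied = sum(indicators.values())  # True counts as 1
    return 100.0 * satisfied / len(indicators)

# Hypothetical checklist for one model (not real FMTI data).
example_model = {
    "training_data_disclosed": True,
    "compute_usage_disclosed": False,
    "model_architecture_documented": True,
    "downstream_usage_policy_published": True,
    "harm_redress_mechanism_described": False,
}

print(f"Transparency score: {transparency_score(example_model):.0f}/100")
# -> Transparency score: 60/100
```

Binary indicators keep the grading auditable: a reader can dispute any single checkmark without re-deriving the whole score.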

The Results: Scores of Transparency

The results of the FMTI were less than stellar. The highest score was just 54 on a scale of 0 to 100, earned by Meta’s Llama 2, followed by OpenAI at 47, Google at 41, and Anthropic at 39.

Hot Take: The Importance of AI Transparency for All Stakeholders

The decreasing transparency of major AI foundation models poses challenges for businesses, policymakers, researchers, and consumers. It hinders innovation, research, policy development, and consumer understanding. As AI plays an increasingly significant role in our lives, it is crucial for companies to prioritize transparency and accountability. OpenAI’s shift toward safely sharing the benefits of its systems is a step in the right direction. However, more effort is needed to ensure that AI models are transparent, interpretable, and accountable, so that they can earn trust and maximize their positive impact on society.

