Krist Novoselic Urges Microsoft to Reevaluate Approach to AI
Krist Novoselic, co-founder and bass guitarist for Nirvana, presented a shareholder proposal to Microsoft, calling for a reevaluation of its approach to generative artificial intelligence (AI). The proposal, titled “Shareholder Proposal 13: Report on AI Misinformation and Disinformation,” was submitted by Arjuna Capital on behalf of Novoselic and other shareholder groups.
Concerns About Artificial Intelligence Risks
The proposal highlighted concerns about the potential for Microsoft-developed or -backed AI models to contribute to the spread of disinformation and misinformation. It also questioned whether Section 230, a law that protects internet hosts and users from liability for third-party content, would apply to content generated by Microsoft’s own AI systems.
In his presentation, Novoselic pointed to inaccurate information produced by Microsoft’s AI-powered Bing platform. He also cited the open letter in which AI researchers called for a six-month pause on the development of advanced AI systems, a call that Microsoft and the industry as a whole chose to ignore.
Microsoft’s Response
Microsoft’s board responded by asserting that it had already addressed the proposal’s request through existing and upcoming reporting. The proposal’s backers, however, sought more comprehensive disclosure of the long-term risks associated with generative AI than those reports provide.
The board recommended that shareholders reject the proposal, citing its current programs and reporting as sufficient. The proposal ultimately failed to pass in the subsequent shareholder vote.
Hot Take: Microsoft Should Prioritize Responsible AI Development
It is crucial for companies like Microsoft to prioritize responsible AI development. The potential risks of generative AI, such as misinformation and disinformation, need to be thoroughly assessed, because rushing technology to market without weighing the long-term consequences can have detrimental effects on society. Shareholders should continue to hold companies accountable and demand transparency in how AI risks are assessed and mitigated. Only then can we ensure that AI technologies are developed and deployed responsibly and ethically.