10th June 2024
Stanford University has published its seventh annual Artificial Intelligence Index Report. The report covers "technical advancements in AI, public perceptions of the technology, and the geopolitical dynamics surrounding its development". The Art Newspaper offers some thoughts on the report and its impact on artists and museums.
Sources: AI Index Report; The Art Newspaper
A group of OpenAI insiders and former employees has written an open letter warning of the risks of generative AI. The letter calls on AI companies to commit to the following principles:
- That the company will not enter into or enforce any agreement that prohibits “disparagement” or criticism of the company for risk-related concerns, nor retaliate for risk-related criticism by hindering any vested economic benefit;
- That the company will facilitate a verifiably anonymous process for current and former employees to raise risk-related concerns to the company’s board, to regulators, and to an appropriate independent organization with relevant expertise;
- That the company will support a culture of open criticism and allow its current and former employees to raise risk-related concerns about its technologies to the public, to the company’s board, to regulators, or to an appropriate independent organization with relevant expertise, so long as trade secrets and other intellectual property interests are appropriately protected;
- That the company will not retaliate against current and former employees who publicly share risk-related confidential information after other processes have failed. We accept that any effort to report risk-related concerns should avoid releasing confidential information unnecessarily. Therefore, once an adequate process for anonymously raising concerns to the company’s board, to regulators, and to an appropriate independent organization with relevant expertise exists, we accept that concerns should be raised through such a process initially. However, as long as such a process does not exist, current and former employees should retain their freedom to report their concerns to the public.
The New York Times has more on the open letter and some of the background.
Sources: Open Letter; Insiders Warn of OpenAI’s Reckless Race to No. 1 (New York Times - accessible with your UEL login).
Meredith Whittaker, president of the encrypted messaging app Signal and co-founder of the AI Now Institute, has warned of the privacy issues raised by generative AI. Axios reports on her comments from its recent summit. Whittaker also discussed AI at a recent StrictlyVC event; you can watch the interview in the video below. Source: Axios