17th June 2024
The Ash Center for Democratic Governance and Innovation have published a comment piece on the use of AI for political polling:
"Researchers and firms are already using LLMs to simulate polling results. Current techniques are based on the ideas of AI agents. An AI agent is an instance of an AI model that has been conditioned to behave in a certain way. For example, it may be primed to respond as if it is a person with certain demographic characteristics and can access news articles from certain outlets. Researchers have set up populations of thousands of AI agents that respond as if they are individual members of a survey population, like humans on a panel that get called periodically to answer questions."
Source: Ash Center
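To make the agent idea concrete, here is a minimal sketch of how persona-conditioned "respondents" might be set up against an OpenAI-style chat API. The persona fields, model name, and survey question are illustrative assumptions, not the researchers' actual methodology.

```python
# Minimal sketch: LLM "agents" conditioned to answer as survey respondents.
# Assumes the openai Python package and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# Each "agent" is just a system prompt conditioning the model to answer
# as a person with given demographic characteristics and media habits.
personas = [
    {"age": 34, "region": "Midwest", "news_diet": "local TV and Facebook"},
    {"age": 67, "region": "Northeast", "news_diet": "print newspapers"},
]

question = "Do you approve or disapprove of the way Congress is handling its job?"

for p in personas:
    system_prompt = (
        f"You are a {p['age']}-year-old survey respondent from the {p['region']} "
        f"who mainly follows the news via {p['news_diet']}. "
        "Answer polling questions in one short sentence, as that person would."
    )
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": question},
        ],
    )
    print(p, "->", reply.choices[0].message.content)
```

Scaled up to thousands of such personas and aggregated, the answers can be read as a simulated poll, which is the approach the comment piece describes.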
"ChatGPT is bullshit" is an open access journal article written by three University of Glasgow researchers. Reflecting on so-called hallucinations, they argue that: "these falsehoods, and the overall activity of large language models, is better understood as bullshit in the sense explored by Frankfurt (On Bullshit, Princeton, 2005): the models are in an important way indifferent to the truth of their outputs."
Source: Ethics and Information Technology
Maggie Harrison Dupré at Futurism takes a look at a new report by Human Rights Watch which reveals that "a widely used, web-scraped AI training dataset includes images of and information about real children — meaning that generative AI tools have been trained on data belonging to real children without their knowledge or consent."
Source: Futurism; Human Rights Watch
TikTok ads may soon include AI-generated avatars of creators.
Source: The Verge
OpenAI has appointed Paul M. Nakasone, a former head of the National Security Agency (NSA), to its board of directors. Nakasone, who was appointed to lead the NSA by President Trump in 2018, will contribute to OpenAI's efforts to "better understand how AI can be used to strengthen cybersecurity by quickly detecting and responding to cybersecurity threats."
Source: The Verge
7th October 2024
Sasha Luccioni, a computer science researcher, has warned about the climate impact of the growth of generative AI. Arguing that generative AI uses "30 times more energy than a traditional search engine", Luccioni is working on a "certification system" for AI so that users can see the energy consumption of an AI product. Source: Tech Xplore/AFP
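As a rough back-of-the-envelope illustration of what that "30 times" figure implies (the 0.3 Wh baseline per conventional web search is a commonly cited estimate and an assumption here, not a figure from the article):

```python
# Illustrative arithmetic only; the 0.3 Wh-per-search baseline and the
# query volume are assumptions, not figures from the article.
search_wh = 0.3                     # rough energy per traditional web search (Wh)
generative_wh = 30 * search_wh      # ~9 Wh per query under the 30x claim
queries = 1_000_000                 # a hypothetical daily query volume
extra_kwh = (generative_wh - search_wh) * queries / 1000
print(f"Extra energy for {queries:,} queries: about {extra_kwh:,.0f} kWh per day")
```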
The Verge highlights Meta's use of user content to train its artificial intelligence models. Facebook's privacy centre outlines how it uses public content on Facebook and Instagram "to develop and improve generative AI models for our features and experiences", and provides details on how to withdraw consent. Source: The Verge
Artist Jason Allen is battling the US Copyright Office after it refused to register his artwork, which was generated using Midjourney. Allen is seeking judicial review of that decision, alleging that "the negative media attention surrounding the Work may have influenced the Copyright Office Examiner's perception and judgment." Source: Ars Technica
Esquire has an article on the increasing number of men turning to AI companions. Source: Esquire
Google have announced some upcoming changes to Search. Source: TechRadar