Generative AI Blog


Ian Clark

7th October 2024

Sasha Luccioni, a researcher in computer science, has warned about the climate impact of the growth of generative AI. Arguing that generative AI uses "30 times more energy than a traditional search engine", Luccioni is working on creating a "certification system" for AI so that users will be able to know the energy consumption of an AI product. Source: Tech Xplore/AFP

The Verge highlights Meta's use of user content to train its artificial intelligence models. Facebook's privacy centre outlines how it uses public content on Facebook and Instagram "to develop and improve generative AI models for our features and experiences". Details on how to withdraw consent are available here. Source: The Verge

An artist is battling the US Copyright Office following its refusal to register artwork he generated using Midjourney. Jason Allen is appealing that decision, asking for a judicial review and alleging that "the negative media attention surrounding the Work may have influenced the Copyright Office Examiner's perception and judgment." Source: Ars Technica

Esquire has an article on the increasing number of men turning to AI companions. Source: Esquire

Google have announced some upcoming changes to Search. Source: Techradar

Ian Clark

17th June 2024

The Ash Center for Democratic Governance and Innovation have published a comment piece on the use of AI for political polling:

"Researchers and firms are already using LLMs to simulate polling results. Current techniques are based on the ideas of AI agents. An AI agent is an instance of an AI model that has been conditioned to behave in a certain way. For example, it may be primed to respond as if it is a person with certain demographic characteristics and can access news articles from certain outlets. Researchers have set up populations of thousands of AI agents that respond as if they are individual members of a survey population, like humans on a panel that get called periodically to answer questions."

Source: Ash Center
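
To make the idea of persona-conditioned AI agents more concrete, here is a minimal sketch of how a simulated survey panel might be set up. This is not code from the Ash Center piece: the personas and question are invented, and query_llm() is a hypothetical placeholder that would need to be swapped for a real model call.

    import random
    from collections import Counter

    PERSONAS = [
        {"age": 34, "region": "Midwest", "news_diet": "local TV news"},
        {"age": 67, "region": "South", "news_diet": "cable news"},
        {"age": 22, "region": "West Coast", "news_diet": "social media"},
    ]

    QUESTION = "Do you support increased funding for public transit? Answer Yes or No."

    def build_prompt(persona, question):
        """Prime the model to answer as if it were a specific survey respondent."""
        return (
            f"You are a {persona['age']}-year-old living in the {persona['region']} "
            f"who mostly follows {persona['news_diet']}.\n"
            f"Answer the following survey question in one word.\n{question}"
        )

    def query_llm(prompt):
        """Hypothetical placeholder for a real LLM call; here it answers at random."""
        return random.choice(["Yes", "No"])

    def simulate_poll(personas, question, agents_per_persona=100):
        """Aggregate simulated responses across a synthetic panel."""
        tally = Counter()
        for persona in personas:
            prompt = build_prompt(persona, question)
            for _ in range(agents_per_persona):
                tally[query_llm(prompt)] += 1
        return tally

    print(simulate_poll(PERSONAS, QUESTION))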


"ChatGPT is bullshit" is an open access journal article written by three University of Glasgow researchers. Reflecting on so-called hallucinations, they argue that: "these falsehoods, and the overall activity of large language models, is better understood as bullshit in the sense explored by Frankfurt (On Bullshit, Princeton, 2005): the models are in an important way indifferent to the truth of their outputs."

Source: Ethics and Information Technology


Maggie Harrison Dupré at Futurism takes a look at a new report by Human Rights Watch which reveals that "a widely used, web-scraped AI training dataset includes images of and information about real children — meaning that generative AI tools have been trained on data belonging to real children without their knowledge or consent."

Source: Futurism; Human Rights Watch


TikTok ads may soon include AI generated avatars of creators.

Source: The Verge


OpenAI has appointed Paul M. Nakasone, a former head of the National Security Agency (NSA), to its board of directors. Nakasone, who was appointed to lead the NSA by President Trump in 2018, will be tasked with contributing to OpenAI's efforts to "better understand how AI can be used to strengthen cybersecurity by quickly detecting and responding to cybersecurity threats."

Source: The Verge

Ian Clark

10th June 2024

Stanford University have published their seventh annual Artificial Intelligence Index Report. The report covers "technical advancements in AI, public perceptions of the technology, and the geopolitical dynamics surrounding its development". The Art Newspaper offers some thoughts on the report and the impact on artists and museums.

Sources: AI Index Report; The Art Newspaper

A group of OpenAI insiders and ex-employees have written an open letter warning of the risks of generative AI. The open letter calls on AI companies to adhere to the following principles:

  1. That the company will not enter into or enforce any agreement that prohibits “disparagement” or criticism of the company for risk-related concerns, nor retaliate for risk-related criticism by hindering any vested economic benefit;
  2. That the company will facilitate a verifiably anonymous process for current and former employees to raise risk-related concerns to the company’s board, to regulators, and to an appropriate independent organization with relevant expertise;
  3. That the company will support a culture of open criticism and allow its current and former employees to raise risk-related concerns about its technologies to the public, to the company’s board, to regulators, or to an appropriate independent organization with relevant expertise, so long as trade secrets and other intellectual property interests are appropriately protected;
  4. That the company will not retaliate against current and former employees who publicly share risk-related confidential information after other processes have failed. We accept that any effort to report risk-related concerns should avoid releasing confidential information unnecessarily. Therefore, once an adequate process for anonymously raising concerns to the company’s board, to regulators, and to an appropriate independent organization with relevant expertise exists, we accept that concerns should be raised through such a process initially. However, as long as such a process does not exist, current and former employees should retain their freedom to report their concerns to the public.

The New York Times has more on the open letter and some of the background.

Sources: Open Letter; Insiders Warn of OpenAI’s Reckless Race to No. 1 (New York Times - accessible with your UEL login).

Meredith Whittaker, president of encrypted messaging app Signal and co-founder of the AI Now Institute, has warned of the privacy issues with using generative AI. Axios reports on her comments from their recent summit. Whittaker also spoke at a recent StrictlyVC event about AI. You can see the interview with her in the video below. Source: Axios

[Embedded video: Meredith Whittaker's StrictlyVC interview]

Ian Clark

20th May 2024

Julia Angwin takes a look at the hype around AI in a comment piece for the New York Times. Angwin argues that "The reality is that A.I. models can often prepare a decent first draft. But I find that when I use A.I., I have to spend almost as much time correcting and revising its output as it would have taken me to do the work myself." She goes on to argue that "it seems just as likely to me that generative A.I. could end up like the Roomba, the mediocre vacuum robot that does a passable job when you are home alone but not if you are expecting guests." Source: New York Times (available to UEL students via ProQuest).

An Open Access article by Chirag Shah and Emily M. Bender published in ACM Transactions on the Web critiques the use of large language models in search. In Envisioning Information Access Systems: What Makes for Good Tools and a Healthy Web?, Shah and Bender argue that "LLMs have several important and often alarming shortcomings as they have been applied as technologies for IA [information access] systems. To understand whether and how LLMs could be effectively used in IA, a lot more work needs to be done to understand user needs, IB [information behaviour], and how the affordances both LLMs and other tools respond to these needs." Source: ACM Transactions on the Web

In a paper posted to arXiv under Computers and Society, Abeba Birhane, Sepehr Dehdashtian, Vinay Uday Prabhu and Vishnu Boddeti examine the impact of scaling up training datasets on how multimodal models classify images of people. They find that as the training data increased, so did the probability of misclassifying Black and Latino men as "criminal". "The probability of predicting an image of a Black man and a Latino man as criminal increases by 65% and 69%, respectively, when the dataset is scaled from 400M to 2B samples for the larger ViT-L models." Source: arXiv
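
As a purely illustrative aside, and assuming the quoted figures are relative increases rather than percentage-point changes, the arithmetic looks like this; the probabilities below are invented for illustration, not numbers from the paper.

    # Hypothetical misclassification probabilities, not figures from the paper.
    p_400m = 0.20   # probability at 400M training samples (invented)
    p_2b = 0.33     # probability at 2B training samples (invented)
    relative_increase = (p_2b - p_400m) / p_400m
    print(f"Relative increase: {relative_increase:.0%}")  # prints "Relative increase: 65%"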

Jisc have published an update on their report on how students are using generative AI and their expectations of its integration across education. Jisc conducted a series of "nine in-person student discussion forums with over 200 students across colleges and universities to revisit student/learner perceptions of generative AI". Amongst their conclusions they found that "students/learners have significant concerns around equity, accessibility, ethical use, and the potential for bias within generative AI technologies". Source: Jisc

Matthew Hutson in Nature takes a look at how ChatGPT "thinks". Hutson notes that large language models "generate misinformation, perpetuate social stereotypes and leak private information" and examines the growth of XAI (explainable artificial intelligence) to explain large language models.

"...XAI tools are being devised to explain the workings of LLMs. Researchers want explanations so that they can create safer, more efficient and more accurate AI. Users want explanations so that they know when to trust a chatbot’s output. And regulators want explanations so that they know what AI guard rails to put in place." Source: Nature.

Kyle Wiggers in TechCrunch investigates the downfall of OpenAI’s "Superalignment" team, created to develop ways to govern and steer “superintelligent” AI systems. Source: TechCrunch

OpenAI will use Reddit posts to train ChatGPT. Source: Ars Technica

Nitish Pahwa examines the impact of AI on DeviantArt. Source: Slate.

Steven Levy of Wired interviews Arati Prabhakar, director of the White House Office of Science and Technology Policy and the president’s chief science and technology adviser. Source: Wired

The CDAC Network have posted Abeba Birhane's recent keynote delivered at the Alan Turing Institute’s AI UK Fringe in March 2024. You can view the full keynote below.

Ian Clark

29th April 2024

Scientific American takes a look at the hidden human labour behind AI. The phenomenon is known as "fauxtomation" because it "hides the human work and also falsely inflates the value of the 'automated' solution," according to Irina Raicu, director of the Internet Ethics program at Santa Clara University's Markkula Center for Applied Ethics. Source: Scientific American.

Scharon Harding at Ars Technica takes a look at the increasing incorporation of generative AI into pre-existing technological hardware and software. Harding concludes: "There are times when AI can improve a gadget. But over the next months and years, I expect that more devices that aren't necessarily better with AI integration will advertise questionable features related to the tech." Source: Ars Technica.

The Verge report on the agreement between the Financial Times and OpenAI to license its content and develop AI tools. Expect there to be more developments on how the Financial Times incorporates generative AI in the weeks and months ahead. Source: The Verge.

OpenAI are facing a privacy complaint in the European Union targeting the "inability of its AI chatbot ChatGPT to correct misinformation it generates about individuals". TechCrunch notes that it is possible to request that incorrect personal data be removed, but it doesn't necessarily follow that it will be:

OpenAI’s privacy policy states users who notice the AI chatbot has generated “factually inaccurate information about you” can submit a “correction request” through privacy.openai.com or by emailing dsar@openai.com. However, it caveats the line by warning: “Given the technical complexity of how our models work, we may not be able to correct the inaccuracy in every instance.”

In that case, OpenAI suggests users request that it removes their personal information from ChatGPT’s output entirely — by filling out a web form.

Source: TechCrunch

Finally, Dev Ittycheria, CEO of the tech company MongoDB, argues in an interview with TechCrunch that there is currently too much hype around generative AI:

“My life has not been transformed by AI,” he said. “Yes, maybe I can write an email better through all those assistants, but it’s not fundamentally transformed my life. Whereas the internet has completely transformed my life.”

Source: TechCrunch

Ian Clark

22nd April 2024

Kyle Wiggers at TechCrunch takes a look at claims around Open Source AI projects. Source: TechCrunch

Ofcom have published an update on their strategic approach to Artificial Intelligence (AI). Source: Ofcom

Ars Technica takes a look at accusations that a Netflix documentary had used AI generated imagery and failed to disclose it in the credits. Jeremy Grimaldi, an executive producer, pointed out that “the photos of Jennifer are real photos of her. The foreground is exactly her. The background has been anonymized to protect the source.” Ars Technica touches on some of the current controversies around the use of generative AI in cinema. Source: Ars Technica

TechCrunch hosts an interview with Anna Korhonen as part of its series looking at women who are making a difference in the development of Generative AI. Anna Korhonen is a professor at the University of Cambridge and she researches natural language processing (NLP) and how to develop, adapt and apply computational techniques to meet the needs of AI. Source: TechCrunch

Researchers Francesco Salvi, Manoel Horta Ribeiro, Riccardo Gallotti and Robert West have submitted a working paper to arXiv on the "conversational persuasiveness of large language models". The researchers conclude that "LLMs can out-persuade humans in online conversations through microtargeting" and argue that "malicious actors interested in deploying chatbots for large-scale disinformation campaigns could obtain even stronger effects by exploiting fine-grained digital traces and behavioral data, leveraging prompt engineering or fine-tuning language models for their specific scope". Source: arXiv

 

Ian Clark

15th April 2024

Matt Burgess and Reece Rogers explain how you can take back control of your data from generative AI in Wired. In the article they explain how to opt out of AI training in Adobe, Google Gemini, Grammarly, OpenAI (ChatGPT and DALL-E) and more. Source: Wired.

Kyle Wiggers in TechCrunch takes a sceptical look at the relationship between AI and healthcare. They highlight a number of issues, from its "significant" limitations and "concerns around its efficacy" to its perpetuation of stereotypes and worries around privacy. Source: TechCrunch

First Monday have a special issue on Ideologies of AI and the consolidation of power. Articles in this issue include:

  • Participation versus scale: Tensions in the practical demands on participatory AI by Meg Young, Upol Ehsan, Ranjit Singh, Emnet Tafesse, Michele Gilman, Christina Harrington and Jacob Metcalf
  • Field-building and the epistemic culture of AI safety by Shazeda Ahmed, Klaudia Jaźwińska, Archana Ahlawat, Amy Winecoff and Mona Wang
  • Undersea cables in Africa: The new frontiers of digital colonialism by Esther Mwema and Abeba Birhane

Source: First Monday

Ars Technica have a roundup of the latest developments. Source: Ars Technica

Ian Clark

25th March 2024

Nature has published a news feature by Ananya exploring the work of researchers looking into racial and gender bias in AI image generators and some of the solutions that have been proposed. Source: Nature.

Kyle Wiggers in TechCrunch takes a look at the growth in AI chatbots, highlighting a range of issues from copyright breaches to apps that encourage academic dishonesty (and that are no more than "thinly veiled pipelines to premium services"). Source: TechCrunch

Critical AI is a new interdisciplinary journal focused on issues around artificial intelligence. The first issue is freely available here, and they also have a blog which is available here.

Shahan Ali Memon and Jevin D. West (University of Washington) have written an article examining "what happens when inherently hallucinating language models are employed within search engines without proper guardrails in place". Source: Center For An Informed Public

Devin Coldewey at TechCrunch explains why it's difficult to review AIs and why TechCrunch is attempting to do so anyway. Source: TechCrunch

Cristina Monteiro ponders what William Morris would make of artificial intelligence. Source: Architects Journal

Ian Clark

18th March 2024

On Wednesday 13th March, the EU Parliament approved the Artificial Intelligence Act. Aiming to protect fundamental rights, the new regulation bans certain AI applications that threaten citizens' rights, imposes transparency requirements on others, and forbids AI that "manipulates human behaviour or exploits people’s vulnerabilities". Source: European Parliament

Research has been published that claims GPT-4 produced copyrighted content on 44% of the prompts entered. Copyright Catcher, a copyright detection API by Patronus AI, aims to detect the use of copyrighted materials in order to address concerns around intellectual property violations in generative AI tools. Source: Quartz. Original Research: Patronus AI.
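
For illustration only (this is not Patronus AI's actual method), a detector along these lines might flag output that reproduces long runs of a protected text by comparing word n-grams against a reference passage.

    def ngrams(text, n=6):
        """All length-n word sequences in the text, lowercased."""
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

    def overlap_ratio(model_output, reference, n=6):
        """Fraction of the output's n-grams that also appear in the reference."""
        out_grams = ngrams(model_output, n)
        if not out_grams:
            return 0.0
        return len(out_grams & ngrams(reference, n)) / len(out_grams)

    reference = "It was the best of times, it was the worst of times, it was the age of wisdom"
    output = "As the saying goes, it was the best of times, it was the worst of times indeed"
    print(f"{overlap_ratio(output, reference):.0%} of the output's 6-grams appear in the reference")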

The Institute For The Future Of Work (IFOW) has published a briefing paper exploring the impact that exposure to workplace technologies has on workers' quality of life. The paper argues that "quality of life is negatively correlated with frequency of interaction with newer workplace technologies". Source: IFOW

A high profile journal has published an article featuring an introduction generated by generative AI. The article in question opens with the line "Certainly, here is a possible introduction for your topic:...". Elsevier say they are investigating the paper and are in discussions with the editorial team. This follows another paper recently incorporating AI generated text which included the phrase:

"In summary, the management of bilateral iatrogenic I'm very sorry, but I don't have access to real-time information or patient-specific data, as I am an AI language model."

It's important not only to engage with generative AI itself critically, but also to ensure you are aware of the signs of AI content creation when critically reading any research literature. Source: Technology Networks.
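
As a rough illustration of that point (not a method from the article), a reader or editor could scan a manuscript for boilerplate phrases that often betray unedited chatbot output; the phrase list below is invented for the example.

    import re

    TELLTALE_PHRASES = [
        r"certainly, here is a possible introduction",
        r"as an ai language model",
        r"i don't have access to real-time information",
        r"regenerate response",
    ]

    def flag_ai_boilerplate(text):
        """Return any telltale phrases found in the text (case-insensitive)."""
        return [p for p in TELLTALE_PHRASES if re.search(p, text, re.IGNORECASE)]

    sample = ("In summary, the management of bilateral iatrogenic I'm very sorry, "
              "but I don't have access to real-time information or patient-specific data.")
    print(flag_ai_boilerplate(sample))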

Ian Clark

11th March 2024

A new peer-reviewed article has been published in Nature (available via your UEL login) warning of the dangers to scientists of overlooking the limitations of Artificial Intelligence. The authors argue that scientists "must evaluate these risks now, while AI applications are still nascent, because they will be much more difficult to address if AI tools become deeply embedded in the research pipeline". Source: Nature.

Nature have also published a commentary on the article and associated issues. Source: Nature.

Meanwhile, Ars Technica have published an interview with the researchers. Source: Ars Technica.

Liza Featherstone at The New Republic talks about another area of concern for artificial intelligence: the impact on the climate. Highlighting the water usage needed to maintain data centres, Featherstone points to the difficulties in obtaining information about the resources they consume. Source: The New Republic.

Furthermore, the Climate Action Against Disinformation coalition have released a report on the risks that artificial intelligence poses to the climate. The full report is published on the Friends of the Earth website. Source: Friends of the Earth.

Nilay Patel and Emilia David of The Verge take a look at generative AI and romantic chatbots, the trends, the benefits and the "serious pitfalls". Source: The Verge.

Ars Technica features an article by Matt Burgess, originally published in Wired, that takes a look at the development of "AI worms" that can "spread from one system to another, potentially stealing data or deploying malware in the process". Source: Ars Technica.

Over on Wonkhe, Janice Kay, Chris Husbands and Jason Tangen take a look at how universities should rethink assessment techniques in the light of generative AI developments. Source: Wonkhe.

Reuters reports that Nvidia are being sued by three authors who have claimed it used their copyrighted books without permission to train its NeMo AI platform. Source: Reuters.
