20th May 2024

Julia Angwin takes a look at the hype around AI in a comment piece for the New York Times. Angwin argues that "The reality is that A.I. models can often prepare a decent first draft. But I find that when I use A.I., I have to spend almost as much time correcting and revising its output as it would have taken me to do the work myself." She goes on to argue that "it seems just as likely to me that generative A.I. could end up like the Roomba, the mediocre vacuum robot that does a passable job when you are home alone but not if you are expecting guests." Source: New York Times (available to UEL students via ProQuest).

An Open Access article by Chirag Shah and Emily M. Bender published in ACM Transactions on the Web critiques the use of large language models in search. In Envisioning Information Access Systems: What Makes for Good Tools and a Healthy Web?, Shah and Bender argue that "LLMs have several important and often alarming shortcomings as they have been applied as technologies for IA [information access] systems. To understand whether and how LLMs could be effectively used in IA, a lot more work needs to be done to understand user needs, IB [information behaviour], and how the affordances of both LLMs and other tools respond to these needs." Source: ACM Transactions on the Web

In a preprint posted to arXiv (Computers and Society), Abeba Birhane, Sepehr Dehdashtian, Vinay Uday Prabhu and Vishnu Boddeti examine how scaling up the training data of multimodal (vision-language) models affects the classification of images of people. They find that as the training data increased, so did the probability of misclassifying Black and Latino men as "criminal": "The probability of predicting an image of a Black man and a Latino man as criminal increases by 65% and 69%, respectively, when the dataset is scaled from 400M to 2B samples for the larger ViT-L models." Source: arXiv

Jisc have published an update to their report on how students are using generative AI and their expectations for its integration across education. Jisc conducted a series of "nine in-person student discussion forums with over 200 students across colleges and universities to revisit student/learner perceptions of generative AI". Amongst their conclusions, they found that "students/learners have significant concerns around equity, accessibility, ethical use, and the potential for bias within generative AI technologies". Source: Jisc

Matthew Hutson in Nature takes a look at how ChatGPT "thinks". Hutson notes that large language models "generate misinformation, perpetuate social stereotypes and leak private information" and examines the growth of explainable artificial intelligence (XAI) as a means of understanding how large language models work.

"...XAI tools are being devised to explain the workings of LLMs. Researchers want explanations so that they can create safer, more efficient and more accurate AI. Users want explanations so that they know when to trust a chatbot’s output. And regulators want explanations so that they know what AI guard rails to put in place." Source: Nature.

Kyle Wiggers in TechCrunch investigates the downfall of OpenAI’s "Superalignment" team, created to develop ways to govern and steer “superintelligent” AI systems. Source: TechCrunch

OpenAI will use Reddit posts to train ChatGPT. Source: Ars Technica

Nitish Pahwa examines the impact of AI on DeviantArt. Source: Slate.

Steven Levy of Wired interviews Arati Prabhakar, director of the White House Office of Science and Technology Policy and the president’s chief science and technology adviser. Source: Wired

The CDAC Network have posted the full video of Abeba Birhane's recent keynote, delivered at the Alan Turing Institute’s AI UK Fringe in March 2024. Source: CDAC Network