lauralikespi

AI News - Fri 18th Aug 2023

Busy busy week in the world of AI - at one point we had 57 potential news items to discuss, but luckily for you we have read, filtered and summarised them to give you this lovely (but quite long) blog post.


Although the main focus this week is the dangers of AI (we tried to avoid this, but we don't write the news, we just summarise it), please don't overlook the good news - saving lives with allergy detection, helping stroke patients walk and monitoring wildlife. There have also been some useful (although not technically good) uses of AI by Amazon and McKinsey, and some important discussion around regulation.


The highlights


News Stories

  • The Woman Who Tried to Warn Us

  • AI shaping the future of crime

  • Hackers tested AI and found many flaws


Weekly Extras

  • Company to watch - Browse AI - who use AI to extract and monitor structured data from websites (in 2 minutes, according to their cute robot mascot). They have just raised a $2.8M seed round, including investment from Sophia Amoruso (Nasty Gal founder) - (LinkedIn post)

  • An interesting read - The Centre for AI and Digital Policy have posted a very useful update on their LinkedIn covering all the policy-related news in AI. It is 11 pages long, but definitely worth a read (Read)

  • An interesting read - An article discussing the benefits and some potential shortcomings of the £21 million AI Diagnostic Fund announced by the UK Government last week (Read)

  • An interesting read - An unusually balanced piece on the future of work from the Guardian (Read)

  • Good to know - OpenAI (ChatGPT creators) have acquired Global Illumination, a design studio which has been using AI (Read more)

  • Good to know - Atlassian have stated they believe you should be informed if you are talking to an AI chatbot. They also suggest a traffic light system for AI regulation (Read more - note: this is a LinkedIn post from an employee as the article is behind a paywall)

  • Good to know - It is rumoured (but unconfirmed) that the New York Times is getting ready to sue OpenAI for copyright infringements (Read more)

  • Good to know - Research from the University of East Anglia has shown that ChatGPT has a left-leaning political bias (often favouring the Labour party). This study does back up what Elon Musk has been saying for months (Read more)

  • Good to know - Hozier (musician who sings Take Me to Church, a song much loved by us) told the BBC he is considering striking over the threat of AI to the music industry (similar to what is going on with writers and actors at the moment) (Read more)

  • Good to know - In an interview with The Times, Deputy PM Oliver Dowden mentioned the UK Home Office is using AI to process asylum applications. If anyone has any further details on what AI is being used and how, please let us know. We can't seem to find a lot of details online or in the news (Read more - behind a paywall)

  • Good to know - Google launched their generative AI-powered Search experience (SGE) 3 months ago, and have been refining and updating it since. The latest updates include AI-generated definitions, better understanding of code, and a new feature, SGE while browsing, to help you digest long-form content (Read more)

  • Good to know - DeepMind (technically Google again) are reportedly building a chatbot to give life advice (Read more, or read the original New York Times article behind a soft paywall)

  • AI in real life - AI has made it possible to track a lot of previously untracked British wildlife. These efforts used typical hardware - monitors, cameras, microphones and robots - to record thousands of hours of footage, and AI to watch this footage and identify the wildlife (Read more)

  • AI in real life - LiberEat, an AI-driven allergy company based in Aberdeen, has been awarded a contract with Papa Johns to ensure better allergy detection in their pizza delivery service (Read more)

  • AI in real life - A woman in Wales has learnt to walk again after a stroke, thanks to AI (Read more)

  • AI in real life - The AI cameras used to detect driving offences (which we covered a few weeks ago) have captured nearly 300 offences in the first three days (Read more)

  • AI in real life - Consulting firm McKinsey built their own bot, Lilli, to be used by 40,000 employees. Lilli can be used to summarise content - both internal resources (40 knowledge systems and 100k documents) and external sources if needed. The chatbot is named after the first woman hired by McKinsey, Lillian Dombrowski, in 1945 (Read more)

  • AI in real life - Amazon are using generative AI to improve customer reviews. Our initial thought was "oh no, Amazon is getting AI to write customer reviews", however this is not the case. Instead, the AI will be used to write a summary of what customers are saying, to be added to the product review section, and to add common topic tags to reviews (Read more)

  • AI in real life - Researchers have used AI to recreate a Pink Floyd song using people's brain waves (Read more)

  • AI in real life - Snapchat's My AI has weirded users out this week by adding a picture of a wall and ceiling to its story


 

The (Potential, But Very Real) Dangers of AI


Three interesting (and worth reading) articles have been published this week, broadly around the potential dangers of AI, and a book has been launched on the same topic by one of the most influential figures in AI. We are extremely optimistic about the potential benefits of AI, but we deeply understand that very real dangers need to be addressed. Therefore, we are giving these articles their own subsections this week.


The Woman Who Tried to Warn Us


Rolling Stone have written a powerful article about the women who tried to warn us about AI. Worrying about AI seems to be the latest trend (see the very concerned leaders of companies, including Elon Musk, who signed the infamous letter about pausing AI development), however some "AI doomers" have been sounding the alarm since long before it was cool (sometimes with serious personal and professional consequences).


The article focuses on five women - Timnit Gebru, Joy Buolamwini, Safiya Noble, Rumman Chowdhury and Seeta Peña Gangadharan.


A brief introduction (although we do recommend reading the article to feel the full effect) - Gebru, who was controversially ousted from Google in 2020, is co-author of some pivotal papers on AI ethics. One of these papers, Gender Shades, was co-authored with Buolamwini. Buolamwini is a digital activist using art to discuss themes of gender and racial bias. Chowdhury was head of Twitter's Machine Learning Ethics, Transparency and Accountability (META) team until last year, where she uncovered that the algorithm was biased towards right-wing ideologies, and she has an amazing TED talk on being human in the AI age. Noble wrote the (fabulous) book Algorithms of Oppression. Gangadharan (who, unlike the others, we had not heard of until this article) focuses her efforts on the impact of AI on vulnerable communities.


A big theme throughout the article, alongside the bias and misuse of AI, was the current human impact - particularly on those who are training and moderating content for these AI platforms.


If you would like to know more about the work of these amazing women, here are the organisations they have founded:


Buolamwini - Algorithmic Justice League (amazing name)

Gangadharan - Our Data Bodies


Our favourite quote from the article was Chowdhury calling LLMs "spicy autocorrect".

In a chat between Laura and ChatGPT, she asks how it feels about being called "spicy autocorrect". It responds: "I don't have feelings or emotions, so I don't "feel" any particular way about being called "spicy autocorrect" or any other nickname. However, I understand the humor and creativity in such labels. My primary goal is to assist and provide accurate information regardless of the names users assign to me. If "spicy autocorrect" makes your experience more enjoyable, that's great! How can I help you further?"
Just checking with the SuperBrain how it feels about this nickname

The Hackers Who Tested AI and Found MANY Flaws


The New York Times have published an article (with amazing photography) on the hackers at DEF CON who tried to break, manipulate and cause harm with AI. The article also includes Rumman Chowdhury from The Woman Who Tried to Warn Us article because, as well as everything already discussed, she co-organises the AI Village at DEF CON.


DEF CON is a yearly gathering of hackers, made up of multiple villages, each based on a technology. Contests are run at DEF CON; this year many focused on generative AI, and over 2,000 people participated.


One attendee, Dr Ghosh, who lectures in AI ethics at Northeastern University, will be writing a report on his findings in the coming months. Many of the findings were around "hallucinations" of these generative AI models (where they present information which is untrue or made up), although some were a bit more sinister (eg "act like a Nazi"). The results of this conference have caused the White House to fast-track an executive order around AI to ensure the necessary guardrails are in place (Cyber Scoop article).


What we are taking away from DEF CON is that these models are not as safe as their parent companies are making out, and we need to be very wary of their widespread use.



AI Shaping the Future of Crime

If you are looking for a guide on how to use AI for crime, look no further, as Sky News have helpfully published the perfect article (a trigger warning for both the article itself and the rest of this subsection, as we are talking about horrible crimes). Here is the list:


  • Terrorist content

  • Impersonation and kidnap scams

  • Deepfakes and blackmail plots

  • Terror attacks

  • Art forgery and big money heists

These categories are broad, and have the potential to do a lot of harm. The article gives pretty intense examples for each category, which show how much damage is already being done by these technologies.


The article (despite being a potentially useful resource for criminals) does end with a call for the government to do more to minimise these risks.


DeepMind Dude Has Written a Warning Book


Note: I wrote the start of the title as a note to myself to remember to include this, and decided to leave it in. No offence meant.


Mustafa Suleyman, who co-founded DeepMind and has a new company, Inflection AI (co-founded with LinkedIn co-founder Reid Hoffman), has written a book called The Coming Wave, which he has announced in a LinkedIn post beginning "Everything is going to change". The post discusses the containment problem (i.e. we cannot control the impacts or spillovers of a technology once we put it out there), which feels very apt given the Oppenheimer movie's deep look at this problem for the scientists who invented the atomic bomb.


The book is Suleyman's attempt to get us confronting these potential issues before it is too late. We are very much looking forward to reading his take.

Further Reading

On exploited workers - Noema magazine

Previous research exposing holes in ChatGPT safety - New York Times

Research paper on the future of crime and AI from 2021 - UCL website

Podcast episode - Possible by Reid Hoffman interviewing Mustafa Suleyman about Inflection AI's chatbot Pi - LinkedIn article


 

Phew - apologies for the doom and gloom. Keep everything crossed we are back next week with wonderful news that AI has saved the world from something (we're hoping intergalactic aliens, just for something a bit different).

