lauralikespi
AI News - Fri 8th Sept 2023
Updated: Sep 12
Back-to-school week has been busy for everyone, including the AI news world. Our focus will be on all things regulation and politics at the end of this news update (including the UK announcing its task force and the G7 making AI agreements). We also have interesting real-world applications in women's football and music, and important announcements from big companies - Apple, Google, Microsoft and Anthropic. Busy, busy.

Companies to Watch
AI Research Lab Imbue Receives a $220 Million Series B
Imbue, who were previously called Generally Intelligent, are working on AI agents which can reason and code, to help create their vision of "truly personal computers". They have closed an impressive Series B round of $220 million. Investors include the co-founder of Notion, Simon Last, and NVIDIA (Read more)
Biotech Company Inceptive Closed a $100 Million Series A
Inceptive is applying AI to drug development. Specifically, they are trying to take concepts from software programming and apply them to cells in the body (mainly by designing unique molecules of mRNA). The founder worked on one of the most pivotal papers in the development of transformers, the architecture behind Large Language Models (like the ones which power ChatGPT). Their investors include NVentures (the investment arm of NVIDIA) (Read more)
Interesting Reads
TIME piece on Elon Musk's impact on AI and OpenAI ("he's a jerk" is included as a quote from OpenAI's Sam Altman)
Wired's piece on Why This Award-Winning Piece of AI Art Can’t Be Copyrighted - "too much machine, not enough human"
While we aren't usually readers of the Daily Express, it is worth reading their article Relax. AI robots won't take your job - 5 reasons why you're more intelligent than them (charm, creativity, cheaper, morals, dealing with humans) to understand how the media is talking about AI to the public. We would like to point out the use of "AI robots" in the title
Wired's piece on whether you can really opt out of your data being used in Facebook's generative AI development
Good to Know
TIME 100 Most Influential People in AI

A very worthwhile read to understand what is being deemed important in AI. The first section, called Leaders, is topped by the CEO and President of Anthropic (who are siblings). It includes representatives from the expected companies (OpenAI, Hugging Face, DeepMind, Microsoft, NVIDIA) and investors, such as Reid Hoffman.
The other sections - Innovators, Shapers and Thinkers - are, to be frank, much more interesting. These include artists, researchers, government officials, representatives of amazing but less well-known companies, and people in roles other than CEO at the main players. We love that three of the women we covered a few weeks ago have been included, as well as Black Mirror's Charlie Brooker and Lilly Wachowski (of The Matrix).
Read the list for yourself
Microsoft Copyright Guarantee for Copilot Customers
In an effort to show their faith in their AI products, Microsoft is assuming responsibility for any copyright issues which arise from their customers using Microsoft Copilot. They make it very clear they understand the rights of authors, and have put guardrails in place to protect these. Their specific statement:
The Copilot Copyright Commitment extends Microsoft’s existing IP indemnification coverage to copyright claims relating to the use of our AI-powered Copilots, including the output they generate, specifically for paid versions of Microsoft commercial Copilot services and Bing Chat Enterprise. This includes Microsoft 365 Copilot that brings generative AI to Word, Excel, PowerPoint, and more – enabling a user to reason across their data or turn a document into a presentation. It also includes GitHub Copilot, which enables developers to spend less time on rote coding, and more time on creating wholly new and transformative outputs.
Apple is Apparently Spending "Millions of Dollars a Day" to Train Their AI
Originally reported in The Information (which is behind a paywall), Apple's effort includes a team of 16 engineers working on conversational AI, called "Foundational Models". Other teams include Visual Intelligence, and a team researching "multimodal AI" which can understand and produce both images and text. One other project reported by The Information is a chatbot for customers contacting AppleCare. Apple's most powerful Large Language Model, called Ajax, is also reported to have "more than 200 billion parameters" and to perform better than GPT-3.5 (the OpenAI model available in the free version of ChatGPT). Unlike other companies, Apple are keeping this model secret, as they usually do before a big product reveal. The massive spend should be an insight (or a warning) to companies hoping to build their own AI as part of their business about the heavy financial investment needed for development (Read more)
Google Will Make Political Advertisers Declare Use of AI
In a policy update, Google has announced that political adverts will be required to "prominently disclose when their ads contain synthetic content that inauthentically depicts real or realistic-looking people or events". The rules will apply to audio, video and images. Google does clarify that editing for inconsequential reasons (e.g. resizing, cropping, red-eye removal) will be exempt from the disclosure. This feels like an important step towards minimising misinformation ahead of the upcoming elections (Read more)
Anthropic Announce a Paid Version of Claude
Claude is an AI assistant from Anthropic (a research lab focusing on AI safety). Following in OpenAI's footsteps (who announced ChatGPT Enterprise last week), Anthropic has launched a paid version for their super users. Claude Pro will be available to users in the US and UK, and will cost UK users £18 per month for:
5x more usage than our free tier provides, with the ability to send many more messages
Priority access to Claude.ai during high-traffic periods
Early access to new features that help you get the most out of Claude
IBM's Granite Foundation Models for Business
We all know of IBM's Watson, the question-and-answer bot which wowed the world. IBM now have watsonx.ai, their enterprise AI platform. This week they announced some new Generative AI models, collectively called Granite, which can be used for text and code (Read more)
Research Surveys Released
According to Salesforce's Generative AI Snapshot Research, half the population has never used Generative AI. Gen Z are, unsurprisingly, using Generative AI the most (68%). Interestingly, the top uses of Generative AI are providing inspiration and taking tasks off people's plates (Read the report - it's only 1 page)
The 2023 Americans in the Workplace Survey reported that 38% of workers are worried about AI making some or all of their jobs redundant. This worry is higher among certain groups - people of colour, younger people and those with less formal education - compared to their counterparts (Read report)
AI in Real Life
Zoom AI Companion
After being called out on their AI policy a few weeks ago, Zoom have rolled out Zoom AI Companion to all paid customers. The features include composing chat responses, summarising the meeting (useful if you've missed a few minutes), answering questions about the meeting content, and even more advanced things like sending emails based on the meeting content.
Schools Using AI in Gun Detection
The Ocean City School District in New Jersey has attempted to address the rise in gun violence with a technology solution. The system, developed by a company called ZeroEyes, uses AI with human-in-the-loop review to monitor camera feeds. As well as being able to detect guns, the police are hoping that just advertising the use of this technology will deter people from carrying guns to school, and on the boardwalk where it will also be in operation (Read more)
Heart on My Sleeve (ghostwriter977 song)
A bit of strange news this week. But first, some background is needed. In April, a TikTok user called ghostwriter977 uploaded a song which was generated by AI. This song included deepfake (when AI is used to imitate people) voices of Drake and The Weeknd. The song, called Heart on My Sleeve, was uploaded to Spotify, YouTube and TikTok and quickly became extremely popular, gaining millions of views before it was removed by Universal Music Group. This caused a Stanford University professor to say "The cat is not going back in the bag" about AI-generated music (reported by NPR).
Now to what has happened this week - first, the New York Times reported that ghostwriter977 has submitted Heart on My Sleeve for consideration in two Grammy categories. We are very intrigued to see how the Grammy committee will respond to this submission.
Also, ghostwriter977 has released another song, this time deepfaking Travis Scott.
Arsenal's Women's Team Leading the Way in AI
Arsenal Women are partnering with "intelligent automation" company ABBYY to enhance the matchday experience and launch a "Game Changers" campaign (Read more)
AI Causing Issues in US Immigration
The use of AI systems has caused some issues with asylum applications in the US. The reliance on these systems means migrants are often going without translators and are being misunderstood by the AI systems (Read more)
SpermSearch
Is AI the answer to male infertility? Researchers from the University of Technology Sydney believe they have built software, SpermSearch, which can help the 7% of men facing these issues.
Trending - The World of Regulation is Back From Summer Holidays
The UK
The summer has been full of vague mentions of AI plans, particularly from the UK government. Last week, an AI Safety Summit scheduled for November was announced, and a quite damning report was released by the Science, Innovation and Technology Committee calling for the Government to act on AI governance. It seems they have listened.
The Frontier AI Taskforce has been announced (Gov.uk). It includes:
Ian Hogarth, who will be the Frontier AI Taskforce Chair
Yoshua Bengio, a Turing Award Laureate
Matt Clifford, co-founder of Entrepreneur First and Prime Minister’s Representative for the AI Safety Summit
Matt Collins, Deputy National Security Adviser
Alex Van Someren, Chief Scientific Adviser for National Security
Dame Helen Stokes-Lampard, Academy of Medical Royal Colleges Chair
Paul Christiano, Alignment Research Centre Chief
The Financial Times published a piece on Ian Hogarth focusing on the cyber threat of AI to the NHS.
G7
The leaders of the G7 countries have agreed to create an international AI code of conduct. This will be "unified but nonbinding", and includes protecting businesses and investing in cybersecurity (Politico)
Is the EU AI Act Enough?
The OECD have criticised the EU's AI Act as being too vague to really have an impact. They call for leaders to "define deceptive, subliminal and manipulative techniques" and address them separately. This paper, based on recent research, is worth a read if you are interested in the potential threats of AI (OECD)
What a week - it feels like a positive one compared to those of the past months. We're seeing some good progress in terms of regulation, and the real-life use cases feel like they will be impactful (in a good way, fingers crossed).