lauralikespi
AI news - Fri 28th July 2023
Updated: Aug 10
On the whole it feels like a quiet week in AI news - there haven’t been as many BIG announcements as in recent weeks. But things are still going on - important things which might have implications for future AI development (and life in general). So grab a coffee and enjoy this week's AI news.

The highlights
News Stories
Regulation - four big players in AI have taken a stab at self-regulation by creating the Frontier Model Forum (is this the latest tech bro attempt to control AI or a step towards regulatory co-operation?)
Workers’ rights - Netflix seemingly throwing fuel on the fire of the writers strike with an AI job advert
AI in real life - AI being used by the police in the UK
Weekly Extras
Company to watch - OpenEvidence - valued at $425 million, aiming to combine LLMs and clinical documents to keep doctors up to date - Forbes article
An interesting read - LLM development - do we need a new Turing Test to assess AI? Nature
An interesting read - Can AI settle a decades long debate about the painter of a Renaissance painting? Washington Post
Good to know - Google has clarified that, for SEO purposes, .ai domains are now treated as global (rather than tied to the country of Anguilla). Read more
Good to know - Some dark and worrying stories about AI have come to light in a LinkedIn discussion of AI's impacts, including students being flagged for cheating by Turnitin software (trigger warning: suicide). Read more
Good to know - People are now more pessimistic about AI than they were before the current LLM wave, according to the Stevens Institute of Technology's annual TechPulse report. Read more
The Frontier Model Forum
Frontier… what?
The discussion around the regulation of AI is deep and polarised (and deserves much more time than we are giving it here) - we have tech and AI ethicists who have been shouting about the potential risks of these technologies for years, specific countries and regions (including Italy, the UK and the EU) adding their own regulations into the mix, and the infamous letter signed by tech CEOs.
This week four major players in AI - OpenAI, Microsoft, Google (who own DeepMind) and the startup Anthropic (an AI safety and research company founded by former OpenAI staff) - announced the Frontier Model Forum. In OpenAI’s own words:
We’re forming a new industry body to promote the safe and responsible development of frontier AI systems: advancing AI safety research, identifying best practices and standards, and facilitating information sharing among policymakers and industry. - OpenAI
(Taken from the OpenAI website)
Some Specifics (we’re really intrigued, feel free to skip this section)
The blog post shared by OpenAI gives a lot of information about the Frontier Model Forum. To summarise the interesting parts (mostly in their own words, shown by italics):
The aims:
(i) advance AI safety research to promote responsible development of frontier models and minimize potential risks,
(ii) identify safety best practices for frontier models,
(iii) share knowledge with policymakers, academics, civil society and others to advance responsible AI development; and
(iv) support efforts to leverage AI to address society’s biggest challenges.
The objectives:
Advancing AI safety research to promote responsible development of frontier models, minimize risks, and enable independent, standardized evaluations of capabilities and safety.
Identifying best practices for the responsible development and deployment of frontier models, helping the public understand the nature, capabilities, limitations, and impact of the technology.
Collaborating with policymakers, academics, civil society and companies to share knowledge about trust and safety risks.
Supporting efforts to develop applications that can help meet society’s greatest challenges, such as climate change mitigation and adaptation, early cancer detection and prevention, and combating cyber threats.
The Forum’s focus for the next year will be:
Identifying best practices
Advancing AI safety research
Facilitating information sharing among companies and governments
Other organisations will be able to join the forum, as long as they meet these criteria:
Develop and deploy frontier models (as defined by the Forum).
Demonstrate strong commitment to frontier model safety, including through technical and institutional approaches.
Are willing to contribute to advancing the Forum’s efforts including by participating in joint initiatives and supporting the development and functioning of the initiative.
All regulation is good, right?
Personally, we’re waiting to see what the Forum defines as frontier models and who the first set of additional members are.
This view is shared by Dr Andrew Rogoyski, of the Institute for People-Centred AI at the University of Surrey, who told the Guardian of his concerns about the leadership and regulation of AI being in the hands of the private sector rather than governments. He is quoted as saying, “It’s such a powerful technology, with great potential for good and ill, that it needs independent oversight that will represent people, economies and societies which will be impacted by AI in the future.”
Further Reading
Netflix throwing two fingers up at the writers

Writers Strike
Writers and actors in the US have been striking. Their demands include a fairer pay structure (the current model, based on residuals, has been eroded by the rise of streaming services) and some reassurances around AI technologies replacing or changing their jobs.
Job Post
In what seems like ill-considered timing, Netflix posted a job advert for a Product Manager - Machine Learning Platform (job ad). While the majority of news outlets are reporting the job as having a salary of $900k a year (roughly £700k), it actually has a range of $300k-$900k total compensation (we assume this includes stock options, etc). We are not defending the salary (and we are very much in support of the writers), but it is worth noting that both Machine Learning and Product Manager jobs tend to be extremely well paid (particularly in Silicon Valley, where this job is based). The skill set for such a role (particularly a Product Manager for Machine Learning) takes years to craft. This particular role is also the first hire in this area, and thus comes with a lot of responsibility and potential stress. Again, we aren’t justifying, just explaining. (Secretly we are polishing our CVs, jokes)
The BBC is reporting how Netflix has angered the striking writers by posting a $900k-a-year (roughly £700k) job advert for “an AI expert”. TechCrunch provided a more balanced discussion of the job advert, ending with this nuanced comment on the situation:
“Hiring an AI researcher for an extravagant salary to refine their recommendation engine isn’t the problem on its own — it’s the hypocrisy demonstrated by Netflix (and every other company doing this, probably all of them) showing that it is willing to pay some people what they’re worth, and other people as little as they can get away with. That’s a deliberate choice, and one that the striking creators hopefully can ensure is no longer possible in the future.”
Further Reading
The writers strike, including the AI part, is well covered in this episode of the Reasons to be Cheerful Podcast (so not strictly reading...)
BBC news article
TechCrunch article
AI to the Rescue - Using Technology to Flag Driving Offences in the UK
UK Leading the Way
In May, the RAC reported the use of “the world’s first AI speed camera” (RAC website) in Lambeth, South London. It is described as having 4D radar technology which can scan inside the car.
This week, Hampshire and Thames Valley police forces used what was described as a “police spy camera van which uses artificial intelligence” (in the Shropshire Star). The van aims to determine whether drivers are using their phones or not wearing seatbelts. Nearly 500 offences were identified in just one week!
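Neither force has published technical details, so purely as an illustration of how this kind of system is often described as working, here is a minimal, hypothetical sketch: a detection model scores each camera frame, and only high-confidence detections get passed on for a human to review. Everything below (the `Detection` class, the labels, the `flag_for_review` function and the threshold) is our own assumption, not anything Hampshire or Thames Valley police have confirmed.

```python
from dataclasses import dataclass

# Hypothetical sketch: score camera frames and route only confident
# detections to a human reviewer. Labels and threshold are illustrative.

@dataclass
class Detection:
    frame_id: int
    label: str         # e.g. "phone_in_hand" or "no_seatbelt"
    confidence: float   # model confidence in [0, 1]

REVIEW_THRESHOLD = 0.80  # below this, discard rather than accuse anyone

def flag_for_review(detections: list[Detection]) -> list[Detection]:
    """Keep only confident detections; a human makes the final call."""
    return [d for d in detections if d.confidence >= REVIEW_THRESHOLD]

if __name__ == "__main__":
    frames = [
        Detection(1, "phone_in_hand", 0.93),
        Detection(2, "no_seatbelt", 0.41),   # too uncertain, dropped
        Detection(3, "no_seatbelt", 0.88),
    ]
    for d in flag_for_review(frames):
        print(f"frame {d.frame_id}: possible {d.label} "
              f"(confidence {d.confidence:.2f}) -> sent for human review")
```

The key design point (in this sketch at least) is that the AI only flags candidates; a person still decides whether an offence has actually been committed.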

Life Saving or Big Brother
Public response to this technology is not yet clear. The RAC road safety spokesperson said “Drivers who stick to the speed limit and obey the law have nothing to worry about regardless of what cameras are in place. It’s also worth remembering that – unlike in other countries – all cameras have to be painted yellow, so they’re plainly visible to drivers.”
However, it is understandable that some people could see this as a “Big Brother is watching” move, or an infringement on their privacy. At the risk of being extremely hyperbolic, it is also easy to see how this could be step one towards a Minority Report-style pre-crime punishment.
Thinking positively (and as the massive True Crime fans we are), we can see other positive uses as this technology develops, eg identifying kidnap victims in cars.
For now, we see the use of AI to prevent road traffic accidents as a positive social application. Let’s hope this continues and the public agree.
Further Reading
Hampshire police news
RAC article
Shropshire Star article
-----------