top of page
  • Writer's picturelauralikespi

AI News - Fri 1st Sept 2023

Updated: Sep 1

As the final week of summer ("what summer?" all British people respond) wraps up, it is interesting to see the trend in AI for school education articles fade. This week focuses on some exciting news from OpenAI and some finger wagging at the UK for dropping the ball on AI regulation.


 

Companies to Watch


AI21 Labs - $1.4 Billion Valuation at Series C

Tel Aviv startup AI21 Labs has announced a Series C funding round of $155 million (valuation $1.4 billion). They themselves say the company is on a journey to "bring reliable AI to the enterprise" - they have their own large language model Jurassic-2 (Read more)


Superframe

Superframe, who have an AI model to manage complex Salesforce integrations, have raised $5 million in seed funding from more than 40 angel investors (Read more)


Ellie.ai

Ellie.ai want to "help data teams design great data products", and have raised €2.5M in seed funding. They have been referred as the Figma for data teams (Read more)


 

Good to Know


ChatGPT for Enterprise

The biggest AI news this week has to be the long awaiting announcement of ChatGPT Enterprise - "enterprise-grade security & privacy and the most powerful version of ChatGPT yet". The announcement has quotes from Klarna, Asana and Canva showing their excitement for using ChatGPT. The offering includes single sign-on, analytics dashboards, promises of company prompts not being used in trainings, as well as: The most powerful version of ChatGPT yet

  • Unlimited access to GPT-4 (no usage caps)

  • Higher-speed performance for GPT-4 (up to 2x faster)

  • Unlimited access to advanced data analysis (formerly known as Code Interpreter)

  • 32k token context windows for 4x longer inputs, files, or follow-ups

  • Shareable chat templates for your company to collaborate and build common workflows

  • Free credits to use our APIs if you need to extend OpenAI into a fully custom solution for your org

(Read more)


This announcement is an important step towards OpenAI becoming profitable.


Google's Deepmind Testing Watermarks for AI-Generated Art

Google's Deepmind have launched a tool SynthID for watermarking AI-generated images. This model can also identify if an image was created by AI (Read more)



BBC Bitesize has a short quiz to test how well you can identify AI generated images from real images (we got 7 out of 8 correct!).


AI Beats Humans in A Drone Race


In a very meta sentence, an AI has won in a drone race (machines manning machines feels a little off for some reason). The AI called Swift was created by the University of Zurich and has beaten the world champion drone racer (Read more)


Arabic Language Chatbot Launched


Jais, a bilingual chatbot (Arabic and English), has been launched in the UAE (Read more)

 

AI in Real Life


"Soft Robotics" AI-Powered Drug Implant for Chronic Illnesses

Lots of words in that title, but researchers at University of Galway and Massachusetts Institute of Technology (MIT) have developed a soft robot device which uses AI to administer drugs for long term conditions such as diabetes. Soft robots do not have the typical hard bodies or rigid joints which we usually think of when we think of robots (eg the Terminator style robot or Pepper). This device can remain in the body for long periods of time, sensing when the body is rejecting it or tissue is building around it, and move on when needed (Read more)


Call of Duty to Use AI for Moderation

The new Call of Duty game will be released on 10th November this year, and in an effort to prevent the well known problem of toxic behaviour online game makers Activision are turning to AI. They will be using ToxMod a voice chat moderation tool (Read more)

 

Interesting Reads

  • Financial Times opinion piece - What happens when AI passes through the ‘uncanny valley’? - ie when AI becomes too convincingly human

  • Yahoo Finance's article - A.I.’s un-learning problem - looking at whether model's can really forget user data

  • Guardian's article on career resilience in the AI future

  • Wired published an interesting article on the man trying to beat the current copyright laws (which state copyright doesn't apply unless a human made something) but proving his AI is sentient

 

Focus - UK Lagging Behind in AI Regulation


The UK has been in a lot of headlines this week, and unfortunately nothing feels overly positive.


House of Commons - Science, Innovation and Technology Committee - Report

A report titled The governance of artificial intelligence: interim report was published this week. The report put forward by a number of MPs identifies 12 challenges of AI goverance which have been accelerated by generative AI:


1) The Bias challenge. AI can introduce or perpetuate biases that society finds

unacceptable.

2) The Privacy challenge. AI can allow individuals to be identified and personal

information about them to be used in ways beyond what the public wants.

3) The Misrepresentation challenge. AI can allow the generation of material that

deliberately misrepresents someone’s behaviour, opinions or character.

4) The Access to Data challenge. The most powerful AI needs very large datasets,

which are held by few organisations.

5) The Access to Compute challenge. The development of powerful AI requires

significant compute power, access to which is limited to a few organisations.

6) The Black Box challenge. Some AI models and tools cannot explain why they

produce a particular result, which is a challenge to transparency requirements.

7) The Open-Source challenge. Requiring code to be openly available may promote

transparency and innovation; allowing it to be proprietary may concentrate market

power but allow more dependable regulation of harms.

8) The Intellectual Property and Copyright Challenge. Some AI models and tools

make use of other people’s content: policy must establish the rights of the originators of this content, and these rights must be enforced.

9) The Liability challenge. If AI models and tools are used by third parties to do

harm, policy must establish whether developers or providers of the technology bear any

liability for harms done.

10) The Employment challenge. AI will disrupt the jobs that people do and that are

available to be done. Policy makers must anticipate and manage the disruption.

11) The International Coordination challenge. AI is a global technology, and the

development of governance frameworks to regulate its uses must be an international

undertaking.

12) The Existential challenge. Some people think that AI is a major threat to human

life: if that is a possibility, governance needs to provide protections for national security.


The report itself lays out a balanced discussion of the benefits of AI (particularly in medicine and education). It also has a somber tone with regards to the challenges and the need for regulation (even calling for the King's Speech to include something on AI). It ends on a serious note:


"We urge the Government to accelerate, not to pause, the establishment of a governance regime for AI, including whatever statutory measures as may be needed."


House of Commons - Culture, Media and Sport Committee - Report

The House of Commons has been busy this week with another committee publishing another AI report - Connected tech: AI and creative technology. In this report, MPs are calling for the Government to abandon plans for allowing AI to use music, art, etc without usual copyright rules applying (Read more)


National Cyber Security Centre - Blogs on the Security of Chatbots


The National Cyber Security Centre (NCSC) have raised the alarm about potential security vulnerabilities of Large Language Model Chatbots (eg ChatGPT) in two blog posts (one and two). They categorise these as 'prompt injection' attacks and training data being poisoned. Both blog posts are aimed at businesses and give really useful tips on what to think about when designing LLM systems.


Copyrights Are Preventing Welsh Language Models Being Developed


Despite promising signs of ChatGPT understanding Welsh, researchers have said copyright rules are preventing AI development in Welsh. The government are considering loosening the rules (Read more)


 

Let's hope a return to school term sees some regulation being enacted, and some interesting real life applications being shared.

6 views0 comments

Recent Posts

See All
bottom of page