Matthew Blakemore on the creative future

Matthew Blakemore is an AI strategist with over 15 years of experience in technology and a decade in AI. With a track record spanning digital transformation, ethical AI deployment and global standard-setting, he has worked at the intersection of innovation and responsibility.
In this exclusive interview with The SpeakOut Speakers Agency, Blakemore shares his insights into how AI is reshaping the creative sector, why responsible implementation matters now more than ever, and what businesses must do to stay ahead.
AI is rapidly reshaping creative industries — from film production to content marketing. In your view, where do the most transformative opportunities lie for the creative sector as these technologies mature?
So I think that generative AI is certainly one of the areas where, in the creative sector, we could see significant changes in how businesses operate. There are a lot of ethical questions that need to be asked around copyright and the materials that have been used to train these generative AI models.
But fundamentally, despite those challenges, I do think there are opportunities in the sector to use generative AI to enhance the work of these companies and improve their content output.
For example, in the film industry, there’s the potential to use generative AI to create a more sustainable film-making environment, where you don’t necessarily need to send a team around the world to film in a particular location because generative AI could create that location.
So there are lots of different things this technology can be used for, and it's quite an exciting time. I was recently working on a content enhancement solution myself, and I think we're likely to see a lot of improvements in that area as well.
There are companies like Flawless AI, for example, that are working on tools that, when a piece of content is dubbed, adjust the voice and the actor's lip movements so that they sync directly, allowing actors to appear to speak in different languages.
Instead of traditional dubbing, where the voice and the lips move at different times, you have something very streamlined and great for an audience. So there are lots of different tools out there that are going to really impact the creative sector, and I think in a positive way.
Critics argue that AI will erode human creativity and eliminate jobs. How realistic is that fear, and what balance should leaders strike between innovation and workforce sustainability?
I think it’s certainly a risk, and one we need to be aware of. But that’s where it really comes down to responsible AI and how we implement these solutions. The choice of how these solutions are implemented lies with management, and if you wanted to, you could significantly reduce the headcount of different companies using these tools.
However, the danger of doing that, as with any industry, is that these models need to be continuously trained. If you remove staff, you end up in a situation further down the line where the AI models have drifted and need retraining, but you no longer have staff with the skills necessary to do so.
It becomes a bit of a challenge for the organisation, and we’ve seen companies fail because of that and because they’ve not prepared well for the fact that these models are not an endpoint. They need to be continuously trained and continuously updated.
So I think there’s a balance. Companies can certainly streamline their operations, but what I would like to see is ethical implementation of AI within the creative sector: using it as a hybrid solution alongside humans to enhance productivity, take away some of the more administrative tasks, and improve the user experience without cutting so many jobs.
I think it can actually be a job creator, certainly when it comes to data science, as I can see a big need in the sector for more data scientists. If you look at the quality of the data generally in the media and entertainment industry, there’s a lot of data normalisation and cleaning that needs to happen, and that will inevitably create jobs in the sector.
As AI tools become embedded in content creation and distribution, the ethical stakes are rising — from bias to intellectual property. What are the biggest ethical challenges you’ve observed, and how should organisations address them?
I think it’s a bit of a minefield when it comes to the ethical quandaries, because there are lots of different types. For example, on one of the projects I was recently working on, in video analysis, we encountered a lot of potentially ethical situations: the AI we trained on various clips from Hollywood movies would recognise certain issues as more severe than others based on the training data set.
If the training data contains more examples of one gender stabbing another that are labelled as strong violence, the model can generalise and assume that every time that gender stabs the other, it’s a little more severe than the other way around. Those sorts of issues raise ethical quandaries, especially around gender bias and racial bias, which can be real problems.
Then there’s the ethical quandary of the job losses that could potentially be caused by these solutions. On that front, it really comes down to strategic implementation, and I think there are ways companies can use this technology in a hybrid way to ensure that humans keep their roles within organisations while productivity and performance improve.
Other ethical issues can come down to the particular use case that the company is looking at. The beauty and danger of AI is that it can be used in so many different ways; it really comes down to strategic decision-making.
There are frameworks that serve as a very helpful guide to how these tools can be implemented, such as the EU AI Act. That’s useful for understanding what counts as high-risk AI versus lower-risk AI, and the sorts of ethical issues you may encounter when taking on one of these projects.
From your experience, what are the most practical ways the creative sector can integrate AI — both to enhance performance and maintain artistic integrity?
I think the five key ways in which the creative sector can use this technology are:
- Information analysis – customer recommendation engines, analysing customer behaviour, and optimising the user experience.
- Content enhancement – visual effects; for example, tools that sync lips in dubbed media for a smoother viewing experience.
- Extract and enhance – semantic segmentation and image recognition. One of the big projects I worked on recently developed a multimodal AI solution that could recognise the severity levels of violence within video content.
- Data compression – AI can reduce the size of large media files to save cloud storage while maintaining quality.
- Content creation – generative AI offers exciting opportunities across marketing and entertainment, from promotional material to creating backgrounds for films. Of course, there are ongoing ethical debates around copyright, but it opens fascinating conversations about creativity itself – whether it lies in the idea, the execution, or both.
About Matthew Blakemore
As a leading artificial intelligence speaker with The AI Speakers Agency, Matthew lends his expertise to boards, C-suite forums and events around the world, helping organisations navigate the challenges and opportunities of AI integration.
This exclusive interview with Matthew Blakemore was conducted by Tabish Ali of The Motivational Speakers Agency.