Generative AI: The double-edged sword of artistic expression

Tuesday 23 April, 2024

Generative AI is now one of the fastest-growing sectors globally. Its estimated value was around $38bn (USD) in 2023, and it is projected to grow by 30-40% in 2024. With the rise of consumer-level software, generative AI tools are becoming ever easier to access.

There is still significant ongoing debate about the pros and cons of generative AI. Some reports voice concerns that the content used to train generative AI is based on stealing other artists' creative work, or that staff are being replaced with AI. Others have argued for its positive uses, where it alleviates tedious manual processes and dramatically speeds up creative design work. While it can help with tasks that do not need human creativity or intervention, it could seriously put jobs at risk without the correct safeguarding measures, a concern we briefly covered in our previous article and have read troubling reports about.

This article will review the recent positive advancements in generative AI, how creatives can harness these tools in their work, and what pitfalls to avoid.

Using AI visual generation tools

There are many reasons to utilise AI within the creative space and for general use. Take text-to-image generation for iterative work: by simply typing in an idea for a logo, you can quickly get a visual output to provide inspiration or a baseline to work from. For non-creatives or small business owners trying DIY design, using the output as the 'final' logo could be convenient when starting a new business on a low budget, or when testing concepts before investing in a human team to redesign.

You could also use AI to generate placeholder content whilst building a website, to better understand what the content could look like on the finished site without needing real images that may not be ready yet. This presents a major threat to the stock image sector, which is why it is adopting a two-pronged approach: first, fighting hard to detect AI-generated images that may have borrowed one of its visuals as a starting reference or been trained on its libraries; and second, leaning in and building AI generation features into its own products. We will therefore likely continue to see more copyright infringement cases as stock image libraries hunt for places where their assets have been used without permission.

Mobile integrated AI software

Google's Pixel phone was the pioneer in 2023 in utilising generative AI within its camera, with the ability to remove unwanted objects in the background via its 'Magic Eraser' function. Another Pixel feature is 'Best Take', where the AI combines multiple images of the same shot and merges them into the perfect photo where no one is blinking or looking away from the camera. Whilst this feature is undoubtedly useful for improving quality with little effort, it could also start warping our perception of normality and our expectations of image 'perfection', much as media portrayals of body perfection contributed to a rise in body dysmorphia. Much of what mobile cameras already capture involves behind-the-scenes processing; it could be argued that stitching multiple images together is already not faithful to what was captured.

In addition, active image manipulation with AI tools could damage the mental health of the younger generation, who will be more exposed to these 'fake' images. They may become more engrossed in maintaining a 'perfect' persona, contributing to rising levels of social anxiety disorders. Transparency may come in the form of enforced stamping of AI-processed images, sophisticated AI detection software, or a counter-movement of keeping it real, like #NoFilter.

Also, with most AI processing happening in the cloud come concerns about privacy, since there are no guarantees that others do not also have access to your data. A remedy could come from devices that process AI natively. An example is Samsung's 2024 flagship mobile lineup, powered by their Galaxy AI technology, with most AI functions running natively and securely on the phone without requiring an internet connection. We will no doubt see more companies offer this as cloud processing and data breaches become an increasing concern.

Video generation

OpenAI recently demonstrated Sora, its latest generative AI tool, which can output near-photorealistic video from text prompts. This text-to-video system can create minute-long videos, effectively granting users access to a powerful, high-quality video production system and bypassing the need for a full production crew, actors, or the time required to plan, film and edit. Only a year ago, text-to-video results were unconvincing: movement was jerky, frames were inconsistent and lacked cohesion, and outputs ventured deep into the uncanny valley, like the highly disturbing spaghetti video featuring Will Smith.

Sora and other generative video providers have made a considerable leap in motion, lighting and quality. This is an excellent example of why safeguards must be in place to protect the millions of people working in the global film industry. Relying purely on generative AI software would eventually come at the cost of losing traditional human skills, to the detriment of the industry and its creative processes. It could even see actors having to defend legal ownership of their digital likenesses, leading us down a path similar to the 'Joan Is Awful' episode of Black Mirror.

AI shortcomings and tell-tale signs

As AI image generation continues to evolve, there will come a point where differentiating human-made work from AI output becomes increasingly hard. Still, for now, there are telltale signs that reveal AI-generated images: distinct shapes merged into one another, shadows that do not match the scene lighting, garbled visual text, and object sizes that make no contextual sense. Human anatomy, and hands in particular, is another common failure, with fingers merging together or many more digits or hands than usual.

Again, this is changing very quickly, with rapid improvements in AI's ability to achieve better realism. In contrast, 'traditional' image editing by a human artist, such as with Adobe Photoshop, can be more accurate, with closer attention to the details where generated images currently fall short. While AI has the upper hand in speed, involving humans in the process yields finer detail in both anatomy and the contextual understanding of the image being manipulated. As for the authenticity of an image, there are also ways to tell whether it has been doctored, such as verifying the file's metadata and applying forensic analysis techniques.
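As an illustration of the metadata point, a minimal sketch in Python using only the standard library is below. It checks whether a JPEG file contains an EXIF segment at all; photos straight from a camera normally embed EXIF data, while images that have been regenerated, screenshotted or heavily processed often have it stripped. This is only one weak signal, not proof of manipulation, and the function name and approach here are our own illustration rather than any standard forensic tool.

```python
import struct

def has_exif(path):
    """Return True if a JPEG file contains an EXIF (APP1) segment.

    A missing EXIF block can be one (weak) hint that an image was
    re-saved, generated or otherwise processed; it is not proof.
    """
    with open(path, "rb") as f:
        if f.read(2) != b"\xff\xd8":           # no SOI marker: not a JPEG
            return False
        while True:
            marker = f.read(2)
            if len(marker) < 2 or marker[0] != 0xFF:
                return False                    # malformed or end of file
            if marker[1] == 0xDA:               # start of scan: no EXIF found
                return False
            size = struct.unpack(">H", f.read(2))[0]
            payload = f.read(size - 2)
            if marker[1] == 0xE1 and payload.startswith(b"Exif\x00\x00"):
                return True                     # APP1 segment carrying EXIF
```

Real forensic work goes far beyond this, for example error level analysis or checking a 'Software' tag inside the EXIF data itself, but even a simple presence check like this can flag images worth a closer look.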


In conclusion, generative AI and its recent rise in mainstream popularity have progressed at unprecedented speed in terms of accuracy in simulating photorealistic imagery. However, with increasing adoption rates and companies prioritising time and cost savings over quality, we will see more mishaps and negative pushback on what is otherwise a useful technology when adopted appropriately.

This underlines the importance of protecting human labour and everyday media consumers, and of mitigating the damage already pending. We are still a long way from the peak of AI, yet we are already witnessing ethics being eroded. To fully integrate AI into our lives, we will need to move faster in establishing safeguarding policies and laws to ensure it benefits everyone.
