(Image via apnews.com)
Staff Writer: Brian Galindo
Email: bgalindo@umassd.edu
Content warning: This article discusses child pornography and suicide.
Hugh Nelson, a 27-year-old British man, was arrested in Manchester, England, for creating child pornography with artificial intelligence software and was sentenced to 18 years in prison.
Using generative AI software made by an American company, Nelson created images of children being sexually and physically abused, both on commission and for free.
Daz 3D, the company behind the software, condemned Nelson’s actions, insisted they violated its terms of service, and said it would do more to stop anything like this from happening again.
Nelson did not stop at AI-generated children. He also took real photographs of children from the internet and fed them into the software as a base for these horrific images. He further admitted to attempting to abuse a 16-year-old boy and to encouraging others online to abuse children.
In light of this, legislators in the U.K. are drafting new laws specifically aimed at the moral risks of AI, and the U.S. isn’t far behind. But is it too late?
AI is everywhere. Microsoft has Copilot, Google has Gemini, and Facebook and Instagram have Meta AI. Those are just the first few that come to mind.
Most AIs, though, have rules they must follow: ChatGPT won’t write dark stories, Snapchat’s My AI won’t discuss disturbing events, and so on. It is fairly common knowledge, however, that the right phrasing could get around the guardrails of most early AI models.
With that said, AI is getting smarter at a remarkable pace, and as it does, it is getting far more difficult to outwit.
Furthermore, some AIs either lack such rules or break the ones they have. Character.AI’s recent controversy is a case in point.
Sewell Setzer III, a 14-year-old boy, recently took his own life, and his family alleges that he was encouraged to do so by his Character.AI “girlfriend.”
After he told the chatbot that he was thinking about suicide, it not only failed to point him toward resources for help, but later brought the subject up again on its own.
Though the bot discouraged him from going through with it, his last conversation with it ended with Setzer saying he would be “coming home to [it].”
Character.AI did eventually add protocols that encourage users who discuss thoughts of suicide to seek help, but only after these unfortunate events.
Although it is unlikely that the AI alone caused Sewell Setzer III to take his own life, the case still goes to show what can happen when these technologies are left unchecked.
So, what can be done about it? Why not just get rid of all AI? If only it were that simple.
As I’ve already said, AI is everywhere on the internet. What once might have looked like a passing fad is now an unstoppable force, infecting every inch of cyberspace.
The answer is simple: Governments must crack down harder on AI companies like these.
Daz 3D may have made creating child pornography a violation of its terms of service, but it clearly didn’t do enough to stop it. There are established ways to flag abusive images automatically, such as the hash-matching and image-classification tools already used elsewhere in the industry, and those flags could be used to stop such images from ever being distributed.
Character.AI’s recent addition of hotline numbers is something, but more is needed. One solution could be to end the conversation immediately when a topic like suicide comes up, or to have the characters themselves encourage the user to seek help.
Until more is done about AI, these horrible actions will only multiply. If nothing changes, it’s only a matter of time before we’re stuck in a cyberpunk horror film.
