It’s a little disturbing how easy it is to make “fake” content. I generated a video of myself delivering parts of JFK’s famous “Ask not…” speech, but in Korean.

youtu.be/InYCZOSjR…

I never said those words. My Korean is barely good enough to order food at a restaurant. I grabbed the text from a website, ran it through a translator, and pasted the result in as the transcript for my avatar to speak.

Seeing a video of myself say words that I never said is quite a jarring experience. The political implications are obvious, as witnessed in the recent Argentinian elections. Those AI-faked videos were of poor quality, but they were still impactful. How much more will high-quality faked political videos affect an unsuspecting, unaware public? Takedowns can be issued, but the social networks haven’t had the best report card on moderating content that should be removed.

In a recent panel discussion with Yann LeCun and others, they talk about how the algorithm drives the ultimate output. For social media networks, the goal was attention. The algorithms were tuned to keep viewers’ attention and in turn sell more ads. It can be argued that this promoted unhealthy body images, since videos of anorexic teenage girls attracted a lot of attention and were thus repeatedly surfaced in people’s feeds; that this led to higher levels of depression and worse mental health; and that, because the algorithm’s goal was attention, leaving it unchecked produced a number of adverse societal effects.
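To make the point concrete, here is a toy sketch of how much the choice of objective matters. The field names and weights are invented for illustration; no real platform has published a ranker like this.

```python
from dataclasses import dataclass

@dataclass
class Post:
    predicted_watch_time: float  # how long the model expects the viewer to stay
    predicted_ad_clicks: float
    predicted_harm: float        # e.g. likelihood of promoting disordered eating

def attention_score(post: Post) -> float:
    """Objective tuned purely for attention: harm never enters the ranking."""
    return post.predicted_watch_time + 2.0 * post.predicted_ad_clicks

def adjusted_score(post: Post) -> float:
    """Same ranker with a guardrail: harmful-but-engaging content is demoted."""
    return attention_score(post) - 5.0 * post.predicted_harm

posts = [Post(9.0, 0.4, 0.8), Post(6.0, 0.3, 0.0)]
print(max(posts, key=attention_score))  # the harmful post wins on attention alone
print(max(posts, key=adjusted_score))   # the guardrail flips the ranking
```

The technology in the middle is identical; only the scoring function changes what ends up on the feed.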

There should have been better regulation. It can’t be left to a profit-driven entity to self-moderate. With AI we are seeing a more powerful technological force, and the need for regulation is clear. But how can we achieve it? What is to stop someone from generating fake political videos and spamming targeted social feeds? The damage will be done before any enforcement or takedown can happen. So, what is the driving force behind the technological wonders we see every day? I believe it is a mix of profit and innovation in the name of profit. And again, there need to be guardrails so that the technology can grow properly and avoid misuse.

OpenAI seems to have tried to create a corporate structure with non-profit and for-profit sides. The stated goal of the non-profit side is “to build artificial general intelligence (AGI) that is safe and benefits all of humanity”. Looking back at the fiasco of Sam A’s firing and re-hiring, it is clear that the non-profit side lacks teeth. Maybe that fiasco says more about the board’s mismanagement, but either way, Sam A and his drive for innovation clearly came out the winner.

And now I hear that fake nudes are on the rise :( There are bad actors out there who make it their life’s work to torture people online with content like this, fake or real. The most recent episode of the Darknet Diaries podcast is a really insightful view of one person’s decades-long struggle to combat this. One thing I learned from that episode is that one of the most effective tools is the DMCA takedown: when the subject of the image owns the copyright, they can immediately request removal of the content. But how does copyright apply to AI-generated content? Victims might have to resort to less effective means if this route isn’t readily available to them.

Tech solution…?

Maybe we could “stamp” all AI model usage, so that everything generated carries a mark. Then we could trace any piece of content back to the specific instance of the model that generated it and the specific user of that instance who summoned it. This is likely not possible with open-source AI models, since anyone can run the code on their own and modify it, as long as they understand what they’re doing.
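As a rough illustration of what such a “stamp” might look like, here is a minimal sketch. The model, instance, and user identifiers are hypothetical, and the whole scheme assumes a closed, cooperating generation service that keeps its signing key secret.

```python
import hashlib
import hmac
import json
from datetime import datetime, timezone

# Assumption: only the generation service knows this key, so only it can stamp content.
SERVICE_SIGNING_KEY = b"service-secret-key"

def stamp_generated_content(content: bytes, model_id: str, instance_id: str, user_id: str) -> dict:
    """Produce a signed provenance record tying content to the model, instance, and user."""
    record = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "model_id": model_id,        # e.g. "avatar-video-gen-v2" (hypothetical name)
        "instance_id": instance_id,  # which deployment produced it
        "user_id": user_id,          # which account requested it
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SERVICE_SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_stamp(content: bytes, record: dict) -> bool:
    """Check that the stamp matches the content and was signed by the service."""
    claimed_sig = record.get("signature", "")
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    if unsigned.get("content_sha256") != hashlib.sha256(content).hexdigest():
        return False
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SERVICE_SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(claimed_sig, expected)
```

A detached record like this is trivial to strip, which is why real provenance efforts push the mark into the media itself or into signed metadata; and none of it helps once someone runs an open model on their own hardware.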

Another approach would be to have each content-generation call go through a service that records the model, instance, and user information and tracks that content. Platforms that allow images or video could then require this traceability for anything uploaded. We would also need to register regular images coming from cameras or creative tools, which would be an immense task. If it were all eventually accomplished, the remaining excluded set of content would be material generated illegitimately, and thus more susceptible to being taken down from platforms. Maybe a news organization could implement this and be acknowledged as having trustworthy media.
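A minimal sketch of what such a registry might look like follows. The class and field names are hypothetical, and a production system would be a signed, shared service rather than an in-memory dictionary; this only shows the shape of the idea.

```python
import hashlib
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ProvenanceRecord:
    content_sha256: str
    source_type: str   # "ai_model", "camera", or "creative_tool"
    source_id: str     # model/instance id or device id (hypothetical values)
    user_id: str
    registered_at: str

@dataclass
class ContentRegistry:
    """Central registry a platform could query before accepting an upload."""
    records: dict = field(default_factory=dict)

    def register(self, content: bytes, source_type: str, source_id: str, user_id: str) -> ProvenanceRecord:
        digest = hashlib.sha256(content).hexdigest()
        record = ProvenanceRecord(
            content_sha256=digest,
            source_type=source_type,
            source_id=source_id,
            user_id=user_id,
            registered_at=datetime.now(timezone.utc).isoformat(),
        )
        self.records[digest] = record
        return record

    def lookup(self, content: bytes) -> Optional[ProvenanceRecord]:
        """Return the provenance record if this exact content was ever registered."""
        return self.records.get(hashlib.sha256(content).hexdigest())

# Platform-side check: unregistered media is the "excluded set" described above.
registry = ContentRegistry()
registry.register(b"...video bytes...", "ai_model", "avatar-gen-instance-7", "user-123")
print(registry.lookup(b"...video bytes...") is not None)    # True: traceable
print(registry.lookup(b"...unknown bytes...") is not None)  # False: flag or reject
```

The hard part is not the lookup; it is getting every camera, creative tool, and generator to register content in the first place.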

We’re starting to see some companies explicitly self-regulate. Purple Llama from Meta is an attempt at this. It is interesting that a company that dropped the ball on social-media regulation now has an initiative on AI regulation. It makes sense, though: if I can regulate myself well enough to satisfy the necessary parties, maybe the government won’t feel the urgency. Or, if EU/US regulation is enacted, having a model that can easily adapt and comply will make them more nimble and quicker to win market share.

Maybe companies will wise up and realize the positive gains from doing the right thing. But there will always be bad actors out there, so there will still be a need for government regulation.