The article says it's banned on the free version, and it gives the reasons.
You need the paid tiers anyway to have any hope of getting a GPU that can process more than a few frames per day.
It's not very clear though and the article points that out:

> There is no evidence that the new restriction is limited only to the free tier of Google Colab – at the bottom of the list of prohibited activities to which deepfakes have now been added, is the note ‘Additional restrictions exist for paid users’, indicating that these are baseline regulations.

I also checked the Discord referenced in the article, and the users there are saying that it works on the paid version. I'm hoping that it's just a free-version-specific ban due to intensive resource usage, as the resources are rather limited.
This feels like trying to ban nuclear weapons for others once you have them yourselves...
Banned from the public ;)
Used anyway
Well, banned for poor people basically
"Let's make a technology that will allow anyone to create a perfectly convincing video of anyone saying or doing anything, surely that won't have any potential for abuse."
One of the worst parts is that now anyone can just say "it was a deepfake, I didn't say that."
I've never seen a deepfake I didn't immediately recognize as fake... Which could be taken in two different contexts...
Exactly, I've seen some that were extremely convincing in quality but obviously fake because of the content. Creating a 10-second clip of someone saying something, if you have a somewhat decent lookalike to start with, is doable. There are even audio deepfakes to simulate the voice.
A classic case of "we spent so much time doing it that we forgot to ask if we should be doing it".

Those are political weapons. They don't just go away.
This seems really dumb honestly
It's unenforceable other than by throttling. It's probably a legal thing to shield them from liability.
My thoughts as well. How can they even tell you're making a deep fake?
They probably have fingerprints of common deepfake algorithms. Most people aren't running a new algorithm they designed; they got something "off the shelf."
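For what it's worth, that kind of fingerprinting doesn't have to be fancy. Here's a purely speculative sketch of matching executed notebook code against a small signature list; the marker strings, and the idea that Colab does anything like this, are my own assumptions rather than anything from the article:

```python
# Speculative illustration only: one way a hosted notebook service *could*
# flag well-known "off the shelf" deepfake toolkits, by scanning a code cell
# for telltale strings. The signature list below is made up for this example.

KNOWN_SIGNATURES = {
    "DeepFaceLab": ["DeepFaceLab", "SAEHD"],
    "faceswap": ["faceswap", "lib.cli.launcher"],
}

def flag_deepfake_tooling(cell_source: str) -> list[str]:
    """Return the names of any known toolkits whose markers appear in the cell."""
    hits = []
    for tool, markers in KNOWN_SIGNATURES.items():
        if any(marker in cell_source for marker in markers):
            hits.append(tool)
    return hits

# A cell that clones a well-known repo would trip the check:
print(flag_deepfake_tooling("!git clone https://github.com/iperov/DeepFaceLab"))
# -> ['DeepFaceLab']
```

Anything home-grown, or even just a renamed fork, would sail straight past a check like this, which is exactly why it would only catch the off-the-shelf crowd.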
I’m so sick of such amazing stuff being closed off to the public. Something that was so fun and such a great learning experience for all of us.
I'm excited for when AI actors can play a convincing role and the average indie film maker can create high quality films with a full AI cast.

I'm not excited for when you can't trust any video evidence ever again.