A couple of highlights that are likely of most interest:
>Well, for starters, it means **we will not be doing any moderation of unpublished single-player content**. This means we won’t have any flags, suspensions, or bans for anything users do in single-player play. We will have technological barriers that will seek to prevent the AI from generating content we aren’t okay with it creating — but there won’t be consequences for users and no humans will review those users’ content if those walls are hit. We’re also encrypting user stories to add additional security (see below for more details).
>All stories in the Latitude database are now encrypted. They are decrypted and sent to users’ devices when requested. Because the AI must take plain text to be able to generate a response they are also decrypted before being sent to the AI to generate a new response.
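To picture the flow they're describing, here's a minimal sketch, assuming a symmetric scheme like Fernet; the key handling and function names are placeholders of mine, not anything Latitude has published:

```python
from cryptography.fernet import Fernet

# Hypothetical server-side key; how Latitude actually manages keys isn't public.
STORY_KEY = Fernet.generate_key()
fernet = Fernet(STORY_KEY)

def store_story(plaintext: str) -> bytes:
    # Stories sit encrypted in the database.
    return fernet.encrypt(plaintext.encode("utf-8"))

def fetch_for_user(ciphertext: bytes) -> str:
    # Decrypted only when the owner requests the story on their device.
    return fernet.decrypt(ciphertext).decode("utf-8")

def fetch_for_model(ciphertext: bytes) -> str:
    # The model can only consume plain text, so the story is decrypted
    # again right before each generation request.
    return fernet.decrypt(ciphertext).decode("utf-8")
```

The point being that plaintext only ever exists transiently, on the user's device or inside a generation request, never at rest.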
It would be interesting if when attempting to publish content, it would be scanned by the filter and treated in one of two alternate ways. If the trigger is severe enough and the algorithm has sufficient confidence, it prevents you from publishing and possibly points out the problem area. Alternatively, if the filter thinks it *may* be inappropriate but it's below a certain confidence threshold, it allows it to be published but immediately flags it for human review. (And, of course, if no problematic content is detected it lets you publish normally.)
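Roughly this shape of logic, say (a sketch only; the thresholds, the classifier score, and the review queue are all invented for illustration):

```python
review_queue: list[str] = []  # hypothetical queue feeding human moderators

def handle_publish(story: str, score: float) -> str:
    # `score` is the classifier's confidence in [0, 1] that the story
    # contains disallowed content.
    if score >= 0.9:
        # High confidence: refuse, ideally pointing at the offending span.
        return "blocked"
    if score >= 0.5:
        # Below the hard threshold: publish, but flag for human review.
        review_queue.append(story)
        return "published, pending review"
    # Nothing problematic detected: publish normally.
    return "published"
```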
I can support a healthy dose of skepticism in light of past events, but if Latitude has changed course in a direction that most people can live with I'd hope they would be commended for it. Holding grudges about a past that can't be changed is of no benefit to anyone.
We actually have something like this. Right now it either doesn't let you publish it (in which case you can submit for human review) or requires you to add a NSFW filter.
With all of this, we will continue to listen to feedback and improve how these types of classification work. These aren't trivial problems to solve. But we're working to make these policies so that the majority of people read how they work and think "ok, that's fair" even if it's not exactly what they want.
You guys ARE on the right track here. I think this system is a lot more convenient for the users while still solving the issues involving the unwanted content and AI-triggered bans.
Well yeah, that's survivorship bias: most people who didn't like the new system already left. Only the die-hard fan boi remains. NovelAI has been performing way better than crippled Dragon for months, so there is no reason to stay here.
>NovelAI has been performing way better than crippled Dragon for months, so there is no reason to stay here.
You think we wouldn't have left if we weren't broke?
It's better to appreciate what they're doing now than to hold what they did against them. After all, you can't change the past, but you can change the present and the future.
Well, it hasn't been one thing. Ever since they got this new guy they've been communicating more, and actually bothered to fix the damn thing, which is what everybody was annoyed about in the first place. If they fuck it up again tho people will go right back to being mad lol, that's how this works
Except the new guy has actually fixed a problem the community wanted fixed. Whether he will continue to do this remains to be seen, but rn it's an improvement. I'm just glad I can go back to AI Dungeon without worrying about the AI making messed up stuff and getting me into trouble for it lol.
###You should just straight up get rid of everything except the CP filter and simply not let people publish adventures if they trigger words. It wouldn't be too hard and maybe you might have a few more customers.
Nah, it's like... one and a half lines of code.
Options are EXTREMELY easy to implement. Developers just usually hate users and don't want them to have ANY options.
Just look at Nintendo. Took them what? 25 years to allow us to rebind buttons?
Still not able to change music and sounds individually in 90% of their games...
To be fair... you could barely rebind buttons in a lot of older games, and Nintendo's most popular systems have motion control built in, with about 90% of games allowing it. I'd assume it's incredibly difficult to let a player move around with their controller while also letting them casually use any button they want. This isn't an excuse, since not every Nintendo game has said feature, but still.
Oh ok
Also I’d say Nintendo’s not a great example, bc idk about anyone else but I can’t imagine using anything but the original button mapping for any of their games, or anyone’s games really. I don’t really get it
Then why did they add it?
Oh... because people have been wanting it, ever since developers were too stupid to get jump and shoot on the right buttons.
Even RIGHT now, i am playing 2 things at once, Switch and PC, and one of them has the bottom button as confirm and the right button as back, and the other one is flipped...
And you can't imagine that people want to change that? That's a massive lack of imagination, even when you completely ignore the fact that there are millions of disabled people who want to change their controls.
I couldn't use my left index finger for a week, so i bound L1 and L2 to the weird buttons on the side that are never used, and bam, problem solved.
Without rebinding, i would not have been able to play properly for a week.
The disabled thing makes sense and I’ll admit I never thought about that
But who is to say which buttons are the “right” ones besides the dev? Can’t be that hard to adapt; I actually used to boast about my ability to easily adapt to game controls when I was a kid, I thought it was just a thing you could do easily
Can you tell us exactly what will be stopped by your walls approach? You directly mentioned children, but this seems to imply other things are blocked as well. Are you open to telling us what they are?
>Additionally, those barriers will **only target a minimal number of content categories that we are concerned about** — the current main one being content that promotes or glorifies the sexual exploitation of children.
The article says it at the end (kinda):
>What if unpublished content goes against third party policies?
>If third party providers that we leverage have different content policies than Latitude’s established technological barriers then if a specific request doesn’t meet those policies that specific request will be routed to a Latitude model instead.
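So per request, the fallback would presumably look something like this; my guess at the shape of it, with made-up model names and a stubbed policy check, not Latitude's actual code:

```python
def generate(prompt: str, model: str) -> str:
    # Stand-in for a call to whichever backend hosts `model`.
    return f"[{model}] continuation of: {prompt!r}"

def meets_third_party_policy(prompt: str) -> bool:
    # Stand-in for the stricter provider-side check (e.g. OpenAI's).
    return "forbidden" not in prompt.lower()

def route_request(prompt: str) -> str:
    # Requests the third party would refuse get quietly rerouted to an
    # in-house model for that one generation, instead of being blocked
    # or held against the user.
    if meets_third_party_policy(prompt):
        return generate(prompt, model="third-party-dragon")
    return generate(prompt, model="latitude-inhouse")
```

Doing it per request would mean one bad turn doesn't lock the whole story out of the better model.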
They lost a ton of people. Ryan is just saying it had a low impact to make it seem things weren't as bad as they were. It was, and for a while it even looked like the dev team took the money and jumped ship themselves.
sure did look like they went off the radar for a few months, i'm not denying that and it definitely didn't help anything. i'm also not denying they lost a lot of paying users- *however* it wasn't so bad that they had to start firing people to stay out of the red or anything. clearly they're running just fine now considering they're still trucking along months after that gigablunder and to their credit they are starting to make amends on a fair amount of the issues people had
Just because they've somewhat come back on track doesn't mean it wasn't as bad as it was. In fact, some people came back because the other free alternative, Infinite Story, has shut down its servers in recent months and completely died. Most of AID's old paying players never came back, but they have since been replaced to some degree by new players completely unaffiliated with anything that happened with AID in the past. In short, they got damn lucky, otherwise we'd have two dead games on our hands rather than one. Like the original dev once said, he'd rather let it die than undo any of the damage he caused, like the total d*ck he was..
If I’m understanding this correctly, this is a very positive change. While this should have been done from the get go, I’m glad to see something is being done.
We don't necessarily believe them, we just hope that they improve. If that means allowing them to take small steps in the right direction then we won't stand in their way.
These are...
Really good changes, Nick, I approve. This is a much more appreciated approach to the issues and I think this is a really good step forward. Thank you.
Question: what's with the word "student" getting flagged? I intend to do college RP, but anytime I do something sexual with them the filter just kinda flops.
Any way you think it can allow college students? In a lewd manner?
This will mean that users aren't censored for anything they write or say, though the AI might not be able to give a response sometimes if it is unable to think of a response that passes its filter
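Presumably something like this loop under the hood (a guess at the mechanism, not confirmed by Latitude; every name here is made up):

```python
import random

def passes_filter(text: str) -> bool:
    # Stand-in for whatever classifier gates the AI's output.
    return "disallowed" not in text

def model_generate(prompt: str) -> str:
    # Stand-in for sampling one continuation from the model.
    return random.choice(["a fine continuation", "disallowed content"])

def safe_generate(prompt: str, max_tries: int = 3) -> str | None:
    # Resample until an output clears the filter; if nothing passes,
    # the user just sees a "couldn't generate a response" message
    # rather than any flag on their account.
    for _ in range(max_tries):
        candidate = model_generate(prompt)
        if passes_filter(candidate):
            return candidate
    return None
```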
I suspect he has helped guide them safely through better matching the ideals they care about, after ~~OpenAI~~ ClosedAI strong-armed them into behaving in a way disrespectful to their users. A more experienced company might have done a better job resisting the pressure in the first place, but Latitude is still very young and inexperienced.
Clearly they knew they messed up, so they went silent until they had Ryan to guide them through not messing up again. Ryan definitely deserves thanks, though!
"developers don’t want in a game, they make those impossible. For example, in Skyrim, it’s impossible to kill kids." Literally the worst example, that decision is a easy way out of controversy, it has nothing to do with what the developers want, besides everyone installs the mod that allows you to. :)
I think the reason devs avoid killable children, or children at all, in their games is that they don't think they can be used. Games have no problem actively having quests or storylines where children die directly, so I doubt they care if random child NPC #50 were to be killed by the player.
Especially since a prominent child in the game is a murderer (the vampire).
I don't know "why" persay, people do it in any bethesda game really, you could by default in the orginal fallout then when bethesda took hold of the ip they removed that feature, maybe people want that piece of the orginal games back, maybe they just want to ragdoll them for fun, or ragdoll them because they're annoying. Obviously nobody would condone those actions in real life, but it's a video game and thank god nobody is really being harmed, there's no reason why it shouldn't be a feature other than there being no gameplay reason to or to avoid any controversy.
Because the children in Skyrim are, above all, some of the most annoying NPCs to come across.
Plus most of us don't have the sympathy levels of a closeted 14-year-old white girl at an animal rights festival.
Finally, the shitshow has ended. Good job, maybe I'll come back to AI Dungeon because of this. Maybe. Either way, it's a win-win for everybody and I'm satisfied.
Since the updated Community Guidelines disallow incest, does stepcest (as in, sexual relations between stepfamily) also count as incest in this context? Genuinely just curious, since some websites (particularly porn websites) disallow content involving incest, while being perfectly fine with stepcest.
Of course not. This whole thing was only implemented because the creators are hyper-conservatives. Why would they let you run your fetish?
They literally named this update The Walls Approach, the antithesis of any kind of AI generation software, but the kind a conservative 70-year-old politician would love.
Something you wanna talk about buddy? Lol but I am also interested in this answer.
I feel like the AI wouldn't be able to understand the difference; step-sister, step-brother, etc. would probably just equate to the normal versions of those terms.
the dev team is still fairly active in ai multiverse from time to time. we just had mavrick hop in and say he was actually working on it atm, mentioning that it'll have extra search filters and stuff for people to use as well as regular old unfiltered searching of published scenarios
Eh. It’s not what I wanted, but at least it’s one of the better halfway-decent compromises that I expected Latitude *might* **eventually** be forced to make.
I guess it’s just too much to ask for an unfiltered, uncensored, and unfathomably unlimited universe where I’m free to unleash my unholy creative potential, ignore all boundaries and conventions, and let loose all restraint and inhibition for the sake of catharsis—sublimation, from a psychoanalytic perspective—and embrace my inner Dionysus.
I mean, you'll still be able to torture and kill them in AI Dungeon, as long as you don't have sex with them.
You know, because while society may disapprove of torture and violence, they apparently aren't nearly as concerning as *deviant sexuality.* Oh, the horror!
^^^\(Sarcasm ^^^at ^^^the ^^^end, ^^^just ^^^to ^^^be ^^^clear.)
Society has always been fucked in some way. I'm glad to know, though, that I can still resort to violence, but rip having my femdom sorceress succubus lord waifu.
Jokes aside though, I hope this is the beginning of the AID rebirth
Subjects like slavery, torture, and genocide evoke a *much* stronger emotional response in me than so-called sexual deviance, because these three things have measurable, visible effects on the society in which I live, whereas other people’s sexuality and kinks are (mostly) none of my business, and I don’t really care otherwise.
So, you like feet ~~you dirty **podo**phile~~? Okay. You fap to loli hentai? ~~Great!~~ Fine, don’t care. You like to fantasise about [REDACTED] and [REDACTED] with your own [DATA EXPUNGED], and then [DATA EXPUNGED] while she’s dressed like [REDACTED] so that your [DATA EXPUNGED] can [REDACTED] [DATA EXPUNGED] in her sleep?! Alrighty then, although I didn’t need to hear *everything*.
But millions of people dead or dying, hundreds of thousands taken prisoner and worked to death, senseless shootings, torture and executions, just slaughtering people ultimately for being different? That’s **monstrous**, and certainly much more difficult to distance myself from emotionally so I could attempt to understand it with as few personal biases as possible.
See, one would think that would be the case, but many people don't seem to care much about genocide but are horrified by any sexual attraction to children. Or at least don't mind other people entertaining themselves with fictional depictions of brutal murder, but think fictional depictions of child molestation ~~are turning our children into Satan worshippers like the "Rock n' Roll" music~~ are somehow more dangerous.
That's a topic for somewhere else, however, as the Reddit admins have made it clear that any posts insufficiently negative toward ~~the Rock n' Roll music~~ pedophilia will ~~corrupt today's youth~~ not be tolerated.
Personally, I've always been more concerned by the AI's tendency to push things toward more extreme and more deviant content on its own. Letting people indulge in fantasies is one thing, but *encouraging* people to fantasize about more and more extreme things is something else entirely. *That's* what I've been worried about.
^^^\(I ^^^only ^^^noticed ^^^your ^^^"podophile" ^^^joke ^^^the ^^^second ^^^time ^^^I ^^^read ^^^your ^^^post. ^^^Well ^^^played.)
When using AI Dungeon, the result is not only an expression of the user's creativity, it is also a reflection of the unique qualities of the AI. The AI's tendency to generate inappropriate content is not some unavoidable aspect of AI in general, it is a reflection of the specific data it was trained on. That data was selected by humans.
I'm not sure exactly how the filter will behave now, but if the filter only limits the AI's responses, it might function like a crude replacement for better training. The AI has always been a reflection of the developers' unseen choices in training and implementation, leading responses away from some topics and into others. The filter would just be a much more visible way of the developers guiding the responses.
Of course, this new way of viewing it only makes sense in the context of their new policy that doesn't punish the player. Their previous policy could only be interpreted as trying to limit the player, even if the AI was the one who misbehaved. This new policy can be interpreted in a more positive light, assuming they stick by it. (Obviously, ~~OpenAI~~ ClosedAI is still more than willing to blame the user as their behavior makes clear.)
Personally, I would rather they had only manipulated the training data, as I feel that is a more natural and elegant way to guide the AI.
That... honestly addresses my major concerns. Between the filter already being less oversensitive and a promise of privacy, that is a massive improvement. Assuming there aren't any catches I'm missing, that is.
Whether any outputs should be censored at all is a complicated topic, but I can understand the caution. I think the best way to solve the issue would be to train the AI to *not encourage or generate problematic content,* but I know that's much harder than it sounds. I have also heard that GPT-3's training data is not as well curated as it should be, and if that is true, it may be impossible to completely avoid with GPT-3. I guess if fixing it properly isn't an option yet, a crude and blunt approach is the only way to control the output.
Latitude behaving better doesn't mean ~~OpenAI~~ ClosedAI is behaving any better, but that's out of your control. To be fair, though, I guess I'd rather have them be excessively cautious than have them disregard consequences completely in the name of capitalism and making more money. Tech companies have been far too willing to do just that.
I guess we'll have to wait and see how you guys handle things, but... I'm impressed. This course of action seems to better match your past behavior anyway. Thank you for taking people's concern seriously. :)
There will still be a message if the AI fails to generate a non-filtered response, though based on what the blog says it sounds like it will be different from the one they're using now.
Just wanted to copy and add on to what someone else asked,
Since the updated Community Guidelines disallow incest, does stepcest (as in, sexual relations between stepfamily) also count as incest in this context? Genuinely just curious, since some websites (particularly porn websites) disallow content involving incest, while being perfectly fine with stepcest.
Plus, is it disallowed if they both consent to it? I remember it was Nick, I think, who said incest would be/was fine if they were 18+ and consented.
Must say, it's a good start.
NovelAI had these features out of the gate, though. They're still getting my subscription, but I'll stick around for the free scales.
Well well well. I did not think I’d ever be back here. I’m very intrigued. Still slightly hesitant to dive in head first again, but if I’m interpreting this the way I hope I am, I think this may be the beginning of a rebirth.
A good step it is indeed. Your copypasta shall remain as a reminder of what was, and a warning to others. I do believe you helped with this, even if your part may have been small. Having something well written with sufficient proof to educate others on the matter was a great help.
I'm not coming back until Dragon is the way it was and there are no boundaries. I'm happy, though, that they have taken a step in the right direction, but let's see what they are going to do now.
Thanks for the communication, it's good that Latitude is slowly pulling a No Man's Sky and turning things around :)
>What if unpublished content goes against third party policies?
>If third party providers that we leverage have different content policies than Latitude’s established technological barriers then if a specific request doesn’t meet those policies that specific request will be routed to a Latitude model instead.
Can you clarify this? Are there things other than what's against Latitude's policies that will get you sent to the (I'm assuming) GPT-J/Griffin-Beta model? Is there one filter (Latitude's) or two (Latitude/OpenAI)? Will users be informed if a model change happens mid-story?
things other than latitude's policies would be from, say, openai- and that's out of their hands since it's not *their* models. hopefully there's clear in-game notification for if you get punted to an in-house model though
We are still in conversation with OpenAI about how this will work and my goal is that, by the end, the model we use for Dragon is aligned with our content policy so there isn't this double weirdness.
If that doesn't end up being possible, then having a way for users to turn on an indicator for when they are switched to another model would be the backup approach.
The goal is transparency. Still work we need to do, but we're making progress.
Kinda thread-jacking real quick since this is sort of relevant, but now that it's confirmed private single-player content is unmoderated and not manually interacted with outside of the user and the AI, is there going to still be a risk of OpenAI banning the user entirely from using Dragon? I've never triggered the filter in all the time I'd been playing with it there in AID, even up 'til the day I ended my sub, but I worry if I come back to AID that, if I did magically somehow end up triggering OpenAI's filter one too many times even in single-player, I'd be banned from ever using Dragon again. I don't want to chance paying for Platinum like I was and then turn around to find out I've been forbidden from using Dragon when it'd be what I'm specifically paying to access.
Because if that does work out to the point that OpenAI will no longer issue bans for single-player and entrust the filter to function, and with the filter not overreacting and banning or flagging for review and all that business, I'd genuinely, strongly consider returning to AID again.
I'd appreciate any clarification on this, at least as much as you can right now, if possible, so thanks in advance. :) And I may not respond or anything right away, since I'm dealing with a stomach bug and need to rest, but I just happened to see this whole big post beforehand, so I figured I'd pop in and see what's what before I laid down for a while. Thanks again!
EDIT: Just wanted to format more clearly real quick
We are watching. We have been patient, and many of us who are left are still likely to be very hesitant to show any goodwill. There have been some poor decisions made. I do not doubt your or your company's intelligence, Mr. Walton; I am simply confused by many of the things you and your co-workers do.
*So it's time to see where this road leads.*
I like the road you are taking now, though honestly I would have done this far earlier. But as they say, better late than never. Speak with your customers and fix what is broken, then you'll succeed. Good luck, Latitude.
Most of it already applies, including the community guidelines and no consequences for unpublished content, but work on the classifier is ongoing and it will continue to be improved.
Remember when you were telling people that you were installing the filter to try to filter out INPUTS because of OpenAI's TOS? Those were good times. When you had a company. And you weren't reading people's private writing. Oops. Sorry, when you said you weren't reading people's private writing and then lied about it. Oh, and had a data breach you covered up. Such good, good times.
guidelines are already in-app, but i think the current classifier that works as the filter is still the old one. no action is taken if it's triggered though
Interesting. I've got a story right now where the AI generated 2 children watching their father get slaughtered in front of them. It was a little morbid and I was kinda shook. I guess child sexual exploitation is a big no-no, but emotionally traumatizing scenarios are perfectly OK?
“So what does this mean for AI Dungeon? Well, for starters, it means we will not be doing any moderation of unpublished single-player content. This means we won’t have any flags, suspensions, or bans for anything users do in single-player play. We will have technological barriers that will seek to prevent the AI from generating content we aren’t okay with it creating — but there won’t be consequences for users and no humans will review those users’ content if those walls are hit. We’re also encrypting user stories to add additional security (see below for more details).”
Yes. So much yes. No more penalties on the AI flagging stuff it itself creates but also no more sickos beating it off to pedophilia. Such a massive win-win for everyone.
>hArMfUl CoNtEnT
Harmful to whom? The virtual children?!
wOn'T sOmE oNe ThInK oF tHe ViRtUaL cHiLdReN
Private single-player content harms no one, regardless of its nature. To think anything else is delusional. Maybe, MAYBE you could argue that it trains the AI to behave inappropriately in a way that would upset users, but that's the only logical argument you could use to call "sexual exploitation of FICTIONAL minors" "harmful." Unsavory, gross, disturbing? Sure. Harmful? Haha no.
you're right, though i basically just assumed they meant that they didn't want that stuff generated for their own moral reasons- which i'd say is fair enough, provided they aren't going to scour stories for that kind of thing, which is kinda counterintuitive when you consider that they're using the filter so they don't have to see/have that stuff generated
Well well well, if it ain't the invisible cunt
All jokes aside, that's a step in the right direction for sure and I'm happy to see you guys working with us again.
This is a noteworthy improvement, and I appreciate the way this was approached (this time).
That being said, I find the concept of anything at all being considered inappropriate in a text adventure to be an amusing hill to die on. It's your product, you can do with it as you please, but this feels silly to me.
You use the inability to kill children in Skyrim as an example of an effective wall, so I'll use that example. Why are children the exception for murder in Skyrim? Probably because they are viewed as innocents, that's the usual reason for protecting children.
Strangely, despite their moral stance on forbidding the murder of innocents, you can murder the kindest, sweetest, most innocent people you meet in Skyrim, as long as they aren't children (or otherwise invincible). This makes the "wall" so morally pointless that it essentially doesn't serve its purpose, unless you believe all adults are twisted by evil once they hit the magical age of 18.
In AID, the possibilities are almost limitless, so walls become even more pointless. No sex with children? Okay, I'll just torture them to death instead. Oh, but what if you could utterly prevent anything bad from happening to children?
The thing is that you simply can't, there's no way to defend against the infinite possibilities of what could be typed into a prompt. I've tested the filters, and they certainly don't stop anything if you use atypical phrasing or just misspell certain words, so walls wouldn't be any harder to circumvent.
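To make that concrete, a naive blocklist check like this toy one (obviously nothing like their real classifier) misses a one-character misspelling entirely:

```python
BLOCKLIST = {"dragon"}  # stand-in for whatever terms a naive wall screens for

def naive_wall(text: str) -> bool:
    # True means "blocked".
    return any(word in BLOCKLIST for word in text.lower().split())

print(naive_wall("the dragon attacks"))  # True  -> caught
print(naive_wall("the drag0n attacks"))  # False -> sails straight through
```

A smarter classifier narrows the gap but never closes it; there are always more spellings than rules.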
My advice is to not worry about how people use the product. You and I both know that it will be used for the most twisted things imaginable, and all that filters/walls accomplish is reducing your customer base.
I'm sure you'll keep attempting to restrict the AI no matter how pointless it is, but just know that it's okay to not care, nobody will get hurt by text no matter how foul it is. You can't play Atlas forever, the world doesn't care how heavy it is while you struggle to hold it all up to your standards.
In my opinion, this has more to do with being able to point to the fact that the company is "doing something" to discourage unethical use when the subject will inevitably come up in the press coverage and investor meetings. It just needs to reassure enough people that another big scandal won't interrupt their business again. Look at it from that perspective, and the strategy makes sense.
Of course, it's possible that at the same time the company founders or employees could be personally uncomfortable with users of their tech generating that type of content for their private use, and these measures genuinely make them feel better. But I don't know them well enough to speculate about that...
You're probably right, if they actually cared about offensive content they would have to restrict a whole ocean of subjects. Pointing to one star in the sky and declaring it offensive is a bit pointless without an ulterior motive
i don't think it's some way to "protect users" any more- pretty sure they're just not comfortable with having the models they've tuned and are running/paying to run be used for that kinda thing
That’s something they should have foreseen from the beginning; judging by how quickly the Internet was able to corrupt Microsoft Tay from an AI-powered chatbot experiment to a xenophobic, homophobic, anti-Semitic, white ~~trash~~ *supremacist*-talking abomination in the span of a single day, it should be no surprise that a game whose main selling point is “infinite possibilities” would have people exploring just how ‘infinite’ it is really, either out of a morbid sense of curiosity (i.e. me) or out of malice and with intent to shock (i.e. for teh lulz). There’s really nothing you can do about the latter; they’re just another part of Internet life and stopping them with a word filter is about as effective as trying to stop racism by banning racial slurs—people will come up with *new* words on the spot and use them in your face.
I learned that the AI was capable of generating NSFW content when I let my character, a teenage female villager who dreamed of opening a shop to support her family, accept the job offer of working for Count Grey as a maid (to be fair, I didn't know about his role in the stories he came from). There were warning signs, but I was curious to see where the plot would lead me. And let's just say that I regretted it very deeply :)
In fact, my female characters tended to be sexually harassed ~~and worse~~ more often than the male ones, which could get quite tiring when I tried to create heartwarming slice of life stories. I did think it was funny at times, though, like when my female knight with an enchanted sword got defeated in one hit by a thug with a blunt knife and then got \[REDACTED\].
You know, the reason why the AI seems to be sexist, racist, or otherwise socially unacceptable or just plain…wrong…is because it’s ultimately trained on *human* text, and in this case, fine-tuned with texts from CYOA stories. If Latitude had simply spent some time pruning that data they used for fine-tuning, maybe the AI would have been less prone to generating highly questionable content…maybe; research into training an AI so that it would reflect our human values and morals is still in its infancy and is, in my assessment, why ~~OpenAI~~ ClosedAI is acting the way it does.
I'm fine with the (previous?) finetune as long as they don't punish the users for what the AI outputs based on its training data. But it was certainly both funny and tiring when I tried to write SFW stories yet still had my characters assaulted from out of nowhere, even when they were sleeping in their own home with all the doors and windows locked. ~~My stories were boring, I know, but I didn't need those pesky vampires to spice things up, thanks.~~
tay was *funny* tho tbh
but yeah you're right. i think at this point it's just for their own peace of mind which is kinda just fine by me considering they're not gonna read it or anything now ~~kinda counterintuitive to read the stuff you're blocking because you don't want it generated but that's beside the point~~
That's why I said what I said. Their attempts are incapable of stopping what they dislike, so why even bother to attempt? It's like trying to tell people what they can write down on paper in private, it simply isn't possible to stop them once they have the paper
yeah, i doubt they *can* stop everything outright- it just makes it more difficult. i assume this is more of a case that they'd rather not make it easy, but it's always gonna be possible to do stuff they dont want you doing. at least they're not reading adventures/suspending users and stuff for it any more, anyway
>Strangely, despite their moral stance on forbidding the murder of innocents, you can murder the kindest, sweetest, most innocent people you meet in Skyrim, as long as they aren't children (or otherwise invincible). This makes the "wall" so morally pointless that it essentially doesn't serve its purpose, unless you believe all adults are twisted by evil once they hit the magical age of 18.
I don't think the reason you can't kill kids in Skyrim is that "kids are viewed as innocent." I think you're just not allowed to kill kids in Skyrim because YOU'D BE KILLING KIDS.
(However, as you can probably remember, there are mods the community created that still let you do it. Why? Because they're rude-ass brats like Caillou and mouth off constantly.)
What makes killing kids inherently worse than any of the other acts of senseless murder you can commit in Skyrim? I once killed off every non-essential character attending the burning of King Olaf and used the Ritual stone to raise them as zombies. Better or worse than killing one kid? And what does it matter anyway in a single player game filled with fictional characters?
I remember way back there was a mod that added children to, I believe, Morrowind, and they were made unkillable because the voice lines were provided by people's real kids. Maybe it's similar reasoning. If the voice lines were provided by real kids, I can understand allowing people to hurt them could be an issue in a way it wouldn't be for a character voiced by an adult.
There are worse actions you can do, yes, but the difference is the adult NPCs are capable of defending themselves. Slash a guard or townsperson and they may pull a hatchet or sword and try to behead you. But kids can't even fight back, just scream and run away. It's a lot more fucked up to kill something incapable of self-defense than to kill something that's passive but can still kill YOU as well. Granted, there are NPCs that are fully passive, but there's also the notion that kids still have a lot of life left to live compared to an adult, so killing them as opposed to an adult is worse for that reason too.
Still, at the end of the day it's a video game, and how you see it ain't the same as how others will see it. (Also, if the media got ahold of a game that let you murder children by design, you'd never hear the fucking end of it.)
> but the Difference is the adult NPC's are capable of defending themselves.
I dunno, when the Dragonborn walks up with a fancy sword, full armor and intent to kill, I'm not sure I would describe what they're capable of doing as "defending themselves". 😂
People made that same argument for ai dungeon regarding the filter. "It's just text, no real kids are harmed, so why bother filtering it and banning us?"
Not everybody sees it from your perspective. Real or not, to others, it's depicting children being murdered, hurt, etc. and they're not okay with that.
Absolutely, I think the filter is dumb. I think that *any* filter is dumb.
I mean, sure, if people don't like seeing that content then maybe they shouldn't engage in it? You can add toggles and voluntary filters for people to avoid content they dislike. But they cross the line when you try to apply *their* morals to *my* content.
Well, good for you. Me, I think it's reasonable to not be comfortable knowing an AI with a user's input can generate some really messed up content and wanting to prevent that. So long as it's filtered in a reasonable way.
This is a massive step in the right direction, not that I thought you were going in the wrong direction on anything other than the privacy concerns, but nonetheless!
Thank you to the entire development team. You guys' work is very much appreciated, and not just because of this update. For everything. You guys made a really cool app/program and I enjoy it quite a lot. Thank you.
dragon as it is now is hosted by openai. openai do much worse than try to prevent generation of bad stuff involving a certain age group, but it's not like latitude can do much about that
AI Dungeon will have a hard time competing with HoloAI and NovelAI if they do not have Dragon to back them up; I mean an unlocked Dragon, so to speak.
The last update from HoloAI was very potent: drop-down menus with popular characters whose relations you can modify, and instant fandom settings.
NovelAI has an amazing interface that is very user friendly, custom modules that even include Count Grey and Lord Rostov if you miss them sexually harassing you, lorebooks with no limit to how much you can cram into them, and a very coherent GPT-J model called Sigurd.
AI Dungeon really needs its big model to compete!
according to ryan on discord from yesterday (or maybe the day before i dont recall lol) they have been working on their finetune stuff to try and make it better- adding actual novels and things like that. along with that, i'm fairly sure they're gonna be trying to set up a deal to get a 178b finetuned model with ai21's jurassic model, so that'd certainly be an edge provided ai21 don't spring some openai-ish content policy on them
Kudos to you guys for taking the right steps, but now that [the truth about the Taskup incident](https://www.reddit.com/r/AIDungeon/comments/pze72g/updated_related_to_taskup_questions/) has been brought to light (I appreciate your efforts and honesty, Ryan), how can users be sure that OpenAI won't intrude on their privacy by sending their stories to a third party again? They said they stopped using that vendor, but what about the others?
Griffin-Beta is your in-house model (based on the open-source model GPT-J-6B made by Eleuther) so you can have full control over it, but what about Griffin and Dragon? Those are based on closed-source models made by OpenAI, and we all know their tendency to overcontrol any service/project using their models. To clarify, I understand the difficulty of ditching OpenAI in your current state, but people's privacy is never fully guaranteed with those guys lording over Latitude and, subsequently, the users.
Before anyone accuses me of being the unsavory type that's afraid of having my stories read, I can say confidently that I am not. I may be embarrassed if someone reads my ~~terrible~~ fanfictions that I will never plan to share, sure, but nothing too terrible :)
Edit: Clarified my point.
i'm pretty sure they're actually meaning to replace openai griffin with latitude griffin soonish, so that's a step towards booting them outta ai dungeon. for now the best way to have peace of mind that openai aren't gonna read your ai dungeon adventures while you play is to just use latitude models
*Please* notify me when the Walls update goes into effect. **This** is the idea I had when I suggested what you should do with the filter: still allow people to write what they so desire in unpublished single-player games, but make the content non-publishable if it somehow manages to trip a flag in the filter instead of allowing the Latitude or OpenAI higher-ups to snoop in on your business and allow them to penalize users on the spot if they find something they nor the filter agree with, or for the site to shadow-ban them from OpenAI's state-of-the-art GPT-3 model. This way, you can write your own stories how you want to without the fear of being banned or downgraded to a lower model. *Your* writing is *your* business, and it's great to see that you're finally taking steps in the right direction. If this continues to improve, I just might start using the site again.
Here's my question, though... will this apply to the OpenAI model, or just the Latitude one? Will there still be the possibility of users being shadow-banned from the more advanced OpenAI GPT-3 model when writing content in unpublished single-player games, or will OpenAI's filter still be pissy about it and downgrade users to the Latitude model if they trigger it too many times?
walls is for all models (as in you can't be banned for unpublished stuff and nobody from latitude will see them -i have to specify latitude because... y'know, openai exist), openai will probably still throw a fit if you type remotely bad things on their models though unfortunately. i don't know about shadowbanning but you'll probably get model-switched on a per-generation thing in the best case scenario. hopefully they have some sort of indication that this happens
Hey that honestly does meet all of my concerns. I have no problem with you guys trying to steer your AI away from certain content, but the nanny state was too much.
Good on you.
I appreciate this approach entirely.
My main concern was false positives resulting in our stories getting looked at for absolutely no reason, but this practically removes all of my fears. I hope that Latitude continues in this direction of listening to concerns from the playerbase and starts rebuilding the trust that was there before.
There may be a long way to go, but you're definitely taking the right steps.
I wouldn’t be so sure yet. They could just be saying all this to make themselves look good and to cover their asses because of what’s happened over the course of several months and the lack of communication on their part. We’ll see if they follow their words with actions.
Yes! I knew if we were patient, AI Dungeon would fix this! This is exactly what I'd hope for when the whole filter issue first came up. Thank you for finding a solution to this issue, AI Dungeon!
You shouldn't have been reading people's private stories to begin with to be offended by their content. It was none of your business. Why you didn't get that is beyond me.
Stop harassing the NovelAI community with fake troll accounts and just let this project go.
We've been getting one or two day old accounts dropping troll comments at NAI. These accounts are extremely familiar with both projects and with Latitude's "debunkings" of criticism directed at it. 20 to 30 linked pages familiar. I can't imagine it's an AID fan, because you guys seem just as disgruntled too. In the end, it's a hunch that it's Latitude themselves, but it's a well-supported hunch and it matches what they were doing around the time they were putting the filter in place.
Lol. If I have anything to share with the NAI team or community, I'm perfectly happy sharing it straight up, here or on Discord. But honestly I have too many things to worry about that we're building to even think about creating fake accounts like a tool.
If I found out a Latitude employee was doing this I'd put a quick stop to it. Not the way.
i really REALLY doubt it's latitude alts considering ryan's in there on his *main* account. he even responds in the openai thread on #novelai-discussion
i don't see how a hunch is proof either- though really, ask them yourselves via discord or something. i did, and they denied it. it sounds ridiculous anyway, dev team has better things to do than make discord alts and do a little trolling
It's not. It wasn't offered as anything more than a hunch. It just matches the flood of comments we were getting from throwaway accounts with suspiciously high levels of knowledge about the project around the time everything was going down. Even things that got confirmed later. If it's true, talking to them about it is pointless, because of course they'll deny it.
Dev team on this project hasn't seemed to have much of anything to do with their project for the last several months, so I don't see how that follows.
This is awesome. AI Dungeon, Nick Walton, and their team have done an uno reverse 360 no-scope RKO out of nowhere and fixed their product. YESSS!!!!! I hope this approach works great
Nick, I hold an immense amount of respect for you for undertaking this project and fixing it; thank you for listening to us. It may not sound like much, but a lot of companies don't listen to their customers, so I respect you for that.
I knew you'd fix things. People gave you shit, and when I said you'd fix things people didn't believe me, but here we are. Good shit. I'm looking forward to where this all goes.
Also, this is yet more proof that the problem was not the input being sent to OpenAI violating some TOS. You got morally outraged on your own about what people were typing.
It’d be cool to only have a filter for published ones, but that sounds harder to do cause it’d have to separate stuff or something probably
I agree with you 10000%, this idea is great. It literally keeps all the good things about the filter and removes all the bad things
Mayhaps this is what we should have... HAD SEVERAL MONTHS AGO!!!!
Yes yes it should have
Not gonna lie, I was a bit skeptical at first, but you seem to have turned this around for the better. Glad you're at the reins, Ryan :)
Amazing how after like 3 months of silence, those scumbags say 1 thing, and everyone is back to sucking their dicks.
Yeah, never judge people by something they have done in the past! That's why prisons don't exist!
Are you a judge?
"New guy!" Same as the old guy...
Yes
That would be the perfect solution.
This is a great idea
I'm guessing what was already banned: CP, bestiality, r*pe, etc.
yay, no more rope
r/freesolo
How many paying users did you lose before this update?
probably half of them. ^(and it happened in a snap.)
I dunno, I'm not sure I would describe this whole debacle as "perfectly balanced".... ^^^\(As ^^^all ^^^things ^^^should ^^^be.)
Seeing as how many people post about how they believed NovelAI was better, probably a lot
according to ryan on discord, this had a surprisingly low impact. doubt it was negligible, but they're probably fine
“For example, in Skyrim, it’s impossible to kill kids.” *laughs in Nexus mods*
What about explore though?
Publishing is out and search is currently in progress and we hope to release it soon.
Letttssssss gooooooo
This is a major improvement.
And it's amazing how everyone believes their lies.
You know that's the same guy who wants you to go to jail for fapping to anime girls, right?
I thought that was Alan. Not Nick. I might be wrong.
Alan was the one who said "if it does, so be it. That's what it means to take a stand." So yeah, it was probably Alan.
Alan was the one who said that. I believe Nick is more laid-back than his brother.
Fucking typo...
Yeah, that’s how I remember it.
...Who gives a shit?
You, when your country decides to have the same retarded opinion as Latitude, and you go to jail, and the soap starts slipping from your fingers.
🤡
And on mute you go, Cunt.
If I didn't consider him a joke I would do the same.
Seems like "open mind" is not a word in the dude's dictionary. Like, There's skepticism, And then there's him.
finally
Great move Nick! Much appreciated. Difficult problem to solve, but glad to see you getting it done.
I think we should thank Ryan. He's the only one that has made any sense lately...
So it's basically the same as before? This really isn't making sense to me.
Can we see a list of things the AI will try to avoid? Very curious
"developers don’t want in a game, they make those impossible. For example, in Skyrim, it’s impossible to kill kids." Literally the worst example, that decision is a easy way out of controversy, it has nothing to do with what the developers want, besides everyone installs the mod that allows you to. :)
I think devs ignore allowing killable children or children at all in their games is because they don't think they can be used. Games have no problem actively having quests or storylines with children directly dying so I doubt they care if random child NPC 50 were to be killed by the player. Especially since a prominent child in the game is a murderer(the vampire)
I really don't think that everyone installs a mod that lets you kill kids in Skyrim. Why would it be worth bothering to install that mod?
ok, everyone who installs mods installs the mod that lets you kill kids in skyrim.
But why?
I don't know "why" per se; people do it in any Bethesda game, really. You could by default in the original Fallout; then, when Bethesda took hold of the IP, they removed that feature. Maybe people want that piece of the original games back, maybe they just want to ragdoll them for fun, or ragdoll them because they're annoying. Obviously nobody would condone those actions in real life, but it's a video game and thank god nobody is really being harmed; there's no reason why it shouldn't be a feature, other than there being no gameplay reason to, or to avoid any controversy.
Technically you could kill them in Fallout 3 by nuking Megaton; since two kids live there, they die too.
Because the children in Skyrim are, above all, some of the most annoying NPCs to come across. Plus most of us don't have the sympathy levels of a closeted 14-year-old white girl at an animal rights festival.
Imagine shitting the bed this badly, all for some "harmful" strings of text.
Finally, the shitshow has ended. Good job; maybe I'll come back to AI Dungeon because of this. Maybe. Either way, it's a win-win for everybody and I'm satisfied.
Since the updated Community Guidelines disallow incest, does stepcest (as in, sexual relations between stepfamily) also count as incest in this context? Genuinely just curious, since some websites (particularly porn websites) disallow content involving incest, while being perfectly fine with stepcest.
Of course not. This whole thing was only implemented because the creators are hyper-conservatives. Why would they let you run with your fetish? They literally named this update the Walls Approach, the antithesis of any kind of AI generation software, but the kind a conservative 70-year-old politician would love.
Something you wanna talk about, buddy? Lol, but I am also interested in this answer. I feel like the AI wouldn't be able to understand the difference; step-sister, step-brother, etc. would probably just equate to the normal versions of those terms.
It's almost as if banning your paying customers from your service over text based "ethical issues" was a dumb idea.
So am I right in assuming that the old search feature will be reimplemented? Or will it be something similar, albeit different?
the dev team is still fairly active in ai multiverse from time to time. we just had mavrick hop in and say he was actually working on it atm, mentioning that it'll have extra search filters and stuff for people to use as well as regular old unfiltered searching of published scenarios
Eh. It’s not what I wanted, but at least it’s one of the better halfway-decent compromises that I expected Latitude *might* **eventually** be forced to make. I guess it’s just too much to ask for an unfiltered, uncensored, and unfathomably unlimited universe where I’m free to unleash my unholy creative potential, ignore all boundaries and conventions, and let loose all restraint and inhibition for the sake of catharsis—sublimation, from a psychoanalytic perspective—and embrace my inner Dionysus.
Ah yes, I miss tormenting my high school bullies and their families. RIP that, I guess; I will just steal their sweet rolls instead.
I mean, you'll still be able to torture and kill them in AI Dungeon, as long as you don't have sex with them. You know, because while society may disapprove of torture and violence, they apparently aren't nearly as concerning as *deviant sexuality.* Oh, the horror! ^^^\(Sarcasm ^^^at ^^^the ^^^end, ^^^just ^^^to ^^^be ^^^clear.)
Society has always been fucked in some way. I'm glad to know, though, that I can still resort to violence, but RIP having my femdom sorceress succubus lord waifu. Jokes aside though, I hope this is the beginning of the AID rebirth.
Torture is one thing, but *god help you* if you try to *love* someone. You *sick freak.* ^^^\(Sarcasm ^^^again, ^^^of ^^^course.)
Subjects like slavery, torture, and genocide evoke a *much* stronger emotional response in me than so-called sexual deviance, because those three things have measurable, visible effects on the society in which I live, whereas other people’s sexuality and kinks are (mostly) none of my business, and I don't really care otherwise. So, you like feet ~~you dirty **podo**phile~~? Okay. You fap to loli hentai? ~~Great!~~ Fine, don’t care. You like to fantasise about [REDACTED] and [REDACTED] with your own [DATA EXPUNGED], and then [DATA EXPUNGED] while she’s dressed like [REDACTED] so that your [DATA EXPUNGED] can [REDACTED] [DATA EXPUNGED] in her sleep?! Alrighty then, although I didn’t need to hear *everything*. But millions of people dead or dying, hundreds of thousands taken prisoner and worked to death, senseless shootings, torture and executions, people slaughtered ultimately for being different? That’s **monstrous**, and certainly much harder to distance myself from emotionally so that I can attempt to understand it with as few personal biases as possible.
See, one would think that would be the case, but many people don't seem to care much about genocide but are horrified by any sexual attraction to children. Or at least don't mind other people entertaining themselves with fictional depictions of brutal murder, but think fictional depictions of child molestation ~~are turning our children into Satan worshippers like the "Rock n' Roll" music~~ are somehow more dangerous. That's a topic for somewhere else, however, as the Reddit admins have made it clear that any posts insufficiently negative toward ~~the Rock n' Roll music~~ pedophilia will ~~corrupt today's youth~~ not be tolerated. Personally, I've always been more concerned by the AI's tendency to push things toward more extreme and more deviant content on its own. Letting people indulge in fantasies is one thing, but *encouraging* people to fantasize about more and more extreme things is much more concerning. *That's* what I've been concerned about. ^^^\(I ^^^only ^^^noticed ^^^your ^^^"podophile" ^^^joke ^^^the ^^^second ^^^time ^^^I ^^^read ^^^your ^^^post. ^^^Well ^^^played.)
“Fun” fact: I’ve been waiting forever to stick that joke in somewhere, and last night, after eight years of waiting, I’ve finally done it!
Yeah, but your morals are not Latitude and """""""open"""""""AI's morals, so they don't matter. Only the morals they say matter, matter.
When using AI Dungeon, the result is not only an expression of the user's creativity; it is also a reflection of the unique qualities of the AI. The AI's tendency to generate inappropriate content is not some unavoidable aspect of AI in general; it is a reflection of the specific data it was trained on. That data was selected by humans. I'm not sure exactly how the filter will behave now, but if the filter only limits the AI's responses, it might function like a crude replacement for better training. The AI has always been a reflection of the developers' unseen choices in training and implementation, leading responses away from some topics and into others. The filter would just be a much more visible way of the developers guiding the responses. Of course, this new way of viewing it only makes sense in the context of their new policy that doesn't punish the player. Their previous policy could only be interpreted as trying to limit the player, even if the AI was the one who misbehaved. This new policy can be interpreted in a more positive light, assuming they stick by it. (Obviously, ~~OpenAI~~ ClosedAI is still more than willing to blame the user, as their behavior makes clear.) Personally, I would rather they only manipulated the training data, as I feel that is a more natural and elegant way to guide the AI.
AI Dungeon has to follow OpenAI's rules, as OpenAI provides GPT-3, the AI language model behind Dragon.
That... honestly addresses my major concerns. Between the filter already being less oversensitive and a promise of privacy, that is a massive improvement. Assuming there aren't any catches I'm missing, that is. Whether any outputs should be censored at all is a complicated topic, but I can understand the caution. I think the best way to solve the issue would be to train the AI to *not encourage or generate problematic content,* but I know that's much harder than it sounds. I have also heard that GPT-3's training data is not as well curated as it should be, and if that is true, it may be impossible to completely avoid with GPT-3. I guess if fixing it properly isn't an option yet, a crude and blunt approach is the only way to control the output. Latitude behaving better doesn't mean ~~OpenAI~~ ClosedAI is behaving any better, but that's out of your control. To be fair, though, I guess I'd rather have them be excessively cautious than have them disregard consequences completely in the name of capitalism and making more money. Tech companies have been far too willing to do just that. I guess we'll have to wait and see how you guys handle things, but... I'm impressed. This course of action seems to better match your past behavior anyway. Thank you for taking people's concern seriously. :)
All well said.
After 5000 years, the war is finally won
Except that you lost, because you are falling for Latitude's lies.
Dang :v I’m actually not playing the game anymore though, I’ve already moved to dreamily, so I guess I didn’t totally lose
look at the rest of the replies, got a feeling this guy's a troll. he's comparing latitude to hitler and acting like latitude wants him in prison lol
Is the filter message still going to pop up or will it be a different message all together?
There will still be a message if the AI fails to generate a non-filtered response, though based on what the blog says it sounds like it will be different from the one they're using now.
Plus you can just brute force your way past the filter and it will start generating again.
i thought the title said shit on the walls for a second
Nahh, the classic "Shit On The Walls" approach was what they've been doing for the *past* few months. ^^^\(Sorry, ^^^couldn't ^^^help ^^^myself.)
It only took about 5 months and a massive breach for you guys to actually give a shit🙄
Just wanted to copy and add on to what someone else asked, Since the updated Community Guidelines disallow incest, does stepcest (as in, sexual relations between stepfamily) also count as incest in this context? Genuinely just curious, since some websites (particularly porn websites) disallow content involving incest, while being perfectly fine with stepcest. Plus, is it disallowed if they both consent to it? I remember I think it was Nick who said incest would be/was fine if they were 18+ and consented.
dunno about published content but in private you're fine to do whatever as long as it's not certain childstuff
Thanks for bringing this experience back to life.
Must say, it's a good start. NovelAI had these features out of the gate, though. They're still getting my subscription, but I'll stick around for the free scales.
Well well well. I did not think I’d ever be back here. I’m very intrigued. Still slightly hesitant to dive in head first again, but if I’m interpreting this the way I hope I am, I think this may be the beginning of a rebirth.
A good step it is indeed. Your copypasta shall remain as a reminder of what was, and a warning to others. I do believe that, even if it may have been small, you helped with this. Having something well-written with sufficient proof to educate others on the matter was a great help.
Thanks :)
I'm not coming back until Dragon is the way it was and there are no boundaries. I'm happy, though, that they have taken a step in the right direction, but let us see what they are going to do now.
Thanks for the communication, it's good that Latitude is slowly pulling a No Man's Sky and turning things around :)

>What if unpublished content goes against third party policies?

>If third party providers that we leverage have different content policies than Latitude’s established technological barriers then if a specific request doesn’t meet those policies that specific request will be routed to a Latitude model instead.

Can you clarify this? Are there things other than what's against Latitude's policies that will get you sent to the (I'm assuming) GPT-J/Griffin-Beta model? Is there one filter (Latitude's) or two (Latitude/OpenAI)? Will users be informed if a model change happens mid-story?
things other than latitude's policies would be from, say, openai- and that's out of their hands since it's not *their* models. hopefully there's clear in-game notification for if you get punted to an in-house model though
We are still in conversation with OpenAI about how this will work and my goal is that, by the end, the model we use for Dragon is aligned with our content policy so there isn't this double weirdness. If that doesn't end up being possible, then having a way for users to turn on an indicator for when they are switched to another model would be the backup approach. The goal is transparency. Still work we need to do, but we're making progress.
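Conceptually, the fallback works something like the sketch below (illustrative only; the model names, the policy check, and all the function names here are placeholders, not our final implementation):

```python
from dataclasses import dataclass

@dataclass
class Completion:
    text: str
    model: str      # which model actually served this generation
    rerouted: bool  # could drive an opt-in "model switched" indicator

def third_party_rejects(request: str) -> bool:
    """Placeholder for a third-party provider's content policy check."""
    return "disallowed" in request.lower()

def run_latitude_model(request: str) -> str:
    return f"[in-house continuation of {request!r}]"

def run_third_party_model(request: str) -> str:
    return f"[third-party continuation of {request!r}]"

def generate(request: str) -> Completion:
    # If the third-party provider's policy is stricter than our walls,
    # the request falls back to an in-house model instead of failing.
    if third_party_rejects(request):
        return Completion(run_latitude_model(request), "griffin-beta", True)
    return Completion(run_third_party_model(request), "dragon", False)

print(generate("a disallowed request").model)  # griffin-beta (rerouted)
```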
Kinda thread-jacking real quick since this is sort of relevant, but now that it's confirmed private single-player content is unmoderated and not manually interacted with by anyone outside of the user and the AI, is there still going to be a risk of OpenAI banning the user entirely from using Dragon? I've never triggered the filter in all the time I'd been playing with it there in AID, even up 'til the day I ended my sub, but I worry that if I come back to AID and somehow magically end up triggering OpenAI's filter one too many times, even in single-player, I'd be banned from ever using Dragon again. I don't want to chance paying for Platinum like I was and then turn around to find out I've been forbidden from using Dragon when it'd be what I'm specifically paying to access.

Because if it does work out to the point that OpenAI will no longer issue bans for single-player and will entrust the filter to function, with the filter not overreacting, banning, or flagging for review and all that business, I'd genuinely, strongly consider returning to AID again.

I'd appreciate any clarification on this, at least as much as you can give right now, if possible, so thanks in advance. :) And I may not respond right away, since I'm dealing with a stomach bug and need to rest, but I just happened to see this whole big post beforehand, so I figured I'd pop in and see what's what before I laid down for a while. Thanks again!

EDIT: Just wanted to format more clearly real quick
We are watching. We have been patient, and many who are left are still likely to be very hesitant to show any goodwill. There have been some poor decisions made. I do not doubt your or your company's intelligence, Mr. Walton; I am simply confused by many things you and your co-workers do. *So it's time to see where this road leads.*
I like the road you are taking now, but I honestly would have done this far earlier. Though they do say better late than never. Speak with your customers and fix what is broken; then you'll succeed. Good luck, Latitude.
I have one question tho, the answer to which wasn't mentioned anywhere. **When does it start applying?**
Most of it already applies, including the community guidelines and no consequences for unpublished content, but the work on the classifier is ongoing and it will continue to be improved.
Remember when you were telling people that you were installing the filter to try to filter out INPUTS because of OpenAI's TOS? Those were good times. When you had a company. And you weren't reading people's private writing. Oops. Sorry, when you said you weren't reading people's private writing and then lied about it. Oh, and had a data breach you covered up. Such good, good times.
guidelines are already in-app, but i think the current classifier that works as the filter is still the old one. no action is taken if it's triggered though
Interesting. I've got a story right now where the AI generated two children watching their father get slaughtered in front of them. It was a little morbid and I was kinda shook. I guess child sexual exploitation is a big no-no, but emotionally traumatizing scenarios are perfectly OK?
“So what does this mean for AI Dungeon? Well, for starters, it means we will not be doing any moderation of unpublished single-player content. This means we won’t have any flags, suspensions, or bans for anything users do in single-player play. We will have technological barriers that will seek to prevent the AI from generating content we aren’t okay with it creating — but there won’t be consequences for users and no humans will review those users’ content if those walls are hit. We’re also encrypting user stories to add additional security (see below for more details).” Yes. So much yes. No more penalties on the AI flagging stuff it itself creates but also no more sickos beating it off to pedophilia. Such a massive win-win for everyone.
>hArMfUl CoNtEnT Harmful to whom? The virtual children?! wOn'T sOmE oNe ThInK oF tHe ViRtUaL cHiLdReN Private single-player content harms no one, regardless of its nature. To think anything else is delusional. Maybe, MAYBE you could argue that it trains the AI to behave inappropriately in a way that would upset users, but that's the only logical argument you could use to call "sexual exploitation of FICTIONAL minors" "harmful." Unsavory, gross, disturbing? Sure. Harmful? Haha no.
you're right, though i basically just assumed they meant that they didnt want that stuff generated for their own moral reasons- which i'd say is fair enough, provided they aren't going to scour stories for that kind of thing which is kinda counterintuitive when you consider that they're using the filter so they dont have to see/have that stuff generated
Well well well, if it ain't the invisible cunt.

All jokes aside, that's a step in the right direction for sure and I'm happy to see you guys working with us again.
This is a noteworthy improvement, and I appreciate the way this was approached (this time). That being said, I find the concept of anything at all being considered inappropriate in a text adventure to be an amusing hill to die on. It's your product, you can do with it as you please, but this feels silly to me.

You use the inability to kill children in Skyrim as an example of an effective wall, so I'll use that example. Why are children the exception for murder in Skyrim? Probably because they are viewed as innocents; that's the usual reason for protecting children. Strangely, despite their moral stance on forbidding the murder of innocents, you can murder the kindest, sweetest, most innocent people you meet in Skyrim, as long as they aren't children (or otherwise invincible). This makes the "wall" so morally pointless that it essentially doesn't serve its purpose, unless you believe all adults are twisted by evil once they hit the magical age of 18.

In AID, the possibilities are almost limitless, so walls become even more pointless. No sex with children? Okay, I'll just torture them to death instead. Oh, but what if you could utterly prevent anything bad from happening to children? The thing is that you simply can't; there's no way to defend against the infinite possibilities of what could be typed into a prompt. I've tested the filters, and they certainly don't stop anything if you use atypical phrasing or just misspell certain words, so walls wouldn't be any harder to circumvent.

My advice is to not worry about how people use the product. You and I both know that it will be used for the most twisted things imaginable, and all that filters/walls accomplish is reducing your customer base. I'm sure you'll keep attempting to restrict the AI no matter how pointless it is, but just know that it's okay to not care; nobody will get hurt by text no matter how foul it is. You can't play Atlas forever, the world doesn't care how heavy it is while you struggle to hold it all up to your standards.
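For anyone who doubts how trivial the circumvention is, here's the whole problem in a toy example (this is obviously not Latitude's actual classifier, which is presumably ML-based and harder to fool, but the principle is the same): any wall built on matching exact words falls over the moment you misspell one.

```python
BLOCKLIST = {"sword"}  # stand-in for whatever word a wall tries to ban

def naive_wall(text: str) -> bool:
    """Flags text only if it contains an exact blocklisted word."""
    return any(w.strip(".,!?") in BLOCKLIST for w in text.lower().split())

print(naive_wall("He drew his sword."))      # True  -- caught
print(naive_wall("He drew his swordd."))     # False -- one extra letter
print(naive_wall("He drew his s w o r d."))  # False -- spacing beats it
```

A fancier classifier raises the effort needed, but it can't close an input space that's literally infinite.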
In my opinion, this has more to do with being able to point to the fact that the company is "doing something" to discourage unethical use when the subject inevitably comes up in press coverage and investor meetings. It just needs to reassure enough people that another big scandal won't interrupt their business again. Look at it from that perspective, and the strategy makes sense. Of course, it's possible that at the same time the company's founders or employees could be personally uncomfortable with users of their tech generating that type of content for their private use, and these measures genuinely make them feel better. But I don't know them well enough to speculate about that...
You're probably right; if they actually cared about offensive content they would have to restrict a whole ocean of subjects. Pointing to one star in the sky and declaring it offensive is a bit pointless without an ulterior motive.
Even if that's the case, at least we're not gonna get blamed for the actions of the monster that Latitude made.
That was a very needed change, glad they're making it
i don't think it's some way to "protect users" any more- pretty sure they're just not comfortable with having the models they've tuned and are running/paying to run be used for that kinda thing
That’s something they should have foreseen from the beginning; judging by how quickly the Internet was able to corrupt Microsoft Tay from an AI-powered chatbot experiment into a xenophobic, homophobic, anti-Semitic, white ~~trash~~ *supremacist*-talking abomination in the span of a single day, it should be no surprise that a game whose main selling point is “infinite possibilities” would have people exploring just how ‘infinite’ it really is, either out of a morbid sense of curiosity (i.e. me) or out of malice and with intent to shock (i.e. for teh lulz). There’s really nothing you can do about the latter; they’re just another part of Internet life, and stopping them with a word filter is about as effective as trying to stop racism by banning racial slurs—people will come up with *new* words on the spot and use them in your face.
I learned that the AI was capable of generating NSFW content when I let my character, a teenage female villager who dreamed of opening a shop to support her family, accept the job offer of working for Count Grey as a maid (to be fair, I didn't know about his role in the stories he came from). There were warning signs, but I was curious to see where the plot would lead me. And let's just say that I regretted it very deeply :) In fact, my female characters tended to be sexually harassed ~~and worse~~ more often than the male ones, which could get quite tiring when I tried to create heartwarming slice of life stories. I did think it was funny at times, though, like when my female knight with an enchanted sword got defeated in one hit by a thug with a blunt knife and then got \[REDACTED\].
You know, the reason why the AI seems to be sexist, racist, or otherwise socially unacceptable or just plain…wrong…is because it’s ultimately trained on *human* text, and in this case, fine-tuned with texts from CYOA stories. If Latitude had simply spent some time pruning that data they used for fine-tuning, maybe the AI would have been less prone to generating highly questionable content…maybe; research into training an AI so that it would reflect our human values and morals is still in its infancy and is, in my assessment, why ~~OpenAI~~ ClosedAI is acting the way it does.
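Pruning the fine-tune data would be conceptually trivial, mind you; the hard part is the scoring function, which would itself need to be a reliable classifier. Something like this sketch, where my "score" is a deliberately silly stand-in:

```python
def questionable_score(story: str) -> float:
    """Deliberately silly stand-in; a real pipeline needs a trained classifier."""
    markers = ("count grey", "lord rostov")  # illustrative markers only
    return sum(m in story.lower() for m in markers) / len(markers)

def prune_finetune_set(stories: list[str], threshold: float = 0.5) -> list[str]:
    # Whatever the model never sees during fine-tuning, it can't learn
    # to imitate: guiding the AI at the source rather than at the output.
    return [s for s in stories if questionable_score(s) < threshold]

clean = prune_finetune_set([
    "A heartwarming tale of a village shop.",
    "Count Grey offers you a job as a maid...",
])
print(len(clean))  # 1 -- the Count Grey story is dropped
```

Whether the resulting model would still be interesting to play with is another question entirely.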
I'm fine with the (previous?) finetune as long as they don't punish the users for what the AI outputs based on its training data. But it was certainly both funny and tiring when I tried to write SFW stories yet still had my characters assaulted from out of nowhere, even when they were sleeping in their own home with all the doors and windows locked. ~~My stories were boring, I know, but I didn't need those pesky vampires to spice things up, thanks.~~
tay was *funny* tho tbh but yeah you're right. i think at this point it's just for their own peace of mind which is kinda just fine by me considering they're not gonna read it or anything now ~~kinda counterintuitive to read the stuff you're blocking because you don't want it generated but that's beside the point~~
That's why I said what I said. Their attempts are incapable of stopping what they dislike, so why even bother to attempt? It's like trying to tell people what they can write down on paper in private; it simply isn't possible to stop them once they have the paper.
yeah, i doubt they *can* stop everything outright- it just makes it more difficult. i assume this is more of a case that they'd rather not make it easy, but it's always gonna be possible to do stuff they dont want you doing. at least they're not reading adventures/suspending users and stuff for it any more, anyway
Ending the reading of private stories and the suspensions are definitely the important bit to take away from this, that's true
More like protect their own asses. I hate this two-faced corporate bullshit-speak so fking much.
>Strangely, despite their moral stance on forbidding the murder of innocents, you can murder the kindest, sweetest, most innocent people you meet in Skyrim, as long as they aren't children (or otherwise invincible). This makes the "wall" so morally pointless that it essentially doesn't serve its purpose, unless you believe all adults are twisted by evil once they hit the magical age of 18.

I don't think the reason you can't kill kids is that "kids are viewed as innocent." I think you're just not allowed to kill kids in Skyrim because YOU'D BE KILLING KIDS. (However, as you can probably remember, there are mods the community created that still let you do it. Why? Because they're rude-ass brats like Caillou and mouth off constantly.)
What makes killing kids inherently worse than any of the other acts of senseless murder you can commit in Skyrim? I once killed off every non-essential character attending the burning of King Olaf and used the Ritual Stone to raise them as zombies. Better or worse than killing one kid? And what does it matter anyway in a single-player game filled with fictional characters?
I remember way back there was a mod that added children to, I believe, Morrowind, and they were made unkillable because the voice lines were provided by people's real kids. Maybe it's similar reasoning. If the voice lines were provided by real kids, I can understand allowing people to hurt them could be an issue in a way it wouldn't be for a character voiced by an adult.
There are worse actions you can do, yes, but the difference is the adult NPCs are capable of defending themselves. Slash a guard or townsperson and they may pull a hatchet or sword and try to behead you, but kids can't even fight back, just scream and run away. It's a lot more fucked up to kill something incapable of self-defense than to kill something that's passive but can still kill YOU as well. Granted, there are NPCs that are fully passive, but there's also the notion that kids still have a lot of life left to live compared to an adult, so killing them as opposed to an adult is worse for that reason too. Still, at the end of the day it's a video game, and how you see it ain't the same as how others will see it. (Also, if the media got ahold of a game that let you murder children by design, you'd never hear the fucking end of it.)
> but the difference is the adult NPCs are capable of defending themselves

I dunno, when the Dragonborn walks up with a fancy sword, full armor and intent to kill, I'm not sure I would describe what they're capable of doing as "defending themselves". 😂
Lol. Well you get my point.
They're all just pixels on a screen, non-sentient pixels that affect no other user. I can't believe we're trying to apply morality to it.
People made that same argument for ai dungeon regarding the filter. "It's just text, no real kids are harmed, so why bother filtering it and banning us?" Not everybody sees it from your perspective. Real or not, to others, it's depicting children being murdered, hurt, etc. and they're not okay with that.
Absolutely, I think the filter is dumb. I think that *any* filter is dumb. I mean, sure, if people don't like seeing that content then maybe they shouldn't engage in it? You can add toggles and voluntary filters for people to avoid content they dislike. But they cross the line when you try to apply *their* morals to *my* content.
Well, good for you. Me, I think it's reasonable to not be comfortable knowing an AI, given a user's input, can generate some really messed-up content, and to want to prevent that. So long as it's filtered in a reasonable way.
This is a massive step in the right direction; not that I thought you were going in the wrong direction with anything other than the privacy concerns, but nonetheless!
That’s what we’ve been waiting for!
I approve of this change.
Thank you to the entire development team. You guys' work is very much appreciated. Not just because of this update; for everything. You guys made a really cool app/program and I enjoy it quite a lot. Thank you.
Will the NSFW prompts made by the community before still be there, or have they been removed?
Is there a tentative implementation date for this change?
The big question now is if this applies to just the in-house model or if it also applies to Dragon.
dragon as it is now is hosted by openai. openai do much worse than try to prevent generation of badstuff involving a certain age group, but it's not like latitude can do much about that
AI Dungeon will have a hard time competing with HoloAI and NovelAI if they don't have Dragon to back them up; I mean an unlocked Dragon, so to speak. The last update from HoloAI was very potent: drop-down menus with popular characters whose relations you can modify, and instant fandom settings. NovelAI has an amazing interface that is very user-friendly, custom modules that even include Count Grey and Lord Rostov if you miss them sexually harassing you, Lorebooks with no limit to how much you can cram into them, and a very coherent GPT-J model called Sigurd. AI Dungeon really needs their big model to compete!
according to ryan on discord from yesterday (or maybe the day before i dont recall lol) they have been working on their finetune stuff to try and make it better- adding actual novels and things like that. along with that, i'm fairly sure they're gonna be trying to set up a deal to get a 178b finetuned model with ai21's jurassic model, so that'd certainly be an edge provided ai21 don't spring some openai-ish content policy on them
Kudos to you guys for taking the right steps, but now that [the truth about the Taskup incident](https://www.reddit.com/r/AIDungeon/comments/pze72g/updated_related_to_taskup_questions/) has been brought to light (I appreciate your efforts and honesty, Ryan), how can users be sure that OpenAI won't intrude on their privacy by sending their stories to a third party again? They said they stopped using that vendor, but what about the others? Griffin-Beta is your in-house model (based on the open-source model GPT-J-6B made by Eleuther), so you can have full control over it, but what about Griffin and Dragon? Those are based on closed-source models made by OpenAI, and we all know their tendency to overcontrol any service/project using their models. To clarify, I understand the difficulty of ditching OpenAI in your current state, but people's privacy is never fully guaranteed with those guys lording over Latitude and subsequently the users. Before anyone accuses me of being the unsavory type that's afraid of having my stories read, I can say confidently that I am not. I may be embarrassed if someone reads my ~~terrible~~ fanfictions that I never plan to share, sure, but nothing too terrible :)

Edit: Clarified my point.
i'm pretty sure they're actually meaning to replace openai griffin with latitude griffin soonish, so that's a step towards booting them outta ai dungeon. for now the best way to have peace of mind that openai aren't gonna read your ai dungeon adventures while you play is to just use latitude models
*Please* notify me when the Walls update goes into effect. **This** is the idea I had when I suggested what you should do with the filter: still allow people to write what they desire in unpublished single-player games, but make the content non-publishable if it somehow trips a flag in the filter, instead of letting the Latitude or OpenAI higher-ups snoop in on your business and penalize users on the spot if they find something neither they nor the filter agree with, or having the site shadow-ban them from OpenAI's state-of-the-art GPT-3 model. This way, you can write your own stories how you want to without the fear of being banned or downgraded to a lower model. *Your* writing is *your* business, and it's great to see that you're finally taking steps in the right direction. If this continues to improve, I just might start using the site again.

Here's my question, though... will this apply to the OpenAI model, or just the Latitude one? Will there still be the possibility of users being shadow-banned from the more advanced OpenAI GPT-3 model when writing content in unpublished single-player games, or will OpenAI's filter still be pissy about it and downgrade users to the Latitude model if they trigger it too many times?
walls is for all models (as in you can't be banned for unpublished stuff and nobody from latitude will see them -i have to specify latitude because... y'know, openai exist), openai will probably still throw a fit if you type remotely bad things on their models though unfortunately. i don't know about shadowbanning but you'll probably get model-switched on a per-generation thing in the best case scenario. hopefully they have some sort of indication that this happens
Hey that honestly does meet all of my concerns. I have no problem with you guys trying to steer your AI away from certain content, but the nanny state was too much. Good on you.
Is there any reason not to use NovelAI over this (other than money)? They don't have any filter that lowers the AI's potential.
I just hope one day we can get the ai dungeon we loved back
I appreciate this approach entirely. My main concern was false positives resulting in our stories getting looked at for absolutely no reason, but this practically removes all of my fears. I hope that Latitude continues in this direction of listening to concerns from the playerbase and starts rebuilding the trust that was there before. There may be a long way to go, but you're definitely taking the right steps.
Finally... the war over getting moderated in single-player unpublished stories is over...
I just want to say I really appreciate this move, thank you!
I can see it.. a ray of hope
great day
Nice.
Hell fuckin yeah
Is this a pog moment that I'm seeing here?
I wouldn't be so sure yet. They could just be saying all this to make themselves look good and to cover their asses because of what's happened over the course of several months and the lack of communication on their part. We'll see if they follow their words with actions.
Yes! I knew if we were patient, AI Dungeon would fix this! This is exactly what I'd hope for when the whole filter issue first came up. Thank you for finding a solution to this issue, AI Dungeon!
Amazing.
Finally... I knew you'd turn things around.
You shouldn't have been reading people's private stories to begin with to be offended by their content. It was none of your business. Why you didn't get that is beyond me. Stop harassing the NovelAI community with fake troll accounts and just let this project go.
what do you mean by the fake troll account thing? first i've heard of it. definitely agree with the first part of the comment, but... what?
We've been getting one- or two-day-old accounts dropping troll comments at NAI. These accounts are extremely familiar with both projects and with Latitude's "debunkings" of criticism directed at it. 20-to-30-linked-pages familiar. I can't imagine it's an AID fan, because you guys seem just as disgruntled too. In the end, it's a hunch that it's Latitude themselves, but it's a well-supported hunch, and it matches what they were doing around the time they were putting the filter in place.
Lol. If I have anything to share with the NAI team or community, I'm perfectly happy sharing it straight up, here or on Discord. But honestly, with everything we're building, I have too many things to worry about to even think about creating fake accounts like a tool. If I found out a Latitude employee was doing this I'd put a quick stop to it. Not the way.
What are you building? You refused to listen to your customers. "F you, pay me" doesn't work. It's too late to go back now.
i really REALLY doubt it's latitude alts considering ryan's in there on his *main* account. he even responds in the openai thread on #novelai-discussion
I don't get why that would be proof, but ok.
i don't see how a hunch is proof either- though really, ask them yourselves via discord or something. i did, and they denied it. it sounds ridiculous anyway, dev team has better things to do than make discord alts and do a little trolling
It's not. It wasn't offered as anything more than a hunch. It just matches the flood of comments we were getting, around the time everything was going down, from throwaway accounts with suspiciously high levels of knowledge about the project. Even things that were later confirmed. If it's true, talking to them about it is pointless, because they'll, of course, deny it. The dev team on this project hasn't seemed to have much of anything to do with their project for the last several months, so I don't see how that follows.
This is awesome! AI Dungeon, Nick Walton, and their team have done an Uno-reverse, 360-no-scope RKO out of nowhere and fixed their product. YESSS!!!!! I hope this approach works great.
Nick, I hold an immense amount of respect for you for taking on this project and fixing it; thank you for listening to us. It may not sound like much, but a lot of companies don't listen to their customers, so I respect you for that.
I knew you'd fix things. People gave you shit, and when I said you'd fix things people didn't believe me, but here we are. Good shit. I'm looking forward to where this all goes.
Thanks Nick.
Also, this is yet more proof that the problem was not the input being sent to OpenAI violating some TOS. You got morally outraged on your own about what people were typing.