PikeldeoAcedia

You kinda misunderstand. If you're using the OpenAI models, then there are two layers of filtering; OpenAI's filter *and* Latitude's filter. The former silently switches you to Latitude's AI for the action that triggered it (you *are not* told when you trigger the OpenAI filter, and it doesn't halt your progress), and can potentially shadowban you from GPT-3. The Latitude filter is the one that halts your progress and gives the "Uh oh, the AI doesn't know how to handle this situation" message. The Latitude filter is the same regardless of whether you're using OpenAI's models or not, and is meant to enforce Latitude's content policy. Latitude's filter also *may* be able to suspend you.
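The two-layer flow described above can be sketched roughly like this. Everything here is a guess pieced together from user reports; the function names, the stand-in trigger checks, and the order of the checks are all invented, since the actual server-side logic is not public:

```python
# Hedged sketch of the two-layer filtering flow as users have described it.
# All names and the stand-in trigger checks are made up for illustration.

HALT_MESSAGE = "Uh oh, the AI doesn't know how to handle this situation."

def latitude_filter_triggers(text: str) -> bool:
    return "lat-trigger" in text   # stand-in for Latitude's content-policy check

def openai_filter_triggers(text: str) -> bool:
    return "oai-trigger" in text   # stand-in for OpenAI's opaque check

def handle_action(text: str) -> str:
    # Latitude layer: visible, halts progress with the in-game message,
    # applies regardless of which model you're on.
    if latitude_filter_triggers(text):
        return HALT_MESSAGE
    # OpenAI layer: silent. The user is not told, the action still completes,
    # but it runs on Latitude's own model instead of GPT-3. Repeated triggers
    # can reportedly shadowban the account from GPT-3 entirely.
    if openai_filter_triggers(text):
        return "[generated by Latitude's model]"
    return "[generated by GPT-3]"

print(handle_action("hello"))        # [generated by GPT-3]
print(handle_action("oai-trigger"))  # [generated by Latitude's model]
print(handle_action("lat-trigger"))  # halt message
```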


Mawrak

what the fuck... why, just why


pakaerf

OpenAI seems to be making demands of Latitude to change how their models are used. Latitude says they've been negotiating behind the scenes to keep those models as open as they are, and they still seem committed to their content policy. That means no sexual stuff with minors. There are some other things in the content policy that don't seem as enforced. What's happened this week with OnlyFans is probably a good analogue.


chrismcelroyseo

And I'm betting it's not just OpenAI. Credit card processors can get into the mix and make demands of their own.


FoldedDice

The whole thing is quite a bit more complicated than it just being "Latitude bad". It's a very multi-pronged bad guy at work here.


pakaerf

You're saying someone not wanting their service to be used for child porn is bad? I mean, the execution was clumsy, but bad?


FoldedDice

No, that’s not what I said and from the sound of things we’d be on the same side of that argument. The end result is bad, but not necessarily the intent.


whenhaveiever

So regarding moderation in this sub, is it just the Latitude filter that is perfect in every way with no need for improvement now and forever, or is the OpenAI filter also perfect in every way with no need for improvement now and forever?


PikeldeoAcedia

We'll have to wait and see. The mods, when they added the new rule, only said it was disallowed to suggest that the *current* filter be removed. As in, the iteration of the filter that existed at the time the rule was added. Not sure yet if the Latitude filter is as infallible as the OpenAI filter.


whenhaveiever

Oh I see, so now that Latitude has changed the filter, we really should be complaining that they changed the filter that *was* perfect in late July, but now isn't anymore?


Kale_Critical

It was never perfect. Neither one is.


Emory_C

I wouldn't say "slightly worse."


MagyTheMage

Is the quality difference really big?


Emory_C

Yes. GPT-3 reads like human-level intelligence. Dragon’s “custom” model is literally 4% as capable.


CactusMassage

Isn't the Latitude version like 6B or something, as opposed to 175B?


TravellingRobot

They didn't clarify, but it's been speculated. A strong argument for it being GPT-J is that a while before these changes, an AID dev came to the Eleuther server and asked for info on how to set it up. Among others, the boss of NAI apparently helped out.


ChelStakk

According to the CEO, they'll have "GPT-3 sized models relatively soon," but if they can't even set up 6B to run, you know. It's not like they have been very competent (did they just break Worlds?) https://www.reddit.com/r/AIDungeon/comments/p7p1cc/ill_post_this_for_the_users_that_arent_using/h9sdk36/


TravellingRobot

There are many GPT-3 models. Nowhere did he say it will be of similar size to Dragon/Davinci ;-). And to be fair, I think asking the open source community for pointers on setting up an open source model isn't all that weird. Kuru seemed annoyed that Latitude isn't giving much back to the open source AI community, but that's a different topic.


ChelStakk

Fair enough, though there are only 6.7B and 175B models (forget Ada and Babbage), and I'm fairly sure of the *"positive message"* he was trying to deliver :-)


TravellingRobot

Honestly, people can form their own opinion, but watching Latitude's mess of communication, to me it looks like a lot of weasel phrases. They often seem to imply one thing but phrase it in a way that could actually mean something much less exciting. Maybe to give themselves an out, or maybe because the less exciting thing is what they were planning all along.


chrismcelroyseo

Yes. The marketing that comes out of all other companies is always completely transparent and honest.


immibis

Technically only the biggest one is called GPT-3


fish312

So what is Ada?


pakaerf

That's assuming model size is directly proportional to quality, but larger models hit massive diminishing returns. A 6B param model is never going to be on par with a 175B param model, but it's not 4% as good. I'll bet Latitude is working on a) getting bigger models they can use (GPT-NeoX and Jurassic-1 Jumbo would be my guesses) and b) adding more ways to use training to raise quality (which NAI has been doing with small models). They've also long hinted that they will be creating other experiences … guess we'll see if those ever emerge.
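The diminishing-returns point can be made concrete with the power-law loss fit from Kaplan et al. (2020). The constants below are that paper's fitted values; treat the numbers as indicative only, since loss is an imperfect proxy for perceived quality:

```python
# Rough illustration of diminishing returns with parameter count, using the
# power-law loss fit L(N) ~= (N_c / N) ** alpha_N from Kaplan et al. (2020).
# The constants are that paper's fits; this is indicative, not definitive.

N_C = 8.8e13      # fitted constant (parameters)
ALPHA_N = 0.076   # fitted exponent

def loss(n_params: float) -> float:
    """Approximate cross-entropy loss for a model with n_params parameters."""
    return (N_C / n_params) ** ALPHA_N

small = loss(6e9)     # ~6B model (GPT-J scale)
large = loss(175e9)   # ~175B model (Davinci scale)

# ~29x more parameters buys roughly a 20-30% reduction in loss,
# nothing like a 25x jump in "capability".
print(f"6B loss ~{small:.2f}, 175B loss ~{large:.2f}, ratio {small/large:.2f}")
```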


Emory_C

> I'll bet latitude is working

HAHAHAHA. Good one.


FoldedDice

There's no evidence to say one way or the other what they've been up to on the back end. I'd say they've learned the hard way to keep their mouths shut until they're ready to demonstrate results.


Kronkulon

Dragon to fake dragon is so far beyond the difference of dragon to griffin that I cannot accurately describe it. It surely has to be illegal to call this model the same thing as what I used to pay for.


chrismcelroyseo

I must be really, really special. They must have selected me for some kind of special treatment. I see you guys complaining about the quality all the time, but when I compare it to others like HoloAI, Dreamily, NovelAI, etc., I get much better results from Dragon.


TravellingRobot

What was said in the blog post and the answers from Nick are a little different from what users are seeing right now (ofc that might change in the future): essentially, the recent change means you can get banned from OAI models and will then be redirected to (inferior) non-OAI models. At least right now, the visible filter is the same as it has been before, and is the same for OAI and non-OAI models ("...AI doesn't know how to handle..."). Some have speculated that there might be a silent filter in place that temporarily redirects you from OAI to non-OAI models (for single inputs or stories), but that is hard to prove or disprove ofc. There is no obvious direct connection between triggering the visible filter and being banned from the OAI models. Nobody knows what the bans are based on.


Service_Alternative

i got shadowbanned from openai for idk what reason? i do have nsfw stories occasionally, but never anything that triggered the ai before, and now im forced to use the shitty models. also, every time i try to do an action i get the "the ai doesn't know what to say" thing and the entire game is unplayable


johnconner01

same here...


MrXen0m0rph

I wonder what they even did to "fix" it in the first place. I got shadowbanned without even triggering the filter once before.


chitto1001

yeah, if u trigger the aidg filter u get a blue line till u manually edit the word out.


TravellingRobot

The blue filter was only active when they had a regex filter. The new filter (a black-box classifier) gives the orange "...AI doesn't know how to handle..." message. And since it's a classifier, it's no longer specific words that trigger it; it's basically impossible to know what does. In all likelihood even Latitude or OAI can't say for sure, because AI classifiers are black-box models.
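The regex-vs-classifier distinction can be sketched in a few lines. The word list, the dummy scorer, and the 0.5 threshold below are all invented for illustration; the real filters' rules and training data are not public:

```python
import re

# Toy contrast between the two filter generations described above.

BLOCKLIST = re.compile(r"\b(forbidden|banned)\b", re.IGNORECASE)

def regex_filter(text: str) -> bool:
    """Old-style filter: a fixed, inspectable pattern. You can see exactly
    which word tripped it, so editing that word out un-flags the text
    (the 'blue line' behavior)."""
    return bool(BLOCKLIST.search(text))

def classifier_filter(text: str, score) -> bool:
    """New-style filter: an opaque learned score in [0, 1]. There is no
    single word to edit out, and similar phrasings can flip the decision."""
    return score(text) > 0.5  # threshold is a guess

# Stand-in scorer; a real classifier is a trained black-box model.
dummy_score = lambda text: 0.9 if "forbidden" in text.lower() else 0.1

print(regex_filter("a FORBIDDEN word"))                    # True
print(classifier_filter("a forbidden word", dummy_score))  # True
```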


Nick_AIDungeon

Hey, just want to post this relevant update here: https://www.reddit.com/r/AIDungeon/comments/pbnvu3/rollback_on_model_switch_system/