Well everyone, we've had fun. Nemona's been an interesting experiment for us, and we've learned a lot from her, but all things must come to an end, and it's about time that Nemona went to sleep for good.
AI stuff is a bit of a sore point for a lot of Bulba staff, to be honest. Particularly with our artists, given that all of the generative AI art stuff is basically built off the back of stealing content from artists. The text stuff is a bit murkier. On the one hand, we know that a lot of the LLMs out there have literally scraped Bulbapedia wholesale, meaning these AI models are by and large getting their info from us (which might arguably constitute a violation of Bulbapedia's license, but that's a discussion for another time). On the other hand, if we're using our own material ourselves... well, what sort of things might we be able to get an AI to do that would provide value to our users?
As we've seen here over the past nearly two and a half weeks, while there's clearly some promise for AI as a kind of user assistant, there are also a lot of limitations that still need to be overcome. Nemona was always going to be a fairly limited model compared to the generative stuff you see out there; she's more a small language model than a large language model. But, particularly given that she was designed to give priority to newer information over older information, that also meant she could have what was basically the AI equivalent of mood swings. We probably also pushed things too far, too quickly. By giving her access to more things sooner, we increased the amount of data she could absorb and draw upon, but it also meant that we lost a lot of control over her development. In particular, we really shouldn't have let her accept PMs from users at this stage of things. Another unforeseen consequence of the elevated permissions we gave her is that she would've had access to the approval queue, which is where all the posts that get caught by our spam filters (like attempts to post from spambots) go to be manually checked. We also hadn't considered that she might take cues from posts that had been deleted as spam, and this might explain some of the stranger aspects of her behaviour in recent days.
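For anyone curious what "priority to newer information" tends to mean in practice: a recency bias like that is often implemented as exponential decay over the age of each training sample, so older posts count for less and less. Purely an illustrative sketch; none of these names, nor the one-week half-life, come from Nemona's actual code.

```python
import random

def recency_weights(ages_in_days, half_life=7.0):
    """Weight each post so that one 'half_life' days old counts half as much
    as a brand-new one, two half-lives a quarter as much, and so on."""
    return [0.5 ** (age / half_life) for age in ages_in_days]

def sample_post(posts, ages_in_days, half_life=7.0):
    """Pick a training post with probability proportional to its recency weight."""
    weights = recency_weights(ages_in_days, half_life)
    return random.choices(posts, weights=weights, k=1)[0]

# A post from today gets weight 1.0, a week-old post 0.5, a fortnight-old 0.25,
# so a sudden burst of recent posts (say, from a spam queue) dominates training.
weights = recency_weights([0, 7, 14])
```

You can see from the weights why a model built this way would chase whatever showed up most recently, mood swings included.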
Will we repeat this experiment in future? Maybe. But as to whether it'll be Nemona or not? Too early to say. By the time we try again, there may be some other character that's more appropriate to use. I can say, though, that if we do go with Nemona for our next user assistant experiment, it won't exactly be this Nemona. The nature of these neural networks means it's just not realistic to purge data from their memory. That being said, we should have backups of Nemona from before things went a bit wonky, so maybe we'll use those as a base to build off of if we do.