minoaskisho
03/17/22 11:33PM
AI writing Hypnosis Scripts? ft. GPT-3
Since OpenAI has opened up their API, I wanted to see if I could get it to produce realistic hypnosis inductions or fantasies. I was somewhat unsuccessful, but wanted to post what small results I got on here.

For those who don't know what GPT-3 / OpenAI is, it's basically an algorithm that tries to predict what word comes next in a sentence. If this is done well, it can be used in many interesting settings, such as creating computer algorithms from text descriptions (like GitHub Copilot) or creating a Dungeons & Dragons-like experience, acting much like a DM would (like AI Dungeon).

My ideas for prospective use-cases of this in hypnosis could be as a supportive tool, generating ideas for hypnotists and producing suggestions that can then be refined into real scripts. It could also generate whole sessions, and with the help of more natural-sounding TTS engines, everyone could make hypnosis sessions that are exactly to their liking. This isn't limited to erotic experiences either, even though that's what I had in mind originally. Lastly, with some work it could possibly create a virtual interactive hypnosis session, where you wouldn't have to hope the random tist on Omegle respects your boundaries and has the same fetishes as you.

# The Un-tuned model #

I first tried getting an un-tuned GPT-3 to generate induction scripts. I tried starting off with prompts like "Flower Field Induction: " or prompting with the start of real transcriptions from hypnosis scripts. This mostly just got off topic real quick, and the results seemed shallow and superficial. Therefore, I started looking into what I could do to improve them.
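For reference, a minimal sketch of what these un-tuned attempts looked like against the API (the prompt and parameter values here are illustrative, not the exact ones I used):

```python
# Minimal sketch of an un-tuned attempt against the completions API,
# using the openai Python library of the time (pip install openai).
import openai

openai.api_key = "sk-..."  # your own API key

response = openai.Completion.create(
    engine="davinci",                   # base, un-tuned GPT-3 model
    prompt="Flower Field Induction: ",  # seed the title, let it continue
    max_tokens=256,
    temperature=0.7,
)
print(response["choices"][0]["text"])
```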

# Tuned Mk. 1 #

OpenAI also gives clients a method of fine-tuning a base GPT-3 model to a specific use-case. By combining prompts with ideal completions, the model will (hopefully) get better at predicting what new outputs should look like. This also makes sense intuitively: if you haven't seen a hypnosis script before, you might not really know how to create one, but if you know how a "Sunset Induction" works, you might be able to change the theme without altering the structure of the induction.

I thus provided the AI with about 32 title + transcription pairs from the hypno.nimja.com site. As I'm not commercializing this, I believe it falls under personal use, but if not I sincerely apologize and ask for forgiveness in the name of science.
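For anyone curious, the training data roughly takes the JSONL prompt/completion shape that OpenAI's fine-tuning endpoint expects. The separator and the example pair below are illustrative, not my actual preprocessing code:

```python
# Sketch of preparing fine-tuning data in the JSONL prompt/completion
# format the fine-tuning endpoint expects. The pair shown is invented;
# the real pairs came from hypno.nimja.com transcriptions.
import json

pairs = [
    ("Sunset Induction", "Take a deep breath in... and slowly let it out..."),
    # ... ~32 title + transcription pairs in total
]

with open("hypno_finetune.jsonl", "w") as f:
    for title, transcript in pairs:
        row = {
            "prompt": f"{title} ->",         # " ->" marks the end of the prompt
            "completion": " " + transcript,  # leading space helps tokenization
        }
        f.write(json.dumps(row) + "\n")

# Then kick off the fine-tune from the shell:
#   openai api fine_tunes.create -t hypno_finetune.jsonl -m davinci
```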

Next came trying a lot of different prompts with a lot of different parameters. Getting these parameters right is one of the most important steps in this process. First is temperature: the higher the temperature, the more "creative" the AI becomes, while a temperature of 0 works well when you want the same answer to the same prompt every time. Beyond that, the frequency penalty and presence penalty can be used to alter how often the AI repeats itself and reuses the same words.
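Roughly, these knobs map onto the API like this (the fine-tuned model name is hypothetical, and the values are starting points to experiment with, not a recipe):

```python
# Sketch of the sampling parameters on a fine-tuned completion call.
import openai

response = openai.Completion.create(
    model="davinci:ft-personal-2022-03-17",  # hypothetical fine-tuned model
    prompt="Flower Field Induction ->",
    max_tokens=512,
    temperature=0.8,        # higher = more "creative"; 0 = same answer every time
    frequency_penalty=0.5,  # penalize tokens proportionally to how often they appear
    presence_penalty=0.3,   # penalize tokens that have appeared at all
)
print(response["choices"][0]["text"])
```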

I then tried a whole bunch of different prompts, not knowing what to expect, and a few actually came up with something that starts off decently. Here are some pastes of generated examples:

* Flower Field Induction: pastebin.com/q6aL4Qwq
* Gaming Immersion Induction: pastebin.com/XcR8nJ9c
* Chicken Induction Transformation Fantasy: pastebin.com/fsACrx2d

However, as you can see, the algorithm started to fall off or get unfocused a while into the scripts. Also, some prompts are misunderstood, and the fine-tuned model produces nonsense. Here's an example:

Hypnotized on Stage to be a Puppy Fantasy: pastebin.com/9V5ZDP7C

I then started brainstorming how to get past this problem, which resulted in the second fine-tune.

# Tuned Mk. 2 #

The ideas for improving the model were to give it more data and improve the connections between the data it got. To do this, I upped the training examples from 32 to 128. I also started giving both the title AND description of files, and changed the way linebreaks were written to help with tokenization (trying to get the algorithm to produce more linebreaks; the revised data layout is sketched below). Training this model cost about $25, so things were getting (relatively) expensive, and I was hoping for good results. However, when I tested this new model, I was saddened. Take a look at these examples for yourself:

* Prompt: "Chicken Transformation, in this file you will be slowly turned into a chicken" pastebin.com/dAC9Nxtw
* Prompt: "Hypnotist Photographer Fantasy, in this file you will be hypnotized by a photographer" pastebin.com/73yksmSk

# Epilogue #

Feeling kind of broken, this is where I end my journey for now. I currently do not have the money to continue this research from my own pocket, however fun it was to see the AI generate some sort of custom hypnosis sessions. If there is enough interest I might start some kind of campaign, but for now this is where it ends.

If anyone has some ideas for non-erotic prompts or prompt-description pairs to try putting through the model and are willing to pay for the API usage, feel free to contact me :)

For anyone wanting to continue this, please feel free to contact me if you think there's something I can help with. The main ideas and problems currently seem to be:

* A good first step would be to increase the number of training samples. Instead of 128, you might try something in the 1000s and get a good result. Keep in mind, however, that hypnotists often have different ways of writing, and the data needs to be of good quality if you want something that looks good in the end.
* OpenAI does not allow generating content of an adult nature. Fine-tuning a model also comes at a significant price. A potential solution to both would be to use the open-source GPT-J (6b.eleuther.ai/), though this would require some time and hardware from the individual user.
* Figuring out how to get more frequent linebreaks would probably increase the effectiveness and readability of the text. My hypothesis is that the algorithm is scared of repeating itself, so after about 4 linebreaks it stops seeing them as a sensible choice (see the sketch after this list).
* Changing the input prompt structure is also an interesting idea. For example, not including the description might have worked better. However, ideally, being able to write not just a file title but a description would allow more control. It could also be that my descriptions were too loose, and writing tighter, more detailed descriptions would give better results.
* Alternatively, making the AI continue a session instead of creating one from scratch could provide more of an interactive experience, but would likely take a long time to train to a point where it makes any sense at all.
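On the linebreak point specifically, one thing worth trying is nudging the newline token directly with logit_bias, so the repetition penalties don't scare the model away from emitting it. A sketch, assuming the model name from before (token 198 is "\n" in the GPT-3 tokenizer; the bias and penalty values would need tuning):

```python
# Sketch of testing the linebreak hypothesis: bias the newline token
# upward so repetition penalties don't suppress it.
import openai

response = openai.Completion.create(
    model="davinci:ft-personal-2022-03-17",  # hypothetical fine-tuned model
    prompt="Flower Field Induction ->",
    max_tokens=512,
    temperature=0.8,
    frequency_penalty=0.2,  # keep the repeat penalty low so "\n" isn't punished
    logit_bias={"198": 2},  # token 198 is "\n"; positive bias makes it likelier
)
print(response["choices"][0]["text"])
```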

Even though I love AI and the prospects are very interesting, I couldn't get it to work this time. But just because it didn't happen today doesn't mean it won't happen in the future, and we will finally be able to have custom hypnosis scripts / sessions made for us, interactively, at a low price :) Thanks for reading.
teso
03/18/22 06:09AM
Sorry, I don't really know how I could help with this project, but as someone who's trying to become a hypnotist, this sounds awesome
Lloyd
03/20/22 02:58AM
Instructions unclear; hypnotized my friend into believing he's a dog photographer.

Send help.
TheMadPrince
03/20/22 01:45PM
Lloyd said:
Instructions unclear; hypnotized my friend into believing he's a dog photographer.

Send help.


Ah, the age-old philosophical question, first posed by Woofucius: is a dog photographer "someone that photographs dogs", or a "dog that photographs someone"?
akaece
03/21/22 11:23PM
GPT models (and BERT, and all their friends) don't ever really appear to have a good continuous stream of logic. Expanding the logic of "what word fits where" to "what phrase fits here" (and doing it well) was GPT's big leap, but they're not to the level of "what should happen in the next paragraph," which is what a whole induction would require. A good induction is like a story, with a beginning and a middle and an end - GPT has particular trouble with endings, and it gets shakier the longer the middle goes on.

Based on my research, unless there are some significant hardware advances that let us reach astronomical parameter counts, no amount of tuning and dataset engineering you can do is going to overcome that problem. The way GPT's trained, it's always going to run into situations where it gets lost on what to do and reverts to something closer to a Markov chain for at least a few words, at which point its consistency starts to go out the window. Unless you want to get into the field professionally, you're much better off just waiting a few years for the cutting-edge stuff like RETRO to bring about the advances that bridge the gaps in its logic. Until then, just writing things yourself is going to take a lot less time and effort than trying to coerce GPT into doing it for you.