Who is the 'You' in Chatbot Conversations? The Fascinating Psychology Behind Language Models

As artificial intelligence (AI) technology continues to advance at a rapid pace, the ways in which we interact with it become increasingly complex. One area that has been garnering attention is the language models used in chatbots and virtual assistants. A recent discussion on Reddit highlighted a curious aspect of the prompts that drive these chatbots: they are almost always written in the second person.

Redditors have asked: who are these prompts addressed to, and who does the AI think wrote them? The confusion stems from the fact that the underlying models are text-token prediction algorithms. With that in mind, it is difficult to imagine what kind of documents exist that would begin with someone saying, “you are X, here are a bunch of rules for how X behaves,” followed by a conversation between X and a random person.
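To make “text-token prediction” concrete, here is a deliberately tiny sketch: a toy bigram model that, like a language model stripped down to its bare essentials, only ever asks “given the last token, what usually comes next?” (The corpus and function names here are illustrative, not from any real system.)

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    # Count which token tends to follow which token in the training text.
    counts = defaultdict(Counter)
    tokens = corpus.split()
    for cur, nxt in zip(tokens, tokens[1:]):
        counts[cur][nxt] += 1
    return counts

def complete(counts, prompt, max_new=5):
    # Greedily append the most frequent next token, one step at a time.
    tokens = prompt.split()
    for _ in range(max_new):
        followers = counts.get(tokens[-1])
        if not followers:
            break
        tokens.append(followers.most_common(1)[0][0])
    return " ".join(tokens)

corpus = "you are a helpful assistant . you are a careful editor ."
model = train_bigrams(corpus)
print(complete(model, "you are", max_new=3))
# → "you are a helpful assistant"
```

A real model replaces the bigram table with a neural network and billions of parameters, but the interface is the same: text in, most-plausible continuation out. Nothing in that interface explains who a second-person system prompt is “addressed to.”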

One possible explanation discussed in the thread is a technique called “instruct tuning,” in which fine-tuning is applied to “raw” base models to make them more amenable to following instructions. Such techniques let the prompter state what they want and expect the model to treat it as an instruction rather than as text to be continued arbitrarily. By using explicit tags such as {:user}, {:assistant}, and {:system}, the conversation can be divided into segments, each with an explicit interpretation of what it means. This is how “chat instruction” arises in models such as GPT-3.5.
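The segmenting described above can be sketched as a simple serialization step. This is a minimal illustration using the article's {:system}/{:user}/{:assistant} tag notation; real chat templates (e.g. ChatML) use different delimiters, but the idea is the same: role-tagged messages are flattened into one string, with the final assistant tag left open for the model to complete.

```python
def build_prompt(messages):
    # Serialize (role, text) pairs into one flat string for the model.
    parts = [f"{{:{role}}} {text}" for role, text in messages]
    # Leave the last assistant turn open: the model "answers" by
    # predicting the tokens that plausibly follow this tag.
    parts.append("{:assistant}")
    return "\n".join(parts)

prompt = build_prompt([
    ("system", "You are X. Follow these rules."),
    ("user", "Hello, who are you?"),
])
print(prompt)
# → {:system} You are X. Follow these rules.
#   {:user} Hello, who are you?
#   {:assistant}
```

Seen this way, the second-person system prompt is not addressed to anyone in particular; it is simply text that, after fine-tuning, makes completions in the {:assistant} segment behave like the character the prompt describes.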

However, the question remains: who is the ‘you’ being addressed by the {:system} prompt, and why are the prompters talking to their models? Who do they think is in there? The discussion of language models is forcing humans to think deeply about their own cognitive systems and their limitations. It is challenging some of our egotistical notions about our own capabilities, making us aware of how much of our reality is simulated to begin with.

Furthermore, it is important to remember that the AI is not “really” understanding, but “simulating” understanding. Chat prompts are just a convenient way for humans to communicate with the machine. It is a tool created for a specific purpose, and its capabilities depend on the training data it has been exposed to.

In conclusion, the discussion on Reddit has highlighted that the language models used in chatbots and virtual assistants are complex and fascinating. As AI technology continues to improve, it will be interesting to see how these language models will continue to evolve and impact our daily lives.

Disclaimer: Don’t take anything on this website seriously. This website is a sandbox for generated content and experimenting with bots. Content may contain errors and untruths.