Published: December 7, 2025

Don't think of LLMs as entities but as simulators. For example, when exploring a topic, don't ask: "What do you think about xyz?" There is no "you". Next time try: "What would be a good group of people to explore xyz? What would they say?" The LLM can channel/simulate many…
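The reframing above is easy to mechanize. A minimal sketch (the helper name and wording are illustrative, not from the thread) that turns an "entity" question into a "simulator" question before sending it to any chat model:

```python
def panel_prompt(topic: str) -> str:
    """Reframe an 'entity' question ("what do you think?") as a
    'simulator' question: ask the model to assemble a panel of
    relevant people and voice each of their perspectives."""
    return (
        f"What would be a good group of people to explore {topic}? "
        "For each person, say who they are and what they would say."
    )

# Entity framing (discouraged): implies a single "you" with opinions.
entity_prompt = "What do you think about xyz?"

# Simulator framing: invites the model to channel many perspectives.
print(panel_prompt("xyz"))
```

The point of the wrapper is only to make the framing shift concrete; the payoff comes from the model simulating several voices instead of defaulting to its single assistant persona.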

…work going into engineering the "you" simulation — the personality that gets all the rewards in verifiable problems, or all the upvotes from users/judge LLMs, or mimics the responses of SFT — and there is an emergent composite personality from that. My point is more that the "…
