ParserComp 2023 (belated): The Fortuna

Admittedly, ParserComp 2023 is old news by now, but I’m not quite done with it. I declared my intention to play all sixteen of its games, and I got through only seven before the judging period’s deadline. My mistake: I got a very late start, but thought I could still get through them all, because I was really thinking in terms of IFComp, which has the famous two-hour rule to encourage smaller games. ParserComp has no such rule, and this year’s entries included two more games by the author of Finn’s Big Adventure. I did consider continuing to play and write up the entries here after the deadline passed, as blogs have no deadline, but I didn’t do that.

Still, there’s one entry that I really want to comment on here, because it’s so of-the-moment. The Fortuna, a thriller set on a cruise ship, uses an LLM to generate text. Now, from where I sit, the judgment I’ve seen passed on LLMs and generative AI in general has mostly been negative: that their lack of any mooring in reality makes them unsuitable for dealing with facts, while applying them to fiction and creative work threatens the livelihoods of human artists (precarious enough already) while ultimately resting on their labor without fair compensation. But the author of this piece is a true believer in the liberatory potential of the AI revolution. (I thought for a while that they might be pretending, mainly because of the illustrations: these too are AI-generated, and, while the first few shown are reasonable enough that I mistook them for stock photographs, they start turning toward subtle monstrosity before long. I’ve now played enough of the game to be reassured of its sincerity, but it’s conceivable that there’s a twist that I simply haven’t had the patience to reach.)

Now, you may be thinking “LLM-generated IF? Like AI Dungeon, then?” Not quite: the only generated text in the game is NPC dialogue provoked by freeform input. It’s in some ways a natural choice, supplementing one of IF’s traditional weak spots while limiting the damage that the LLM can do to the narrative, but with the current state of the tech, the effect is deeply peculiar. It makes the dialogue unnatural in a characteristically ChatGPT-ish way. Characters don’t really converse so much as use your words as prompts for paragraph-long essays with lots of “I believe” statements:

Journalism is an incredibly important field that allows us to gather, analyze, and share information with the world. It is the perfect blend of storytelling and investigation, allowing journalists like myself to shine a light on important issues and hold those in power accountable. Journalism has the power to shape public opinion, influence policy, and promote social change. As a woman in journalism, I am particularly passionate about using my platform to advocate for women’s rights and to amplify the voices of marginalized communities. It is crucial that we have diverse perspectives and stories represented in the media, and I am determined to be a part of that movement.

The game tells you to befriend these characters to advance the plot — I could comment on how pathologically utilitarian this view of “friendship” is, but that’s really a problem endemic to IF as a whole, not this game in particular. Befriending a character basically means asking them about their pet topics repeatedly. Which they tend to circle back to regardless, often with suspiciously similar verbiage:

> ask nancy “What are you doing here?”

I’m here as a reporter for the New York Times, covering the recent women’s rights rally in the city. It’s important to give a voice to the issues that affect women and make sure their stories are heard. This rally is a powerful platform for women to come together, raise awareness, and fight for their rights. As a journalist, it’s my responsibility to report on these events and shed light on the challenges women face in our society.

> ask nancy “I mean, what are you doing on this deck in particular?”

I’m conducting research for a new article I’m working on. I’m investigating the gender wage gap in the corporate world and the impact it has on women’s rights. I believe it’s important to shed light on these issues and advocate for equality and fair treatment for women in the workplace.

> ask nancy “Why are you on the pool deck?”

I’m on the pool deck because I’m investigating the recent increase in drownings in the area. As a reporter, it’s my job to shed light on important issues and bring attention to them. It’s not enough for me to just report the facts; I want to use my platform to make a difference. I believe that by raising awareness about the importance of water safety and advocating for women’s rights, we can start to address this problem and prevent future tragedies.

For this player, at least, the end result is that I gave up on trying to treat this as anything other than Infocom-style “ASK/TELL” dialogue with unusually elaborate “I don’t understand” messages. The LLM simply doesn’t have enough knowledge about its context within the game to be able to converse about that context meaningfully. It’s been said that the ability to figure out what you do and don’t need to pay attention to, and to filter out the unimportant, is the quintessential skill in all videogames, and it’s arguably even more crucial when the text is being produced in a void of authorial intent, where failing to realize that the nonsense is nonsense has the potential to send you down infinitely-deep but ultimately empty rabbit-holes. So there’s the paradox of this game: the whole thing is a showcase for a feature that it structurally encourages you to ignore! (But I suppose the same could be said of a lot of graphical games…)

It’s got me wondering about other possibilities, though. The dialogue feels meaningless because it’s disconnected from the rest of the game — even the “befriending” mechanic seems like it’s just scanning your input for keywords and ignoring the response:

> tell david “Physics sucks and physicists are bad people. People like you are responsible for Hiroshima. I hate you. I hate you. I hate you.”

I’m sorry if you feel that way, but as a research scientist in the field of physics, my goal is to make a significant contribution to my field and to help others through my work. I understand that there have been events in history, like Hiroshima, that have used scientific discoveries in destructive ways. However, it is important to remember that science itself is not inherently good or bad – it is how we choose to use and apply it that determines the outcome. I am driven by a desire to contribute positively to the world and leave a lasting legacy, and I strive to do so with compassion and responsibility.

David Millman is now your friend!
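If the mechanic really works the way that transcript suggests, it could be as simple as the following sketch. To be clear, this is my own speculative reconstruction, not the game’s actual code; the topic lists and function names are invented for illustration:

```python
# Hypothetical sketch of the suspected "befriending" mechanic:
# scan the player's freeform input for an NPC's pet topics, and
# never inspect the LLM's generated reply at all.
PET_TOPICS = {
    "david": ["physics", "science", "research"],          # invented lists
    "nancy": ["journalism", "women's rights", "reporter"],
}

def mentions_pet_topic(npc, player_input):
    """Return True if the player's input name-drops any pet topic."""
    text = player_input.lower()
    return any(topic in text for topic in PET_TOPICS[npc])

# Under this logic, the insult transcript above would still count as
# friendly, because it happens to mention physics:
print(mentions_pet_topic("david", "Physics sucks and physicists are bad people."))
# → True
```

Which would explain why an open declaration of hatred earns “David Millman is now your friend!”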

But what if it didn’t ignore the output? What if it tried to determine whether you had befriended a character on the basis of what they said?

Well, for one thing, you’d have to retune the output considerably to make that work. In this game, I don’t think I’ve seen any character actually express camaraderie or affection (or indeed any emotion at all). But assume that’s a solved problem. Would it make for a better game? I don’t know that it would! It might be more responsive, but it would be less intentional, less about the story the author wants to tell and more about figuring out how to manipulate the LLM.
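For what it’s worth, the output-judging variant wouldn’t require anything exotic, mechanically. Here’s a minimal sketch, assuming you had some way to score a reply’s warmth; the word lists below are a crude stand-in for a real sentiment or affect classifier, and none of this is from the actual game:

```python
# Sketch of the alternative: judge friendship from what the NPC says,
# not from what the player typed. WARM/COLD are a toy stand-in for a
# real sentiment classifier.
WARM = {"friend", "glad", "enjoy", "appreciate", "happy"}
COLD = {"sorry", "unfortunately", "cannot", "uncomfortable"}

def reply_sounds_friendly(llm_reply):
    """Crude check: does the reply contain more warm words than cold ones?"""
    words = set(llm_reply.lower().replace(",", " ").replace(".", " ").split())
    return len(words & WARM) > len(words & COLD)

# David's measured non-answer to the insult would not score as friendly:
print(reply_sounds_friendly("I'm sorry if you feel that way, but ..."))
# → False
```

The retuning problem remains, of course: a model that never expresses camaraderie gives a classifier like this nothing to detect.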

But on the other hand, game devs routinely let general-purpose libraries handle things like physics and 3D rendering, sometimes with unanticipated effects. Will it someday be possible for developers to simply give an AI actor parameters and direction and motivation, and trust it to ad-lib its lines? I suppose that’s the dream this game represents. And I’m willing to keep an open mind about it. But we’re not there yet, and it’s not at all clear that LLMs are anything more than a dead end in its pursuit.
