ParserComp 2022: Lantern

Okay, we’re not off to an auspicious start here. Lantern is a Windows app with a hybrid hypertext/command-line interface in a graphical display, written in a Lua-driven engine called LÖVE, which doesn’t seem to be designed primarily for parser IF. The first thing we’re told is “The story is about a blind man carrying a lantern, trying to solve the mystery of his blindness. As he walks around the only sense is hearing, smell, touch, and taste.” Interactive fiction has historically had a substantial following among blind people, as it’s one of the few forms of computer game that they can play, provided it’s in a format that’s friendly to text-to-speech software. This game isn’t.

It starts with your basic amnesia plot: you’re stuck in three rooms (that I could find), with no idea how you got there, and puzzled by the fact that “your sense of sight is missing”. Except I’m more puzzled by why the player character thinks that. If I found myself in a strange room and unable to see anything, I wouldn’t think “I must be blind”, I’d think “Gee, it’s completely dark in here”. I suppose the lit lantern in your hands is supposed to address this, but even with that, I really think my first thought would be “There’s something wrong with this lantern. It’s giving off heat but not light. Is there some kind of cover I have to open?”

I struggled a bit with the UI. The introductory instructions are longer than the screen can hold, and at first I thought it was impossible to scroll — the game doesn’t recognize the scroll wheel or arrow keys, but the text can be dragged with the mouse. If you type a command and press enter, and the command isn’t one that the game understands, absolutely nothing happens — no error message, not even clearing the command line. To make it worse, which commands are recognized is highly contextual, even for things that shouldn’t be. You can’t refer to objects that aren’t currently named on the screen — and pretty much every command response has its own screen, because output is all based around nodes, like a Twine game. So if you, say, examine an object, the response acts like a modal pop-up, blocking all other interaction until it’s closed.

In fact, what this scoping suggests is that typed commands are simply mapped onto highlighted keywords or pairs of keywords from the output text, including the sense organs (“fingers”, “ears”, etc.) that are (otherwise bafflingly) included in your inventory listing. Verbs are effectively fake: “touch” is just a synonym for “fingers”, and so on. Thus, in contradiction to the spirit of ParserComp and possibly its actual rules as well, the game seems to be built primarily for mouse input, with the command line as an afterthought. The parsing is lousy enough that I’m not convinced it’s even happening — the game recognizes so few commands that it could very well just be looking up the entire command string in a table. Supporting this hypothesis is the fact that adding an extra space between words is enough to turn a valid command into an invalid one.
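If that hypothesis is right, the whole “parser” could be little more than the following. This is a purely speculative Lua sketch on my part (LÖVE games are written in Lua); the command strings, node names, and showNode routine are all my own inventions, not anything from the game’s actual source.

    -- Speculative sketch of an exact-match "parser": each complete command
    -- string is a key pointing at an output node, Twine-style.
    -- (Every name here is invented for illustration.)
    local commands = {
        ["touch lantern"] = "node_lantern_touch",
        ["fingers"]       = "node_lantern_touch",  -- the verb and the sense organ share an entry
        ["smell room"]    = "node_room_smell",
    }

    local function showNode(id)
        print("Displaying node: " .. id)  -- stand-in for whatever the game actually renders
    end

    local function handleInput(input)
        local node = commands[input]  -- raw string lookup: no tokenizing, trimming, or normalizing
        if node then
            showNode(node)
        end
        -- unrecognized input: do nothing at all, not even an error message
    end

    handleInput("touch lantern")   -- works
    handleInput("touch  lantern")  -- fails silently: the doubled space makes it a different key

A real parser would tokenize and normalize the input before matching anything, which is exactly the step the extra-space behavior suggests is missing here.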

It’s a difficult game to communicate with — the whole premise is that it’s giving you less information than you want, and it doesn’t make it easy to make your intentions known via input either. After a few hours of exhaustively trying all the possible combinations of keywords, I’m giving up on it. I solved a bunch of inventory puzzles, but I haven’t solved the mystery of the blindness. I don’t know what proportion of the game I’ve seen. I’ve seen enough.
