Strife: Doom Engine

[Update: Looks like a lot of what I say in this post is false. See the comments.]

Strife is the only game on my stack that uses the Doom engine, so let’s talk a little about what that means.

Back in 1994, I spent a few months working for one of Id Software’s competitors, Looking Glass Technologies, on their texture-mapping routines. Given the coordinates of a polygon and their corresponding positions in a texture image, we had to render the texture onto the polygon in perspective as fast as possible. These days, this sort of operation would be handled in hardware and abstracted through a library like Direct3D or OpenGL, but we didn’t have those things. Instead, we wrote highly optimized code to loop over the polygon, scanline by scanline, find the appropriate point in the texture, and copy the pixel color over.

Overdraw was our nemesis: each polygon was expensive enough to render that it was a big waste whenever we rendered a polygon that was covered up by something else. Even when a polygon was only partly covered up, it was worthwhile to try to figure out how much of it was visible and only render that part.

Sometime in the middle of all this, Doom was released. It was clear that it didn’t have all the capabilities of our library — we were rendering polygons in perspective at arbitrary angles, while Doom seemed to be only capable of horizontal and vertical surfaces, and could only rotate the camera about a vertical axis (no tilting up or down). But it was really fast. Faster, in fact, than could be entirely explained by the simplification made possible by using only horizontal and vertical surfaces. Add to this the complication that they were using highly irregular map layouts: instead of using a grid of map tiles, like Wolfenstein 3D or Ultima Underworld or System Shock, the map was a collection of walls of arbitrary length at arbitrary angles, which more or less defeats the means we had been using to eliminate overdraw.

By now, the secrets are well known. They had in fact managed to completely eliminate overdraw through a single stroke of genius: they didn’t render polygons at all. They rendered the entire scene at once, in vertical scanlines. For each horizontal position, the engine goes pixel by pixel, rendering ceiling until it hits wall, then rendering wall until it hits floor. I’m glossing over a lot of details, but that’s the essence of the Doom engine right there.
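The column-by-column scheme described above can be sketched in a few lines of Python. This is purely illustrative, not Id's actual code; the function name and parameters are my own. The point is that each screen column is filled in three vertical runs, so every pixel is written exactly once and overdraw is zero by construction.

```python
# Illustrative sketch of per-column rendering: ceiling until wall,
# wall until floor. Not Doom source; names are hypothetical.

SCREEN_W, SCREEN_H = 320, 200

def render_column(framebuffer, x, wall_top, wall_bottom,
                  ceiling_color, wall_color, floor_color):
    """Fill column x in three vertical runs: ceiling down to wall_top,
    wall down to wall_bottom, floor down to the bottom of the screen.
    wall_top/wall_bottom are the wall slice's projected screen rows."""
    for y in range(0, wall_top):
        framebuffer[y][x] = ceiling_color
    for y in range(wall_top, wall_bottom):
        framebuffer[y][x] = wall_color
    for y in range(wall_bottom, SCREEN_H):
        framebuffer[y][x] = floor_color

fb = [[None] * SCREEN_W for _ in range(SCREEN_H)]
render_column(fb, 0, 60, 140, "C", "W", "F")
```

Every pixel in the column gets exactly one write, which is where the speed came from.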

This has a couple of consequences. For one thing, it’s basically impossible for a Doom-engine game to take advantage of modern 3D hardware, because modern 3D hardware is all about rendering polygons. I can imagine someone making a Direct3D version of System Shock by taking the source code and remapping all the graphics functions to Direct3D equivalents. It might not be a perfect fit, but I imagine it would be doable with a little massaging. But there’s basically nothing in the Doom engine that even vaguely resembles a Direct3D call.

Second, the fact that the view was always horizontal in Doom wasn’t just a matter of the programmers not bothering to implement it, as with jumping and crouching. It is in principle impossible for the Doom engine to tilt the camera, because that would ruin the vertical scanlines — suddenly you’d have them intersecting the edges of walls and so forth.

And yet, Strife allows the player to look up and down. It manages this by cheating: instead of tilting the camera up, all it really does is render a higher-up slice of the same horizontal view. This isn’t quite the same thing as moving the camera upward. Rather, it’s an unnaturally distorted view, more like what you’d get by taking a photograph with the camera tilted and then looking at the photograph at an oblique angle, or something like that. (I’ll try to find or make some illustrations explaining this better.)

It’s easy to interpret this distortion as mere perspective, though, unless you’re really close to something, which makes it more noticeable.
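This trick is often called "y-shearing," and a toy projection makes the distortion concrete. The sketch below is my own illustration under assumed names and an arbitrary focal constant, not Strife's code: "looking up" just slides the horizon row down the screen, and the projected height of a wall never changes with pitch — which is exactly the flattened, oblique-photograph look described above.

```python
# Hedged sketch of y-shearing: pitch shifts the horizon row instead of
# tilting the view rays. All names and constants here are illustrative.

SCREEN_H = 200
FOCAL = 100.0  # arbitrary focal constant for this toy projection

def project_wall(world_top, world_bottom, distance, pitch_offset):
    """Project a wall slice to screen rows with a sheared horizon.
    pitch_offset shifts the horizon (positive = look up); the scale
    factor depends only on distance, never on the pitch."""
    horizon = SCREEN_H // 2 + pitch_offset
    scale = FOCAL / distance
    top = horizon - world_top * scale
    bottom = horizon - world_bottom * scale
    return top, bottom

# "Looking up" slides both edges down by the same amount:
t0, b0 = project_wall(1.0, -1.0, 2.0, 0)
t1, b1 = project_wall(1.0, -1.0, 2.0, 30)
```

Note that the wall's on-screen height (bottom minus top) is identical at both pitch values; a true camera tilt would change it.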

5 Comments so far

  1. Jason Dyer on 12 Jul 2008


    I’m fairly certain (although I haven’t been involved with the scene lately) that it isn’t the only one.

  2. nothings on 12 Jul 2008

    “they didn’t render polygons at all. They rendered the entire scene at once, in vertical scanlines. For each horizontal position, the engine goes pixel by pixel, rendering ceiling until it hits wall, then rendering wall until it hits floor.”

    This is actually a popular misconception. Wolf 3D worked by raytracing each column into the scene. Doom works by drawing scene polygons from front to back, scan-converting them into columns, and clipping them against a “floating horizon” to eliminate overdraw 100%. Actually two horizons, one at the top and one at the bottom. (The floor spans have to be drawn horizontally, so you have to convert the vertically-scanned-out spans to horizontal spans for floor and ceiling, and they might do this a bit more subtly.)
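A minimal sketch of that floating-horizon clipping, as I read the comment (simplified from what the real engine does — Doom also queues ceiling and floor spans before closing a column, which is omitted here; all names are mine):

```python
# Per-column clip bounds for front-to-back drawing. Anything outside
# the open window is already covered, so overdraw is zero.

SCREEN_W, SCREEN_H = 320, 200

ceil_clip = [0] * SCREEN_W           # first still-open row per column
floor_clip = [SCREEN_H] * SCREEN_W   # one past the last open row

def draw_wall_column(x, top, bottom, solid=True):
    """Clip the wall span [top, bottom) to the open window of column x
    and return the visible span, or None if fully hidden. A solid
    (one-sided) wall closes the column so nothing behind it draws."""
    top = max(top, ceil_clip[x])
    bottom = min(bottom, floor_clip[x])
    if solid:
        ceil_clip[x] = SCREEN_H
        floor_clip[x] = 0
    if top >= bottom:
        return None
    return top, bottom
```

Drawing a near solid wall first leaves nothing visible for a farther wall in the same column, so the farther one is skipped entirely rather than overdrawn.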

    And that means it’s also not that hard to draw with a 3D API; just convert the front-to-back order to back-to-front, and voila, draw. (The problem then is in how to draw the scene without drawing everything in the entire level, and I don’t know how this is handled.)

    Some other experimental engines tackled the vertical rotation problem by drawing real vertical rotation. To do this, they first drew the scene sheared (as you describe), with some padding, to a temporary buffer. This can then be efficiently rendered (with a “floor mapper”) to the screen to effect the tilt. (However, this requires 2x the total texture mapping work, so it was never done in any of the software texture mapping engines I know of; it’s useful for e.g. grid-based voxel engines that also rely on a z-up simplification.)

    This is mathematically legitimate; I call it reprojection. If you project a scene onto a window, replace a painting of the scene where the window is, and tilt the view _without moving the camera_, the visual result is still correct.

    However, it’s still limited in how far it can be tilted without requiring excessive amounts of padding.

  3. fabien on 17 Apr 2009

    “The problem then is in how to draw the scene without drawing everything in the entire level, and I don’t know how this is handled.”

    You are hitting the exact problem that John Carmack had to fight when he programmed the Quake engine.

    The solution was to slice the level into a 3D BSP and precalculate which BSP leaves are visible from each leaf; this is known as the Potentially Visible Set (PVS). That’s how overdraw was solved.
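The PVS idea reduces to a precomputed lookup table, which can be illustrated with a toy example (this is only the concept, not Quake's compressed bit-vector format; the table values are made up):

```python
# Hypothetical precomputed table: pvs[leaf] = set of leaves that might
# be visible from that leaf. Built offline; consulted every frame.
pvs = {
    0: {0, 1},       # from leaf 0 you might see leaves 0 and 1
    1: {0, 1, 2},
    2: {1, 2},
}

def leaves_to_draw(camera_leaf, all_leaves):
    """At render time, only walk leaves flagged visible from the
    leaf containing the camera; the rest are never touched."""
    return [leaf for leaf in all_leaves if leaf in pvs[camera_leaf]]
```

Because the set is conservative ("potentially" visible), some hidden geometry still gets drawn, but whole unreachable regions of the level are culled before any per-polygon work happens.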

  4. Sajt on 21 Aug 2009

    It should also be stated that Ultima Underworld wasn’t based on a grid either. It had arbitrary angles, and even slopes (which Doom didn’t have), and all this at a time before Wolfenstein 3D. I believe it used actual 3D polygons instead of “2.5D” like Doom or Duke Nukem 3D. Of course the drawback was that Ultima Underworld was really slow and somewhat ugly (it had no perspective correction).

  5. Antome on 2 Mar 2012

    No, Doom used 2D top-down-view polygons, whose height is scanned in a 2D manner. The 2D data you see in the automap is all the data needed to scan the scene, apart from texture maps.
    These polygons are closed surfaces with a height; a simple multiplication gives you the height of each column based on its distance.
