What Games Count As Retro?

Or more importantly, when did games become modern?

It seems as though people have been debating which games count as retro for as long as there has been such a thing as a retro game. This makes sense, but in all the decades since the dawn of video games, we seem to be no closer to a consensus. I can hardly claim to be the authority on modern and retro games, but I can at least respond in turn to the various competing delineations and then posit my own.

The first and most common argument is that it’s simply a matter of time: anything above a certain age is retro. Sounds good, but what age do we use? Ten years should be fine. After all, look at how much can change in roughly a decade. There were eleven years between Super Mario Galaxy and Super Mario 64, and even more shockingly, only eight years between Super Mario 64 and Super Mario Bros. 3. So let’s flip back our calendars and see which games are newly retro, shall we? Ten years ago, we had games like Bloodborne, Metal Gear Solid V, and The Witcher 3. Great! Those are retro.

Okay, that doesn’t seem right. And it never will be right. You can extend the number of years all you like, but it will always be an arbitrary number. Furthermore, leaps in technology are less pronounced than they used to be. For another example, the first-person shooter genre was pioneered by Wolfenstein 3D all the way back in 1992. Fifteen years later, Crysis became the new gold standard of FPS technology, serving as the benchmark for high-end PCs years after the fact. It has now been almost eighteen years since Crysis was released. Not only has there been no comparable leap in technology since then, but it’s basically impossible to imagine such a leap happening again.

A competing theory is that console generations define modern and retro. Something from one generation ago isn’t retro, but something from two generations ago is. This explains why the aforementioned games aren’t quite retro yet (never mind the fact that Metal Gear Solid V was cross-generational). We really can’t expect console generations to be that helpful, though. For one thing, the entire framework of console generations was seemingly created by Wikipedia editors (true story!) before being adopted by the industry. For another, much like on the software side, there is something of a technological plateau. Console generations, such as they exist, are gradually getting longer, not to mention that cross-generational games have played a greater part in the current generation than ever before. It’s quite possible that we’ll see games released for PlayStation 4 after the PlayStation 6 has launched, which would throw a massive wrench into the works.

There are other arguments still, such as that online multiplayer introduced the modern era. This doesn’t hold up to scrutiny when you consider that console games have had online multiplayer as far back as the Sega Dreamcast, and PC games have had it even longer. Others still would argue that it’s what goes on behind the scenes that defines the modern era: expanding budgets and ballooning development times have changed the industry irrevocably. This is an interesting hypothesis, but the lack of transparency in development means we don’t have many hard numbers to refer to, and speculation only gets us so far.

So, it has fallen on me to present my own framework. And I have one. The transition into the modern era of video games began when video games themselves started to change. I don’t mean that a new game was different from an old game. I mean that individual games began to change. As internet connectivity stopped being a novelty and started to become the norm, we entered an age in which a game is no longer a fixed point. It’s something that can be changed as the developer or publisher sees fit. Revisions of games have existed for a long time, but now those revisions can be applied with little to no input from the player. Suddenly, the game they’ve been playing is different. Future generations may not see the games of today the way we see them now, not because of changing tastes or shifting perspective, but because the games might actually become different.

It’s hard to put a date on when this change happened. Patches and downloadable content have existed for a long time, but they started to become commonplace sometime midway through the life of the PlayStation 3 and Xbox 360. At first, their implementation was subtle: patches would be issued to correct problems in a game, DLC would add something that wasn’t there before, and so on. The ability to patch games and add to them post-launch became more widespread over time, and it opened new doors in game development. One such development is the day one patch. Rather than patching a game after release when bugs were discovered, games were now being patched on or before the release date to address any bugs or other problems discovered between the game’s nominal completion and its release. Patches can still be released post-launch, of course, and they don’t need to be limited to bug fixes. Maybe they can tweak the game’s balance based on player feedback. Maybe they can add entirely new things to the game. Maybe they can take things out of it. The possibilities are seemingly endless.

There are lots of reasons to be concerned about this. One of the most common arguments is that it’s created an industry where games are unfinished on release, need day one patches for basic functionality, ship with numerous bugs that need to be patched, and sell paid downloadable content that should have been included in the game. I do not find this argument entirely convincing. Even if it were true, it wouldn’t be that different from the days before. People like to wax nostalgic for the halcyon days of the PlayStation 2, when games were finished products, but this overlooks how many releases, like Devil May Cry 3: Special Edition, Silent Hill 2: Director’s Cut, Metal Gear Solid 3: Subsistence, and Persona 3 FES, were just existing games with some new content and features, and that’s without even getting into how many games were reprinted with minor bug fixes. If anything, things have improved since then. Rather than waiting for a reprint for bugs to be fixed, existing copies of a game can be patched, and the new content added to re-released games can be sold as DLC rather than requiring players to buy the whole game again. These are obvious improvements.

It’s not all positive, however. Playing a game unpatched is cumbersome in the best of times and impossible in the worst. While the changes made are often for the better, there have been times when they’ve introduced problems, and rolling back to a prior version of a game isn’t always an option. But putting aside good and bad, sometimes it’s simply a matter of a game changing too much. The final balance patch for Dragon Ball FighterZ went beyond mere balance changes; it substantially changed how the game was played. Love it or hate it, it’s effectively a different game now. Everyone will have a different answer for when Dragon Ball FighterZ was at its best, but that doesn’t really matter when you can’t easily play those versions. In another time, maybe this current version would have been Dragon Ball FighterZ 2. In that hypothetical timeline, some or maybe all of the DLC characters wouldn’t have been in the first game, but a version of it would be preserved, while the sequel would also exist for those who prefer the new system. In the present, that is not the case. One game can change radically and consign the original version to the dustbin of history. Even when the changes are not radical, they still make a difference. Letting developers constantly tweak their games based on player feedback can encourage questionable choices. Recall Elden Ring, which saw various balance changes made post-launch in a frantic effort to appeal to as many people as possible. Sometimes it’s best to let sleeping dogs lie, but it can be hard to resist the temptation to keep going back, like a director who can’t make up their mind about the perfect edit of their film.

Patches and DLC are also a theoretical nightmare for game preservation. Remember those day one patches that are necessary to ensure basic functionality? It’s easy to imagine that some day, the servers that provide those patches will no longer be in operation. At that point, playing the game in any decent way might become impossible. We could trust platform holders to keep the servers alive, or we could trust publishers to ensure up-to-date versions of their games are always available, but both are risky propositions, especially given the present volatility of the industry. So we’re stuck with a present where only the latest patch is available, and a potential future where only the unpatched version is available. Neither is ideal.

In the current era of gaming, you cannot know for sure that the game you played is the same as it was when your friend played it, nor can you be sure it will be the same if your friend plays it later, nor can you be sure if it’s even the same when you play it compared to when you played it last. When phrased like this, it starts to sound like we’ve moved past modern gaming into postmodern gaming. The game itself is not a fixed point. Perhaps we can’t even agree on what a game is anymore. What makes games count as art? If we can’t know for sure what one game is, maybe we can’t know for sure what any game is.