

Unreal Code


Have been beetling my way through the different parts of Let’s Play: Ancient Greek Punishment: CPU Edition over the last few days, and am now through with Prometheus, Zeno, the Danaids, and Sisyphus. Just Tantalus and the menu system to go, really, before it’s basically a finished product (though I need to think a bit about having a CPU indicator in the game – I think it would look nice, but it’s proving annoying to work out how to do it).

Today’s entertainment came while working on Sisyphus. That level has a really specific ‘fail state’ animation where, if you’re not pressing the buttons fast enough while Sisyphus is pushing the boulder uphill, he starts getting pushed back down to the bottom. I was working away on the code for this and then tried to test it, only to realise that this can never happen to the CPU playing the game – it’s always ‘pressing the buttons’ fast enough, so it will never be pushed back down the hill (except by the automatic fail state at the top of the hill). So I actually had to deactivate the computer player and implement controls for a human player specifically so I could test what happens when there is a fail state.
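In shape, the swap was something like the sketch below – none of these names come from the actual game, it’s just the general idea of toggling between the CPU player and a fallible human one:

    // Hypothetical sketch: swap the CPU player out for keyboard input,
    // purely so a human can push too slowly and trigger the fail state.
    var HUMAN_DEBUG = true;

    function push() {
      // stands in for whatever nudges Sisyphus and his boulder uphill
      console.log('push');
    }

    if (HUMAN_DEBUG) {
      // a human alternating 'g' and 'h' can (and will) be too slow
      document.addEventListener('keydown', function (e) {
        if (e.key === 'g' || e.key === 'h') push();
      });
    } else {
      // the CPU pushes on a fixed timer and so can never fall behind
      setInterval(push, 50);
    }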

As it happens, this revealed that the fail states weren’t working at all – Sisyphus would jump all over the screen in wacky ways. Again, this behaviour is invisible when the computer plays because the fail states aren’t triggered. So I had to debug all the failure stuff by testing with my pathetic human ability to fail, fixing the code up so it reflects the original game faithfully and well. In the end I spent most of my time working on the Sisyphus level actively engaged in writing and fixing code that will literally never be processed by the game once it’s complete. But, of course, it has to be there for reasons of authenticity – if the failure code weren’t there, there would be no counterpoint to the computer’s repeated ‘success’ (success at failure, in a way?), and it feels like that encoded possibility of failure is needed for the success to register properly.

On the other hand, to the extent that the code cannot be triggered, it’s not entirely clear that it’s really there? Like, a really smart compiler would be able to determine that the code cannot be executed, for instance, and just not include it at all. But that said, JavaScript is an ‘interpreted’ language, which means it doesn’t get compiled ahead of time, so this kind of vestigial code is still ‘legitimate’, I suppose. I suppose? So it’s there and not there. Schrödinger’s code.

Unreal code,
Under the brown fog of a winter dawn,


struggle();


I finished making the Prometheus level/version/minigame of Let’s Play: Ancient Greek Punishment: CPU Edition the other day, which means I’ve now had a chance to go through various of the required conceptual grapplings involved in this particular edition of the series. As per usual, my assumptions going in have been kind of rejected/realigned thanks to the realities of actually sitting down and building the game itself – perhaps the most important argument for making games a reality even if they just seem like a ‘funny idea’ or whatever. You may not entirely know what you’re doing. I rarely do.

Going into this game my idea was that the code would be more or less identical to the original game, except that I would disable user input and instead have computer code (running on some kind of timer) triggering the required inputs – generally speaking this would mean the computer alternately triggering keypress events for ‘g’ and ‘h’ over and over again. It turned out, however, that simulating keypresses (or mouse clicks) didn’t actually work out (fast enough) for me. I struggled with it for a while, doing the usual trawling of the internet, but never found a satisfactory approach.
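For the record, what I was attempting was roughly the sketch below – a reconstruction rather than the actual code – manufacturing synthetic ‘g’ and ‘h’ key events on a timer and hoping the game would treat them like real presses:

    // Rough reconstruction (not the actual code): fire synthetic keydown/keyup
    // events for 'g' and 'h' on a timer and hope the game's input handling
    // picks them up as though a human were hammering on the keyboard.
    var keys = ['g', 'h'];
    var which = 0;

    setInterval(function () {
      var key = keys[which];
      document.dispatchEvent(new KeyboardEvent('keydown', { key: key, bubbles: true }));
      document.dispatchEvent(new KeyboardEvent('keyup', { key: key, bubbles: true }));
      which = (which + 1) % keys.length;
    }, 50);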

But the very fact it wasn’t working kind of fits into the narrative of the game, I guess. If I can’t get the computer to do things that way, then that’s simply not how the computer would play the game. It’s kind of a truism. Rejecting the kind of human-centric idea of the computer having to trigger keyboard input meant I could rethink how a computer might interact with the game, at which point it seemed suddenly very clear that the computer would simply call a function to cause Prometheus to struggle. Why would it bother to take such a circuitous route? Above I called the method struggle() but now I’m calling it INPUT() for sheer computeriness.

Thus the game works by loading as per usual, but instead of allowing for human input, there’s just a function you need to call again and again to struggle (or push a boulder, or run a race, etc.), and that’s what the computer player does, in the form of JavaScript’s setInterval() function, which runs the same code repeatedly with some interval in between. That’s the ‘AI’ of the CPU player in this game. I did think for a while about the idea of the CPU player being a kind of separate script from the game proper, so it was like the CPU was playing the game ‘from the outside’, but I don’t think that’s necessary for the game to make sense.
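Stripped right down, the CPU player amounts to something like this (the interval value here is made up, but this is more or less the shape of it):

    // The entire 'AI': call the input function over and over, forever.
    function INPUT() {
      // in the real game this is what makes Prometheus struggle
      console.log('struggle');
    }

    setInterval(INPUT, 100); // the CPU player: tireless and punctual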

Perhaps most importantly, when I run the game and watch the little Prometheus struggling bravely (forever), it seems to feel like something. It’s weird to look at, knowing that the code is both generating the situation and the response to the situation at the same time – kind of eerie and wrong. Which is great, obviously.

So: so far so good and thanks for asking.


It’s all in my head

[Screenshot: It is as if you were playing chess]

At coffee this morning Rilla was talking about how she wants to write beautiful programs, in the sense of complex code that makes beautiful things happen systematically (she’s teaching an advanced programming course in our department, so this of course is a very reasonable thing to be thinking about right now). And it was interesting to me how instantly my mind bounced off that as an approach to making, from my own perspective. I very clearly see how great that kind of programming is, and it yields things I’m perpetually amazed by, but apparently my mind just doesn’t work that way? (Or, presumably more likely, I’ve just gone in another direction and my mind isn’t trained to think that way.)

Rilla proposed a difference between us in terms of our interest in where the play of a game is ‘located’, which is an interesting way of putting it. I guess I think it’s maybe more about the ‘ideas’ of the game, but that probably tips my hand in a sense. When I look at the kind of stuff I make (and especially after the first year or two), it’s very much a case of (relatively) simple programming (rarely even something you could call ‘systems’) that attempts to convey or trigger more complex ideas (complex in the conceptual sense). So something like It is as if you were playing chess, for instance, is ludicrously simple code-wise – just dragging circles around and replaying the positions of chess games – but is “about” much more than that (the idea of play as physical performance, the idea of expertise, the idea of play as a form of labour, etc.).

In fact I wonder if locating the complexity in the mind or in the code (or perhaps also separately in the aesthetics?) is something where you need to just choose? I wonder if making a conceptually complex game that is also very complex programmatically just starts to be too complex? Maybe this is a problem some people have when they’re designing weird (and wonderful, surely) magnum opuses that are just so complicated on so many levels that they’re kind of impossible to finish or, if finished, kind of impossible to play and “get”? I don’t know, this is just a thought. Maybe you need simplicity in the “other” aspects of a game (or any interactive thing – or any artwork?) in order to sustain the complexity of one particular part of it.

And of course it’s not the case that the “simple” parts are therefore easy to work with – creating simplicity is incredibly difficult. v r 3 is killing me on the implementation despite being technically very, very ordinary (make a space, put some water in it), and despite ultimately being a game that is more about thinking and looking than about anything complex development-wise. And I assume it may well be similar for a really complicated piece of programming – say a procedurally generated game. The conceptual layer may well be quite simple compared to the code, but I’m willing to believe there might be a huge amount of (conceptual) grappling needed to work out the kind of simplicity that can sit atop the complex systems beneath in order for the game to be at all accessible to a player.

Or maybe it’s possible to make an insanely complex game in concept, aesthetics, and mechanics. How the hell would I know? I’m just sitting here doing my thing, man.

Back to work.


Compression of artefacts

[Two screenshots: a compressed and an uncompressed texture]

In what are hopefully the final stages of making v r 3 now. I have every Unity and third-party water working sufficiently well in the space that I think it’s ‘showable’ now. That is, I’ve either tweaked things enough or made peace with their appearance enough to move forward. This means I’ve been paying more attention to the idea of the game as something I need to put out into the world. And so:

WebGL. I’m not going to fight with WebGL in order to make a web-playable version of the game. I’m sure it’s possible, but at the moment the build crashes almost instantly with various script errors that don’t make sense to me immediately and I feel certain that if I engage with this I’ll spend another couple of weeks trying to get things in order. And even then I’m not convinced that all the waters will even work in the web context. Better to just restrict this to a desktop experience I think.

Title. Realised I need a title screen, so I’ll get onto that at some point. Can’t just crash land in a gallery of waters. It will be so much more helpful if a little screen comes up that says “v r 3” and then you crash land in a gallery of waters. Right? I know.

Compression. When I build the game right now it’s kind of bigger than I “want”. I’m not sure what I mean by that, but it is (or rather was) around 180MB uncompressed, which seems kind of excessive. It’s basically the fault of all the millions of textures (images) that the different waters use to create their effects, some of which are almost incomprehensibly enormous. This gets me to one slightly interesting thing, which is the idea of compressing the textures. Obviously if I compress them they’ll get smaller and the overall game will get smaller as a result. On the other hand, if I compress them, am I somehow diluting the artistic intent of the creator of the water? Again, it’s that same question of parameters, applied here to texture compression – to what extent is it my prerogative/duty/right to alter the default composition of one of the artefacts that I’m treating as artworks here? It’s like, if the Warhol doesn’t quite fit on the wall you don’t cut some of it off, so…

Anyway I don’t have an answer to that one. I guess I’ll compress them because my hatred of large file sizes is greater than my desire for originalism in this particular scenario. Also, if I’m interpreting the documentation correctly, Unity can do a particular kind of compression that doesn’t actually affect the texture when you play, only how it’s stored, so perhaps that’s win-win. For what it’s worth, the two images above are of a compressed and uncompressed texture respectively… or it’s the other way around… I can’t tell the difference.

That’s what I’ve got. Some days are harder than others.


Water-borne disasters

[Screenshot: ‘super sharp sticky-uppy water’ in v r 3]

I wrote briefly about some experiences with water gone wrong in v r 3 a little while back, but since I’ve been rebuilding the game from scratch to eliminate some lighting problems, I’ve run into a couple of my old weird friends again, like “super sharp sticky-uppy water” (above) and so on.

The above image is what happens if I literally just take a specific water I bought and resize it to fit it into my plinth. In a sense, then, this is “just what the water looks like” when you apply that simple transformation to it. And it’s a transformation I don’t think is terribly unreasonable – I need the water to exist in a smaller area, so I resize it. The fact that the water ends up looking like this is, I guess, most of all a result of the sometimes extreme parameterisation of the various waters I’m installing in the gallery. There are generally quite a large number of factors to be controlled for any given water, and they tend to rely on one another for an overall coherence of appearance. Thus, if you resize the water above, you also need to make sure that you change other aspects like the frequency and amplitude of its waves so that they make sense in the new context.

On the other hand, the water above looks pretty amazing, I’m not going to lie. And there have been quite a few instances of “broken” water that has looked pretty spectacular. Some water, for instance, completely ignores its scaling and just renders to the horizon anyway, leading to the entire gallery being half submerged. Some water has little undulating pieces of itself sticking through the sides of the plinths.

So there’s certainly a beauty to these distortions of shaders and scripts. As I’ve heard a few times, there’s probably another gallery show just in that idea itself. There’s a challenge there I think, though, in terms of what the “rules” for such a show would be. I don’t think it’s necessarily fair to just work out a way to fuck up a shader or other Unity asset so it looks ridiculous and then display that. It’s true that that appearance would be “within its parameters”, but it feels a little in bad faith perhaps? (As I write this I’m not so sure.) Instead I guess I’d want some sort of “reasonable” transformation (like the plinth scaling) that turns out to create strange effects. But of course lots of the problems I have with the scaling in v r 3 don’t end up being visually interesting like the jagged water – most of the time you just end up with a completely smooth plane or something along those lines, so that kind of wouldn’t work anyway.

In the end I always want some sort of underlying formal reason for things being the way they are, I suppose. I feel like that gives a player a framework for understanding what they’re seeing, rather than just what they’re seeing being kind of the end of it, like “look how crazy this is! ha ha!” I think in the past I’ve called this something like the “ground truth” of a project – I don’t know what a good name for it is, but it’s certainly a vital quality. Most importantly, I think, it really helps me make decisions and move forwards – you know whether a particular idea/technical process/visual meets the criteria of your core guiding framework. So for v r 3 I know that the underlying concept is that I want to provide a context for thinking about water as a technical/aesthetic object in videogames. Everything I’m doing, from the positioning of the sun in the sky, to the shape of the plinth, to the parameterisations of the waters themselves, is informed by that objective.

Without something so strict, I think I’d just constantly get lost or feel like I’m kind of deceiving the player – pretending there’s coherence and meaning where there isn’t any. Thus to do a project where I’m showing “messed up” waters, it’d need some guiding idea like “water when you scale it down” or “water with each parameter set to a maximum value” or something, and even then, with that “or something”, you can tell that I’m not yet a believer in that version of the project. Something more like “messed up visions of water I encountered while making a different project about water” might be acceptable too, I guess, but it’s less conceptually interesting to me.

Etc. etc. etc. Sorry.