Nock nock McFly... Are you in there?

Last night I spent about two hours figuring out why my latest nock-recorded test wasn't working. The amount of recorded data was large (35+ MB), so it wasn't easy to just scan through it. Even MacVim, which I use for large text files, was sluggish[1], as some of the captured requests and/or responses were quite long[2].

The problem was with a query that reportedly returned 0 rows even though it had been recorded with over 2,000 rows. This led me on a wild goose chase through the internals of nock (and my own modifications to it), dumping data, re-recording tests (unsurprisingly with the same results), and so on.

At some point I started suspecting that nock was either broken (not working as specified in its README) in the way it handles multiple requests to the same path, or that there was some other issue with loading larger arrays from JSON. After a while I finally noticed that one of the dumps showed a recorded body that made no sense: it wasn't corrupted, which ruled out bad loading - it was 100% well formed... and nonsensical for that point in the recording.

Then it finally dawned on me that maybe the problem wasn't in nock but in my recorded test files... and indeed, when I finally looked at them, I saw that MacVim's sluggishness was only partly real: much of it was duplicate recordings of requests, which on scrolling looked like sluggishness (as there was no visible refresh between duplicate lines). After I cleaned up all the duplicates, the test passed on the first attempt.

The problem, it turns out, was that during the initial run (the recording run) I started the nock recorder twice. That wasn't wrong in itself, as I was dumping the intercepted requests into two different files. But nock didn't stop the recording after the first dump, nor did it detect that a recording was already running, nor did it exclude already-dumped nocks from subsequent dumps.
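In code, the pattern looked roughly like this (a reconstruction, not my actual test; the file names are made up):

    var fs = require('fs');
    var nock = require('nock');

    nock.recorder.rec({ dont_print: true }); // first recorder starts capturing

    // ... first batch of requests runs here ...

    // play() returns everything captured so far
    fs.writeFileSync('first.json', JSON.stringify(nock.recorder.play()));

    nock.recorder.rec({ dont_print: true }); // second rec(): nock (then) accepted this silently

    // ... second batch of requests runs here ...

    // play() STILL returns everything captured since the first rec(),
    // now with each new request captured twice, so second.json starts
    // with duplicates of first.json's contents.
    fs.writeFileSync('second.json', JSON.stringify(nock.recorder.play()));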

So my second file (from the second dump) was a mess of duplicates, with parts of the first file at the beginning, because two nock recorders were running in parallel, capturing the same information and dumping it into the same file. The solution was simple: invoke nock.recorder.clear() every time the data is dumped.
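A minimal sketch of the fix, assuming nock's documented recorder API (rec, play, clear); the dumpRecording helper and the file names are mine, for illustration:

    var fs = require('fs');
    var nock = require('nock');

    // Dump everything recorded so far, then clear the recorder so the
    // next dump starts from a clean slate.
    function dumpRecording(fileName) {
      var recorded = nock.recorder.play();
      fs.writeFileSync(fileName, JSON.stringify(recorded, null, 2));
      nock.recorder.clear(); // without this, the next dump repeats everything
    }

    // One recorder, started once, is enough for both dumps.
    nock.recorder.rec({ dont_print: true });

    // ... first batch of requests runs here ...
    dumpRecording('first.json');

    // ... second batch of requests runs here ...
    dumpRecording('second.json');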

But to avoid wasting my time on this again in the future, I made a change to nock so that it throws an exception if there is an attempt to start recording while a recording is already in progress.
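The guard amounts to something like this (a sketch of the idea, not nock's actual source):

    // Inside the recorder module: remember whether we're already recording.
    var recordingInProgress = false;

    function rec(options) {
      if (recordingInProgress) {
        throw new Error('Nock recording already in progress');
      }
      recordingInProgress = true;
      // ... install the HTTP interceptors and start capturing ...
    }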


Footnotes


[1] This turned out to be partly a hint.


[2] The test gathered all my tweets, bulk-posted them to CouchDB, verified the loaded docs against the originals, updated all those docs, and then verified the changed data again.

Author

Ivan Erceg

Software shipper, successful technical co-founder, $1M Salesforce Hackathon 2014 winner