Well, a month has gone by and I've finished this year's entry on time for NaNoGenMo[1]. I managed to follow my original plan pretty closely, though I ended up using the contingency plan I had in case I ran out of time. This entry focused mainly on a “complicated” plot with various twists and interplay between characters (actors).
1: https://github.com/NaNoGenMo/2017/issues/12
My repository[2] for the entire thing is public and I plan on keeping it open. Actually, it will probably end up being the foundation for next year's entry, when I focus less on plotting out a novel and more on writing the individual scenes and paragraphs.
2: https://gitlab.com/dmoonfire/girl-kills-dragon
The structure of the plot remained as described in the above link. The story starts as a simple plot (something bad happens, the hero goes to the villain, there is a battle, the hero comes home). To make it more interesting, I went through a series of iterations where I injected a plot element (a node) into the directed acyclic graph (DAG) wherever it fit, as sketched below. So a simple trip to the villain picks up a few detours, a training montage, or a trial. The system also allows new characters (actors, because it is shorter) to be introduced on both sides, have adventures, and maybe die.
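Roughly, the injection step splices a new node between two existing nodes while keeping the graph acyclic. Here is a minimal sketch of that idea; the `PlotNode` and `PlotGraph` types and the `InjectBetween` method are hypothetical stand-ins for the richer classes in the repository.

```csharp
using System.Collections.Generic;

// Hypothetical, stripped-down node and graph types for illustration only.
public class PlotNode
{
    public string Id { get; }
    public List<PlotNode> Next { get; } = new List<PlotNode>();

    public PlotNode(string id) => Id = id;
}

public class PlotGraph
{
    public List<PlotNode> Nodes { get; } = new List<PlotNode>();

    // Replace the direct edge from -> to with from -> injected -> to,
    // which keeps the graph acyclic while adding the new plot element.
    public void InjectBetween(PlotNode from, PlotNode to, PlotNode injected)
    {
        from.Next.Remove(to);
        from.Next.Add(injected);
        injected.Next.Add(to);
        Nodes.Add(injected);
    }
}
```

Each iteration picks an edge where a plot element fits and splices it in, which is how the straight line to the villain accumulates detours, trials, and side characters.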
The final result had 1,516 nodes plus an additional 530 nodes of “back story,” which I intend to later use as fodder for the relationship scenes. It also created 187 actors at 392 locations across 14 countries. The final word count was 52,587 words.
The final result, as a single Markdown file, can be found in the repository[3].
3: https://gitlab.com/dmoonfire/girl-kills-dragon/blob/master/NaNoGenMo.md
I thought it would be useful to mention things that worked and didn't work for me. There is no specific order to these, just interesting points.
Even though I'm working on many TypeScript projects, I decided to write this in C#, which is currently my “best” language, the one where I know all the tools and libraries and how everything fits together. Not having to fight the tools made it a lot easier to code for an hour at a time (my lunch break) or to steal a few hours while the kids take a bath.
It wasn't all “well known,” though. This was my first real .NET Core project, and .NET Core had excellent support for both Linux (my preferred platform at home) and Windows. With a seeded random number generator, the program produced identical results on both platforms, which made it a lot easier to debug some of the nastier bugs.
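As a trivial illustration of the reproducibility point (the seed value here is made up), the same seed replays the same sequence on every run of the same build:

```csharp
using System;

class SeedDemo
{
    static void Main()
    {
        // A fixed seed makes the generator deterministic, so a run on Linux
        // and a run of the same build on Windows produce the same story.
        var random = new Random(20171104);
        for (var i = 0; i < 3; i++)
        {
            Console.WriteLine(random.Next(0, 100));
        }
    }
}
```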
I knew before I started that I was going to build as much as possible using Inversion of Control[4]. I've toyed with it for a few projects, but this was the first one that used DI/IoC from the beginning. I went with Autofac[5] because it had .NET Core support. It worked out really well. It took a bit to wrap my mind around the disconnected nature of the components but, overall, it ended up being very smooth (though some of my constructors had a dozen parameters).
4: https://en.wikipedia.org/wiki/Inversion_of_control
5: https://autofac.org/
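A minimal sketch of the kind of Autofac wiring involved; the `IPlotGenerator` and `PlotGenerator` types and the seed are made up for illustration, not taken from the repository.

```csharp
using System;
using Autofac;

// Hypothetical top-level service; the real project has many more components.
public interface IPlotGenerator
{
    void Generate();
}

public class PlotGenerator : IPlotGenerator
{
    private readonly Random _random;

    // Autofac supplies the constructor parameters from its registrations.
    public PlotGenerator(Random random) => _random = random;

    public void Generate() => Console.WriteLine(_random.Next());
}

public static class Program
{
    public static void Main()
    {
        var builder = new ContainerBuilder();

        // One seeded Random shared by every component keeps runs reproducible.
        builder.RegisterInstance(new Random(20171104)).AsSelf();
        builder.RegisterType<PlotGenerator>().As<IPlotGenerator>();

        using (var container = builder.Build())
        using (var scope = container.BeginLifetimeScope())
        {
            scope.Resolve<IPlotGenerator>().Generate();
        }
    }
}
```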
I typically switch to TDD when I'm solving problems. When I started, I was manually creating the various objects in the tests. As the primary code gained more and more constructor parameters, I found myself writing default ones, but it was hard to keep up. Things got much easier after I spent two days setting up a test-specific IoC container.
This is something I should have done in the first week, not the third.
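What the test-specific container boils down to is a base fixture that owns the container and swaps in deterministic or fake services. This is a hedged sketch reusing the hypothetical types from the Autofac example above, not the project's actual test setup.

```csharp
using System;
using Autofac;

public abstract class ContainerTestBase : IDisposable
{
    private readonly IContainer _container;

    protected ILifetimeScope Scope { get; }

    protected ContainerTestBase()
    {
        var builder = new ContainerBuilder();

        // The same registrations as production...
        builder.RegisterType<PlotGenerator>().As<IPlotGenerator>();

        // ...but with deterministic stand-ins so tests are repeatable.
        builder.RegisterInstance(new Random(42)).AsSelf();

        _container = builder.Build();
        Scope = _container.BeginLifetimeScope();
    }

    // Tests ask the container for the class under test instead of newing it
    // up by hand with a dozen constructor arguments.
    protected T Resolve<T>() => Scope.Resolve<T>();

    public void Dispose()
    {
        Scope.Dispose();
        _container.Dispose();
    }
}
```

The payoff of resolving through the container is that adding a constructor parameter no longer means touching every test by hand.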
When I first started using the IoC, I made most of the injected dependencies public auto-properties. This worked fine until I had to make a change and realized that other classes were using those properties instead of trusting the IoC to provide the dependencies directly. This coupling made life difficult, which is why I ended up making most of the DI-injected members private to the class. It also led to far more encapsulated logic, which was good.
There was one case where I did use property injection, for a base class, but that was a special case: every plot injection needed a dozen classes to function and I didn't want to mess with constructors that just passed them into the base class.
I could have created a `PlotInjectorContext` class, which is probably what I would do in the future to avoid property injection (which made me feel dirty).
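As a sketch of the two shapes (with made-up injector and service names): constructor injection into private fields keeps the dependencies encapsulated, while a `PlotInjectorContext` would bundle the shared dozen into a single parameter for the base class.

```csharp
using System;

// Hypothetical service used only to make the example concrete.
public interface INameGenerator
{
    string Next();
}

// Constructor injection: dependencies land in private fields, so no other
// class can reach in and borrow them, which keeps the coupling down.
public class BattleInjector
{
    private readonly Random _random;
    private readonly INameGenerator _names;

    public BattleInjector(Random random, INameGenerator names)
    {
        _random = random;
        _names = names;
    }
}

// The context-object alternative to property injection: the dozen shared
// services live in one class, and derived injectors pass only that along.
public class PlotInjectorContext
{
    public Random Random { get; }
    public INameGenerator Names { get; }

    public PlotInjectorContext(Random random, INameGenerator names)
    {
        Random = random;
        Names = names;
    }
}

public abstract class PlotInjectorBase
{
    protected PlotInjectorContext Context { get; }

    protected PlotInjectorBase(PlotInjectorContext context) => Context = context;
}
```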
With the iterative approach, I encountered a few places where a character would leave in two different directions at once. These were some of the most complicated problems to solve (they took probably about a third of the month). I ended up creating various validations to detect when the graph had gone wrong; every time I encountered a new failure, I would write a validation to help detect it, along the lines of the sketch below.
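The validations themselves don't need to be clever. This is a simplified stand-in (hypothetical interface and check, reusing the sketch's `PlotGraph`) for the kind of consistency check involved.

```csharp
using System;
using System.Linq;

// Hypothetical validation contract; the real checks are project-specific.
public interface IPlotValidation
{
    void Validate(PlotGraph graph);
}

// Example: a node should not have two edges to the same destination, which is
// one simple way a graph can become inconsistent after an injection.
public class DuplicateEdgeValidation : IPlotValidation
{
    public void Validate(PlotGraph graph)
    {
        foreach (var node in graph.Nodes)
        {
            var duplicated = node.Next
                .GroupBy(next => next.Id)
                .Where(group => group.Count() > 1)
                .Select(group => group.Key)
                .ToList();

            if (duplicated.Any())
            {
                throw new InvalidOperationException(
                    $"Node {node.Id} has duplicate edges to: {string.Join(", ", duplicated)}");
            }
        }
    }
}
```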
The resolvers ran after all the plots were injected and figured out when everything happened, the names of the people involved, and so on. Having these as distinct classes injected by interface (`IPlotResolver`) worked well.
I handled resolution order by having a `ResolveOrder` property on each one. When one resolver (`SecondPlotResolver`) needed an earlier one (`FirstPlotResolver`), I would have the second take the first as a constructor parameter and then use it to determine the order.
public int ResolveOrder => _firstPlotResolver.ResolveOrder + 1;
This ended up being a really nice pattern. If there were multiple prerequisites, I would just use `Math.Max()` to figure out the order.
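Expanded slightly, the pattern looks like this; only the `ResolveOrder` expression above comes from the actual code, and the resolver bodies and runner are illustrative.

```csharp
using System.Collections.Generic;
using System.Linq;

public interface IPlotResolver
{
    int ResolveOrder { get; }
    void Resolve(PlotGraph graph);
}

public class FirstPlotResolver : IPlotResolver
{
    public int ResolveOrder => 0;

    public void Resolve(PlotGraph graph) { /* assign times, etc. */ }
}

public class SecondPlotResolver : IPlotResolver
{
    private readonly FirstPlotResolver _firstPlotResolver;

    // Depending on the earlier resolver both documents the ordering and lets
    // the IoC container guarantee it exists.
    public SecondPlotResolver(FirstPlotResolver firstPlotResolver)
        => _firstPlotResolver = firstPlotResolver;

    public int ResolveOrder => _firstPlotResolver.ResolveOrder + 1;

    public void Resolve(PlotGraph graph) { /* assign names, etc. */ }
}

public static class ResolverRunner
{
    // Resolve the whole set from the container and run them in order.
    public static void RunAll(IEnumerable<IPlotResolver> resolvers, PlotGraph graph)
    {
        foreach (var resolver in resolvers.OrderBy(r => r.ResolveOrder))
        {
            resolver.Resolve(graph);
        }
    }
}
```

With two or more prerequisites, the property becomes something like `Math.Max(_first.ResolveOrder, _other.ResolveOrder) + 1`.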
In week one, I set up validations to run after all the plot injections. This worked pretty well, but I still struggled to find the point where something would go horribly wrong. It didn't get better until the fourth week, when I split the validations and resolvers into ones that run at the end of the whole process and ones that run at the end of every injected plot.
Having the validations run at the end of each injection meant the system stopped right at the point where things became unstable, instead of me having to adjust the run length to track the problem down. I also introduced a `plotId`, an otherwise unused variable, simply to let me put a breakpoint on the iteration and trace through the code.
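A sketch of what that loop ends up looking like, with a hypothetical `IPlotInjector` and the validation interface from earlier; the `plotId` variable exists only to give the debugger something to break on.

```csharp
using System.Collections.Generic;

// Hypothetical injector contract for the sketch.
public interface IPlotInjector
{
    string Id { get; }
    void Inject(PlotGraph graph);
}

public static class PlotBuilder
{
    public static void InjectAll(
        PlotGraph graph,
        IEnumerable<IPlotInjector> injectors,
        IEnumerable<IPlotValidation> validations)
    {
        foreach (var injector in injectors)
        {
            // Unused except as a spot for a conditional breakpoint when one
            // specific injection starts corrupting the graph.
            var plotId = injector.Id;

            injector.Inject(graph);

            // Fail fast: validate immediately so the first broken injection
            // blows up here, not fifty plots later.
            foreach (var validation in validations)
            {
                validation.Validate(graph);
            }
        }
    }
}
```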
The last major change was to have the timing resolution sweep out all the calculated times for the nodes and rebuild them. Once I had that, I could get a clean count of narrative nodes versus back story, so a 1,500-node run wouldn't end up with only 287 narrative nodes (what I actually wrote out) and the rest as back story.
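The sweep-and-rebuild idea, sketched with a minimal node shape; the nullable time, the back-story flag, and the simple counter are assumptions here, and the real ordering logic is more involved.

```csharp
using System.Collections.Generic;
using System.Linq;

// Minimal node shape for this sketch only.
public class TimedNode
{
    public int? Time { get; set; }
    public bool IsBackStory { get; set; }
}

public static class TimingResolver
{
    public static (int Narrative, int BackStory) Rebuild(IList<TimedNode> nodes)
    {
        // Sweep: clear every calculated time so stale values from an earlier
        // pass cannot survive another round of plot injection.
        foreach (var node in nodes)
        {
            node.Time = null;
        }

        // Rebuild: assign times again in graph order (a plain counter stands
        // in for the real ordering logic here).
        var time = 0;
        foreach (var node in nodes)
        {
            node.Time = time++;
        }

        // With a clean timeline, counting narrative versus back story is easy.
        var narrative = nodes.Count(node => !node.IsBackStory);
        return (narrative, nodes.Count - narrative);
    }
}
```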
The important lesson is to “fail fast” and move the validation as close as possible to where it can fail.
I can't describe how much I love ReSharper[6]. It isn't available on Linux, so I used Visual Studio Code[7] on my home machine. The refactoring and clean-up tools outside of ReSharper are sadly lacking, so I would throw crappy code wherever it fit while working at home, and then the first thing I would do was clean it up with ReSharper to ensure things were neat.
6: http://www.jetbrains.com/resharper/
7: https://code.visualstudio.com/
I would have used Rider[8], but I can't afford it and don't have an OSS project large enough to beg for a license.
8: https://www.jetbrains.com/rider/
I tried to come up with names by starting with a consonant and vowel inventory. That didn't work, and I didn't have time to figure out why. In the end, I banged out a Markov chain generator and fed it English and Japanese names to come up with something.
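A hedged sketch of that generator: a character-level Markov chain keyed on two-letter prefixes, trained on whatever list of names it is fed. The prefix length, padding characters, and lack of any filtering are choices made for the sketch, not necessarily what the real generator does.

```csharp
using System;
using System.Collections.Generic;

// Learn which letter tends to follow each two-letter prefix, then walk the
// chain to invent new names. Assumes non-empty training names.
public class MarkovNameGenerator
{
    private readonly Dictionary<string, List<char>> _transitions =
        new Dictionary<string, List<char>>();
    private readonly Random _random;

    public MarkovNameGenerator(IEnumerable<string> names, Random random)
    {
        _random = random;

        foreach (var name in names)
        {
            // Pad each name so the chain also learns how names start and end.
            var padded = "^^" + name.ToLowerInvariant() + "$";

            for (var i = 0; i + 2 < padded.Length; i++)
            {
                var prefix = padded.Substring(i, 2);
                if (!_transitions.TryGetValue(prefix, out var followers))
                {
                    followers = new List<char>();
                    _transitions[prefix] = followers;
                }

                followers.Add(padded[i + 2]);
            }
        }
    }

    public string Generate()
    {
        var result = "^^";

        while (true)
        {
            var prefix = result.Substring(result.Length - 2);
            var followers = _transitions[prefix];
            var next = followers[_random.Next(followers.Count)];

            if (next == '$')
            {
                break;
            }

            result += next;
        }

        var name = result.Substring(2);
        return char.ToUpperInvariant(name[0]) + name.Substring(1);
    }
}
```

Fed a mix of English and Japanese names, a chain like this produces plausible-sounding hybrids.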
I wanted to build up specific details (like red hair or gender expression) that I could use in madlib-style story generation. I think the idea is sound, but it is complicated; I could probably spend an entire month (next year) working it out.
In the end, I stopped with my third attempt at the system and just jammed the names into it. I'm not happy with `Detail` and `Noun`, but it will take a lot more work to get those right. They just don't “feel” right to me.
My coworkers aren't fond of this, but I refactor constantly when I'm trying to solve problems. It's like flipping one of those metal puzzles over and over, trying to find a new way of getting it to work.
Having tools like ReSharper to reorganize the code was a godsend as I tried one thing and then another. Using different names, and splitting or combining code, helped me get past the nastier bugs.
Though, JetBrains, please, please make a command-line tool for applying code formatting. I would use it on every compile if I had it. It would also be nice if there were other formatting tools I could have used on Linux, but… no.
Along the way, I ended up creating DGML, JSON, and Dot file outputs so I could look at the results at any point. This let me see visually how the scenes and chapters connected together.
Sadly, there are no good DGML visualization tools outside of Visual Studio. I'm surprised by this, but it was worth the hassle of using the DGML viewer when I tried to arrange the chapters.
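The Dot output is the easiest of the three to sketch. Reusing the hypothetical `PlotGraph` from earlier, something like this writes a file that Graphviz can render:

```csharp
using System.Text;

public static class DotWriter
{
    // Emit a Graphviz Dot document describing how the plot nodes connect.
    public static string Write(PlotGraph graph)
    {
        var buffer = new StringBuilder();
        buffer.AppendLine("digraph plot {");

        foreach (var node in graph.Nodes)
        {
            buffer.AppendLine($"    \"{node.Id}\";");

            foreach (var next in node.Next)
            {
                buffer.AppendLine($"    \"{node.Id}\" -> \"{next.Id}\";");
            }
        }

        buffer.AppendLine("}");
        return buffer.ToString();
    }
}
```

Rendering it is then `dot -Tsvg plot.dot -o plot.svg`, assuming Graphviz is installed.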
For my first NaNoGenMo, this was a lot of fun. The output is decades away from what I can write by hand, but for what I set out to do, I think I produced something I can be proud to show off.