work-in-progress notes, extending this prior post.
here's a graph:
there are three curves blended together,
each color is a workflow type:
red = iterative
blue = parallel
green = two smaller iterative workflows run in parallel
(you'd think they'd create white when combined together,
but the way I combined them, they create puke green)
the x-axis represents trial runs of a Monte Carlo simulation, sorted.
the y-axis represents the quality of the result for each trial run.
the graph is generated using this code
first, note that the simulation is not tuned to reality: the distributions for the steps (generating work, improving work, and voting on work) have not been set from empirical data. I plan to do that soon.
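since the actual code isn't reproduced here, here's a minimal sketch of what a simulation like this might look like. everything in it is a placeholder assumption: the function names, the beta/gaussian distributions, and the step counts are mine, not the real settings from the post.

```python
import random

def generate_work():
    # draft an initial piece of work; quality in [0, 1]
    # (placeholder distribution, not tuned to reality)
    return random.betavariate(2, 2)

def improve(q):
    # one improvement step: usually helps a little, sometimes hurts
    return min(1.0, max(0.0, q + random.gauss(0.05, 0.1)))

def vote(candidates):
    # noisy voting: pick the candidate with the highest noisy score
    return max(candidates, key=lambda q: q + random.gauss(0, 0.05))

def iterative(steps=8):
    # red curve: one draft, repeatedly improved
    q = generate_work()
    for _ in range(steps):
        q = improve(q)
    return q

def parallel(n=8):
    # blue curve: many independent drafts, then a vote
    return vote([generate_work() for _ in range(n)])

def hybrid(n=2, steps=4):
    # green curve: two smaller iterative runs, then a vote
    return vote([iterative(steps) for _ in range(n)])

def run(workflow, trials=1000):
    # one sorted quality value per trial, ready to plot
    return sorted(workflow() for _ in range(trials))

if __name__ == "__main__":
    random.seed(0)
    for name, wf in [("iterative", iterative),
                     ("parallel", parallel),
                     ("hybrid", hybrid)]:
        results = run(wf)
        print(name, round(sum(results) / len(results), 3))
```

plotting the three sorted lists against trial index reproduces the shape of the graph above; the specific means depend entirely on the made-up distributions.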
in any case, with the random values I put in,
the iterative has the highest mean, .854,
compared with .840 for the parallel,
but the blue parallel curve peeks above the red iterative curve on the right.
in fact, the parallel workflow usually gives better results than the iterative,
but when it does worse, it does much worse, bringing down the average.
another way of saying this is that the parallel workflow has higher variance.
the green hybrid curve does strictly worse than the iterative workflow,
but has a slightly higher mean than the parallel, .841.
the main positive take-away at this point is that the curve is "interesting", in that there's a sense in which both the red and blue curves are "better". this has to do with the particular settings for the distributions, but the fact that the curves can be interesting is what we're really after, because we want to get a sense for which input conditions cause one method to beat the other.
a possible negative take-away is that the hybrid approach didn't do so well. this could also be due to the chosen settings, and there may be some setting for which the hybrid approach performs best. it could also be that the particular hybrid approach I used wasn't the best one, so I'll need to fiddle with that to...