@jack-waugh said in STAR-like method ("reverse STAR"?):
@rob said in STAR-like method ("reverse STAR"?):
To me the most fair method would give every voter equal influence on the outcome.
I am in emphatic agreement. I suspect the lack of this equality is a very key tool that the ruling class uses to keep the rest of us down. I do not know how to articulate the sufficient conditions for equality of influence. I believe that one of the necessary conditions, however, is Frohnmayer balance.
I haven’t seen a solid argument for why being “Frohnmayer balanced” is automatically better. There are many balanced methods that are terrible. Balance alone shouldn't be the goal, though a good method probably achieves it as a side effect.
Vote splitting is the real enemy, and having a method be balanced (e.g. For and Against voting) doesn't eliminate vote splitting per se; it just tries to balance it between "clustering" vote splitting and "declustering" vote splitting. Clustering is what you get with plain old plurality, where similar candidates drag each other down, which incentivizes parties and primaries to thin the field. For and Against adds a declustering effect on top of that, but it's a fairly crude one, since it is still plurality.
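To make the "balanced" part concrete, here is a bare-bones sketch of a For-and-Against style tally, assuming the variant where each ballot carries either a single +1 "for" one candidate or a single −1 "against" one candidate. The exact ballot rules and names are just my illustration; the key property is that every ballot has an exact opposite that cancels it.

```python
from collections import defaultdict

def for_and_against_tally(ballots):
    """Each ballot is a (candidate, sign) pair: sign is +1 ("for") or -1 ("against").
    Every ballot has an exact opposite that cancels it, which is the balance property."""
    totals = defaultdict(int)
    for candidate, sign in ballots:
        totals[candidate] += sign
    return dict(totals)

# Example: two opposite ballots cancel, leaving only the unopposed vote.
print(for_and_against_tally([("A", +1), ("A", -1), ("B", +1)]))  # {'A': 0, 'B': 1}
```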
As for recursive IRV, you could make it immediately Frohnmayer balanced simply by doing some sort of Borda count at the deepest level. I'd be willing to bet that, if you did, it would still converge to the same result. I will try it, though; it should be super easy to do.
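For reference, that deepest-level Borda base case could look something like this. This is just a sketch, with the ballot format (a list of candidate names, most preferred first) and the function names as my own assumptions:

```python
from collections import defaultdict

def borda_scores(ballots, candidates):
    """Standard Borda: with n candidates, a ballot gives its rank-r choice (n - 1 - r) points."""
    scores = defaultdict(int, {c: 0 for c in candidates})
    n = len(candidates)
    for ballot in ballots:
        for rank, cand in enumerate(ballot):
            scores[cand] += (n - 1) - rank
    return scores

def borda_loser(ballots, candidates):
    """The candidate to eliminate at the deepest level: lowest Borda score (ties broken arbitrarily)."""
    scores = borda_scores(ballots, candidates)
    return min(candidates, key=lambda c: scores[c])
```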
I actually haven't “proven” anything regarding recursive IRV, but I do have it fully implemented now, thanks to a good bit of help from ChatGPT, which made it so much easier. It's surprisingly fast, even when recursing way deeper than I'd expect to ever be needed. I'd still like to build some sort of visualizer for it that's better than the big dump of JSON I currently have.
However, although it is not a proof per se, I can walk through a bit of the logic behind why I think it is superior to anything else out there in terms of fairness. In general I will talk about it as if it recurses to infinity, even though in practice it should be completely unnecessary to go more than a couple of levels deep for real world purposes.
First, the background. We may all have different ideas of what fairness is, but to me it is all about removing vote splitting. Vote splitting is when irrelevant alternatives disproportionately draw votes away from relevant candidates. STAR, IRV, and Condorcet methods all attempt to remove irrelevant alternatives, but each has its flaws. With Condorcet methods, there isn't always a Condorcet winner, and that is the singular flaw: when there isn't one, the method kicks the can down the road and leaves you to find some other way to settle the winner. With STAR, you're using score totals to guess which candidates are the irrelevant alternatives, which isn't much more than a guess and is absolutely subject to vote splitting at the first stage. With IRV, you're using a process of elimination to estimate which are irrelevant, but since IRV relies on (inverse) plurality to decide whom to eliminate, it too is subject to vote splitting at the elimination stage.
However, IRV, for all the complaints about it, does do one thing right: the process of elimination it uses is always better than not using it, in terms of the accuracy and vote-splitting resistance of the final result. That alone doesn't set it apart from other methods, but it needs to be pointed out: process of elimination always reduces the vote splitting effect compared to using plurality directly.
And that last point is what recursive IRV taps into. If, in seeking a final winner, we get better results than plurality by using a process of elimination, why not also use a process of elimination, rather than plurality, to decide whom to eliminate? We're still always using plurality at the bottom, but with each additional level of recursion we make its use more indirect, and the more indirect the use of plurality, the better the result.
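To make the shape of the recursion concrete, here's a minimal sketch. It is not my actual implementation, and details like tie handling and the ballot format (complete rankings, most preferred first) are simplified assumptions. Depth 0 is ordinary IRV; at higher depths, the candidate to eliminate is itself found by process of elimination, repeatedly removing the depth-minus-one winner until only one candidate is left standing:

```python
from collections import Counter

def first_choices(ballots, remaining):
    """Count each ballot's top choice among the still-remaining candidates."""
    tally = Counter({c: 0 for c in remaining})
    for ballot in ballots:
        for cand in ballot:
            if cand in remaining:
                tally[cand] += 1
                break
    return tally

def winner(ballots, remaining, depth):
    """IRV-style: repeatedly eliminate someone until one candidate remains."""
    remaining = set(remaining)
    while len(remaining) > 1:
        remaining.discard(loser(ballots, remaining, depth))
    return next(iter(remaining))

def loser(ballots, remaining, depth):
    """Whom to eliminate next. Depth 0 falls back to plain (inverse) plurality;
    deeper levels use process of elimination to make that call more indirect."""
    if depth == 0:
        tally = first_choices(ballots, remaining)
        return min(remaining, key=lambda c: tally[c])
    # Process of elimination in reverse: keep removing the depth-1 winner;
    # the last candidate left standing is the one to eliminate.
    remaining = set(remaining)
    while len(remaining) > 1:
        remaining.discard(winner(ballots, remaining, depth - 1))
    return next(iter(remaining))
```

With that framing, each increment of depth replaces one more raw plurality decision with another layer of elimination, which is exactly the "more indirect use of plurality" I mean above.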
So recursive IRV, unlike every other voting method I know of, can apply its effect over and over until we decide it's “good enough.” In that way it's analogous to how we handle a well known computer science problem: binary floating point can never exactly represent certain numbers, like 1/3. True in theory, but we can just add more bits and the problem becomes less and less significant. If 32-bit floats aren't good enough, use 64-bit. If 64-bit isn't good enough, use 128-bit. There is no limit to how many times you can increase the number of bits, and each time you, say, double them, the precision increases by a predictable amount. Increasing the precision is a straightforward problem: it has costs, obviously, but you never need to reinvent anything to get more precision, you just apply more of the same.
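Just to make the analogy concrete, a tiny self-contained illustration (nothing to do with voting):

```python
import numpy as np
from fractions import Fraction

ref = Fraction(1, 3)  # exact 1/3 for comparison
err32 = abs(Fraction(float(np.float32(1) / np.float32(3))) - ref)
err64 = abs(Fraction(float(np.float64(1) / np.float64(3))) - ref)
# Neither is exact, but the 64-bit error is many orders of magnitude smaller.
print(float(err32), float(err64))
```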
All my tests show that recursive IRV finds the Condorcet winner, when one exists, very quickly; I've never seen it take more than one extra recursion level. But it doesn't “seek out the Condorcet winner.” Instead, finding the Condorcet winner is a byproduct of a process that makes the method more accurate and more resistant to vote splitting. And since that process still works in the absence of a Condorcet winner, we have something that never has to fall back on some ugly and imperfect kludge.
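For anyone who wants to test that claim on their own ballot sets, the comparison boils down to a check like this. It's a generic sketch using the same assumed ballot format as above (complete rankings, most preferred first), not anything specific to recursive IRV:

```python
def condorcet_winner(ballots, candidates):
    """Return the candidate who beats every other head to head, or None if there isn't one."""
    def prefers(ballot, a, b):
        # True if this ballot ranks a ahead of b (complete rankings assumed).
        return ballot.index(a) < ballot.index(b)

    for a in candidates:
        if all(2 * sum(prefers(ballot, a, b) for ballot in ballots) > len(ballots)
               for b in candidates if b != a):
            return a
    return None
```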