Sunday, March 16, 2025

Can algorithms make tournament cheating obsolete?

Our work proposes a novel rule that slightly, though still not optimally, reduces the potential gains from manipulation. The theoretical rule we propose identifies a small set of teams as "critical", which we define as teams that could form coalitions that result in a team within the coalition being the clear winner.

We classify every possible tournament outcome into one of five buckets depending on how close it is to having a team that wins all of its games. If the tournament has a team that wins all of its games, we declare that team the winner. If the tournament is far from having a team that wins all of its games, we choose a winner uniformly at random. For tournaments that are "close" to having a team that wins all of its games, we identify those teams that could significantly gain from manipulating the tournament. We prove that there are not many such teams, and for each "close" tournament, we assign specific probabilities to each team so that the gains from manipulation for coalitions of size three are at most 50% (and still the optimal 33% for coalitions of size two).
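A minimal sketch of this style of rule, for a round-robin tournament given as a dictionary of pairwise results. The five-bucket classification and the carefully chosen probabilities for "close" tournaments are simplified here to a single distance threshold; all names and thresholds are illustrative assumptions, not the paper's actual construction:

```python
import random

def beats(results, a, b):
    # results[(a, b)] is True if team a beat team b in the round robin
    return results[(a, b)]

def undefeated(results, teams):
    """Return the team that wins all of its games, or None."""
    for t in teams:
        if all(beats(results, t, u) for u in teams if u != t):
            return t
    return None

def distance_to_undefeated(results, teams):
    """Minimum number of results that would have to flip for some
    team to become undefeated (a crude proxy for 'closeness')."""
    return min(sum(not beats(results, t, u) for u in teams if u != t)
               for t in teams)

def pick_winner(results, teams, far_threshold=2):
    champ = undefeated(results, teams)
    if champ is not None:
        return champ                       # clear winner: declare it
    if distance_to_undefeated(results, teams) >= far_threshold:
        return random.choice(list(teams))  # far from undefeated: uniform
    # "close" tournaments get carefully chosen, non-uniform
    # probabilities in the actual rule; this sketch just falls
    # back to a uniform draw
    return random.choice(list(teams))
```

The interesting work in the real rule happens in the "close" branch, where the probabilities are tuned so that no small coalition can gain much by throwing games.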

One common concern about our model is that we assume all outcomes of the ground truth are deterministic, e.g., A always beats B, and this may not align with real tournaments. After all, underdogs always have a chance! Do our results hold if we allow for randomized outcomes, e.g., A beats B 80% of the time? It turns out that, thanks to prior work, the answer to this question is "yes", because the worst-case scenarios are those with deterministic outcomes. Related work shows that if we restrict the win probability of all games to, say, the 60–40% interval, then we can expect the gains from manipulation to decrease as the tournament becomes more competitive.
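A toy illustration of why deterministic outcomes are the worst case: take three teams under a simplistic rule (not the rule from the paper) where an undefeated team wins outright and a cycle is broken uniformly at random. Enumerating the eight possible outcomes exactly shows that a coalition's gain from throwing a game shrinks as the win probabilities move toward 50–50; the rule, probabilities, and function names here are illustrative assumptions:

```python
from itertools import product

def winner_dist(p_ab, p_bc, p_ca):
    """Winner distribution for teams A, B, C under a toy rule:
    an undefeated team wins outright; otherwise (a cycle) the
    winner is uniform over all three teams.  p_ab = Pr[A beats B],
    p_bc = Pr[B beats C], p_ca = Pr[C beats A]."""
    dist = {'A': 0.0, 'B': 0.0, 'C': 0.0}
    for ab, bc, ca in product([0, 1], repeat=3):
        pr = ((p_ab if ab else 1 - p_ab) *
              (p_bc if bc else 1 - p_bc) *
              (p_ca if ca else 1 - p_ca))
        if ab and not ca:        # A beat both B and C
            dist['A'] += pr
        elif bc and not ab:      # B beat both C and A
            dist['B'] += pr
        elif ca and not bc:      # C beat both A and B
            dist['C'] += pr
        else:                    # cycle: uniform tie-break
            for t in dist:
                dist[t] += pr / 3
    return dist

def coalition_gain(p_ab, p_bc, p_ca):
    """Gain in Pr[A or B wins] when A throws its game to B,
    possibly making B undefeated."""
    before = winner_dist(p_ab, p_bc, p_ca)
    after = winner_dist(0.0, p_bc, p_ca)   # A now loses to B for sure
    return (after['A'] + after['B']) - (before['A'] + before['B'])
```

With a deterministic cycle (all probabilities 1) the coalition gains 1/3 by throwing a game; with all games at 50–50 the gain is exactly zero, matching the intuition that deterministic outcomes are the hardest case.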

In another attempt to overcome the impossibility result we presented before, we introduce a new model for determining which manipulations are beneficial. In the model outlined so far, teams in the manipulating coalitions treat their joint probability of winning as a uniform mass. That is, a team in the coalition does not care which other team's chances of winning go up or down, even if it is their own. This assumption is unlikely to hold, since teams naturally care about their own chance of winning, or are at least a little selfish. To capture the idea that teams in manipulating coalitions are still a little selfish, we introduce weights into the manipulation calculations to reflect this.
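One way such weights might enter the calculation (the aggregation below is a guessed formalization for illustration, not necessarily the paper's exact definition): each coalition member scores a manipulation by weighting the change in its own winning probability more heavily than the changes in its partners' probabilities.

```python
def weighted_gain(before, after, coalition, own_weight=2.0):
    """Per-team weighted gain from a manipulation.

    before/after map team -> probability of winning under the rule,
    pre- and post-manipulation.  Each team weights the change in its
    OWN winning probability by `own_weight`, and the changes in its
    coalition partners' probabilities by 1.  A manipulation counts as
    beneficial for team t only if its weighted gain is positive.
    """
    gains = {}
    for t in coalition:
        gains[t] = sum((own_weight if u == t else 1.0) *
                       (after[u] - before[u])
                       for u in coalition)
    return gains
```

With `own_weight=1` this reduces to each team scoring the coalition's joint probability mass, recovering the original model; larger weights make manipulations that sacrifice a team's own chances less attractive to that team.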

We showed that if every team weights its own chances of winning twice as much as those of the other teams in the manipulating coalition, there exist rules that satisfy properties 1 and 3 for tournaments with at most six teams. We conjecture that, under this model, there may indeed exist rules that satisfy properties 1 and 3 exactly. We also show that for several popular rules, a large weight is required for the rule to satisfy properties 1 and 3.
