Poster
in
Workshop: Regulatable ML: Towards Bridging the Gaps between Machine Learning Research and Regulations

Regulation of Algorithmic Collusion, Refined: Testing Worst-case Calibrated Regret

Jason Hartline · Chang Wang · Chenhao Zhang


Abstract:

We study the regulation of algorithmic (non-)collusion amongst sellers in dynamic imperfect price competition by auditing their data, as introduced by Hartline et al. We develop an auditing method that tests whether a seller's worst-case calibrated regret is low. The worst-case calibrated regret is the highest calibrated regret that outcomes compatible with the observed data can generate. This method relaxes the previous requirement that a pricing algorithm must use fully-supported price distributions to be auditable. It is at least as permissive as any auditing method that has a high probability of failing algorithmic outcomes with non-vanishing calibrated regret. Additionally, we strengthen the justification for using vanishing calibrated regret, rather than vanishing best-in-hindsight regret, as the definition of non-collusion: we show that even without side information, pricing algorithms that satisfy only the weaker vanishing best-in-hindsight regret allow an opponent to manipulate them into posting supra-competitive prices. We motivate and interpret the approach of auditing algorithms from their data as suggesting a per se rule. However, we demonstrate that algorithms can pass the audit by pretending to have higher costs than they actually do. For such scenarios, the rule of reason can be applied to bound the range of costs to those that are reasonable for the domain.
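To make the distinction between the two regret notions concrete, the following sketch (not the paper's auditing procedure) contrasts best-in-hindsight (external) regret with calibrated regret, which we model here in the common swap-style form: regret measured separately on each subsequence where the seller posted a given price. The data layout (`profits[t][a]` as the counterfactual profit of price `a` in round `t`) is an illustrative assumption, not the paper's audit format.

```python
# Illustrative sketch, NOT the paper's audit: contrast best-in-hindsight
# (external) regret with a swap-style proxy for calibrated regret.
# Assumption: profits[t][a] is the counterfactual profit of posting
# price index `a` in round t; `played[t]` is the price actually posted.

def external_regret(played, profits):
    """Profit of the best fixed price in hindsight minus realized profit."""
    n_actions = len(profits[0])
    realized = sum(profits[t][a] for t, a in enumerate(played))
    best_fixed = max(sum(row[a] for row in profits) for a in range(n_actions))
    return best_fixed - realized

def calibrated_regret(played, profits):
    """Swap-style regret: on each subsequence where price p was posted,
    compare realized profit to the best alternative price for that
    subsequence, and sum the (nonnegative) gains."""
    n_actions = len(profits[0])
    total = 0.0
    for p in set(played):
        rounds = [t for t, a in enumerate(played) if a == p]
        gain = max(sum(profits[t][a] - profits[t][p] for t in rounds)
                   for a in range(n_actions))
        total += max(gain, 0.0)
    return total

# A seller that mis-times its two prices every round:
played = [0, 1]
profits = [[0.0, 1.0],   # round 0: price 1 would have earned more
           [1.0, 0.0]]   # round 1: price 0 would have earned more
print(external_regret(played, profits))    # 1.0
print(calibrated_regret(played, profits))  # 2.0
```

The example shows why calibrated regret is the stronger test: swapping each posted price for its per-subsequence best response exposes manipulation that a single fixed-price benchmark misses, since calibrated (swap) regret always upper-bounds external regret.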
