M.Sc Student: Wahrmann Lyron
Department: Applied Mathematics
Supervisor: Mr. Amir Ronen
Input provided by many agents often enables accurate forecasting of future events. A major difficulty in making such predictions is that the agents must invest a costly effort to provide their input.
We introduce a new class of mechanism design problems called Prediction Games. During the game, the agents' inputs are gathered to produce a prediction of a future event. The mechanism is allowed to pay the agents in order to encourage them to compute their inputs.
Preliminaries: There are n agents; each agent i has a private Boolean input x_i and a cost c of accessing it. Given are a binary prediction function f(x_1, ..., x_n) and an IID distribution on the inputs (q = Pr[x_i = 1]). The (conditional) influence I_i is defined as the probability that agent i's input determines f(·).
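The conditional influence can be estimated by sampling: fix the inputs already revealed to the mechanism, complete the rest at random from the IID distribution, and check how often flipping agent i's bit changes f(·). A minimal sketch (the function names and the majority example are illustrative, not from the poster):

```python
import random

def influence(i, f, n, q, x_known, trials=20000):
    """Monte Carlo estimate of agent i's conditional influence: the
    probability that, given the already-revealed inputs x_known,
    flipping x_i changes f on a random IID completion."""
    unknown = [j for j in range(n) if j != i and j not in x_known]
    pivotal = 0
    for _ in range(trials):
        x = dict(x_known)
        for j in unknown:
            x[j] = 1 if random.random() < q else 0
        x[i] = 0
        f0 = f([x[j] for j in range(n)])
        x[i] = 1
        f1 = f([x[j] for j in range(n)])
        pivotal += (f0 != f1)
    return pivotal / trials

# Example: majority of 5 IID bits with q = 0.5 and nothing revealed yet.
# Agent 0 is pivotal exactly when the other 4 bits split 2-2,
# so the true influence is C(4,2)/2^4 = 0.375.
maj = lambda bits: 1 if sum(bits) >= 3 else 0
print(influence(0, maj, 5, 0.5, {}))  # close to 0.375
```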
The goal is to design a mechanism M which, in truthful equilibrium, predicts f(·) and pays in expectation as little as possible, i.e., a truthful implementation.
Definition: Let M approach agent i with message m. The canonical payment scheme is defined by:

v_i(m) = v_i^T = c / [(1 − q_M) · I_i(m)]  if f(·) is correct,
v_i(m) = 0  otherwise.
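The payment rule itself is a one-line computation: an agent is paid only when the prediction turns out correct, and the amount scales inversely with the agent's conditional influence. A minimal sketch under the poster's notation (the function name is hypothetical):

```python
def canonical_payment(c, q_M, influence_i, f_correct):
    """Canonical payment v_i(m): pays c / ((1 - q_M) * I_i(m)) when the
    prediction f(.) is correct, and nothing otherwise. c is the access
    cost, q_M and I_i(m) follow the poster's notation."""
    if not f_correct:
        return 0.0
    return c / ((1.0 - q_M) * influence_i)

# With cost c = 1, q_M = 0.2 and influence 0.375, a correct prediction
# pays 1 / (0.8 * 0.375) = 10/3; lower-influence agents cost more,
# since they are pivotal (and hence rewarded) less often.
print(canonical_payment(1.0, 0.2, 0.375, True))
```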
Definition: A canonical mechanism is a truthful implementation with the following properties:
Theorem: For every prediction problem there exists a canonical implementation whose expected payment is optimal among all implementations.
Theorem: If f(·) is anonymous, a canonical mechanism which approaches the agents in a random order is an optimal implementation of it.
Theorem: Let f(·) be a weighted threshold function, f_{w,q}(x) = 1 iff Σ_{i=1}^{n} w_i x_i ≥ q.
A canonical mechanism which approaches the agents in a fixed order of decreasing weight dominates every other fixed-order mechanism.
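The intuition behind the decreasing-weight order is that heavy agents are more likely to settle the threshold early, letting the mechanism stop before paying the remaining agents. A minimal sketch of such a fixed-order mechanism with early stopping (`reveal` is a hypothetical oracle standing in for the agent's costly input computation):

```python
def predict_threshold(weights, threshold, reveal):
    """Fixed-order mechanism for f_{w,q}(x) = 1 iff sum(w_i * x_i) >= q.
    Agents are approached in decreasing weight; the mechanism stops
    (saving further payments) as soon as the remaining, unrevealed
    agents can no longer change the outcome."""
    order = sorted(range(len(weights)), key=lambda i: -weights[i])
    acc = 0.0                    # weight already known to be set to 1
    remaining = sum(weights)     # weight not yet revealed
    approached = []
    for i in order:
        if acc >= threshold:             # outcome forced to 1
            break
        if acc + remaining < threshold:  # outcome forced to 0
            break
        approached.append(i)
        acc += weights[i] * reveal(i)
        remaining -= weights[i]
    return (1 if acc >= threshold else 0), approached

# weights (5, 3, 1), threshold 5: if the heaviest agent reports 1,
# the prediction is decided without approaching the other two.
bits = [1, 0, 1]
pred, asked = predict_threshold([5, 3, 1], 5, lambda i: bits[i])
print(pred, asked)  # 1 [0]
```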
Theorem: Every f(·) can be implemented with an expected payment of
B(M) = O(c / [q_M (1 − q_M) n]).
Theorem: Every anonymous f(·) can be implemented with an expected payment of
B(M) = O(c n² / [(1 − q_M)² min(p, 1 − p) log(n)]), where p = Pr[f(x) = 1].
Exact predictions may require high payments. Allowing a small probability of error might result in a much cheaper implementation.
Definition: An ε-approximation of a prediction problem is a truthful implementation such that f(·) is correct with probability ≥ 1 − ε.
Theorem: Let 0 < ε < ½. Every f(·) can be ε-approximated with an expected payment B(M) ≤ O(n²c / [ε(1 − ε)]).