Economic and algorithmic truth serum


The ability to collect high-quality input information from a diverse population of human agents is crucial and fundamental to building machine learning systems that involve society in the loop algorithmically. Strategic human agents who hold private data may choose to report their information arbitrarily; yet the mechanism designer (e.g., the practitioner collecting training data for a machine learning system) has no ground-truth verification to validate the reports.


"Call it what you will, incentives are what get people to work harder." -- Nikita Khrushchev

I am interested in reading people's minds by developing economic and algorithmic truth serums. Existing solutions design payment and scoring functions to better engage rational human agents in contributing high-quality data. The core solution concept solicits redundant answers for each task from different peer agents, which then serve as reference answers for scoring or paying each agent. I explore the design of incentive-aligned systems for data collection from a machine learning and computational perspective, and I study how to design incentive-compatible algorithms when the underlying human behavioral model is unknown.
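As a toy illustration of the peer-reference idea (and not an implementation from any paper listed below; all names are illustrative), the following sketch scores each agent's report against a randomly chosen peer's report on the same task, in the spirit of output agreement:

```python
import random

def output_agreement_score(report, peer_report, reward=1.0):
    """Pay `reward` if the report matches the peer's report, else 0."""
    return reward if report == peer_report else 0.0

def score_agents(reports_by_task):
    """Score agents without ground truth, using peers as references.

    reports_by_task: {task_id: {agent_id: report}}
    Returns {agent_id: total score}.
    """
    totals = {}
    for task, reports in reports_by_task.items():
        agents = list(reports)
        for agent in agents:
            peers = [a for a in agents if a != agent]
            if not peers:
                continue  # no reference answer available for this task
            peer = random.choice(peers)
            totals[agent] = totals.get(agent, 0.0) + output_agreement_score(
                reports[agent], reports[peer])
    return totals
```

Under such a scheme, an agent's payment depends only on comparisons with peers, which is why redundancy across agents substitutes for ground-truth verification.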


Ongoing work

Yang Liu and Yiling Chen
Surrogate Scoring Rules and a Dominant Truth Serum for Information Elicitation
in preparation.

  • We extend strictly proper scoring rules to their agnostic versions, where we only have access to a noisy proxy of the ground truth. We call such scoring rules Surrogate Scoring Rules.
  • The equilibrium notion typically adopted in game theory, under which truthfully revealing one's answer is in an agent's best interest only if all other agents also report truthfully, renders solutions non-robust for many practical applications. We propose the first dominant truth serum for such information elicitation problems.
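A hedged sketch of the noisy-ground-truth idea, assuming binary answers, the Brier score as the base rule, and known noise rates e0 = P(noisy=1 | true=0) and e1 = P(noisy=0 | true=1) with e0 + e1 < 1 (function names are illustrative, not the paper's notation): a debiased score can be constructed whose expectation over the noisy reference equals the clean score.

```python
def brier(p, y):
    """Quadratic (Brier) scoring rule for a binary event; higher is better."""
    return 1.0 - (p - y) ** 2

def surrogate_score(p, y_noisy, e0, e1):
    """Debiased score against a noisy reference label.

    e0 = P(noisy=1 | true=0), e1 = P(noisy=0 | true=1); requires e0 + e1 < 1.
    In expectation over the label noise, this recovers brier(p, true_y).
    """
    denom = 1.0 - e0 - e1
    if y_noisy == 1:
        return ((1.0 - e0) * brier(p, 1) - e1 * brier(p, 0)) / denom
    return ((1.0 - e1) * brier(p, 0) - e0 * brier(p, 1)) / denom
```

For instance, with true label 1 and noise rates e0 = 0.1, e1 = 0.2, the noisy label is 1 with probability 0.8 and 0 with probability 0.2, and the expected surrogate score equals brier(p, 1) exactly, so properness of the base rule is preserved under noise.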

On Learning with Noisy Data and Incentive Design.

Randomized Wagering Mechanism for Eliciting Confidence, with Yiling Chen and Juntao Wang.

Wagering vs. Scoring Rule: An Experimental Study, with Yiling Chen and Juntao Wang.

Relevant work

Yang Liu and Yiling Chen
Machine Learning aided Peer Prediction.
ACM EC 2017, Cambridge, United States.


Yang Liu and Yiling Chen
Sequential Peer Prediction: Learning to Elicit Effort using Posted Prices.
AAAI 2017, San Francisco, United States.


Yang Liu and Yiling Chen
A Bandit Framework for Strategic Regression.
NIPS 2016, Barcelona, Spain.


Zehong Hu, Yang Liu, Yitao Liang and Jie Zhang
A Reinforcement Learning Framework for Eliciting High Quality Information.
Machine Learning in the Presence of Strategic Behavior at NIPS'17, Long Beach, United States.

  • We build a reinforcement learning framework that pushes peer prediction mechanisms toward more practical settings in which (1) agents may not be fully rational, and (2) the right scaling of payments is learned through interaction with agents.
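The learned-payment-scaling idea can be sketched as a simple bandit over candidate payment scales; this is an illustrative toy rather than the mechanism from the paper, and all names, parameters, and the agreement-based reward signal are hypothetical:

```python
import random

def choose_scale(counts, sums, scales, epsilon=0.1):
    """Epsilon-greedy choice among candidate payment scales (bandit arms)."""
    if random.random() < epsilon or not any(counts):
        return random.randrange(len(scales))
    means = [s / c if c else 0.0 for s, c in zip(sums, counts)]
    return max(range(len(scales)), key=lambda i: means[i])

def learn_payment_scale(scales, agreement_rate, rounds=1000, cost=0.05):
    """Learn a payment scale by interacting with agents.

    agreement_rate(scale) -> observed report agreement in one round (0..1),
    a noisy proxy for elicited quality. Reward trades off agreement
    against the cost of larger payments.
    """
    counts = [0] * len(scales)
    sums = [0.0] * len(scales)
    for _ in range(rounds):
        i = choose_scale(counts, sums, scales)
        reward = agreement_rate(scales[i]) - cost * scales[i]
        counts[i] += 1
        sums[i] += reward
    best = max(range(len(scales)),
               key=lambda i: sums[i] / counts[i] if counts[i] else 0.0)
    return scales[best]
```

The design choice here is that the mechanism never needs a behavioral model of the agents: it only observes how an aggregate statistic responds to the payment level it posts.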

Yang Liu and Chien-Ju Ho
Incentivizing High Quality User Contributions: New Arm Generation in Bandit Learning.
AAAI 2018, New Orleans, United States.

Yang Liu and Yiling Chen
Learning to Incentivize: Eliciting Effort via Output Agreement.
IJCAI 2016, New York City, United States.

Shang-Pin Sheng, Yang Liu and Mingyan Liu
A Regulated Oligopoly Multi-Market Model for Trading Smart Data.
IEEE SDP 2015, in conjunction with IEEE INFOCOM 2015, Hong Kong, China.