|M.Sc Thesis||Department of Industrial Engineering and Management|
|Supervisors:||Prof. Rafaeli Anat|
|Assoc. Prof. Smorodinsky Rann|
The analysis of optimal punishment in law enforcement has led psychologists, on the one hand, to suggest that the immediacy of punishment is the crucial variable and should be emphasized, and economists, on the other, to suggest that the cost of punishment is the critical variable. The current research asks how these recommendations can be jointly implemented in order to increase efficiency while reducing costs.
Two alternatives to frequent immediate punishment (FIP) are considered as ways to reduce costs. The first is infrequent punishment following a violation (“rare immediate punishment,” RIP); the second is a less costly punishment scheme (“bad lottery immediate punishment,” BLIP), under which the violator receives an immediate warning signal indicating some chance that he or she will receive a large punishment later.
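The three schemes can be equated in the expected cost they impose on the violator while differing in immediacy and enforcement frequency. A minimal sketch of that equivalence, with all probabilities and punishment magnitudes invented purely for illustration (the thesis does not specify these values):

```python
# Hypothetical illustration: expected punishment per violation under the
# three schemes. All numbers below are invented for illustration only.

def expected_punishment(p_punish: float, magnitude: float) -> float:
    """Expected cost to the violator: punishment probability times its size."""
    return p_punish * magnitude

# FIP: every violation is punished immediately with a moderate fine.
fip = expected_punishment(p_punish=1.0, magnitude=4.0)

# RIP: only a fraction of violations is punished, with a larger fine
# chosen so that the expected cost matches FIP.
rip = expected_punishment(p_punish=0.1, magnitude=40.0)

# BLIP: every violation triggers an immediate (cheap) warning signal;
# the warning indicates a small chance of a large delayed punishment.
blip = expected_punishment(p_punish=0.1, magnitude=40.0)

print(fip, rip, blip)  # all three equal 4.0 in expectation
```

The point of the sketch is that when expected costs are matched this way, any behavioral difference between the schemes must come from immediacy, rarity, or payoff variability rather than from expected value.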
Recent research in experimental economics suggests that BLIP is likely to be the optimal method when the punishment is high enough. However, careful quantification of the relative value of the variability and rarity effects is required to identify the optimal method when violating the law maximizes expected value.
These predictions are tested in three experiments involving repeated choices between two buttons (a lawful and an unlawful option in an abstract setting) that differed in expected payoff. In the first experiment the lawful choice had the higher expected value; in the second and third experiments, the unlawful choice had the higher expected value. The latter two experiments differ in the degree of payoff variability, which is much higher in the third experiment than in the second.
Results indicate a surprising lack of difference between BLIP and FIP, and a significant difference between BLIP and RIP: in all three experiments the proportion of lawful choices was higher under the BLIP treatment than under the RIP treatment. The findings are compared with the quantitative predictions of the reinforcement learning model proposed by Barron and Erev (2000).
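The repeated-choice task can be illustrated with a simple payoff-based learning rule. The sketch below is in the spirit of reinforcement learning models of decisions from experience; the exact updating rule of Barron and Erev (2000) differs in detail, and the payoff structure and parameters here are invented for illustration:

```python
import random

def simulate(payoff_lawful, payoff_unlawful, trials=1000, alpha=0.1, seed=0):
    """Repeatedly choose between two buttons; nudge the chosen button's
    value estimate toward the payoff just obtained (recency weighting)."""
    rng = random.Random(seed)
    values = {"lawful": 0.0, "unlawful": 0.0}
    lawful_count = 0
    for _ in range(trials):
        # Epsilon-greedy choice: usually pick the higher-valued button,
        # occasionally explore at random.
        if rng.random() < 0.1:
            choice = rng.choice(["lawful", "unlawful"])
        else:
            choice = max(values, key=values.get)
        payoff = payoff_lawful(rng) if choice == "lawful" else payoff_unlawful(rng)
        values[choice] += alpha * (payoff - values[choice])
        if choice == "lawful":
            lawful_count += 1
    return lawful_count / trials

# Illustrative FIP-like condition: the lawful button pays 1 surely; the
# unlawful button pays 2, but with probability 0.1 a fine of 20 is
# deducted immediately (numbers invented, not the thesis's parameters).
rate = simulate(lambda r: 1.0,
                lambda r: -18.0 if r.random() < 0.1 else 2.0)
print(rate)  # proportion of lawful choices over the session
```

Under such a rule, rare large punishments tend to be underweighted relative to their expected value, which is one mechanism by which RIP-style schemes can yield fewer lawful choices than schemes that signal punishment on every violation.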