Working Papers


This page collects papers on game-theoretic probability and finance: first those by the Rutgers–Royal Holloway research group, then work in English by the Tokyo research group.

Rutgers–Royal Holloway papers in chronological order

  1. "The game-theoretic Capital Asset Pricing Model", by Vladimir Vovk and Glenn Shafer (first posted March 2002).
    Using only the game-theoretic framework and an efficient-market hypothesis, this article derives predictions that are similar to those of the standard CAPM, but are clearer and more precise. International Journal of Approximate Reasoning 49 175–197 (2008).
  2. "Game-theoretic capital asset pricing in continuous time", by Vladimir Vovk and Glenn Shafer (first posted December 2001; also posted on arXiv).
    This article describes a continuous-time version of the game-theoretic capital asset pricing model described in Working Paper 1.
  3. "A new understanding of subjective probability and its generalization to lower and upper prevision", by Glenn Shafer, Peter R. Gillett and Richard B. Scherl (October 2002).
    Instead of asking whether a person is willing to pay given prices for given risky payoffs, the article asks whether the person believes he can make a lot of money at those prices. International Journal of Approximate Reasoning 31 1–49 (2003).
  4. "The origins and legacy of Kolmogorov's Grundbegriffe", by Glenn Shafer and Vladimir Vovk (first posted February 2003; also posted on arXiv).
    The Grundbegriffe appeared in 1933. The article examines the work of the earlier scholars whose ideas Kolmogorov synthesized and the developments in the decades immediately following. A shorter version, which does not cover the later period, appeared as "The sources of Kolmogorov's Grundbegriffe" in Statistical Science, 21, 70–98 (2006).
  5. "A game-theoretic derivation of the sqrt(dt) effect", by Vladimir Vovk and Glenn Shafer (January 2003; also posted on arXiv).
    In the game-theoretic framework, market volatility is a consequence of the absence of riskless opportunities for making money.
  6. "Kolmogorov's contributions to the foundations of probability", by Vladimir Vovk and Glenn Shafer (January 2003).
    The article reviews three stages of Kolmogorov's work on the foundations of probability: (1) his formulation of measure-theoretic probability, 1933, (2) his frequentist theory of probability, 1963, and (3) his algorithmic theory of randomness, 1965–1987. A version of this working paper appeared in Problems of Information Transmission 39 21–31 (2003).
  7. "Good randomized sequential probability forecasting is always possible", by Vladimir Vovk and Glenn Shafer (first posted June 2003).
    It is possible, using randomization, to make sequential probability forecasts that will pass any given battery of statistical tests. A version of this working paper appeared in the Journal of the Royal Statistical Society, Series B 67 747–763 (2005).
  8. "Defensive forecasting", by Vladimir Vovk, Akimichi Takemura, and Glenn Shafer (first posted September 2004; also posted on arXiv).
    For any continuous gambling strategy used for detecting disagreement between forecasts and actual labels, there exists a forecasting strategy whose forecasts are ideal as far as this gambling strategy is concerned. A version of this working paper appeared in the AI & Statistics 2005 proceedings.
  9. "Experiments with the K29 algorithm", by Vladimir Vovk (October 2004).
    The K29 algorithm for probability forecasting (proposed in Working Paper 8) is studied empirically on a popular benchmark data set.
  10. "Defensive forecasting for linear protocols", by Vladimir Vovk, Ilia Nouretdinov, Akimichi Takemura, and Glenn Shafer (first posted February 2005; also posted on arXiv).
    The K29 algorithm is generalized from binary to arbitrary linear forecasting protocols. A version of this working paper appeared in the ALT 2005 proceedings.
  11. "On-line regression competitive with reproducing kernel Hilbert spaces", by Vladimir Vovk (first posted November 2005; also posted on arXiv).
    For a wide range of infinite-dimensional benchmark classes one can construct a prediction algorithm whose cumulative quadratic loss over the first N examples does not exceed the cumulative loss of any prediction rule in the class plus O(sqrt(N)). The proof technique is based on defensive forecasting. A version of this working paper appeared in the TAMC 2006 proceedings.
  12. "Discrete dynamic hedging without probability", by Glenn Shafer (March 2005).
    Even if the price of a security is not governed by a probability measure, a European option in the security can be hedged in discrete time by trading in the security and an instrument that pays its variance. A non-probabilistic bound on the error of the hedging is given.
  13. "Non-asymptotic calibration and resolution", by Vladimir Vovk (first posted November 2004; also posted on arXiv).
    The article analyzes a new algorithm (K29*, a modification of the K29 algorithm) for probability forecasting of binary observations. A version of this working paper appeared in Theoretical Computer Science (ALT 2005 Special Issue) 387, 77–89 (2007).
  14. "Competitive on-line learning with a convex loss function", by Vladimir Vovk (first posted May 2005; also posted on arXiv).
    Standard on-line learning algorithms can only deal with finite-dimensional (often countable) benchmark classes. This article presents results for decision rules ranging over an arbitrary reproducing kernel Hilbert space. The proof technique used is based on defensive forecasting. A version of this working paper appeared in the ALT 2005 proceedings.
  15. "From Cournot's principle to market efficiency", by Glenn Shafer (first posted November 2005).
    A revival of Cournot's principle can help us distinguish clearly among different aspects of market efficiency.
  16. "Competing with wild prediction rules", by Vladimir Vovk (first posted December 2005; also posted on arXiv).
    The regularity of a prediction rule D is measured by its "Hölder exponent" h, informally defined by the condition that |D(x+dx)-D(x)| scales as |dx|^h for small |dx|. The usual Hilbert-space methods cease to work for h < 1/2. This article develops Banach-space methods to construct, for each p in [2, infinity), a prediction algorithm whose average loss over the first N examples does not exceed the average loss of any prediction rule of Hölder exponent h > 1/p + epsilon plus O(N^(-1/p)). A version of this working paper appeared in Machine Learning (COLT 2006 Special Issue) 69, 193–212 (2007).
  17. "Predictions as statements and decisions", by Vladimir Vovk (June 2006; also posted on arXiv).
    The theory of competitive on-line learning can benefit from kinds of prediction that are now foreign to it, first of all from the kinds studied in game-theoretic probability. An abstract of this working paper appeared in the COLT 2006 proceedings.
  18. "Leading strategies in competitive on-line prediction", by Vladimir Vovk (August 2007; also posted on arXiv).
    For any class of prediction strategies constituting a reproducing kernel Hilbert space one can construct a leading strategy: the loss of any prediction strategy whose norm is not too large is determined by how closely it imitates the leading strategy. The loss function is assumed to be given by a Bregman divergence or by a strictly proper scoring rule. Theoretical Computer Science (ALT 2006 Special Issue) 405 285–296 (2008).
  19. "Merging of opinions in game-theoretic probability", by Vladimir Vovk (first posted August 2007; also posted on arXiv).
    This article gives constructive, point-wise, and non-asymptotic game-theoretic versions of several results on "merging of opinions" previously obtained in measure-theoretic probability and algorithmic randomness theory. Annals of the Institute of Statistical Mathematics 61 969–993 (2009).
  20. "Defensive forecasting for optimal prediction with expert advice", by Vladimir Vovk (August 2007; also posted on arXiv).
    Defensive forecasting is competitive with the Aggregating Algorithm and handles "second-guessing" experts, whose advice depends on the learner's prediction.
  21. "Continuous and randomized defensive forecasting: unified view", by Vladimir Vovk (August 2007; also posted on arXiv).
    There are two varieties of defensive forecasting: continuous and randomized. This note shows that the randomized variety can be obtained from the continuous variety by smearing Sceptic's moves to make them continuous.
  22. "Game-theoretic probability and its uses, especially defensive forecasting", by Glenn Shafer (August 2007).
    This expository article reviews the game-theoretic framework for probability and the method of defensive forecasting that derives from it.
  23. "Testing lead-lag effects under game-theoretic efficient market hypotheses", by Wei Wu and Glenn Shafer (November 2007).
    Game-theoretic efficient market hypotheses identify the same lead-lag anomalies as the conventional approach: statistical significance for the autocorrelations of small-cap portfolios and equal-weighted indices, as well as for the ability of other portfolios to lead them. Because the game-theoretic approach bases statistical significance directly on trading strategies, it allows us to measure the degree of market friction needed to account for this statistical significance. The authors find that market frictions provide an adequate explanation.
  24. "Continuous-time trading and the emergence of randomness", by Vladimir Vovk (first posted December 2007; also posted on arXiv).
    A new definition of events of game-theoretic probability zero in continuous time is proposed and used to prove results suggesting that trading in financial markets results in the emergence of properties usually associated with randomness. This article concentrates on "qualitative" results, stated in terms of order (or order topology) rather than in terms of the precise values taken by the price processes (assumed continuous). Stochastics 81 455–466 (2009).
  25. "Continuous-time trading and the emergence of volatility", by Vladimir Vovk (December 2007; also posted on arXiv).
    This article shows that the variation index of non-constant continuous price processes has to be 2, as in the case of Brownian motion. Electronic Communications in Probability 13 319–324 (2008).
  26. "Game-theoretic Brownian motion", by Vladimir Vovk (January 2008; also posted on arXiv).
    This article suggests a perfect-information game, along the lines of Lévy's characterization of Brownian motion, that formalizes the process of Brownian motion in game-theoretic probability.
  27. "Prequential probability: game-theoretic = measure-theoretic", by Vladimir Vovk (first posted January 2009; also posted on arXiv).
    This note shows that in Philip Dawid's prequential framework game-theoretic probability can be given a natural measure-theoretic definition. In particular, it makes game-theoretic laws of probability in the prequential framework with a finite outcome space corollaries of the corresponding measure-theoretic laws. However, the resulting strategies for Sceptic are very complex, in contrast with the strategies designed in game-theoretic probability. The main result of this note has been published in: Vladimir Vovk and Alexander Shen. Prequential randomness and probability. Theoretical Computer Science (Special Issue devoted to the Nineteenth International Conference on Algorithmic Learning Theory) 411 2632–2646 (2010).
  28. "Continuous-time trading and the emergence of probability", by Vladimir Vovk (first posted April 2009).
    This article establishes a non-stochastic analogue of the celebrated result by Dubins and Schwarz about reduction of continuous martingales to Brownian motion via time change. It contains the main results of Working Papers 24 and 25 as special cases. Finance and Stochastics 16 561–609 (2012).
  29. "Lévy's zero-one law in game-theoretic probability", by Glenn Shafer, Vladimir Vovk, and Akimichi Takemura (first posted May 2009; also posted on arXiv).
    The authors prove a game-theoretic version of Lévy's zero-one law, and deduce several corollaries from it, including Kolmogorov's zero-one law, the ergodicity of Bernoulli shifts, and a zero-one law for dependent trials.
  30. "Prediction with expert evaluators' advice", by Alexey Chernov and Vladimir Vovk (first posted February 2009).
    The article introduces a new protocol for prediction with expert advice in which each expert evaluates the learner's and his own performance using a loss function that may change over time and may be different from the loss functions used by the other experts. The learner's goal is to perform better or not much worse than each expert, as evaluated by that expert, for all experts simultaneously. The conference version is published in the ALT 2009 proceedings.
  31. "A betting interpretation for probabilities and Dempster-Shafer degrees of belief", by Glenn Shafer (first posted June 2009).
    One way of interpreting numerical degrees of belief is to treat them as betting offers and to make the judgement that a strategy for taking advantage of such offers will not multiply the capital it risks by a large factor. Applied to ordinary additive probabilities, this can justify updating by conditioning. Applied to Dempster-Shafer degrees of belief, it can justify Dempster's rule of combination. A version of this paper is to appear in the International Journal of Approximate Reasoning.
  32. "How to base probability theory on perfect-information games", by Glenn Shafer, Vladimir Vovk, and Roman Chychyla (December 2009).
    This paper reviews the basics of game-theoretic probability. It is published in the BEATCS (Number 100, February 2010, pages 115–148), Yuri Gurevich's Logic in Computer Science column.
  33. "Test martingales, Bayes factors, and p-values", by Glenn Shafer, Alexander Shen, Nikolai Vereshchagin, and Vladimir Vovk (first posted December 2009; also posted on arXiv).
    A nonnegative martingale with initial value equal to one measures the evidence against a probabilistic hypothesis. Bayes factors and p-values can be considered special cases of the martingale approach to hypothesis testing. Statistical Science 26, 84–101, 2011.
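    To make the martingale-as-bet idea concrete, here is a minimal sketch (the function name and numbers are illustrative, not from the paper): betting against a fair-coin hypothesis at the odds the hypothesis offers turns the capital process into a product of likelihood ratios, and by Ville's inequality the reciprocal of the final capital bounds a p-value.

```python
# Sketch: a test martingale against the fair-coin hypothesis H0: P(heads) = 1/2.
# Betting each round as if heads had probability q multiplies the capital by a
# likelihood ratio: q/0.5 on heads, (1-q)/0.5 on tails. Numbers are illustrative.

def capital_process(outcomes, q=0.7):
    """Capital after each round, starting from 1, for the q-vs-fair bet."""
    capital, path = 1.0, []
    for x in outcomes:  # x = 1 for heads, 0 for tails
        capital *= (q / 0.5) if x == 1 else ((1 - q) / 0.5)
        path.append(capital)
    return path

# A heads-heavy run makes the martingale grow; large values are evidence
# against H0, and 1/capital bounds a p-value by Ville's inequality.
path = capital_process([1, 1, 0, 1, 1, 1, 0, 1])
print(path[-1])
```

    Here 1/path[-1] plays the role of a conservative p-value against the fair-coin hypothesis, in the spirit of the paper's unification of test martingales, Bayes factors, and p-values.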
  34. "Insuring against loss of evidence in game-theoretic probability", by A. Philip Dawid, Steven de Rooij, Glenn Shafer, Alexander Shen, Nikolai Vereshchagin, and Vladimir Vovk (first posted May 2010; also posted on arXiv).
    This paper extends the result of the previous one to the case where testing is performed by a free agent (Sceptic) rather than using a prespecified nonnegative martingale. This requires different proof techniques. The extended result can be applied to financial markets, in which case it provides a tool for insuring against loss of the accumulated capital (and so can be considered as an alternative, imperfect but free, to buying a lookback option). A version of this paper is published in Statistics and Probability Letters 81, 157–162, 2011.
  35. "Rough paths in idealized financial markets", by Vladimir Vovk (first posted May 2010).
    This paper partially extends the result of Working Paper 25 by showing that the variation index of right-continuous positive price processes cannot exceed 2 (i.e., be much rougher than Brownian motion). A shorter version has been published in Lithuanian Mathematical Journal 51, 274–285, 2011.
  36. "Ito calculus without probability in idealized financial markets", by Vladimir Vovk (first posted August 2011).
    The paper assumes that the price paths of the traded securities are cadlag functions, imposing mild restrictions on the allowed size of jumps. It proves the existence of quadratic variation for typical price paths. This allows one to apply known results in pathwise Ito calculus to typical price paths.
  37. "Probability-free pricing of adjusted American lookbacks", by A. Philip Dawid, Steven de Rooij, Peter Grünwald, Wouter Koolen, Glenn Shafer, Alexander Shen, Nikolai Vereshchagin, and Vladimir Vovk (August 2011).
    This paper further develops Working Papers 33 and 34. It computes upper prices of some lookback-type American options.
  38. "The efficient index hypothesis and its implications in the BSM model", by Vladimir Vovk (first posted September 2011; also posted on arXiv).
    This note shows that, in the Black-Scholes(-Merton) model and for a long investment horizon, the equity premium is close to the squared volatility of the index, unless the index can be outperformed greatly with high probability. This agrees with results of Working Papers 1 and 2.
  39. "The Capital Asset Pricing Model as a corollary of the Black-Scholes model", by Vladimir Vovk (September 2011; also posted on arXiv).
    Considering a market containing a stock and an index, this paper shows that, for a long investment horizon, the appreciation rate of the stock has to be close to the interest rate plus the covariance between the volatility vectors of the stock and the index. (If it is not, the index can be outperformed greatly with high probability.) This contains both a version of the Capital Asset Pricing Model and the result of Working Paper 38 that the equity premium is close to the squared volatility of the index. The new CAPM agrees with the CAPM of Working Papers 1 and 2. See arXiv report arXiv:1111.2846 for further research.
  40. "Kolmogorov's strong law of large numbers in game-theoretic probability: Reality's side", by Vladimir Vovk (March 2013).
    The game-theoretic version of Kolmogorov's strong law of large numbers says that Skeptic has a strategy forcing the statement of the law in a game of prediction involving Reality, Forecaster, and Skeptic. This note describes a simple matching strategy for Reality. See Tokyo Working Paper 13 for much more advanced results.
  41. "When to call a variable random", by Glenn Shafer (June 2015).
    There is considerable interest in broadening Kolmogorov's framework for mathematical probability to permit weaker probabilistic predictions. This raises questions of interpretation and terminology. For example, should all uncertain quantities in a broader framework be called random variables? The historical record, reviewed in this paper, shows that the mathematicians who introduced the term random variable reserved it for variables to which probability distributions are ascribed. Moreover, they did not assume that probability distributions can be ascribed to all variables, even those measuring outcomes of well defined repeatable experiments. This paper contends that we should follow their example on both counts. Doing so will help us integrate much that has been learned during the past century into a new framework.
  42. "Purely pathwise probability-free Ito integral", by Vladimir Vovk (first posted December 2015; also posted on arXiv).
    This paper gives a simple construction of the pathwise Ito integral for an integrand and an integrator satisfying various topological and analytical conditions. The definition is purely pathwise in that neither integrand nor integrator are assumed to be paths of processes. A shorter version has been published in Matematychni Studii 46(1), 96–110, 2017.
  43. "Getting rich quick with the Axiom of Choice", by Vladimir Vovk (first posted April 2016; also posted on arXiv).
    This note proposes a new get-rich-quick scheme that involves trading in a stock with a continuous but not constant price path. The existence of such a scheme, whose practical value is tempered by its use of the Axiom of Choice, shows that imposing regularity conditions (such as measurability) is essential even in the foundations of game-theoretic probability. The journal version has been published in Finance and Stochastics.
  44. "A probability-free and continuous-time explanation of the equity premium and CAPM", by Vladimir Vovk and Glenn Shafer (first posted June 2016; also posted on arXiv).
    This paper gives yet another definition of game-theoretic probability in the context of continuous-time idealized financial markets. Without making any statistical assumptions (but assuming positive and continuous price paths), we obtain a simple expression for the equity premium and derive a version of the capital asset pricing model.
  45. "Towards a probability-free theory of continuous martingales", by Vladimir Vovk and Glenn Shafer (first posted July 2016; also posted on arXiv).
    Without probability theory, we define classes of supermartingales, martingales, and semimartingales in idealized financial markets with continuous price paths. This allows us to establish probability-free versions of a number of standard results in martingale theory. The main applications are to the equity premium and CAPM; the results of Working Paper 44 are simplified and strengthened.
  46. "Another example of duality between game-theoretic and measure-theoretic probability", by Vladimir Vovk (August 2016; also posted on arXiv).
    This paper makes a small step towards a non-stochastic version of superhedging duality relations in the case of one traded security with a continuous price path. Namely, it shows the coincidence of game-theoretic and measure-theoretic expectation for lower semicontinuous nonnegative functionals.
  47. "How speculation can explain the equity premium", by Glenn Shafer (first posted October 2016).
    When measured over decades in countries that have been relatively stable, returns from stocks have been substantially better than returns from bonds. This is often attributed to investors' risk aversion. The game-theoretic probability-free theory of finance attributes the equity premium to speculation, and this explanation does better than the explanation from risk aversion in accounting for the magnitude of the premium.
  48. "Cournot in English", by Glenn Shafer (first posted April 2017).
    These are English translations of a few passages from Cournot's books. To provide context, a few earlier and later scholars are also quoted.
  49. "Game-theoretic significance testing", by Glenn Shafer (first posted April 2017).
    Game-theoretic probability gives us a new way to think about the problem of adjusting p-values to account for multiple testing and provides concrete rules for adjusting and combining p-values.
  50. "Bayesian, fiducial, frequentist", by Glenn Shafer (first posted April 2017).
    This paper advances three historically rooted principles for the use of mathematical probability: the fiducial principle, Poisson's principle, and Cournot's principle. Taken together, they can help us understand the common ground shared by classical statisticians, Bayesians, and proponents of fiducial and Dempster-Shafer methods.
  51. "Non-stochastic portfolio theory", by Vladimir Vovk (December 2017; also posted on arXiv).
    This paper proposes a non-stochastic version, based on Working Paper 45, of Fernholz's stochastic portfolio theory for a simple model of stock markets with continuous price paths.
  52. "Whither Ito's reconciliation of Lévy and Doob?", by Glenn Shafer (first posted December 2018).
    Kiyosi Ito reconciled Paul Lévy's game-theoretic intuition with Joseph Doob's measure-theoretic rigor. Ito understood the reconciliation in terms of measure, but it can also be understood game-theoretically.
  53. "Pascal's and Huygens's game-theoretic foundations for probability", by Glenn Shafer (first posted December 2018).
    Blaise Pascal and Christiaan Huygens founded the calculus of chances on a game's temporal structure. Their 18th-century successors reverted to the notion of equally frequent chances, but the game-theoretic foundations merit 21st-century attention.
  54. "The language of betting as a strategy for statistical and scientific communication", by Glenn Shafer (first posted March 2019).
    The established language for statistical testing is overly complicated and deceptively conclusive. We can communicate the meaning and limitations of statistical evidence more clearly using the language of betting. This paper calls attention to a simple betting interpretation of likelihood ratios, significance levels, and p-values that facilitates the analysis of multiple testing and meta-analysis. Journal of the Royal Statistical Society A (with discussion, to appear).
  55. "On the nineteenth-century origins of significance testing and p-hacking", by Glenn Shafer (first posted July 2019).
    This paper traces the development of the Laplacean concept of significance testing and its transmission to Britain in the 19th century. Edgeworth's introduction of the English word "significance" in 1885 was a superficial and merely verbal change, but Fisher's use of the word differed from that of Edgeworth and Pearson.
  56. "How the game-theoretic foundation for probability resolves the Bayesian vs. frequentist standoff", by Glenn Shafer (first posted August 2020).
    The game-theoretic foundation for probability, which begins with a betting game instead of a mere assignment of probabilities to events, can serve as the basis for all the probability mathematics used by mathematical statistics. It can also generalize frequentist inference so that it stands beside Bayesian inference as a way of using betting. The generalization is vast.
  57. "Risk is random: The magic of the d'Alembert", by Harry Crane and Glenn Shafer (first posted August 2020).
    Systems for sequential betting touted in nineteenth-century casinos, including the d'Alembert and the Labouchere, win money with very high probability. Usually they also require the gambler to take relatively little money out of his or her pocket and hence have an astonishingly high average return on investment. Roughly speaking, the trick is to bet less after a win and more after a loss. Failure by business schools and statistics departments to teach these betting systems can blind us to similar behavior in business and finance and hamper our understanding of statistical testing.
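    A quick simulation illustrates the pattern described above (a sketch with made-up session parameters, not figures from the paper): on a fair coin the d'Alembert wins most sessions, while its average profit stays near zero because of occasional deep losses.

```python
import random

# Sketch of the d'Alembert system on a fair coin: the stake starts at 1 unit,
# goes up by 1 after a loss and down by 1 (never below 1) after a win.
# Session length and session count below are illustrative, not from the paper.

def dalembert_session(rounds, rng):
    stake, profit = 1, 0
    for _ in range(rounds):
        if rng.random() < 0.5:      # win: collect stake, then bet less
            profit += stake
            stake = max(1, stake - 1)
        else:                       # loss: pay stake, then bet more
            profit -= stake
            stake += 1
    return profit

rng = random.Random(0)
profits = [dalembert_session(100, rng) for _ in range(10_000)]
win_rate = sum(p > 0 for p in profits) / len(profits)
mean_profit = sum(profits) / len(profits)
print(win_rate, mean_profit)
```

    Since every bet is fair, the mean profit is zero in expectation no matter how the stake is chosen, so the high fraction of winning sessions is bought entirely with rare large losses.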
  58. "Negative probabilities", by Yuri Gurevich and Vladimir Vovk (first posted December 2020).
    We explain, using the example of Wigner's quasiprobability distribution, how negative probabilities may be used in the foundations of probability.
  59. "Game-theoretic descriptive probability", by Glenn Shafer (first posted September 2021).
    Game-theoretic probability reproduces the usual theory of discrete-time probability without using any concept of randomness. Probabilities are understood as forecasts, and the assumption of independence is replaced by the assumption that the forecaster always makes the same forecast or uses the same forecasting rule. A probability distribution for outcomes can serve as a forecasting rule, but it can also serve as a strategy for testing a probability forecaster by betting. This involves interpreting the likelihood ratio as the payoff of a gamble. We assert objective validity for a statistical model consisting of many distributions, by claiming that it will withstand testing by betting. As explained in this paper, game-theoretic probability also allows us to use a statistical model in a purely descriptive way, without claiming objective validity. In this case, estimated parameters merely tell us which distributions in the model best describe the data, and the place of confidence intervals is taken by ranges of parameter values that forecast relatively well. In the simplest implementation, these ranges coincide with R. A. Fisher's likelihood intervals.
  60. ""That's what all the old guys said": The many faces of Cournot's principle", by Glenn Shafer (first posted January 2022).
    Cournot's principle is a family of theses about how a system of numerical probabilities can be given an objective interpretation. With one nuance or another, these theses connect a system of numerical probabilities with phenomena by asserting that certain events of high probability will happen and that certain events of low probability will not. This paper surveys the many ways Cournot's principle has been formulated by quoting scores of prominent authors over several centuries. I include some authors who disagreed with Cournot's principle, sometimes explicitly, and others whose formulations can be thought of either as versions of the principle or as alternatives to it. The purpose of this compilation is not to decide whether Cournot's principle is right or wrong, but to show its persistence and its role in the development of the idea of objective numerical probability. The continuity of this development may be greater and the novelty of recent contributions less than we sometimes suppose.
  61. "The martingale index", by Valentin Dimitrov, Glenn Shafer, and Tiangang Zhang (first posted July 2022).
    Day traders, traders for financial institutions, and corporate executives sometimes appear to do better than chance only because the risk of large losses is hidden or overlooked. As students of casino gambling know, one way to obscure the risk of large losses is to bet more when you are losing and less when you are winning. Casino betting strategies that did this were called martingales in the 19th century. Traders in financial instruments also sometimes martingale; in fact, they are martingaling whenever they respond to a margin call. The martingale index, as defined in this paper, provides a rough but convenient way of measuring the portion of the expected return of a gambling or trading strategy due to martingaling. We calculate the martingale index for some popular casino strategies and also for some strategies that might model the behavior of some traders in S&P 500 futures, stocks, and VIX futures.
  62. "Teaching testing by betting: Towards a syllabus for game-theoretic statistics", by Glenn Shafer (first posted July 2022).
    This note sketches a syllabus for an introductory course in statistics built around the notion of testing by betting. This sketch is prefaced by an explanation of the author's personal attitudes towards betting and his personal opinions about betting's role in our culture.
  63. "Statistical testing with optional continuation", by Glenn Shafer (first posted August 2022).
    When testing a statistical hypothesis, is it legitimate to deliberate on the basis of initial data about whether and how to collect further data? The fundamental principle for testing by betting says yes, provided that you are testing by betting and do not risk more capital than initially committed. Standard statistical theory uses Cournot's principle, which does not allow such optional continuation. Cournot's principle can be extended to allow optional continuation when testing is carried out by multiplying likelihood ratios, but the extension lacks the simplicity and generality of testing by betting.
  64. "The diachronic Bayesian", by Vladimir Vovk (first posted August 2023).
    It is well known that a Bayesian's probability forecast for the future observations should form a probability measure in order to satisfy natural conditions of coherency. The topic of this paper is the evolution of the Bayesian's probability measure in time. We model the process of updating the Bayesian's beliefs in terms of prediction markets. The resulting picture is adapted to forecasting several steps ahead and making almost optimal decisions.
  65. "A Conversation with A. Philip Dawid", by Vladimir Vovk and Glenn Shafer (first posted November 2023).
    Beginning in the 1970s, Alexander Philip Dawid has been a leading contributor to the foundations of statistics and especially to the development and application of Bayesian statistics. Both Vovk and Shafer have known Philip personally for decades, and his work has served as an inspiration and starting point for much of their own work. This conversation took place remotely in August and September of 2022. Its definitive, much shorter, record is to appear in Statistical Science.
  66. "Convergence of opinions", by Vladimir Vovk (first posted December 2023).
    This paper establishes a game-theoretic version of the classical Blackwell–Dubins result. We consider two forecasters who at each step issue probability forecasts for the infinite future. Our result says that either at least one of the two forecasters will be discredited or their forecasts will converge in total variation.

Rutgers–Royal Holloway papers by topic

Statistics: 33, 34, 49, 54, 56, 62, 63, 64, 66; defensive forecasting: 7, 8, 9, 10, 11, 13, 14, 16, 17, 18, 20, 21, 22, 30; finance: 1, 2, 5, 12, 23, 37, 47, 61 (see also "continuous time"); continuous time: 24, 25, 26, 28, 35, 36, 38, 39, 42, 43, 44, 45, 46, 51; history: 4, 6, 41, 48, 50, 52, 53, 55, 57, 59, 60, 65; general: 3, 15, 19, 27, 29, 31, 32, 40, 58.

Tokyo papers in chronological order

  1. "On a simple strategy weakly forcing the strong law of large numbers in the bounded forecasting game" by Masayuki Kumon and Akimichi Takemura (August 2005, last revised in November 2005).
    The article constructs an explicit strategy that weakly forces the strong law of large numbers in the bounded forecasting game with rate of convergence O((log n/n)^(1/2)). Annals of the Institute of Statistical Mathematics 60, 801–812 (2008).
  2. "Game theoretic derivation of discrete distributions and discrete pricing formulas" by Akimichi Takemura and Taiji Suzuki (September 2005).
    The authors illustrate the generality of discrete finite-horizon game-theoretic probability protocols. The game-theoretic framework is advantageous because no a priori probabilistic assumption is needed. Journal of the Japan Statistical Society 37, 87–104 (2007).
  3. "Capital process and optimality properties of Bayesian Skeptic in the fair and biased coin games" by Masayuki Kumon, Akimichi Takemura, and Kei Takeuchi (October 2005, revised September 2008).
    The article studies capital process behavior in the fair-coin and biased-coin games. A Bayesian strategy for Skeptic with a beta prior weakly forces the strong law of large numbers with rate of convergence O((log n/n)^(1/2)). If Reality violates the law, then the exponential growth rate of the capital process is very accurately described in terms of Kullback divergence. The authors also investigate optimality properties of Bayesian strategies. Stochastic Analysis and Applications 26, 1161–1180 (2008).
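    As a concrete illustration of the kind of strategy studied in this line of work (a sketch, not taken from the paper), the capital process of a Bayesian Skeptic with a uniform prior in the fair-coin game is a likelihood ratio with a closed form; the function name `bayes_capital`, the Beta(1,1) prior, and the 0.6 bias below are illustrative choices:

```python
from math import lgamma, log, exp
import random

def bayes_capital(outcomes):
    """Capital of a Bayesian Skeptic with a uniform (Beta(1,1)) prior in the
    fair-coin game: the Bayes marginal likelihood h! t! / (n+1)! divided by
    the fair-coin likelihood (1/2)^n."""
    n = len(outcomes)
    h = sum(outcomes)   # heads (1s)
    t = n - h           # tails (0s)
    log_marginal = lgamma(h + 1) + lgamma(t + 1) - lgamma(n + 2)
    return exp(log_marginal + n * log(2))

# Against a fair-looking path the capital stays modest, but if Reality's
# frequency of heads deviates from 1/2 the capital grows roughly like
# exp(n * KL(p || 1/2)), here with p = 0.6:
random.seed(0)
path = [1 if random.random() < 0.6 else 0 for _ in range(10000)]
print(bayes_capital(path))
```

The exponential growth rate against a biased coin is the Kullback divergence mentioned in the abstract; for p = 0.6 it is about 0.02 per round.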
  4. "Game-theoretic versions of strong law of large numbers for unbounded variables" by Masayuki Kumon, Akimichi Takemura, and Kei Takeuchi (March 2006).
    The authors prove several versions of the game-theoretic strong law of large numbers in the case where Reality's moves are unbounded. Stochastics 79, 449–468 (2007).
  5. "Implications of contrarian and one-sided strategies for the fair-coin game" by Yasunori Horikoshi and Akimichi Takemura (March 2007).
    The authors derive results on contrarian and one-sided strategies for Skeptic in the fair-coin game. For the strong law of large numbers, they prove that Skeptic can prevent the convergence from being faster than n^(-1/2). They also derive a corresponding one-sided result. Stochastic Processes and their Applications 118, 2125–2142 (2008).
  6. "A new formulation of asset trading games in continuous time with essential forcing of variation exponent" by Kei Takeuchi, Masayuki Kumon, and Akimichi Takemura (August 2007, revised January 2010).
    This article introduces a new formulation of continuous-time asset trading in the game-theoretic framework for probability. The market moves continuously but an investor trades at discrete times which can depend on the past path of the market. Bernoulli 15, 1243–1258 (2009).
  7. "Multistep Bayesian strategy in coin-tossing games and its application to asset trading games in continuous time" by Kei Takeuchi, Masayuki Kumon, and Akimichi Takemura (February 2008, revised in March 2008).
    The article studies multistep Bayesian betting strategies in coin-tossing games in the framework of game-theoretic probability. By a countable mixture of these strategies, a gambler or an investor can exploit arbitrary patterns of deviations of nature's moves from independent Bernoulli trials. The authors apply their scheme to asset trading games in continuous time and derive the exponential growth rate of the investor's capital when the variation exponent of the asset price path deviates from two. Stochastic Analysis and Applications 28, 842–861 (2010).
  8. "The generality of the zero-one laws" by Akimichi Takemura, Vladimir Vovk, and Glenn Shafer (March 2008, revised in August 2009).
    The authors prove game-theoretic generalizations of some well-known zero-one laws. Their proofs make the martingales behind the laws explicit, and their results illustrate how martingale arguments can have implications going beyond measure-theoretic probability. Annals of the Institute of Statistical Mathematics 63, 873–885 (2011). The ideas of this paper are further developed in Working Paper 29.
  9. "New procedures for testing whether stock price processes are martingales" by Kei Takeuchi, Akimichi Takemura, and Masayuki Kumon (July 2009, revised February 2010).
    The authors propose procedures for testing whether stock price processes are martingales, based on limit-order-type betting strategies. With high-frequency Markov-type strategies they find that martingale null hypotheses are rejected for many stocks traded on the Tokyo Stock Exchange. Computational Economics 37, 67–88 (2010).
  10. "Sequential optimizing strategy in multi-dimensional bounded forecasting games" by Masayuki Kumon, Akimichi Takemura, and Kei Takeuchi (November 2009).
    The authors propose a sequential optimizing betting strategy in the multi-dimensional bounded forecasting game in the framework of game-theoretic probability. By studying the asymptotic behavior of its capital process, they prove a generalization of the strong law of large numbers. They also introduce an information criterion for selecting efficient betting items. These results are then applied to multiple asset trading strategies in discrete-time and continuous-time games. They conclude with numerical examples involving stock price data from the Tokyo Stock Exchange. Stochastic Processes and their Applications 121, 155–183 (2011).
  11. "Sequential optimizing investing strategy with neural networks" by Ryo Adachi and Akimichi Takemura (February 2010).
    The authors propose an investing strategy based on neural network models combined with ideas from game-theoretic probability. The strategy decides the investment in the current round using the parameter values of the neural network that performed best up to the previous round (trading day). The authors compare it with various alternatives, including a strategy based on supervised neural network models, and show that it is competitive. Expert Systems with Applications 38, 12991–12998 (2011).
  12. "Approximations and asymptotics of upper hedging prices in multinomial models" by Ryuichi Nakajima, Masayuki Kumon, Akimichi Takemura, and Kei Takeuchi (July 2010, revised June 2011).
    This paper contains an exposition and numerical studies of upper hedging prices in multinomial models from the viewpoint of linear programming and game-theoretic probability. The authors show that, as the number of rounds goes to infinity, the upper hedging price of a European option converges to the solution of the Black-Scholes-Barenblatt equation. Japan Journal of Industrial and Applied Mathematics 25, 1–21 (2012).
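    As a toy illustration of the linear-programming viewpoint (a sketch, not taken from the paper), the upper hedging price in a one-dimensional discrete model can be computed by backward induction, using the LP dual fact that extreme martingale measures are supported on at most two moves; the function names, the recombining trinomial tree, and the parameter u = 1.25 below are illustrative assumptions:

```python
from math import inf

def upper_price(returns, payoffs):
    """One-period upper hedging price: maximize the expected payoff over
    martingale measures on the given price moves.  In one dimension the
    extreme martingale measures put mass on at most two moves."""
    best = -inf
    for r, f in zip(returns, payoffs):
        if abs(r) < 1e-12:          # a zero-return move is itself a martingale measure
            best = max(best, f)
    for ri, fi in zip(returns, payoffs):
        if ri >= 0:
            continue
        for rj, fj in zip(returns, payoffs):
            if rj <= 0:
                continue
            q = -ri / (rj - ri)     # weight on the up move making the pair a martingale
            best = max(best, q * fj + (1 - q) * fi)
    return best

def upper_hedge_call(s0, strike, steps, u=1.25):
    """Backward induction for a European call on a recombining trinomial
    tree with price factors u, 1, and d = 1/u."""
    d = 1 / u
    vals = {k: max(s0 * u**k - strike, 0.0) for k in range(-steps, steps + 1)}
    for t in range(steps - 1, -1, -1):
        vals = {k: upper_price(
                    [s0 * u**k * (u - 1), 0.0, s0 * u**k * (d - 1)],
                    [vals[k + 1], vals[k], vals[k - 1]])
                for k in range(-t, t + 1)}
    return vals[0]

print(upper_hedge_call(100.0, 100.0, 5))
```

In one period with s0 = strike = 100 this gives 100/9, the price under the martingale measure supported on the two extreme moves; for a convex payoff the middle move never binds, which is a small instance of the submodularity-style structure exploited in closed-form results.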
  13. "Convergence of random series and the rate of convergence of the strong law of large numbers in game-theoretic probability" by Kenshi Miyabe and Akimichi Takemura (March 2011, last revised November 2011).
    This paper studies the convergence of random series and the rate of convergence in the strong law of large numbers in the framework of game-theoretic probability, considering both the standard quadratic hedge and more general hedges. The optimality of several of the paper's results is established by constructing suitable strategies for Reality; unusually, these strategies are deterministic and constructive. Stochastic Processes and their Applications 122, 1–30 (2012).
  14. "Bayesian logistic betting strategy against probability forecasting" by Masayuki Kumon, Jing Li, Akimichi Takemura, and Kei Takeuchi (April 2012).
    This paper proposes a betting strategy based on Bayesian logistic regression modeling for the probability forecasting game. It proves some results concerning the strong law of large numbers in the probability forecasting game with side information. The proposed strategy performs well against the precipitation probability forecasts issued by the Japan Meteorological Agency. Stochastic Analysis and Applications 31, 214–234 (2013).
  15. "The law of the iterated logarithm in game-theoretic probability with quadratic and stronger hedges" by Kenshi Miyabe and Akimichi Takemura (August 2012, last revised in April 2013).
    This paper proves both the validity and the sharpness of the law of the iterated logarithm in game-theoretic probability with quadratic and stronger hedges. Stochastic Processes and their Applications 123, 3132–3152 (2013).
  16. "Derandomization in game-theoretic probability" by Kenshi Miyabe and Akimichi Takemura (February 2014, revised in August 2014).
    The paper gives a general method of constructing deterministic strategies for Reality from randomized ones. Stochastic Processes and their Applications 125, 39–59 (2015).
  17. "A game-theoretic proof of Erdos-Feller-Kolmogorov-Petrowsky law of the iterated logarithm for fair-coin tossing" by Takeyuki Sasai, Kenshi Miyabe, and Akimichi Takemura (August 2014).
    The proof, based on a Bayesian strategy, is explicit, as are many other proofs in game-theoretic probability.
  18. "Erdos-Feller-Kolmogorov-Petrowsky law of the iterated logarithm for self-normalized martingales: a game-theoretic approach" by Takeyuki Sasai, Kenshi Miyabe, and Akimichi Takemura (April 2015).
    A result similar to that of the previous paper is now proved under much weaker conditions.
  19. "Relation between the rate of convergence of strong law of large numbers and the rate of concentration of Bayesian prior in game-theoretic probability" by Ryosuke Sato, Kenshi Miyabe, and Akimichi Takemura (April 2016).
    This paper studies the behavior of the capital process of a continuous Bayesian mixture of fixed proportion betting strategies in the one-sided unbounded forecasting game, establishing the relation between the rate of convergence of the strong law of large numbers in the self-normalized form and the rate of divergence to infinity of the prior density around the origin. Stochastic Processes and their Applications 128, 1466–1484 (2018).
  20. "Game-theoretic derivation of upper hedging prices of multivariate contingent claims and submodularity" by Takeru Matsuda and Akimichi Takemura (June 2018).
    The paper investigates upper and lower hedging prices of multivariate contingent claims from the viewpoint of game-theoretic probability and submodularity. By considering a discrete-time game between Market and Investor, the authors reduce the pricing problem to a backward induction. For important classes of options the problem is solved in closed form, and the upper and lower hedging prices can be calculated efficiently. The authors also study the asymptotic behavior as the number of game rounds goes to infinity. Numerical results confirm the theoretical findings.

This page is maintained by Vladimir Vovk and Glenn Shafer.   Last modified on 4 December 2023