ABC Cancún

  • Published on
    23-Aug-2014


DESCRIPTION

ABC short course at ISBA 2014, Cancún

Transcript

  • ABC methodology and applications Jean-Michel Marin & Christian P. Robert I3M, Université Montpellier 2, Université Paris-Dauphine, & University of Warwick ISBA 2014, Cancún, July 13
  • Outline 1 Approximate Bayesian computation 2 ABC as an inference machine 3 ABC for model choice 4 Genetics of ABC
  • Approximate Bayesian computation 1 Approximate Bayesian computation ABC basics Alphabet soup ABC-MCMC 2 ABC as an inference machine 3 ABC for model choice 4 Genetics of ABC
  • Intractable likelihoods Cases when the likelihood function f(y|θ) is unavailable and when the completion step f(y|θ) = ∫_Z f(y, z|θ) dz is impossible or too costly because of the dimension of z: MCMC cannot be implemented!
  • The ABC method Bayesian setting: target is π(θ)f(y|θ). When the likelihood f(y|θ) is not in closed form, likelihood-free rejection technique: ABC algorithm For an observation y ∼ f(y|θ), under the prior π(θ), keep jointly simulating θ′ ∼ π(θ), z ∼ f(z|θ′), until the auxiliary variable z is equal to the observed value, z = y. [Tavaré et al., 1997]
  • Why does it work?! The proof is trivial: f(θ_i) ∝ Σ_{z∈D} π(θ_i) f(z|θ_i) I_y(z) ∝ π(θ_i) f(y|θ_i) = π(θ_i|y). [Accept-Reject 101]
  • Earlier occurrence "Bayesian statistics and Monte Carlo methods are ideally suited to the task of passing many models over one dataset" [Don Rubin, Annals of Statistics, 1984] Note Rubin (1984) does not promote this algorithm for likelihood-free simulation but for the frequentist intuition on posterior distributions: parameters from posteriors are more likely to be those that could have generated the data.
  • A as A...pproximative When y is a continuous random variable, equality z = y is replaced with a tolerance condition, ρ(y, z) ≤ ε, where ρ is a distance. Output distributed from π(θ) P_θ{ρ(y, z) < ε} ∝ π(θ | ρ(y, z) < ε) [Pritchard et al., 1999]
  • ABC algorithm Algorithm 1 Likelihood-free rejection sampler: for i = 1 to N do repeat generate θ′ from the prior distribution π(·), generate z from the likelihood f(·|θ′), until ρ{η(z), η(y)} ≤ ε, set θ_i = θ′, end for, where η(y) defines a (not necessarily sufficient) [summary] statistic
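A minimal R sketch of Algorithm 1, with every modelling choice assumed here for illustration (prior N(0,1), data y ∼ N(θ,1), sample mean as summary η, tolerance taken as a 1% quantile of the simulated distances, as in the practice described below):

    # Likelihood-free rejection sampler (Algorithm 1), illustrative sketch
    set.seed(1)
    n <- 50
    y <- rnorm(n, mean = 1.5)          # observed sample (simulated here for illustration)
    N <- 1e5
    theta <- rnorm(N)                  # draws from the prior
    eta_y <- mean(y)                   # summary statistic eta(y)
    eta_z <- sapply(theta, function(th) mean(rnorm(n, mean = th)))  # eta(z), z ~ f(.|theta)
    dist <- abs(eta_z - eta_y)         # rho{eta(z), eta(y)}
    eps <- quantile(dist, 0.01)        # tolerance as a quantile of the distances
    abc_sample <- theta[dist <= eps]   # accepted parameter values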
  • Output The likelihood-free algorithm samples from the marginal in z of: π_ε(θ, z|y) = π(θ) f(z|θ) I_{A_{ε,y}}(z) / ∫_{A_{ε,y}×Θ} π(θ) f(z|θ) dz dθ, where A_{ε,y} = {z ∈ D | ρ(η(z), η(y)) < ε}. The idea behind ABC is that the summary statistics coupled with a small tolerance should provide a good approximation of the posterior distribution: π_ε(θ|y) = ∫ π_ε(θ, z|y) dz ≈ π(θ|y).
  • A toy example Case of y|θ ∼ N(2(θ+2)θ(θ−2), 0.1 + θ²) and θ ∼ U[−10,10], when y = 2 and ρ(y, z) = |y − z| [Richard Wilkinson, Tutorial at NIPS 2013]
  • A toy example [figures]: for each tolerance ε = 7.5, 5, 2.5 and 1, scatterplot of θ against the distance D = ρ(y, z) with the ε acceptance band (left) and the ABC density estimate of θ against the true posterior density (right).
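A short R sketch of this toy example, assuming the model reconstructed above (mean 2(θ+2)θ(θ−2), variance 0.1+θ²); the tolerances mirror the four figures and show the ABC output concentrating as ε decreases:

    # ABC rejection for the toy example, illustrative sketch
    set.seed(2)
    y <- 2
    N <- 1e5
    theta <- runif(N, -10, 10)                         # prior U[-10,10]
    z <- rnorm(N, mean = 2*(theta+2)*theta*(theta-2),  # y|theta ~ N(m(theta), 0.1+theta^2)
               sd = sqrt(0.1 + theta^2))
    for (eps in c(7.5, 5, 2.5, 1)) {
      keep <- abs(z - y) < eps                         # rho(y,z) = |y - z|
      cat("eps =", eps, "acceptance rate =", mean(keep), "\n")
      # plot(density(theta[keep]))                     # compare with the true posterior
    }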
  • ABC as knn [Biau et al., 2013, Annales de l'IHP] Practice of ABC: determine the tolerance ε as a quantile on observed distances, say the 10% or 1% quantile, ε = ε_N = q_α(d_1, . . . , d_N). Interpretation of ε as a nonparametric bandwidth is only an approximation of the actual practice [Blum & François, 2010]. ABC is a k-nearest neighbour (knn) method with k_N = N ε_N [Loftsgaarden & Quesenberry, 1965]
  • ABC consistency Provided k_N/log log N → ∞ and k_N/N → 0 as N → ∞, for almost all s_0 (with respect to the distribution of S), with probability 1, (1/k_N) Σ_{j=1}^{k_N} φ(θ_j) → E[φ(θ_j)|S = s_0] [Devroye, 1982]. Biau et al. (2013) also recall pointwise and integrated mean square error consistency results on the corresponding kernel estimate of the conditional posterior distribution, under the constraints k_N → ∞, k_N/N → 0, h_N → 0 and h_N^p k_N → ∞.
  • Rates of convergence Further assumptions (on target and kernel) allow for precise (integrated mean square) convergence rates (as a power of the sample size N), derived from classical k-nearest neighbour regression, like: when m = 1, 2, 3, k_N ≈ N^{(p+4)/(p+8)} and rate N^{−4/(p+8)}; when m = 4, k_N ≈ N^{(p+4)/(p+8)} and rate N^{−4/(p+8)} log N; when m > 4, k_N ≈ N^{(p+4)/(m+p+4)} and rate N^{−4/(m+p+4)} [Biau et al., 2013], where p is the dimension of θ and m the dimension of the summary statistics. Drag: only applies to sufficient summary statistics
  • Probit modelling on Pima Indian women Example (R benchmark) 200 Pima Indian women with observed variables: plasma glucose concentration in oral glucose tolerance test, diastolic blood pressure, diabetes pedigree function, presence/absence of diabetes. Probability of diabetes as a function of the above variables, P(y = 1|x) = Φ(x_1 β_1 + x_2 β_2 + x_3 β_3), for 200 observations, based on a g-prior modelling: β ∼ N_3(0, n (XᵀX)⁻¹). Use of the MLE estimates as summary statistics and of the distance ρ(y, z) = ||β̂(z) − β̂(y)||²
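A hedged R sketch of this benchmark, using MASS::Pima.tr (200 women, variables glu, bp, ped, type); standardising the covariates and the other tuning choices below are ours for numerical stability, not taken from the slides:

    # ABC with MLE summaries for the probit benchmark, illustrative sketch
    library(MASS)
    X <- scale(as.matrix(Pima.tr[, c("glu", "bp", "ped")]))   # standardised covariates (a choice)
    yobs <- as.numeric(Pima.tr$type == "Yes")
    n <- nrow(X)
    mle <- function(y) suppressWarnings(
      coef(glm(y ~ X - 1, family = binomial(link = "probit"))))
    eta_y <- mle(yobs)                                  # MLE as summary statistic
    Sigma <- n * solve(crossprod(X))                    # g-prior covariance n (X'X)^{-1}
    N <- 1e4
    theta <- mvrnorm(N, mu = rep(0, 3), Sigma = Sigma)  # beta ~ N3(0, n (X'X)^{-1})
    dist <- apply(theta, 1, function(b) {
      z <- rbinom(n, 1, pnorm(X %*% b))                 # pseudo-data from the probit model
      sum((mle(z) - eta_y)^2)                           # rho = ||hat beta(z) - hat beta(y)||^2
    })
    abc <- theta[dist <= quantile(dist, 0.01), ]        # retained posterior sample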
  • Pima Indian benchmark [figure]: comparison between density estimates of the marginals on β_1 (left), β_2 (center) and β_3 (right) from ABC rejection samples (red) and MCMC samples (black).
  • MA example Case of the MA(2) model x_t = ε_t + Σ_{i=1}^{2} θ_i ε_{t−i}. Simple prior: uniform prior over the identifiability zone, e.g. the triangle for MA(2)
  • MA example (2) ABC algorithm thus made of 1. picking a new value (θ_1, θ_2) in the triangle, 2. generating an iid sequence (ε_t)_{−2<t≤T}, 3. producing the corresponding simulated series (x′_t)_{1≤t≤T}
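An R sketch of these steps; the observed series is simulated here, and the distance on the first two autocovariances is one possible choice (the distance-comparison slides that follow contrast two such choices):

    # ABC for the MA(2) example, illustrative sketch
    set.seed(3)
    Tn <- 100
    xobs <- arima.sim(list(ma = c(0.6, 0.2)), n = Tn)          # stand-in observed series
    acov <- function(x) { n <- length(x)
      c(sum(x[-1] * x[-n]), sum(x[-(1:2)] * x[-((n-1):n)])) }  # lag-1 and lag-2 autocovariances
    eta_y <- acov(xobs)
    rtriangle <- function() {                                  # uniform draw on the MA(2) triangle
      repeat { th <- c(runif(1, -2, 2), runif(1, -1, 1))
               if (th[1] + th[2] > -1 && th[1] - th[2] < 1) return(th) } }
    N <- 1e4
    theta <- t(replicate(N, rtriangle()))                      # step 1: values in the triangle
    dist <- apply(theta, 1, function(th) {
      e <- rnorm(Tn + 2)                                       # step 2: iid sequence (eps_t)
      z <- e[-(1:2)] + th[1]*e[2:(Tn+1)] + th[2]*e[1:Tn]       # step 3: simulated series
      sum((acov(z) - eta_y)^2) })
    abc <- theta[dist <= quantile(dist, 0.01), ]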
  • Comparison of distance impact [figures] Evaluation of the tolerance on the ABC sample against both distances (ε = 10%, 1%, 0.1% quantiles of simulated distances) for an MA(2) model: posterior density estimates of θ_1 and θ_2.
  • Homonymy The ABC algorithm is not to be confused with the ABC algorithm: "The Artificial Bee Colony algorithm is a swarm based meta-heuristic algorithm that was introduced by Karaboga in 2005 for optimizing numerical problems. It was inspired by the intelligent foraging behavior of honey bees. The algorithm is specifically based on the model proposed by Tereshko and Loengarov (2005) for the foraging behaviour of honey bee colonies. The model consists of three essential components: employed and unemployed foraging bees, and food sources. The first two components, employed and unemployed foraging bees, search for rich food sources (...) close to their hive. The model also defines two leading modes of behaviour (...): recruitment of foragers to rich food sources resulting in positive feedback and abandonment of poor sources by foragers causing negative feedback." [Karaboga, Scholarpedia]
  • ABC advances Simulating from the prior is often poor in efficiency: either modify the proposal distribution on θ to increase the density of z's within the vicinity of y... [Marjoram et al., 2003; Bortot et al., 2007; Sisson et al., 2007] ...or view the problem as conditional density estimation and develop techniques to allow for larger ε [Beaumont et al., 2002] ...or even include ε in the inferential framework [ABCμ] [Ratmann et al., 2009]
  • ABC-NP Better usage of [prior] simulations by adjustment: instead of throwing away θ′ such that ρ(η(z), η(y)) > ε, replace the θ's with locally regressed transforms θ* = θ − {η(z) − η(y)}ᵀ β̂ [Csilléry et al., TEE, 2010], where β̂ is obtained by [NP] weighted least square regression of θ on (η(z) − η(y)) with weights K_δ{ρ(η(z), η(y))} [Beaumont et al., 2002, Genetics]
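A minimal R sketch of this local-linear adjustment for a one-dimensional summary, with an Epanechnikov kernel; the function and object names are ours, and it can be applied to the reference table produced by the rejection sampler sketched earlier:

    # ABC-NP local regression adjustment (Beaumont et al., 2002), illustrative sketch
    abc_adjust <- function(theta, eta_z, eta_y, delta) {
      d <- abs(eta_z - eta_y)
      w <- ifelse(d < delta, 1 - (d / delta)^2, 0)       # Epanechnikov weights K_delta{rho}
      keep <- w > 0
      fit <- lm(theta[keep] ~ I(eta_z[keep] - eta_y), weights = w[keep])
      theta[keep] - coef(fit)[2] * (eta_z[keep] - eta_y) # theta* = theta - {eta(z)-eta(y)} beta_hat
    }
    # e.g. theta_star <- abc_adjust(theta, eta_z, eta_y, delta = quantile(abs(eta_z - eta_y), .1))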
  • ABC-NP (regression) Also found in the subsequent literature, e.g. in Fearnhead-Prangle (2012): weight directly the simulation by K_δ{ρ(η(z(θ)), η(y))} or (1/S) Σ_{s=1}^{S} K_δ{ρ(η(z^s(θ)), η(y))} [consistent estimate of f(η|θ)]. Curse of dimensionality: poor estimate when d = dim(η) is large...
  • ABC-NP (regression) Use of the kernel weights K_δ{ρ(η(z(θ)), η(y))} leads to the NP estimate of the posterior expectation Σ_i θ_i K_δ{ρ(η(z(θ_i)), η(y))} / Σ_i K_δ{ρ(η(z(θ_i)), η(y))} [Blum, JASA, 2010]
  • ABC-NP (regression) Use of the kernel weights K_δ{ρ(η(z(θ)), η(y))} leads to the NP estimate of the posterior conditional density Σ_i K_b(θ − θ_i) K_δ{ρ(η(z(θ_i)), η(y))} / Σ_i K_δ{ρ(η(z(θ_i)), η(y))} [Blum, JASA, 2010]
  • ABC-NP (regression) [figure]: illustration of ABC and regression adjustment, parameter θ plotted against the summary S around the observed s_obs.
  • ABC-NP (density estimations) Other versions incorporating regression adjustments: Σ_i K_b(θ*_i − θ) K_δ{ρ(η(z(θ_i)), η(y))} / Σ_i K_δ{ρ(η(z(θ_i)), η(y))}. In all cases, error E[ĝ(θ|y)] − g(θ|y) = cb² + cε² + O_P(b² + ε²) + O_P(1/nε^d) and var(ĝ(θ|y)) = c/(nbε^d) (1 + o_P(1)) [Blum, JASA, 2010; standard NP calculations]
  • ABC-NCH Incorporating non-linearities and heteroscedasticities: θ* = m̂(η(y)) + [θ − m̂(η(z))] σ̂(η(y))/σ̂(η(z)), where m̂(·) is estimated by non-linear regression (e.g., a neural network) and σ̂(·) is estimated by non-linear regression on the residuals, log{θ_i − m̂(η_i)}² = log σ²(η_i) + ξ_i [Blum & François, 2009]
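A possible R sketch of this adjustment with the nnet package standing in for the non-linear regressions; the network size and the use of nnet are choices made here, not prescriptions from the slides:

    # ABC-NCH adjustment (Blum & Francois, 2009), illustrative sketch
    # theta: vector of accepted draws, s: matrix of matching summaries, s_obs: observed summaries
    library(nnet)
    nch_adjust <- function(theta, s, s_obs, size = 5) {
      m_fit <- nnet(s, theta, size = size, linout = TRUE, trace = FALSE)      # m_hat(eta)
      res   <- theta - drop(predict(m_fit, s))
      v_fit <- nnet(s, log(res^2), size = size, linout = TRUE, trace = FALSE) # log sigma^2(eta)
      s_new <- matrix(s_obs, nrow = 1)
      drop(predict(m_fit, s_new)) +                                           # theta* per ABC-NCH
        res * sqrt(exp(drop(predict(v_fit, s_new)))) / sqrt(exp(drop(predict(v_fit, s))))
    }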
  • ABC-NCH (2) Why a neural network? fights the curse of dimensionality, selects relevant summary statistics, provides automated dimension reduction, offers a model choice capability, improves upon multinomial logistic [Blum & François, 2009]
  • ABC-MCMC Markov chain (θ^(t)) created via the transition function: θ^(t+1) = θ′ ∼ K_ω(θ′|θ^(t)) if x ∼ f(x|θ′) is such that x = y and u ∼ U(0, 1) ≤ π(θ′)K_ω(θ^(t)|θ′) / [π(θ^(t))K_ω(θ′|θ^(t))]; θ^(t+1) = θ^(t) otherwise; has the posterior π(θ|y) as stationary distribution [Marjoram et al., 2003]
  • ABC-MCMC (2) Algorithm 2 Likelihood-free MCMC sampler: Use Algorithm 1 to get (θ^(0), z^(0)); for t = 1 to N do Generate θ′ from K_ω(·|θ^(t−1)), Generate z′ from the likelihood f(·|θ′), Generate u from U[0,1], if u ≤ π(θ′)K_ω(θ^(t−1)|θ′) / [π(θ^(t−1))K_ω(θ′|θ^(t−1))] I_{A_{ε,y}}(z′) then set (θ^(t), z^(t)) = (θ′, z′) else (θ^(t), z^(t)) = (θ^(t−1), z^(t−1)), end if, end for
  • Why does it work? Acceptance probability does not involve calculating the likelihood, since π_ε(θ′, z′|y)/π_ε(θ^(t−1), z^(t−1)|y) × q(θ^(t−1)|θ′) f(z^(t−1)|θ^(t−1)) / [q(θ′|θ^(t−1)) f(z′|θ′)] = [π(θ′) f(z′|θ′) I_{A_{ε,y}}(z′)] / [π(θ^(t−1)) f(z^(t−1)|θ^(t−1)) I_{A_{ε,y}}(z^(t−1))] × [q(θ^(t−1)|θ′) f(z^(t−1)|θ^(t−1))] / [q(θ′|θ^(t−1)) f(z′|θ′)]: the likelihood terms f(z′|θ′) and f(z^(t−1)|θ^(t−1)) cancel, and I_{A_{ε,y}}(z^(t−1)) = 1, leaving π(θ′) q(θ^(t−1)|θ′) / [π(θ^(t−1)) q(θ′|θ^(t−1))] × I_{A_{ε,y}}(z′)
  • A toy example Case of x ∼ ½N(θ, 1) + ½N(−θ, 1) under prior θ ∼ N(0, 10²). ABC sampler:
    thetas=rnorm(N,sd=10)
    zed=sample(c(1,-1),N,rep=TRUE)*thetas+rnorm(N,sd=1)
    eps=quantile(abs(zed-x),.01)
    abc=thetas[abs(zed-x)<eps]
  • A toy example Case of x ∼ ½N(θ, 1) + ½N(−θ, 1) under prior θ ∼ N(0, 10²). ABC-MCMC sampler:
    # eps, zed and x as defined by the ABC sampler above
    metas=rep(0,N)
    metas[1]=rnorm(1,sd=10)
    zed[1]=x
    for (t in 2:N){
      metas[t]=rnorm(1,mean=metas[t-1],sd=5)
      zed[t]=rnorm(1,mean=(1-2*(runif(1)<.5))*metas[t],sd=1)
      if ((abs(zed[t]-x)>eps)||(runif(1)>dnorm(metas[t],sd=10)/dnorm(metas[t-1],sd=10))){
        metas[t]=metas[t-1]
        zed[t]=zed[t-1]}
    }
  • A toy example [figures]: resulting posterior samples (density estimates) for observed values x = 2, x = 10 and x = 50.
  • A PMC version Use of the same kernel idea as ABC-PRC (Sisson et al., 2007) but with IS correction. Generate a sample at iteration t by π̂_t(θ^(t)) ∝ Σ_{j=1}^{N} ω_j^(t−1) K_t(θ^(t) | θ_j^(t−1)), modulo acceptance of the associated x_t, and use an importance weight associated with an accepted simulation θ_i^(t): ω_i^(t) ∝ π(θ_i^(t)) / π̂_t(θ_i^(t)). Still likelihood free [Beaumont et al., 2009]
  • ABC-PMC algorithm Given a decreasing sequence of approximation levels ε_1 ≥ . . . ≥ ε_T: 1. At iteration t = 1, for i = 1, ..., N: simulate θ_i^(1) ∼ π(θ) and x ∼ f(x|θ_i^(1)) until ρ(x, y) < ε_1; set ω_i^(1) = 1/N; take σ_2² as twice the empirical variance of the θ_i^(1)'s. 2. At iteration 2 ≤ t ≤ T, for i = 1, ..., N: repeat: pick θ_i* from the θ_j^(t−1)'s with probabilities ω_j^(t−1), generate θ_i^(t)|θ_i* ∼ N(θ_i*, σ_t²) and x ∼ f(x|θ_i^(t)), until ρ(x, y) < ε_t; set ω_i^(t) ∝ π(θ_i^(t)) / Σ_{j=1}^{N} ω_j^(t−1) φ(σ_t⁻¹(θ_i^(t) − θ_j^(t−1))); take σ²_{t+1} as twice the weighted empirical variance of the θ_i^(t)'s.
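An R sketch of ABC-PMC for a single parameter, reusing the normal-mixture toy example from the ABC-MCMC slides as simulator; the tolerance sequence and sample size are arbitrary choices:

    # ABC-PMC (Beaumont et al., 2009), illustrative sketch
    set.seed(4)
    x <- 2; N <- 1000
    eps_seq <- c(2, 1, 0.5, 0.25)                       # decreasing tolerances (assumed)
    sim <- function(th) (1 - 2 * (runif(1) < .5)) * th + rnorm(1)
    theta <- numeric(N)                                 # iteration 1: rejection from the prior
    for (i in 1:N) repeat {
      th <- rnorm(1, sd = 10)
      if (abs(sim(th) - x) < eps_seq[1]) { theta[i] <- th; break } }
    w <- rep(1/N, N)
    sig2 <- 2 * var(theta)                              # twice the empirical variance
    for (t in 2:length(eps_seq)) {
      theta_new <- numeric(N)
      for (i in 1:N) repeat {
        th <- rnorm(1, mean = sample(theta, 1, prob = w), sd = sqrt(sig2))
        if (abs(sim(th) - x) < eps_seq[t]) { theta_new[i] <- th; break } }
      w_new <- dnorm(theta_new, sd = 10) /              # prior over mixture of moved particles
        sapply(theta_new, function(th) sum(w * dnorm(th, mean = theta, sd = sqrt(sig2))))
      w <- w_new / sum(w_new); theta <- theta_new
      sig2 <- 2 * sum(w * (theta - sum(w * theta))^2)   # twice the weighted variance
    }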
  • Sequential Monte Carlo SMC is a simulation technique to approximate a sequence of related probability distributions π_n with π_0 easy and π_T as target. Iterated IS as PMC: particles moved from time n−1 to time n via kernel K_n and use of a sequence of extended targets π̃_n, π̃_n(z_{0:n}) = π_n(z_n) Π_{j=0}^{n−1} L_j(z_{j+1}, z_j), where the L_j's are backward Markov kernels [check that π_n(z_n) is a marginal] [Del Moral, Doucet & Jasra, Series B, 2006]
  • Sequential Monte Carlo (2) Algorithm 3 SMC sampler: sample z_i^(0) ∼ γ_0(x) (i = 1, . . . , N); compute weights w_i^(0) = π_0(z_i^(0))/γ_0(z_i^(0)); for t = 1 to N do if ESS(w^(t−1)) < N_T then resample the N particles z^(t−1) and set the weights to 1, end if; generate z_i^(t) ∼ K_t(z_i^(t−1), ·) and set the weights to w_i^(t) = w_i^(t−1) π_t(z_i^(t)) L_{t−1}(z_i^(t), z_i^(t−1)) / [π_{t−1}(z_i^(t−1)) K_t(z_i^(t−1), z_i^(t))], end for [Del Moral, Doucet & Jasra, Series B, 2006]
  • ABC-SMC [Del Moral, Doucet & Jasra, 2009] True derivation of an SMC-ABC algorithm. Use of a kernel K_n associated with target π_{ε_n} and derivation of the backward kernel L_{n−1}(z, z′) = π_{ε_n}(z′) K_n(z′, z) / π_{ε_n}(z). Update of the weights w_{in} ∝ w_{i(n−1)} [Σ_{m=1}^{M} I_{A_{ε_n}}(x^m_{in})] / [Σ_{m=1}^{M} I_{A_{ε_{n−1}}}(x^m_{i(n−1)})] when x^m_{in} ∼ K(x_{i(n−1)}, ·)
  • ABC-SMCM Modification: makes M repeated simulations of the pseudo-data z given the parameter, rather than using a single [M = 1] simulation, leading to a weight that is proportional to the number of accepted z_i's, ω(θ) = (1/M) Σ_{i=1}^{M} I_{ρ(η(y),η(z_i))<ε} [limit in M means exact simulation from the (tempered) target]
  • Properties of ABC-SMC The ABC-SMC method properly uses a backward kernel L(z, z′) to simplify the importance weight and to remove the dependence on the unknown likelihood from this weight. Update of the importance weights is reduced to the ratio of the proportions of surviving particles. Major assumption: the forward kernel K is supposed to be invariant against the true target [tempered version of the true posterior]. Adaptivity in the ABC-SMC algorithm is only found in the on-line construction of the thresholds ε_t, slowly enough to keep a large number of accepted transitions
  • A mixture example (2) Recovery of the target, whether using a fixed standard deviation of τ = 0.15 or τ = 1/0.15, or a sequence of adaptive τ_t's [figures: five density estimates against the target].
  • ABC inference machine 1 Approximate Bayesian computation 2 ABC as an inference machine Exact ABC ABCμ Automated summary statistic selection 3 ABC for model choice 4 Genetics of ABC
  • How Bayesian is aBc..? may be a convergent method of inference (meaningful? sufficient? foreign?); approximation error unknown (w/o massive simulation); pragmatic/empirical B (there is no other solution!); many calibration issues (tolerance, distance, statistics); the NP side should be incorporated into the whole B picture; the approximation error should also be part of the B inference
  • Wilkinson's exact (A)BC ABC approximation error (i.e. non-zero tolerance) replaced with exact simulation from a controlled approximation to the target, convolution of the true posterior with a kernel function, π_ε(θ, z|y) = π(θ) f(z|θ) K_ε(y − z) / ∫ π(θ) f(z|θ) K_ε(y − z) dz dθ, with K_ε a kernel parameterised by the bandwidth ε. [Wilkinson, 2013] Theorem The ABC algorithm based on the assumption of a randomised observation y = ỹ + ξ, ξ ∼ K_ε, and an acceptance probability of K_ε(y − z)/M gives draws from the posterior distribution π(θ|y).
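A small R sketch of the acceptance rule in the theorem, with a Gaussian kernel K_ε (so the bound is M = K_ε(0)) and the earlier toy model reused for illustration:

    # Wilkinson's exact-ABC acceptance rule, illustrative sketch
    set.seed(5)
    y <- 2; eps <- 0.5; N <- 1e5
    theta <- runif(N, -10, 10)
    z <- rnorm(N, mean = 2*(theta+2)*theta*(theta-2), sd = sqrt(0.1 + theta^2))
    M <- dnorm(0, sd = eps)                             # bound on the Gaussian kernel K_eps
    accept <- runif(N) < dnorm(y - z, sd = eps) / M     # accept with probability K_eps(y - z)/M
    abc_exact <- theta[accept]   # exact draws from the posterior under y = y~ + xi, xi ~ K_eps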
  • How exact a BC? "Using ε to represent measurement error is straightforward, whereas using ε to model the model discrepancy is harder to conceptualize and not as commonly used" [Richard Wilkinson, 2013]
  • How exact a BC? Pros: pseudo-data from the true model and observed data from the noisy model; interesting perspective in that the outcome is completely controlled; link with ABCμ when assuming y is observed with a measurement error with density K_ε; relates to the theory of model approximation [Kennedy & O'Hagan, 2001]. Cons: requires K_ε to be bounded by M; true approximation error never assessed; requires a modification of the standard ABC algorithm
  • Noisy ABC Idea: modify the data from the start, ỹ = y_0 + εζ_1, with the same scale ε as ABC [see Fearnhead-Prangle], and run ABC on ỹ. Then ABC produces an exact simulation from the posterior given the noisy data, π_ε(θ|ỹ) = π(θ|ỹ) [Dean et al., 2011; Fearnhead and Prangle, 2012]
  • Consistent noisy ABC Degrading the data improves the estimation performances: noisy ABC-MLE is asymptotically (in n) consistent; under further assumptions, the noisy ABC-MLE is asymptotically normal; increase in variance of order ε²; likely degradation in precision or computing time due to the lack of summary statistic [curse of dimensionality]
  • ABCμ [Ratmann, Andrieu, Wiuf and Richardson, 2009, PNAS] Use of a joint density f(θ, ε|y) ∝ ξ(ε|y, θ) × π_θ(θ) × π_ε(ε), where y is the data and ξ(ε|y, θ) is the prior predictive density of ρ(η(z), η(y)) given θ and y when z ∼ f(z|θ). Warning! Replacement of ξ(ε|y, θ) with a non-parametric kernel approximation.
  • ABCμ details Multidimensional distances ρ_k (k = 1, . . . , K) and errors ε_k = ρ_k(η_k(z), η_k(y)), with ε_k ∼ ξ_k(ε|y, θ), ξ̂_k(ε|y, θ) = (1/(B h_k)) Σ_b K[{ε_k − ρ_k(η_k(z_b), η_k(y))}/h_k], then used in replacing ξ(ε|y, θ) with min_k ξ̂_k(ε|y, θ). ABCμ involves the acceptance probability π(θ′, ε′) q(θ′, θ) q(ε′, ε) min_k ξ̂_k(ε′|y, θ′) / [π(θ, ε) q(θ, θ′) q(ε, ε′) min_k ξ̂_k(ε|y, θ)]
  • ABCμ multiple errors [figure, © Ratmann et al., PNAS, 2009]
  • ABCμ for model choice [figure, © Ratmann et al., PNAS, 2009]
  • Semi-automatic ABC Fearnhead and Prangle (2012) study ABC and the selection of the summary statistic in close proximity to Wilkinson's proposal. ABC is then considered from a purely inferential viewpoint and calibrated for estimation purposes. Use of a randomised (or noisy) version of the summary statistics, η̃(y) = η(y) + τξ. Derivation of a well-calibrated version of ABC, i.e. an algorithm that gives proper predictions for the distribution associated with this randomised summary statistic.
  • Summary statistics Main results: optimality of the posterior expectation E[θ|y] of the parameter of interest as summary statistics η(y)! Use of the standard quadratic loss function (θ − θ_0)ᵀ A(θ − θ_0). [bare summary]
  • Details on Fearnhead and Prangle (F&P) ABC Use of a summary statistic S(·), an importance proposal g(·), a kernel K(·) ≤ 1 and a bandwidth h > 0 such that (θ, y_sim) ∼ g(θ)f(y_sim|θ) is accepted with probability (hence the bound) K[{S(y_sim) − s_obs}/h], and the corresponding importance weight defined by π(θ)/g(θ) [Fearnhead & Prangle, 2012]
  • Average acceptance asymptotics For the average acceptance probability/approximate likelihood p(θ|s_obs) = ∫ f(y_sim|θ) K[{S(y_sim) − s_obs}/h] dy_sim, the overall acceptance probability p(s_obs) = ∫ p(θ|s_obs) π(θ) dθ = π(s_obs) h^d + o(h^d) [F&P, Lemma 1]
  • Calibration of h "This result gives insight into how S(·) and h affect the Monte Carlo error. To minimize Monte Carlo error, we need h^d to be not too small. Thus ideally we want S(·) to be a low dimensional summary of the data that is sufficiently informative about θ that π(θ|s_obs) is close, in some sense, to π(θ|y_obs)" (F&P, p.5): turns h into an absolute value while it should be context-dependent and user-calibrated; only addresses one term in the approximation error and acceptance probability (curse of dimensionality); h large prevents π_ABC(θ|s_obs) from being close to π(θ|s_obs); d small prevents π(θ|s_obs) from being close to π(θ|y_obs) (curse of [dis]information)
  • Converging ABC Theorem (F&P) For noisy ABC, under identifiability assumptions, the expected noisy-ABC log-likelihood, E{log[p(θ|s_obs)]} = ∫∫ log[p(θ|S(y_obs) + hε)] π(y_obs|θ_0) K(ε) dy_obs dε, has its maximum at θ = θ_0.
  • Converging ABC Corollary For noisy ABC, under regularity constraints on the summary statistics, the ABC posterior converges onto a point mass on the true parameter value as m → ∞.
  • Loss motivated statistic Under the quadratic loss function, Theorem (F&P): (i) the minimal posterior error E[L(θ, θ̂)|y_obs] occurs when θ̂ = E(θ|y_obs) (!); (ii) when h → 0, E_ABC(θ|s_obs) converges to E(θ|y_obs); (iii) if S(y_obs) = E[θ|y_obs] then for θ̂ = E_ABC[θ|s_obs], E[L(θ, θ̂)|y_obs] = trace(AΣ) + h² ∫ xᵀAx K(x) dx + o(h²).
  • Optimal summary statistic "We take a different approach, and weaken the requirement for π_ABC to be a good approximation to π(θ|y_obs). We argue for π_ABC to be a good approximation solely in terms of the accuracy of certain estimates of the parameters." (F&P, p.5) From this result, F&P derive their choice of summary statistic, S(y) = E(θ|y) [almost sufficient; note that E_ABC[θ|S(y_obs)] = E[θ|y_obs]], and suggest h = O(N^{−1/(2+d)}) and h = O(N^{−1/(4+d)}) as optimal bandwidths for noisy and standard ABC.
  • Caveat Since E(|yobs) is most usually unavailable, F&P suggest (i) use a pilot run of ABC to determine a region of non-negligible posterior mass; (ii) simulate sets of parameter values and data; (iii) use the simulated sets of parameter values and data to estimate the summary statistic; and (iv) run ABC with this choice of summary statistic.
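A toy R sketch of steps (ii)-(iv), where E(θ|y) is approximated by a linear regression of the pilot θ's on a few hand-picked data features; the model, prior and features are illustrative choices made here, not F&P's recommendations:

    # Semi-automatic ABC (Fearnhead & Prangle, 2012), illustrative sketch
    set.seed(6)
    n <- 30; yobs <- rnorm(n, mean = 1)                  # stand-in observed data
    simulate <- function(th) rnorm(n, mean = th)
    feats <- function(y) c(mean(y), mean(y)^2, var(y))   # regression features (a choice)
    Np <- 1e4
    theta_p <- rnorm(Np, sd = 5)                         # pilot parameter draws
    Xp <- t(sapply(theta_p, function(th) feats(simulate(th))))
    fit <- lm(theta_p ~ Xp)                              # linear approximation of E[theta|y]
    S <- function(y) sum(coef(fit) * c(1, feats(y)))     # estimated summary S(y) ~ E[theta|y]
    theta <- rnorm(Np, sd = 5)                           # step (iv): standard ABC with S
    dist <- abs(sapply(theta, function(th) S(simulate(th))) - S(yobs))
    abc <- theta[dist <= quantile(dist, 0.01)]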
  • ABC for model choice 1 Approximate Bayesian computation 2 ABC as an inference machine 3 ABC for model choice 4 Genetics of ABC
  • Bayesian model choice Several models M1, M2, . . . are considered simultaneously for a dataset y and the model index M is part of the inference. Use of a prior distribution π(M = m), plus a prior distribution on the parameter conditional on the value m of the model index, π_m(θ_m). Goal is to derive the posterior distribution of M, a challenging computational target when models are complex.
  • Generic ABC for model choice Algorithm 4 Likelihood-free model choice sampler (ABC-MC): for t = 1 to T do repeat Generate m from the prior π(M = m), Generate θ_m from the prior π_m(θ_m), Generate z from the model f_m(z|θ_m), until ρ{η(z), η(y)} < ε, Set m^(t) = m and θ^(t) = θ_m, end for
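A runnable R sketch of Algorithm 4 on an illustrative pair of models (Poisson versus Geometric with the sample sum as summary, priors chosen here for convenience); none of these modelling choices come from the slides:

    # Likelihood-free model choice sampler (ABC-MC), illustrative sketch
    set.seed(7)
    n <- 30
    yobs <- rpois(n, 3)                                  # stand-in observed data
    eta_y <- sum(yobs)
    Nsim <- 1e5
    m <- sample(1:2, Nsim, replace = TRUE)               # model index from its (uniform) prior
    s <- numeric(Nsim)                                   # summary of each pseudo-dataset
    for (t in 1:Nsim) {
      z <- if (m[t] == 1) rpois(n, rexp(1, rate = 1/3)) else rgeom(n, runif(1))
      s[t] <- sum(z)
    }
    dist <- abs(s - eta_y)
    keep <- dist <= quantile(dist, 0.001)
    table(factor(m[keep], levels = 1:2)) / sum(keep)     # ABC estimate of pi(M = m | y)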
  • ABC estimates Posterior probability π(M = m|y) approximated by the frequency of acceptances from model m, (1/T) Σ_{t=1}^{T} I_{m^(t)=m}. Issues with the implementation: should the tolerances ε be the same for all models? should the summary statistics vary across models (incl. their dimension)? should the distance measure vary as well? Extension to a weighted polychotomous logistic regression estimate of π(M = m|y), with non-parametric kernel weights [Cornuet et al., DIYABC, 2009]
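A sketch of the polychotomous-regression extension, reusing m, s, dist and eta_y from the previous block; nnet::multinom is used here as the regression engine and Epanechnikov weights as the non-parametric kernel (with only two models this reduces to binary logistic regression):

    # Weighted polychotomous logistic regression estimate of pi(M = m | y), sketch
    library(nnet)
    delta <- quantile(dist, 0.01)
    w <- ifelse(dist < delta, 1 - (dist/delta)^2, 0)     # non-parametric kernel weights
    dat <- data.frame(model = factor(m), s = s)[w > 0, ]
    fit <- multinom(model ~ s, data = dat, weights = w[w > 0], trace = FALSE)
    predict(fit, newdata = data.frame(s = eta_y), type = "probs")  # P(M = 2 | y) for two models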
  • The Great ABC controversy On-going controversy in phylogeographic genetics about the validity of using ABC for testing. Against: Templeton, 2008, 2009, 2010a, 2010b, 2010c argues that nested hypotheses cannot have higher probabilities than nesting hypotheses (!) Replies: Fagundes et al., 2008, Beaumont et al., 2010, Berger et al., 2010, Csilléry et al., 2010 point out that the criticisms are addressed at [Bayesian] model-based inference and have nothing to do with ABC...
  • Back to sufficiency If η_1(x) is a sufficient statistic for model m = 1 and parameter θ_1 and η_2(x) is a sufficient statistic for model m = 2 and parameter θ_2, (η_1(x), η_2(x)) is not always sufficient for (m, θ_m): potential loss of information at the testing level
  • Limiting behaviour of B12 (T → ∞) ABC approximation B̂12(y) = Σ_{t=1}^{T} I_{m_t=1} I_{ρ{η(z_t),η(y)}≤ε} / Σ_{t=1}^{T} I_{m_t=2} I_{ρ{η(z_t),η(y)}≤ε}, where the (m_t, z_t)'s are simulated from the (joint) prior. As T goes to infinity, the limit is B^ε_12(y) = ∫ I_{ρ{η(z),η(y)}≤ε} π_1(θ_1) f_1(z|θ_1) dz dθ_1 / ∫ I_{ρ{η(z),η(y)}≤ε} π_2(θ_2) f_2(z|θ_2) dz dθ_2 = ∫ I_{ρ{η,η(y)}≤ε} π_1(θ_1) f_1^η(η|θ_1) dη dθ_1 / ∫ I_{ρ{η,η(y)}≤ε} π_2(θ_2) f_2^η(η|θ_2) dη dθ_2, where f_1^η(η|θ_1) and f_2^η(η|θ_2) are the distributions of η(z)
  • Limiting behaviour of B12 (ε → 0) When ε goes to zero, B^η_12(y) = ∫ π_1(θ_1) f_1^η(η(y)|θ_1) dθ_1 / ∫ π_2(θ_2) f_2^η(η(y)|θ_2) dθ_2, i.e. the Bayes factor based on the sole observation of η(y)
  • Limiting behaviour of B12 (under sufficiency) If η(y) is a sufficient statistic for both models, f_i(y|θ_i) = g_i(y) f_i^η(η(y)|θ_i). Thus B12(y) = ∫ π_1(θ_1) g_1(y) f_1^η(η(y)|θ_1) dθ_1 / ∫ π_2(θ_2) g_2(y) f_2^η(η(y)|θ_2) dθ_2 = [g_1(y)/g_2(y)] × B^η_12(y). [Didelot, Everitt, Johansen & Lawson, 2011] No discrepancy only when there is cross-model sufficiency
  • MA(q) divergence [figures] Evolution [against ε] of the ABC Bayes factor, in terms of frequencies of visits to models MA(1) (left) and MA(2) (right) when ε equal to the 10, 1, .1, .01% quantiles on insufficient autocovariance distances. Sample of 50 points from an MA(2) with θ_1 = 0.6, θ_2 = 0.2. True Bayes factor equal to 17.71.
  • MA(q) divergence [figures] Same evolution [against ε] when the sample of 50 points comes from an MA(1) model with θ_1 = 0.6. True Bayes factor B21 equal to .004.
  • Further comments "There should be the possibility that for the same model, but different (non-minimal) [summary] statistics (so different η's: η_1 and η′_1) the ratio of evidences may no longer be equal to one." [Michael Stumpf, Jan. 28, 2011, 'Og] Using different summary statistics [on different models] may indicate the loss of information brought by each set, but agreement does not lead to trustworthy approximations.
  • LDA summaries for model choice In parallel to F&P's semi-automatic ABC, selection of the most discriminant subvector out of a collection of summary statistics can be based on Linear Discriminant Analysis (LDA) [Estoup & al., 2012, Mol. Ecol. Res.]. Solution now implemented in DIYABC.2 [Cornuet & al., 2008, Bioinf.; Estoup & al., 2013]
  • Implementation Step 1: take a subset of the α% (e.g., 1%) best simulations from an ABC reference table usually including 10^6 to 10^9 simulations for each of the M compared scenarios/models; selection based on the normalized Euclidian distance computed between observed and simulated raw summaries. Step 2: run LDA on this subset to transform summaries into (M − 1) discriminant variables; when computing the LDA functions, weight the simulated data with an Epanechnikov kernel. Step 3: estimation of the posterior probabilities of each competing scenario/model by polychotomous local logistic regression against the M − 1 most discriminant variables [Cornuet & al., 2008, Bioinformatics]
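A compact R sketch of Steps 1-3, with MASS::lda for the discriminant variables and nnet::multinom for the local polychotomous regression; the reference table below is a small simulated placeholder rather than a DIYABC output:

    # LDA-based summary reduction and model-choice regression, illustrative sketch
    library(MASS); library(nnet)
    set.seed(8)
    Nref <- 1e4; M <- 3
    model <- sample(1:M, Nref, replace = TRUE)                # scenario index
    summ  <- matrix(rnorm(Nref * 10), Nref, 10) + model       # 10 raw summaries (placeholder)
    s_obs <- rnorm(10) + 2                                    # observed raw summaries
    # Step 1: 1% closest simulations, normalised Euclidean distance
    sdv  <- apply(summ, 2, sd)
    d    <- sqrt(colSums(((t(summ) - s_obs) / sdv)^2))
    keep <- d <= quantile(d, 0.01)
    # Step 2: LDA on this subset -> (M - 1) discriminant variables
    ld     <- lda(summ[keep, ], grouping = model[keep])
    dtrain <- data.frame(model = factor(model[keep]), predict(ld, summ[keep, ])$x)
    dnew   <- data.frame(predict(ld, matrix(s_obs, nrow = 1))$x)
    # Step 3: local polychotomous logistic regression with Epanechnikov weights
    w   <- 1 - (d[keep] / max(d[keep]))^2
    fit <- multinom(model ~ ., data = dtrain, weights = w, trace = FALSE)
    predict(fit, newdata = dnew, type = "probs")              # estimated pi(M = m | y)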
  • LDA advantages much faster computation of scenario probabilities via polychotomous regression; a (much) lower number of explanatory variables improves the accuracy of the ABC approximation, reduces the tolerance ε and avoids extra costs in constructing the reference table; allows for a large collection of initial summaries; ability to evaluate Type I and Type II errors on more complex models [more on this later]; LDA reduces correlation among explanatory variables. When available, using both simulated and real data sets, posterior probabilities of scenarios computed from LDA-transformed and raw summaries are strongly correlated
  • A stylised problem Central question to the validation of ABC for model choice: when is a Bayes factor based on an insufficient statistic T(y) consistent? Note/warning: the inference drawn on T(y) through B^T_12(y) necessarily differs from the inference drawn on y through B12(y) [Marin, Pillai, X, & Rousseau, JRSS B, 2013]
  • A benchmark, if toy, example Comparison suggested by a referee of the PNAS paper [thanks!]: [X, Cornuet, Marin, & Pillai, Aug. 2011] Model M1: y ∼ N(θ_1, 1) opposed to model M2: y ∼ L(θ_2, 1/√2), the Laplace distribution with mean θ_2 and scale parameter 1/√2 (variance one). Four possible statistics: 1. sample mean ȳ (sufficient for M1 if not M2); 2. sample median med(y) (insufficient); 3. sample variance var(y) (ancillary); 4. median absolute deviation mad(y) = med(|y − med(y)|). [figures: box-plots under the Gaussian and Laplace models, n = 100]
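An R sketch of ABC model choice on this benchmark, with data simulated from the Gaussian model, a N(0, 2²) prior on the location (an assumption made here) and mad as the summary; swapping in the other three statistics reproduces the contrast discussed in these slides:

    # Gauss vs Laplace benchmark by ABC model choice, illustrative sketch
    set.seed(9)
    n <- 100
    yobs <- rnorm(n)                                    # data from the Gaussian model here
    rlapl <- function(n, mu) mu + sample(c(-1, 1), n, replace = TRUE) * rexp(n, rate = sqrt(2))
    eta <- function(y) mad(y, constant = 1)             # med(|y - med(y)|); try mean, median, var
    Nsim <- 1e5
    m  <- sample(1:2, Nsim, replace = TRUE)
    mu <- rnorm(Nsim, sd = 2)
    stat <- numeric(Nsim)
    for (t in 1:Nsim)
      stat[t] <- eta(if (m[t] == 1) rnorm(n, mu[t]) else rlapl(n, mu[t]))
    keep <- abs(stat - eta(yobs)) <= quantile(abs(stat - eta(yobs)), 0.001)
    mean(m[keep] == 1)                                  # ABC estimate of pi(M1 | y)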
  • Framework Starting from the sample y = (y_1, . . . , y_n), the observed sample, not necessarily iid, with true distribution y ∼ P_n. Summary statistics T(y) = Tⁿ = (T_1(y), T_2(y), · · · , T_d(y)) ∈ R^d with true distribution Tⁿ ∼ G_n.
  • Framework Comparison of: under M1, y ∼ F_{1,n}(·|θ_1) where θ_1 ∈ Θ_1 ⊂ R^{p_1}; under M2, y ∼ F_{2,n}(·|θ_2) where θ_2 ∈ Θ_2 ⊂ R^{p_2}; turned into: under M1, T(y) ∼ G_{1,n}(·|θ_1), and θ_1|T(y) ∼ π_1(·|Tⁿ); under M2, T(y) ∼ G_{2,n}(·|θ_2), and θ_2|T(y) ∼ π_2(·|Tⁿ)
  • Assumptions A collection of asymptotic standard assumptions: [A1] is a standard central limit theorem under the true model with asymptotic mean μ_0; [A2] controls the large deviations of the estimator Tⁿ from the model mean μ(θ); [A3] is the standard prior mass condition found in Bayesian asymptotics (d_i effective dimension of the parameter); [A4] restricts the behaviour of the model density against the true density [Think CLT!]
  • Asymptotic marginals Asymptotically, under [A1]-[A4], m_i(t) = ∫_{Θ_i} g_i(t|θ_i) π_i(θ_i) dθ_i is such that (i) if inf{|μ_i(θ_i) − μ_0|; θ_i ∈ Θ_i} = 0, C_l v_n^{d−d_i} ≤ m_i(Tⁿ) ≤ C_u v_n^{d−d_i}, and (ii) if inf{|μ_i(θ_i) − μ_0|; θ_i ∈ Θ_i} > 0, m_i(Tⁿ) = o_{P_n}[v_n^{d−τ_i} + v_n^{d−α_i}].
  • Between-model consistency Consequence of the above is that the asymptotic behaviour of the Bayes factor is driven by the asymptotic mean value μ(θ) of Tⁿ under both models. And only by this mean value! Indeed, if inf{|μ_0 − μ_2(θ_2)|; θ_2 ∈ Θ_2} = inf{|μ_0 − μ_1(θ_1)|; θ_1 ∈ Θ_1} = 0, then C_l v_n^{−(d_1−d_2)} ≤ m_1(Tⁿ)/m_2(Tⁿ) ≤ C_u v_n^{−(d_1−d_2)}, where C_l, C_u = O_{P_n}(1), irrespective of the true model: this only depends on the difference d_1 − d_2, so there is no consistency. Else, if inf{|μ_0 − μ_2(θ_2)|; θ_2 ∈ Θ_2} > inf{|μ_0 − μ_1(θ_1)|; θ_1 ∈ Θ_1} = 0, then m_1(Tⁿ)/m_2(Tⁿ) ≥ C_u min(v_n^{−(d_1−α_2)}, v_n^{−(d_1−τ_2)})
  • Checking for adequate statistics Run a practical check of the relevance (or non-relevance) of Tⁿ: null hypothesis that both models are compatible with the statistic Tⁿ, H0: inf{|μ_2(θ_2) − μ_0|; θ_2 ∈ Θ_2} = 0, against H1: inf{|μ_2(θ_2) − μ_0|; θ_2 ∈ Θ_2} > 0; the testing procedure provides estimates of the mean of Tⁿ under each model and checks for equality
  • Checking in practice Under each model M_i, generate an ABC sample θ_{i,l}, l = 1, · · · , L. For each θ_{i,l}, generate y_{i,l} ∼ F_{i,n}(·|θ_{i,l}), derive Tⁿ(y_{i,l}) and compute μ̂_i = (1/L) Σ_{l=1}^{L} Tⁿ(y_{i,l}), i = 1, 2. Conditionally on Tⁿ(y), √L {μ̂_i − E[μ_i(θ_i)|Tⁿ(y)]} ⇝ N(0, V_i). Test for a common mean, H0: μ̂_1 ∼ N(μ_0, V_1), μ̂_2 ∼ N(μ_0, V_2), against the alternative of different means, H1: μ̂_i ∼ N(μ_i, V_i), with μ_1 ≠ μ_2.
  • Toy example: Laplace versus Gauss [figure]: box-plots of the normalised χ² statistic without and with the mad summary, for the Gauss and Laplace models.
  • Genetics of ABC 1 Approximate Bayesian computation 2 ABC as an inference machine 3 ABC for model choice 4 Genetics of ABC
  • Genetic background of ABC ABC is a recent computational technique that only requires being able to sample from the likelihood f(·|θ). This technique stemmed from population genetics models, about 15 years ago, and population geneticists still contribute significantly to methodological developments of ABC. [Griffiths & al., 1997; Tavaré & al., 1999]
  • Population genetics [Part derived from the teaching material of Raphaël Leblois, ENS Lyon, November 2010] Describe the genotypes, estimate the allele frequencies, determine their distribution among individuals, populations and between populations; predict and understand the evolution of gene frequencies in populations as a result of various factors. Analyses the effect of various evolutionary forces (mutation, drift, migration, selection) on the evolution of gene frequencies in time and space.
  • Wright-Fisher model [slide text in French, translated:] In the absence of mutation and selection, allele frequencies drift (rise and fall) inevitably until fixation of one allele; drift thus leads to the loss of genetic variation within populations. A population of constant size, in which individuals reproduce at the same time. Each gene in a generation is a copy of a gene of the previous generation. In the absence of mutation and selection, allele frequencies drift inevitably until the fixation of an allele.
  • Coalescent theory [Kingman, 1982; Tajima, Tavaré, &tc] [slide text in French, translated:] A new approach to population genetics: the classical approach versus the coalescent approach.