Hyperasymptotic approximation to the top, bottom, and charm pole mass

We construct hyperasymptotic expansions for the heavy quark pole mass regulated using the principal value (PV) prescription. We apply these hyperasymptotic expansions to the $B/D$ meson masses and to $\bar \Lambda$ computed on the lattice. The issue of the uncertainty of the (top) pole mass is critically reexamined. The present theoretical uncertainty in the relation between ${\bar m}_t$, the $\bar{\rm MS}$ top mass, and $m_{t, \rm PV}$, the top pole mass regulated using the PV prescription, is numerically assessed to be $\delta m_{t,\rm PV}= 28\;{\rm MeV}$ for ${\bar m}_t =163$ GeV.


I. INTRODUCTION
We construct hyperasymptotic expansions for the heavy quark pole mass regulated using the principal value (PV) prescription, along the lines of [1]. We generalize the discussion of that reference by including possible ultraviolet renormalons, and we then apply such expansions to various observables. The name "hyperasymptotic" we borrow from [2]. For a treatise on hyperasymptotic expansions in the context of ordinary differential equations, see [3].
In [1] we studied observables characterized by a large scale $Q \gg \Lambda_{\rm QCD}$, for which the operator product expansion (OPE) is believed to be a good approximation. We computed them within a hyperasymptotic expansion. More specifically, the perturbative part of the OPE was summed up using the PV prescription: $S_{\rm PV}$. The difference between $S_{\rm PV}$ and the full nonperturbative (NP) result is assumed to scale exactly as the intrinsic NP terms of the OPE. In general terms, $S = S_{\rm PV} + K_X \frac{\Lambda_X^d}{Q^d}\bigl(1+\mathcal{O}(\alpha_X)\bigr) + \mathcal{O}\!\left(\frac{\Lambda^{d'}}{Q^{d'}}\right)$, where the last term refers to genuine higher-order terms in the OPE ($d' > d > 0$). Then, since $S_{\rm PV}$ cannot be computed exactly, we obtain it approximately along a hyperasymptotic expansion (a combination of truncated perturbative sums and of NP corrections). This is possible if enough terms of the perturbative expansion are known, and if the divergent structure of the leading renormalons of the observable is also known. This allows us to have clear (parametric) control over the error of the computation. Two alternative methods were considered in [1], depending on how the truncation of the leading perturbative sum is made: (1) $N$ and $\mu \sim Q$ large but finite, truncating at the minimal term, $N_P = \frac{2\pi |d|}{\beta_0 \alpha_X(\mu)}\bigl(1 - c\,\alpha_X(\mu)\bigr)$ (3); (2) $N \to \infty$ and $\mu \to \infty$ in a correlated way, for which we considered two options [Eq. (4)], (A) and (B), corresponding to Eq. (3) with $c = 0$ and with $c = c_0$ fixed, respectively, where $c_0 > 0$ but $c$ is arbitrary otherwise. Note that in case (1), $c$ can partially simulate changes of the scale or scheme of $\alpha_X$. $d$ is the dimension associated to a given renormalon.
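As a quantitative illustration (ours, not taken from the text), the truncation order implied by a minimal-term prescription of the schematic form $N_P = \frac{2\pi|d|}{\beta_0\alpha_X(\mu)}(1-c\,\alpha_X(\mu))$ can be evaluated in a few lines; the coupling values below are illustrative:

```python
import math

def n_p(d, alpha, beta0, c=0.0):
    # Minimal-term truncation order for a renormalon of dimension d.
    # c = 0 gives the plain minimal-term estimate; c != 0 mimics a
    # change of scale/scheme of the coupling.
    return (2 * math.pi * abs(d)) / (beta0 * alpha) * (1 - c * alpha)

# Pole-mass renormalon (d = 1) with n_f = 3, i.e. beta0 = 11 - 2*3/3 = 9,
# and an illustrative coupling alpha ~ 0.22 (roughly the bottom-mass scale):
print(round(n_p(d=1, alpha=0.22, beta0=9.0)))  # -> 3
```

At a much higher scale (smaller coupling) the same formula returns a larger truncation order, reflecting that more perturbative terms are useful before the series starts to diverge.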
Note that in this paper $d$ can be positive (infrared renormalons) or negative (ultraviolet renormalons), unlike in [1], where only positive $d$'s were considered. Note also that genuine NP corrections are only associated to positive $d$'s. We will not study the modifications that the inclusion of ultraviolet renormalons produces in case (2). In this paper we are mainly concerned with the scenario where the leading renormalon is of infrared nature and subleading renormalons can be ultraviolet and/or infrared. This is the case of the pole mass. In such a scenario the precision we can obtain in case (2) is limited by the approximate knowledge of the leading infrared renormalon, and we cannot add anything to the discussion given in [1]. The situation is different for case (1), which we discuss in the next section.
The structure of the paper is as follows. In Sec. II we review the general case; compared with [1], we include the possible effect of ultraviolet renormalons. In Sec. III we study the pole mass of a heavy quark in the large-$\beta_0$ approximation, which we use as a toy model to test our methods. We then move to real QCD: in Sec. IV we study the $B/D$ meson mass and lattice evaluations of $\bar\Lambda$. Finally, in Sec. V we perform a dedicated study of the top mass.
In general we will avoid making explicit the scheme ($X$) and scale ($\mu$) dependence unless necessary.

II. GENERAL FORMULAS
A. $N$ large and $\mu \sim Q \gg \Lambda_{\rm QCD}$. Eq. (3). Case (1)

This case was already discussed at length in [1]. We now give the general expression after the inclusion of ultraviolet renormalons (for a more detailed discussion see [4]). It can be written as in Eq. (5), where $S_P \equiv \sum_{n=0}^{N_P(|d_{\min}|)} p_n \alpha^{n+1}$, the asymptotic behavior associated to renormalons with dimensions $\leq |d|$ is included in $p_n^{(as)}$, and $d'$ is the dimension of the renormalon closest to the origin in the Borel plane fulfilling $|d'| > |d|$. $\Omega_d$ is a modification of the definition of terminant given in [5] that is better suited to our case. Whereas in [5] the terminant refers to the completion of the superasymptotic approximation to give the complete result, here $\Omega_d$ is the completion of the part of the perturbative series associated to the singularity located at $u \equiv \frac{\beta_0 t}{4\pi} = \frac{d}{2}$ in the Borel plane, using the PV prescription. For the case of infrared renormalons ($d > 0$) the general analytic expression of $\Omega_d$ can be found in [1]. For a generic ultraviolet renormalon ($d < 0$) that produces the asymptotic behavior of Eq. (8), joining all terms together we obtain the corresponding terminant. Note that in this case $\mu$ appears in the denominator, so if one takes $\mu$ very large this term will be quite small. In practice, if we take $\mu \sim Q$, we may need this term.
$S_{\rm PV}$ will be computed by truncating the hyperasymptotic expansion in a systematic way. This means truncating Eq. (5) as in Eq. (15) (note that we always define $D$ to be positive). For each value of the pair $(D,N)$, we can state the parametric accuracy of $S^{(D,N)}_{\rm PV}(Q)$. For instance, for $S^{(0,N_P)}_{\rm PV}$ the error would be, up to a numerical and a $\sqrt{\alpha_X}$ factor, of order $Q\,(\Lambda_X/Q)^{|d_{\min}|}$. This is what is commonly named the superasymptotic approximation. For $S^{(|d_{\min}|,0)}_{\rm PV}$, the parametric form of the error (up to a numerical and a possible $\alpha_X^{3/2}$ factor) is set by $d$, the location of the next renormalon closest to the origin. This corresponds to the first term of the hyperasymptotic approximation. The error of the general case $S^{(D,N)}_{\rm PV}(Q)$ ($N \neq N_P$ but large) has the analogous form, where $d$ is now the location of the next renormalon closest to the origin after $D$.
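The superasymptotic error counting above can be made tangible with a standard order-of-magnitude estimate (our illustration, not a formula from the text): the minimal term of the series is of order $\sqrt{\alpha}\,Q\,e^{-2\pi|d_{\min}|/(\beta_0\alpha(Q))} \sim \sqrt{\alpha}\,Q\,(\Lambda/Q)^{|d_{\min}|}$.

```python
import math

def superasymptotic_error(Q, alpha, beta0, d_min=1):
    # Standard size estimate of the minimal term:
    #   sqrt(alpha) * Q * exp(-2*pi*|d|/(beta0*alpha)),
    # which is of order sqrt(alpha) * Lambda^|d| / Q^(|d|-1).
    return math.sqrt(alpha) * Q * math.exp(
        -2 * math.pi * abs(d_min) / (beta0 * alpha))

# Illustrative numbers: Q = 4.2 GeV, alpha = 0.22, n_f = 3 (beta0 = 9).
# For the d = 1 (pole-mass-like) renormalon this is of order
# sqrt(alpha)*Lambda_QCD, i.e. below 0.1 GeV here.
print(superasymptotic_error(4.2, 0.22, 9.0))
```

This is only a parametric estimate; the text refines it with explicit numerical prefactors.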

B. m PV , general formulas
For the case of the heavy quark mass, which we discuss at length in this paper, we have Eq. (19), where $m_P \equiv \bar m + \sum_{n=0}^{N_P} r_n \alpha^{n+1}(\mu)$ (20), and the coefficients $r_n$ for $n \leq 3$ were computed in [6][7][8][9]. The objects $K^{(P)}_X$ and $K^{(P)}_{X,i}$ follow from applying the general expressions obtained in [1] to this case ($b$ and the $s_n$ are defined in [1]). The coefficients $c_k$ are pure functions of the $\beta$-function coefficients, as first shown in [10]. They can be found in [11][12][13]; at low orders they read as quoted there ($c_0 = 1$). Note that, therefore, $m_P$ is nothing but the pole mass truncated at order $N = N_P$. Our knowledge of the other terminants, $\Omega_2$ and $\Omega_{-2}$, is limited. We do not know the renormalization group structure of $\Omega_{-2}$, except in the large-$\beta_0$ approximation. On the other hand, the renormalization group structure of $\Omega_2$ is exactly known (provided the coefficients of the beta function are known to all orders). The reason is that it is linked to the kinetic operator of the HQET Lagrangian, whose Wilson coefficient is protected by reparametrization invariance [14]. Therefore, it has no anomalous dimension, and the Wilson coefficient is 1 in dimensional regularization to all orders in perturbation theory. Still, in the large-$\beta_0$ approximation, the coefficient $Z^X_2$ is equal to zero. Whether it is different from zero beyond the large-$\beta_0$ approximation has been a matter of debate [15]. We will return to this discussion in the following sections.
We also give the formulas that apply to Eq. (4), i.e., to case (2): the limit $(N,\mu)\to\infty$. A general discussion can be found in [1]. It was argued there that the limit (2A) was likely to be logarithmically divergent (see also [16]), and no formulas could be found that are valid beyond the large-$\beta_0$ approximation. Therefore, we will not study this case further. For the limit (2B), formulas with NP exponential accuracy were found in [1], generalizing results from [17]. These formulas are valid beyond the large-$\beta_0$ approximation. For the specific case of the pole mass they read as in Eq. (29), where $m_A = \bar m + \lim_{\mu\to\infty;(2{\rm B})} \sum_{n=0}^{N_A} r_n \alpha^{n+1}(\mu)$ (30). As discussed in [1], there is more than one way to take the $\mu \to \infty$, (2B) limit. One is to take Eq. (67) of [1] for $N_A$ instead of limit (B) of Eq. (4). Both methods are general but require the knowledge of $r_n$ and of the beta-function coefficients to all orders. This potentially limits their applicability in practice. Another option is to interpret the $\mu \to \infty$ limit as a change of scheme (where $\mu' \sim \bar m$). This method still requires the knowledge of $r_n$ to all orders; on the other hand, there is no need to know the $\beta$ function to all orders. Irrespective of which of the above methods we use to take the $\mu \to \infty$ limit, we arrive at Eq. (33), where $2/\chi = 1 - c_0\,\alpha(\mu')$. The right-hand side of Eq. (33) cannot be computed exactly. An approximate determination can be obtained by approximating the Borel transform ($u \equiv \frac{\beta_0 t}{4\pi}$) by its known terms, where $N_{\max}$ is the number of perturbative coefficients that are known.
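To make the PV prescription itself concrete, the following sketch (ours, not the authors' code) evaluates the principal-value Borel integral for a toy Borel transform with a single simple infrared-renormalon pole at $u = 1/2$, $B(u) = 1/(1-2u)$, by pairing points symmetrically around the pole so that the singularity cancels; the result agrees with the exponential-integral closed form $\frac{1}{2}e^{-1/(2a)}\,{\rm Ei}(1/(2a))$:

```python
import math

def pv_borel_pole(a, n=20000):
    """PV of integral_0^inf du e^(-u/a)/(1 - 2u): toy Borel sum with a
    simple IR-renormalon pole at u = 1/2 (a plays the role of
    beta0*alpha/(4*pi); everything here is illustrative)."""
    # Symmetric pairing around u = 1/2: f(1/2-s) + f(1/2+s) is regular
    # and equals exp(-1/(2a)) * sinh(s/a)/s, so the pole drops out.
    h = 0.5 / n
    paired = sum(math.sinh(((i + 0.5) * h) / a) / ((i + 0.5) * h) * h
                 for i in range(n))
    paired *= math.exp(-0.5 / a)
    # Remaining regular tail u in [1, U]; the integrand decays like e^(-u/a).
    U = 1.0 + 60.0 * a
    h2 = (U - 1.0) / n
    tail = sum(math.exp(-(1.0 + (i + 0.5) * h2) / a)
               / (1.0 - 2.0 * (1.0 + (i + 0.5) * h2)) * h2
               for i in range(n))
    return paired + tail

print(pv_borel_pole(0.2))  # ≈ 0.29
```

The symmetric-pairing trick is one of several equivalent ways to define the PV integral numerically; any implementation with the same exclusion symmetry around the pole gives the same answer.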
III. $m_{\rm PV}$ IN THE LARGE $\beta_0$ APPROXIMATION

Here the discussion runs parallel to the one for the static potential in the large-$\beta_0$ approximation made in Sec. III of [1]. Nevertheless, we do not have the same analytic control as for the static potential. Note also that now we have ultraviolet renormalons. Moreover, the pole mass has the extra complication that it is ultraviolet divergent and needs renormalization. This makes the Borel transform more complicated, and we do not have the exact $\mu$ factorization one has in the static potential. We take the Borel transform from [18][19][20], Eqs. (35) and (36). This expression has been derived in the $\overline{\rm MS}$ scheme.
Whereas the scheme dependence of the first term in Eq. (35) can be reabsorbed in changes of $\mu$ and $c_{\overline{\rm MS}}$ (it would then be equivalent to a change of scale), controlling the scheme dependence of $R(u)$ is more complicated. We will not worry much about this, as $R(u)$ has to do with the high-energy behavior and should only affect $m_P$, the finite sum. Therefore, when we change from the $\overline{\rm MS}$ to the lattice scheme, we will just change $c_{\overline{\rm MS}} \to c_{\rm latt}$ and leave $R(u)$ unchanged. Strictly speaking, then, the object we compute in the lattice scheme is not the pole mass; still, it will have the same infrared behavior. The fact that we obtain the same result after subtracting $m_P$ from $m_{\rm PV}$ in both cases will be a nice confirmation that the high-energy cancellation has effectively taken place and that what is left is low energy. The value of $c_{\rm latt}$ that we use is the same as the one used in [1]. To determine it we take the $n_f = 0$ number for a Wilson action, $d_1 = 5.88359$ [25], and use $c_{\rm latt} = -2\left(\frac{5}{6} + \frac{2\pi d_1}{\beta_0}\right)$. This is enough for our purposes, as we only use this scheme to check the consistency between the results obtained with different schemes. Note that this yields two values of $c_{\rm latt}$ if we introduce the $n_f$ dependence of $\beta_0$: $c_{\rm latt}(n_f = 0) = -8.38807$ and $c_{\rm latt}(n_f = 3) = -9.88171$.
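The quoted values of $c_{\rm latt}$ follow directly from $c_{\rm latt} = -2(5/6 + 2\pi d_1/\beta_0)$, keeping $d_1$ at its $n_f = 0$ value while $\beta_0 = 11 - 2n_f/3$ varies, as in the text. A quick arithmetic check (our sketch, standard library only):

```python
import math

d1 = 5.88359  # n_f = 0 Wilson-action coefficient from [25]

def c_latt(nf):
    # c_latt = -2*(5/6 + 2*pi*d1/beta0), with beta0 = 11 - 2*nf/3.
    beta0 = 11.0 - 2.0 * nf / 3.0
    return -2.0 * (5.0 / 6.0 + 2.0 * math.pi * d1 / beta0)

print(c_latt(0))  # ≈ -8.388  (text: -8.38807)
print(c_latt(3))  # ≈ -9.882  (text: -9.88171)
```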
A. $N$ large and $\mu \sim \bar m \gg \Lambda_{\rm QCD}$. Eq. (3). Case (1)

We now take Eq. (19) in the large-$\beta_0$ approximation and truncate it at different orders in the hyperasymptotic expansion. We then compare such truncations with the exact solution. We can study (even if only in the large-$\beta_0$ approximation) down to which values of $\bar m$ the OPE is a good approximation of $m_{\rm PV}$. Remarkably enough, we can actually check more than one term of the OPE (hyperasymptotic) expansion. Note that in the large-$\beta_0$ approximation $\Omega_2 = 0$, but not $\Omega_{-2}$, which in this approximation has a known closed-form expression. We also explore the scheme dependence by performing the computation in the lattice and $\overline{\rm MS}$ schemes. We will do these analyses for the cases with $n_f = 0$ and $n_f = 3$.
The first is done with a view to comparing with quenched lattice simulations; the second simulates a more physical scenario, from which we can draw some conclusions that could apply beyond the large-$\beta_0$ limit. In Figs. 1 and 2 we plot $m_{\rm PV}$, $m_{\rm PV} - m_P$, $m_{\rm PV} - m_P - \bar m\,\Omega_m$, $m_{\rm PV} - m_P - \bar m\,\Omega_m - \sum_{n=N_P+1}^{2N_P}(r_n - r_n^{(as)})\alpha^{n+1}$, and $m_{\rm PV} - m_P - \bar m\,\Omega_m - \sum_{n=N_P+1}^{2N_P}(r_n - r_n^{(as)})\alpha^{n+1} - \bar m\,\Omega_{-2}$ with $n_f = 0$ light flavors. In the counting of Eq. (15) this corresponds to $(0,0)$, $(0,N_P)$, $(1,0)$, $(1,N_P)$, $(2,0)$ precision. We do such computations in the lattice (Fig. 1) and the $\overline{\rm MS}$ (Fig. 2) scheme. In Fig. 3 we compare the results in the two schemes. We observe a very nice convergent pattern in all cases down to surprisingly small scales. To visualize the dependence on $c$, we show the band generated by the smallest positive and negative possible values of $c$ that yield integer values of $N_P$. The size of the band generated by the different values of $c$ (the $c$ dependence) decreases after adding $\Omega_m$ to its associated sum. On the other hand, $\Omega_{-2}$ (an ultraviolet renormalon) gives a very small contribution, in particular in the lattice scheme. This is consistent with interpreting the lattice scheme as the $\overline{\rm MS}$ scheme with a much higher renormalization scale $\mu$ for the strong coupling.
Let us discuss the results in more detail. We first observe that the $\bar m$ dependence of $m_{\rm PV}$ is basically eliminated in $m_{\rm PV} - m_P$, as expected. This happens both in the lattice and $\overline{\rm MS}$ schemes. The latter shows a stronger $c$ dependence. This is to be expected: in the $\overline{\rm MS}$ scheme we truncate at smaller orders in $N$, which makes the truncation error bigger. As we can see in the upper panel of Fig. 3, both schemes yield consistent predictions for $m_{\rm PV} - m_P$. We can draw some interesting observations from this analysis. For $m_{\rm PV} - m_P$ it is better to choose a larger factorization scale, if we have enough coefficients of the perturbative expansion. This is particularly so at large distances: we can still get good results at very large distances in the lattice scheme.
We now turn to $m_{\rm PV} - m_P - \bar m\,\Omega_m$. Adding the new correction brings much better agreement with expectations (which, we recall, is to get zero). After the introduction of $\bar m\,\Omega_m$, the $\overline{\rm MS}$ scheme yields more accurate results than the lattice scheme. This can already be seen in the upper panel of Fig. 3, and in greater detail in the lower panel of Fig. 3.
$m_{\rm PV} - m_P - \bar m\,\Omega_m$ shows some dependence on $\bar m$, which is more pronounced in the lattice than in the $\overline{\rm MS}$ scheme. Since, in the large-$\beta_0$ approximation, the difference between both schemes is roughly equivalent to a change of scale, these results suggest that $\mu = \bar m$ in the $\overline{\rm MS}$ scheme is close to the natural scale and minimizes higher-order corrections. Note that the lattice-scheme computation is equivalent to the $\overline{\rm MS}$-scheme one with the choice $\mu_{\rm latt} = \mu_{\overline{\rm MS}}\, e^{-c_{\rm latt}/2}\, e^{c_{\overline{\rm MS}}/2}$, which amounts to a factor of around 30. Once $\sum_{n=N_P+1}^{2N_P}(r_n - r_n^{(as)})\alpha^{n+1}$ is incorporated in the prediction, most of the difference between schemes disappears. The effect of introducing $\Omega_{-2}$ is very small, in particular in the lattice scheme. This is to be expected, since the lattice scheme corresponds to a larger renormalization scale $\mu$. In any case, the difference between schemes gets smaller and smaller as we go to higher orders in the hyperasymptotic expansion, in particular at short distances. We also want to stress that this analysis opens a window to applying perturbation theory at rather large distances. Note that in the upper panels of Figs. 1, 2, and 3 we have gone to very large distances.
As some concluding remarks, let us emphasize the following points. $m_{\rm PV} - m_P$ is more or less constant, with relatively large uncertainties. This is to be expected, as the next correction in magnitude is $\bar m\,\Omega_m$, which is approximately constant (mildly modulated by $\sqrt{\alpha(\mu)}$). After introducing this term the error is much smaller and we can see more structure. In particular, we become sensitive to $\sum_{n=N_P+1}^{2N_P}(r_n - r_n^{(as)})\alpha^{n+1}$. Here we find (at the level of precision we now have) a sizable difference between the lattice and $\overline{\rm MS}$ schemes. This can be expected: $\sum_{n=N_P+1}^{2N_P}(r_n - r_n^{(as)})\alpha^{n+1}$ is the object we expect to be most sensitive to the scale.

FIG. 1. … $r_0^{-1} \approx 400$ MeV. The value of $N_P$ depends on the scale $1/\bar m$ we use. For instance, for $c$ positive, $N_P = 9$ for $1/\bar m \in [0.003, 0.0045]$, $N_P = 8$ for $1/\bar m \in [0.006, 0.015]$, $N_P = 7$ for $1/\bar m \in [0.0165, 0.0435]$, $N_P = 6$ for $1/\bar m \in [0.045, 0.1185]$, $N_P = 5$ for $1/\bar m \in [0.12, 0.321]$, $N_P = 4$ for $1/\bar m \in [0.3225, 0.876]$, and $N_P = 3$ for $1/\bar m \in [0.8775, 1.299]$.

FIG. 2. As in Fig. 1 but in the $\overline{\rm MS}$ scheme. The values of $N_P$ for $c$ positive are, for instance, $N_P = 6$ for $1/\bar m = 0.003$, $N_P = 5$ for $1/\bar m \in [0.0045, 0.0105]$, $N_P = 4$ for $1/\bar m \in [0.012, 0.03]$, $N_P = 3$ for $1/\bar m \in [0.0315, 0.0825]$, $N_P = 2$ for $1/\bar m \in [0.084, 0.2235]$, $N_P = 1$ for $1/\bar m \in [0.225, 0.6105]$, and $N_P = 0$ for $1/\bar m \in [0.612, 1.5]$.
Another interesting observation is that truncated sums behave better in the lattice scheme than in the $\overline{\rm MS}$ scheme. Nevertheless, this could be misleading. The sums are truncated at the minimal term; therefore, one needs more terms in the lattice scheme. If the number of terms is not an issue (which could be the case with dedicated numerical stochastic perturbation theory (NSPT) [26,27] computations in the lattice scheme), then the lattice scheme looks better. But as soon as $\Omega_m$ is introduced in the computation, the $\overline{\rm MS}$ scheme behaves better (at least in the large-$\beta_0$ approximation).
Overall, we observe a very nice convergence pattern up to (surprisingly) rather large distances in the lattice and $\overline{\rm MS}$ schemes. The agreement with the theoretical prediction (which is zero) is perfect at short distances. The estimated error is also expected to be small. It will be interesting to see whether this also happens beyond the large-$\beta_0$ approximation.
We now turn to the $n_f = 3$ case. To ease the comparison with [1], we use the same value of $\Lambda_{\overline{\rm MS}}$. The general conclusions do not change if we instead fix $\Lambda_{\overline{\rm MS}}$ (in the large-$\beta_0$ approximation) using the world-average value of $\alpha$. We note that $\Lambda_{\rm QCD}$ for the physical case ($n_f = 3$) is smaller than for the $n_f = 0$ case (if one sets the physical scale according to $r_0^{-1} \approx 400$ MeV). On top of that, the running is less important. All this suggests that the convergence should be even better than in the $n_f = 0$ case (and it was quite good already there). We show our results in Figs. 4-6 (the analogs of Figs. 1-3 but with $n_f = 3$). These plots confirm our expectations. Down to scales as low as 667 MeV we see no sign of a breakdown of the OPE. This is so in both the lattice and the $\overline{\rm MS}$ schemes. Note that the precision we get is extremely high as we go to small scales. Using truncation (c), $m_P + \bar m\,\Omega_m + \sum_{n=N_P+1}^{2N_P}(r_n - r_n^{(as)})\alpha^{n+1}$, one gets $m_{\rm PV}$ in the $\overline{\rm MS}$ scheme with a precision below 1 MeV at scales of the order of the mass of the bottom, and in the lattice scheme with a precision below 2 MeV. Using truncation (d), $m_P + \bar m\,\Omega_m + \sum_{n=N_P+1}^{2N_P}(r_n - r_n^{(as)})\alpha^{n+1} + \bar m\,\Omega_{-2}$, the precision does not significantly change, in particular in the lattice scheme. This reflects that ultraviolet renormalons play a minor role. The rest of the discussion parallels the one for $n_f = 0$.

FIG. 4. As in Fig. 1 but with $n_f = 3$ light flavors. The value of $N_P$ depends on the scale $1/\bar m$ we use. For instance, for $c$ positive, $N_P = 8$ for $1/\bar m \in [0.045, 0.115]$, $N_P = 7$ for $1/\bar m \in [0.12, 0.315]$, $N_P = 6$ for $1/\bar m \in [0.32, 0.865]$, and $N_P = 5$ for $1/\bar m \in [0.87, 1.5]$.

In the above numerics we have used the exact expressions for $\Omega_m$ and $\Omega_{-2}$. In full QCD we will not know the exact expressions. Therefore, it makes sense to study how well the exact result is reproduced by its semiclassical expansion. We observed in [1] that $\Omega_m$ is very well saturated by the first terms of such an expansion: truncating the expansion produces differences much smaller than the typical precision of the different terms of the hyperasymptotic expansion. For $\Omega_{-2}$, we compare in Tables I and II the exact result and the truncated semiclassical expansion for an illustrative set of values. We observe that the exact result is very well saturated by the first terms of the expansion computed in Eq. (38); again, truncating the expansion produces differences much smaller than the typical precision of the different terms of the hyperasymptotic expansion. As expected, $n_f = 3$ behaves better than $n_f = 0$. Note that in the large-$\beta_0$ approximation we exactly have $\Lambda = \mu\, e^{-2\pi/(\beta_0 \alpha(\mu))}$.
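The one-loop (large-$\beta_0$) relation $\Lambda = \mu\, e^{-2\pi/(\beta_0\alpha(\mu))}$ is exactly $\mu$ independent, which is easy to verify numerically (our sketch; the input values are illustrative):

```python
import math

beta0 = 9.0  # n_f = 3 at one loop: beta0 = 11 - 2*nf/3

def alpha_run(alpha_mu, mu, mu2):
    # One-loop running of the coupling from scale mu to scale mu2.
    return alpha_mu / (1 + alpha_mu * beta0 / (2 * math.pi)
                       * math.log(mu2 / mu))

def lam(alpha_mu, mu):
    # Lambda = mu * exp(-2*pi/(beta0*alpha(mu))): scale invariant at one loop.
    return mu * math.exp(-2 * math.pi / (beta0 * alpha_mu))

alpha1, mu1 = 0.22, 4.2   # illustrative values (GeV)
mu2 = 100.0
alpha2 = alpha_run(alpha1, mu1, mu2)
print(lam(alpha1, mu1), lam(alpha2, mu2))  # the two numbers agree
```

The invariance is exact because $1/\alpha(\mu_2) = 1/\alpha(\mu_1) + \frac{\beta_0}{2\pi}\ln(\mu_2/\mu_1)$, so the extra $e^{-\ln(\mu_2/\mu_1)}$ factor cancels the prefactor change; beyond one loop the standard definition of $\Lambda$ includes the known higher-order correction factors.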
An alternative, very effective presentation of the above results is to plot the relative accuracy of the prediction at each order in $\alpha$ and at each order of the hyperasymptotic expansion. We note that we have one observable for each value of $\bar m$. Therefore, for illustration, we take two extreme cases: $m_{\rm PV}$ with $\bar m = 1.25$ GeV and with $\bar m = 163$ GeV. For the theoretical prediction we take the smallest positive value of $c$ in the lattice or $\overline{\rm MS}$ scheme. We use the exact expressions for $\Omega_m$ and $\Omega_{-2}$; nevertheless, the NNLO truncated expression for $\Omega_m$ is precise enough to yield the same result, and for $\Omega_{-2}$ we could truncate even earlier with no visible effect. We show the results in Fig. 7. We stress that several terms of the hyperasymptotic expansion are included. First, we nicely see that, once the minimum $N \sim N_P$ is reached, both schemes yield similar precision, but in the lattice scheme (bigger factorization scale $\mu$) more terms of the perturbative expansion are needed to reach the same precision. We can see a gap when $\Omega_m$ is included, with significantly better precision in the $\overline{\rm MS}$ scheme. One important lesson one may extrapolate from this exercise is that, if the number of perturbative coefficients is fixed, the smaller the renormalization scale $\mu$, the better: one obtains much better precision for an equal number of perturbative coefficients. Another observation is that the minimal term determined numerically need not coincide with the minimal term predicted by theory.

FIG. 5. As in Fig. 1 but with $n_f = 3$ light flavors and in the $\overline{\rm MS}$ scheme. The value of $N_P$ depends on the scale $1/\bar m$ we use. For instance, for $c$ positive, …, $N_P = 2$ for $1/\bar m \in [\ldots, 0.775]$, and $N_P = 1$ for $1/\bar m \in [0.78, 1.5]$.

TABLE I. $\bar m\,\Omega_{-2}$ in the large-$\beta_0$ approximation for $n_f = 0$ in $r_0^{-1}$ units, compared with Eq. (38) truncated at different powers of $\alpha$. The upper panel is computed in the $\overline{\rm MS}$ scheme, the lower panel in the lattice scheme. The lattice scheme seems to do better, but both schemes yield very good results.
In the lattice scheme the effect is so small that it cannot be seen, and the precision is set by the next renormalon, located at $d = 3$. If one makes $\bar m$ small, $\bar m = 1.25$ GeV, the green and orange points mix in the $\overline{\rm MS}$ scheme. This effect is more pronounced in the lattice scheme, where one can continuously move from the orange to the green points. The effect of the ultraviolet renormalon is very small, and the precision is set by the $u = 3/2$ renormalon.
B. $(N,\mu) \to \infty$. Eq. (4). Case (2)

We take Eq. (29) in the large-$\beta_0$ limit by setting $b = 0$. As before, we have no analytic expression to compare with (unlike the case of the static potential). Therefore, we directly focus on taking the limit (2B), numerically checking its convergence and how it compares with method (1).
Method (2B) has the pleasant feature that the generated $\mathcal{O}(\Lambda_{\rm QCD})$ correction complies with the OPE. It also yields results that no longer depend on $N$ (and $\mu$). Still, it has some errors and does not reach the precision of method (1). There is a residual scheme dependence associated to uncomputed terms of $\mathcal{O}(\alpha\Lambda_{\rm QCD})$. Part of it can be estimated from the residual dependence on $c_0$; to do so, we compute $m_A$ for different values of $c_0$. On the one hand, $c_0$ cannot be very large, as $c_0\,\alpha(\bar m)$ should be relatively close to zero. On the other hand, we cannot let $c_0\,\alpha(\bar m)$ get arbitrarily close to zero, as the $\mathcal{O}(\Lambda_{\rm QCD})$ correction diverges logarithmically in $c_0$. We also note that there is a value $c_0 = c_0^{\min}$ for which $K^{(A)}_X = 0$, so that the $\mathcal{O}(\Lambda_{\rm QCD})$ correction vanishes. For illustration we show some results in Fig. 8. We draw lines for $m_{\rm PV} - m_A - K^{(A)}_X\Lambda_X$ at $c_0 = 1$ and $c_0 = c_0^{\min}$, generating a band. We also explore the dependence on the scheme by comparing the results in the lattice and $\overline{\rm MS}$ schemes. We stress again that, in the large-$\beta_0$ approximation, the lattice and $\overline{\rm MS}$ schemes basically correspond to a redefinition of $\mu$, albeit quite a large one. On the other hand, the final result is $\mu$ independent. Nevertheless, the way the $\mu \to \infty$ limit is taken is fixed by $N_A$, as defined in Eq. (4), which depends on $\mu$. This explains why different results are obtained.
In Fig. 8 we also compare with results obtained using method (1); more specifically, we compare with $m_{\rm PV} - m_P - \bar m\,\Omega_m$, as both have analogous power accuracy (though method (1) is parametrically more precise). For $\Omega_m$ we take the exact expression, but using its approximate expression does not change the discussion, as the difference is very small. What we see is that the $\overline{\rm MS}$ scheme yields more precise predictions than the lattice scheme, and that method (1) yields considerably better results than method (2B).
Another issue specific to method (2B) is determining how large one needs to take $N$ (and consequently $\mu$) for the truncated sum to approximate $m_A$ well. For illustrative purposes we show the convergence in Fig. 9 for $n_f = 3$ in the lattice and $\overline{\rm MS}$ schemes. We find that we have to go to relatively large values of $\mu$ (and $N$) to get a precise result. This can be a problem if one wants to go beyond the large-$\beta_0$ approximation. The problem would be less severe if one could use the asymptotic expression for the coefficients beyond a certain $n$. Nicely enough, we find that using the asymptotic expression for the coefficients with $n > N^*$ ($\sim 3$ in the $\overline{\rm MS}$ and $\sim 8$ in the lattice scheme) is very efficient and basically yields the same results as the exact coefficients. Finally, we also recall that approximating $m_A$ well by the truncated sum is more costly for small values of $c_0$.

IV. $\bar\Lambda_{\rm PV}$ FROM THE LATTICE AND B PHYSICS
We now abandon the large-$\beta_0$ approximation. Our aim is to determine $\bar\Lambda_{\rm PV}$. We will determine it first in gluodynamics ($n_f = 0$) in Sec. IV B. To study the scheme dependence of the result, it will be useful to estimate the higher-order coefficients of the $\beta$ function in the Wilson-action lattice scheme. We do so in the next section.
A. $\beta$-function coefficients in the Wilson-action lattice scheme

In [22,23] it was shown that renormalon dominance allows one to obtain an accurate value for $\beta_3^{\rm latt}$, assuming that $c_3$ [see Eq. (44)] is already saturated by the renormalon in the $\overline{\rm MS}$ scheme. We can estimate higher-order terms of the $\beta$ function in the lattice scheme (using the Wilson action) by also assuming that for $n > 3$ the coefficients $c_n$ in the $\overline{\rm MS}$ scheme are saturated by the renormalon. We show such estimates in Table III. These coefficients of the $\beta$ function improve the agreement with the phenomenological parametrization of $\alpha_{\rm latt}(1/a)$ obtained in [28] in the range $\beta \in (6, 6.8)$ (see Fig. 10). It is also worth mentioning that we observe a geometric growth of the coefficients of the $\beta$ function. Speculatively, this would indicate that the beta function in this scheme has a finite radius of convergence, and one can take a geometric ansatz for the higher-order coefficients, which would have a pole at around $\beta = 6/g^2 \simeq 3.8$.
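As a sketch of the logic (our illustration, with a synthetic coefficient sequence, since Table III is not reproduced here): geometric growth $c_{n+1}/c_n \to 1/\alpha^*$ implies a finite radius of convergence $\alpha^*$, and a pole at coupling $\alpha^*$ translates into $\beta \equiv 6/g^2 = 3/(2\pi\alpha^*)$ via $\alpha = g^2/(4\pi)$:

```python
import math

def convergence_radius(coeffs):
    # Ratio-test estimate of the radius of convergence of sum c_n * a**n,
    # taken from the last available pair of coefficients.
    ratios = [abs(coeffs[i] / coeffs[i + 1]) for i in range(len(coeffs) - 1)]
    return ratios[-1]

# Synthetic, purely illustrative sequence with exact geometric growth
# c_{n+1}/c_n = 1/r (a real analysis would use the Table III estimates):
r = 0.1257
coeffs = [(1 / r) ** n for n in range(4, 10)]
alpha_star = convergence_radius(coeffs)

# A pole of the beta function at alpha = alpha_star corresponds, via
# alpha = g**2/(4*pi), to beta = 6/g**2 = 3/(2*pi*alpha_star).
beta_pole = 3 / (2 * math.pi * alpha_star)
print(beta_pole)  # ≈ 3.8 for this illustrative alpha_star
```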

B. $\bar\Lambda_{\rm PV}$ from the lattice
We determine $\bar\Lambda_{\rm PV}$ in gluodynamics ($n_f = 0$) from the energy of a meson made of a static quark and a light valence quark. $\delta m^{\rm PV}_{\rm latt}$ has an asymptotic series in powers of $\alpha = \alpha_{\rm latt}(1/a)$, with $r_n^{(as)}(\nu)\big|_{\nu = 1/a} = c_n^{(as)}$, since $m_{\rm PV}$ and $\delta m^{\rm PV}_{\rm latt}$ have the same leading infrared renormalon (located at $d = 1$). The coefficients $c_n$ are known for $n = 0$ to 19 in the lattice scheme for a Wilson action [21][22][23]. We then adapt Eq. (19) to $\delta m^{\rm PV}_{\rm latt}$ to determine $\bar\Lambda_{\rm PV}$, with $\delta m^P_{\rm latt} = \frac{1}{a}\sum_{n=0}^{N_P} c_n \alpha^{n+1}$. In the counting of Eq. (15) this corresponds to $(1,N_P)$ precision. The expression we use for $\Omega_m$ is Eq. (21) truncated at $\mathcal{O}(\alpha^3)$, here and in the rest of the paper. The error committed by this truncation is smaller than the error associated to $Z_m$; therefore, we will neglect it in the following. The renormalon behavior associated to subleading renormalons of $E_{\rm MC}(a)$ is not well known, except that the next singularity in the Borel plane is expected to be at $|u| = 1$ ($d = 2$). Therefore, we stop the second perturbative expansion at $N' = 2N_P$, such that the remainder should be of $\mathcal{O}(a\Lambda^2_{\rm QCD})$. For the coefficients $c_n^{(as)}$ we use $Z^{\rm latt}_m(n_f = 0) = 17.9(1.0)$ [23]. We also truncate the $1/n$ expansion in Eq. (26) at $\mathcal{O}(1/n^3)$. This means using the estimates for $\beta_3$ and $\beta_4$ listed in Table III. We take $E_{\rm MC}(a)$ from [30][31][32]. These points span the energy range $1/a \sim 2.93\,r_0^{-1}$ to $9.74\,r_0^{-1}$. We show our results in Fig. 11. They follow the same logic as Figs. 1-6 in Sec. III. We observe that the subtraction of the perturbative expansion accounts for most of the $1/a$ dependence. Still, we have enough precision to be sensitive to $\mathcal{O}(a\Lambda^2_{\rm QCD})$ effects. A strict fit setting the $\mathcal{O}(a\Lambda^2_{\rm QCD})$ correction to zero gives a large $\chi^2_{\rm red} \sim 6$-$7$. The inclusion of a pure $Ka$ term in Eq. (45) gives a good fit: the statistical error is small and $\chi^2_{\rm red} = 1.17/1.06$ (for the smallest $|c|$ with positive/negative $c$) is good.
Overall, we obtain the result quoted in Eq. (46) (using the smallest positive $c$, which means $N_P = 7$ except for $\beta = 5.7$, where we have $N_P = 6$). This number is not very different from the one obtained in [29] using a superasymptotic approximation truncated at the minimal term determined numerically (typically this gives slightly better results than truncating at the minimal term predicted by theory).
Let us now discuss the error budget in Eq. (46). The first error is the statistical error of the fit. The remaining errors are different ways to estimate the error produced by the approximate knowledge of the hyperasymptotic expansion. One possibility is to take the modulus of the difference with the evaluation using the negative $c$ with the smallest possible modulus. This is the second error we quote in Eq. (46). The last error we include is due to the variation of $Z^{\rm latt}_m(n_f = 0) = 17.9(1.0)$ [23] (correlated with the error of the $c_n$). The error it produces in $\Omega_m$ is small. Comparatively, most of the error associated to $Z_m$ comes from the differences in $\sum_{n=N_P+1}^{N'=2N_P}\frac{1}{a}\,[c_n - c_n^{(as)}]\alpha^{n+1}$. The difference we obtain is $-0.08$. This is significant, showing that the $1/n$ corrections are sizable in the lattice scheme. On the other hand, the difference is well inside the error associated to $Z_m$. Actually, the difference with evaluations including the $\mathcal{O}(1/n^4)$ corrections in the asymptotic expressions for $c_n^{(as)}$ is $-0.03$. This shows a convergent pattern, which we illustrate in Table IV. Overall, the largest source of uncertainty comes from the incomplete knowledge of $\sum_{n=N_P+1}^{N'=2N_P}\frac{1}{a}\,[c_n - c_n^{(as)}]\alpha^{n+1}$, which is closely linked to the incomplete knowledge of $Z_m$. This discussion suggests that more accurate determinations of $Z_m$ are possible and, consequently, that the error of $\bar\Lambda_{\rm PV}$ associated to $Z_m$ could be made smaller.

FIG. 11. As in Fig. 1 of [29], including more terms in the perturbative expansion using the $\beta$-function coefficients listed in Table III.

Unlike for the pole mass, it is not clear which operator of the OPE would produce the NP correction and the associated $u = 1$ renormalon. Therefore, while for the pole mass we can be certain that the NP correction has the form $Ka$, without any anomalous dimension or nontrivial $\ln(a)$ dependence, we cannot exclude the possibility that this $\mathcal{O}(a)$ correction has a nontrivial anomalous dimension and/or $\ln(a)$ dependence.
We believe these issues deserve further study that we leave for future work.
One error that we do not include here is the error associated to $\alpha$. From the lattice point of view, we are talking about the relation between $\alpha(1/a)$ and $r_0$. We use the phenomenological formula deduced in [28]. The error of this formula is claimed to be around 0.5-1% in the range $\beta \in (5.7, 6.92)$ (looking at Fig. 10, a more conservative range could be (6, 6.8)).

Scheme dependence
It is interesting to consider the scheme dependence of Eq. (46). In [29] relatively large differences were found for fits to $\bar\Lambda$ after (approximate) scheme conversion to the $\overline{\rm MS}$ scheme. The real problem is not transforming the coefficients $c_n$ from the lattice to the $\overline{\rm MS}$ scheme, but transforming $\alpha_{\rm latt}$ to $\alpha_{\overline{\rm MS}}$ with enough precision (in a way, we need the relation between $\alpha_{\rm latt}$ and $\alpha_{\overline{\rm MS}}$ with NP, exponential, accuracy). This requires the coefficients of the $\beta$ function in the lattice scheme to high orders. We show estimates in Table III. The inclusion of these $\beta$-function coefficients brings the determinations of $\bar\Lambda_{\rm PV}$ in the $\overline{\rm MS}$ and lattice schemes closer together as we include more terms in the perturbative expansion of the relation between $\alpha_{\overline{\rm MS}}$ and $\alpha_{\rm latt}$. We show the comparison in Table IV.

C. $\bar\Lambda^{\rm pot}_{\rm PV}$ from the lattice

As an extra check of the method, we now consider the ground-state energy of two static sources in the fundamental representation at a fixed distance $r_0$ computed on the lattice: $E_{\Sigma^+_g}(r_0, a)$. This object has the same renormalon as twice the pole mass. Following [33] we define the quantity $\bar\Lambda_{\rm pot}(a)$, where $\Delta$ is just a constant that fixes the normalization at $r = r_0$. For $\bar\Lambda_{\rm pot}(a)$ we perform an OPE assuming $r_0 \gg a$ and compute it using the PV prescription. We show the results in Fig. 12. The lattice data are taken from [34,35], as analyzed in [33]. A nicely flat curve appears. This object does not show $\mathcal{O}(a\Lambda^2_{\rm QCD})$ artifacts. This is consistent with the discussion in [28], though there the discussion was only made for energy differences. This has the potentially important consequence that potentials computed with different $\beta$'s can be related by perturbation theory with good accuracy. There is no need to subtract independent constants for each $\beta \equiv 6/g^2$.

D. Λ̄_PV from the B meson mass
We now move to the physical case with n_f = 3 light quarks. Using HQET we approximate the B/D meson mass (we use spin-averaged masses). It is not the aim of this paper to determine m̄_b (nor m̄_c). We are rather interested in the error associated to determinations of m_PV if m̄ is known, and vice versa. We will later use this analysis for the top quark mass determination. For this purpose we use the hyperasymptotic approximation of m_PV, Eq. (50). To make the error analysis we use the bottom case, and take m̄_b = 4.186 GeV from [36]. We obtain the value quoted in Eq. (51) (we have added −2 MeV to the relation between the MS bottom mass and the pole mass due to the charm quark [13]). For the variation of μ we take the range μ ∈ (m̄_b/2, 2m̄_b).
For Z_m we take Z_m^MS(n_f = 3) = 0.5626(260) from [13]. For the variation of α we take Λ_MS^(n_f=3) = 332 ± 17 MeV from [37]. The central value has been obtained with N_P = 3 (c = 0.3611). Therefore, the last term of Eq. (50) is set to zero, as we do not have more terms of the perturbative expansion. In the counting of Eq. (15) the precision is then (1,0). Within the hyperasymptotic counting, the Σ_{n=N_P+1}^{N′=2N_P} [r_n − r_n^(as)] α^{n+1} term should roughly scale as dictated by the next renormalon (assuming it is located at |u| = 1). This is the expected scaling if μ ∼ m̄. Nevertheless, the dependence on μ will be quite different depending on whether the next renormalon is ultraviolet (∼ μ^{-2}) or infrared (∼ μ^2). Actually, the magnitude is also expected to be different, being more important for an eventual infrared renormalon. As the situation is somewhat uncertain we do not dwell further on this issue. To roughly estimate the size of subleading terms we can compute including the known terms up to some N_max. In the counting of Eq. (15) the precision is then (1, N_max − N_P). The difference is below 1 MeV (after including [r_3 − r_3^(as)] α^4; otherwise the difference is 7.5 MeV). Even computing with N_P = 1, which formally allows us to reach the next renormalon located at 2N_P = 2 [i.e., (1, N_P) precision in the counting of Eq. (15)], the difference is ∼7 MeV. Alternatively, the remaining μ-scale dependence of m_P(m̄_{b/c}) − m̄_{b/c}Ω_m also gives a measure of the uncomputed Σ_{n=N_P+1}^{N′=2N_P} [r_n − r_n^(as)] α^{n+1} term, as such scale dependence should cancel in the total sum. We will then take it as the associated error. This is the first error quoted in Eq. (54). Actually, the error associated to Z_m is also a measure of the lack of knowledge of higher-order terms in perturbation theory. Therefore, there is some degree of double counting in considering these two errors separately.
It is interesting to analyze the error of the superasymptotic approximation of m_PV (i.e., only computing m_P). If we vary μ in the range μ ∈ (m̄_b/2, 2m̄_b) with N_P = 3, we obtain a result that, nicely enough, agrees with Eq. (51) within one sigma. This is also so if we take N_P = 2 (c = 1.7935): m_{b,P} = 4922^{+107}_{−167} MeV. We find that the scale dependence of the superasymptotic approximation is large. Therefore, the inclusion of m̄Ω_m is crucial to make the result much more scale independent. On the other hand, note that there is no error associated to Z_m at this order (which is, in any case, comparatively small).
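The size of the minimal term, and hence the parametric error of the superasymptotic approximation discussed above, can be illustrated with a toy u = 1/2 renormalon series of the large-β_0 type, r_n ∝ μ (β_0/(2π))^n n! (overall normalization set to one and a one-loop coupling; an illustration, not the actual QCD coefficients):

```python
import math

def alpha_1loop(mu, lam, nf=3):
    """One-loop coupling with nf light flavors."""
    beta0 = 11.0 - 2.0 * nf / 3.0
    return 2.0 * math.pi / (beta0 * math.log(mu / lam))

def toy_series_terms(mu, lam=0.332, nf=3, nmax=8):
    """Terms mu * (beta0/(2*pi))^n * n! * a^(n+1) of a toy u=1/2 series."""
    beta0 = 11.0 - 2.0 * nf / 3.0
    a = alpha_1loop(mu, lam, nf)
    return [mu * (beta0 / (2.0 * math.pi)) ** n * math.factorial(n) * a ** (n + 1)
            for n in range(nmax + 1)]

terms = toy_series_terms(4.186)  # mu ~ mbar_b, lam ~ Lambda_MS(nf=3)
n_min = min(range(len(terms)), key=lambda n: terms[n])
# the minimal term sits at n ~ 2*pi/(beta0*a) = ln(mu/Lambda),
# and its size is parametrically O(sqrt(a)*Lambda), i.e., ~Lambda_QCD
```

For μ = m̄_b the minimal term sits at n = 2–3 and is a few hundred MeV, of the same order as the large μ dependence of m_P found above before m̄Ω_m is added.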
Using Eq. (51) and Eq. (49) we can determine Λ̄_PV. We work at leading order in 1/m. We obtain Eq. (54), where we have included an extra error source compared with Eq. (51). This extra error is associated to the O(1/m) corrections. The existence or not of genuine NP 1/m corrections may introduce a significant error. In case they exist, if we take the hyperfine energy splitting as a measure of 1/m corrections, we find shifts from the central values of order ∼46 MeV and ∼140 MeV for B and D mesons, respectively. As Eq. (54) has been obtained from the B meson spin-averaged mass, we conservatively estimate the error associated to genuine NP 1/m corrections to be of order ∼46 MeV, as this is the best we can do from phenomenology and perturbation theory. Let us recall, however, that recent lattice simulations point to much smaller genuine NP 1/m corrections for the spin-independent average [38]. Earlier direct determinations of Λ̄_PV or m_PV can be found in [39,40]. The formulas are equivalent to those used here to one order less (using N_P = N_max = 2). They also include fewer terms in the sum in Eq. (26). More recently, a determination of Λ̄ has been obtained in [38] using lattice data. In this case the formulas are equivalent to those used here, since N_P = 3 (see Eq. (57) of Ref. [1]), except for the fact that the scale μ was always fixed equal to the heavy quark mass and that the mass was obtained in the MRS scheme [41]. That reference also gives the relation between the PV and the MRS mass. Using it we obtain Eq. (55), where we combine in quadrature the errors of Z_m^MS and Λ_MS. The prediction of [38] then translates to Λ̄_PV = 435(31) MeV, where we only include the error quoted in [38]. In particular, we do not include the error in Eq. (55). Note that Eq. (55) scales like O(Λ_QCD), whereas m̄Ω_m scales like O(√α Λ_QCD). There is a 40 MeV difference with the number given in Eq. (54). 10 MeV can be understood because the value of m̄_b used in [38] is around 10 MeV larger.
Another 10 MeV can be understood from the inclusion of 1/m nonperturbative effects. The remaining 20 MeV difference is more difficult to identify, though it is well inside uncertainties. Leaving aside the different α's used, another source of difference is the value of Z_m. The value used in [38] comes from [42] (where the effect of scale variation was not included in the error analysis). This determination used a sum rule that is free of the leading pole mass renormalon. The possibility of using sum rules to determine the normalization of renormalons was first considered in [43]. For the determination of Z_m, sum rules were first used in [12]. Later sum rule analyses can be found in [44]. Alternatively, one can use the ratio of the exact and asymptotic expressions of the coefficients r_n to determine Z_m, as in [13,21-23,45]. For an extra discussion on this issue see [46]. Finally, it is worth mentioning that Z_m can be determined either from the static potential or from the pole mass (and its relatives). The only value of Z_m that uses the static potential is from [13]. A preference for determinations of Z_m from the static potential can be theoretically motivated, as they are less affected by subleading renormalons: there are no ultraviolet renormalons, and the next infrared renormalon is located at u = 3/2.
On the other hand, the pole mass is expected to have renormalons at |u| = 1. Only in the event that there is no u = 1 renormalon and the effect of the u = −1 renormalon is subleading would both determinations be on equal footing on theoretical grounds. In any case, irrespective of this discussion, consistent numbers are obtained between different analyses.
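One of the Z_m extraction strategies mentioned above, taking the ratio of the exact and asymptotic expressions of the coefficients r_n, can be sketched on synthetic data. The "exact" coefficients below are generated by hand, with an assumed normalization Z_TRUE = 0.56 and an invented 1/n correction, purely to show how the ratio flattens out at the true normalization as n grows; these are not the QCD coefficients:

```python
import math

beta0 = 9.0    # n_f = 3, one loop
Z_TRUE = 0.56  # normalization used to generate the synthetic data

def r_asymptotic(n, mu=1.0):
    """Leading asymptotic form ~ mu * (beta0/(2*pi))^n * n! (normalization 1)."""
    return mu * (beta0 / (2.0 * math.pi)) ** n * math.factorial(n)

def r_exact(n, mu=1.0):
    """Synthetic 'exact' coefficients: asymptotics times Z*(1 + 0.5/n)."""
    corr = 1.0 + 0.5 / n if n >= 1 else 1.0
    return Z_TRUE * r_asymptotic(n, mu) * corr

# ratio estimator: approaches Z_TRUE as the subleading 1/n correction dies off
estimates = [r_exact(n) / r_asymptotic(n) for n in range(1, 9)]
```

In real applications the convergence of this ratio in n is limited by exactly the effects discussed above: unknown 1/n corrections (higher β-function coefficients) and subleading renormalons.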
E. (N, μ) → ∞. Eq. (4). Case (2B)

All previous determinations of Λ̄_PV have been obtained using limit (1). For completeness we have also explored how limit (2B) performs for Λ̄_PV, even though it is, in principle, less precise. We have considered the different methods to take the limit (2B) discussed at the end of Sec. II B, and compared with the numbers obtained above. We first consider the evaluation of m_PV using the right-hand side of Eq. (33) with Eq. (34). The central value is determined using c_0^min = 1.076, which is the value that makes K_MS^(A) = 0, and μ_0 = m̄_b = 4.186 GeV. We obtain Λ̄_PV = 453 MeV. The difference with Eq. (54) is 24 MeV, which is quite reasonable. We can also explore the μ dependence. Taking the variation μ_0 ∈ (m̄_b/2, 2m̄_b), we obtain Λ̄_PV = 453^{+55}_{−36}(μ) MeV. Compared with Eq. (54), the μ-scale dependence is much larger. We next consider the limit as taken in Eq. (32). This requires the knowledge of the coefficients r_n to all orders. For n > 3 we take the asymptotic expression. On the other hand, the running of the beta function is only needed at one loop. This allows us to go to orders as high as N_A = 3000 (though the result already converges at smaller values of N_A). Remarkably enough, we obtain the same result as before: 453 MeV. There is a residual dependence on c_0. For illustration, if we take instead c_0 = 2, we obtain Λ̄_PV = 438 MeV (the result using the right-hand side of Eq. (33) with Eq. (34) yields the same number), and the scale dependence is larger: +99 MeV (μ). The value of c_0 we have used to make the analysis can be an issue. As discussed in [1,17], taking χ − 2 very small deteriorates the convergence, and larger values of N_A are needed. This problem is ameliorated by taking larger values of c_0. Since for the limit as taken in Eq. (32) we can go to very large N_A, this is not a problem. We have also performed a similar analysis with n_f = 0 and r_0 units, relevant for the analyses performed in Sec. IV B.
The discussion parallels the one we just had, with the difference that we now know 20 coefficients of the perturbative expansion. The value we obtain, Λ̄_PV = 1.37 r_0^{−1} (using a quadratic fit), is indeed quite close to the value obtained in Sec. IV B, though less precise.
We have more problems with the other ways of taking the μ → ∞ limit discussed at the end of Sec. II B. The direct use of N_A in Eq. (4), or of N_A in Eq. (67) of [1], requires, besides the coefficients r_n to all orders, the β-function coefficients to all orders as well. We do not have them. Instead, we use truncated versions of the β function. This makes the numerical calculation much more challenging, since the running in μ is more complicated. Therefore, we had problems going to very large N_A. For N_A ≥ 200 we find instabilities in some cases. As mentioned before, the value of c_0 we use to make the analysis can be an issue. Taking χ − 2 very small deteriorates the convergence. This problem is ameliorated by taking larger values of c_0. In the lattice-scheme determination of the quenched Λ̄_PV we indeed observe convergence to the value obtained before using c_0 = 2. Using c_0^min = 1.076 the convergence is worse. Determinations in the MS scheme do not show convergence if we stop at N_A ≤ 200, though with a slightly better behavior using c_0 = 2 rather than c_0^min. Overall, as the precision we get with method (2B) is worse than with method (1), we will not study this limit in more detail.

A. About the pole mass ambiguity
The top quark mass is one of the key parameters of the standard model. A lot of experimental work has been devoted to its determination (see for instance [47-49]). While this is a matter of debate, it is typically assumed that the masses obtained from experiment correspond to the pole mass. Thus, there has been an ongoing discussion on the intrinsic uncertainty of these determinations (see for instance [45,50], and [51] for a more recent discussion). We believe that, without further qualifications, the question is ill posed, or may lead to confusion. It is well known that the pole mass is well defined (infrared finite and gauge independent) at finite (albeit arbitrary) order in perturbation theory [52]. It is also well known that this series is divergent.⁵ Therefore, no numerical value can be assigned to the infinite sum of the perturbative series of the pole mass. Truncated sums are well defined but depend on the order of truncation (a detailed discussion relevant for the analysis made in the present paper can be found in [1]). These truncated sums can be related to observables, or to intermediate definitions of the heavy quark mass like the PV mass (which regulates the infinite sum via Borel resummation), in a well-defined way.
In this context, the shortest answer to the question posed above is that the ambiguity (of a well-defined mass) is zero. As a matter of principle, m_PV (or m_P) can be defined with arbitrary accuracy (this also applies to any threshold mass), if one computes high enough orders of the perturbative series, and if m̄ is given. One can discuss (actually, one can compute) their scheme/scale dependence (if any). In this respect, there is not much conceptual difference with respect to asking about the scheme/scale dependence of minimal subtraction schemes for the heavy quark masses.

⁵ Actually this is only proven in the large-β_0 approximation [19,20], and it is also supported by numerical analyses [21-23], but there is no analytic proof.
A quite different question is to determine the typical difference (not the ambiguity) between (reasonable) different definitions of the pole mass. The short answer to this question is that the differences are (at most) of order Λ_QCD for (reasonable) different definitions of the pole mass. We emphasize that one cannot be more precise without stating the specific definition used for the pole mass. For instance, the difference between m_PV and m_P is of O(√α Λ_QCD) with a known prefactor. Truncations of the perturbative series at orders N near N* are also legitimate definitions of the pole mass. The typical difference between truncating at different N is of order Λ_QCD: see for instance Eq. (62) of [22]. One could even use M_B as a definition of the pole mass. Its difference with m_PV is of order Λ_QCD. If one defines an imaginary mass by doing the Borel integral just above the positive real axis, the difference with m_PV is of O(iΛ_QCD). The authors of [45] choose to divide this number by π and take the modulus as their definition of the ambiguity. These examples illustrate that, even if the ambiguity is of O(Λ_QCD), the coefficient multiplying Λ_QCD is arbitrary.
Overall, it should be clear that not much more can be said, and we advise against dwelling too much on this issue. Instead, we strongly advocate avoiding generic discussions about the pole mass, which is not well defined beyond perturbation theory, and restricting the discussion to the precision and errors of specific, NP well-defined, heavy quark masses whose perturbative expansions can be related to the perturbative expansion of the pole mass.
Once working with NP well-defined heavy quark masses like m_{t,PV} or m_{t,P}, we can address the more relevant question of determining the precision with which m̄_t can be determined if m_{t,PV} or m_{t,P} is known (and vice versa: if m̄_t is known, what is the uncertainty of m_PV), with the present-day knowledge of the perturbative expansion. In other words, with which precision the theoretical expression is known. For reference we will take the value m̄_t = 163 GeV in the following. We will see in the next section that the precision is indeed quite good, and that the error is significantly smaller than typical numbers assigned to the ambiguity of the pole mass. We will not dwell in this paper on the precision with which m_{t,PV} or m_{t,P} can be determined from experiment, as such a discussion is observable dependent.

B. Decoupling and running
We now turn to an issue specific to the top quark (as compared with the bottom and charm quarks). The top quark mass is much larger than Λ_QCD. The latter is the scale that characterizes renormalon-associated effects, and it is the precision we want to achieve. This obviously generates ratios of quite disparate scales. In the context of threshold masses with an explicit infrared cutoff ν_f, this calls for resummation of the large logarithms ln(ν_f/m_t). This is possible, and was first done in [33] in the RS scheme (see also [46] for an extra discussion on this issue). Here, we approach the problem in a different way. We want to work with expansions for the perturbative series of the pole mass truncated at the minimal term, m_P, and to improve upon it using hyperasymptotic expansions. Nevertheless, at the scale of the top mass, we do not have enough terms to reach the asymptotic behavior of the perturbative expansion. We use instead the fact that the top quark pole mass and the pole mass of a fictitious top quark with mass m′_t share the same leading infrared renormalon. Therefore, the leading infrared renormalon cancels in the difference. We can then decrease the top mass in a renormalon-free way until we reach a top mass low enough that we can use the hyperasymptotic expansion. Such renormalon-free running is determined by the function F(m̄; n_f) given in Eq. (56) (it is not compulsory to take μ = m̄, but it simplifies the computation). This formula is correct up to N ∼ 2N_P, since m̄Ω_m and r_n^(as) are independent of m̄ [see Eq. (19)], so that their derivatives with respect to m̄ vanish. The coefficients r_n^(n_f) are evaluated for n_f massless quarks. In the context of the MSR threshold mass the running is implemented in a similar way (see, for instance, [44]).
Equation (56) makes explicit that such running is just a natural consequence of the relation between observables and their OPEs (for illustration, it follows from the fact that M_B − M_D, the B minus D meson mass difference, is free of the leading infrared renormalon), and is not linked to a specific threshold mass definition.
There is still another issue specific to the top quark: there are two heavy quarks (the bottom and charm), with masses much larger than Λ_QCD, that generate extra corrections to the pole-MS mass relation due to the finite mass of the bottom and charm quarks. Therefore, for m̄ ∼ m̄_t we write m_PV(m̄) = m̄ plus a perturbative series that includes finite bottom and charm mass corrections δm, where it is implicit that N_max (the number of known terms of the perturbative expansion) is not large enough to see the decoupling of the bottom or the charm, and certainly N_max < N_P. n_f stands for the number of active flavors. At the top mass scale we take n_f = 5. The O(α^2) term of δm was computed in [53] and the O(α^3) term in [54]. Note as well that at O(α^3) there is a new contribution involving vacuum polarizations of the bottom and charm at the same time. We name it δm_bc^(n_f), and it has been computed in [50]. As we decrease the value of m̄_t the bottom and charm quarks will decouple. This decoupling will be absorbed in δm_{b/c/bc}^(n_f), which are polynomials in powers of α^(n_f). In general this is not just changing n_f in the original expressions from n_f = 5 to n_f = 4 or 3. The explicit expressions can be found in Appendix B.
The renormalon is associated to scales smaller than the bottom and charm quark masses. Therefore, such scales should be decoupled before we talk about the hyperasymptotic expansion. As mentioned above, we do such decoupling by varying the mass of the top until we reach a fictitious top with a mass small enough such that, first the bottom, and later the charm, decouple. Overall, our final formula is Eq. (58). We emphasize that F(m̄; n_f) is expanded in powers of α before integration. We take μ_b small enough such that the bottom decouples and μ_c small enough such that the bottom and charm decouple, and also such that we reach the asymptotic limit of the pole-MS mass perturbative expansion with the existing known coefficients. The O(μ_c e^{−(2π/(β_0 α(μ_c)))(1+ln 2)}) term stands for subleading corrections in the hyperasymptotic expansion, which are not known.
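The bookkeeping of this renormalon-free running with decoupling can be sketched as follows. We keep only the O(α) piece of F (i.e., F ≈ (4/(3π)) α(m̄), a sketch of the structure, not the high-order expression used in the text), use one-loop running, and match the coupling by continuity at the thresholds μ_b = 20 GeV and μ_c = 5 GeV; the value of Λ^(n_f=5) is illustrative:

```python
import math

def beta0(nf):
    return 11.0 - 2.0 * nf / 3.0

def alpha(mu, nf, lam):
    """One-loop coupling for a given nf and Lambda parameter."""
    return 2.0 * math.pi / (beta0(nf) * math.log(mu / lam))

def match_down(lam_high, nf_high, mu_thr):
    """Lambda for nf-1 flavors from continuity of alpha at mu_thr (one loop)."""
    l_high = math.log(mu_thr / lam_high)
    return mu_thr * math.exp(-l_high * beta0(nf_high) / beta0(nf_high - 1))

lam5 = 0.21                        # illustrative Lambda^(nf=5) in GeV (assumption)
lam4 = match_down(lam5, 5, 20.0)   # decouple bottom at mu_b = 20 GeV
lam3 = match_down(lam4, 4, 5.0)    # decouple charm at mu_c = 5 GeV

def F_approx(m):
    """Leading-order sketch of the renormalon-free running function."""
    if m > 20.0:
        return 4.0 / (3.0 * math.pi) * alpha(m, 5, lam5)
    if m > 5.0:
        return 4.0 / (3.0 * math.pi) * alpha(m, 4, lam4)
    return 4.0 / (3.0 * math.pi) * alpha(m, 3, lam3)

# trapezoid integral of F from mu_c = 5 GeV up to mbar_t = 163 GeV
xs = [5.0 + i * (163.0 - 5.0) / 2000 for i in range(2001)]
integral = sum(0.5 * (F_approx(xs[i]) + F_approx(xs[i + 1])) * (xs[i + 1] - xs[i])
               for i in range(2000))
```

The point of the exercise is structural: the coupling is matched continuously at each threshold, and the integrand never invokes the top-scale asymptotics, mirroring how the renormalon-free running lets one descend from m̄_t to a scale where the hyperasymptotic expansion applies.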
Let us now discuss in more detail the dependence on the bottom and charm quarks, in particular the effects associated to the fact that they have masses much bigger than Λ_QCD (for the analysis we take m̄_b = 4.186 GeV and m̄_c = 1.223 GeV [36], but the sensitivity to the specific values we use is very tiny). As already discussed in [19], the natural scale of an n-loop integral is not m̄_t but m̄_t e^{−n}. For the case of the bottom versus the charm quark, it was observed in [13] that the charm quark effectively decouples at O(α^2)-O(α^3) for the charm quark effects in the bottom pole mass-MS mass relation. If we lower the mass of the top we can also observe at which scales it is more convenient to decouple the bottom and charm quarks in the top pole mass-MS mass relation. This is illustrated in Fig. 13, where we plot the corrections associated to the bottom and charm with and without decoupling in terms of the fictitious top mass (assuming a single heavy quark). Obviously, for very large top masses it is not convenient to do the decoupling. Nevertheless, as we decrease the mass of the top it becomes much more effective to decouple, first the bottom, and afterwards the charm quark. Once this is done, the corrections due to the bottom and charm masses to Eq. (59) are very small. Compared to other errors, the uncertainty associated to the O(α^4) corrections is negligible. Also the correction associated to the bottom and charm quark masses to Eq. (58) is, compared to the total running, very small. From this analysis we will take as central values μ_b = 20 GeV and μ_c = 5 GeV. For these values the correction is small; the specific value depends on μ_b and μ_c, but the good convergence and smallness of this correction holds true for other values of μ_b and μ_c. The implementation of the decoupling of the bottom and charm in [50] produces a much larger correction.
An even larger effect is observed in the implementation performed in [45], where the perturbative expansion is always performed at the scale of the top mass (using renormalon-based estimates for the higher-order coefficients), decoupling the bottom, and later the charm, depending on the order of perturbation theory. Therefore, we take our numbers as optimal, and the associated error as negligible compared with other uncertainties. We next explore the convergence pattern of the perturbative expansion. We first consider the perturbative expansion associated to F. We observe a convergent pattern. For the last two terms the convergence deteriorates; on the other hand, the perturbative expansion becomes sign alternating. This may indicate sensitivity to the u = −1 renormalon. We discuss this further in the next section. For a sign-alternating asymptotic perturbative expansion the leftover is ∼ −1/2 × (the last computed term) (see [5]).⁷ Therefore, we take it as the error of the truncation of the perturbative expansion, which is the error we quote in Eq. (61). We also explore the dependence of Eq. (58) on μ_b and μ_c. The dependence is very small, as we can see in Fig. 14. For μ_c the variation is negligible, and for μ_b one gets variations of ∼5 MeV for a central value of μ_b of around 20 GeV. Therefore, we will neglect it in the total error budget. The other source of error is associated to the approximate determination of Eq.
Finally, we also include the error associated to α. Combining all errors we obtain our final result for m_{t,PV} at m̄_t = 163 GeV. By far the largest uncertainty is associated to α. Within the purely theoretical error budget, the error is associated to higher-order corrections in perturbation theory. They show up in different ways. One is the approximate knowledge of Z_m, which shows up in Ω_m. The other is the error in μ, which is a measure of the uncomputed O(e^{−2π/(β_0 α)}) terms. Both would profit from higher-order perturbative computations. We have also explored other sources of uncertainty, and find them to be comparatively very small: the error (and the effect) associated to the finite mass of the bottom and charm quarks is found to be very small, and similarly for variations in the values of μ_b and μ_c.

⁷ We emphasize that these arguments do not apply to IR renormalons (and in particular to the u = 1/2 renormalon).
It is also useful to make the error estimate of the ratio of the PV and MS top masses. Note that there is no ambiguity error associated to this number. Except for α, all errors are associated to the lack of knowledge of higher-order terms of the perturbative expansion. In comparison with [45], we find that our result is less sensitive to Z_m and to its associated error.
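The error combination used throughout this section (independent theoretical sources combined in quadrature) is the standard one; as a trivial sketch with placeholder numbers (not the actual error budget of the text):

```python
import math

def combine_in_quadrature(errors):
    """Total error = sqrt(sum of squares), valid for independent sources."""
    return math.sqrt(sum(e * e for e in errors))

# placeholder error sources, illustrative only
total = combine_in_quadrature([3.0, 4.0])  # -> 5.0
```

Note the caveat raised earlier for the bottom case: when two quoted errors partly measure the same missing higher-order terms (as for Z_m and the μ variation), they are correlated, and quadrature then involves some double counting.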

C. |u| = 1 renormalons
The perturbative expansion of F(m̄; n_f) is free of the u = 1/2 renormalon. Therefore, it is the ideal object on which to study the subleading renormalons of the pole mass. In principle, these are located at u = 1 and u = −1. The existence of an infrared renormalon at u = 1 has been a matter of debate [15]. The existence of an ultraviolet renormalon at u = −1 can be established in the large-β_0 approximation [19,20] but not beyond. With respect to this discussion, some interesting observations can be drawn from our analysis. The coefficients f_n show an interesting dependence on n_f (with changes of sign of different powers of n_f). In Table V we give the values of f_n for different values of n_f, and also in the large-β_0 approximation. We observe that for n_f = 3 the O(α^4) coefficient flips sign. For n_f = 6, the O(α^3) and O(α^4) coefficients flip sign. The situation is somewhat puzzling. Let us first note that the signs of the coefficients would be interchanged compared with the large-β_0 predictions (for n_f = 3). This could still be understood from a u = −1 renormalon if Z_{−2}^X flips sign from the large-β_0 prediction to real QCD. This would indicate a large dependence of Z_{−2}^X on n_f compared with what has been seen for Z_m^X, where the large-β_0 approximation gave the right sign and order of magnitude. For n_f → ∞, the results agree with QED expectations (β_0 becomes negative and the perturbative series is non-sign-alternating). For n_f = 6 we observe that the last two terms are negative. One may then wonder whether what we are seeing for n_f = 6 (and maybe also for n_f = 3) is that the u = −1 renormalon becomes effectively infrared. Obviously, we need higher-order coefficients f_n to clarify this issue.⁸ It is usual lore that infrared renormalons dominate over ultraviolet ones (this is somewhat based on large-β_0 analyses, where ultraviolet renormalons are typically suppressed by a factor ∼ e^{d c_2^X} whereas infrared renormalons are enhanced by a factor ∼ e^{−d c_2^X}).
If we take this seriously, and also take the numbers we obtain for f_n as an indication of the existence of the u = −1 renormalon, this may indicate that the u = 1 renormalon is indeed zero. In this respect, it is worth mentioning the analysis of [38], where the NP correction associated to the u = 1 renormalon was found to be zero within errors. This is consistent with this discussion.
On the theoretical side, it is also interesting to see where the u = 1 renormalon would show up in a perturbative computation of the heavy quarkonium mass. For the purposes of this discussion, the heavy quarkonium mass would read as twice the pole mass plus the expectation values of the static potential V_0, the kinetic term, and the 1/m_Q potential V_1/m_Q. OPE analyses in the static limit show that V_0 does not have a renormalon at u = 1. The virial theorem, ⟨p^2/m_Q⟩_nl = ⟨r V_0′⟩_nl, also guarantees that the kinetic term does not have such a u = 1 renormalon. Therefore, any possible u = 1 infrared renormalon of the pole mass should cancel with the analogous infrared renormalon of the V_1/m_Q potential. The fact that the latter can be written in closed form in terms of Wilson loops [57] may open an avenue for studying this issue in further detail. This is postponed to future work.

TABLE V. The coefficients f_n of F(m̄; n_f). Note that f_4(n_f = 0) has a 9% error from the determination in [9]. The n_f = 10^20 case is used as a test for comparison with the large β_0. The last three (four) rows are the coefficients f_n in the large-β_0 approximation. The coefficients of the perturbative expansion of the pole mass itself are also polynomials in powers of n_f; the sign dependence of the different powers of n_f has been studied in [55,56].

VI. CONCLUSIONS
In this paper we have constructed hyperasymptotic expansions for the heavy quark pole mass (and for associated quantities) regulated using the PV prescription, along the lines of [1]. We generalize the discussion of that reference by including possible ultraviolet renormalons. Such an organization of the computation allows us to have parametric control of the error committed when truncating the hyperasymptotic expansion.
In Sec. III the hyperasymptotic expansion of the pole mass of a heavy quark in the large-β_0 approximation is computed. We use it as a toy-model observable to test our methods. It works as expected. We can see the u = 1/2 infrared renormalon and the u = −1 ultraviolet renormalon. The next infrared renormalon is located at u = 3/2. Compared with the static potential case studied in [1] in the large-β_0 approximation, the infrared renormalons are located at the same points in the Borel plane. On the other hand, the pole mass has ultraviolet renormalons, whereas the static potential does not. In practice the main difference comes from the relevance of the u = −1 renormalon. In general, because of the u = −1 renormalon, it is necessary to stop the second perturbative expansion [see Eq. (19)] at N ∼ 2 × 2π/(β_0 α); otherwise the perturbative series would start to diverge, as we can observe in Fig. 7 in the MS scheme. Nevertheless, the importance of this renormalon heavily depends on the factorization scale μ one uses. If one takes μ high enough, one could indeed do perturbation theory until N ∼ 3 × 2π/(β_0 α), where the u = 3/2 renormalon shows up. We can see the irrelevance of the u = −1 renormalon in the lattice scheme, which is equivalent to the MS scheme with a much larger μ, in Fig. 7. One should keep in mind, though, that one needs perturbation theory to a much higher order in the lattice scheme to reach the same precision as in the MS scheme. We expect this qualitative behavior of ultraviolet renormalons to also hold true beyond the large-β_0 approximation.
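The statement that the second series must be stopped at N ∼ 2 × 2π/(β_0 α) because of the u = −1 renormalon can be seen in a toy model: a renormalon at |u| = 1 produces terms growing like (β_0/(4π))^n n! (sign-alternating for the ultraviolet one), so its minimal term sits at roughly twice the order of the u = 1/2 series. The sketch below compares the two minimal-term positions with normalizations set to one and an illustrative coupling (not the QCD series):

```python
import math

def minimal_term_index(growth, a, nmax=40):
    """Index of the smallest |term| of sum_n growth^n * n! * a^(n+1)."""
    terms = [abs(growth) ** n * math.factorial(n) * a ** (n + 1)
             for n in range(nmax + 1)]
    return min(range(len(terms)), key=lambda n: terms[n])

beta0 = 9.0  # n_f = 3
a = 0.2      # illustrative coupling

n_ir = minimal_term_index(beta0 / (2.0 * math.pi), a)  # u = 1/2: n ~ 2*pi/(beta0*a)
n_uv = minimal_term_index(beta0 / (4.0 * math.pi), a)  # |u| = 1: n ~ 4*pi/(beta0*a)
# n_uv comes out about twice n_ir
```

Increasing μ (decreasing a) pushes both minimal terms to higher orders, which is the toy-model analogue of the u = −1 renormalon becoming irrelevant in the lattice scheme.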
We next move to real QCD. We have performed determinations of Λ̄_PV using quenched lattice QCD. For these observables, perturbative expansions to high orders are available [21-23]. This allows us to test the method and go beyond the superasymptotic approximation and the leading term in the hyperasymptotic approximation. We observe O(aΛ_QCD^2) corrections for the B meson mass in the static approximation, but not for an analogous observable obtained from the static potential. Nevertheless, we do not have enough precision to quantitatively study these effects. The limiting factor is the error of the normalization of the leading renormalon and, relatedly, the lack of knowledge of the higher-order beta function coefficients. The latter affects the O(1/n) corrections to the asymptotic formula for the coefficients of the perturbative series. These effects are sizable in the lattice scheme. On the other hand, they are quite small in the MS scheme. On top of that, the higher-order coefficients of the perturbative expansion of δm_latt are not known with enough precision to disentangle the subleading renormalon (their error is strongly correlated with the error of Z_m). All these considerations preclude quantitative analyses beyond the leading term in the hyperasymptotic approximation. Further investigations are needed to improve on these issues, particularly on the error of Z_m, which also affects the discussion below.
We also determine Λ̄_PV from the physical B meson mass, assuming that the MS heavy quark mass is known. The result can be found in Eq. (54). In this analysis, we determine the error associated to the incomplete knowledge of the perturbative expansion in determinations of the heavy quark mass. We translate this result to the case of the top mass, which we study in detail in Sec. V. In that section the issue of the uncertainty of the (top) pole mass is critically reexamined. In particular, the bottom and charm quark finite-mass effects are carefully incorporated. In our implementation we find these to be very small. We find the present uncertainty in the relation between m̄_t and m_PV to be δm_{t,PV} = 28 MeV (for m̄_t = 163 GeV), where we have combined the theoretical errors quoted in Eqs. (63) and (64) in quadrature. There is no ambiguity associated to the renormalon in this number. The precision is systematically improvable as more terms of the perturbative expansion become known in the future. Interestingly enough, it seems we have found some evidence for the existence of the next renormalon at u = −1, but not of a possible renormalon at u = 1. We believe this makes a quantitative determination of the renormalization group structure of the u = −1 renormalon, which to our knowledge is lacking, very timely. We leave this for future work.

ACKNOWLEDGMENTS
We thank M. Steinhauser for comments on the manuscript. C. A. thanks the IFAE group at Universitat Autònoma de Barcelona for warm hospitality during part of this work. This work was supported in part by the Spanish FPA2017-86989-P and SEV-2016-0588 grants from the ministerio de Ciencia, Innovación y Universidades, and the 2017SGR1069 grant from the Generalitat de Catalunya; and by the Chilean