
Price of Anarchy for Highly Congested Routing Games in Parallel Networks

Abstract

We consider nonatomic routing games with one source and one destination connected by multiple parallel edges. We examine the asymptotic behavior of the price of anarchy as the inflow increases. In accordance with some empirical observations, we prove that under suitable conditions on the costs the price of anarchy is asymptotic to one. We show with some counterexamples that this is not always the case, and that these counterexamples already occur in simple networks with only 2 parallel links.

References

1. Attouch, H.: Variational Convergence for Functions and Operators. Pitman, Boston (1984)

2. Beckmann, M.J., McGuire, C.B., Winsten, C.B.: Studies in the Economics of Transportation. Yale University Press, New Haven (1956)

3. Bingham, N.H., Goldie, C.M., Teugels, J.L.: Regular Variation, volume 27 of Encyclopedia of Mathematics and its Applications. Cambridge University Press, Cambridge (1989)

4. Cole, R., Tao, Y.: Large market games with near optimal efficiency. In: Conitzer, V., Bergemann, D., Chen, Y. (eds.) Proceedings of the 2016 ACM Conference on Economics and Computation, EC ’16, July 24–28, 2016, pp. 791–808. ACM, Maastricht (2016)

5. Correa, J.R., Schulz, A.S., Stier-Moses, N.E.: Selfish routing in capacitated networks. Math. Oper. Res. 29(4), 961–976 (2004)

6. Correa, J.R., Schulz, A.S., Stier-Moses, N.E.: Fast, fair, and efficient flows in networks. Oper. Res. 55(2), 215–225 (2007)

7. Correa, J.R., Schulz, A.S., Stier-Moses, N.E.: A geometric approach to the price of anarchy in nonatomic congestion games. Games Econ. Behav. 64(2), 457–469 (2008)

8. de Haan, L.: On Regular Variation and its Application to the Weak Convergence of Sample Extremes, volume 32 of Mathematical Centre Tracts. Mathematisch Centrum, Amsterdam (1970)

9. Dumrauf, D., Gairing, M.: Price of anarchy for polynomial Wardrop games. In: Spirakis, P., Mavronicolas, M., Kontogiannis, S. (eds.) Internet and Network Economics: Second International Workshop, WINE 2006, Patras, Greece, December 15–17, 2006, Proceedings. Springer, Berlin (2006)

10. Englert, M., Franke, T., Olbrich, L.: Sensitivity of Wardrop equilibria. Theory Comput. Syst. 47(1), 3–14 (2010)

11. Feldman, M., Immorlica, N., Lucier, B., Roughgarden, T., Syrgkanis, V.: The price of anarchy in large games. In: Wichs, D., Mansour, Y. (eds.) Proceedings of the 48th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2016, June 18–21, 2016, pp. 963–976. ACM, Cambridge (2016)

12. Florian, M., Hearn, D.: Network equilibrium and pricing. In: Hall, R.W. (ed.) Handbook of Transportation Science, pp. 373–411. Springer US, Boston (2003)

13. González Vayá, M., Grammatico, S., Andersson, G., Lygeros, J.: On the price of being selfish in large populations of plug-in electric vehicles. In: 2015 54th IEEE Conference on Decision and Control (CDC), pp. 6542–6547 (2015)

14. Josefsson, M., Patriksson, M.: Sensitivity analysis of separable traffic equilibrium equilibria with application to bilevel optimization in network design. Transp. Res. B Methodol. 41(1), 4–31 (2007)

15. Koutsoupias, E., Papadimitriou, C.: Worst-case equilibria. In: STACS 99 (Trier), volume 1563 of Lecture Notes in Computer Science, pp. 404–413. Springer, Berlin (1999)

16. Law, L.M., Huang, J., Liu, M.: Price of anarchy for congestion games in cognitive radio networks. IEEE Trans. Wirel. Commun. 11(10), 3778–3787 (2012)

17. Mas-Colell, A.: On a theorem of Schmeidler. J. Math. Econ. 13(3), 201–206 (1984)

18. Milchtaich, I.: Generic uniqueness of equilibrium in large crowding games. Math. Oper. Res. 25(3), 349–364 (2000)

19. Milchtaich, I.: Social optimality and cooperation in nonatomic congestion games. J. Econ. Theory 114(1), 56–87 (2004)

20. O’Hare, S.J., Connors, R.D., Watling, D.P.: Mechanisms that govern how the price of anarchy varies with travel demand. Transp. Res. B Methodol. 84, 55–80 (2016)

21. Panageas, I., Piliouras, G.: Approximating the geometry of dynamics in potential games. Technical report, arXiv:1403.3885v5 (2015)

22. Papadimitriou, C.: Algorithms, games, and the Internet. In: Proceedings of the Thirty-Third Annual ACM Symposium on Theory of Computing, pp. 749–753. ACM, New York (2001)

23. Patriksson, M.: Sensitivity analysis of traffic equilibria. Transp. Sci. 38(3), 258–281 (2004)

24. Pigou, A.C.: The Economics of Welfare, 1st edn. Macmillan and Co., London (1920)

25. Piliouras, G., Nikolova, E., Shamma, J.S.: Risk sensitivity of price of anarchy under uncertainty. In: Proceedings of the Fourteenth ACM Conference on Electronic Commerce, EC ’13, pp. 715–732. ACM, New York (2013)

26. Roughgarden, T.: The price of anarchy is independent of the network topology. J. Comput. Syst. Sci. 67(2), 341–364 (2003)

27. Roughgarden, T.: Routing games. In: Algorithmic Game Theory, pp. 461–486. Cambridge University Press, Cambridge (2007)

28. Roughgarden, T., Tardos, É.: How bad is selfish routing? J. ACM 49(2), 236–259 (2002)

29. Roughgarden, T., Tardos, É.: Bounding the inefficiency of equilibria in nonatomic congestion games. Games Econ. Behav. 47(2), 389–403 (2004)

30. Roughgarden, T., Tardos, É.: Introduction to the inefficiency of equilibria. In: Algorithmic Game Theory, pp. 443–459. Cambridge University Press, Cambridge (2007)

31. Schmeidler, D.: Equilibrium points of nonatomic games. J. Statist. Phys. 7, 295–300 (1973)

32. Wardrop, J.G.: Some theoretical aspects of road traffic research. In: Proceedings of the Institution of Civil Engineers, Pt. II, vol. 1, pp. 325–378 (1952)

33. Youn, H., Gastner, M.T., Jeong, H.: Price of anarchy in transportation networks: efficiency and optimality control. Phys. Rev. Lett. 101, 128701 (2008)

Acknowledgments

Riccardo Colini-Baldeschi is a member of GNAMPA-INdAM. Roberto Cominetti gratefully acknowledges the support and hospitality of LUISS during a visit in which this research was initiated. Roberto Cominetti’s research is also supported by FONDECYT 1130564 and Núcleo Milenio ICM/FIC RC130003 “Información y Coordinación en Redes”. Marco Scarsini is a member of GNAMPA-INdAM. He gratefully acknowledges the support and hospitality of FONDECYT 1130564 and Núcleo Milenio “Información y Coordinación en Redes”.

Author information

Correspondence to Riccardo Colini-Baldeschi.

Additional information

This article is part of the Topical Collection on Special Issue on Algorithmic Game Theory (SAGT 2016)

Appendices

Appendix A: Regularly Varying Functions

The reader is referred to [3] for an extended treatment of regularly varying functions. Here we gather a few basic properties that are useful for our results.

Lemma 1

Let β > 0 and let Θ be a continuous and strictly increasing function. Then the following properties are equivalent:

  1. (a)

    the function Θ is β-regularly varying,

  2. (b)

    the function Θ^{-1} is \(\frac{1}{\beta}\)-regularly varying,

  3. (c)

    for all γ > 0,

    $$\lim\limits_{x \to \infty} \frac{1}{x}{\Theta}^{-1}(\gamma {\Theta}(x)) = \gamma^{1/\beta}. $$

Proof

The equivalence of (a) and (b) is proved in [8, page 22]. The equivalence of (b) and (c) is immediate, since, by setting u = Θ(x), we have

$$\frac{1}{x} {\Theta}^{-1}(\gamma \cdot {\Theta}(x)) = \frac{{\Theta}^{-1}(\gamma \cdot u)}{{\Theta}^{-1}(u)} \to \gamma^{1/\beta}. \qquad $$

Lemma 2

If Θ is a continuous and strictly increasing β-regularly varying function, then x ⋅ Θ(x) and \({\int_{0}^{x}} {\Theta}(s) \, \mathrm{d} s\) are (1 + β)-regularly varying functions.

Proof

For the function x ⋅ Θ(x) it suffices to note that

$$\lim\limits_{x \to \infty} \frac{ax {\Theta}(ax)}{x {\Theta}(x)} = a \cdot a^{\beta} = a^{1+\beta}. $$

For \({\int_{0}^{x}} {\Theta}(s) \, \mathrm{d} s\), a direct application of l’Hôpital’s rule gives

$$\lim_{x \to \infty} \frac{{\int}_{0}^{ax} {\Theta}(s) \ \mathrm{d} s}{{{\int}_{0}^{x}} {\Theta}(s) \ \mathrm{d} s} = \lim_{x \to \infty} \frac{ {\Theta}(ax) a}{ {\Theta}(x)} = a^{\beta} a = a^{1+\beta}. \qquad $$
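As a side illustration (not part of the original text), the following Python sketch checks Lemma 1(c) and Lemma 2 numerically for the 2-regularly varying function Θ(x) = x² log(1 + x); the helper names theta and theta_inv and the test values are our own choices, and the slow convergence reflects the slowly varying factor log(1 + x).

```python
import math

# Numerical illustration (not from the paper) of Lemma 1(c) and Lemma 2 for the
# 2-regularly varying function Theta(x) = x^2 * log(1 + x).
BETA, GAMMA, A = 2.0, 3.0, 2.0

def theta(x):
    return x ** 2 * math.log1p(x)

def theta_inv(u, lo=0.0, hi=1e12):
    # invert the strictly increasing function theta by bisection
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if theta(mid) < u else (lo, mid)
    return 0.5 * (lo + hi)

for x in (1e2, 1e4, 1e6):
    lemma1c = theta_inv(GAMMA * theta(x)) / x          # Lemma 1(c): -> GAMMA**(1/BETA)
    lemma2  = (A * x) * theta(A * x) / (x * theta(x))  # Lemma 2:    -> A**(1 + BETA)
    print(f"x={x:.0e}  (c): {lemma1c:.3f} -> {GAMMA ** (1 / BETA):.3f}"
          f"   x*Theta: {lemma2:.3f} -> {A ** (1 + BETA):.3f}")
```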

The following two lemmata appear in [3, Proposition 1.5.7].

Lemma 3

For i = 1, 2, let Θ_i be a continuous and strictly increasing β_i-regularly varying function. Then Θ_1 ∘ Θ_2 is β_1β_2-regularly varying.

Lemma 4

Let Θ_1 and Θ_2 be two continuous and strictly increasing β-regularly varying functions. Then Θ_1 + Θ_2 is β-regularly varying.

Appendix B: Omitted Proofs

1.1 Proofs of Section 4

Throughout this subsection, for the sake of simplicity, we call x the flow on e_1 and y the flow on e_2.

Proof (of Theorem 4)

Let us study the price of anarchy for M ∈ (2a^k, 2a^{k+1}], keeping in mind that a ≥ 2.

Equilibrium cost:

In the subinterval M ∈ (2a^k, a^k + a^{k+1}] we have

$$\begin{array}{ll} x^{*}=M-a^{k}, &\qquad c_{1}(x^{*})= M-a^{k}\le a^{k + 1},\\ y^{*}=a^{k}, &\qquad c_{2}(y^{*})= a^{k}. \end{array} $$

For M ∈ (a^k + a^{k+1}, 2a^{k+1}] we have

$$\begin{array}{ll} x^{*}=a^{k + 1}, &\qquad c_{1}(x^{*})= a^{k + 1},\\ y^{*}=M-a^{k + 1}, &\qquad c_{2}(y^{*})= a^{k + 1}. \end{array} $$

Therefore

$${\mathsf{WEq}}({\Gamma}_{M})= \left\{\begin{array}{ll} (M-a^{k})^{2}+a^{2k}&\;\text{for } M\in(2a^{k}, a^{k}+a^{k + 1}],\\ Ma^{k + 1}&\;\text{for } M\in(a^{k}+a^{k + 1},2a^{k + 1}]. \end{array}\right. $$
Optimal cost:

In order to compute the optimal cost

$${\mathsf{Opt}}({\Gamma}_{M})=\min\limits_{0\leq y\leq M} yc_{2}(y)+(M-y)^{2} $$

we decompose the problem over the intervals I_j = (a^j, a^{j+1}] on which c_2(⋅) is constant, namely, we consider the subproblems

$$C_{j}=\min\limits_{y\in I_{j},y\leq M}a^{j + 1}y+(M-y)^{2}. $$

We observe that for j ≥ k + 2 we have a^j ≥ a^{k+2} ≥ 2a^{k+1} ≥ M, so that C_j is infeasible and therefore \({\mathsf{Opt}}({\Gamma}_{M})=\min\{C_{0},C_{1},\ldots,C_{k+1}\}\). In fact, we will show that \({\mathsf{Opt}}({\Gamma}_{M})=\min\{C_{k-1},C_{k}\}\).

Let us compute C_j. Since (M − y)^2 is symmetric around y = M, the constraint y ≤ M can be dropped, and then the minimum C_j is obtained by projecting onto [a^j, a^{j+1}] the unconstrained minimizer y_j = M − a^{j+1}/2. We get

$$ C_{j}= \left\{\begin{array}{ll} a^{j + 1}a^{j}+(M-a^{j})^{2}&\text{ if } M<a^{j}+\frac{a^{j + 1}}{2},\\ a^{j + 1}\left( M-\frac{a^{j + 1}}{2}\right)+\left( \frac{a^{j + 1}}{2}\right)^{2}&\text{ if } a^{j}+\frac{a^{j + 1}}{2}\leq M\leq a^{j + 1}+\frac{a^{j + 1}}{2},\\ a^{j + 1}a^{j + 1}+(M-a^{j + 1})^{2}&\text{ if } M>a^{j + 1}+\frac{a^{j + 1}}{2}. \end{array}\right. $$
(5)
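As a quick sanity check of formula (5) (an aside, not part of the proof), the following Python sketch compares (5) with a direct grid minimization of a^{j+1}y + (M − y)^2 over I_j ∩ [0, M]; the values of a, k and M are arbitrary choices of ours, and the minimum over j is attained at j ∈ {k − 1, k}, as claimed below.

```python
import numpy as np

# Sanity check of formula (5); a, k and M are arbitrary illustrative choices.
a, k = 2, 3
M = 24.8                                   # some demand in (2a^k, 2a^{k+1}] = (16, 32]

def C_formula(j):
    lo, hi = a ** j, a ** (j + 1)
    if M < lo + hi / 2:                    # unconstrained minimizer M - hi/2 lies left of I_j
        y = lo
    elif M > 3 * hi / 2:                   # it lies right of I_j
        y = hi
    else:                                  # it lies inside I_j
        y = M - hi / 2
    return hi * y + (M - y) ** 2

def C_grid(j):
    ys = np.linspace(a ** j, min(a ** (j + 1), M), 200001)
    return (a ** (j + 1) * ys + (M - ys) ** 2).min()

vals = [C_formula(j) for j in range(k + 2)]
for j in range(k + 2):
    print(j, round(C_formula(j), 3), round(C_grid(j), 3))
print("argmin j =", int(np.argmin(vals)))  # k-1 or k, in line with the claims below
```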

Claim 7

For j ≤ k − 1 we have C_j = a^{j+1}a^{j+1} + (M − a^{j+1})^2 and C_{j−1} ≥ C_j.

Proof

The expression for C_j follows from (5) if we note that \(M > 2a^{k} \ge \frac{3}{2}a^{j+1}\). In order to prove that C_{j−1} ≥ C_j we observe that

$$\begin{array}{@{}rcl@{}} C_{j-1} \ge C_{j} &\iff& (a^{j})^{2}+(M-a^{j})^{2} \ge (a^{j})^{2}a^{2}+(M-a^{j}a)^{2}\\ &\iff& 2(a^{j})^{2} + M^{2} - 2Ma^{j} \ge 2(a^{j})^{2} a^{2} + M^{2} - 2Ma^{j} a \\ &\iff& Ma^{j}(a-1) \ge (a^{j})^{2}(a^{2}-1) \\ &\iff& M \ge a^{j}(a + 1) = a^{j}+a^{j + 1}. \end{array} $$

Since M > 2a^k = a^k + a^k ≥ a^j + a^{j+1}, this holds true. □

Claim 8

C_{k+1} = a^{k+2}a^{k+1} + (M − a^{k+1})^2 ≥ C_{k−1}.

Proof

Since \(M \le 2 a^{k+1} \le a^{k+1}+\frac{a^{k+2}}{2}\) we get the expression for C_{k+1} from (5). Then

$$\begin{array}{@{}rcl@{}} C_{k-1} \le C_{k + 1} &\iff& (a^{k})^{2} + (M-a^{k})^{2} \le a^{k + 2}a^{k + 1} + (M-a^{k + 1})^{2} \\ &\iff& 2(a^{k})^{2} + M^{2} - 2Ma^{k} \le (a^{k})^{2}a^{3} + (a^{k})^{2}a^{2} + M^{2} - 2Ma^{k + 1} \\ &\iff& 2Ma^{k}(a-1) \le (a^{k})^{2}(a-1)(a^{2}+ 2a + 2) \\ &\iff& 2M \le a^{k}(a^{2}+ 2a + 2). \end{array} $$

Since M ≤ 2a^{k+1} it suffices to have 4a^{k+1} ≤ a^k(a^2 + 2a + 2), which is easily seen to hold. □

Combining the previous claims we get that \({\mathsf{Opt}}({\Gamma}_{M})=\min\{C_{k-1},C_{k}\}\). It remains to figure out which one between C_{k−1} and C_k attains the minimum. This depends on where M is located within the interval (2a^k, 2a^{k+1}], as explained in our next claim. In the sequel we denote

$$\begin{array}{@{}rcl@{}} \alpha&=&1+\frac{a}{2}\\ \beta&=&1+\frac{a}{2}+\sqrt{a-1}\\ \gamma&=&\frac{3}{2}a \end{array} $$

and we observe that

$$2\leq\alpha\leq\beta\leq\gamma\leq 2a. $$

Claim 9

For M ∈ (2a^k, 2a^{k+1}] we have

$${\mathsf{Opt}}({\Gamma}_{M})= \left\{\begin{array}{ll} C_{k-1}= (a^{k})^{2} + (M-a^{k})^{2}&\text{ if } M\in (2a^{k},\alpha a^{k})\\ C_{k-1}= (a^{k})^{2} + (M-a^{k})^{2}&\text{ if } M\in [\alpha a^{k},\beta a^{k})\\ C_{k}= a^{k + 1}(M-\frac{1}{4}a^{k + 1})&\text{ if } M\in [\beta a^{k},\gamma a^{k}]\\ C_{k}= (a^{k + 1})^{2} + (M-a^{k + 1})^{2}&\text{ if } M\in (\gamma a^{k},2 a^{k + 1}]. \end{array}\right. $$

Proof

Recall that a ≥ 2. From (5) we have C_{k−1} = (a^k)^2 + (M − a^k)^2, whereas the expression for C_k changes depending on where M is located.

  1. (a)

Initial interval M ∈ (2a^k, αa^k).

Here \(M<a^{k}+\frac{1}{2}a^{k+1}\) so that (5) gives C_k = a^{k+1}a^k + (M − a^k)^2. Hence, clearly C_{k−1} ≤ C_k and Opt(Γ_M) = C_{k−1}.

  2. (b)

Final interval M ∈ (γa^k, 2a^{k+1}].

Here \(M>\gamma a^{k} =\frac{3}{2}a^{k+1}\) so that (5) gives C_k = (a^{k+1})^2 + (M − a^{k+1})^2. Proceeding as in the proof of Claim 7, we have C_{k−1} ≥ C_k if and only if M ≥ a^k + a^{k+1}. The latter holds since \(M \ge \frac{3}{2}a^{k+1} \ge a^{k+1}+a^{k}\). Hence Opt(Γ_M) = C_k.

  3. (c)

Intermediate interval M ∈ [αa^k, γa^k].

Here \(a^{k}+\frac {1}{2}a^{k + 1}\leq M \leq \frac {3}{2}a^{k + 1}\) so that (5) gives

$$C_{k}= a^{k + 1}\left( M-\frac{1}{2}a^{k + 1}\right)+\left( \frac{1}{2}a^{k + 1}\right)^{2}=a^{k + 1}\left( M-\frac{1}{4}a^{k + 1}\right). $$

Then, denoting z = M/a^k we have

$$\begin{array}{@{}rcl@{}} C_{k-1}\le C_{k} &\iff& 2(a^{k})^{2} + M^{2} - 2Ma^{k} \le a^{k + 1}M -\left( \frac{1}{2}a^{k + 1}\right)^{2}.\\ &\iff& z^{2}-z(2+a)+\left( 2+\frac{1}{4}a^{2}\right)\leq 0\\ &\iff& 1+\frac{1}{2}a-\sqrt{a-1} \leq z \leq 1+\frac{1}{2}a+\sqrt{a-1}. \end{array} $$

The upper limit for z is precisely β while the lower limit is smaller than α. Hence Opt(Γ_M) = C_{k−1} for M ∈ [αa^k, βa^k] and Opt(Γ_M) = C_k for M ∈ [βa^k, γa^k]. □

Figure 4 illustrates the different intervals in which the equilibrium (above) and the optimum (below) change. Notice that Opt(Γ_M) varies continuously even at breakpoints, whereas WEq(Γ_M) has a jump at a^k + a^{k+1}. We now proceed to examine the price of anarchy, which will be expressed as a function of z = M/a^k.

Fig. 4: Breakpoints for optimum and equilibrium

From the expressions of WEq(Γ_M) and Opt(Γ_M) (see Fig. 4) it follows that PoA(Γ_M) = 1 throughout the initial interval M ∈ (2a^k, βa^k). Over the next interval M ∈ [βa^k, a^k + a^{k+1}] we have

$${\mathsf{PoA}}({\Gamma}_{M})=\frac{(a^{k})^{2}+(M-a^{k})^{2}}{a^{k + 1}(M-a^{k + 1}/4)}=\frac{1+(z-1)^{2}}{a(z-a/4)}, $$

which increases from 1 at z = β up to (4 + 4a^2)/(a(4 + 3a)) at z = 1 + a.

At M = a^k + a^{k+1} the equilibrium has a discontinuity and PoA(Γ_M) jumps to (4 + 4a)/(4 + 3a), and then it decreases over the interval \(M\in (a^{k}+a^{k+1},\frac{3}{2}a^{k+1})\) as

$${\mathsf{PoA}}({\Gamma}_{M})=\frac{a^{k + 1}M}{a^{k + 1}(M-a^{k + 1}/4)}=\frac{z}{z-a/4}. $$

Finally, for \(M\in (\frac {3}{2}a^{k + 1},2a^{k + 1}]\) the price of anarchy continues to decrease as

$${\mathsf{PoA}}({\Gamma}_{M})=\frac{a^{k + 1}M}{(a^{k + 1})^{2}+(M-a^{k + 1})^{2}}=\frac{az}{a^{2}+(z-a)^{2}}. $$

going back to 1 at z = 2a, which corresponds to M = 2a^{k+1}.

Thus the price of anarchy oscillates over each interval (2a^k, 2a^{k+1}] between a minimum value of 1 and a maximum of (4 + 4a)/(4 + 3a). This completes the proof of Theorem 4. □
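The oscillation can also be seen numerically. The following Python sketch (an illustration, not part of the proof) re-implements the equilibrium case formulas and formula (5) for the arbitrary choice a = 2, k = 3, and confirms that PoA(Γ_M) ranges between 1 and (4 + 4a)/(4 + 3a) = 1.2 over M ∈ (2a^k, 2a^{k+1}].

```python
import numpy as np

# Numerical illustration of the oscillation in Theorem 4 (a sketch, not part of the proof).
a, k = 2, 3

def weq(M):
    # equilibrium cost, from the two cases computed above
    if M <= a ** k + a ** (k + 1):
        return (M - a ** k) ** 2 + a ** (2 * k)
    return M * a ** (k + 1)

def C(j, M):
    # restricted minima, formula (5)
    lo, hi = a ** j, a ** (j + 1)
    if M < lo + hi / 2:
        return hi * lo + (M - lo) ** 2
    if M > 3 * hi / 2:
        return hi * hi + (M - hi) ** 2
    return hi * (M - hi / 4)

def opt(M):
    return min(C(j, M) for j in range(k + 2))

Ms = np.linspace(2 * a ** k + 1e-6, 2 * a ** (k + 1), 400001)
poa = np.array([weq(M) / opt(M) for M in Ms])
print("min PoA ≈", round(poa.min(), 4))          # ≈ 1
print("max PoA ≈", round(poa.max(), 4),
      "at M ≈", round(Ms[poa.argmax()], 3))      # ≈ 1.2, just above M = a^k + a^{k+1} = 24
```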

Proof (of Theorem 5)

Consider a parallel network with two edges, with a quadratic cost c_1(x) = x^2 on the upper edge and a lower-edge cost defined by linearly interpolating c_1, that is, for a ≥ 2 we let (see Fig. 5)

$$c_{2}(y)=(a^{k-1}+a^{k})y-a^{k-1}a^{k},\quad \text{for } y\in{[a^{k-1},a^{k}]},\quad k\in\mathbb{Z}. $$
Fig. 5: x^2 and its linear interpolation

Note that c_1 and c_2 are convex. Consider the optimal cost problem

$${\mathsf{Opt}}({\Gamma}_{M})=\min\limits_{{\begin{array}{c}x+y=M \\ x,y\geq 0 \end{array}}}x^{3}+y c_{2}(y). $$

Since the function h(y) = y c_2(y) is non-differentiable, the optimality condition reads 3x^2 ∈ ∂h(y). In particular, the subdifferential at y = a^k is

$$\partial h(a^{k})=[a^{2(k-1)}(2a^{2}\,+\,a),a^{2k}(2\,+\,a)] $$

and there is a range of values of M for which the optimal solution is y = a^k. The smallest such M is obtained when 3x^2 = a^{2(k−1)}(2a^2 + a). This gives as optimal solution y = a^k and x = a^{k−1}b, with \(b=\sqrt{(2a^{2}+a)/3}\), corresponding to M_k = a^{k−1}[a + b] with optimal value

$${\mathsf{Opt}}({\Gamma}_{M_{k}})=a^{3(k-1)}[b^{3}+a^{3}]. $$

In order to find the equilibrium for M_k we solve the equation x^2 = c_2(y) with x + y = M_k. A routine calculation gives x = a^{k−1}c and y = a^{k−1}d with

$$\begin{array}{@{}rcl@{}} c&=&\frac{1}{2}\left[\sqrt{(a + 1)^{2}+ 4a^{2}+ 4(a + 1)b}-(a + 1)\right],\\ d&=&a+b-c. \end{array} $$

Note that 1 < d < a so that y ∈ (a^{k−1}, a^k), and therefore the equilibrium cost is

$${\mathsf{WEq}}({\Gamma}_{M_{k}})=a^{3(k-1)}[c^{3}+(a + 1)d^{2}-ad]. $$

Putting together the previous formulae we get

$${\mathsf{PoA}}({\Gamma}_{M_{k}})=\frac{c^{3}+(a + 1)d^{2}-ad}{b^{3}+a^{3}}. $$

For a = 2 this expression evaluates to \({\mathsf{PoA}}({\Gamma}_{M_{k}})\approx 1.0059\), from which the result follows. □
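For reference, a one-shot Python evaluation (an illustration, not part of the proof) of the closed-form expression above at a = 2:

```python
import math

# Evaluate the closed-form PoA(Gamma_{M_k}) derived above at a = 2 (illustration only).
a = 2.0
b = math.sqrt((2 * a ** 2 + a) / 3)
c = 0.5 * (math.sqrt((a + 1) ** 2 + 4 * a ** 2 + 4 * (a + 1) * b) - (a + 1))
d = a + b - c
poa = (c ** 3 + (a + 1) * d ** 2 - a * d) / (b ** 3 + a ** 3)
print(round(poa, 4))    # ≈ 1.0059, bounded away from 1 along the sequence M_k
```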

Proof (of Theorem 6)

Take a fixed sequence α_k > 0 such that \(\alpha_{k+1}/\alpha_{k}\to \infty\) and consider a game \({\Gamma}_{M}=(\mathcal{G},M,\boldsymbol{c})\), where \(\mathcal{G}=(V,E)\) is a simple Pigou network with two parallel links with costs given by

$$\begin{array}{@{}rcl@{}} c_{1}(x)&=&c(x):= \left\{\begin{array}{ll} \mathrm{e}&\text{ for } x<1,\\ \mathrm{e}^{x}/x&\text{ for } x\ge 1, \end{array}\right.\\ c_{2}(y)&=&\bar{c}(y):=c(\alpha_{k + 1})\;\text{for } y\in (\alpha_{k},\alpha_{k + 1}]. \end{array} $$

Since we are interested in asymptotic results, we are concerned only with the case c(x) = e^x/x.

Depending on the location of M, the equilibrium is given by

$$\begin{array}{@{}rcl@{}} 2\alpha_{k} < M \le \alpha_{k}+\alpha_{k + 1} &\Longrightarrow& y^{*}=\alpha_{k}, x^{*}=M-\alpha_{k},\\ \alpha_{k}+\alpha_{k + 1} < M \le 2\alpha_{k + 1} &\Longrightarrow& y^{*}=M-\alpha_{k + 1}, x^{*}=\alpha_{k + 1}. \end{array} $$

We note that at the point M_k = α_k + α_{k+1} the equilibrium has a discontinuity. The cost immediately to the right of this point is

$$\begin{array}{@{}rcl@{}} {\mathsf{WEq}}({\Gamma}_{M_{k}^{+}})&=&\lim\limits_{M\downarrow M_{k}} {\mathsf{WEq}}({\Gamma}_{M}) \\ &=& \alpha_{k + 1}c(\alpha_{k + 1})+(M_{k}-\alpha_{k + 1})c(\alpha_{k + 1})\\ &=&M_{k} c(\alpha_{k + 1}). \end{array} $$
(6)

Let us now turn to computing the optimum

$${\mathsf{Opt}}({\Gamma}_{M})=\min\limits_{0 \le x \le M}xc(x)+(M-x)\bar{c}(M-x)=\min\limits_{j} C_{j} $$

which we decompose into the restricted minima C_j given by

$$C_{j}=\min_{\alpha_{j}< M-x\le \alpha_{j + 1}}xc(x)+(M-x)c(\alpha_{j + 1}).$$

The unconstrained minimum for each j is obtained by solving the equation \(\mathrm {e}^{x}=\frac {\mathrm {e}^{\alpha _{j + 1}}}{\alpha _{j + 1}}\) so that denoting

$$\begin{array}{@{}rcl@{}} x_{j}&=&\alpha_{j + 1}-\ln\alpha_{j + 1},\\ y_{j}&=&M-\alpha_{j + 1}+\ln\alpha_{j + 1}, \end{array} $$

we have the following expression for the constrained minimizers \(\tilde y_{j}\) and the values C_j

$$\begin{array}{@{}rcl@{}} y_{j} < \alpha_{j} &\Longrightarrow& \widetilde{y}_{j} = \alpha_{j}\;\;\;\, \Longrightarrow C_{j}=\mathrm{e}^{M-\alpha_{j}} + \frac{\alpha_{j}}{\alpha_{j + 1}}\mathrm{e}^{\alpha_{j + 1}},\\ y_{j} > \alpha_{j + 1} &\Longrightarrow& \widetilde{y}_{j} = \alpha_{j + 1} \Longrightarrow C_{j}=\mathrm{e}^{M-\alpha_{j + 1}} + \mathrm{e}^{\alpha_{j + 1}},\\ \alpha_{j} \le y_{j} \le \alpha_{j + 1} &\Longrightarrow& \widetilde{y}_{j} = y_{j} \;\;\;\;\Longrightarrow C_{j}=\frac{\mathrm{e}^{\alpha_{j + 1}}}{\alpha_{j + 1}}(1+M-\alpha_{j + 1}+\ln \alpha_{j + 1}). \end{array} $$

We remark that y_j varies continuously with M, and therefore the same holds for \(\tilde y_{j}\) and C_j. It follows that Opt(Γ_M) is also continuous in M.

Claim 10

Let M = M_k. For k large enough we have

$${\mathsf{Opt}}({\Gamma}_{M_{k}})=\min\{C_{k-1},C_{k},C_{k + 1}\}.$$

Proof

  1. a)

C_j is decreasing for j ≤ k − 1. Indeed, for j ≤ k − 1 we have M > 2α_{j+1} so that

    $$y_{j}=M - \alpha_{j + 1} + \ln \alpha_{j + 1} > \alpha_{j + 1} + \ln \alpha_{j + 1} > \alpha_{j + 1} $$

    and therefore

    $$C_{j}=\mathrm{e}^{M-\alpha_{j + 1}} + \mathrm{e}^{\alpha_{j + 1}}.$$

Denoting h(x) := e^{M−x} + e^x, the inequality C_j ≤ C_{j−1} is equivalent to h(α_{j+1}) ≤ h(α_j), which holds because h is decreasing on the interval [0, M/2] and since \(\alpha_{j} \le \alpha_{j+1} \le \alpha_{k} \le \frac{M}{2}\).

  2. b)

C_j is increasing for j ≥ k + 1. We first show that if k is large enough then y_j < α_j. Considering the expression of y_j, this inequality is equivalent to \(M < \alpha_{j} + \alpha_{j+1} - \ln \alpha_{j+1}\). Now, since M ≤ 2α_{k+1} ≤ 2α_j it suffices to show that

    $$2 \alpha_{j} < \alpha_{j} + \alpha_{j + 1} - \ln \alpha_{j + 1} $$

    which can also be written as

    $$\frac{\ln \alpha_{j + 1}}{\alpha_{j + 1}} + \frac{\alpha_{j}}{\alpha_{j + 1}} < 1.$$

Now, our choice of the sequence α_j implies that the left-hand side tends to zero as \(j\to \infty\), proving that y_j < α_j for j ≥ k + 1, provided that k is chosen large enough. In this situation we have, for all j ≥ k + 1,

    $$C_{j}=\mathrm{e}^{M- \alpha_{j}}+\frac{\alpha_{j}}{\alpha_{j + 1}}\mathrm{e}^{\alpha_{j + 1}}. $$

In order to show that C_j ≤ C_{j+1} we note that

    $$\begin{array}{@{}rcl@{}} C_{j} \le C_{j + 1} \iff \mathrm{e}^{M- \alpha_{j}}+\frac{\alpha_{j}}{\alpha_{j + 1}}\mathrm{e}^{\alpha_{j + 1}} \le \mathrm{e}^{M- \alpha_{j + 1}}+\frac{\alpha_{j + 1}}{\alpha_{j + 2}}\mathrm{e}^{\alpha_{j + 2}}. \end{array} $$

For x ≥ 1 the function e^x/x is increasing, so that

    $$\mathrm{e}^{M- \alpha_{j}}+\alpha_{j}\frac{\mathrm{e}^{\alpha_{j + 1}}}{\alpha_{j + 1}} \le \mathrm{e}^{M- \alpha_{j}}+\alpha_{j}\frac{\mathrm{e}^{\alpha_{j + 2}}}{\alpha_{j + 2}}. $$

It remains to replace α_j by α_{j+1} on the right, for which it suffices to prove that the function

    $$g(x):=\mathrm{e}^{M-x}+x\frac{\mathrm{e}^{\alpha_{j + 2}}}{\alpha_{j + 2}} $$

is increasing for x ≥ α_{k+1}. Indeed, since \(g^{\prime}(x)=\frac{\mathrm{e}^{\alpha_{j+2}}}{\alpha_{j+2}}-\mathrm{e}^{M-x}\) we have

    $$\begin{array}{@{}rcl@{}} g^{\prime}(x) \ge 0 &\iff& M-x \le \alpha_{j + 2} - \ln \alpha_{j + 2} \\ &\iff& M \le x + \alpha_{j + 2} - \ln \alpha_{j + 2}, \end{array} $$

which is true for all x ≥ α_{k+1} iff \(M \le \alpha_{k+1} + \alpha_{j+2} - \ln \alpha_{j+2}\). Since M ≤ 2α_{k+1}, it is enough to have \(\alpha_{k+1} \le \alpha_{j+2} - \ln \alpha_{j+2}\), and since \(x \mapsto x - \ln x\) is increasing for x ≥ 1, it suffices to prove that \(\alpha_{k+1} \le \alpha_{k+2} - \ln \alpha_{k+2}\), that is,

    $$ \frac{\ln \alpha_{k + 2}}{\alpha_{k + 2}}+\frac{\alpha_{k + 1}}{\alpha_{k + 2}} \le 1. $$
    (7)

By our choice of α_k this holds for k large enough, which completes the proof of Claim 10. □

Claim 11

Let M = M_k. For all k large enough we have

$$ {\mathsf{Opt}}({\Gamma}_{M_{k}})=\frac{\mathrm{e}^{\alpha_{k + 1}}}{\alpha_{k + 1}}(1+\alpha_{k}+\ln \alpha_{k + 1}). $$
(8)

Proof

From the proof of Claim 10 we have

$$\begin{array}{@{}rcl@{}} C_{k-1}&=& e^{M-\alpha_{k}}+e^{\alpha_{k}}= e^{\alpha_{k + 1}}+e^{\alpha_{k}}\\ C_{k + 1}&=&e^{M-\alpha_{k + 1}}+\frac{\alpha_{k + 1}}{\alpha_{k + 2}}e^{\alpha_{k + 2}}=e^{\alpha_{k}}+\frac{\alpha_{k + 1}}{\alpha_{k + 2}}e^{\alpha_{k + 2}} \end{array} $$

and it is easy to see that C_{k−1} ≤ C_{k+1}, so that in fact \({\mathsf{Opt}}({\Gamma}_{M_{k}})=\min\{C_{k-1},C_{k}\}\).

Now, the expression of C_k depends on the location of \(y_{k}=M-\alpha_{k+1}+\ln \alpha_{k+1}\) with respect to the interval [α_k, α_{k+1}]. Substituting the value M = α_k + α_{k+1} we get \(y_{k}=\alpha_{k}+\ln \alpha_{k+1}\), so that clearly y_k > α_k. Also, for k large we have y_k < α_{k+1} since

$$\frac{y_{k}}{\alpha_{k + 1}}=\frac{\alpha_{k}}{\alpha_{k + 1}}+\frac{\ln\alpha_{k + 1}}{\alpha_{k + 1}}\to 0.$$

It follows that

$$C_{k}=\frac{\mathrm{e}^{\alpha_{k + 1}}}{\alpha_{k + 1}}(1+M-\alpha_{k + 1}+\ln \alpha_{k + 1})=\frac{\mathrm{e}^{\alpha_{k + 1}}}{\alpha_{k + 1}}(1+\alpha_{k}+\ln \alpha_{k + 1}) $$

and therefore it remains to show that C_k ≤ C_{k−1}. The latter is equivalent to

$$\frac{\mathrm{e}^{\alpha_{k + 1}}}{\alpha_{k + 1}}(1+\alpha_{k}+\ln \alpha_{k + 1})\leq \mathrm{e}^{\alpha_{k + 1}}+\mathrm{e}^{\alpha_{k}}$$

which can be rewritten as

$$\frac{1}{\alpha_{k + 1}}(1+\alpha_{k}+\ln \alpha_{k + 1})\leq 1+\mathrm{e}^{\alpha_{k}-\alpha_{k + 1}}.$$

Since the left-hand side tends to 0 while the right-hand side is at least 1, this inequality holds for k large enough. □

Conclusion

Let us compute the price of anarchy just to the right of M_k, namely

$${\mathsf{PoA}}({\Gamma}_{M_{k}^{+}})=\frac{{\mathsf{WEq}}({\Gamma}_{M_{k}^{+}})}{{\mathsf{Opt}}({\Gamma}_{M_{k}^{+}})}.$$

Since Opt(Γ_M) is continuous in M, using (6) and (8) we get

$$\begin{array}{@{}rcl@{}} {\mathsf{PoA}}({\Gamma}_{M_{k}^{+}})&=&\frac{(\alpha_{k}+\alpha_{k + 1})\frac{\mathrm{e}^{\alpha_{k + 1}}}{\alpha_{k + 1}}}{\frac{\mathrm{e}^{\alpha_{k + 1}}}{\alpha_{k + 1}}(1+\alpha_{k}+\ln \alpha_{k + 1})}\\ &=&\frac{\alpha_{k}+\alpha_{k + 1}}{1+\alpha_{k}+\ln \alpha_{k + 1}}\\ &=&\frac{\frac{\alpha_{k}}{\alpha_{k + 1}}+ 1}{\frac{1}{\alpha_{k + 1}}+\frac{\alpha_{k}}{\alpha_{k + 1}}+\frac{\ln \alpha_{k + 1}}{\alpha_{k + 1}}}\to\infty \end{array} $$

from which we get the conclusion

$$\limsup\limits_{M\to\infty}{\mathsf{PoA}}({\Gamma}_{M})=+\infty. $$
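As a numerical illustration (not part of the proof), the closed form derived above, PoA(Γ_{M_k^+}) = (α_k + α_{k+1})/(1 + α_k + ln α_{k+1}), can be evaluated in Python for a concrete admissible sequence of our choosing, e.g. α_k = 2^{k²}:

```python
import math

# Illustration only: the closed-form PoA just to the right of M_k for alpha_k = 2**(k**2),
# a sequence with alpha_{k+1}/alpha_k -> infinity.
def alpha(k):
    return 2.0 ** (k * k)

for k in range(2, 7):
    poa = (alpha(k) + alpha(k + 1)) / (1 + alpha(k) + math.log(alpha(k + 1)))
    print(k, round(poa, 1))    # grows without bound as k increases
```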

Cite this article

Colini-Baldeschi, R., Cominetti, R. & Scarsini, M. Price of Anarchy for Highly Congested Routing Games in Parallel Networks. Theory Comput Syst 63, 90–113 (2019). https://doi.org/10.1007/s00224-017-9834-1
