# A Network Decomposition

We consider a decomposition of the following network utility optimization problem

SYS:

$$\underset{\Lambda \geq 0}{\text{maximize}} \quad \sum_{r\in{\mathcal R}} U_r(\Lambda_r) \qquad \text{subject to} \quad \sum_{r :\, j\in r} \Lambda_r \leq C_j\,, \;\; j\in{\mathcal J}.$$

Here each route $r\in{\mathcal R}$ has a utility $U_r(\Lambda_r)$ when allocated rate $\Lambda_r$. We maximize the aggregate of these utilities subject to the condition that the total rate through each resource $j\in{\mathcal J}$ does not exceed the capacity $C_j$ of that resource.

Previously, we saw two examples of queueing systems that implicitly solve a network utility optimization: a closed queueing network, which optimizes a $\log$ utility, and TCP Reno, which optimizes an inverse-square utility. The closed queueing network simply stored and forwarded packets on each route. TCP Reno had an internal network which, just like the closed network, stored and forwarded packets, but it also included a congestion controller on each route that controlled the number of packets in flight on that route. In other words, adding the congestion controllers changed the optimization from a log utility to an inverse-square utility. We now wish to describe mathematically how that change in the optimization occurred.

In addition to the system optimization SYS, we consider the following two further optimization problems, together with a linear relation:

NET($w$):

$$\underset{\Lambda \geq 0}{\text{maximize}} \quad \sum_{r\in{\mathcal R}} w_r \log \Lambda_r \qquad \text{subject to} \quad \sum_{r :\, j\in r} \Lambda_r \leq C_j\,, \;\; j\in{\mathcal J}.$$

USR$_r(p_r)$:

$$\underset{w_r \geq 0}{\text{maximize}} \quad U_r\!\left(\frac{w_r}{p_r}\right) - w_r,$$

where $p_r$ is the price per unit of rate seen by route $r$.

LIN$_r$:

$$w_r = \Lambda_r\, p_r.$$

• The network optimization NET($w$) represents the rate allocation performed by a queueing network.
• The user optimization USR$_r$ represents the optimization conducted by the congestion controller on route $r$.

We show that the system optimization can be decomposed into the network and user sub-problems.

### Decomposition: a quick discussion

Consider the following optimization

$$\underset{x_r \in {\mathcal C}_r,\; r\in{\mathcal R}}{\text{maximize}} \quad \sum_{r\in{\mathcal R}} u_r(x_r).$$

• Here for each route $r\in{\mathcal R}$, $u_r$ is a concave function and each ${\mathcal C}_r$ is a convex set.

It is fairly immediate that this is equivalent to each route, $r\in{\mathcal R}$, separately solving its own optimization, i.e.

$$\max_{x_r\in{\mathcal C}_r,\; r\in{\mathcal R}} \;\sum_{r\in{\mathcal R}} u_r(x_r) \;=\; \sum_{r\in{\mathcal R}}\; \max_{x_r\in{\mathcal C}_r} u_r(x_r).$$

In this way the optimization is decomposed into smaller problems that can be solved separately for each route in ${\mathcal R}$.
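As a quick numerical sanity check of this identity, here is a small sketch: the concave objectives $u_r$ and the interval constraint sets ${\mathcal C}_r$ below are hypothetical examples, and the sets are discretized on a grid.

```python
# Separability check on a grid: the maximum of a sum over a product of
# feasible sets equals the sum of per-route maxima.
# The objectives and intervals below are illustrative assumptions.
u = [lambda x: 4.0 - (x - 1.0) ** 2,  # concave, maximized at x = 1
     lambda x: 9.0 - (x - 3.0) ** 2]  # concave, maximized at x = 3

grids = [[i / 100 for i in range(0, 201)],    # C_1 = [0, 2], discretized
         [i / 100 for i in range(200, 401)]]  # C_2 = [2, 4], discretized

# Joint maximization over the product set C_1 x C_2 ...
joint = max(u[0](x1) + u[1](x2) for x1 in grids[0] for x2 in grids[1])
# ... equals the sum of separate, per-route maximizations.
separate = sum(max(ur(x) for x in g) for ur, g in zip(u, grids))
print(joint, separate)  # both equal 4 + 9 = 13
```

The joint search touches every pair in the product set, while the separated version solves each route's problem independently; both find the same optimum because the objective has no cross-terms.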

### The Network Decomposition: further discussion

Unfortunately, the optimization SYS does not in general have the nice product structure discussed above. So, instead, let's look at its Lagrangian function.

[Network Decomposition] Given $U'_r(0)=\infty$, there exist variables $\Lambda^*=(\Lambda^*_r: r\in{\mathcal R})$, $w^*=(w^*_r:r\in{\mathcal R})$, $q^*=(q^*_j:j\in{\mathcal J})$ such that $\Lambda^*$ optimizes NET($w^*$), $w^*_r$ optimizes USR$_r$($p_r^*$) and LIN$_r$ holds for each $r\in{\mathcal R}$, if and only if $\Lambda^*$ optimizes SYS.

Let’s see what conditions $w^*$ must satisfy in order for NET($w^*$) to have a solution that is also optimal for SYS.

After adding slack variables, $z$, and Lagrange multipliers, $q$, the Lagrangian of NET($w^*$) is

$$L(\Lambda, z; q) \;=\; \sum_{r\in{\mathcal R}} w^*_r \log \Lambda_r \;+\; \sum_{j\in{\mathcal J}} q_j \Big( C_j - z_j - \sum_{r:\, j\in r} \Lambda_r \Big), \qquad z_j \geq 0.$$

Here, we use the shorthand $p_r=\sum_{j\in r} q_j$. The Lagrangian for SYS differs only in having $U_r(\Lambda_r)$ in place of $w^*_r\log \Lambda_r$. The Karush-Kuhn-Tucker (KKT) conditions for optimality are complementary slackness, dual/primal feasibility, and stationarity. The complementary slackness and dual/primal feasibility conditions, $q^*_j z^*_j=0$, $q^*_j\geq 0$ and $\sum_{r:j\in r} \Lambda^*_r \leq C_j$ for $j\in{\mathcal J}$, are the same for NET($w^*$) and SYS. Thus, the only optimality condition that differs is stationarity. For NET($w^*$) and SYS respectively, this is

$$\frac{w^*_r}{\Lambda^*_r} = p^*_r \qquad\text{and}\qquad U'_r(\Lambda^*_r) = p^*_r\,, \qquad r\in{\mathcal R}.$$

Due to the first equality, we can equivalently express $w^*_r$ in terms of $\Lambda^*_r$ and $p_r^*$, and substitute this into the second equality:

$$w^*_r = \Lambda^*_r\, p^*_r \qquad\text{and}\qquad U'_r\!\left(\frac{w^*_r}{p^*_r}\right) = p^*_r\,, \qquad r\in{\mathcal R}.$$

Notice the first equation is LIN$_r$, and the second is precisely the optimality condition for USR$_r(p^*_r)$. In other words, the conditions that $w^*$ must satisfy for the optimal solution of NET($w^*$) to also be optimal for SYS are precisely LIN$_r$ and the optimality of USR$_r(p_r^*)$. So, we can use NET($w^*$), USR$_r(p_r^*)$ and LIN$_r$ to construct a solution to SYS.
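To make the construction concrete, here is a small numerical sketch. It is not from the text: the single bottleneck link, the inverse utility $U_r(x)=-1/x$ and the iteration schedule are all illustrative assumptions. Two routes share one link; the network repeatedly solves NET($w$) in closed form, each controller then re-optimizes USR$_r$, and the fixed point satisfies LIN$_r$ and the SYS stationarity condition $U_r'(\Lambda^*_r)=p^*_r$.

```python
# Toy decomposition loop: two routes over a single link of capacity C,
# each with the TCP-like utility U_r(x) = -1/x, so U_r'(x) = 1/x**2.
# (The topology and utilities are illustrative assumptions.)
C = 1.0
w = [1.0, 1.0]  # initial wealth on each route

for _ in range(100):
    # NET(w): maximize w_1*log(L_1) + w_2*log(L_2) s.t. L_1 + L_2 <= C.
    # Stationarity gives L_r = w_r / q with multiplier q = (w_1 + w_2) / C.
    q = sum(w) / C
    rates = [wr / q for wr in w]
    # Each route crosses only the one link, so its price is p_r = q.
    # USR_r(p_r): maximize U_r(w_r / p_r) - w_r = -p_r / w_r - w_r,
    # whose optimum is w_r = sqrt(p_r).
    w = [q ** 0.5 for _ in w]

# Fixed point: L_r = 1/2, price q = U_r'(1/2) = 4, and LIN_r: w_r = L_r * q.
print(rates, q, w)
```

At the fixed point, $w_r=\sqrt{p_r}$ together with $w_r=\Lambda_r p_r$ gives $U'_r(\Lambda_r)=1/\Lambda_r^2=p_r$, which is exactly the SYS stationarity condition, so the resulting allocation is SYS-optimal.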

Conversely, we know an optimal solution to SYS exists. As discussed, it satisfies the same complementary slackness and feasibility conditions as NET($w$) [for any $w$]. The SYS problem must also satisfy the stationarity condition $U'_r(\Lambda^*_r)=p^*_r$. So we can simply define $w_r^*:=\Lambda_r^* p^*_r$. Then LIN$_r$ and the stationarity condition for NET($w^*$) hold and, as argued above, $w^*_r$ is also an optimal solution to USR$_r(p^*_r)$. Thus the optimality of SYS implies the existence of solutions simultaneously solving NET($w^*$), USR$_r$($p_r^*$) and LIN$_r$.
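The converse construction can also be checked numerically. In the following sketch (again a toy, with hypothetical weighted-log utilities $U_r(x)=a_r\log x$ and a single link), we compute the SYS optimum directly, define $w^*_r := \Lambda^*_r p^*_r$ as above, and verify that $\Lambda^*$ is then the optimum of NET($w^*$).

```python
# Converse direction on a toy network: one link of capacity C, two routes
# with U_r(x) = a_r * log(x).  (Weights and topology are assumptions.)
a = [2.0, 1.0]  # hypothetical utility weights a_r
C = 1.0

# SYS stationarity: U_r'(L_r) = a_r / L_r = p for all r, with sum(L) = C,
# gives p = sum(a) / C and L_r = a_r / p.
p = sum(a) / C
L_sys = [ar / p for ar in a]

# LIN_r defines the wealth w_r* = L_r* * p*.
w = [Lr * p for Lr in L_sys]

# NET(w): stationarity gives L_r = w_r / q with multiplier q = sum(w) / C.
q = sum(w) / C
L_net = [wr / q for wr in w]

# The NET(w*) optimum recovers the SYS optimum, and the prices agree.
ok = all(abs(x - y) < 1e-12 for x, y in zip(L_net, L_sys)) and abs(q - p) < 1e-12
print(ok)  # prints True
```

For log utilities, the check is almost tautological (NET is itself a weighted-log problem), which is exactly why the closed queueing network, with no controller adjusting $w$, realizes a log-utility optimum.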

• Decomposition results often rely on some form of separability, e.g. $\max_{x_1\in{\mathcal X}_1, x_2\in{\mathcal X}_2 } L(x_1)+L(x_2)=\max_{x_1\in{\mathcal X}_1}L(x_1) + \max_{x_2\in{\mathcal X}_2 } L(x_2)$, where $L$ is an objective function or a Lagrangian. The decomposition here is somewhat different: it considers what conditions the parameters of certain sub-optimizations must satisfy in order to solve a global optimization.

In the context of a queueing network, we interpret this result as follows. The congestion controller on each route $r$ chooses $w_r$, the number of packets to send across the network. As we saw for a closed queueing network, the network then solves the optimization NET($w$): the packets on route $r$ receive rate $\Lambda_r$ and experience delay/loss $q_j$ at each resource, so that LIN$_r$ holds with price $p_r=\sum_{j\in r} q_j$. On receiving this feedback, the controller updates $w_r$ to optimize USR$_r(p_r)$. This feedback loop optimizes SYS by respectively solving NET, USR and LIN. In the context of allocating resources, we interpret $w_r$ as the wealth that route $r$ decides to spend and $p_r$ as the price per unit of the resource allocation $\Lambda_r$.