Efficient priority queueing routing strategy on mobile networks

Mobile networks have attracted much interest in recent years due to their practical implications. Previous routing strategies for improving transport efficiency have paid little attention to the order in which packets should be forwarded, simply using the first-in-first-out queue discipline. Here we apply a priority queueing discipline to the shortest-distance routing strategy on mobile networks. Numerical experiments show that it not only remarkably improves the network throughput and the packet arriving rate, but also reduces the average end-to-end delay and the rate of queueing delay. Our work may be helpful for designing routing strategies on mobile networks.


I. INTRODUCTION
Traffic is widespread in engineering and social systems [1], for example, information delivery over the World Wide Web [2] and transportation on highway systems [3]. The rapid development of society has led to a huge increase of traffic volume in many real networked communication systems [4]. An essential challenge is how to enhance transport efficiency. It has been found that the optimal performance of a network depends mainly on its structural characteristics and the routing strategy [5]. But for existing networks the structure is already fixed: the Internet, for instance, is difficult to change even though its structure is far from optimal. Hence, most efforts to improve transport efficiency focus on routing strategies.
The shortest path routing strategy is widely used in real communication systems [6]. In this strategy, a packet walks to its destination along the shortest path. This easily leads to congestion on hub nodes, as most packets tend to pass through the links with high betweenness [7]. This fact has motivated researchers to propose many improved versions, which consider global or local information on networks. For example, Yan et al. proposed an efficient routing strategy in which each node has the global topological information of the whole network [8]. They found that network capability is improved more than 10 times by optimizing the efficient path. However, strategies based on global information may be practical for small or medium size networks but not for large real communication networks such as the Internet and the World Wide Web, due to the large storage capacity required in each node and the heavy communication cost of searching global information [9]. Therefore, scientists have proposed many other strategies based on local information. Wang et al. proposed a local information routing strategy, in which packets are routed based only on local topological information, namely the degrees of each node's neighbors, with a tunable parameter β. They found that the maximal capacity corresponds to β = −1 in the case of identical node delivering ability [10]. Liu et al. proposed an adaptive local routing strategy based on real-time load information, in which each node adjusts its forwarding probability according to the dynamical traffic load (packet queue length) and the degree distribution of neighboring nodes. They found that it can improve the transmission capacity by reducing routing hops [11]. Wang et al. proposed a mixing routing strategy by integrating the degree of a node and its number of packets through a tunable parameter. They found that this strategy can improve the efficiency compared with strategies adopting exclusively local static information [12].
With the development of mobile technology, networks of mobile agents are widely used. A typical example is the mobile ad-hoc network, in which agents move randomly and two agents can transfer data packets with each other only when the distance between them is less than a critical value [13,14]. Yang et al. proposed a random routing strategy on mobile networks, in which a packet is delivered to a randomly selected agent within the communication circle. They found an algebraic power law between the throughput and the communication range, with an exponent determined by the speed [15]. Moreover, Yang and Tang proposed an adaptive routing strategy, which incorporates geographical distance and local traffic information through a tunable parameter. They found that there exists an optimal value of the parameter, leading to the maximum traffic throughput of the network [16].
In previous work, queue discipline has received little attention in routing strategies. Queue discipline refers to the manner in which packets are selected when arriving packets exceed the maximal processing capacity of an agent [17]. The most common discipline is first-in-first-out. However, this is certainly not the only possible queue discipline. The second is the last-in-first-out discipline; a typical example is that the last clicked information may be served first on the World Wide Web. The third is priority schemes, in which packets are assigned priorities and those with higher priorities are selected ahead of those with lower priorities, regardless of their time of arrival at the agent. For example, paid packets with high priority are delivered first when downloading from a certain web site.
In most existing work, the first-in-first-out discipline is widely used. It is simple and convenient, but far from optimal. To our knowledge, only a few articles use other disciplines. Tadić et al. studied the web-graph model with the last-in-first-out discipline [18][19][20][21]. Kim et al. introduced a priority routing strategy, in which each packet is pre-assigned a priority. They found that the traffic behavior is improved in the congestion region, but worsened in the free-flow region [22]. Tang and Zhou defined an effective distance by considering simultaneously the waiting time and the remaining path length to the destination, according to which packets queue in descending order of the effective distance. They found that it can remarkably enhance the network throughput [23]. Du et al. proposed a shortest-remaining-path-first queueing strategy, in which a packet's priority is determined by the distance between its current location and its destination. They found that the traffic efficiency is greatly improved, especially in the congestion state [24]. Zhang et al. introduced a dynamic-information-based queueing strategy. They found that the network capacity shows no obvious change, but significant improvements are obtained for some traffic indexes such as the average end-to-end delay and the rate of queueing delay [25].
In previous studies, few researchers have applied a priority discipline to routing strategies on mobile networks. However, in real dynamic networks, packets are often assigned priorities. For example, important military information often has a higher delivery priority on mobile ad hoc networks in a battlefield environment. In this article, we propose a shortest-distance-first (SDF) routing strategy on mobile networks, in which a packet is delivered first if the distance between the location of a neighbor agent and the packet's destination is the shortest. Compared with the first-in-first-out queueing discipline, our strategy not only remarkably improves the network throughput and the packet arriving rate, but also reduces the average end-to-end delay and the rate of queueing delay.

II. MODEL

A. Network model and queueing strategy
Traffic is simulated on a random network of mobile agents, i.e., N agents (numbered from 1 to N) move on a square-shaped cell of size L × L. Periodic boundary conditions are used. Initially, agents are distributed randomly in the area. At each time step Δt, the moving direction of an agent is re-directed randomly, while its speed v is set to a constant for simplicity. At the same time, a total of R packets are generated in the network, whose sources and destinations are randomly selected. All agents have the same communication radius α. Two agents can communicate with each other only when the distance between them is less than α.
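With periodic boundary conditions, distances between agents must be measured on the torus, not in the plane. A minimal sketch of the neighbor test (function and variable names are ours, not from the paper):

```python
import math

def torus_distance(x1, y1, x2, y2, L):
    """Shortest distance between two points on an L x L torus."""
    dx = abs(x1 - x2)
    dy = abs(y1 - y2)
    dx = min(dx, L - dx)  # wrap across the horizontal boundary if shorter
    dy = min(dy, L - dy)  # wrap across the vertical boundary if shorter
    return math.hypot(dx, dy)

def are_neighbors(a, b, L, alpha):
    """Two agents a = (x, y) and b = (x, y) can communicate only when
    their torus distance is less than the communication radius alpha."""
    return torus_distance(a[0], a[1], b[0], b[1], L) < alpha
```

Without the wrap-around terms, agents sitting on opposite edges of the cell would wrongly appear far apart even though the periodic boundary places them next to each other.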
The queueing strategy is described as follows: (1) Parameter setting. We set the number of agents N = 800, the delivery capability of each node C = 1, the time step Δt = 1, and the total simulation time T = 5000; the queue buffer of each agent is unlimited.
(2) At each time step, R packets with random sources and destinations are generated in the network.
(3) Update the positions of the mobile agents as follows:

x_i(t) = x_i(t−1) + v cos θ_i(t−1) Δt,
y_i(t) = y_i(t−1) + v sin θ_i(t−1) Δt,

where θ_i(t−1) is the angle of the moving direction of the ith agent at time t−1 with respect to the x-axis, generated by sampling from a uniform distribution in [−π, π], and x_i(t) and y_i(t) are the coordinates of the ith agent at time t.
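The position update in step (3) can be sketched as follows, with the modulo operation implementing the periodic boundary condition (names are our assumptions):

```python
import math
import random

def move(x, y, v, L, dt=1.0):
    """One step of the mobility model: draw a fresh heading theta uniformly
    from [-pi, pi], advance at constant speed v for one time step dt, and
    wrap the coordinates back onto the L x L square (periodic boundary)."""
    theta = random.uniform(-math.pi, math.pi)
    x = (x + v * math.cos(theta) * dt) % L
    y = (y + v * math.sin(theta) * dt) % L
    return x, y
```

Each call covers a displacement of exactly v·dt in a freshly randomized direction, which matches the re-direction at every time step described above.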
(4) For each agent, take out M packets from the queue, with K denoting the number of packets in the queue. If the queue is empty, go to step (2) for the next time step.
(6) For a packet m in the system, D_m(t) is defined as the shortest distance between the location of a neighbor agent and the destination of the packet, namely,

D_m(t) = min_{k∈K} √{[x_k(t) − x_l(t)]² + [y_k(t) − y_l(t)]²},

where m = 1, 2, ..., M represents a packet of agent i, K is the set of neighbors of agent i, x_k(t) and y_k(t) represent the coordinates of the neighbor agent k at time t, and x_l(t) and y_l(t) are the coordinates of the packet's destination at time t. (10) Compute the order parameter η(R), the critical packet generating rate R_c, the average end-to-end delay < D >, etc.
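The shortest-distance-first selection of step (6) can be sketched as follows: for every packet in an agent's queue, compute D_m(t) over the agent's current neighbors and forward first the packet with the smallest value, via the neighbor that realizes it. The data layout and names below are our assumptions, not the paper's:

```python
import math

def sdf_select(queue, neighbors, destinations):
    """Shortest-distance-first: return (D_m, packet_index, neighbor_id) for
    the packet whose best neighbor lies closest to the packet's destination.
    `queue` holds packet ids, `neighbors` maps a neighbor agent id to its
    (x, y) position, `destinations` maps a packet id to its destination's
    (x, y) position. Returns None if the queue or neighbor set is empty."""
    best = None
    for idx, pkt in enumerate(queue):
        xl, yl = destinations[pkt]
        for k, (xk, yk) in neighbors.items():
            d = math.hypot(xk - xl, yk - yl)  # candidate D_m(t)
            if best is None or d < best[0]:
                best = (d, idx, k)
    return best
```

The selected packet is removed from the queue and handed to the returned neighbor; repeating the call up to C times per agent per time step implements the priority discipline regardless of arrival order.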

B. Critical packet generating rate
In the present study the traffic is determined by two competing factors. One is the removal of packets, determined by the routing strategy, the communication radius, the density of agents, the moving speed, and the number of packets transferred; the other is the number of packets generated at each time step. When the packet generating rate is small, new packets arrive quickly at their destinations, and the load stays unchanged or even zero; this is called a free-flow state. When the rate increases to a certain value, on average some new packets at each time step cannot be delivered to their destinations in time. This accumulation of new packets rapidly increases the load of the network. In reality, a network has a limited capacity, and a persistent overload leads to the onset of congestion, i.e., a collapse of the system. To characterize the throughput of a network, we adopt the order parameter

η(R) = lim_{t→∞} ⟨N_p(t + Δt) − N_p(t)⟩ / (R Δt),

where N_p(t) represents the total number of packets existing in the whole network at time t and ⟨·⟩ denotes a time average. When R is less than a critical value R_c, there is a balance between the generated and removed packets, which implies η(R) = 0. When R becomes larger than R_c, a transition occurs from the free-flow state to congestion. A higher R_c corresponds to a better algorithm.
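The order parameter can be estimated numerically from the time series of N_p(t) recorded during a run; a minimal sketch (window length and names are our assumptions):

```python
def order_parameter(Np, R, dt=1.0, window=1000):
    """Estimate eta(R) = <Np(t + dt) - Np(t)> / (R * dt) from the tail of
    a time series Np of the total number of packets in the network."""
    tail = Np[-window:]
    increments = [b - a for a, b in zip(tail[:-1], tail[1:])]
    return sum(increments) / (len(increments) * R * dt)
```

In the free-flow state N_p(t) fluctuates around a constant, so the average increment vanishes and η ≈ 0; in deep congestion almost all R new packets per step accumulate and η approaches 1.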

C. Average end-to-end delay
End-to-end delay refers to the time taken for a packet to be transmitted across a network from source to destination.
It is defined as

D = Σ_{k=1}^{K} (D_trans + D_proc + D_prop + D_queue),

where K is the number of links along the path, D_trans represents the time to send bits into a link, D_proc represents the nodal processing time, D_prop represents the propagation delay, and D_queue represents the time spent waiting in the queue. Here we focus on D_prop and D_queue, and neglect D_trans and D_proc for simplicity. The average end-to-end delay is defined as

< D > = (1/N_arrive) Σ_{i=1}^{N_arrive} D_i,

where D_i represents the end-to-end delay of packet i and N_arrive is the number of arrived packets. The average end-to-end delay < D > is an important measurement of a network's performance. In real communication networks, packets have a finite lifetime to avoid wasting network resources [27,28]. For example, an error in a packet's destination may cause the packet to be transmitted endlessly. Therefore, if a packet has been in transit longer than its lifetime, it is removed from the network even if it has not reached its destination. Obviously, a high value of < D > means that more packets will be removed before they can reach their destinations.
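The per-packet bookkeeping behind < D > can be sketched as follows; since D_trans and D_proc are neglected, a packet's end-to-end delay reduces to the time elapsed between its creation and its arrival (class and function names are our assumptions):

```python
class Packet:
    """Minimal per-packet record: creation time plus the accumulated
    queueing time (D_trans and D_proc are neglected in this model)."""
    def __init__(self, birth_time):
        self.birth = birth_time
        self.queue_delay = 0  # total time spent waiting in queues

def end_to_end_delay(pkt, arrival_time):
    """D_i = arrival - birth, i.e. D_prop + D_queue with the neglected
    transmission and processing terms dropped."""
    return arrival_time - pkt.birth

def average_delay(arrived, arrival_times):
    """<D> = (1/N_arrive) * sum_i D_i over the arrived packets."""
    if not arrived:
        return 0.0
    total = sum(end_to_end_delay(p, t) for p, t in zip(arrived, arrival_times))
    return total / len(arrived)
```

Keeping `queue_delay` as a separate counter is what later allows the rate of queueing delay Q to be computed from the same records.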

D. The rate of queueing delay
The rate of queueing delay is defined as

Q = (1/N_arrive) Σ_{i=1}^{N_arrive} (D_{i−queue} / D_{i−end−end}),

where D_{i−end−end} is the end-to-end delay of packet i, D_{i−queue} is the total delay in the queues of packet i, and N_arrive is the number of arrived packets. In general, Q reflects the degree of customer satisfaction: in many systems, such as airline systems and the World Wide Web, users become impatient if Q is large.
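Given the two delay components for each arrived packet, Q is a simple average of per-packet ratios; a minimal sketch (names are our assumptions):

```python
def queueing_delay_rate(queue_delays, end_to_end_delays):
    """Q = (1/N_arrive) * sum_i (D_i_queue / D_i_end_end), averaged over
    arrived packets with a positive end-to-end delay."""
    ratios = [q / d for q, d in zip(queue_delays, end_to_end_delays) if d > 0]
    if not ratios:
        return 0.0
    return sum(ratios) / len(ratios)
```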

E. Packet arriving rate
The packet arriving rate is the number of arrived packets divided by the number of generated packets. It is defined as

A = N_arrive / N_create,

where N_arrive is the number of arrived packets and N_create is the number of generated packets. Obviously, A is an index of system throughput. In the free-flow state, generated packets are sent to their destinations on time, so A is quite close to 1, while in the congestion state, generated packets accumulate in the network and cannot be delivered in time, so A is smaller than 1.
III. RESULTS

Figure 1 shows two typical results of η versus R with α = 1 at low (v = 0.1) and high (v = 1) moving speeds, respectively. For each case, there exists a finite value R_c at which a transition from free flow to congestion occurs within a sharp interval of R. For convenience of description, we locate the rapid increase of η by the value R_c at which η starts to become non-zero. We find that R_c of the shortest-distance-first strategy is larger than that of the first-in-first-out strategy, especially at high moving speed. In fact, R_c of the first-in-first-out strategy is ∼ 30 at v = 1, while that of the shortest-distance-first strategy is ∼ 700. Obviously, the shortest-distance-first strategy remarkably improves the network throughput.

Figure 2(a) shows the dependence of R_c on v at α = 1, and Fig. 2(b) shows the relationship between R_c and α at v = 0.3. We find that R_c of the shortest-distance-first strategy is larger than that of the first-in-first-out strategy at different speeds v and communication radii α. From Fig. 2(a), R_c of the shortest-distance-first strategy increases rapidly as v increases at the beginning, and stays stable (∼ 700) when v is larger than 0.7, while that of the first-in-first-out strategy increases with v, reaches its maximum at v = 0.1, and then decreases slowly. Figure 2(b) shows that R_c of both strategies increases with the communication radius α, but R_c of the shortest-distance-first strategy increases faster than that of the first-in-first-out strategy.

Figure 3 shows the dependence of the average end-to-end delay < D > on R, and the inset shows the same quantity in the free-flow state (R < R_c). We find that < D > of both strategies increases as R increases. From the inset, < D > of the shortest-distance-first strategy is slightly lower than that of the first-in-first-out strategy in the free-flow state.
However, from Fig. 3, < D > of the shortest-distance-first strategy is much lower than that of the first-in-first-out strategy in the congested state. Figures 4(a) and 4(b) show the rate of queueing delay Q and the packet arriving rate A versus R, respectively. From Fig. 4(a), Q of both strategies is close to 0 when R is small; as R increases, Q increases quickly, and when R reaches ∼ 80, Q of the first-in-first-out strategy is close to the maximal value 1, while that of the shortest-distance-first strategy is only ∼ 0.46. From Fig. 4(b), in the free-flow state, the packet arriving rate A of both strategies is close to 1, while in the congestion state, A of the first-in-first-out strategy decreases more quickly than that of the shortest-distance-first strategy; when R increases to 1000, A of the first-in-first-out strategy is close to the minimal value 0, while that of the shortest-distance-first strategy is ∼ 0.53 and ∼ 0.8 at v = 0.1 and v = 1, respectively. It is obvious that the shortest-distance-first strategy achieves a higher packet arriving rate and a lower rate of queueing delay than the first-in-first-out strategy.

IV. CONCLUSIONS
Traffic on mobile networks is a challenging problem. However, previous routing strategies have paid little attention to the order in which packets should be forwarded, simply using the first-in-first-out queue discipline. Based on queueing theory, we propose a shortest-distance-first strategy, in which the packets with the shortest distance to their destinations are delivered first, regardless of their arrival time.
Our strategy remarkably improves network throughput compared with the first-in-first-out strategy, especially when agents move at high speed. In addition, it increases the packet arriving rate, and reduces the average end-to-end delay and the rate of queueing delay.
We also find that the critical packet generating rate R_c of the first-in-first-out strategy increases with the moving speed at the beginning, reaches its maximum at v = 0.1, and then decreases slowly, while that of the shortest-distance-first strategy increases until v = 0.7 and then remains stable (∼ 700). Besides, R_c of both strategies increases with the communication radius. Finally, it should be pointed out that the priority queue discipline can be applied to other routing strategies. Our work may be helpful for designing routing strategies on mobile networks.