From Structure to Activity: Using Centrality Measures to Predict Neuronal Activity

It is clear that the topological structure of a neural network somehow determines the activity of the neurons within it. In the present work, we ask to what extent it is possible to examine the structural features of a network and learn something about its activity. Specifically, we consider how the centrality (the importance of a node in a network) of a neuron correlates with its firing rate. To investigate, we apply an array of centrality measures, including In-Degree, Closeness, Betweenness, Eigenvector, Katz, PageRank, Hyperlink-Induced Topic Search (HITS) and NeuronRank, to Leaky Integrate-and-Fire neural networks with different connectivity schemes. We find that Katz centrality is the best predictor of firing rate given the network structure, with almost perfect correlation in all cases studied, which include purely excitatory and excitatory-inhibitory networks, with either homogeneous connections or a small-world structure. We identify the properties of a network which will cause this correlation to hold. We argue that the reason Katz centrality correlates so highly with neuronal activity compared to other centrality measures is that it nicely captures disinhibition in neural networks. In addition, we argue that these theoretical findings are applicable to neuroscientists who apply centrality measures to functional brain networks, and offer a neurophysiological justification for high-level cognitive models which use certain centrality measures.


Introduction
Understanding large complex networks has become increasingly important due to an increase of networked data. As such, traditional graph theory, whose typical concern is small artificial graphs, has been expanded to include the study of large real-world networks.5,12,14,64 Analyses targeting the network properties of cortical networks have entered neuroscience as well,6,14,65 because these properties lead to interesting network dynamics.42,52 It is by now also well established that many neural systems reveal what is called the "small-world property",58,66 a network structure that has predominantly local connections, but where a fraction of random long-range connections exists as well, which can serve as shortcuts for efficient information exchange. A number of papers have analyzed the stability and dynamical properties of small-world networks of neurons, spiking and non-spiking.23,50,54 This network structure is known to support a high efficiency of information exchange and high capacities in associative memory networks, which implement Donald Hebb's classic idea of "cell assemblies".15,29,35,41,62 It also seems crucially related to the generation of epileptiform brain activity.43,49,61 Small-world properties of cortical connectivity have further been related to general intelligence and creativity.33,39,51 The reader is referred to Ref. 14, which gives a good overview of network theory in general as applied to structural and functional networks in the brain. The particular network property, and its relation to cortical networks, that this paper is concerned with is the concept of centrality.
A centrality measure assigns a centrality (importance) value to each node in a network. Numerous measures of centrality have been designed, each capturing slightly different aspects of what it means for a node to be important. The simplest measure of centrality is In-Degree centrality, where the centrality of a node is the number of edges pointing to it. Centrality measures have found a wide range of applications: from ranking web pages on the Internet,34,47 where the centrality of a web page determines where and in what position it is shown after a web search, to fighting the transmission of infections,8,16,38 where centrality is used to identify individuals at high risk of infection. As a result, it seems natural to extend these measures to the study of brain networks. While there has been a great deal of thought regarding centrality measures in functional brain networks, there has been very little study in relation to neural networks themselves. For example, centrality measures have been used to identify hubs in functional brain networks.30,31,37,55,67 Functional brain networks are created in these papers by treating each fMRI voxel as a node. Edges between nodes are inferred through time-signal correlations. Centrality measures (usually In-Degree, Eigenvector, Closeness and Betweenness) are then applied to such networks to identify brain areas which are hubs. It is interesting to note that when such hubs are removed from these networks, the small-world index decreases.55 In clinical neuroscience, centrality measures have been successfully applied to functional brain networks to understand and diagnose brain disorders such as epilepsy and Alzheimer's disease.10,13,18,57 For example, Betweenness centrality was used along with other network properties to create a diagnostic prediction model for children with epilepsy.57 In patients with Alzheimer's disease, it was found that there is low Eigenvector centrality in the left temporal region compared to healthy patients.18 There is one paper which considers centrality measures and neural networks directly, namely that of Gürel et al.26 In this insightful paper, the average firing rate of Leaky Integrate-and-Fire (LIF) neurons is predicted by using various machine learning algorithms trained on structural properties of the network. The training set consists of centrality measures on the network, including In-Degree, PageRank, Hyperlink-Induced Topic Search (HITS) and a specially designed measure called NeuronRank, as well as other structural properties. The authors find that the machine learning algorithms perform significantly better when centrality measures are included.
This suggests a link between neuron centrality and neural activity; it is clear that the centrality measures are providing useful information to the machine learning algorithms. However, the more nuanced question of what exactly the centrality of a neuron tells us about that neuron remains unanswered.
Instead of predicting the activation of a network as a whole as Gürel et al. do, we consider the activation of individual neurons. Instead of using machine learning algorithms, we simply look at the correlation between centrality and neuron activity. We also consider how different types of networks affect the relationship between centrality and activity. Specifically, we consider the relationship in small-world and random neural networks, as well as neural networks which satisfy Dale's Principle (this principle states that neurons have either excitatory synapses or inhibitory synapses, but not both).
In addition to outlining the results of our experiments, we relate our findings to the use of centrality measures in functional brain networks. Specifically, we offer a neuron-level justification for the use of certain centrality measures on functional brain networks, in an attempt to bridge the gap between neural and functional brain networks. We argue that our theoretical findings can be exploited by researchers utilizing centrality measures on functional brain networks to understand brain disorders. In particular, we argue that such researchers should experiment with Katz centrality, which is currently seldom used, to understand and exploit the relationship between brain disorders and neural centrality. We also attempt to unify our findings with the use of centrality measures in high-level cognitive models. There are numerous algorithms and cognitive models which take advantage of centrality measures, for example Griffiths et al.'s24 model of memory retrieval, where words with high PageRank in semantic networks are found to be recalled more often. By understanding the link between centrality and neural activity, we propose a neural-level justification for this model.
In Sec. 2, we outline the centrality measures used in our investigation and explain the experiments performed. Afterwards, the results are presented for a number of different network setups. We find that Katz centrality almost perfectly correlates with neural activity for all network topologies, while most other measures only correlate when the networks are excitatory and randomly wired. We discuss when the correlation between Katz centrality and neural activity will hold, explain why this correlation exists, explain how Katz centrality can be modified to better capture network dynamics, and finally discuss some implications for neuroscience and cognitive science.

Methods
This section first describes the centrality measures (Sec. 2.1) used to characterize the importance of neurons in the different experimental setups outlined afterwards (Sec. 2.2).

In-Degree centrality
In-Degree centrality is the simplest measure, where the number of edges pointing to a node determines its centrality:

$$C_D(x) = \sum_{y \to x} 1. \tag{1}$$

Here, y → x indicates all nodes y which are adjacent to x. When the network is weighted, C_D(x) is simply the sum of the weights on the edges pointing to x:

$$C_D(x) = \sum_{y \xrightarrow{w} x} w. \tag{2}$$

The w on $y \xrightarrow{w} x$ is the weight of the edge between x and y.
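As a concrete illustration, Eq. (2) is a one-liner in networkX, the package used to represent our networks; the small example graph here is our own:

import networkx as nx

G = nx.DiGraph()
G.add_weighted_edges_from([(0, 2, 0.5), (1, 2, 0.9)])  # (pre, post, weight)

# Eq. (2): weighted In-Degree is the sum of the weights on incoming edges
C_D = dict(G.in_degree(weight="weight"))
print(C_D[2])  # 1.4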

Closeness centrality
The distance between two nodes x and y in a network, denoted d(x, y), is defined as the number of hops made along the shortest path between x and y. The experiments described in this paper calculate the shortest path between two nodes using Dijkstra's algorithm.20 For a graph G = (V, E), the Closeness centrality of a node x is then7,53

$$C_C(x) = \frac{1}{\sum_{y \in V} d(x, y)}. \tag{3}$$

When there does not exist a path between two nodes x and y, the distance function d(x, y) returns 0. This means the Closeness centrality is not defined for isolated nodes. Equation (3) states that a node that is close to many nodes has a high Closeness centrality.
In the case when the network is weighted, the distance between two nodes x and y is given as the sum of the weights along the shortest path between x and y. However, as we want high weights to correspond to a shorter path (we want a strong synaptic weight between two neurons to correspond to a short distance between them), for each weight w in G we use the inverse of w as the distance between the connected nodes.44
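With networkX, this convention can be implemented by storing the inverse weights as an auxiliary edge attribute (the attribute name "dist" is our choice); closeness_centrality then runs Dijkstra's algorithm on those distances:

import networkx as nx

G = nx.DiGraph()
G.add_weighted_edges_from([(0, 1, 0.8), (1, 2, 0.4), (0, 2, 0.1)])

# Strong synapses should act as short distances, so use 1/w as edge length
for u, v, w in G.edges(data="weight"):
    G[u][v]["dist"] = 1.0 / w

C_C = nx.closeness_centrality(G, distance="dist")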

Betweenness centrality
The Betweenness centrality of a node is the number of times that the node appears on the shortest path between two other nodes. It is given as21

$$C_B(x) = \sum_{y \neq x \neq z} \frac{\sigma_{yz}(x)}{\sigma_{yz}}. \tag{4}$$

Here, σ_yz is the total number of shortest paths between y and z, and σ_yz(x) is the number of those shortest paths that include x. When the network is weighted, we apply the same techniques as in the weighted Closeness centrality to calculate shortest paths.
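Continuing the snippet above, Betweenness centrality reuses the same inverse-weight distances:

# Shortest paths are computed on the inverse weights defined above
C_B = nx.betweenness_centrality(G, weight="dist")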

Eigenvector centrality
Eigenvector centrality is a measure of the influence a node has on a network. If a node is pointed to by many nodes (which themselves have high Eigenvector centrality), then that node will have high Eigenvector centrality. In Eigenvector centrality, the centrality of each node C_E(x) is first initialized to 1, then the following update rule is applied11:

$$C_E(x) = \frac{1}{\lambda} \sum_{y \to x} C_E(y). \tag{5}$$

Solving (5) yields the centrality values. Equation (5) can be written as

$$A\mathbf{e} = \lambda \mathbf{e}. \tag{6}$$

In (6), A is the adjacency matrix of the graph (V, E) and λ is the largest eigenvalue of A. When this equation is solved, e is the Eigenvector centrality of all the nodes; that is, the ith element of e holds the value of the Eigenvector centrality of the ith node in V. When the graph is weighted we have the following:

$$C_E(x) = \frac{1}{\lambda} \sum_{y \xrightarrow{w} x} w \cdot C_E(y). \tag{7}$$

In matrix form this can again be written as (6), where A is not the adjacency matrix (a 0/1-matrix) but the matrix of connection weights. To find the leading eigenvector, the power method is often used.40
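A minimal power-method sketch in numpy (our variable names; A[i, j] holds the weight of the edge i → j, so a node's centrality is driven by its incoming edges, i.e. the columns of A):

import numpy as np

def eigenvector_centrality(A, iters=200):
    e = np.ones(A.shape[0])
    for _ in range(iters):
        e = A.T @ e              # accumulate centrality from in-neighbours
        e /= np.linalg.norm(e)   # renormalize to prevent overflow
    return e                     # converges to the leading eigenvector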

Katz centrality
Katz centrality is a generalization of In-Degree centrality, where not only the adjacent nodes contribute to a node's centrality, but also the nodes k hops away25,32:

$$C_K(x) = \sum_{k=1}^{\infty} \sum_{y \xrightarrow{k} x} \alpha^{k}. \tag{8}$$

In (8), $y \xrightarrow{k} x$ represents all nodes y that are k hops away from x, with one term for each k-hop path. The parameter α, with 0 < α < 1/λ_max, where λ_max is the value of the largest eigenvalue of the adjacency matrix, is a cost factor, meaning that the farther away a node is from x, the less it contributes.
In the case when the graph is weighted we have the following:

$$C_K(x) = \sum_{k=1}^{\infty} \sum_{y \xrightarrow{k} x} \alpha^{k} w_k, \tag{9}$$

where w_k is the product of the weights along the k-hop path from y to x. In all our experiments, we pick α = 0.1, as this is less than 1/λ_max for all of the networks in our experiments.
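Because α < 1/λ_max, the infinite sum in Eq. (9) converges and can be evaluated in closed form as ((I - alpha A^T)^(-1) - I) applied to the all-ones vector; a short sketch, with A[i, j] again the weight of edge i → j:

import numpy as np

def katz_centrality(A, alpha=0.1):
    # Eq. (9): sum_{k>=1} alpha^k (A^T)^k 1 = ((I - alpha A^T)^{-1} - I) 1
    n = A.shape[0]
    ones = np.ones(n)
    return np.linalg.solve(np.eye(n) - alpha * A.T, ones) - ones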

PageRank
PageRank is related to Eigenvector centrality. The algorithm is based on the "Random Surfer Model", which considers a user surfing the Internet by randomly clicking hyperlinks. Imagine the user is standing on a node in a graph, representing the current web page she is visiting. She then randomly (uniformly) chooses an edge (representing a hyperlink between two web pages), exits the node and walks to the new node. In addition, with a small probability she jumps to an arbitrary node in the network (this is to avoid getting stuck in a subgraph). This process is repeated ad infinitum. The number of times the user visits a node is then proportional to its PageRank. First, the PageRank C_P(x) of each node x is initialized to 1, then the PageRank is updated in the following manner47:

$$C_P(x) = \frac{1-d}{N} + d \sum_{y \to x} \frac{C_P(y)}{\mathrm{out}(y)}. \tag{10}$$

In (10), N is the number of nodes in the network, d is a damping factor which controls how often one randomly jumps to another node, and out(y) is the out-degree of node y. This algorithm is used by Google to help rank the importance of web pages. When the graph is weighted we have

$$C_P(x) = \frac{1-d}{N} + d \sum_{y \xrightarrow{w} x} \frac{w \cdot C_P(y)}{\mathrm{out}_w(y)}. \tag{11}$$

Here, out_w(y) is the sum of the weights on the outgoing edges of y. The PageRank can also be calculated using the power method. In our experiments we pick d = 0.85, which is the value recommended in the literature for fast and accurate convergence.47
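In networkX, which names the damping factor alpha rather than d, the weighted PageRank of Eq. (11) for a weighted DiGraph G such as the one above is:

import networkx as nx

# d = 0.85; edge "weight" attributes give the weighted variant of Eq. (11)
C_P = nx.pagerank(G, alpha=0.85, weight="weight")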

Hyperlink-Induced Topic Search
The HITS algorithm, sometimes called the "Hubs and Authorities algorithm", is used by Microsoft to rank web pages on the Internet. In this regard it is similar to PageRank. HITS assigns two recursively defined scores to a node: a Hub value and an Authority value. The Hub score of a node is a measure of how many high-scoring Authorities it points to. The Authority score of a node is a measure of how many high-scoring Hubs point to that node. In HITS, the Authority score C_A(x) and Hub score C_H(x) are first initialized to 1 for all nodes x. The following two update rules are then applied34:

$$C_A(x) = \sum_{y \to x} C_H(y), \tag{12}$$

$$C_H(x) = \sum_{x \to y} C_A(y). \tag{13}$$

When the network is weighted we have

$$C_A(x) = \sum_{y \xrightarrow{w} x} w \cdot C_H(y), \tag{14}$$

$$C_H(x) = \sum_{x \xrightarrow{w} y} w \cdot C_A(y). \tag{15}$$
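A sketch with networkX, which returns the Hub and Authority dictionaries together (recent versions build the adjacency matrix from the "weight" attribute, so weighted graphs are handled; this is worth verifying for the version in use):

import networkx as nx

hubs, authorities = nx.hits(G, max_iter=1000)  # for a DiGraph G as above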

NeuronRank
NeuronRank, introduced by Gürel et al.,26 is a centrality measure designed specifically to help determine the activity of a neural network. It relies on the network satisfying Dale's principle, which states that the weights on the outgoing edges of a node should be either all positive or all negative. NeuronRank is inspired by the HITS algorithm in that it assigns two centrality values to a node: in this case a source value, which represents the net effect a neuron has on a network, and a sink value, which represents the sensitivity of the neuron to other neurons in the network. "Sensitive" neurons are highly affected by activity in other neurons in the network; that is, their membrane potential is likely to increase if other neurons in the network are active. Both sink and source values are calculated in the following way. We create a modified adjacency matrix A, where for all nodes i and j:

$$A_{ij} = \begin{cases} +1 & \text{if there is an edge } i \to j \text{ and } i \text{ is excitatory}, \\ -1 & \text{if there is an edge } i \to j \text{ and } i \text{ is inhibitory}, \\ 0 & \text{otherwise}. \end{cases} \tag{16}$$

We initialize a vector of source values, α:

$$\alpha_i = \begin{cases} +1 & \text{if neuron } i \text{ is excitatory}, \\ -1 & \text{if neuron } i \text{ is inhibitory}. \end{cases} \tag{17}$$

The vector of sink values ω is initialized as all ones. We update α and ω as follows:

$$\alpha \leftarrow A\,\alpha, \qquad \omega \leftarrow A^{T}\omega, \tag{18}$$

normalizing both vectors after each update. α and ω are updated until a convergence criterion is met; specifically, when the difference between the vectors at two consecutive time steps is suitably small. As we are interested in the relation between neuron activity and network structure, we only consider the sink or sensitivity value of neurons in our experiments.
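A sketch of our reading of the procedure (the update and normalization details are our reconstruction and have not been verified against Gürel et al.'s implementation):

import numpy as np

def neuron_rank(A_signed, excitatory, tol=1e-6, max_iter=1000):
    # A_signed is the sign matrix of Eq. (16); excitatory is a boolean array
    n = A_signed.shape[0]
    alpha = np.where(excitatory, 1.0, -1.0)   # source values, Eq. (17)
    omega = np.ones(n)                        # sink values, initially all ones
    for _ in range(max_iter):
        alpha_new = A_signed @ alpha          # propagate along outgoing edges
        omega_new = A_signed.T @ omega        # propagate along incoming edges
        alpha_new /= np.linalg.norm(alpha_new) or 1.0
        omega_new /= np.linalg.norm(omega_new) or 1.0
        done = (np.abs(alpha_new - alpha).max() < tol
                and np.abs(omega_new - omega).max() < tol)
        alpha, omega = alpha_new, omega_new
        if done:
            break
    return alpha, omega                       # we use omega (sink) below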

Experiments
We test whether centrality measures can predict firing activity from structural network properties in a number of experiments that use different connection schemes, purely excitatory or excitatory-inhibitory populations, and homogeneous input versus input provided to only a subset of the network. All network implementations use custom Python code with a time step of 0.1 ms and Euler integration of the network dynamics. For each experiment, we created 1000 LIF neurons,59 each with a refractory period of 10 time steps, a membrane capacitance of 10 µF, a resistance of 1 kΩ, a resting potential of 0 mV and a threshold of 1.2 mV. The synapses are modeled as δ-synapses, i.e. if a spike arrives at a neuron, the post-synaptic potential is instantaneously increased by 1.5·w mV, where w is the weight of the synapse. As we wire each network, we create a weighted directed graph corresponding to the neural network using the Python package networkX.27 Each node corresponds to a neuron. For any pair of pre- and post-synaptic neurons, a directed edge is added from the node corresponding to the pre-synaptic neuron to the node corresponding to the post-synaptic neuron, weighted by the weight of the synapse. For each neuron, at each time step, with probability 0.5, we externally stimulate it with 3 mV. We run each network for 20,000 time steps. We keep track of the number of spikes of each neuron during the simulation, and when finished we normalize by dividing by the number of spikes of the most frequently spiking neuron:
$$a_i = \frac{n_i}{n_{\max}}.$$

Here, n_i is the number of spikes of neuron i, n_max is the number of spikes of the most frequently spiking neuron, and a_i is the resulting normalized firing rate. We run the centrality measures on the directed graph representing the neural network to calculate the centrality values for each neuron.
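A condensed sketch of the simulation loop under these parameters (variable names and the exact ordering of the update steps within a time step are our assumptions; the membrane time constant follows from R·C = 1 kΩ · 10 µF = 10 ms):

import numpy as np

rng = np.random.default_rng(0)

N, T, dt = 1000, 20000, 0.1            # neurons, time steps, ms per step
tau = 10.0                             # membrane time constant R*C in ms
v_rest, v_thresh = 0.0, 1.2            # resting and threshold potentials, mV
refrac_steps = 10

W = np.zeros((N, N))                   # W[i, j]: weight of synapse i -> j
# ... wire W according to one of Experiments 1-6 ...

v = np.full(N, v_rest)
refrac = np.zeros(N, dtype=int)
spikes = np.zeros(N, dtype=int)

for _ in range(T):
    stim = np.where(rng.random(N) < 0.5, 3.0, 0.0)   # external input, mV
    v += stim + dt * (v_rest - v) / tau              # Euler leak step
    v[refrac > 0] = v_rest                           # clamp refractory cells
    refrac = np.maximum(refrac - 1, 0)
    firing = v >= v_thresh
    spikes += firing
    v += 1.5 * (firing.astype(float) @ W)            # delta synapses: 1.5*w mV
    v[firing] = v_rest
    refrac[firing] = refrac_steps

rate = spikes / spikes.max()                         # normalized rate a_i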
The correlation between the centrality of a neuron and its normalized number of spikes is then computed. We examine this correlation for six different types of networks, described below.
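Putting the pieces together, a sketch of the evaluation step, assuming the synapse matrix W and normalized rates from the simulation sketch above (networkX's katz_centrality adds a constant offset and normalizes relative to Eq. (9), which leaves the Pearson correlation unchanged):

import networkx as nx
from scipy.stats import pearsonr

G = nx.from_numpy_array(W, create_using=nx.DiGraph)  # edge i -> j with weight

C_K = nx.katz_centrality(G, alpha=0.1, weight="weight")
r, _ = pearsonr([C_K[i] for i in G.nodes], rate)
print(f"Katz centrality vs. firing rate: r = {r:.3f}")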

Experiment 1: Randomly Wired Excitatory Network
We randomly wire the network with 10,000 connections. The weights are chosen independently and identically distributed according to a uniform distribution between 0 and 1. Notice that in this experiment there are no inhibitory synapses.
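One plausible realization of this wiring (whether duplicate pairs were allowed in the original sampling is not specified, so we simply draw a random directed graph with the given number of edges):

import networkx as nx
import numpy as np

rng = np.random.default_rng(0)

# 1000 neurons, 10,000 directed synapses, uniform weights in (0, 1)
G = nx.gnm_random_graph(1000, 10000, directed=True, seed=0)
for u, v in G.edges:
    G[u][v]["weight"] = rng.uniform(0.0, 1.0)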

Experiment 2: Randomly Wired Excitatory and Inhibitory Network
This experiment is the same as Experiment 1; however, in this case we allow for both inhibitory and excitatory synapses, with the weight of each synapse chosen randomly between −1 and 1.

Experiment 3: Randomly Wired Excitatory and Inhibitory Network with Dale's Principle
The setup is the same as in Experiment 2. However, in this case, the inhibitory and excitatory synapses satisfy Dale's Principle. For each neuron, we flip a fair coin to decide whether it is inhibitory or excitatory. We randomly choose a pair of pre-synaptic and post-synaptic neurons. If the pre-synaptic neuron is excitatory, we randomly choose a weight between 0 and 1 for the synapse between the two neurons. Conversely, if the pre-synaptic neuron is inhibitory, we choose a weight between −1 and 0 to connect the neurons.
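Continuing the wiring sketch from Experiment 1, Dale's Principle amounts to signing all outgoing weights of a neuron by a per-neuron coin flip:

# Fair coin per neuron; all outgoing weights take the neuron's sign
excitatory = rng.random(1000) < 0.5
for u, v in G.edges:
    w = rng.uniform(0.0, 1.0)
    G[u][v]["weight"] = w if excitatory[u] else -w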

Experiment 4: Small-World Excitatory Network
While the random wiring in the previous experiments can demonstrate a relationship between centrality and neural activity in a theoretical sense, it does not explore the relation for more biologically plausible wirings. It has been proposed that the wiring of the cortex follows that of small-world networks,6,45,66 and indeed it has become increasingly popular to use such wiring when simulating neural networks.28 In this experiment we generate excitatory small-world networks using the Watts-Strogatz graph-generating mechanism.60 We pick the number of neurons n = 1000, the number of nearest neighbours k = 15, and the probability of adding an edge β = 0.7. We randomly pick the weights of the synapses to be between 0 and 1. Notice that, unlike the other experiments, in this experiment the number of synapses is not guaranteed to be 10,000; with these parameters the number of synapses is closer to 11,000. Using this network wiring scheme we hope to more closely simulate the type of networks one might find in the cortex, so as to make our theoretical results applicable to clinical and experimental neuroscientists who exploit centrality measures.
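A sketch of how such a wiring can be generated with networkX; since β is described as the probability of adding an edge, we use the Newman-Watts variant, and we orient the resulting undirected graph by replacing each edge with two directed edges (both choices are our assumptions, not confirmed details of the original setup):

import networkx as nx
import numpy as np

rng = np.random.default_rng(0)

G = nx.newman_watts_strogatz_graph(n=1000, k=15, p=0.7, seed=0).to_directed()
for u, v in G.edges:
    G[u][v]["weight"] = rng.uniform(0.0, 1.0)   # excitatory weights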

Experiment 5: Small-World Excitatory and Inhibitory Network
This experiment is the same as above; however, here synapses can be inhibitory or excitatory, that is, the weight of each synapse is chosen randomly between −1 and 1.

Experiment 6: Small-World Excitatory and Inhibitory Network with Dale's Principle
As in Experiment 3, but the wiring is generated using the Watts-Strogatz mechanism.

Results
Table 1 shows a summary of the results. In this table, the mean and standard deviation of the Pearson correlation coefficient between each centrality measure and the firing activity of the model neurons are shown for each of the different network schemes. The mean and standard deviation are calculated over 10 simulations of each network; each simulation is created using a different random seed to ensure a different random wiring. When NA is reported, the centrality measure failed to converge for one or more of the simulations. It is known that PageRank, as well as other centrality measures using the power method such as HITS, is not guaranteed to converge when there are negative weights,17 which is why NA is often reported for PageRank.

Table 1. The Pearson correlation between various centrality measures of a neuron and its relative neural activity in different types of network schemes. The columns represent the experiments described in the Methods (Sec. 2.2): exRan is the excitatory random network described in Experiment 1; exInRan is the excitatory and inhibitory network described in Experiment 2; exInRanD is the excitatory and inhibitory network satisfying Dale's principle described in Experiment 3; exSW is the excitatory small-world network described in Experiment 4; exInSW is the excitatory and inhibitory small-world network described in Experiment 5; exInSWD is the excitatory and inhibitory small-world network satisfying Dale's principle described in Experiment 6. The rows represent the centrality measures described in Sec. 2.1. The average Pearson correlation coefficient as well as the standard deviation over 10 different simulations for each different wiring of the network is reported. NA is reported when a centrality measure fails to converge on one or more of the simulations. Values are in bold when the average Pearson correlation coefficient is above 0.8.

Figure 1 displays a series of scatter plots, plotting the centrality of a neuron against its relative firing rate for one simulation. The wiring in this case is random and excitatory as described in Experiment 1 (column exRan in Table 1). In addition, we show a line of best fit for each scatter plot. As can be seen, most centrality measures correlate rather well when the network is purely excitatory.

Fig. 1. Scatter plots of normalized neuron activity plotted against neuron centrality with a line of best fit. For all plots, the simulations were carried out on randomly wired excitatory networks as described in Experiment 1. In this case, we can see that many centrality measures perform well.

Figure 2 shows the same scatter plots, in which however the wiring scheme is an excitatory and inhibitory small-world network satisfying Dale's principle as described in Experiment 6 (column exInSWD in Table 1). As we can see, only Katz centrality correlates in this case, while the other centrality measures fail to provide any useful information. The scatter plots for Hubs and Authorities are not shown in Fig. 2, as the HITS algorithm failed to converge on this network topology. It should be noted for clarity that both Figs. 1 and 2 show scatter plots for only one simulation rather than averages over 10 simulations like the results in Table 1; it is not possible to average the scatter plots, as for each simulation the network is wired differently. Figures 1 and 2 are included to demonstrate the correlations; the reader is referred to Table 1 for our main results.

Fig. 2. Scatter plots of normalized neuron activity plotted against neuron centrality with a line of best fit. All plots here come from simulations on excitatory and inhibitory small-world networks as described in Experiment 6.

Figure 3 shows a scatter plot for Katz centrality only. This simulation is set up as in Experiment 1; however, a random 10% of the neurons get an average extra 1.5 mV of stimulation. This is different from the simulations in Table 1 and Figs. 1 and 2, where all neurons are equally externally stimulated. Here, the neurons split into two clusters: the top cluster representing the 10% of neurons which get extra external stimulation, and the bottom cluster those which get the standard external stimulation. The Pearson correlation coefficient for this experiment, when averaged over 10 simulations, is 0.8398 ± 0.0215, which is lower than the correlation reported in Experiment 1 for Katz centrality (Table 1, row 5, column 1) due to the unequal stimulation.
In Fig. 4, each neuron gets an average of 6 mV of external stimulation. We again consider only Katz centrality; the rest of the setup is the same as in Experiment 1. As we can see, all neurons are firing very close to their maximum firing rate (determined by the inverse of the refractory period): the normalized firing rate is between 0.98 and 1. As such, the structure of the network no longer plays an important role in how the neurons behave. The Pearson correlation coefficient for this experiment, when averaged over 10 simulations, is 0.0042 ± 0.0016. It seems the high external stimulation overrides the internal stimulation from synapses in the network. A similar effect would occur if all neurons received very low external stimulation, in which case they would fire very little or not at all. For this reason, we propose that for the correlation to hold, the neurons in the network should be active in an intermediate range of their firing rate function. In addition, the effective average weight of the synaptic connections should be in a range that causes the diversity of In-Degrees of the neurons to be reflected in their firing rates: a too small average weight would leave the neurons' potentials basically unaffected by the activity of other neurons, and too large a weight would in turn drive neurons with a very high In-Degree into saturation.

Discussion
While most of the centrality measures correlate rather well with neuronal activity in the case of exclusively excitatory weights, it is Katz centrality which performs exceptionally for all types of networks. Importantly, Katz centrality performs equally well when the network is biologically wired, i.e. when the neural network is a small-world network. Putting this result simply: one can examine only the structure of a neural network using Katz centrality and rank the neurons, and if a neuron has a relatively high Katz centrality, it will likely be firing more than neurons with lower centrality. It has been proposed that the relative firing rate of a neuron in a network is important for computing attention and saliency,46 thus Katz centrality may prove to be an invaluable tool in the study of such cognitive phenomena. However, there are several more subtle points of discussion: (1) Why does Katz centrality perform better than other centrality measures? (2) What are the properties of a network that allow this correlation to hold? (3) How can Katz centrality be modified to better capture network dynamics? (4) What are the applications of this result in terms of neuroscience and cognitive science?
The remainder of this section will consider each of these questions.

Why does Katz centrality perform better than other centrality measures?
We offer a twofold explanation of why Katz centrality and neural activity correlate so highly. Firstly, Katz centrality naturally encodes the effect one neuron k hops away has on another: the farther away it is, the less effect it has. This effect is mediated by the α parameter and the weights on the edges.
Other centrality measures such as PageRank and Eigenvector centrality at best implicitly capture this property, perhaps explaining why they perform well in the case of excitatory networks; in general, however, they perform poorly for excitatory-inhibitory neural networks. Secondly, and more importantly, Katz centrality captures the effect of disinhibition in neural networks. For example, consider Fig. 5. If neuron 0 is stimulated, neuron 1 will be inhibited, causing neuron 2 to be more active, which then excites neuron 3. Katz centrality captures this property, as the effect of neuron 0 on neuron 3 in Katz centrality is given as α^k multiplied by the product of the weights of the synapses along the path, which in this case is positive. Conversely, if neuron 1 is stimulated, neuron 2 will be less active, causing less excitation of neuron 3; thus neuron 3 will be less active. The effect of neuron 1 on the Katz centrality of neuron 3 is therefore negative. Again we see Katz centrality precisely capturing the properties of the neural network. The other centrality measures tested here do not capture this property of neural networks, which explains why Katz centrality correlates highly with neural activity compared to other centrality measures on networks with inhibitory and excitatory synapses (as can be seen in Table 1).
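The sign bookkeeping can be checked numerically on the four-neuron chain of Fig. 5 (weights of −1, −1 and +1 are our illustrative choice):

import numpy as np

alpha = 0.1
A = np.zeros((4, 4))
A[0, 1] = -1.0   # neuron 0 inhibits neuron 1
A[1, 2] = -1.0   # neuron 1 inhibits neuron 2
A[2, 3] = +1.0   # neuron 2 excites neuron 3

# Katz vector: sum_{k>=1} alpha^k (A^T)^k 1 = ((I - alpha A^T)^{-1} - I) 1
n = A.shape[0]
C = (np.linalg.inv(np.eye(n) - alpha * A.T) - np.eye(n)) @ np.ones(n)

# Neuron 3 receives alpha*1 from neuron 2, -alpha^2 from neuron 1 (inhibition)
# and +alpha^3 from neuron 0 (disinhibition): C[3] = 0.1 - 0.01 + 0.001
print(C[3])  # 0.091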

What are the properties of a network which will allow this correlation to hold?
We can change how the network is stimulated to change the nature of the correlation between network structure and activity. More specifically, in the simulations in Table 1 and Figs. 1 and 2, all neurons were identical and received statistically identical current stimulation; what differed was only their connectivity. This rendered all neurons statistically equivalent and allowed the correlation between centrality and neural activity to hold. This situation may apply to resting-state activity in a cortical network. Any input related to cortical computations would drive additional currents into a subset of cells, which would then interact differently from the background neurons through their specific sub-network. For example, in Fig. 3 we randomly chose 10% of the neurons to receive an average extra 1.5 mV of external stimulation; beside this, the experiment is set up as Experiment 1.
As can be seen, the unbalanced stimulation causes the firing-rate-centrality relation to split, the lower cluster corresponding to the 90% background neurons and the higher cluster resulting from the neurons with additional stimulation. Katz centrality alone can no longer accurately predict the activity of a neuron across the whole population; however, the relationship is still conserved within each of the two clusters. What is more, the activity of the higher cluster seems to "widen" the lower cluster because of synaptic connections between the neurons in each cluster: an externally driven neuron connected to another neuron now has more influence on that neuron than a background neuron connected to the same target. Katz centrality cannot detect this. In general, the reason why Katz centrality does not work well in this situation is simple: centrality measures can only take into account the structure of the network, not its external stimulation. For this reason, we propose that the correlation should be visible in data when the network is in a resting state with no external stimulation,19,63 or when all neurons in the network are receiving roughly equal external stimulation, as is the case in most of our experiments above.

How can Katz centrality be modified to better capture network dynamics?
Up until now we have considered networks in a resting or background firing state, where a statistically homogeneous external stimulation seems plausible as an approximation. For these types of networks we have found that Katz centrality correlates almost perfectly with neural activity. However, when the network receives heterogeneous external stimulation, the correlation no longer holds (Fig. 3). This begs the natural question: is it possible to modify Katz centrality so as to force it to correlate when a network receives arbitrary heterogeneous external stimulation? This is indeed possible; however, it requires prior knowledge of the proportion of external stimulation a neuron receives relative to the total external stimulation of the network. That is, for a weighted graph G = (V, E) representing a neural network, we calculate an individual input factor I(x) for each node x in V in the following way:

$$I(x) = \frac{S(x)}{\sum_{y \in V} S(y)}. \tag{19}$$

In Eq. (19), S(x) is the total external stimulation a neuron receives over a simulation, so that I(x) is the proportion of the network's external stimulation arriving at x. We now modify the equation given for Katz centrality above (Eq. (9)) to the following:

$$C_{MK}(x) = I(x) + \sum_{k=1}^{\infty} \sum_{y \xrightarrow{k} x} \alpha^{k} w_k I(y). \tag{20}$$

As in Eq. (9), α is the attenuation factor and w_k is the product of the weights along the k-hop path between y and x. Notice that we now have a unique input factor I(x) for each neuron x, corresponding to how much it is externally stimulated.
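Under our reading of Eqs. (19) and (20), the modified centrality again has a closed form, since the input factor of a node plus the attenuated sum over all incoming paths equals the inverse of (I - alpha A^T) applied to the vector of input factors; a sketch:

import numpy as np

def modified_katz(A, total_stim, alpha=0.1):
    # A[i, j]: weight of synapse i -> j; total_stim[i]: total external
    # stimulation of neuron i over the simulation
    input_factor = total_stim / total_stim.sum()        # Eq. (19)
    n = A.shape[0]
    # Eq. (20): C = I + sum_{k>=1} alpha^k (A^T)^k I = (Id - alpha A^T)^{-1} I
    return np.linalg.solve(np.eye(n) - alpha * A.T, input_factor)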
Fig. 6. Scatter plots of normalized neuron activity plotted against neuron centrality with a line of best fit for Katz centrality (left) and modified Katz centrality (right). For both plots, the simulations were carried out on randomly wired excitatory networks as described in Experiment 1. However, unlike Experiment 1, the networks get heterogeneous external stimulation; that is, each neuron gets a randomly chosen external stimulus in the range 0-6 mV.
In this modified version of Katz centrality, the centrality of a node x depends on how much it is stimulated. In addition, any other node y contributes more centrality to x the more it is externally stimulated. Figure 6 shows scatter plots comparing the performance of the modified Katz centrality (Eq. (20)) and the canonical Katz centrality (Eq. (9)). In the simulation, the setup is the same as in Experiment 1; however, the networks get heterogeneous external stimulation, that is, each neuron is independently assigned an external stimulus taken from a uniform distribution between 0 mV and 6 mV, and is stimulated by this amount at each time step. When averaged over 10 simulations, we find that the Pearson correlation coefficient between modified Katz centrality and normalized firing rate is 0.938 ± 0.016, while it is only 0.158 ± 0.082 for the canonical Katz centrality. It is interesting, however, that this correlation is not as high as the results in Table 1: the correlation for Katz centrality on homogeneously stimulated networks is around 0.97, whereas on heterogeneously stimulated networks the modified Katz centrality has a correlation around 0.93. This is due to the nature of heterogeneously stimulated networks: some neurons get little stimulation and thus do not fire at all, while others receive high stimulation and thus fire close to saturation (as can be seen in Fig. 6, many neurons are either not firing at all or firing close to 1). This causes the topological structure of the network to have little effect on the activity of these neurons.

What are the applications of this result in terms of neuroscience and cognitive science?
Firstly, we consider some points regarding cognitive science. Griffiths et al.24 exploit centrality measures in their model of memory retrieval. They create a semantic network, where each node in the network is a word, with an edge drawn from one word to another if the participant finds the words semantically related. Once the semantic network is created, they run the PageRank algorithm on the network and store the PageRank of each word. They then give the participant a letter, for example "a", and the participant is asked to say the word that first comes to mind. The authors find that, with high accuracy (82%), it is the word with the highest PageRank beginning with that letter which is recalled. The semantic network is arguably implemented in neural networks in each of the participants' brains, where words are represented by distributed ensembles of neurons and semantic associations supposedly by the (learnable) mutual interconnections.9,48 We find that when a neural network is excitatory, PageRank predicts neural activity. Thus, we propose that the reason the word with the highest PageRank is being recalled is that the neurons which represent that word also have high PageRank and are thus firing more. PageRank does not perform well when a neural network has inhibitory connections, as can be seen in Table 1. However, the semantic network Griffiths et al. consider does not have negative links (after all, the links represent the frequencies of associated words in close-associates tasks). Nonetheless, we cautiously hypothesize that Griffiths et al. might see even clearer results if they were to use Katz centrality on the semantic network to rank the words, as we have found that Katz centrality better predicts neural activity in all our cases.
We now consider the result in terms of systems neuroscience. Firstly, we find that the sink value (the proposed sensitivity value of a node in a network) of a neuron calculated by the NeuronRank algorithm26 does not correlate well with its activity. However, that is not to say that it is not useful for predicting the overall activity of a neural network, which is what it was originally designed for. The reason the sink value does not correlate well is that, by modifying the adjacency matrix, important information about the weights of the synapses is lost. We would be interested to see how well Katz centrality, when used in machine learning algorithms, predicts the overall activity of a neural network.
These results have implications for the use of centrality measures in functional brain networks, where they are applied, for instance, to identify important brain areas30,31 or to understand and diagnose disorders such as epilepsy and Alzheimer's disease.10,13,18,57 However, Katz centrality is rarely utilized on functional brain networks; instead, Eigenvector, Closeness and Betweenness centrality are preferred. In light of our results, we suggest that Katz centrality may yield superior results over the more commonly used measures when applied to functional brain networks, because interactions between areas in imaging data can show positive and negative effects. Furthermore, in resting states, functional brain networks are found to "map onto" structural brain networks.22,56 It is also known that fMRI reflects the intracortical processing of a brain area.36 As such, if the neurons populating an fMRI voxel (i.e. a node in a functional brain network) have on average a high Katz centrality, they will likely be firing more, thus causing that voxel to be more active. In this way, Katz centrality receives a neurophysiological justification for its use in functional brain networks.

Summary and Conclusion
In this paper, we argued that if a neural network is in a resting state and the neurons have a wide range of activation, then the Katz centrality of a neuron correlates exceedingly well with its relative firing rate. We found that this correlation holds across different types of networks, including biologically plausible small-world networks. We explained why Katz centrality correlates so well: it naturally captures disinhibition in neural networks and the way in which neurons interact when they are not directly connected. Other centrality measures do not capture both of these aspects, or do so at best implicitly. We explained, in a special case, how Katz centrality can be modified to correlate with neural activity when neurons receive heterogeneous external stimulation, for example, when the network is not in a resting state. In addition, we found that when a network is purely excitatory, the In-Degree, Eigenvector, PageRank and Authority (from the HITS algorithm) centrality measures also correlate well. We discussed possible implications of the findings for cognitive science: we suggest that the main result offers a physiological justification for some cognitive models which use centrality measures. Finally, we argued that our theoretical findings are highly applicable to clinical and experimental neuroscience research, where centrality measures on functional brain networks are exploited to understand epilepsy, Alzheimer's disease and other neurological disorders. Specifically, we propose that Katz centrality may outperform other measures in these contexts.

Fig. 3. Normalized number of spikes of 1000 neurons plotted against Katz centrality. The setup here is as in Experiment 1; the LIF network is randomly wired with 10,000 excitatory synapses. A randomly chosen 90% of the neurons get on average 1.5 mV of external stimulation at each time step, and the remaining 10% of neurons get an average of 3 mV (hence, an additional average of 1.5 mV).

Fig. 4. Normalized number of spikes of 1000 neurons plotted against Katz centrality. The simulation is set up as in Experiment 1; however, all neurons get an average of 6 mV of external stimulation at each time step instead of an average of 1.5 mV. Notice that all neurons are firing close to the maximum firing rate (determined by the refractory period of the neurons).

Fig. 5. Disinhibition in neural networks. If neuron 0 is stimulated, this will cause neuron 3 to be more active. However, if neuron 1 is stimulated, neuron 3 will be less active. Katz centrality captures this effect.