Periodic Strategies: A New Solution Concept and Algorithm for Non-trivial Strategic Form Games

We introduce a new solution concept for selecting optimal strategies in strategic form games, which we call periodic strategies, and the corresponding solution concept periodicity. As we explicitly demonstrate, the periodicity solution concept has implications for non-trivial realistic games, which makes it very valuable. The most striking application of periodicity is that in mixed-strategy strategic form games we are able to find solutions whose utility values for each player equal the Nash equilibrium ones, with one difference: when the Nash strategies are played, the payoffs depend strongly on what the opponent plays, whereas in the periodic strategies case each player's payoff is completely robust against what the opponent plays. We formally define and study periodic strategies in two-player perfect-information strategic form games with pure strategies, and generalize the results to multiplayer games with perfect information. We prove that every non-trivial finite game has at least one periodic strategy, where non-trivial means a game with non-degenerate payoffs. The algorithm we provide holds for every non-trivial game; in degenerate games, inconsistencies can occur. In addition, we address incomplete-information games in the context of Bayesian games, in which case generalizations of Bernheim's rationalizability offer the possibility of embedding the periodicity concept in the Bayesian games framework. Applying the algorithm of periodic strategies in the case where mixed strategies are used, we find some very interesting outcomes with useful quantitative features for some classes of games.

Finally, we discuss the embedding of the periodic strategies in an epistemic framework, using simple arguments. We support all our results throughout the article with illustrative examples.

Motivation for Periodicity and Periodic Strategies: A Non-cooperative Concept
What we intend to show in this article is a new mathematical concept which, as we prove, is an inherent characteristic of every non-trivial, finite-action, finite-player, simultaneous, strategic form game with or without perfect information. By non-trivial we mean a game with non-degenerate payoffs for the players, which is the most interesting case, since applications of game theory to economics can typically be described in terms of non-trivial games. We call this new concept periodicity, and we describe in detail both its mathematical implications and, more importantly, its applications to realistic games. Our solution concept, which from now on we call periodicity, also applies to trivial games, but that is a degenerate case with no practical use, in which our algorithm can run into inconsistencies; so we consider only non-trivial games in the following. As we will demonstrate, the periodicity concept actually introduces a new way of thinking, whose non-cooperative character we strongly emphasize. Its most important feature is that each player tries to maximize his own payoff by observing and predicting which action of his opponent will maximize that payoff. Non-cooperativity is realized in the fact that each player tries to maximize his own payoff, but there is an important difference from the usual way of thinking in standard game theory. In the periodic strategies context, a player "scans" his opponent's actions, builds hierarchical belief systems on these actions by assigning corresponding probabilities, and investigates which of his opponent's strategies will maximize his own payoff. This is totally unconventional with respect to the standard game-theoretic way of thinking, but we remain entirely in a non-cooperative context, since we maximize each player's own payoff, not a sum or combination of all players' payoffs.
The novelty is that we maximize a player's utility taking as reference what his opponents will play.
So at first glance, our new concept of periodicity could be considered just another solution concept, a new mathematical structure inherent in every non-trivial strategic form game. Intriguingly, the solution concept materialized by periodic strategies can, in some mixed-strategy games, yield a payoff equal to the Nash equilibrium payoff. Notice that we arrive at the same payoffs by thinking in a totally different way from the Nash equilibrium way of thinking. Let us give a convincing example, which we analyze in detail in a later section (see section 7). Consider the game named "Test Game" in the table below, a two-player strategic form game played simultaneously.

         b1      b2
  a1    2,5    50,6
  a2    3,10    2,5

Table 1: Test Game

We assign the following probability distributions for an action x_σ of player A and, correspondingly, for an action y_σ of player B:

p(a1) = p,  p(a2) = 1 - p,    (1)
q(b1) = q,  q(b2) = 1 - q.    (2)

The reader can easily verify that this game has one mixed Nash equilibrium, which we denote (p_N, q_N), equal to (p*_N = 5/6, q*_N = 48/49). With p_N and q_N we denote the probabilities assigned by players A and B respectively, as in relations (1) and (2). If instead we act in the context of the periodicity concept, player A maximizes his expected utility with respect to q (his opponent's probability) and ends up with the value p*_p = 1/49 for his own probability distribution. Thinking in the same way, player B ends up with the mixed periodic strategy q*_p = 1/6. Computing the payoffs for the Nash equilibrium and for the periodic strategies solution concept, we get quite interesting results, which we briefly report here and analyze in detail in a later section.
Specifically, playing Nash, player A maximizes his own expected utility with respect to his own probability distribution p, and this process ends up specifying the value of q, namely q*_N = 48/49, for which player A's expected utility equals

U_1(p, q*_N = 48/49) = 146/49.    (3)

Thinking in the same way, player B maximizing his own expected utility with respect to q yields the value p*_N = 5/6, and

U_2(p*_N = 5/6, q) = 35/6.    (4)

Now comes the periodic-strategies way of thinking: if player A maximizes his own expected utility with respect to his opponent's probability distribution q, this specifies his periodic strategy p*_p = 1/49. In this case his payoff is

U_1(p*_p = 1/49, q) = 146/49,    (5)

which is independent of q. The same applies for player B's utility at q*_p = 1/6:

U_2(p, q*_p = 1/6) = 35/6.    (6)

The first striking new feature is that the utility function values for the periodic and Nash strategies are the same, in terms of payoffs. But the even more striking feature that periodicity brings along is the following. Suppose that player A decides to play Nash and assigns (according to the Nash equilibrium) his own probability distribution to be p*_N = 5/6. Then his own utility function becomes

U_1(p*_N = 5/6, q) = -(239/6) q + 42,    (7)

while his opponent's (player B's) expected utility, when B plays q*_N = 48/49, is

U_2(p, q*_N = 48/49) = (485 - 239 p)/49.    (8)

Notice that the conventional game-theoretic way of thinking leads to payoffs that strongly depend on what the opponent plays. So playing Nash in a non-cooperative way of thinking, we end up with opponent-dependent payoffs. Observe relations (5) and (6), in which player A plays his own periodic probability distribution p*_p and ends up with an opponent-independent utility function.
Obviously, the Nash utilities depend on what the opponent plays, in contrast to the case where mixed periodic strategies are chosen, in which case the corresponding payoffs are robust against what the opponents play.
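The robustness claim above can be checked numerically. The following sketch (ours; it uses exact rational arithmetic and the Table 1 payoffs, with p the probability of a1 and q the probability of b1) verifies that at the periodic mixed strategies each player's expected utility is constant in the opponent's probability and equals the Nash value:

```python
from fractions import Fraction as F

# Test Game payoffs (Table 1): rows a1, a2; columns b1, b2.
U1 = [[F(2), F(50)], [F(3), F(2)]]   # player A's payoffs
U2 = [[F(5), F(6)], [F(10), F(5)]]   # player B's payoffs

def expected(U, p, q):
    """Expected payoff when A plays a1 with probability p and B plays b1 with probability q."""
    return (p * q * U[0][0] + p * (1 - q) * U[0][1]
            + (1 - p) * q * U[1][0] + (1 - p) * (1 - q) * U[1][1])

# Periodic mixed strategies: p_p makes U1 independent of q, q_p makes U2 independent of p.
p_p, q_p = F(1, 49), F(1, 6)

# A's payoff at p_p is the same for every q the opponent might play:
payoffs_A = {q: expected(U1, p_p, q) for q in (F(0), F(1, 3), F(1))}
print(payoffs_A)   # every value equals 146/49

# B's payoff at q_p is likewise robust against A's choice of p:
payoffs_B = {p: expected(U2, p, q_p) for p in (F(0), F(5, 6), F(1))}
print(payoffs_B)   # every value equals 35/6

# For comparison, the mixed Nash payoffs coincide in value:
print(expected(U1, p_p, F(48, 49)))   # 146/49
print(expected(U2, F(5, 6), q_p))     # 35/6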
So the periodicity concept has some striking conceptual advantages in comparison to Nash equilibria. In addition, we will be able to provide a non-cooperative solution to the collective action games problem. In particular, we will establish that the collective action strategy can be incorporated in a purely non-cooperative context, a feature absent up to now in the literature. Before closing this section, we must note that the periodicity solution concept is in practice closely related to rationalizable strategies, especially in the case of Bayesian games. However, the two concepts are not essentially related in a mathematical way; we nevertheless try, in every studied case, to see whether there is any direct correlation between them.

Introductory Remarks
Non-cooperative game theory [1,2] has been one of the most valuable tools in strategic decision making and the social sciences for the last 60 years. Research in game theory is devoted to studying strategic interaction within a group of decision makers whose outcomes are strongly interdependent. The term non-cooperative refers to situations where the opposing players of the game are each trying to obtain the best outcome for themselves, but their outcomes depend on the choices their opponents make. This is what distinguishes game theory from single-agent decision theory. One of the most important and probably most controversial principles that game theory is founded upon is the rationality of the players and the common belief in the rationality of the players participating in the game. An individual is rational if he has well-defined preferences over the whole set of possible outcomes and deploys the best available strategy to achieve these outcomes. Put differently, rationality means that each player forms beliefs about his own and his opponents' possible strategies, beliefs in terms of subjective probabilities, and acts according to those probabilities. By "acts" we mean that he chooses an optimal strategy based on his beliefs about his opponents' strategies. We shall assume that every player acts rationally, that is, optimally according to his beliefs about his opponents, and also that all players believe in their opponents' rationality. In addition, a very important underlying theme in game-theoretic frameworks is the concept of common knowledge [3]. An event, outcome, or piece of information is common knowledge if all the players know it, if every player knows that every player knows it, and so on. The game and the rationality of the players are assumed to be common knowledge.
The outcomes of a game can be of various types, but we shall confine ourselves to outcomes described in terms of a von Neumann-Morgenstern utility function [4]. Optimality of the players' actions is then conceptually and quantitatively identical to utility maximization. A strategic game consists of three elements: a finite set of players, a set of actions that each player can play, and a utility function for each player. The utility functions quantify each player's preferences over the various action profiles. Strategic form games are arguably the most fundamental ingredient of game theory, since even extensive games can be reduced to strategic form games. By complete information is meant that all players have perfect information about the elements of the game, that is, about the players of the game, their utility functions, and their sets of actions, and that no player has private information that others do not have. Furthermore, all players know that all players know everything about the game, and there is no uncertainty about the payoff functions or about the number of actions available to each player. One of the most fundamental concepts in non-cooperative game theory is the Nash equilibrium, one of the most widely used solution concepts for predicting the outcome of strategic interaction in the social sciences. A pure-strategy Nash equilibrium is an action profile with the important property that no single player can obtain a higher payoff by deviating unilaterally from it. Based on the rationality of the players, a Nash strategy is a steady state of strategic interaction. However, as Bernheim notes in his paper [5], Nash equilibrium is neither a necessary consequence of rationality nor a reasonable empirical proposition.
Despite the valuable contributions that the Nash equilibrium offers to non-cooperative games, there is also the very rigorous solution concept of rationalizability and rationalizable strategies [8-24]. In strategic form games, rationalizability is based on the fact that each player views his opponents' choices as uncertain events, each player complies with Savage's axioms of rationality, and this is common knowledge [5]. The rationalizability concept appeared independently in Bernheim's [5] and Pearce's [6] work (a predecessor of the two papers was Myerson's work [7]). Since then, a great amount of work has been devoted to further studying and refining the rationalizability solution concept in various games, both static and dynamic; for an important stream of papers see [8-24] and references therein. The Nash equilibrium and its refinements are statements about the existence of a fixed point in every game. In this paper we shall present another mathematical property of finite-player, finite-action, simultaneous strategic form games, which we shall call periodicity. In general, periodicity is a solution concept with interesting quantitative implications. The purpose of this paper is to study periodic strategies and investigate the consequences of periodicity in various cases of multiplayer perfect-information strategic form games. By the terms "periodic" and "periodicity" is meant that there exist maps between the players' strategy spaces whose compositions constitute automorphisms Q of each strategy space, with the property Q^n = 1 for some n ∈ N. The periodicity concept appeared in Bernheim's paper [5] in a different context and not under that name; it appeared there as a property of rationalizable strategies. In our case, the periodicity of a strategy can easily be checked by an algorithm we shall present.
Using this algorithm, we shall prove that periodic strategies exist in every finite-action, finite-player strategic form game with pure strategies, and that some of them are rationalizable too. In addition, we shall prove that the set of periodic strategies is set stable. We shall investigate which conditions have to be satisfied in order for a Nash equilibrium to be periodic. After providing some illuminating examples, we study the generalization to multiplayer games with or without perfect information; with respect to the latter, we focus mainly on Bayesian games. Then we turn to two-player strategic games with mixed strategies. In this case, we shall see that the only strategy that satisfies the algorithm, and thus is periodic in the strict sense, is the mixed Nash equilibrium. After that, we shall present the consequences of the algorithm when it is applied to games with mixed strategies. The results are particularly interesting quantitatively and qualitatively for some classes of games, as we shall see. In particular, we shall establish that when a player uses the algorithm, he can get exactly the same payoff as the mixed Nash equilibrium yields. This payoff does not depend on what the player's opponents play and is thus completely different in spirit from the mixed Nash equilibrium, in which case the Nash strategy is optimal only if the opponent also plays mixed Nash. Applying the algorithm to collective action games, we find an elegant way to explain why the social optimum strategy is actually a strategy that can be chosen assuming non-cooperativity. A discussion of the difference between periodicity and cooperativity follows. As we shall demonstrate, the periodic strategies are as cooperative as the Nash equilibrium is; we establish this result quantitatively by working through a characteristic example and computing explicitly the payoffs in each case. The case where the two players have a continuum of actions available is also studied.
Moreover, we shall attempt to put the periodic strategies into an epistemic game theory framework. As we shall demonstrate, the periodicity number is connected to the number of types needed to describe the game. Perhaps our concept of periodicity is best clarified using some simple epistemic terms. The reasoning behind the Nash equilibrium concept is essentially the following: I play the equilibrium because if I deviated, my opponent could respond with an action that leaves him better off, but which is worse for me. The additional step of considering the possibility that I might respond to his new action with still another one that would be better for me, but worse for him, is not taken, however. In that sense, the requirement is that the reasoning of the players has to be consistent at equilibrium, but not necessarily off equilibrium. The rationalizability concept of Bernheim [5] and Pearce [6] somehow avoids this issue and instead argues that I choose an action because it is a best response to an action that I believe my opponent will take, and I believe that he will choose that action because he believes that I shall play some, possibly different, action to which his action is a best response, and that putative action of mine in turn is caused by a belief of mine that he will choose another, possibly different, action to which that action of mine is a best response, and so on. That is, the choice of action is justified at every step of the belief hierarchy in terms of a best response to a putative action of the other player. Now, our periodicity concept can be described by a similar iteration. I play a particular action because I hope that my opponent will play the action most favorable to me, and he will in turn play that action because he hopes that I shall play the action most favorable for him, and so on again. As for rationalizability, the infinite iteration makes this solution concept consistent.
We can then study the relationship between this solution concept and the others mentioned, and in particular for which classes of games periodicity includes mixed Nash or rationalizability and when it leads to different outcomes. Starting from a different perspective, Kabalak and Kell [35] have arrived at some results that overlap with those of the present paper, in particular for certain games like the Prisoner's Dilemma. This paper is organized as follows. In section 3, we define periodic strategies and present the algorithm of periodic strategies in the case of pure strategies, focusing for the moment on two player games; in addition we prove set stability and provide some examples. In section 4, we generalize the periodicity solution concept to multiplayer finite, simultaneous strategic form games. In section 5 we study the periodicity solution concept for games with incomplete information, quantified in terms of Bayesian games. In sections 6 and 7, we investigate the consequences of the periodic strategies algorithm for mixed-strategy games, providing a general framework in terms of 2 × 2 games and specializing to some well-known classes of games; moreover, collective action games are studied and the difference between periodicity and cooperativity is addressed. In section 8, we study the continuous strategy space case of two player strategic form games, while in section 9 we incorporate the periodicity concept into a very simple epistemic game theory framework by connecting types to the periodicity number, without going into much detail. The conclusions follow at the end of the paper.

Two Player, Perfect Information Strategic Form Games
We restrict our present study to simultaneous, strategic form games with two players A and B, in the context of perfect information, assuming that the game is played only once. The actions available to each player are finite in number, and as a first approach only pure strategies are used. Each player has a finite number of actions, but the two players' actions can differ in number. Denote by M(A) the strategy space of all of A's actions and by N(B) the strategy space of all of B's actions. The strategic form game is then defined by:

• The set of players: I = {1, 2}
• The strategy spaces of players A and B, namely M(A) and N(B), and the total strategy space Ḡ = M(A) × N(B)
• The payoff functions U_i : Ḡ → ℜ, i = 1, 2

We define two maps, ϕ_1 and ϕ_2, between the strategy spaces M(A) and N(B), which act as follows:

ϕ_1 : M(A) → N(B),    ϕ_2 : N(B) → M(A),    (9)

with the payoff of a profile (x, y) ∈ Ḡ being U_i(x, y). The actions of the maps ϕ_1, ϕ_2 are defined in such a way that the following inequalities hold true at each step:

U_1(x, ϕ_1(x)) > U_1(x, y) ∀ y ∈ N(B)\{ϕ_1(x)},
U_2(ϕ_2(ϕ_1(x)), ϕ_1(x)) > U_2(x′, ϕ_1(x)) ∀ x′ ∈ M(A)\{ϕ_2(ϕ_1(x))},
...
(ϕ_2 ϕ_1)^n (x) = x,    (11)

with n some positive integer. Let us clarify the meaning of the above inequalities. We start with a pure strategy x ∈ M(A), on which we act with the map ϕ_1. The map acts in such a way that the inequality U_1(x, ϕ_1(x)) > U_1(x, y_1) ∀ y_1 ∈ N(B)\{ϕ_1(x)} holds true. This means that ϕ_1 maps x to the action in B's strategy space which yields the highest payoff for player A when A plays x. At the next step, the map ϕ_2 acts on the strategy space of player B and yields an action ϕ_2(ϕ_1(x)) ∈ M(A). So we could say that ϕ_2 ∘ ϕ_1(x) is the action of player A for which the utility function of player B is maximized, if it is assumed that player B plays ϕ_1(x). Proceeding in this way, under some assumptions that we shortly address, it is possible to end up at the initial action x of player A.
We can depict this procedure with a one-dimensional chain of strategies, as follows:

x → ϕ_1(x) → ϕ_2(ϕ_1(x)) → ... →(P) x,    (12)

where the letter P denotes the procedure described in relation (11). Therefore it is possible to construct a chain of actions corresponding to the maps ϕ_1, ϕ_2 such that the final action of the chain is identical to the action at the beginning of the chain. We shall call that action periodic. Notice that for periodic actions, the operator Q defined as

Q = ϕ_2 ∘ ϕ_1    (13)

has the property Q^n x = x for some number n ∈ N, if the action x is periodic. In terms of the operator Q, the last line of relation (11) can be cast as

Q^n x = x.    (14)

The periodicity property is a very important property of the space of actions of player A. It means that we can find an operator that acts as an automorphism on a subset of M(A) and leaves every element of this subset invariant under iteration: when it acts on an action that belongs to the set of periodic actions, it yields the original action after a finite number of steps. It is exactly this subset of the total strategy space of player A that constitutes the set of periodic strategies of player A. We denote the set of periodic actions of player A by P(A) and, correspondingly, that of player B by P(B). We can formally define the periodic strategies as follows:

Definition 1 (Periodicity). In a 2-player simultaneous-move strategic form game with finite actions, we define the periodic strategies of player A to be a subset P(A) of his available strategies M(A) such that there exists an operator Q : M(A) → M(A) for which, ∀ x_i ∈ P(A), there is a number n_i ∈ N with Q^{n_i} x_i = x_i. It is presupposed that Q is composed of maps that act in such a way that the inequalities of relation (11) are fulfilled. We call the number n_i the periodicity number; it is characteristic of every periodic strategy of the game.
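Under the non-degeneracy assumption (unique maximizers, so that ϕ_1 and ϕ_2 are well defined), the algorithm behind Definition 1 can be sketched as follows. The payoff matrices are plain lists of lists, and the function names are of course our own:

```python
# A minimal sketch of the periodicity algorithm for a two-player pure-strategy
# game. U1[i][j] and U2[i][j] are the payoffs when A plays action i and B plays
# action j; ties are broken toward the first maximizer (the maps phi1, phi2 are
# only well defined for non-degenerate payoffs, as the text assumes).

def argmax(values):
    return max(range(len(values)), key=lambda k: values[k])

def phi1(U1, i):
    """phi1 : M(A) -> N(B); B's action that maximizes A's payoff, given A plays i."""
    return argmax(U1[i])

def phi2(U2, j):
    """phi2 : N(B) -> M(A); A's action that maximizes B's payoff, given B plays j."""
    return argmax([row[j] for row in U2])

def periodic_actions(U1, U2):
    """Actions x of player A with Q^n x = x for some n, where Q = phi2 . phi1."""
    Q = lambda i: phi2(U2, phi1(U1, i))
    periodic = set()
    for x in range(len(U1)):
        seen, cur = [x], Q(x)
        while cur not in seen:      # iterate Q until some action repeats
            seen.append(cur)
            cur = Q(cur)
        if cur == x:                # the chain returns to the start: x is periodic
            periodic.add(x)
    return periodic
```

For instance, in matching pennies (U1 = [[1, -1], [-1, 1]], U2 = [[-1, 1], [1, -1]]) both actions of player A come out periodic with periodicity number n = 2, since Q swaps them.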
Obviously, similar definitions can be given for player B. We denote the operator corresponding to player B by Q′; it can be constructed from the maps ϕ_1, ϕ_2 as

Q′ = ϕ_1 ∘ ϕ_2.    (15)

In this case, the inequalities (11) take the form

U_2(ϕ_2(y), y) > U_2(x, y) ∀ x ∈ M(A)\{ϕ_2(y)},
U_1(ϕ_2(y), ϕ_1(ϕ_2(y))) > U_1(ϕ_2(y), y′) ∀ y′ ∈ N(B)\{ϕ_1(ϕ_2(y))},
...
(ϕ_1 ϕ_2)^n (y) = y.    (16)

The last line can be written in terms of the operator Q′ as

Q′^n y = y.    (17)

There is a clear conceptual distinction between periodic strategies and rationalizable strategies; however, the rationalizable strategies that are also periodic are particularly interesting. We will make this clear with some characteristic examples that we analyze in detail at the end of this section. The procedure described by relations (11) and (16) does not suggest in any way that the actions resulting under the maps ϕ_1 and ϕ_2 are best responses to some action. This is an important distinction between the algorithm that the inequalities (11) and (16) suggest and the procedure of finding the best responses for each player. Because this is important, let us shed some light on it. Take for example the first of the inequalities (11). Its meaning is that, assuming player A plays x, we search the action set of player B for the action ϕ_1(x) for which the utility U_1(., .) of player A is maximized. This is exactly the converse of the procedure followed when best responses are studied. Indeed, in the best response algorithm we do not presuppose that player A will play some action; we ask, given that player B will play an action, say b_k, which action of player A maximizes his utility function U_1(., .). Let us briefly recapitulate what we just described: in the best response algorithm we search player A's set of actions, while in the periodic actions algorithm described by the inequalities (11) we search player B's set of actions, given that A plays a specific action.
The latter procedure is somewhat artificial from the game's standpoint, since what we are interested in is whether an action is periodic, not, for example, whether it is rationalizable. Regarding the periodicity number n defined previously, it seems to depend strongly on the payoff details of the game, but not on the number of actions. Thus there is no direct connection between this number and the number of actions or the number of players, at least for finite two-player strategic form games. As we will demonstrate, this holds true even for games with a continuous set of actions available to each player. Periodic strategies are inherent in every finite-action strategic form game. Indeed, the following theorem describes exactly that:

Theorem 1. Every finite-action simultaneous 2-player strategic form game contains at least one periodic action.
Proof. The proof follows easily from the inequalities (11). We focus on player A, but the result holds for player B too. Start from an action x* which is assumed to be non-periodic, and apply to it the operator Q, so that the inequalities (11) are satisfied at every step. Since the game contains a finite number of actions, the sequence x*, Qx*, Q²x*, ... must eventually revisit some action x_a, and from then on there exists a finite number n such that Q^n x_a = x_a. Hence the chain of actions satisfying the inequalities (11) looks like

x* → Qx* → Q²x* → ... → x_a → ... → Q^{n-1} x_a → x_a.    (18)

Since x_a recurs after finitely many steps, x_a is periodic. So every finite-action game contains at least one periodic action.
The reasoning we adopted to prove the theorem reveals another property of the set of periodic actions in finite-action games. Recall the definition of set-stable strategies from Bernheim [5]. We modify the definition of set stability as follows: the set A is set stable under the action of the map Q if, for any initial x_0 and any sequence x_k formed by taking x_{k+1} ∈ Q(x_k), there exists a K such that x_K ∈ A. For finite sets, this implies that the sequence generated by the operator Q from any initial x_0 eventually produces an x_K belonging to the set-stable set A.
Obviously, a similar definition as the above holds for the set of actions of player B, but this time for the operator Q ′ : N (B) → N (B).
Theorem 2. Let P(A) and P(B) denote the sets of periodic strategies of players A and B respectively. The sets P(A) and P(B) are set stable under the action of the maps Q and Q′ respectively.
Plainly spoken, the theorem implies that the periodicity chain of any non-periodic action x_0 ends up in the periodicity cycle of some periodic action x_K, that is, Q^k x_0 = x_K ∈ P(A) for some finite k.

Proof. The proof of this theorem is contained in the proof of Theorem 1, so we omit it.
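The set-stability statement can be checked by tracing the Q-orbit of an arbitrary starting action. A minimal sketch (function name ours; non-degenerate payoffs assumed, ties broken toward the first maximizer):

```python
def q_orbit(U1, U2, x0):
    """Trace x0, Q x0, Q^2 x0, ... until the orbit repeats; return the visited
    actions and the terminal cycle (the periodic actions the orbit falls into)."""
    phi1 = lambda i: max(range(len(U1[0])), key=lambda j: U1[i][j])
    phi2 = lambda j: max(range(len(U1)), key=lambda i: U2[i][j])
    Q = lambda i: phi2(phi1(i))
    orbit, cur = [x0], Q(x0)
    while cur not in orbit:
        orbit.append(cur)
        cur = Q(cur)
    return orbit, orbit[orbit.index(cur):]   # tail from the first repeat = the cycle
```

For every starting action x0, the returned cycle consists of periodic actions only, which is exactly the content of Theorem 2 for finite games.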

Nash Strategies
We now turn our focus to the Nash strategies that are at the same time periodic actions. Suppose that the strategy profile (x*, y*) constitutes one of the Nash equilibria of a two-player finite-action simultaneous-move game. Then the actions (x*, y*) are mutually best responses for the two players. In order for a Nash strategy to be a periodic strategy, two conditions must hold, which are contained in the following theorem:

Theorem 3. In a 2-player finite-action, simultaneous, strategic form game, a Nash strategy (x*, y*) is periodic if

ϕ_1(x*) = y*,    ϕ_2(y*) = x*,    (20)

with ϕ_1, ϕ_2 defined in such a way that the inequalities (11), (16) hold true. In addition, the periodicity number of each action is equal to one, that is, n = 1, and

Q x* = x*,    Q′ y* = y*.    (21)

Proof. The proof of Theorem 3 is simple, but we must bear in mind that the maps ϕ_1, ϕ_2 do not in general give the best response sets of the players of a game. Suppose that for the Nash strategy (x*, y*) the relations (20) hold true. Acting on the first with the map ϕ_2 on the left, and on the second with ϕ_1, again on the left, we get the relations

ϕ_2(ϕ_1(x*)) = ϕ_2(y*),    ϕ_1(ϕ_2(y*)) = ϕ_1(x*).    (22)

Using relations (20), the equations (22) become

ϕ_2(ϕ_1(x*)) = x*,    ϕ_1(ϕ_2(y*)) = y*.    (23)

Hence the Nash actions (x*, y*) are periodic. The relations (23) can be cast in terms of the operators Q and Q′ as

Q x* = x*,    Q′ y* = y*.    (24)

It is obvious that the periodicity number of the two actions equals one, namely n = 1.
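The Theorem 3 condition is a simple fixed-point check on the maps ϕ_1, ϕ_2. A minimal sketch (function name ours; non-degenerate payoffs assumed):

```python
# Sketch of the Theorem 3 check: a pure Nash profile (x_star, y_star) is
# periodic with periodicity number n = 1 exactly when phi1(x_star) = y_star
# and phi2(y_star) = x_star, i.e. Q x_star = x_star and Q' y_star = y_star.

def nash_profile_is_periodic(U1, U2, x_star, y_star):
    phi1_x = max(range(len(U1[0])), key=lambda j: U1[x_star][j])  # phi1(x*)
    phi2_y = max(range(len(U1)), key=lambda i: U2[i][y_star])     # phi2(y*)
    return phi1_x == y_star and phi2_y == x_star

# A hypothetical 2x2 coordination game: both pure Nash profiles pass the check.
U1 = [[2, 0], [0, 1]]
U2 = [[2, 0], [0, 1]]
print(nash_profile_is_periodic(U1, U2, 0, 0))   # True
print(nash_profile_is_periodic(U1, U2, 1, 1))   # True
```

The coordination-game payoffs are our own illustration; the point is only that the check needs nothing beyond the two maps.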

Periodic Rationalizable Strategies
The situation when a rationalizable strategy is also periodic is particularly interesting. This can occur when the rationalizability chain is identical to the periodicity cycle, in particular when the rationalizability chains of belief contain actions that satisfy at every step the inequalities (11) and (16). Hence, applying the algorithm to any finite game, we may also identify the possible rationalizable strategies in a quick and simple way. Of course, one should thoroughly study a game in order to find all its rationalizable strategies, but our algorithm indicates which strategies are probably rationalizable. We shall exploit the fact that, for periodic rationalizable strategies, the periodicity cycles coincide with the rationalizability cycles in a later section, where we develop some epistemic reasoning.

Some Examples
In order to support the results presented in the previous sections, we shall work through some characteristic examples of finite strategic form games with two players. All the games are considered to be simultaneous and are played only once.

Games with and without Periodic Nash Equilibria: Four-Choice Two-Player Games
We start first with Game 1A, which is an analog of one of the games Bernheim used in his original rationalizability paper [5]. We shall focus our interest on player's A choices, but similar results hold for player's B actions. Using the algorithm that the inequalities of relation (11) dictate, we can construct the following periodicity cycles, namely: b 4 a 1 0,7 2,5 7,0 0,1 a 2 5,2 3,3 5,2 0,1 a 3 7,0 2,5 0,7 0,1 a 4 0,0 0,-2 0,0 10,-1 It is obvious that the periodicity number is n = 2 for both the actions, a 1 and a 3 . Moreover, for the actions that constitute a Nash equilibrium it is not possible to construct such a cycle. Nevertheless, if we apply the algorithm (11), we obtain the following cycle: It is obvious that the cycle of the non-periodic Nash action a 2 ends up to the periodic cycle of the periodic action a 1 . This is the materialization of the Theorem 5, which states that the set of periodic actions is set stable under the operator Q. Now we focus our study to the rationalizability cycles. The actions a 1 and a 3 are both rationalizable. These two actions are both rationalizable and periodic, and moreover, the rationaliz- b 4 a 1 0,7 2,5 7,0 0,1 a 2 5,2 7,7 5,2 0,1 a 3 7,0 2,5 0,7 0,1 a 4 0,0 0,-2 0,0 10,-1 Table 3: Game 1B ability cycles for these two, coincide with the periodicity cycles. By rationalizability cycle is meant a cycle based on rationality and by rationality is meant acting optimally under some beliefs about the opponents actions. Indeed, such a cycle exists and it looks like: The reasoning behind this cycle is based on this system of beliefs: Player A considers action a 1 rational if he believes that player B will play b 3 , which is rational for player B if he believes that player A will play a 3 . Accordingly, A will consider playing a 3 rational if he believes that player B will play b 1 , which would be rational for player B if he believes that player A will play a 1 . 
Therefore, we obtain a cycle of rationalizability based on pure utility-maximization rationality. For the Nash action a 2 it is not easy to construct such a cycle, because A is forced to play a 2 , since B would never play b 1 or b 3 as a best response to a 2 . So the Nash strategy is "forced" to be rationalizable. In this game, the non-Nash rationalizable actions are periodic actions, which are in fact the only periodic strategies, and moreover the rationality cycles and periodicity cycles coincide. We now slightly modify Game 1A and construct Game 1B (Table 3); the difference is that the Nash equilibrium payoffs are changed. In this case, the periodicity cycles of the actions a 1 and a 3 remain intact, but the Nash action a 2 is now also periodic, with periodicity cycle

a 2 → b 2 → a 2 .

Again, the periodicity and rationalizability cycles for the Nash action a 2 coincide.
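These cycles can be checked mechanically. The sketch below is our own Python encoding (indices 0-3 stand for a 1 -a 4 and b 1 -b 4 ) of our reading of the maps in relation (11): phi_AB returns the B-action maximizing A's payoff against a given A-action, phi_BA the A-action maximizing B's payoff against a given B-action, and Q their composition; the helper names are ours, not the paper's.

```python
# Game 1A payoffs, split into the two players' utility matrices.
UA = [[0, 2, 7, 0],
      [5, 3, 5, 0],
      [7, 2, 0, 0],
      [0, 0, 0, 10]]
UB = [[7, 5, 0, 1],
      [2, 3, 2, 1],
      [0, 5, 7, 1],
      [0, -2, 0, -1]]

def phi_AB(x):
    """B's action that maximizes A's payoff when A plays x."""
    return max(range(4), key=lambda y: UA[x][y])

def phi_BA(y):
    """A's action that maximizes B's payoff when B plays y."""
    return max(range(4), key=lambda x: UB[x][y])

def Q(x):
    return phi_BA(phi_AB(x))

def is_periodic(x, n_max=10):
    """Return the periodicity number n if Q^n(x) == x, else 0."""
    z = x
    for n in range(1, n_max + 1):
        z = Q(z)
        if z == x:
            return n
    return 0

print([is_periodic(x) for x in range(4)])   # -> [2, 0, 2, 0]
```

As the output shows, only a 1 and a 3 are periodic (both with n = 2), while iterating Q from the Nash action a 2 falls into the a 1 cycle, in line with Theorem 5.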

2 × 2 Games
Now we focus our interest on 2 × 2 simultaneous strategic form games. Consider first Game 2. We examine which of the actions are periodic, which are rationalizable, and which are both. First of all, the Nash equilibrium consists of the actions (a 2 , b 2 ). Following the reasoning of relation (11), we can construct a periodicity cycle for every action; moreover, all the periodicity numbers are equal to one in this particular game. Note that the actions that enter the Nash equilibrium are also periodic. However, the action a 1 is strictly dominated by the action a 2 in all cases, so it is not rationalizable, and we can never construct a cycle based on rationality arguments for this action: player A would never consider the action a 1 to be a rational move, because it is never a best response. Nevertheless, we can construct a cycle based on rationality arguments for the action a 2 . Indeed, player A would consider a 2 to be a rational move if he believed that player B would play b 2 , which would be rational for player B if he believes that player A plays a 2 . According to this line of reasoning we can construct the rationalizability cycles, with the superscript R over the arrows expressing the rationalizability arguments we have just presented. In this particular game, the set of periodic actions for player A consists of both actions a 1 and a 2 , that is P(A) = {a 1 , a 2 }, while the set of rationalizable actions that are not Nash actions is empty. The set of Nash actions consists of the action {a 2 }. This particular example is one where the Nash equilibrium happens to be periodic. In addition, this game is very useful for economic applications: the iterated elimination of dominated strategies results in (a 2 , b 2 ), which is the Nash equilibrium.
This class of games describes competition between two firms that choose the quantities they produce, knowing that the total quantity put on the market determines the price [31]. It is very interesting that the periodic Nash equilibrium in the above game is the only strategy profile that remains after the iterated elimination of dominated strategies.
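Since the payoff table of Game 2 is not reproduced in the text, the sketch below uses hypothetical payoffs with the same structure described above: a 1 strictly dominated by a 2 , b 1 by b 2 , and the Nash equilibrium at (a 2 , b 2 ). It shows iterated elimination of strictly dominated strategies isolating the periodic Nash equilibrium.

```python
# Hypothetical 2x2 payoffs mimicking Game 2's structure (the original table
# is not reproduced in the text): a1 strictly dominated by a2, b1 by b2.
UA = [[1, 0], [3, 4]]   # row player A
UB = [[1, 3], [0, 4]]   # column player B

def iterated_elimination(UA, UB):
    """Remove strictly dominated rows/columns until none remain."""
    rows, cols = [0, 1], [0, 1]
    changed = True
    while changed:
        changed = False
        for r in rows[:]:
            if any(all(UA[r2][c] > UA[r][c] for c in cols)
                   for r2 in rows if r2 != r):
                rows.remove(r); changed = True
        for c in cols[:]:
            if any(all(UB[r][c2] > UB[r][c] for r in rows)
                   for c2 in cols if c2 != c):
                cols.remove(c); changed = True
    return rows, cols

print(iterated_elimination(UA, UB))   # -> ([1], [1]): only (a2, b2) survives
```

With these illustrative payoffs, the surviving profile is exactly the Nash equilibrium (a 2 , b 2 ), as in the discussion above.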

Some Very Well Studied Examples
Before closing this section, we study the periodicity properties of the players' available actions in the context of some very well known games, namely the prisoner's dilemma, the battle of the sexes and the matching pennies game.

Let us start with the prisoner's dilemma. In the general case this is Game 3 (Table 5: Game 3, Prisoner's Dilemma), with the numbers a, b, c, d satisfying a < b < c < d. It is easy to prove that the action a 1 is rationalizable but not periodic, while the action a 2 is periodic, so the strategy (a 2 , b 2 ) contains periodic actions. The periodicity cycle of a 2 has n = 2 in this case.

Let us continue with the battle of the sexes game, which can take the form of Game 4:

        b 1    b 2
a 1    2,1    0,0
a 2    0,0    1,2

Table 6: Game 4, Battle of Sexes

In this game there are two Nash equilibria, namely (a 1 , b 1 ) and (a 2 , b 2 ), and both actions a 1 and a 2 are periodic and rationalizable. There are no non-Nash strategies that are rationalizable. In this game we always have n = 1, as can easily be checked.

Finally, let us present the matching pennies game, Game 5, in its standard form:

        b 1     b 2
a 1    1,-1   -1,1
a 2   -1,1    1,-1

Table 7: Game 5, The Matching Pennies Game

It is an easy task to verify that n = 2 in this game and that the actions a 1 , a 2 are both periodic and rationalizable; that is, we can construct the cycles

a 1 → b 1 → a 2 → b 2 → a 1 ,    a 2 → b 2 → a 1 → b 1 → a 2 .

Therefore, all the actions of player A are periodic (the same holds for player B's actions).
Obviously, both actions are rationalizable. This game is of particular importance, since there is no pure strategy Nash equilibrium. Hence, it provides a compelling motivation to generalize the algorithm of relation (11), which applies to pure strategies, to the case where mixed strategies are employed by the two players. This will be the subject of a later section.
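The matching pennies claims above can be verified in a few lines; this sketch (our own encoding, with player A as the matcher) checks both that every action is periodic with n = 2 and that no pure strategy Nash equilibrium exists.

```python
# Matching pennies (Game 5): player A wins on a match, player B on a mismatch.
UA = [[1, -1], [-1, 1]]
UB = [[-1, 1], [1, -1]]

phi_AB = lambda x: max((0, 1), key=lambda y: UA[x][y])  # B-action maximizing A's payoff
phi_BA = lambda y: max((0, 1), key=lambda x: UB[x][y])  # A-action maximizing B's payoff
Q = lambda x: phi_BA(phi_AB(x))

# both actions of A are periodic with periodicity number n = 2
for x in (0, 1):
    assert Q(x) != x and Q(Q(x)) == x

# (x, y) is a pure Nash equilibrium iff each action is a best response
nash = [(x, y) for x in (0, 1) for y in (0, 1)
        if UA[x][y] == max(UA[0][y], UA[1][y])
        and UB[x][y] == max(UB[x][0], UB[x][1])]
print(nash)   # -> []
```

The empty list confirms the absence of a pure strategy Nash equilibrium, while every action still sits on a periodicity cycle.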

Perfect-Information Extensive-Form Games
Before closing this section, let us briefly comment on extensive form games and periodicity. Since every perfect information extensive form game has a strategic form representation, all the preceding results apply to extensive form games as well. The difference is that the strategic form representative of an extensive form game has many degeneracies, so there may be many periodicity cycles corresponding to a specific action. We omit this study for brevity.

Generalization to Multiplayer Games-Perfect Information Case
Having demonstrated how periodicity works in the case of two-player pure strategy games, in this section we generalize the concept of periodicity to multiplayer strategic form games with perfect information. Incomplete information games are studied in the next section, where we specialize to Bayesian games. We start off with a concrete example, and using it we will generalize periodicity to multiplayer games. Consider a three-player game with the following characteristics: we define six continuous maps, which we denote ϕ ij and ϕ ji , between the strategy spaces M(i) and M(j). The maps ϕ ij and ϕ ji act in such a way that, when we start with an action x k of player i, the corresponding maximization inequality holds true at every step. Before going to the definition of periodicity in the general case of multiplayer strategic form games, let us exploit an example in order to reveal the new features that the multiplayer framework brings about. Consider the game appearing in Fig. 1 (Figure 1: A 3-player game payoff matrix), a typical 3-player game with, for simplicity, each player having two available actions. In Fig. 2 (Figure 2: Periodicity of strategy a 1 for the 3-player game of Fig. 1) we can see the periodicity chains for the action a 1 of player A, always having in mind the periodicity concept we gave in the 2-player case. Let us give a verbal description of the periodicity diagram. On the arrows there appear the letters A, B, C, indicating which player's action is considered. Player A will play a 1 if player B plays b 2 and player C plays c 2 simultaneously. Below (b 2 , c 2 ) there appears the maps version of the actions (b 2 , c 2 ), namely ϕ AB (a 1 ), ϕ AC (a 1 ). In addition, ϕ ij ϕ km on the graph indicates the composition ϕ ij • ϕ km .
By following the B arrow, B will play b 2 if player A plays a 2 and player C plays c 2 . Following C at node "1", C would play c 2 if player B plays b 2 and A plays a 1 (we have reached a periodic cycle at this point, but we continue in order to reveal all the new structures). Back at node 2, following the C arrow, C will play c 2 if B plays b 2 and A plays a 1 . Back at node 2, following the arrow A, A will play a 2 if B plays b 2 and C plays c 2 . Accordingly, at node 3, following arrow B, B will play b 2 if A plays a 2 and C plays c 2 (we have reached a set stable cycle of a 2 , as we will see), and so on. Back at node 3, following A, A will play a 1 if B plays b 2 and C plays c 2 , and so on ad infinitum. So we may conclude that there are three new types of periodicity cycles, quantified in terms of the maps ϕ ij . The most striking new feature that the multi-player case brings along is the fact that in the periodicity algorithm the utility functions appear in a rather different order, as we shall see. Let us take the first periodicity type, that is, ϕ CA ϕ AC (a 1 ) = a 1 ; the periodicity algorithm in terms of the utility functions is constructed step by step, and we stop where the first periodic "point" appears. Let us examine the other periodic combination of maps, that is, ϕ CA ϕ BC ϕ AB (a 1 ) = a 1 .
In terms of utility functions, the periodicity algorithm is quantified in an analogous way. We refrain from going into further details for the sake of brevity, and restrict ourselves to mentioning the algorithms that we can form in terms of utility functions. In some of the cycles above there appear set stable cycles, which we nevertheless included. So, if we include all the periodic points we found in the graph, we have several new types of periodicity, some of which belong to set stable cycles, and these must be clearly indicated by finding all the periodic strategies. The periodicity corresponding to the a 2 action can be checked accordingly, and it can be seen in Fig. 3 (Figure 3: Periodicity of strategy a 2 for the 3-player game of Fig. 1). We have checked this in various non-trivial 3-player strategic form games, and we may come to the conclusion that there are various types of periodicity, with their number, type and form not depending directly (at least in a canonical and obvious way) on the number of players or the number of actions. As the number of players increases, the complexity of the periodic strategies significantly increases, depending on the payoffs; as will become obvious, the complexity of the algorithm depends strongly on the payoffs. Now we generalize this type of games and proceed to a 4-player game, again with each player having two available actions for simplicity. In Fig. 4 (Figure 4: A 4-player game payoff matrix) we depict the 4-player, two-action game along with the corresponding payoffs. In Fig. 5 we have sketched the periodicity diagram corresponding to the a 1 action, and in Fig. 6 we have presented the corresponding periodicity diagram for the action a 2 . We focus on strategy a 1 , although the argument holds true for any other action in a similar way.
To start, we quote the periodic strategies which we can easily extract from Fig. 5; these are given in relations (41). It is worth analyzing in detail the periodicity algorithms corresponding to each of the relations (41). Counting from top to bottom, the first line corresponds to type 1 periodicity, the second line to type 2 periodicity, and so on.

Type 2
The periodicity of type two goes like this:

Type 3
In addition, the periodicity of type three goes like this: Finally, the periodicity of type four goes like this: Before proceeding to the formal definitions and theorems of the general multiplayer games and periodicity, we shall verbally describe the above periodic maps. To start, consider the game appearing in Fig. 5. Player A would play a 1 if players D, B and C play d 2 , b 2 and c 2 respectively. Following arrow C, player C would play c 2 if players A, B and D simultaneously play a 2 , b 1 and d 2 respectively. Following arrow B at node 2, player B would play b 1 if players A, C and D play a 2 , c 2 and d 1 respectively. Following arrow A at node 2, player A would play a 2 if players B, C and D play b 2 , c 2 and d 2 respectively. Following arrow D at node 4, player D would play d 2 if players A, B and C play a 1 , b 2 and c 1 respectively. At this point we have reached the first periodic point. Following arrow B at node 4, player B would play b 2 if players A, C and D play a 2 , c 2 and d 2 respectively. Following arrow C at node 4, player C would play c 2 if players A, B and D play a 2 , b 1 and d 2 respectively. Going back to node 2, following arrow D, player D would play d 2 if players A, B and C play a 1 , b 2 and c 1 respectively. Going back to node 1, following arrow B at node 1, player B would play b 2 if players A, C and D play a 2 , c 2 and d 1 respectively. Following arrow A at node 4, player A would play a 2 if players B, C and D play b 2 , c 2 and d 2 respectively. Following arrow B at node 5, player B would play b 2 if players A, C and D play a 2 , c 2 and d 2 respectively. Player D would then play d 2 if players A, B and C play a 1 , b 2 and c 1 respectively. Following arrow D at node 5, player D would play d 2 if players A, B and C play a 1 , b 2 and c 1 respectively. Following arrow C at node 5, player C would play c 2 if players A, B and D play a 2 , b 1 and d 2 respectively.
Finally, D would play d 2 if players A, B and C play a 1 , b 2 and c 1 respectively.

Figure 6: Periodicity of strategy a 1 for the 4-player game of Fig. 4
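The composed-map checks used in the walkthroughs above can be sketched in code. Since the payoff tensors of Figs. 1 and 4 are not reproduced in the text, the 3-player tensors below are purely hypothetical, chosen so that both compositions ϕ CA • ϕ AC and ϕ CA • ϕ BC • ϕ AB close on a 1 , as in the diagrams described for the 3-player game; all names are ours.

```python
from itertools import product

# Hypothetical 3-player, 2-action payoff tensors (illustrative only).
# U_X[a][b][c] is player X's payoff when A, B, C play actions a, b, c.
U_A = [[[0, 1], [2, 5]], [[1, 0], [3, 4]]]
U_B = [[[1, 0], [1, 2]], [[2, 0], [3, 6]]]
U_C = [[[1, 0], [0, 7]], [[0, 1], [2, 2]]]

def best_profile_A(a):
    """(phi_AB(a), phi_AC(a)): opponents' profile maximizing A's payoff."""
    return max(product((0, 1), (0, 1)), key=lambda bc: U_A[a][bc[0]][bc[1]])

def best_profile_B(b):
    """(phi_BA(b), phi_BC(b)): opponents' profile maximizing B's payoff."""
    return max(product((0, 1), (0, 1)), key=lambda ac: U_B[ac[0]][b][ac[1]])

def best_profile_C(c):
    """(phi_CA(c), phi_CB(c)): opponents' profile maximizing C's payoff."""
    return max(product((0, 1), (0, 1)), key=lambda ab: U_C[ab[0]][ab[1]][c])

a1 = 0
b_star, c_star = best_profile_A(a1)      # phi_AB(a1), phi_AC(a1)

# first periodicity type: phi_CA(phi_AC(a1)) == a1
a_back, _ = best_profile_C(c_star)
print(a_back == a1)                      # -> True

# second type: phi_CA(phi_BC(phi_AB(a1))) == a1
_, c_via_B = best_profile_B(b_star)
a_back2, _ = best_profile_C(c_via_B)
print(a_back2 == a1)                     # -> True
```

The same enumeration extends directly to the 4-player case by adding one index to the tensors and one component to the returned profiles.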

Generalization of the Periodicity Concept
Having exemplified the generalization of periodicity in the multiplayer case using simple examples, at this point we generalize the concept of periodicity to multiplayer, simultaneous, perfect information strategic form games and justify and support the results with some formal definitions. Consider a finite player, finite action, perfect information, simultaneous strategic form game. We define 2N continuous maps, which we denote ϕ ij and ϕ ji , between the strategy spaces M(i) and M(j). The maps act in such a way that, when starting with an action x i of player i, the inequalities of relation (48) hold true at every step. We call the action x i periodic if, at some step of the periodicity algorithm dictated by these inequalities, the initial action x i reappears. Let us explain the meaning of each step of the algorithm. Start with the first step: when player i plays x i , his payoff is maximized when his opponents simultaneously play a combination of actions, namely (ϕ ij (x i ), ϕ ik (x i ), ..., ϕ il (x i )), and so on. This procedure is repeated at every subsequent step of the algorithm.
Definition 3 (Periodicity). In an N-player simultaneous move strategic form game with finite actions, we define the periodic strategies of player A to be a subset of his available strategies M(A), which we denote P(A), such that there exists an operator Q, constructed as a composition of the maps ϕ ij , for which Q n x = x for some positive integer n, for every x ∈ P(A). It is presupposed that the operator consists of maps that act in such a way that the inequalities of relation (48) are fulfilled.
Periodic strategies are structures inherent in every non-trivial finite action N-player strategic form game. Indeed, the following theorem states exactly that: Theorem 4. Every finite action simultaneous N-player strategic form game contains at least one periodic action.
Proof. The proof of the theorem is straightforward, since the inequalities (48) hold true. We focus on player i, but the result holds for the other players too. Let us start from an action x * which is assumed to be non-periodic. If we apply to x * the maps ϕ ij , so that the inequalities (48) are satisfied at every step, then, since the game contains a finite number of actions, say n, there will be an action x a and an operator Q, constructed from a finite number of maps, such that Q x a = x a . Since the game contains only finitely many actions, such an action x a must occur, and it is by definition periodic. So every finite action game contains at least one periodic action. Before ending the proof it is worth giving a more rigorous and detailed argument. Suppose we start with the action x i of player i, which is supposed to be non-periodic, and run the algorithm for player i. The algorithm will continue for some player k, and after this step it will continue with some of the actions ϕ ki ϕ ik • ... • ϕ kl ϕ ik , if none of the actions is repeated. Suppose the algorithm continues and at some point arrives at a player m. Continuing the algorithm, since the game has a finite number of players and a finite number of actions, some action of some player will eventually reappear inside some player's utility function, and this proves the theorem.
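The pigeonhole argument just given can be illustrated for two players (the N-player case is analogous): iterating Q from any starting action must revisit some action after at most n steps, and every action on the resulting cycle is periodic. The sketch below, with our own helper names, checks this on random non-degenerate payoffs.

```python
import random

def find_periodic(UA, UB):
    """Iterate Q = phi_BA . phi_AB until an action repeats; return it."""
    n_rows, n_cols = len(UA), len(UA[0])
    phi_AB = lambda x: max(range(n_cols), key=lambda y: UA[x][y])
    phi_BA = lambda y: max(range(n_rows), key=lambda x: UB[x][y])
    Q = lambda x: phi_BA(phi_AB(x))
    seen, x = [], 0
    while x not in seen:          # at most n_rows steps before a repeat
        seen.append(x)
        x = Q(x)
    return x                      # the first revisited action lies on a cycle

random.seed(1)
for _ in range(100):
    UA = [[random.random() for _ in range(3)] for _ in range(4)]
    UB = [[random.random() for _ in range(3)] for _ in range(4)]
    p = find_periodic(UA, UB)
    # verify: iterating Q from p returns to p within n_rows steps
    phi_AB = lambda x: max(range(3), key=lambda y: UA[x][y])
    phi_BA = lambda y: max(range(4), key=lambda x: UB[x][y])
    Q = lambda x: phi_BA(phi_AB(x))
    z, steps = Q(p), 1
    while z != p:
        z, steps = Q(z), steps + 1
    assert steps <= 4
print("ok")
```

With continuous random payoffs, ties (the degeneracies excluded by the non-triviality assumption) occur with probability zero, so a periodic action is found in every trial.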
The reasoning we adopted in order to prove the theorem reveals another property of the set of periodic actions in finite multiplayer simultaneous strategic form games. Recall the definition of set stable strategies from Bernheim [5]. We modify this definition of set stability as follows: the set A is set stable under the action of the map Q if, for any initial x 0 ∈ A ∪ B and any sequence x k formed by taking x k+1 = Q x k , some element x K of the sequence belongs to A. For finite sets, this implies that the sequence formed by the action of the operator Q on any initial x 0 eventually produces an element x k belonging to the set stable set A.
Theorem 5. Let P(i) denote the set of periodic strategies for player i. The set P(i) is set stable, under the action of the maps ϕ ij .
Plainly speaking, the theorem implies that the periodicity diagram of any non-periodic action x 0 ends up in the periodicity cycle of some periodic action x K .
Proof. The proof of this theorem is contained in the proof of Theorem 4, so we omit it.

Noticeable new features-Remarks
The differences between multi-player periodicity and the 2-player case are few; in fact there is exactly one noticeable difference. In the two-player case the chain of utility functions simply alternates between U A and U B , and the periodicity occurs for U A if, for example, we start with a periodic action of player A. In the multiplayer case, although we may start with an action x i of player i and with the utility U i , the periodicity might occur at the utility function of another player, say U m . Let us further explain this version of periodicity. At the ending point of the algorithm, player m will play one of his actions when his opponents play some actions, one of which is x i , corresponding to player i. However, this does not exclude the possibility that we end up at the utility function of player i again. One example of this kind is the Type 2 periodicity of player C for the three-player game we studied previously in this section, or the periodicity of a 2 in the same game. Having studied the perfect information case, we now generalize our framework to include non-perfect information games.

Non Perfect Information Games-Bayesian Games
In this section we address the issue of periodicity in the case of finite games with incomplete information. Our analysis of incomplete information games is based mainly on references [9, 10, 27-30, 36-42] and references therein. To Bayesian strategic form games, and more generally to strategic form games with incomplete information, we can always associate some complete information strategic form games. Specifically, the corresponding strategic form games are called the ex-ante and interim strategic form games. Exploiting these two, we will define and study the ex-ante and interim rationalizable strategies and, through them, periodicity in the case of non-perfect information games. Interim rationalizability has two versions, interim independent and interim correlated rationalizability. Both can be found by constructing the interim independent and interim correlated strategic form game from the initial Bayesian game. Since Bayesian games can be represented in terms of strategic form games, all the periodicity concepts that we developed in the 2-player and multi-player cases hold true. We shall present the case with two players and two actions for each player, in order to simplify things; the findings can easily be generalized to the multi-player case. Interestingly enough, when the Bayesian game initially has two players, the interim independent strategic form game corresponds to a three-player game. Let us start with the ex-ante game, and then continue with the rest of the Bayesian games.
A Bayesian game is a list (N, A, Θ, T, u, p), with:

• N, the set of players
• A = (A i ) i∈N , the set of action profiles, with generic member a = (a i ) i∈N
• Θ, the set of all possible parameters θ i (in our case usually two different matrices for one of the two players)
• T = (T i ) i∈N , the set of types, with generic member t = (t i ) i∈N
• u i : Θ × A → R, the payoff function of player i

Each player i knows his own type t i but does not necessarily know θ, or the other players' types, about which he has a belief p i (· | t i ). The game is defined in terms of the players' interim beliefs p i (· | t i ), which they obtain after they observe their own type but before taking their action. The game can also be defined by ex-ante beliefs p i ∈ ∆(Θ × T ) for some belief p i . The game has a common prior if there exists π ∈ ∆(Θ × T ) such that each interim belief p i (· | t i ) is obtained from π by conditioning on t i . In that case, the game is denoted (N, A, Θ, u, π). When modelling incomplete information, there is often no ex-ante stage or an explicit information structure in which players observe values of some signals.
In the modelling stage, each player i has the following hierarchical belief system:

• some belief τ 1 i ∈ ∆(Θ) about the payoffs (and the other aspects of the physical world), often referred to as the first-order belief of i
• some belief τ 2 i ∈ ∆(Θ × ∆(Θ)) about the payoffs and the other players' first-order beliefs (θ, τ 1 −i )
• some belief τ 3 i about the payoffs and the other players' first- and second-order beliefs (θ, τ 1 −i , τ 2 −i ), and so on

In the Harsanyi type space formalism [27-29], the infinite belief hierarchies are modelled using a type space (Θ, T, p) together with a type t i ∈ T i : given a type t i and a type space (Θ, T, p), one can compute the first-order belief of type t i , from it the second-order belief, and so on, so that a type space (Θ, T, p) and a type t i ∈ T i model a belief hierarchy (τ 1 i , τ 2 i , ...). Given any Bayesian game (N, A, Θ, u, π) with common prior π, one can define the ex-ante game, which we denote G ex = (N, S, U ), where S i = A i^{T i} , the set of maps from T i to A i , for each i ∈ N . For any Bayesian game (N, A, Θ, T, u, p) one can also define the interim game, which we denote G int = (N̂, Ŝ, Û), where N̂ = ∪ i∈N T i and Ŝ t i = A i for each t i ∈ N̂.

Ex-ante game and Ex-ante Rationalizability
Given any Bayesian game (N, A, Θ, T, u, p) and any player i ∈ N , a strategy s i : T i → A i is said to be ex-ante rationalizable iff s i is rationalizable in the corresponding ex-ante strategic form game G ex [9,10,26]. Ex-ante rationalizability makes sense if there is an ex-ante stage in the game. In that case, ex-ante rationalizability captures precisely the implications of common knowledge of rationality as perceived in the ex-ante planning stage of the game [9,10]. It does, however, impose unnecessary restrictions on players' beliefs from an interim perspective. Let us look at the following example [9,10,26]. Consider a Bayesian game in which player A has two types, corresponding to two different payoff matrices, while player B has only one payoff table and one type. To every Bayesian game corresponds an ex-ante perfect information strategic form game, and the actions that are rationalizable in the ex-ante strategic form game are called ex-ante rationalizable actions. The rationalizable strategy profile in the case at hand is S ∞ (G ex ) = (DU, R).
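The construction of the ex-ante game can be sketched as follows. Since the payoff tables of this example are not reproduced in the text, the state-contingent payoffs and the uniform prior below are purely illustrative; the action labels U, D, L, R follow the profile (DU, R) quoted above, and all helper names are ours.

```python
from itertools import product

A1, A2 = ["U", "D"], ["L", "R"]
# hypothetical state-contingent payoffs: u_theta[type][(a1, a2)] = (U_A, U_B)
u_theta = {
    "t":  {("U", "L"): (1, 0), ("U", "R"): (0, 1),
           ("D", "L"): (0, 0), ("D", "R"): (2, 1)},
    "t'": {("U", "L"): (0, 1), ("U", "R"): (2, 0),
           ("D", "L"): (1, 0), ("D", "R"): (0, 1)},
}
prior = {"t": 0.5, "t'": 0.5}

# A's ex-ante strategies are maps T -> A1, encoded as (action at t, action at t')
S1 = list(product(A1, repeat=2))

def ex_ante_payoffs(s1, a2):
    """Expected payoffs of the ex-ante game G_ex, averaging over the prior."""
    ua = sum(prior[t] * u_theta[t][(s1[i], a2)][0]
             for i, t in enumerate(["t", "t'"]))
    ub = sum(prior[t] * u_theta[t][(s1[i], a2)][1]
             for i, t in enumerate(["t", "t'"]))
    return ua, ub

for s1 in S1:
    print("".join(s1), [ex_ante_payoffs(s1, a2) for a2 in A2])
```

The printed 4 × 2 table is the ex-ante strategic form representation; rationalizability and periodicity are then analyzed on it exactly as for any perfect information strategic form game.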
The periodicity cycle of this strategy is constructed as in the perfect information case. In addition, we can see that the theorem which relates types to the periodicity number holds true, since the types needed to describe this periodicity cycle are indeed two in number.
In this case, the types are the ones that correspond to the perfect information ex-ante strategic form game, so these are seen in a perfect information perspective. Of course all the theorems holding true for finite simultaneous strategic form games, hold also true for Bayesian games since the latter are equivalent to perfect information strategic form games. We now proceed to interim rationalizability related periodic equilibria.

Interim Rationalizability
There exists a disagreement about the relevant notion of interim rationalizability in incomplete information games. One straightforward notion of interim rationalizability is to apply rationalizability to the interim game G int . An embedded assumption of the interim game is that it is common knowledge that the belief of a player i about (θ, t −i ), which is given by p i (· | t i ), is independent of his belief about the other players' actions.
Particularly, his belief about (θ, t −i , a −i ) is derived from some belief of this independent product form. This is because we have taken the expectation with respect to p i (· | t i ) in defining the interim game G int , before considering his beliefs about the other players' actions. Because of this independence assumption, this notion of rationalizability is called interim independent rationalizability. Through interim rationalizability we will make contact with the periodicity concept in this case too.

Interim Independent Rationalizability
Given any Bayesian game B = (N, A, Θ, T, u, p) and any type t i of player i ∈ N , an action a i ∈ A i is said to be interim independent rationalizable for t i iff a i is rationalizable for t i in the interim game G int . Interim independent rationalizability is the most complex type of rationalizability among all the rationalizability types for Bayesian games. Consider the Bayesian game we used in the previous example of the ex-ante game. The corresponding interim independent game is actually a 3-player game with player-type set N = (t 1 , t ′ 1 , t 2 ): player t 1 chooses the rows, player t 2 the columns, and type t ′ 1 the matrices. All actions are rationalizable, as can easily be checked. Let us see the periodicity graphs for the above game. Take for example U; the corresponding periodicity graph appears in Fig. 7 (Figure 7: Periodicity for a Bayesian game). This example is somewhat degenerate, but the periodicity study is identical to the study of periodicity in a 3-player strategic form game. This also shows indirectly that, using the interim rationalizability strategies, we relate the non-perfect information game to a multiplayer, perfect information, simultaneous strategic form game, and therefore all the periodicity theorems hold true in this case too. We proceed in the same fashion and relate periodicity to the interim correlated rationalizability concept.

Interim Correlated Rationalizability
Consider a Bayesian game B = (N, A, Θ, T, u, p). Interim correlated rationalizability [9,10] allows more beliefs than interim independent rationalizability; it is a weaker concept than the latter. When all types have positive probability, ex-ante rationalizability is stronger than the two interim rationalizabilities, so all ex-ante rationalizable actions are interim independent rationalizable, and all interim independent rationalizable actions are interim correlated rationalizable; the converse is not true [9,10]. Interim correlated rationalizability captures the implications of common knowledge of rationality precisely [9,10]. In addition, interim independent rationalizability depends on the way the hierarchies are modelled, in that there can be multiple representations of the same hierarchy with distinct sets of interim independent rationalizable actions. Moreover, one cannot obtain any extra robust prediction by refining interim correlated rationalizability: any prediction that does not follow from interim correlated rationalizability alone relies on assumptions about the infinite hierarchy of beliefs, and a researcher cannot verify such a prediction in the modelling stage without knowledge of the infinite hierarchy of beliefs. Now, the interim correlated rationalizable actions are the ones that are rationalizable in the interim correlated game. Let us see how this game is found, by using a Bayesian game [9,10]. Take Θ = (−1, 1), N = (1, 2), the type space T = (t 1 , t 2 ), and the common prior p(θ = 1, t) = p(θ = −1, t) = 1/2, with the payoff matrices for the two states given by:

θ = 1:
         b 1        b 2       b 3
a 1     1,1      -10,-10    -10,0
a 2   -10,-10      1,1      -10,0
a 3    0,-10      0,-10      0,0

θ = −1:
         b 1        b 2       b 3
a 1   -10,-10      1,1      -10,0
a 2     1,1      -10,-10    -10,0
a 3    0,-10      0,-10      0,0
The interim game is the following complete information game:

           b 1           b 2         b 3
a 1   −9/2,−9/2     −9/2,−9/2     -10,0
a 2   −9/2,−9/2     −9/2,−9/2     -10,0
a 3     0,-10         0,-10        0,0

Table 9: The interim game

It is easy to show that even in this Bayesian framework we can find a periodic action, specifically in the interim reduced game. Thereby, we have indirectly demonstrated that, by using the various imperfect information rationalizability concepts, we can relate periodicity to Bayesian games in general. Therefore we may formalize the periodicity concept in Bayesian games.
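The interim payoffs above can be reproduced by averaging the two state-contingent matrices with the common prior; a minimal sketch:

```python
# U[theta][i][j] = (payoff of player 1, payoff of player 2)
U = {
    1:  [[(1, 1), (-10, -10), (-10, 0)],
         [(-10, -10), (1, 1), (-10, 0)],
         [(0, -10), (0, -10), (0, 0)]],
    -1: [[(-10, -10), (1, 1), (-10, 0)],
         [(1, 1), (-10, -10), (-10, 0)],
         [(0, -10), (0, -10), (0, 0)]],
}
p = {1: 0.5, -1: 0.5}   # common prior over theta

# average each cell over theta to obtain the interim game of Table 9
interim = [[tuple(sum(p[t] * U[t][i][j][k] for t in (1, -1)) for k in (0, 1))
            for j in range(3)] for i in range(3)]
print(interim[0][0])   # -> (-4.5, -4.5), i.e. (-9/2, -9/2)
```

Averaging the (1, 1) and (−10, −10) entries with weights 1/2 produces exactly the (−9/2, −9/2) cells of Table 9, while the a 3 row and b 3 column, which do not depend on θ, are unchanged.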

Periodicity and Bayesian Games
We can easily understand that since every Bayesian game corresponds to some perfect information, finite player, finite action, strategic form game, the following theorem holds true: Theorem 6. Every finite action simultaneous N-player Bayesian strategic form game contains at least one periodic action.
Proof. The proof is relatively easy: to every finite player, finite action Bayesian strategic form game there correspond the interim and ex-ante games, which are themselves finite action, finite player games. Since every finite action, finite player strategic form game has at least one periodic action, it follows that every finite action, finite player Bayesian strategic form game has at least one periodic strategy.
Moreover, all the arguments that hold true for perfect information games hold true for the ex-ante and interim representations of a strategic form game, so we can generalize these arguments to Bayesian games. For the ex-ante and interim correlated representations of Bayesian games, the following theorem holds true: in the perfect information ex-ante and interim correlated representations of a two-player Bayesian strategic form game, the number of types N t i corresponding to the periodicity cycle of an ex-ante or interim correlated rationalizable periodic action with periodicity number n is N t i = 2n. The types are those corresponding to the perfect information representations of the Bayesian game, and not those corresponding to the incomplete information game.
Proof. We shall call rationalizable strategies those which are rationalizable in the corresponding ex-ante or interim correlated strategic form game, without specifying to which we refer [9,10]; the results hold for each case respectively. Having this in mind, for every such action, if the periodicity number is n, it is possible to construct a periodicity chain with exactly 2n rationalizable actions appearing in that chain. Therefore, what is necessary to prove is that for each action appearing in the rationalizability chain there exists at least one type, so that the minimum number of types corresponding to all the actions of the rationalizability chain is 2n. As is proved in [26], in a static game with finitely many choices for every player, it is always possible to construct an epistemic model in which:

• every type expresses common belief in rationality
• every type assigns, for every opponent, probability 1 to one specific choice and one specific type for that opponent
Thereby, for two-player games, each type of player A, for example, assigns probability 1 to one of his opponent's actions and to one specific type for that action, such that the action is optimal for his opponent. In addition, in two-player games, rationalizable actions and choices that can be made under common belief in rationality are exactly the same object. Hence, we can associate to every rationalizable action of player A exactly one type, which in turn assigns probability 1 to one specific rationalizable action and one specific type of his opponent. Moreover, as is explicitly proved in [26], the choices that can rationally be made under common belief in rationality are rationalizable. To state this more formally, in a static game with finitely many actions for every player, the choices that can rationally be made under common belief in rationality are exactly those choices that survive iterated elimination of strictly dominated strategies. Hence, for two-player games, we conclude that strategies which express common belief in rationality and rationalizable strategies coincide. This is owing to the fact that all beliefs in two-player games are independent, something that is not always true in games with more than two players. Therefore, when periodic rationalizable strategies are considered, the total number of types needed for a rationalizability cycle is equal to 2n. This concludes the proof.

Periodic Strategies and Mixed Strategies

In this section, we study periodic strategies in the case in which mixed strategies are deployed in the simultaneous strategic form game. We shall confine ourselves to 2 × 2 games for simplicity and in order to be as illustrative as possible. We first introduce some notation in order to generalize the pure strategy case. Let a 1 (A i ) and a 2 (B j ), i, j = 1, 2, denote player A's and player B's probabilities of playing A i and B j respectively. These probabilities constitute the mixed strategies of the two players.
These probabilities are maps between the strategy space and the corresponding space of all probability distributions of each player. These are of the form: Correspondingly, the pure strategy utility function of each player is replaced by the compound lottery over each player's preferences on his own and his opponent's strategies, which is the expected utility for players A and B. The expected utility is defined to be: with i, j = 1, 2. For later convenience, we adopt the following notation, corresponding to 2 × 2 games: Hence, a general mixed strategy x_σ for player A can be written as: and correspondingly an action y_σ of player B: Note that p and q can vary in a continuous way, and the corresponding expected utility of each player is considered to be a differentiable function of p, q, with 0 ≤ p, q ≤ 1. We focus on the question of whether a periodicity pattern underlies 2 × 2 games in the context of mixed strategies. As in the pure strategy case, this periodicity will be materialized in terms of two maps Φ_1, Φ_2 that constitute the automorphisms Q = Φ_2 ∘ Φ_1 and Q′ = Φ_1 ∘ Φ_2. In this case, the aforementioned maps are defined differently compared to the pure strategy case. Take for example player A: The operator Q has the property that there exists a positive integer "n" and some action x_σ ∈ ∆(M(A)) for which Q^n x_σ = x_σ. The actions of the maps Φ_1 and Φ_2 are defined in the mixed strategies case as: These two maps are defined in such a way that at each step the following inequalities hold true: when we consider player A. In the above inequalities, "n" is some positive integer n ≥ 1. Let us interpret the meaning of the above inequalities, keeping in the back of our mind that the actions are now mixed strategies.
The algorithm implied by the inequalities (70) dictates that, starting with a mixed strategy of player A, namely x_σ, upon which we act with the map Φ_1, we search in player B's set of probability distributions ∆(N(B)) in order to find which mixed strategy maximizes the expected utility of player A. At the next step (inequality 2), the map Φ_2 acts on the strategy space of player B and yields a mixed strategy of player A, for which the expected utility function of player B is maximized, if it is assumed that player B plays Φ_1(x_σ). Accordingly, just like in the pure strategy case, it is possible that the whole process ends up at the initial mixed strategy, x_σ. Therefore, it is possible to form a chain of mixed strategies of the following form: where as in the pure strategy case, the letter P denotes the procedure described in relation (70) above. The mixed strategies for which we can find such a chain we call periodic, and as in the pure strategy case, these are formally defined to be strategies that satisfy: It is obvious that in terms of the operator Q, the last inequality of relation (70) can be cast as: (73) The fact that we deal with mixed strategies is a great advantage, since the action of the map Φ_1 on x_σ is equivalent to the maximization of U_1(p, q) with respect to q. Indeed, take for example the first inequality of relation (70). The map Φ_1 yields a strategy in ∆(N(B)) which is such that the expected utility of player A is maximized. Hence, if we differentiate U_1(p, q) with respect to q, the corresponding solution, say p*_1, is equal to Φ_1(x_σ), that is: At this point we need to clarify some issues in reference to mixed strategies. When mixed strategies are deployed, in order to find the periodic strategy for player A, for example, we differentiate his expected utility with respect to q.
Notice that, since the expected utility is a linear function of the variables (p, q), differentiation with respect to q will yield a solution p*_1, and consequently player A determines a strategy that belongs to his own strategic choices. Similar arguments hold for player B. This behavior occurs only in the mixed strategies case of finite strategic form games, due to the linearity of the expected utilities with respect to their variables (p, q). As can be easily inferred, this procedure is in the antipode of the mixed Nash equilibrium calculation. In the latter, the differentiation of the expected utility of player A, for example, with respect to p, yields a strategy q*_N of the opponent, and this qualifies as the Nash solution. If player B plays his Nash solution, all actions of player A are rationalizable and actually they are equivalent, since they yield the same payoff. In the periodic strategies case, the periodicity algorithm specifies an action for player A. As we shall see in the quadratic games section, this kind of behavior does not occur when similar considerations are used for the quadratic games. We shall use the property implied by relation (74) in one of the next subsections, and it will bring about interesting features in some classes of games. Of course, the same considerations as above apply to player B. Thereby, the corresponding inequalities (70) for a given initial mixed strategy y_σ ∈ ∆(N(B)) now become: The last inequality can be written in terms of the operator Q′ as: In this case, the corresponding operator Q′ is constructed by the maps Φ_1, Φ_2 as follows: Hence, a periodic action y_σ of player B satisfies:

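The derivative computations described above can be sketched in code. The following is a minimal illustration of the mixed-strategy periodicity algorithm for a generic 2 × 2 game; the function names and the closed-form roots of the affine derivatives are our own, and we assume non-degenerate payoffs so that the denominators below are non-zero:

```python
# Sketch of the mixed-strategy periodicity algorithm for a generic 2x2 game.
# A[i][j], B[i][j] are the payoffs of players A and B when A plays row i and
# B plays column j; p = Prob(A plays row 0), q = Prob(B plays column 0).
# Function names are illustrative and assume non-degenerate payoffs.

def expected_utility(M, p, q):
    """Expected utility of the compound lottery with payoff matrix M."""
    return (M[0][0]*p*q + M[0][1]*p*(1-q)
            + M[1][0]*(1-p)*q + M[1][1]*(1-p)*(1-q))

def periodic_mixed(A, B):
    """Since U1 is bilinear, dU1/dq is affine in p; its root is the strategy
    p*_p selected by the periodicity algorithm. Similarly dU2/dp gives q*_p."""
    p_star = (A[1][1] - A[1][0]) / (A[0][0] - A[0][1] - A[1][0] + A[1][1])
    q_star = (B[1][1] - B[0][1]) / (B[0][0] - B[0][1] - B[1][0] + B[1][1])
    return p_star, q_star

def nash_mixed(A, B):
    """For comparison: each player's Nash mix makes the *opponent* indifferent
    (root in q of dU1/dp, and root in p of dU2/dq)."""
    q_star = (A[1][1] - A[0][1]) / (A[0][0] - A[0][1] - A[1][0] + A[1][1])
    p_star = (B[1][1] - B[1][0]) / (B[0][0] - B[0][1] - B[1][0] + B[1][1])
    return p_star, q_star
```

For instance, for the Battle of Sexes payoffs used later, `periodic_mixed` returns (1/3, 2/3) while `nash_mixed` returns (2/3, 1/3), in agreement with the values computed in the corresponding subsection.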
Mixed Nash Equilibria are Periodic Strategies of 2 × 2 Strategic Form Games
In this subsection we investigate the periodicity properties of the mixed Nash equilibria in 2 × 2 simultaneous strategic form games. The mixed Nash equilibria are treated somewhat differently than the other mixed strategies. As will become obvious, the periodicity of the Nash equilibria is guaranteed without applying the algorithm that the inequalities (70) and (75) dictate. Actually, as we shall demonstrate, the mixed Nash equilibria are always periodic, if the players play the Nash strategies. In addition, the Nash equilibria are the only mixed rationalizable strategies that result in a rationalizability cycle, like the one we came across in the pure strategies case. This "enforced" in some way periodicity (in the sense that it is not obtained by direct application of the algorithm) is very closely connected to the fact that, in the case of mixed strategies, the players' mixed strategies have an important property, the so-called "opponent's indifference property". We shall further analyze this in the following. The procedure of finding the mixed Nash equilibria is based on maximizing each player's expected utility function with respect to the mixed strategy that the player assigns to his own actions. We use the expression (65) and the conventions of relation (66) for the probabilities. Hence, the problem of finding mixed Nash strategies reduces to finding the optimal strategies of players A and B for the various p, q values, and hence maximizing the corresponding expected utilities with respect to (p, q). Let us analyze first player A's expected utility maximization procedure. We make the following assumptions:
• The game is not a trivial game (the payoff matrix is not degenerate).
• The terms (A_1, B_2) are non-zero. The same holds for player B's utility function.
Then, the maximization procedure of player A's expected utility yields the following equation: The solution strongly depends on the signs of the two terms appearing in the above list. Since these two are game-dependent, we assume that the result can be cast in the form q − x_0 = 0, where x_0 is determined by the aforementioned terms. Our results are not affected by the exact value of x_0. Such a solution is guaranteed for games that have a mixed Nash equilibrium, and it is a well known fact that every game has a mixed Nash equilibrium. So when q > x_0, the expected utility of player A is a monotonically increasing function with respect to p, and therefore the best response of A to player B playing (q, 1 − q) with q > x_0 is the action with p = 1. In the case q < x_0, the expected utility of player A is monotonically decreasing with respect to p, and hence the best response is the action with p = 0. Finally, if q = x_0, the best response of player A to player B playing (x_0, 1 − x_0) is any action with 0 ≤ p ≤ 1. Correspondingly, for player B, the equation to analyze is of the form p − x′_0 = 0. If p > x′_0, the expected utility of player B is monotonically increasing with respect to q, and hence the best response of player B to player A playing (p, 1 − p) with p > x′_0 is the action with q = 1. If p < x′_0, the expected utility of player B is monotonically decreasing with respect to q, and thereby the best response to player A playing (p, 1 − p) with p < x′_0 is the action with q = 0. Finally, if p = x′_0, the best response to player A playing (x′_0, 1 − x′_0) is any action with 0 ≤ q ≤ 1. Hence, it is easy to see that a simple belief hierarchy can be formed in terms of the Nash equilibrium actions. This simple belief hierarchy is formed by the actions (x_0, 1 − x_0) and (x′_0, 1 − x′_0). As we shall see, this by itself leads to the conclusion that it is always possible to find a periodicity cycle for the mixed Nash equilibrium.
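The best-response structure just described can be made concrete with a short sketch. This is our own illustration, not from the paper; as noted above, the sign pattern is game-dependent, and we assume here the convention in which the expected utility is increasing in p for q > x_0:

```python
# Best response of player A in a 2x2 game as a function of B's mix q.
# x0 is the indifference point solving dU1/dp = 0; the convention below
# assumes U1 is increasing in p for q > x0 (this is game-dependent).

def indifference_point(A):
    """Root in q of dU1/dp = q*(A00 - A10) + (1-q)*(A01 - A11)."""
    return (A[1][1] - A[0][1]) / (A[0][0] - A[1][0] - A[0][1] + A[1][1])

def best_response_p(A, q, eps=1e-12):
    x0 = indifference_point(A)
    if q > x0 + eps:
        return 1.0
    if q < x0 - eps:
        return 0.0
    return None  # at q = x0 any p in [0, 1] is a best response
```

For matching pennies, discussed below, `indifference_point` gives x_0 = 1/2, reproducing the threshold structure of the text.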
Take for example player A: When B plays the Nash strategy (x_0, 1 − x_0), player A can play any of his available strategies. This is the materialization of the opponent's indifference property. The same applies for player B, that is, if A plays (x′_0, 1 − x′_0), then B can play any of his available actions. Hence, all the actions of A are rationalizable only if B plays his Nash strategy, and conversely, all of B's actions are rationalizable only if A plays his Nash strategy. Nevertheless, only the Nash strategies are contained in the periodicity cycle. Let us clear this up a little bit more, since it is of particular importance. The rationality argument for the Nash strategies verbally goes like this: Obviously there can be a repeating pattern only if the periodic actions are the corresponding mixed Nash equilibrium ones, that is:
• A will play any (p, 1 − p) if B plays (x_0, 1 − x_0), and B will play (x_0, 1 − x_0) (which is one of the infinitely many of his allowed actions) if A plays (x′_0, 1 − x′_0) (recall that B will play any (q, 1 − q) if A plays (x′_0, 1 − x′_0)), and A will play (x′_0, 1 − x′_0) if B plays (x_0, 1 − x_0), and so on ad infinitum.
It is obvious that the mixed Nash strategies can form the following periodicity cycle (we assume for the moment that 0 < p, q < 1, so that no pure strategies are involved in our framework): Consequently, we can find maps φ_1 and φ_2 that act in the following way: Therefore, we can form operators Q_M = φ_2 ∘ φ_1 and Q′_M = φ_1 ∘ φ_2, such that, when these act on the mixed Nash equilibrium strategies, the following hold: Note that the maps φ_1, φ_2 are artificially imposed and have nothing to do with the inequalities (70) and (75). As a conclusion, it easily follows from relation (82) that (as in the pure strategy case) the mixed Nash strategies of each player are periodic, with the periodicity number of each strategy being equal to one, that is, n = 1. Before proceeding to some illustrative examples, we would like to point out once more that the algorithm implied by the inequalities (70) and (75) does not necessarily yield the mixed Nash strategy, although the Nash strategy is periodic; the two coincide only in some exceptional cases. It is the differentiability of the expected utilities with respect to (p, q) and the specifics of the payoff matrix that introduce this peculiarity in the mixed strategies case. We shall analyze this issue further in the next section, after we present some examples related to the present case. Let us briefly present one very well known game, in order to augment the above arguments, namely the matching pennies game. Note that this game does not have a pure strategy Nash equilibrium. The mixed Nash equilibrium is x_σ = (1/2)A_1 + (1/2)A_2 and, correspondingly, y_σ = (1/2)B_1 + (1/2)B_2. If player B plays q = 1/2, then player A can play any p, and conversely, if player A plays p = 1/2, then player B can play any of his available actions. When B plays any action with q > 1/2, the expected utility of player A is monotonically increasing with respect to p, and hence the optimal strategy for A is p = 1.
Moreover, when q < 1/2, the optimal move for player A is p = 0, since in this case the expected utility of player A is monotonically decreasing with respect to p. A similar analysis can be done for player B. Note that the best response of player A to B playing q = 1/2 is any p. Hence, this is the set of all rationalizable strategies for player A, and likewise for player B, any q is rationalizable when A plays p = 1/2. It would be useful here to examine the rationalizability issues a bit more.
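The opponent's indifference property in matching pennies can be illustrated with a quick numerical check of our own, writing out A's payoff matrix explicitly:

```python
# In matching pennies, when B plays q = 1/2 player A's expected utility is
# the same (namely 0) for every choice of p: A is indifferent.
A = [[1, -1], [-1, 1]]  # player A's payoffs; player B's are the negatives

def U1(p, q):
    return (A[0][0]*p*q + A[0][1]*p*(1-q)
            + A[1][0]*(1-p)*q + A[1][1]*(1-p)*(1-q))

values = [U1(k/10, 0.5) for k in range(11)]
assert all(abs(v) < 1e-12 for v in values)  # indifferent: all payoffs equal 0
```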

Rationalizable Mixed Strategies
The result we found for the mixed strategies case is somewhat different compared to the pure strategies case, since the only mixed rationalizable strategies that are at the same time periodic are the mixed Nash strategies (considering mixed strategies with 0 < p, q < 1). The other mixed rationalizable strategies do not yield periodic patterns. Here we shall formally discuss the issue of rationalizable mixed strategies in a 2 × 2 simultaneous move game framework. Let us denote by R^p_i(G) and R^m_i(G) the pure strategies that are rationalizable and the mixed strategies that are rationalizable, respectively. The use of mixed strategies does not expand the rationalizable outcomes of a game: a rationalizable strategy that is a component of a rationalizable mixed strategy is also rationalizable as a pure strategy [5]. For a finite strategy space, we have for any player that supp(σ) ⊆ R^p_i(G) for every σ ∈ R^m_i(G). In practice, in order to find the mixed rationalizable actions, that is, to construct R^m_i(G) from R^p_i(G), we find the points in the convex hull of R^p_i(G) for which player "i"'s expected utility is maximized. We denote this set Γ. The set of mixed strategies is constructed from the set supp(∆(R^p_i)|Γ), that is, the set of mixed rationalizable strategies consists of those strategies that assign positive probability only to strategies in the set Γ. In 2 × 2 games, the rationalizable strategies are formed by any (p, 1 − p) and (q, 1 − q), as we already mentioned earlier. This is because player A's utility is maximized for any p ∈ (0, 1) when q = x_0. Hence, the actions (p, 1 − p) with p ∈ (0, 1) are rationalizable for A when B plays (x_0, 1 − x_0). Similarly, regarding player B, the actions (q, 1 − q) are rationalizable for B when A plays the mixed strategy (x′_0, 1 − x′_0). It is obvious that the repeating pattern appears only for the mixed Nash equilibrium in this type of games.
Note however that we did not apply the algorithm of the inequalities (70) and (75) in order to find these periodic strategies. If we formally apply the algorithm, the differentiability of the expected utilities in terms of the mixed strategy probabilities (p, q) brings about quantitatively and qualitatively interesting features of strategic form games that, to date, have not been explored in the literature. This is the subject of the next section. The games that satisfy the requirements we describe below are very interesting, since the robustness of the players' payoffs with respect to the opponents' strategies would imply, in an artificial way, the following periodicity cycle: and correspondingly: The above two relations imply that the strategies p*_p and q*_p are periodic with periodicity number n = 1. Note however that this periodicity is artificial and, as already mentioned, stems from the robustness of the players' payoffs in reference to their opponents' actions. This periodicity is a direct consequence of the algorithm that the inequalities (70) and (75) imply. To see which games have the aforementioned behavior, we focus on the general characteristics of the payoff matrix. The maximization of U_1(p, q) with respect to q yields the following condition: while the maximization with respect to p yields relation (79). Now, we will exploit the fact that, when the opponent plays a mixed Nash equilibrium strategy, the player's expected utility is independent of his own randomization over his own strategies and, at the same time, the utility is maximized. Hence, we can build games in such a way that the mixed periodic strategies are connected in some way to the mixed Nash equilibria.

First Type of Games
Having in the back of our mind the valuable attributes of the mixed Nash equilibria, we require that a game satisfy the following conditions: where p*_p and p*_N are the mixed periodic and the mixed Nash equilibrium strategies of player A, and q*_p and q*_N are the mixed periodic and the mixed Nash equilibrium strategies of player B, respectively. Hence, it is obvious how the robustness of the corresponding expected utilities is achieved. Making use of relations (79) and (88), relations (89) impose some restrictions on the payoff matrices, which are: Let us illustrate this result by using a very well known game, the Battle of Sexes, which is Game 1 in the table below.

Table 11: Mixed Strategies Game 1

        b_1    b_2
a_1     2,1    0,0
a_2     0,0    1,2

If we utilize mixed strategies, the mixed Nash equilibrium for this game is (p*_N = 2/3, q*_N = 1/3). If we maximize player A's expected utility with respect to q, we get ∂U_1(p, q)/∂q = −1 + 3p; hence the mixed periodic strategy is p*_p = 1/3, regardless of the value q takes. If we apply the same maximization procedure for player B, we obtain the mixed periodic strategy q*_p = 2/3. Let us now examine the expected utilities of the players. The expected utility of player A at the mixed "periodic" strategy p*_p = 1/3 (we shall use the term periodic even though these strategies are not periodic per se, but result from using the algorithm) is equal to: and is independent of q. The same applies for player B's utility at q*_p = 2/3: Now, the expected utility of player A when player B plays his mixed Nash strategy q*_N = 1/3 is equal to: and that of player B when A plays the mixed Nash strategy p*_N = 2/3 is: Notice that the last two relations do not depend on any variable, and also that the expected utilities are maximized when the opponent plays mixed Nash. Also notice that the expected utilities for the mixed periodic strategies also take their maximum values. In addition, the utilities corresponding to the periodic strategies and to the mixed Nash equilibria are equal.
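These claims can be checked numerically. The script below is our own verification for the Battle of Sexes payoffs of Table 11; the common robust value of the expected utilities is 2/3:

```python
# Check for the Battle of Sexes (Table 11): at the periodic strategy p*_p = 1/3
# player A's expected utility is 2/3 regardless of q, and symmetrically for B
# at q*_p = 2/3.
A = [[2, 0], [0, 1]]   # player A's payoffs
B = [[1, 0], [0, 2]]   # player B's payoffs

def EU(M, p, q):
    return (M[0][0]*p*q + M[0][1]*p*(1-q)
            + M[1][0]*(1-p)*q + M[1][1]*(1-p)*(1-q))

for q in (0.0, 0.25, 0.5, 1.0):
    assert abs(EU(A, 1/3, q) - 2/3) < 1e-12   # robust against B's play
for p in (0.0, 0.25, 0.5, 1.0):
    assert abs(EU(B, p, 2/3) - 2/3) < 1e-12   # robust against A's play
```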
The disadvantage of the mixed Nash strategy, in reference to the mixed periodic strategy, is that in order for the expected utility to be maximized, the opponent has to play Nash. This fact renders all the strategies of the player rationalizable. However, this does not happen in the periodic mixed strategy case, where, when the player plays his own periodic mixed strategy, the expected utility is maximized regardless of what the other player plays. Along this line of reasoning, notice furthermore that if player A plays, for instance, his own mixed Nash strategy p*_N = 2/3, his expected utility is: Accordingly, the expected utility of player B when he plays q*_N = 1/3 is: Obviously, the corresponding utilities depend on what the other player plays, and thus the mixed Nash strategies of each player do not render the corresponding payoffs robust against the opponent's strategies. In contrast, the mixed periodic strategies render the corresponding payoffs robust against what the opponents choose, and in addition maximize the expected utility functions. This result is very intriguing, since the mixed periodic strategies we found, namely (p*_p = 1/3, q*_p = 2/3), are not rationalizable actions. Each of the two is rendered rationalizable only if the opponent, for some reason, plays the mixed Nash strategy. Nevertheless, the most sound feature of the mixed periodic strategies is that the player who adopts them always achieves a payoff equal to or larger than the mixed Nash payoff, regardless of what his opponent plays (bear in mind that we assume 0 < p, q < 1 in order not to fall into inconsistencies).

Second Type of Games
Another type of games that has similar attributes to the one we just described satisfies the following conditions: These conditions render the corresponding expected utilities robust against the opponent's strategies. In addition, condition (97) restricts the payoff matrices so that the following conditions are satisfied: Let us illustrate this result using Game 2 in the table below.

        b_1    b_2
a_1     2,5    50,6
a_2     3,10   2,5

The pure strategy game has two Nash equilibria, namely (A_1, B_2) and (A_2, B_1), which at the same time are periodic. If we deploy mixed strategies, the mixed Nash equilibrium is (p*_N = 5/6, q*_N = 48/49). Maximizing player A's expected utility with respect to q, we get that the mixed periodic strategy is p*_p = 1/49 (which is what we expected as a result of (97)). Performing the same maximization procedure for player B, we obtain the mixed periodic strategy q*_p = 1/6. Let us now examine the expected utilities of the players. The expected utility of player A at the mixed periodic strategy p*_p = 1/49 is equal to: and is independent of q. Additionally, player B's utility at q*_p = 1/6 is: On the other hand, the expected utility of player A when player B plays his mixed Nash strategy q*_N = 48/49 is equal to: Precisely as in the previous game, the last two relations (the mixed Nash payoffs) do not depend on any variable, and also the expected utilities are maximized when the opponent plays his mixed Nash strategy. In addition, the expected utilities for the mixed periodic strategies take their maximum values, which are equal to the ones obtained for the Nash strategies.
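A numerical check of our own for Game 2 follows. The constant values 146/49 ≈ 2.98 and 35/6 ≈ 5.83 are our own evaluations of the robust expected utilities from the payoff matrix above (the corresponding expressions are not reproduced in the text):

```python
# Check for Game 2: the periodic strategies p*_p = 1/49 and q*_p = 1/6 make
# each player's expected utility constant, and equal to the value obtained
# when the opponent plays mixed Nash (q*_N = 48/49, p*_N = 5/6).
A = [[2, 50], [3, 2]]    # player A's payoffs
B = [[5, 6], [10, 5]]    # player B's payoffs

def EU(M, p, q):
    return (M[0][0]*p*q + M[0][1]*p*(1-q)
            + M[1][0]*(1-p)*q + M[1][1]*(1-p)*(1-q))

for q in (0.0, 0.3, 1.0):
    assert abs(EU(A, 1/49, q) - 146/49) < 1e-9   # robust value for A
for p in (0.0, 0.3, 1.0):
    assert abs(EU(B, p, 1/6) - 35/6) < 1e-9      # robust value for B
# the same values arise when the opponent plays his mixed Nash strategy
assert abs(EU(A, 0.5, 48/49) - 146/49) < 1e-9
assert abs(EU(B, 5/6, 0.5) - 35/6) < 1e-9
```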
However, if player A plays, for instance, his own mixed Nash strategy p*_N = 5/6, his expected utility is equal to: while the expected utility of player B, when he plays q*_N = 48/49, is equal to: Obviously, the corresponding utilities depend on what the opponent plays, in contrast to the case in which the mixed periodic strategies are chosen, where the corresponding payoffs are robust against what the opponents play. Moreover, note that in this case too, the expected utilities of the players playing the mixed periodic strategies are equal to the expected utilities of the players when their opponents play the mixed Nash strategy, just as in the previous game. As a final example we shall present some games that do not belong to any of the aforementioned categories of games but are worth mentioning, since when the periodic mixed strategies are played, the expected utilities are higher than the mixed Nash ones. This is very valuable, since it connects the periodic strategies to the so-called collective action games.

An Exceptional Type of Games-Collective Action Games-Prisoner Dilemma Type of Games
Suppose two big oil companies have both entered a new market, a country that has just started to produce oil and gas. Both companies extract oil and gas from that country and transfer that oil all around the world. But the transport of the oil is a very difficult task, since that country is a big island in the Mediterranean Sea. So the companies need a pipeline in order to transfer the oil faster and more efficiently. The government's public policy allows only one pipeline, so both companies must share the pipeline once it is constructed. The question is who is going to fund the construction of this pipeline. In the end both companies will benefit from the construction, but who is going to undertake the cost of this task? Such games are inherent to problems of collective action [31]. In these kinds of games, the actions that make the players better off do not belong to the set of best private-interest actions of the players, or, more formally, the socially optimal outcome is not automatically the Nash equilibrium. Collective action games come in three forms, namely, prisoner's dilemma, chicken, and assurance games.
The pipeline project has two important characteristics:
• The benefits of the game are non-excludable.
• The benefits are non-rival.
Such a project appears in the Economics literature under the name "Pure Public Good". Non-excludable means that a player that has not contributed to the project will still benefit from its outcome. Non-rival means that everyone who participates in the project has payoffs which are robust against the participation of the other players in the project. Such a game can be represented in matrix form as in Game 3 below, which we borrowed from the book of Dixit, Skeath and Reiley [31].

        B_1    B_2
A_1     4,4    -1,6
A_2     6,-1   0,0

It is obvious that the Nash equilibrium is the strategy (A_2, B_2). The payoffs depend on the quality and the time that it takes to materialize the project. Obviously, the optimal action for both players is not to participate, no matter what the other player does, that is, to act as a "free rider". Apparently, the social optimum is achieved when the strategy (A_1, B_1) is adopted by both players. The social optimum is always achieved when the total sum of the players' payoffs is maximized. However, this is strictly a cooperative way of thinking. Note that, in the context of mixed periodic strategies, we are still working within a non-cooperative context (we shall say more on this argument in a later section). Let us analyze the mixed strategies of this game. To do so, we relax the constraint we imposed in the previous sections and now allow 0 ≤ p, q ≤ 1. It is not difficult to see that the mixed Nash equilibrium is also the strategy (A_2, B_2). The expected utilities of the two players for p = p*_N = 0 and q = q*_N = 0 are both equal to zero, that is: Applying the algorithm of periodic strategies, we maximize the expected utility of player A with respect to q and the utility of player B with respect to p, respectively.
The results are the periodic mixed strategies, which in this case are the pure strategies p*_p = 1 and q*_p = 1. The expected utilities of both players are maximized for this periodic strategy, that is: Hence, in this case, the socially optimal strategy is encompassed in the periodic strategies. But more importantly, we used a non-cooperative method in terms of a self-maximization procedure. The fact that the two outcomes, that is, the non-cooperative and cooperative ones, coincide is an artifact of the details of the game. In the next subsection we shall discuss this crucial difference in detail. This is a very sound result, since this outcome is based on a formal procedure of maximization of each player's expected utility with respect to the opponent's mixed strategy. We will further analyze this result exploiting another useful example. Take for example the following game, which is a collective action game again.

        b_1    b_2
a_1     0,0    6,1
a_2     1,6    3,3

If we use mixed strategies, the mixed Nash equilibrium is (p*_N = 3/4, q*_N = 3/4). Now if we maximize player A's expected utility with respect to q, we get that ∂U_1/∂q = −2 < 0. Hence, the expected utility is maximized when q = q*_p = 0, since the utility is monotonically decreasing with respect to q. Correspondingly, maximizing player B's expected utility with respect to p, we get ∂U_2/∂p = −2 < 0. For the same reason, player B's expected utility is maximized when p*_p = 0. Therefore, the periodic strategies are p*_p = 0, q*_p = 0. A remarkable feature of the periodic strategy is that U_2(p*_p = 0, q*_p = 0) = 3 and U_1(p*_p = 0, q*_p = 0) = 3, while for the Nash strategies we get U_1(q*_N = 3/4, p*_N = 3/4) = 2.25 and U_2(q*_N = 3/4, p*_N = 3/4) = 2.25. Hence, the periodic mixed strategy (which is actually a pure strategy) yields higher payoffs for both players, in terms of their expected utility functions, in comparison to the mixed Nash strategy.
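The claims about the pipeline game (Game 3) can be verified directly. The script below is our own numerical check; the derivative expressions in the comment are computed from the payoff matrix quoted above:

```python
# Our own check of the pipeline game (Game 3): the Nash strategies p = q = 0
# give each player payoff 0, while the periodic strategies p = q = 1 produced
# by the algorithm give the socially optimal payoff 4 to each player.
A = [[4, -1], [6, 0]]   # player A's payoffs
B = [[4, 6], [-1, 0]]   # player B's payoffs

def EU(M, p, q):
    return (M[0][0]*p*q + M[0][1]*p*(1-q)
            + M[1][0]*(1-p)*q + M[1][1]*(1-p)*(1-q))

assert EU(A, 0, 0) == 0 and EU(B, 0, 0) == 0   # mixed Nash payoffs
assert EU(A, 1, 1) == 4 and EU(B, 1, 1) == 4   # periodic payoffs
# dU1/dq = 6 - p > 0 and dU2/dp = 6 - q > 0, so the maxima sit at q = 1, p = 1
```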
This is particularly interesting, since it is in the spirit of collective action games. Thereby, the procedure of finding periodic mixed strategies via the maximization of the expected utilities with respect to the opponent's mixed strategies could serve as a formal argument for why the socially optimal strategies should be played by the players, while always remaining in a non-cooperative context. It is natural to ask whether some sort of cooperativity is hidden in the algorithm of periodic actions. This, however, is not the case, since each player maximizes his own utility and does not care (in payoff terms) about his opponent. This conceptually delicate issue is addressed in the following subsection.

Difference of Periodicity and Cooperativity
While considering the concept of periodicity in the context of mixed strategies, a reasonable question springs to mind, which has to do with the cooperativity of the periodic strategies algorithm. Particularly, does the periodicity algorithm entail any kind of cooperativity among the players, so that they end up playing the strategies that benefit both of them the most? The answer is no. As we shall demonstrate, the periodic actions, and also those that result from the periodicity algorithm, are always calculated in a non-cooperative context. Cooperative game theory does not necessarily imply that each agent is agreeable and will follow instructions. Rather, it means that the basic modelling unit is the group and not the individual agent. Hence, in cooperative game theory the focus is on what groups of agents, rather than individual agents, can achieve. In addition, the payoffs may be freely redistributed among a group's members. This assumption is satisfied whenever there is a universal currency that is used for exchange among players, and it means that each coalition between players can be assigned a single value as its payoff. Moreover, in cooperative game theory, players are allowed to make binding commitments, as opposed to non-cooperative game theory, in which they cannot. In addition, in cooperative game theory, agents are allowed to split the gains from cooperation by making side payments between the players that form a coalition. Side payments are transfers between the players, which consequently modify the final payoffs. On the contrary, in non-cooperative game theory each player maximizes his own payoff, and no side payments are allowed. The payoffs that each player receives are not modified for any reason, and remain the same all the time (at least in the simultaneous, static, perfect information games we are studying in this article). This is in the antipode of what is materialized in cooperative game theory.
Furthermore, the real difference between cooperative and non-cooperative game theory lies in the different modelling approach and the different solutions that are given in each case. For non-cooperative game theory, the important ingredient is the single agent, and for cooperative game theory, the group. As we explicitly showed in the previous sections, the periodic actions of a player "I", both in pure strategies and in mixed strategies (the latter as outcomes of the periodicity algorithm and not periodic in the sense of section 1), are found by maximizing player I's expected utility with respect to his opponent's actions. This does not entail any sort of cooperativity, since what is actually done for a player is a self-maximization of his utility function. Each player acts non-cooperatively, since he does not care about his opponent's payoffs; he does care about his opponent's actions, and in particular takes into account those for which his own payoff is maximized. This is different in spirit from the Nash equilibrium concept, and also from conventional approaches in non-cooperative game theory, but it is within the context of non-cooperative game theory. In order to further support our argument, we shall briefly present one of the most refined cooperative game theory techniques, the so-called Cooperative-Competitive (CO-CO) solution concept [32], and we shall compare the results of this solution concept with the ones that result from the periodic strategies algorithm.

Cooperative-Competitive Equilibrium
We shall briefly present the cooperative game theory solution concept known as the Cooperative-Competitive solution, which was first introduced in [32] (see also [33]). Consider a general, two player non-zero sum game with players A and B, described by the payoff functions Φ_A and Φ_B, with: The choice of the strategy (a♯, b♯) may favor one player more than the other. In such a case, the player that is better off must provide some incentive to the other player, in order for him to comply with the strategy (a♯, b♯). This incentive is actually a side payment. Splitting the total payoff V♯ into two equal parts will not be acceptable, because this does not reflect the relative strength of the players and their personal contributions to the cooperative outcome [33]. A more realistic approach was introduced in [32], which we now demonstrate. Define the following games: The above two relations actually imply that the original game is split into two games, a purely cooperative one, with payoff Φ♯(a, b), and a competitive one (which is a zero-sum game), with payoff Φ_S(a, b). In the cooperative game, the players have exactly equal payoffs, that is, they both receive Φ♯(a, b), while in the purely competitive part, the players have exactly opposite payoffs, namely Φ_S(a, b) and −Φ_S(a, b). Denote by V_S the value of the zero-sum game with utility function Φ_S(a, b).
Having found the value of the zero-sum game, the Cooperative-Competitive value of the game is defined to be the pair of payoffs (V♯/2 + V_S, V♯/2 − V_S). The Cooperative-Competitive solution of the game is defined to be the pair of strategies (a♯, b♯), together with a side payment P_S from player B to player A, such that Φ_A(a♯, b♯) + P_S = V♯/2 + V_S and Φ_B(a♯, b♯) − P_S = V♯/2 − V_S. Obviously, the side payment can be negative, in which case player A pays player B the amount |P_S|. Conceptually, the Cooperative-Competitive solution lies at the antipode of the algorithm that yields periodic strategies, owing to the fact that the Cooperative-Competitive strategy pair (a♯, b♯) is determined by maximizing the sum of a player's and his opponent's utilities. The periodic strategies, on the other hand, are computed by maximizing each player's own payoff with respect to the opponent's actions.
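To make the procedure concrete, the following is a minimal computational sketch of the CO-CO decomposition for a 2×2 bimatrix game. The payoff matrices below are the standard Battle of the Sexes values, which we assume match the game discussed later; they are an illustration, not the paper's tables verbatim.

```python
# CO-CO decomposition sketch for a two player bimatrix game.

def coco_solution(Phi_A, Phi_B):
    n, m = len(Phi_A), len(Phi_A[0])
    # competitive (zero-sum) part: Phi_S = (Phi_A - Phi_B) / 2
    Phi_S = [[(Phi_A[i][j] - Phi_B[i][j]) / 2 for j in range(m)] for i in range(n)]
    # V_sharp: maximal total payoff, attained at (a_sharp, b_sharp)
    V_sharp, best = max(
        ((Phi_A[i][j] + Phi_B[i][j], (i, j)) for i in range(n) for j in range(m)),
        key=lambda t: t[0],
    )
    # value of the zero-sum part; for simplicity we only look for a pure
    # saddle point (the general mixed case needs a small linear program)
    maxmin = max(min(row) for row in Phi_S)
    minmax = min(max(Phi_S[i][j] for i in range(n)) for j in range(m))
    V_S = maxmin if maxmin == minmax else None
    # CO-CO value and the side payment P_S from B to A (negative: A pays B)
    coco = (V_sharp / 2 + V_S, V_sharp / 2 - V_S)
    P_S = coco[0] - Phi_A[best[0]][best[1]]
    return best, V_sharp, V_S, coco, P_S

# assumed Battle of the Sexes payoffs
A = [[2, 0], [0, 1]]
B = [[1, 0], [0, 2]]
print(coco_solution(A, B))
```

With these assumed payoffs, the total payoff is maximized at the coordination outcomes, the zero-sum part has value 0, and the side payment transfers half a unit from the favored player to the other.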
In order to further support our arguments in a quantitative way, we shall present some characteristic examples, comparing the Cooperative-Competitive solution with the solution produced by the periodicity algorithm.

Cooperative-Competitive Solution and Periodicity Algorithm-Some Examples
Consider the Battle of Sexes game that appears in Table 8, in the previous section.
As we demonstrated, for this game both the pure strategy pairs (a 1 , b 1 ) and (a 2 , b 2 ) are periodic strategies. Moreover, when we apply the periodic strategies algorithm to mixed strategies, we obtain a mixed strategy that yields the same payoffs as the mixed Nash equilibrium, with the difference that each player's payoff does not depend on his opponent's actions. Let us recall the results here for convenience. The mixed Nash equilibrium for this game is (p*_N = 2/3, q*_N = 1/3), while the periodic strategies algorithm yields the strategy (p*_p = 1/3, q*_p = 2/3). Hence, the payoff corresponding to the mixed Nash equilibrium is (U_1N , U_2N ) = (2/3, 2/3) and the algorithm of periodic strategies yields the payoffs (U_1P , U_2P ) = (2/3, 2/3). Let us now turn our focus to the Cooperative-Competitive solution of the Battle of Sexes game. Following the procedure described in the previous subsection, we find the zero-sum part of the Battle of Sexes game (Table 12). We easily compute the values V♯ and V_S, which are equal to V♯ = 3 and V_S = 0. It is obvious that the Cooperative-Competitive strategy consists of either of the two strategy pairs (a 1 , b 1 ) or (a 2 , b 2 ). Within the Cooperative-Competitive solution, the favored player must make a side payment of 1/2 to the other, so that the final utilities are (U_1CC , U_2CC ) = (3/2, 3/2). As we can see, when the players cooperate, they receive a higher payoff than all the non-cooperative payoffs we presented for this game. Consequently, the strategies obtained from the periodic strategies algorithm are, in expected utility terms, as non-cooperative as the mixed Nash equilibrium. Let us give another example at this point, to further support the non-cooperativity of the mixed and non-mixed periodic strategies. Consider the game that appears in Table 9.
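The computation above can be reproduced with a few lines of code. The sketch below uses the standard Battle of the Sexes payoffs, which we assume match Table 8, and exploits the fact that each player's expected utility is linear in the opponent's probability.

```python
from fractions import Fraction

# assumed Battle of the Sexes payoffs (player 1 chooses the row with
# probability p, player 2 chooses the column with probability q)
A = [[2, 0], [0, 1]]
B = [[1, 0], [0, 2]]

def U1(p, q):
    return (A[0][0]*p*q + A[0][1]*p*(1 - q)
            + A[1][0]*(1 - p)*q + A[1][1]*(1 - p)*(1 - q))

# Periodicity algorithm: player 1 maximizes his OWN expected utility with
# respect to the OPPONENT's probability q.  U1 is linear in q, so the
# interior critical point is where the coefficient of q vanishes:
#   dU1/dq = (A00 - A01 - A10 + A11) * p + (A10 - A11) = 0
p_p = Fraction(A[1][1] - A[1][0], A[0][0] - A[0][1] - A[1][0] + A[1][1])
# symmetrically for player 2, from dU2/dp = 0
q_p = Fraction(B[1][1] - B[0][1], B[0][0] - B[0][1] - B[1][0] + B[1][1])
print(p_p, q_p)  # 1/3 2/3

# robustness: with p = 1/3 player 1 receives 2/3 regardless of q
payoffs = {U1(p_p, Fraction(k, 10)) for k in range(11)}
print(payoffs)  # {Fraction(2, 3)}
```

The last check makes the key claim explicit: at the periodic mixed strategy, player 1's payoff equals the mixed Nash payoff 2/3 and is completely insensitive to what the opponent plays.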
As we demonstrated, the mixed Nash equilibrium for this game is (p*_N = 5/6, q*_N = 48/49), while the periodic strategies algorithm yields (p*_p = 1/49, q*_p = 48/49), with the corresponding payoffs given in equation (113). The strategy (a 1 , b 2 ) is the Cooperative-Competitive strategy. The values V♯ and V_S are equal to V♯ = 56 and V_S = −3/2, and hence the side payment is P_S = −47/2, that is, player A pays player B the amount 47/2. The Cooperative-Competitive value of the game (the final payoffs of the two players) is (U_1CC , U_2CC ) = (53/2, 59/2). Comparing the cooperative payoffs with the non-cooperative ones appearing in equation (113), it is obvious that the non-cooperative ones are far smaller. In conclusion, we have established, both conceptually and quantitatively, that the strategies resulting from the periodic strategies algorithm are non-cooperative. Nevertheless, for some games the Cooperative-Competitive payoff (the value, in the terminology of Cooperative-Competitive equilibria) may coincide with the periodic mixed or pure strategies payoff. As one might suspect, this coincidence is accidental and is actually an artifact of the form of the game. A class of games for which it occurs is the Prisoner's Dilemma. Consider, for example, the game that appears in Table 10. For this example, the application of the periodic strategies algorithm results in the strategy pair (A 1 , B 1 ), with payoffs (U_1P , U_2P ) = (4, 4). For this game the values V♯ and V_S are equal to V♯ = 8 and V_S = 0, and the side payment is P_S = 0. Consequently, the Cooperative-Competitive value of the game is (U_1CC , U_2CC ) = (4, 4), the same as the periodic one. However, this is accidental and is an artifact of the details of the payoff matrix.

Two player Simultaneous Move Strategic Form Games with a Continuum Set of Strategies
In this section, we shall study the implications of the periodic strategies algorithm for two player games in which each player has a continuum of strategies. We shall mainly focus on strategic form, simultaneous, symmetric games with quadratic payoffs. There exist many examples in the economics literature that belong to this class of games, such as the Cournot and Bertrand duopolies, provision of public good games and search games [34]. A natural question that springs to mind is whether results similar to those of the finite collective action games hold in this case as well. As we shall see, the periodic strategies algorithm does not yield results as interesting as in the collective action games, but we shall go through it nonetheless, in order to cover all possible applications of the algorithm.
We consider a game between two players I = 1, 2, each of whom has a continuum of available strategies. The payoffs are in general of the following form: where x, y ∈ ℜ + . In both cases, the parameters a 5 , b 5 are assumed to be negative, so that each payoff is concave in the player's own strategy. We shall also restrict the parameters (a i , b i ) to be equal, therefore assuming a symmetric game. In order to find the Nash equilibria of the above game, the following equations must be solved simultaneously: Thus, maximizing the payoff of each player with respect to his own strategy yields the Nash equilibria. At the antipode of this technique lies the algorithm of finding the periodic strategies, in which the payoff of each player is maximized with respect to his opponent's strategy. For the case at hand, this amounts to solving simultaneously the following two equations: In order for the critical points of equations (115), which we denote (x N , y N ), to be maxima, the following two conditions have to be satisfied: and simultaneously, In the case of general quadratic games, the above conditions become: Following the same line of argument, the conditions for maxima in the case of the periodic strategies algorithm (for simplicity, we shall call these periodic strategies, although they are not periodic in the strict sense) are the following two: In the case of symmetric games, the conditions simplify considerably, to: The latter two conditions are satisfied for all quadratic games, owing to the concavity condition we imposed at the beginning. The Nash equilibria and the "periodic" points are: We shall now apply the above to specific games, and by means of two characteristic examples provide some intuition for our results. We shall consider two continuous, symmetric quadratic games, the "Cournot Duopoly" game and the "Provision of Public Good" game.
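As an illustration of the two sets of first order conditions, the sketch below solves both for a symmetric quadratic game with an assumed payoff form; the coefficient labels c1, ..., c5 are ours and not the paper's relation (114) verbatim.

```python
# assumed symmetric quadratic payoffs, with c5 < 0 (concavity in own strategy):
#   Phi_A(x, y) = c1*x + c2*y + c3*x*y + c4*y**2 + c5*x**2
#   Phi_B(x, y) = c1*y + c2*x + c3*x*y + c4*x**2 + c5*y**2

def nash_point(c1, c2, c3, c4, c5):
    # Nash: each player maximizes in his OWN variable:
    #   dPhi_A/dx = c1 + c3*y + 2*c5*x = 0,   dPhi_B/dy = c1 + c3*x + 2*c5*y = 0
    # by symmetry x = y, hence c1 + (c3 + 2*c5)*x = 0
    x = -c1 / (c3 + 2 * c5)
    return (x, x)

def periodic_point(c1, c2, c3, c4, c5):
    # periodicity algorithm: each player maximizes in the OPPONENT's variable:
    #   dPhi_A/dy = c2 + c3*x + 2*c4*y = 0,   dPhi_B/dx = c2 + c3*y + 2*c4*x = 0
    # (an interior maximum additionally requires c4 < 0);
    # by symmetry x = y, hence c2 + (c3 + 2*c4)*x = 0
    x = -c2 / (c3 + 2 * c4)
    return (x, x)

c = (10.0, 2.0, -1.0, -1.0, -1.0)  # illustrative coefficients
print(nash_point(*c), periodic_point(*c))
```

The two symmetric linear systems differ only in which variable each payoff is differentiated with respect to, which is exactly the contrast between the Nash procedure and the periodicity algorithm described above.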

Cournot Duopoly
The Cournot duopoly quadratic game has the following form: where, using the notation of relation (114) (and recalling that in the case at hand a i = b i ), the parameters A, B, P, M are defined to be: The equilibria of this game can be found by maximizing each player's utility function with respect to his own strategy. The critical points of the above utilities are: with the corresponding utilities: Let us discuss at this point the periodicity of the continuous equilibrium (x * , y * ). As is obvious from the above equation, the equilibrium is not a periodic strategy in the sense of inequalities (70). Indeed, as can be seen, player A will play his equilibrium strategy if player B plays the largest possible y, rather than his equilibrium strategy, provided that 3A − 2M is positive; the same applies to player B. In any case, there is no direct connection between the equilibrium strategy and periodicity, unless more conditions are imposed. This somewhat strange result must stem from the specific form of the utilities of the two players. In addition, it can be a consequence of the fact that the equilibrium is not actually a maximum of the utilities, but rather a saddle point. This behavior seems to be inherent to symmetric quadratic games, since the same applies to Public Good games. Let us now determine the outcome of the algorithm of periodic strategies. Maximizing each player's utility function with respect to his opponent's action yields the "periodic" strategy (x p , y p ) = (0, 0). This result is not substantial enough to merit further study. On the contrary, as we shall see in the case of Provision of Public Good games, the "periodic" strategies play a more prominent role than in the present example.
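A numerical sketch makes both points explicit. We assume the textbook Cournot parameterization with inverse demand P − (x + y) and constant unit cost M; the paper's parameters A, B, P, M may enter differently, so the names here are illustrative.

```python
P, M = 12.0, 3.0  # assumed demand intercept and unit cost

def u1(x, y):
    return x * (P - x - y) - M * x  # firm 1's profit

grid = [k / 10 for k in range(101)]  # quantities in [0, 10]

# Nash: maximize own profit in own quantity -> x* = y* = (P - M) / 3
x_star = (P - M) / 3
best_reply = max(grid, key=lambda x: u1(x, x_star))
print(best_reply, x_star)  # 3.0 3.0  (x* is a best reply to itself)

# Periodicity algorithm: maximize u1 over the OPPONENT's quantity y.
# Since du1/dy = -x <= 0, the maximum sits on the boundary y = 0; by
# symmetry the algorithm returns the degenerate "periodic" point (0, 0).
y_p = max(grid, key=lambda y: u1(1.0, y))
print(y_p)  # 0.0
```

The grid search confirms both claims: the symmetric Nash quantity is an interior best reply to itself, while maximizing one's profit over the opponent's quantity always drives that quantity to zero.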

Provision of Public Good Games
The Provision of Public Good quadratic game has the following form: The Nash equilibria of this game can be found by maximizing each player's utility function with respect to his own strategy. The critical points of the above utilities are strategies that simultaneously satisfy: with the corresponding utilities: As we can see, no periodicity arguments can be applied in this case, for exactly the same reasons as in the Cournot duopoly game. Let us proceed to the periodic algorithm strategies. Maximizing each player's utility with respect to his opponent's strategy, we obtain the following strategies (x p , y p ): with the corresponding utilities: Note that we can write the above utilities in terms of the equilibrium utilities (129), namely, We can see that the periodic strategies algorithm yields smaller payoffs than those corresponding to the equilibrium strategies. When C = 0 the payoffs are equal, and this is the interesting fact about these games. This, however, is owing to the symmetry of the game, and consequently no new information can be extracted from games of this kind.

Epistemic Game Theory Framework and Periodic Strategies
In this section, we shall connect the periodicity number "n" appearing in the automorphism Q n defined earlier, to the number of types needed to describe a two player simultaneous strategic form game within an epistemic framework. We shall work in a perfect information context. The epistemic game theory formalism was first introduced in Harsanyi's papers, in order to describe incomplete information games [27][28][29], and was thereafter adopted by other authors (see for example [8][9][10] and references therein). Our approach mimics the one used in [30] and also the one adopted by Perea in [26]. In order to render our presentation complete, we shall briefly present the appropriate formalism and reasoning.

Belief Hierarchies in Complete Information Games and Types and Common Belief in Rationality
Consider a two player game with a finite set of actions available for each player, the players being referred to as player A and player B hereafter. A belief hierarchy for player A is constructed from a chain of increasing order beliefs, in terms of objective probabilities, as follows [26]:
• A first order belief is the belief that player A holds about player B's actions.
• A second order belief is what player A believes that player B believes that player A will play, and so on ad infinitum.
So a k-th order belief represents the belief that player A holds about the (k − 1)-th order belief of player B. The belief hierarchies express in general rational choices of the players, but the underlying theme is also common belief in rationality, that is, every player believes in his opponent's rationality, believes that his opponent believes that he acts rationally, and so on. Since belief hierarchies are constructions that are not easy to use in practice, we introduce the concept of a type, which encompasses all the information that a belief hierarchy contains, but in a more compact way. Before doing that, let us quantify the belief hierarchies more formally, in terms of topological metric spaces. The first order beliefs are materialized as the probability distributions over the space of uncertainty a player "i" has about his opponents. We denote this space X 1 i and, according to the above, X 1 i = C −i , that is to say, the uncertainty is over the set of the opponents' actions. Hence, the set of first order beliefs is B 1 i = ∆(X 1 i ). Following the same line of argument, the second order space of uncertainty for player "i" is equal to X 2 i = C −i × B 1 −i , which encompasses player "i"'s opponents' actions and, in addition, his opponents' first order beliefs. The set of all probability distributions over the space X 2 i is the set of all second order beliefs, that is, B 2 i = ∆(X 2 i ).
Accordingly, continuing this process up to the k-th order, we obtain the k-th order space of uncertainty, which embodies the (k − 1)-th order space of uncertainty and also the opponent's (k − 1)-th order beliefs. Thereby, the set of k-th order beliefs is the set ∆(X k i ). A belief hierarchy b i for player "i" is an infinite chain of beliefs b k i ∈ B k i , for all k, that is, b i = (b 1 i , b 2 i , b 3 i , . . .). Relation (136) encompasses all the verbal statements appearing in the list above. The belief hierarchy is assumed to be coherent, which means that the various beliefs which constitute the belief hierarchy do not contradict each other, that is, for m > k the marginal of b m i on the k-th order space of uncertainty coincides with b k i . Having defined coherent belief hierarchies, the epistemic framework is constructed using the definition of an epistemic type, which is simply a coherent belief hierarchy for a player i. A type corresponds to some epistemic model constructed for the game, so let T i be the set of types of player i. In addition, for every player "i" and for every t i ∈ T i , the epistemic model specifies a probability distribution b i (t i ) over the set C −i × T −i , the set of choice-type pairs of player i's opponent −i. The probability distribution b i (t i ) stands for the belief that type t i of player i holds about player −i's actions and types, so b i (t i ) ∈ ∆(C −i × T −i ), which holds for a two player game. A type of a player "i" embodies a complete belief hierarchy corresponding to that player; plainly speaking, the type is the complete belief hierarchy. Now, a choice c i of player "i" is optimal for a type t i of player "i" if it is optimal given the first order belief that t i holds about the opponents' choices. Within the epistemic game theoretic framework we utilize, we can easily define common belief in rationality. Indeed, we say that the type t i believes in the opponent's rationality if t i assigns positive probability only to opponent choice-type pairs (c −i , t −i ) for which c −i is optimal for the type t −i .
Having defined the belief in the opponent's rationality, we define k-fold belief in rationality [26]:
• Type t i expresses 1-fold belief in rationality if t i believes in the opponent's rationality.
• Type t i expresses 2-fold belief in rationality if t i only assigns positive probability to opponent's types that express 1-fold belief in rationality.
• Type t i expresses k-fold belief in rationality if t i only assigns positive probability to opponent's types that express (k − 1)-fold belief in rationality.
With the above definitions, we can define the concept of common belief in rationality in terms of types as follows: within an epistemic model, a type t i corresponding to player i expresses common belief in rationality if it expresses k-fold belief in rationality for every k. In addition, we can formally define a "rational choice" under common belief in rationality as follows: a choice c i of player i is rational under common belief in rationality if there is some type t i such that:
• Type t i expresses common belief in rationality.
• Choice c i is optimal for this type t i .
Our aim is to connect the periodicity number "n" defined earlier to the number of types necessary to describe a simultaneous two player finite action game. This connection will actually be made using the point rationalizable strategies.

The Connection of the Periodicity Number to the total Number of Types of the Epistemic Model
As we have seen in section 2, the rationalizable actions that are also periodic and at every step satisfy the inequalities (11) and (16) are particularly interesting, since for these we can connect the total periodicity number "n" to the number of types needed to describe the game with an epistemic model. This relation is described by the following theorem: Theorem 8. In two player perfect information strategic form games, the number of types N t i corresponding to the periodic cycle of a rationalizable periodic action is N t i = 2n. Proof. For every such action, if the periodicity number is "n", it is possible to construct a periodicity chain with exactly 2n rationalizable actions appearing in it. Therefore, what is necessary to prove is that for each action appearing in the rationalizability chain there exists at least one type, so that the minimum number of types corresponding to all the actions of the rationalizability chain is 2n. As is proved in [26], in a static game with finitely many choices for every player, it is always possible to construct an epistemic model in which:
• Every type expresses common belief in rationality.
• Every type assigns, for every opponent, probability 1 to one specific choice and one specific type for that opponent.
Thus, for two player games, each type of player A, say, assigns probability 1 to one of his opponent's actions and to one specific type for which that action is optimal. In addition, in two player games, rationalizable actions and choices that can be made under common belief in rationality are exactly the same object. Hence, we can associate to every rationalizable action of player A exactly one type, which in turn assigns probability one to one specific rationalizable action and one specific type of his opponent. Moreover, as is proved in [26], the actions that can rationally be made under common belief in rationality are rationalizable. To state this more formally, in a static game with finitely many actions for every player, the choices that can rationally be made under common belief in rationality are exactly those choices that survive iterated elimination of strictly dominated strategies. Hence, for two player games, we conclude that strategies which express common belief in rationality and rationalizable strategies coincide. This is because all beliefs in two-player games are independent, something that is not always true in games with more than two players. Therefore, when periodic rationalizable strategies are considered, the total number of types needed for a rationalizability cycle is equal to 2n.
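The counting in the proof can be illustrated computationally. Under one natural reading of the periodicity map (from an action of player A, pick the opponent action that maximizes A's payoff, then the player A action that maximizes B's payoff, and so on), the chain below for assumed Battle of the Sexes payoffs closes after 2n = 2 actions, so two types suffice for the cycle.

```python
# assumed Battle of the Sexes payoffs; both (a1, b1) and (a2, b2) are periodic
A = [[2, 0], [0, 1]]  # player A's payoffs (rows: A's actions, cols: B's)
B = [[1, 0], [0, 2]]  # player B's payoffs

def step_from_A(a):
    # opponent action maximizing player A's payoff, given A plays a
    return max(range(len(A[a])), key=lambda b: A[a][b])

def step_from_B(b):
    # player A action maximizing player B's payoff, given B plays b
    return max(range(len(B)), key=lambda a: B[a][b])

def periodicity_chain(a0):
    # follow the map until it returns to the start; this assumes a0 is
    # periodic (for a non-periodic start the loop would not terminate)
    chain, a = [], a0
    while True:
        b = step_from_A(a)
        chain += [('A', a), ('B', b)]
        a = step_from_B(b)
        if a == a0:
            return chain

chain = periodicity_chain(0)  # start from action a1
n = len(chain) // 2           # periodicity number
print(chain, n, 2 * n)  # [('A', 0), ('B', 0)] 1 2
```

Each entry of the chain needs at least one type in the epistemic model of [26], so the cycle starting at a1 requires exactly 2n = 2 types, in agreement with Theorem 8.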

A Comment on Simple Belief Hierarchies and Nash Equilibria
Within an epistemic game theory context, a type t i is said to have a simple belief hierarchy if t i 's belief hierarchy is generated by some combination σ i of probabilistic beliefs about the players' choices. Plainly speaking, a type has a simple belief hierarchy if it believes that his opponents are correct about his beliefs. As is demonstrated and proved in [26], a simple belief hierarchy, materialized in terms of some probabilistic beliefs σ i about the players' choices, expresses common belief in rationality iff the combination σ i of beliefs is itself a Nash equilibrium. The converse is not always true. Hence, using the theorem above, the number of types needed to describe a simple belief hierarchy for a Nash equilibrium is 2. Obviously, if a Nash action is periodic, then n = 1 and, applying relation (139), we find that the number of types needed in the periodic Nash case is two. We have to stress an interesting point regarding simple belief hierarchies. When considering two player games, it is proved (see [26], theorem 4.4.3) that a type t i has a simple belief hierarchy iff t i believes that his opponent holds correct beliefs and believes that his opponent believes that he himself holds correct beliefs. Thus, he believes that he makes no mistake in his prediction about his opponent's beliefs, and he believes the same for his opponent. For higher order beliefs this is no longer true, and therefore we could argue that the total number of wrong beliefs the two players hold about each other's beliefs is equal to 2n − 1. Plainly speaking, the total number of mistakes that the two players make is equal to 2n − 1. Here, by mistakes we mean the beliefs σ i due to which the higher order belief hierarchy fails to be a simple belief hierarchy.

Concluding Remarks
We have presented an intrinsic property of multiplayer, finite, simultaneous, strategic form games, which we called periodicity of strategies. We studied the periodicity concept in finite action games, in which case we proved that every finite action two player strategic form game has at least one periodic action. Moreover, we proved that the set of periodic strategies is stable under the map Q. Nash strategies are not always periodic, as we demonstrated, and we found which conditions must be satisfied for them to be periodic. As a corollary of these conditions, it follows that the periodicity number of periodic Nash strategies is equal to one. We studied games with perfect and also incomplete information, the latter case being quantified in terms of Bayesian games; there we made extensive use of various generalizations of Bernheim's rationalizability concept. Next, we studied the case in which mixed strategies are taken into account, in the context of two player finite action strategic form games. As we demonstrated, the only periodic strategy in the sense of section 3 is the mixed Nash equilibrium. Applying the periodicity algorithm to specific classes of games results in some very interesting outcomes, both quantitatively and qualitatively. As we have shown, in both classes of games that we presented, the algorithm yields mixed strategies, which we called periodic (although they are not periodic in the strict sense of section 3), for which the payoff of each player is equal to the mixed Nash equilibrium payoff, or larger. This strategy gives each player an outcome which does not depend on what the opponent will play. Periodicity in these classes of games is actually a solution concept. Moreover, the application of the algorithm to collective action games gives another interesting result.
In particular, we were able to show that the social optimum strategy can be played by adopting non-cooperative reasoning. The issue of cooperativity and periodicity was addressed too. As we substantiated, periodic strategies are exactly as non-cooperative as the mixed Nash equilibrium, and we demonstrated this in a quantitative way by means of some characteristic examples. We then introduced an epistemic framework and incorporated periodic strategies in it. As we proved, the number of types needed to describe the rationalizability cycle of a rationalizable periodic strategy is equal to two times the periodicity number of that action. The next step of this study would certainly be the inclusion of mixed strategies in multiplayer games. Our intention in this paper was to offer a first close look at this new concept, periodicity, and to point out the new quantitative features it provides. Of course, introducing more than two players will increase the complexity of the arguments we used in the mixed strategy case. Moreover, the cooperativity issue in games with more than two players is much more complex than in the case we examined, because the players are free to form coalitions. Periodicity then has to be reconsidered from this perspective. Clearly, the periodicity feature of strategic form games with finitely many actions can be very useful. Indeed, all the periodic actions can be found using some simple program code, which is clearly a good step towards finding all the rationalizable actions that are not Nash equilibria. In non-degenerate finite action games, non-Nash rationalizable actions are typically periodic, so the potential non-Nash rationalizable actions of a game with finitely many actions can be determined by finding its periodic strategies.
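The remark about finding periodic actions with simple program code can be sketched as follows, under one natural reading of the map Q (alternating maximization of each player's payoff over the opponent's actions). The two bimatrix games below are assumed Battle of the Sexes and Prisoner's Dilemma payoffs, not the paper's tables verbatim.

```python
def find_periodic_actions(A, B):
    """Player-A actions lying on a cycle of the (assumed) map Q."""
    nA, nB = len(A), len(A[0])

    def Q(a):
        b = max(range(nB), key=lambda j: A[a][j])     # best for A over B's actions
        return max(range(nA), key=lambda i: B[i][b])  # best for B over A's actions

    periodic = []
    for a0 in range(nA):
        a, seen = a0, set()
        while a not in seen:    # the orbit of a finite map must eventually repeat
            seen.add(a)
            a = Q(a)
            if a == a0:         # the orbit returned to the start: a0 is periodic
                periodic.append(a0)
                break
    return periodic

# assumed Battle of the Sexes: both actions are periodic
print(find_periodic_actions([[2, 0], [0, 1]], [[1, 0], [0, 2]]))  # [0, 1]

# assumed Prisoner's Dilemma: only mutual cooperation lies on a cycle
print(find_periodic_actions([[4, 0], [5, 1]], [[4, 5], [0, 1]]))  # [0]
```

Because the map acts on a finite action set, every orbit eventually repeats, so the scan terminates and the whole enumeration is linear in the number of actions times the cycle-detection cost.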
Finally, let us stress some issues that should be carefully addressed in future work:
• Three player mixed strategies and their relation to periodic strategies should be studied in detail. One should carefully examine whether there is any exceptional class of games with the special attributes of the two player games presented in this article. In particular, we should check whether the algorithm of periodic strategies leads to strategies for which the expected utility of each player is quantitatively higher than the corresponding Nash one, and in addition whether the periodic strategies of a player are independent of the other players' actions, just as in the two player case.
• Multi-player cooperativity and multi-player periodic strategies should also be formally addressed. The question whether periodic strategies imply any sort of cooperativity has to be re-addressed in a multi-player context, because in games with N ≥ 3 players, two or more players may form coalitions in order to cooperate against the rest. This case is much more complex than the two player case and therefore has to be carefully investigated.
• The case of continuum utility functions with a finite number of players should be thoroughly studied, to see whether periodicity appears in some solution concepts.
• In the case of Bayesian games, is there any connection between the types of the imperfect information case and the corresponding ex-ante or interim game? What about the role of periodicity and its connection to the imperfect information type space? These questions are of fundamental importance and should be formally addressed.
We hope to address these issues in a future publication, since they exceed the scope of this introductory paper.