World Scientific
Modeling Emerging Interpersonal Synchrony and its Related Adaptive Short-Term Affiliation and Long-Term Bonding: A Second-Order Multi-Adaptive Neural Agent Model

    https://doi.org/10.1142/S0129065723500387. Cited by: 48 (Source: Crossref)

    Abstract

    When people interact, their behavior tends to become synchronized, a mutual coordination process that fosters short-term adaptations, like increased affiliation, and long-term adaptations, like increased bonding. This paper addresses for the first time how such short-term and long-term adaptivity induced by synchronization can be modeled computationally by a second-order multi-adaptive neural agent model. It addresses movement, affect, and verbal modalities, and both intrapersonal and interpersonal synchrony. The behavior of the introduced neural agent model was evaluated in a simulation paradigm with different stimuli and communication-enabling conditions. Moreover, in this paper, mathematical analysis is addressed for adaptive network models and their positioning within the landscape of adaptive dynamical systems. The first type of analysis shows that any smooth adaptive dynamical system has a canonical representation by a self-modeling network. This implies theoretically that the self-modeling network format is widely applicable, which has also been found in many practical applications of this approach. Furthermore, stationary point and equilibrium analysis was addressed and applied to the introduced self-modeling network model. This analysis was used to verify the model, providing evidence that the implemented model is correct with respect to its design specifications.

    1. Introduction

    Whenever people interact, their behavior tends to become mutually coordinated in time, or synchronized. Interpersonal synchrony has been found to enhance relationship functioning, for example, by inducing greater levels of closeness, concentration, coordination, cooperation, affiliation, alliance, connection, or bonding.1,2,3,4,5,6,7,8,9,10 In the literature,11,12 it has been suggested that to model the complex, cyclical types of dynamics that occur, a dynamical systems modeling approach is needed. Notably, the benefits of interpersonal synchrony include patterns of mutual adaptation both in the short term and in the long term. For instance, in the context of psychotherapy, a patient and therapist who synchronize their movements8 may experience a stronger sense of sharing the present moment during a therapeutic session.13 Over multiple sessions, this increased social presence may strengthen the therapeutic bond, which allows the patient and therapist to work together more effectively.14

    The main goal of this paper is to address computational and mathematical analyses of the complex adaptive dynamics of such forms of short-term and long-term adaptivity of interaction behavior related to interpersonal synchronization, and to verify the hypothesis that the underlying mechanisms put forward in the literature indeed generate these social phenomena through an emerging and adaptive interactive process. More specifically, these analyses cover three different but closely related levels:

    Analysis of mechanisms from the literature in psychology and neuroscience that are suggested to play a role in these complex multi-adaptive dynamics. These include mechanisms for different forms of (synaptic and nonsynaptic) plasticity and for control over plasticity by metaplasticity (also called second-order plasticity or second-order adaptivity). Here, the underlying hypothesis in the literature is that the mechanisms put forward are sufficient to generate the emerging and adaptive patterns of synchronization and adaptation of the interaction behavior. This hypothesis is tested in silico in this paper by computational simulation based on these mechanisms.

    Analysis of mathematical formalization of these mechanisms behind the considered complex adaptive dynamics by an agent-based second-order multi-adaptive dynamical system. This covers analysis of conducted simulation experiments based on such formalization and includes analysis of stationary points and equilibria for the occurring dynamics and adaptivity.

    Analysis of how the specific representation of such an adaptive dynamical system by self-modeling networks used here is positioned in the wider landscape of adaptive dynamical systems. It is analyzed how the self-modeling network format can be used to provide a canonical representation for any smooth adaptive dynamical system, which also covers most neural system models.

    So, more specifically, the neural agent model that is a central focus here is an adaptive dynamical systems model based on a number of mechanisms in the literatures on cognitive, behavioral, and affective neuroscience. A neural basis for short-term behavioral adaptivity can be found in the recent work on the (nonsynaptic, intrinsic) adaptive excitability of (neural) states.15,16,17,18 By contrast, a neural basis for long-term adaptivity can be found in the classical notion of synaptic plasticity.19,20,21,22 Together, these two fundamentally different forms of adaptation yield a model of a multi-adaptive neural agent. The two forms of adaptation also interact with each other.

    The extent of adaptation that an agent requires may vary from situation to situation. The capacity to adjust plasticity to the demands of the situation relates to metaplasticity.23,24 This model of a neural agent models metaplasticity as a second-order form of plasticity that controls plasticity in a context-sensitive manner. The resulting model yields a second-order multi-adaptive neural agent, which is human-like in the sense that it incorporates an interplay of three major mechanisms for adaptivity that according to the neuroscientific literature characterizes human agents.

    Note that it is not claimed that the model is human-like in the sense that it would cover the applied neural mechanisms at a physiological level; addressing that level was not within the scope of this research. Instead, these neural mechanisms were considered and modeled in a more abstract manner, at a functional level. Investigating physiological scalability would be a next enterprise to be addressed.

    This model of a neural agent further includes intrapersonal synchrony and interpersonal synchrony and their links to short-term and long-term behavioral adaptivity. To model the pathway from synchrony patterns to this behavioral adaptivity, we included both built-in intrapersonal synchrony and interpersonal synchrony detectors. Here, intrapersonal synchrony means that within an agent, actions for the different modalities occur in a coordinated manner. Interpersonal synchrony means that for each modality, the actions of the two agents occur in a coordinated manner. The addressed modalities are movement, affect, and verbal modalities. We included these three modalities because they have each been shown to be influential in interpersonal behavior.14

    We evaluated the neural agent model in a series of simulation experiments for two agents with a setup in which a number of stochastic circumstances were covered in different (time) episodes. The simulations included not only episodes with a stochastic common stimulus for the two agents, but also episodes with different stochastic stimuli for the agents. Moreover, to analyze the role of communication, stochastic circumstances were also included for episodes when communication was enabled by the environment and episodes when communication was not enabled.

    Next, as part of further analysis of the self-modeling network modeling approach it is shown how any (smooth) adaptive dynamical system can be modeled in a canonical way as a self-modeling network model. In this way, any adaptive dynamical system has its canonical representation as a self-modeling network model and can be analyzed based on this canonical representation. On the one hand, this shows that the chosen modeling approach does not introduce biases or limitations if adaptive dynamical systems are modeled using it. In particular, it shows also that the approach generalizes most common neural system models. On the other hand, this was a basis to show how stationary point analysis and equilibrium analysis for adaptive dynamical systems can be performed by using the self-modeling network representation for the specific adaptive dynamical system model introduced here.

    2. Main Assumptions and Background Knowledge

    In this section, we present the main assumptions behind the introduced adaptive neural agent model and relate them to the relevant neuroscience literatures. This grounding in neuroscience is based on pathways for a circular interplay of synchrony with both nonsynaptic plasticity16 and synaptic plasticity,19,20,21,22 thereby covering both short-term time scales and long-term time scales and their interaction. More specifically, the following underlying assumptions are made for the pathways involved; for a conceptual overview, see Fig. 1. Note that the main example used in the presentation concerns two agents A and B and their interaction.

    Fig. 1.

    Fig. 1. Conceptual overview of the processes involved in multimodal (intra- and inter-personal) synchrony and behavioral adaptivity in social interaction.

    2.1. Interpersonal synchrony leads to adaptation of interaction behavior

    Interpersonal synchrony is often followed by a behavioral change or adaptation of mutual behavior.1,2,3,4,5,6,7,8,9,10 This adaptive shift in mutual behavioral coordination has been observed, for instance, in psychotherapy sessions. Research has shown that therapists were rated more favorably and as more empathic when, beforehand, they were instructed to make their movements more synchronized with the client.25,26,27 Similarly, Ramseyer and Tschacher8 found that initial movement synchrony between client and therapist was predictive of the client’s experience of the quality of the alliance at the end of each session. Furthermore, Koole and Tschacher14 reviewed converging evidence that movement synchrony has a positive effect on the working alliance between patient and therapist. More generally, synchrony in face-to-face interactions has been found to promote interpersonal affiliation.10,28

    2.2. Behavioral adaptation after interpersonal synchrony occurs both in the form of short-term adaptation and long-term adaptation

    Much research on interpersonal synchrony has focused on short-term adaptive changes in interpersonal coordination.1,6,9,10,29 However, several lines of research have observed effects of interpersonal synchrony on long-term adaptation as well. First, developmental research has observed that movement synchrony between infant and caregivers predicts social interaction patterns of the child several years later.28 Second, research on close relationships suggests that early patterns of interpersonal synchrony predict subsequent indicators of relationship functioning; for instance, one study found that spouses’ patterns of cortisol variation converged over a period of years, indicating long-term shifts in interpersonal coordination.30

    Third and last, research on psychotherapy processes has found that markers of interpersonal synchrony in early sessions can predict the development of the therapeutic relationship8 and therapeutic outcomes.7 Long-term adaptation processes remain less well-studied than short-term adaptation processes. Nevertheless, the convergence of evidence is sufficient to conclude that interpersonal synchrony is likely to promote both short-term and long-term adaptation in interpersonal relationships.

    2.3. The behavioral adaptation relies on different neural mechanisms: Synaptic plasticity of connections and nonsynaptic plasticity of intrinsic excitability

    In the neuroscientific literature, a distinction is made between synaptic and nonsynaptic (intrinsic) adaptation. The classical notion of synaptic plasticity has been used to explain long-term behavioral adaptation.19,20,21,22 This addresses how the strength of a connection between different states is adapted over time due to simultaneous activation of the connected states. By contrast, the nonsynaptic adaptation of intrinsic excitability of (neural) states has been addressed in more detail more recently.15,16,17,18 The latter form of adaptation has been related, for example, to homeostatic regulation17 and to how deviant dopamine levels during sleep allow dreams to use more associations due to more easily excitable neurons.31 Moreover, both (synaptic and nonsynaptic) forms of adaptation can easily work together.32

    In the neural agent model, these two adaptation mechanisms and their interaction have been used to model behavioral adaptivity: the former for long-term adaptation and the latter for short-term adaptation. Here an interplay of two types of adaptivity occurs. Synchrony not only leads to short-term adaptation; short-term adaptation itself also intensifies interaction, which can lead to more synchrony, which in turn can strengthen long-term adaptation. Moreover, long-term adaptivity also strengthens interaction, which leads to more synchrony and consequently stronger short-term adaptivity. In this way, via multiple circular pathways, a dynamic interplay occurs between synchrony, short-term adaptivity, and long-term adaptivity.

    Plasticity is not a constant feature, as it is often highly context-dependent, according to what is called metaplasticity.23,24 For example, “adaptation accelerates with increasing stimulus exposure”.24 To enable such context-sensitive control of plasticity, second-order adaptation (i.e. adaptation of the adaptation) has been included in the neural agent model, which makes the model more realistic.

    2.4. The pathways from synchrony to behavioral adaptation involve synchrony detection states

    If synchrony occurs for an agent and due to this the agent adapts the interaction behavior, this suggests that agents possess a facility to notice or experience synchrony patterns for the different modalities. Indeed, the assumption is made that agents do in some way (perhaps unconsciously) detect synchrony and from there may trigger behavioral adaptation for their interaction behavior. In the pathway from synchrony patterns to changed interaction behavior patterns, such synchrony detection states can be considered as specific mediating mental states. Such a state p in general has been called a mediating state for the effect of a past pattern a on a future pattern b entailed by pattern a (Refs. 33 and 34); similarly, such a (brain) state is referred to as describing “informational criteria” for future activation.35,36 In line with previous research,37 it is assumed that not only the detected interpersonal synchrony but also the detected intrapersonal synchrony relating to a conscious emotion has a causal effect on the behavioral adaptivity.

    3. Self-Modeling Network Modeling

    The presented neural agent model is an adaptive dynamical system model designed and specified based on network-oriented modeling. The network-oriented modeling approach used here is basically a causal network modeling approach where nodes model states that have activation values that change over time and connections between these states model causal relations that have their effects in a temporal, dynamic manner on the state activations. Thus, dynamical systems are modeled. Moreover, by enabling these causal relations and characteristics for their effects on state activations to change as well, also adaptive dynamical systems are covered. This is also done in a network-oriented manner, using a so-called self-modeling network architecture. In Sec. 6, it will be shown that any smooth adaptive dynamical system can be modeled in this way. In this section, this modeling approach is briefly introduced.

    Following the network-oriented modeling approach38,39,40 used here, a temporal–causal network model is characterized by (here X and Y denote nodes of the network, also called states, and X(t) and Y(t) denote their activation values at time t):

    Connectivity characteristics

    Connections from a state X to a state Y and their weights ωX,Y.

    Aggregation characteristics

    For any state Y, some combination function c_{π_Y,Y}(V_1, …, V_k) with vector of parameter values π_Y = (π_{1,Y}, …, π_{m,Y}) defines the aggregation that is applied to the impacts V_i = ω_{X_i,Y} X_i(t) on Y from its incoming connections from states X_i.

    Timing characteristics

    Each state Y has a speed factor ηY defining how fast it changes for given causal impact.

    Note that for the sake of notational simplicity, c_{π_Y,Y} will often be denoted by c_Y, omitting the subscript π_Y; this does not mean that there are no parameters, they are just left implicit.

    These network characteristics ω_{X,Y}, c_{π_Y,Y}, π_Y, and η_Y for a given network model serve as a (formal) design specification of this network model. The following canonical difference and differential equations for temporal–causal network models, used for simulation and analysis of such network models, incorporate these network characteristics ω_{X,Y}, c_Y, π_{i,Y}, and η_Y in a standard numerical format:

    Y(t + Δt) = Y(t) + η_Y [c_{π_Y,Y}(ω_{X_1,Y} X_1(t), …, ω_{X_k,Y} X_k(t)) − Y(t)] Δt,
    dY(t)/dt = η_Y [c_{π_Y,Y}(ω_{X_1,Y} X_1(t), …, ω_{X_k,Y} X_k(t)) − Y(t)]    (1)
    for any state Y, where X_1 to X_k are the states from which Y gets its incoming connections. Note that (1) has a format similar to that of recurrent neural networks. Within the dedicated software environment implemented in MATLAB, currently around 60 useful basic combination functions are included in a combination function library.39
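For illustration, the canonical update (1) can be sketched as a simple Euler scheme in Python (an illustrative sketch, not the authors' MATLAB software; the function name euler_step and the matrix/vector layout are my own choices):

```python
import numpy as np

def euler_step(x, omega, combine, eta, dt):
    """One Euler step of the canonical temporal-causal network update, Eq. (1).

    x       : vector of current activations X_1(t), ..., X_n(t)
    omega   : n x n matrix, omega[i, j] = weight of connection X_i -> X_j
    combine : one combination function c_Y per state; each maps the vector
              of impacts V_i = omega[i, j] * x[i] to one aggregated value
    eta     : vector of speed factors eta_Y
    dt      : step size Delta t
    """
    x_new = np.empty_like(x)
    for j in range(len(x)):
        impacts = omega[:, j] * x  # V_i = omega_{X_i,Y} X_i(t)
        x_new[j] = x[j] + eta[j] * (combine[j](impacts) - x[j]) * dt
    return x_new
```

A state with speed factor 0 stays constant, while a state with speed factor 1 moves a fraction Δt of the way toward its aggregated impact per step.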

    The above concepts enable us to design network models and their dynamics in a declarative manner, based on mathematically defined functions and relations and specified in a standard table format covering all network characteristics (called role matrices, see Appendix  B). The examples of combination functions that are applied in the model introduced here can be found in Table 1. Here, for the third and fourth function, rand(1, 1) draws a random number from [0, 1] in a uniform manner and a is a persistence factor (with value 0.5 used in the simulations).

    Table 1. The combination functions used in the introduced network model.

    Advanced logistic sum
      alogistic_{σ,τ}(V_1, …, V_k) = [1/(1 + e^{−σ(V_1+⋯+V_k−τ)}) − 1/(1 + e^{στ})]·(1 + e^{−στ})
      Parameters: π_1 = steepness σ; π_2 = excitability threshold τ
      Used for: X_4–X_5, X_10–X_16, X_24–X_26, X_31–X_38, X_45–X_47, X_54–X_59, X_63–X_71, X_75–X_93

    Complemental difference
      compdiff(V_1, V_2) = 0 if V_1 = V_2 = 0; 1 − |V_1 − V_2| / max(V_1, V_2) otherwise
      Parameters: none
      Used for: X_18–X_23, X_39–X_44 (synchrony detectors)

    Random stepmod
      randstepmod_{ρ,δ}(V) = 0 if 0 ≤ t mod ρ ≤ δ; a·V + (1 − a)·rand(1, 1) otherwise
      Parameters: π_1 = repetition ρ; π_2 = step time δ
      Used for: X_3 (common stimulus), X_60–X_62, X_72–X_74 (communication enablers)

    Random stepmodopp
      randstepmodopp_{ρ,δ}(V) = 0 if δ ≤ t mod ρ ≤ ρ; a·V + (1 − a)·rand(1, 1) otherwise
      Parameters: π_1 = repetition ρ; π_2 = step time δ
      Used for: X_1, X_2 (individual stimuli)

    Euclidean
      eucl_{n,λ}(V_1, …, V_k) = ((V_1^n + ⋯ + V_k^n)/λ)^{1/n}
      Parameters: π_1 = order n; π_2 = scaling factor λ
      Used for: X_6–X_9, X_27–X_30 (sensing), X_48–X_53 (communication)
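The deterministic functions of Table 1 can be sketched in Python as follows (illustrative translations of the table's specifications; the stochastic stepmod functions, which depend on simulation time, are omitted here):

```python
import numpy as np

def alogistic(v, sigma, tau):
    # Advanced logistic sum with steepness sigma and excitability threshold tau;
    # normalized so that zero input maps to 0
    s = float(np.sum(v))
    return ((1.0 / (1.0 + np.exp(-sigma * (s - tau)))
             - 1.0 / (1.0 + np.exp(sigma * tau)))
            * (1.0 + np.exp(-sigma * tau)))

def compdiff(v1, v2):
    # Complemental difference: 1 when the two values coincide,
    # 0 when they are maximally apart (used by the synchrony detectors)
    if v1 == v2 == 0:
        return 0.0
    return 1.0 - abs(v1 - v2) / max(v1, v2)

def eucl(v, n, lam):
    # n-th order Euclidean combination with scaling factor lam
    return (float(np.sum(np.asarray(v, dtype=float) ** n)) / lam) ** (1.0 / n)
```

For example, eucl with n = 1 and λ = 1 reduces to a plain sum of impacts, while a large steepness σ makes alogistic approach a step function at the threshold τ.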

    Realistic network models are usually adaptive: often not only their states but also some of their network characteristics change over time. By using a self-modeling network (also called a reified network), a similar network-oriented conceptualization can also be applied to adaptive networks to obtain a declarative description using mathematically defined functions and relations for them as well.39,40 This works through the addition of new states to the network (called self-model states) which represent (adaptive) network characteristics. In the graphical three-dimensional (3D)-format as shown in Sec. 4, such additional states are depicted at a next level (called self-model level or reification level), where the original network is at the base level.

    As an example, the weight ωX,Y of a connection from state X to state Y can be represented (at a next self-model level) by a (connectivity) self-model state named WX,Y, which can be used to model synaptic plasticity.19,20,21,22 Similarly, all other network characteristics from ωX,Y, cY (…) and ηY can be made adaptive by including self-model states for them. For example, for adaptive excitability15,16,17 the threshold τY (for a logistic combination function) for a state Y can be represented by a (aggregation) self-model state named TY and an adaptive speed factor ηY can be represented by a (timing) self-model state named HY. Dynamics for the activation values for these self-model states are modeled by adding their own network characteristics (thus integrating them in the network structure) and applying Eq. (1) for them.

    If for all network characteristics ω, π, η for all base level states, respective self-model states W, P, H are introduced representing these network characteristics, then the canonical difference and differential equations for the base level states of the self-modeling network model are

    Y(t + Δt) = Y(t) + H_Y(t) [c_{P_Y(t),Y}(W_{X_1,Y}(t) X_1(t), …, W_{X_k,Y}(t) X_k(t)) − Y(t)] Δt,
    dY(t)/dt = H_Y(t) [c_{P_Y(t),Y}(W_{X_1,Y}(t) X_1(t), …, W_{X_k,Y}(t) X_k(t)) − Y(t)],    (2)
    where P_Y(t) = (P_{1,Y}(t), …, P_{m,Y}(t)).

    This canonical difference equation is incorporated in the dedicated software environment. By instantiating this general difference equation (2) by proper values for the network characteristics for all base states Y and similarly instantiating equation (1) for all self-model states, the software environment runs a system of n difference equations where n is the number of (base and self-model) states in the network.

    When Eqs. (2) are compared to Eqs. (1), it can be noticed that at each point in time t, for the value of each network characteristic the activation value of its corresponding self-model state is used: for η_Y the value H_Y(t) is used, for ω_{X_i,Y} the value W_{X_i,Y}(t), etc. In this way, each of these self-model states is assigned the functional role of the specific network characteristic it represents. In particular, when the activation values of these W-states, P-states, and H-states change, the corresponding network characteristics change accordingly. This makes these network characteristics adaptive.

    More mathematical background of this self-modeling network architecture construction and Eqs. (2) has been described elsewhere.39
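The functional role of the self-model states in Eq. (2) can be sketched for a single base state (a minimal illustrative sketch; the names self_model_step and combine are hypothetical):

```python
import numpy as np

def self_model_step(y, x, W, H, combine, dt):
    """One Euler step of Eq. (2) for a single base state Y.

    y       : current activation Y(t)
    x       : activations X_1(t), ..., X_k(t) of the states connecting to Y
    W       : current activations of self-model states W_{X_i,Y},
              used in the role of connection weights
    H       : current activation of self-model state H_Y,
              used in the role of speed factor
    combine : combination function c_Y applied to the impacts
    """
    impacts = W * x  # W_{X_i,Y}(t) X_i(t)
    return y + H * (combine(impacts) - y) * dt
```

The W- and H-values passed in here are themselves state activations that evolve by their own instances of Eq. (1), which is what makes the weights and speed adaptive.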

    Usually, self-model states are introduced not for all network characteristics but only for part of them. For example, in case only self-model states P for the combination function parameters π are introduced, the canonical difference and differential equations are

    Y(t + Δt) = Y(t) + η_Y [c_{P_Y(t),Y}(ω_{X_1,Y} X_1(t), …, ω_{X_k,Y} X_k(t)) − Y(t)] Δt,
    dY(t)/dt = η_Y [c_{P_Y(t),Y}(ω_{X_1,Y} X_1(t), …, ω_{X_k,Y} X_k(t)) − Y(t)],    (3)
    where P_Y(t) = (P_{1,Y}(t), …, P_{m,Y}(t)). This specific case will come back in the mathematical analysis addressed in Sec. 6.2 (to establish Theorem 2 there).

    Note that difference and differential equations (2) are not exactly in the standard format of a temporal–causal network, as H_Y is not a constant speed factor and the P- and W-values are not constant either. However, they can be rewritten into the temporal–causal network format when the following combination function c*_Y(..) is defined:

    c*_Y(H, P_1, …, P_m, W_1, …, W_k, V_1, …, V_k, V) = H·c_{P,Y}(W_1 V_1, …, W_k V_k) + (1 − H)·V,    (4)
    where P = (P_1, …, P_m).

    Based on this combination function, consider the following difference equation:

    Y(t + Δt) = Y(t) + [c*_Y(H_Y(t), P_{1,Y}(t), …, P_{m,Y}(t), W_{X_1,Y}(t), …, W_{X_k,Y}(t), X_1(t), …, X_k(t), Y(t)) − Y(t)] Δt.    (5)

    This is indeed in temporal–causal network format (1) (with speed factor 1). Now note that, using (4), Eq. (5) can be rewritten as follows:

    Y(t + Δt) = Y(t) + [H_Y(t) c_{P_Y(t),Y}(W_{X_1,Y}(t) X_1(t), …, W_{X_k,Y}(t) X_k(t)) + (1 − H_Y(t)) Y(t) − Y(t)] Δt
              = Y(t) + [H_Y(t) c_{P_Y(t),Y}(W_{X_1,Y}(t) X_1(t), …, W_{X_k,Y}(t) X_k(t)) − H_Y(t) Y(t)] Δt
              = Y(t) + H_Y(t) [c_{P_Y(t),Y}(W_{X_1,Y}(t) X_1(t), …, W_{X_k,Y}(t) X_k(t)) − Y(t)] Δt,    (6)
    where P_Y(t) = (P_{1,Y}(t), …, P_{m,Y}(t)).

    Equation (6) is exactly difference equation (2) above; this confirms that the combination function c*_Y(..) chosen in (4) indeed casts the self-modeling network into the temporal–causal network format (1).
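This rewriting can also be checked numerically: one update via format (5) based on c*_Y(..) (with speed factor 1) should coincide with one update via format (2). A small sketch (the logistic choice for c and all parameter values are arbitrary illustrations):

```python
import numpy as np

def c_star(H, P, W, V, Y, c):
    # Combination function (4): c*_Y(H, P, W, V, Y) = H * c_{P,Y}(W*V) + (1 - H) * Y
    return H * c(P, W * V) + (1.0 - H) * Y

# Any smooth parameterized combination function (illustrative logistic choice)
c = lambda P, impacts: 1.0 / (1.0 + np.exp(-P[0] * (np.sum(impacts) - P[1])))

rng = np.random.default_rng(0)
y, dt = 0.3, 0.1
x = rng.random(4)            # base state activations X_i(t)
W = rng.random(4)            # W-state values, in the role of weights
H = 0.7                      # H-state value, in the role of speed factor
P = np.array([5.0, 0.5])     # P-state values, in the role of parameters

via5 = y + (c_star(H, P, W, x, y, c) - y) * dt   # format (5), speed factor 1
via2 = y + H * (c(P, W * x) - y) * dt            # format (2)
```

By the algebra in (6), via5 and via2 agree exactly (up to floating-point rounding) for any choice of c, weights, and parameters.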

    As the outcome of a process of network reification is itself also a temporal–causal network model, as has been shown above, this self-modeling network construction can easily be applied iteratively to obtain multiple orders of self-models at multiple (first-order, second-order, etc.) self-model levels. For example, a second-order self-model may include a second-order (timing) self-model state H_{W_{X,Y}} representing the speed factor η_{W_{X,Y}} for the dynamics of first-order self-model state W_{X,Y}, which in turn represents the adaptation of connection weight ω_{X,Y}. Similarly, a second-order self-model may include a second-order (timing) self-model state H_{T_Y} representing the speed factor η_{T_Y} for the dynamics of first-order self-model state T_Y, which in turn represents the adaptation of excitability threshold τ_Y for Y.

    In this paper, this multi-level self-modeling network modeling perspective will be applied to obtain a second-order adaptive network architecture addressing controlled behavioral adaptation induced by detected synchrony. In this self-modeling network architecture, the first-order self-model models the adaptation of the base level network that models behavior, and the second-order self-model level the control over this adaptation. As an example, the control level can be used to make the adaptation speed context-sensitive as addressed by metaplasticity literature.23,24 For instance, the metaplasticity principle “adaptation accelerates with increasing stimulus exposure”24 formulated by Robinson et al. can easily be modeled by using second-order self-model states; this actually has been done for the introduced model, as will be discussed in Sec. 4.
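This control structure can be sketched as follows (an illustrative toy sketch under assumed dynamics: the Hebbian rule, the persistence factor mu, and the exposure-tracking H-state are my own simplified choices, not the paper's exact specification):

```python
def step(W, H, x, y, exposure, dt, eta_H=0.5, mu=0.1):
    """One step of second-order adaptation for one connection.

    W        : first-order self-model state W_{X,Y} (adaptive weight)
    H        : second-order self-model state H_{W_{X,Y}} (adaptive speed of W)
    x, y     : activations of the connected base states X and Y
    exposure : current stimulus exposure level in [0, 1]
    """
    dW = x * y * (1.0 - W) - mu * W   # Hebbian plasticity with persistence
    W_new = W + H * dW * dt           # adaptation speed controlled by H-state
    H_new = H + eta_H * (exposure - H) * dt  # metaplasticity: H tracks exposure
    return W_new, H_new
```

With sustained co-activation and high exposure, H rises toward 1 and the weight W adapts faster, illustrating the principle that adaptation accelerates with increasing stimulus exposure.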

    4. The Adaptive Neural Agent Model

    In this section, our adaptive neural agent model is explained in some detail. The controlled adaptive agent design uses a self-modeling network architecture of three levels as discussed in Sec. 3: a base level, a first-order self-model level, and a second-order self-model level. Here the (middle) first-order self-model level models how connections and excitability thresholds of the base level are adapted over time, and the (upper) second-order self-model level models the context-sensitive control over the adaptations. Appendix B provides explanations for all of its states and a full specification of the model.

    4.1. Base level

    Figure 2 shows a graphic overview of the base level. For each agent, interaction states were modeled: states involved in sensing (indicated by sense) are on the left-hand side of each box, and states involved in execution or expression of actions (move, exp_affect, talk) on the right-hand side. In between these interaction states, within a box are the agent’s internal mental states; outside the boxes are the world states. Note that we assume that each agent also senses its own actions, modeled by the arrows from right to left outside the box.

    Fig. 2.

    Fig. 2. (Color online) Base level of the introduced adaptive agent model (upper picture) with three modalities and (in dark pink) six synchrony detection states for intrapersonal and interpersonal synchrony and how the agents interact (lower picture) according to the three modalities.

    For each agent, we modeled a few internal mental states such as sensory representation states (rep) and preparation states (prep) for each of the three modalities: movement m, expression of affect b, and verbal action v.

    Furthermore, each agent has a conscious emotion state for affective response b (cons_emotion). Each of the mentioned states is depicted in Fig. 2 by a light pink circle shape. For each modality, its representation state has an outgoing (response) connection to the corresponding preparation state and it has an incoming (prediction) connection back from the preparation state to model internal mental simulation.41,42

    Finally, there are the six synchrony detector states (depicted in Fig. 2 by the darker pink diamond shapes) which are introduced here. As in previous research37 we cover three intrapersonal synchrony detection states for the three pairs of the three modalities:

    • movement–emotion (mb),

    • movement–verbal action (mv),

    • emotion–verbal action (bv).

    These intrapersonal synchrony detection states have incoming connections from the two execution states for the modalities they address. The conscious emotion state is triggered by incoming connections from the preparation state for affective response b together with the three intrapersonal synchrony detection states.43 In addition, the conscious emotion state has an incoming connection from the verbal action execution state (for noticing the emotion in the verbal utterance) and an outgoing connection to the preparation of the verbal action (for emotion integration in the verbal action preparation).

    There are three interpersonal synchrony detection states for the three modalities m, b, and v. Each of them has two incoming connections: from the sensing state (representing the action of the other agent) and the execution state (representing the own action) of the modality addressed.
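As a sketch of how such a detector could aggregate its two inputs (assuming the compdiff combination function of Table 1 is applied to the two incoming activations; the function name interpersonal_sync is hypothetical):

```python
def interpersonal_sync(sense_other, exec_own):
    """Interpersonal synchrony detector for one modality: the complemental
    difference of the sensed action of the other agent and the own executed
    action. Returns 1.0 for perfectly matched activations, 0.0 for maximally
    mismatched ones (a sketch of the detector's aggregation)."""
    if sense_other == exec_own == 0:
        return 0.0
    return 1.0 - abs(sense_other - exec_own) / max(sense_other, exec_own)
```

For instance, matched activations of 0.8 and 0.8 yield full detected synchrony, while 0.0 against 0.5 yields none.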

    For a few states and connections, their excitability and connection weights are adaptive depending on detected synchrony: detected synchrony leads to becoming more sensitive to sensing an agent and expressing to that agent (short-term effect) and to connecting stronger to the agent (long-term effect). Here, two different time scales for the adaptations are considered:

    In the short term, enhancing the excitability of such internal states, so that they become more responsive or sensitive (a form of instantaneous homeostatic regulation).

    In the long term, making the weights of such connections stronger, so that propagation between states is strengthened (a form of more endurable bonding).

    This applies to two types of states and four types of connections in particular, all playing an important role in the interaction behavior of the two agents:

    Short-term adaptive excitability for internal states

    The representation states for each of the three modalities.

    The execution states for each of the three modalities.

    Long-term adaptive internal and external connections

    The (representing) connections from sensing to representation states for each of the three modalities.

    The (executing) connections from preparation to execution states for each of the three modalities.

    The (observing) connections from world states to sensing states.

    The (effectuating) connections from execution states to world states.

    Thus, more detected synchrony will lead to enhanced excitability for these types of states (short-term adaptation) and to these connections becoming stronger (long-term adaptation); each of these adaptations contributes in its own way (and on its own time scale) to the interaction behavior of the agents. In the short term, more sensitive representation states will lead to gaining better images of the modalities of the other agent; this will make the sensed signals better available and accessible for the agent. More sensitive execution states will lead to better expressed own modalities, so that the other agent can sense them better.

    Over time and repeated interactions, a stronger (external) observing connection will lead to sensing the other agent better (e.g. turning sensors in the right direction and bending or getting closer to the other), and a stronger representing connection again (but now in a more endurable manner) will make the sensed signals better available and accessible for the agent. Conversely, a stronger executing connection will also contribute (in an endurable manner) to stronger expression and acting toward the other and a stronger effectuating connection to better availability (for the other) of the action effects in the world (e.g. more visible, better hearable by directing and positioning in the right direction, and bending or getting closer to the other). In Sec. 4.2, we discuss in more detail how we modeled these forms of adaptivity and their control using the principle of self-modeling of the network model.

    Finally, at the base level some world states are modeled for stimuli s that are sensed by the agents. In the simulations, they have stochastic activation levels. In some episodes one common stimulus is observed by both agents (for example, when they physically meet and therefore are in the same environment), while in other episodes the agents receive different stimuli. Furthermore, the suitability of the world situation for enabling communication between the two agents is modeled by similar stochastic fluctuations. Moreover, two context states are included to model the conditions needed to maintain the excitability thresholds properly.

    4.2. Modeling adaptation and its control

    We modeled the adaptation and its control needed in the neural agent model using a “self-modeling network”39,40; see also Sec. 3. Following Sec. 4.1, for a number of states Y adaptive excitability has been modeled via the excitability threshold τY of the logistic function used for these states (see Table 1). Moreover, the strengthening of connections from X to Y has been modeled via adaptive connection weights ωX,Y. Following Sec. 3, these adaptations have been modeled through self-modeling for these τY and ωX,Y by adding the following first- and second-order self-model states:

    First-order self-model T-states TY are used for short-term adaptation of the adaptive base excitability thresholds τY of the internal representation states and execution states Y for the three considered modalities (movement, affective response, and verbal action). For each agent there are six of these T-states: three for the representation base states and three for the execution base states (one per modality in each case).

    First-order self-model W-states WX,Y are used for adaptation of the adaptive base connection weights ωX,Y of both internal and external connections for the three considered modalities: internal connections at the base level from sensing states to representation states and from preparation states to execution states, and external connections from execution states to world states and from world states to sensing states. For each agent there are 12 of these W-states, for the connections from world states to sensing states, from sensing states to representation states, from preparation states to execution states, and from execution states to world states (all for the three modalities).

    Second-order self-model HT-states are used for control of the T-states for adaptation of the adaptive excitability thresholds τY for the internal representation states and execution states Y. For each agent there is one of these states.

    Second-order self-model HW-states are used for control of the W-states for the adaptation of the adaptive base connection weights ωX,Y. For each agent there is one of these states.
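    To make this scheme concrete, the canonical update can be sketched in a few lines of Python (a minimal illustration of the self-modeling principle; the helper `euler_step` and the state names are ours, not the model's exact identifiers): at each step, the current T-state value is used as the excitability threshold τY, the current W-state value as the connection weight ωX,Y, and the current H-state values as the speed factors (learning rates) of the corresponding first-order states.

```python
def euler_step(value, aggimpact, speed, dt):
    """One Euler step of the canonical equation dY/dt = eta_Y * (c_Y(...) - Y)."""
    return value + speed * (aggimpact - value) * dt

# Illustrative wiring for one base state Y with one incoming connection X -> Y.
# tau_Y and omega_XY are not fixed numbers: they are read from self-model states.
t_state, w_state, h_t, h_w = 0.5, 0.8, 0.3, 0.01
tau_Y = t_state        # T-state supplies the excitability threshold of Y
omega_XY = w_state     # W-state supplies the connection weight X -> Y
# The H-states in turn supply the speed factors of the T- and W-states
# (aggimpact values here are arbitrary placeholders for the synchrony input):
t_state = euler_step(t_state, aggimpact=0.4, speed=h_t, dt=0.5)
w_state = euler_step(w_state, aggimpact=0.9, speed=h_w, dt=0.5)
```

Because h_w is two orders of magnitude smaller than h_t, the W-state drifts far more slowly than the T-state, which is exactly how the model separates long-term bonding from short-term affiliation dynamics.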

    Figure 3 shows the overall design of the network model; here, the first-order self-model states are in the middle (blue) plane and the second-order self-model states in the upper (purple) plane. The first-order states include T-states representing the excitability thresholds of representation and execution states and W-states representing the weights of the different types of adaptive connections addressed.


    Fig. 3. Overview of the overall second-order adaptive network model.

    Recall from Sec. 3 the canonical difference and differential equation (2) for a self-modeling network. This equation shows that at each time point the values of these self-model T-states and W-states are used as the values of the corresponding network characteristics. Hence, by changing the activation values of these T-states and W-states over time t, the corresponding excitability thresholds and connection weights change accordingly, which makes them adaptive. Such change of the values of the T-states and W-states occurs due to the influences from the detected synchronies, modeled by the upward (blue) arrows in Fig. 3 from the synchrony detection states in the base plane to the T-states and W-states in the middle plane.

    For most states the combination function alogistic is used, which has an excitability threshold parameter that can be made adaptive; see the last column in Table 1. The synchrony detection states, however, use a different function called compdiff to measure the extent of synchrony. For further details, see Table 1 and Appendix  B.
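    As an illustration, the advanced logistic sum function can be sketched as follows; the alogistic form below follows the standard definition used in this network-oriented modeling literature, while the compdiff body is only a hedged guess (a complement-of-difference measure), since the paper's actual definition is in Appendix B.

```python
import math

def alogistic(values, sigma=5.0, tau=0.5):
    """Advanced logistic sum combination function with steepness sigma and
    excitability threshold tau; shifted and rescaled so that an all-zero
    input maps to 0 and the upper limit is 1."""
    s = sum(values)
    return ((1.0 / (1.0 + math.exp(-sigma * (s - tau)))
             - 1.0 / (1.0 + math.exp(sigma * tau))) * (1.0 + math.exp(-sigma * tau)))

def compdiff(v1, v2):
    """Hypothetical sketch of a difference-complement synchrony measure:
    1 minus the absolute difference of two activation values. The paper's
    actual compdiff (Appendix B) may differ, e.g. by aggregating over time."""
    return 1.0 - abs(v1 - v2)
```

Lowering tau (as the T-states do when synchrony is detected) shifts the logistic curve left, so the same input yields a higher activation — precisely the enhanced-excitability effect described above.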

    There are four second-order self-model states to control the adaptation: two second-order self-model states HTA and HTB for excitability adaptation control, one for each agent, and two second-order self-model states HWA and HWB for connection weight adaptation control, also one for each agent. These second-order self-model states represent the adaptation speed (learning rate) for the adaptive excitability threshold T-states and connection weight W-states of the respective agents A and B. Based on the canonical difference and differential equation (2) for a self-modeling network from Sec. 3, at each time point t the activation values of the second-order self-model states HTA, HTB, HWA, and HWB are used as the values of these network characteristics of the first-order self-model T-states and W-states for A and B. These second-order self-model states HTA, HTB, HWA, and HWB model the second-order adaptation (or metaplasticity) principle “adaptation accelerates with stimulus exposure”.24 To this end they have incoming connections (blue arrows from base plane to upper plane) from the stimulus representation states at the base level.

    5. Simulation Results

    Appendix B provides a full specification of the model as used in our simulations. In general, the values have been chosen in a standard manner. For example, all positive connection weights are 1, except for those of the long-term adaptation speed self-model states HWA and HWB, which are 0.01; see Table B.4. The negative connection weights for the T-states are −0.12, so that together they add up to −0.72, which counterbalances the positive weight 1 strongly enough to have an effect. The values of the steepness parameter σ (see Table B.6) for the combination function alogistic were all set to 5, which is also a kind of default value. The values of the threshold parameter τ have often been set to 0.5, but higher for states with multiple incoming connections.

    For all states for which the Euclidean combination function eucl is used, the order parameter n is 1, which makes the function linear, and the scaling factor λ is the sum of the weights of the incoming connections, which normalizes it. Note that the time unit is kept abstract; depending on the application context one might think of minutes, for example for therapeutic sessions.
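    For completeness, this normalized Euclidean combination function can be sketched as follows (a minimal version with order n and scaling factor λ as parameters; with n = 1 and λ equal to the sum of the incoming connection weights, as used here, it reduces to a normalized linear sum):

```python
def eucl(values, n=1, lam=1.0):
    """Euclidean combination function: the n-th root of the scaled sum of
    n-th powers of the (already weight-multiplied) incoming impacts."""
    return (sum(v ** n for v in values) / lam) ** (1.0 / n)

# Two incoming connections with weights 1 and 1 and source activations
# 0.4 and 0.8; normalizing by lam = 1 + 1 keeps the result in [0, 1]:
y = eucl([1 * 0.4, 1 * 0.8], n=1, lam=2.0)   # approximately 0.6
```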

    5.1. Design of the simulation experiments

    In this section, we evaluate our neural agent model in an experimental simulation paradigm. Our paradigm was set up in such a way that we could evaluate the behavior of our two agents during four different types of consecutive episodes (see Table 2 and Fig. 4) which are explained as follows. Each of these types of episodes lasts for 30 time units, so that a cycle of four episodes equals 120 time units. Our total simulation run had a duration of 840 time units and the step size (Δt) was 0.5, resulting in 1680 computational steps in total for each simulation run. This means that each cycle of four episodes was repeated seven times in each simulation. As it concerns a partly stochastic simulation, we ran 20 repetitions of each simulation with the same episodic paradigm and parameter settings, to get a sense of the robustness of the neural agent model’s behavior. It turned out that general patterns were approximately similar across all independent simulations. Therefore, we selected one simulation to discuss in the upcoming subsections.

    Table 2. Simulation paradigm of each run with the neural agent model: the pattern of stimuli and communication enabling repeats every 120 time units.

    Time     Episode    Different stimuli WSs,A, WSs,B    Common stimulus WSs    Communication enabled Wexec-wsx,A,B, Wexec-wsx,B,A
    0–30     Episode 1  Yes                               No                     No
    30–60    Episode 2  Yes                               No                     Yes
    60–90    Episode 3  No                                Yes                    No
    90–120   Episode 4  No                                Yes                    Yes
    120–150  Episode 5  Yes                               No                     No
    150–180  Episode 6  Yes                               No                     Yes
    180–210  Episode 7  No                                Yes                    No
    210–240  Episode 8  No                                Yes                    Yes
    240–270  Episode 9  etc.                              etc.                   etc.
    etc.     etc.       etc.                              etc.                   etc.

    Fig. 4. The stimuli and interaction enabling states in the neural agent model.

    Notes: From 0 to 120 time units (upper graph) and from 0 to 840 time units (lower graph): interaction enabling (multi-color) for 30–60, 90–120, etc.; different stimuli (blue) for 0–60, 120–180, etc.; common stimulus (purple) for 60–120, 180–240, etc. (see also Table 2).

    The four different types of episodes in this simulation manipulate both whether the two agents received the same or a different stochastic stimulus and whether they were able to communicate with each other (with some stochastic variation in the enabling conditions, due to environmental changes and noise); see Table 2. The specific episodes for the considered example simulation are shown in Fig. 4.

    The world states wss,A and wss,B indicate the different stimuli for agent A and B from the world (activated from time 0 to 60 and then repeated every 120 time units; see the dark solid and dashed blue lines for A, respectively B). Similarly, world state wss indicates the common stimulus (activated from time 60 to 120 and then repeated every 120 time units; see the purple line). These three states have values stochastically fluctuating approximately between 0.7 and 0.9. Furthermore, the self-model states Wexec-wsx,A,B (from A to B) and the states Wexec-wsx,B,A (from B to A) indicate the communication-enabling conditions in the environment. They are activated from time 30 to time 60 thereby fluctuating stochastically roughly between 0.45 and 0.65 and then repeated every 60 time units.

    All these stochastic activation patterns indeed follow the pattern shown in Table 2 with repetition every 120 time units until end time 840.
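    This episodic paradigm can be generated programmatically; the sketch below reproduces the Table 2 schedule exactly, while the noise ranges follow the values quoted in the text (the uniform-noise model itself is an assumption, since the paper only states approximate fluctuation bands).

```python
import random

def episode_conditions(t):
    """Return (different_stimuli, common_stimulus, communication_enabled)
    for time t, following the 120-time-unit cycle of Table 2."""
    phase = t % 120
    different = phase < 60            # episodes of type 1 and 2
    common = not different            # episodes of type 3 and 4
    comm = (phase % 60) >= 30         # second half of each 60-unit block
    return different, common, comm

def world_state_levels(t, rng):
    """Hedged sketch of the stochastic world-state activations: stimuli
    fluctuate roughly in [0.7, 0.9] when active, communication-enabling
    states roughly in [0.45, 0.65]; inactive states are 0."""
    different, common, comm = episode_conditions(t)
    return {
        "ws_sA": rng.uniform(0.7, 0.9) if different else 0.0,
        "ws_sB": rng.uniform(0.7, 0.9) if different else 0.0,
        "ws_s": rng.uniform(0.7, 0.9) if common else 0.0,
        "comm_enabled": rng.uniform(0.45, 0.65) if comm else 0.0,
    }
```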

    5.2. Behavior of the base states of the neural agent model

    For the base states, in the first phase for time 0 to 10 the representations (states reps,A and reps,B) for the stimulus are activated (the curves fluctuating around 0.8) and preparations (states prepx,A) for actions are triggered (curves going to 1); see the upper graph in Fig. 5. This leads, together with the intrapersonal synchrony detection activation (see Figs. 5 and 6), to the conscious emotion around time 10 (red curve going to 1), but this still is only internal processing as no executions of actions take place yet. The action executions (states movem,A, exp_affectb,A, and talkA,B,v) for both agents start to come up after time 10 (e.g. the purple line); this also depends on the short-term adaptations that will be discussed in Sec. 5.4. The curves immediately under these executions concern the sensing of the other agent’s actions (the senseA,x,B and senseB,x,A states); in some periods they are slightly fluctuating due to environmental noise on the communication channels. The actual communication level (the wsx,A,B and wsx,B,A states) is seen below it from 30 to 60 and from 90 to 120.


    Fig. 5. The base states in the neural agent model from 0 to 120 time units (upper graph) and from 0 to 840 time units (lower graph). Due to the behavioral adaptivity, the activation levels in response to the stimuli and interaction become stronger over time, both in the short term (within each interaction-enabling interval 30–60, 90–120, etc.) and in the long term.


    Fig. 6. The detected intrapersonal synchrony, interpersonal synchrony and short-term adaptation T-states in the neural agent model from 0 to 120 time units (upper graph) and from 0 to 840 time units (lower graph). As a form of short-term behavioral adaptivity, the T-states become lower during each interaction interval (30–60, 90–120, etc.). This adaptively lowers the thresholds of the base states and therefore raises their activation values within each of these intervals, as is also shown in Fig. 5.

    For the longer term, the lower graph in Fig. 5 shows that each interval with enabling conditions for communication leads to higher activations of the action executions (the purple line) until values around 0.8 are reached. This is due to a long-term behavioral adaptation that is discussed in Sec. 5.5. Accordingly, the sensing states become higher as well over this longer term, but not as high as the action executions, due to a communication bias incorporated in the model. This overall pattern shows that the enabling conditions for communication have a stronger adaptive effect on the actions than having a common stimulus.

    5.3. Behavior of the intrapersonal synchrony and interpersonal synchrony detector states

    The curves that the graphs in Figs. 6 and 7 have in common depict the detected intrapersonal synchrony and interpersonal synchrony. Here:


    Fig. 7. The detected intrapersonal synchrony, interpersonal synchrony, and long-term adaptation W-states and HW-states of their adaptation speed in the neural agent model from 0 to 120 time units (upper graph) and from 0 to 840 time units (middle graph). The lower graph depicts the HW-states with a different vertical scale (×10³).

    Intrapersonal synchrony detection is represented by the states intrasyncdetA,xy and intrasyncdetB,xy, shown as the light green and light blue curves going to 1 from time 0 to 15.

    Interpersonal synchrony detection is represented by the states intersyncdetA,B,x and intersyncdetB,A,x, shown as the red and blue curves going to 0.4 from time 0 to 30 and further to 0.8 from time 30 to 60.

    Here it can be observed that the detection of intrapersonal synchrony takes place already in the first episode from time 0 to 30, meaning no common stimulus or communication is required. In contrast, the detection of interpersonal synchrony strongly depends on the interaction between the two agents. Note also that the former type of detected synchrony reaches a perfect level of 1, due to the coherent internal makeup of the agents, while the latter type does not get higher than around 0.8. At first sight this may look strange, given that the actual executions of actions of both agents are practically the same, as discussed above (see Sec. 5.2 and Fig. 5). However, this is due to the communication bias noted in Sec. 5.2. This demonstrates that the model is able to distinguish subjectively, personally detected interpersonal synchrony from an objective form of interpersonal synchrony as might be assessed by an external observer but not by the agent itself.

    5.4. The interplay between synchrony and short-term adaptation

    In Fig. 6, the synchrony detection states are shown together with the states involved in the short-term adaptation: the first-order self-model T-states that represent the adaptive excitability thresholds for representation and execution states and the second-order self-model HT-states that represent the T-states’ speed factors (adaptive learning rates). Apart from the synchrony detection states already discussed in Sec. 5.3, the graphs show two light green curves fluctuating around 0.6 for the HT-states and a blue curve going down to below 0.3 for the T-states.

    According to the metaplasticity principle “adaptation accelerates with increasing stimulus exposure”,24 the HT-states indeed fluctuate with the stimuli. In accordance with this, when one stimulus period transitions into another, a short dip in the values of the HT-states can be seen: since stimuli start from 0, there is a very short period of a lower level, as can also be seen in Fig. 4. Moreover, the T-states (e.g. the blue curve) clearly show a pattern opposite to that of the interpersonal synchrony detection states. In particular, in the episodes from 30 to 60 and from 90 to 120 (and so on), where the detected interpersonal synchrony is highest, the T-states for the excitability thresholds are lowest. This short-term adaptation gives the agent states related to the communication with the other agent a higher excitability due to the detected interpersonal synchrony, which has an intensifying effect on their communication. Not coincidentally, the mentioned periods are also the periods with good enabling conditions for communication (see also Sec. 5.2). Note that this tendency is a short-term effect and is reversible: the T-states get higher again when the detected interpersonal synchrony gets lower.

    5.5. The interplay between synchrony and long-term adaptation

    In Fig. 7, the synchrony detection states are shown together with the states involved in the long-term adaptation: the first-order self-model W-states that represent the adaptive weights of the connections to the representation and execution states and the second-order self-model HW-states that represent the W-states’ speed factors (adaptive learning rates). Apart from the synchrony detection states already discussed in Sec. 5.3, the graphs show the W-states (e.g. a blue curve) slowly and gradually going up to above 0.5 at time 120 and further to about 0.8 at time 840.

    Moreover, at a very low level, the curves for the HW-states can be seen. They also fluctuate according to the metaplasticity principle “adaptation accelerates with increasing stimulus exposure”,24 but at a very low level around 0.005 (see the lower graph in Fig. 7). Again following this principle, a short dip in the values of the HW-states can be seen when one stimulus period transitions into another: stimuli start from 0, so there is a very short period of a lower level (see Fig. 4). The pattern of the W-states indeed shows a long-term adaptation effect: they get a repeated boost in the time intervals 30–60, 90–120, and so on, and show a form of persistency. These boosts occur specifically in these intervals because they are the intervals with communication-enabling conditions, which, as discussed in Sec. 5.4, induce synchrony and the short-term adaptation via the T-states, which in turn adds to the synchrony. These two effects are at the basis of the boosts of the long-term adaptation. In this way, short-term and long-term adaptation interact.

    6. Modeling and Analysis of Adaptive Dynamical Systems via their Canonical Self-Modeling Network Representation

    This section discusses how any smooth adaptive dynamical system can be modeled by a self-modeling network model. It is shown in particular that any adaptive dynamical system has a canonical representation as a self-modeling network defined by network characteristics for connectivity, aggregation, and timing. The network concepts of this canonical representation of an adaptive dynamical system provide useful tools for formal analysis of the dynamics of the adaptive dynamical system addressed. With this idea in mind, equilibrium analysis of self-modeling network models is addressed. Dynamics in network models are described by node states that change over time (for example, for individuals’ opinions, intentions, emotions, beliefs, etc.). Such dynamics depend on network characteristics for the connectivity between nodes, the aggregation of impacts from different nodes on a given node, and the timing of the node activation updates.39,40

    For example, whether within a well-connected group eventually a common opinion, intention, emotion or belief is reached (a common value for all node states) depends on all these network characteristics. Sometimes implicit assumptions are made about the aggregation and timing characteristics. For timing, it is often tacitly assumed that the nodes are updated in a synchronous manner, although in application domains this assumption is usually not fulfilled. For aggregation, in social network models usually linear functions are applied, which means that it is often not investigated how a different choice of aggregation would affect the dynamics.

    In the modeling and analysis approach used in this paper, a more diverse landscape is covered, not limited by the fixed conditions on connectivity, aggregation or timing that are so often imposed. For connectivity, both acyclic and cyclic networks are covered. For aggregation, both linear and nonlinear aggregation are considered, and for nonlinear aggregation, logistic as well as other forms are addressed. Finally, both synchronous and asynchronous timing are supported. The frequent use of linear aggregation functions in social network models may stem from a more general belief that dynamical system models can be analyzed better for linear functions than for nonlinear ones. Although there may be some truth in this when specifically logistic nonlinear functions are compared to linear functions, such a belief is not correct in general: classes of nonlinear functions exist that also enable good analysis of the emerging dynamics within a network model, among others by imposing no conditions on the connectivity but instead exploiting any network’s structure of strongly connected components.

    In Sec. 6.1 it is shown that for the nonadaptive case this network-oriented modeling approach is equivalent to any dynamical systems modeling approach (Theorem 1 and Corollary 1), and in Sec. 6.2 that for the adaptive case self-modeling networks are equivalent to any adaptive dynamical systems approach (Theorem 2 and Corollary 2). In Sec. 7, stationary point and equilibrium analysis of network models is provided and applied to the model introduced earlier.

    6.1. Dynamical systems and their canonical network representation

    Dynamical systems are usually specified in certain mathematical formats; see pp. 241–252 of Ref. 44 for some details. In the first place, a finite set of states (or state variables) X1, …, Xn is assumed, describing how the system changes over time via functions X1(t), …, Xn(t) of time t. As discussed by Ashby44 and Port and van Gelder,45 a dynamical system is a state-determined system, which can be formalized numerically by a relation (rule of evolution) expressing how for each time point t the future value of each state Xj at time t+s uniquely depends on s and on X1(t), …, Xn(t). Therefore, a dynamical system can be described via n functions Fj(V1, …, Vn, s), one for each Xj, in the following manner (see also pp. 243–244 of Ref. 44):

    Xj(t+s) = Fj(X1(t), …, Xn(t), s)  for s > 0.  (7)

    If these functions Fj and the Xj are continuously differentiable (which also implies they are continuous), we call the dynamical system smooth. Suppose such a smooth dynamical system is given. It turns out that it can always be described in a canonical manner by a temporal–causal network model; the argument is as follows. Consider (7), where the functions Fj and Xj are continuously differentiable. In the particular case of s approaching 0 it holds

    lim_{s→0} Xj(t+s) = lim_{s→0} Fj(X1(t), …, Xn(t), s)

    which due to continuity of the involved functions implies

    Xj(t) = Fj(X1(t), …, Xn(t), 0).  (8)

    So, Eq. (7) also holds for s = 0.

    Let X′j denote the derivative of Xj with respect to time. Apply the partial derivative ∂(..)/∂s to both sides of Eq. (7)

    Xj(t+s) = Fj(X1(t), …, Xn(t), s).

    Then it follows

    ∂Xj(t+s)/∂s = ∂Fj(X1(t), …, Xn(t), s)/∂s.

    Here for the left-hand side, by the chain rule for function composition it holds

    ∂Xj(t+s)/∂s = X′j(t+s) ∂(t+s)/∂s = X′j(t+s).

    So, it is found that for all t and s it holds

    X′j(t+s) = ∂Fj(X1(t), …, Xn(t), s)/∂s.  (9)

    In particular, this holds for s = 0; therefore

    X′j(t) = [∂Fj(X1(t), …, Xn(t), s)/∂s]_{s=0}.  (10)

    For a more detailed explanation of the argument for (10), see Appendix  A, where also the differences and relations with Ashby’s approach44 are discussed.

    Now define the (combination) function gj(V1, …, Vn) by

    gj(V1, …, Vn) = Vj + [∂Fj(V1, …, Vn, s)/∂s]_{s=0}.  (11)

    Then it holds

    dXj(t)/dt = [∂Fj(X1(t), …, Xn(t), s)/∂s]_{s=0} = gj(X1(t), …, Xn(t)) − Xj(t).  (12)

    Comparing Eq. (12) to the canonical format (1) in Sec. 3 that defines the dynamics of temporal–causal networks, it immediately follows that the two match as long as the speed factors η and connection weights ω are set to 1, i.e. from (12) it follows:

    dXj(t)/dt = ηXj [cXj(ωX1,Xj X1(t), …, ωXn,Xj Xn(t)) − Xj(t)]  (13)

    with ηXj = 1 and cXj = gj for all j and ωXi,Xj = 1 for all i and j. For an example of this, see Box 1.

    Box 1. Example of the canonical transformation of any smooth dynamical system into temporal–causal network format.

    Consider the following example dynamical system from p. 244 of Ref. 44:

    X1(t+s) = X1(t) + X2(t)s + s²,  X2(t+s) = X2(t) + 2s.

    This can be formalized in the format of (7) by

    X1(t+s) = F1(X1(t), X2(t), s),  X2(t+s) = F2(X1(t), X2(t), s),

    where the functions F1 and F2 are defined by

    F1(V1, V2, s) = V1 + V2 s + s²,  F2(V1, V2, s) = V2 + 2s.

    Then

    [∂F1(V1, V2, s)/∂s]_{s=0} = [V2 + 2s]_{s=0} = V2,  [∂F2(V1, V2, s)/∂s]_{s=0} = [2]_{s=0} = 2.

    This leads to the differential equations

    dX1(t)/dt = X2(t),  dX2(t)/dt = 2.

    When the (combination) functions g1 and g2 are defined by

    g1(V1, V2) = V1 + [∂F1(V1, V2, s)/∂s]_{s=0} = V1 + V2,  g2(V1, V2) = V2 + [∂F2(V1, V2, s)/∂s]_{s=0} = V2 + 2,

    then the following is obtained:

    dX1(t)/dt = g1(X1(t), X2(t)) − X1(t),  dX2(t)/dt = g2(X1(t), X2(t)) − X2(t).

    This is equivalent to

    dX1(t)/dt = ηX1 [cX1(ωX1,X1 X1(t), ωX2,X1 X2(t)) − X1(t)],  dX2(t)/dt = ηX2 [cX2(ωX1,X2 X1(t), ωX2,X2 X2(t)) − X2(t)]

    with ηXj = 1 and cXj = gj for all j and ωXi,Xj = 1 for all i and j.
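    The Box 1 transformation can be checked numerically: Euler-integrating the derived canonical differential equations should reproduce the original closed-form rule of evolution (a small verification sketch of ours, not part of the paper's tooling):

```python
def simulate_canonical(x1, x2, s, dt=1e-4):
    """Euler-integrate the canonical equations from Box 1:
    dX1/dt = g1(X1,X2) - X1 = X2 and dX2/dt = g2(X1,X2) - X2 = 2."""
    steps = round(s / dt)
    for _ in range(steps):
        dx1 = x2          # (x1 + x2) - x1
        dx2 = 2.0         # (x2 + 2) - x2
        x1 += dx1 * dt
        x2 += dx2 * dt
    return x1, x2

x1, x2 = simulate_canonical(1.0, 0.5, s=2.0)
# Closed form from Box 1: X1(t+s) = X1 + X2*s + s^2 = 6.0, X2(t+s) = X2 + 2*s = 4.5
```

The Euler result agrees with the closed form up to discretization error of order dt, confirming that the canonical network format reproduces the original state-determined system.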

    This shows that any given smooth dynamical system can be formalized in this canonical manner by a representation in the temporal–causal network format; this notion is described in more detail in Definition 1 and Theorem 1. Note that this also shows theoretically that the use of specific values for speed factors and connection weights is not essential, as they can all be set to 1. However, they still are convenient instruments in the practice of modeling real-world processes.

    Definition 1. [Canonical network representation of a smooth dynamical system]. Let any smooth dynamical system be given by

    Xj(t+s) = Fj(X1(t), …, Xn(t), s)  for s ≥ 0, j = 1, …, n,

    where the functions Fj are continuously differentiable. Then the canonical temporal–causal network representation of it is defined by network characteristics ωXi,Xj, cXj, ηXj for all i and j with ωXi,Xj = 1 for all i and j, cXj(V1, …, Vn) = Vj + [∂Fj(V1, …, Vn, s)/∂s]_{s=0}, and ηXj = 1 for all j.

    This network representation has dynamics induced by the following canonical differential equations for temporal–causal networks

    dXj(t)/dt = ηXj [cXj(ωX1,Xj X1(t), …, ωXn,Xj Xn(t)) − Xj(t)].
    So, by the argument above, the following theorem is obtained:

    Theorem 1 (The canonical network representation of a smooth dynamical system). Any smooth dynamical system can be formalized in a canonical manner by a temporal–causal network model called its canonical network representation. Conversely, any temporal–causal network model is a dynamical system model.

    As a corollary from Theorem 1 the following well-known result immediately follows.

    Corollary 1 (From smooth dynamical system to first-order differential equations). Any smooth dynamical system can be formalized as a system of first-order differential equations.

    The latter result was also proven in a different way in pp. 241–252 of Ref. 44. See Appendix  A for some more details.

    6.2. Adaptive dynamical systems and their canonical self-modeling network representation

    In this section, it is shown how the approach described in Sec. 6.1 can be extended to obtain a transformation of any smooth adaptive dynamical system into a self-modeling network model. Adaptive dynamical systems are usually modeled by two levels of dynamical systems (see Fig. 8).


    Fig. 8. Overall picture of an adaptive dynamical system.

    Here the higher level dynamical system models the dynamics of the parameters Pi,j of the lower level dynamical system (the lower level component in Fig. 8) that describes the dynamics of variables Xi, for example by

    Xj(t+s) = Fj(Pj,1(t), …, Pj,k(t), X1(t), …, Xn(t), s)  for s > 0.  (14)

    In addition, for the dynamics of the Pi,j there will also be a dynamical system (the upper level component in Fig. 8) for s ≥ 0:

    Pi,j(t+s) = Gi,j(P1,1(t), …, Pn,k(t), X1(t), …, Xn(t), s).  (15)

    By applying the argument from Sec. 6.1 to both levels, the following differential equations are obtained covering the entire adaptive dynamical system:

    dXi(t)/dt = ηXi [cXi(ωPi,1,Xi Pi,1(t), …, ωPi,k,Xi Pi,k(t), ωX1,Xi X1(t), …, ωXn,Xi Xn(t)) − Xi(t)],
    dPi,j(t)/dt = ηPi,j [cPi,j(ωPi,1,Pi,j Pi,1(t), …, ωPi,k,Pi,j Pi,k(t), ωX1,Pi,j X1(t), …, ωXn,Pi,j Xn(t)) − Pi,j(t)],  (16)

    where all η and ω are 1. Recall from Sec. 3 the canonical differential equation (3) that defines a self-modeling network model for the case when self-model states P are introduced for the combination function parameters πY for all base level states Y:

    dY(t)/dt = ηY [cPY(t),Y(ωX1,Y X1(t), …, ωXk,Y Xk(t)) − Y(t)]

    where PY(t) = (P1,Y(t), …, Pm,Y(t)).

    The first equation of (16) is (although in a slightly different mathematical notation) equal to Eq. (3), which shows that it defines a self-modeling temporal–causal network model. In this self-modeling network model, the parameters Pi,j from the adaptive dynamical system are modeled by (aggregation) self-model P-states for parameters in the combination functions used for the states Xi in the base network defined by these states Xi. In this way a canonical self-modeling network representation is obtained for the considered smooth adaptive dynamical system; this notion is defined by Definition 2.

    Definition 2. [Canonical self-modeling network representation of a smooth adaptive dynamical system]. Let any smooth adaptive dynamical system for s ≥ 0, j = 1, …, n, and i = 1, …, k be given by

    Xj(t+s) = Fj(Pj,1(t), …, Pj,k(t), X1(t), …, Xn(t), s),  Pi,j(t+s) = Gi,j(P1,1(t), …, Pn,k(t), X1(t), …, Xn(t), s),

    where the functions Fj and Gi,j are continuously differentiable. Then the canonical self-modeling network representation of it is defined by characteristics ω, π, η, where all ω and η are 1 and

    cXj(Wj,1, …, Wj,k, V1, …, Vn) = Vj + [∂Fj(Wj,1, …, Wj,k, V1, …, Vn, s)/∂s]_{s=0},  cPi,j(Wi,1, …, Wi,k, V1, …, Vn) = Wi,j + [∂Gi,j(Wi,1, …, Wi,k, V1, …, Vn, s)/∂s]_{s=0}.

    This self-modeling network representation has dynamics induced by the following canonical differential equations:

    dXi(t)/dt = ηXi [cXi(ωPi,1,Xi Pi,1(t), …, ωPi,k,Xi Pi,k(t), ωX1,Xi X1(t), …, ωXn,Xi Xn(t)) − Xi(t)],
    dPi,j(t)/dt = ηPi,j [cPi,j(ωPi,1,Pi,j Pi,1(t), …, ωPi,k,Pi,j Pi,k(t), ωX1,Pi,j X1(t), …, ωXn,Pi,j Xn(t)) − Pi,j(t)].

    Thus, by the argument preceding Definition 2, the following theorem is obtained.

    Theorem 2 (The canonical self-modeling network representation of an adaptive dynamical system). Any adaptive smooth dynamical system model can be transformed in a canonical manner into a self-modeling network model called its canonical self-modeling network representation described by Definition 2 above. Conversely, any self-modeling network model is an adaptive dynamical system model. These also apply to higher-order adaptive dynamical systems in relation to higher-order self-modeling networks.

    As a corollary it now follows that any adaptive dynamical system can be described by first-order differential equations:

    Corollary 2 (From a smooth adaptive dynamical system to first-order differential equations). Any smooth adaptive dynamical system can be formalized as a system of first-order differential equations.

    Theorems 1 and 2 demonstrate that a modeling approach based on the self-modeling network format is at least as general as any other approach to modeling adaptive dynamical systems. Therefore, the choice of this format to model adaptive dynamical systems does not introduce any limitation. In particular, it can also be viewed as generalizing the most common types of neural network models.

    7. Stationary Point and Equilibrium Analysis for Self-Modeling Networks

    In this section, it is shown how stationary point and equilibrium analysis can be performed for self-modeling networks (Sec. 7.1) and how this can be applied to verify the correctness of the implemented self-modeling network model introduced in Sec. 4 compared to its design specifications (Sec. 7.2).

    7.1. The general analysis approach

    The following types of properties are often considered for equilibrium analysis of dynamical systems in general.

    Definition 3 (Stationary point, increasing, decreasing, equilibrium). Let Y be a network state:

    Y has a stationary point at t if dY(t)/dt = 0;

    Y is increasing at t if dY(t)/dt > 0;

    Y is decreasing at t if dY(t)/dt < 0;

    The network model is in equilibrium at t if every state Y of the model has a stationary point at t.

    Note that in mathematical analysis of dynamical system models, the emphasis usually is on equilibrium analysis. However, in many cases no equilibria occur, for example in cases of oscillatory limit cycle behavior. In such cases, stationary points can still be analyzed. In the cases considered in this paper, no equilibria occur when the environment is changing all the time.

    By considering the canonical network representation of a dynamical system, the above criteria can be formulated in terms of the network characteristics: for network models, the following criteria in terms of the network characteristics ω_{X,Y}, c_Y, η_Y can be derived from the generic difference equation (1).38 Let Y be a state and X_1, …, X_k the states connected toward Y. For nonzero speed factors η_Y, the following criteria in terms of the network characteristics for connectivity and aggregation apply; here aggimpact_Y(t) = c_Y(ω_{X_1,Y} X_1(t), …, ω_{X_k,Y} X_k(t)):

    Y has a stationary point at t ⇔ aggimpact_Y(t) = Y(t)

    Y is increasing at t ⇔ aggimpact_Y(t) > Y(t)

    Y is decreasing at t ⇔ aggimpact_Y(t) < Y(t)

    The network model is in equilibrium at t ⇔ aggimpact_Y(t) = Y(t) for every state Y.

    The above criteria for a network being in an equilibrium (assuming nonzero speed factors) depend both on the connection weights ω_{X,Y} used for connectivity and on the combination function c_Y used for aggregation. Note that in a self-modeling network, these criteria can be applied not only to base states but also to self-model states. In the latter case they can be used for equilibrium analysis of learning or adaptation processes.

    In particular, a network model with states X_1, …, X_n is in equilibrium if and only if the following n equations (called equilibrium equations) are satisfied:

    $$\mathrm{aggimpact}_{X_1}(t) = X_1(t), \quad \ldots, \quad \mathrm{aggimpact}_{X_n}(t) = X_n(t).$$

    These equations express relations between values of the states in an equilibrium: they indicate how values in an equilibrium relate to each other and contain as parameters network characteristics ωXi,Xj and cXj. Sometimes it is possible to solve these equations, for example, when they are linear, or when they are nonlinear Euclidean or geometric equations. When there is no equilibrium, still stationary points for a given state Xi at some time point t can be analyzed based on the above criteria, for example, (local) maxima or minima of the function Xi(t).
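For instance, with a linear (sum-based) combination function the equilibrium equations form a linear system that can be solved directly. The sketch below uses a hypothetical two-state network with one constant source (an illustrative assumption, not part of the introduced model), solves its equilibrium equations with NumPy and cross-checks the solution against a simulated trace:

```python
import numpy as np

# Toy network (illustrative): states X1, X2 and a constant source U = 1,
# all with a sum combination function, so the equilibrium equations
# aggimpact_Y = Y become linear:
#   X1 = 0.5*X2 + 0.5*U,   X2 = 0.5*X1
W = np.array([[0.0, 0.5],
              [0.5, 0.0]])      # W[j, i] = weight of connection X_{i+1} -> X_{j+1}
b = np.array([0.5, 0.0])        # constant impact of the source U on each state

# Solve the equilibrium equations (I - W) x = b analytically
x_eq = np.linalg.solve(np.eye(2) - W, b)

# Cross-check by Euler simulation of dY/dt = eta*(aggimpact_Y - Y)
x = np.zeros(2)
for _ in range(20000):          # dt = 0.01, so up to time t = 200
    x += 0.01 * ((W @ x + b) - x)

print(x_eq, x)                  # both approximately [2/3, 1/3]
```

Here the analytic solution of the equilibrium equations and the long-run simulated values coincide, as the criteria above predict.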

    The above criteria can be used to verify correctness of (the implementation of) a network model based on inspection of stationary points or equilibria in the following manner.

    Verification by checking the criteria through substitution

    (1) Generate a simulation.

    (2) For a sample of states X_j, identify stationary points with their time points t and state values X_j(t).

    (3) For each of these stationary points for a state X_j from the chosen sample at time t, identify the values X_i(t) at time t of the states X_i among X_1, …, X_n that are connected toward X_j.

    (4) Substitute all these values X_i(t) in the criterion aggimpact_{X_j}(t) = X_j(t).

    (5) If the equation holds (for example, with absolute deviation < 10^{-2}), then this test succeeds; otherwise it fails.

    (6) If the test fails, it should be explored which error causes this failure and how it can be corrected.

    (7) If the test succeeds, this contributes to evidence that the implemented network model is correct in comparison with its design specification.
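The steps above can be sketched in code as follows; the toy two-state network and its sum combination function are illustrative assumptions (not the paper's model), but the substitution test itself follows the procedure literally:

```python
import numpy as np

# Sketch of verification by substitution on a toy network (assumed stand-in).
c_sum = lambda impacts: sum(impacts)   # simple sum combination function

# connectivity: state index -> list of (incoming state index, weight)
conn = {0: [(1, 0.5), (2, 0.5)],       # X1 <- X2 and constant source U
        1: [(0, 0.5)]}                 # X2 <- X1

def aggimpact(j, x):
    # steps (3)-(4): substitute the values X_i(t) into c(omega_1*X_1, ...)
    return c_sum([w * x[i] for i, w in conn[j]])

# step (1): generate a simulation of dY/dt = eta*(aggimpact_Y - Y)
x = np.array([0.0, 0.0, 1.0])          # third entry is the constant source U = 1
for _ in range(20000):                 # dt = 0.01, so up to time t = 200
    x[:2] += 0.01 * np.array([aggimpact(j, x) - x[j] for j in (0, 1)])

# steps (2), (5): at the (stationary) end point, check aggimpact_Y(t) = Y(t)
deviations = {j: aggimpact(j, x) - x[j] for j in conn}
test_succeeds = all(abs(d) < 1e-2 for d in deviations.values())
print(deviations, test_succeeds)
```

In this example the trace has reached an equilibrium, so all deviations are far below the 10^{-2} threshold and the test succeeds (steps (6) and (7) would then apply to the outcome).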

    In Sec. 7.2, it is shown in detail how this form of verification by substitution can be applied for an example network.

    7.2. Analysis of the introduced neural agent model

    The procedure for testing correctness described in Sec. 7.1 has been applied to the neural agent model introduced in Sec. 4 of this paper. It has been applied to two different scenarios, in each of them for a sample of states covering all levels of the model. One scenario used for this type of test is a scenario where an equilibrium occurs; see Fig. 9. This was achieved by setting the external factors for stimuli and communication enabling to the constant value 1 instead of the random values applied in the scenarios in Sec. 5, where due to this randomness no equilibria occur.

    Fig. 9. Simulation scenario for a constant environment leading to an equilibrium. Upper graph: Initial phase time 0–50. Lower graph: Up to time 1000. For a complete legend for the colors, see Fig. 10.

    Fig. 10. Legend for the colors in Figs. 9 and 11.

    The analysis focuses on time t = 1000 and the chosen sample consists of the following states X_j: the second-order self-model state HW_B (X103, green line in Fig. 9 ending up around 0.01), first-order self-model state T_{exec,v,B} (X95, orange line ending up around 0.2), second-order self-model state HT_B (X105, red line ending up around 0.75), base states rep_{s,B} (X31, orange line ending up around 0.92), ws_{v,B,A} (X53, purple line ending up around 0.97), intrasyncdet_{A,mv} (X19, blue line ending up around 1 at time 20), intersyncdet_{B,A,m} (X21, blue line ending up around 1 at time 50), and first-order self-model state W_{prep-exec,v,B} (X71, blue line ending up around 1 at time 500). To calculate

    $$\mathrm{aggimpact}_{X_j}(t) = c_{X_j}(\omega_{X_1,X_j} X_1(t), \ldots, \omega_{X_k,X_j} X_k(t)) \tag{17}$$
    for each of these chosen states X_j, from the network characteristics the states X_i with connections toward X_j are determined (see Table 3, second column), and for each of these X_i the weight ω_{X_i,X_j}(t) of the connection from X_i to X_j (see Table 3, middle part), and from the simulation data the values X_i(t) of the X_i at time t (see Table 3, right-hand part).

    Table 3. Weights ω_{X_i,X_j}(t) and simulation values X_i(t) for incoming connections from states X_i to X_j used for equilibrium analysis for the scenario with constant environment depicted in Fig. 9.

    X_j | Incoming X_i | Connection weights ω_{X_i,X_j}(t) at t | State values X_i(t) at t
    X19 | X24, X26 | 1, 1 | 0.976171, 0.976334
    X21 | X24, X7 | 1, 1 | 0.976171, 0.976171
    X31 | X27 | 1 | 1
    X53 | X3, X47 | 1, 0 | 0.976334, 0
    X71 | X39–X44 | 1, 1, 1, 1, 1, 1 | 1, 0.999833, 0.999833, 1, 1, 1
    X95 | X39–X44, X5 | −0.12, −0.12, −0.12, −0.12, −0.12, −0.12, 1 | 1, 0.999833, 0.999833, 1, 1, 1, 1
    X103 | X31 | 0.08 | 0.917915
    X105 | X31 | 1 | 0.917915

    Based on these connection weights ω_{X_i,X_j}(t) and state values X_i(t) in Table 3, the products ω_{X_i,X_j}(t)X_i(t) are determined (see Table 4, sixth column). Then, by applying the combination function from Table 1 to these products, aggimpact_{X_j}(t) is determined for each X_j via Eq. (17) (see the next-to-last column in Table 4). Finally, this value is compared to the state value X_j(t) of X_j to obtain the deviation = aggimpact_{X_j}(t) − X_j(t) in the last column of Table 4. In Table 4, all absolute deviations are smaller than 10^{-5}. This provides evidence that the implemented model is correct with respect to its design specifications. Note that this provides evidence for correctness of the model in general, not only for this special scenario: if there were errors, they would most probably also show their effects in this example scenario.

    Table 4. Equilibrium analysis for the scenario with constant environment depicted in Fig. 9.

    State | X_j | Time point t | State value X_j(t) | Incoming states X_i: impact ω_{X_i,X_j}(t)X_i(t) | aggimpact_{X_j}(t) | Deviation
    intrasyncdet_{A,mv} | X19 | 1000 | 0.999832697 | X24: 0.97617082; X26: 0.976334164 | 0.999832697 | 3.3×10^{-11}
    intersyncdet_{B,A,m} | X21 | 1000 | 0.999999995 | X24: 0.97617082; X7: 0.976170816 | 0.999999996 | 1.2×10^{-10}
    rep_{s,B} | X31 | 1000 | 0.917915001 | X27: 0.917915001 | 0.917915001 | <10^{-17}
    ws_{v,B,A} | X53 | 1000 | 0.976334161 | X3: 0.976334164; X47: 0 | 0.976334164 | 2.9×10^{-9}
    W_{prep-exec,v,B} | X71 | 1000 | 0.99944441 | X39: 1; X40: 0.999832697; X41: 0.999832697; X42: 0.999999995; X43: 0.999999995; X44: 0.999999995 | 0.999446296 | 1.9×10^{-6}
    T_{exec,v,B} | X95 | 1000 | 0.188195503 | X39: −0.12; X40: −0.119979924; X41: −0.119979924; X42: −0.119999999; X43: −0.119999999; X44: −0.119999999; X5: 1 | 0.188195503 | 3.5×10^{-11}
    HW_B | X103 | 1000 | 0.012837073 | X31: 0.0734332 | 0.012837073 | 2.9×10^{-17}
    HT_B | X105 | 1000 | 0.740701054 | X31: 0.917915001 | 0.740701054 | <10^{-17}

    To obtain still more evidence, another scenario has been analyzed as well, in which the environmental factors for stimuli and communication enabling do change, but in a nonrandom manner; see Fig. 11.

    Fig. 11. Simulation scenario for a nonrandomly changing environment. For a legend for the colors, see Fig. 10.

    Here, for both agents, stimulus s occurs from time 0 to time 450 and disappears from time 450 to 500; this pattern is repeated every 500 time units. Moreover, interaction is not enabled from time 0 to time 50 and is enabled from time 50 to time 400; this pattern is repeated every 400 time units. Due to the changing environment, no equilibrium occurs here. However, there are many cases of (approximate) stationary points. In particular, stationary points have been analyzed as above for the sample of states and time points indicated in the left three columns of Table 5. Here most absolute deviations are < 0.01. However, three of them are in the order of 0.02, which is larger than expected for a stationary point; the graph indeed shows that these actually are not approximately stationary points. All in all, these results also provide evidence that the implemented model is correct with respect to its design specifications.
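Such approximate stationary points can be located automatically in a simulated trace by detecting sign changes of the discrete derivative and evaluating the criterion aggimpact_Y(t) = Y(t) there. The following sketch uses an invented single-state example with a periodic drive standing in for aggimpact (all signals and parameters are illustrative assumptions):

```python
import numpy as np

# Single state Y driven by a nonrandomly changing (periodic) environment
dt = 0.01
t = np.arange(0, 100, dt)
drive = 0.5 + 0.4 * np.sin(2 * np.pi * t / 25)   # assumed periodic drive
y = np.zeros_like(t)
for k in range(1, len(t)):
    # dY/dt = eta*(aggimpact_Y - Y); here aggimpact is simply the drive signal
    y[k] = y[k - 1] + dt * 0.5 * (drive[k - 1] - y[k - 1])

# a stationary point of Y is where dY/dt changes sign (local max/min);
# by the criterion, aggimpact_Y(t) = Y(t) should hold there approximately
dy = np.diff(y) / dt
idx = np.where(np.sign(dy[:-1]) != np.sign(dy[1:]))[0] + 1
deviations = [abs(drive[i] - y[i]) for i in idx]
print(len(idx), max(deviations))   # several stationary points, small deviations
```

Because no equilibrium exists under the changing drive, the state keeps oscillating, yet at each detected stationary point the deviation |aggimpact − Y| remains small, mirroring the verification logic applied in Table 5.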

    Table 5. Stationary point analysis for the scenario with nonrandom nonconstant environment depicted in Fig. 11.

    State | X_j | Time point t | State value X_j(t) | Incoming states X_i: impact ω_{X_i,X_j}(t)X_i(t) | aggimpact_{X_j}(t) | Deviation
    sense_{A,v,B} | X30 | 1999 | 0.972980783 | X47: 0.972980783; X50: 0.972980783 | 0.972980783 | <10^{-17}
    rep_{s,B} | X31 | 1949 | 0.917915001 | X27: 0.917915001 | 0.917915001 | <10^{-17}
    intrasyncdet_{B,bv} | X41 | 1949 | 0.999816153 | X46: 0.973237336; X47: 0.973416098; X41: 0.999816153 | 0.999816356 | 2.0×10^{-7}
    intrasyncdet_{B,bv} | X41 | 1999 | 0.978349541 | X46: 0.951915303; X47: 0.972980783; X41: 0.978349541 | 0.978349541 | 5.7×10^{-13}
    intersyncdet_{A,B,v} | X44 | 1949 | 0.999971441 | X47: 0.973416098; X30: 0.973388663; X44: 0.999971441 | 0.999971816 | 3.8×10^{-7}
    intersyncdet_{A,B,v} | X44 | 1999 | 1 | X47: 0.973416098; X30: 0.972980783; X44: 1 | 0.999552797 | −0.000447203
    ws_{b,B,A} | X52 | 1949 | 0.973218991 | X46: 0.947347128; X30: — | 0.947347128 | −0.025871863
    ws_{b,B,A} | X52 | 1999 | 0.951915303 | X46: 0.926195296; X30: — | 0.926195296 | −0.025720006
    ws_{v,B,A} | X53 | 1999 | 0.972980783 | X47: 0.972980783; X30: — | 0.972980783 | <10^{-17}
    W_{prep-exec,v,B} | X71 | 1999 | 0.976505645 | X39: 1; X40: 0.978349541; X41: 0.978349541; X42: 1; X43: 1; X44: 1 | 0.999313691 | 0.022808046
    T_{exec,v,B} | X95 | 1649 | 0.404971073 | X39: −0.12; X40: −0.119924731; X41: −0.119924731; X42: −0.060065522; X43: −0.060065522; X44: −0.060065218; X5: 1 | 0.404971648 | 5.8×10^{-7}
    T_{exec,v,B} | X95 | 1999 | 0.188288623 | X39: −0.12; X40: −0.117401945; X41: −0.117401945; X42: −0.12; X43: −0.12; X44: −0.12; X5: 1 | 0.193456528 | 0.005167905
    HW_B | X103 | 1949 | 0.005872159 | X31: 0.0367166 | 0.005872159 | <10^{-17}
    HW_B | X103 | 1999 | 1.4×10^{-14} | X31: 0.039982777 | 0.006444919 | 0.006444919
    HT_B | X105 | 1949 | 0.740701054 | X31: 0.917915001 | 0.740701054 | <10^{-17}
    HT_B | X105 | 1999 | 4.36807×10^{-13} | X31: 0.999569427 | 0.006444919 | 0.006444919

    8. Discussion

    In this paper, a neural agent model was introduced for the way intrapersonal synchrony and interpersonal synchrony induce behavioral adaptivity between the synchronized persons.1,2,3,4,5,6,7,8,9,10 In the literature, it was advocated to use a dynamical systems modeling approach to model the complex, cyclical types of dynamics that occur.11,12 The model presented here is indeed a dynamical system model; moreover, it is multi-adaptive in that the behavioral adaptivity covers both short-term adaptations and long-term adaptations, reflecting short-term affiliation and long-term bonding. The former type of adaptation was modeled using (nonsynaptic) adaptive excitability,15,16,17,18 whereas for the latter type a more classical synaptic type of adaptation19,20,21,22 was used. Following the aforementioned literature on synchrony, both types of adaptivity were modeled as driven by the (internally detected) intrapersonal synchrony and interpersonal synchrony for the agent. By also including metaplasticity23 in the model to control the adaptations in a context-sensitive manner, the agent model became second-order adaptive. The simulations of the model have been performed using the dedicated software environment developed in MATLAB, on HP Intel Core i5 and Apple MacBook Pro Intel Core i9 laptops. Execution times per run were less than a minute; for example, on the HP Intel Core i5 with MATLAB 2017a they were between 40 and 50 s. The software environment used can be downloaded via URL https://www.researchgate.net/publication/368775720.

    Thus, this paper has focused on the emerging and adaptive effects of human social interaction, concerning emerging synchronization and related adaptive affiliation and bonding, with the therapist–client interaction as a central application option. By such simulations, for example, a therapist can gain insight into how to improve the way they interact with their clients and make therapy or counseling more effective. From a scientific perspective, the modeling also contributes formalization to this area of psychology, which is almost always addressed in informal manners. The contributed approach can also be used as a solid basis for the development of supporting virtual agents in that context.

    Synchrony and related patterns in the brain are also analyzed, for example, for atypical brain conditions of subjects such as PTSD,46 epilepsy,47,48 ADHD,49,50 or autism.51 The work presented in this paper distinguishes itself from this in four ways: (1) it abstracts from the specific brain processes and instead focuses on the level of mental processes; (2) it addresses not only the emergence of synchrony but also the causal effects of synchronization on the adaptivity of interaction behavior, such as affiliation and bonding; (3) it addresses the emergence of synchrony not from an objective external observer perspective but from the subjective perspective of the agents themselves; and (4) it focuses on typical instead of atypical conditions of subjects.

    We already engaged in computational modeling of synchrony between agents in earlier work.52,53 However, in the models described there, no (subjective) internal detection of synchrony takes place. Moreover, the first one covered no adaptivity,52 and the second one incorporated another type of adaptivity, namely of internal connections from representation states to preparation states.53 As far as we know, previous work37 describes the only other computational agent model in which subjective synchrony detection is addressed. However, that model covers no long-term behavioral adaptivity and also does not address adaptive intrinsic excitability, whereas both are included in the current model.

    Earlier work1,29 addressing behavioral adaptation due to coordinated actions used a dynamic form of the “bonding based on homophily” principle54 to model the effect of coordination of emotions and actions on behavioral adaptivity, but no (subjective) detection of synchrony was used.

    In this paper, mathematical analysis was also addressed for the applied modeling approach. The first type of analysis shows that any smooth adaptive dynamical system has a canonical representation as a self-modeling network. This theoretically implies that the self-modeling network format is widely applicable and that no biases or limitations are introduced by choosing a network modeling approach to design adaptive dynamical system models. In particular, it also generalizes the most common neural system models.

    This finding has also been confirmed in many practical applications, varying from biological, cognitive and affective to social processes and their interaction. It is illustrated by many examples, in particular in books38,39 introducing the self-modeling network modeling approach and its applications, a book55 focusing on the use of self-modeling network models to handle dynamics, adaptation and control of internal mental models, and a book56 focusing on the use of self-modeling network models to model organizational learning processes. Furthermore, stationary point and equilibrium analysis was addressed and applied to the introduced self-modeling network model. These analyses were used as a form of verification of the model, which provided evidence that the implemented model is correct with respect to its design specifications.

    Thus, a flexible human-like second-order multi-adaptive neural agent model was obtained for the way in which detected synchrony leads to different types of behavioral adaptivity concerning the short-term affiliation and long-term bonding between the two agents.

    A number of aspects that may still be considered relevant are not covered by the model introduced here. One of these aspects is the use of time lags in the process of synchrony detection. This has not been addressed here, but can be a relevant extension of the work reported here. More generally, different combination functions describing methods for synchrony detection may be tried out, either with or without time lags. Another relevant aspect not addressed in this paper is the role of interruptions or transitions in synchrony and their effect on behavioral adaptivity.

    For further work, many more simulation experiments can be designed and conducted, for example, to explore which types of short-term synchrony are most likely to be translated into long-term benefits for a relationship, or to explore in more detail the roles of intrapersonal synchrony and interpersonal synchrony.

    Also, the model can easily be extended to cover interaction between more than two agents. To achieve that, within the agent model the number of sensing states can be extended by three (one for each modality) for each additional other agent, and similarly three additional representation states and three interpersonal synchrony detection states can be added for each additional other agent. Accordingly, additional first- and second-order self-model states can be added as well. This will add complexity to the agent model.

    Alternatively, if the model abstracts from the differences between the other agents, then the current agent model can be applied directly without any additional states, by using the current sensing, representation, and synchrony detection states, and also the self-model states, as states aggregating all other agents. This keeps the complexity of the agent model the same, but then the model is of course less context-sensitive, as different adaptations to specific other agents are not possible. So, as often happens, there is a trade-off here between the complexity of the agent model and the extent of context-sensitivity: more context-sensitivity comes with more complexity.

    Considered from a wider scientific perspective, the model can provide a basis to develop adaptive virtual agents that are able to concentrate on each other by short-term behavioral adaptivity and bond with each other by long-term behavioral adaptivity in a human-like manner. For example, in other work57 the focus is on virtual conversational agents and how they can adapt to their human users. In that work, classical learning techniques from AI, such as Q-learning, are used to optimize the agent’s behavior with respect to a given user; such techniques are not directly inspired or justified by neuroscience.

    In contrast, this paper offers approaches such as synaptic plasticity by adaptive connection weights19,20,21,22 and nonsynaptic plasticity by adaptive excitability thresholds,15,16 and in addition metaplasticity23 to control both types of plasticity. As all of these forms of adaptivity are justified in neuroscience literature, this will in principle lead to a more human-like agent model. Nevertheless, it will be interesting to explore in further work how these two different perspectives can benefit from each other.

    Concerning the relation of the considered agent model to mechanisms from neuroscience, note that these mechanisms have been incorporated only from an abstracted functional perspective. This means that it cannot be claimed that the model is human-like at the (neuro)physiological level. The latter has been left out of consideration here and would require another research project.

    Validation of the model has only been done based on qualitative empirical information from the psychological literature. Dynamic and adaptive patterns have been obtained that are in accordance with that type of empirical information. Due to the lack of quantitative (numerical) empirical information, no quantitative validation has been performed yet. For future research, it is considered to try to acquire such numerical data and then perform quantitative numerical validation by parameter tuning. The way in which that can be done is described in Chap. 19 of our 2022 book.58

    Part of the work presented in this paper was presented in preliminary form at the AIAI’22 conference and published in its proceedings as a paper59 of less than 50% of the length of this paper. That paper is limited to the design of the model and an example simulation. In contrast, the fundamental mathematical analysis of the positioning of the modeling approach based on self-modeling temporal–causal networks in the landscape of adaptive dynamical systems (described in Sec. 6 and Appendix A) is new. It has been shown there that any smooth adaptive dynamical system has a canonical representation as a self-modeling temporal–causal network, which means that the applied modeling approach is universal for smooth adaptive dynamical systems. Moreover, the in-depth verification of the introduced model (described in Sec. 7) is also new. Here, substantial evidence was added that the implemented model is correct with respect to its design specification. Finally, the full specification of the model in Appendix B is new as well.

    9. Conclusion

    All in all, we achieved the following summarized findings and discoveries:

    Formalization of the informal domain of human social interaction involving emerging and multiple types of adaptive dynamical system effects is possible.

    Unifying bridges between causal modeling, network modeling, and dynamical systems modeling are possible.

    A systematic approach to (higher-order) adaptivity in these different modeling perspectives is possible.

    More specifically, the following has been achieved:

    The notion of canonical temporal–causal network representation for any smooth dynamical system is introduced; it is shown how any smooth dynamical system can be assigned this canonical network representation. This also applies to most common neural network approaches. This creates a bridge between different subdisciplines that are usually kept separate: causal modeling in AI, neural networks in AI, (multidisciplinary) network science, computational science. This enables simulation and analysis of any smooth dynamical system in terms of network concepts.

    The notion of canonical temporal–causal self-modeling network representation for any smooth adaptive dynamical system is introduced; it is shown how any smooth adaptive dynamical system can be assigned this canonical network representation. This again creates a bridge between different subdisciplines that are usually kept separate and provides a clear way of addressing higher-order adaptivity to any of these: metalevel architectures in AI, causal modeling in AI, neural networks in AI, (multidisciplinary) network science, computational science. This enables simulation and analysis of any smooth higher-order adaptive dynamical system in terms of network concepts.

    It is shown how mathematical stationary point and equilibrium analysis can be used to verify the implemented neural agent model against its design specifications, providing further evidence of its correctness.

    By providing the full specification of the introduced neural agent model, reproducibility is obtained.

    Appendix A. More Details for Sec. 6

    In this appendix, a more detailed explanation of the main argument in Sec. 6.1 for Theorem 1 can be found (see Box 2), and it is discussed how the chosen approach differs from and relates to Ashby’s approach.

    Box 2. More detailed explanation of the argument for Theorem 1.

    Assume s > 0. Subtracting Eq. (8) from Eq. (7) (see Sec. 6.1) and dividing by s provides:

    $$[X_j(t+s) - X_j(t)]/s = [F_j(X_1(t), \ldots, X_n(t), s) - F_j(X_1(t), \ldots, X_n(t), 0)]/s.$$
    When for both sides of this equation the limit for s approaching 0 is taken, the left-hand side becomes (renaming s to Δt to get the familiar expression)
    $$\lim_{s \to 0} [X_j(t+s) - X_j(t)]/s = \lim_{\Delta t \to 0} [X_j(t+\Delta t) - X_j(t)]/\Delta t = dX_j(t)/dt$$
    and the right-hand side becomes (here renaming s to Δs)
    $$\lim_{s \to 0} [F_j(X_1(t), \ldots, X_n(t), s) - F_j(X_1(t), \ldots, X_n(t), 0)]/s = \lim_{\Delta s \to 0} [F_j(X_1(t), \ldots, X_n(t), \Delta s) - F_j(X_1(t), \ldots, X_n(t), 0)]/\Delta s = \left[\partial F_j(X_1(t), \ldots, X_n(t), s)/\partial s\right]_{s=0}.$$
    Therefore, it is obtained that
    $$dX_j(t)/dt = \left[\partial F_j(X_1(t), \ldots, X_n(t), s)/\partial s\right]_{s=0}.$$

    The differences and relationships with the approach by Ashby44 are as follows. Instead of (7), Ashby44 uses the special case of (7) for t = 0 as the indication for a state-determined system:

    $$X_j(s) = F_j(X_1(0), \ldots, X_n(0), s) \quad \text{for } s > 0. \tag{A.1}$$

    As this special case by itself is not enough to characterize a state-determined system, he furthermore also uses a second condition for a state-determined system that can be called transitivity:

    $$F_i(X_1(t), \ldots, X_n(t), s + s') = F_i(F_1(X_1(t), \ldots, X_n(t), s), \ldots, F_n(X_1(t), \ldots, X_n(t), s), s'). \tag{A.2}$$

    So, in the end, he characterizes a state-determined system by the conjunction of conditions (A.1) and (A.2). It turns out that condition (7), which we use here to characterize a state-determined system, is not equivalent to (A.1) alone but to this conjunction of (A.1) and (A.2); in other words, the following holds:

    Theorem 3 (Characterizing state-determined systems). The following are equivalent:

    (i) $X_j(t+s) = F_j(X_1(t), \ldots, X_n(t), s)$ for $s > 0$;

    (ii) $X_j(s) = F_j(X_1(0), \ldots, X_n(0), s)$ for $s > 0$ and the system is transitive.

    Proof. (i) ⇒ (ii) That (7) implies transitivity follows from

    $$X_i(t + (s + s')) = X_i((t + s) + s')$$
    and working out both sides of this:
    $$X_i(t + (s + s')) = F_i(X_1(t), \ldots, X_n(t), s + s') \quad \text{for } s, s' > 0,$$
    $$X_i((t + s) + s') = F_i(X_1(t+s), \ldots, X_n(t+s), s'),$$
    where
    $$X_i(t+s) = F_i(X_1(t), \ldots, X_n(t), s).$$
    So
    $$X_i((t + s) + s') = F_i(F_1(X_1(t), \ldots, X_n(t), s), \ldots, F_n(X_1(t), \ldots, X_n(t), s), s').$$
    This proves transitivity from (7).

    (ii) ⇒ (i) To be proven:

    $$X_j(t+s) = F_j(X_1(t), \ldots, X_n(t), s). \tag{7}$$
    Given
    $$X_j(s) = F_j(X_1(0), \ldots, X_n(0), s) \quad \text{for all } s,$$
    so also
    $$X_j(t+s) = F_j(X_1(0), \ldots, X_n(0), t + s).$$
    Now by transitivity it holds that
    $$X_j(t+s) = F_j(X_1(0), \ldots, X_n(0), t+s) = F_j(F_1(X_1(0), \ldots, X_n(0), t), \ldots, F_n(X_1(0), \ldots, X_n(0), t), s) = F_j(X_1(t), \ldots, X_n(t), s).$$
    This proves (7). □

    This explains how the approach used here differs from but still relates to Ashby’s approach.44 Note that another main difference is that Ashby did not analyze how (adaptive) dynamical systems can be related to (adaptive) network models, which is our main focus here.

    Appendix B. Further Details of the Introduced Model

    B.1. Overview of all states of the model

    Tables B.1 (base states) and B.2 (first- and second-order self-model states) provide explanations of all states of the introduced model.

    Table B.1. Base states of the computational network model.

    Table B.2. First-order self-model T-states and W-states for excitability thresholds and connection weights and second-order self-model HT-states and HW-states for the adaptation speed of the T-states and W-states of the computational network model.

    B.2. Full specification of the model in role matrices format

    In this section, first some further simulation pictures are shown for the affiliation patterns represented by the W-states in relation to the detected intrapersonal and interpersonal synchronies. Next, the full specification of the introduced adaptive network model is shown in terms of role matrices, which are tables with the network characteristics in a standardized table format. These tables are readable by the dedicated software environment available via https://www.researchgate.net/publication/368775720_Network-Oriented_Modeling_Software, which then can generate simulations. In this way, reproducibility is supported. In Tables B.3–B.7, the full specification of the adaptive network model by role matrices is shown. Each role matrix has 93 rows for all states X1–X93 of the model.

    Table B.3. Role matrix mb for base connectivity.

    The connectivity characteristics are specified by role matrices mb and mcw shown in Tables B.3 and B.4. Role matrix mb lists for each state the states (at the same or lower level) from which the state gets its incoming connections, while in role matrix mcw the connection weights are listed for these connections.

    Nonadaptive connection weights are indicated in mcw (in Table B.4) by a number (in a green shaded cell), but adaptive connection weights are indicated by a reference to the (self-model) W-state representing the adaptive value (in a peach-red shaded cell). This can be seen for states X7–X9 (with self-model W-states X63–X65), states X11–X13 (with self-model W-states X54–X56), X24–X26 (with self-model W-states X57–X59), X28–X30 (with self-model W-states X75–X77), X32–X34 (with self-model W-states X66–X68), and X45–X53 (with self-model W-states X69–X71, X60–X62, and X72–X74).

    Table B.4. Role matrix mcw for connection weights.

    The network characteristics for aggregation are defined by the selection of combination functions from the library and the values of their parameters. In role matrix mcfw, it is specified by weights which state uses which combination function; see Table B.5.

    Table B.5. Role matrix mcfw for combination function weights.

    In role matrix mcfp (see Table B.6) it is indicated what the parameter values are for the chosen combination functions. A number of them are adaptive: their adaptive excitability thresholds are represented by self-model T-states. These concern agent A states X11–X13 (with excitability threshold self-model T-states X78–X80) and X24–X26 (with self-model T-states X81–X83), and for agent B states X32–X34 (with excitability threshold self-model T-states X84–X86) and X35–X47 (with self-model T-states X87–X89).

    Table B.6. Role matrix mcfp for combination function parameters.

    In Table B.7, the role matrix ms for speed factors is shown, which lists all speed factors; next to it, the list iv of initial values can be found. In ms, some entries are adaptive as well. For agent A, the speed factors of W-states X54–X65 are represented by the (second-order) self-model HW-state X90, and for agent B the speed factors of W-states X66–X77 are represented by the (second-order) self-model HW-state X91. Moreover, for agent A the speed factors of T-states X78–X83 are represented by the (second-order) self-model HT-state X92, and for agent B the speed factors of T-states X84–X89 are represented by the (second-order) self-model HT-state X93.

    Table B.7. Role matrix ms for speed factors and iv for initial values.
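    These speed factors enter the generic difference equation of the network-oriented modeling approach, Y(t+Δt) = Y(t) + η_Y [aggimpact_Y(t) − Y(t)] Δt; when η_Y is adaptive, its current value is simply read from the corresponding second-order H-state. A minimal sketch (state names and numbers are illustrative):

```python
def update_state(y, agg_impact, eta, dt):
    """One Euler step of Y(t+dt) = Y(t) + eta * (aggimpact_Y(t) - Y(t)) * dt."""
    return y + eta * (agg_impact - y) * dt

state = {"X54": 0.2, "X90": 0.5}   # a W-state and its HW speed-factor state
eta = state["X90"]                 # adaptive speed factor, per role matrix ms
state["X54"] = update_state(state["X54"], agg_impact=0.8, eta=eta, dt=0.25)
# state["X54"] has moved part of the way toward the aggregated impact 0.8
```

Because the H-state itself evolves over time, the learning speed of the W- and T-states is regulated dynamically, which is what makes the model second-order adaptive.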

    References

    • 1. M. Accetto, J. Treur and V. Villa, An adaptive cognitive-social model for mirroring and social bonding during synchronous joint action, Procedia Comput. Sci. 145 (2018) 3–12. https://doi.org/10.1016/j.procs.2018.11.002
    • 2. J. K. Burgoon, L. Dillman and L. A. Stern, Adaptation in dyadic interaction: Defining and operationalizing patterns of reciprocity and compensation, Commun. Theory 3(4) (1993) 295–316.
    • 3. J. K. Burgoon, L. A. Stern and L. Dillman, Interpersonal Adaptation: Dyadic Interaction Patterns (Cambridge University Press, 1995). https://doi-org.vu-nl.idm.oclc.org/10.1017/CBO9780511720314
    • 4. J. N. Cappella, Mutual influence in expressive behavior: Adult–adult and infant–adult dyadic interaction, Psychol. Bull. 89(1) (1981) 101.
    • 5. G. Dumas, J. Nadel, R. Soussignan, J. Martinerie and L. Garnero, Inter-brain synchronization during social interaction, PLoS One 5(8) (2010) e12166.
    • 6. M. J. Hove and J. L. Risen, It’s all in the timing: Interpersonal synchrony increases affiliation, Soc. Cogn. 27(6) (2009) 949–961.
    • 7. S. L. Koole, W. Tschacher, E. Butler, S. Dikker and T. F. Wilderjans, In sync with your shrink, in Applications of Social Psychology, eds. J. P. Forgas, W. D. Crano and K. Fiedler (Taylor and Francis, Milton Park, 2020), pp. 161–184.
    • 8. F. Ramseyer and W. Tschacher, Nonverbal synchrony in psychotherapy: Coordinated body movement reflects relationship quality and outcome, J. Consult. Clin. Psychol. 79 (2011) 284–295. https://doi.org/10.1037/a0023419a
    • 9. B. Tarr, J. Launay and R. I. M. Dunbar, Silent disco: Dancing in synchrony leads to elevated pain thresholds and social closeness, Evol. Hum. Behav. 37(5) (2016) 343–349.
    • 10. S. S. Wiltermuth and C. Heath, Synchrony and cooperation, Psychol. Sci. 20(1) (2009) 1–5.
    • 11. E. Ferrer and J. L. Helm, Dynamical systems modeling of physiological coregulation in dyadic interactions, Int. J. Psychophysiol. 88(3) (2013) 296–308.
    • 12. R. M. Warner, Cyclicity of vocal activity increases during conversation: Support for a nonlinear systems model of dyadic social interaction, Behav. Sci. 37(2) (1992) 128–138.
    • 13. W. Tschacher, F. Ramseyer and S. L. Koole, Sharing the now in the social present: Duration of nonverbal synchrony is linked with personality, J. Pers. 86(2) (2018) 129–138.
    • 14. S. L. Koole and W. Tschacher, Synchrony in psychotherapy: A review and an integrative framework for the therapeutic alliance, Front. Psychol. 7 (2016) 862.
    • 15. N. Chandra and E. Barkai, A non-synaptic mechanism of complex learning: Modulation of intrinsic neuronal excitability, Neurobiol. Learn. Mem. 154 (2018) 30–36.
    • 16. D. Debanne, Y. Inglebert and M. Russier, Plasticity of intrinsic neuronal excitability, Curr. Opin. Neurobiol. 54 (2019) 73–82.
    • 17. A. H. Williams, T. O’Leary and E. Marder, Homeostatic regulation of neuronal excitability, Scholarpedia 8 (2013) 1656.
    • 18. A. Zhang, X. Li, Y. Gao and Y. Niu, Event-driven intrinsic plasticity for spiking convolutional neural networks, IEEE Trans. Neural Netw. Learn. Syst. (2021). https://doi.org/10.1109/tnnls.2021.3084955
    • 19. M. F. Bear and R. C. Malenka, Synaptic plasticity: LTP and LTD, Curr. Opin. Neurobiol. 4(3) (1994) 389–399.
    • 20. D. O. Hebb, The Organization of Behavior: A Neuropsychological Theory (John Wiley and Sons, New York, 1949).
    • 21. C. J. Shatz, The developing brain, Sci. Am. 267 (1992) 60–67.
    • 22. P. K. Stanton, LTD, LTP, and the sliding threshold for long-term synaptic plasticity, Hippocampus 6(1) (1996) 35–42.
    • 23. W. C. Abraham and M. F. Bear, Metaplasticity: The plasticity of synaptic plasticity, Trends Neurosci. 19(4) (1996) 126–130.
    • 24. B. L. Robinson, N. S. Harper and D. McAlpine, Meta-adaptation in the auditory midbrain under cortical influence, Nat. Commun. 7 (2016) 13442.
    • 25. D. L. Trout and H. M. Rosenfeld, The effect of postural lean and body congruence on the judgment of psychotherapeutic rapport, J. Nonverbal Behav. 4 (1980) 176–190.
    • 26. R. E. Maurer and J. H. Tindall, Effect of postural congruence on client’s perception of counselor empathy, J. Couns. Psychol. 30(2) (1983) 158–163. https://doi.org/10.1037/0022-0167.30.2.158
    • 27. C. F. Sharpley, J. Halat, T. Rabinowicz, B. Weiland and J. Stafford, Standard posture, postural mirroring and client-perceived rapport, Couns. Psychol. Q. 14 (2001) 267–280. https://doi.org/10.1080/09515070110088843
    • 28. R. Feldman, Parent–infant synchrony: Biological foundations and developmental outcomes, Curr. Dir. Psychol. Sci. 16 (2007) 340–345. https://doi.org/10.1111/j.1467-8721.2007.00532.x
    • 29. C. Tichelaar and J. Treur, Network-oriented modeling of the interaction of adaptive joint decision making, bonding and mirroring, in Proc. 7th Int. Conf. Theory and Practice of Natural Computing, TPNC’18, Lecture Notes in Computer Science, Vol. 11324 (Springer Nature, Cham, 2018), pp. 328–343.
    • 30. H. B. Laws, A. G. Sayer, P. R. Pietromonaco and S. I. Powers, Longitudinal changes in spouses’ HPA responses: Convergence in cortisol patterns during the early years of marriage, Health Psychol. 34(11) (2015) 1076.
    • 31. N. Boot, M. Baas, S. V. Gaal, R. Cools and C. K. W. D. Dreu, Creative cognition and dopaminergic modulation of fronto-striatal networks: Integrative review and research agenda, Neurosci. Biobehav. Rev. 78 (2017) 13–23.
    • 32. J. Lisman, K. Cooper, M. Sehgal and A. J. Silva, Memory formation depends on both synapse-specific modifications of synaptic strength and cell-specific increases in excitability, Nat. Neurosci. 21 (2018) 309–314.
    • 33. J. Treur, Temporal factorisation: A unifying principle for dynamics of the world and of mental states, Cogn. Syst. Res. 8(2) (2007) 57–74.
    • 34. J. Treur, Temporal factorisation: Realisation of mediating state properties for dynamics, Cogn. Syst. Res. 8(2) (2007) 75–88.
    • 35. P. U. Tse, The Neural Basis of Free Will: Criterial Causation (MIT Press, Cambridge, 2013).
    • 36. J. Treur, Modeling the emergence of informational content by adaptive networks for temporal factorisation and criterial causation, Cogn. Syst. Res. 68 (2021) 34–52.
    • 37. S. C. F. Hendrikse, J. Treur, T. F. Wilderjans, S. Dikker and S. L. Koole, On becoming in sync with yourself and others: An adaptive agent model for how persons connect by detecting intra- and interpersonal synchrony, Hum.-Centric Intell. Syst. J. (2023), https://www.springer.com/journal/44230 [In sync with yourself and with others: Detection of intra- and interpersonal synchrony within an adaptive agent model, in Face2face: Advancing the Science of Social Interaction (Royal Society, London), https://www.researchgate.net/publication/358964043].
    • 38. J. Treur, Network-Oriented Modeling: Addressing Complexity of Cognitive, Affective and Social Interactions (Springer Nature, 2016).
    • 39. J. Treur, Network-Oriented Modeling for Adaptive Networks: Designing Higher-Order Adaptive Biological, Mental and Social Network Models (Springer Nature, 2020).
    • 40. J. Treur, Modeling multi-order adaptive processes by self-modeling networks (Keynote speech), in Proc. 2nd Int. Conf. Machine Learning and Intelligent Systems, MLIS’20, eds. A. J. Tallón-Ballesteros and C.-H. Chen, Frontiers in Artificial Intelligence and Applications, Vol. 332 (IOS Press, 2020), pp. 206–217.
    • 41. A. R. Damasio, The Feeling of What Happens: Body and Emotion in the Making of Consciousness (Houghton Mifflin Harcourt, 1999).
    • 42. G. Hesslow, Conscious thought as simulation of behaviour and perception, Trends Cogn. Sci. 6 (2002) 242–247.
    • 43. D. Grandjean, D. Sander and K. R. Scherer, Conscious emotional experience emerges as a function of multilevel, appraisal-driven response synchronization, Conscious. Cogn. 17(2) (2008) 484–495.
    • 44. W. R. Ashby, Design for a Brain, 2nd extended edn. (Chapman and Hall, London, 1960).
    • 45. R. F. Port and T. V. Gelder, Mind as Motion: Explorations in the Dynamics of Cognition (MIT Press, Cambridge, MA, 1995).
    • 46. J. Zweerings, K. Sarasjärvi, K. A. Mathiak, J. Iglesias-Fuster, F. Cong, M. Zvyagintsev and K. Mathiak, Data-driven approach to the analysis of real-time fMRI neurofeedback data: Disorder-specific brain synchrony in PTSD, Int. J. Neural Syst. 31(11) (2021) 2150043.
    • 47. A. Olamat, P. Ozel and A. Akan, Synchronization analysis in epileptic EEG signals via state transfer networks based on visibility graph technique, Int. J. Neural Syst. 32(2) (2022) 2150041.
    • 48. G. Liu, L. Tian and W. Zhou, Patient-independent seizure detection based on channel-perturbation convolutional neural network and bidirectional long short-term memory, Int. J. Neural Syst. 32(6) (2022) 2150051.
    • 49. M. Ahmadlou and H. Adeli, Fuzzy synchronization likelihood with application to attention-deficit/hyperactivity disorder, Clin. EEG Neurosci. 42(1) (2011) 6–13.
    • 50. M. Ahmadlou and H. Adeli, Visibility graph similarity: A new measure of generalized synchronization in coupled dynamic systems, Phys. D, Nonlinear Phenom. 241(4) (2012) 326–332.
    • 51. M. Ahmadlou, H. Adeli and A. Adeli, Fuzzy synchronization likelihood-wavelet methodology for diagnosis of autism spectrum disorder, J. Neurosci. Methods 211(2) (2012) 203–209.
    • 52. S. C. F. Hendrikse, J. Treur, T. F. Wilderjans, S. Dikker and S. L. Koole, On the same wavelengths: Emergence of multiple synchronies among multiple agents, in Proc. 22nd Int. Workshop on Multi-Agent-Based Simulation, MABS’21, Lecture Notes in Computer Science, Vol. 13128 (Springer, Cham, 2022), pp. 57–71.
    • 53. S. C. F. Hendrikse, S. Kluiver, J. Treur, T. F. Wilderjans, S. Dikker and S. L. Koole, How virtual agents can learn to synchronize: An adaptive joint decision-making model of psychotherapy, Cogn. Syst. Res. 79 (2023) 138–155.
    • 54. M. McPherson, L. Smith-Lovin and J. M. Cook, Birds of a feather: Homophily in social networks, Annu. Rev. Sociol. 27(1) (2001) 415–444.
    • 55. J. Treur and L. V. Ments (eds.), Mental Models and their Dynamics, Adaptation, and Control: A Self-Modeling Network Modeling Approach (Springer Nature, 2022).
    • 56. G. Canbaloğlu, J. Treur and A. Wiewiora (eds.), Computational Modeling of Multilevel Organisational Learning and its Control Using Self-Modeling Network Models (Springer Nature, 2023).
    • 57. B. Biancardi, S. Dermouche and C. Pelachaud, Adaptation mechanisms in human–agent interaction: Effects on user’s impressions and engagement, Front. Comput. Sci. 3 (2021) 696682.
    • 58. J. Treur, Does this suit me? Validation of self-modeling network models by parameter tuning, in Mental Models and their Dynamics, Adaptation, and Control: A Self-Modeling Network Modeling Approach, eds. J. Treur and L. V. Ments, Chap. 19 (Springer Nature, 2022), pp. 537–565.
    • 59. S. C. F. Hendrikse, J. Treur, T. F. Wilderjans, S. Dikker and S. L. Koole, On the interplay of interpersonal synchrony, short-term affiliation and long-term bonding: A second-order multi-adaptive neural agent model, in Proc. 18th Int. Conf. Artificial Intelligence Applications and Innovations, AIAI’22, eds. I. Maglogiannis et al., Advances in Information and Communication Technology, Vol. 646 (Springer Nature, 2022), pp. 37–57.