
The Paradoxical Impact of Information Privacy on Privacy Preserving Technology: The Case of Self-Sovereign Identities

    https://doi.org/10.1142/S0219877023500256

    Abstract

    The advance of digital technologies brings great benefits but also exposes users to the dark sides of the internet. Preventive mechanisms and privacy-preserving solutions could overcome this challenge. As such, self-sovereign identities (SSIs) provide users with increased control over personal information. However, users tend to neglect their privacy in favor of the most convenient solution. In this paper, we empirically examine how information privacy influences the adoption of SSIs. Our results contradict the existing theory that privacy is critical to the success of identity management (IdM) systems. Analogous to the privacy paradox, the study does not lend empirical support to the claim that perceived privacy has an impact on the adoption of an SSI. These findings contradict the prevailing view of privacy as a key factor for IdM systems and contribute to knowledge on privacy and adoption behavior.

    1. Introduction

    The disclosure of personal information is fundamental to the use of digital services [Forsythe et al. (2006)], yet increases concerns over loss of privacy and identity theft [Hille et al. (2015)]. Users must balance the risks of sharing sensitive information with the benefits of digital services [Dinev and Hart (2006)], but they often willingly disclose personal information despite expressing significant privacy concerns [Smith et al. (2011)]. Recent examples, such as the scandal of Facebook sharing user data with the analytics company Cambridge Analytica, illustrate the impact that information disclosure can have on nations, society, and citizens. These cases highlight the need to solve the problems related to the nature of the internet [Lee (2015)], for instance, through new privacy-preserving technologies and privacy laws [Isaak and Hanna (2018)]. A self-sovereign identity (SSI) is a privacy-preserving technology that enables users to limit the disclosure of their personal information and control their digital identity without losing access to digital services [Mühle et al. (2018); Stokkink and Pouwelse (2018); Hesse and Teubner (2020)]. In other words, an SSI is an identity management (IdM) system that enables users to fully own and manage their digital identities [Dunphy and Petitcolas (2018); Mühle et al. (2018)]. Features of SSIs may provide a solution to privacy concerns by returning control over identity and personal information to users while enabling them to benefit from digital services [Mühle et al. (2018); Acquisti (2008)]. Thus, an SSI can enable users to experience “the convenience and freedom of expression [of anonymity]” [Lee (2015, p. iii)] while benefiting from digital services. An SSI might, therefore, even present a solution to counteract anonymous fraud and crime as key challenges of the Bright Internet [Lee (2015); Lee et al. (2020)]. To leverage its full potential, a critical mass of users and service providers must implement and use an IdM system that builds on an SSI. Such an IdM system must cover multiple digital services from different providers so that it is convenient for users. Unfortunately, only a small number of IdM systems, for example, Facebook’s single sign-on (SSO), have so far achieved widespread adoption across multiple digital services [Hansen et al. (2004); Jensen (2011)].

    Blockchain is regularly a central technological component of an SSI. Since Nakamoto introduced Bitcoin, a peer-to-peer (P2P) electronic cash system [Nakamoto (2008)], blockchain technology has entered the public consciousness, igniting interest in research [Lee et al. (2020)] and practice [Chong et al. (2019)]. Blockchains are distributed databases that serve as a physically decentralized but logically centralized source of truth for information [Alt (2020); Rossi et al. (2019)]. Multiple studies ascribe substantial potential to blockchain in different use cases [Constantinides et al. (2018); Du et al. (2019)]. One application domain that aims to capitalize on the features of blockchain is that of IdM and decentralized identities. The importance of digital identities is increasing due to the growing role of the internet in our daily lives [Crossler and Posey (2017); Hille et al. (2015); Whitley et al. (2014)]. Today, individuals must rely on an intermediating registration authority to use digital services; instead, they could trust a blockchain-based system, in which changes to data are transparent and the transaction history cannot be tampered with [Dunphy and Petitcolas (2018); Hawlitschek et al. (2018)]. Blockchain then pairs identification and authentication and ensures consensus, transparency, and integrity [Mühle et al. (2018); Rieger et al. (2021)], comparable to the preventive cybersecurity measures of the Bright Internet initiative, in which blockchain can serve as an audit trail to prevent the misuse of personal data [Lee et al. (2020); De Filippi et al. (2020)].

    The increasing importance of digital identities means that users must, in turn, spend increasing effort managing their identities, for example, administering different account information and passwords for various digital services. These efforts are to the detriment of the value proposition of digital services [Hansen et al. (2004)]. IdM systems can support users in managing their digital identities and facilitate the use of digital services, and are, thus, an emerging field for practice and research. However, the field of IdM still lacks knowledge about the interplay between identity and technologies, and about the factors affecting user adoption of IdM systems [Kjærgaard and Gal (2009); Halperin (2006); Alkhalifah and D’Ambra (2015)]. Privacy is one factor thought to influence adoption. For example, Hansen et al. [2004] claimed that privacy is the key factor determining the acceptance of IdM systems. In the United Kingdom, for example, users refused to adopt identity cards due to a lack of protection of private data [Landau and Moore (2012)]. On the other hand, Facebook’s SSO mechanism contradicts these observations: it has become the most popular IdM system despite sharing excessive user data with the digital services to which users sign up [Landau and Moore (2012); Buxmann et al. (2014)]. This discrepancy highlights the need for advances in knowledge on IdM. Research must investigate the impact of privacy and privacy concerns through the assessment of individuals’ privacy perceptions [Crossler and Posey (2017); Hansen et al. (2008); Mueller et al. (2006)].

    However, studies involving prospective users of privacy-preserving technologies such as an SSI remain scarce. This scarcity has led to calls for more behavioral research in the IdM domain and studies of these systems from a user perspective [Alkhalifah and D’Ambra (2015); Bélanger and Crossler (2011); Crossler and Posey (2017); Seltsikas and O’Keefe (2010)]. Current research lacks an empirical examination of users’ perceptions of privacy in the context of the adoption of IdM systems. Given these considerations, this study aims to understand the effect of information privacy on the intention to adopt a system for SSIs. The specific goals of this paper are (i) to present empirical and behavioral insights into the adoption of IdM- and blockchain-based systems, (ii) to understand the interplay of privacy and technology acceptance by combining existing theories from these two domains, and (iii) to examine the importance of information privacy from a user perspective against the background of privacy-preserving technologies.

    In this study, we combine and adapt existing theories from technology acceptance and information privacy research to fit the IdM context, specifically the novel context of SSIs. We deduce determinants of the behavioral intention to use an SSI system (since consumers cannot yet use SSI systems at scale, actual use behavior cannot be measured) as well as determinants of perceived information privacy, and develop a research model defining and hypothesizing the relationships between the variables examined. To validate our hypotheses empirically, we operationalized each construct with reflective measurement indicators derived from renowned studies in the information privacy and technology acceptance literature [e.g. Dinev et al. (2013); Krasnova et al. (2010); Pavlou and Fygenson (2006)], and pre-tested the resulting questionnaire with multiple respondents [Kim et al. (2009)]. We developed a structural equation model (SEM) and used the partial least squares (PLS) approach to investigate the relationships in our research model [Benitez et al. (2020); Urbach and Ahlemann (2010)]. Lastly, we analyzed the data with SmartPLS 3 and determined the theoretical and managerial implications of our study.

    The remainder of this study is structured as follows: First, we outline the theoretical foundations of information privacy, IdM, blockchain and SSI, as well as technology acceptance research. Next, we present our research model and our hypotheses. In Sec. 4, we clarify our research method before presenting the results of our survey. In Sec. 6, we discuss the hypotheses, our theoretical contribution, and the managerial implications. Finally, we shed light on the limitations of our work as well as fruitful paths for future research, and conclude the study.

    2. Theoretical Foundations

    2.1. Information privacy

    The internet enables the collection, storage, processing, and utilization of personal information by multiple parties [Smith et al. (2011)]. As companies often misuse personal information, consumers’ privacy has become an important topic of the information age [Pavlou and Fygenson (2006); Smith et al. (2011); Spiekermann et al. (2001)], and information privacy has become an important subject of research [Bélanger and Crossler (2011); Li (2012); Pavlou (2011)].

    Despite the significance of privacy in current research, several discipline-specific definitions and conceptualizations of privacy exist [Smith et al. (2011)]. In the field of law, privacy is seen as a right [Clarke (1999); Warren and Brandeis (1890)]. Social science and information systems (IS) research, in contrast, highlight control of one’s information as an integral part of privacy [Altman (1975); Westin (1967); Schoeman (1984)]. As a result, some researchers equate privacy with control [Smith et al. (2011)]. Other researchers define privacy as a state of restricted access [Schoeman (1984)]. To resolve confusion regarding definitions of privacy, Smith et al. [2011] classified the different approaches into value-based and cognate-based definitions. According to the value-based definition, privacy is a human right and part of society’s norms and values. In contrast, the cognate-based definition relates privacy to an individual’s mind, perceptions, and cognition. A significant stream within the latter category highlights the role of control in the context of privacy. According to definitions by Westin [1967] and Altman [1975], control of transactions in order to reduce privacy risks is central to privacy [Margulis (1977)]. Control becomes particularly relevant in contexts with a high risk of opportunistic behavior and a breach of social contracts [Malhotra et al. (2004)]. Consequently, control plays a major role in information privacy, as consumers often disclose highly sensitive personal data when conducting transactions on the internet [Malhotra et al. (2004)]. Furthermore, control is a crucial factor for decreasing privacy concerns and perceived privacy invasions [e.g. Culnan and Armstrong (1999); Dinev and Hart (2004)]. We ground our understanding of privacy in the cognate-based conceptualization, which has emerged as the dominant stream in IS and so provides a suitable lens for our study [Smith et al. (2011)]. Following research practice [Smith et al. (2011); Karwatzki et al. (2017)], we use the term “privacy” to refer to “information privacy”, even though information privacy is only one element of the larger concept [Bélanger and Crossler (2011)].

    Like the definition of privacy, the measurement of privacy in behavioral research is complex [Smith et al. (2011)]. The most common proxies for privacy are information privacy concerns and perceived information privacy [Dinev et al. (2006, 2013); Xu et al. (2011)]. Researchers often combine these privacy constructs with other privacy-related theories [Smith et al. (2011)]. The most common theory is the privacy calculus [Li (2012)]. Rational individuals perform a risk-benefit analysis (i.e. privacy calculus) to decide whether to disclose personal information [Culnan and Armstrong (1999); Acquisti and Grossklags (2005); Simon (1959)]. Consumers disclose information if they perceive that the overall benefits balance or exceed the perceived risks of disclosure [Dinev and Hart (2006)]. Disclosure incentives for customers can be economic benefits [e.g. Culnan and Armstrong (1999); Xu et al. (2009)], the personalization or increased convenience of services [e.g. Chellappa and Sin (2005); Hann et al. (2007)], or social or relational benefits [e.g. Culnan and Armstrong (1999); Lu et al. (2004)]. Nevertheless, studies show that the fundamental assumption that individuals make rational choices is flawed, and that individuals tend to decide irrationally [Dinev et al. (2015)]. Consequently, the privacy decisions of individuals often seem paradoxical. Users may, for example, state that they have serious concerns about privacy but readily submit their personal information [Smith et al. (2011)]. This phenomenon is called the “privacy paradox”, which describes a dichotomy between attitudes to privacy and actual behaviors [Spiekermann et al. (2001); Norberg et al. (2007); Bélanger and Crossler (2011)]. These opposing reactions can be explained by limited rationality in the decision-making process [Acquisti (2004); Acquisti and Grossklags (2005)], individuals’ tendency to discount future benefits and risks [O’Donoghue and Rabin (2001, 2000)], or situational factors (e.g. factors related to a specific website or online company) that override privacy concerns [Li et al. (2011)].

    2.2. Foundations of identity management

    An identity answers questions such as “who am I?” and “what am I like?” [Chatman et al. (2005)]. Although there is no consistent definition of identity in the academic literature, definitions tend to share three fundamental characteristics [Weick (1995)]. The first is that identities represent or are associated with entities (e.g. individuals or organizations) [Camp (2004); Jøsang and Pope (2005)]. The second, related to the first, is that an identity cannot be related to more than one entity, although an individual might have several identities that emerge in different social contexts, referred to as “partial identities” [Jøsang and Pope (2005); Hansen et al. (2004); Talamo and Ligorio (2001)]. The third characteristic is that identities consist of a set of temporary or permanent individual attributes [Camp (2004)].

    IS research focuses on identities in digital contexts [Whitley et al. (2014)]. These digital identities consist of “a set of claims made by one digital subject about itself or another digital subject” [Cameron (2005, p. 11)] and enable digital subjects to prove that they are who they claim to be and to distinguish between different entities [Mühle et al. (2018)]. As identities are fundamental to participation in online transactions [Mühle et al. (2018); Whitley et al. (2014)], the management of identities receives significant attention. IdM enables identity holders to authenticate, identify, and authorize within an identity domain. As the importance of digital services grows, manual IdM can limit access to the benefits of online transactions [Hansen et al. (2004); Jøsang and Pope (2005)]. IdM systems, therefore, facilitate the management of identities [Dhamija and Dusseault (2008)]. These technologies or programs establish the collection and connection of identifiers with identity attributes, enable a digital service to trust in the identity of a user, and allow the user to engage with these services [Dhamija and Dusseault (2008); Dunphy and Petitcolas (2018); Hansen et al. (2004)]. IdM systems enable seamless transactions, combat fraud, connect information on multiple devices, and enable the development and use of innovative services [Hansen et al. (2008)].

    Several different IdM systems exist, and these have evolved in recent years [Hansen et al. (2004); Allen (2016)]. Centralized IdM models are closed systems in which a single exclusive authority acts as the provider of identifiers and credentials [Dhamija and Dusseault (2008); Jøsang and Pope (2005)]. As a single authority controls and manages identities, such systems can raise user concerns about privacy, security, and trust [Allen (2016); Hansen et al. (2004)]. In contrast, federated identity management (FIM) systems distribute an identity and enable authentication across domains [Landau and Moore (2012); Maler and Reed (2008)]. FIM systems try to reduce the number of identifiers and credentials a user has to manage and to enhance the usability and user experience of digital services [Jøsang and Pope (2005)]. As identity providers need to share the personal information of users within the federated domain, these systems are often a source of significant user concern regarding privacy [Maler and Reed (2008)]. User-centric IdM systems go one step further by enabling clients to use and control their identity across multiple digital services. These systems selectively disclose personal data and credentials for authentication on digital services [Allen (2016); Hansen et al. (2004)]. Since user-centric IdM systems focus on authentication, users must be able to manage their identifiers and credentials effectively, which requires a high level of usability [Jøsang and Pope (2005)]. As an alternative to IdM systems that still rely mostly on central entities, decentralized IdM systems have emerged. They do not rely on a central identity provider but distribute identities across multiple local user repositories [Reed et al. (2018); Ahn et al. (2004); Dhamija and Dusseault (2008)].

    2.3. Blockchain and self-sovereign identities

    Blockchains can serve as the underlying technology for decentralized IdM systems [Mühle et al. (2018)]. The idea behind blockchain is based on the concept of distributed ledger technology (DLT). DLTs avoid centralized data storage by using P2P networks to distribute data across the nodes of a network [Amend et al. (2021); Cho et al. (2021)]. These nodes collectively decide on updates to the stored data. Each node maintains a local copy of all data and can distribute new data across the network [Ziolkowski et al. (2020)]. Blockchains are databases that store transactions on decentralized nodes [Glaser (2017)]. Transactions are validated in the network and combined to form blocks. New blocks are cryptographically chained to their predecessors, which generates a chronological, tamper-resistant order of all transactions: a chain of blocks [Du et al. (2019); Chong et al. (2019)]. Central to the functioning of blockchains are the hashing and linking of transactions, which produce validated and retrospectively tamper-resistant transactions that reduce risks for users [Glaser (2017); Beck et al. (2018); Zhang et al. (2019)]. Blockchains use consensus mechanisms such as Proof-of-Work and Proof-of-Stake to ensure the database’s consistency [Beck et al. (2018); Lock et al. (2020)]. Newer generations of DLTs, such as Ethereum, also facilitate executable programs in the form of so-called smart contracts: programs that are triggered by an external event and run on every node of the network [Glaser (2017); Guggenberger et al. (2021); Lock et al. (2020)].
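    To make the chaining mechanism concrete, the following minimal Python sketch (illustrative only; real blockchains add consensus, P2P distribution, and Merkle trees) shows how each block commits to its predecessor through a hash, so that tampering with any historical transaction invalidates every later link:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    # Hash the block's canonical JSON serialization with SHA-256.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, transactions: list) -> None:
    # Each new block stores the hash of its predecessor, forming the chain.
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})

def is_valid(chain: list) -> bool:
    # Recompute every link; tampering with an earlier block breaks all later links.
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain: list = []
append_block(chain, ["Alice pays Bob 5"])
append_block(chain, ["Bob pays Carol 2"])
assert is_valid(chain)
chain[0]["transactions"][0] = "Alice pays Bob 500"  # tamper with history
assert not is_valid(chain)
```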

    Decentralized IdM systems are an alternative to user-centric IdM systems; they distribute identifiers across multiple user repositories [Ahn et al. (2004); Dhamija and Dusseault (2008); Reed et al. (2018)]. Blockchains serve as a technological infrastructure in decentralized IdM systems, extended by the concept of Decentralized Identifiers (DIDs). A DID is a persistent identifier that represents an entity within such systems and is not governed by a central authority. DIDs support authentication via cryptographic proofs (e.g. digital signatures) [W3C (2019a); Reed et al. (2018)] and serve as identifiers for verifiable claims (VCs), which are claims verified through the digital signature of an identity provider [Mühle et al. (2018)]. The World Wide Web Consortium (W3C) conceptualized DIDs following privacy-by-design requirements. Hence, VCs capitalize on DIDs to enhance the security and privacy of a person’s identity [W3C (2019a)].
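    The relationship between a DID and a VC can be sketched in a few lines of code. This is a simplified illustration rather than the full W3C data model: the dictionary fields are stand-in names, and it assumes the third-party Python package cryptography for Ed25519 signatures:

```python
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

issuer_key = Ed25519PrivateKey.generate()            # issuer's signing key
issuer_pub = issuer_key.public_key()                 # published for verifiers

claim = {
    "id": "did:example:123456789abcdef",             # the holder's DID (hypothetical)
    "claim": {"degree": "BSc Information Systems"},  # attested attribute
    "issuer": "did:example:university",
}
payload = json.dumps(claim, sort_keys=True).encode()
signature = issuer_key.sign(payload)                 # the VC: claim + issuer signature

# A verifier checks the issuer's signature instead of consulting a
# central registration authority.
try:
    issuer_pub.verify(signature, payload)
    print("credential is valid")
except InvalidSignature:
    print("credential was tampered with")
```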

    An SSI is one such concept for IdM and is regularly based on blockchain, though approaches exist that do not necessarily require a blockchain for an SSI [Hoess et al. (2022); Sedlmeir et al. (2021)]. An SSI enables users to fully own and manage their digital identities [Dunphy and Petitcolas (2018); Mühle et al. (2018)]. An SSI is based on three core principles — the security, controllability, and portability of identities [Allen (2016); Tobin and Reed (2016)] — which are achieved and maintained using blockchain. The technology replaces the registration authority, pairing identification and authentication based on a public key infrastructure (PKI) in which the public key is stored as a value of the identifier on the blockchain [Mühle et al. (2018)]. Blockchain assures consensus, transparency, and integrity for transactions and thus provides elements essential to IdM systems [Dunphy and Petitcolas (2018)]. Identity information can be referenced on the blockchain without being owned by a single authority. Furthermore, changes to data are made transparent, and historical activity cannot be tampered with. Blockchain also increases the inclusion of people whose access to digital services is restricted and can reduce costs. Lastly, users gain increased control over their digital identifiers and can minimize the disclosure of personal data [Dunphy and Petitcolas (2018)]. An SSI uses zero-knowledge proofs (ZKPs), cryptographic protocols that prove, statistically, that an assertion is valid without revealing additional information [Goldreich et al. (1991); Sedlmeir et al. (2021)]. ZKPs provide three features in the context of digital identities [W3C (2019b)]. Firstly, they combine multiple VCs from several issuers into a single, verifiable presentation without revealing VCs or identifiers to the verifier. Secondly, they allow users to minimize data disclosure while retaining full control over their own identity [Sovrin (2018); Mühle et al. (2018)]. Lastly, they increase the flexibility of VCs, as previously issued VCs can be adapted to the requirements of the verifier and so do not need to be reissued [W3C (2019b)]. Thus, the critical components of an SSI that enhance the technology’s privacy-preserving character are blockchains, DIDs, VCs, PKI, and ZKPs. Figure 1 provides an overview of the interplay of these components.

    Fig. 1. Interplay of DIDs, VCs, and ZKPs in an SSI.

    As noted earlier, blockchain acts as a tamper-resistant registration authority [Mühle et al. (2018)]. Due to the privacy risks of storing personal data on the blockchain, users keep their private information in local storage. They can use this information to make an identity claim that needs to be verified by an issuer. Furthermore, each user manages an indefinite number of DIDs stored in a personal wallet [W3C (2019a)]. Based on PKI, the user can verify ownership of a specific DID using the corresponding secret key. To verify a specific claim, the user presents a DID and the claim to an issuer. As an approved authority with a public identifier, the issuer does not necessarily require several DIDs. A user can then present the VC–DID combination to a verifier (e.g. to gain access to a digital service). To prevent the verifier and the issuer from correlating a user’s DIDs [W3C (2019b)], which would pose a significant risk to the privacy and security of identities, the user transfers the VC from the original DID to another DID in the wallet, using a ZKP. This procedure is called pairwise DIDs; it decreases the privacy risk for users while enabling them to reuse a VC [W3C (2019b)]. Pairwise DIDs enable users to remain anonymous, representing one extreme on the SSI’s spectrum of privacy, with ‘totally identifiable’ at the other extreme. The need for a broad spectrum of privacy, which SSIs would facilitate, reflects the varying degrees of privacy required by users in different situations [W3C (2019a)].
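    The zero-knowledge mechanics behind such proofs of key ownership can be illustrated with the classic Schnorr identification protocol. The following toy Python sketch lets a holder prove knowledge of the secret key behind a public (e.g. DID-registered) key without revealing it; the group parameters are deliberately tiny for readability, and this is not the production-grade ZKP scheme of any particular SSI implementation:

```python
import secrets

# Schnorr proof of knowledge of a discrete logarithm: the prover shows it
# knows x with y = g^x mod p without revealing x. Illustrative parameters
# only; real deployments use groups of >= 256-bit order.
p, q, g = 2039, 1019, 4       # p = 2q + 1; g generates the order-q subgroup

x = secrets.randbelow(q)      # holder's secret key
y = pow(g, x, p)              # public key, e.g. registered for a DID

r = secrets.randbelow(q)      # prover: fresh random nonce
t = pow(g, r, p)              # prover: commitment

c = secrets.randbelow(q)      # verifier: random challenge

s = (r + c * x) % q           # prover: response; leaks nothing since r is uniform

# Verifier accepts iff g^s == t * y^c (mod p).
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("key ownership proven without revealing the secret key")
```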

    2.4. Technology acceptance research

    A major area within IS research examines the factors that influence individuals’ decisions to adopt particular innovations [Rogers (1983)]. These factors need to be considered at various stages of the technology and product life cycle [Mathieson (1991)]. Therefore, researchers developed so-called technology acceptance models [Venkatesh et al. (2003)]. Based on Fishbein and Ajzen’s Theory of Reasoned Action [Fishbein and Ajzen (1975)], Davis [1985] proposed one of the first such models, the Technology Acceptance Model (TAM). The TAM investigates individuals’ decision-making to explain the later success of an IS. Using the TAM, Davis [1985] identified Perceived Usefulness and Perceived Ease of Use as factors affecting Attitude toward Using, which is embedded in a complex relationship between external variables and potential system usage [Marangunić and Granić (2015)]. Due to the prominence of this research field, researchers developed several other frameworks comprising different constructs to explain a user’s intention to adopt a technology. Venkatesh et al. [2003] reviewed eight of these models and unified them into the comprehensive Unified Theory of Acceptance and Use of Technology (UTAUT). Given the relevance of these models — particularly UTAUT — these frameworks were extended and integrated into new contexts, and researchers developed enhanced versions of the TAM and UTAUT [Venkatesh et al. (2012)].

    3. Research Model and Hypotheses

    3.1. Research model

    To identify the factors influencing the adoption of an SSI, we combined and adapted two theories whose research models are suitable for exploring the influence of information privacy on the adoption of a system for SSIs. Technology acceptance and privacy are thus the two central theories underlying our study. Our model, based on UTAUT2 and the privacy framework of Dinev et al. [2013], determines Perceived Privacy via a control-risk calculus. UTAUT2 is a popular framework for examining technology acceptance by users in different domains [Venkatesh et al. (2012)]. The key elements of UTAUT2 are Performance Expectancy, Effort Expectancy, Social Influence, Facilitating Conditions, Habit, Hedonic Motivation, Price Value, Use Behavior, and Behavioral Intention. We excluded Use Behavior because Behavioral Intention explains more of the variance of a model and because customers cannot yet use an SSI-based IdM [Venkatesh et al. (2012); Weinhard et al. (2017)]. We eliminated Habit, Hedonic Motivation, and Price Value, as these constructs require an established technology and previous experience of its use [Salinas Segura and Thiesse (2015)]. As the privacy perspective cannot be fully captured using UTAUT2, we integrated that perspective into our model by applying the model proposed by Dinev et al. [2013], which determines Perceived Privacy through a control-risk calculus. Dinev et al. [2013] strongly recommend that future research clarify, enhance, and develop this model. Thus, we adopted additional relationships for Perceived Benefits and Information Sensitivity [Kehr et al. (2015)] and altered the role of Regulatory Expectations. Regulation is a proxy control mechanism ensuring the user’s privacy [Xu et al. (2012)]. Hence, we added a relationship between Regulatory Expectations and Perceived Privacy. Altering the role of Regulatory Expectations enables the comparison of a market-based approach to protecting customers’ privacy with a regulatory approach [Berg et al. (2017)]. We also examined the effect that Regulatory Expectations have on the acceptance of an SSI.

    3.2. Hypotheses

    Performance Expectancy

    Performance Expectancy is the strongest predictor of Behavioral Intention and refers to the gains users expect from the use of a new technology [Venkatesh et al. (2003)]. Passwords remain the dominant authentication method on the internet, but they are inconvenient and can lead to security problems [Neumann (1994); Recordon and Reed (2006); Roßnagel et al. (2014)]. An SSI can increase a user’s performance by providing a single sign-on mechanism that enables easier access to digital services while securing the user’s privacy [Dunphy and Petitcolas (2018)]. If users expect a higher performance gain from an SSI, they are more willing to adopt the IdM system [Venkatesh et al. (2003)].

    H1:

    Performance Expectancy positively affects Behavioral Intention.

    Effort Expectancy

    Effort Expectancy reflects the “degree of ease associated with the use of the system” [Venkatesh et al. (2003, p. 450)] and is especially important in the early stages of a technology [Venkatesh et al. (2003)]. Due to complex privacy and security requirements, designing easy-to-use IdM systems is challenging [Roßnagel et al. (2014)]. However, usability is a critical factor for the success of such systems [Jøsang et al. (2007); Dhamija and Dusseault (2008)], and SSI providers need to align usability with privacy and security requirements to achieve adoption.

    H2:

    Effort Expectancy positively affects Behavioral Intention.

    Social Influence

    Social Influence includes the perceived impact of a user’s social surroundings on their Behavioral Intention [Venkatesh et al. (2003)]. The effect of the social environment is especially significant for new technologies [Venkatesh et al. (2003)]. Research also shows that the social environment influences the privacy decisions of individuals [Laufer and Wolfe (1977)]. As an SSI is a new concept based on emerging technology that aims to secure a user’s privacy (i.e. blockchain), social influence can have a positive effect on a user’s decision to adopt an SSI.

    H3:

    Social Influence positively affects Behavioral Intention.

    Facilitating Conditions

    The perceived availability of support in the use of a new technology varies significantly across consumer settings [Venkatesh et al. (2003, 2012)]. An SSI is based on blockchain and sophisticated cryptographic techniques [Mühle et al. (2018)]. Thus, providers of an SSI cannot expect every customer to have a deep understanding of these concepts, and they need to offer support to their customers. Consumers with access to assistive resources are more likely to intend to use a technology [Venkatesh et al. (2012)].

    H4:

    Facilitating Conditions positively affects Behavioral Intention.

    Perceived Privacy

    Perceived Privacy implies a cognitive calculus resulting in a perceived state of privacy in a specific situation [Kehr et al. (2015); Schoeman (1984)]. Research shows that privacy concerns and the privacy calculus can influence the adoption of technologies [e.g. Angst and Agarwal (2009); Li et al. (2016); Dinev et al. (2006)]. Since privacy is the key factor determining the acceptance of IdM systems, these systems need to acknowledge the users’ privacy and enable users to control their information disclosure [Hansen et al. (2004)]. The conceptualization of SSIs follows privacy by design and information minimization principles, and enables users to control their information privacy [Sovrin (2018); Berg et al. (2017)]. Consequently, individuals expecting an SSI to increase their level of privacy are more willing to use the technology.

    H5:

    Perceived Privacy positively affects Behavioral Intention.

    Perceived Information Control

    The perceived ability of individuals to control their information disclosure can be supported by privacy-preserving technologies such as SSIs. Dinev et al. [2013] distinguish between control over information disclosure and control over shared information. An SSI enables control over the disclosure of information by allowing users to share their information selectively [Mühle et al. (2018)]. The combined use of ZKPs and VCs enables control over shared information based on the two concepts of Zero-Knowledge Set Membership (ZKSM) [Ma et al. (2022)] and Zero-Knowledge Range Proofs (ZKRP) [Günsay et al. (2021)]. In ZKSM, the information within a VC is present in an unordered fashion (e.g. the list of students enrolled at a university), while in ZKRP, the information must be present in an ordered fashion (e.g. the minimum age of individuals to attend events) [Morais et al. (2019)]. The proofs in both ZKSM and ZKRP can be integer- or binary-based [Morais et al. (2019)]. In an integer-based proof, all elements of the (mostly unordered) data within the VC are signed; the verifier’s knowledge of this signature (or a sum resulting from the signatures) is then sufficient for the proof. In a binary-based proof, so-called secrets are used instead of signatures; these are split into individual bits and must be supplied by the verifier to provide the proof. As a result, users do not need to share actual personal data but only a DID and a VC proving that the actual requirement is fulfilled. Thus, users do not have to fear the misuse of disclosed personal information, as the information is anonymized, pseudonymized, and untraceable. Hence, we assume an SSI increases users’ perceived control over their information [Culnan and Armstrong (1999); Dinev et al. (2013); Sheehan and Hoy (2000)].
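    To give a flavor of set-membership proofs, the following Python sketch uses a Merkle tree, a simpler building block than full ZKSM: a verifier holding only the tree root can check that one entry belongs to the committed set (e.g. a university’s enrollment list) without seeing any other entry. Unlike true ZKSM, the proved entry itself is revealed; production SSI stacks use commitment-based zero-knowledge schemes instead:

```python
import hashlib

H = lambda b: hashlib.sha256(b).digest()

def build_tree(leaves):
    # Hash the entries, then pair-wise hash upward; levels[-1][0] is the root.
    levels = [[H(x.encode()) for x in leaves]]
    while len(levels[-1]) > 1:
        lvl = levels[-1]
        if len(lvl) % 2:
            lvl = lvl + [lvl[-1]]          # duplicate the last node if odd
        levels.append([H(lvl[i] + lvl[i + 1]) for i in range(0, len(lvl), 2)])
    return levels

def prove(levels, index):
    # Collect the sibling hash at each level, plus our left/right position.
    path = []
    for lvl in levels[:-1]:
        if len(lvl) % 2:
            lvl = lvl + [lvl[-1]]
        path.append((lvl[index ^ 1], index % 2))
        index //= 2
    return path

def verify(root, entry, path):
    node = H(entry.encode())
    for sibling, node_is_right in path:
        node = H(sibling + node) if node_is_right else H(node + sibling)
    return node == root

students = ["alice", "bob", "carol", "dave"]        # the committed set
levels = build_tree(students)
proof = prove(levels, students.index("carol"))
assert verify(levels[-1][0], "carol", proof)        # carol is in the set
assert not verify(levels[-1][0], "mallory", proof)  # mallory is not
```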

    H6:

    Perceived Information Control positively affects Perceived Privacy.

    Tactics of Information Control

    Customers use three tactics to control the amount and accuracy of disclosed information: anonymity, secrecy, and confidentiality [Zwick and Dholakia (2004)]. Anonymity (and pseudonymity) enables users to conceal their true identity by creating various identity representations to hide their identity and prevent tracking [Zwick and Dholakia (2004); Turkle (1997)]. An SSI follows a comparable approach and enables the customer to create several minimized identities. Furthermore, it uses pairwise DIDs to avoid traceability while guaranteeing the validity of the identity claim [W3C (2019a, 2019b)]. Secrecy is defined as the concealment of personal information to prevent a digital representation of an individual [Tefft (1980); Zwick and Dholakia (2004)]. An SSI achieves secrecy using pairwise DIDs and ZKPs. For instance, users can state that they are eligible to buy restricted products without sharing their real age. Lastly, confidentiality is the externalization of limited but highly accurate personal information, and implies preventing unauthorized access by third parties to this information [Zwick and Dholakia (2004); Camp (1999)]. Service providers store information in databases, which can be attacked by hackers [Hille et al. (2015)]. Hence, consumers need to trust the organization to securely store information in the provider’s database [Camp (1999); Dinev et al. (2013)]. With an SSI, consumers do not need to rely on such trust, since users share DIDs solely with the service provider and determine access to their personal data [Sovrin (2019)]. In the event of a data breach affecting information-storing resources (e.g. user wallets), users can render their DIDs unusable for third parties [Sovrin (2018)]. These three tactics of IdM are essential for users to limit the disclosure of their information. Thus, we conclude that:

    H7:

    Anonymity positively affects Perceived Information Control.

    H8:

    Secrecy positively affects Perceived Information Control.

    H9:

    Confidentiality positively affects Perceived Information Control.

    Perceived Risk

    Perceived Risk is the fear of negative outcomes as a result of information disclosure, and implies a loss of control over personal information [Dinev and Hart (2006); Dinev et al. (2013)]. Risk is provoked by uncertainty, discomfort, or anxiety [Dowling and Staelin (1994)] as a result of potential opportunistic behavior on the part of organizations, such as unauthorized access, theft [Rindfleisch (1997)], and the sharing or sale of personal information [Budnitz (1997)]. Studies show that risk determines users’ information and identity disclosure, perceived privacy, and privacy concerns [Dinev and Hart (2004); Dinev et al. (2013); Krasnova et al. (2009)].

    H10:

    Perceived Risk negatively affects Perceived Privacy.

    Perceived Benefits of Information Disclosure

    Perceived Benefits of Information Disclosure is based on the notion of the privacy calculus and represents the perception of a positive net outcome of the assessment of the risks and benefits of information disclosure [Culnan and Bies (2003); Dinev et al. (2013)]. In return for disclosing information to digital services, consumers receive monetary or social benefits, personalized services, or increased convenience [Forsythe et al. (2006); Hann et al. (2007); Lu et al. (2004)]. These benefits can exceed the negative consequences of information disclosure and lead to an enhanced perceived state of privacy in a given situation [Smith et al. (2011); Kehr et al. (2015)]. Chellappa and Sin [2005] even demonstrated that the benefits of personalization weigh almost twice as heavily as consumers’ privacy concerns. Thus, an individual’s perceived benefits also dampen the perception of risks associated with information disclosure [Kehr et al. (2015)].

    H11:

    Perceived Benefits of Information Disclosure negatively affects Perceived Risk.

    H12:

    Perceived Benefits of Information Disclosure positively affects Perceived Privacy.

    Information Sensitivity

    The general disclosure of information does not necessarily raise privacy concerns. Rather, it may be the sensitivity of information that determines a user’s privacy concerns and leads to paradoxical privacy-related behavior [Mothersbaugh et al. (2012)]. Information Sensitivity involves a cognitive and rational assessment and depends on personal characteristics, cultural backgrounds, legislative settings, and the specific context [Bansal et al. (2010); Bellman et al. (2004); Kehr et al. (2015)]. Hence, a user’s perception of the sensitivity of a piece of information determines the impact on perceived privacy, privacy concerns, or the disclosure of private data [Kam and Chismar (2006); Malhotra et al. (2004)]. Empirical studies show that higher sensitivity of personal information intensifies Perceived Risk and reduces the Perceived Benefit of Information Disclosure [Malhotra et al. (2004); Mothersbaugh et al. (2012)].

    H13:

    Information Sensitivity negatively affects Perceived Benefits of Information Disclosure.

    H14:

    Information Sensitivity positively affects Perceived Risk.

    Importance of Information Transparency

    From a user perspective, organizational approaches to handling sensitive information regularly lack transparency. Consequently, individuals emphasize being informed by organizations about the collection and processing of their personal information [Dinev et al. (2013); Waldo (2007)]. Organizations can increase their transparency and enable customers to assess their privacy risk by publishing privacy policy statements that aggregate the organization’s privacy practices [Awad (2006)]. Opaque privacy practices increase the perceived risks and individuals’ fear of adverse consequences [Pitkänen and Tuunainen (2012)]. They also reduce the willingness of customers with high demand for transparency to disclose their personal information [Awad (2006); Karwatzki et al. (2017)].

    H15:

    Importance of Information Transparency positively affects Perceived Risk.

    Regulatory Expectations

    Researchers distinguish three approaches to protecting information privacy: individual self-protection, industry self-regulation, and government legislation [Culnan and Bies (2003); Tang et al. (2008); Xu et al. (2009)]. An SSI is a market-based approach to individual self-protection, offering an alternative to privacy regulations [Zheng et al. (2018)]. Regulatory approaches, such as the General Data Protection Regulation (GDPR) in the European Union, can similarly realize the fundamental principles of SSIs, namely privacy by design, minimization, and portability [Allen (2016)], and enable individuals to exercise proxy control, and diminish privacy concerns and perceived risks [Berg et al. (2017); Dinev et al. (2013); Xu (2007)]. Individuals tend to demand more rigorous privacy regulations if they perceive that alternative approaches alone do not preserve their privacy [Smith et al. (2011)]. However, their limited resources mean that users often struggle to evaluate their protection [Lwin et al. (2007)]. In contrast, regulators have the required resources, meaning they are most able to protect individuals’ privacy. This is particularly apparent in their ability to punish those responsible for privacy breaches [Spiro and Houghteling (1981)]. Thus, effective privacy regulations are an alternative to an SSI and would decrease the willingness to adopt SSI systems.

    H16:

    Regulatory Expectations positively affects Perceived Privacy.

    H17:

    Regulatory Expectations negatively affects Behavioral Intention.

    4. Research Methodology

    4.1. Measurement development

    To validate our research hypotheses, we developed a survey, in English, using constructs and items from the privacy and technology acceptance literature. We adapted all items to our specific research context of digital identities, and modified items of control-related constructs and Perceived Privacy to support the use of an SSI. All items were built as reflective indicators and measured using 7-point Likert scales ranging from totally disagree (1) to totally agree (7). We incorporated multiple additional indicators for most of our constructs to improve reliability.

    The introduction provided respondents with basic knowledge, briefly explaining identity attributes, the difference between centralized and decentralized identities, SSI, and the increased control over personal data enabled by an SSI. All respondents were asked to reflect on the use of an SSI from a mandatory perspective. We also added three control questions to verify that our respondents correctly understood these descriptions. Respondents who answered any of these questions incorrectly were excluded from the data analysis to minimize differing perceptions of our constructs. Lastly, to compare descriptive statistics, we added questions collecting demographic data from our respondents.

    Following Kim et al. [2009] and Urbach and Ahlemann [2010], we conducted a pre-test to validate our reflective measurement model in terms of reliability and validity. In total, we collected 40 complete responses, of which 30 respondents answered the control questions correctly. We used SmartPLS 3 to evaluate our pre-test and followed the procedure recommended by Hair et al. [2017] to trim down our questionnaire. As a result, we eliminated selected indicators as well as the constructs Importance of Information Transparency and Perceived Risk and their corresponding hypotheses as we could not ensure validity and reliability without neglecting content validity. Appendix A provides a table with all constructs, their corresponding items (those excluded are marked gray) as well as the source of these items. Figure 2 illustrates our final research model with the remaining hypotheses.

    Fig. 2. Final research model.

    4.2. Data collection

    To gather a diverse sample of respondents, we distributed our survey on several social networks, internal mailing lists, chats, forums of blockchain communities, and survey exchange platforms, as well as Amazon Mechanical Turk. In total, we collected 495 responses, of which 354 were complete. We eliminated data points where respondents did not answer our control questions correctly. In the end, we retained 240 valid responses. Of our respondents, 56.30% were male and 43.30% were female; their average age was 30.42 years. Nearly half held a bachelor’s degree, 38.80% were students, and 45.00% were full-time employees (cf. Table 1).

    Table 1. Descriptive statistics.

    Demographic variable | Category | Value
    Age | Minimum | 15
    Age | Maximum | 77
    Age | Mean | 30.42
    Age | Median | 27
    Age | Standard deviation | 10.07
    Gender | Male | 56.30%
    Gender | Female | 43.30%
    Gender | Other | 0.40%
    Education | No schooling completed | 1.30%
    Education | High school graduate | 26.70%
    Education | Bachelor’s degree | 49.20%
    Education | Master’s degree | 20.00%
    Education | Doctorate degree | 2.90%
    Employment | Employed full time | 45.00%
    Employment | Employed part time | 11.30%
    Employment | Unemployed, looking for work | 2.10%
    Employment | Unemployed, not looking for work | 1.30%
    Employment | Retired | 1.30%
    Employment | Student | 38.80%
    Employment | Disabled | 0.40%

    There is little consensus among researchers as to the required sample size for SEM-PLS. In general, PLS is favored by many researchers as it does not require a large sample, and because the sample size is independent of the model’s complexity [Hair et al. (2017); Cassel et al. (1999)]. For a survey with six constructs determining a dependent variable, a significance level of p = 0.050, and an R2 of 0.250, a minimum of n = 130 responses is required [Hair et al. (2017)]. Other researchers recommend conducting a G*Power analysis to determine the required sample size [Faul et al. (2009)]. The a priori G*Power analysis (effect size f2 = 0.111, alpha = 0.050, power = 0.800, 12 predictors) reports a required sample size of n = 167. Other researchers state that the requirements for SEM-PLS are comparable to those of covariance-based approaches (n > 150) and recommend using bootstrapping to assess the significance levels of the sample and the standard errors [Urbach and Ahlemann (2010)]. Hence, we fulfill the recommendations for the required sample size for SEM-PLS.
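    An a priori computation of this kind can be reproduced with SciPy’s noncentral F distribution. This is a minimal sketch under the G*Power convention that the noncentrality parameter is lambda = f2 * n for a multiple regression F test with k predictors; it should return a value close to the reported n = 167:

```python
from scipy.stats import f as f_dist, ncf

def required_sample_size(f2=0.111, alpha=0.05, target_power=0.80, k=12):
    # Smallest n such that the F test of R^2 = 0 with k predictors
    # reaches the target power (noncentrality lambda = f2 * n).
    n = k + 2
    while True:
        df1, df2 = k, n - k - 1
        crit = f_dist.ppf(1 - alpha, df1, df2)  # critical F under H0
        power = ncf.sf(crit, df1, df2, f2 * n)  # power under H1
        if power >= target_power:
            return n, round(power, 3)
        n += 1

print(required_sample_size())  # expected to be close to n = 167
```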

    5. Data Analysis

    5.1. Measurement model

    To maximize the explanatory power of our model, we evaluated our data in terms of reliability as well as convergent and discriminant validity. We primarily followed the general recommendations of Hair et al. [2017] and Benitez et al. [2020], supported by the guidelines of Urbach and Ahlemann [2010] for IS specifics.

    To examine internal consistency reliability, we used composite reliability (CR) [Urbach and Ahlemann (2010)]. All our constructs displayed a desirable CR between 0.700 and 0.950 (cf. Table 2) [Nunnally and Bernstein (2008)]. Next, we assessed convergent validity at the indicator and construct levels. We investigated the indicators’ outer loadings to examine indicator reliability. Outer loadings higher than 0.708 are favorable; indicators with outer loadings between 0.400 and 0.700 may be retained [Hair et al. (2017)]. Our data showed that all values were higher than 0.400. Perceived Benefit, Social Influence, and Information Sensitivity had at least one indicator between 0.600 and 0.700, with indicator 3 (IS3) of Information Sensitivity having the lowest value of 0.495. Nevertheless, we concluded that indicator reliability was given. We used the average variance extracted (AVE) with a threshold of 0.500 to evaluate convergent validity at the construct level [Fornell and Larcker (1981); Urbach and Ahlemann (2010)]. Our constructs displayed an AVE between 0.517 and 0.811, indicating convergent validity. Thus, all of our indicators and constructs imply convergent validity.
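    Both metrics follow directly from the standardized outer loadings. A minimal sketch (the loadings shown are hypothetical, not our measurement data):

```python
import numpy as np

def composite_reliability(loadings):
    # CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)
    lam = np.asarray(loadings)
    num = lam.sum() ** 2
    return num / (num + (1 - lam ** 2).sum())

def average_variance_extracted(loadings):
    # AVE = mean of the squared standardized loadings.
    lam = np.asarray(loadings)
    return (lam ** 2).mean()

loadings = [0.82, 0.77, 0.90]                          # hypothetical indicators
print(round(composite_reliability(loadings), 3))       # 0.870 > 0.700 threshold
print(round(average_variance_extracted(loadings), 3))  # 0.692 > 0.500 threshold
```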

    Table 2. Reliability and validity.

    Construct | Cronbach’s alpha | Composite reliability | Average variance extracted (AVE)
    ANYT | 0.888 | 0.923 | 0.749
    BEN | 0.833 | 0.890 | 0.671
    BI | 0.895 | 0.929 | 0.769
    CFDT | 0.859 | 0.905 | 0.703
    EE | 0.846 | 0.895 | 0.681
    FC | 0.718 | 0.841 | 0.640
    IS | 0.752 | 0.749 | 0.517
    LAW | 0.838 | 0.903 | 0.756
    PCTL | 0.916 | 0.937 | 0.748
    PE | 0.923 | 0.940 | 0.723
    PRIV | 0.883 | 0.928 | 0.811
    SCRT | 0.864 | 0.908 | 0.711
    SI | 0.840 | 0.889 | 0.620

    To examine the degree of difference between the constructs, we assessed discriminant validity using the Fornell–Larcker criterion, which requires a latent variable (LV) to share more variance with its assigned indicators than with any other LV [Urbach and Ahlemann (2010); Fornell and Larcker (1981)]. Remarkably, discriminant validity was not established for Perceived Control with Confidentiality and Perceived Privacy. Thus, we examined the inter-item correlation to identify highly correlating indicators. Subsequently, we eliminated the indicators PCTL1 and PCTL2 of Perceived Control, establishing discriminant validity for all constructs (cf. Table 3).

    Table 3. Fornell–Larcker criterion.

    Construct | ANYT | BEN | BI | CFDT | EE | FC | IS | LAW | PCTL | PE | PRIV | SCRT | SI
    ANYT | 0.865
    BEN | 0.444 | 0.819
    BI | 0.453 | 0.314 | 0.877
    CFDT | 0.736 | 0.402 | 0.516 | 0.839
    EE | 0.424 | 0.372 | 0.563 | 0.530 | 0.825
    FC | 0.340 | 0.342 | 0.490 | 0.463 | 0.785 | 0.800
    IS | 0.099 | −0.153 | 0.091 | 0.097 | 0.065 | 0.022 | 0.719
    LAW | 0.070 | −0.005 | 0.221 | 0.225 | 0.324 | 0.309 | 0.249 | 0.869
    PCTL | 0.722 | 0.406 | 0.565 | 0.825 | 0.568 | 0.495 | 0.163 | 0.223 | 0.865
    PE | 0.583 | 0.444 | 0.690 | 0.615 | 0.581 | 0.536 | 0.204 | 0.202 | 0.630 | 0.850
    PRIV | 0.727 | 0.427 | 0.545 | 0.831 | 0.556 | 0.495 | 0.057 | 0.223 | 0.840 | 0.589 | 0.900
    SCRT | 0.685 | 0.312 | 0.565 | 0.773 | 0.518 | 0.454 | 0.179 | 0.264 | 0.810 | 0.556 | 0.837 | 0.843
    SI | 0.422 | 0.441 | 0.633 | 0.492 | 0.557 | 0.591 | 0.047 | 0.231 | 0.492 | 0.565 | 0.525 | 0.515 | 0.788
    Note: Diagonal elements are the square roots of the AVEs (cf. Table 2).
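    The Fornell–Larcker check in Table 3 can be automated: each diagonal entry (the square root of the construct’s AVE) must exceed every correlation in its row and column. A minimal Python sketch, applied to the first three constructs of Table 3:

```python
import numpy as np

def fornell_larcker_ok(corr, sqrt_ave):
    # Discriminant validity holds if each construct's sqrt(AVE) exceeds its
    # correlations with all other constructs.
    corr = np.asarray(corr, dtype=float)
    off = corr - np.diag(np.diag(corr))  # zero out the diagonal
    return all(sqrt_ave[i] > max(abs(off[i]).max(), abs(off[:, i]).max())
               for i in range(len(sqrt_ave)))

# ANYT, BEN, BI from Table 3 (lower triangle mirrored); the diagonal of
# Table 3 equals the square roots of the AVEs in Table 2.
corr = [[0.865, 0.444, 0.453],
        [0.444, 0.819, 0.314],
        [0.453, 0.314, 0.877]]
sqrt_ave = [0.865, 0.819, 0.877]
print(fornell_larcker_ok(corr, sqrt_ave))  # True for this subset
```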

    5.2. Structural model

    We applied partial least squares structural equation modeling (PLS-SEM) to test our research model using SmartPLS 3 [Hair et al. (2017); Urbach and Ahlemann (2010)]. PLS is a popular statistical approach within the IS discipline, as it requires neither a relatively large sample size nor normally distributed data to test SEMs with a substantial number of constructs, and it is especially suited to theory development [Urbach and Ahlemann (2010)]. Figure 3 displays the results of our structural model.

    Fig. 3. Research model with path coefficients, t values, and significance levels.

    We first investigated collinearity using VIF values with a threshold of 5.000 [Hair et al. (2017)]. Confidentiality has the highest VIF value (3.106), indicating no critical degree of collinearity. As seen in Fig. 3, the impact of each of the three tactics of information control on Perceived Control is highly significant. Information Sensitivity has no significant relationship with Perceived Benefit. Perceived Benefit and Perceived Control have a strong and highly significant impact on Perceived Privacy. In contrast, Regulatory Expectations do not share a significant relationship with Perceived Privacy, nor do they have a significant relationship with Behavioral Intention. Performance Expectancy, Effort Expectancy, and Social Influence have a significant positive impact on Behavioral Intention. However, the influence of Facilitating Conditions on Behavioral Intention is not significant. Overall, the model explains R2 = 0.583 of the variance in the dependent variable Behavioral Intention. Furthermore, Perceived Benefit has little explained variance (R2 = 0.023), whereas Perceived Privacy (R2 = 0.764) and Perceived Control (R2 = 0.716) show substantial explained variance.
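    Such collinearity diagnostics can be reproduced with standard tooling. A minimal sketch using statsmodels on hypothetical, deliberately correlated predictor scores (values below 5 indicate no critical collinearity [Hair et al. (2017)]):

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(42)
n = 240                                   # sample size as in this study
x1 = rng.normal(size=n)                   # hypothetical construct scores
x2 = 0.8 * x1 + 0.6 * rng.normal(size=n)  # deliberately collinear with x1
x3 = rng.normal(size=n)

X = sm.add_constant(np.column_stack([x1, x2, x3]))
for i, name in enumerate(["x1", "x2", "x3"], start=1):  # index 0 is the constant
    print(name, round(variance_inflation_factor(X, i), 3))
```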

    We used Cohen’s f2 to evaluate the effect sizes of the paths in our research model [Urbach and Ahlemann (2010); Hair et al. (2017); Cohen (2013)]. Facilitating Conditions (0.009), Regulatory Expectations (0.001), and Perceived Privacy (0.010) appear to have little effect on Behavioral Intention. Furthermore, Regulatory Expectations (0.001) shows little impact on Perceived Privacy. Effort Expectancy (0.024), Information Sensitivity (0.024), Social Influence (0.127), Anonymity (0.042), and Perceived Benefit (0.035) evince an average effect. Lastly, Performance Expectancy (0.209), Confidentiality (0.235), Secrecy (0.232), and Perceived Control (1.702) have a large effect on their dependent variables.
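    Cohen’s f2 for a given predictor follows from the change in R2 when that predictor is omitted from the model. A minimal sketch of the formula; the excluded-model R2 of 0.496 is back-computed for illustration, not a value reported in our analysis:

```python
def cohens_f2(r2_included: float, r2_excluded: float) -> float:
    # f^2 = (R^2_included - R^2_excluded) / (1 - R^2_included)
    return (r2_included - r2_excluded) / (1 - r2_included)

# Dropping a predictor from a model with R^2 = 0.583 down to R^2 = 0.496
# yields f^2 of roughly 0.209, the effect size reported above for
# Performance Expectancy.
print(round(cohens_f2(0.583, 0.496), 3))  # 0.209
```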

    Lastly, we examined the Stone–Geisser criterion’s Q2-values, based on a blindfolding procedure with an omission distance of D = 7 [Hair et al. (2017)]. The results show that Perceived Benefit has little predictive power (Q2 = 0.013). The other LVs indicate high predictive power (Behavioral Intention: Q2 = 0.415, Perceived Privacy: Q2 = 0.541, Perceived Control: Q2 = 0.541).

    6. Discussion

    6.1. Hypotheses

    To account for the behavioral perspective in our research model, we borrowed four constructs from UTAUT2 to examine Behavioral Intention. The results confirm that Performance Expectancy (H1), Effort Expectancy (H2), and Social Influence (H3) significantly affect Behavioral Intention, which is in line with former research [e.g. Bélanger and Crossler (2011); Pavlou (2011)]. Our results for the influence of Facilitating Conditions (H4), however, contradict the outcomes of prior empirical studies. At no point did our data provide evidence that Facilitating Conditions have a positive effect on Behavioral Intention. We assume that the novelty of an SSI and the underlying concept of blockchain influenced this result. Hence, users may struggle to determine the available support for, and the compatibility of, these new technologies [Weinhard et al. (2017)].

    Based on the privacy-related constructs from the privacy framework of Dinev et al. [2013], we first hypothesized an influence of Perceived Privacy on Behavioral Intention (H5). In our sample, we cannot find evidence for this hypothesis. Hence, Perceived Privacy does not have a statistically significant effect on the Behavioral Intention of a user to adopt an SSI-based IdM system. Nevertheless, our data confirm hypothesis H6: an SSI enables users to perceive control over their information, and Perceived Control has a significant positive effect on Perceived Privacy. These results confirm the findings of the existing privacy literature and the close relationship between control and privacy. This close relationship, in turn, could raise discriminant validity and collinearity concerns within our study. Reflecting this proximity, some researchers equate control with privacy [e.g. Smith et al. (2011)], while others define control as an important determinant of privacy concerns [Malhotra et al. (2004)]. Consequently, we confirm the proximity of these constructs but maintain their separation. We further demonstrated and confirmed that Anonymity, Secrecy, and Confidentiality significantly affect Perceived Control (H7–H9). The results confirm the importance of the three tactics of IdM in enabling users to control the disclosure of their information.

    In relation to the privacy calculus described by Dinev and Hart [2006] and originally theorized in a study by Laufer and Wolfe [1977] as a calculus of behavior, we studied the impact of Perceived Benefit on Perceived Privacy (H12) [Kehr et al. (2015)]. Perceived Benefit was shown to have a positive effect on Perceived Privacy, which confirms hypothesis H12 and supports the underlying theory of the privacy calculus: that users evaluate risks and benefits to assess their state of privacy. If users overlook these risks, the importance of additional factors influencing the success of IdM systems (e.g. usability) increases. Kehr et al. [2015] outline that highly beneficial services are often associated with the highest privacy risks for users. Consequently, we included Information Sensitivity in our study, as it has been revealed as an origin of paradoxical privacy-related behavior. The sensitivity of information multiplies risks and reduces the perceived benefits of information disclosure [Malhotra et al. (2004); Mothersbaugh et al. (2012)]. Hence, we theorized that Information Sensitivity negatively affects Perceived Benefit (H13). Throughout our study, however, this relationship was not found to be significant. Lastly, we rejected both hypotheses related to Regulatory Expectations, which examined the effects on Perceived Privacy (H16) and Behavioral Intention (H17). Table 4 provides an overview of the results of our proposed hypotheses, including the four hypotheses that were excluded from testing due to statistical considerations.

    Table 4. Summary of hypothesis testing.

No. | Hypothesis | Result
H1 | Performance Expectancy positively affects Behavioral Intention. | Accepted
H2 | Effort Expectancy positively affects Behavioral Intention. | Accepted
H3 | Social Influence positively affects the Behavioral Intention to use an SSI. | Accepted
H4 | Facilitating Conditions positively affects Behavioral Intention. | Rejected
H5 | Perceived Privacy positively affects Behavioral Intention. | Rejected
H6 | Perceived Information Control positively affects Perceived Privacy. | Accepted
H7 | Anonymity positively affects Perceived Information Control. | Accepted
H8 | Secrecy positively affects Perceived Information Control. | Accepted
H9 | Confidentiality positively affects Perceived Information Control. | Accepted
H10 | Perceived Risk negatively affects Perceived Privacy. | Not examined
H11 | Perceived Benefits of Information Disclosure negatively affects Perceived Risk. | Not examined
H12 | Perceived Benefits of Information Disclosure positively affects Perceived Privacy. | Accepted
H13 | Information Sensitivity negatively affects Perceived Benefits of Information Disclosure. | Rejected
H14 | Information Sensitivity positively affects Perceived Risk. | Not examined
H15 | Importance of Information Transparency positively affects Perceived Risk. | Not examined
H16 | Regulatory Expectations positively affects Perceived Privacy. | Rejected
H17 | Regulatory Expectations negatively affects Behavioral Intention. | Rejected

    6.2. Theoretical contribution

The goal of our study was to provide empirical insights into the impact of privacy perception on the adoption of IdM systems. We examined this impact within the emerging context of blockchain, for a blockchain-based IdM system called an SSI. Blockchain is particularly interesting because most previous studies in this field have examined the potential of the technology or its technological foundations [e.g. Beck et al. (2018); Glaser (2017)], and few have investigated the potential of blockchain from individual and behavioral perspectives [Mendoza-Tello et al. (2018)]. Similarly, empirical results from a behavioral perspective remain scarce in the IdM literature, although an extensive body of theory exists on the influence of factors such as information privacy on the adoption of non-blockchain-based IdM systems [Hansen et al. (2004); Seltsikas and O'Keefe (2010)]. Mindful of this gap in the IdM and blockchain literature, we conducted our study from an individual perspective to investigate the impact of information privacy-related theories (namely, the privacy paradox and the privacy calculus) on the acceptance of an SSI system. Our research model consisted of established constructs from technology acceptance and privacy research. Given the novelty of our research context, however, our study revealed some unexpected findings. Analogous to the privacy paradox, our research does not empirically support the claim that perceived privacy affects the acceptance of an SSI. These findings contradict the prevailing view of privacy as a key factor for IdM systems.

Despite the effect that Perceived Control has on Perceived Privacy, we did not find a significant relationship between Perceived Privacy and Behavioral Intention, although the extant literature theorized this relationship to be of critical importance to the success of IdM systems [e.g. Hansen et al. (2004); Roßnagel et al. (2014)]. On the basis of this theorized relationship, extensive efforts have been made in developing and using privacy-preserving digital technologies [Mühle et al. (2018)]. Our results do not confirm this relationship. This may explain the lack of practical use of solutions that build upon this assumption and, in turn, the success of SSO mechanisms whose value proposition is based on convenience and security rather than on privacy, such as those of Facebook and Google. For instance, Bauer et al. [2013], as well as Pitkänen and Tuunainen [2012], showed that users of these SSO mechanisms, and of social networks in general, were unaware of the underlying privacy practices, despite consent information that purports to inform users about these practices prior to use. At the same time, Bauer et al. [2013] also showed that, although users continued to use SSOs, they expressed significant privacy concerns about such mechanisms.

The results of the study are in line with studies that investigated the privacy paradox. Spiekermann et al. [2001], for example, investigated the self-reported privacy preferences and corresponding actual behavior of e-commerce customers. They found that privacy-preserving approaches may be ineffective due to discrepancies between the stated and actual behavior of customers. Users often express privacy concerns regarding the disclosure of personal information but reveal low inhibition thresholds when asked to share their information to benefit from a digital service [Dinev and Hart (2006)]. Consistent with the privacy paradox, and even though an SSI enhances perceived control, privacy does not seem to be a factor influencing the adoption of privacy-preserving IdM systems such as SSIs. This conclusion is further supported by Dhamija and Dusseault [2008], who found that IdM, and thus the management of private information, is not a primary goal of consumers. An SSI shifts ownership, and with it the responsibility for privacy, to users and asks them to actively manage their privacy settings [Der et al. (2017)]. The findings presented here are therefore relevant for examining and advancing the theoretical assumptions that underlie the technological progress of SSI. Additionally, these findings have implications for initiatives seeking to balance privacy with cybersecurity. By identifying themselves, users can be trusted by digital services and thereby benefit from such services. Besides situations in which users disclose information voluntarily to benefit from a digital service, cybersecurity can also be a reason to reduce privacy. In line with the concept of the Bright Internet, in some cases a user's information privacy does not take precedence over legitimate preventive cybersecurity mechanisms [Lee et al. (2020)]. Our findings support these considerations.

Remarkably, although Perceived Control has a positive impact on Perceived Privacy, we did not detect a similar effect of Regulatory Expectations on Perceived Privacy. Our hypothesis was based on the theory that regulations empower users to exercise proxy control over their privacy [Xu et al. (2012)], while an SSI would be a market-based alternative that enables users to exercise actual instead of proxy control. Therefore, we hypothesized that appropriate privacy regulations could make an SSI redundant from a privacy point of view and, hence, negatively affect Behavioral Intention. The results of our study stand in contrast to the studies by Xu et al. [2011] and Lwin et al. [2007]. Additionally, we did not detect a significant effect of Regulatory Expectations on Behavioral Intention.

Although we could not find significant effects for the abovementioned relationships, this does not necessarily mean that the underlying assumptions were wrong; indeed, the results are in line with previous research. In a study of three privacy control and privacy assurance approaches in location-based services (namely, individual self-protection, industry self-regulation, and government legislation), Xu et al. [2011] investigated the interplay of these approaches to identify the extent to which they can substitute one another. The authors present two explanations of particular importance in which the results of our study can be embedded. First, given the difference between the proxy control of approaches such as privacy regulations and the real control of individual self-protection through privacy-enhancing technologies (e.g. an SSI), the latter affords a greater sense of control and has a stronger impact on users' perceived information control [Xu et al. (2012)]. Second, self-control mechanisms diminish the need for regulatory expectations and can even substitute for them to some extent [Xu et al. (2012)]. This previous research provides a tentative explanation for the results of our study: an SSI, as a means of individual self-protection, provokes a greater perception of control than Regulatory Expectations and diminishes the proxy-control effect of Regulatory Expectations on Perceived Privacy. Consequently, the effect of Regulatory Expectations on Behavioral Intention also decreases. Xu et al. [2012] therefore conclude that approaches to individual self-protection should be promoted as an appropriate substitute for other privacy protection approaches, especially due to their ability to overcome "international, regulatory, and business boundaries" [Xu et al. (2012, p. 1360)]. This ability is a major advantage of blockchain-based IdM systems and should be emphasized to support the adoption of such systems [Rieger et al. (2019)].

    6.3. Managerial implications

The findings of this study lead to several important practical implications for users, IdM system providers, and digital service providers. In light of the privacy paradox, users must be aware that control over their identity does not necessarily result in higher privacy. Therefore, users must weigh the risks and benefits of information disclosure against the background of the privacy paradox. Additionally, the use of an SSI-based IdM system increases control but demands that users take responsibility and ownership of their information privacy [Der et al. (2017)]. For instance, users must define, with the help of an SSI-based IdM system, what and how much personal information they want to share with a specific digital service. As a result, the effort of using these digital services increases for users.

SSI providers must address these use-related factors (e.g. higher effort on the user side) to deliver accepted and successful solutions. Although privacy and control are central to the value proposition of SSI-based IdM systems, managers in charge of implementing SSI solutions must focus on interoperability, usability, and security, as summarized by Roßnagel et al. [2014], to achieve widespread adoption. Interoperability, for instance, is especially critical from an economic and network-effects perspective: the more users and, correspondingly, digital services rely on an SSI, the greater the benefit of the SSI for these actors [Katz and Shapiro (1994)]. Only the interplay of these factors will ensure the success of such an IdM system and, consequently, of an SSI [Dunphy and Petitcolas (2018)].
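This two-sided network effect can be expressed in a stylized form. The notation below is ours and purely illustrative; it is not part of the study's research model:

\[
  U_{\text{user}}(s) = v_0 + \beta_s \, s, \qquad
  U_{\text{service}}(n) = c_0 + \beta_n \, n, \qquad
  \beta_s, \beta_n > 0,
\]

where \(s\) denotes the number of digital services accepting the SSI, \(n\) the number of users, and \(\beta_s\), \(\beta_n\) capture cross-side network effects: each side's utility grows with adoption on the other side.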

Additionally, managers of digital services must set up application programming interfaces within their own organizations to capitalize on an SSI. The organization and its digital services must be prepared to connect to these identity domains and provide a seamless experience to their users. Hence, important questions, such as the organization's role in the IdM system (e.g. issuer or verifier), must be answered upfront.
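To illustrate these roles, the following minimal sketch shows an issuer signing a credential and a verifier checking it, using the Python cryptography package. The DID, claim, and flow are hypothetical simplifications; production SSI stacks additionally handle credential schemas, revocation, and DID resolution.

# Minimal sketch of the issuer and verifier roles in an SSI-based IdM system.
# The DID and claim below are hypothetical examples.
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Issuer role: sign a claim about a subject.
issuer_key = Ed25519PrivateKey.generate()
issuer_pub = issuer_key.public_key()
credential = json.dumps(
    {"subject": "did:example:alice", "claim": {"over18": True}},
    sort_keys=True,
).encode()
signature = issuer_key.sign(credential)

# Verifier role: accept the claim only if the issuer's signature is valid.
try:
    issuer_pub.verify(signature, credential)
    print("credential accepted")
except InvalidSignature:
    print("credential rejected")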

    6.4. Limitations and future research

Our results must be interpreted in light of their conceptual and empirical limitations. Conceptually, a forward-facing approach was taken, as respondents were asked to consider their intention to use an SSI rather than their actual use. If an SSI were implemented and respondents became familiar with the concept, research could examine actual Use Behavior instead of Behavioral Intention. Such research would improve the comparability of the results and eliminate the risk of participants misunderstanding the underlying concepts [Arnold and Feldman (1981)]. Furthermore, when consolidating our research model, statistical considerations led us to eliminate constructs of potential relevance. We excluded Perceived Risk because we could not ensure validity and reliability without neglecting content validity. However, Dinev et al. [2013] found that Perceived Risk is an essential antecedent of Perceived Privacy. Additionally, privacy protection approaches, such as an SSI, diminish individuals' perceived risk and affect their decision making [Adjerid et al. (2018)]. Hence, excluding Perceived Risk from our study may have reduced our explanatory power and distorted the results. Future studies could include Perceived Risk to increase the explanatory power of the research model.

Empirically, there may be various sources of error in a study that distort results [Hair et al. (2017)]. Although we distributed our survey across multiple channels to reach a wide range of respondents, the representativeness of our study is limited for at least three reasons. First, we distributed our survey exclusively via selected online channels, because our research aimed to examine the digital identities and information privacy of actual online users. Consequently, we did not reach users of online services other than those we selected. Second, a wide range of personal and cultural factors influence perceptions of privacy [Smith et al. (2011)]. Our descriptive statistics indicate that the sample had a relatively low average age as well as an above-average educational background, which might lead to statistical distortions. Third, we cannot rule out the possibility that linguistic and semantic barriers affected our results. In the survey, we presented a hypothetical setting to our respondents in English. An SSI is a new technology, which we briefly explained within the survey. SSIs have not yet reached mainstream adoption, and we expect that not every respondent was familiar with the underlying technological concepts. As most of our respondents were non-native speakers, we must assume that not every respondent fully understood the concept of an SSI, even though we tested their understanding with control questions.

The above-mentioned limitations present useful opportunities for future research. First, studies should examine the effect of additional factors on the acceptance of an SSI. Our research indicates that users struggle to assess the facilitating conditions of a blockchain-based, privacy-preserving IdM system such as an SSI. Hence, the value propositions of blockchain must be communicated effectively. Blockchain, which is often implemented as underlying and invisible infrastructure, regularly stays under the radar of users. For instance, blockchain creates trust between parties based on the use of technology rather than on the reputation of institutional intermediaries [Chanson et al. (2019)]. Thanks to the technological features of blockchains, users can trust the tamper-resistance of a document stored on a blockchain [Beck et al. (2018); Chanson et al. (2019); Rossi et al. (2019)], as illustrated in the sketch below. Yet users may remain unaware that they can trust their counterparts based on this tamper-resistance. As a result, blockchain-based service providers should communicate the advantages of their technology-based intermediation, including, for example, increased transparency and reduced transaction costs [Rieger et al. (2019); Lock et al. (2020)]. Consequently, future research could further examine the impact of facilitating conditions on privacy and trust among the various actors of a blockchain network.
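The tamper-evidence property mentioned above can be illustrated with a minimal hash chain. The records and structure are simplified for illustration and do not reflect any particular blockchain implementation:

# Minimal sketch of tamper-evidence: each block commits to its predecessor
# via a hash, so changing any record breaks every later link.
import hashlib, json

def block_hash(record: dict, prev_hash: str) -> str:
    payload = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

chain = []
prev = "0" * 64  # genesis value
for record in [{"doc": "diploma", "owner": "alice"},
               {"doc": "license", "owner": "bob"}]:
    prev = block_hash(record, prev)
    chain.append({"record": record, "hash": prev})

# Tampering with the first record no longer matches the stored hash.
chain[0]["record"]["owner"] = "mallory"
recomputed = block_hash(chain[0]["record"], "0" * 64)
print("tamper detected:", recomputed != chain[0]["hash"])  # True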

Second, usability represents another interesting research opportunity, particularly from a design science perspective. Security and privacy requirements often present complex challenges for the usability of IdM systems [Roßnagel et al. (2014)]. Researchers could examine how an SSI, with its underlying cryptographic technologies such as zero-knowledge proofs (ZKPs) or decentralized identifiers (DIDs), should be designed, and how different designs affect the use of an SSI as well as its privacy-preserving nature [Bélanger and Crossler (2011); Pavlou (2011)]. Research could, therefore, theoretically develop adequate design science artifacts and evaluate them in practice [Hevner et al. (2004)], or even follow an action design research approach and ensure relevance by involving practitioners from the early stages of a project. With such design knowledge at hand, research could again focus on behavioral questions, such as whether and how users change their behavior in the presence of fully trusted privacy-preserving IdM systems.
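One such cryptographic building block, selective disclosure, can be approximated with simple hash commitments. The sketch below is a simplified stand-in for the ZKP-based mechanisms used in production SSI stacks, and all attribute names and values are illustrative:

# Minimal sketch of selective disclosure: the user commits to each attribute
# and reveals only the ones a digital service needs. Real SSI stacks use
# ZKP-based schemes; this hash-commitment version is illustrative only.
import hashlib, secrets

def commit(value: str) -> tuple[str, str]:
    nonce = secrets.token_hex(16)
    digest = hashlib.sha256(f"{nonce}:{value}".encode()).hexdigest()
    return digest, nonce

attributes = {"name": "Alice", "birth_year": "1990", "nationality": "DE"}
commitments = {k: commit(v) for k, v in attributes.items()}

# The service requests only 'birth_year'; the user opens that one commitment.
digest, nonce = commitments["birth_year"]
revealed = attributes["birth_year"]
check = hashlib.sha256(f"{nonce}:{revealed}".encode()).hexdigest()
print("birth_year verified:", check == digest)  # other attributes stay hidden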

    6.5. Conclusion

Blockchain is an innovative technology with significant potential that enables use-cases such as an SSI. Yet, aside from Bitcoin, blockchain applications have not reached mainstream adoption. This study provides empirical knowledge on the acceptance of a privacy-preserving IdM system, namely an SSI. We combined theories of technology acceptance and information privacy to investigate the factors influencing the acceptance of an SSI. The results of our study augment knowledge in the aforementioned domains and, in particular, about IdM as the superordinate concept of an SSI. We contribute to the theoretical differentiation of control and privacy, and we shed light on the privacy paradox in the acceptance of these systems with our empirical finding that privacy is not a critical factor in the acceptance of IdM systems from a behavioral perspective. These results contradict the existing literature on the impact of privacy as a critical factor in the success of IdM systems. An SSI gives users perceived control over their digital identities, which positively affects users' perceived privacy. Paradoxically, however, perceived privacy is not a critical factor in the acceptance of an SSI-based IdM system. Our findings suggest the need for future research on the factors that affect the acceptance of IdM systems and blockchain use-cases. We propose that future research investigate the impact of blockchain's technological features and the respective value propositions, which could drive the acceptance of these use-cases. Here, the focus should be on the individual technological components of an SSI and on the selection of user groups with different levels of technology literacy. Future studies should further investigate differences in use behavior between SSI-based IdM systems that rely on the capabilities of blockchain technology and those that do not. Such studies would contribute to a more comprehensive understanding of the factors critical to the acceptance of blockchain, SSI, and privacy-preserving solutions in general.

    Appendix A

Construct | No. | Item | Source
Anonymity | ANYT1 | I believe I can hide my true identity on digital services when I would use an SSI. | Dinev et al. [2013]
| ANYT2 | I believe I can stay anonymous and do everything I want on digital services when I would use an SSI. |
| ANYT3 | I can keep my information anonymous on digital services when I would use an SSI. |
| ANYT4 | I feel that digital services cannot trace back how I use their services when I would use an SSI. | Benlian et al. [2019], adapted from Pinsonneault and Heppel [1997]
| ANYT5 | I feel anonymous when I would use an SSI. |
| ANYT6 | I do not feel like the digital service identifies my use of their service when I would use an SSI. |
Perceived Benefit of Information Disclosure | BEN1 | Revealing my personal information on digital services will help me obtain information/products/services I want. | Dinev et al. [2013]
| BEN2 | I need to provide my personal information so I can get exactly what I want from digital services. |
| BEN3 | I believe that because of my personal information disclosure, I will benefit from a better, customized service and/or better information and products. |
| BEN4 | I think my benefits gained from the use of digital services can offset the risks of my information disclosure. | Xu et al. [2011]
| BEN5 | The value I gain from use of digital services is worth the information I give away. |
| BEN6 | I think the risks of my information disclosure will be greater than the benefits gained from digital services. |
| BEN7 | Overall, I feel that using digital services is beneficial. |
Behavioral Intention | BI1 | I intend to use SSI in the next months. | Gupta et al. [2008], adapted from Fishbein and Ajzen [1975]
| BI2 | I predict I would use SSI in the next months. |
| BI3 | I plan to use SSI in the next months. |
| BI4 | I am curious about SSI. | Oliveira et al. [2014], adapted from Kim et al. [2009]
| BI5 | I intend to manage my accounts using an SSI. |
| BI6 | I want to know more about SSI. |
Confidentiality | CFDT1 | When I would use an SSI, I believe my personal information provided to digital services remains confidential. | Dinev et al. [2013]
| CFDT2 | I believe an SSI would prevent unauthorized people from accessing my personal information in databases of digital services. |
| CFDT3 | When I would use an SSI, I believe my personal information is accessible only to those authorized to have access. |
| CFDT4 | When I would use an SSI, I expect my personal information to be confidential when I use digital services. | Pavlou and Fygenson [2006], adapted from Cheung and Lee [2001] and Salisbury et al. [2001]
| CFDT5 | An adequate protection of my personal information would make it (much more difficult/easier) for me to use a digital service. |
| CFDT6 | When I would use an SSI, I feel secure that my personal information is kept confidential when I use digital services. |
| CFDT7 | Feeling secure that personal information is kept private would make it (much more difficult/easier) for me to use a digital service. |
Effort Expectancy | EE1 | I would find it easy to use an SSI to access digital services. | Chan et al. [2010], adapted from Venkatesh et al. [2003]
| EE2 | Learning to use an SSI to access digital services would be easy for me. |
| EE3 | It would be easy for me to become skillful at using an SSI to access digital services. |
| EE4 | My interaction with SSI would be clear and understandable. | Martins et al. [2014], adapted from Venkatesh et al. [2003]
Facilitating Conditions | FC1 | I expect to have the resources necessary to use an SSI to access digital services. | Chan et al. [2010], adapted from Venkatesh et al. [2003]
| FC2 | I expect to have the knowledge necessary to use an SSI to access digital services. |
| FC3 | I expect that a specific person or group would be available for assistance with difficulties using an SSI to access digital services. |
Information Sensitivity | IS1 | I do not feel comfortable with the type of information digital services request from me. | Dinev et al. [2013]
| IS2 | I feel that digital services gather highly personal information about me. |
| IS3 | The information I provide to digital services is very sensitive to me. |
Regulatory Expectations | LAW1 | I believe that the law should protect me from the misuse of my personal data by online companies providing digital services. | Dinev et al. [2013]
| LAW2 | I believe that the law should govern and interpret the practice of how digital services collect, use, and protect my private information. |
| LAW3 | I believe that the law should be able to address violation of the information I provided to digital services. |
| LAW4 | The existing laws in my country are sufficient to protect consumers' online privacy. | Lwin et al. [2007]
| LAW5 | There are stringent international laws to protect personal information of individuals on the Internet. |
| LAW6 | The government is doing enough to ensure that consumers are protected against online privacy violations by digital services. |
| LAW7 | The best way to protect personal privacy would be through strong laws. | Milberg et al. [2000]
Perceived Information Control | PCTL1 | I think I have control over what personal information is released by digital services when I would use an SSI. | Dinev et al. [2013]
| PCTL2 | I believe I have control over how personal information is used by digital services when I would use an SSI. |
| PCTL3 | I believe I have control over what personal information is collected by digital services when I would use an SSI. |
| PCTL4 | I believe I can control my personal information provided to these digital services when I would use an SSI. |
| PCTL5 | I feel in control over information I provide to digital services when I would use an SSI. | Krasnova et al. [2010]
| PCTL6 | An SSI would allow me to have full control over the information I provide on digital services. |
| PCTL7 | I feel in control of who can view my information on digital services when I would use an SSI. |
Performance Expectancy | PE1 | Using an SSI would enable me to access digital services more quickly. | Chan et al. [2010], adapted from Venkatesh et al. [2003]
| PE2 | Using an SSI would make it easier for me to access digital services. |
| PE3 | Using an SSI would enhance my effectiveness in accessing digital services. |
| PE4 | I think that using an SSI would enable me to conduct tasks more quickly. | Martins et al. [2014], adapted from Venkatesh et al. [2003]
| PE5 | I think that using an SSI would increase my productivity. |
| PE6 | I think that using an SSI would improve my performance. |
| PE7 | I would find an SSI useful in my job. | Queiroz and Fosso Wamba [2019], adapted from Venkatesh et al. [2003]
| PE8 | I would find an SSI useful in my personal life. |
Perceived Privacy | PRIV1 | I feel I have enough privacy when I would use an SSI to access these digital services. | Dinev et al. [2013]
| PRIV2 | I am comfortable with the amount of privacy I have when I would use an SSI. |
| PRIV3 | I think my online privacy is preserved when I would use an SSI to access digital services. |
Perceived Risk | RISK1 | In general, it would be risky to give personal information to digital services. | Dinev et al. [2013]
| RISK2 | There would be high potential for privacy loss associated with giving personal information to digital services. |
| RISK3 | Personal information could be inappropriately used by digital services. |
| RISK4 | Providing digital services with my personal information would involve many unexpected problems. |
Secrecy | SCRT1 | When I would use an SSI, I believe I can hide some information from digital services when I want to. | Dinev et al. [2013]
| SCRT2 | When I would use an SSI, I feel I can pseudonymize some of my personal information if it is asked for by digital services. |
| SCRT3 | When I would use an SSI, I believe I can minimize information I must give to digital services when I think it is too personal. |
| SCRT4 | When I would use an SSI, I avoid giving digital services detailed information about myself. | Lwin et al. [2007]
| SCRT5 | When I would use an SSI, I can have full access and benefits as a registered user without revealing my real identity. |
| SCRT6 | When I would use an SSI, I may only fill up data partially to register with digital services. |
Social Influence | SI1 | People who influence my behavior would think that I should use an SSI to access digital services. | Chan et al. [2010], adapted from Venkatesh et al. [2003]
| SI2 | People who are important to me would think that I should use an SSI to access digital services. |
| SI3 | People who are in my social circle would think that I should use an SSI to access digital services. |
| SI4 | I would use an SSI if I needed to. | Shafi and Weerakkody [2009]
| SI5 | I would use an SSI if my friends and colleagues used it. |
Importance of Information Transparency | TR1 | Please specify the importance of whether digital services will allow me to find out what information about me they keep in their databases. | Dinev et al. [2013], adapted from Awad [2006]
| TR2 | Please specify the importance of whether digital services tell me how long they will retain information they collect from me. |
| TR3 | Please specify the importance of the purpose for which digital services want to collect information from me. |
| TR4 | Please specify the importance of whether a digital service is going to use the information they collect from me in a way that will identify me. | Awad [2006]

    References

Jannik Lockl studied Industrial Engineering and Management at the University of Bayreuth. Jannik has been working as a research associate and postdoctoral researcher at the FIM Research Center and the Branch Business & Information Systems Engineering of the Fraunhofer FIT since February 2018, and he also holds a position as a research associate at UCL CBT. His research on emerging digital technologies has been published in conferences and journals such as the International Conference on Information Systems, ACM Transactions on Management Information Systems, IEEE Transactions on Engineering Management, International Journal of Technology Assessment in Health Care, and MISQ Executive. Jannik has worked as a consultant for companies such as BMW and IBM and is the founder of the AI-driven MedTech startup inContAlert GmbH.

Nico Thanner studied Business Administration (M.Sc.) at the University of Bayreuth. His research focuses on topics in emerging technologies and digital innovation, with a specific focus on the entrepreneurial perspective. He worked in the innovation and growth centers of Porsche Digital and N26, and in the technology labs of the FIM Research Center and the Project Group Business & Information Systems Engineering of the Fraunhofer FIT. Since 2021, Nico has worked for the Danish InsurTech startup Undo, delivering strategic insights to founders based on data analyses and predictive analytics.

Manuel Utz is a doctoral candidate at the Chair of Information Systems and Digital Energy Management at the University of Bayreuth, Germany. His research focuses on the design and implementation of blockchain-based applications in energy markets. Currently, Manuel is also employed at BMW Group as Head of Digital Energy Management.

    Maximilian Röglinger holds the chair of Information Systems and Business Process Management at the University of Bayreuth and is Adjunct Professor in the School of Management at Queensland University of Technology. Maximilian also serves as Deputy Academic Director of the Research Center Finance & Information Management (FIM) as well as Deputy Director of Fraunhofer FIT. Maximilian’s activities in research and teaching center around business process management, digital innovation, and customer orientation. His work has been published in journals such as Business & Information Systems Engineering, Decision Support Systems, European Journal of Information Systems, Journal of Strategic Information Systems, Information Systems Journal, and Journal of the Association for Information Systems. Maximilian is passionate about joint research with companies. Among others, he has collaborated with Allianz, Deutsche Bahn, Deutsche Bank, Fujitsu Technologies, HILTI, Infineon Technologies, Munich Airport, and ZEISS.