Symposium Introduction

OpenAI’s release of ChatGPT in November 2022 was a watershed moment in the history of technological development and, consequently, a buzz topic that extended beyond the tech community into public discourse. Concerns ranging from the theoretical possibility of AI becoming or being conscious to pragmatic questions touching on the economy and polity became the discussion points of podcasts, conferences, news outlets, and coffee shop gatherings. Since then, AI has become increasingly ubiquitous, permeating many occupations and forms of life. As AI has transitioned from novelty to the mundane, everyday conversations about the ethics of AI and its ontological makeup have waned as our concerns have diminished, owing not so much to knowledge as to familiarity. This is occurring while AI increasingly shapes public life, often without serious reflection upon where we are headed, as current deregulatory approaches to AI are concerned not with where we go but only with getting there before others.

This context makes Luciano Floridi’s The Ethics of Artificial Intelligence: Principles, Challenges, and Opportunities, his forthcoming monograph Artificial Agency: Explorations of a Post-AI Culture, and the conversations they inspire all the more pertinent lest we act without thinking. The Ethics of Artificial Intelligence, the monograph that inspired this symposium, is divided into two sections: “Understanding AI” and “Evaluating AI.” The first addresses the ontology of AI and its epistemic implications, whereas the latter addresses the ethics of AI.

At the heart of the first section is the claim that artificial intelligence is a misnomer and that the technology would be more accurately named artificial agency. This simple shift in naming brings about a conceptual clarification that illuminates not only what AI is presently but also its history and likely future. Floridi frames the history of AI through what he calls the digital’s cleaving power: the way the digital couples historically distinct realities (e.g., self-identity and personal data), decouples historically connected realities (e.g., location and presence), and recouples realities once connected (e.g., consumer and producer) (6–10). This cleaving power, the power to unite and separate, re-ontologizes the world we live within and creates the condition of possibility for the success of AI. That success depends upon the ability to decouple intelligence from agency by creating environments that envelop AI, allowing it to complete complex tasks (though often ones without a high skill threshold, e.g., tying shoes or ironing shirts) without intelligence. This enveloping is analogous to how a dishwasher’s success depends upon an enclosed environment that facilitates the performance of the machinery, with an end result indistinguishable from that of the drastically different process of hand washing. Likewise, we collectively create environments that facilitate the success of AI, or better, AA, by living in a world where the digital bleeds into the older analogue world, thereby re-ontologizing it. We live neither online nor offline, but in a hybrid state that Floridi coins onlife (6).

As our participant Alessandro Blasimme points out, “the ultimate aim of any philosopher of technology—a label I wish Floridi will be comfortable with—is not to understand what technology is, but what technology is for.” Thus, after conceptually clarifying the nature of artificial intelligence, or artificial agency, the next set of questions that naturally arises concerns “What should we do?” Floridi addresses these questions both at the level of a theoretical framework and at the level of the practical, concrete problems facing a human society that increasingly incorporates AI into everyday life. Concerning the first, Floridi makes the case for approaching the ethics of AI through ethical principles closely aligned with those of existing bioethics frameworks: beneficence, nonmaleficence, autonomy, justice, and explicability. These principles are to be enacted in a manner that is not merely reactive to the world we have built via technological innovations, but that guides the very development of innovation, designing the world we desire to build: “we need to shift from chasing to leading” (79). This is accomplished through both hard and soft ethics, a distinction coined by Floridi to distinguish between laws we must obey (hard ethics) and norms we should follow (soft ethics). Although the two often name facets of one concrete reality, the logical distinction between hard and soft ethics helps us recognize the insufficiency of relying solely on laws, which cannot determine best practices, or solely on norms, which cannot be enforced in the same way.

This theoretical, ethical framework guides the remaining chapters of the monograph, in which Floridi addresses the concrete problems of algorithms, crimes committed through AI, and the opportunities AI presents. A particular concern that occupies Floridi in this section is the relationship between our current ecological situation and AI: a precarious relationship in that the operations of AI consume a nontrivial amount of energy while also providing a powerful resource for addressing those very ecological problems.

This symposium brings together several scholars offering critical perspectives on Luciano Floridi’s The Ethics of Artificial Intelligence, whom I am pleased to introduce.

Our symposium begins with an essay by Massimo Durante on the political implications of how AI alters the meaning of space. Through an erudite explication of Floridi’s insights into how the infosphere and AI ontologically alter our relation to space, Durante articulates three distinct conceptions of political space: “1) the modern one, for which political space is still empirically and normatively territory and borders . . . 2) the contemporary one, for which the political space is a place of articulation and overlap between the analogue and digital world . . . 3) the approaching one, for which the political space is an increasingly structured and computable space of prediction and calculation.” Based on these three competing models of space, Durante poses several questions to Floridi about their relationship and the potential challenges they pose to governance.

Our next participant, Ugo Pagallo, highlights Floridi’s attention to the often-overlooked challenge of technological underuse rooted in “fear, ignorance, misplaced concerns, or excessive reaction” (169), resulting in what economists refer to as opportunity costs. Pagallo notes that Floridi’s solution to underuse is primarily through soft ethics—a notion that Floridi adapts from the existing term soft law. Pagallo affirms this stance taken by Floridi and develops it further by explaining the role of soft law in the future governance of AI.

The theme of soft ethics continues with our next panelists, Alexander Kriebitz and Christoph Lütge. Identifying two distinct normative traditions for regulating technology, namely, technology’s relationship to fundamental rights and to ethics more broadly, Kriebitz and Lütge probe the relationship between these traditions concerning AI regulation. They begin by framing these traditions through Floridi’s distinction between hard and soft ethics, and then enter into a brief, historically focused discussion of the distinction between fundamental rights and ethics. This leads Kriebitz and Lütge to the final section, where they make the case that introducing the normative traditions of fundamental rights and ethics adds a further layer of nuance and complexity to Floridi’s distinction between hard and soft ethics.

Alessandro Blasimme wraps up our symposium with an essay that focuses on the big-picture questions about AI as a historical phenomenon. To engage Floridi on this level, Blasimme works from Floridi’s concept of “enveloping,” the process of making the world “AI-friendly, or better, AI-ready,” in conjunction with Floridi’s claim that we are living within a digital transformation of the world; a transformation that can neither be unmade nor happen again. One pertinent question that arises for Blasimme within this framework is the degree to which humans are ends or means within the enveloping process that facilitates AI agency.

Massimo Durante

Response

What “Game” Do We Play?

Space, Politics, and Constitutive Rules

The Ethics of Artificial Intelligence is a complex book that is composed, in Luciano Floridi’s typical style, of three layers of analysis and reflection: a first in which Floridi reconnects with his previous thematic strands to delineate a further aspect of the impact of digitization on our lives and reality; a second in which he brings into focus the current central cores and problematic issues of artificial intelligence from a broad normative perspective, bringing ethics into legal and political discourse; and, finally, a third in which Floridi sows the seeds of new research perspectives on the topic under consideration.

In this short paper, I would like to focus on the third layer. In particular, I will refer to some passages in Chapter 3,1 in which Floridi not only carries forward reflections on the technological and digital reconfiguration of space initiated in earlier works,2 but also lays the groundwork for thinking about a certain redesigning of space. Such space is no longer understood empirically, as a physical environment or concrete situation, or transcendentally, as an a priori condition of sensory experience, but normatively, as a sphere of play whose nature and extent need to be examined and evaluated in relation to the degree to which it is (relatively) structured.

This new approach is actually quite traditional and takes us back to the beginnings of modernity, where space is thought of as a horizon for the constitution of politics. Political thought in modernity investigates, ponders, and elaborates on the conditions of human coexistence as a political project mostly implemented through the law. Coexistence demands not only thinking about but also normatively and empirically designing how to structure space as the horizon of such human coexistence across time. Since modernity, a national border, for example, has been an empirical boundary that can never be fully separated from its normative function of organizing human coexistence in space (inside/outside) and time (inclusion/exclusion).

In political modernity, it is possible to think about space starting from the fact that an object placed in a specific place does not allow another object to be placed in the same place except through time. An object placed in a specific place raises a claim that sets off a conflict in space (two objects cannot persist in the same place at the same time), which can be moderated in time (two objects can coexist in the same space over time). We have realized that digital space is largely different from analogue space.3 This has already forced us to rethink space as a horizon for the constitution of politics in the contemporary world, as a human project that normatively redesigns the forms and conditions of coexistence.4 Today, we must start thinking, along the lines outlined by Floridi, about the possibility, or the hazard, that AI contributes to (or nudges us toward) the normative redesign of the space of such coexistence.

Floridi’s critical insight is that the space in which AI systems make decisions and act is a space that is redesigned (i.e., structured) not only as a function of how such systems are agents (i.e., operate in the world) but also, and above all, as a function of how such systems are epistemic subjects (i.e., perceive and represent the world in which they are called upon to perform tasks). In earlier works, Floridi referred to this consideration through the robotic notion of the envelope, emphasizing that space is enveloped around a robot to enable it to function better.5 This consideration still has an empirical meaning, emphasizing the robot’s ability to adapt to the environment. It is theoretically significant because it already shows the blurring of the culture/nature divide as a result of the technological redesign of the environment and especially the digital synchronization of the protocols and interfaces of the technologies communicating with the redesigned environment.6

In The Ethics of Artificial Intelligence, Floridi goes a step further, conceiving the space in which an AI system operates as one that is not only empirically redesigned but also structured in epistemic and normative terms. Let me first clarify what I mean by Floridi’s step further and what it can entail for the coexistence of human or artificial agents. Floridi recalls the difference between constraining and constitutive rules and applies it to the distinction between structured and unstructured (or partially structured) contexts through the metaphor of the game, often used in the literature, for example by distinguishing the game of chess from the game of football.7

The game of chess, for example, is characterized by constitutive rules, that is, rules that constitute all and only lawful behavior on the board. The rules of chess tell us how to move the pieces on the chessboard: there are no possible moves other than those dictated and made possible by the constitutive rules. This is made possible by two concomitant conditions: 1) chess pieces are not so much empirical objects (tokens) as representations of sets of rules (types); in this sense, if we lost a knight, we could take any token (e.g., a cork) and use it as the piece, simply saying, “This is the knight”; 2) rules can constitute all and only allowed behaviors because moves take place on a chessboard that constitutes a predefined and structured space, both empirically and normatively.

The game of football, on the other hand, is characterized by constraining rules, which do not constitute behaviors in the sense mentioned above; that is, they do not so much tell us what we can do (they do not instruct us in how to do what we can do) but just tell us what we cannot do (which behaviors are unlawful). Players can play however they want as long as they respect the rules that forbid certain incorrect behaviors. This is made possible by two concomitant conditions: 1) players are embodied agents guided by intentions; 2) rules cannot constitute the behaviors because the moves of the game occur in a semi-structured space (the pitch is empirically predefined but normatively unstructured, or only partially structured).

Leaving metaphors behind, what does all this mean? A space characterized by a structured environment governed by constitutive rules is one in which behaviors are more predictable and computable (notably by means of artificial intelligence). It is not just that but something more radical and far-reaching. It is in a computable space that it becomes possible to simulate the interaction between agents, predict the possible outcomes of those interactions, and train artificial intelligence models (such as machine learning or adversarial artificial intelligence systems) in order to increase and improve their ability to predict behaviors. From this perspective, the gradual redesign of space into increasingly structured environments governed by constitutive rules, which in turn enable the transformation of difficult tasks into complex ones,8 is not only instrumental to the development and use of artificial intelligence but also to the making of space as a new horizon for the interaction9 and coexistence of agents’ external freedoms and, therefore, for the constitution of the political. Doesn’t the possibility of generating, simulating, analyzing, and predicting behaviors and interactions among agents extend and fulfill by other means the modern political dream of a computable space?

We are therefore dealing with three different but overlapping conceptions of political space, which nowadays coexist rather than exclude each other: 1) the modern one, for which political space is still empirically and normatively territory and borders,10 in which sovereign power is exercised as a relationship of inclusion and exclusion in time and space; 2) the contemporary one, for which the political space is a place of articulation and overlap between the analogue and digital world, in which sovereign power is exercised as regulation and control over the technological protocols of information and communication and their interfaces between the different spheres of experience of our life and reality; 3) the approaching one, for which the political space is an increasingly structured and computable space of prediction and calculation.

How are these three different forms of space destined to be articulated? How deeply can space be redesigned and adapted to the functioning and development of artificial intelligence models and systems? How will this computable space of data generation and prediction impact the way we conceive coexistence and collaboration between the human and non-human? What “game” will politics play to give itself new and powerful means of control and regulation over the world?

I am confident that Luciano Floridi’s forthcoming insights and thoughts on the politics of information will provide important clues and elements to answer this complex set of questions.


  1. Luciano Floridi, The Ethics of Artificial Intelligence (Oxford: Oxford University Press, 2023), chapter 3: “Future: The Foreseeable Development of AI,” 31–50.

  2. See Luciano Floridi, “The Philosophy of Presence: From Epistemic Failure to Successful Observation,” Presence: Teleoperators & Virtual Environments 14, no. 6 (2005): 656–67; Floridi, The Ethics of Information (Oxford: Oxford University Press, 2013), chap. 4: “Information Ethics as E-nvironmental Ethics”; Floridi, The Fourth Revolution: How the Infosphere Is Reshaping Human Reality (Oxford: Oxford University Press, 2014), chap. 2: “Space. Infosphere.”

  3. See notably Luciano Floridi, ed., The Onlife Manifesto: Being Human in a Hyperconnected Era (New York: Springer, 2014). For an interpretation of the informational space in Floridi’s perspective see Massimo Durante, Ethics, Law and the Politics of Information: A Guide to the Philosophy of Luciano Floridi (Dordrecht: Springer, 2017), notably 23–28.

  4. Luciano Floridi, The Green and the Blue: Naive Ideas to Improve Politics in an Information Society (New York: Wiley, 2023).

  5. Luciano Floridi, “Enveloping the World: The Constraining Success of Smart Technologies,” in CEPE 2011: Ethics in Interdisciplinary and Intercultural Relations, ed. J. Mauger (Milwaukee, WI, 2011), 111–16; Floridi, “Children of the Fourth Revolution,” Philosophy & Technology 24 (2011): 227–32. For a recent commentary on this notion see S. Robbins, “AI and the Path to Envelopment: Knowledge as a First Step towards the Responsible Regulation and Use of AI-Powered Machines,” AI & Society 35 (2020): 391–400.

  6. Luciano Floridi, The Fourth Revolution: How the Infosphere Is Reshaping Human Reality (Oxford: Oxford University Press, 2014), 36–41.

  7. Luciano Floridi, The Ethics of Artificial Intelligence (Oxford: Oxford University Press, 2023), 37: “‘Games’ can be understood here as any formal interactions in which players compete according to rules and in view of achieving a goal. The rules of these games are not merely constraining, but constitutive. The difference I like to draw becomes evident if one compares chess and football. Both are games, but in chess the rules establish legal and illegal moves before any chess-like activity is possible. So, they are generative of only and all acceptable moves. Whereas in football a prior act—let’s call it kicking a ball—is ‘regimented’ or structured by rules that arrive after the act itself. The rules do not and cannot determine the moves of the players. Instead, they put boundaries around what moves are acceptable as ‘legal’.”

  8. Floridi, The Ethics of Artificial Intelligence, 41–42: “So, the future of successful AI probably lies not only in increasingly hybrid or synthetic data, as we saw, but also in translating difficult tasks into complex ones. How is this translation achieved? We saw . . . that it requires transforming (enveloping) the environment within which AI operates into an AI-friendly environment. Such translation may increase the complexity of what the AI system needs to do enormously. But as long as it decreases the difficulty, it is something that can be progressively achieved more and more successfully”; and also, with explicit reference to the future of design, 49–50: “First, we will seek to develop AI by using data that are hybrid and preferably synthetic as much as possible. We will do so through a process of ludifying interactions and tasks. In other words, the tendency will be to try to move away from purely historical data insofar as this is possible. In areas such as health and economics, it may well be that historical or at most hybrid data will remain necessary due to the difference between constraining and constitutive rules. Second, we will do all this by translating difficult problems into complex problems as much as possible. Translation will happen through the enveloping of realities around the skills of our artefacts. In short, we will seek to create hybrid or synthetic data to deal with complex problems by ludifying tasks and interactions in enveloped environments. The more this is possible, the more successful AI will be.”

  9. Floridi, The Ethics of Artificial Intelligence, 38: “Only when (1) a process or interaction can be transformed into a game, and (2) the game can be transformed into a constitutive-rule game, then (3) AI will be able to generate its own, fully synthetic data and be the best ‘player’ on this planet, doing what AlphaZero did for chess.”

  10. As I write these lines, wars persist and flare up (the Russo-Ukrainian and Israeli-Palestinian conflicts) in a very traditional way, as armed and bloody conflicts over occupied states and territories, in line with modern history.

  • Luciano Floridi

    Reply

    Response to Massimo Durante

    Massimo Durante’s essay offers a very insightful analysis of how AI systems are reshaping our understanding of political space. This is not surprising. As in the past (Durante 2017, 2021), his interpretation captures and illuminates a crucial point in the book, in this case the transformation in how we conceptualize the relationship between space, politics, and AI.

    The essay’s central insight—that AI’s operation within increasingly structured environments represents a fundamental shift in political space—deserves deeper investigation. Durante builds upon the distinction between constraining and constitutive rules (Floridi 2019) to show how AI systems are driving a transformation of difficult tasks into complex but computable problems. As I have suggested (Floridi 2014), this transformation occurs through the “enveloping” or adaptation of reality around artificial systems’ capabilities. The well-known analogy between chess and football helps us understand how this happens, why AI systems perform remarkably well in artificially constituted domains while struggling in naturally constrained ones, and, therefore, why AI reshapes political space. Digital technologies, especially AI, exercise relentless pressure to transform natural environments into artificial ones to ensure their success. The future will see more “AI-fication,” not less. It is not whether it will happen but how we manage it that represents both a challenge and an opportunity. And this is because the restructuring of political space through digital systems, and especially with AI as a new form of agency (Floridi and Sanders 2004; Floridi 2023a), is both a problem and a solution for democratic participation.

    On the negative side, AI’s capacity to aggregate and analyze collective preferences and choices, and therefore to influence and mobilize human actions at unprecedented scales (hypersuasion; see Floridi 2024), can also facilitate new populist and undemocratic ways of controlling the transformation of the “will of all” into the “general will,” to use Rousseau’s classic distinction. The interaction between bias and lack of transparency in AI-structured political spaces could create a “democratic deficit of intelligibility.” Citizens may find themselves operating within political spaces structured by constitutive rules they do not perceive, hence do not understand, let alone have meaningfully consented to. Finally, there is an ongoing risk that AI systems’ capacity to structure, monitor, and regulate political space will erode the traditional boundaries between legislative, executive, and judicial functions, as AI systems implement, ever more seamlessly, constitutive rules that simultaneously create and enforce political arrangements and decisions.

    On the positive side, the structured environments, which AI systems promote and help create in a self-reinforcing cycle, could enhance democracy. One could read this from a Millian perspective (Mill 1859): AI could improve democratic participation in this restructured political space by making complex political issues more computationally tractable and accessible to public understanding; facilitating free and more inclusive discussion, more and better-informed debates; and helping to identify areas of consensus, disagreement, and fairer, more acceptable trade-offs in the political dialogue. In Floridi and Noller 2022 and Floridi 2023b, I argued that this transformation of the infosphere creates new possibilities for civic engagement and political participation.

    All this leads to issues concerning control, which have equally profound implications for political sovereignty, understood as control over citizens’ behavior (Floridi 2020). The ability of AI systems to create and manage structured environments that transcend physical boundaries suggests a new form of sovereign power, based not on territorial control but on the ability to define and implement constitutive rules that shape spaces of agency (see Schmitt’s concerns about the relationship between spatial order and political authority). Hobbes’s conception of sovereignty as absolute authority over a defined territory becomes increasingly complex in an AI-mediated political space, although not obsolete (the control of the digital is increasingly exercised through the control of the analogue; Floridi 2024). Implementing constitutive rules through AI systems creates new forms of governance that transcend traditional territorial boundaries and the territoriality of the law. This raises questions about consent, (legitimate) authority, and how democratic legitimacy can be maintained when political space is increasingly structured by algorithmic systems controlled by unelected actors, who are accountable to the markets they serve but not the societies they regulate. Political power is changing, becoming a matter not so much of the network’s control (Castells 2000) but of the construction and control of the network. As Durante’s analysis suggests, AI’s capability to structure space through constitutive rules may represent an even more fundamental transformation. One may see this in terms of Foucault’s analysis of how spatial arrangements embody and enable power relations (Foucault 1977), though in this case through algorithmic rather than architectural means. 
    Or, as I would prefer, more philosophically and less sociologically, as the politicization of the transcendental nature of space—Durante seems to hint at this interpretation—which transforms space from an epistemic condition of possibility of experience into an experiential condition of possibility of agency. To borrow an expression from phenomenological studies (Ihde 1990), this is no longer a technologically mediated lifeworld, but a technologically constituted lifeworld in which humanity lives and interacts more with digital than analogue realities. Those who control the constitutive processes control the corresponding experienced realities.

    Ultimately, the challenge lies in designing AI systems that structure political space in ways that enhance rather than constrain democratic life. Here, the concept of “infraethics” (Floridi 2017) becomes particularly relevant, as it helps us understand that the design of AI systems must be approached not just as a technical challenge or an economic opportunity, but as a fundamentally political project.

    Durante’s essay offers valuable groundwork for understanding how AI systems are reshaping political space through the implementation of constitutive rules and structured environments. I am in his debt for the clarifications and the insightful new issues he has provided. We are witnessing not just a technological transformation but a fundamental reconstitution of the spatial conditions of our political existence. As we move forward, we must develop a human project that considers this and proactively designs the appropriate solutions to benefit humanity and the environment, while shaping the future of human-AI interactions in an ethically acceptable way. The politics of information will be increasingly crucial in determining how the digital revolution will enhance rather than diminish human flourishing. There is much work that needs to be done. I just hope there will be time, because the issue could not be more pressing.

    References

    Castells, Manuel. 2000. The Rise of the Network Society. 2nd ed. Oxford: Blackwell Publishers.

    Durante, Massimo. 2017. Ethics, Law and the Politics of Information: A Guide to the Philosophy of Luciano Floridi. New York: Springer.

    Durante, Massimo. 2021. Computational Power: The Impact of ICT on Law, Society and Knowledge. London: Routledge.

    Floridi, Luciano. 2014. The Fourth Revolution: How the Infosphere Is Reshaping Human Reality. Oxford: Oxford University Press.

    Floridi, Luciano. 2017. “Infraethics—On the Conditions of Possibility of Morality.” Philosophy & Technology 30: 391–94.

    Floridi, Luciano. 2019. “What the Near Future of Artificial Intelligence Could Be.” Philosophy & Technology 32, no. 1: 1–15.

    Floridi, Luciano. 2020. “The Fight for Digital Sovereignty: What It Is, and Why It Matters, Especially for the EU.” Philosophy & Technology 33: 369–78.

    Floridi, Luciano. 2023a. “AI as Agency without Intelligence: On ChatGPT, Large Language Models, and Other Generative Models.” Philosophy & Technology 36, no. 1: 15.

    Floridi, Luciano. 2023b. The Green and the Blue: Naive Ideas to Improve Politics in an Information Society. Newark: John Wiley & Sons.

    Floridi, Luciano. 2024. “The Hardware Turn in the Digital Discourse: An Analysis, Explanation, and Potential Risk.” Philosophy & Technology 37, no. 1: 39.

    Floridi, Luciano, and Jörg Noller. 2022. The Green and the Blue: Digital Politics in Philosophical Discussion. Baden-Baden: Nomos Verlagsgesellschaft.

    Floridi, Luciano, and Jeff W. Sanders. 2004. “On the Morality of Artificial Agents.” Minds and Machines 14: 349–79.

    Foucault, Michel. 1977. Discipline and Punish: The Birth of the Prison. London: Allen Lane.

    Ihde, Don. 1990. Technology and the Lifeworld: From Garden to Earth. Bloomington: Indiana University Press.

    Mill, John Stuart. 1859. On Liberty. London: J. W. Parker and Son.

Ugo Pagallo

Response

The Challenges of AI Underuse and the Strength of Soft Law

Luciano Floridi’s latest book, The Ethics of Artificial Intelligence, has manifold merits, one of which is taking the challenges of technological underuse seriously (Floridi 2023). This perspective sheds light on what most scholars overlook today and on how ethics relates to further normative domains, such as politics and the law. As stressed in the preface of the book, The Ethics of AI should be understood as the first part of a forthcoming Politics of Information. By focusing on the challenges of technological underuse, the aim of this note is threefold: (i) to stress “the proper design of new forms of artificial and political agency” (op. cit., at xiii); (ii) to highlight what is still missing in this “study of the design opportunities available to us” (ibid.); and (iii) to ask how the author may intend to address this gap through what I call “the strength of soft law.”

Taking the Underuse of AI Seriously

In addition to the risks brought forth by misuses and overuses of technology—by far the most popular topic among scholars discussing the normative challenges of AI today—attention must be drawn to how “fear, ignorance, misplaced concerns, or excessive reaction may lead a society to underuse AI technologies below their full potential, and for what might be broadly described as the wrong reasons” (Floridi 2023, 169–70). Although the problem may appear invisible, it is highly relevant. The wrong reasons for technological underuse trigger what economists call opportunity costs. For example, work on national health services and their cost analysis estimated the opportunity costs of ambulatory medical care in the U.S.A. at around 15% (Ray et al. 2015). In the U.K., the opportunity costs of the National Health Service may amount to around 10 million pounds each year, and such figures may even underestimate the phenomenon (UY 2021). In some countries, e.g., Italy, which invests around 9% of its gross domestic product (GDP) in the public health sector, the opportunity costs of technological underuse may amount to between 1% and 2% of GDP (Pagallo 2022a).

Floridi proposes 20 recommendations for a good AI society that should provide a clear ethical and legal framework to resolve “the tension between incorporating the benefits and mitigating the potential harms of AI—in short, simultaneously avoiding the misuse and underuse of these technologies” (Floridi 2023, 173). These recommendations are divided into four general areas for action: to assess, develop, incentivize, and support. They include the development of auditing mechanisms for AI systems to identify unwanted consequences; the development of agreed-upon metrics for the trustworthiness of AI products and services; the incentivization of technologies that are socially preferable and environmentally friendly; the creation of educational curricula and public awareness activities revolving around the societal, legal, and ethical impact of AI; and so on (op. cit., 174–79). Such recommendations hinge, on the one hand, on the tenets of soft rather than hard ethics and, on the other, on the means of good governance. Soft ethics refers to post-compliance ethics, which applies over and above compliance with existing legislation (op. cit., 77). Governance concerns “the practice of establishing and implementing policies, procedures, and standards for the proper development, use, and management of the infosphere” (op. cit., 79–80). Compliance with the law is therefore necessary but insufficient; it is the least that is required, whereas the most must be done: “It is the difference between playing according to the rules and playing well to win the game, so to speak” (op. cit., 173).

However, this very difference between law and ethics should be further clarified. Although Floridi admits that his distinction between hard ethics and soft ethics “is loosely based on the one between hard and soft law” (op. cit., 82), the role of soft law is simply overlooked. Why would soft law be so relevant?

Putting the Secondary Rules of the Law First

Soft law refers to the opinions, recommendations, or guidelines of authorities, public agencies, and boards that do not aim to replace the hard tools of legislation—i.e., the top-down provisions of lawmakers that hinge on the threat of physical and/or pecuniary sanctions—but rather intend to complement them. The soft tools of the law are indispensable in several circumstances to enforce, strengthen, clarify, or stimulate the adoption of the regulator’s top-down provisions through, e.g., the development of new standards. Moreover, several legal systems have complemented the hard tools of the law with forms of soft law and new forms of coregulation, coordination, and cooperation to tackle cases of technological underuse. Drivers of technological underuse, such as public distrust and business diffidence, can hardly be addressed on the basis of the regulator’s top-down instructions alone. The response of most legal systems to the threat of AI underuse has thus been to complement the binding rules of the law with guidelines and mechanisms of coordination, recommendations and methods of cooperation, etc. In health law, for example, this is the approach in both the USA and the EU (Pagallo 2021).

Rather than a symptom of weakness, the strength of soft law depends on both the requirements and the functions of the law, namely, what the law is supposed to be (requirements) and what it is called to do (functions). In some fields, e.g., the legal regulation of autonomous vehicles, most jurisdictions still lack any robust soft law—Germany’s 2021 Road Traffic Act being the proverbial exception—a gap that appears to be the by-product of an ongoing process to determine the rules of hard law (Pagallo 2022b).

Against this backdrop, we may wonder about the connection between soft ethics and soft law.

All in all, we may accept Floridi’s quasi-jusnaturalistic approach—according to which “hard ethics first precede and then further contribute to shaping legislation” (Floridi 2023, 77)—and still, his remarks on how soft ethics may apply to hard law cannot simply be expanded to the realm of soft law. Although soft law may rely on moral arguments, a crucial difference exists between the soft sides of ethics and the law. In legal theory, soft law and the adoption of mechanisms of coordination, cooperation, and even delegation of legal powers to stakeholders concern what are usually dubbed the secondary rules of the law. Contrary to the primary rules of the law, i.e., the hard tools of legislation that aim to directly govern the behavior of groups and individuals through the top-down instructions of the regulator, the secondary rules of the law comprise norms of change and procedures. Advocates of legal positivism and, in particular, of exclusive legal positivism would claim that matters of validity, even in the case of the soft tools of the law, stand by themselves, that is, regardless of any moral constraint after compliance with legislation. Nevertheless, the problem with legal positivism has to do with the very idea of compliance with the valid rules of the game.

After all, attention has been drawn to the soft tools of the law and to the mechanisms of coregulation, cooperation, and coordination endorsed by lawmakers to tackle the challenges of technological underuse. However, analyzing how effective these norms are requires subtler forms of evaluation of the status of adherence to the regulatory provisions of the legal system. Rather than the traditional stance, according to which legal agents are either compliant or not, more nuanced assessments should distinguish between different kinds of compliance, e.g., ideal, sub-ideal, and non-compliant statuses of legal agents (Pagallo 2022c). Whereas legal positivism and the binary alternative of compliance or non-compliance do not provide any valuable information for the assessment and improvement of current institutional initiatives against the underuse of AI—through methods of cooperation, coregulation, or coordination that hinge on the soft tools of the law—I reckon that Floridi’s distinction between playing according to the rules of the game and playing well to win the game points in the right direction. How does soft ethics interact with soft law?

The Politics of Information

The strength of soft law is twofold, since it regards both the formation or clarification of the rules of the game and how well we play to win the game through different forms of legal compliance, such as average compliance, reasonably high compliance, very high compliance, and full compliance (Hashmi et al. 2018). Although such assessment is crucial to determine how effectively current policies and legislation have tackled the opportunity costs of technology, this kind of work is still in its infancy. I think Floridi’s soft ethics plays a critical role in supporting the efforts of soft law. The interaction between ethics and the law regards not only the difference between the rules of the game (hard law) and playing well to win the game (soft ethics) but also the different levels of legal compliance and how the application of ethical principles can help us play increasingly well in the legal domain. Remarkably, this stance fits hand in glove with the “space of soft ethics” illustrated in Figure 6.2 of Floridi’s book (op. cit., 83). I am eager to know how he may further develop this stance in The Politics of Information.

References

Floridi, L. 2023. The Ethics of Artificial Intelligence. Oxford: Oxford University Press.

Hashmi, M., P. Casanovas, and L. de Koker. 2018. “Legal Compliance through Design: Preliminary Results of a Literature Survey.” TERECOM2018@JURIX, Technologies for Regulatory Compliance. http://ceur-ws.org/Vol-2309/06.pdf.

Pagallo, U. 2021. “The Governance of AI and Its Legal Context-Dependency.” In The 2020 Yearbook of the Digital Ethics Lab, edited by J. Cowls and J. Morley. Cham: Springer.

Pagallo, U. 2022a. Il dovere alla salute: Sul rischio di sottoutilizzo dell’intelligenza artificiale in ambito sanitario. Milan: Mimesis.

Pagallo, U. 2022b. “The Politics of Self-Driving Cars—Soft Ethics, Hard Law, Big Business, Social Norms.” In Autonomous Vehicle Ethics: The Trolley Problem and Beyond, edited by R. Jenkins et al. Oxford: Oxford University Press.

Pagallo, U. 2022c. “The Politics of Data in EU Law: Will It Succeed?” Digital Society 1: 20.

Ray, K. N., A. V. Chari, J. Engberg, M. Bertolet, and A. Mehrotra. 2015. “Opportunity Costs of Ambulatory Medical Care in the United States.” American Journal of Managed Care 21, no. 8: 567–74.

UY. 2021. Re-estimating Health Opportunity Costs in the NHS. University of York’s Centre for Health Economics. https://www.york.ac.uk/che/research/teehta/health-opportunity-costs/re-estimating-health-opportunity-costs/#tab-1.

    Luciano Floridi

    Reply

    Response to Ugo Pagallo

    Ugo Pagallo’s essay is an important contribution to the ongoing discussion on AI governance. His exploration of the book highlights some crucial aspects of the interplay between ethics, law, and technology (Barfield and Pagallo 2018; Barfield, Weng, and Pagallo 2024) that deserve broader attention in contemporary debates, particularly the understudied phenomenon of AI underuse and its relationship with soft law mechanisms. His emphasis on opportunity costs in healthcare systems (Pagallo 2022; Pagallo et al. 2024) provides compelling evidence for the economic and social implications of technological underuse, making a persuasive case for why this issue demands immediate attention.

    Pagallo’s analysis of soft law’s role in AI governance aligns with several classic sources worth highlighting. First, there is the Aristotelian concept of practical wisdom (phronesis), through which flexibility in applying rules is seen as essential to achieving just outcomes (Aristotle 1954). Second, the argument that soft law represents not a weakness but rather a necessary complement to hard law links with Hart’s distinction between primary and secondary rules (Hart 1961). This theoretical foundation strengthens when considered alongside the framework of soft ethics defended in the book: different regulatory mechanisms can work in concert to promote optimal AI adoption while maintaining appropriate safeguards. Third, Pagallo is right in stressing that the distinction between soft and hard ethics mirrors the complementarity between soft and hard law in crucial ways. While hard ethics seeks to establish immutable principles and moral absolutes, soft ethics provides the necessary flexibility to address novel situations and technological challenges. Soft ethics complements hard ethical constraints by giving guidance in post-compliance areas where absolute rules prove insufficient or counterproductive. This duality reflects Dworkin’s distinction between rules and principles, where principles possess weight and importance that rules, by their nature, cannot embody (Dworkin 1977). In the context of AI governance, this framework allows us to maintain fundamental ethical constraints while adapting to rapid technological change. Fourth, the examination of compliance frameworks reflects Kant’s distinction between acting in accordance with duty and acting from duty (Kant 2011). This philosophical perspective enriches the analysis beyond traditional binary approaches to suggest a more nuanced understanding of regulatory adherence. 
Finally, the interplay between soft and hard law in AI governance relates to a more profound philosophical tension between flexibility and certainty. Hard law, like hard ethics, provides the necessary framework of enforceable rules and clear consequences, while soft law, like soft ethics, can create spaces for innovation and adaptation. This dynamic recalls Fuller’s internal morality of law, where legal systems must balance the need for stable rules with the capacity to respond to changing circumstances (Fuller 1964). In the rapidly evolving field of AI, this trade-off becomes particularly crucial.

    One way to further Pagallo’s analysis would be by considering how soft law mechanisms might evolve in response to rapid technological change. As I suggest in my discussion of the infosphere, the governance of AI requires dynamic and adaptable regulatory frameworks, with a typically Rawlsian reflective equilibrium, where principles and judgments must be mutually adjusted to achieve coherence. In this direction, several promising areas for development emerge from Pagallo’s analysis. The relationship between soft law and technological innovation deserves deeper exploration, particularly regarding how different compliance models might affect innovation rates. The framework could be extended to examine other sectors, beyond healthcare, where AI underuse presents significant opportunity costs, for example, fiscal reforms and taxation. Empirical studies are needed to quantify the relative effectiveness of soft versus hard law approaches across different jurisdictional contexts, particularly in rapidly evolving technological domains where traditional regulatory frameworks struggle to keep pace with innovation. Future research may also prioritize the development of robust metrics for evaluating soft law effectiveness, including both quantitative and qualitative indicators that can capture the nuanced impacts of different regulatory approaches (see, for example, Casanovas et al. 2024; Casanovas, Hashmi, and de Koker 2024). This work could incorporate cross-cultural analyses of how soft law mechanisms are interpreted and implemented across different legal traditions and business environments. The development of hybrid regulatory frameworks that can dynamically adjust the balance between soft and hard law elements based on empirical outcomes represents a crucial area for investigation. 
Furthermore, research is needed on methodologies for measuring AI underuse impact across different sectors, with particular attention to hidden costs and missed opportunities that may not be immediately apparent in traditional economic analyses.

    As I continue to work on the next project, The Politics of Information, I am grateful for Pagallo’s analysis, which suggests fruitful directions for future research. The integration of soft law mechanisms with ethical frameworks represents a particularly promising avenue for investigation. These developments could significantly influence how we approach AI governance in the coming years. Critical to this evolution will be the development of sophisticated stakeholder feedback mechanisms that can inform the ongoing refinement of both soft and hard law approaches. This will ensure that regulatory frameworks remain responsive to technological change while maintaining their effectiveness in promoting beneficial AI adoption.

    References

    Aristotle. 1954. The Nicomachean Ethics of Aristotle. Oxford: Oxford University Press.

    Barfield, Woodrow, and Ugo Pagallo. 2018. Research Handbook on the Law of Artificial Intelligence. Cheltenham, UK: Edward Elgar Publishing.

    Barfield, Woodrow, Yueh-Hsuan Weng, and Ugo Pagallo. 2024. The Cambridge Handbook on the Law, Policy, and Regulation of Human-Robot Interaction. Cambridge: Cambridge University Press.

    Casanovas, Pompeu, Mustafa Hashmi, and Louis de Koker. 2024. “A Methodological Approach to Legal Governance Validation.” In AI Approaches to the Complexity of Legal Systems. Cham: Springer.

    Casanovas, Pompeu, Mustafa Hashmi, Louis de Koker, and Ho-Pun Lam. 2024. “A Three Steps Methodological Approach to Legal Governance Validation.” arXiv preprint arXiv:2407.20691.

    Dworkin, Ronald. 1977. Taking Rights Seriously. Cambridge: Harvard University Press.

    Fuller, Lon L. 1964. The Morality of Law. New Haven: Yale University Press.

    Hart, H. L. A. 1961. The Concept of Law. Oxford: Clarendon Press.

    Kant, Immanuel. 2011. Groundwork of the Metaphysics of Morals: A German-English Edition. Cambridge: Cambridge University Press.

    Pagallo, Ugo. 2022. Il dovere alla salute: Sul rischio di sottoutilizzo dell’intelligenza artificiale in ambito sanitario. Milan: Mimesis.

    Pagallo, Ugo, Shane O’Sullivan, Nathalie Nevejans, Andreas Holzinger, Michael Friebe, Fleur Jeanquartier, Claire Jean-Quartier, and Arkadiusz Miernik. 2024. “The Underuse of AI in the Health Sector: Opportunity Costs, Success Stories, Risks and Recommendations.” Health and Technology 14, no. 1: 1–14.

Alexander Kriebitz

Response

Ethics vs. Fundamental Rights

Two Competing Normative Pillars of AI Regulation?

Introduction

Legislators and international organizations are increasingly preoccupied with the governance and regulation of AI, since it touches upon significant domains of life, including but not limited to finance, healthcare, human resources, military operations, and transportation (see Hunkenschroer and Lütge 2022; Schmidt et al. 2022). Consequently, this family of technologies exerts an immense influence on individuals, businesses, and society at large.

The European Union in particular positioned itself early in the discourse on how to regulate AI. Notably, the work of the High-Level Expert Group on AI (HLEG, April 2019), as well as the ethical discussions initiated by AI4People, had a strong effect on the formulation of a joint European approach to AI regulation (Engstrom and Jebari 2023). This development was led by prominent philosophers and ethicists, but it was also driven by scholarship in constitutional law and by organizations that expressed a keen interest in aligning AI with legal notions such as fundamental rights, the rule of law, or democracy (Ulnicane 2022).

Thus, the regulatory response to AI involves two conceptual pillars that represent different normative traditions. The discourse on technology ethics has long scrutinized the impact of technologies, specifically in the context of the life sciences, but also with respect to nuclear energy. These discourses have revolved around concepts such as individual responsibility and accountability, as well as considerations related to values such as beneficence, autonomy, fairness, or justice that originate in traditional moral philosophy. Moreover, the EU AI Act, like the earlier HLEG report on Trustworthy AI, has pointed to the relevance of fundamental rights and constitutional principles such as democracy or the rule of law that are at stake in the massive use of AI in society (HLEG AI, April 2019).

The existence of these two traditions prompts the question of how the two conceptual pillars relate to each other within current legislative efforts, and what place ethics occupies in emerging AI regulation. Is AI regulation replacing ethics by gravitating toward fundamental rights, or are ethics and fundamental rights to be understood as mutually reinforcing concepts? To elaborate on this, the remainder of our contribution is structured as follows. We begin by examining the concept of hard and soft ethics, including the closely related idea of post-compliance, as introduced by Floridi. We then concentrate on the question of how the aforementioned normative traditions, namely AI ethics and fundamental rights, have been integrated into the EU AI Act. On this basis, we examine the structural relationship between ethics and human rights along the lines of Floridi’s use of hard and soft ethics, in order to derive implications for the future role of ethics in AI regulation and governance, with a specific focus on the concept of post-compliance.

Hard and Soft Ethics

In The Ethics of Artificial Intelligence, Floridi (2023) distinguishes between two concepts of ethics that impose different types of normative obligations on individuals and society. This distinction serves as a conceptual tool for better understanding the division of labor between upcoming AI legislation and the AI ethics approaches developed by companies in the form of self-regulation. It also closely aligns with earlier approaches to the division between politics and ethics, as well as between formal and informal institutions.

According to Floridi (2023, 82), hard ethics pertains to what “we usually have in mind when discussing values, rights, duties, and responsibilities—or, more broadly, what is morally right or wrong.” Moreover, hard ethics has a purpose in defining the legal conditions, constraints, and circumstances that determine individual agency in society by “formulating new regulations and challenging existing ones” (82). It therefore concerns quite fundamental questions of social organization that require clarification at the level of legislation and are not just subject to individual deliberation. According to the author, reflection on the underlying normative foundations of society has prompted the dismantling of legal structures that were unethical, such as Apartheid legislation. In that sense, hard ethics can be understood as a significant driver of moral progress. Furthermore, Floridi introduces the concept of soft ethics, which refers to the conduct of individuals and companies in areas that lie outside the scope of regulation (82). In a game-theoretical sense, soft ethics determines the moves of the game played by individual players (Lütge et al. 2016). Floridi (2023, 82) states that soft ethics “does so by considering what ought and ought not to be done over and above the existing regulation—not against it, or despite its scope, or to change it.” It is therefore the domain of individual moral choice, and it can be advantageous to act in line with soft ethics. In the case of AI in particular, soft ethics serves as a tool to inform corporate decision makers about the positive potential of technologies, depicting an opportunity strategy, but also as an approach to risk management.

Floridi’s distinction between hard and soft ethics is deeply embedded in a wider philosophical tradition, tracing back to Aristotle, who sharply distinguished between politics, referring to the principles of how to govern a society, and ethics, pertaining to the right, but also smart, way to act as an individual. This distinction is highly relevant from the perspective of institutional economics, which emphasizes the role of formal institutions (laws, constitutions, and binding guidelines) and informal institutions (such as traditions, moral perceptions, and customs) in coordinating human behavior (Lütge et al. 2016). It also involves the metaethical question of how to demarcate the space to be governed by hard ethics, as understood by Floridi, from the space to be governed by individual moral commitments and strategic choices likely to be influenced by a particular cultural background. This debate manifests itself in policy choices, for instance the fiercely debated issue of voluntary self-regulation vs. mandatory legislation in the case of foundation models. Based on this conceptual difference, it is therefore relevant to identify the structural role of both approaches, namely hard and soft ethics, in the context of emerging AI regulation, and particularly the role “soft ethics” is likely to play in it.

The Regulation on Artificial Intelligence: Between Ethics and Fundamental Rights

In its early stages, the public debate on AI governance and regulation was characterized by a dominance of ethical discourses and the search for a common normative foundation to underpin upcoming legislation, in the best case anchored in international conventions. In doing so, most frameworks have oriented themselves toward existing values such as fairness or accountability, resorted to consequentialist and deontological arguments, or sought to align themselves with particular frameworks, including the United Nations Sustainable Development Goals or other concepts related to AI for Good (see Halsband and Heinrichs 2023).

Furthermore, ethical approaches to AI have attempted to formulate principles that address certain features of AI as a technology that stand in conflict with ethical norms (Evans et al. 2023). This reflects the fact that AI tends to replace human agency, that it is prone to bias, that it depends on data, and that the complexity of machine learning and neural networks deprives us of a clear explanation as to why AI has made a particular decision (compare HLEG AI, April 2019). The EU AI Act refers to these elements as “opacity, complexity, dependency on data, autonomous behavior” and puts them at the center of its legislative approach (see “Explanatory Memorandum,” Regulation 2021/0106). Finally, the actual impact created by AI depends on the agenda of the involved parties, particularly users and developers, implying that AI can also be used and modified for unethical or illegal purposes. These characteristics have therefore been understood as the foundation for the design of mitigation and prevention measures. This includes, for instance, the concept of explainability as a means to address algorithmic complexity, fairness as addressing the issue of bias, and the focus on human autonomy when navigating the issues of algorithmic opacity and the non-deterministic nature of AI. However, the contextual meaning of what fairness or the prevention of harm would constitute in a given specific context has often remained vague (see HLEG, April 2019). While the mitigation measures found their way into the AI Act’s high-risk requirements, the objectives to be realized by developers and users of AI, as well as the exact risk categories to be considered in assessments, have not been fully addressed in the AI ethics discourse.

Regulatory frameworks, by contrast, can rely on an arsenal of normative concepts that inform harm, fairness, or privacy considerations in a given context. In the European context, the EU Charter of Fundamental Rights (2000) codifies not only basic rights of individuals, such as the right to privacy, freedom of opinion, or the right to non-discrimination, but also rights of citizens, for instance the right to good administration. Fundamental rights are understood as principles that (1) create boundaries for state—in this case, EU—interventions, (2) imply measures taken by state legislation to protect individuals from adverse impacts on these rights by third parties, and (3) impose duties on states to fulfill entitlements, for instance the duty of states to provide individuals with access to education. According to the preamble of the EU CFR, technologies are relevant for this constitutional mission, as they can unfold major effects on the norms protected by human rights. This coincides with tendencies in international law, which calls on state organizations to prevent adverse effects on human rights—a concept closely related to fundamental rights—created by business entities (Kriebitz and Lütge 2020).

A key aim of European legislation on technologies has therefore been to ensure that business activities do not lead to a decline in fundamental rights. This explains why fundamental rights are repeatedly mentioned throughout the Act as determinants of risks: they give a contextual understanding of harm, for instance when it comes to violations of the physical integrity of a person, or when an AI system undermines freedom of speech. Thus, fundamental rights considerations are found in different contexts of the EU AI Act, particularly in the identification of “prohibited practices” and “high-risk” AI (see “Explanatory Memorandum,” Regulation 2021/0106). The former refers to situations in which the very use of AI conflicts with the content of fundamental rights, for instance in the case of social scoring (see ibid., Art. 5). High-risk use cases, for their part, describe settings in which the stakes are higher due to the proximity of the AI use to norms protected by fundamental rights. This would be the case if AI were deployed in determining access to education.

To conclude, the EU AI Act draws on two different conceptual pillars. The AI ethics discourse has largely described the nature of the problem, namely that AI has certain characteristics that can undermine norms and that AI can be used for multiple purposes. Fundamental rights, by contrast, depict normative expectations and give indications about the risks characterizing a particular setting in which AI is used.

Fundamental Rights and Ethics: Hard or Soft?

Given the conceptual difference between hard and soft ethics on the one side—which correlates with the normative strength of claims on the spectrum between “must” and “may”—and, on the other side, the existence of two normative pillars of AI regulation, namely ethics and fundamental rights, the question arises of how the two conceptual tools relate to each other. In the following, we argue that the existence of fundamental rights and ethics as two different normative traditions in AI regulation adds a further layer to the concept of hard and soft ethics as introduced by Floridi.

This might be counterintuitive at first glance. One might be tempted to argue that fundamental rights, given their legal background and their prominent role in the EU AI Act, represent an example of hard ethics. They would directly imply a “must,” in the sense of requiring immediate action to change the legal framework, but also of imposing strong normative obligations on individual or corporate agents. Likewise, one could argue that ethics would be reduced to post-compliance as a principle informing right conduct, following the introduction of fundamental rights as the major normative approach behind AI regulation.

On second glance, the exact interplay between hard and soft ethics, but also between ethics and fundamental rights, is more complex than it seems. While fundamental rights set clear normative guardrails for states, their implications for companies are slightly different. Unlike states, business enterprises are not legally bound by fundamental rights in their own right (compare Kriebitz and Lütge 2020). The notion of fundamental rights impact therefore points to certain norms (in German, Rechtsgüter) protected by fundamental rights, such as freedom of speech or freedom of movement, that should be considered by companies in their operations. Companies therefore need to demonstrate that they have taken measures to prevent and mitigate particular interventions in areas protected by fundamental rights. For instance, a social network evaluates the impact of certain design choices and privacy settings on freedom of speech (see HLEG, April 2019). However, not all entities in the AI ecosystem are likely to have such an influence on design choices, particularly small business enterprises that use AI solutions created by others. It is therefore plausible to assume that the exact normative urgency on the spectrum of “must” vs. “may” depends on the leverage companies have in understanding, and also in preventing, a negative impact created by an AI solution. Such a rationale is at least plausible considering the focus in the business and human rights discourse on the mitigation of adverse impacts created by corporate activities. Detailed codes of conduct specifying the implementation of fundamental rights in AI development and use would in any case be difficult to define, given the role of constitutional courts in resolving tensions between different implications of human—and, in the EU context, fundamental—rights.
Moreover, fundamental rights are instrumental for understanding the exact interplay between soft and hard ethics, as legislation—hard ethics—bears always implications for the exercise of fundamental rights for example the freedom to conduct business. Tighter regulation usually implies higher market entry barriers, particularly for companies that seek to challenge already existing monopolies. AI regulation can impose high costs on individuals and companies that seek to participate in AI development, and needs to strike a proportionate balance between the risk posed by the technology and the consequences of legislation on fundamental rights understood as freedom. Doing so, a fundamental rights-centered approach would create room for individual actions that can be guided by ethical considerations and that are restricted by an overarching legal framework (compare: Lütge et al. 2016; Häußermann and Lütge 2022).

The role of ethics therefore remains that of identifying the traits of an AI solution that are pertinent to a given use case and the negative effects those traits may create, but also of identifying the right mitigation and prevention measures when balancing the positive and negative impacts of a particular AI solution in terms of human rights. An example would be the use of care robots, which require immense amounts of data in order to provide highly individualized patient care (Boch et al. 2023). Here again, ethics will have to assess the existence of hard boundaries, namely clear violations of ethical principles that must not be crossed (for instance, the lack of consent in patient interactions), and the existence of principles that require contextual interpretation. Moreover, the resolution of dilemmatic situations and other moral trade-offs is likely to be a main field of examination, particularly as regulation will hardly cover all potential trade-offs and dilemmas. Nevertheless, this debate often involves facets regulated by law, for instance in the case of risk assessments along the lines of fundamental rights, as well as by broader, uncodified ethical conventions, where developers and users of AI will have to rely on normative instruments, for example the notion of proportionality or the understanding of leverage, to defuse tensions between different norms.

Considering the initial question on the role of ethics in a world after AI regulation, we would propose seeing ethics not merely as an area of post-compliance. Instead, we would recommend seeing ethics as a means to identify the normative relevance of a given AI solution (“pre-compliance”), as an indicator of the right choice of mitigation and prevention measures, and finally as a tool that makes it easier for companies to comply with upcoming legislation (“compliance”). While the conceptual distinction between hard and soft ethics makes sense, as it informs the division of labor between society, organizations, and individuals, AI regulation constitutes an in-between that necessitates not only hard and soft ethics, but also a closer understanding of the conceptual differences between ethics and fundamental rights.

References

Boch, A., S. Ryan, A. Kriebitz, L. M. Amugongo, and C. Lütge. 2023. “Beyond the Metal Flesh: Understanding the Intersection between Bio- and AI Ethics for Robotics in Healthcare.” Robotics 12, no. 4: 110.

Charter of the Fundamental Rights of the European Union. 2000. https://www.europarl.europa.eu/charter/pdf/text_en.pdf.

Engstrom, E., and K. Jebari. 2023. “AI4People or People4AI? On Human Adaptation to AI at Work.” AI & Society 38, no. 2: 967–68.

Evans, K., N. de Moura, S. Chauvier, and R. Chatila. 2023. “Automated Driving Without Ethics: Meaning, Design and Real-World Implementation.” In Connected and Automated Vehicles: Integrating Engineering and Ethics, 123–43. Cham: Springer Nature Switzerland.

Floridi, L. 2023. The Ethics of Artificial Intelligence. Oxford: Oxford University Press.

Halsband, A., and B. Heinrichs. 2022. “AI, Suicide Prevention and the Limits of Beneficence.” Philosophy & Technology 35, no. 4: 103.

Häußermann, J. J., and C. Lütge. 2022. “Community-in-the-Loop: Towards Pluralistic Value Creation in AI, or—Why AI Needs Business Ethics.” AI and Ethics, 1–22.

High-Level Expert Group on Artificial Intelligence (HLEG AI). 2019, April. “Ethics Guidelines for Trustworthy AI.” European Commission. https://ec.europa.eu/futurium/en/ai-allianceconsultation.1.html.

Hunkenschroer, A. L., and C. Lütge. 2022. “Ethics of AI-Enabled Recruiting and Selection: A Review and Research Agenda.” Journal of Business Ethics 178, no. 4: 977–1007.

Kriebitz, A., and C. Lütge. 2020. “Artificial Intelligence and Human Rights: A Business Ethical Assessment.” Business and Human Rights Journal 5, no. 1: 84–104.

Lütge, C., T. Armbrüster, and J. Müller. 2016. “Order Ethics: Bridging the Gap between Contractarianism and Business Ethics.” Journal of Business Ethics 136: 687–97.

Regulation 2021/0106. “Regulation of the European Parliament and of the Council Laying Down Harmonized Rules on Artificial Intelligence.” European Parliament, Council of the European Union. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52021PC0206.

Schmidt, L. 2022. “Mapping Global AI Governance: A Nascent Regime in a Fragmented Landscape.” AI and Ethics 2, no. 2: 303–14.

Ulnicane, I. 2022. “Artificial Intelligence in the European Union: Policy, Ethics and Regulation.” In The Routledge Handbook of European Integrations. Taylor & Francis.

  • Luciano Floridi

    Reply

    Response to Alexander Kriebitz and Christoph Lütge

    Kriebitz and Lütge’s essay offers a nuanced analysis of the relationship between ethics and fundamental rights in AI regulation, making several valuable contributions to understanding how these normative frameworks interact. The examination of the distinction between hard and soft ethics provides an especially useful level of abstraction through which one may evaluate current regulatory developments.

    One of the essay’s main insights concerns the complementary rather than competing nature of ethics and fundamental rights in AI governance. The essay convincingly argues that ethics, in general, cannot be relegated to a purely post-compliance role, but serves essential functions across the entire regulatory lifecycle, from identifying salient features of AI systems to guiding the implementation of mitigation measures. I agree. The distinction between soft and hard ethics is not meant to contrast with, or be incompatible with, this view, which I have defended in other contexts when ethics is taken as a whole (Floridi 2013). It is only once laws are in place that it becomes helpful to distinguish between hard and soft ethics, to show that, even in the presence of acceptable and comprehensive legislation, there is still plenty of room for further ethical analysis and behavior, which is post-compliance.

    I also agree that the relationship between soft ethics and human rights deserves more study. While fundamental rights establish legal boundaries and obligations, soft ethics can help organizations operationalize these principles in practice, over and above what is required or allowed by the relevant legislation. Furthermore, soft ethics can guide understanding and behavior in areas not explicitly covered by regulation. In the context of human rights, this suggests soft ethics can help companies interpret and implement human rights principles in specific technological contexts. For instance, while the right to privacy is legally protected, soft ethics frameworks can guide companies in making nuanced decisions about data collection and use that go beyond minimum legal requirements or what is legally allowed, while respecting the spirit of privacy rights.

    The interplay between soft ethics and human rights manifests in many important ways. It can help bridge the governance gaps that exist in traditional human rights frameworks, particularly when dealing with novel technologies. Companies often face situations where applying human rights principles to AI systems is not straightforward (Ruggie 2013). In these cases, soft ethics provides a framework for deliberation and decision-making that helps maintain alignment with human rights objectives while addressing practical challenges. Legal frameworks, including human rights legislation, typically develop at a relatively slower pace due to the need for extensive consultation, political consensus, and formal adoption procedures. Soft ethics, by contrast, can adapt more rapidly to emerging challenges and technological developments. Ethical frameworks can be updated and refined through corporate practice and stakeholder dialogue without requiring formal legislative processes (Rességuier and Rodrigues 2020). This agility makes soft ethics particularly valuable in the fast-moving field of AI development, where new ethical challenges can emerge more quickly than legal frameworks can address them. Furthermore, soft ethics can serve as an innovation driver in human rights protection. Some ethical frameworks can evolve more rapidly than legal structures, allowing organizations to develop and test new approaches to protecting human rights in the context of emerging technologies (Yeung, Howes, and Pogrebna 2020). This dynamic relationship between soft ethics and human rights creates a feedback loop where ethical innovations can eventually inform and strengthen formal human rights protections. This iterative process helps refine our understanding of how fundamental rights should be interpreted and applied in novel technological contexts (Nemitz 2018).

    The adaptability of soft ethics makes it particularly suitable for addressing cultural variations in the implementation of human rights principles. While fundamental rights provide universal standards, their post-compliance, practical application often needs to be sensitive to local contexts and values. Soft ethics can help bridge this gap by delivering flexible frameworks that respect both universal rights and contextual considerations.

    The essay’s discussion of fundamental rights as creating “normative guardrails” rather than direct obligations for private actors is perceptive. This framing aligns with the conception of constitutional essentials as providing basic framework conditions while leaving room for different reasonable comprehensive doctrines (Rawls 2005). The recognition that the intensity of obligations varies based on companies’ leverage and capacity for impact shows appreciation for practical implementation challenges.

    The essay suggests some fruitful lines of further research on how different stakeholders navigate the space between hard regulatory requirements and softer ethical guidelines. For example, soft ethics can help organizations develop more robust human rights due diligence processes by providing frameworks for identifying and addressing potential human rights impacts before they materialize. This preventive function of soft ethics complements the protective function of human rights law, creating a more comprehensive approach to responsible AI development (Hagendorff 2020; Heilinger 2022).

    In conclusion, the essay makes a strong case that AI governance requires both ethical and rights-based frameworks to work in concert. This insight has important implications as regulatory regimes continue to evolve. Rather than viewing ethics as subordinate to or separate from fundamental rights protections, we should understand them as mutually reinforcing pillars supporting responsible AI development. The varying development speeds of these frameworks suggest the need for mechanisms to ensure better coordination between soft ethics, human rights principles, and legal requirements. Future research might explore how to create more formal feedback channels between ethical innovation and legal development, ensuring that insights from ethical frameworks can more effectively inform the evolution of hard law while maintaining the necessary rigor of legal processes.

    References

    Floridi, Luciano. 2013. The Ethics of Information. Oxford: Oxford University Press.

    Hagendorff, Thilo. 2020. “The Ethics of AI Ethics: An Evaluation of Guidelines.” Minds and Machines 30, no. 1: 99–120.

    Heilinger, Jan-Christoph. 2022. “The Ethics of AI Ethics: A Constructive Critique.” Philosophy & Technology 35, no. 3: 61.

    Nemitz, Paul. 2018. “Constitutional Democracy and Technology in the Age of Artificial Intelligence.” Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 376, no. 2133.

    Rawls, J. 2005. Political Liberalism. New York: Columbia University Press.

    Rességuier, Anaïs, and Rowena Rodrigues. 2020. “AI Ethics Should Not Remain Toothless! A Call to Bring Back the Teeth of Ethics.” Big Data & Society 7, no. 2.

    Ruggie, J. G. 2013. Just Business: Multinational Corporations and Human Rights. London: W. W. Norton.

    Yeung, Karen, Andrew Howes, and Ganna Pogrebna. 2020. “AI Governance by Human Rights–Centered Design, Deliberation, and Oversight.” The Oxford Handbook of Ethics of AI, 77–106.

Alessandro Blasimme

Response

Enveloping the World

Digital Transformation and the Ethics of AI

It is hard to be aware of transformations as they are unfolding. Even harder it is—for most of us—to accept transformations once they have unfolded. Few things elicit more resistance than change, especially when it is deployed before our eyes, in plain sight. Humans have this mysteriously fascinating tendency to deny that what they thought was the case, in fact, is no more. And yet we have embraced the digital transformation of our previous (analogue) world into a global ecosystem in which our presence is the presence of our data, our identity is the identity of our data.

Or, haven’t we? Have we really accepted this new digital world, or are we experimenting with (futile? pointless?) forms of resistance to an already accomplished process? The coming of age of AI—not AI per se, that has been around for decades, but its recent large-scale implementation in all domains of individual and associated life—is sparking global debate about the merits and the limits of digitalization.

We share presence and identity with tools and objects that literally do not exist for us (meaning, cannot be used, rented, bought, and sold) unless we can appear as data that such tools and objects can compute and transact alongside the data of myriad other users, perfectly identical to ourselves—at least in the “eyes” of such digital objects. To many of us this is just what it is: the present way humans inhabit the world. But what shall we do with the deeply uncomfortable feeling that our existence is somehow reduced to that of our data, that it is inexorably mediated by technological objects, and that there is no coming back from this point of (digital) alienation?

Perhaps a simpler question is whether the alleged digital transformation of our world has already occurred, or whether it is a process that is still unfolding, perhaps an irreversible one. Yet again, such a question triggers more complex thoughts, as we inevitably need to ask how we shall live in such a transformation, whether it is an accomplished or an ongoing process.

These and similar questions hover overhead as the reader engages with Luciano Floridi’s new book The Ethics of Artificial Intelligence. Right from the first pages, Floridi gives center stage to the phenomenon of digital transformation as the bedrock of his argument(s). The book dovetails anthropological, sociological, and ethical reflections as it presents a number of normative observations about AI ethics and, while the title does not really convey that, AI governance.

The philosophical starting point of this articulated normative discussion is an epistemological assumption: we can no longer know the world we are in unless we see it for what it is, namely, a digital world. An ontological assumption is coupled with the epistemological one—or rather nested within it, as it were: the very nature of the world is different (digital), not just the phenomenic surface of how we do stuff in it. Floridi has defended these two philosophical theses elsewhere. In this new book, he takes them as the basis for two historically laden (sociological) claims. First, the digital transformation of the world has happened and cannot be unmade. Second, the digital transformation of the world will not happen a second time. Floridi, however, perhaps unlike in his past formulations of this argument, now insists more on the ongoing character of digital transformation. And he does so precisely in relation to AI. The digital world, he maintains, is still evolving—or being remade. In particular, AI is the force shaping this digital retransformation. AI is providing unprecedented computational opportunities. In this sense it is part and parcel of the very digital world it is nonetheless transforming. However, for AI to be able to deploy its agency (I’ll get back to this shortly) in the digital world, the digital world needs to be properly arranged in the first place. The current status of the digital world is not—yet—conducive to the deployment of AI’s full potential in terms of predictive, classificatory, and decision-making capabilities. What needs to happen is what Floridi calls “enveloping.” Very simply stated, enveloping means making the world AI-friendly, or better, AI-ready.
More specifically, it means arranging the digital world in such a way as to provide AI systems with the data they need (in terms of kind, quality, and quantity); restructuring industries and institutions (e.g., the healthcare sector) so that key services can be delivered in an AI-automated way; and reforming ethical norms and regulatory ecosystems so that AI-driven decisions can be trusted and perceived as legitimate.

It seems that many of the challenges Floridi identifies around AI relate in one way or another to enveloping. Floridi clearly spells out the biggest of these problems from an ethical point of view, with overtly Kantian overtones: are humans (their wellbeing, their happiness, their flourishing) the ultimate end of this enveloping process, or is it rather the case that humans too are the means through which enveloping is taking place (28)?

What is particularly interesting in this way of interrogating the ethics of AI is that it shifts the focus from the consequences of using and misusing AI, to exploring which “forms of life” AI is giving rise to. Philosophy of technology and Science and Technology Studies scholars have long insisted that technological artifacts do derive from and, at the same time, reinforce imagined forms of social life (ethics) and social order (governance)—a process that is not necessarily visible, let alone straightforward to reconstruct while it is still taking shape.

And this takes us back to the point we have started from, namely, Floridi’s (now) sociologically and historically laden appreciation of digital transformation as an ongoing process. It is precisely because the process is ongoing that we may have a hard time recognizing it or figuring out its ethically fundamental characteristics. Nonetheless, it looks like we cannot avoid questioning (in Kantian, philosophical, or STS ways) the process whereby the world and its inhabitants get enveloped for the sake of, well, AI itself.

As briefly alluded to before, Floridi defines AI as an agent, or to be more precise, as a reservoir of agency to solve problems and perform tasks (whole chapter 2). This is a promising viewpoint to start exploring whether and how humans are adapting to the presence and behavior of new agents (AI models) and what specific principles, guidelines, and regulations need to be adopted in order to deal with those agents. Are humans able to resist certain unethical or unpolitical forms of adaptation? Should we be worried that resistance is—as we said at the beginning—already a sign of pointless protest against change that has already inexorably occurred? If the digital transformation of the world will not happen a second time—as Floridi maintains—then what AI is realizing is not a second digital revolution. It is either the latest stage (the end stage?) of digitalization or something different altogether.

Floridi seems to think the former to be the case. The recent explosion of AI and its associated ethical and governance challenges are part of a broader and longer phenomenon. To some extent, the reader has the impression that questioning this phenomenon cannot stretch to the point of arresting it, or, for that matter, attempting to substantially interfere with it. The process has momentum of its own and it is pretty much irreversible. Hence the book goes back and forth between foundational issues and practical strategies of piecemeal adaptation and mitigation, focusing also extensively on principles, guidelines, and governance practices as means to channel innovation in AI toward what Floridi calls “AI for social good” (AI4SG).

The ultimate aim of any philosopher of technology—a label I hope Floridi is comfortable with—is not to understand what technology is, but what technology is for. In this respect, philosophy of technology converges toward, or maybe even overlaps with, ethics—a disciplinary label with which Floridi has become increasingly comfortable in recent years. Floridi’s book is a book about the ethics of AI. But it is more generally a book about the ethics of technology as a historical phenomenon that invests, at the same time, human self-understanding and our capacity to conduct ourselves with practical reason—yes, in an overtly Kantian sense. However, this book pursues an aim that Kant might have found too mundane—that of situating practical reason historically. The backbone of Floridi’s strategy is spelling out the ethical salience of AI as the incarnation of a historical condition. Hence his interest—at once ambitious and humble—in AI as a determinant of social good. I think recognizing this philosophical move is key to understanding the book as a book about social ethics, but also, more generally, to appreciating where ethics as a discipline is going and how it can help with some of humanity’s most pressing questions. This book invites scholars in ethics to look at the big picture, and to analyze how our questions relate to AI as a piece of our (present) world.

Whether we should think—with Hegel—of historically determined technological conditions, or—with Marx—of technologically determined historical conditions is not a problem that bothers Floridi more than it should. Or maybe it will, in his next book.

  • Luciano Floridi

    Reply

    Response to Alessandro Blasimme

    Alessandro Blasimme’s essay is a discerning examination of my latest contribution to AI ethics, skillfully highlighting how the work situates AI within a broader historical and philosophical framework of digital transformation. His analysis particularly excels in unpacking the concept of “enveloping”—the process of making the world AI-ready or AI-friendly, to rephrase his helpful synthesis—and its profound implications for human agency and social organization, a theme that I have developed throughout my work on the philosophy of information and information ethics (Floridi 2010, 2013).

    His essay identifies a tension between technological determinism and human agency, by framing AI development as part of an ongoing digital transformation that has already occurred yet is still unfolding. This is correct and the point deserves further reflection. Enveloping may be seen as the ontological counterpart of what Heidegger calls “enframing” (Gestell) and its role in shaping human experience and understanding (Heidegger 1977). Enframing refers to the fundamental mode of revealing or understanding the world through modern technology, where everything is viewed as a “standing reserve” or raw material ready to be exploited and manipulated. Essentially, enframing describes the mindset that conceives or conceptualizes the world as a resource to be used or exploited for human purposes, rather than something with inherent value. Enframing is thus closer to a Husserl-like or phenomenological perspective about a technologically mediated lifeworld (Ihde 1990). In both cases, what is emphasized is the increasingly technology-dependent and technologically oriented nature of our experience. In Kantian terms, the phenomena are grasped through and as technology. Instead, enveloping is a metaphysical concept (in a Kantian sense, no Heideggerian terminology involved here), as it refers to the design of the world in itself, not its conceptual appropriation, in ways that are meant to facilitate the success of digital technologies and AI in particular. In Kantian terminology, although we may not know the noumenon, we are designing it to ensure that digital artefacts will be successfully embedded in it. If this seems strange, let me add two short clarifications. First, even in Kant we must accept that our contact with the world through doing is a direct contact with the noumenon. When I eat an apple, I am a noumenon that eats another noumenon, not a phenomenon, whatever the Apfel an sich may be. 
Second, the post-Kantian debate about the philosophy of technology struggled precisely with this point: insofar as technology represents a creation of the knowing subject (Floridi 2018), the latter must be granted a more constitutive degree of knowledge of what is built than Kant—who still endorses a classic view of knowledge as observation rather than construction of the intrinsic nature of the observed—can allow (I have sought to explain how this can be reconciled with a post-Kantian view of the unknowability of the noumena in Floridi 2008).

    To return to the tension, it is complicated and perhaps arbitrary to establish precisely when the dialectic between enveloping the world factually (metaphysically, in Kant’s terminology) and enframing it conceptually began (does the invention of the wheel or of writing count?), but it seems reasonable to argue that it reached a new turning point when the ontology of the technologies used in both cases changed from analogue to digital. When I refer to a digital revolution, it is this perspective that I have in mind, and hence this threshold, which can be dated roughly to the work of Alan Turing, if a single referent must be chosen, despite all the preparatory steps that led to it, from Leibniz to Babbage. We have crossed that threshold, and this is the point about a new chapter in our history. Still, the development of such a revolution and its consequences have only begun unfolding, and this is the point about the continuous nature of this new chapter.

    The essay contains many other valuable insights. It connects my work to Kantian ethics. This is correct and illuminating, especially regarding the question of whether humans are means or ends in the process of digital transformation. This ethical framework provides a valuable approach for examining AI development, although I have also argued that work on distributed agency, socio-technical systems, and more ecological approaches offer additional insights. Furthermore, the essay’s emphasis on my treatment of AI in terms of “agency” raises important questions about moral status and responsibility in human-AI interactions, topics that still need much more research and will become increasingly crucial as we move into environments often inhabited by artificial agents. I believe this conception of AI as a new form of agency, not intelligence, provides a better framework for understanding how AI systems can be understood as agents with a moral impact (Floridi and Sanders 2004) while maintaining important distinctions from human moral agency. It is also more helpful to analyze ethical implications and design related solutions. And it connects fruitfully to broader discussions in philosophy of technology about the nature of technical artifacts and their role in human society (Winner 2020).

    Looking forward, the essay points toward essential questions about the governance of AI systems and the role of ethics in shaping technological development. A step toward “social ethics” and more political considerations represents a promising direction for future research in applied ethics and technology studies. Thus, the closing reference to Hegel and Marx highlights an important area for future philosophical investigation (see also Floridi 2023): the relationship between technological and historical determinism in the context of AI development—and, I would add, the digital revolution more broadly—and the role that human agency and freedom may have in shaping its own future through better design, which means better enveloping and better enframing. This is what I hope to work on in the near future.

    References

    Floridi, Luciano. 2008. “A Defence of Informational Structural Realism.” Synthese 161, no. 2: 219–53.

    Floridi, Luciano. 2010. The Philosophy of Information. Oxford: Oxford University Press.

    Floridi, Luciano. 2013. The Ethics of Information. Oxford: Oxford University Press.

    Floridi, Luciano. 2018. “What a Maker’s Knowledge Could Be.” Synthese 195, no. 1: 465–81.

    Floridi, Luciano. 2023. The Green and the Blue: Naive Ideas to Improve Politics in the Digital Age. Newark: John Wiley & Sons.

    Floridi, Luciano, and Jeff W. Sanders. 2004. “On the Morality of Artificial Agents.” Minds and Machines 14: 349–79.

    Heidegger, Martin. 1977. The Question Concerning Technology, and Other Essays. New York: Harper & Row.

    Ihde, Don. 1990. Technology and the Lifeworld: From Garden to Earth. Bloomington: Indiana University Press.

    Winner, Langdon. 2020. The Whale and the Reactor: A Search for Limits in an Age of High Technology. 2nd ed. Chicago: University of Chicago Press.