Soft Digital Ethics
As our world becomes ever more digital, we increasingly need not only offline but also online guidelines for ethical behavior. In this spirit, the literature distinguishes between hard and soft ethics, whereby the former is what may contribute to making or shaping laws and regulations. Soft ethics, on the other hand, covers the same normative ground as hard ethics, but it does so by considering what ought and ought not to be done over and above existing norms, not against them, or despite their scope, or to change them, or to bypass them (e.g., in Internet self-regulation of big data, machine learning, and artificial intelligence). This kind of digital ethics, which we blend with a fuzzy, pluralistic, and multicultural worldview, can be exercised in parts of the world where digital regulation is morally sound (e.g., in Europe and Switzerland).
An AI Ethic for Swiss Society
Often referred to as the new oil, data is becoming increasingly central. This oil fuels the machinery of our Swiss society through machine learning and artificial intelligence. Unlike oil, however, data is not depleted; it grows. One should therefore not think of data as a static machine, but rather as a kind of living organism that represents our society better and better. Yet a living organism is very complex and must therefore be viewed and researched holistically. In this context, ethical considerations play an increasingly prominent role.
As a service public company, Swiss Post (like other network industries) is striving, within a Corporate Digital Responsibility project, to get its data strategy approved Switzerland-wide and is therefore, together with the Human-IST ecosystem, inviting society to a broad digital ethics discussion. Key stakeholders include politicians, the public sector, companies, universities, and citizens' associations. This transdisciplinary project arose from the Swiss Human-IST Association, which was formed in the Smart Capital Region and deals holistically with humanistic values for a sustainable and responsible digitization of Switzerland.
Based on human and ethical values discussed in this ecosystem, we are developing a label (or charter) for a human-centered, sustainable, and responsible digitalization of Switzerland. Inspired by human rights, and looking at human experience, democracy, and federalism, this Human-IST label will be awarded to digital products and services that follow ethical and humanistic principles. The idea is to issue and distribute the label via a Human-IST blockchain solution.
Human-IST collaborators: Edy Portmann, Denis Lalanne, Luis Terán
Partners/ External collaborators (companies): Swiss Post, Smart Capital Region
Ethik Check for Conversational AIs
Our General Idea
We are developing an ethics test for chatbots and voicebots, i.e., conversational AIs. Inspired by the Turing test, which is designed to find out how human-like an AI is, we want to identify ethically correct bots and provide optimization hints to those that are not yet. To define ethically correct bots as such, we first must define what ethically correct means in our language and culture, and we must consider that these values may change over time. Subsequently, we must define metrics for how the previously developed ethical criteria can be measured. This is followed by an assessment, which each bot can undergo to obtain an evaluation of the state of its ethical correctness, its ethicity (we use ethicity to mean a fuzzy measure of ethical value), including any potential for improvement. The assessment will initially be conducted by humans, but it is conceivable that bots will also be able to conduct the assessment in the future.
The ethics check for conversational agents is doubly novel. On the one hand, there are no recognized or widely used guidelines for conversational AI projects, neither in research nor in practice. On the other hand, apart from the Turing test, there are no benchmarks or other tests that evaluate conversational agents while also identifying optimization opportunities. Our project even combines both aspects in a final application.
In contrast to the Turing test, in which a human chats with a conversational AI, our ethics check should ultimately be performed directly by a chatbot. The chatbot then has the defined set of ethics rules, knows which questions to ask, and can match and rank the answers of the chatbot under test against the benchmarks we have defined. The result is a fully automated ethics check that is also transparent, as it shows exactly which criteria contributed to the decision-making process and to what extent.
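The transparent scoring idea above can be sketched in a few lines of code. The criteria names, weights, and the 0-to-1 scoring scale below are hypothetical placeholders for illustration, not the project's actual ethical standards:

```python
# Minimal sketch of a transparent ethics-check scorer; criteria, weights,
# and the 0..1 scale are illustrative assumptions, not defined standards.
from dataclasses import dataclass

@dataclass
class Criterion:
    name: str
    weight: float  # relative importance, normalized during aggregation

def ethics_check(scores: dict[str, float], criteria: list[Criterion]) -> dict:
    """Aggregate per-criterion scores (0..1) into an overall ethicity value
    and report each criterion's contribution for transparency."""
    total_weight = sum(c.weight for c in criteria)
    contributions = {
        c.name: (c.weight / total_weight) * scores.get(c.name, 0.0)
        for c in criteria
    }
    return {"ethicity": sum(contributions.values()),
            "contributions": contributions}

# Hypothetical example: three criteria with assumed weights and scores.
criteria = [Criterion("transparency", 2.0),
            Criterion("privacy", 2.0),
            Criterion("fairness", 1.0)]
result = ethics_check({"transparency": 0.8, "privacy": 0.6, "fairness": 1.0},
                      criteria)
```

Because the result carries the per-criterion contributions alongside the overall value, an assessed provider can see exactly which criteria drove the evaluation and to what extent.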
Before an ethics check can take place, it must be defined what ethically correct means. Ethics is not universal; it is constantly evolving and strongly influenced by culture. Together with other partners, we define ethical standards for a first region. The German-speaking European region was chosen as this first region. We call it a smart region, since we want to develop our standards based on the ideas of smart cities, which typically integrate citizens into the process of idea generation and development. Using various research methods (literature reviews, surveys, expert interviews, and focus groups), we will identify ethical standards for chat- and voicebots and propose first approaches for how these can be measured. For the surveys, focus groups, and expert interviews, care must be taken to cover a broad swath of the population. For the surveys, a representative panel must be used that takes equal account of different age groups, educational levels, occupational groups, and origins. Experts from various disciplines will be selected for the focus groups, primarily including psychology, data protection, computer science, digitization, education, data science, marketing, and business. We need to consider that ethical values may change; therefore, a mechanism for continuous improvement must be integrated. In further iterations, concrete methods or questions can then be developed to find out how ethically correct a bot behaves. Providers (mostly companies that use bots for their customers and employees) will then even receive suggestions on how they can develop their bots in a more ethically correct way. At this point, we will probably face the challenge that many conversational AI projects are focused on specific use cases, so we must overcome the challenge of measuring ethical correctness despite these limitations, and possibly define rules for doing so.
In the first phase, the ethics check will be carried out by humans. Humans chat with the bot, ask the relevant questions, note down the answers, and then evaluate them using a previously defined evaluation grid. While developing our ethics test, we pay attention to Pangaro's work on conversation theory (Pangaro, P., The Architecture of Conversation Theory, 1989) to conceive a model that allows learning, and thus the co-adaptation or co-evolution of interacting humans and/or machines, through conversation. Since today's AI often lacks feedback loops, we want to integrate conversations built around feedback loops at different levels, managing the how and the why. In addition, conversation loops are elaborated into a small-data learning technique that brings many advantages, such as ecological, economical, and privacy-preserving computing.
In further phases, both the chatting and the ethics check will be carried out by a chatbot, so that the entire ethics check can be fully automated in the long term. As soon as an AI must evaluate another bot, we will resort to the approaches of fuzzy logic and computing with words (CWW). Fuzzy systems can deal with fuzzy data and are therefore well suited to characterizing the expressions of people or bots and testing their ethical maturity (i.e., fuzzy ethicity). Computing with words is a system of computation, based on fuzzy logic, in which the objects of computation are predominantly words, phrases, and propositions drawn from a natural language, as we have in our chat conversations.
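A minimal sketch of how computing with words could map linguistic answers onto a fuzzy ethicity value follows. The vocabulary, anchor points, and triangular membership functions are illustrative assumptions only, not the project's calibrated model:

```python
# Sketch of computing with words (CWW) for fuzzy ethicity; the word
# vocabulary and membership functions are illustrative assumptions.
def triangular(x: float, a: float, b: float, c: float) -> float:
    """Triangular fuzzy membership function on [a, c] with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Map linguistic labels (words a bot might use in its answers) onto the
# unit interval; these anchor values are assumed for the example.
WORD_TO_VALUE = {"never": 0.0, "rarely": 0.25, "sometimes": 0.5,
                 "mostly": 0.75, "always": 1.0}

def fuzzy_ethicity(answers: list[str]) -> dict[str, float]:
    """Aggregate word-valued answers into degrees of membership in the
    linguistic terms 'low', 'medium', and 'high' ethicity."""
    xs = [WORD_TO_VALUE[w] for w in answers]
    mean = sum(xs) / len(xs)
    return {"low": triangular(mean, -0.5, 0.0, 0.5),
            "medium": triangular(mean, 0.0, 0.5, 1.0),
            "high": triangular(mean, 0.5, 1.0, 1.5)}
```

The output is itself fuzzy: a bot answering "always" and "mostly" belongs partly to "high" and partly to "medium" ethicity, rather than being forced into a single crisp verdict.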
Human-IST collaborators: Edy Portmann, Sophie Hundertmark
Swiss Digital Ethics Compass
Developing digitization initiatives necessitates a balanced approach to ethics. Several frameworks have been adopted to clarify the various challenges in digital contexts. An integrated approach is required to meet the demand for sustainable digitalization and address the growing number of challenges in handling data and related components. Our framework sets out to address these challenges from data, machine learning, and artificial intelligence, which are connected to concerns of justice, sustainability, and climate change, and to implement a digital ethics radar as a user interface. By means of our digital ethics compass, a conceptual framework, our vision is to develop digitalization standards that connect law and regulations, ethics and justice, and environmental sustainability.
Our framework includes boundary conditions of sustainable development, social thresholds of justice, and planetary boundaries. It incorporates frameworks and standards such as value-based engineering and IEEE. The goal is to assess digital services on their sustainability and ethics. Thereby, we focus on computational ethics, which is intended to make values measurable and calculable. Collaboration between ethical research, computer science, and business practice is needed to define conditions of justice as evaluation standards for digital services and maturity models. Ethics contributes by defining norms used as evaluation standards, and computer science is required to determine criteria and algorithms to assess digital services.
The project will develop artifacts such as ethical heat maps and linguistic summaries as minimum viable products (MVPs), refined through feedback loops within an agile methodology, in leading cooperation with Swiss Post as a business and implementation partner. Part of testing the artifacts is Swiss Post's operationalization within its use cases, which provides constant and early feedback from users on the expected business value of the framework and on how its implementation will impact services across different projects to create added value.
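To illustrate what a linguistic summary artifact could look like, the sketch below produces a Yager-style summary ("most services score high") with a fuzzy truth degree. The quantifier shape, the threshold, and the example scores are made-up assumptions, not project data:

```python
# Illustrative sketch of a linguistic summary with a fuzzy quantifier;
# the quantifier shape, threshold, and scores are assumptions.
def most(proportion: float) -> float:
    """Fuzzy quantifier 'most': 0 below 0.3, 1 above 0.8, linear between."""
    return min(1.0, max(0.0, (proportion - 0.3) / 0.5))

def summarize(scores: list[float], threshold: float = 0.7) -> tuple[str, float]:
    """Return the summary sentence and the truth degree of the claim
    'most services score high on digital ethics' for the given scores."""
    proportion = sum(s >= threshold for s in scores) / len(scores)
    return "Most services score high on digital ethics", most(proportion)

# Hypothetical assessment scores for four digital services.
claim, truth = summarize([0.9, 0.8, 0.75, 0.4])
```

A summary like this condenses many per-service assessments into one human-readable sentence plus a degree of validity, which is the kind of output an ethical heat map or radar interface could surface.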
Human-IST collaborators: Edy Portmann, Luis Terán, Narek Andreasyan