Virtue Signaling vs Virtue Ethics

Ethics in Action

Virtue ethics is an approach to ethics that treats the concept of moral virtue as central. It is usually contrasted with the two other major approaches in ethics, consequentialism and deontology, which treat the goodness of an action’s outcomes (consequentialism) or conformity to moral duties and rules (deontology) as central. Virtue ethics focuses on the character of the agent rather than on their actions or their adherence to rules: an individual’s ethical behavior should be measured by character traits such as honesty, courage, and wisdom, rather than by the consequences of their actions or the particular duties they are obliged to obey.

Virtue ethics is based on the idea that we acquire virtue through practice and habituation. By practicing being honest, brave, just, generous, and so on, a person develops an honorable and moral character. Virtue ethics also emphasizes the role of practical wisdom, or phronesis: the ability to discern the right course of action in a given situation. Practical wisdom involves both intellectual and emotional capacities and requires sensitivity to context and circumstances. Virtue ethics traces its origins to ancient Greek philosophy, especially to Plato and Aristotle, who identified various virtues and vices and discussed how they relate to human flourishing, or eudaimonia.

Image credit: Adobe Stock

Virtue Signaling

Virtue signaling is a term, often used pejoratively, for the public expression of opinions or sentiments intended to demonstrate one’s good character, social conscience, or the moral correctness of one’s position on a particular issue. The term usually implies that the person expressing such opinions is doing so insincerely or hypocritically, without actually being committed to the cause or issue they claim to support, and that the display is a form of self-glorification or self-righteousness rather than a genuine expression of moral concern or conviction. Virtue signaling is often associated with social media platforms, where people can easily share their views on various topics and receive validation or criticism from others. Some examples of virtue signaling: expressing outrage over a social injustice without taking any concrete action to address it; posting a picture of oneself with a marginalized group or a charitable cause without having any meaningful involvement with either; or displaying symbols or slogans that indicate alignment with a certain political or ideological movement without understanding its implications or consequences.

The main difference between virtue ethics and virtue signaling is that virtue ethics is a normative ethical theory that aims to provide guidance for how to live a good life and cultivate moral character, while virtue signaling is a descriptive term that criticizes the superficial or self-serving display of moral attitudes or opinions. Virtue ethics is concerned with the internal qualities of the agent, such as their motives, intentions, emotions, and reasoning, while virtue signaling is concerned with the external appearance of the agent, such as their words, actions, and symbols. Virtue ethics requires consistent practice and habituation of virtues, while virtue signaling does not require any effort or sacrifice on the part of the agent. Virtue ethics values practical wisdom and contextual sensitivity, while virtue signaling disregards the complexity and diversity of moral situations. In short, virtue ethics is about being virtuous, while virtue signaling is about appearing virtuous.

Greenwashing is a form of advertising or marketing spin in which green PR and green marketing are deceptively used to persuade the public that an organization’s products, aims, and policies are environmentally friendly, or have a greater positive environmental impact, than they actually do. It involves making unsubstantiated claims to deceive consumers, and it may also occur when a company emphasizes the sustainable aspects of a product to overshadow its involvement in environmentally damaging practices. Greenwashing is a play on the term “whitewashing,” which means using false information (misinformation) to intentionally hide wrongdoing, error, or an unpleasant situation in an attempt to make it seem less bad than it is.

Greenwashing is an example of virtue signaling: a public display of good character or social conscience, made to demonstrate the moral correctness of one’s position, that implies more commitment to the cause than actually exists and serves self-glorification rather than genuine moral conviction.

Greenwashing can be used by individuals, companies, and governments to appear more virtuous than they actually are, and to gain favour with consumers, investors, voters, or other stakeholders who are concerned about environmental issues. However, greenwashing can be seen as a dishonest and manipulative practice that undermines the credibility and trustworthiness of the entity and its products, services, or policies. Greenwashing can also have negative consequences for the environment and society: it may mislead people into buying products that are harmful or wasteful, investing in companies that pollute or exploit, or supporting policies that are ineffective or detrimental, and it may discourage them from taking more effective actions to reduce their environmental impact. Greenwashing can also create confusion and skepticism among people about the genuine environmental claims and initiatives of other entities. Some examples of greenwashing by individuals, companies and governments are:

Individuals: Some people may engage in greenwashing by buying products that have green labels or packaging, but are not actually eco-friendly. They may also post pictures or messages on social media that show their support for environmental causes, but do not reflect their actual lifestyle choices or actions.

Companies: Some companies may engage in greenwashing by renaming, rebranding or repackaging their products to make them seem more natural, organic or sustainable. They may also launch PR campaigns or commercials that portray them as eco-friendly or socially responsible (such as HSBC’s climate advertisements or IKEA’s links to illegal logging) but that do not match their actual practices or performance.

Governments: Some governments may engage in greenwashing by announcing policies or initiatives that claim to address environmental issues, but are insufficient, ineffective or counterproductive. They may also use green rhetoric or symbols to appeal to voters or other countries, but not follow through with concrete actions or commitments.

Greenwashing involves making false or exaggerated claims about the environmental friendliness or impact of an entity or its products, services or policies. It is a deceptive and unethical practice that can harm both the environment and the people who are misled by it.

Are you a technical, business or legal professional who works with technology adoption? Do you want to learn how to apply ethical frameworks and principles to your technology work and decision-making, understand the legal implications and challenges of new technologies and old laws, and navigate the complex and dynamic environment of technology innovation and regulation? If so, you need to check out this new book: Ethics, Law and Technology: Navigating Technology Adoption Challenges. This book is a practical guide for professionals who want to learn from the experts and stay updated in this fast-changing and exciting field.

DAOs and ADSs

Distributed Computing Paradigms

Modern public and private infrastructure is increasingly large and complex regardless of the industry within which it is deployed – communications, power, transportation, manufacturing, finance, etc. Large-scale infrastructure is increasingly a significant source of data on both its own operations and the society dependent on it. Big data from infrastructure can thus supply a variety of artificial and augmented intelligence systems. Large, complex infrastructures typically require large, complex organizations to operate and maintain them. Such organizations, though, are typically centralized and constrained by bureaucratic policies and procedures. Recent organizational innovations include entity structures such as Decentralized Autonomous Organizations (DAOs).

Decentralized Autonomous Organization

DAOs are emerging that may provide better transparency, autonomy, and decentralization for the manner in which these large-scale infrastructures are operated. Autonomous Decentralized Systems (ADSs) are analogous to living systems, with autonomous and decentralized subsystems, and have been developed to support the requirements of modern infrastructure. DAOs are digital-first entities that share similarities with a traditional company structure but have some additional features, such as the automatic enforcement of operating rules via smart contracts. DAOs are more focused on governance and decision-making, while ADSs are more focused on the operation of large-scale infrastructure.
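The "automatic enforcement of operating rules" mentioned above can be illustrated with a toy sketch. This is ordinary Python, not an actual smart-contract language, and all names here (Proposal, QUORUM) are invented for illustration: the point is only that a governance rule such as a voting quorum is checked by code rather than by an administrator.

```python
# Toy illustration (not a real smart contract): a quorum rule enforced in code.
QUORUM = 0.5  # fraction of members that must vote "yes" for a proposal to pass

class Proposal:
    def __init__(self, description, members):
        self.description = description
        self.members = set(members)
        self.yes_votes = set()

    def vote_yes(self, member):
        # Only registered members may vote; others are ignored.
        if member in self.members:
            self.yes_votes.add(member)

    def passes(self):
        # The rule executes automatically; no board or administrator decides.
        return len(self.yes_votes) / len(self.members) > QUORUM

p = Proposal("Upgrade node software", members=["a", "b", "c", "d"])
p.vote_yes("a")
p.vote_yes("b")
p.vote_yes("c")
print(p.passes())  # 3/4 = 0.75 > 0.5 -> True
```

In a real DAO the analogous rule would live on-chain and its execution would be validated by the network's consensus protocol rather than by a single interpreter.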

A comparison of ADSs and DAOs:

Timeline: ADS work started around 1993; DAO work started around 2013.
Initial applications: ADSs targeted large-scale infrastructure; DAOs targeted digital asset administration (e.g., cryptocurrencies).
Sense of autonomy: for ADSs, maintaining automated operation despite environmental changes; for DAOs, allocating (financial) liability and responsibility when failures occur.
Decentralization: ADSs have physically distributed infrastructure; DAOs have physically distributed stakeholders.
Role of artificial intelligence: in ADSs, augmented intelligence for system operators; in DAOs, adjunct off-chain processes.
Reliability mechanisms: in ADSs, various, depending on system implementation; in DAOs, consensus protocols and cryptographic hashes.
Focus: an ADS is a system composed of infrastructure; a DAO is an organization (legal entity) automating management and administration.
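The cryptographic hashes listed among DAO reliability mechanisms can be illustrated with a minimal, self-contained sketch: each record commits to its predecessor through a hash, so tampering with any earlier record becomes detectable. This uses only Python's standard library and does not follow any particular blockchain's block format.

```python
# Minimal hash-chain sketch illustrating tamper-evidence, one of the
# reliability mechanisms DAOs rely on. Purely illustrative.
import hashlib
import json

def make_block(data, prev_hash):
    # Hash covers the payload and the previous block's hash.
    body = {"data": data, "prev_hash": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    body["hash"] = digest
    return body

def chain_is_valid(chain):
    for i, block in enumerate(chain):
        expected = hashlib.sha256(
            json.dumps({"data": block["data"], "prev_hash": block["prev_hash"]},
                       sort_keys=True).encode()
        ).hexdigest()
        if block["hash"] != expected:
            return False  # block contents were altered after hashing
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False  # link to the previous block is broken
    return True

genesis = make_block("genesis", prev_hash="0" * 64)
chain = [genesis, make_block("transfer 10 tokens", genesis["hash"])]
print(chain_is_valid(chain))   # True
chain[0]["data"] = "transfer 1000 tokens"  # tamper with history
print(chain_is_valid(chain))   # False
```

Real systems combine this with consensus protocols so that many independent nodes agree on the same chain, which is what makes the tamper-evidence meaningful across distributed stakeholders.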


Both DAOs and ADSs are decentralized and autonomous, but in somewhat different respects. DAOs provide an opportunity for decentralized, autonomous governance that may be a useful feature for future ADS systems to consider. For more details on this topic refer to Wright, S. A. (2023, March). DAOs & ADSs. In 2023 IEEE 15th International Symposium on Autonomous Decentralized System (ISADS) (pp. 1-6). IEEE.

Regulation of Technology

Ethics in Action

Technology adoption carries with it a number of ethical risks. A society under the rule of law often creates legal regulations to constrain technology adoption. Regulations may be developed for a variety of policy purposes, but from an ethical perspective regulations can be categorized in terms of the ethical harms they seek to avoid and the ethical virtues that they seek to encourage.

image credit: Adobe Stock

Regulation of Technology

The regulation of new technology seeks to promote a number of ethical virtues. One of the main virtues is that regulations can help to ensure that new technologies are developed and used in ways that are safe and beneficial to individuals and society as a whole. For example, regulations can help to ensure that new technologies do not pose a risk to public health or safety.

Another ethical virtue associated with the regulation of new technology is the potential for regulations to promote social justice and equality. For example, regulations can help to ensure that new technologies are developed in ways that are inclusive and that benefit everyone.

Finally, regulation can help to promote environmental sustainability by ensuring that new technologies are developed in ways that do not harm the environment. By promoting sustainable development, regulations can help to ensure that future generations are able to enjoy a healthy and prosperous planet.

The regulation of new technology seeks to avoid a number of ethical harms. One of the main concerns is that new technologies may be developed and used in ways that are harmful to individuals or society as a whole. For example, new technologies may be used to invade people’s privacy or to discriminate against certain groups.

Another ethical harm that regulation seeks to avoid is the potential for new technologies to exacerbate existing social inequalities. For example, new technologies may create new opportunities for some people while leaving others behind. It is important to ensure that new technologies are developed in ways that are inclusive and that benefit everyone.

Finally, regulation seeks to avoid the potential for new technologies to be used in ways that are harmful to the environment. For example, new technologies may be developed that contribute to climate change or that pollute the environment. It is important to ensure that new technologies are developed in ways that are sustainable and that do not harm the environment.

Overall, the regulation of new technology is an important issue that requires careful consideration of many different factors. By taking an ethical approach to regulation, we can ensure that new technologies are developed in ways that are beneficial to society as a whole.


6G Decentralization

Blockchains in 6G

The Internet’s original design objectives included sufficient decentralization to ensure the survivability of packet network services despite failures (or some forms of censorship) of intervening nodes. Although the Internet is decentralized, a small number of major international technology companies now dominate our daily online activities. These businesses increasingly resemble old-style monopolies, which is unhealthy given the significance of the Internet in daily life. As web services become more central to daily activities, Web 3.0 seeks to use decentralization (and blockchains in particular) to disrupt the digital sovereignty of proprietary Web 2.0 platforms. Web 3.0 decentralization focuses on maintaining individual control and ownership of data and other digital assets. Decentralization also remains a tool for economic regulation to deter market dominance or antitrust monopoly influences that can distort online services, and most countries provide legal remedies against market monopolies. While many blockchain advocates assume their blockchains provide decentralization, the reality may differ from the ideal. Blockchains have already been proposed or deployed in many domains, from finance to healthcare. While public network infrastructures like 5G have been evolving native support for a greater variety of services, decentralization has not been a major focus. 5G systems are in the stage of early deployments in several countries, while 6G requirements are being gathered for infrastructure to be deployed in the 2030s.

Image credit: Adobe Stock

6G Decentralization

There is emerging academic literature articulating potential requirements for 6G infrastructure. Data sovereignty is emerging as a topic of national interest in managing the rise of proprietary platforms capturing citizens’ data. The Internet is a transnational communications platform that challenges notions of jurisdiction. The development of decentralized blockchains stretches these notions even further, with consensus occurring through nodes distributed across multiple jurisdictions. An individual’s control of their own identity is fundamental to self-determination and human rights. Identity theft has emerged as a significant threat in current communications networks. The tension between privacy and surveillance has long been discussed, and individual privacy becomes increasingly problematic given the growing deployments of IoT. Cybersecurity has become such a pervasive topic across the breadth of society that there are even cybersecurity awareness programs for children. Technology-centric 6G developments risk building an infrastructure that does not meet societal needs, wasting considerable amounts of scarce human and economic capital solving the wrong problems.

Decentralization is a deceptively simple term with a longer history in politics and management/organizational theory than in communication networks. Web 3.0 decentralization can be seen as a response to the demand for greater personalization and control, delivered through decentralization protocols. Deployments of decentralized networks are relatively recent, and the processes and technology for monitoring them are also relatively immature. Algorithms and protocols embodied in communications infrastructures and services also implement and enforce other social objectives and requirements.

5G deployments have introduced the public to a number of new communications services, e.g., those targeting machine-to-machine (M2M) communication, as well as new communication modalities, e.g., Augmented and Virtual Reality (AR/VR) technologies for metaverse applications such as digital twins. The plethora of different services currently available or enabled by 5G has already spawned a huge variety of business models, which significantly increases the number of stakeholders affected by 6G services. Safe, reliable decentralized technologies such as blockchain can be expected to play a significant role in delivering 6G services.

6G is still within the requirements-gathering phase, with deployments targeted for the 2030s. Proposals are emerging that place decentralization squarely within the scope of 6G. Decentralization remains a tool for economic regulation to deter market dominance or antitrust monopoly influences that can distort online services. 6G developers have the opportunity to focus on solving real human needs rather than extrapolating technology possibilities. Decentralization has demonstrated utility in a broad range of fields, from politics and organization theory to networks. Algorithmic approaches to implement decentralization objectives already exist and are being further industrialized.

For additional details refer to Wright, S. A. (2022, November). 6G Decentralization. In 2022 International Conference on Electrical and Computing Technologies and Applications (ICECTA) (pp. 309-312). IEEE.

Ethical issues with ML

Ethics in Action

Machine learning (ML) is a branch of artificial intelligence technology that enables computers to learn from data and make predictions or decisions. However, ML technology can also raise ethical issues and challenges that affect individuals and society. ML can raise more ethical issues than other forms of AI because:

ML is more pervasive and ubiquitous: ML can be applied to a wide range of domains and contexts, such as health care, education, finance, justice, security, entertainment, and social media. This means that ML can affect more aspects of human life and society than other forms of AI that are more specialized or limited in scope.

ML is more autonomous and adaptive: ML can learn from data and feedback without explicit human guidance or intervention. This autonomy means that ML can evolve and change over time, potentially in unpredictable or unintended ways. This also means that ML can have more agency and influence over human actions and outcomes than other forms of AI that are more controlled or fixed.

ML is more complex and opaque: ML can produce complex and opaque models and systems that are difficult to understand and interpret, even by experts. This means that ML can have more uncertainty and ambiguity about its processes and outcomes than other forms of AI that are simpler or more transparent.

ML is more data-driven and data-dependent: ML depends on the quality and quantity of the data it is trained on and uses for prediction or decision making. This means that ML can inherit or amplify biases and errors that exist in the data, algorithms, or human judgments that influence its development and deployment. This also means that ML can create or raise new ethical issues related to data collection, processing, analysis, and use.
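The data-dependence concern above can be made concrete with one simple check that is often applied to a model's decisions: the demographic parity difference, i.e., the gap in positive-decision rates between two groups. The sketch below uses invented data purely for illustration; real fairness auditing involves many metrics and careful interpretation of context.

```python
# Hand-rolled demographic parity check on a model's decisions.
# The (group, decision) data below is invented for illustration only.
decisions = [
    ("A", 1), ("A", 1), ("A", 0), ("A", 1),
    ("B", 1), ("B", 0), ("B", 0), ("B", 0),
]

def positive_rate(group):
    # Fraction of group members who received a positive decision.
    outcomes = [d for g, d in decisions if g == group]
    return sum(outcomes) / len(outcomes)

gap = abs(positive_rate("A") - positive_rate("B"))
print(gap)  # 0.75 - 0.25 = 0.5, a large gap worth investigating
```

A nonzero gap does not by itself prove discrimination, but a large one signals that the training data or the model's use of it deserves scrutiny.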

Image credit: Adobe Stock

Ethical issues with ML

Here are some of the top 10 ethical issues with the use of ML technology:

Privacy and surveillance: ML can collect, process, and analyze large amounts of personal and sensitive data, such as biometric, health, financial, or behavioral data. This can pose risks to the privacy and security of individuals and groups, especially if the data is used without their consent or knowledge, or if it is accessed or misused by unauthorized or malicious parties. Moreover, ML can enable mass surveillance and tracking of individuals and populations, potentially infringing on their civil liberties and human rights.

Transparency and explainability: ML can produce complex and opaque models and systems that are difficult to understand and interpret, even by experts. This can limit the transparency and accountability of ML processes and outcomes, especially if they are used for high-stakes or sensitive decisions that affect people’s lives, such as health care, education, employment, or justice. Moreover, ML can lack explainability and justification for its predictions or recommendations, making it hard to verify its validity, reliability, and fairness.

Bias and discrimination: ML can inherit or amplify biases and prejudices that exist in the data, algorithms, or human judgments that influence its development and deployment. This can result in unfair or discriminatory outcomes that disadvantage certain groups or individuals based on their characteristics, such as race, gender, age, disability, or sexual orientation. Moreover, ML can create or reinforce stereotypes and social norms that may harm the diversity and inclusion of individuals and society.

Autonomy and agency: ML can influence or interfere with the autonomy and agency of individuals and groups, especially if it is used to manipulate, persuade, coerce, or control their behavior, preferences, opinions, or emotions. Moreover, ML can affect the identity and dignity of individuals and groups, especially if it is used to replace, augment, or enhance their cognitive or physical abilities.

Responsibility and liability: ML can raise questions and challenges about the responsibility and liability for the actions and consequences of ML models and systems. This can involve multiple actors and stakeholders, such as developers, users, providers, regulators, researchers, educators, beneficiaries, victims, or critics. Moreover, ML can create moral dilemmas and trade-offs that may conflict with ethical values and principles.

Trust and acceptance: ML can affect the trust and acceptance of individuals and society towards ML models and systems. This can depend on factors such as the quality, accuracy, reliability, fairness, transparency, explainability, usability, security, privacy of ML models and systems. Moreover, trust and acceptance can depend on factors such as the awareness, education, communication, participation, representation, and empowerment of individuals and society regarding ML models and systems.

Beneficence and non-maleficence: ML can have positive or negative impacts on the well being and welfare of individuals and society. This can involve aspects such as health, safety, education, employment, justice, environment, culture, or democracy. Moreover, ML can have intended or unintended consequences that may be beneficial or harmful to individuals and society, both in the short-term and in the long-term.

Justice and fairness: ML can affect the justice and fairness of individuals and society. This can involve aspects such as equality, equity, diversity, inclusion, accessibility, accountability, redress, or participation. Moreover, ML can create or exacerbate inequalities or injustices that may affect certain groups or individuals more than others, such as minorities, vulnerable, or marginalized populations.

Human dignity and human rights: ML can affect the human dignity and human rights of individuals and society. This can involve aspects such as respect, recognition, autonomy, agency, identity, privacy, security, freedom, or democracy. Moreover, ML can violate or undermine human dignity and human rights if it is used for malicious or unethical purposes, such as exploitation, discrimination, manipulation, coercion, control, or oppression.

Human values and ethics: ML can reflect or challenge human values and ethics of individuals and society. This can involve aspects such as morality, integrity, honesty, compassion, empathy, solidarity or altruism. Moreover, ML can create or raise new or emerging values and ethics that may not be well-defined or well-understood, such as trustworthiness, explainability, responsibility, or sustainability.


Why Decentralize Deep Learning?

Deep learning, big data, IoT, and blockchain are individually important research topics in today’s technology, and their combination has the potential to generate additional synergy. Such synergy could enable decentralized, intelligent automated applications to achieve safety and security and to optimize performance and economy. These technologies all rely on infrastructure capabilities in computing and communications that are increasingly decentralized. Edge computing deployments and architectures are commencing with 5G and are expected to accelerate in 6G. Existing application domains like healthcare and finance are starting to explore the integration of these technologies. Newly emerging application areas such as the metaverse may well require native support for decentralized deep learning to achieve their potential. But the path of new technology development is never smooth. New challenges have been identified, and additional architectural frameworks have been developed to overcome some of these issues. Decentralizing deep learning enables increased scale for AI implementations, but also enables improvements in privacy and trustworthiness. The plethora of literature emerging on decentralized deep learning prompts the need for rational criteria to support design decisions on when to utilize decentralized deep learning.


Why Decentralize Deep Learning?

Decentralization via blockchains and AI via deep learning are major research and development trends in modern technology that converge in decentralized deep learning (DDL). The enthusiasm of researchers and technologists for new technologies has led to cycles of unrealistic expectations. The development of suitable criteria to provide a rationale for when DDL should be considered in designing solutions for commercial implementation should help reduce unrealistic applications. Fig. 1 provides an approach to synthesizing such criteria. It is derived from the discussion in section III, which formulates some preliminary questions to help resolve the applicability of DDL as a design solution. Additional criteria could then be developed to extend design decisions further, e.g., into selecting particular blockchain or DL algorithms and features, or based on data availability or quality.

The initial inquiry asks whether the objectives of the proposed DDL include an explicit requirement for decentralization. For example, the proposed DDL might be intended for application in an improved type of blockchain, which is inherently decentralized.

The second inquiry asks whether there are multiple stakeholders. An internal implementation within a single stakeholder is less likely to require decentralization. Multiple stakeholders collaborating are more likely to benefit from a trusted, decentralized implementation.

The third inquiry asks whether the data or the AI algorithm is already decentralized. Where a portion of the design prerequisites is already decentralized, DDL may be an efficient solution. Where none of the design prerequisites are decentralized, adopting DDL may require a significant transformation of existing processes, implying greater implementation costs.

The fourth inquiry asks whether decentralization is required for at least part of the life cycle. If decentralization can be constrained to a reduced portion of the life cycle, this may reduce the implementation costs. Conversely, if there is no part of the life cycle where decentralization makes sense, then DDL does not seem a relevant design solution.
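Purely as an illustration, the four inquiries above can be sketched as a screening function. The questions come from the text; the function form, the scoring, and the threshold are our own assumptions and are not part of the cited paper.

```python
# Illustrative encoding of the four DDL screening questions.
# Scoring and threshold are arbitrary choices for the sketch.

def ddl_worth_considering(explicit_decentralization_requirement,
                          multiple_stakeholders,
                          prerequisites_already_decentralized,
                          decentralization_in_some_lifecycle_phase):
    # Fourth inquiry acts as a gate: if no part of the life cycle
    # benefits from decentralization, DDL is not a relevant solution.
    if not decentralization_in_some_lifecycle_phase:
        return False
    # The remaining inquiries each add weight to the rationale.
    score = sum([explicit_decentralization_requirement,
                 multiple_stakeholders,
                 prerequisites_already_decentralized])
    return score >= 2  # threshold chosen arbitrarily for illustration

# Example: multiple stakeholders with already-decentralized data,
# and decentralization useful during at least one life-cycle phase.
print(ddl_worth_considering(False, True, True, True))  # True
```

In practice the inquiries would feed a design discussion rather than a boolean, but the gate-then-weigh structure mirrors how the fourth inquiry can rule DDL out on its own.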

For more information on this topic refer to Wright, S. A. (2023). Why Decentralize Deep Learning? In Workshop on: Decentralized Deep Learning: New Trends and Advanced Technologies (DDL2023) held in conjunction with 2023 IEEE 15th International Symposium on Autonomous Decentralized System (ISADS) (pp. 1-6). IEEE.

The Ethics of Technology Adoption

Ethics in Action

Technology is the application of scientific knowledge and skills to produce goods and services that meet human needs and wants. Technology adoption or utilization decisions create uncertainties that require consideration from an ethical perspective because they involve complex and dynamic interactions between human values and interests, technological capabilities and limitations, and social and environmental contexts and consequences. Technology adoption or utilization decisions are not merely technical or economic choices, but also moral choices that reflect and affect our individual and collective well-being, rights, and responsibilities. Ethics is relevant to the adoption of new technology at the individual, organizational, and societal levels because it helps us evaluate the impacts and implications of technology on human values and interests. Technology adoption is not a neutral process, but rather a complex and dynamic one that involves multiple stakeholders, trade-offs, risks, and benefits.

Image credit: Adobe Stock

The Ethics of Technology Adoption

On the other hand, technology can challenge and change our ethical understanding and reasoning by introducing new possibilities, dilemmas, and consequences that were not previously considered. For example, technology can raise questions about the moral status of non-human entities, such as animals, plants, robots, and digital agents, the responsibility and accountability of human and machine agents, and the impact of technology on human dignity, autonomy, and well-being.

The Ethics of Technology Adoption at an individual level

At the individual level, technology adoption or utilization decisions create uncertainties about how to use technology in ways that are consistent with our personal values and goals, and that do not harm or infringe on the rights of others. For example, we may face uncertainties about how to balance our privacy and security with our convenience and connectivity, how to manage our digital identity and reputation, and how to cope with the psychological and emotional effects of technology use. At this level, ethics can help us make informed and responsible choices about how to use technology in our personal and professional lives.

The Ethics of Technology Adoption at an organizational level

At the organizational level, technology adoption or utilization decisions create uncertainties about how to design and implement technology in ways that are aligned with our organizational mission, vision, and values, and that do not harm or exploit our stakeholders or society at large. When planning how a new technology will scale, bear in mind that what is ethically acceptable at an individual scale may become problematic at a larger social scale. For example, we may face uncertainties about how to ensure the accessibility, inclusivity, fairness, and transparency of our technology, how to protect the data and information of our customers, employees, and partners, and how to mitigate the risks and liabilities of our technology. This is an issue for all companies facing technology adoption decisions, not just those developing new technologies.

The Ethics of Technology Adoption at a societal level

At the societal level, technology adoption or utilization decisions create uncertainties about how to address the broader social and environmental challenges and opportunities that technology creates. For example, we may face uncertainties about how to promote the common good, foster social justice and human rights, and protect the planet and its resources. At this level, ethics can help us weigh those challenges and opportunities and decide which to pursue.

Conclusion

Technology adoption or utilization decisions require consideration from an ethical perspective because they have significant moral implications for ourselves, others, and future generations. By applying ethical principles and values to our technology decisions, we can reduce uncertainties, resolve dilemmas, and enhance trust and innovation. Ethics and technology are not separate domains, but intertwined aspects of human life that require constant reflection, dialogue, and evaluation. Applying ethical thinking to the practical concerns of technology helps ensure that our technological systems and practices align with our moral values and goals.

Are you a technical, business or legal professional who works with technology adoption? Do you want to learn how to apply ethical frameworks and principles to your technology work and decision-making, understand the legal implications and challenges of new technologies and old laws, and navigate the complex and dynamic environment of technology innovation and regulation? If so, you need to check out this new book: Ethics, Law and Technology: Navigating Technology Adoption Challenges. This book is a practical guide for professionals who want to learn from the experts and stay updated in this fast-changing and exciting field.

Blockchain Enabled Decentralized Network Management in 6G

The Internet has evolved from a fault-tolerant infrastructure into one that supports both social networking and a semantic web for machine users. Trust in the data, and in the infrastructure, has become increasingly important as cyber threats and privacy concerns rise. Communication services are increasingly delivered through virtualized, software-defined infrastructures, often as overlays spanning multiple infrastructure providers. The growing recognition that services must be not only fault-tolerant but also censorship-resistant, even as an increasing variety of services is delivered through a complex ecosystem of service providers, drives the need for decentralized solutions such as blockchains. Service providers have traditionally relied on contractual arrangements to deliver end-to-end services globally. Some of those contract terms can now be automated as smart contracts using blockchain technology.
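As an illustration, consider a simple availability clause of the kind found in service level agreements between providers. The sketch below is a plain Python model, not on-chain code; the names and numbers (`SLATerm`, `availability_target`, `credit_rate`) are hypothetical, chosen only to show the kind of mechanical settlement logic a smart contract could automate.

```python
# Illustrative sketch only: an off-chain Python model of an SLA availability
# clause of the sort a smart contract could automate between providers.
# All names and thresholds here are hypothetical, not from any real contract.
from dataclasses import dataclass

@dataclass
class SLATerm:
    provider: str
    availability_target: float  # e.g. 0.999 means "three nines"
    monthly_fee: float
    credit_rate: float          # fraction of fee credited per 0.001 shortfall

    def settle(self, measured_availability: float) -> float:
        """Return the fee owed after applying any availability credit.

        A deployed smart contract would read measured_availability from an
        agreed oracle and transfer funds automatically; here we just compute.
        """
        shortfall = max(0.0, self.availability_target - measured_availability)
        credit = self.monthly_fee * self.credit_rate * (shortfall / 0.001)
        return self.monthly_fee - min(self.monthly_fee, credit)

term = SLATerm(provider="TransitCo", availability_target=0.999,
               monthly_fee=10_000.0, credit_rate=0.05)
print(term.settle(0.999))  # target met: full fee owed
print(term.settle(0.997))  # 0.002 shortfall: fee reduced by the credit
```

A real deployment would also have to agree on who measures availability and how disputes are handled; the point here is only that such terms are mechanical enough to encode.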

Image Credit: Adobe Stock

Blockchain network management

This is a complex distributed environment with multiple actors and resources. The trend from universal service toward service fragmentation, already visible in the growing number of IoT deployments using blockchains, is expected to continue in 6G. Virtualization of the infrastructure through network functions virtualization (NFV) and software-defined networking (SDN) has made the concepts of network overlays, network underlays, and network slices prevalent. In the 6G era, service providers will likely need to provide network management service assurance beyond availability, covering aspects such as identity, trustworthiness, and censorship resistance.

Blockchains are proposed not only for use at the business services level but also in the operation of the network infrastructure, including dynamic spectrum management, SDN and resource management, metering, and IoT services. Traditional approaches to network management have relied on client–server protocols and centralized architectures. The range of services offered over 6G wireless that need to be managed is expected to be larger than the variety of services over existing networks. Scaling delivery may also require additional partners to provide the appropriate market coverage. Management of 6G services must support more complex services in a more complex commercial environment, yet perform effectively as the services and infrastructure scale.

Digital transformation at network operators, and at many of their customers, has led to a software-defined infrastructure for communication services based on virtualized network functions. Decentralized approaches to network management have gained increasing attention from researchers. Operators' growing need for mechanisms that assure trust in data, operations, and commercial transactions, while maintaining business continuity through software and equipment failures and cyberattacks, provides further motivation for blockchain-based approaches. Architectural trends toward autonomy, zero touch, and zero trust are expected to continue in response to networking requirements, and blockchain infrastructures offer one approach to addressing some of them.
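The trust-in-data motivation can be made concrete with a toy example. The following Python sketch is illustrative only; it omits consensus, signatures, and distribution, all of which a real blockchain requires, and shows just the hash-chaining idea that makes a shared log of network management events tamper-evident.

```python
# Toy hash chain (not a real blockchain): each entry's hash covers its own
# record plus the previous entry's hash, so any retroactive edit to an
# earlier record invalidates verification of the chain from that point on.
import hashlib
import json

def append_record(chain: list, record: dict) -> None:
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps(record, sort_keys=True)  # canonical serialization
    entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    chain.append({"record": record, "prev_hash": prev_hash, "hash": entry_hash})

def verify_chain(chain: list) -> bool:
    prev_hash = "0" * 64
    for entry in chain:
        payload = json.dumps(entry["record"], sort_keys=True)
        expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        if entry["prev_hash"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_record(log, {"event": "slice_created", "slice_id": "s1"})
append_record(log, {"event": "sla_breach", "slice_id": "s1"})
print(verify_chain(log))             # True: untampered
log[0]["record"]["slice_id"] = "s2"  # retroactive edit...
print(verify_chain(log))             # False: tampering detected
```

Because each hash binds a record to its predecessor, editing any past record breaks verification for the rest of the chain, which is the tamper-evidence property operators want for audit trails spanning administrative domains.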

Blockchain-enabled decentralized network management is a disruptive change to existing network management processes. The scope and scale of the 6G network management challenge support the need for this type of architecture. Both technical and commercial or organizational challenges remain before these technologies see wider adoption. Nevertheless, blockchain-enabled decentralized network management provides a promising framework for addressing the operational and administrative challenges expected in 6G communications infrastructure.

For further details refer to Wright, S.A. (2022). Blockchain-Enabled Decentralized Network Management in 6G. In: Dutta Borah, M., Singh, P., Deka, G.C. (eds) AI and Blockchain Technology in 6G Wireless Network. Blockchain Technologies. Springer, Singapore. https://doi.org/10.1007/978-981-19-2868-0_3

Learning to Solve the Right Problem

5 Benefits for Product Managers

What’s the worst nightmare for a product manager? They’re deep into building a solution when a realization strikes like lightning: “Wait a minute… what problem am I actually solving? How can I be sure I’m solving the right problem?”

Or when someone asks this question in a meeting — and it puts everything on hold.

You see, solving problems is innate to product managers. The team, the department, the product, and the company as a whole succeed when product professionals solve problems that matter. But the secret that makes a product manager great is identifying and solving the right problems first.

(We’ll get to that later in the post.)

First, let’s have a look at the 5 powerful benefits you’ll gain by leveling up your knowledge — and learning how to think outside the box to solve the right problem!

1.   You’ll make your workflow smoother.

Obstacles can hinder your workflow. When you solve the right problem, you overcome the right obstacle at the right time and keep your projects running smoothly. More often than not, you’re tackling a bunch of complex problems at the same time. By learning to solve the sudden, unexpected problems that matter most first, you pave the way to a smoother workflow.

2.   You’ll become a better team leader.

When you solve the right problem, you become a better problem solver and a better team leader. You’ll be able to keep your team cohesive and connected. Since you’re the first responder to every new problem, how you approach that problem matters most. If you do that job the right way, your team will stay worry-free and confident enough to put their trust in you. As a result, you’ll position yourself as a credible, trustworthy leader who knows how to handle incredibly complex problems at any given time.

3.   You’ll finish more work in less time.

When you solve the right problem, you save time. A single problem, even one as mundane as renewing a software subscription, can put a ‘full stop’ to your work progress. Everyone just stops. All that time is wasted, or at the very least far less productive. But when you know which problems to prioritize first, how to identify them proactively, and how to solve them before they hurt your productivity too much, you’ll get more done in less time.

4.   You’ll be delivering on-time projects.

On-time projects boost client satisfaction, your department’s success, and your company’s success. But that’s only possible when you’re fast, efficient, and instinctive about solving the right problems at the right time.

5.   Happy clients/customers, happy You.

This is the most important benefit. At the end of the day, what matters is how happy and satisfied you are with your work progress. As a product manager, you’re always at the forefront of every new obstacle. At times, it can leave you overwhelmed trying to understand exactly which problems you’re facing, let alone how to solve them. When you master your problem-solving skills, you become a confident, competent, satisfied, and valuable team player for the company.

It’s Time.

Which problems should you solve first? The important ones, of course.

But how do you know which one is the most important one? Learn how to solve the right problem!

Instead of rushing, I want you to pause for a second.

Breathe.

What usually happens is that managers rush into problems. That happens because we’ve been trained to respond that way.

Think about your school days. Your questions were well-framed and well-stated; all you needed to do was figure out the solutions. Think about your childhood.

Your parents made it very clear what you were doing wrong – and even guided you on how to fix it.

Think about your mentor. They paved the way by making you well aware of the hidden problems that were sabotaging your success.

The same pattern is continued by most managers in their professional life as well.

They just rush in instinctively, without creating better problem statements first.

Without actually stopping and figuring out the root cause of the problem.

They jump into ‘solution space’ too quickly, instead of spending enough time exploring the ‘problem space.’

If you’re also a victim of that…stop.

Because when you’re in business, problems and obstacles can be complex.

Often extremely nerve-racking, coming from all different root causes.

That leads to unsolved clusters of problems, or worse, to working on the wrong problems at the wrong time.

Before anything, simplify what the problem actually is — and whether it’s the right one to focus on first.

What you should do instead is to…pause…wear your detective hat…look at everything with a fish-eye lens…

And craft better problem statements.

Get a hold of yourself — and see the problem as a whole using my Power Perspective method.

This is a brand new course I created to help product managers, entrepreneurs, and business professionals.

This course will teach you a compact, systematic approach to craft better problem statements using the unique power of perspective — so you can solve your best problems.

You’ll be able to create solutions that are effective and thorough.

Learn More about this course right here.

Virtue Ethics

Ethics in Action

Virtue ethics is a normative ethical theory that focuses on the character of the moral agent rather than the rightness or wrongness of an action. It is one of the three major approaches in ethics, along with consequentialism and deontology. Virtue ethics is based on the idea that we should aim to cultivate virtuous habits that will help us act in accordance with moral values. The origin of virtue ethics can be traced back to ancient Greek philosophy, especially the works of Plato and Aristotle. They argued that the goal of human life is to achieve eudaimonia, which is often translated as happiness or well-being, but also implies a sense of fulfillment and flourishing. To attain eudaimonia, one must develop and exercise the virtues, which are excellences of character that enable us to function well as rational and social beings. Some examples of virtues are courage, justice, wisdom, honesty, generosity, and loyalty. These are not fixed rules or principles, but rather flexible and context-dependent dispositions that guide our actions, thoughts, and feelings in various situations. Virtues are acquired through practice and habituation, not through following commands or calculating consequences. As Aristotle famously said, “We become just by doing just acts, temperate by doing temperate acts, brave by doing brave acts.”

Image Credit: Adobe Stock


One of the distinctive features of virtue ethics is that it emphasizes the role of practical wisdom (phronesis) in moral decision-making. Practical wisdom is the ability to discern the best course of action in a given circumstance, taking into account the relevant facts, the moral values at stake, and the particular situation of the agent. It is not scientific or theoretical knowledge, but rather a skill that is learned through experience and education. Practical wisdom enables us to apply the general ideals of virtue to concrete cases and to balance different and sometimes conflicting values.

Virtue ethics has several advantages over other ethical theories. First, it provides a more realistic and holistic account of human nature and morality, recognizing that we are not only rational but also emotional, social, and embodied beings. Second, it offers a more positive and aspirational vision of ethics, encouraging us to develop our moral character and to seek excellence rather than merely avoid wrongdoing. Third, it respects the diversity and complexity of moral situations, allowing for flexibility and nuance rather than imposing rigid and universal rules.

However, virtue ethics also faces some challenges and criticisms. One is that it may be too vague or subjective to provide clear and consistent guidance for moral action. How do we define and measure the virtues? How do we resolve conflicts between different virtues or different interpretations of the same virtue? How do we deal with moral dilemmas where no option seems virtuous? Another challenge is that virtue ethics may be too demanding or elitist to be applicable to ordinary people. How can we attain the virtues in a world full of temptations, pressures, and obstacles? How can we ensure that we have the proper education and environment to foster our moral development? How can we avoid being influenced by corrupting factors such as self-interest, bias, or prejudice?

Despite these difficulties, virtue ethics remains a rich and influential tradition that has inspired many thinkers and practitioners across different fields and cultures. Some of the major variants in virtue ethics include:

Eudaimonist Virtue Ethics: This framework is based on the ancient Greek concept of eudaimonia, which means happiness or well-being. It holds that the ultimate goal of human life is to achieve eudaimonia, which is possible only by developing and exercising the virtues. The virtues are excellences of character that enable us to function well as rational and social beings. The most influential version of this framework was proposed by Aristotle, who identified a list of moral and intellectual virtues and argued that they are acquired through habituation and education.

Agent-Based and Exemplarist Virtue Ethics: This framework is based on the idea that we can identify the virtues by looking at the traits and actions of moral exemplars, such as saints, heroes, or role models. It holds that the virtues are not defined by rules or principles, but by common-sense intuitions that we as observers have about admirable people. The most influential version of this framework was proposed by Michael Slote (see, e.g., [Slote 2020]), who argued that the virtues are based on empathic caring and respect for others.

Target-Centered Virtue Ethics: This framework is based on the idea that we can identify the virtues by looking at the ends or goals that they aim at. It holds that the virtues are not defined by human nature or moral ideals, but by the specific context and situation in which they are exercised. The most influential version of this framework was proposed by Christine Swanton (see, e.g., [Swanton 2003]), who argued that the virtues are based on promoting the well-being of oneself and others in a pluralistic and complex world.

Ethics of Care: This framework is based on the idea that we can identify the virtues by looking at the relationships and responsibilities that we have with others. It holds that the virtues are not defined by abstract or universal values, but by concrete and particular needs and emotions. The most influential version of this framework was proposed by Carol Gilligan (see, e.g., [Gilligan 1982]), who argued that the virtues are based on caring and nurturing, especially for those who are vulnerable or dependent.

[Gilligan 1982] Gilligan, C. (1982). In a different voice: Psychological theory and women’s development. Cambridge, MA: Harvard University Press.

[Slote 2020] Slote, M. (2020). Agent-based virtue ethics. In Handbuch Tugend und Tugendethik (pp. 1-10). Springer.

[Swanton 2003] Swanton, C. (2003). Virtue ethics: A pluralistic view. Clarendon Press.