Rajakishore Nath and Riya Manna, “From posthumanism to ethics of artificial intelligence,” AI & Society 38(1), 185–196.
Indian Institute of Technology, Bombay
Cited by (approx.): 108
What the paper is trying to do
This is a conceptual philosophy paper, not an experiment. The authors are basically asking: as AI and “posthuman” futures get more plausible, who counts as a moral agent, and how does that reshape AI ethics?
Their own abstract frames the mission:
Our aim … is to critically analyze the authenticity of the posthuman cyborg as an agent … and the emergence of ‘AI ethics’.
AI-centered key findings (what you can actually use)
1) AI pushes a shift in what “human” even means
They argue posthumanism “deconstructs” a radical concept of the human, and that AI advancement will drive a new conception of “biological human being.”
Sooner or later, we shall get a different conception of ‘biological human being’ through the advancement of artificial intelligence (AI) technology.
AI angle: A lot of AI ethics assumes stable categories: human users, human rights-holders, nonhuman tools. This paper says those categories are about to get messy.
2) “Hybrid beings” and cyborgs force the moral agency question
They highlight a future where AI could replace parts of the brain or body, creating “hybrid human beings,” and ask whether these hybrids should be treated as moral agents like biological humans.
we must analyze whether … ‘hybrid human beings’ [are] moral agents, similar to biological humans.
AI angle: This is the bridge from posthumanism into AI ethics. If agency is distributed across human and machine, then “who did the action” becomes a design and governance problem.
3) Responsibility is the central ethical bottleneck
They call responsibility “the most debatable issue” and link it to whether we can build “conscious moral agency” into machines.
- “The responsibility question remains the most debatable issue…”
- “Without any ‘conscious moral agency’, we could not [hold] them responsible for their actions.”
AI angle: This maps directly onto modern questions like: liability for autonomous systems, accountability for agentic AI, responsibility gaps, and whether “moral agency” is necessary for governance.
4) “Friendly AI” risks becoming anthropocentrism in disguise
They describe AI ethics as often aiming for “friendly AI,” then point out the trap: defining “friendly” as “human-benefiting” can re-install anthropocentrism.
We are prone to judge future intelligent systems from the anthropocentric attitude…
AI angle: This is a useful critique of AI ethics that only optimizes for human preferences, while ignoring broader moral circles (animals, ecosystems, nonhuman agents, future beings).
Put bluntly, the worry is that AI ethics becomes a project to preserve human dominance.
Evidence line: “The central theme … ‘friendly-AI’ … betterment of human society.”
Strengths and weaknesses
- Strength: It’s a clean bridge from posthumanism into the AI ethics questions that actually matter: moral agency, responsibility, anthropocentrism, and computable ethics.
- Weakness: It’s largely speculative and sometimes treats the “posthuman cyborg” as the main future path, while today’s AI ethics problems often come from boring realities like incentives, deployment, opacity, and power. The paper gives you the moral vocabulary, not the governance playbook.
Key follow-up research questions (AI-first)
- If we never achieve machine consciousness, what responsibility framework best handles agentic AI in the real world?
- How do we define “friendly AI” without defaulting to anthropocentrism?
- What parts of ethics are computable, and what parts collapse when you try to formalize them?
- If hybrids exist, what minimal criteria should trigger rights, protections, or person-like status?