{"id":57751,"date":"2026-02-24T13:07:42","date_gmt":"2026-02-24T21:07:42","guid":{"rendered":"https:\/\/svch.io\/?p=57751"},"modified":"2026-02-24T13:07:48","modified_gmt":"2026-02-24T21:07:48","slug":"ai-ethics-in-the-posthuman-age-power-risk-and-responsibility","status":"publish","type":"post","link":"https:\/\/svch.io\/es\/ai-ethics-in-the-posthuman-age-power-risk-and-responsibility\/","title":{"rendered":"AI Ethics in the Posthuman Age: Power, Risk, and Responsibility"},"content":{"rendered":"\n<p><a href=\"https:\/\/www.linkedin.com\/company\/svch\/\"><\/a><a href=\"https:\/\/www.linkedin.com\/in\/rajakishore-nath-146927263\/\">Rajakishore Nath<\/a> and <a href=\"https:\/\/www.linkedin.com\/in\/riya-manna-696985167\/\">RIYA MANNA<\/a> , \u201cFrom posthumanism to ethics of artificial intelligence,\u201d <em>AI &amp; Society<\/em> <strong>38(1)<\/strong>, 185\u2013196.<\/p>\n\n\n\n<p id=\"ember1776\"><a href=\"https:\/\/www.linkedin.com\/school\/indian-institute-of-technology-bombay\/\">Indian Institute of Technology, Bombay<\/a><\/p>\n\n\n\n<p id=\"ember1777\"><strong>\u201cCited by\u201d (approx):<\/strong> 108<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"ember1778\">What the paper is trying to do<\/h3>\n\n\n\n<p id=\"ember1779\">This is a <strong>conceptual philosophy paper<\/strong>, not an experiment. 
The authors are basically asking: as AI and \u201cposthuman\u201d futures get more plausible, <strong>who counts as a moral agent<\/strong>, and how does that reshape <strong>AI ethics<\/strong>?<\/p>\n\n\n\n<p id=\"ember1780\">Their own abstract frames the mission:<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>Our aim \u2026 is to critically analyze the authenticity of the posthuman cyborg as an agent \u2026 and the emergence of \u2018AI ethics\u2019.<\/p>\n<\/blockquote>\n\n\n\n<figure class=\"wp-block-image\"><a href=\"https:\/\/svch.io\/\" target=\"_blank\" rel=\"noreferrer noopener\"><img decoding=\"async\" src=\"https:\/\/media.licdn.com\/dms\/image\/v2\/D5612AQHQ_J56hyOrLQ\/article-inline_image-shrink_1500_2232\/B56ZyMWNJNJQAU-\/0\/1771881158250?e=1773273600&amp;v=beta&amp;t=v-GEwaln-0sOzd7xZlehFgl3KcXPyuc_NXouk8agM6k\" alt=\"Silicon Valley Certification Hub. Certifications on Artificial Intelligence \"\/><\/a><\/figure>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"ember1784\">AI-centered key findings (what you can actually use)<\/h3>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"ember1786\">1) AI pushes a shift in what \u201chuman\u201d even means<\/h3>\n\n\n\n<p id=\"ember1787\">They argue posthumanism \u201cdeconstructs\u201d a radical concept of the human, and that AI advancement will drive a new conception of \u201cbiological human being.\u201d<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>Sooner or later, we shall get a different conception of \u2018biological human being\u2019 through the advancement of artificial intelligence (AI) technology.<\/p>\n<\/blockquote>\n\n\n\n<p id=\"ember1789\">A lot of AI ethics assumes stable categories: human users, human rights-holders, nonhuman tools. 
This paper says those categories are about to get messy.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"ember1790\">2) \u201cHybrid beings\u201d and cyborgs force the moral agency question<\/h3>\n\n\n\n<p id=\"ember1791\">They highlight a future where AI could replace parts of the brain or body, creating \u201chybrid human beings,\u201d and ask whether these hybrids should be treated as moral agents like biological humans.<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>we must analyze whether \u2026 \u2018hybrid human beings\u2019 [are] moral agents, similar to biological humans.<\/p>\n<\/blockquote>\n\n\n\n<p id=\"ember1793\"><strong>AI angle:<\/strong> This is the bridge from posthumanism into AI ethics. If agency is distributed across human and machine, then \u201cwho did the action\u201d becomes a design and governance problem.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"ember1794\">3) Responsibility is the central ethical bottleneck<\/h3>\n\n\n\n<p id=\"ember1795\">They call responsibility \u201cthe most debatable issue\u201d and link it to whether we can build \u201cconscious moral agency\u201d into machines.<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>\u201cThe responsibility question remains the most debatable issue\u2026\u201d<\/li>\n\n\n\n<li>\u201cWithout any \u2018conscious moral agency\u2019, we could not [hold] them responsible for their actions.\u201d<\/li>\n<\/ul>\n\n\n\n<p id=\"ember1797\"><strong>AI angle:<\/strong> This maps directly onto modern questions like: liability for autonomous systems, accountability for agentic AI, responsibility gaps, and whether \u201cmoral agency\u201d is necessary for governance.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"ember1798\">4) \u201cFriendly AI\u201d risks becoming anthropocentrism in disguise<\/h3>\n\n\n\n<p id=\"ember1799\">They describe AI ethics as often aiming for \u201cfriendly AI,\u201d then point out the trap: defining 
\u201cfriendly\u201d as \u201chuman-benefiting\u201d can re-install anthropocentrism.<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>We are prone to judge future intelligent systems from the anthropocentric attitude\u2026<\/p>\n<\/blockquote>\n\n\n\n<p id=\"ember1801\"><strong>AI angle:<\/strong> This is a useful critique of AI ethics that only optimizes for human preferences, while ignoring broader moral circles (animals, ecosystems, nonhuman agents, future beings).<\/p>\n\n\n\n<p id=\"ember1802\">Pushed to its sharpest form, the critique reads: <strong>\u201cAI ethics is secretly a project to preserve human dominance.\u201d<\/strong><\/p>\n\n\n\n<p id=\"ember1803\">Evidence line: \u201cThe central theme \u2026 \u2018friendly-AI\u2019 \u2026 betterment of human society.\u201d<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Strength:<\/strong> It\u2019s a clean bridge from posthumanism into the AI ethics questions that actually matter: moral agency, responsibility, anthropocentrism, and computable ethics.<\/li>\n\n\n\n<li><strong>Weakness:<\/strong> It\u2019s largely <strong>speculative<\/strong> and sometimes treats \u201cposthuman cyborg\u201d as the main future path, while today\u2019s AI ethics problems often come from boring realities like incentives, deployment, opacity, and power. 
The paper gives you the moral vocabulary, not the governance playbook.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"ember1806\">Key follow-up research questions (AI-first)<\/h3>\n\n\n\n<ol class=\"wp-block-list\">\n<li>If we never achieve machine consciousness, what <strong>responsibility framework<\/strong> best handles agentic AI in the real world?<\/li>\n\n\n\n<li>How do we define \u201cfriendly AI\u201d without defaulting to <strong>anthropocentrism<\/strong>?<\/li>\n\n\n\n<li>What parts of ethics are <strong>computable<\/strong>, and what parts collapse when you try to formalize them?<\/li>\n\n\n\n<li>If hybrids exist, what minimal criteria should trigger <strong>rights, protections, or person-like status<\/strong>?<\/li>\n<\/ol>\n\n\n\n<figure class=\"wp-block-image\"><a href=\"https:\/\/svch.io\/\" target=\"_blank\" rel=\"noreferrer noopener\"><img decoding=\"async\" src=\"https:\/\/media.licdn.com\/dms\/image\/v2\/D5612AQFiWYcW36OFtQ\/article-inline_image-shrink_1500_2232\/B56ZyMXA90IcAU-\/0\/1771881370048?e=1773273600&amp;v=beta&amp;t=wkxD0s8fy1V_xXmu3yyaKj1Os-qdON_SlrHM2i1Xprc\" alt=\"Silicon Valley Certification Hub\"\/><\/a><\/figure>\n","protected":false},"excerpt":{"rendered":"<p>Rajakishore Nath and RIYA MANNA , \u201cFrom posthumanism to ethics of artificial intelligence,\u201d AI &amp; Society 38(1), 185\u2013196. 
Indian Institute of Technology, Bombay \u201cCited by\u201d (approx): 108 What the paper [&hellip;]<\/p>\n","protected":false},"author":155,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"content-type":"","_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"advanced_seo_description":"Explore how posthumanism shapes the ethics of artificial intelligence and what it means for responsibility, power, and the future of humanity.","jetpack_seo_html_title":"The Ethical Reckoning of Artificial Intelligence After Posthumanism","jetpack_seo_noindex":false,"_price":"","_stock":"","_tribe_ticket_header":"","_tribe_default_ticket_provider":"","_tribe_ticket_capacity":"","_ticket_start_date":"","_ticket_end_date":"","_tribe_ticket_show_description":"","_tribe_ticket_show_not_going":false,"_tribe_ticket_use_global_stock":"","_tribe_ticket_global_stock_level":"","_global_stock_mode":"","_global_stock_cap":"","_tribe_rsvp_for_event":"","_tribe_ticket_going_count":"","_tribe_ticket_not_going_count":"","_tribe_tickets_list":"[]","_tribe_ticket_has_attendee_info_fields":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[1],"tags":[],"class_list":["post-57751","post","type-post","status-publish","format-standard","hentry","category-uncategorized"],"acf":[],"jetpack_featured_media_url":"","jetpack_likes_enabled":true,"jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/svch.io\/es\/wp-json\/wp\/v2\/posts\/57751","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/svch.io\/es\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/svch.io\/es\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/svch.io\/es\/wp-json\/wp\/v2\/users\/155"}],"replies":[{"embeddable":true,"href":"https:\/\/svch.io\/es\/wp-j
son\/wp\/v2\/comments?post=57751"}],"version-history":[{"count":0,"href":"https:\/\/svch.io\/es\/wp-json\/wp\/v2\/posts\/57751\/revisions"}],"wp:attachment":[{"href":"https:\/\/svch.io\/es\/wp-json\/wp\/v2\/media?parent=57751"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/svch.io\/es\/wp-json\/wp\/v2\/categories?post=57751"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/svch.io\/es\/wp-json\/wp\/v2\/tags?post=57751"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}