EU AI Act Delays: Vedanta for Ethical Governance

From pause calls to high-risk delays, Vedanta guides discernment, duty, and fear-free innovation today.

Between June and November 2025, Europe’s AI rulebook met its first real-world pressure test. Lobby groups and some leaders urged a pause, warning that missing standards and guidance could turn noble safeguards into confusion. Months later, the Commission floated delays for “high-risk” obligations, folding AI into a broader effort to simplify digital regulation. These policy tremors are not merely legal. They reveal a deeper question: how should a society govern power that learns, predicts, and persuades, without losing its soul?

Vedanta answers by starting closer than any statute, at the mind that writes statutes. The Upanishads ask, “Know That by which all this is known.” The Bhagavad Gita teaches yoga as skill in action, and karma yoga as duty without attachment to outcomes. Applied to AI governance, these teachings become: clarity before control, humility before measurement, compassion before enforcement, and restraint before scale. A stress test, in this light, is a sadhana: it reveals where our governance is sattvic, rajasic, or tamasic.

1. June to November 2025: when governance met gravity

In textbooks, regulation looks like a clean staircase: a law is written, guidance appears, standards mature, and the market adapts. In life, the staircase shakes. From June through November 2025, the European Union’s AI governance agenda experienced that shaking in public. It was not only a dispute about compliance calendars. It was a test of readiness, legitimacy, and social consent.

June: “pause” becomes a serious word

By late June 2025, a leading tech lobbying group publicly urged the Union to pause implementation of the AI Act, arguing that a rushed rollout could stall innovation. The reason given was not merely discomfort with oversight. The concern was that “critical parts” needed for compliance were still missing close to key dates, including elements linked to general purpose AI rules that were expected soon. The same period saw political leaders echo the “pause” language, describing the rules as confusing and warning that unclear obligations could harm competitiveness.

The dates matter. Public reporting highlighted that key provisions for general purpose AI were scheduled to apply on 2 August 2025, while some supporting pieces due earlier (including items expected by early May) did not arrive on that schedule. When rules approach their effective date while standards and guidance lag, even supportive actors feel exposed. This is the first signal of a governance stress test: the gap between legal intent and operational readiness.

Whether one agrees with pause requests or not, the underlying lesson is important. Governance is more than principle. Governance is the availability of workable tools, processes, and institutions. A law can be ethically motivated and still fail in execution if the scaffolding is incomplete.

Summer: the friction between principle and implementation

During the summer, the debate widened. Some actors argued that Europe should slow down to avoid legal uncertainty and fragmented enforcement; others argued that slowing down would expose people to harm and create a vacuum in accountability. In many public conversations, “high standards” and “high innovation” were framed as rivals. Yet the deeper friction was more subtle: when guidance, standards, and institutional capacity lag, even sincere actors become anxious. Anxiety then expresses itself as lobbying, brinkmanship, and simplified slogans.

Vedanta gives a name to this dynamic. It calls it rajas, the restless drive to act, compete, and secure outcomes. When rajas dominates, even good aims can become reactive. In governance, rajas can show up as hurried timelines and frantic compliance theater. It can also show up as the market’s demand to loosen constraints before the constraints are properly understood.

Early November: talk of targeted pauses

By early November 2025, reports indicated that the European Commission was weighing targeted pauses or adjustments to parts of the AI Act amid pressure from industry and geopolitical partners. The language shifted from broad “stop the clock” demands to narrower, more concrete policy options. Whatever the merits of the pressure, the story signaled that the stress test was reaching decision time: either the EU would reaffirm its implementation timeline as written, or it would propose amendments that acknowledged real-world implementation constraints.

Mid November: a simplification package and a “stress test” by design

On 19 November 2025, the Commission presented a broader “Digital Omnibus” approach to simplify parts of the EU’s digital rulebook, and it explicitly launched a “Digital Fitness Check” described as a stress test of that rulebook, with stakeholder input invited through 11 March 2026.

Within the same package, the Commission proposed changes affecting the timeline for “high-risk” AI obligations. Public reporting described a proposal to delay the application of stricter rules for high-risk AI systems to December 2027, rather than the earlier schedule. The policy rationale was framed around practical implementation challenges, including the pace of harmonised standards, guidance, and the readiness of national authorities and assessment bodies. The Commission’s legislative proposal also discusses implementation challenges and explicitly links high-risk obligations’ entry into application to the availability of measures supporting compliance, such as harmonised standards, common specifications, and guidelines, while still setting a definite backstop date.

What matters for our Vedantic inquiry is not who “won” the debate. What matters is what the stress test revealed:

  1. Europe is trying to regulate not a single product class but a fast-evolving general capability.

  2. Rules that are principled on paper still require standards, tools, and institutions to become usable.

  3. Too much speed can produce confusion, but too much delay can prolong exposure to harms.

  4. Governance is not only about preventing misuse. It is also about preserving the moral legitimacy of innovation.

The arc from June to November is, therefore, a living laboratory for ethics. It shows a continent attempting to remain faithful to fundamental rights while also competing in a global AI race. It shows industry attempting to preserve agility while also avoiding liability. It shows policymakers attempting to be both careful and fast, which is often a contradictory demand.

Vedanta does not resolve that contradiction by slogans. It resolves it by returning to first principles: what is the aim of action, what is the nature of knowledge, and what is the right way to use power.

2. Why “high-risk” rules became the flashpoint

The AI Act’s risk-based structure treats some uses of AI as especially consequential: systems that can meaningfully affect health, safety, and fundamental rights. The label “high-risk” is not meant as moral panic. It is a classification tool.

In public summaries, the “high-risk” category often includes contexts such as biometric identification, employment and recruitment, access to education and examinations, credit decisions, healthcare, critical infrastructure and utilities, law enforcement, and certain public services. The common thread is not that AI is always harmful in these areas. The common thread is that errors, opacity, or manipulation in these areas can become life-altering.

So why did “high-risk” compliance become the center of delay proposals?

  1. Standards dependency. Many technical obligations become meaningful only when harmonised standards or common specifications exist. Without them, compliance becomes interpretive and inconsistent.

  2. Institutional readiness. Authorities, conformity assessment bodies, and enforcement processes must exist in practice, not only in a recital.

  3. Economic magnitude. Many enterprises deploy AI in hiring, screening, risk scoring, and monitoring. Changing these systems is expensive and politically sensitive.

  4. Public legitimacy. If high-risk obligations arrive with confusion, the first wave of enforcement can look arbitrary. That undermines trust in the whole project.

A Vedantic metaphor helps here. In traditional teaching, a student is given a prakriya, a method. If the method is unclear, the student may misapply it and then blame the truth. Similarly, when a compliance method is unclear, industry may misapply it and then blame the ethical aim.

The stress test period was, in effect, a collective question: can Europe align the ethics of protection with the engineering of compliance? If the answer is “not yet,” the temptation is to delay. If the answer is “we cannot delay,” the temptation is to enforce with partial readiness. Both temptations carry costs.

Vedanta encourages a third posture: steady preparation plus principled urgency. Prepare with humility, but do not use preparation as a disguise for avoidance. This is karma yoga on the scale of institutions.

3. The Vedantic diagnosis: three gunas of AI governance

Vedanta and the broader Indian philosophical tradition often describe experience through the three gunas: sattva (clarity and harmony), rajas (activity and restlessness), and tamas (inertia and obscuration). The gunas are not moral labels. They are patterns. Each appears in bodies, minds, and institutions.

A governance stress test asks: which guna is currently running the system?

Sattva: clarity that protects without choking

Sattvic governance is transparent, coherent, and proportionate. It seeks truth, learns from evidence, and aims at the welfare of all. It does not confuse complexity with wisdom. It builds simple pathways for compliance, and it makes enforcement predictable.

A sattvic regulator does not aim to “catch” actors. It aims to raise the quality of action. In Vedantic terms, it helps society move from tamas to rajas to sattva.

Sattva is present when:

  • rules are legible to ordinary practitioners, not only to legal specialists,

  • obligations match the risk profile and context of use,

  • documentation serves learning and accountability, not only bureaucracy,

  • audits are oriented to real outcomes and harms, not ritual checklists.

Rajas: speed, ambition, and the heat of competition

Rajas is not bad. Without rajas, no action happens. In Europe’s AI debate, rajas shows up as competitiveness concerns, startup energy, and the desire to be a global standard setter. It also shows up as anxious lobbying, defensive postures, and “move fast” narratives.

Rajas becomes harmful when it refuses limits. A rajasic market demands rapid rollout even when harms are plausible and unmeasured. A rajasic regulator may promise timelines that cannot be supported by standards and enforcement capacity, and then compensate by pushing paperwork.

Rajas is present when:

  • deadlines are treated as performance theater rather than readiness milestones,

  • compliance becomes an arms race rather than a safety discipline,

  • public communication becomes polarized: either “innovation will die” or “safety is everything.”

The remedy for rajas is not suppression. The remedy is direction. The Gita describes yoga as “skill in action,” suggesting that energy becomes beneficial when guided by discernment.

“Yoga is skill in action.” (Bhagavad Gita 2.50, common translation)

Tamas: delay, fog, and the comfort of vagueness

Tamas is inertia, dullness, and concealment. In governance, tamas appears as endless consultations with little output, guidance that arrives too late, and institutions that lack resources or coordination. It also appears as corporate denial: “we cannot know our model’s effects,” said with a shrug rather than curiosity.

Tamas is present when:

  • standards are perpetually “coming soon,”

  • enforcement bodies exist on paper but not in capacity,

  • complexity becomes an excuse for non-action,

  • harms are treated as unfortunate but inevitable.

The stress test period reveals a danger: in trying to avoid rajasic haste, institutions can fall into tamasic drift. That drift can be worse than haste because it normalizes irresponsibility. In the Upanishads, tamas is called avidya, ignorance, not only intellectual but moral.

Vedanta’s solution is consistent: cultivate sattva through disciplined practice. For governance, that means aligning timelines with real scaffolding, and aligning innovation with real accountability.

4. Satya, ahimsa, and aparigraha: yamas for the age of AI

Yoga philosophy gives a compact ethical toolkit known as the yamas: restraints that protect relationships and reduce harm. Although they originate in personal practice, they scale surprisingly well into institutional life.

Satya: truthfulness as transparency and non-deception

Satya is truthfulness, but not bluntness. It is alignment between what is known, what is said, and what is done. In AI governance, satya demands that claims about systems match evidence.

Practical expressions of satya include:

  • clear model documentation that admits limitations (a minimal sketch follows this list),

  • honest reporting about training data provenance and constraints,

  • evaluation results presented with uncertainty, not marketing gloss,

  • disclosure of when humans are replaced by automated judgments.
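
To make the first and third items above concrete, here is a minimal Python sketch of structured model documentation. The format is hypothetical: names such as ModelCard, EvalResult, and known_limitations are illustrative assumptions, not drawn from any official EU template.

    from dataclasses import dataclass, field

    @dataclass
    class EvalResult:
        # An evaluation metric reported with uncertainty, not as a bare number.
        metric: str
        value: float
        ci_low: float    # lower bound of a confidence interval
        ci_high: float   # upper bound of a confidence interval
        population: str  # the subgroup or context the metric was measured on

    @dataclass
    class ModelCard:
        # A documentation record that is incomplete until it admits limits.
        name: str
        intended_use: str
        data_provenance: str
        known_limitations: list = field(default_factory=list)
        evaluations: list = field(default_factory=list)  # EvalResult items

        def validate(self):
            # Satya check: documentation that names no limitations and
            # reports no uncertainty-qualified results fails review.
            problems = []
            if not self.known_limitations:
                problems.append("no known limitations admitted")
            if not self.evaluations:
                problems.append("no evaluation results with uncertainty")
            return problems

The point is not this particular format but the discipline it encodes: a card that cannot name its own limits has not yet told the truth.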

The Mundaka Upanishad offers a simple moral anchor:

“Satyameva jayate.”
“Truth alone triumphs.”

Satya also means avoiding “truth-washing,” where a company publishes extensive documentation that does not help anyone understand real risks. Clarity is a virtue.

Ahimsa: non-harm as safety engineering and rights protection

Ahimsa is non-harm. For AI, it does not mean “do nothing.” It means act with care, test before deployment, and avoid foreseeable injury.

Ahimsa includes:

  • bias and discrimination testing in hiring, credit, and education tools (sketched after this list),

  • red teaming for systems that can be misused for persuasion or surveillance,

  • robust security against data leakage and model inversion attacks,

  • mechanisms for appeal and human review when high-stakes decisions occur.
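
To ground the first item above, here is a minimal sketch of a disparate-impact check on a hypothetical screening tool. The 0.8 cutoff echoes the common "four-fifths" heuristic from employment practice; it is an illustrative assumption, not an AI Act threshold.

    from collections import defaultdict

    def selection_rates(decisions):
        # decisions: iterable of (group_label, selected_bool) pairs.
        totals, selected = defaultdict(int), defaultdict(int)
        for group, ok in decisions:
            totals[group] += 1
            if ok:
                selected[group] += 1
        return {g: selected[g] / totals[g] for g in totals}

    def disparate_impact_ratio(decisions):
        # Ratio of the lowest group selection rate to the highest; values
        # well below 1.0 signal outcomes that differ sharply by group.
        rates = selection_rates(decisions)
        return min(rates.values()) / max(rates.values())

    # Example: outcomes of a screening tool, labeled by group.
    outcomes = [("A", True), ("A", True), ("A", False),
                ("B", True), ("B", False), ("B", False)]
    ratio = disparate_impact_ratio(outcomes)
    if ratio < 0.8:  # four-fifths heuristic, assumed for illustration
        print(f"review needed: disparate impact ratio {ratio:.2f}")

A passing ratio does not prove fairness; it catches only one coarse failure mode. That is exactly the spirit of ahimsa testing: routine, humble, and tied to the context of use.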

In Vedantic language, ahimsa flows from the perception of unity. If the other is not other, harm becomes self-harm. The Isha Upanishad teaches:

“All this is pervaded by the Lord.”

When this perception becomes real, governance stops being a contest and becomes care.

Asteya: non-stealing as data dignity

Asteya is non-stealing. In the AI era, theft can be subtle: taking attention, taking privacy, taking creative labor, taking autonomy through manipulation.

Asteya suggests:

  • respecting data protection and consent,

  • avoiding covert extraction of behavioral signals,

  • not training on personal or proprietary data without lawful grounds,

  • not using AI to “borrow” human creativity while erasing attribution.

Asteya is important because governance debates often focus on safety while ignoring extraction. Yet extraction is harm.

Aparigraha: non-hoarding as restraint in data and power

Aparigraha is non-hoarding. It asks us to release the compulsion to possess more: more data, more users, more influence, more surveillance.

Aparigraha becomes a powerful lens for AI governance:

  • collect only what is needed (see the sketch after this list),

  • retain only what is justified,

  • limit secondary uses that expand risk,

  • resist building systems that centralize decision power without accountability.
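
Here is a minimal sketch of the first two items, assuming a hypothetical field-level retention policy; the field names and retention periods are invented for illustration, not legal guidance.

    from datetime import datetime, timedelta, timezone

    # Hypothetical policy: every stored field must declare a purpose and a
    # retention period. Anything undeclared is dropped by default.
    POLICY = {
        "email":      {"purpose": "account recovery", "retain_days": 365},
        "session_id": {"purpose": "fraud detection",  "retain_days": 30},
    }

    def minimize(record):
        # Collect only what is needed: fields without a declared
        # purpose are never persisted in the first place.
        return {k: v for k, v in record.items() if k in POLICY}

    def expired(field_name, stored_at):
        # Retain only what is justified: past its period, a field goes.
        limit = timedelta(days=POLICY[field_name]["retain_days"])
        return datetime.now(timezone.utc) - stored_at > limit

    incoming = {"email": "a@example.org", "session_id": "s1",
                "mouse_trail": [0.3, 0.7]}
    print(minimize(incoming))  # mouse_trail is dropped: no declared purpose
    print(expired("session_id",
                  datetime.now(timezone.utc) - timedelta(days=45)))  # True

The design choice matters: denial by default inverts the usual incentive, so hoarding requires an explicit, reviewable decision rather than a forgotten log line.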

The Isha Upanishad adds a phrase that feels like a policy principle:

“Tena tyaktena bhunjitha.”
“Enjoy through renunciation.”

In governance terms: prosperity grows when restraint protects trust. When trust grows, adoption becomes stable.

Brahmacharya: conserving attention and avoiding exploitative persuasion

Brahmacharya is often misunderstood as only celibacy. In many interpretations, it means disciplined use of energy. In the AI context, it points to the ethics of attention: recommendation engines, engagement optimization, and persuasive interfaces.

A brahmacharya-informed governance posture includes:

  • limits on manipulative design,

  • transparency about recommender goals,

  • protection for children and vulnerable users,

  • auditing of “dark pattern” optimization.

This is not anti-technology. It is technology aligned with human flourishing.

Saucha and santosha: cleanliness and contentment as governance culture

Even cultural virtues matter. Saucha and santosha are, strictly speaking, niyamas (observances) rather than yamas, but they complete the picture. Saucha suggests clean processes, clear responsibilities, and clean incentives. Santosha suggests not being driven by endless escalation. A governance culture that is never content will always chase the next capability without integrating the last.

The stress test debate in 2025 can be seen as a clash between satya and marketing, between ahimsa and speed, between aparigraha and accumulation. Vedanta does not demonize any side. It invites a higher synthesis: ethics that are realistic, and realism that is ethical.

5. Karma yoga for regulators, builders, and deployers

Karma yoga is the yoga of action. It does not ask us to abandon action. It asks us to act without ego-fixation on results. In the Gita:

“To action alone you have a right, never to its fruits.” (Gita 2.47, common translation)

In AI governance, fruits include market dominance, political credit, and public praise. When actors cling to these fruits, they distort decisions. A company may downplay risk to protect revenue. A regulator may overpromise speed to protect prestige. A politician may weaponize “innovation” or “safety” rhetoric for advantage.

Karma yoga offers a steadier approach: do what is right, do it well, and accept that outcomes will unfold through many causes.

For regulators: duty as public stewardship

A karma-yoga regulator focuses on:

  1. Legibility. Create guidance that engineers can apply. If a rule cannot be explained to a capable practitioner, it is not yet mature.

  2. Proportionality. Align obligations with risk. Protect rights without imposing identical burdens on radically different systems.

  3. Capacity building. Invest in competent oversight bodies. Laws without capacity are ideals without hands.

  4. Iterative learning. Treat audits and incidents as learning inputs. Update guidance with humility.

  5. Fairness. Avoid enforcement that feels arbitrary. Predictability is a form of justice.

The 2025 stress test exposed a classic governance vulnerability: ambitious legislation can outrun the supply chain of standards, auditors, and national authorities. Karma yoga responds by strengthening the supply chain, rather than only adjusting dates.

For model builders: duty as safety engineering

A karma-yoga builder sees safety as part of the craft, not as external constraint. That means:

  • designing evaluation as a first-class discipline,

  • documenting intended use and foreseeable misuse,

  • supporting downstream deployers with testing tools,

  • offering clear mechanisms to report issues and receive updates.

When builders externalize safety costs onto society, they violate ahimsa. When they internalize safety, they practice yajna, offering effort into the shared fire of welfare.

The Gita also teaches that action should support loka-sangraha, the holding together of the world:

“Act for the welfare of the world.” (Gita 3.20, paraphrase)

A builder who takes this seriously will not treat “high-risk” sectors as markets alone. They will treat them as arenas of human vulnerability.

For deployers: duty as context responsibility

Most harm does not arise from a model in the abstract. Harm arises when a system is deployed into a context with high stakes and weak safeguards. A deployer therefore carries a distinct responsibility:

  • ensure human oversight in decisions that affect rights,

  • monitor performance drift over time (a small sketch follows this list),

  • train staff to recognize model limits,

  • maintain appeal and remedy channels,

  • avoid automation bias, the tendency to trust outputs too much.
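
As a sketch of the drift item above, assume a hypothetical monitoring job that records accuracy weekly; the 0.05 tolerance is an illustrative assumption, not a regulatory figure.

    def drift_alert(baseline_acc, recent_accs, tolerance=0.05):
        # Flag drift when the recent average falls more than `tolerance`
        # below the accuracy measured at deployment time.
        recent_avg = sum(recent_accs) / len(recent_accs)
        return baseline_acc - recent_avg > tolerance

    # Example: accuracy at deployment vs. the last four weekly checks.
    if drift_alert(0.91, [0.88, 0.86, 0.84, 0.83]):
        print("performance drift detected: schedule a human review")

Real monitoring would also track subgroup metrics and input distributions, but even this crude gate beats the most common practice, which is not looking at all.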

A karma-yoga deployer also resists the temptation to hide behind vendors. “The model did it” is not an ethical defense. Vedanta reminds us that agency is shared. When we outsource judgment, we still own consequences.

Karma yoga and the “pause” debate

How does karma yoga view calls to pause the rollout?

It does not reflexively reject them, and it does not reflexively accept them. It asks: what is the intention behind the pause, and what will be done during the pause?

  • If “pause” is a method to build standards, capacity, and clarity, it can be sattvic.

  • If “pause” is a method to avoid accountability while continuing deployment, it becomes tamasic.

  • If “pause” is demanded as a threat, to win leverage, it becomes rajasic.

Similarly, how does karma yoga view proposals to delay high-risk rules?

  • If delay is tied to concrete readiness criteria and used to strengthen the scaffolding, it can serve dharma.

  • If delay becomes a habit of deferral, it erodes trust and invites harm.

Karma yoga is therefore not a slogan. It is a governance method: make the work real, not theatrical.

6. Jnana and epistemic humility: what can AI know, and what can it not?

Vedanta is often called jnana marga, the path of knowledge. Yet Vedanta begins by questioning what we mean by knowledge. The Upanishads repeatedly point out that the Self is not an object among objects. It is the light by which objects are known.

The Kena Upanishad famously asks:

“That which is the ear of the ear, the mind of the mind.” (Kena Upanishad, condensed)

This teaching offers a striking mirror for modern AI. AI systems can predict, classify, and generate patterns. They can model behavior and language. But they do not possess the inner witness. They do not have the “knower” in the Vedantic sense. Therefore, their outputs are powerful, but their authority is not the same as human understanding.

Epistemic humility becomes a central governance virtue. It encourages two disciplines:

  1. Limits are not embarrassment. They are reality. A system that admits uncertainty is safer than a system that pretends certainty (a small sketch follows this list).

  2. Knowledge is contextual. A model can be accurate on average and still unjust in edge cases that matter.
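
One operational form of the first discipline is abstention. A minimal sketch, assuming a hypothetical classifier that exposes calibrated confidence scores; the 0.9 threshold is chosen only for illustration.

    def decide(label, confidence, threshold=0.9):
        # Below the threshold the system abstains and routes the case
        # to human review instead of issuing an automated judgment.
        if confidence >= threshold:
            return ("automated", label)
        return ("human_review", None)

    print(decide("approve", 0.97))  # ('automated', 'approve')
    print(decide("reject", 0.62))   # ('human_review', None)

The honesty of this pattern depends entirely on the confidence scores being calibrated; an overconfident model abstains too rarely, which is pretended certainty in disguise.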

The 2025 governance stress test implicitly revolved around humility. Regulators recognized that harmonised standards and guidance were not ready at the pace desired. Industry recognized that obligations might be hard to interpret without those standards. Both sides were acknowledging a limit: we cannot govern what we cannot yet clearly operationalize.

But humility should not become paralysis. Vedanta teaches viveka, discernment. Viveka distinguishes:

  • uncertainty that can be reduced through testing and monitoring,

  • uncertainty that is intrinsic because the world is complex,

  • uncertainty that is manufactured through lack of transparency.

AI governance should aim to reduce the first kind, accept the second kind, and eliminate the third kind.

Viveka in practice: separating risk types

A Vedantic governance mindset would separate at least five risk types:

  1. Physical risk. Systems that can affect bodies: healthcare, transport, industrial control.

  2. Rights risk. Systems that affect dignity: surveillance, biometrics, policing, border control.

  3. Economic risk. Systems that shape opportunity: hiring, lending, insurance, education.

  4. Epistemic risk. Systems that distort reality: misinformation, deepfakes, automated persuasion.

  5. Existential risk. Systems that could enable large scale misuse or instability.

The EU’s “high-risk” category focuses strongly on physical and rights risk, with economic risk included. The stress test period showed that implementing these categories requires not just legal text but measurement practices. Measurement is a form of pramana, a means of knowledge. If the pramanas are weak, governance becomes opinion.

Vedanta, at its best, is not anti-measurement. It is anti-confusion about what measurement can deliver. It uses the right tool for the right domain. Similarly, AI governance must choose evaluation methods that actually track harms, not proxies that only look scientific.

7. A Vedantic blueprint for AI governance after the stress test

If the period from June to November 2025 was a stress test, what practices strengthen governance for the next phase? Below is a Vedanta-inspired blueprint that speaks to regulators, companies, and civil society.

Practice 1: begin with the aim, not the instrument

Vedanta asks: what is the purpose of human life? In civic terms: what is the purpose of technology? If the purpose is human flourishing, then governance must reward flourishing, not only growth.

A simple policy translation is: define success metrics that track well-being outcomes, not only adoption speed.

Practice 2: treat clarity as a human right

Confusion harms. It harms innovators who cannot comply, and it harms citizens who cannot understand their protections. Therefore, legibility is ethical.

During the stress test, missing standards and delayed guidance were repeatedly cited as causes of uncertainty. The remedy is not only “more pages.” It is clearer pages.

Practice 3: build standards as living tools, not sacred tablets

Standards should be:

  • testable,

  • updateable,

  • linked to evidence about harms,

  • compatible with multiple sectors.

A sattvic standard evolves while keeping its intention stable.

Practice 4: separate governance for capability from governance for use

A general capability model and a domain-specific deployment have different risk profiles. Governance should separate obligations accordingly. Otherwise, requirements become either too heavy to follow or too light to matter.

Practice 5: make audits educational, not only punitive

Punishment can deter, but learning can prevent. Audits should produce knowledge that circulates: what failure modes occurred, how mitigations worked, what monitoring caught.

This aligns with the Vedantic ideal of shravana, manana, nididhyasana: listen, reflect, assimilate.

Practice 6: create “abhaya channels” for reporting and remedy

Abhaya means fearlessness. Systems must allow workers and users to report harms without retaliation. Governance without reporting channels is blind.

Practice 7: put remedy at the center of high-stakes AI

In Vedanta, compassion is not sentiment. It is response. High-stakes AI must include:

  • clear contestation pathways,

  • human review,

  • documented rationale,

  • correction and restitution where harm occurred.

Practice 8: use aparigraha to limit surveillance incentives

Many AI harms are powered by data hunger. Governance should discourage unnecessary collection and secondary use. If the economy rewards hoarding, ethics will be defeated by incentives.

Aparigraha suggests: limit what you keep, and limit what you can do with what you keep.

Practice 9: treat attention as a protected commons

Persuasion at scale can destabilize democracies and mental health. Governance should treat attention as a commons, not a commodity to be strip-mined.

Brahmacharya implies restraint in designing systems that exploit cognitive vulnerabilities.

Practice 10: align timelines with readiness criteria, not politics

A stress test revealed the temptation to treat deadlines as political trophies. A better approach is to tie major compliance milestones to readiness criteria:

  • standards available,

  • competent authorities designated and trained,

  • conformity assessment capacity in place,

  • clear guidance published.

When these are real, timelines become credible. A minimal sketch of such a readiness gate follows.
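
The sketch below mirrors the list above as hypothetical booleans; nothing here reflects the Commission's actual process, only the shape of the idea.

    from dataclasses import dataclass

    @dataclass
    class Readiness:
        standards_available: bool
        authorities_trained: bool
        assessment_capacity: bool
        guidance_published: bool

        def gate_open(self):
            # A milestone becomes binding only when every criterion holds;
            # otherwise the gap between legal intent and tooling persists.
            return all(vars(self).values())

    state = Readiness(standards_available=True, authorities_trained=True,
                      assessment_capacity=False, guidance_published=True)
    print(state.gate_open())  # False: assessment capacity still missing

Tying dates to such gates converts a political promise into a checkable condition, which is the difference between a deadline as trophy and a deadline as milestone.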

Practice 11: cultivate a culture of sattva inside organizations

Compliance is not only external. It is internal culture. Organizations should cultivate:

  • humility to admit unknowns,

  • courage to delay deployment when harms are plausible,

  • discipline to monitor and update,

  • compassion to prioritize users over metrics.

Sattva is a culture feature, not only a personality trait.

Practice 12: keep the deepest truth in view: unity

Vedanta’s ultimate teaching, expressed in mahavakyas like “Tat tvam asi” (That thou art), invites us to see the other as self. This does not replace law. It dignifies law.

When unity is remembered, governance debates soften. We stop treating regulation as enemy and innovation as idol. We treat both as instruments in service of a shared life.

8. A practical sadhana checklist for teams building in Europe

Vedanta becomes real through practice. Here is a simple weekly sadhana for AI teams operating under or alongside EU expectations. It is intentionally concrete.

  1. One satya audit: Identify one claim in product marketing and verify it with evidence.

  2. One ahimsa test: Run one adversarial or bias test tied to the system’s context of use.

  3. One aparigraha cut: Remove one unnecessary data field, retention practice, or logging stream.

  4. One human dignity check: Review one workflow where a person is judged by a score. Add explanation, appeal, or oversight.

  5. One drift review: Examine how performance changed since last month. Document what shifted.

  6. One red-team story: Ask, “How would a malicious actor misuse this?” Update mitigations.

  7. One stakeholder conversation: Speak with a user group affected by the system, especially those with less power.

  8. One silence practice: Ten minutes of quiet before planning meetings, to reduce rajasic reactivity.

  9. One incident rehearsal: Simulate a failure and practice response steps.

  10. One learning share: Publish a short internal note on what was learned this week.

This checklist is not a substitute for legal compliance. It is the spirit that makes compliance meaningful. It also answers the “pause” question in a mature way: whether or not rules are delayed, the discipline continues.

9. A closing contemplation: governing power without fear

The Upanishads offer a prayer:

“Asato ma sad gamaya, tamaso ma jyotir gamaya, mrityor ma amritam gamaya.”
“Lead me from the unreal to the real, from darkness to light, from death to immortality.”

In the AI era, “unreal” includes inflated promises and deceptive outputs. “Darkness” includes opacity, manipulation, and the fog of unclear responsibility. “Death” includes the erosion of dignity and trust that makes societies brittle.

Europe’s 2025 governance stress test, from pause calls to delay proposals, can be read as a collective attempt to move from darkness to light, even if imperfectly. A delay, if used wisely, can be a movement toward light: standards clarified, institutions strengthened, harms mapped. A delay, if used poorly, can be a movement back into darkness: accountability deferred, deployment unexamined, extraction normalized.

Vedanta asks for a steadier heart. Do what is right without panic. Build what is useful without arrogance. Regulate what is powerful without hatred. Innovate without worshiping speed.

If we can hold that balance, then governance becomes yoga: skill in action, rooted in compassion, guided by discernment, and anchored in the quiet truth that the welfare of the other is not separate from our own.

Om shanti, shanti, shanti.

10. Advaita, pluralism, and the European question of dignity

A common misunderstanding of Advaita Vedanta is that it dissolves the world into an abstract oneness and therefore has little to say about law. In fact, classical Advaita is careful: it distinguishes levels of truth. At the absolute level, Brahman alone is real. At the empirical level, action matters, harm matters, and justice matters. The wise person does not confuse these levels. They see the absolute without neglecting the relative.

This layered vision can help a pluralistic society like Europe. The EU’s legal culture is strongly shaped by dignity and fundamental rights. Vedanta resonates with that impulse, because it treats consciousness as the deepest dignity. When the Chandogya Upanishad says “Tat tvam asi,” it is declaring an inviolable worth that does not depend on productivity, status, or data exhaust. That worth is the ground on which rights stand.

At the same time, Vedanta warns against a subtle danger: paternalism masquerading as protection. If regulators treat citizens as children who must be shielded from every risk, governance can become heavy and controlling. If markets treat citizens as consumers whose attention can be harvested, governance becomes exploitative. Vedanta suggests a middle path: empower discernment, not only impose limits.

Neti neti: learning what not to claim

The Brihadaranyaka Upanishad uses the method “neti neti,” meaning “not this, not this.” It is a discipline of refusing false identification. Applied to AI governance, neti neti becomes a way to resist exaggerated claims, on both sides.

  • Not this: the claim that regulation alone can guarantee safety.

  • Not this: the claim that innovation alone can guarantee prosperity.

  • Not this: the claim that a model’s benchmark score equals real-world trustworthiness.

  • Not this: the claim that “high-risk” is a stigma rather than a call for stronger care.

Neti neti does not end in nihilism. It ends in clearer seeing. It clears away noise so that the essential can be held.

The witness and the right to explanation

Vedanta centers the sakshi, the witness, the aware presence that observes thoughts and experiences. In civic terms, this points to a right that is easy to overlook: the right to remain a subject, not become an object. Many AI harms are objectifying. They reduce a person to a score, a risk label, a predicted behavior, a “likely” intent.

A Vedanta-aligned governance posture insists that systems, especially in high-stakes domains, must preserve the person as a participant:

  • explanations that are meaningful, not symbolic,

  • opportunities to contest and add context,

  • human review that is real, not rubber-stamp,

  • limits on biometric identification that treat bodies as data sources without consent.

The EU’s focus on fundamental rights is, therefore, not a competing value against innovation. It is a necessary condition for innovation that does not corrode society.

A quiet lesson from the stress test

The June to November 2025 period showed that governance is itself a living system. It must adapt, learn, and correct. That is not weakness. That is maturity. Vedanta calls it śraddhā supported by viveka: confidence grounded in discernment.

If a delay of high-risk obligations is used to strengthen standards, guidance, and enforcement competence, then the delay can serve dignity by reducing arbitrary outcomes. If the delay is used to postpone hard conversations about surveillance, discrimination, and manipulation, then the delay harms dignity by extending exposure.

This is why Vedanta keeps returning to intention. In the Gita, Krishna repeatedly turns Arjuna’s gaze from external complexity to inner alignment. Not because the battle is unreal, but because alignment is the only stable compass in a storm.

Europe’s AI governance, like any human project, will contain politics, lobbying, compromise, and imperfection. Vedanta does not demand purity. It demands sincerity and steadiness. It says: keep the aim in view, reduce harm, speak truth, and keep learning. That is the way a society can govern intelligent power without losing freedom, and without losing love.
