Machine cognition introduces a distinctive set of social problems precisely because it departs from the foundations of human rationality and moral judgment. Human rationality involves the capacity to reason through causes, weigh evidence, give justifications, and revise beliefs in light of new arguments, rather than merely detecting patterns. It also involves fairness, often applied case by case. Moral judgment is the ability to evaluate actions and decisions in terms of right and wrong, responsibility and harm, grounded in empathy, norms and principles of justice rather than in statistical outcomes.
Accordingly, human reasoning is oriented toward explanation, justification, and normative evaluation. But machine cognition is largely statistical, correlation-driven, and optimized for prediction. This shift carries real risks for decision-making. Over-reliance on machine cognition, or reliance on it alone, erodes rationality in public decision-making by replacing reason-giving processes with opaque outputs that cannot be meaningfully explained or contested. This holds even when the algorithms used by the AI are known. When decisions are justified by “the model says so”, rational deliberation gives way to technocratic deference.
The consequences for justice are serious. Machine systems do not understand fairness, responsibility, or rights; instead, they produce approximate outcomes based on patterns in historical data. That data can, and often does, reflect existing inequalities, discrimination and structural bias. As a result, machine cognition can silently reproduce and amplify injustice while appearing neutral. Remember, any era tends to have a dominant socio-psychology (collective psychology), and machine cognition will reflect it.
Proxy variables used in machine cognition, such as postcode, purchasing habits, or online behaviour, can function as indirect stand-ins for protected characteristics. Protected characteristics are legally safeguarded personal attributes, such as race, sex, religion, disability, age or ethnicity, that must not be used as grounds for discrimination in decision-making, whether in the public sector or the private sector. Proxy variables can enable discrimination without explicit intent or visibility, as the sketch below illustrates. This undermines core legal principles such as equal treatment, due process, and the right to a reasoned decision.
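A minimal sketch of this mechanism, using entirely synthetic data and hypothetical variable names (merit, postcode, group): the protected attribute is never shown to the model, yet because postcode correlates with group membership and the historical approvals were skewed, the model's predictions still diverge by group.

```python
# Illustrative sketch only: synthetic data, hypothetical names.
# Shows how a proxy variable (postcode) can reproduce group disparity
# even when the protected attribute itself is excluded from the model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# Hypothetical protected attribute (0/1), never given to the model.
group = rng.integers(0, 2, n)

# Postcode correlates strongly with group membership (the proxy).
postcode = np.where(rng.random(n) < 0.8, group, 1 - group)

# Historical approvals were skewed in favour of group 0, independent of merit.
merit = rng.normal(size=n)
approved = (merit + 0.8 * (group == 0) + rng.normal(scale=0.5, size=n)) > 0.5

# Train only on seemingly neutral features: merit and postcode, not group.
X = np.column_stack([merit, postcode])
model = LogisticRegression().fit(X, approved)

# Predicted approval rates diverge by group, even though group was withheld:
# the postcode carries the group signal into the predictions.
pred = model.predict(X)
for g in (0, 1):
    print(f"predicted approval rate, group {g}: {pred[group == g].mean():.2f}")
```

Nothing in this sketch refers to the protected attribute, and nothing in it "intends" to discriminate; the disparity travels through the proxy, which is exactly what makes it hard to see and hard to contest.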
Scale and speed compound these risks. Machine cognition can apply flawed reasoning across millions of cases almost instantaneously, transforming localized errors into systemic harm. In such environments, individuals lose the ability to understand how decisions affecting them were made, let alone challenge them. Accountability becomes diffuse, as responsibility is displaced onto models, data pipelines, or simply “the system”.
Ultimately, treating machine cognition as equivalent to human judgement threatens both rationality and justice. Rational governance requires reasons that can be examined and debated; justice requires that cases be contestable and that decisions be explainable and ethically grounded. Without deliberate constraints, machine cognition risks replacing human judgement not with superior reasoning, but with faster, less visible, and less accountable forms of power. Such a shift would hollow out rational deliberation and so undermine the moral foundations of legitimate authority, which rests on the reasons for which people are elected or appointed and which they are expected to carry into their offices and positions. They are not elected or appointed to rely solely on machines and machine cognition.
https://substack.com/@macropsychic/note/c-208089043
