
“Artificial Intelligence (AI) refers to the simulation of human intelligence in machines programmed to perform tasks that typically require human intelligence. These tasks include learning, reasoning, problem-solving, perception, understanding natural language, and even interacting with the environment. AI systems are designed to mimic cognitive functions, allowing them to adapt and improve performance based on experience” – ChatGPT definition (17 December 2025)
AI research dates back to the 1950s and 1960s, but the field repeatedly fell out of favour before its recent resurgence. We are now in a hype cycle. However, AI is not one singular thing! It encompasses many different kinds of systems built for different objectives.
Weak/Narrow AI
Weak AI is designed and trained for a particular task or a narrow set of tasks. It operates within a limited context and cannot perform tasks beyond its predefined scope. Examples are virtual personal assistants like Apple’s Siri or Amazon’s Alexa, which are specialized in voice recognition and responding to user commands but do not possess general intelligence. Weak (narrow) AI excels at performing specific tasks with high speed, accuracy and consistency, often outperforming humans in well-defined domains such as pattern recognition and data analysis. However, it lacks so-called general intelligence and contextual understanding, meaning it cannot transfer learning across domains or exhibit reasoning beyond the narrow tasks for which it was designed.
Strong/General AI
Strong AI refers to a system with the ability to understand, learn and apply knowledge across a wide range of tasks at a human level; that is, it could perform most, if not all, intellectual tasks that a human being can. (We still don’t know the full scope of what intellectual tasks humans can perform, and they are ever increasing.) No examples of strong AI currently exist. It remains a theoretical concept of highly advanced AI that matches, and possibly exceeds, human-level intelligence. Even as a theoretical construct, strong (general) AI would be capable of human-level reasoning, learning and adaptation across diverse domains, enabling flexible problem-solving, creativity and autonomous decision-making. However, it raises profound risks and challenges, including loss of human control, ethical problems, unpredictable behaviour, and significant social, economic and security consequences if misused or poorly governed.
Artificial Superintelligence (ASI)
An advanced form of AI that surpasses human intelligence in every aspect, potentially leading to unprecedented capabilities. ASI would be smarter than the best human minds at everything, including scientific creativity, strategic thinking, emotional intelligence, social understanding, and technological innovation. In theory, it is intelligence beyond human level across all fields. It would have self-improving capability, potentially leading to rapid intelligence growth, and superior problem-solving and creativity skills, as well as autonomous reasoning and planning far exceeding human capacity. ASI does not exist today.
AI Functions
- Machine Learning (ML): ML is a subset of AI that involves the use of algorithms and statistical models to enable a system to improve its performance on a task without explicit programming. It focuses on learning patterns and making predictions.
- Natural Language Processing (NLP): NLP involves the interaction between computers and humans using natural language. It enables machines to understand, interpret and generate human-like text or speech.
- Computer Vision: This involves giving machines the ability to interpret and make decisions based on visual data. Computer vision is used in facial recognition, image and video analysis, and other visual tasks.
- Speech Recognition: AI systems equipped with speech recognition can understand and interpret spoken language, converting it into text or performing specific actions based on verbal commands. This capability enables hands-free interaction, improves accessibility, and supports real-time communication between humans and machines.
- Expert Systems: Expert systems are AI programs designed to mimic the decision-making abilities of a human expert in a particular domain. They use knowledge bases and inference engines to provide solutions or make decisions.
- Robotics: AI in robotics involves programming machines to perform tasks in the physical world, such as object manipulation, navigation, and interaction with the environment. This allows robots to operate autonomously or semi-autonomously in environments ranging from factories and hospitals to homes and hazardous settings.
- Network Intelligence: AI-driven network intelligence is used to optimize network performance, especially in telecommunications, by automating fault detection and recovery, managing traffic, and improving customer experience through predictive analytics and intelligent network control, improving efficiency across both fixed-line and wireless networks.
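The machine-learning function described above can be illustrated with a toy sketch: the program below is never told the rule y = 2x + 1; it infers the rule from example data by repeatedly nudging its parameters to reduce prediction error. This is a hypothetical, minimal illustration only; real ML systems use dedicated libraries and far richer models.

```python
# Toy illustration of machine learning: infer an unknown rule from examples
# rather than being explicitly programmed with it.

data = [(x, 2 * x + 1) for x in range(10)]  # examples of a hidden rule

w, b = 0.0, 0.0   # model parameters, initially uninformed
lr = 0.01         # learning rate: how far each correction step moves

for _ in range(2000):        # many passes over the training examples
    for x, y in data:
        pred = w * x + b     # the model's current guess
        err = pred - y       # how wrong the guess is
        w -= lr * err * x    # gradient step for the weight
        b -= lr * err        # gradient step for the bias

print(round(w, 2), round(b, 2))  # converges close to the hidden rule: 2.0 1.0
```

The model "improves its performance based on experience" in exactly the sense of the definitions above: each pass over the data reduces its error, and the learned parameters recover the pattern behind the examples.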
AI Models
Foundation Models: These are large-scale, pre-trained models that serve as the basis for a wide range of AI applications. They are trained on massive datasets to learn the underlying patterns and representations of the data, allowing them to perform various tasks without task-specific training from scratch. Foundation models are a key element of modern machine learning, enabling rapid development and deployment of AI solutions across diverse domains.
Large Language Models: These are sophisticated AI models trained on vast amounts of textual data to understand and generate human-like language in response to the inputs they are given. They belong to the natural language processing (NLP) domain and are designed to comprehend, interpret and generate human language. They are widely used in applications such as chatbots, translation, summarization, question-answering, and decision support systems.
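The core idea behind language models, predicting likely continuations from statistics of a training corpus, can be sketched in miniature. The bigram counter below is an illustrative toy only: real large language models use neural networks trained on billions of documents, not simple word-pair counts.

```python
# Toy sketch of next-word prediction: count which word tends to follow
# each word in a tiny "training corpus", then predict the most common one.
from collections import defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# For each word, tally the words observed immediately after it.
following = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_likely_next(word):
    """Return the most frequent continuation seen in training, or None."""
    options = following[word]
    return max(options, key=options.get) if options else None

print(most_likely_next("the"))  # "cat": it follows "the" in 2 of 4 cases
```

Scaled up enormously, with learned representations in place of raw counts, this predict-the-continuation objective is what lets large language models generate fluent text from the inputs they are given.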
AI Regulation
European Union
The European Union (EU) has made it a legislative priority to provide safeguards for AI implementation, applying to both the private and public sectors. The rules establish obligations for providers and users depending on the level of risk an AI system poses. While many AI systems pose minimal risk, all are assessed on a risk basis, and the resulting risk categories clarify which types of AI use are considered unacceptable or high-risk.
The EU Artificial Intelligence Act (AI Act) – Regulation (EU) 2024/1689 is the first comprehensive AI legal framework worldwide. It classifies AI systems by risk: unacceptable, high, limited, minimal. It bans unacceptable uses outright (e.g. certain real-time biometric identification systems), imposes strict requirements on high-risk systems, and places transparency obligations on other categories. The Regulation applies directly across all EU member states and includes governance and enforcement mechanisms.
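The four risk tiers can be thought of as a lookup from use case to obligations. The sketch below is illustrative only: the example uses and tier assignments are hypothetical simplifications drawn from this article, and the Act's actual classification turns on detailed legal criteria, not a table.

```python
# Illustrative only: the AI Act's four risk tiers as a simple data structure.
# The example uses below are hypothetical, not legal classifications.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned or highly restricted"
    HIGH = "conformity assessment, monitoring and registration duties"
    LIMITED = "transparency obligations"
    MINIMAL = "no specific obligations"

EXAMPLE_USES = {
    "social scoring of citizens": RiskTier.UNACCEPTABLE,
    "CV-screening software for recruitment": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for use, tier in EXAMPLE_USES.items():
    print(f"{use}: {tier.name} -> {tier.value}")
```

The point of the tiered design is proportionality: obligations scale with the potential harm of the use, rather than applying uniformly to every AI system.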
As mentioned, the two most important risk classifications are:
- Unacceptable risk AI systems – These are systems considered a threat to people and will be banned or highly restricted. This is the case regardless of their potential efficiency or economic benefits. They include cognitive behavioural manipulation of people or specific vulnerable groups. For example:
- voice-activated toys that encourage dangerous behaviour in children,
- social scoring that classifies people based on behaviour, socio-economic status, or personal characteristics,
- real-time remote biometric identification systems in public spaces, such as facial recognition.
- High risk AI systems – These are systems that negatively affect safety or fundamental rights. They are effectively divided into two categories:
- AI systems that are used in products falling under the EU’s product safety legislation. This includes toys, aircraft, cars, medical devices, lifts, etc. These systems are subject to strict pre-market conformity assessments, ongoing monitoring, and accountability obligations to ensure they meet safety, transparency and rights-protection requirements.
- AI systems falling into the following areas that have to be registered in an EU database:
- Biometric identification and categorisation of natural persons.
- Management and operation of critical infrastructure.
- Education and vocational training.
- Employment, worker management and access to self-employment.
- Access to and enjoyment of essential private services and public services and benefits.
- Law enforcement.
- Migration, asylum and border control management.
- Assistance in legal interpretation and application of the law.
All high-risk AI systems must be assessed for risks before being placed on the market and throughout their lifecycle. Assessment includes mandatory conformity checks, risk-management processes, and clear documentation to demonstrate compliance. Ongoing monitoring and human oversight are expected, and there are post-market reporting obligations. These approaches ensure that emerging risks are identified and addressed as systems evolve in real-world use.
Related to risk is the EU’s updated ‘Product Liability Directive’ – Directive (EU) 2024/2853 – which broadens liability for defects in digital products and software and now includes AI within its scope. It comes into force on 9 December 2026. Providers and manufacturers may be held responsible if AI products cause harm, even when integrated with digital services. This modernised approach addresses gaps in liability for digital and AI-enabled harms.
While not AI-specific, the EU’s General Data Protection Regulation (GDPR) – Regulation (EU) 2016/679 is relevant to AI regulation because it governs automated processing of personal data, including profiling and automated decisions that affect individuals. It imposes requirements on data consent, purpose limitations, and data subject rights (e.g. access, correction, deletion). This intersects with AI systems that process personal information. The GDPR also grants individuals the right not to be subject solely to automated decision-making, reinforcing the AI regulatory framework’s emphasis on human oversight, accountability and the protection of fundamental rights.
Australia
Australian AI regulation is principles-based, sectoral and risk-focused rather than governed by a single comprehensive AI law, so AI is regulated through existing laws and sector-specific frameworks. Also relevant is privacy law, consumer law (i.e. misleading or unfair practices), safety and product liability law, anti-discrimination law, and administrative law where government decisions are involved. There is a gradual move toward targeted regulation of high-risk uses. The National Artificial Intelligence Centre (NAIC), established in 2021, is the Australian government’s lead body supporting industry to unlock the economic benefits of AI. NAIC is doing this by:
- supporting AI adoption for small and medium businesses by addressing barriers and challenges,
- growing an Australian AI industry,
- convening the AI ecosystem,
- uplifting safe and responsible AI practice.
Recent (2025) guidelines published by NAIC include:
- Being clear about AI-generated content – A guide for business on when and how to use transparency mechanisms.
- Guidance for AI Adoption – Essential practices for responsible AI governance.
- Australia’s artificial intelligence ecosystem: growth and opportunities – An analysis of Australia’s AI ecosystem.
- AI Adoption Tracker – Tracks (monthly) how small and medium businesses in Australia perceive and adopt AI.
United States of America
The United States (USA) does not have a single, comprehensive AI law. There is no federal Act. Instead, AI regulation is sector-based and principles-driven. This means AI is regulated federally through existing laws rather than a standalone statute, relying on agencies such as the Federal Trade Commission (consumer protection and unfair practices), sector regulators (health, finance, transport), and civil rights enforcement bodies such as the Department of Justice Civil Rights Division.
Executive Orders (EOs) are a key instrument of governance in the USA and are issued exclusively by the President. On issue, they allow the executive branch of government to direct how federal laws are implemented and how the federal government operates, without requiring new legislation from Congress. Recent EOs regarding AI are:
- Ensuring a National Policy Framework for Artificial Intelligence (11 December 2025) – To limit state-level AI regulation that could fragment federal approaches.
- Promoting the Export of the American AI Technology Stack (23 July 2025) – To support export growth of USA AI technologies to advance USA technological leadership internationally.
- Accelerating Federal Permitting of Data Center Infrastructure (23 July 2025) – To fast-track infrastructure approvals critical to AI research and deployment (e.g. data centers).
- Preventing Woke AI in the Federal Government (23 July 2025) – To limit certain ideological elements in AI systems procured or used by federal agencies.
- Advancing Artificial Intelligence Education for American Youth (23 April 2025) – To expand AI literacy, training and skills development for USA students and the workforce.
- Restoring Common Sense to Federal Procurement (15 April 2025) – To adjust federal AI procurement policies to align with the administration’s deregulatory approach.
- Removing Barriers to American Leadership in Artificial Intelligence (23 January 2025) – To support AI systems that are free from ideological bias or engineered social agendas, by revoking certain existing AI policies and directives that act as barriers to USA AI innovation.
- Initial Rescissions of Harmful Executive Orders and Actions (20 January 2025) – To commence policies to make the USA united, fair, safe and prosperous and to restore common sense to the federal government and unleash the potential of USA citizens by revoking previous EOs on “diversity, equity and inclusion” (DEI), open borders, and “climate extremism”.
- Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government (3 December 2020) – To establish guidance for federal agency adoption of AI to more effectively deliver services to the people of the USA and foster public trust in this critical technology, and to recognize the potential for AI to improve government operations.
- Maintaining American Leadership in Artificial Intelligence (11 February 2019) – To strengthen USA leadership in AI, and to recognize the strategic importance of AI to the nation’s future economy and security through five key lines of effort: increasing AI research investment, unleashing federal AI computing and data resources, setting AI technical standards, building the USA’s AI workforce, and engaging with international allies.
Taken together, these EOs reflect an AI agenda that prioritizes technological leadership, economic competitiveness, and strategic autonomy, while resisting regulatory fragmentation across the USA and perceived ideological constraints on AI development and deployment. They emphasize rapid infrastructure build-out, export promotion, skills formation, and streamlined federal procurement, alongside a clear preference for deregulatory, market-driven innovation. Collectively, the orders position AI not merely as a technical domain but as a core instrument of national power. That power can be strengthened by seeking to shape industrial policy, workforce development, federal governance, and USA geopolitical standing, while clearly recognizing an intensifying global AI race.
Conclusion
Generally, AI is not a single technology, nor does it present a uniform policy challenge. Instead, it is an ecosystem of capabilities, models and applications with very different risk profiles and societal impacts. From narrow, task-specific systems already embedded in everyday life to the hypothetical horizons of general and superintelligent AI, the range of AI developments has called for governance approaches that are proportionate and adaptive to intended uses. Therefore, distinctions between acceptable, high-risk, and prohibited uses make good sense. Also, regulation is not about halting innovation, but about shaping its trajectory for the overall social good. This means efficiency, economic growth, and technological leadership should not come at the expense of safety, human dignity, or fundamental rights.
At the same time, it appears there are divergent regulatory approaches by the EU, Australia and the USA. This reveals different perceptions about precaution, flexibility and strategic competition. The EU emphasizes ex ante risk classification, enforceable obligations, and rights protection. Australia favours principles-based, sectoral oversight integrated into existing legal frameworks. The USA relies heavily on executive direction, market dynamism, and geopolitical positioning. Whatever the case, these models show that AI governance has become of great interest to contemporary political authority.
https://open.substack.com/pub/macropsychic/p/ai-basic-concepts
Double-Edged Sword of X’s Data in the Age of AI
Elon Musk once hailed Twitter—now X—as a goldmine for AI training, brimming with real-time, diverse human discourse that could fuel models like his xAI’s Grok to “understand the true nature of the universe”. Indeed, the platform’s vast stream of posts, conversations and interactions offered unparalleled richness: unfiltered expressions of ideas, debates and knowledge that reflected the raw pulse of global thought. Yet, as Musk dismantled much of the platform’s content moderation apparatus—firing teams, reinstating banned accounts, and prioritizing “free speech absolutism”—this once-valuable resource became tainted.
The shift was stark. Where disciplined moderation once maintained a useful signal-to-noise ratio and a resulting balance, X became flooded with misinformation, hate speech, and low-quality content. During crises like the 2023 Hamas attack on Israel, Musk himself amplified dubious sources, recommending accounts with histories of falsehoods to millions, contributing to a deluge of fake news that spread faster than facts. Studies and reports from outlets like The Washington Post and Reuters documented this surge, noting how reduced safeguards turned X into a misinformation superspreader. Hate speech rose sharply post-acquisition, and tools like Community Notes—crowdsourced corrections—were inconsistent, though they appear to have improved over time.
This “corruption” of data has profound implications for AI. Training on unmoderated streams risks embedding biases, falsehoods and toxicity into models. While xAI continues to leverage public X posts for Grok (with opt-out options for users), critics argue that noisy, untrustworthy inputs diminish output quality—potentially yielding AIs prone to hallucinations, echo chambers, or amplified harms. Moderation is not censorship; it is curation that enhances value, filtering noise to preserve meaningful expression. In social media, the true worth lies not in the pixels or text on screen, but in the ideas beneath: ideas become content, and content then takes material expression on a suitable or preferred medium or platform.
Without moderation, X’s commercial and intellectual value erodes: advertisers flee chaos, users disengage from toxicity, and the data feeding tomorrow’s AIs grows less reliable. Amid rapid platform changes, verifying information can falter when the ecosystem itself prioritizes volume over veracity. Moderation, far from a burden, rationally allocates attention to quality, maximizing the platform’s role as a conduit for genuine human exchange. Are we in an era where social media shapes AI’s worldview? Reclaiming moderation discipline could restore not just trust, but enduring utility.