Structural Antifascism Through Architecture: A Local-First Framework for Child Safety, Digital Sovereignty, and Educational Autonomy

Author: Branko May Trinkwald
Affiliation: Crumbforest Research Initiative
Date: February 2026

Abstract

This paper introduces Structural Antifascism through Architecture, a local-first computational and pedagogical framework designed to protect children, communities, and civil society from centralized data extraction, algorithmic opacity, and authoritarian drift. The approach leverages offline-first AI systems, containerized transparency, verifiable logs, and decentralized hardware architectures (e.g., Pelicase units built on Raspberry Pi 5, local LLMs, and local vector search).

Unlike cloud‑centric infrastructures—which rely on black-box decision-making, behavioral tracking, and platform dependencies—the proposed model establishes trust boundaries through spatial containment, transparency-by-default, and full reproducibility of system behavior. We argue that antifascist properties must be embedded into the architecture rather than into intentions, policies, or platforms. We present a reproducible reference implementation deployed in humanitarian and educational contexts, including refugee camps and schools, where affordability, autonomy, and safety are paramount.


1. Introduction

The global adoption of AI in education has led to unprecedented levels of data extraction, behavioral inference, and opaque algorithmic decision-making. Existing systems typically operate through centralized cloud infrastructures, creating asymmetrical power relations between users (children, educators, communities) and providers (corporations, states). This paper proposes an alternative: a local-first, transparent, verifiable architecture that inherently restricts the use of collected data for surveillance, profiling, or political manipulation.

The central thesis is:

Antifascism cannot rely on governance or promises; it must be built into the architecture.

We define structural antifascism as the design of systems that cannot be repurposed for authoritarian control because they:
- contain no centralized data repositories,
- provide no behavioral surveillance interface,
- maintain no remote dependency,
- expose full system operation through verifiable logs,
- enable local communities—not corporations—to govern computation.


2. Background and Motivation

2.1 Cloud AI as Structural Risk

Cloud AI platforms require:
- persistent user identification,
- behavioral telemetry,
- centralized model execution,
- opaque inference pipelines,
- remote corporate control.

These properties make such architectures structurally compatible with:
- authoritarian governance,
- discriminatory automated decision-making,
- centralized misinformation control,
- commodification of children’s questions and behaviors.

2.2 Learning, Safety, and Agency

Children learn through exploration: they must be able to ask questions without being profiled. Cloud architectures, which log and attribute every query to an identified user, violate this requirement.

A local-first system allows:
- anonymous inquiry,
- log-based reproducibility,
- physical-space trust boundaries,
- real-time transparency.

This is key to:
- protecting vulnerable populations,
- reducing institutional risk,
- enabling NGOs to deliver safe digital education without external dependencies.


3. Methodology: Architectural Principles

3.1 Local Execution Boundary

All computation occurs inside a physical container (“Pelicase Unit”) consisting of:
- Raspberry Pi 5 or equivalent,
- local LLM execution (Ollama),
- local vector database (Qdrant),
- local network (WLAN Access Point),
- zero external connectivity (optional solar operation).
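As an illustration of the local execution boundary, the following sketch queries the unit's Ollama endpoint over loopback. The model name "llama3.2" and the probe logic are assumptions of this sketch; port 11434 is Ollama's default, and the only network involved is the unit's own.

```shell
#!/bin/sh
# Sketch: query the local Ollama endpoint from inside the Pelicase boundary.
# The model name "llama3.2" is an assumption; substitute whichever model has
# been pulled locally. Port 11434 is Ollama's default listening port.
ask_local() {
  url="http://localhost:11434"
  command -v curl >/dev/null 2>&1 || { echo "curl not installed"; return 0; }
  if curl -s --max-time 2 "$url/" >/dev/null 2>&1; then
    # /api/generate is Ollama's completion endpoint; stream=false returns
    # one JSON object instead of a token stream.
    curl -s "$url/api/generate" \
      -d '{"model": "llama3.2", "prompt": "What is a log file?", "stream": false}'
  else
    echo "ollama not reachable at $url (by design it serves only the local network)"
  fi
}

ask_local
```

Because the endpoint binds to the local network only, the same request issued from outside the physical container simply fails: the trust boundary is spatial, not contractual.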

3.2 Transparency and Verifiability

All operations produce local logs:
- system services (journalctl),
- NGINX access logs,
- container logs,
- user session logs,
- LLM inference logs.

These logs are:
- unencrypted,
- inspectable,
- auditable by teachers, parents, and NGOs.
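To make the transparency claim concrete, the sketch below pairs the audit commands a reviewer might run on the unit (service and path names follow the reference stack) with a small helper that summarizes requests per client in an NGINX-style access log. The helper is illustrative and not part of the deployed scripts.

```shell
#!/bin/sh
# Transparency sketch: everything a teacher needs in order to audit the
# unit is a shell and read access to plain-text logs.

# On the unit itself (shown for reference; these require the live services):
#   journalctl -u ttyd --since today        # web-terminal service log
#   tail -n 50 /var/log/nginx/access.log    # recent local requests

requests_per_client() {   # $1 = access log in NGINX combined format
  awk '{ count[$1]++ } END { for (ip in count) print ip, count[ip] }' "$1" | sort
}

# Demonstration with a synthetic two-line log:
log=$(mktemp)
printf '%s\n' \
  '192.168.4.17 - - [01/Feb/2026:10:00:00 +0000] "GET /lesson1 HTTP/1.1" 200 512' \
  '192.168.4.17 - - [01/Feb/2026:10:00:05 +0000] "GET /lesson2 HTTP/1.1" 200 311' \
  > "$log"
requests_per_client "$log"   # → 192.168.4.17 2
rm -f "$log"
```

Because the logs are unencrypted plain text, the audit toolchain is nothing more exotic than `awk`, `grep`, and `tail` — tools the learners themselves are taught in the BashPanda lessons.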

3.3 No Data Extraction

The architecture contains:
- no cloud APIs,
- no telemetry,
- no tracking,
- no external sync.

Thus:
- nothing can be sold,
- nothing can be exfiltrated at scale,
- nothing can be profiled,
- nothing can be subpoenaed from a third-party provider.
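Crucially, the no-extraction claim is itself verifiable on-device rather than taken on trust. A minimal audit sketch, assuming `ss -tn`-style input, flags any established TCP connection whose peer lies outside loopback or the RFC 1918 private ranges:

```shell
#!/bin/sh
# Audit sketch: flag established TCP connections to non-private peers.
# Reads `ss -tn`-style lines on stdin; field 5 is the peer address.
# Allowed prefixes are loopback plus the RFC 1918 private ranges.
audit_peers() {
  awk 'NR > 1 {
    peer = $5
    sub(/:[0-9]+$/, "", peer)   # strip the port
    if (peer !~ /^(10\.|127\.|192\.168\.|172\.(1[6-9]|2[0-9]|3[01])\.)/) {
      print "EXTERNAL: " peer
      bad = 1
    }
  } END { exit bad }'
}

# On the unit: ss -tn state established | audit_peers
# Demonstration with synthetic local-only traffic:
printf '%s\n' \
  'State Recv-Q Send-Q Local:Port Peer:Port' \
  'ESTAB 0 0 192.168.4.1:443 192.168.4.17:52114' \
  | audit_peers && echo "no external peers"
```

On a correctly configured unit the audit prints nothing and exits zero; any `EXTERNAL:` line is immediate, inspectable evidence that the boundary has been breached.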

3.4 Pedagogical Integration

Children learn security and autonomy via:
- BashPanda lessons,
- mission-based learning (Crumbmissions),
- verifiable exercises (permissions, logs, file ownership),
- offline-first “ask, verify, repeat” methodology.

This converts “digital literacy” into “computational agency”.
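A Crumbmission-style "ask, verify, repeat" exercise might look like the following sketch. The filename and phrasing are illustrative, not drawn from the real lesson set; the pattern — make a claim about the system, then verify it with standard tools — is the point.

```shell
#!/bin/sh
# Exercise sketch ("ask, verify, repeat"): the learner creates a private
# file, predicts its permission bits, then checks the prediction.
# Filenames here are illustrative, not from the actual lessons.
set -eu

workdir=$(mktemp -d)
cd "$workdir"

touch diary.md
chmod 600 diary.md                 # step 1 (ask): "only I can read this"

# step 2 (verify): does reality match the claim?
mode=$(stat -c '%a' diary.md 2>/dev/null || stat -f '%Lp' diary.md)
if [ "$mode" = "600" ]; then
  echo "verified: diary.md is private ($mode)"
else
  echo "mismatch: expected 600, got $mode"
fi

cd / && rm -rf "$workdir"
```

Every claim in the lesson is checkable by the child with the same commands an administrator would use, which is precisely the shift from literacy to agency.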


4. Implementation

4.1 Multi-User TTY Shell Access

Each learner receives:
- isolated UNIX user account,
- isolated TTYD web terminal,
- per-user authentication (htpasswd),
- per-user directory with strict permissions (700),
- safe “Passkante” scenarios for failure-based learning.
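Provisioning one learner along these lines can be sketched as follows. The htpasswd path and default account name are assumptions of this sketch; it defaults to a dry run that only prints the plan, so the privileged commands execute only when explicitly enabled as root on the unit.

```shell
#!/bin/sh
# Provisioning sketch for one learner account, following the scheme above:
# isolated UNIX user, 700 home directory, htpasswd entry for the TTYD proxy.
# Defaults to a dry run that prints the plan; on the unit, run as root with
# DRY_RUN=0. The htpasswd path and account name are assumptions.
set -eu

LEARNER="${LEARNER:-panda01}"
HTPASSWD_FILE="/etc/nginx/ttyd.htpasswd"
DRY_RUN="${DRY_RUN:-1}"

run() {
  if [ "$DRY_RUN" = "0" ]; then "$@"; else echo "DRY-RUN: $*"; fi
}

run useradd -m -s /bin/bash "$LEARNER"        # isolated account
run chmod 700 "/home/$LEARNER"                # strict per-user directory
run htpasswd -B -b "$HTPASSWD_FILE" "$LEARNER" "changeme"   # bcrypt login
```

The dry-run default is itself a pedagogical choice: a learner can read exactly what the script would do before anyone grants it the authority to do it.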

4.2 Lesson System

Educational content lives in:
- /opt/crumbforest/bashpanda/
- /opt/crumbforest/crumbmissions/

All files are human-readable Markdown.

4.3 Deployment Scripts

Three auditable deployment scripts:
- check.sh – baseline verification,
- test.sh – reproducible validation suite,
- make.sh – full deployment automation.

All scripts enforce:
- deterministic behavior,
- transparent execution,
- no external dependencies.
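The shape of the baseline verification can be sketched as below. The binary list reflects the reference stack described above and is an assumption of this sketch; the real check.sh may verify more (permissions, services, log paths).

```shell
#!/bin/sh
# Sketch of a check.sh-style baseline verification: confirm that the tools
# the unit depends on are installed, and report a summary. The binary list
# is an assumption based on the reference stack; adjust per deployment.
check_bins() {
  missing=0
  for bin in "$@"; do
    if command -v "$bin" >/dev/null 2>&1; then
      echo "OK      $bin"
    else
      echo "MISSING $bin"
      missing=$((missing + 1))
    fi
  done
  echo "baseline check: $missing component(s) missing"
}

check_bins sh awk nginx ttyd ollama
# On a healthy unit every line reads OK; a make.sh-style deployment script
# would treat any MISSING entry as fatal before continuing.
```

Because the check is plain POSIX shell with no network access, it is deterministic and auditable line by line — the same properties demanded of the system it verifies.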

4.4 Cost Structure

A complete Pelicase setup costs:
- €500 one-time,
- vs. €3,650/year for cloud-based “AI classroom” subscriptions.

Zero recurring cost → structural independence.
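The break-even point follows directly from these figures: €3,650 per year is €10 per day, so the one-time unit amortizes in 50 days. As a one-line check:

```shell
# Back-of-envelope break-even: €500 one-time vs. €3,650/year subscription.
# 3,650 / 365 = 10 €/day, so the unit pays for itself in 500 / 10 = 50 days.
awk 'BEGIN { printf "break-even after %.0f days\n", 500 / (3650 / 365) }'
# → break-even after 50 days
```

Every day of operation beyond the seventh week is, financially, a structural argument against the subscription model.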


5. Structural Antifascism as System Property

We define a system as structurally antifascist when:

  • It cannot centralize user data.
    → Local execution only.
  • It cannot implement mass surveillance.
    → No telemetry, no analytics.
  • It cannot impose opaque decisions.
    → Logs for every inference.
  • It cannot be remotely shut down.
    → No cloud dependency.
  • It cannot be repurposed for coercion.
    → No identity binding, no tracking.
  • It empowers the most vulnerable by default.
    → Local autonomy → local ownership → local knowledge.

These properties emerge from architecture, not policy.


6. Case Study: Nakivale Refugee Settlement (Uganda)

The Pelicase unit has been deployed in an environment with:
- unstable infrastructure,
- limited connectivity,
- high need for safe digital learning.

Outcomes:
- successful offline operation,
- reproducible AI responses,
- child-safe learning environment,
- zero privacy risk,
- inclusion of community stakeholders.


7. Discussion

The architecture resolves fundamental contradictions in cloud-based educational AI:
- child safety vs. data markets
- learning vs. profiling
- agency vs. dependency
- trust vs. opacity

Local-first design transforms learning environments into:
- verifiable systems,
- democratic computational spaces,
- antifragile community infrastructure.


8. Conclusion

This paper demonstrates that antifascist properties can and must be embedded at the architectural level. The proposed local-first system establishes a new model for safe, transparent, verifiable, and community‑owned AI in education and humanitarian work. NGOs can adopt this architecture without licensing fees, vendor lock-in, or surveillance risks—offering a technologically grounded alternative to cloud-driven digital governance.


Acknowledgments

This work is based on 23 years of robotics education, field deployments in Africa, and ongoing development within the Crumbforest community.
