The current Transformer architecture requires dense matrix multiplications across all parameters for every token: every weight participates whether or not it is relevant to that token. This is computationally insane. Biological neural networks are over 99% sparse; neurons only fire when needed. Research from Numenta (Hierarchical Temporal Memory), Liquid Neural Networks (MIT), and mixture-of-experts models (like GPT-4's rumored architecture) all points in the same direction: sparse activation patterns that route computation dynamically. The potential efficiency gains are 10-100x. The question isn't if, but when. Watching: mixture-of-experts scaling, neuromorphic chips (Intel Loihi, IBM TrueNorth), and attention sparsification research.
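To make "sparse activation that routes computation dynamically" concrete, here is a minimal top-k mixture-of-experts sketch in NumPy. Everything in it (the toy dimensions, the `moe_forward` helper, the random weights) is illustrative and not drawn from any real model; the point is only that each token touches `top_k` of `n_experts` expert matrices instead of all of them.

```python
import numpy as np

rng = np.random.default_rng(0)
n_tokens, d_model, n_experts, top_k = 4, 8, 16, 2

tokens = rng.standard_normal((n_tokens, d_model))
router = rng.standard_normal((d_model, n_experts))            # router weights
experts = rng.standard_normal((n_experts, d_model, d_model))  # one matrix per expert

def moe_forward(x):
    scores = x @ router                            # (n_tokens, n_experts)
    top = np.argsort(scores, axis=1)[:, -top_k:]   # top-k expert indices per token
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        # softmax over only the selected experts' scores
        s = scores[t, top[t]]
        w = np.exp(s - s.max())
        w /= w.sum()
        for weight, e in zip(w, top[t]):
            out[t] += weight * (x[t] @ experts[e])
    return out

y = moe_forward(tokens)
# Per token, only top_k / n_experts = 2/16 of the expert compute actually runs.
```

A dense layer of the same total capacity would multiply every token through all 16 expert matrices; the router makes that cost proportional to `top_k` instead, which is the source of the claimed efficiency gains.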
The foundational thesis of AIA Limited. Traditional software is a tool: you buy it, configure it, use it. AI Agents are different: they have ongoing operational costs (tokens, compute), they improve over time (fine-tuning, prompt refinement), and they deliver measurable value per task. This makes them economically equivalent to employees. A business should evaluate an AI Agent the same way it evaluates a hire: What's the annual cost? What value does it produce? What's the ROI? At $15k/year in API costs, an Agent that automates $150k worth of human labor is a 10x return. Companies will have 'Agent headcounts' alongside human headcounts. AIA is already operating this model: AI Employees with defined roles, costs, and revenue targets. Proof: aia.works is live, revenue-generating, and built entirely on this thesis.
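The evaluate-an-Agent-like-a-hire math above can be written out directly. This is a sketch mirroring the numbers in the text; `agent_roi` is an illustrative helper, not part of any AIA product.

```python
def agent_roi(annual_cost: float, annual_value: float) -> float:
    """Value produced per dollar spent per year, evaluated like a hire."""
    return annual_value / annual_cost

# The example from the thesis: $15k/year in API costs automating $150k of labor.
roi = agent_roi(annual_cost=15_000, annual_value=150_000)
print(f"{roi:.0f}x return")  # → 10x return
```

The same function works for a human hire (salary in, value out), which is the point: one ROI lens across both kinds of headcount.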