Active Projects
AI Quant Pipeline (Speculative Fund)
Status: Design phase complete. All decisions locked. Ready to break into tasks. Docs:
- /home/ben/ava/research/ai-quant-pipeline-v2.md — system architecture
- /home/ben/ava/research/signal-agent-evals-v2.md — all-Haskell eval framework
Key decisions (locked):
- Standalone system in Omni.Fund.Quant — NOT integrated with Invest.hs
- All Haskell, no Python anywhere (data, math, evals)
- Speculative fund only (~5% of portfolio, ~$125K)
- Asset universe v1: 11 sector ETFs + 4 cross-asset (SPY, TLT, GLD, BTC) + VIX for signal = 16 instruments
- Linear algebra: pure Haskell for v1 (revisit hmatrix at 100+ assets)
- Market data: Twelve Data (free 800 req/day)
- Signal storage: JSONL for v1, SQLite later
- Rebalance frequency: weekly
- Eval-first: signals must pass walk-forward IC > 0.02 before affecting portfolio
- 4-phase implementation: Data → Alpha → Agents → Live trading
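The eval-first gate above can be sketched as follows. This is a Python illustration only (the production pipeline is all-Haskell per the locked decisions); the function names, the choice of Spearman rank correlation as the IC, and the fold structure are assumptions, not the actual spec:

```python
from statistics import mean


def rank(xs):
    """Rank of each value in ascending sorted order (ties broken by position)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for pos, i in enumerate(order):
        r[i] = float(pos)
    return r


def spearman_ic(signals, fwd_returns):
    """Spearman IC: Pearson correlation of the two rank vectors."""
    rs, rr = rank(signals), rank(fwd_returns)
    ms, mr = mean(rs), mean(rr)
    cov = sum((a - ms) * (b - mr) for a, b in zip(rs, rr))
    var_s = sum((a - ms) ** 2 for a in rs)
    var_r = sum((b - mr) ** 2 for b in rr)
    return cov / (var_s * var_r) ** 0.5


def passes_walk_forward_gate(folds, threshold=0.02):
    """folds: list of (signals, forward_returns) pairs, one per
    out-of-sample walk-forward window. A signal may affect the
    portfolio only if its mean out-of-sample IC exceeds the threshold."""
    ics = [spearman_ic(s, r) for s, r in folds]
    return mean(ics) > threshold
```

The point of the gate is that the IC is averaged across purely out-of-sample windows, so a signal that only fits in-sample never reaches the portfolio.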
Next: Break Phase 1 into tasks when Ben is ready.
Earlier docs (superseded):
- /home/ben/ava/ben/research/ai-quant-pipeline.md — v1, integrated with Invest.hs
- /home/ben/ava/ben/research/signal-agent-evals.md — v1, had Python
Moral Agent Research
Location: ~/ava/ben/research/moral-agent
Status: Phase 3 executing (coding agent), Phase 4 planned
Phase 2 Results (Complete)
- Phase 1a validated: SSM+FiLM hybrid beats baseline by 5%, SSM is load-bearing (~40%)
- No moral salience from pretraining (expected)
- Multi-agent restraint exists but is game-theoretic, not moral
- Transparency fix results (March 16): corrected the transparency condition so agents now share actual SSM hidden states. Results: transparency produces large but seed-dependent effects, with the sign flipping across seeds; the opaque condition is more stable. Conclusion: the behavior is pure game theory plus optimization dynamics, not Levinasian morality.
Phase 3 (In Progress)
Doc: docs/phase3-plan.md
- Replace reward maximization with viability-region maintenance (homeostatic training)
- Experiment A (kill switch): viability vs reward agents in same game
- Experiment B: precariousness gradient
- Experiment C: 2×2 factorial (opacity × objective)
- Coding agent is implementing now
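The core objective swap in Phase 3 can be sketched as below. This is a minimal illustration assuming a box-shaped viability region; the actual formulation lives in docs/phase3-plan.md, and the function name and penalty shape are assumptions:

```python
def viability_signal(state, low, high):
    """Homeostatic training signal: zero anywhere inside the viability
    region, increasingly negative as any internal variable leaves its
    bounds. Unlike reward maximization, there is nothing left to
    maximize once the agent is inside the region -- the objective is
    satisfied, not optimized."""
    penalty = 0.0
    for x, lo, hi in zip(state, low, high):
        if x < lo:
            penalty += lo - x
        elif x > hi:
            penalty += x - hi
    return -penalty
```

Experiment A (kill switch) then puts viability agents and reward agents in the same game and compares behavior; the hypothesis is that the flat interior of this objective removes the pressure to exploit.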
Phase 4 (Planned)
Doc: docs/phase4-plan.md
- Isolate SSM-mediated cooperation as a standalone MARL contribution
- Experiment 4A: memory architecture baselines (SSM vs LSTM vs GRU vs MLP)
- Experiment 4B: representational probing (linear probes on SSM hidden state)
- Experiment 4C: cross-game generalization (IPD, Stag Hunt, Chicken)
- Independent of Phase 3; can run in parallel
- Target: publishable paper on SSM as implicit opponent model in MARL
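Experiment 4B's probing step can be sketched as a plain linear probe fit by gradient descent on stored SSM hidden states. A generic sketch only: the probe target (e.g. opponent's last move), optimizer, and hyperparameters are assumptions, not the plan's choices:

```python
def train_linear_probe(hiddens, targets, lr=0.1, steps=500):
    """Fit y ~ w.h + b by gradient descent on mean squared error.
    If a held-out probe predicts the target well, the SSM hidden
    state linearly encodes that variable -- evidence for an implicit
    opponent model."""
    d = len(hiddens[0])
    w, b = [0.0] * d, 0.0
    n = len(hiddens)
    for _ in range(steps):
        gw, gb = [0.0] * d, 0.0
        for h, y in zip(hiddens, targets):
            err = sum(wi * hi for wi, hi in zip(w, h)) + b - y
            for i in range(d):
                gw[i] += err * h[i]
            gb += err
        for i in range(d):
            w[i] -= lr * gw[i] / n
        b -= lr * gb / n
    return w, b
```

The same probe run on LSTM/GRU/MLP baselines (Experiment 4A) would show whether the opponent-model content is specific to the SSM.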
Key Insights
- Moral agency may require non-optimizer architecture (homeostatic/autopoietic)
- SSM = subcortical affect system, not whole brain. Transformer = neocortex. SSM needs to be “alive”
- Robust finding: SSM-mediated cooperation (0.39-0.46 restraint) in multi-agent is real and consistent across seeds — worth publishing independently of moral thesis
Other
- Insurance review completed (March 16) — Ben is not over-insured, closer to under-insured
- PIL (PodcastItLater) — ongoing product
- Multi-LLM task spec pipeline — planning phase