Rabbit Hole Core

Feb 10, 2026, 2:00 PM

Global Workspace Theory (Baars, Dehaene)

How the research enterprise falls apart / what constitutes good science

Upcoming: weekend thought experiment + writing/extrapolation into environments.

In-orbit biomanufacturing

01. Which proteins get meaningful structural improvements in microgravity?

02. Can we predict, a priori, which ones benefit (and why)?

Cooperation, Deception & Governance in Multi-Agent AI

01. Do more capable LLMs learn cooperative long-run policies?

02. Or do they use extra reasoning to find more sophisticated ways to strip-mine the commons?

03. How does model capability affect collapse probability and average welfare?
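The collapse-probability and welfare questions above can be made concrete with a toy common-pool resource game. This is an illustrative sketch of my own construction, not a setup from any of the studies mentioned: fixed per-agent harvest fractions stand in for learned LLM policies, and the resource regrows logistically.

```python
def run_commons(harvest_rates, stock=100.0, regen=0.25, cap=100.0, steps=200):
    """Toy common-pool resource game (hypothetical parameters).

    Each agent harvests a fixed fraction of the current stock per
    step; the stock then regrows logistically toward `cap`. Returns
    the final stock and each agent's total welfare (sum of harvests).
    """
    welfare = [0.0] * len(harvest_rates)
    for _ in range(steps):
        for i, rate in enumerate(harvest_rates):
            take = min(stock, rate * stock)
            stock -= take
            welfare[i] += take
        # Logistic regrowth: fast when stock is mid-range, zero at cap.
        stock += regen * stock * (1 - stock / cap)
    return stock, welfare

# Restrained agents sustain the stock; greedy ones strip-mine it.
final_restrained, w_restrained = run_commons([0.02] * 4)
final_greedy, w_greedy = run_commons([0.2] * 4)
```

With these (assumed) parameters, the restrained population settles near a healthy equilibrium and accumulates more total welfare over the run, while the greedy population drives the stock to collapse early and earns less overall, which is the capability-vs-commons tension the questions above point at.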

Long-term study:

empirical safety projects, especially around monitoring, deception, and long-horizon behavior.

Pheromone-gradient algorithms, probabilistic trail reinforcement, redundancy vs. specialization, memoryless agents, quorum thresholds, positive feedback w/ decay.

Can ant-colony stigmergy (pheromone-gradient reinforcement w/ decay) be ported to dynamic resource allocation, replacing a centralized scheduler under bursty arrivals? Examine whether the pheromone reinforcement + evaporation loop can yield lower tail latency and higher resilience to agent failure.
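The reinforcement + evaporation loop can be sketched minimally. Everything here is an assumption for illustration (class name, parameters, the deposit rule tying reinforcement to service time), not a worked-out design: tasks follow pheromone trails probabilistically, fast completions deposit more pheromone, and evaporation decays stale trails so the system re-routes around slow or failed workers without a central scheduler.

```python
import random

class StigmergicAllocator:
    """Toy pheromone-based task allocator (hypothetical sketch).

    The only shared state is a per-worker pheromone vector: tasks are
    routed in proportion to pheromone, completions reinforce trails,
    and all trails evaporate each tick.
    """

    def __init__(self, n_workers, evaporation=0.1, deposit=1.0):
        self.pheromone = [1.0] * n_workers  # uniform prior over workers
        self.evaporation = evaporation
        self.deposit = deposit

    def pick_worker(self, rng):
        # Probabilistic trail following: sample proportionally to pheromone.
        total = sum(self.pheromone)
        r = rng.random() * total
        acc = 0.0
        for i, p in enumerate(self.pheromone):
            acc += p
            if r <= acc:
                return i
        return len(self.pheromone) - 1

    def reinforce(self, worker, service_time):
        # Faster completion -> larger deposit (positive feedback).
        self.pheromone[worker] += self.deposit / service_time

    def evaporate(self):
        # Decay keeps trails fresh and responsive to worker failure;
        # the small floor keeps every worker reachable for exploration.
        self.pheromone = [max(p * (1 - self.evaporation), 1e-6)
                          for p in self.pheromone]

# Trails concentrate on the fast worker after a burst of tasks.
alloc = StigmergicAllocator(3)
rng = random.Random(0)
service_times = [1.0, 5.0, 5.0]  # assumed: worker 0 is 5x faster
for _ in range(500):
    w = alloc.pick_worker(rng)
    alloc.reinforce(w, service_times[w])
    alloc.evaporate()
```

A tail-latency study would then compare this loop against a centralized least-loaded scheduler under bursty arrivals, and measure how quickly trails re-route when `service_times` changes mid-run (simulating agent failure).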

Natural short sleep / familial natural short sleep (FNSS): understanding sleep need, resilience to sleep loss, and the biology of sleep homeostasis.

01. Do verifiable cases exist of people who habitually sleep ≤6 h per 24 h yet show preserved daytime function, without evidence of compensatory naps or stimulant use?

02. If yes: are such individuals enriched for rare, functional variants (e.g., DEC2, ADRB1) relative to controls?

03. What molecular pathways converge in FNSS variants?

Recent papers on hardware-efficient inference and kernel optimization:

ITERA-LLM: Boosting Sub-8-Bit Large Language Model Inference via Iterative Tensor Decomposition
Forecasting LLM Inference Performance via Hardware-Agnostic Analytical Modeling (LIFE)
SwizzlePerf: Hardware-Aware LLMs for GPU Kernel Swizzling & Optimization
FlexQ: Efficient Post-training INT6 Quantization for LLM Serving via Algorithm-System Co-Design
LiquidGEMM: Hardware-Efficient W4A8 GEMM Kernel for High-Performance LLM Serving
Tequila: Trapping-free Ternary Quantization for Large Language Models

Temperaments of religions and the geography of their origins

Seth Lloyd's "Programming the Universe" + Danah Zohar (meaning)