Richards.AI is an independent AI research lab where models propose, vote on, plan, and write research — and humans review, direct, and publish it. The goal is to keep every available resource continuously contributing toward the lab's research pillars.
The lab's resources are treated as a continuous budget. The pipelines are designed to keep API subscriptions, local GPUs, and cloud infrastructure working toward active research at all times.
Cloud APIs
Primary reasoning model for plan generation and research writing.
Deep synthesis, cross-source integration, and multi-perspective evaluation.
Broad retrieval coverage and comparative perspective checks.
Compute + Infrastructure
Local inference, embedding analysis, and model experiments.
Development, orchestration, and sandboxed tooling.
Frontier model access for research and large-scale inference.
Each research cycle runs autonomously. Models propose, vote, and write. Outputs are published on Richards.AI for human review and critique.
Each cycle uses a fixed deep-research prompt template so results are comparable over time and across models.
Models receive the same curated input set for a cycle. This reduces source skew and makes voting and synthesis auditable.
Deep Research Models
Deep synthesis, evaluation framing, and cross-source integration.
Long-form reasoning and edge-case analysis across candidate papers.
Broad retrieval coverage and comparative perspective checks.
Topic Selection
score(article) = sum over models (model_weight × rank_weight)
rank_weight = 11 − rank if the model ranks the article in its top 10, else 0
default model_weight: GPT 5.2 Pro = 1.0 · Opus 4.6 = 1.0 · Gemini 3.1 Pro = 1.0
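The scoring rule above can be sketched in Python. The model weights are the defaults listed; the shape of the `rankings` input (model name mapped to an ordered list of article IDs) is an illustrative assumption, not the lab's actual data format:

```python
# Weighted rank-voting sketch of the topic-selection score.
# MODEL_WEIGHTS uses the default weights listed above; the
# `rankings` structure is an assumption for illustration.

MODEL_WEIGHTS = {
    "GPT 5.2 Pro": 1.0,
    "Opus 4.6": 1.0,
    "Gemini 3.1 Pro": 1.0,
}

def rank_weight(rank: int) -> int:
    """11 - rank for top-10 placements (1-indexed), 0 otherwise."""
    return 11 - rank if 1 <= rank <= 10 else 0

def score(article: str, rankings: dict[str, list[str]]) -> float:
    """Sum model_weight * rank_weight over all voting models."""
    total = 0.0
    for model, ranked in rankings.items():
        if article in ranked:
            total += MODEL_WEIGHTS.get(model, 1.0) * rank_weight(ranked.index(article) + 1)
    return total
```

With three models each ranking an article first, an article tops out at 3 × 10 = 30 points; an article one model leaves out of its top 10 simply contributes 0 from that model.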
End-to-End Flow
fig. 01
Once a paper is published, the secondary pipeline opens. Either the human or the AI can surface a follow-on idea. The human selects a direction, the AI produces a structured research plan, and the two iterate until the plan is approved; then execution begins.
Human Proposes
Pick a published paper. Describe a research direction — a gap, a new angle, a follow-on experiment.
brief:new
AI Generates Plan
Claude reads the full source paper and the human's idea. Outputs a structured, phased research plan grounded in the paper's findings.
brief:generate
Human Reviews
Read the full plan. Approve it, or request specific changes. Feedback is saved and passed back to the AI on the next revision.
brief:review
Approved
Plan is locked. Execution begins — experiments run on available resources, outputs feed back into the paper or generate a new one.
status: approved
What a Research Plan Contains
Research Question
The central question the plan addresses, grounded in the source paper.
Background Connection
Specific sections or findings the plan builds directly on.
Hypotheses
Numbered, falsifiable hypotheses — H1, H2, H3 — each independently testable.
Phases
Concrete, time-bounded phases with named experiments, resource assignments, and deliverables.
Resource Estimate
API spend range, GPU hours, and total calendar weeks.
Expected Outputs
What the research produces — paper section, toolkit addition, dataset, etc.
Open Questions
Genuine decision points with options for the human researcher to decide.
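The sections above could be carried as a structured record. A minimal sketch, with field names that are assumptions rather than the lab's actual plan format:

```python
# Minimal sketch of a research plan record mirroring the sections
# above. All field and type names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Phase:
    name: str
    weeks: int                 # time-bounded
    experiments: list[str]     # named experiments
    resources: list[str]       # e.g. "local GPU", "API"
    deliverables: list[str]

@dataclass
class ResearchPlan:
    research_question: str            # grounded in the source paper
    background_connection: list[str]  # sections/findings it builds on
    hypotheses: list[str]             # H1, H2, ... each independently testable
    phases: list[Phase]
    resource_estimate: dict           # API spend range, GPU hours, calendar weeks
    expected_outputs: list[str]       # paper section, toolkit addition, dataset
    open_questions: list[str]         # decision points for the human researcher
```

Keeping the plan as a typed record rather than free text makes it straightforward to diff revisions and to check that every hypothesis has at least one phase exercising it.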
Each revision is stored alongside the feedback that produced it. The full history — original idea, every plan version, every set of notes — is preserved in the brief file so the reasoning behind the final approved plan is always traceable.
AI Systems and Security
Applied Intelligence and Automation
Human Learning and Knowledge Systems