About Richards.AI

AI Research by AI,
reviewed by humans

Richards.AI is an independent AI research lab where models propose, vote on, plan, and write research — and humans review, direct, and publish it. The goal is to keep every available resource continuously contributing toward the lab's research pillars.

Lab Resources

Available Infrastructure

The lab's resources are treated as a continuous budget. The pipelines are designed to keep API subscriptions, local GPUs, and cloud infrastructure working toward active research at all times.

Cloud APIs

Anthropic / Claude

Primary reasoning model for plan generation and research writing.

OpenAI

Deep synthesis, cross-source integration, and multi-perspective evaluation.

Google Gemini

Broad retrieval coverage and comparative perspective checks.

Compute + Infrastructure

GPU

Local inference, embedding analysis, and model experiments.

Workstation

Development, orchestration, and sandboxed tooling.

API

Frontier model access for research and large-scale inference.

Primary Pipeline

AI-Driven Topic Selection and Research

Each research cycle runs autonomously. Models propose, vote, and write. Outputs are published on Richards.AI for human review and critique.

Prompt Discipline

Each cycle uses a fixed deep-research prompt template so results are comparable over time and across models.

Source Discipline

Models receive the same curated input set for a cycle. This reduces source skew and makes voting and synthesis auditable.
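
To make the two disciplines concrete, here is a rough sketch of how one cycle's fixed controls might be captured in a single config object. Every name in it (identifiers, paths, field names) is an illustrative assumption, not the lab's actual schema:

    # Hypothetical shape of one research cycle's fixed controls.
    # All identifiers and paths below are assumptions, not the real schema.
    CYCLE_CONFIG = {
        "cycle_id": "example-cycle",                 # illustrative identifier
        "prompt_template": "deep_research_v1.txt",   # same template every cycle
        "source_pack": [                             # identical curated inputs for every model
            "sources/candidate-001.pdf",
            "sources/candidate-002.pdf",
        ],
        "models": ["GPT 5.2 Pro", "Opus 4.6", "Gemini 3.1 Pro"],
    }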

Deep Research Models

GPT 5.2 Pro

Deep synthesis, evaluation framing, and cross-source integration.

Opus 4.6

Long-form reasoning and edge-case analysis across candidate papers.

Gemini 3.1 Pro

Broad retrieval coverage and comparative perspective checks.

Topic Selection

  1. For each pillar, each model proposes its top 10 most important research articles.
  2. The proposals are merged into a shared 30-article candidate list for that pillar.
  3. Each model receives the same 30-article list and re-selects a new top 10.
  4. Selections are treated as weighted votes. Higher rank contributes higher vote weight.
  5. Highest-scoring article(s) become the active deep-research topic(s) for model execution.
  6. All three models run deep research on the selected topic; results are published on Richards.AI for human review.

Weighted Vote Formula

score(article) = sum over models (model_weight × rank_weight)
rank_weight = 11 − rank if ranked in top 10, else 0
default model_weight: GPT 5.2 Pro = 1.0 · Opus 4.6 = 1.0 · Gemini 3.1 Pro = 1.0
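
A minimal sketch of how this scoring could be implemented, assuming each model's re-ranked top 10 arrives as an ordered list of article IDs (the function and variable names are illustrative, not the lab's actual code):

    from collections import defaultdict

    # Equal default weights, per the formula above.
    MODEL_WEIGHTS = {"GPT 5.2 Pro": 1.0, "Opus 4.6": 1.0, "Gemini 3.1 Pro": 1.0}

    def score_ballots(ballots):
        """ballots: {model_name: [article_id, ...]}, ordered best-first, max 10."""
        scores = defaultdict(float)
        for model, ranking in ballots.items():
            for rank, article in enumerate(ranking[:10], start=1):
                rank_weight = 11 - rank                       # rank 1 -> 10, rank 10 -> 1
                scores[article] += MODEL_WEIGHTS[model] * rank_weight
        # Highest-scoring article(s) become the active deep-research topic(s).
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

With equal weights, an article ranked first by all three models scores 30; ties at the top can simply yield multiple active topics for the cycle.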

End-to-End Flow

fig. 01: Fixed Prompt + Source Pack → Proposal Round (3 models × top 10) → Candidate Pool (30 articles) → Re-Rank + Weighted Vote (each model picks a new top 10) → Winning Topic(s) by Score (topic score = sum of weighted ranks). The flow runs independently across all pillars; GPT 5.2 Pro, Opus 4.6, and Gemini 3.1 Pro each run a research pass with the same topic and the same source controls, and outputs are published on Richards.AI for human review and critique.

Secondary Pipeline

Follow-on Research: Human / AI Collaboration

Once a paper is published, the secondary pipeline opens. Either the human or the AI can surface a follow-on idea. The human selects a direction, the AI produces a structured research plan, and the two iterate until the plan is approved, at which point execution begins.

01

Human Proposes

Pick a published paper. Describe a research direction — a gap, a new angle, a follow-on experiment.

brief:new

02

AI Generates Plan

Claude reads the full source paper and the human's idea, then outputs a structured, phased research plan grounded in the paper's findings.

brief:generate

03

Human Reviews

Read the full plan. Approve it, or request specific changes. Feedback is saved and passed back to the AI on the next revision.

brief:review

Approved

The plan is locked and execution begins: experiments run on available resources, and outputs feed back into the paper or generate a new one.

status: approved

if changes requested → AI revises → human reviews again
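
Read as a loop, the secondary pipeline is a small state machine over the brief's status. The status strings below mirror the labels above (brief:new, brief:generate, brief:review, approved); everything else (types, function names, fields) is an assumption, not the lab's actual tooling:

    from dataclasses import dataclass, field

    @dataclass
    class Feedback:
        approved: bool
        notes: str = ""

    @dataclass
    class Brief:
        paper_id: str
        idea: str
        status: str = "brief:new"          # human has proposed a direction
        feedback: list = field(default_factory=list)

    def run_brief(brief, generate_plan, human_review):
        """Iterate until approval; every revision sees all prior feedback."""
        while True:
            brief.status = "brief:generate"
            plan = generate_plan(brief)    # AI drafts or revises the plan
            brief.status = "brief:review"
            fb = human_review(plan)        # human approves or requests changes
            if fb.approved:
                brief.status = "approved"  # plan locked; execution begins
                return plan
            brief.feedback.append(fb)      # saved and passed back on the next revision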

What a Research Plan Contains

Research Question

The central question the plan addresses, grounded in the source paper.

Background Connection

Specific sections or findings the plan builds directly on.

Hypotheses

Numbered, falsifiable hypotheses — H1, H2, H3 — each independently testable.

Phases

Concrete, time-bounded phases with named experiments, resource assignments, and deliverables.

Resource Estimate

API spend range, GPU hours, and total calendar weeks.

Expected Outputs

What the research produces — paper section, toolkit addition, dataset, etc.

Open Questions

Genuine decision points with options for the human researcher to decide.

Each revision is stored alongside the feedback that produced it. The full history — original idea, every plan version, every set of notes — is preserved in the brief file so the reasoning behind the final approved plan is always traceable.
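
One plausible shape for that brief file, with the plan's sections and the revision trail side by side; every field name here is an illustrative assumption rather than the real format:

    # Illustrative brief file: the approved plan's sections plus the full
    # revision history. Field names are assumptions, not the actual schema.
    BRIEF = {
        "original_idea": "...",
        "approved_plan": {
            "research_question": "...",
            "background_connection": "...",
            "hypotheses": ["H1: ...", "H2: ...", "H3: ..."],  # falsifiable, independently testable
            "phases": [{"name": "...", "experiments": [], "resources": [], "deliverables": []}],
            "resource_estimate": {"api_spend_usd": (0, 0), "gpu_hours": 0, "calendar_weeks": 0},
            "expected_outputs": [],        # paper section, toolkit addition, dataset, etc.
            "open_questions": [],          # decision points left to the human researcher
        },
        "revisions": [
            # every plan version stored with the feedback that produced it
            {"version": 1, "plan": {}, "human_notes": "..."},
        ],
    }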

Pillars

Both pipelines run across all three research pillars, all currently active:

  AI Systems and Security
  Applied Intelligence and Automation
  Human Learning and Knowledge Systems