ARIS (Auto Research In Sleep)
by ARIS
Description
ARIS is a prompt-engineering framework where the LLM *is* the runtime. Five MCP servers bridge external LLMs for cross-model review, five CLI tools handle arXiv/Semantic Scholar fetching plus GPU watchdog monitoring, and 49+ Markdown "skill" modules (with YAML frontmatter) define composable research workflows consumed directly by Claude Code/Codex/Cursor. The core architectural insight ("the LLM doesn't need a scheduler; the LLM IS the scheduler") is orthogonal to Forge's programmatic orchestration, but it surfaces several quality patterns worth stealing: cross-provider adversarial review, provider-specific parameter clamping, thread history persistence for stateless APIs, and private dotfile API key fallback.
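One of the patterns above, provider-specific parameter clamping, can be sketched in a few lines: before a request leaves the bridge, sampling parameters are clipped to each provider's documented range. The provider names and limit values below are illustrative assumptions, not taken from the ARIS source.

```python
# Hypothetical per-provider parameter limits: (min, max) per field.
# The providers and numeric bounds here are illustrative only.
PROVIDER_LIMITS = {
    "openai":    {"temperature": (0.0, 2.0), "max_tokens": (1, 128_000)},
    "anthropic": {"temperature": (0.0, 1.0), "max_tokens": (1, 64_000)},
}

def clamp_params(provider: str, params: dict) -> dict:
    """Return a copy of `params` with out-of-range values clipped
    to the target provider's limits; unknown providers pass through."""
    limits = PROVIDER_LIMITS.get(provider, {})
    clamped = dict(params)
    for key, (lo, hi) in limits.items():
        if key in clamped:
            clamped[key] = min(max(clamped[key], lo), hi)
    return clamped
```

The appeal of this pattern is that a single request dict can be fanned out to multiple providers without each call site knowing that, say, one provider caps temperature at 1.0 while another allows 2.0.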
Summary
Prompt-engineering framework: LLM-as-runtime with 5 MCP servers (Python), 5 CLI tools, 49+ Markdown skill modules. Orthogonal architecture (no real runtime) but surfaces 4 quality steals: P0 provider-...
Tags
Related Tools
ARIS: Auto-Claude Code Research in Sleep — Deep Analysis
ARIS
**ARIS** is a methodology-first, Markdown-driven skill system for autonomous ML research workflows. It orchestrates **cross-model collaboration** — Claude Code executes research while an external LLM (Codex, Gemini, or other) reviews work as an adversarial critic. The entire system is files + plain Markdown skills (no database, no framework), making it portable across Claude Code, Cursor, Trae, Codex CLI, and other agents.
arXiv:2603.03329 — AutoHarness: Improving LLM Agents by Automatically Synthesizing a Code Harness
arXiv
AutoHarness tackles a critical LLM agent failure mode: **agents making illegal/invalid actions**.
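The failure mode can be made concrete with a toy harness: wrap the agent's tool calls in a validation layer that rejects unknown tools and malformed arguments before they reach the environment. The registry, decorator, and error format below are a sketch of the general idea, not the paper's actual synthesized harness.

```python
# Toy action harness: agent-proposed actions are validated against the
# declared tool interface before execution. Illustrative only.
import inspect

TOOLS = {}

def tool(fn):
    """Register a callable as a legal agent action."""
    TOOLS[fn.__name__] = fn
    return fn

def run_action(name: str, **kwargs):
    """Validate an agent-proposed action, then execute it.
    Unknown tools and bad argument lists are rejected, not raised."""
    if name not in TOOLS:
        return {"ok": False, "error": f"unknown tool: {name}"}
    sig = inspect.signature(TOOLS[name])
    try:
        sig.bind(**kwargs)  # catches missing or unexpected arguments
    except TypeError as exc:
        return {"ok": False, "error": str(exc)}
    return {"ok": True, "result": TOOLS[name](**kwargs)}

@tool
def search(query: str, limit: int = 5):
    # Hypothetical example tool for the sketch.
    return f"searched {query!r} (limit={limit})"
```

Returning a structured error instead of raising lets the agent see *why* an action was illegal and retry, rather than crashing the episode.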
HN Multi-Agent Framework Link Triage
HN
**47 unique URLs extracted** across 6 categories from 6 HN threads (1,100+ combined points, 418 comments). The HN multi-agent community is skeptical of framework proliferation but hungry for: