From Weekend Hacks to Autonomous Lenses: How Messy Virgo Is Designing Its Due Diligence Engine
In a recent piece, VentureBeat unpacked how a "weekend vibe" code hack by Andrej Karpathy quietly sketches a missing layer for AI systems: an architecture, not just a model, for how intelligent agents should actually work in the wild — orchestrating tools, evaluating their own outputs, and iterating in tight loops of reasoning and action (article link).
At Messy Virgo, we've been independently converging on a very similar idea — but applied to on-chain due diligence and, eventually, autonomous trading. Our litepaper describes this journey as an evolution from culture and narrative to a complete AI-native due diligence engine and, ultimately, an autonomous trader on Base and Ethereum (Messy Virgo litepaper).
This article sketches, in broad strokes, how our lens architecture takes inspiration from these emerging agentic patterns while remaining grounded in enterprise-grade, secure, and modular design. It's intentionally high-level: we are still in active R&D, and details will continue to evolve.
From Karpathy's Agent Patterns to On-Chain Due Diligence
The architecture Karpathy prototyped — and that others have since iterated on — points toward a few key ideas:
- LLM as orchestrator, not oracle: The model doesn't just answer questions; it decides what tools to call, what data to fetch, and when to loop back.
- Tools and data sources as first-class citizens: Retrieval, APIs, code execution, and external services form a modular toolbox that the agent can call into.
- Inspection and self-evaluation: Outputs can be rechecked, critiqued, or compared by other agents or models, reducing reliance on any single run.
- Long-running processes with memory: Instead of one-shot prompts, the system maintains state across steps, allowing more thoughtful, multi-stage workflows.
These patterns are exactly what on-chain due diligence needs. When you're evaluating a token, a protocol, or a narrative, you don't want a single static answer from a single model; you want multiple perspectives, multiple data sources, and explicit reasoning about where the information might be wrong or incomplete.
That's where our lens architecture comes in.
Messy Virgo's Vision: Lenses on the Path to Autonomy
The Messy Virgo litepaper describes a four-pillar vision:
- The All-Hearing Ear (Signal Detection) – Listening to the blockchain's "whispers": social signals, narratives, and on-chain movements.
- The Crystal Lens (AI Due Diligence) – Applying structured analysis across APIs, code, documents, and fact-checks.
- The Architect (Portfolio Optimization) – Turning insights into mathematically sound positioning.
- The Human Heart (Community Sovereignty) – Keeping humans in control via governance and oversight.
Our upcoming Agentic Token Due Diligence Engine sits squarely in Pillar 2. It's the part of the system that says:
"Given everything the world is telling us about this asset, what do we actually believe — and with what confidence?"
The lens architecture is the internal operating system for that engine: a way to decompose due diligence into modular, composable stages that can scale across different perspectives (technical, tokenomics, social, risk, governance, and more) and across many tokens.
The Lens Architecture: A High-Level Overview
At a high level, each lens is a pipeline that looks like this:
- Input – Normalize what we're analyzing.
- Data – Gather raw or semi-structured information from many sources.
- Validation – Compare, cross-check, and qualify that data.
- Info – Distill everything into a coherent view for that lens.
- Output – Produce artifacts and scores for humans, agents, and, eventually, trading systems.
We deliberately keep the internal mechanics flexible and evolving, but conceptually, the stages work as follows.
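To make the flow concrete, here is a minimal sketch of the five stages as a chained pipeline. Everything here is illustrative — the `Lens` class, the stage functions, and the state keys are hypothetical names for this article, not our actual implementation.

```python
from dataclasses import dataclass
from typing import Callable

# Each stage takes the accumulated state dict and returns an updated copy.
Stage = Callable[[dict], dict]

@dataclass
class Lens:
    name: str
    stages: list  # ordered: input -> data -> validation -> info -> output

    def run(self, subject: dict) -> dict:
        state = {"subject": subject}
        for stage in self.stages:
            state = stage(state)
        return state

# Trivial placeholder stages, just to show the control flow.
def input_stage(state):    return {**state, "normalized": state["subject"]}
def data_stage(state):     return {**state, "raw": ["provider_a", "provider_b"]}
def validate_stage(state): return {**state, "trust": {"provider_a": 0.8, "provider_b": 0.5}}
def info_stage(state):     return {**state, "score": 0.65}
def output_stage(state):   return {**state, "report": f"score={state['score']}"}

lens = Lens("tokenomics", [input_stage, data_stage, validate_stage, info_stage, output_stage])
result = lens.run({"contract": "0xABC...", "chain": "base"})
```

The key design point is that each stage only sees the state the previous stages produced, so stages can be swapped or inserted (e.g., a stricter validation pass) without rewriting the lens.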
Input: Defining the Question Precisely
Every lens starts from a clear, normalized input. In practice, this can be:
- A token contract + chain (e.g., "this ERC‑20 on Base").
- A project identity (e.g., a specific ecosystem or protocol).
- A question the system is being asked ("How robust is this token's tokenomics?").
The Input stage transforms that into a canonical representation — a "single source of truth" for what is being analyzed. This normalization is essential when multiple lenses later collaborate on the same asset.
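A canonical representation can be as simple as a frozen value object with a normalizing constructor. This is a sketch under assumed field names (`chain`, `contract`, `question`), not our real schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Subject:
    chain: str     # e.g. "base", "ethereum"
    contract: str  # token contract address
    question: str  # the question the system is being asked

    @classmethod
    def normalize(cls, chain: str, contract: str, question: str) -> "Subject":
        # Lowercase and trim so that two lenses analyzing the same token
        # always agree on its identity.
        return cls(chain.strip().lower(), contract.strip().lower(), question.strip())

a = Subject.normalize("Base", "0xAbC123...", "How robust is the tokenomics?")
b = Subject.normalize("base ", "0xABC123...", "How robust is the tokenomics?")
assert a == b  # same canonical subject despite cosmetic differences
```

Because the object is frozen and hashable, it can also serve as a cache key when multiple lenses run against the same asset.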
Data: Fanning Out to Many Sources
Next, each lens fans out into multiple data providers, which might include:
- Market and on-chain APIs (liquidity, volume, holder distribution, unlock schedules).
- Smart contract metadata and security signals.
- Whitepapers, documentation, and repositories.
- LLM-based summaries or extractions over semi-structured text.
- Social sentiment and narrative signals.
Crucially, no single provider is trusted blindly. Each provider's output is treated as one piece of a larger puzzle — useful, but incomplete and possibly noisy.
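One way to express "no provider is trusted blindly" in code is to fan out concurrently and record failures instead of aborting. The provider functions below are hypothetical stand-ins for real API clients:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical providers: each returns a payload or raises on failure.
def market_api(subject):    return {"liquidity_usd": 1_200_000}
def contract_meta(subject): return {"verified": True, "proxy": False}
def docs_summary(subject):  raise TimeoutError("docs service unavailable")

PROVIDERS = {"market": market_api, "contract": contract_meta, "docs": docs_summary}

def gather(subject: dict) -> dict:
    results, errors = {}, {}
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(fn, subject) for name, fn in PROVIDERS.items()}
        for name, fut in futures.items():
            try:
                results[name] = fut.result(timeout=10)
            except Exception as exc:
                # A failed provider is one missing puzzle piece, not a fatal error.
                errors[name] = str(exc)
    return {"results": results, "errors": errors}

data = gather({"contract": "0xabc...", "chain": "base"})
```

The downstream Validation stage then sees both what each provider said and which providers failed, which itself is a signal.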
Validation: The "Crystal Layer" Between Data and Insight
This is where our architecture aligns strongly with the patterns hinted at in Karpathy's experiments — but applied to DeFi.
Rather than handing raw data directly to a summarizer, we've inserted an explicit Validation layer between Data and Info:
Source scoring
Each data source or LLM output is scored along dimensions like:
- Reliability: How often does it agree with ground truth or peers?
- Completeness: Does it cover the critical aspects we care about?
- Consistency: Does it contradict itself or other trusted signals?
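The three dimensions above can be folded into a single trust weight per source. The weights here are purely illustrative — picking them well is part of the R&D:

```python
from dataclasses import dataclass

@dataclass
class SourceScore:
    reliability: float   # agreement with ground truth / peers, 0..1
    completeness: float  # coverage of the aspects we care about, 0..1
    consistency: float   # freedom from self- and cross-contradiction, 0..1

    def trust(self) -> float:
        # Illustrative weighting: reliability dominates, and an
        # inconsistent source is discounted hard.
        return round(0.5 * self.reliability
                     + 0.2 * self.completeness
                     + 0.3 * self.consistency, 3)

onchain_api = SourceScore(reliability=0.9, completeness=0.7, consistency=0.95)
llm_extract = SourceScore(reliability=0.6, completeness=0.9, consistency=0.5)
assert onchain_api.trust() > llm_extract.trust()
```

The point is not this particular formula but that every source carries an explicit, inspectable trust value rather than an implicit one.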
Cross-review
Multiple models can critique each other's outputs. For example:
- One LLM reviews another's tokenomics summary.
- Narrative claims are checked against on-chain metrics or unlock schedules.
- Conflicting data is highlighted rather than silently averaged away.
Gap and conflict detection
The Validation layer is explicitly allowed to say:
- "We don't have enough high‑quality data about this aspect."
- "These two sources strongly disagree on circulating supply or vesting terms."
- "This narrative cannot be confirmed with the current on-chain evidence."
Critical safety mechanisms
The Validation layer also includes explicit kill switches for deal-breaking findings. For example:
- If the security audit identifies a honeypot contract, the due diligence process halts immediately.
- The system provides a clear explanation of why the token was rejected rather than proceeding with incomplete analysis.
- Other automatic rejections may include verified rug-pull patterns, malicious admin functions, or critical smart contract vulnerabilities.
These kill switches ensure we never waste resources on fundamentally compromised assets and maintain clear audit trails for why tokens are filtered out.
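In code, a kill switch can be an exception that halts the pipeline with an auditable reason. The finding labels below are illustrative placeholders:

```python
# Hypothetical deal-breaking finding labels.
FATAL_FINDINGS = {"honeypot", "verified_rug_pattern", "malicious_admin_fn"}

class DueDiligenceHalted(Exception):
    """Raised when a token is rejected outright, with the reason attached."""

def check_kill_switches(security_findings: set) -> None:
    fatal = FATAL_FINDINGS & security_findings
    if fatal:
        # Halt immediately; the message becomes part of the audit trail.
        raise DueDiligenceHalted(f"rejected: {sorted(fatal)}")

# A honeypot finding stops the pipeline before any further analysis runs.
try:
    check_kill_switches({"honeypot", "unverified_source"})
    halted = False
except DueDiligenceHalted as exc:
    halted = True
    reason = str(exc)
```

Raising (rather than returning a low score) makes it structurally impossible for later stages to soften a fatal finding into a mediocre one.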
Guidance for the next stage
Instead of just sending "all the data" forward, Validation produces guidance:
- Which sources to trust more or less.
- Where to be conservative.
- Where to surface uncertainty to the user instead of pretending we know.
The result is that when the Info stage runs, it sees not just what was said, but also how much we trust each source and why.
Info: Lens-Specific Synthesis and Scoring
The Info stage takes:
- The normalized input,
- The collection of data from many providers, and
- The Validation layer's evidence and guidance,
and turns them into a lens-specific understanding of the token.
Examples for future lenses might include:
- Technical Analysis lens – Interprets price, liquidity, and volatility patterns into signals.
- Tokenomics lens – Evaluates supply, emissions, unlocks, holder distribution, and economic design.
- Narrative/Sentiment lens – Assesses how aligned the public story is with on-chain fundamentals.
- Risk/Governance lens – Considers upgrade paths, admin keys, treasury concentration, and process maturity.
Each lens can produce:
- A score (or set of scores),
- A structured explanation of why that score was assigned, and
- A map of known unknowns (areas where data was missing, conflicting, or low‑confidence).
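The three outputs above suggest a result shape like the following — a sketch with hypothetical field names, not a finalized schema:

```python
from dataclasses import dataclass, field

@dataclass
class LensResult:
    lens: str
    score: float                 # 0..1, with lens-specific meaning
    explanation: list            # ordered reasoning steps behind the score
    known_unknowns: dict = field(default_factory=dict)  # area -> why uncertain

result = LensResult(
    lens="tokenomics",
    score=0.62,
    explanation=[
        "Emission schedule is fixed and verifiable on-chain.",
        "Top-10 holders control a large share of supply (concentration risk).",
    ],
    known_unknowns={"vesting": "two sources disagree on cliff dates"},
)
assert 0.0 <= result.score <= 1.0
```

Making `known_unknowns` a first-class field keeps uncertainty from being lost between the Validation stage and whatever consumes the score.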
Output: Artifacts for Humans and Agents
Finally, the Output stage renders artifacts that can be consumed by:
- Humans – Markdown-style reports, narrative explanations, and visualizations.
- Agents – JSON‑like structures that downstream components (like a portfolio optimizer or an autonomous trader) can operate on.
These outputs are designed to be:
- Transparent – You can trace back from a score to underlying evidence.
- Composable – Multiple lenses can be combined into a broader "due diligence view" for a token.
- Governance-aware – Later phases can use these outputs as inputs to human or DAO-level decisions.
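Rendering the same result twice — once for machines, once for people — keeps the two views trivially in sync. A sketch with made-up content:

```python
import json

# One lens result, rendered two ways from the same source of truth.
result = {
    "lens": "tokenomics",
    "score": 0.62,
    "evidence": ["fixed emission schedule", "moderate holder concentration"],
}

def to_agent_payload(r: dict) -> str:
    # Machine-consumable: stable key order for diffing and hashing.
    return json.dumps(r, sort_keys=True)

def to_human_report(r: dict) -> str:
    # Human-readable: markdown-style heading plus evidence bullets.
    lines = [f"## {r['lens']} lens: score {r['score']:.2f}", ""]
    lines += [f"- {e}" for e in r["evidence"]]
    return "\n".join(lines)

payload = to_agent_payload(result)
report = to_human_report(result)
```

Because both renderers read the same dict, a score can always be traced from the human report back to the exact structure an agent consumed.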
Multiple Lenses, One Due Diligence Engine
The true power of the architecture is not in any single lens, but in the ensemble:
A token is not just "good" or "bad" — it's a matrix of lens-specific perspectives, each backed by different data sources and validation strategies.
Over time, we expect to:
- Add new lenses (e.g., tokenomics, social/narrative, security, governance risk).
- Retire or refine lenses that don't add enough value.
- Use the scores and explanations from each lens to:
  - Inform human‑in‑the‑loop reviews.
  - Trigger alerts when conditions change (e.g., vesting unlocks or governance shifts).
  - Act as inputs to portfolio optimization and, eventually, an autonomous trading engine, tightly aligned with the roadmap outlined in the litepaper.
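The "matrix of lens-specific perspectives" can be sketched as a confidence-weighted ensemble that preserves the individual views alongside any blend. Scores, weights, and lens names here are invented for illustration:

```python
# Per-lens scores with a confidence attached to each.
lens_scores = {
    "technical":  {"score": 0.71, "confidence": 0.8},
    "tokenomics": {"score": 0.62, "confidence": 0.9},
    "narrative":  {"score": 0.40, "confidence": 0.5},
}

def ensemble_view(scores: dict) -> dict:
    # Confidence-weighted average, with the full matrix kept intact —
    # the blend is a convenience, never a replacement for the perspectives.
    total_w = sum(v["confidence"] for v in scores.values())
    blended = sum(v["score"] * v["confidence"] for v in scores.values()) / total_w
    weakest = min(scores, key=lambda k: scores[k]["score"])
    return {"blended": round(blended, 3), "weakest_lens": weakest, "matrix": scores}

view = ensemble_view(lens_scores)
```

Surfacing `weakest_lens` alongside the blend points a human reviewer at where to look first, instead of letting one strong lens mask a weak one.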
Throughout, we remain conservative in how we communicate these capabilities. This is early-stage, moving-target work, and we treat scores and outputs as decision support — not oracles.
How This Relates to the Broader Messy Virgo Roadmap
Putting it all together with the litepaper roadmap (Messy Virgo litepaper):
Phase 1 – Culture Engine (LIVE)
Establishes the identity, community, and narrative layer around $MESSY — the "voice" of the system.
Phase 2 – Minimum Viable Intelligence (Q1 2026)
This phase delivers the Agentic Token Due Diligence Engine, and the lens architecture described here is its internal operating system:
- Signal detection feeds candidate tokens.
- Lenses run structured, multi-source due diligence.
- Validation layers quantify trust and highlight gaps.
Phase 3 – Autonomous Trader (Q3 2026)
Once the due diligence engine is reliable enough (and appropriately governed), its outputs become inputs to a constrained, transparent autonomous trading layer:
- Risk parameters and portfolio constraints are set by humans/DAO.
- The AI proposes trades based on lens-derived scores and evidence.
- Execution remains auditable and reversible within clear boundaries.
Phase 4 – DAO Transition (2027)
Governance over the whole stack — including what lenses exist, how strict validation should be, and how trading behaves — gradually moves into the hands of the $MESSY community via on-chain mechanisms.
Under the hood, we're building this with a modular, hexagonal architecture and enterprise-ready patterns, but those implementation details are less important than the core idea:
We're designing an AI-native due diligence engine that thinks in lenses, not just in prompts.