StableLens Risk Methodology v0.1
This is the public methodology document: versioned, citable, downloadable as PDF. Every score on the site is tagged with the model version that produced it; this file describes v0.1. The framework evolves openly; every change is a version bump with a changelog.
v0.1 honest-limitations notice
v0.1 is a starting point, not a finish line. Several dimensions use proxies (e.g. data-source maturity tags) where high-fidelity inputs aren't yet available. The "Honest caveats" section below names every proxy. We bump the version every time a proxy is replaced.
Why a methodology document exists
A risk score is only as useful as the framework it comes from. We publish ours so that:
- Allocators can audit our reasoning before trusting our grades.
- Issuers can challenge specific sub-scores when they believe the data is wrong.
- Researchers can cite a stable, dated version when discussing the platform.
- The framework can evolve openly — every change is an explicit version bump with a changelog.
Source code for the model is open source at github.com/DigiDom87/stablelens-v2/tree/main/lib/risk. License: Apache 2.0.
Scope
The StableLens risk model scores stablecoin yield opportunities — not stablecoins in isolation, not protocols in isolation. A score is always attached to a specific pool (a yield-generating position in a specific protocol on a specific chain in a specific stablecoin).
A pool's risk is decomposed into ten dimensions, each scored 0–100, where 100 is best (lowest risk). The dimensions combine into a weighted overall score, which is then mapped to a letter grade (AAA→D).
We do not score:
- Pure spot exposure to a stablecoin without a yield wrapper.
- Centralised exchange savings products (we link to them and label them "CeFi" but do not assign a StableLens risk grade).
- Tokens that aren't stablecoin-denominated even if they're advertised as "stable" (e.g., LSTs, LRTs, BTC-pegged synthetics).
The ten dimensions
Each dimension is computed independently from public data, then combined. Definitions, inputs, and formulas are below. Weights are given at the end and are subject to change in v0.x updates with explicit changelog entries.
1. Smart-contract risk (weight 15%)
How likely is the contract code itself to fail or be exploited?
Inputs:
- Auditor list, with each auditor weighted by reputation tier (T1: Trail of Bits, OpenZeppelin, ChainSecurity, Spearbit; T2: Halborn, CertiK Skynet, Hexens; T3: smaller firms; T4: no audit).
- Audit recency (months since last audit on the deployed bytecode).
- Exploit history on this contract or its templates.
- Code mutability (upgradeable proxy vs immutable).
- Time-lock on upgrades (in hours).
- Multisig threshold for admin actions.
- Bug bounty program size (USD pool).
- Formal verification coverage (boolean for any FV in scope).
Formula sketch:
sc_risk = 100
- exploit_penalty(exploits_count, total_dollars_lost)
- audit_recency_penalty(months_since_audit)
+ auditor_reputation_bonus(weighted_auditor_tier)
- upgradeability_penalty(is_upgradeable, timelock_hours, multisig_threshold)
+ bounty_bonus(bounty_pool_usd)
+ formal_verification_bonus(has_fv)
clamp(0, 100)
Worked example: Aave v3 mainnet pool → audited by OpenZeppelin and ChainSecurity within 6 months, immutable core, 24h timelock on parameter changes, 4-of-7 multisig, $250k Immunefi bounty, no exploits. Score: ~94/100.
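To make the shape of the computation concrete, here is a minimal TypeScript sketch of this dimension. The helper functions, breakpoints, and constants below are illustrative placeholders, not the production values in lib/risk.

```typescript
// Illustrative sketch of the smart-contract dimension. All constants
// below are hypothetical; they are not the v0.1 calibration.
interface ContractInputs {
  exploitsCount: number;
  dollarsLost: number;        // total USD lost to historic exploits
  monthsSinceAudit: number;
  auditorTier: 1 | 2 | 3 | 4; // 1 = strongest, 4 = unaudited
  isUpgradeable: boolean;
  timelockHours: number;
  multisigThreshold: number;  // required signers for admin actions
  bountyPoolUsd: number;
  hasFormalVerification: boolean;
}

const clamp = (x: number, lo = 0, hi = 100) => Math.min(hi, Math.max(lo, x));

function smartContractScore(i: ContractInputs): number {
  // Exploit history dominates: any incident costs heavily, scaled by losses.
  const exploitPenalty =
    i.exploitsCount === 0 ? 0 : 40 + Math.min(40, i.dollarsLost / 1e6);
  // Stale audits decay the score gradually after 12 months.
  const recencyPenalty = Math.max(0, i.monthsSinceAudit - 12) * 1.5;
  const auditorBonus = [15, 10, 5, 0][i.auditorTier - 1];
  // Upgradeable code is penalised less when it sits behind a long
  // timelock and a high multisig threshold.
  const upgradePenalty = i.isUpgradeable
    ? Math.max(0, 20 - i.timelockHours / 12 - i.multisigThreshold * 2)
    : 0;
  const bountyBonus = Math.min(5, i.bountyPoolUsd / 100_000);
  const fvBonus = i.hasFormalVerification ? 5 : 0;

  return clamp(
    100 - exploitPenalty - recencyPenalty + auditorBonus
      - upgradePenalty + bountyBonus + fvBonus
  );
}
```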
2. Issuer / asset risk (weight 15%)
How likely is the underlying stablecoin itself to fail, regardless of the protocol?
Inputs:
- Reserve composition (T-bills %, cash %, repos %, corporate paper %, crypto collateral %, algorithmic %).
- Attestation cadence (monthly, quarterly, annual, none).
- Attestor reputation (Big-4 audit firm, mid-tier, "internal," none).
- Redemption guarantee (KYC-gated 1:1, restricted, market-only).
- Issuer jurisdiction (Tier-1: US/EU/UK/Singapore; Tier-2: Bermuda/Cayman with regulatory disclosure; Tier-3: opaque).
- Regulatory licenses (NYDFS, MiCA, EMI, Singapore MAS).
- Insolvency or significant-depeg precedent.
Formula sketch:
issuer_risk = base_score(reserve_composition_quality)
+ attestation_bonus(cadence, attestor_tier)
+ redemption_bonus(redemption_path_quality)
+ jurisdiction_bonus(tier)
+ regulatory_license_bonus(licenses_held)
- precedent_penalty(historic_depegs, magnitude)
clamp(0, 100)
Worked example: USDC → 80% T-bills + 20% cash, monthly attestation by Deloitte, KYC-gated redemption, US-regulated NYDFS, MiCA-compliant, no insolvency, no significant depegs since March 2023. Score: ~96/100.
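A hedged sketch of how the reserve-composition base score could be computed. The quality weights per asset class are assumptions for illustration, not the published v0.1 constants.

```typescript
// Hypothetical quality weights per reserve class (placeholders).
const RESERVE_QUALITY: Record<string, number> = {
  tBills: 1.0,
  cash: 0.95,
  repos: 0.85,
  corporatePaper: 0.6,
  cryptoCollateral: 0.4,
  algorithmic: 0.0,
};

// Composition is given as fractions that sum to 1. The base score is
// the quality-weighted sum, scaled to the 0–100 range used everywhere.
function reserveBaseScore(composition: Record<string, number>): number {
  let score = 0;
  for (const [asset, share] of Object.entries(composition)) {
    score += (RESERVE_QUALITY[asset] ?? 0) * share;
  }
  return Math.min(100, Math.max(0, score * 100));
}

// Example: the USDC-style composition from the worked example above.
console.log(reserveBaseScore({ tBills: 0.8, cash: 0.2 })); // ≈ 99
```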
3. Peg stability (weight 12%)
How well has the underlying stablecoin held its peg historically?
Inputs:
- 30-day, 90-day, and 365-day max deviation from peg (in bps).
- Time-to-recovery from worst depeg event (in minutes/hours).
- Reserve buffer ratio (reserves / circulating supply).
- Live redemption-arb test result (boolean: did $1M of synthetic redemption clear at parity within 30 minutes during the last test?).
Formula sketch:
peg_score = 100
- deviation_penalty(max_30d, max_90d, max_365d)
+ recovery_bonus(time_to_recover)
+ buffer_bonus(reserves / circulating)
+ arb_test_bonus(passed_last_test)
clamp(0, 100)
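A minimal sketch of the deviation penalty, assuming recent windows are weighted more heavily than the full-year window; the per-bp weights are illustrative, not v0.1 values.

```typescript
// Illustrative deviation penalty: the 30-day window counts most, the
// 365-day window least. Weights are placeholders.
function deviationPenalty(max30dBps: number, max90dBps: number, max365dBps: number): number {
  return 0.5 * max30dBps + 0.3 * max90dBps + 0.1 * max365dBps;
}

// A stablecoin that never strayed more than 5 bps in the last month,
// 10 bps in the last quarter, and 30 bps in the last year:
console.log(deviationPenalty(5, 10, 30)); // → 8.5 points deducted from 100
```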
4. Liquidity & exit (weight 12%)
How much of your position can you actually get out, and how fast?
Inputs:
- TVL of the pool itself.
- DEX depth at 50 bps and 200 bps slippage thresholds.
- Withdrawal-queue current ETA (in hours).
- KYC-gated redemption (boolean — adds friction even if path exists).
- Historical exit-time-under-stress (median time observed when TVL drops by >20% in 24h).
Formula sketch:
liquidity_score = depth_score(tvl, dex_depth_50, dex_depth_200)
- withdrawal_queue_penalty(queue_hours)
- kyc_penalty(if_gated)
- stress_exit_penalty(historic_stress_exit_time)
clamp(0, 100)
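A rough sketch of the depth component, assuming log-scaled TVL plus bonuses for exit liquidity at the two slippage bands; all thresholds and scaling constants here are placeholders.

```typescript
// Illustrative depth score: TVL contributes up to 50 points on a log
// scale; on-chain exit depth at 50 bps and 200 bps slippage contributes
// up to 25 points each. Constants are hypothetical.
function depthScore(tvlUsd: number, depth50bpsUsd: number, depth200bpsUsd: number): number {
  const tvlComponent = Math.min(50, 10 * Math.log10(Math.max(1, tvlUsd / 1e6))); // 0–50
  const tightDepth = Math.min(25, 25 * (depth50bpsUsd / tvlUsd));                // 0–25
  const wideDepth = Math.min(25, 25 * (depth200bpsUsd / tvlUsd));                // 0–25
  return tvlComponent + tightDepth + wideDepth;
}

// A $500M pool where a quarter of TVL can exit within 50 bps slippage
// and half within 200 bps:
console.log(depthScore(500e6, 125e6, 250e6)); // ≈ 27 + 6.25 + 12.5 ≈ 45.7
```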
5. Counterparty & custody (weight 10%)
How concentrated is operational control?
Inputs:
- Admin key control structure (multisig threshold + timelock).
- Custodian rating (if reserves include custodied assets).
- Off-chain settlement risk (does the protocol depend on a single off-chain entity?).
- Centralization of validation (for protocols with off-chain components).
Formula sketch:
counterparty_score = key_control_score(multisig, timelock)
+ custodian_score(rating_tier)
- off_chain_dependency_penalty(severity)
- centralization_penalty(degree)
clamp(0, 100)
6. Chain risk (weight 8%)
How robust is the underlying chain?
Inputs:
- Validator/sequencer decentralization (Nakamoto coefficient or equivalent).
- Finality time (in seconds).
- Halt/reorg history (count + magnitude).
- Bridge security, if the asset on this chain is non-canonical (bridged rather than natively issued).
- Sequencer-failure escape hatch (boolean for L2s).
Formula sketch:
chain_score = decentralization_score(nakamoto_coefficient)
+ finality_score(seconds_to_finality)
- halt_penalty(halt_count, max_duration)
+ escape_hatch_bonus(has_force_inclusion)
clamp(0, 100)
7. Oracle & bridge risk (weight 8%)
What external data dependencies does this pool inherit?
Inputs:
- Oracle providers used (Chainlink, Pyth, Redstone, internal, etc.).
- Number of oracle price feeds the protocol depends on.
- Deviation thresholds before a price update is triggered.
- Heartbeat (max time between updates).
- Bridge dependencies for cross-chain assets, with bridge type (native, optimistic, light-client, multi-sig).
Formula sketch:
oracle_bridge_score = oracle_provider_score(weighted_avg)
- dependency_count_penalty(num_critical_feeds)
+ heartbeat_score(max_seconds)
- bridge_dependency_penalty(bridge_type, count)
clamp(0, 100)
8. Regulatory risk (weight 8%)
Could regulatory action disrupt the pool?
Inputs:
- Issuer jurisdiction (already in dimension 2; reused with different weight here).
- Restricted-persons clauses on the underlying asset (boolean).
- Sanctions exposure (any OFAC-listed addresses as historic counterparties).
- Recent enforcement actions against the protocol or issuer.
- MiCA / GENIUS Act / NYDFS compliance posture (compliant / pending / not applicable / non-compliant).
Formula sketch:
reg_score = base_jurisdiction_score
+ compliance_bonus(MiCA, GENIUS_Act, NYDFS, MAS)
- enforcement_history_penalty(severity, recency)
- sanctions_exposure_penalty(any_OFAC_history)
clamp(0, 100)
9. Sustainability of yield (weight 7%)
If token emissions go to zero tomorrow, what's left?
Inputs:
- Real-yield share = (APY - emissions APY) / APY.
- Emission program runway (months remaining at current rate).
- Treasury health (months of runway at current opex).
- Forward APY estimate (function of current SOFR + protocol spread).
Formula sketch:
sustainability_score = real_yield_share * 70
+ emission_runway_score(months_remaining) * 0.2
+ treasury_health_score(opex_months) * 0.1
clamp(0, 100)
A pool with 100% real yield, no emissions, healthy treasury → 100. A pool with 0% real yield, 6 months emission runway, weak treasury → ~12.
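A sketch of the full calculation, with sub-score shapes chosen only so the two endpoints above come out at 100 and ~12; the actual v0.1 saturation points may differ.

```typescript
// Illustrative sustainability calculation. The sub-score shapes are
// assumptions, picked to reproduce the endpoints quoted above.
const clampScore = (x: number) => Math.min(100, Math.max(0, x));

// Assumption: runway score saturates at 12 months; a pool with no
// emissions program at all gets a full runway score.
const emissionRunwayScore = (months: number | null) =>
  months === null ? 100 : clampScore((months / 12) * 100);
// Assumption: treasury score saturates at 30 months of opex cover.
const treasuryHealthScore = (opexMonths: number) =>
  clampScore((opexMonths / 30) * 100);

function sustainabilityScore(
  realYieldShare: number,        // (APY - emissions APY) / APY, in [0, 1]
  runwayMonths: number | null,   // null = no emissions program
  opexMonths: number,
): number {
  return clampScore(
    realYieldShare * 70 +
      emissionRunwayScore(runwayMonths) * 0.2 +
      treasuryHealthScore(opexMonths) * 0.1
  );
}

console.log(sustainabilityScore(1.0, null, 36)); // 100% real yield → 100
console.log(sustainabilityScore(0.0, 6, 6));     // all-emissions pool → ~12
```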
10. Counter-stress (weight 5%)
Modeled performance under three published stress scenarios:
- Rates -200 bps: SOFR drops 200 bps over 30 days. How does headline APY respond?
- USDT depeg 50 bps for 24h: USDT loses 50 bps for 24 hours. Does the pool's value at risk concentrate in USDT-paired liquidity?
- ETH down 40% in 7 days: ETH spot falls 40% in a week. Does the pool depend on ETH-collateralized debt that would liquidate?
Each scenario produces an estimated impact on (a) pool APY, (b) realisable exit value, and (c) the underlying peg, normalised to a 0–100 sub-score. The average of the three scenario sub-scores is the counter-stress score.
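A small sketch of the aggregation, assuming each scenario's three impact estimates are averaged into that scenario's sub-score before the three scenarios are averaged together; the upstream normalisation of raw impacts is not shown.

```typescript
// Per-scenario impact estimates, already normalised to 0–100 upstream.
interface ScenarioImpact {
  apyScore: number;        // impact on pool APY
  exitValueScore: number;  // impact on realisable exit value
  pegScore: number;        // impact on the underlying peg
}

function counterStressScore(scenarios: ScenarioImpact[]): number {
  const perScenario = scenarios.map(
    (s) => (s.apyScore + s.exitValueScore + s.pegScore) / 3
  );
  return perScenario.reduce((a, b) => a + b, 0) / perScenario.length;
}

// Placeholder numbers for the three published scenarios:
console.log(counterStressScore([
  { apyScore: 60, exitValueScore: 95, pegScore: 100 }, // rates -200 bps
  { apyScore: 90, exitValueScore: 70, pegScore: 85 },  // USDT -50 bps / 24h
  { apyScore: 95, exitValueScore: 80, pegScore: 100 }, // ETH -40% / 7d
])); // ≈ 86.1
```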
Combining the ten dimensions
Each sub-score s_i (0–100) is multiplied by its weight w_i (listed below), the products are summed, and the total is normalised to 0–100:
overall_score = sum(s_i * w_i) / sum(w_i)
Default weights (v0.1):
| Dimension | Weight |
|---|---|
| 1. Smart-contract | 15% |
| 2. Issuer / asset | 15% |
| 3. Peg stability | 12% |
| 4. Liquidity & exit | 12% |
| 5. Counterparty | 10% |
| 6. Chain | 8% |
| 7. Oracle & bridge | 8% |
| 8. Regulatory | 8% |
| 9. Sustainability of yield | 7% |
| 10. Counter-stress | 5% |
| Total | 100% |
These weights reflect a v0.1 hypothesis, informed by historical loss data: smart-contract and issuer risk together account for ~80% of all stablecoin yield losses observed 2020–2026, so they receive the heaviest individual weights. We expect to revise weights in v1.0 once we have a year of post-launch data.
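The combination step itself is mechanical; a TypeScript sketch using the v0.1 weights from the table above:

```typescript
// The ten v0.1 weights, expressed as fractions (they sum to 1.0).
const WEIGHTS_V01 = {
  smartContract: 0.15,
  issuer: 0.15,
  pegStability: 0.12,
  liquidity: 0.12,
  counterparty: 0.10,
  chain: 0.08,
  oracleBridge: 0.08,
  regulatory: 0.08,
  sustainability: 0.07,
  counterStress: 0.05,
} as const;

type Dimension = keyof typeof WEIGHTS_V01;

// overall_score = sum(s_i * w_i) / sum(w_i). With the default weights
// the divisor is exactly 1, but it is kept so alternative weight sets
// still normalise to 0–100.
function overallScore(subScores: Record<Dimension, number>): number {
  let weighted = 0;
  let totalWeight = 0;
  for (const dim of Object.keys(WEIGHTS_V01) as Dimension[]) {
    weighted += subScores[dim] * WEIGHTS_V01[dim];
    totalWeight += WEIGHTS_V01[dim];
  }
  return weighted / totalWeight;
}
```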
Letter grades
Overall scores map to letter grades using these thresholds:
| Score | Grade | Plain-English meaning |
|---|---|---|
| 95–100 | AAA | Highest-confidence stablecoin yield available. Reserved for top-tier issuer + audited protocol + immutable code + deep liquidity. Examples (illustrative): Aave v3 USDC mainnet at low utilization. |
| 85–94 | AA | Excellent. Some single-dimension weakness but no systemic concern. |
| 75–84 | A | Strong. One or two notable risks worth being aware of. |
| 65–74 | BBB | Investment-grade with caveats. Allocators should size accordingly. |
| 55–64 | BB | Speculative. Not for primary treasury allocation. |
| 45–54 | B | Speculative with material weaknesses. |
| 35–44 | CCC | Highly speculative. Multiple weaknesses. |
| 25–34 | CC | Distressed. Active concerns. |
| 0–24 | D | Default-equivalent or extreme tail risk. |
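A sketch of the threshold mapping; rounding behaviour at band edges (e.g. 94.5) is an assumption, since v0.1 does not specify it here.

```typescript
type Grade = 'AAA' | 'AA' | 'A' | 'BBB' | 'BB' | 'B' | 'CCC' | 'CC' | 'D';

// Band floors from the table above, highest first.
const GRADE_BANDS: Array<[number, Grade]> = [
  [95, 'AAA'], [85, 'AA'], [75, 'A'], [65, 'BBB'],
  [55, 'BB'], [45, 'B'], [35, 'CCC'], [25, 'CC'], [0, 'D'],
];

function letterGrade(overall: number): Grade {
  // Assumption: scores are rounded to the nearest integer before grading.
  const score = Math.round(overall);
  for (const [floor, grade] of GRADE_BANDS) {
    if (score >= floor) return grade;
  }
  return 'D';
}

console.log(letterGrade(94.2)); // 'AA'  (rounds to 94, below the 95 floor)
console.log(letterGrade(96));   // 'AAA'
```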
Risk-adjusted return
The site exposes two derived metrics built on the overall score:
- Risk-adjusted APY: realised_30d_APY * (overall_score / 100). Penalises high APY in low-grade pools.
- Excess yield over risk-free: realised_30d_APY - rf_1m, where rf_1m is the live 1-month T-bill yield. A negative value flags pools paying below the risk-free rate.
These are descriptive metrics, not prescriptions.
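Both metrics are one-liners; a sketch, assuming yields are expressed as decimals (e.g. 0.052 for 5.2%):

```typescript
// Risk-adjusted APY: the realised 30-day APY scaled by the overall score.
function riskAdjustedApy(realised30dApy: number, overallScore: number): number {
  return realised30dApy * (overallScore / 100);
}

// Excess yield over the 1-month T-bill rate; negative means the pool
// pays below risk-free.
function excessYieldOverRiskFree(realised30dApy: number, rf1m: number): number {
  return realised30dApy - rf1m;
}

// A pool paying 9% with a grade-B score of 50 is treated like 4.5%;
// if 1-month T-bills pay 5.2%, its excess yield is about +3.8 points.
console.log(riskAdjustedApy(0.09, 50));            // 0.045
console.log(excessYieldOverRiskFree(0.09, 0.052)); // ≈ 0.038
```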
What v0.1 is NOT
- Not a recommendation. We don't tell allocators what to buy.
- Not a guarantee. A high score reduces but does not eliminate risk.
- Not yet backtested against point-in-time historical data. We're publishing v0.1 with the model's first ~6 months of data; rigorous backtesting against historical loss events is on the v1.0 roadmap.
- Not asset-specific advice. The score is for the pool, not for any individual investor's situation.
Honest caveats
The following are real limitations of v0.1 we want users to understand:
- Counter-stress scenarios are static. Real markets are correlated; modeling rates -200 bps holding all else constant is a simplification. v1.0 will add joint-scenario modeling.
- Auditor reputation tiering is editorial. We chose tiers based on historical incident-rate data, but tier assignments are debatable.
- Forward APY estimates assume no protocol-specific shocks. A governance vote can change incentive structures overnight.
- Emerging risks are underweighted. Things like MEV-specific risk, sequencer-extraction risk, and points-program devaluation aren't yet first-class dimensions; they'll be added when data supports a formula.
- Oracle risk is partly heuristic. We can't perfectly model the joint failure probability of two oracles; we use empirical priors.
If you find a specific score you believe is wrong, file an issue at github.com/DigiDom87/stablelens-v2/issues with the pool URL and the dimension you're challenging. We respond publicly.
Versioning
- v0.1 (current): published 2026-05-06. Initial public release with weights above and dimension definitions.
- v0.2 (planned): refine sustainability calculation; add MEV / sequencer-extraction sub-component to dimension 6.
- v1.0 (planned within 12 months of v0.1): full backtesting against 2020–2026 loss events, joint stress-scenario modeling, peer review by 3 named external researchers.
Every version increment ships with a docs/methodology/changelog.md entry detailing what changed and why. Old scores remain queryable by version.
Contributing
This document and the accompanying code are open source. PRs welcome at github.com/DigiDom87/stablelens-v2. Substantive changes to the model itself require an ADR (architecture decision record) before code is reviewed.
End of methodology v0.1.
For questions or challenges to specific scores: methodology@stablelens.com.
Cite this methodology
APA: StableLens. (2026). Risk Methodology v0.1. Retrieved from https://stablelens.com/methodology/v0.1
BibTeX: @misc{stablelens_v01_2026, title={StableLens Risk Methodology v0.1}, author={StableLens}, year={2026}, url={https://stablelens.com/methodology/v0.1}}