The Consensus Credit Rating maps a raw average Probability of Default (PD) to one of 21 letter-grade buckets. That’s the right format for most use cases — but 21 categories can be too coarse when you need to detect small movements within a grade, or rank entities that sit inside the same CCR bucket. The secondary CCR100 scale addresses this. It preserves more of the underlying PD signal by mapping the same raw Consensus PD into 101 narrower buckets instead of 21.

CCR100 Publication

Publishing a CCR100 value is a two-step process. Step 1 is covered in full on the Consensus Credit Rating page — this page focuses on Step 2.
Step 1: Raw PDs → Consensus PD Average

Contributed TTC PDs are averaged across all contributing banks:

$$\text{Consensus PD} = \frac{1}{N} \sum_{i=1}^{N} PD_i$$
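A minimal sketch of Step 1. The function name and the sample figures are illustrative only, not actual contributed data:

```python
def consensus_pd(contributed_pds: list[float]) -> float:
    """Average the contributed TTC PDs across all N banks."""
    if not contributed_pds:
        raise ValueError("need at least one contributed PD")
    return sum(contributed_pds) / len(contributed_pds)

# Three hypothetical bank contributions for one entity:
pds = [0.0041, 0.0038, 0.0047]
print(round(consensus_pd(pds), 6))  # 0.0042
```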
Step 2: Consensus PD Average → CCR100 Midpoint PD

That average is passed through a separate CCR100 lookup table:

$$\text{CCR100 PD} = f(\text{Consensus PD})$$

Buckets are indexed 1–101, where 101 corresponds to default.
Because this is a table lookup, the published midpoint PD is discrete rather than continuous. In practice it is usually very close to the raw average — but not always exactly equal.

When to use CCR100

Use the headline CCR for credit classification, reporting, and any context where a letter grade is the right output. Use CCR100 when you need to:
  • Rank entities within the same CCR bucket — two entities both rated bbb may sit at meaningfully different positions within that grade
  • Track small movements — a shift that doesn’t cross a CCR threshold will still show up in the CCR100 bucket
  • Feed quantitative models — the underlying average PD and CCR100 midpoint are more suitable continuous inputs than a categorical letter grade
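As a sketch of the within-grade ranking use case: two hypothetical entities share a letter grade but occupy different CCR100 buckets, so sorting on the CCR100 index separates them (entity names and bucket values are invented for illustration; a lower index corresponds to a lower PD, since bucket 101 is default):

```python
# Hypothetical entities: both rated bbb at the 21-grade level, but their
# CCR100 buckets distinguish them. All figures are illustrative only.
entities = [
    {"name": "Entity A", "ccr": "bbb", "ccr100": 45},
    {"name": "Entity B", "ccr": "bbb", "ccr100": 49},
]

# Rank within the shared CCR bucket: lower CCR100 index = lower PD.
ranked = sorted(entities, key=lambda e: e["ccr100"])
print([e["name"] for e in ranked])  # ['Entity A', 'Entity B']
```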
For field-level definitions, see the Data Dictionary.