Which instrument, credit ratings or credit default swap (CDS) spreads, best responds to fixed income investors' need to appraise credit risk? Such an assessment has become necessary because of mounting criticism of rating agencies' promptness in identifying changed credit conditions. An empirical study is carried out on a sample of American reference entities. Cardinal CDS spreads are transformed into ordinal ratings after adjusting for the systemic component of CDS spread movements. CDS-implied ratings are found to be more timely than agency ratings and thus better suit investors' needs. Furthermore, CDS-implied rating changes are found to usually lead agency rating changes. Credit ratings have in fact turned into regulatory licences to access capital markets and no longer rely solely on their quality. At the same time, the focus has shifted from investors, once the prime users of ratings, to issuers; the industry's compensation structure helps explain why. CDS-implied ratings, on the other hand, are a tool able to give the point-in-time credit-risk appraisal that investors are more interested in.
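The transformation described above, from cardinal spreads to ordinal rating buckets, can be sketched as follows. The thresholds, bucket labels, and the simple beta-adjustment for the systemic component are all hypothetical assumptions for illustration, not the study's actual calibration.

```python
# Hypothetical sketch: assigning an ordinal rating bucket to a CDS
# spread. Thresholds, labels, and the linear systemic adjustment are
# illustrative assumptions, not the study's actual method.

# Ordered buckets with illustrative spread ceilings in basis points.
THRESHOLDS = [
    (50, "AAA/AA"),
    (100, "A"),
    (200, "BBB"),
    (400, "BB"),
    (800, "B"),
    (float("inf"), "CCC and below"),
]

def cds_implied_rating(spread_bps, market_spread_bps, beta=1.0):
    """Map a CDS spread to an ordinal rating bucket after removing an
    assumed systemic component (beta times a market-wide spread)."""
    idiosyncratic = spread_bps - beta * market_spread_bps
    for ceiling, label in THRESHOLDS:
        if idiosyncratic <= ceiling:
            return label
```

For example, `cds_implied_rating(180, 60)` falls into the illustrative "BBB" bucket once a 60 bp market-wide component is removed; re-running the mapping as spreads move produces a point-in-time implied rating series.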
This work examines the possible relation between credit risk and credit ratings, and the timeliness of this relation, in the financial services industry. Investors and regulators use credit ratings as part of their decision-making process, so it is important to understand to what extent credit ratings reflect actual credit risk. This research focuses on the financial services industry because of its increasing credit risk over the years 2003 to 2007 and the strong commercial interests of credit rating agencies in this industry. A distorted relation between credit risk and credit ratings is therefore most likely to become apparent there. Six financial metrics are used as a proxy for credit risk, and two variables are used to measure changes in credit ratings. Based on the data and analyses, the relation between credit ratings and credit risk is, at best, weak for the research period. The most significant relations are lagged by three or four years, meaning that credit ratings respond three or four years after changes in credit risk occur.
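The lagged-relation finding above can be illustrated with a minimal sketch: correlate rating changes with a credit-risk proxy observed several periods earlier and see at which lag the relation is strongest. The function names and the idea of using a plain Pearson correlation are illustrative assumptions, not the paper's actual model.

```python
# Illustrative sketch (not the study's model): measure how strongly
# rating changes correlate with a credit-risk proxy observed `lag`
# periods earlier, for a range of lags.

def pearson(xs, ys):
    """Plain Pearson correlation between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def lagged_correlations(risk_metric, rating_changes, max_lag=4):
    """Correlate rating changes at time t with the risk metric at
    time t - lag, for lag = 0 .. max_lag."""
    out = {}
    for lag in range(max_lag + 1):
        xs = risk_metric[: len(risk_metric) - lag]
        ys = rating_changes[lag:]
        out[lag] = pearson(xs, ys)
    return out
```

A lag profile peaking at three or four periods, as reported above, would mean ratings echo risk changes years after they occur rather than anticipating them.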
Given the growing popularity of professor ratings websites and common departmental policies of keeping official professor survey responses confidential, an important concern is the validity of these online ratings. A comparison of student responses to official end-of-semester teaching evaluations and unofficial professor ratings on two widely used websites at UC Berkeley (Rate My Professor and Ninja Courses) indicates that online ratings are significantly lower than their official counterparts. There is also a relatively high correlation between official evaluations and online ratings, with most coefficients between 0.4 and 0.7. A similar downward bias was found at other American institutions (Rice University and Harvard University). Some of the bias on Rate My Professor is due to single ratings and early ratings, but similar results are found for Ninja Courses, which has stricter posting policies. Ratings from both websites are not significantly correlated with grade distributions, suggesting that use of these sites for grade retaliation is uncommon. Neither official evaluations nor online ratings are significantly correlated with enrollment.
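The two statistics at the heart of the comparison above, mean downward bias and correlation between paired official and online scores, can be sketched on fabricated data. All numbers below are made up for illustration; only the computations mirror the abstract's description.

```python
# Hypothetical sketch of the paired comparison: official evaluation
# scores vs. online ratings for the same instructors. The values are
# fabricated; only the bias and correlation computations are real.

official = [6.2, 5.8, 6.5, 4.9, 5.5, 6.0]  # official end-of-semester scores
online   = [5.6, 5.1, 6.1, 4.2, 4.8, 5.7]  # same instructors on a ratings site

n = len(official)

# Mean downward bias: how much lower online ratings sit, on average.
bias = sum(o - w for o, w in zip(official, online)) / n

# Pearson correlation between the two paired series.
mo, mw = sum(official) / n, sum(online) / n
cov = sum((o - mo) * (w - mw) for o, w in zip(official, online))
corr = cov / (sum((o - mo) ** 2 for o in official) ** 0.5
              * sum((w - mw) ** 2 for w in online) ** 0.5)
```

A positive `bias` with a high `corr` would match the abstract's pattern: online ratings run lower overall yet rank instructors much like the official evaluations do.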