This report details the psychometric properties and validation evidence for FranklinCovey's Speed of Trust Leader Assessment (SOTLA). The assessment is a component of FranklinCovey's Speed of Trust® (SOT) course and is intended to measure the extent to which leaders and people managers exhibit the skills and behaviors represented in our SOT course. The assessment is typically deployed as a 360 assessment before and after participation in the course.
Download the full Speed of Trust Leader Assessment Technical Report
There is an established process for validating assessments, grounded in the field of psychometrics.
Depending on the industry, the validation process can be standardized. More often, though, it's accurate to say there are several validation criteria, some more important than others, and generally the more validation criteria an assessment meets, the better.
Many organizations that sell assessments do some kind of validation. Organizations that focus primarily on assessments may validate every assessment they produce. Other organizations that have assessments as a segment of their business, like KornFerry and Gallup, also publish extensive validation/technical reports for their most popular assessments, periodically updating those reports with new data or to document changes to the assessment.
Understandably, many of our clients have asked to see what validation work we’ve done on our FranklinCovey assessments.
Further, our old Team Trust Index (TTI) — now the SOT Team assessment — was previously validated, which led clients to question why other FC assessments were not validated.
This is a ~15-page report that details the validation testing on the new SOT Leader Assessment.
The report covers 3 themes:
- How the new SOT Leader assessment compares to the old tQ — for instance, comparing how people score on both AND how scores on both relate to other measures of trust
- The SOT Leader assessment's reliability — for instance, how similarly people score when they take the assessment again a week later
- The SOT Leader assessment’s validity — for instance, to what extent scores predict workplace engagement and job satisfaction
In total, over a thousand leaders rated themselves, and more than 500 direct reports rated their leaders as part of these studies.
Yes. The report covers the following validation criteria (an illustrative sketch of how two of them are commonly computed follows this list):
- Internal consistency
- Test-retest reliability
- Factor structure (i.e., is the SOT Leader assessment a multidimensional measure)
- Convergent validity (i.e., does the SOT Leader assessment relate to other validated measures of trust)
  - Based on self-rater data
  - Based on direct report-rater data
- Incremental validity (i.e., is the SOT Leader assessment a better measure of trust than the old tQ)
- Criterion/concurrent validity (i.e., does the SOT Leader assessment relate to outcomes we intend it to predict: like engagement and job satisfaction)
- Differences in SOT Leader assessment scores based on respondent demographics (e.g., age, gender identity), and team and organizational variables (e.g., org size, remote work status)
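As a concrete illustration of two of these criteria, here is a minimal sketch in Python. It uses hypothetical data and placeholder names (`items`, `time1_scores`, `time2_scores`) and is not FranklinCovey's actual analysis code; it simply shows how internal consistency (Cronbach's alpha) and test-retest reliability are commonly computed.

```python
# Illustrative only: hypothetical data, not FranklinCovey's analysis code.
import pandas as pd
from scipy.stats import pearsonr

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Internal consistency: do the questions hang together as one scale?
    `items` has one column per question and one row per respondent."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

def test_retest_reliability(time1_scores, time2_scores) -> float:
    """Test-retest reliability: how similarly do the same people score
    when they take the assessment again (e.g., a week later)?"""
    r, _ = pearsonr(time1_scores, time2_scores)
    return r
```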
The SOT Leader assessment meets the generally accepted standards on several validation criteria — normally distributed responses, multiple forms of reliability, and multiple forms of validity — and in some instances performs really well on these criteria.
The relationships between rating your leader higher on the SOT Leader assessment and a host of desirable outcomes, like engagement, job satisfaction, and intent to stay at your organization, are strong. They rival the relationships found for some of the top academic leadership assessments.
One additional insight from the data is that years spent with their manager did not predict how direct reports rated their managers on the assessment. One reasonable interpretation is that a leader can foster high trust with their direct reports within a year, and many in our sample have. Building high trust doesn't need to take years.
The SOT Leader assessment has 20 questions, compared with 25 on the tQ. We also removed the sections on Market and Organizational Trust. The new assessment's questions also use a different format, consisting of only a single statement rather than the two statements that bookend each tQ question. We offer more detail on these changes and why they were made in the FRG and in the technical report.
The headline is that the new SOT Leader assessment is shorter, yet has stronger relationships with other measures of trust and is a better predictor of important outcomes like employee engagement and job satisfaction. We also found that the new assessment beats the tQ on a few other important psychometric criteria: for instance, scores on the new assessment tend to be more normally distributed, with less bunching up of scores at the highest end of the response scale.
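One way to picture the "better predictor" claim is an incremental-validity check: does the new assessment explain additional variance in an outcome such as engagement once the old tQ score is already in the model? The sketch below is illustrative only, with hypothetical column names (`tq_score`, `sot_leader_score`, `engagement`); it is not the analysis reported in the technical report.

```python
# Illustrative only: hypothetical data and column names.
import pandas as pd
import statsmodels.api as sm

def incremental_r_squared(df: pd.DataFrame) -> float:
    """Extra variance in engagement explained by the new assessment
    over and above the old tQ score (hierarchical regression)."""
    y = df["engagement"]
    base = sm.OLS(y, sm.add_constant(df[["tq_score"]])).fit()
    full = sm.OLS(y, sm.add_constant(df[["tq_score", "sot_leader_score"]])).fit()
    return full.rsquared - base.rsquared
```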
More detail is found in the technical report.
There were basically two types of testing we relied on to determine the questions.
We started with many possible question options, which were developed alongside our Trust Practice Leader, Doug Faber. We then conducted many small tests with respondents to find the questions that performed best on a few criteria: a good range of responses, preferably a normal distribution of scores, and strong correlations with the concepts we want the questions to relate to, such as other measures of trust. This process got us almost all the way to our final set of questions.
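As a rough illustration of that kind of item screening (hypothetical pilot data and column names, not our actual pilot analysis), the checks amount to summarizing each candidate question's response distribution and its correlation with an external trust measure:

```python
# Illustrative only: hypothetical pilot data and column names.
import pandas as pd
from scipy.stats import pearsonr, skew

def screen_items(pilot: pd.DataFrame, item_cols: list[str], criterion: str) -> pd.DataFrame:
    """Summarize each candidate question's response distribution and its
    correlation with an external trust measure (the `criterion` column)."""
    rows = []
    for col in item_cols:
        pair = pilot[[col, criterion]].dropna()
        r, _ = pearsonr(pair[col], pair[criterion])
        rows.append({
            "item": col,
            "sd": pair[col].std(ddof=1),   # want a good range of responses
            "skew": skew(pair[col]),       # want roughly normal, not piled at one end
            "r_with_trust_measure": r,     # want a strong link to other trust measures
        })
    return pd.DataFrame(rows).sort_values("r_with_trust_measure", ascending=False)
```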
The second step in the process is even more involved. And that’s the validation effort that is detailed in the technical report.
We focused on measurement and validation of self-ratings and direct report ratings of their leader. These are generally considered the most important perspectives in assessment validation.
But that means we did not look at peer ratings or manager ratings. This is something we can and may do after the new SOT Leader assessment launches. We may update the technical report with any relevant data we find there.
One caveat:
What we collected was independent, cross-sectional data — meaning we collected independent samples of people rating themselves, and independent samples of people rating their leaders. It was not the case that we had pairs of direct reports and leaders rating themselves and each other.
Relation to the FranklinCovey 360 Diagnostic:
However, we do have data of other raters rating the same individual in the FranklinCovey Leader Diagnostic. The relevant statistics that speak to 360 reliability are detailed in the technical report for the FC Leader Diagnostic (as found in the Help Center).
We have a section in the FRG that goes into detail on this. In short, the scale accompanying the SOT Leader assessment is consistent with the FC Diagnostic, which is necessary to keep respondents' experience consistent as they complete the Diagnostic alongside the SOT Leader assessment. We also have data showing that the new response scale produces more normally distributed responses than the old response scale for the tQ, which is an important criterion for the assessment's validation.
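As a rough sketch of the kind of distribution check behind that finding, the code below compares skewness and the share of responses at the top of a scale. The score arrays and scale endpoints are placeholders, not the actual scales or data.

```python
# Illustrative only: placeholder score arrays and scale endpoints.
import numpy as np
from scipy.stats import skew

def distribution_summary(scores: np.ndarray, scale_max: float) -> dict:
    """Skewness near 0 suggests a roughly normal distribution; a large share
    of scores at the scale maximum suggests bunching at the top (a ceiling effect)."""
    return {
        "skew": float(skew(scores)),
        "pct_at_ceiling": float(np.mean(scores == scale_max)),
    }

# Usage (placeholder arrays and endpoints): compare the new and old response scales, e.g.
# distribution_summary(new_scale_scores, scale_max=...) vs.
# distribution_summary(old_scale_scores, scale_max=...)
```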
Information on the SOT Team assessment can be found in the SOT FRG.
Regarding the Team assessment validation: the Team assessment's questions are nearly identical to those of the previous Team Trust Index (TTI). Most of the updates to this assessment are around its administration and its attachment to Module 4. Further, the old TTI was previously validated by an external research firm in 2016, and the conclusions from that report are still applicable. While the validation report for the SOT Team Assessment may not be as thorough as the one for the Leader Assessment, it does report important metrics around the assessment's reliability (e.g., internal consistency).
Our validation testing was done only in the US, and in English. It’s almost always the case that validation starts within one country and language, and then if cross-cultural validation is required, it is done later through additional studies.
We also note there's an important distinction between whether an assessment is valid across cultures and whether there are simply differences across cultures. For instance, we do find some demographic differences in our US-based data: some based on race/ethnicity, some based on remote work status. These are differences between groups. But regardless of the group, SOT Leader assessment scores still predict outcomes like engagement and job satisfaction. That's what speaks to the validity of the assessment.
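To make that distinction concrete, here is a minimal illustrative sketch (hypothetical column names such as `sot_leader_score` and `engagement`, not our actual analysis code) of checking whether the score-to-outcome relationship holds within each group, even when group means differ:

```python
# Illustrative only: hypothetical data and column names.
import pandas as pd
from scipy.stats import pearsonr

def validity_by_group(df: pd.DataFrame, group_col: str) -> pd.DataFrame:
    """Group means may differ, but validity is about whether scores still
    predict outcomes (e.g., engagement) within each group."""
    rows = []
    for group, sub in df.groupby(group_col):
        sub = sub[["sot_leader_score", "engagement"]].dropna()
        r, p = pearsonr(sub["sot_leader_score"], sub["engagement"])
        rows.append({"group": group, "n": len(sub), "r": r, "p": p})
    return pd.DataFrame(rows)
```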
All that said, if we do a cross-cultural study of the SOT Leader assessment, it would come in the future and could be added to a revised version of the technical report.
Our validation work was led by Alex O'Connor on the Product team. He has a PhD in research psychology, was trained in psychometrics, and has previously published validated assessments in academic journals.
Our validation work was supported by an external expert, Joshua Eng, PhD. He's faculty at the Indiana University School of Medicine. He's responsible for validating the assessments that measure learning and well-being outcomes for surgical residents across the country and has decades of experience as a psychometrician.