Explaining the CritiCall Personality Test with Dr. Clinton Kelly
E3


Clinton: Welcome to Testing,
Testing 1-2-3, a podcast

brought to you by TestGenius.

Jenny: Welcome everybody.

My name is Jenny Arnez
and I'm from TestGenius.

We have today Mike Callen, VP of Products at TestGenius, back with us, as well as Dr. Clinton Kelly from ioPredict.

If you've had a chance to see our previous session with Dr. Clinton Kelly, you know that we talked about all things personality testing.

We have Clinton back today to talk about
personality testing, specifically in

the 911 dispatch environment, and using
CritiCall, our dispatch assessment.

Mike, do you want to say anything
about CritiCall before we jump in?

Mike: Yeah, absolutely.

So CritiCall, as most people in the space know, is a skill and ability testing program that leans very heavily on multitasking ability while being able to receive auditory information and enter it into a computer, which is one of the most core abilities necessary for success on day one in the dispatch environment.

And I would say the dispatch environment, in terms of our conversation, means more than 9-1-1: it's police, fire, EMS, 9-1-1 utility, ambulance. There are all sorts of environments where the job being performed is very much a stressful situation. You're under duress and it's a critical incident, meaning that the effects of making good decisions can be very positive, and the effects of making poor decisions or performing poorly can have incredibly negative implications, including even death.

So it's a very important
environment to be working in.

CritiCall is a product of Biddle Consulting Group.

It runs within our TestGenius suite.

And a few years back, we had the great opportunity to work with Clinton and his firm to create, from the ground up, a dispatcher-specific personality test.

And that's what we're really going
to be meeting today to talk about.

Jenny: Wonderful.

Clinton, you want to just introduce
yourself real briefly for those

who didn't get a chance yet to
watch our first session with you?

Clinton: Yeah, I'm Clinton Kelly.

I'm with ioPredict and I have a background
in industrial and organizational

psychology and I specifically work
on helping organizations to implement

and validate tests for hiring.

And that's what I've done for almost 20 years now: implementing hiring tests in organizations.

Jenny: Wonderful.

Thank you.

So personality testing and CritiCall.

Why does it matter?

Why is it important?

Clinton: Like Mike said, there are multiple things. There's: can you technically do the job? Can you work with computer screens? Can you take in auditory information and type at the same time? That's the technical side: can you do the job?

And then there's the personality side: are you a good fit for this job?

Do you have the personality type that
is conducive to being successful?

And so both of those pieces of the
puzzle are important parts when it

comes to making good hiring choices.

And so that's why it matters.

And like Mike said, these are stressful jobs, typically with longer shifts, dealing with situations that aren't always pleasant, and you need to make sure you have the right fit for this job.

And so that's really important.

Mike: I have had the pleasure of going to many conferences in the space, and there's a conversation that occurs at least once or twice every time I'm at a conference, where I'm talking with a center director. The profile of this person is somebody who's been around this particular vertical for 20-plus years, and they have a really great grasp and feel for the space.

And what these folks will tell me very often, and I never disagree with them, is: I've been in this job so long, I can sit down with an applicant, talk to them for five minutes, and know whether or not they're going to be successful on the job.

And they don't say that to me in a way where they're arguing that they don't need to do testing.

Not at all.

What we all understand about that
particular situation is that, yeah,

they probably have a really great
feel about who would be successful

and who wouldn't be successful.

The problem is that particular process doesn't have a paper trail that explains why somebody made an affirmative decision or a negative decision, and it doesn't leave the applicant feeling very good about the process.

I remember in years past, I applied for jobs where I thought I was highly appropriate for them. There was a bogus selection process, I didn't get hired, and I felt really bad about that.

So one of the things that we want to do on the testing side, for CritiCall and for these dispatcher positions, is provide a process that's holistic, one that really does what these center directors know they can do, but creates a paper trail at the same time.

And Jenny, you asked Clinton why you want to do personality testing in this space. One of the things that we look at in testing and selection is that job performance is like a pie with different pieces in it.

And so you can have tests that measure audio data entry and map reading and multitasking and those kinds of things, and each of those pieces of pie are going to represent different aspects of job performance.

And so the goal in a testing process is to try to get as many pieces of that pie filled up with critical performance dimensions.

And some of those are going to be hard skills. Some of them are going to be soft skills, situational judgment.

And many of those things are going to be personality, particularly in this realm, because it's such a difficult job: you want to know as much as you can, from a job-related perspective, about the personality of this person. As Clinton said, specifically to be able to find a good fit.

And so that's really what our goal is
with this particular product right here.

Jenny: So if I were a hiring manager or a sergeant in an agency that's implementing CritiCall and using the personality test, why it's important to me is that I know I can hire the right person for the job, somebody who's going to stay, too.

Is that correct?

Clinton: Yeah.

We've shown, with how we built the test and the validation study that we've done, that people with higher scores on this test are more likely to be rated as higher performers on the job by their supervisors. So we have actual data. There's no guarantee, because, as Mike said, we can't predict with a hundred percent accuracy, but we show through our study that those with higher scores on the test are more likely to be rated as higher performers by their supervisors.

Mike: Why don't you talk a little bit, Clinton, about how this particular test was created.

There are a lot of standard personality tests that have the same 110 items; everybody gets asked the same questions, but they're scored or keyed differently.

This test wasn't an off the shelf test.

This was created ground up.

And when you talk about that relationship between job performance and the items, talk a little bit about how that was established.

Clinton: Yeah, so what we did is we analyzed the job to see what is important.

We talked with different agencies
to see, okay, what are the important

characteristics of individuals
who are successful on the job?

And so we custom developed this test from
scratch specifically for dispatchers.

And we designed and built it with the intention of measuring things like adaptability. With a job like this, it's important that people can adapt on the fly; they get calls and have to figure out how to handle things.

Another one of the things we measure is composure. You have to be able to remain calm when you're on a call.

Another is resilience: not letting yourself be affected from one call to the next. You have to be able to let that one go and move on.

Following procedure and policy, that's another one. While it's important to be adaptable, you also need individuals in these types of roles who are not going to fly by the seat of their pants, who know what the policy and procedure is in certain situations.

And then one of the other ones is confidence. You also have to be able to be a little bit assertive and confident on the calls when you're in these situations.

And then lastly, multitasking, a preference for multitasking. That's pretty apparent for anyone who's been in a dispatching agency.

If you've walked into a room, you just see the multiple screens, the phones, everything that's going on. There are lots of things done at the same time. So individuals who have a preference for multitasking behaviors, that's another thing we were attempting to measure.

Mike: And you bring up a good point.

When you were talking about these, which one was it? Adaptability? Being creative? I can't remember exactly which one it was, but the point you're making is that there's this whole continuum on each of these scales. There's the very top end of it and the very low end of it, and there's actually a sweet spot within each of these areas that you want people to fit into, because the highest performers are there.

It's not that you want people who are rated a hundred out of a hundred on all of these particular areas; rather, you want them to fall within a certain range.

How do you determine that?

Clinton: Yeah, so what we did in this case is we developed a number of different test items designed to measure those areas I was just talking about. And we collected data from a variety of different dispatching agencies throughout the United States and Canada. We asked them: can we get your employees, current dispatchers, to take this test?
And then we created a supervisor performance rating scale. As you can see on the screen here, we created a scale where supervisors rated the individuals who took this test. They rated their performance in these different areas: being adaptable, flexible, ambitious, motivated. And we had them rate those employees.

So we had people throughout the U.S. and Canada who are current dispatchers take this test. We had their supervisors rate them on their job performance. And then we evaluated the relationship.

Are there patterns in the data showing that people who are higher or lower on these questions, or on these scales, are more likely to be successful in the job as rated by their supervisors? Their supervisors say they're more successful; what patterns on these different test questions result in better dispatchers?

And sometimes you can get lucky. The way I like to put it is: you can throw spaghetti against the wall, and maybe it makes some magical image one time, right? But can you do it again? Can you replicate it?

So what we did here was make sure that we didn't just capitalize on chance, that we didn't just get lucky with these questions, like we threw all this stuff in a black box and, magically, people who prefer to eat sushi versus pizza are better dispatchers. You might find that pattern in the data once. But, as you see on this slide, sample one and sample two: we collected data from all these individuals.

And we randomly split the sample in half.

We randomly said, okay, you're sample one, you're sample two. We did a random split. In sample one, we said, okay, these questions predicted performance. Then we asked: do they also predict performance in sample two? That cross-validation helps us make sure that we didn't just get lucky the first time.

And you can see these correlations here. Not to get too deep into the data, but when there are stars next to these correlations, it means they are significant predictors of performance. They are predicting job performance in both sample one and sample two. So we're able to show that we're consistently predicting performance and didn't just get lucky, didn't just capitalize on chance.
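[The split-half cross-validation Clinton describes can be sketched in a few lines. This is a minimal illustration with simulated data, not the actual study: the item names, effect sizes, and seed are all hypothetical. A genuinely predictive item shows a correlation with supervisor ratings in both random halves, while a noise item rarely survives that check twice.]

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300  # roughly the size of the good-data sample discussed

# Simulated data: one genuinely predictive item, one pure-noise item,
# and supervisor ratings partly driven by the predictive item.
predictive_item = rng.normal(size=n)
noise_item = rng.normal(size=n)
ratings = 0.5 * predictive_item + rng.normal(size=n)

# Random split into sample one and sample two (about 150 each).
idx = rng.permutation(n)
s1, s2 = idx[: n // 2], idx[n // 2 :]

def corr(x, y):
    """Pearson correlation between an item and the performance ratings."""
    return float(np.corrcoef(x, y)[0, 1])

r1 = corr(predictive_item[s1], ratings[s1])
r2 = corr(predictive_item[s2], ratings[s2])

# An item is retained only if it predicts ratings in BOTH halves.
print(f"predictive item: r = {r1:.2f} (sample one), r = {r2:.2f} (sample two)")
print(f"noise item:      r = {corr(noise_item[s1], ratings[s1]):.2f}, "
      f"r = {corr(noise_item[s2], ratings[s2]):.2f}")
```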

Mike: And one of the things that I think is unique about this particular personality test is that you guys wrote all these items from the ground up, specifically for this space.

So this isn't one of those personality tests that asks how much you enjoy long walks on the beach or crowded movie theaters. It's talking about real situations that pertain to this environment.
Can you talk a little bit about that?

Why is that important
and how does that help?

Clinton: Yeah, it makes it feel a little more like the job. We call that face validity: it feels a little more job related to those taking the test.

The way we set up this test, it's what we call a paired, or forced-choice, personality test, where we have two different statements. We say something like "I'm more likely to...", and the left-hand side might say "follow the rules" while the right-hand side says "change the rules." We do have an image of that right here.
Right here is an example. This is a silly one, but let's just say: I am more likely to eat a hamburger, or eat a salad. You have one statement on the left and one on the right, and people indicate where they fall on this continuum of their preference.

We set up the questions on the actual test so they look more job related. For example: I'm more likely to, on the left-hand side, make quick, decisive decisions, or, on the right, make long, thought-out decisions.

Inherently, neither one of those is bad, making long, thought-out decisions versus quick, decisive ones; sometimes there could be benefit to one or the other, depending on the job. But in this particular job, you are typically responding quickly. You need to make quick, decisive decisions.

And so we were looking: are the people who make quick, decisive decisions more likely to be successful in the job, as rated by their supervisors? Those are the types of things we're looking at with these questions. We try to write them in ways that are, like Mike said, more related to the job.
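[The paired, forced-choice format can be sketched as a small data structure. This is a hypothetical illustration: the item wording comes from the examples above, but the 1-to-5 response continuum and the scoring key are assumptions, not the actual CritiCall implementation, in which keying was determined empirically from the validation data.]

```python
from dataclasses import dataclass

@dataclass
class ForcedChoiceItem:
    left: str        # statement anchoring the left end of the continuum
    right: str       # statement anchoring the right end
    keyed_side: str  # side the validation data favored: "left" or "right"

    def score(self, response: int) -> int:
        """Score a response on a 1-5 continuum (1 = strongly left,
        5 = strongly right). Higher returned scores mean the response
        is closer to the empirically keyed side."""
        return response if self.keyed_side == "right" else 6 - response

# Example item from the discussion: quick, decisive decisions were
# the side associated with better-rated dispatchers.
item = ForcedChoiceItem(
    left="make quick, decisive decisions",
    right="make long, thought-out decisions",
    keyed_side="left",
)

print(item.score(1))  # strongly prefers quick decisions -> 5
print(item.score(5))  # strongly prefers long deliberation -> 1
```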

Mike: And that's borne out in the data analysis. How many people did we work with on this? It seems like it was about 300, 320 people or so.

Clinton: Yeah, it was over 300.

Yeah.

And you can see here, in the overall sample, I think we had 370 or 380. When I say good data, I mean people who fully completed the test and for whom we also had supervisor ratings of performance.

So we had some people who completed
the test, but we didn't get ratings

of performance from the supervisors.

And for some people, the supervisors gave us ratings on how they performed, but the employee didn't complete the test. Where we had good data, it was a little over 300 individuals. And you can see, we randomly split that sample in half; we get about 150 in each of those random splits.

Mike: Excellent.

I'm going to close my door.

My neighbor's dogs are barking
at the most opportune moment.

I appreciate that.

Thank you for that.

Jenny, is there anything that's
come to mind right now for you?

Jenny: As I'm looking at this list of performance criteria, and you talked about designing the test: did you do some sort of a job analysis? Is that what happened?

Clinton: Yeah, we did do a job analysis. Actually, at Biddle and with TestGenius, they had done this job analysis previously, a couple of years back, and we leveraged it to identify target areas for potential development of test items.

But then we also had phone calls with a number of different managers of dispatching agencies throughout the country, where we asked them to confirm, just as a second check on what we were seeing in the job analysis results: which soft skills, which personality constructs, are most predictive of success for dispatchers?

And we use that to drive the
development of the content.

And one of the things I like to say: we custom developed this, like Mike said. But whenever you make a test, just like on a sports team, not all the players are all-stars. We developed these questions, and not all of the items worked.

When we originally developed the test, we had over a hundred questions for which we collected data. Some of those questions did not predict performance; it didn't matter which side of the statement people picked.

The data said, hey, people on either side were equally effective on the job according to their supervisors. So those are questions that got tossed. They didn't make the final cut, because it didn't really matter which side you were on; those items didn't predict performance.

And that's an important thing when you do this development: evaluate your items and see which ones are working and which ones aren't. The ones that didn't work got tossed. We said, hey, we thought it might work, but it didn't.

Mike: It's important to know how to use this kind of test. One of the things that we run into is that people have strengths and weaknesses. Even really high-performing people have certain areas of their knowledge, skills, abilities, and personal characteristics that would be considered very high, and they might also have some areas that aren't as strong, because nobody is a superhero.

When we're looking at the results, we
want to be able to take that into account.

And one of the things that I really love about the report is that, Clinton, you and Jason spent a lot of time looking at this overall performance, and then at certain subscales that had validity coefficients, but then also being able to ask probing questions.

We have a saying in our work environment here: not everybody is good at everything, but it's important to understand where your weaknesses lie. It's maybe more important not to think that your weaknesses are strengths, but also to be able to remediate those situations, to understand that something isn't necessarily a strength.

So can you maybe walk us through a sample score report and show us how a hiring manager might use it, how they might interpret it, how they might deal with some of these aspects of the scoring, and maybe some follow-up questions, in order to get the best utility out of this report?

Clinton: Yeah, sure.

We can do that.

Let's pull up that report
and let's walk through here.

So here is a sample report; this is one I just completed. You can see it gives some information about the test taker and their overall test score. We give three bands of scores; trying to remember off the top of my head, they are highly recommended, recommended, or somewhat recommended. You'll see at the top right there, I got a highly recommended score.

We also give the total time that the person took on the test. We give them a 30-minute time limit. As you can see, I've completed this a number of times, but for most people it's probably 10 minutes or less. It doesn't take a long time to complete.

We give them plenty of time because it's not meant to be a speeded test. In other words, we're not trying to say, hey, finish this, you'd better hurry up; if you don't go fast, you're not going to finish.

We have some background here about the report, and what we mention in this background is that the driving factor really should be the overall score: this overall recommendation of highly recommended, recommended, or somewhat recommended. The reason for that is that it considers the most information; it considers all of the test items that make up the overall test.

And what we found from our test, for example: those who are highly recommended have a 77 percent shot of being successful on the job. What we mean by successful on the job is that they were rated as above-average performers by their supervisors. So 77 percent of the people with a highly recommended score, in our data collection, were above average in their job performance ratings. If you drop down to recommended, 53 percent were above average in their job performance. And if you go down to somewhat recommended, only 31 percent of those individuals were above average in their job performance ratings.
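[The band-level percentages Clinton quotes can be turned into a simple expected-value calculation. This is a rough sketch using only the 77/53/31 figures from the discussion; the cohort sizes, and the idea of hiring from every band, are purely hypothetical.]

```python
# Probability of being rated an above-average performer, by score band,
# from the validation figures quoted in the discussion.
ABOVE_AVERAGE_RATE = {
    "highly recommended": 0.77,
    "recommended": 0.53,
    "somewhat recommended": 0.31,
}

def expected_above_average(hires_by_band: dict) -> float:
    """Expected number of above-average performers in a hiring cohort,
    given how many people were hired from each band."""
    return sum(n * ABOVE_AVERAGE_RATE[band] for band, n in hires_by_band.items())

# Hypothetical cohort: hiring 10 people from each band.
cohort = {"highly recommended": 10, "recommended": 10, "somewhat recommended": 10}
print(f"{expected_above_average(cohort):.1f}")  # 16.1 of 30 expected above average
```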

So as you can see from those numbers, again, no test is perfect, but there is value here: as you go further down in those scores, you're more likely to bring someone on who is not going to be as good in their job performance, and the data bears that out.

Mike: I think this is very interesting in terms of having the personality test coupled with the CritiCall hard-skills test, because with the hard-skills test there are certain skills and abilities that we're expecting on day one of the job.

You want people to be able to
hear auditory information and

enter it into a CAD system.

Obviously they're going to get trained
on the CAD system, but they have to have

the core skills and abilities in place.

They need to be able to
multitask under duress.

They need to be able to read maps and understand coordinates and such.

So these things are all very important, and they're deal killers, right? If you possess these skills and abilities, you have the ability to move forward. If you don't, then you really don't have any real reason to continue in the process, because the centers are expecting people to come into this environment on day one.

So if you take that, you say,
X percentage of the applicants

have the skills and abilities
necessary for success on the job.

Now you take just those people and run them through the personality test.

Voiceover: And we'll be right back
after a word from our sponsor.

Ready to revolutionize your HR strategies?

Head over to TestGenius.com to discover
our latest tools and solutions designed

to streamline your hiring processes.

Mike: You're getting a whole other bite at the apple right there, in terms of predictive success in selecting the right person for the job. So you're really putting the odds in your favor that the selection decision you're making is going to be advantageous for your organization.

Is that right?

Clinton: Oh, that's absolutely correct. You did a great job of explaining it in common terms. In every profession, you like to use fancy terms to make yourself sound smarter than you are, and we call that incremental variance.

What that means, essentially, is that those tests, like you said, Mike, tell us different things; that's what we mean by incremental. It adds to the information that a technical test tells us. Technical tests of typing, of entering auditory information, predict differently than a personality test. And so you're adding unique pieces of information that give you basically greater predictive power.
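[Incremental variance has a standard statistical reading: how much additional variance in job performance the personality test explains beyond the skills test. A minimal sketch with simulated scores follows; the effect sizes and sample size are invented for illustration, not taken from the actual study.]

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300

# Simulated applicants: a hard-skills score and a personality score,
# each carrying some independent signal about job performance.
skills = rng.normal(size=n)
personality = rng.normal(size=n)
performance = 0.5 * skills + 0.4 * personality + rng.normal(size=n)

def r_squared(X, y):
    """Proportion of variance in y explained by a linear model on X."""
    X = np.column_stack([np.ones(len(y)), X])          # add intercept
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)       # least-squares fit
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_skills = r_squared(skills[:, None], performance)
r2_both = r_squared(np.column_stack([skills, personality]), performance)

print(f"R^2, skills test alone:          {r2_skills:.2f}")
print(f"R^2, adding personality test:    {r2_both:.2f}")
print(f"incremental variance explained:  {r2_both - r2_skills:.2f}")
```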

Mike: It's interesting, because now I'm starting to understand a little bit more about how this fits in. I think the next part of this report you're going to show is the overall score, and then you have subscales. And I guess those subscales give some of that predictive variance as well. Is that correct?

Clinton: Yeah, they give you some of that predictive variance.

And so, like you said, on the second page we have this overall recommendation again. It was at the top of the first page; here we just visually show you the three options, and mine is highly recommended. You can see we tell you a little bit about this: it was keyed using actual employees. We want to let you know, hey, this is based on data from actual people working in these types of roles. We aren't just making this up and saying we think these people are going to be more successful; this is what the data has shown us.

Now we introduce these subscales, and they can provide unique information that could be useful for probing in, say, a selection interview. One of the subscores we have is dependability. We give you some information on what a higher score on dependability means and what a lower score may mean, and then we give you some potential interview questions that you may want to follow up with in the interview process.

And like Mike said, just because someone scores lower on one of these subscales... Let's say, as you can see on this one, on confident/assertive I'm right in the middle: a 58 out of 100 on the confident-assertive subscale. And I've seen this before, where someone says, ooh, Clinton's only a 58 on the confident-assertive subscale. I don't know; I don't think we should hire him.

What we need to go back to, again, is that this overall recommendation should be the driving decision.

Now, what you can do, because maybe I'm moderate on this, is ask some of the confidence-related follow-up questions that we provide on this page as part of the interview questions: hey, Clinton, tell me about this; tell me about a time. Follow up on those interview questions to see if there are any flags there, or training needs, or to ask, is this going to be a problem if we bring him in?

Mike: That kind of goes back to what we were talking about earlier, being aware of strengths and weaknesses, right? Or not perceiving a weakness as a strength.

One example that we use in this dispatch space: one of our tests gives the ability to collect a voice sample. If you collect a voice sample from someone, you've never talked to them before; they're just an applicant at that point. And you realize that they talk really softly. You want to jump out and say, listen, in this role you need to be able to speak with authority. But it isn't necessarily problematic that this person speaks really softly. What matters is: are they aware that their natural tendency is to speak softly, and do they have the inherent ability to ramp it up when the time comes?

And I think that's the same with
some of these right here and some

of these follow up questions.

Clinton: Yeah, and I also like to point out that those follow-up questions ask whether people are aware of their weaknesses, but also, oftentimes, what I call our strengths become our weaknesses. What I mean by that is, let's say I know I am assertive, and I'm high on assertiveness. That's great: he's going to be really assertive on these phone calls. But oftentimes it can become a crutch; you maybe become too assertive at times, become dominant. Just because you have a strength, you also have to be aware of it, because strengths overused can become weaknesses. Knowing these things, you can also probe on some of the high scores. You say, hey, really good, he's really assertive, but are there times, maybe, where he's overbearing?

Mike: Overly assertive.

How about that?

That's right.

Yeah.

That's just another really important reason to dissuade somebody from looking at that number and making a decision, because it's a description of an aspect of their personality, not necessarily an indication of what you need to do. It's an indication that you should drill down a little more closely to be able to make that best decision.

Clinton: Yes, yeah.

And like I said, overall, if you are making a go/no-go decision, let's just say you have a hundred applicants and you just can't afford to interview them all. The driving decision on who you're going to interview, again, should be the overall recommendation score, not the subscores. When it comes to saying, hey, we can only interview 20 of these 100, focus on the overall recommendation, and then drill down on the subscales with some great interview questions.

Mike: That's a good point. So think about it: let's say you had 100 and you drilled down to 20.

You bring those 20 in; that's a lot of people to interview. And if you're only hiring for three or four positions, you've got to make some really important decisions. So you need additional information.

A lot of times in our case, if you have a score, a data-entry score or some sort of map-reading test, and one person is 97 percent and somebody else is 96, there's not a big difference between somebody scoring 96 or 97 on a map-reading test. It's not enough to make some sort of distinguishing decision between person one and person two. Really, these are the kinds of things you want to use to determine, of these 20, which are the people we're going to move forward to whatever the next part of our recruitment is, even if it's hiring.

Clinton: Yes, that's a great point, Mike. Often tests are not precise enough to draw conclusions from small point differences. It's like that example, 96 versus 97 on the map-reading test. Now, if someone's a 60 versus a 97, that's far enough apart that you can say, hey, someone struggled a little more. But when one person is a 70 and another is a 71, they're basically, statistically, the same.

Mike: If we want to sound really smart, we would say performance-differentiating, right?

Clinton: Yes.

Yes.

Mike: Very cool.

Jenny, anything that you can think of that you want to talk about here?

Jenny: Just a comment.

As I look at this report, I imagine myself as the one doing the hiring, needing to screen candidates out and sit with them. What a great tool this is. What a time saver to be able to look at this report; I have questions that I can ask.

And then the other thought I had was, I wonder if I might have a tendency to try to weight the subscores, to decide that one is more important than another. We've already talked about this, but I could really see that tendency, to think that one is more important than the other.

Clinton: Yeah.

Jenny: And

Clinton: people, you have
to fight that tendency.

Mike: Yeah, and then
go with the questions.

The questions are going
to give you the meat.

That's really going to, that's really
going to tell you what it is that

you, what you wanted to uncover
as you go through that process.

Now, this test has been
criterion validated.

And why don't you talk a little bit about that?
When something is criterion validated, a

lot of people call that the gold standard
of validation, because it literally

predicts some sort of job performance
that's correlated with the test scores.

But one of the things that's really
advantageous about that is that the

Uniform Guidelines do allow this
criterion validation, for which, shoot, you

guys wrote a 280-page validation report.

So our clients are able to adopt that
validation report as their own, which

will give them all of the defensibility
aspects that come with that.

They're able to adopt that to their
own environment through a very

simple process that actually takes
only a couple of minutes for them

to go and transport that over.

Can you talk about that a little bit?

I think that's very
useful to our audience.

Clinton: Yeah, sure.

So most organizations,
they may say, that's great.

I would love to do this criterion
validation study ourselves, but

we only have five dispatchers,
or 10 dispatchers, and that's not

enough to really get the data.

Good news is we did the heavy lifting
for you, like Michael says, by having

a multi-jurisdictional criterion
validation study where we had more

than 300 individuals participate, and
we showed the relationship between

test scores and job performance.

Under the Uniform
Guidelines on Employee Selection

Procedures, section 7B, there's a
simple process to ask: is your dispatching

job substantially similar, in terms of
the major work behaviors, to where

we did the original validation study?

And we have a document for you where we
have listed out the work behaviors, and

you can see here, major work behaviors,
and we have it both for the technical

skills component of the CritiCall test.

So like more of the map reading,
audio comprehension, and also for the

personality component. And all you have
to do is have your employees,

your subject matter experts, or your
current dispatchers or supervisors of

the position, indicate how similar those
are to the job at your organization.

And if there's substantial similarity,
and this will auto-calculate for you,

it'll say, yes, you have enough here
to be able to transport the validity.

And so you don't have to do your
own kind of criterion validation

study with hundreds of people
that you don't even have.

And we've done that for you.

Mike: Now, and this right here is actually
from CritiCall, the hard skills test.

So you can see these work behaviors
are concrete work behaviors,

reading, comprehending, computer
use, data entry, et cetera.

We also have a sheet for the personality
test, which targets the work behaviors

that you and Jason targeted in our
test development validation study.

So it would be softer skill work
behaviors that pertain to the

personality testing aspects of that.

And it looks to me like basically each
organization's going to have

a series of subject matter experts, and
they're just going to agree or disagree

that these major work behaviors are
similar to those that are required in

their environment and from what I've
seen in using this transportability

tool over many years with people, the
universality of the dispatch environment

is such that everybody goes through this.

And yeah, of course, we
have to be able to do this.

And of course, we have to do this.

It's a really high likelihood that the
criterion validation that was established

is going to translate or transport over
to any other given organization, even if

it's a very small rural organization, or
if it's a very large urban organization.

Clinton: That is one of the great things
about this job: like you said,

it is very specialized, and
it's very similar across organizations.

Sometimes when I've done some tests,
let's say, for example, for software

engineers, those can
vary widely depending on organization

size or what exactly they're coding
or what software they're working with.

And there's a lot of differences, but the
dispatcher job, like I said, tends

to be pretty similar, and so that's a good
thing: we usually find people are

saying, yeah, this is accurate, and we do
this, and we can transport that validity.

Mike: Yeah.

Excellent.

And then we can't see all this
spreadsheet because there's

just too much stuff on here.

But essentially what happens is you
go through and you just, you get in

a room together with your subject
matter experts and you say, okay,

number one: rate it one through five, five
meaning it's exactly like my job,

and one meaning it's not at all like it.

And everybody shouts out their ratings.

One person enters it into there.

And at the very end of the
process, there's a box at the

bottom that's red, and it turns green.

The green signifies
that the work behaviors

are considered to be transportable.

Thanks for slicing and dicing this, Jenny.

I really appreciate that.

But it's as easy as that.

Number one, number two, number
three, number four, number five.

And then when they get to
the end, it turns green and

you can print a copy of that.

You can actually take our validation
report that Clinton and Jason authored,

and you can attach your transportability
report, easy for me to say,

to that validation report, and you can
then own that validation report for

this position for your organization.
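The auto-calculation in that worksheet, SME ratings averaged against a similarity cutoff, can be sketched like this. This is an illustrative sketch only: the 4.0 threshold, the averaging rule, and the sample behavior names are assumptions, not the actual tool's internals.

```python
# Sketch of the transportability worksheet logic: each subject matter
# expert rates each major work behavior 1-5 (5 = exactly like my job,
# 1 = not at all), and the result "turns green" when overall
# similarity clears a threshold. The 4.0 cutoff is hypothetical.

def transportable(ratings_by_behavior, threshold=4.0):
    """ratings_by_behavior: dict mapping work behavior -> list of SME ratings.
    Returns (is_transportable, per-behavior averages)."""
    averages = {
        behavior: sum(scores) / len(scores)
        for behavior, scores in ratings_by_behavior.items()
    }
    overall = sum(averages.values()) / len(averages)
    return overall >= threshold, averages

ratings = {
    "Reading and comprehending": [5, 4, 5],
    "Computer use / data entry": [4, 5, 4],
    "Multitasking under stress": [5, 5, 4],
}
ok, avgs = transportable(ratings)
print("GREEN" if ok else "RED")  # GREEN for these sample ratings
```

The point of the sketch is just how little the agency has to do: collect ratings from a handful of subject matter experts, and the worksheet's arithmetic does the rest.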

And I would like to say, if
someone was to call you up to do

this boots-on-the-ground, from end
to end, at their particular center,

if you were going to show up, you were
going to do a job analysis, you were

going to do test development,

you were going to do criterion validation,

if you were going to do
all this stuff on site,

that would be a really
expensive process, wouldn't it?

Yeah.

Clinton: Yeah.

So a conservative
estimate is at least $30,000.

Mike: That's amazing.

And we'll do it for half
that, right, Clinton?

Clinton: Yeah.

Yeah.

So just half, yeah.

Mike: So that's quite
an incredible savings.

And then, talk a little bit about
the defensibility aspect of this.

You know, we didn't
talk a lot about validity here;

we're going to actually
cover validity on a future podcast.

We didn't really talk a lot about
the development of the personality

test here; we covered it a little

bit more in episode one of this
series, if you're interested in it.

When you go through this validation
process for this kind of test, or any

other kind of test, the way I describe it
is that it's like a coin with two sides.

And on one side is the utility and
the utility speaks to how well does

this selection instrument work to

bring in the people that are going to be
successful on the job in some way or shape

or they're not going to steal from you
or they're not going to quit immediately.

They're not going to be disruptive
or whatever your criteria is.

Then on the other side of
that same coin opposite the

utility side is defensibility.

And defensibility speaks to, in
a worst-case scenario, if your

selection process gets questioned by
some human being with an attorney or some

legal authority, how well are you going
to fare in that particular instance?

So can you talk a little bit about all
the work that we've done and how that

benefits each organization in terms of
the utility and defensibility aspects?

Clinton: Yeah and that's a great way
of putting those two sides of the coin.

Mike, I always tell clients, whenever
we work with them on validating tests

or implementing hiring tests, that the
goals, I always say, are 1A and 1B:

making sure you're hiring the best people
for the job, and making sure you're

doing so in a legally defensible way.

And that's what you want to do is
you want to have both of those.

You should be able to have
some legal protection.

So if someone does challenge your
hiring process and says, hey, you didn't

hire me because I am a white male,

or you didn't hire me because, I don't
know, some group, I'm too old.
We want to be
able to defend it and say, no, we

didn't hire you because you didn't
have the right technical skills.

You couldn't do these things.

We showed through our
validation efforts that people with

lower scores on this test
are less likely to succeed.

And so you were screened out because
of that and not because of your age

and not because of some other issue.

And that's what we provide.

I haven't gone into this too much,
but the Uniform Guidelines on Employee

Selection Procedures provide a playbook.

And when it comes to personality tests
and criterion validation, Section 15B

of the Uniform Guidelines on Employee
Selection Procedures provides a list,

a list of the things that you need to
cover when doing a validation study, or

that a test creator or vendor should
cover if they're developing a test.

And in the report that we have, at
the very beginning of the report,

we have a list of all the requirements
of Section 15B of the Uniform Guidelines,

and we indicate which page or pages of
the report cover those sections of the

Uniform Guidelines, to show that we're
addressing all of the requirements, that

we're doing our homework, and you're covered.

And so if someone does challenge the
process, this is a nice report that then

can be provided to lawyers and people
who are going to get in those rooms and

fight those battles to say, Hey, we've
done our homework here and we're making

decisions based on good information and
not because we just don't like people

from a certain protected group status.

Mike: On the prior episode where we
talked generically about personality

testing, I used an example from this
9 1 1 police, fire, EMS space where,

you know, being at the national
conferences and having an opportunity

to talk to center directors there's
this universal understanding among them.

If they've been around long enough, they
can sit down with a human being for 5

or 10 minutes and talk to them, and they
can determine whether or not they're

going to be successful on the job.

And I never, I never argue with these
folks because I know that they're

right about that, that they have that
uncanny ability to be able to sort

out who's going to be successful and
who isn't going to be successful.

We would never recommend that they
use that as a strategy because

it's fraught with pitfalls, right?

The first one is that you don't have
a paper trail that shows why you

selected one person and you didn't
select another person and that in

itself can be quite problematic.

And so really, when we set out to create
CritiCall, what we did, without even

really knowing it, is we created it to
be a proxy for that center director who's

able to spend time with an applicant
and in a similar way, know whether or

not they're going to be successful on
the job, but to do so in a way where

it's completely documented step by step.

Here's the reasons why
they should move forward.

Here's the reasons why they
shouldn't move forward.

But furthermore to that, what we find,
and I would say particularly in this

social media environment or day and
age, is that it's so important for

organizations who are hiring to be good
citizens, good stewards in their realm.

And so what this CritiCall test, and
even your personality test that's in

the CritiCall test, does, and does so
well, is it affords the applicant an

opportunity to find out, not only
for the organization they're applying

to, but for themselves.

Am I suited for this job? And it does so
in a way that is really satisfactory.

In fact, what we see with our testing
is that if you're like me and you

can't multitask, you hate the test
so much that you pretty much get up

and walk away, don't even finish it.

And what you do is you walk away
saying, Oh my gosh, I dodged a

bullet because I would hate that job.

I'm so glad I didn't, get that
job so I could work Christmas and

weekends and graveyards and all that
stuff that I really also don't like.

So these kinds of well-constructed tests
really serve both of those masters, in

such a way that the organization is left
feeling very good about the process

they have in place, and the applicant,
whether or not they got hired, is left

feeling that they had a fair shake, maybe
even more so than with some other jobs.

If you're not suited for this environment,
you really have no disagreement with that.

So do you have any thoughts about that?

Does that resonate with you at all?

Clinton: Yeah, it does.

And I have
a couple of thoughts.

One is, I think it's important, like
you said: candidates will often

self-select out when you have good
selection processes that are face valid, that

kind of look and feel like the job.

And like you said, it's not only the
organization that should be figuring out

who's a good fit for the organization;
you as a job applicant should be trying

to figure out, is this a good fit for me?

And that test, it also
helps to allow for that.

My other thought was,
like you talked about, we're

providing what I call
a nice, standardized process

for organizations to
make hiring decisions.

And while it's not often sexy, people
aren't often saying, ooh, look at

that standardization you have in
your process, and really admiring it.

That standardization is actually what
helps and saves a lot of organizations

and actually what can build in some of
that predictive power, because oftentimes

one of the problems, like you said, Mike,
with those individuals who've been in the

job for years and say they can recognize
who's going to be the best performer,

is that problems occur when
there's a lack of standardization.

If they don't use tools like this, all
of a sudden they're free-flowing:

they go in one direction with
one applicant, they don't go in that

direction with another job applicant,
and that lack of standardization is

where problems can be introduced into
the process. A candidate may complain

and say, I didn't get a chance to show
what I could do, because they asked

this other candidate questions
that they didn't even ask me.

With a standardized process, we're
giving all the candidates a fair

playing field to show their skills
and their abilities and their fit

for this job.

There's a lot of value
in that standardization.

Mike: Yeah.

And from the utility aspect as
well: that's the defensibility

part, but the standardization also gives
you more utility in your selection

process, which is equally important.

That's the other side of that same coin
that we talked about earlier.

Excellent.

That's great.

Jenny, did you have anything else
that you wanted to talk about before we

end?

Jenny: Yeah, actually,
since we're winding down.

This has been a, an amazing
discussion, and I just want to just

say that as somebody who's taken the
personality test and the CritiCall

test, it's just not my space to work in.

And I can tell you, I have self
selected and I'm so grateful for those

who can do such a high stress job.

We definitely need you. If you're
watching this, we need you, and thank

you for the work that you're doing.

So Clinton, as we wrap up, why don't
you just briefly tell us, in a couple of

sentences, about ioPredict, who you
are, and how people can reach you.

Clinton: Yeah.

ioPredict, we do test validation.

We also do test development from
scratch, like custom test creation.

And we work actually with Biddle and
Mike, and we worked on this CritiCall

test, personality test for dispatchers.

And we partner on other projects as well.

ioPredict.com is our website.

You can reach
out to us there.

But we also work with organizations
that have adopted a test

from another vendor, but they say,
hey, we don't know,

did they really put this together?

And they reach out to us to do validation
studies, or to be that other source

to review and to implement tests
that maybe other people have created.

So we do that as well.

Jenny: Thank you.

And we'll have all of your contact info
in the show notes for this episode.

And if you haven't watched our first
podcast episode with Clinton,

I just want to encourage you to go
back and check that out as well, as we

talk about personality testing more
in general there, not

CritiCall-specific personality testing.

That was a mouthful.

Hey, thank you so much Clinton
for just not only for joining us

for this episode, but also you
joining us for a previous one.

And thank you for our listeners.

Thank you for our viewers.

Clinton: Thanks for having me.

Jenny: Yeah, this has been fantastic.

Thank you, Mike.

Mike: Thank you.

Thank you very much, Jenny.

Great job.

And thank you, Clinton, for joining us.

ioPredict is a really
important partner of ours.

They, Clinton and Jason, who are
the principal consultants there,

worked for us for many years.

They started up their own firm
and we have, continued to work

together very closely all along.

And we actually really think of
them as a branch of our company.

And now that doesn't mean we'll be
paying your taxes for you, but we

certainly think of you in that regard.

And we hold you guys in high esteem.

So it's really great to have
you here hanging out with us.

Clinton: Thank you.

Yeah, we feel the same way.

Mike: Thank you.

Jenny: Wonderful.

Thank you everybody.

Mike: Have a great day.

Voiceover: Thanks for tuning in to
Testing, Testing 1-2-3 brought to you by

TestGenius and Biddle Consulting Group.

Visit our website at testgenius.com
for more information.

Creators and Guests

Jenny Arnez, Host
Training Development and Sales Support at Biddle Consulting Group & TestGenius

Mike Callen, Host
President of Biddle Consulting Group and TestGenius

Clinton Kelly, PhD, Guest
Principal Consultant with ioPredict, specializing in test development and validation, including validation of IT coding assessments.