Exploring Behavioral Testing with Logi-Serve
E1

Podcast-Intro: Welcome to
Testing, Testing 1-2-3, a podcast

brought to you by TestGenius.

Jenny Arnez: Welcome everybody.

My name is Jenny Arnez.

I'm your host for today's podcast.

I have two gentlemen with me today.

I have Mike Callen.

He's our VP of products at TestGenius.

I'm also part of the TestGenius team, and we have Dr. Chris Cunningham.

He's the chief science
officer from Logi-Serve.

Mike, you want to go ahead
and introduce yourself?

Tell us a few things.

Yeah.

Mike Callen: Hi, thank you, Jenny.

I'm Mike Callen, as you said, and I'm the VP of products at TestGenius, which is a product of Biddle Consulting Group.

And TestGenius is a skill and ability testing platform that handles a lot of different positions. We have been mostly engaged in concrete, hard skills testing over the years, and through an integration with Logi-Serve, for whom Chris Cunningham works, we now have the ability to add a really substantial library of soft skills tests, or behavioral tests.

And so with that, we want to turn it
over to Chris to introduce himself.

Chris Cunningham, PhD: Yeah, thanks guys.

I appreciate that.

Yeah.

So I'm, I'm Chris Cunningham.

I'm the Chief Science
Officer at Logi-Serve.

And what that means is I use a background in industrial and organizational psychology to guide the development of our assessments and the ongoing validation and management of those different testing libraries, and the way that they're applied with our clients.

And as Mike alluded to, we are focused more in the soft skill space with these assessments.

We tend to focus on competencies,
which you can think of as

complex individual differences.

They're more than just traits,
but they're not states.

So what that means is they're fairly stable within individuals, but they can be developed with concerted, focused effort. And these particular competencies that we target tend to focus on behaviors associated with a person's underlying service orientation.

And that's one of the reasons why this
partnership is potentially so powerful.

Jenny Arnez: Wonderful.

Mike Callen: Absolutely.

Jenny Arnez: Chris, can you tell
us a little bit about Logi-Serve?

Chris Cunningham, PhD:
Yeah, I'd be happy to.

And so again, just for the visual
learners out there here's, here's a

bit of a representation of some of the
things we've been up to as a company.

We've been around for
a little over a decade.

We are a provider of talent management and assessment products that we have developed in-house and that are purpose-built to help organizations take a more competency-focused approach to their screening, hiring, and development of talent.

As you can see on this particular slide, we've developed a lot of big partnerships over the years, and we've become really well recognized for adhering to best practices and making sure that it's science first, with a heavy amount of attention on utility and usability for end users.

So the company actually came into being after both my partner and CEO, Eric Krohner, and I had spent a while working as a value-added reseller.

We used to package and market other vendors' assessments.

And we recognized that there were some gaps, some features missing, and a lot of limitations on the usability of any of these other platforms.

And so about a decade or so ago, we got together and thought, well, what if we were to reimagine this?

How could we create an assessment
that was more engaging for

candidates and also easier to use
for recruiters, hiring managers, etc?

And that was really kind of
the origin of Logi-Serve.

Mike Callen: It's very interesting.

Hey, Chris, you, you
mentioned traits, not states.

And I, and I really like that.

That's an issue that I think a lot of us
who are not IO psychologists wonder about.

You know, how, how do we know when
something is a trait that is mostly

inherent, but can somehow be developed
versus some sort of aspect of behavior

that's just happening in the moment
or inspired by the situation, but

not necessarily something that's
a part of a person's personality?

Chris Cunningham, PhD: Yeah, that's a great question, and it is a difficult one to parse out because, you know, that distinction between trait and state is very popular among researchers in personality and individual differences, but in practice, it can be quite difficult to know or see the difference.

One way to look at it is that an individual difference that's more of a state, or a kind of context-specific difference, is one that only shows up in reaction or response to specific stimuli or a specific set of contextual factors.

And as a result, that characteristic
might disappear or show up in a completely

different way if a different set of
stimuli or different context is provided.

What you see then in an actual practice setting is that that aspect of a person's behavior might not be very consistent across different situations or contexts, because it is a response or a reaction to aspects of that environment.

In contrast, a trait, something that's
a little more stable, is generally the

type of individual difference that is
a little bit more universal, a little

bit more consistent, generalizable.

It's the kind of aspect of a person that you tend to see regularly across different contexts.

And so usually when we think about who a person is, or, like, what they bring to a job especially, we're really quite interested in the aspects of their personality and their individual differences that are more consistent, because that's more of a reflection of who that person is and what they're capable of.

Mike Callen: That's very interesting.

I really appreciate you sharing that.

It's not something we kind of had
on our bulleted list to discuss.

But it's, it is definitely
pertinent in terms of, you

know, the recruitment situation.

It also occurred to me as you were describing it that there is, to me at least, a real similarity between the hard skills testing that we do and the behavioral or soft skills testing that you folks are targeting.

We would love to wrap our minds around, you know, those things that are traits, so that we can depend upon the results of our testing.

And so it's really great that, as you sort of kick off this discussion, you're doing so in a way that lets us know you're targeting performance dimensions, or aspects of personality, that lead toward specific performance on the job and that are more concrete, less temporary.

Chris Cunningham, PhD: So, yeah, I would say that that's a good way of summarizing it.

And I would also just add, you know, with this idea of individual differences ranging from completely fluid or situationally specific to completely static and fixed, there's quite a lot of variability in there.

And so that's actually one of the reasons why we got so interested in behaviorally oriented competencies in our own assessment development, because we really wanted to help organizations identify, to put it simply, individual differences that matter.

So we were less interested in abstract trait labels that are just maybe, you know, general descriptors of a person, and more interested in aspects that could be closely linked to a motivation or a likelihood of engaging in a specific performance-related behavior. And especially when we target a domain like service, or interacting with others on a regular basis, there's an opportunity to think really concretely, as you said, about what good service looks like.

Like, what does it take to demonstrate service on a day-in, day-out basis in a variety of roles?

If we can fixate and focus on those characteristics, what's really interesting is we can assess a person's underlying sort of propensity or likelihood of engaging in that.

We can also give hiring managers and
other talent leaders a chance to see

sort of the bench strength of their
team and to understand, you know, where

might we want to invest in a little
training and development to help shore

up some of these different behavioral
tendencies and skills that are needed.

So I think maybe an even simpler way
of thinking about how these two general

products support each other is if
you've got this underlying propensity

or competency, it really helps explain
that motivation, that intention,

that attitude toward demonstrating these different behaviors.

But to do that effectively, you
also might need skills to be able

to demonstrate these underlying
competencies in a way that comes

across as aligned with the specific
performance requirements in the job.

So a one-two punch might be a simple way of thinking about this.

Mike Callen: Great.

Yeah, that's, that's very, very
interesting and very helpful.

Jenny Arnez: Yeah, I had a chance to actually take a Logi-Serve test. But before I ask this question, why don't you explain a little bit about what the Logi-Serve tests are?

Chris Cunningham, PhD:
Yeah, no, absolutely.

So in a Logi-Serve assessment, and I'll flip to another graphic here so you can get a sense of how this looks.

In a Logi-Serve assessment, we target
a number of competencies that we've

identified through careful study
and review of the evidence base,

the literature that's out there.

In the domain of service, there's close to 60 or 70-plus years of really well-done research looking at what it is that explains an individual's ability or propensity to serve others well.

And we've consolidated that and
boiled that down to nine core

behavioral competencies that you
see listed around the outside gear

in this particular graphic here.

These competencies include things like
communication, adaptive problem solving,

interpersonal skills, motivation to
serve, self-efficacy, and so forth.

But what makes the assessment
unique over and above that focus

on these generalizable competencies
is the way in which we assess them.

So unlike maybe a more traditional form of assessment, where you might see, you know, a lot of items all in a set and you have to rate each one on a scale of maybe strongly disagree to strongly agree, we have broken the assessment into three different sorts of activities.

In the first one, candidates are asked to reflect on past experiences that they've had in life and in work settings that would have allowed them to build competence, proficiency, and mastery in these different domains.

And then we ask them some questions about different attributes of their personality that they tend to display or show regularly, which are ways that these different competencies might surface or manifest to others on a day-to-day basis.

And then the third protocol, or component of the assessment, is a series of situational judgment or scenario-type questions.

And these are designed in a way that partners the written scenario, the text, with digital art that illustrates and represents the actual work environment that the assessment is being used for.

And with this particular last piece of the assessment, individuals are now not just telling us who they are and what they've done; they're actually showing what they would likely do.

It's a way of getting into the
intentions and the actual capacity

somebody has to make a proper behavioral
choice in the actual work setting.

The way the assessment wraps up is that the data gathered across all of these different prongs is then integrated together

to generate scores that reflect a
pretty deep understanding of the

relative strengths and weaknesses of
each candidate with respect to all

nine of those targeted competencies.

Mike Callen: That's very interesting.

I think one of the things that I noticed in taking the Logi-Serve assessment as well is that it's an avatar-based product, right?

You have sort of these illustrated
scenarios so that one can get

an idea of what the situation
is that they're operating in.

Oh, good.

You've got some.

Fantastic.

So in this situational judgment phase, the test taker is presented with a scenario and asked, you know, how would you respond in this particular situation, which I think is pretty typical.

One of the things that I really like about your product, and I sort of suspect that this might be a big part of the secret sauce, is that there's a follow-up that asks how the caller or customer, or, you know, the operative person that you're dealing with in the particular scenario, is going to respond.

Or how are they going to understand your response, or take your response?

I'm probably not using the right word, but it seems, you know, one part is directly about what's the best way to operate in this scenario.

But then when you add this other aspect of it, it seems to be really more of an empathetic type of situation.

You have to respond the way that
you're supposed to respond, but how

is it going to make that person feel?

Am I right about that?

Is that a big part of your secret sauce?

Chris Cunningham, PhD: Well, you're
right in recollecting that that's

a big component of these scenarios.

And, you know, it's interesting because
all of these prongs that I just reviewed a

minute ago, they're all so deeply entwined
that it's kind of difficult to tease

apart what the real secret ingredient is.

But I will tell you that it's
definitely a big component of it.

The reason is, as you've described,
Mike, there's there's kind of like a

two stage process to our scenarios.

Okay.

So with this type of scenario based
testing, it's really common to see

items where you're presented with
a scenario and asked, what should

you do or what's the best response
or what's the worst response?

That's not the way we approach it.

Instead, we ask, we give individuals
a scenario and then we give them four

choices and that the behavioral choices
or the options have all been graded in

terms of appropriateness that situation.

So it's kind of the good,
bad and the ugly responses.

And we look for the likelihood that each
candidate would respond to that situation

in each of the ways that are outlined.

So right off the bat with our item sets, you're not just learning what somebody thinks the right answer is; you're actually getting a more complete profile of what the person would likely do and would not likely do.

Okay, so that's that's stage one.

We get a more complete picture of
what we would expect to see out of

that candidate in that scenario.

And then the second stage is, let's see if this person is able to think through the consequences of their action.

So here we get a chance to kind of
explore or peek under the hood of

their decision making capabilities.

And in a service provision context, that's
so critical because so much of providing

good service is about maintaining a
relationship with the other individual.

Even if it's very short term, there's
this development there of a connection.

And whether or not an individual is able to kind of think through, okay, well, this is what I said I would most likely do of those options; now, what do I think the consequence or the reaction of the customer, or the patient, or the client, or the guest on the other end, is going to be? That allows us to see a little bit further.

So we refer to this as, like, service forethought, or the ability to sort of anticipate: where does the story go from here?

And so at times we've also referred to our scenario portions as like a storyboarding activity, because in essence, that's what it feels like to the candidate: there's a little snippet of a story unfolding here, and they're getting a chance to jump into that role for a minute, kind of explore it.

And then they move into a different story
that targets a different competency.
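For readers who like to see structure laid out concretely, here is a minimal sketch of the two-stage scenario format Chris describes: four graded behavioral options that the candidate rates for likelihood, followed by a forethought question about the other person's reaction. Every field name, option, number, and the scoring rule itself are illustrative assumptions, not Logi-Serve's actual item format or scoring.

```python
# Illustrative sketch only. Logi-Serve does not publish its item format or
# scoring rules in this conversation, so every field name, option, and number
# below is a hypothetical example of the two-stage scenario structure.

from dataclasses import dataclass

@dataclass
class ScenarioItem:
    prompt: str                    # written scenario shown alongside the digital art
    options: list[str]             # four behavioral choices
    appropriateness: list[float]   # expert grading of each option ("the good, the bad, and the ugly"), 0-1
    best_reaction: int             # index of the most plausible customer reaction (stage-two key)

def score_item(item: ScenarioItem,
               likelihood_ratings: list[float],  # stage 1: candidate's likelihood of doing each option, 0-1
               predicted_reaction: int           # stage 2: which customer reaction the candidate anticipates
               ) -> float:
    """Toy scoring: reward likelihood profiles that track the graded options,
    plus a bonus for anticipating the consequence (service forethought)."""
    # Stage 1: alignment between what the candidate says they'd do and how
    # appropriate each option actually is.
    alignment = sum(l * a for l, a in zip(likelihood_ratings, item.appropriateness))
    alignment /= sum(item.appropriateness)  # normalize to roughly 0-1

    # Stage 2: did the candidate think through where the story goes next?
    forethought = 1.0 if predicted_reaction == item.best_reaction else 0.0

    return 0.7 * alignment + 0.3 * forethought  # arbitrary illustrative weights

# Example usage with a single made-up item.
item = ScenarioItem(
    prompt="A caller is upset about a delayed order...",
    options=["Apologize and offer options", "Transfer immediately",
             "Explain the policy only", "Blame the shipping carrier"],
    appropriateness=[1.0, 0.4, 0.3, 0.0],
    best_reaction=0,
)
print(score_item(item, likelihood_ratings=[0.9, 0.2, 0.3, 0.0], predicted_reaction=0))
```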

Mike Callen: Yeah, that is really very much based upon the response that they choose, however they're reacting to the situation.

So to me it seems like it shows a degree of competence, but then if you're also able to understand how someone's going to respond, it probably tends to suggest that they're going to be better at continuing that storyboard throughout time.

Yeah, very, very cool.

Chris Cunningham, PhD: And I'll also note, with this approach to testing, I've got to remind everybody that we designed the whole test to be the kind of assessment that a candidate would actually finish, because one of the other problems that plagues assessments right now is that many individuals will start, but not everybody will finish.

With our assessments, we generally see completion rates above 90 percent, and for some of our clients it's 98 percent, which is really high in this industry.

And part of the reason it's so high is
that everything is structured in a way

that it kind of grabs the attention
of the candidate, pulls them in, and

they want to see what happens next.

And before they know it, they're done.

Because the whole assessment, I mean, it only takes most people on average less than 20 minutes.

Usually it's around 15 to 20 minutes, and that's a really focused assessment. Yeah, it requires more than two seconds, but you're also getting a lot of information out of it.

And the candidates feel it; they can see the relevance of the questions.

It's not abstract.

It's not way out there
in the ether somewhere.

They can, they can understand why
you want to know this about them

before you make a hiring decision.

Mike Callen: Yeah, absolutely.

And that face validity is so satisfying for, you know, job seekers, versus, you know, the traditional personality tests.

I think the classic is, isn't it at the combine that they ask whether the players prefer hot dogs or hamburgers, or some, you know, silly things like that?

Do you like to be in
crowded movie theaters?

Or do you like to go to, you
know, parties where there's

lots of people and loud music?

You know, what does that
have to do with this world?

And there might be some strong correlations with behavior, but it doesn't feel really satisfying.

Whereas this feels incredibly satisfying.

And I think that's such
an important aspect.

Chris Cunningham, PhD: So, you know, you bring that up, Mike, and one of the ways we've explained this to a lot of our partners and clients in the past is using a graphic like this, and we try to help people understand that when you talk about the validity of an assessment, what you're actually talking about is the ability of that assessment to inform your ultimate decisions, so that your decisions are good.

So this is what's meant when people say validity is a property of inference, right?

It's not actually something
that's baked into the test.

It's what happens after the test.

Well, for that to work, you know,
not only do the candidates have to

appreciate why they're being asked
what they're being asked, because

then they're likely to respond more
honestly and truthfully and completely.

But your hiring managers and recruiters
also have to believe that the test

is measuring something relevant.

Otherwise, they're not going to use it.

You know, and then you also want to
make sure that the test is indeed

measuring what you say it is.

And that's where what's called construct validation comes in.

And that piece is completely
absent from a lot of the other

assessments on the market.

And then, of course, you need to
be able to show that scores on the

assessment do predict or explain
variability in performance outcomes.

And that's that criterion part.

So what we've done over the years is we've built a glass-totally-full approach to this challenge.

And we know that that's not necessarily required in all contexts, and we also know it's not necessarily the norm.

But again, one of our goals is to try
to help people appreciate the value and

the importance of assessment science.

And that's, that's really been one of
our guiding principles from day one.

Mike Callen: It's very impressive.

We have some experience in the past with an office and interpersonal competence test that we created, and we decided to do, you know, very high-fidelity video situational judgment, where we filmed the scenarios and such.

And it's been shocking to realize what the shelf life is on something like that. You know, you can spend an awful lot of money creating something like that only to find that, you know, the hairstyles and the fashions change so quickly that maybe the juice isn't necessarily worth the squeeze to go very high fidelity.

And I would admit to you that I've tended to be a little snobbish about avatar-style testing, because to me it always seemed a little cartoonish in comparison to strict video situational judgment. But it's not; it's not that way at all.

And moreover, I think that it's abundantly more updatable.

And in fact, as you can see in some of these examples here, you're actually able to customize certain elements within the scenario for, you know, marquee clients.

I'm sure that's an upgrade.

But that's really some spectacular examples of, you know, what it is that you can do here.

So, yeah, thank you for toggling through.

Chris Cunningham, PhD: Yeah, one of the things I want to share with you is we took years actually figuring out whether we were going to go video, whether we were going to use digital art, or whether we were not going to do any illustration at all.

And in addition to the cost
issue that you outlined, Mike,

there's also a speed issue.

You can't update video as fast
as you can update digital art.

There's a major cost issue.

There's also a delivery issue.

So like this assessment platform is meant
to be optimized for mobile delivery,

meaning over a mobile device, which, you
know, in some parts of the world, there's

still not widespread broadband access.

So delivering a video can really slow
things down and create a barrier.

But with these images, you know,
they can pop up and they look

consistently good on any kind of device.

I'll even share one quick
kind of funny story with you.

One of our very first clients, when
we configured the scenarios for them,

they were a larger retail operation.

We got everything set.

They loved it.

And then about a month later,
they did an org change.

And one of the biggest things
that happened is the color of the

uniforms changed from blue to red.

And they're like, we can't have
blue shirts in these images.

Like, Oh my gosh, I think we're
going to have to stop using the

assessment, but we really love it.

We're like, well, wait a minute.

Like, you just need the shirts
to be a different color.

And they're like, yeah,
that's, that's what we need.

And we're like, Okay.

Give us a half an hour.

Like, and that was it; it was done.

And so it's, it's not hard, you know?

In fact, that's why we built it this way: to be able to work with our clients and create a partnership with them.

Mike Callen: Give us a half an hour
at a hundred thousand dollars and

we'll take care of that for you.

Jenny Arnez: I have to say, I think this style, the art, made it easier for me as a test taker to enter into the story.

I hadn't taken a Logi-Serve test before.

And so I'm, I'm in there and I
thought, Oh, this is kind of exciting.

I was literally entering in.

Now, if I was watching a video,
I would have felt like an

outsider, just kind of observing.

Definitely.

It felt like I was becoming
part of some adventure.

It was a, it was a remarkable,
positive experience.

Chris Cunningham, PhD: That was actually one of the inspirations for it: the old choose-your-own-ending storybooks, right?

Yeah, but the other thing we're doing here, and again, we're not trying to trick people, but the other thing you should know about this format is that when we are conveying images like this, or like this, or these, we're able to not just loosely tell people this is the kind of environment you'd be in; we're able to render these exactly like the work environment.

So this particular illustration here comes exactly from a client call center.

This is a high level of customization, obviously, but we went in, we took photos, we rendered it.

So the posters actually
reflect cultural cues.

The technology on the desk is the actual
technology that the person's going to use.

The color scheme is the color scheme.

And so what's happening is you're also getting a chance to cue and send signals to the individual, which helps improve the relevance of what they're about to say, response-wise.

So it's good for the candidate, but
it's ultimately good for the client

because you're getting a higher quality
piece of data coming out the other end.

Jenny Arnez: That's fascinating.

Mike Callen: With the video situational judgment tests that we did, you know, a big part of the scripted aspect of it was getting the right camera angles on the operative person and being able to express these nonverbal cues that you want to express, you know, distaste, displeasure, anger, whatever.

And so that was always something that
we perceived as a huge advantage.

But it's interesting when you're
clicking through the examples that you

have there of some of the different
characters and scenarios, it's very

easy to see these nonverbal cues,
you know, to the same extent that you

would be able to see them in video.

And ironically, as I'm thinking about them, I realize, you know, that we really paused on that person, almost in the same way that your static screens do, in order to be sure that we were properly conveying that feeling or sense.

And so, at any rate, again, that's just, you know, my way of really complimenting the process that you folks have invested so much into, because it's very, very effective.

Chris Cunningham, PhD:
Yeah, I appreciate that.

And I know our team does too.

And I will tell you on that last
comment you made, it's interesting.

We really struggled with that for a while
until we realized, listen, you know,

it might be neat to capture the nuance
of nonverbal expression in a video.

But when that gets reduced to the
size of an image on a smartphone,

you're not going to see it.

And so digital art allows us to do that. You might notice that some of our graphics seem a little exaggerated when they're blown up to full screen, but when they come down to the level of a graphic on your smartphone, they look natural, they look normal.

And that was part of the control that we could get with this digital art approach.

So yeah, a lot of thought went into it.

I can't, I can't say enough for
our digital art team and their

ability to capture the essence.

Like we would draft items along
with descriptions of here's the

emotions that need to be conveyed.

This is how these emotions are
generally conveyed in a universal way.

Let's capture this in the images.

And we went from there.

Sponsor Break: And we'll be right
back after a word from our sponsor.

Ready to revolutionize your HR strategies?

Head over to testgenius.com to discover
our latest tools and solutions designed

to streamline your hiring processes.

Jenny Arnez: Can you talk a little bit about how an HR manager or director might use these assessments both pre-hire and then also, as you've mentioned, post-hire and for employee development?

Chris Cunningham, PhD: Sure.

Yeah, sure.

I'm happy to do that.

So I'll start here.

You know, by focusing on competencies,
what we're doing is we're digging

into sort of the foundational
attributes or aspects of a person

that enable them to actually
demonstrate competence on the job.

So in other words, if I have these underlying competencies, then I'm more likely to actually do the right thing behaviorally in the work environment. And in the service context, the way this might look in practice is something like what's represented on the screen right now, where, if I've got employees that are competent in an underlying way, and they're doing the right thing behaviorally, then that positively impacts the customer, the guest, the client, the patient experience.

And that's really good for business.

Okay.

And then the other thing is
when my employees are doing the

right thing behaviorally, it
also improves the operational

efficiency of that organization.

And that has an impact on the bottom line.

So a simple value chain argument like
this helps to illustrate for many

decision makers and stakeholders in an
organization why we want to understand the

underlying competence of our workforce.

From this, we can then expand that conversation and say, well, listen, you know, you probably can't just hire top-competency people, because your labor pool, your applicant flow, probably isn't strong enough to allow you to take only top scorers.

So you're going to have more variability within your team.

Wouldn't it be neat if you could figure out where they're stronger, where they're weaker, and then direct some development initiatives at those individuals to see if we could strengthen their overall level of competence?

And so for post-hire applications, that's often one of the ways this assessment is used: we'll generate a report, you get to see the profile of your individuals, but you can also aggregate and see group-level summary statistics and breakdowns of scores, and then you can work from there to develop training plans.

Here's an example of some of the reporting
that comes out of a typical assessment.

And you can see we've also put
a lot of time and effort into

making this really simple.

So this is not one of those
assessments where, you know, your

users have to go through six months
of training and get certified before

they can do anything with the test.

Like we've actually done studies where
we've gone to HR conferences and we've set

up a little computer lab and we brought
in HR managers and said, hey, you know, here's access to the platform.

Now make a good decision and
justify how you made that

decision in about five minutes.

They can make a good decision because
people understand that gold is better

than silver is better than bronze.

Okay.

But they also understand that from a
medalist perspective, even a gold medalist

has to keep working to stay strong and
a silver medalist has room to grow, but

they're still potentially really valuable.

And a bronze medalist also might be worth seriously looking at, especially if they've got some competencies that are in stronger territory.

So we use this type of reporting methodology also to keep the focus, from day one, not on screen-in, screen-out, that kind of dichotomous decision; we use this kind of reporting to always get our talent managers thinking, okay, this is who I'm about to potentially employ.

And I want to be able to work with
this individual from day one until

the day they leave the company to
help them reach their full potential.

Why?

Because if I invest in my talent, they
create value for the organization.

So that was always part of the plan,
you know, from the beginning and all

the reporting is designed this way
to kind of facilitate that thinking.

That's also why sometimes this
test is very popular to be used to

identify potential for promotion
because you're looking for people

that are showing maybe growth or
a potential area for achievement.

And then you can put them on a track for
development or some rotational program.

Sometimes it's helpful in that way too.

Mike Callen: Yeah, this is a really important aspect.

When we started talking with Eric many,
many years ago about working with you

folks, you know, this is one of the
agreement points that we wanted to make

sure that we shared, which is that when
we're looking at hard skills aspects,

we're looking at skills, abilities,
personal characteristics that are

required day one for success on the job.

So it's very easy to have that
dichotomous situation that you described

where you can clearly have the skills
and abilities necessary for success.

Or maybe you clearly don't.

And so it's really easy to make those decisions. When we're looking at these soft skills aspects, I love the bronze, silver, gold, because everybody's a winner.

The question is, what are the attributes that you do possess at this point in time that will dovetail nicely with the bevy of hard skills that you've already shown you possess?

So I really like the way that these,
you know, two really different aspects

work together towards the same result,
which is recruiting people who are

going to "A" - be successful in the
job and "B" - hopefully continue to be

successful there for many years to come.

Chris Cunningham, PhD: Right, right.

And I'll show you, too, like, in a more general sense.

This is how that scoring context
gets broken out in most reports.

And so you'll see that our middle range is our norm; if you match the benchmark, so to speak, for a given assessment, then you're going to land squarely in the silver territory.

But if you see a gold medalist or a high-silver person, somebody who has a score above 500, that's likely a very strong candidate with respect to that competency.

And so the gist, or the guidance for a user, is quite simple: aim high.

Okay, but don't feel like you have to
wait for all of your applicants to test

so you can rank order them at the end
of the process because we know that's

not feasible for most organizations.

So instead, you know, look for people that
land in that silver and gold territory

and give them serious consideration.

And if your applicant pool is not strong enough, and all you end up with is a lot of bronzes, but they might have some strengths in the silver-gold area, take a close look at them and ask yourself, are we prepared to work on building some of these bronze competencies?

Because if we are, then they might be a really great candidate.

Mike Callen: Yeah.

And again, especially if they possess other hard skills at a very high level.

And you know, there are certain tests that help to measure the trainability aspect, you know, your comprehension and those kinds of aspects.

And so if you have somebody, you know, who, like in your example, is a high bronze, has good skills and abilities, appears to be trainable, and would be a great addition to your team, then it makes it a lot easier to take a chance, or maybe dip the ladle a little bit lower into the different tiers that you have among your candidates.

Chris Cunningham, PhD: Absolutely.

Absolutely.

And the one thing I just wanted to add with this particular graphic, I'll just throw it up on screen here, is you'll notice this actually comes from an actual client.

And so this is a representation of how this assessment can differentiate between strong performers and everybody else.

And you'll notice that it's not
always the case that all nine

competencies are equally influential
in determining who a strong player is.

So that's another piece that has to be taken into account for any given client, any given user. Like, maybe they did have a bronze in this example here.

Maybe they had a bronze in proactivity,
which is down the lower left corner.

Well, you know what?

That was not a strong
differentiation point.

So I wouldn't lose sleep over that.

On the other hand, if we have an individual with a bronze in motivation and interest, which is just to the right of proactivity, you notice the gap between the top performers and everybody else is massive there.

I would be really worried at that point.

If they were a bronze medalist there, I only want silver and gold when it comes to motivation and interest.

So my point is, people using the assessment have a pretty good understanding, in their organization, of what is critical and what is not so critical.

And the beauty of this assessment is it gives you the ability to make that more nuanced decision, because it isn't pass-fail, like you're saying, but it does give you guide rails, right?

And it makes it pretty easy for you to kind of figure out, okay, what is the balance, the trade-off, the risk we're willing to take with this candidate?

Jenny Arnez: So are you recommending that HR people, managers, look at these nine competencies and rank them as to what's most important to them for a given position?

Chris Cunningham, PhD: So, sometimes that's a helpful approach, but I'll show you again in the main report.

The, the number that most
people use is the overall.

That's the one that's at
the top of the report.

And what that is, is an
aggregation of all nine.

And it's beyond our scope today to talk about exactly how these scores are generated.

But I will say to you that there are individual scores for every competency.

And there are composites that combine them in certain weighted fashions, and then there's an overall composite.

And by and large, most of our users
first look at the overall score to

do some initial filtering, and then
they go down to the nuanced breakout.

And you can even see in this mock-up here on the screen, this particular candidate is not a strong player.

But they happen to be okay, a little okay, with proactivity.

So I could look at this and say, well, you know, I would consider this a higher-risk person, but if they have proactivity, then that's a really great attribute, especially if I'm willing and able to invest a little bit in helping them see gaps that they need to address.

And so I don't know that I
personally would take a risk on

this particular candidate, but if I
got into a bad situation, I might.
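To make the scoring conversation a bit more concrete, below is a minimal sketch of how per-competency scores could roll up into a weighted overall composite and map onto the gold, silver, and bronze reporting bands. The weights, the 400/500 cutoffs, and several of the competency names are placeholder assumptions for illustration; Logi-Serve's actual scoring model and cut scores are not disclosed in this conversation.

```python
# Illustrative sketch only: the weights, cutoffs, and some competency names
# are placeholders that simply mirror the description above, where there are
# per-competency scores, weighted composites, an overall composite, and
# medal-style reporting bands with scores above 500 treated as strong.

COMPETENCIES = [
    "communication", "adaptive_problem_solving", "interpersonal_skills",
    "motivation_to_serve", "self_efficacy", "proactivity",
    # the remaining three of the nine are not named in the conversation
    "placeholder_competency_7", "placeholder_competency_8", "placeholder_competency_9",
]

# Hypothetical equal weights; a real composite would weight competencies
# differently depending on the role and the validation evidence.
WEIGHTS = {name: 1.0 / len(COMPETENCIES) for name in COMPETENCIES}

def overall_composite(scores: dict[str, float]) -> float:
    """Weighted aggregation of the nine competency scores into one overall number."""
    return sum(WEIGHTS[name] * scores[name] for name in COMPETENCIES)

def medal(score: float) -> str:
    """Map a score onto gold/silver/bronze reporting bands (illustrative cutoffs)."""
    if score >= 500:   # "above 500 ... likely a very strong candidate"
        return "gold"
    if score >= 400:   # silver: at or around the benchmark / norm
        return "silver"
    return "bronze"    # room to grow; may still be worth a close look

# Example: filter on the overall score first, then drill into the breakout.
candidate = {name: 450.0 for name in COMPETENCIES}
candidate["motivation_to_serve"] = 520.0  # a standout competency
candidate["proactivity"] = 380.0          # a bronze that may or may not matter for the role

print("overall:", round(overall_composite(candidate)))
print({name: medal(candidate[name]) for name in COMPETENCIES})
```

The usage pattern mirrors the guidance in the conversation: do initial filtering on the overall number, then weigh the per-competency medals against whichever competencies actually differentiate strong performers for that role.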

Mike Callen: Yeah, it gives you the awareness of the risk.

So if you're in a situation where, you know, it's a little crude, but a lot of the people that we work with talk about butts in the seats.

You got to get butts in the seats.

And you know, sometimes you need to
have a human being in there and it

may not be your first or second or
third choice, but you've got to get

somebody in there doing the work.

And so you have a good idea of what type of effort is going to be necessary in order to bring them along.

You know, to that end, Jenny asked you about post-hire.

And I know that for a lot of positions there are preceptor-trainee, or sorry, trainer-to-trainee relationships, and there you go, that's one of your aspects here.

Do you find, or do you recommend, that having the preceptor or trainer go through and take the test, and then trying to, you know, match behavioral types between those, would make the training slash onboarding process more successful, because you're matching like people together?

Or is there any research
that you've done on that?

Chris Cunningham, PhD: I am following what you're outlining, and I think there probably is some evidence that that approach might help, especially if the training and development model is kind of a mentoring forum where you're trying to match people with, you know, that sort of mindset and approach.

I can't tell you that we've done research on that particular model yet, because most of our clients have wanted to use this to help make better decisions about new talent or talent they want to develop.

There are some that have started using the assessment to guide promotion decisions, but we haven't seen a ton of interest in using it to pair or partner employees together.

So I think it's, I think it's a
great illustration of another way

in which this competency approach is
so flexible and is really designed

to support good talent management.

It's not simply a pre-hire selection tool like that.

That's one point I think we just need to emphasize: there's a lot of opportunity here to use this both pre- and post-hire.

And that was by design.

So we'd really love to see clients
use it for all of its richness.

Mike Callen: Excellent.

Well, you know, if that comes up down
the road, then you know, maybe that'll

be a great opportunity to have you back
and talk a little bit more about that.

I know that, like in nursing, for
instance, that preceptor trainee

relationship is super important.

And so you know, because people do learn differently, there could be a benefit in, you know, sort of matching them up style to style.

But at any rate, it was just something that I was curious about.

Jenny, do you have anything
that you wanted to bring up now?

Jenny Arnez: I just have a general comment.

Taking the assessment, the whole approach, it was the most positive assessment I've ever taken.

Typically, you know, when you go to take a test or an assessment in, say, Excel, it's a little nerve-wracking.

You can get a little bit stressed, but I didn't feel that in this.

And so kudos to to you all for
developing something like this.

It's amazing.

Chris Cunningham, PhD: I appreciate that.

And I don't have a graphic to add here, but I will share with you, because we've actually done some direct testing of candidate experience and reactions, that a lot of times what we hear is that more than 90 percent of our applicants describe the assessment as fun, and 90 percent or more actually describe it as easy. And people have tried to tell me at times that an assessment shouldn't be easy, and then I say to them, how's that working for you?

Because the thing is, like, if our goal is to trip and trick and, you know, make people feel stupid, then as an industry, we've done pretty great.

But if our goal is to learn about people and make better decisions so that fit is ultimately accurate, then we need an assessment that people are going to be willing to take and actually give their best selves to.

So, so I think that that's
really really helpful.

The other interesting metric I would just
share with you all is we've also tracked

sort of a net promoter effect with this
assessment and in some of the engagements

we've had with clients, we've asked
people, "Hey, you know, now that you've

had this assessment experience how likely
would you be to recommend a friend of

yours to apply for a job here as well?"

And again, more than 90 percent of users
would recommend that company because

of their assessment related experience.

And I don't think we spend enough time talking about that in this industry, because you've got to understand, if we create a front-end experience that's abysmal, we're also damaging our ability to recruit talent, thanks to the power of the internet, right?

Like, if one person has a bad experience, the rest of the world knows about it.

So having a good front-end experience like this, having a good internal experience like this, I think it really matters now more than it probably ever has.

Mike Callen: We do at least 50 percent
of our business in the 911 emergency

services, police, fire dispatch space.

It's a huge market for us.

And in this post COVID world it's
been uncanny how every conference

that we go to, there are big sessions
that are led by roundtable groups

of people who are talking about
how do we bring more people in?

How can we improve our situation so that we get, you know, a little bit of momentum going in our communities towards bringing more people in?

It's, it's become a more and
more challenging situation.

And you know, I do tend to kind of come from the side where, at least on the hard skills, we've got to be able to ask them some tough questions.

And to a certain extent, you know, to use Jenny's example, if you can't work Excel to a very high degree and the job requires you to work Excel to a very high degree, then, you know, you shouldn't feel as though you passed the test with flying colors if you failed it.

But when you're dealing in this behavioral, soft skills realm, you know, that's really kind of an opportunity.

People are what they are, you know, to a great extent at this point in time.

And so to have an opportunity to really sort of feature yourself, and do so in a way that is affirming and positive, is really great.

And actually, what it sort of leads me to think might be a good best practice when we're putting tests together within our platform, including your tests, is that we may want to recommend to people that they put the hard skills tests first and let people finish with something that's a little more fun and is going to leave them with maybe a little bit better memory of the testing process.

Chris Cunningham, PhD: So yeah, that's something that would be interesting to explore, because, you know, the other approach is to warm them up with something easy, right?

And it gets them excited: well, okay, they've asked me good questions so far, so let's go a little further.

But, you know, what I think is important to point out here is, you know, worrying about the look, the feel, the experience of the assessment, this is not, like, meaningless conversation for an academic audience.

Like, we got to also remember that
unfortunately, assessments have also

had a pretty bad history of maybe
discriminating against certain groups

and not really leveling the playing
field as much as they're supposed to.

And that's something else that we've spent a lot of time working on in the development of our tool especially: making sure that it does level the playing field.

And we've actually seen some really strong
evidence of this in a number of studies.

And what I mean by that is that this test
does not show preference to any group.

So if you're a member of a minority or
protected group, you have just as much

chance of scoring well on this as somebody
who's a member of a majority group.

It's not a cognitive test, first of all,
okay, and it's focused on competencies

that you could have developed in a variety
of different life domains and experiences.

Why I'm emphasizing this is that we really
do, as an industry, need to do more to

think about how we package together these
screening experiences and really just

keep asking ourselves, like, are we asking questions that get at attributes

that need to be present day one, week
one, or are we focusing on things

that we can actually build over time?

And I know this is a hard challenge, especially when you're working with

skills testing, but I know you guys
have done a lot to tease that apart

with your clients and make sure they
understand that when they're putting

these assessments in place as well.

Mike Callen: Definitely.

We haven't talked much about the structure of your assessments in

terms of the library itself, but you
have a sort of a service focus, and then

there's a sales focus, and then there's
a managerial leadership potential focus.

And then you have many different
industries or verticals that

are represented with each
of those particular areas.

And then you can drill down further
and you can choose specific job titles

that are very standard across the
country, or maybe even around the world.

All told, I think that we have about, shoot, 220 or so of the Logi-Serve tests available in our library to our clients who have that particular add-on service activated.

So almost every new client that comes on board since the integration has been done will be getting this particular service.

And then those clients who signed on
prior to it have the opportunity to

upgrade their subscription to include it.

But it's a very, very robust library that you've made available, and very impressive.

Did I leave out any of the
details about the library or

anything that you'd like to add?

Chris Cunningham, PhD: No, I think that's a good summary and explanation of the value add that this brings.

And you all already have a wonderful
product that can really help

companies make better decisions.

And we're really happy and excited to
become a partner that can help you take

it to another level with your clients.

We've seen a lot of success using this product as a standalone.

So we know that when it's partnered
with another well designed product,

the impact will be even greater.

And we're just excited to kind of
see some of these impacts happen.

Mike Callen: Excellent.

As are we for sure.

One of the areas that I reflect upon is the leadership side of it.

And I don't know if you have any particular insights on this, but having grown up in sales, very often organizations will, you know, promote their best-performing salespeople to sales managers.

And sometimes that can have
good results and sometimes

it can kind of be disastrous.

Sometimes the best salespeople
aren't the best sales leaders.

And sometimes the best sales leaders
aren't the best sales people either.

So being able to add something to
the conversation that gives you some

insight and, and intelligence into,
you know, deciding who it is that

you promote into these situations
can be, can be very, very helpful.

And in some of the smaller organizations, which, you know, most of the businesses in America are, making the wrong decision in this regard, especially, for instance, promoting a really good salesperson to sales management and having them fail, can be disastrous to that particular organization.

Chris Cunningham, PhD:
Yeah, yeah, that's so true.

And, you know, to add a couple of
layers on this, I mean when you take

a very strong individual contributor
like the examples you've been giving

Mike and you put them into a managerial
role, even if they are effective as a

manager, you've lost the value of their
performance as an individual contributor.

So it's a real big gamble, and we have to be careful, because sometimes the characteristics and attributes that make individuals powerful as individual contributors can actually hinder them from being good at working well to support others.

It's, it's a different competency
set to be a leader and a manager

than it is to be a sole contributor.

So that's exactly the rationale behind why we built that vertical out.

And you know, with that particular assessment, we've also seen it be used a lot to help organizations increase diversity in their leadership potential and their leadership development programs.

Because the problem with the model
that you shared before, Mike, that

is so common, is when we get in the
habit of identifying our current top

performers or we ask managers to, you
know, identify talent, it tends to

happen through a similar-to-me process.

Like, you know, the manager
says, Oh, well, yeah, Joe reminds

me of myself 20 years ago.

I think he's the one to tap.

And of course, the problem
with that is that that

automatically creates homogeneity.

And it really damages an organization's
abilities to go in a different

direction or to really open that
door for more people to consider it.

So I mentioned leveling the playing field before.

In leadership, we see that as a really big need in many organizations, because they just don't have a process in place that's scalable to figure out: who do we want to fast-track for development?

Who do we want to put on that
leadership track so that we're

ready when those roles open up?

Mike Callen: Yes, very, very good points.

Well, Jenny, I don't have any other questions.

I'm looking through my, my list
of reminders that I had that

I wanted to ask Chris about.

So, Jenny, do you have any other
questions that you'd like to ask

or comments you'd like to make?

Jenny Arnez: No, I don't, actually. I feel like we're kind of winding down.

We're at a good stopping point for this conversation. I mean, I definitely have other thoughts that I'd like to ask you about sometime, Chris, about AI, and we kind of touched on this in a previous conversation, but that's not for today.

That's a whole different street that we'd be driving down, but I just am so grateful that you joined us today.

And I've thoroughly
enjoyed this conversation.

Any final thoughts from
either one of you guys?

Chris Cunningham, PhD: I'm just
appreciative we have the chance to

have these types of conversations.

A lot of times what holds other
clients back is lack of awareness

of some of the complexities and
nuances that operate under the hood.

And then just being able to talk about
some of these topics, it may not be

something that was on somebody's mind,
but then when they learn about it, they're

like, yeah, you know, that was on my mind.

I just didn't know what to call it.

So it's important to have these
types of discussions and to

raise these types of issues.

So thank you both for
putting this together.

And I'm always happy to take that trip down those other streets when you guys are ready.

Mike Callen: That's great.

Yeah.

I really appreciate you being here today, and, you know, we hope that this content meets the needs of new HR people, people who are getting established and really trying to learn, you know, about different aspects of testing and selection.

But like I told Jenny before we
started today, I was really excited

about this because, you know,
it's fun to geek out a little bit.

And you know, when you have somebody that's a chief science officer, I mean, it sounds like you started in Star Trek and, you know, you've just kind of graduated to where you are right now, but it's really great to be able to pick your brain and hear your opinion on a lot of these things.

As I mentioned, we've been
in conversations with Eric

for many, many years now.

When we first started talking about
integrating our products, you know, we

had an installed product and it wasn't
something we could do at that time.

But we've persisted in the conversation.

And so, you know, now we're here.

We've launched, we've got the generic off-the-shelf products available.

We're going to work together to create
a market killer product for, you know,

the 911 EMS police and fire space.

And we're super excited
about that as well.

And so for anybody who's listening to this, if you're interested, I'm sure we'll have a form or some sort of call to action available at the end of the video here.

If you're an existing client and you'd like to, you know, upgrade and add this incredible line of testing to your subscription, you can reach out to us and we'll be happy to get you set up.

We can literally get that launched
for you in about five or 10 minutes.

And if you're not a client and you'd
like to talk more about being one and

being able to take advantage of this
incredible Logi-Serve product along

with our hard skills tests we'd love
to chat with you about that as well.

Again, Chris, thank you so much for
being so generous with your time today.

We really appreciate you, you know, sharing with us.

So thank you very much for that.

Chris Cunningham, PhD: Likewise.

Jenny Arnez: And we just want to say
thank you to our viewer and our listener.

We're sure glad you joined us today and we
hope to connect in another place and time.

Thank you everybody.

Mike Callen: Thank you.

Podcast-Outro: Thanks for tuning in to
Testing Testing 123 brought to you by

TestGenius and Biddle Consulting Group.

Visit our website at testgenius.com
for more information.

Creators and Guests

Jenny Arnez
Host
Training Development and Sales Support at Biddle Consulting Group & TestGenius

Mike Callen
Host
President of Biddle Consulting Group and TestGenius