How can we make the most of civil service success profiles?
Civil servants have Stockholm Syndrome for the old competency framework.
Within the civil service, certainly within the policy profession, there seems to be a near-universal belief that the interview and recruitment process is ineffective. It is seen as restrictive and tick-boxy, with low accuracy in identifying the best candidates. The centralised approach imposed by civil service HR doesn’t work for what policy officials need to get the right people into their teams.
But I think a lot of this is due to culturally embedded ways of working, rather than the actual bureaucratic rules. In this post, I want to explore the rules of civil service recruitment to propose better ways that policy officials could recruit into their team, completely within the existing rules.
Civil service recruitment is built around the ‘success profiles’ framework. This has a strong focus on ‘behaviours’, which used to be called ‘competencies’ and are a set of nine generic workplace skills that can apply to any type of job. In the written application, the candidate submits a 250-word statement of their skill in each behaviour, and is heavily encouraged to use a single example of work. In an interview, the candidate is asked a specific question relating to the behaviour and must fit their answer to both the specific question and the overall behaviour description. Candidates know which behaviours will come up, but not what the questions will be. Again, they are encouraged to give a single example, typically taking 5-7 minutes. In both writing and interview, candidates are told to use the structure “situation, task, action, result.” Candidates will typically prepare a small number of their top behaviour examples that they will use regardless of the specific question asked, perhaps just adapting the focus of their answer to the question. The behaviour descriptors often make it hard to focus down into one specific area of interest and really probe the candidate on it, as candidates are expected to hit a broad range of the skills in the behaviour description rather than excel at one particular part.
For years this has been the core of civil service recruitment, and its restrictiveness has long been a source of complaint. The thing is, the civil service actually did something about this. The behaviours described above are roughly identical to the old competency framework. The civil service introduced success profiles in response to criticism that the competency framework was too simplistic and box-ticking. The new framework has a lot more flexibility, adding the core components of ‘strengths’, ‘experience’ and ‘technical’* alongside behaviours. It also added some other elements, like the ability to request a presentation at interview.
So far there has been some uptake of these new elements by the policy profession. In my experience, the most common one is Strengths. The candidate knows that these questions will be asked, but not which strengths they will relate to. The list of strengths is also quite long. The questions are designed to elicit an instinctive response that indicates how much of a natural the candidate is at a given strength. So if the strength is “problem solver”, a weak candidate might describe a textbook process for problem solving, whilst a strong candidate would enthusiastically talk about all the problems they’ve enjoyed solving.
The problem I have with strengths is that I think they do a really poor job of actually demonstrating a strength. Partly this is the inherent structure. Although the panel use a ‘warm-up’ question to baseline the candidate's natural level of enthusiasm, this is hopelessly subjective. It’s hard to extrapolate from one answer how someone will respond to a different question. Some people are just not naturally enthusiastic. I was once given the feedback from an unsuccessful interview that I had given a balanced answer to a strength question, even though the recruiting manager knew that I demonstrated this behaviour more strongly than 99% of people. I’m just naturally inclined to see both sides of something. I have since learned that giving my instinctive response to a strength question is the wrong tactic. It’s just another system to be gamed.
The other somewhat common element is presentations. But mostly these presentations simply take the form of a 5-minute introduction to the policy area. This is a good screen for anyone who doesn’t know what they’re talking about or simply can’t be bothered to do the research. But it’s not a challenging screen in general. Most policy areas have good government publications summarising the challenges. These presentations basically boil down to “can you do a Google search, read and summarise someone else’s summary of this area.” This is the bare minimum requirement you would expect, and doesn’t really allow for much opportunity to go beyond. I have used one of these presentation questions, and I don’t know how I would score someone higher than a generic ‘pass’ or ‘good’ mark (i.e. 6 or 7 on the 7-point scale). Unless they manage to propose a completely new approach to the problem that the civil service, academics and think tanks have all missed, in which case they’re probably wasting their time in a junior civil service role.
I think there are two main reasons the policy profession interviews like this. The first is simply cultural inertia. We got so attached to the competency framework that when it was replaced with something better, people were afraid to try something new. This is partly just established ways of doing things and partly fear of doing something that could be challenged and get you in trouble. I think the latter fear is largely unfounded, though. The second reason is that policy officials are usually just looking for a smart, capable generalist. They aren’t looking for specific technical skills, qualifications or subject-matter expertise. So they decide that a generic approach to interviewing is best. This is a mistake. Being generically smart and capable is clearly made up of a few core components: analysing a problem quickly yet comprehensively, writing and speaking clearly, and completing individual tasks in a short space of time. These are all things you can investigate more specifically than the generic strengths and behaviours.
So what would a better approach look like?
I mainly want to discuss interviews, but before that it’s worth briefly touching on better written applications. There is less flexibility here than in the interview. Technical skills aren’t very relevant to policy jobs, and it’s hard to assess policymaking via a short written application. I think the only area for significant improvement is in better targeting personal statements. These aren’t always used, but when they are, the statement is typically very similar to behaviour responses, just tailored around the skills in the job advert rather than the behaviours (and the two lists are often very similar anyway, which is even worse). As long as the question makes some reference to the skills in the job advert, though, you could be a lot more specific than “why are you suitable for this role” or similar questions. You could ask candidates to answer a specific question that tests their ability to demonstrate those skills, e.g. “how would you put together a plan to tackle [policy problem]?” This would force them not to parrot their experiences (which you’ve already got in the CV and behaviours), but to actually demonstrate use of their knowledge and skills.
Now onto interviewing. There are some pretty standard critiques of interviewing in any organisation. Interviews favour those who are personable, slick and can tell a good narrative rather than those who are actually good workers. They encourage people to embellish their experiences or claim team successes as their own to make them sound more impressive, although outright lying is rare. Interviews rely on subjective assessments, so they fall prey to all sorts of judgement failures, with people more likely to favour those who are similar to them, or whose career experience is most familiar and hence understandable.
The ideal interview from my perspective would reduce these subjectivities as much as possible. Candidates would be presented with questions they cannot easily prepare for, that actively demonstrate the sort of skills required in the job. The goal is not to deliberately surprise or catch out candidates, just to avoid being handed a rehearsed answer. One counter-argument to this is that for some candidates, having less ability to prepare is unfair, as others are better at thinking on their feet. I take this seriously, although I would also argue that in the policy profession at least, the ability to think on your feet and analyse things quickly is one of the central skills. The main problem is that some people may be bad at doing this whilst under the pressure of an interview situation. Although that applies to any interview situation, and the current approach comes with plenty of its own anxieties about whether the questions will work for your best set of prepared examples. Coping under pressure is also a useful work skill, although the pressure of an interview might be unlike the pressure of regular work situations.
Another principle is that interview questions should encourage significant variance. You should expect that a perfect candidate would ace the questions and a poor candidate would flunk them. Too often interviews aim at a middle ground where candidates end up either just above or just below the line, making it hard to really tell who is the strongest candidate or whether you even want to take the best candidate or try again.
My first suggestion for better interviews is to just drop strengths altogether. Until I see any evidence they are an effective and fair way to assess people, I think the flaws I described above make them a waste of time.
The next thing to do is reform behaviours. The recruitment framework allows two types of behaviour questions: past-behavioural or situational. I have only ever seen past-behavioural questions used (“tell me about a time when you…”). But situational questions are clearly less easy to game (“how would you deal with [situation]?”). The main thing is making sure the situation is adequately specific. Currently, a generic question would be suitable because it would throw people, and you’d see who actually knew how to approach work problems and who was bluffing and over-prepped. But if these questions became the norm, people would quickly develop stock answers to generic questions like “how would you deal with difficult individuals who had conflicting views to each other?” Instead, questions would have to be relatively targeted to the circumstances and needs of the specific team. Although even if people did develop generic answers, I think this approach allows for much more effective follow-up questions, as the panel are in control of the situation, not the interviewee, who could be embellishing their experience.
There may also be fewer behaviour questions if we add in other things. The first is consistently using effective presentations. No more generic intros to a policy area. The question needs to be specific enough and hard enough that the individual needs to think for themselves. I’m a big fan of presenting a scenario to candidates 30-60 minutes before the interview, forcing them to think on their feet and display speed as well as clarity of thought. This requires a bit of extra work from the panel, but is great at weeding out candidates who lack capability.
The second thing to add in is technical or experience questions, which allow for greater flexibility and more specificity on the particular skills you’re interested in. For the policy profession there might not be a big difference, although it will depend on the role. For example, if you really want someone with experience of managing a bill or a consultation, then a technical question might be better. Even asking someone the experience question “can you tell us when you have had to analyse a particularly complex problem and produce an effective solution” is functionally very similar to a ‘Making Effective Decisions’ behaviour question or ‘problem solver’ strength. But it allows you to be specific about the thing you are interested in - problem-solving ability - rather than the candidate having to fit their answer into a particular behaviour descriptor, or to feign enthusiasm they might not feel in a stressful interview scenario. Sometimes the breadth of behaviours is desirable, but often it can make it hard for candidates to focus on the aspects that are most important for this particular role. These questions again have the benefit that they are harder to prepare for.
Wrapping up
There certainly are reforms at the centre that could improve recruitment. At one point I seem to remember a target of summer 2020 for all civil service jobs to be advertised publicly by default (most jobs are advertised only to existing civil servants). I assume the pandemic meant priorities changed and this got dropped, although there’s still a commitment to do this for Senior Civil Service jobs. I know plenty of excellent people who try for years to get a role in the civil service from the few public jobs posted, whilst much less capable people can get jobs from internal campaigns. Currently a major risk of open recruitment is that it increases the number of applications you get, but if everyone moved at the same time, the total number of publicly advertised jobs would hugely increase, spreading applicants across more vacancies and mitigating this somewhat. The current approach is to occasionally do big open recruitment rounds. These don’t seem to attract a higher (or lower) quality of candidate, possibly because of poor matching between mass recruitment and specific roles, as well as panel members having little incentive to hold high standards for someone who won’t work directly for them.
That is a digression, though. There’s no reason that policy civil servants need to wait for permission from central HR functions to change the way we recruit. There’s plenty of potential we’re not even using right now. We shouldn’t be afraid to try new things to see if we can improve the selection of talent within the policy profession.
*There is also a success profile component of “ability”. As far as I can tell, this only refers to generic verbal/numerical reasoning tests used to screen large numbers of candidates at the pre-application stage. Which is a shame, because there’s some good academic evidence that a simple cognitive test can be more effective and less discriminatory than the typical CV & interview approach, or almost any approach that relies on subjective assessment. In fact it’s a remarkably robust trend that objective measures tend to perform better than subjective measures on many fronts. The problem is that systematic, objective measures have systematic, measurable biases, whereas subjective measures have harder-to-measure or less consistent flaws that we can pretend don’t exist or that can be minimised.
I agree with basically all of this. Particularly the points about Strengths: they seem incredibly non-informative given the way they work, and I see no way of doing them better, so they should be scrapped.
Of the typical toolkit, the presentation is the saving grace. We tend to ask candidates to comment on a policy question that is accessible but possible to be really thoughtful about. It provides a discriminator against those who can't synthesise the basics of what is on the internet about the policy issue, while also giving those who have thought really deeply about the policy nuances an opportunity to shine.
Definitely agree that introducing better opportunities to require candidates to think on their feet would be very useful for discriminating, and doing that well is a pretty good indicator in my experience of those who do well in policy roles. As you suggest, there is actually decent scope to do this already without offending the Civil Service Resourcing beast too much, but it's just culturally not the done thing, which is a shame.
One thing you don't comment on is that there is a lot of pressure from Civil Service Resourcing to weight each component equally, meaning the more behaviours you have, the more they tend to crowd out the much more useful signal coming from something like a presentation. You can push Civil Service Resourcing to allow alternate weightings, but it's a grind and they're resistant.