FAQ

What are the biblical and theological foundations of clergy selection?

From a theological perspective, deliberations about whether and when to select and/or ordain an individual for professional ministry occur at the intersection of ecclesiology (doctrine of the church) and vocation (the sense of being called by God for a life of service in the church, or, in a secular vein, “following your bliss”). (See Joseph Campbell in Conversations.)

The Christian Church has been functioning at this theological intersection for two thousand years, that is, since the time of Christ.

The biblical example of the selection of a replacement for Judas provides a model for understanding the process. After Jesus’ resurrection, and just prior to Pentecost, the eleven surviving disciples decided it would be important to replace Judas as a twelfth witness to “all these things concerning Jesus of Nazareth.”

The eleven apostles identified two parts to the vacancy. One was the ministry of Judas, as reported in Acts 1:17 (RSV): “For he was numbered among us, and was allotted his share in this ministry.” The second part was, “His office let another take.” The ministry was to give witness, or testimony, to all that had taken place. The office was that of an apostle.

To put this in modern terms, the eleven apostles identified a need for ministry and enunciated a criterion for selection to that ministry, which Peter articulated in Acts 1:21-22 (RSV): “So one of the men who has accompanied us during all the time that the Lord Jesus went in and out among us, beginning from the baptism of John until the day when he was taken up from us – one of these men must become with us a witness to his resurrection.”

Next came the committee meeting, as reported in Acts 1:23-25. “They put forward two, Joseph called Barsabbas, who was surnamed Justus, and Matthi’as. Then they prayed. ‘Lord, who knowest the hearts of all men, show which one of these two thou hast chosen to take the place in this ministry and apostleship from which Judas turned aside, to go to his own place.’” And so, in order to avoid being presumptuous, after they had offered a prayer of supplication and blessing, they held a lottery! Acts 1:26 notes that “…they cast lots for them, and the lot fell on Matthi’as; and he was enrolled with the eleven apostles.”

Many selection committee members have no doubt felt the same way when it comes to the final decision. They often say they experience an overwhelming sense of responsibility for decisions about applicants for ministry, and even after coming to terms with that feeling, they admit they sometimes wish for something like a lottery to relieve them of the final burden.

Nonetheless, the committee system functions in such a manner that either consensus is reached, or a majority vote decides the issue. Casting lots is no longer an option.

It is somewhat curious that Matthi’as is not heard of again, at least not in canonical scripture. But then there was St. Paul, “as one born out of due season” (as he himself put it), called on the Road to Damascus. And, of course, Christians have read of his testimony and witness, through his letters to the churches, from that day to this. St. Paul was the de facto replacement for Judas, not Matthi’as, or so it seems.

This intersection between ecclesiology and vocation is not an easy place to be! But to avoid such decisions is also unthinkable. It is not likely that consultants or selection committees will always be correct in their assessments. But as good stewards of the gifts, talents, skills and experiences we have been given, we must apply them to the matter of clergy selection, so that the church fields the highest-quality clergy corps possible under current circumstances. These responsibilities are a part of the office and ministry that we undertake.


What is meant by “psychological fitness” for ministry?

To provide appropriate, high-quality service in ordained or lay professional ministry, a person must be able to function well on four personal and professional levels. The first level, fitness, is the potential for developing effectiveness in ministry. It is foundational and underlies the three ensuing levels of criteria that judicatories weigh in determining whether to select an individual to prepare for ministry in that denomination. The second level is the competencies needed for ministry. The third is readiness to begin the practice of ministry. And the fourth is developing effectiveness in ministry.


How are the four levels of psychological fitness for ministry related?

Fitness is foundational to the other three categories that interest judicatories in selecting candidates for ordained and lay professional ministry. Fitness is the potential for developing competencies, for attaining readiness to enter ministry and for long-term effectiveness in ministry.


How does CAS conduct psychological fitness reviews?

The criteria that are appropriate to the psychological fitness review are those which indicate that the applicant has the basic potential to develop the competencies necessary to attain a position of readiness for the beginning practice of ministry, with the expectation that those who enter ordained or lay ministry will go on to achieve higher degrees of effectiveness as they continue in ministry.

CAS provides applicants with a battery of tests, analyzes and interprets the results, prepares a narrative report, discusses the report with the applicant in a clinical interview and submits the final report to the applicant’s judicatory. The procedure is focused on measuring and reporting key psychological fitness criteria.


Why does CAS use testing, in addition to clinical interviews, to assess fitness for ministry?

We believe, and academic and research psychologists have demonstrated through empirical research, that the results of such combined (clinical-actuarial) reviews are more accurate and less biased than those based on interviews alone.

In addition to being more accurate (valid) and reliable (dependable), this type of process also allows us to offer this distinct, high level of assessment nationwide, rather than being limited to assessing candidates in a small region.

In addition to testing, references and the clinical interview data allow us to provide a more personal perspective. They also contribute to the validity and reliability of the reports. The resulting five- to seven-page narrative report is both comprehensive and user friendly.
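
To make the term concrete, here is a minimal, purely illustrative sketch (in Python) of what an “actuarial” combination of test data means. The scale names, weights and example scores are hypothetical and do not represent CAS’ actual scoring rule; the point is only that an explicit, repeatable rule is applied to standardized scores before the clinician interprets the result alongside interview and reference data.

    # Toy illustration of an actuarial (mechanical) combination of test data.
    # The scale names and weights are hypothetical, chosen only to show the
    # principle of an explicit, repeatable scoring rule.

    HYPOTHETICAL_WEIGHTS = {
        "emotional_stability": 0.4,
        "interpersonal_warmth": 0.3,
        "vocational_interest_fit": 0.3,
    }

    def actuarial_index(standard_scores):
        """Combine standardized (z) scores with fixed weights.

        Because the rule is explicit, two reviewers applying it to the same
        scores always obtain the same index; that repeatability is the core
        of the actuarial advantage over unaided impressionistic judgment.
        """
        return sum(HYPOTHETICAL_WEIGHTS[scale] * z
                   for scale, z in standard_scores.items())

    # A hypothetical applicant's z-scores on the three scales above.
    scores = {"emotional_stability": 0.8,
              "interpersonal_warmth": 0.2,
              "vocational_interest_fit": 1.1}
    print(round(actuarial_index(scores), 2))  # 0.71, to be interpreted in context

The index is not a decision by itself; in a clinical-actuarial process the clinician still interprets the number in light of the interview, references and the applicant’s circumstances.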


What battery of tests does CAS use?

The inventories include, but are not limited to, the Strong-Campbell Vocational Interest Inventory, Adjective Check List, Shipley-Hartford Test, a Proverbs Test, an Attitude and Activity Scale, MMPI-2, the California Psychological Inventory and a customized Incomplete Sentences (free response) Blank.

The CAS battery also includes a reference form to be completed by four persons who know the applicant, including his or her senior pastor, a congregant if the applicant is already serving in a parish, a colleague and one other reference of his or her choosing. These data come from persons who have observed the applicant’s functioning in a variety of contexts.


If you’re using standardized tests, does that mean the report features standardized results?

Absolutely not! We score, measure, interpret and analyze the data, preparing an individualized narrative report. It is customized to the applicant’s assessment and to the needs of the judicatory. No standardized computer printouts are presented in the final report. Every report is individually prepared.

Since ministry applicants represent a self-selected population with common purposes and interests, and since the same criteria are being reviewed for each applicant, there is some common ground that needs to be covered in each report. Hence, the language in the reports, since it is based on the empirical data from testing, may reflect some common features from one report to the next on certain issues.

There are also, however, areas in each report that would be distinctive. That is, ministry applicants, as a population, tend to have many traits and characteristics in common. However, as individuals, they also have traits and characteristics that are unique. It is this balance between common characteristics (that would be appropriate to ministry) and unique characteristics (that would provide the individual flavor) that we seek to maintain in the reports.


What are the key psychological fitness criteria?

Intellectual ability and style, vocational perceptions (interests, motivations), personality composition and functioning, character traits, lifestyle, mental and physical health, gender identity and sexual maturity, self-image, role-image (as pastor), relational style and boundary setting are among the key factors that are measured against “ministerial criteria”.


What do you mean by ruling out pathology?

In addition to measuring and reporting on key psychological fitness criteria, the CAS review process also addresses the criteria necessary for ruling out pathology.

For example, a disqualifying condition would be the inability of an applicant to accept limitations in self and in relationships that could lead to growth and hope.

Another common disqualifying condition would be an applicant’s inability to make decisions and, hence, the likelihood that he or she would fail to initiate appropriate action in a ministry setting.


How is the CAS assessment process a consultation?

First, the applicant receives a face-to-face consultation – what we call a clinical interview – to discuss what is contained in the written report, based on testing, and any matters that arise during this debriefing session.

Second, the judicatory selection decision-maker(s) receives this report, which contains recommendations for further nurture and development of the applicant during his or her period of preparation for ministry, along with a psychological fitness rating.

Our expectation is that the applicant and judicatory will be able to arrive at an optimal state of readiness for beginning the practice of effective professional ministry, prior to or at the time of ordination or certification, except in a few instances in which significant mental or emotional pathology is present, or when applicants prove themselves unwilling or unable to engage in the recommended experiences that would assist them in areas of development or in working on significant limitations. Of course, such recommendations to applicants or selection committees would not be considered binding unless the selection committee so states and records such recommendations in the minutes of its meetings. Consultation is the sharing of information, not the implementation of the information that is shared.

How is an applicant rated on psychological fitness?

The CAS report includes a bottom-line recommendation, ranging from highly recommended to not recommended.

How often is psychotherapy recommended as part of CAS’ overall recommendation?

Brief counseling focused on specific areas of concern, such as authority problems or difficulties with the constructive expression of anger, is recommended in approximately one out of ten reviews and is seen as essential in one out of fifteen reviews. Psychotherapy is recommended in approximately one out of twenty to twenty-five situations. CAS does not provide these therapeutic services.

What happens in the clinical interview?

The clinical interview is a one-hour feedback and debriefing session between the applicant and a local, professional clinician. During the session, the interviewer asks questions about any matters or issues that need to be clarified. The applicant is given the opportunity to relate his or her spiritual journey, that is, what has brought him or her to this place in time. The applicant is invited to read the five- to seven-page narrative report that CAS has prepared, making note of any questions or concerns. The applicant and interviewer then debrief the applicant’s thoughts and reaction to the report. (See also Instructions for conducting clinical interviews for a more detailed account.)

At the conclusion of the interview, the applicant is reminded that he or she may review the report again, provided he or she has it released by CAS or the clinical interviewer to an appropriately credentialed mental health professional.

Any notations deemed necessary by the applicant and/or interviewer are made to the report, and it is sent to the applicant’s judicatory by the interviewer. If the applicant feels the report is not accurate, the interviewer may recommend a second opinion.

How do applicants tend to respond to the CAS report?

CAS founder and senior report writer John E. Hinkle, Ph.D., conducted a study of the responses of applicants to the report during the clinical interview. For five years, at the end of the report-reading period of the interview, each applicant was asked, “In a general way, before we go to the details in the report, having had a chance to read the report yourself, how would you respond to the material in the report?”

No statistical analysis was done on the responses. Yet, applicants typically responded as follows, in their own words:

Pretty close. Very well. Pretty much on target. Pretty accurate. No real kick in the pants. Pretty true. Intensely accurate. Very accurate. Pretty good. Sooo accurate. Very on. Sounds pretty good. Right on the head. Fits pretty well. Very accurate. Very fair. Very accurate. Impressed with the insight available. I can very clearly see myself in just about every paragraph. Pretty consistent. Fits my experience of myself very much. Just like my picture! Really impressed—incredible to read what you know yourself to be true. Very accurate. Really accurate, makes a lot of sense to me. Accurate, nothing surprising. Very, very close. For the most part, pretty close. I see myself this way. Really well. Like someone has followed me around for the past six months without my knowing it. I am awed and amazed that you can capture me so precisely. Overall, a rather accurate general portrayal. Excellent job of capturing me in actuality. Very accurate. Catches me as I am. Startlingly accurate. So accurate it’s eerie. Touches my two key struggles. It’s like you can read my mind. Took the test seven years ago…more assertive now. It fits. Wow! Pretty accurate. Mostly accurate. Strange, I’ve never read anything like this before. Very accurate. On target. Amazing! Very much agree with it…right on. This is me…I am amazed! Fits very well. Yeah, that seems to be me. I am impressed that all of those test results show who I am.

No applicant disagreed with the report in general, though a few wanted some clarification about certain statements at the time of the interview. On rare occasion, a sentence was struck from the report by the clinical interviewer, or the clinical interviewer wrote in the margin, “The applicant does not agree with this statement.” With that marginal note in place, the applicant left the sentence in the report, since the supporting data from testing were clear.

Can you quantify applicants’ satisfaction with CAS report results?

In more than five thousand psychological fitness reviews, the clinical interviewer’s professional opinion has been that the report fit has been very good, good or, on rare occasion, moderate. Applicants have agreed that the report is a good fit nine out of ten times. In the remaining ten percent of cases, minor adjustments to the report, based on data from the clinical interview, sufficed to make the report ready for release. Of the 5,000+ reviews completed to date, only five applicants decided to seek a second opinion. Of those five, the clinical interviewer (in those cases Dr. Hinkle) encouraged three to do so, given their obvious dissatisfaction with some of the contents of the report. In each of those instances, upon receiving the second-opinion report, the applicant decided to release both reports to the selection committee, saying that, on reflection, the first report was accurate as well. Going through both reviews apparently enabled these applicants to adjust their self-image to a more realistic set of self-perceptions.

With regard to the other two second opinions, the applicants decided that they were not ready to engage in preparation for ministry and withdrew from the process.


What recourse does an applicant have if she or he doesn’t agree with the CAS report?

If an applicant believes that the CAS report does not fairly or satisfactorily describe him or her, the CAS process allows for the applicant to seek a second opinion to balance or correct what is written in the report.

As noted elsewhere, in more than 5,000 reviews, a second opinion has been requested on only five occasions – three times because the clinical interviewer requested that the applicant do so. In each of these three instances, the applicant was willing to release both reports to his or her judicatory. Interestingly, the results of both reviews were similar. On the second reading, these three applicants decided that the information in the first report, as well as the second, was accurate.

With regard to the other two second opinions, the applicants decided they were not ready to engage in preparation for ministry and withdrew from the process.

How does an applicant go about getting a second opinion?

The applicant must agree to and meet the first two criteria below; the third is a request rather than a criterion:

1. The applicant must seek and receive the approval of his or her judicatory selection committee for a referral to a professional chosen by the applicant to conduct a second opinion review.
2. The applicant must pay for the second opinion review.
3. The applicant will be asked to consent to the release of the reports from both the first and the second review to his or her judicatory selection committee.

How bias free are the tests CAS uses? And the entire CAS process?

While there is no such thing as “bias free,” or total neutrality in “perspective taking,” we go to great lengths at CAS to take into account, neutralize and remedy to the extent possible any negative biases related to gender and/or racial, ethnic and cultural difference.

Can you say more about bias in the assessment and selection process?

There are many opportunities for bias in psychological assessment. As CAS works with judicatories to assess the psychological fitness of applicants for preparation for ordained or lay professional ministry, those opportunities for bias exist in the application for ministry processes, the report writer, the clinical interviewer and the standardized inventories that make up the test battery.

Bias is another word for perspective taking. When a perspective is taken on any person, that perspective enables certain perceptions, neutralizes others and excludes others. This is the human condition – to take perspectives, and hence, to be biased.

The application for ministry process itself takes a perspective on the applicant. So it may be properly described with the term “bias.” Inasmuch as perspective taking is both necessary and inevitable, it is a rather serious mistake (bias) to look for bias in one place – rather than in all the places where it exists.

Biases can be either “bad” or “good,” depending upon the purposes for which they are used, as well as the content that constitutes them. For example, the title of a book was Ethnocentrism and other Altruistic Values. Today, in the early 21st century, ethnocentrism is widely regarded as a limiting and therefore “bad” perspective. And yet, it is also the case that persons in almost any culture in the world will defend the perspective (worldview) that frames the culture (meaning system and lifestyle) to which their group, organization, tribe or society is committed. That is typically thought of as patriotism and self-sacrifice, and it is usually seen, even promoted, as a virtue.

Importantly, bias, in the sense of taking a perspective, yields information about the matters upon which it focuses. Each of us as individuals may be properly described as biased. We approach other biases from a stance of bias. Again, bias is not only inevitable, it can be helpful. In the case of selecting candidates for preparation for ordained or lay professional ministry, applicant bias (defensiveness, for example), clinician bias (pathology, for example), inventory bias (ethnicity, culture, gender) or process bias (age/maturity) is present in these situations, as it is in all situations, in part because of certain purposes that need to be accomplished.

A “good” bias (perspective) is one that contributes in known and intentional ways to the purpose intended. A “bad” bias is one that either is not known or does not contribute to the fulfillment of stated purposes.

And so the question is not, “Are inventories, applicants, clinicians and selectors in the process biased?” We know that they are. The question is, rather, “Are those biases known and intended, and further, do they contribute in helpful ways to the purposes intended in the overall process of applicant selection?”

To be even clearer, the bottom-line question is, “Are known or unknown biases contributing a degree of contamination to the assessment process in a patterned way, so as to have a negative effect on the accuracy of the decisions that are being made, and if so, what biases, in what patterns, and to what degree?”

How does CAS remedy any “bad” biases in its processes and procedures?

One may take some comfort in the fact that the entire process of entering ministry, from calling to ordination, is a multiphase, multistage process. One way to balance our biases is to have applicants evaluated by different persons, in different contexts, on both the same and different dimensions. Such a process “heterogenizes the irrelevancies,” that is, it provides different biases in different contexts with different persons doing the evaluating. In this manner, the irrelevant variables have a greater chance of being “weeded out” where contexts and evaluators and criteria are diverse, leaving the central core of criteria intact.
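
As a hedged illustration of why diverse evaluators in diverse contexts tend to “weed out” evaluator-specific bias, the following toy simulation (not part of CAS’ documented procedure; all numbers are invented) shows a composite of several independent ratings landing close to the applicant’s true standing even though individual ratings scatter widely:

    import random

    # Toy simulation: each evaluator's rating = the applicant's true standing
    # + that evaluator's idiosyncratic bias + random noise. Averaging across
    # evaluators whose biases differ tends to cancel the irrelevant biases,
    # leaving the shared, relevant signal. All numbers are hypothetical.

    random.seed(0)
    TRUE_STANDING = 7.0                              # "true" quality on a 1-10 scale
    evaluator_biases = [-1.5, 0.5, 1.0, -0.5, 0.8]   # five evaluators, five different biases

    ratings = [TRUE_STANDING + bias + random.gauss(0, 0.5) for bias in evaluator_biases]
    composite = sum(ratings) / len(ratings)

    print([round(r, 1) for r in ratings])  # individual ratings scatter around 7
    print(round(composite, 2))             # the composite tends to sit close to 7

The averaging here stands in for the multiphase, multistage character of the actual selection process, in which different persons evaluate the applicant in different contexts and on different criteria.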

A second way of remedying biases is to take a measure of each bias itself. For example, an applicant typically approaches the CAS testing process with a particular bias – purpose. The applicant wants to “be selected” to enter ministry. So the applicant may decide that she or he needs to present her or himself in a favorable and positive light on testing, in the interest of being accepted into ministry. Thus, the factor of “social desirability” influences the responses to testing that an applicant makes. Typically, the applicant thinks about a given response, wondering, “What will the evaluators make of this response?” To this end, measures of the need for social desirability are built into the personality inventories so that results may then be evaluated in the light of this “known bias” on the part of the applicant.
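
As a hedged sketch of that logic (the scale names and the cutoff below are invented for illustration and are not those of the MMPI-2, the California Psychological Inventory or any other published instrument), a social-desirability indicator can be checked before the substantive scales are interpreted:

    # Hypothetical sketch: flag an elevated social-desirability (impression
    # management) score so that the substantive scales are interpreted with
    # that known response bias in mind. The scale names and the T-score
    # cutoff are illustrative only.

    SOCIAL_DESIRABILITY_CUTOFF = 65  # hypothetical T-score threshold

    def interpret_profile(t_scores):
        sd = t_scores.get("social_desirability", 50)
        if sd >= SOCIAL_DESIRABILITY_CUTOFF:
            return ("Elevated impression management: substantive scales may be "
                    "understated; interpret cautiously and weigh interview and "
                    "reference data more heavily.")
        return ("Validity indicator within the expected range: interpret the "
                "substantive scales as usual.")

    profile = {"social_desirability": 68, "anxiety": 52, "interpersonal_warmth": 61}
    print(interpret_profile(profile))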

A third way to neutralize bias is to have at least one person present in the process as a consultant or interviewer who embodies that bias in her or his personhood (gender, or ethnicity, for example). Where bias is known to exist, counter-balancing biases can be brought to bear upon the known bias. In this instance, for example, male clinicians may need to have a female consultant sit in on an interview with a female applicant. In the same manner, a female clinician may need to have a male consultant sit in on the interview with a male applicant.

Are psychological tests and personality inventories culture free and/or gender neutral?

Inevitable and inescapable problems of bias fall under the headings of culture bias, gender bias and biases involved in being part of an ethnic group that is different from the ethnicity of the culture at large. (Of course, everyone in the evaluation and selection process is “ethnic,” or a cultural person with certain biases….)

And yet there are ways to identify, understand and rise above such biases:

  • Assessment tools and procedures are culture bound.
  • Personality is culture bound. So psychological assessment instruments are culture bound. However, personality is not bound to one culture only, as may be seen in immigrant populations. Since persons can cross cultures, psychological assessment instruments can also cross cultures. But this is no simple or easy task, either for the immigrant to a new culture or for the assessment tools to measure that journey. While persons can cross cultural boundaries with some success and to some degree, they can never do so completely. A complete crossing of cultures would involve the loss of the identity formed in the culture of origin, that is, the culture crosser would have to “go native.” In the same manner, psychological inventories cannot be expected to cross cultures completely either. If they did, they would lose their ability to measure traits and characteristics with appropriate levels of validity and reliability.
  • Persons can learn social roles, behaviors and expectations (i.e., role norms) in a second culture with some facility. Such persons may develop a “persona” appropriate to the second culture, but only with considerable difficulty, if at all. The learning of a second language as a youth or adult and appropriate role behavior in a second culture involves the development of a bi-cultural identity with a persona appropriate to each culture and to the institutions and roles of each culture.
  • Psychological assessment of persons who are speakers of English as a second language involves measuring the individual’s “persona” as it has developed in the acculturative process in the English-speaking context. It does not assess the persona developed in the culture of origin, unless the entire process and tools of assessment were developed and are implemented within the context and framework of the culture of origin – and with reference to functioning within that culture of origin.
  • Just as individuals and groups may cross cultures to some degree of effectiveness, so some psychological instruments can cross cultures with about the same degree of effectiveness. Psychological instruments may be translated into the language of the new or former culture, as long as proper attention is given in the translation to cross-cultural conceptual equivalence and to equivalent renderings of any idiomatic expressions from the former culture. The same point could be made about within-culture changes, whether across regions or across time. A test item such as, “Do you play drop the handkerchief?” is an example of a generationally antiquated item. Forty years ago, almost any person in the U.S. would have known the game “drop the handkerchief” and likely would have played it growing up.
  • Assessing candidacy for ministry in the U.S. typically occurs within the context of application for service in a structured bureaucratic institutional system that was developed and now functions in a value framework that reflects and embodies a Western/European perspective and roles (i.e., norms and values).
  • Candidacy assessment intends to measure an applicant’s “fitness for ministry” within the confines of and/or in and through the instrumentality of an institution whose assumptive world and self-understanding is one of diversity and plurality, as Westerners understand and implement those values.
  • The assumptions and parameters of the institution’s sense of its plurality and diversity (ethos) are anchored in English as a lingua franca. That is, the business affairs and decisions of the United Methodist Church, U. S. A., for example, are conducted largely, though not exclusively, in the English language.
  • Given this state of affairs with reference to language, combined with the placement practices of the UMC (that is, you go where you are sent, and you go when denomination officials send you) and with the institutional criterion of “appointability” to a large number of parishes in a “tenured” system (tenure is afforded by the annual conference, the basic administrative unit of the United Methodist system) over a career in ministry, the applicant, once ordained, must be able to communicate in and through the English language in English-speaking settings in order to be a functional member of the professional group. This is the only way her or his professional future can provide the flexibility inherent in the appointment system. In other words, if the cross-cultural applicant has not learned English to a sufficient degree, then the possibility of “appointability” is compromised and the system cannot keep its promise of tenure. Other denominational placement systems would have their own unique aspects for which specific criteria would be invoked. The UMC example above is simply one kind of placement system, though the placement of celibate priests in the Roman Catholic Church would be somewhat similar.
  • Candidacy assessment, then, involves the evaluation of those aspects of the bi-cultural identity of an individual candidate – aspects that are pertinent to his or her functioning in an Anglo-based institutional value system.
  • Effective functioning in the English language and in the wider Anglo ethos, as institutionally structured in the wider system of the United Methodist Church, to continue the example, requires at least minimal levels of functioning in the learned “Anglo aspects” of a bi-cultural identity. Since bi-cultural functioning appears to be necessary to the individual’s personal and familial well-being, as well as to his or her ongoing vocational effectiveness and continuing appointability, bi-cultural identity must be present and fairly assessed for speakers of English as a second language.
  • At CAS, we believe that psychological assessment of ordination candidates who are speakers of English as a second language should be focused on the English-speaking/ experiencing portion of the applicant’s bi-cultural identity. We further believe that a portion of the instrumentation should utilize English, even when parallel translations of some of the inventories are made available.
  • The CAS assessment packet provides for evaluating the applicant’s functioning in the English-speaking context. When English is the applicant’s second language, parallel copies of the standardized instruments in the applicant’s original language may be used for clarity of item content where such translations can be made available. Where such materials are not available, a translator can be used as a standard practice; alternatively, an appropriate translator’s dictionary or an interpreter can be used.
  • We also believe that further intercultural consultation should be made available in the interpretation of the data and/or in the clinical interview with the applicant. Alternatively, interpretation and consultation by a clinical interviewer or report writer who has crossed cultures and has learned a second language (not English) to a functional degree is a requisite of a responsible and culture-fair process. We are referring to an assessment specialist, report writer and/or interviewer who has crossed cultures and understands the experience from the inside out – someone who is sensitive to the acculturation issues and their impact on the assessment. The same can be said for ethnicity.
  • We further advocate for intercultural consultation in the interpretation of the data and the report and in the interview with the candidate, or in other appropriate ways in the assessment.

These points present principles, practices and guidelines for assessing applicants who speak English as a second language. They may be extrapolated as they apply to the assessment of gender and ethnicity, where differences are not typically as extreme, but nevertheless have considerable significance.

In any case, fairness requires the development of differentiated norms for each group. That is, individuals from each group will need to be compared with applicants of the same “category” as well as with the larger norm group that includes all applicants for preparation for ordained or lay professional ministry.
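
As a minimal sketch of what differentiated norms mean in practice (the group means and standard deviations below are invented for illustration and are not actual CAS norms), the same raw score can be standardized against both the applicant’s own group and the full applicant pool:

    # Hypothetical norm tables: mean and standard deviation of one raw scale
    # score for the full applicant pool and for one illustrative subgroup.
    # The numbers are invented solely to show the computation.

    NORMS = {
        "all_applicants": {"mean": 50.0, "sd": 10.0},
        "esl_applicants": {"mean": 44.0, "sd": 9.0},   # hypothetical subgroup
    }

    def z_score(raw, group):
        """Standardize a raw score against the named norm group."""
        norm = NORMS[group]
        return (raw - norm["mean"]) / norm["sd"]

    raw = 47.0
    print(round(z_score(raw, "all_applicants"), 2))  # -0.3: a bit below the overall mean
    print(round(z_score(raw, "esl_applicants"), 2))  # 0.33: a bit above the subgroup mean

Reading the same raw score against both reference groups is what allows an individual to be compared fairly with applicants of the same “category” as well as with the larger applicant pool.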

Who will be involved in assessing and reporting on an applicant’s psychological fitness for ministry?

The applicant will select a monitor – a local professional, such as his or her pastor – who will be present while the test battery is being administered, ensuring that “standard conditions” for such testing are present. To this end, the monitor signs a contract with CAS to follow the instructions provided and then reports back to CAS on the testing situation and the applicant’s response to it.

The applicant will also provide the names of four references to contribute valuable information and insight about the applicant’s character, strengths and areas of growth from a variety of contexts in which the applicant functions.

A CAS assessment specialist will review and analyze the test results along with information provided by the four references. That specialist, referred to as the “report writer,” will prepare the report for review by the applicant with the assistance of a local clinical interviewer who is a member of the CAS team.

Once the applicant and the clinical interviewer have validated (and possibly amended) the report, it will be sent to the applicant’s judicatory selection decision-makers for additional review and follow-up.

While this is the usual and primary assessment team, the services of a special consultant may be needed if there appear to be cognitive problems, psychopathology and/or cultural issues at stake.


What are the circumstances under which special consultants might be brought in to assist in assessing an applicant?

We have found that the services of a special consultant are helpful if there appear to be cognitive problems, psychopathology and/or cultural issues at stake.

Cognitive problems
Certain cognitive problems may require a full Wechsler Adult Intelligence Scale or the Reitan Test Battery of cognitive functioning. The test battery can point to the possibility of an organic basis for a suspected “thought disorder” and to the likelihood that chemical/medical intervention could correct the problem. A Rorschach Projective Technique could be used to determine the precise nature of a thought disorder and the extent to which it could be remediated by an appropriate form of cognitive therapy.

That is, it would be essential to know whether the thought disorder is emotional or organic in origin and nature before a decision could be made concerning the prediction of future behavior.

Psychopathology
If a possible diagnosis of moderate to severe psychopathology were to emerge, a special consultant may be needed. On rare occasions, the test battery and/or the clinical interview may suggest that the applicant has serious problems with intermittent depression, for example, or has a conduct disorder or thought disorder.

Having a psychiatrist interview the applicant may help clarify the issues and concerns, with the results of the psychiatric consult being included with the CAS report to the judicatory. The psychiatrist would be able to share the diagnosis with the applicant or not, as she or he sees fit.

Cultural issues
A special consultant may be needed to help interpret test results of candidates from racial/ethnic minorities. The selection of the consultant would be based in part on her or his own racial/ethnic background and that of the candidate.

Are there other circumstances under which CAS’ services are appropriate?

Yes, our procedures and assessment specialists may be of assistance to a denomination with regard to pastors in service who are experiencing difficulty, or with those who have been on a leave of absence and want to return to active ministry.

These assessments may be selected or required by the denomination, or they may be requested by the pastor herself or himself. Either way, the assessment would need to be authorized by and paid for through the judicatory, so the assessment contract is between the judicatory and CAS.

How well researched is the CAS test battery and assessment process?

The Clergy Assessment Service appears to be the most researched ministry selection program in the U.S. to date!

Significant research on personnel selection programs began during World War II when the army intelligence corps began “hiring.” Another significant program of personnel selection research accompanied the Peace Corps project, which had the advantage of selecting a clientele who served for short-term periods of two to four years. Failures in the Peace Corps selection process could be rather quickly identified and put right.

The difficulties of doing research with an ultimate rather than a proximate time frame are extensive and well known. Assessment for careers in ministry replicates those same research difficulties. Nonetheless, there are other ways to research such programs, and CAS has benefited from such studies.

In developing the contents of the test battery, DeWire (1978) surveyed the testing scene in a major denomination. He wanted to know how many of the denomination’s selection judicatories were utilizing psychological assessment to establish fitness for ministry, or lack thereof, among applicants for lay and ordained ministries. Where they were utilizing psychological assessment, he wanted to know how test results were being used, what degree of satisfaction with and confidence in the testing process was in evidence, who received and reviewed the reports, how the assessments and reports were handled and stored and what kind of assistance and research would make programs of psychological testing more effective.

DeWire found that thirty-nine of the sixty-four basic administrative units of the denomination were utilizing testing procedures in 1978. Tests were being used both for screening and guidance decisions. He concluded that interest in the area of psychological testing of applicants for ministry was intense and that further work was needed to assist selection judicatories in such matters.

Results from the professional evaluators of these judicatories indicated that thirty of the thirty-nine conferences were utilizing the MMPI, nineteen the Strong-Campbell Vocational Interest Inventory, fifteen the Edwards Personal Preference Test and ten a version of the Incomplete Sentences Blank. Other tests being used were at a frequency of five or lower. DeWire’s findings provided an empirical basis for structuring the work of a national task force on testing for the denomination.

It is significant to note that the CAS test battery makes use of these same high-use instruments, adding the Adjective Check List, Shipley-Hartford Test, an Attitude and Activity Scale, the MMPI-2 and California Psychological Inventory. As noted above, the CAS battery also includes a reference form to be completed by four persons who know the applicant. Thus, the DeWire study has informed the CAS assessment process, anticipating and affirming various instruments utilized in the CAS test battery.

Further, J. Jeffrey Means, Ph.D., is to be credited for sorting out the conceptual levels of criteria, that is, fitness, competency, readiness and effectiveness.

Several other studies have focused on the CAS test battery and process. Clergy Assessment and Career Development (Parthenon Press/Abingdon Press, 1990) presents several such studies. In Chapter 9, John Hinkle and Emily Haight address issues in criteria development, suggesting three models for the identification and development of criteria. David Hogue explores the concept of “ministerial satisfaction” through the development of an empirical measure of satisfaction in ministry.

In the same volume, Michael Comer presents research directed toward evaluating the capacity of selected instruments in the test battery to predict to what extent clinicians are doing what they say they are doing – making systematic and rational decisions from test data sufficiently well that an empirically documented pattern of decision-making is clear. Comer replicated earlier findings on the “severity of record” variable from the MMPI. He adds new material to the research effort in studying the fit of self-image with the clinical picture of personality surfaced by reviewers. The most significant impact of the Comer study may well be its provision of a necessary building block for further validity studies in clergy assessment, as noted in Chapter 11.

In Chapter 12, Pamela Holliman addresses the question of whether persons who are involved in making selection decisions in a denominational process understand the purpose and process of psychological assessment. She reports her in-depth review of seven such units (judicatories) within a denominational system. Both selectors and applicants are included in the review.

Holliman found that an unexpectedly high degree of agreement was present across units and between candidates and selectors, concerning the priority to be given to psychological assessment reports in the selection process. Further, she discovered very high levels of agreement among selectors about the value of the psychological assessment report to the selection decision. Candidate responses to the process were similar to those of psychotherapy outcome studies. Approximately one-third of candidates thought that the clinical assessment interview and the report were most helpful. About one-third felt the interview and report were adequate in affirming what applicants already knew about themselves. The final one-third felt that they got nothing out of it. Of this final one-third, ten percent thought that testing was a bad idea and could even be destructive.

Each of these studies, except for the Hogue study, was conducted on the CAS model and associated denominational selection and nurture processes. Each of these studies was conducted by a person carrying out studies in a nationally recognized Ph.D. program. In each case, the research was supervised by university and seminary faculty members – a feature of the research process that verifies the validity of that process. Dissertations are available through University Microfilms at the University of Michigan at [add internet address here].

In addition, Hinkle conducted a study of the use and utility of the CAS reports provided to judicatory selection decision-makers. Subsequent to the use of the report in a selection process, selectors completed the research instrument, commenting on such questions as: 1) Was the report useful in your deliberations and decisions? 2) Was the report pertinent to your questions and concerns? 3) Was something missing from the report about which you would have liked more information? 4) How would you rate your level of satisfaction or dissatisfaction with the report?

Results validated the usefulness, pertinence and completeness of the reports. The satisfaction level was rated “very high.” Further studies have been developed, but as of this date are incomplete, largely due to a lack of funding for implementation. Other studies have addressed issues related to clergy assessment, as noted in our bibliography.

In conclusion, results of the studies outlined above provide strong empirical confirmation of the effectiveness of the CAS assessment process and product.


What kind of lead time should our judicatory count on for a psychological fitness review by CAS?

We recommend three months. From the moment we receive the completed tests, recommendations, and other forms in our office, until the date the report arrives in the judicatory officials’ offices, it generally takes eight to ten weeks to complete the assessment process. The actual time depends on how long it takes all parties to receive the materials, respond and take appropriate action to move the process along.


How does the cost of a CAS psychological fitness review compare to other clergy candidate review processes?

At $600 for the assessment, the cost is much less than half of the “going rate” in other markets and in the secular environment. Our fees are also substantially less than those charged by organizations and individuals functioning in a religious environment.

Please note, however, that costs for special consultations are extra. As such, we present the facts to the judicatory in advance of bringing on a consultant, discuss the issues and costs and receive judicatory approval for the additional expenditure before proceeding with the special consultation.

The cost of in-service reviews of pastors in difficulty, which follow the same procedure as reviews of applicants for ministry, is $700, as these reviews require two interviews rather than one.


Can we ask the applicant to pay for the CAS psychological fitness review?

Most judicatories pay the entire assessment fee. However, some ask the applicant and his or her sending congregation to share the cost. The rule of thumb is that the applicant should not be required to pay more than 49% of the total cost since ownership of the report rests with the judicatory requiring it.

Regardless of the source of the funds that pay for the assessment, the judicatory is responsible for collecting those funds and responding in full to the CAS invoice. That is, the financial contract is between the judicatory and CAS, not CAS and the applicant.

 

How long has CAS been around?

In 1968, our founder, John E. Hinkle, Ph.D., began working with judicatories in Indiana to develop a psychological fitness for ministry assessment program. We’ve been expanding our reach and improving our processes, in response to judicatory needs nationwide, ever since.

How did CAS get started?

In a theological and vocational sense, John E. Hinkle heard a call from Macedonia in 1968, as did St. Paul in the New Testament (Acts 16:9 RSV). Formerly a pastor and then a missionary in the Philippine Islands, Dr. Hinkle was completing his Ph.D. dissertation while serving as the newly appointed Director of the Indiana Pastoral Care and Counseling Center in Indianapolis.

In this instance, the call came from selection decision-makers from two judicatories in the State of Indiana. They wanted to include a psychological fitness review in their process of selecting applicants for ministry. In response to that call, Dr. Hinkle developed a psychological fitness for ministry assessment program that 20 years later became the model for a national system of psychological assessment of fitness for ministry in a major denomination.

In 1972, Dr. Hinkle accepted an appointment to the faculty of Garrett Theological Seminary in Evanston, Illinois. Concurrent with that appointment, another call from Macedonia occurred in the Chicago metropolitan area, a call that was followed by many other such calls. The response to these early calls to provide a program of psychological assessment of fitness for ministry resulted in the formation of the Clergy Assessment Service, around 1970.

Other judicatories began asking for consultation in establishing such a program in their regions. Throughout the 1970s, Dr. Hinkle consulted with denominational judicatories in Nebraska, Wisconsin, New Jersey, New York, Washington and North Dakota. He also offered seminars on the topic in Texas, California, Maryland, Illinois and Georgia.

In 1980, he delivered a watershed paper to the National Congress of the American Association of Pastoral Counselors at a meeting in Denver, Colorado.

From 1975 to 1985, the program model of the Clergy Assessment Service was accepted for use as a national program by a mainline Protestant denomination. At the same time, the model was being utilized in judicatories of other such denominations.

By 1990, CAS was delivering assessment services to seven United Methodist judicatories, three Lutheran judicatories, three United Church of Christ judicatories, two Episcopal judicatories, a Roman Catholic Order and various judicatories selecting missionary applicants for foreign service. Dr. Hinkle also served on two denominational national committees to develop and provide oversight of such programs.

By the year 2000, twenty judicatories from various denominations were participating in the CAS process. Currently, that number is thirty-five, and growing. In 2005, CAS went online to begin serving these accounts in a more expeditious manner.

Today, a panel of trained, certified and experienced report writers has been recruited: one on the East Coast, two in the Midwest and one on the West Coast. Additionally, CAS has more than thirty-five clinical interviewers at strategic locations around the country. The number of report writers and clinical interviewers is expected to grow as requests for CAS services grow.

In these next years of growth, CAS is committed to keeping quality high and service mobile and competent. Policies and procedures will remain cost effective as well. CAS is keenly aware that these services are paid for, ultimately, by the monies and pledges that are dropped into the collection plates of parishes around the country on Sunday morning. CAS’ pledge is to be a good steward of those same dollars by rendering faithful service in response to the call of the church, that is, in St. Paul’s phrase, the call from Macedonia, come over and help us (Acts 16:9 RSV).