Background and Rationale for New Norms
The 2018 update to the norms for youth reflects the third set of normative reference values for children and adolescents used by R-PAS (Meyer, Viglione, Mihura, Erard, & Erdberg, 2011). The first set, which was used from 2011 to early 2014, relied on normative data that had been collected by researchers in five countries using the Comprehensive System (CS; Exner, 2003), and published in the 2007 Supplement to the Journal of Personality Assessment on CS International Reference Samples (Meyer, Erdberg, & Shaffer, 2007). As documented in our
Initial Statement on Child and Adolescent Norms,
the standard CS norms for children and adolescents (Exner, 2003) were seriously in error on a number of variables, which ended up making normal children look disturbed (Meyer et al., 2007; see also Viglione & Giromini, 2016). The internationally collected data corrected that problem. However, the data did not provide norms for all variables, and when they did, norms were available only for three broad age ranges: ages 5 to 8, 9 to 12, and 13 to 18. These features made those norms suboptimal. Consequently, R-PAS began gathering additional child and adolescent data in order to implement age-specific norms for all variables.
That data collection effort resulted in norms that we implemented in January 2014 and used until switching to the current norms, implemented in September 2018. We considered the 2014 norms to be transitional rather than final because not all the data collectors had obtained certificates of proficiency in administration and coding, and only three countries contributed protocols. Nonetheless, the data clearly improved over the former CS-based norms, providing reasonable age-specific data for all R-PAS variables. For the 2014 norms, we superimposed overlays on the adult normative reference values on the Page 1 and Page 2 profile pages. The overlays showed for each age from 5 to 17 the expected mean score for that age, accompanied by "whiskers" extending one standard deviation (SD) below and above that mean. These overlays thus showed the score expected for someone of a particular age, as well as the typical range of scores for that age, relative to adult standards. In addition, we computed age-specific standard scores for each age by using the expected means and SDs. This was done for all variables, and the results output provided standard scores both according to the adult norms and according to the age-specific norms. We described the technical steps we followed to develop those norms
(Technical and Methodological Aspects of Developing the Initial Overlays for R-PAS Child and Adolescent Norms)
and also provided practical advice on how to interpret the results output
(Practical Guide to Understanding the R-PAS Results Output for Children and Adolescents).
The 2014 norms not only provided specific norms for each age, they also fixed some of the irregularities that were present with the 2011 norms (Meyer & Erdberg, 2018). Because of the procedures we used to generate the 2014 norms, they worked well to identify expected means and SDs for each age. However, they did not work as well for identifying how deviant scores were when they fell outside of that range for variables that lacked normal distributions. These non-normal variables provide counts of rare or very rare behaviors (e.g., Ex or PEC codes), which produces skewed distributions, with outlying values that pull the mean higher than the midpoint of the distribution and produce SDs that cannot fully account for observations in the tail of the distribution where high scores reside. This causes the standard scores derived from the means and SDs of skewed variables to be elevated more than they would be if we had used percentiles to generate standard scores. To compensate for this, R-PAS treated standard scores for youth differently from adults, with icons and interpretive ranges encompassing 15-point bins rather than 10-point bins. For instance, the average range was 85 to 115 with youth (15 points on either side of 100), but 90 to 110 with adults (10 points on either side of 100); and yellow icons were used for the range 116-130 with youth but for the range 111-120 with adults.
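The 2014 bin conventions described above can be sketched as follows (the function names and the restriction to these two illustrative ranges are ours; the actual scoring program handles the full set of interpretive ranges and icons):

```python
def in_average_range(standard_score, youth):
    """True if a standard score falls in the 'average' interpretive range
    under the 2014 conventions: 85-115 for youth (15-point bins),
    90-110 for adults (10-point bins)."""
    half_width = 15 if youth else 10
    return 100 - half_width <= standard_score <= 100 + half_width

def gets_yellow_icon(standard_score, youth):
    """Yellow icons flagged 116-130 for youth but 111-120 for adults."""
    if youth:
        return 116 <= standard_score <= 130
    return 111 <= standard_score <= 120
```

A score of 112, for example, is average for a youth but sits in the adult yellow range, which is the kind of dual bookkeeping the 2018 norms eliminate.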
For the 2018 norms, our aim was to obtain better norms for variables with skewed distributions. To do so, we relied on a much more exhaustive effort to characterize the full distribution of scores for any skewed variables, from the very lowest values, through the center, to the very highest values. We also wanted to make the output on the R-PAS Summary Scores and Profiles pages more readable and less confusing. By eliminating the adult normative reference values and overlays, as well as the 15-point bins, users can now interpret the adult and youth norms in the same way.
The 2018 norms make use of the same participants as the 2014 norms. As such, they remain transitional norms that we will eventually replace. This document describes the 13 primary steps we took to create the new norms and then explains how reference values for the new norms differ in important ways from the 2014 norms for some variables.
Thirteen Steps to Develop the Norms
Step 1: Organize and Combine Samples
Like the 2014 norms, the 2018 norms have at their core R-PAS-administered protocols from Brazil (n = 197) and the US (n = 113) spanning the age range from 6 to 17. To this, we added a small number of R-Optimized modeled records that had originally been collected using CS administration guidelines. These protocols encompassed ages 7 to 12 and were collected in the US (n = 24) and Italy (n = 11). All protocols were coded using R-PAS guidelines. In the combined sample, each gender was approximately evenly represented (51.7% female, 45.1% male, 3.2% missing information).
There was variability in the number of cases available at each age, with ns by age as follows: 6 = 3, 7 = 56, 8 = 74, 9 = 60, 10 = 77, 11 = 43, 12 = 15, 13 = 11, 14 = 2, 15 = 1, 16 = 2, and 17 = 1. Because we want the data to generalize internationally, we tried to give each country relatively equal weight in the final analyses for each age level. To accomplish this, we assigned individual cases from a country at a particular age weights that ranged from a low of 0.64 to a high of 3.0. We assigned weights in such a manner that the overall sample size at a given age would remain fixed. For instance, among the 74 8-year-old children, 22 were from the US, 50 were from Brazil, and 2 were from Italy. In order to more optimally equalize the contribution from these three subsamples, the US children were given a weight of 1.25, the Brazilian children were given a weight of .80, and the Italian children were given a weight of 3.0. Thus, in the weighted analyses, the effective sample sizes were US = 28 (22 * 1.25, rounded), Brazil = 40 (50 * .80), and Italy = 6 (2 * 3.0), such that the overall sample size for 8-year-olds remained at 74.
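The weighting arithmetic can be checked with a short sketch (the counts and weights below are the 8-year-old values from the text; the helper function name is ours):

```python
def effective_sizes(counts, weights):
    """Effective per-country contribution after case weighting, plus the
    weighted total; weights are chosen so the total stays near the
    original sample size at that age."""
    eff = {country: counts[country] * weights[country] for country in counts}
    return eff, sum(eff.values())

counts = {"US": 22, "Brazil": 50, "Italy": 2}
weights = {"US": 1.25, "Brazil": 0.80, "Italy": 3.0}
eff, total = effective_sizes(counts, weights)
# eff -> {"US": 27.5, "Brazil": 40.0, "Italy": 6.0}; total -> 73.5 (~74)
```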
We concluded that the individual ages from 7 to 12 had enough cases to analyze separately. However, we combined the cases from ages 13 to 17. The very small sample of 6-year-olds (n = 3) was too small to examine on its own, so these cases were added to the group of 7-year-olds. To anchor the adult end of the developmental continuum, the R-PAS adult normative sample (n = 640) was subdivided into three age groups: young adults age 18 to 26 (n = 177, M = 22.9), middle adults age 27 to 54 (n = 394, M = 38.5), and older adults age 55 to 86 (n = 68, M = 63.4). The one 17-year-old in the R-PAS norms was placed with the cases from ages 13 to 17, resulting in a sample of n = 18 with an average age of 13.9.
Step 2: Identify Variables with Potentially Problematic Skew
R-PAS provides normed data for 131 variables, as reported in the R-PAS Profile Appendix of Summary Scores for All Variables. We identified 85 variables with potentially problematic skew, which we defined as skew ≥ 1.0 in either the youth data (n = 346) or the adult data (n = 639). Next, we applied square root transformations to the variables. This fixed the potential skew problems for 46 of the 85 variables. These variables were CT, W, Dd, Dd%, SR, SI, AnyS, H, (H), Hd, (Hd), Ad, An, Cg, Sy, Vg, Vg%, M, FM, m, FC, CF, C', a, Ma, Mp, Blend, WSumC, SumC, MC, YTVC', mY, PPD, Blend%, INC1, WSumCog, MOR, PHR, ODL, ODL%, CritCont%, Complexity, LSO Cmplx, Det Cmplx, (H)(Hd)(A)(Ad), and VFD. For 37 of the remaining variables, the transformation made skew less problematic but did not fix it. These variables included Pr, Pu, (A), (Ad), Art, Ay, Bl, Ex, Fi, Sx, FQn, WDn, M, C, Y, T, V, r, FD, CBlend, DV1, DR1, PEC, INC2, FAB1, FAB2, CON, SevCog, Lev2Cog, ABS, PER, COP, MAH, AGM, MAP, MAHP, and IntCont. The two remaining variables (DV2 and DR2) were dichotomous, meaning that protocols had either scores of zero or one, and thus their distributions were unfixable. CON was nearly dichotomous, with skew > 7.0 in both the youth and adult data. Because DV2, DR2, and CON had severe distributional problems, as well as very similar distributions in both subsamples with no evidence of developmental trends (e.g., the M for CON was 0.02 in both the youth and adult subsamples), they were set aside at this step, with the expectation that we would use the existing adult norms for each youth age. This left us with 82 targets for revised norming.
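As a rough illustration of the screening rule and transformation, the sketch below computes Fisher-Pearson skewness for a hypothetical right-skewed count variable before and after a square root transformation (the data and function are illustrative, not actual normative values):

```python
import math

def skewness(xs):
    """Fisher-Pearson coefficient of skewness (population form)."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    return m3 / m2 ** 1.5

# a hypothetical right-skewed count variable with a long upper tail
raw = [0] * 40 + [1] * 25 + [2] * 15 + [3] * 10 + [4] * 5 + [6] * 3 + [9] * 2
transformed = [math.sqrt(x) for x in raw]
# skew >= 1.0 flags the variable; the sqrt transform pulls skew below 1.0
```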
Step 3: Generate Equally Likely Samples at Each Age
Because our samples at each age were relatively small, we knew that substantial sampling error would affect any developmental estimate they provided. Sampling error is the natural variability that causes the observed data values to depart from their true population values. To contend with this natural variability, we used bootstrap resampling procedures (Efron & Tibshirani, 1993) to create 100 alternative datasets for each R-PAS variable at each age. These bootstrap samples provide 100 equally likely versions of the data, showing what each age-based sample could have looked like when drawing samples of the same size from the same population. They also show how much the sample estimate for a variable (e.g., the 50th percentile for Sy) could vary by chance alone. These alternative samples helped maximize our ability to detect genuine developmental changes from these small and imperfect samples. From each sample, we obtained key statistical values across the distribution of scores. These included the M, SD, and skew, as well as the minimum and maximum values, and the 2nd, 5th, 9th, 16th, 25th, 37th, 50th, 63rd, 75th, 84th, 91st, 95th, and 98th percentiles.
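A minimal sketch of this resampling (using a simple nearest-rank percentile convention; the exact percentile definition used in the actual analyses is not specified here):

```python
import random

def bootstrap_percentile(xs, pct, n_boot=100, seed=1):
    """Draw n_boot resamples of xs (with replacement, same size as xs) and
    return the pct-th percentile from each, via nearest rank."""
    rng = random.Random(seed)
    n = len(xs)
    k = max(0, min(n - 1, round(pct / 100 * (n - 1))))
    estimates = []
    for _ in range(n_boot):
        resample = sorted(rng.choice(xs) for _ in range(n))
        estimates.append(resample[k])
    return estimates

# e.g., chance variability of the median in a sample of 60 scores
medians = bootstrap_percentile(list(range(60)), 50)
```

The spread of the 100 estimates shows how much a sample median of this size can vary by chance alone.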
Step 4: Fit Polynomial Regressions to Selected Raw Score Statistics from Age
Next, we relied on continuous inferential norming to fit polynomial regression curves based on age to the developmental data (Lenhard, Lenhard, Suggate, & Segerer, 2018; Oosterhuis, van der Ark, & Sijtsma, 2016; Zachary & Gorsuch, 1985; Zhu & Chen, 2011). This approach is in contrast to the traditional approach of estimating normative values by plotting distribution parameters (e.g., the 50th, 75th, and 90th percentiles) in age-based categories and iteratively smoothing out irregularities in the developmental data. In the traditional approach, only the cases in a given age group estimate each distributional parameter (e.g., only the 7-year-old cases estimate the 50th percentile for that age). In contrast, continuous norming uses all cases across the age continuum to estimate the shape of the developmental trajectory for the parameter under consideration (e.g., all cases estimate how the 50th percentile changes as a function of age). As such, the number of protocols contributing to our regression estimates was 985, consisting of 346 children and adolescents and 639 adults. Test publishers have embraced continuous norming as a means of enhancing accuracy when generating normative estimates. Not only can it predict various parameters (e.g., means, standard deviations, skew, individual percentiles), but it requires per-age samples that are just 20% to 40% as large as those needed by traditional norming to achieve the same level of accuracy (Oosterhuis et al., 2016; Zhu & Chen, 2011).
At this step, the specific statistics we predicted were the 50th percentile, 84th percentile, 95th percentile, maximum value, M, and SD. The 50th percentile is the median value, equivalent to a standard score of 100; the 84th percentile is equivalent on a normal distribution to a standard score of 115; and the 95th percentile is similarly equivalent to a standard score of 125.
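These percentile-to-standard-score equivalences follow directly from the normal curve (M = 100, SD = 15) and can be checked with the standard library:

```python
from statistics import NormalDist

norm = NormalDist(mu=100, sigma=15)
# standard-score equivalents of the key percentiles on the normal curve
ss_50 = norm.inv_cdf(0.50)   # exactly 100
ss_84 = norm.inv_cdf(0.84)   # about 115
ss_95 = norm.inv_cdf(0.95)   # about 125
```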
Next, for each of the 82 target variables, we fit regression equations to predict the statistics noted above from age. In these analyses, the dependent variable consisted of the statistic from the actual age-based samples, using cases both weighted to balance contributions across countries and unweighted, as well as statistics from the 100 bootstrap samples. Thus, at each age, there were 102 dependent variable 'cases.' In the regression models, we gave full weight (1.0) to the best-balanced actual sample (i.e., weighted to equalize contributions across countries), a middle weight (.75) to the unweighted actual data, and a lower weight (.50) to each of the bootstrap samples. For each of the 82 target variables, we modeled the linear, quadratic, and cubic functions of age (i.e., Age, Age^2, and Age^3). We completed this iteratively for each statistic (e.g., the 84th percentile). Our goal was to find the best fitting function that made developmental sense. In the inferential norming literature, a key step is to review the alternative regression results both to see how much prediction increases when moving from a linear to a quadratic and then a cubic function and, most importantly, to see whether the various alternative regression models make developmental sense.
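The core computation at this step is a weighted least-squares polynomial fit. A self-contained sketch using the normal equations (the function is ours; the actual fitting software used is not specified here):

```python
def polyfit_weighted(ages, ys, ws, degree):
    """Weighted least-squares polynomial fit via the normal equations.
    Returns coefficients [c0, c1, ...] for c0 + c1*age + c2*age**2 + ..."""
    p = degree + 1
    # build X'WX and X'Wy for the monomial basis
    A = [[sum(w * a ** (i + j) for a, w in zip(ages, ws)) for j in range(p)]
         for i in range(p)]
    b = [sum(w * y * a ** i for a, y, w in zip(ages, ys, ws)) for i in range(p)]
    # Gaussian elimination with partial pivoting
    for col in range(p):
        piv = max(range(col, p), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, p):
            f = A[r][col] / A[col][col]
            for c in range(col, p):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * p
    for r in range(p - 1, -1, -1):
        coef[r] = (b[r] - sum(A[r][c] * coef[c] for c in range(r + 1, p))) / A[r][r]
    return coef
```

In the procedure described above, each age would contribute 102 'cases' whose entries in `ws` are 1.0, .75, or .50 depending on their source.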
Step 5: Identify Optimal Predictive Functions
The groundwork for the next step began with analyses that used a smaller subset of adult normative records, specifically the 118 adult records that had the full text of their responses available. We initially ran the analyses described above using these records before switching to the full adult normative sample. At Step 5, when using the smaller subset of adult records, we reviewed images of each scatterplot showing Age, Age^2, and Age^3 predicting one of the target statistics for one variable. Using the individual scatterplot and the statistical information about model fit, one of us (GJM) identified the best fitting function (i.e., linear, quadratic, cubic, or some combination) for every target statistic for every score, with the goal of optimally capturing changes in youth by a function that also made sense developmentally. These decisions were then reviewed by one of the other authors (DJV, JLM, or LG), with disagreements iteratively resolved over several rounds of discussion. As this process evolved, it became increasingly clear that a single function optimally fit each variable, applying as well to the 50th percentile as to the SD.
Consequently, when revisiting Step 5 using the full normative sample to anchor expected scores for adults, we began by creating a file containing on one page the six images of scatterplots showing Age, Age^2, and Age^3 predicting the target statistics for a variable. A separate page was devoted to each of the 82 target variables. Using the visual scatterplots and the statistical information about model fit, one of us (GJM) identified the best fitting function across all six of the target statistics. The judgment made looking at all six statistics and using the full adult norms differed from the original resolved judgments described in the previous paragraph for nine of the 82 variables. Three of us (DJV, JLM, and GJM) carefully reviewed these variables in a joint discussion, achieving consensus in confirming that the proposed revision more optimally captured the developmental continuum, particularly in youth.
The two figures below illustrate the data and our decisions, using the square root of Sy and WSumCog as examples. In each figure, the upper panels show, respectively, the 50th, 84th, and 95th percentiles, while the lower panels show the M, SD, and maximum. Our decisions focused more on the percentiles and maximum than on the M and SD. Within the six panels, age is on the horizontal axis, and statistical values for the square root of the raw score are on the vertical axis. The black circle indicates the weighted average score designed to better balance contributions across countries. The yellow rectangle is the unweighted actual sample mean (for adults, these two are equivalent). The blue crosses indicate the results from each of the 100 bootstrap samples. The solid line designates the linear regression estimate, the dotted line designates the quadratic function, and the dashed line indicates the cubic function. For each age, the optimal regression should fall in the vertical band of values formed by the observed data and bootstrapped alternatives (i.e., it should be within the vertical blue band).
In the first set of scatterplots, for the square root of Sy, we selected the quadratic function as the most optimal fit. It is clear that the linear function does not sufficiently capture the developmental increase in these scores as youth grow up (nor their seeming decrease in older adults), and the cubic function has some implausible developmental trajectories. For instance, although its steep slope in youth could make sense, the nonlinear increase for maximum values would not (increases in Sy from age 65 to 70 also would not make sense developmentally). For similar reasons, we also selected the quadratic function as the most optimal fit for predicting the square root of WSumCog. With this variable, the linear function did not adequately capture the decline in cognitive codes from ages 6 to 17, and the cubic function shows declines and rises that are implausible developmentally. The quadratic function, however, captures the decline as youth age, as seen in the 84th and 95th percentile graphs, and that function reduces to linear when appropriate, as is the case for the 50th percentile and the maximum value.
Step 6: Predict Full Distributional Statistics using the Optimal Predictive Functions
Ultimately, we never selected the cubic function as providing the most optimal prediction. We selected the linear function for eight variables [Hd, (A), Ad, Fi, M, C, FAB2, and PHR] and the quadratic function for the remaining 74 variables. With the optimal predictive function selected, we then predicted the key statistical parameters for each variable. These included the minimum and maximum values, as well as the following percentiles: 2nd, 5th, 9th, 16th, 25th, 37th, 50th, 63rd, 75th, 84th, 91st, 95th, and 98th. We estimated three alternative minimum and maximum values. These included the values observed across all ages in the age category (i.e., youth or adult), the values observed across all samples at a particular age, and the values observed for each individual sample. We used these options to help ensure accurate modeling of the distribution tails, with the values for the age categories being the most extreme and the values for individual samples the least extreme. We then predicted each statistical parameter (e.g., the 16th percentile) from age, producing 19 equations per variable.
Step 7: Generate Expected Raw Scores from Age
Using the regression equations from Step 6, we generated expected raw scores for the different ages. We completed this for each variable using its 19 equations. Although our interest was in ages 6 to 17, we predicted raw scores from age 6 to age 20 to help ensure we were properly mapping the developmental transition to adult standards. For all variables, the key statistical parameters were on the square root metric, so we squared the raw scores generated from the equations to return the scores to their original metric.
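Steps 6 and 7 together amount to evaluating each fitted polynomial on the square-root metric and squaring the result back to the raw metric. A sketch assuming quadratic coefficients of the form c0 + c1*Age + c2*Age^2 (the coefficients shown are invented for illustration, and flooring negative square-root predictions at zero is our assumption):

```python
def raw_score_table(equations, ages=range(6, 21)):
    """equations maps a statistic name to polynomial coefficients fitted
    on the square-root metric; predictions are squared back to the raw
    metric, giving one expected value per statistic per age."""
    table = {}
    for age in ages:
        row = {}
        for stat, coefs in equations.items():
            sqrt_val = sum(c * age ** i for i, c in enumerate(coefs))
            # floor negative sqrt-metric predictions at zero (an assumption)
            row[stat] = max(sqrt_val, 0.0) ** 2
        table[age] = row
    return table

# e.g., a hypothetical quadratic equation for the 50th percentile
demo = raw_score_table({"p50": [0.5, 0.18, -0.004]})
```

Running all 19 equations for a variable through this evaluation yields the full set of predicted distributional parameters at each age from 6 to 20.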
Step 8: Fit Polynomial Regressions to Estimated Standard Scores from Raw Scores
With these data, we were now in a position to examine how the raw scores related to the underlying percentiles, as well as to their normal curve standard score equivalents. There were clearer nonlinear associations between the predicted raw scores and their standard score equivalents than between the raw scores and their percentiles. Consequently, we focused on the former. The key percentiles we examined (2nd, 5th, 9th, 16th, 25th, 37th, 50th, 63rd, 75th, 84th, 91st, 95th, and 98th) correspond to standard scores of 70, 75, 80, 85, 90, 95, 100, 105, 110, 115, 120, 125, and 130. We anticipated that the three alternative minimum values would help predict standard scores of 55, 60, and 65, while the three alternative maximum values would help predict standard scores of 135, 140, and 145.
We visualized the associations between raw scores and standard scores for each variable and fit linear, quadratic, and cubic functions to the results. After doing so, it was clear that the standard scores tied to the 95th and 98th percentiles (i.e., 125 and 130) frequently stood out as erratic relative to the optimal fitting function, which usually was the cubic function. Consequently, we dropped them from the modeling. In addition, the maximum values across the youth and adult age categories, which included results from the 100 bootstrap samples, overestimated normative values for a standard score of 145, so we also dropped them from the modeling. With these adjustments, it became clear that predicting the maximum value observed in each individual sample fit reasonably at a standard score value of 135, while predicting the maximum value observed across all samples at a particular age fit reasonably at a standard score value of 145.
Because many scores are unable to take on values below zero, and a value of zero often resides at a standard score in the 80s or 90s, retaining predicted raw score values of zero or very near to it produced a long tail of zero values down to a standard score value of 55 that interfered with meaningful prediction. To overcome this, we removed from consideration predicted raw scores that had mean values < 0.10.
Finally, the cubic function optimally fit the data in almost all instances. The exceptions were for Pr, Pu, (Ad), Bl, Ex, Sx, Y, V, r, FAB2, and ABS, for which the linear function was most optimal.
The two figures below illustrate these points for Sy and WSumCog. The horizontal axis shows raw score values, while the vertical axis shows standard score values. The horizontal rows of colored dots indicate each age from 6 to 20, with the similarly colored lines indicating the cubic function that best predicts that age. For Sy, it is evident that the cubic function does not exactly match the shape of the standard score estimates in the range from 60 to 85 for raw scores very close to zero. However, that has little practical consequence: values of zero still receive an appropriately low standard score, and the function reasonably models raw score values of one and higher. Although hard to read and incomplete, the legend on the lower right of each figure shows the R^2 values for ages 6, 7, and 8. They range from .951 to .956 for Sy and are .999 at each age for WSumCog.
Step 9: Generate Equations to Predict Standard Scores from Raw Scores by Age
Knowing the best fitting function linking raw scores to standard scores, we next used that information to generate the equations for predicting standard scores from raw scores at each age, using the appropriate linear or cubic equation for each variable. Across the 11 linear functions, the average and median R values for model fit were both .98, and the minimum was .88. The least accurate predictions were obtained when predicting standard scores for Sx with 6-, 7-, and 8-year-olds (.88, .92, and .94, respectively). This is perhaps not surprising, given that sexual content is essentially nonexistent at these ages in nonpatients. For the 71 cubic functions, the R value for model fit was .996 on average; the median was .998, and the minimum was .964.
Step 10: Join All Variables
Armed with the equations to optimally predict standard scores from raw scores for the 82 target variables with potentially problematic skew, we joined the variable-by-age equations from Step 9 with the 2014 normative Ms and SDs for the 46 variables that did not have potentially problematic skew (i.e., skew < 1.0). For these variables, we could use the Ms and SDs to generate appropriate standard scores. The 2014 normative data for the three variables with problematic skew that could not be improved (DV2, DR2, and CON) also were included in the joint file. Thus, the file contained data for all 131 variables and three possible equations to generate standard scores. One option used the Ms and SDs, another used the linear function from Step 9, and the third used the cubic function from Step 9.
Step 11: Predict Standard Scores from Raw Scores
Next, using the appropriate function, we generated standard scores across the full range of plausible raw score values for all variables. The variables under consideration can take very different values, ranging from those that can have negative values (e.g., MC − PPD), to those with decimal places for either a half point (e.g., WSumC) or a tenth of a point (e.g., EII-3), to those with values well above 100 (Complexity). To encompass this variability, predictions were generated for three types of values. Based on ranges in the data, one type encompassed unit values from −22 to 175 in increments of 1, another encompassed half-point intervals from −22.5 to 21.5, and the third encompassed values from −3.0 to 9.0 in increments of 0.1. In order to predict appropriate standard scores for a raw score of zero with the skewed variables, we estimated the standard score associated with a raw score of 0.5, converted that standard score to its percentile equivalent, and divided that percentile by 2 to get the midpoint, which is the expected point value for 0, just as in the adult norms.
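The zero-score midpoint convention described above can be expressed with the standard library's normal distribution (assuming the M = 100, SD = 15 standard score metric used throughout; the function name is ours):

```python
from statistics import NormalDist

def standard_score_for_zero(ss_at_half):
    """Standard score for a raw score of 0: take the percentile of the
    estimated standard score at raw 0.5, halve it to reach the midpoint,
    and map the result back through the normal curve (M = 100, SD = 15)."""
    norm = NormalDist(mu=100, sigma=15)
    percentile_at_half = norm.cdf(ss_at_half)
    return norm.inv_cdf(percentile_at_half / 2)
```

For example, if a raw score of 0.5 sits exactly at the mean (standard score 100, the 50th percentile), a raw score of 0 lands at the 25th percentile, a standard score of about 90.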
For the three variables that were dichotomous or nearly so (DV2, DR2, and CON), we imported the adult normative data as the best approximation for all youth ages. Note, however, that the two variables making use of these scores on the Profile Pages for interpretation, WSumCog and SevCog, rely on age-specific normative results, not adult information.
For all variables, we assigned percentiles based on their normal curve equivalents. As a final step, we trimmed the results to fit realistic parameters. Specifically, we omitted impossible raw score values (e.g., negative values for count variables, values above 40 for F or WD) and deleted all estimated standard scores below 0 or above 200.
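The trimming rules just described can be sketched as a simple filter (the function name, the example ceiling, and the `counts_only` switch for variables that legitimately take negative values are illustrative):

```python
def trim_norm_table(entries, max_raw=None, counts_only=True):
    """Drop rows with impossible raw scores (negative values for count
    variables, values above a known ceiling such as 40 for F or WD) and
    standard scores outside 0-200. entries is a list of
    (raw_score, standard_score) pairs."""
    kept = []
    for raw, ss in entries:
        if counts_only and raw < 0:
            continue
        if max_raw is not None and raw > max_raw:
            continue
        if ss < 0 or ss > 200:
            continue
        kept.append((raw, ss))
    return kept
```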
Step 12: Carefully Check Predicted Values and Integration with Adult Norms
Next, we exhaustively checked the developmental progression of each variable at each age, with a particular focus on how adequately the standard scores for each raw score intersected with adult norms. When we detected irregularities, we investigated the source of the irregularity in detail. In every instance, the irregularity turned out to be an issue with the adult normative values at the upper end of the raw score distribution.
To create the adult norms, raw score values were converted to their percentile equivalents, and then those percentiles were converted to their normal curve equivalents. This process works fine if the full range of potential raw score values is present and represented in the norms. However, that is not always the case. For instance, YTVC' has a maximum value in the norms of 24. On the Page 1 Profile for adults, this value has a standard score of 145, on the very right edge of the plotted profile area. However, the YTVC' distribution is not fully continuous. All values between 0 and 15 are represented in the norms, but then the values jump to 17, 20, 21, and 24. The gaps in values create irregularities in the adult norms because percentiles are assigned based on the rank ordering of values (and the relative frequency of cases), not based on how many underlying units separate the values. The resulting irregularities can be seen on any adult Profile Page: the progression of raw scores and their standard score equivalents is regular up to values of 15, but irregularly spaced above that point. Similar irregularities at the upper end of the adult norms are visible on the Profile Pages for other variables (e.g., H, V). Indeed, continuous inferential norming, as used with the 2018 youth norms, is designed precisely to fix this problem when generating normative data.
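The rank-based conversion that produces these gaps can be illustrated as follows (the sample counts are invented, and the midpoint percentile-rank convention is an assumption). Because only rank order and frequency matter, the top value receives the same standard score whether it sits one unit or six units above its neighbor:

```python
from statistics import NormalDist

def rank_based_norms(sample):
    """Convert each distinct raw value to a standard score via its
    midpoint percentile rank and the normal curve (M = 100, SD = 15)."""
    norm = NormalDist(mu=100, sigma=15)
    n = len(sample)
    scores = {}
    for v in sorted(set(sample)):
        below = sum(x < v for x in sample)
        at = sample.count(v)
        pct = (below + at / 2) / n  # midpoint percentile-rank convention
        scores[v] = norm.inv_cdf(pct)
    return scores

# a gapped distribution: values jump from 4 straight to 10
gapped = [0] * 30 + [1] * 25 + [2] * 20 + [3] * 15 + [4] * 8 + [10] * 2
scores = rank_based_norms(gapped)
# replacing the two 10s with 5s yields the identical standard score for
# the maximum, showing that the 6-unit gap is invisible to this method
```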
Thus, as it stands now, the 2018 youth norms do a better job modeling extreme scores than the adult norms. The adult norms have irregular gaps in the links between raw score values and their standard score equivalents, produced by the percentile transformations used to generate those norms. It will be beneficial for RPAS to use procedures similar to those used to create the 2018 youth norms to update its adult norms and to model extreme values more smoothly, as is planned for the next edition of the manual.
Step 13: Update Results Output
Using the new normative data, we generated Profile Pages specific to each age. These now have age-specific underlying units to show the location of raw scores on the grid. In addition, the Appendix in the results output providing norms for all 131 variables was updated from the 2014 norms, dropping the Adult Standard Score (ASS) columns and renaming the former Child Standard Score (CSS) columns. For youth, the new Appendix is in the same format as it always has been for adults. However, it now provides age-specific results for each variable.
Illustrating Differences Between the 2014 and 2018 Norms
The figure below provides results from the Appendix for a 15-year-old boy. On the left are the old 2014 normative results and on the right are the current 2018 normative results. The focus in this display is on the Content scores, so the official headings are omitted. One can see that most of the variables have very similar values (CSS on the left versus SS on the right), differing by anywhere from zero to several points. However, a number of variables with skewed distributions now have a higher floor, such that a raw score of zero is more common or typical than before [e.g., (A), An, Ay, Bl]. More importantly, the variables with substantial skew now have lower standard scores when raw scores are greater than zero. This can be seen with the four variables outlined with rectangles. Whereas a single Explosion code (Ex) led to a standard score of 136 before, it leads to a score of 128 now. Similarly, two Fire codes (Fi) previously led to a standard score of 130 but now generate a score of 121. In addition, three Vagueness codes (Vg) and the associated percentage score of 12% previously generated standard scores of 131 and 127, respectively; now they generate standard scores of 119 and 118. These latter illustrations document that the revised norms work as intended to model more adequately the shape of the underlying skewed distributions and, consequently, their elevations. Furthermore, the youth norms for all variables integrate seamlessly with adult normative expectations. This was not fully the case with the 2014 norms.
For Users with Existing Youth Protocols in the Scoring Program
2011 CS Norms
Child and adolescent protocols that either were obtained using CS administration guidelines or scored using the CS FQ tables still have to use the old 2011 normative overlays for ages 5 to 8, 9 to 12, or 13 to 18, as described in our
Initial Statement on Child and Adolescent Norms. These protocols have retained their existing settings in the R-PAS scoring program.
2014 RPAS Norms
For existing cases, users have the option to retain the old normative scores or convert the case to the new norms. If you choose to retain the old norms, you will continue to have the option to convert to the new norms later. However, once you convert the record, you will not have the option to revert to the old norms. Two places in the program allow for conversion, as seen in the images below. The first image is from the Viewing Protocol screen. If the scores for the protocol need tabulation, the screen will provide the option to do so retaining the 2014 norms or converting to the new 2018 norms. The second image shows the same options on the Point and Click interface for coding. Similar options are on the Tables coding interface.
References
Efron, B., & Tibshirani, R. J. (1993). An introduction to the bootstrap. New York, NY: Chapman & Hall.
Exner, J. E. (2003). The Rorschach: A Comprehensive System, Vol. 1: Basic foundations (4th ed.). Hoboken, NJ: Wiley.
Lenhard, A., Lenhard, W., Suggate, S., & Segerer, R. (2018). A continuous solution to the norming problem. Assessment, 25, 112-125. doi:10.1177/1073191116656437
Meyer, G. J., & Erdberg, P. (2018). Using the Rorschach Performance Assessment System (R-PAS) norms with an emphasis on child and adolescent protocols. In J. L. Mihura & G. J. Meyer (Eds.), Using the Rorschach Performance Assessment System (R-PAS) (pp. 46-61). New York, NY: Guilford Press.
Meyer, G. J., Erdberg, P., & Shaffer, T. W. (2007). Towards international normative reference data for the Comprehensive System. Journal of Personality Assessment, 89, S201-S216. doi:10.1080/00223890701629342
Meyer, G. J., Viglione, D. J., Mihura, J. L., Erard, R. E., & Erdberg, P. (2011). Rorschach Performance Assessment System: Administration, coding, interpretation, and technical manual. Toledo, OH: Rorschach Performance Assessment System, LLC.
Oosterhuis, H. E. M., van der Ark, L. A., & Sijtsma, K. (2016). Sample size requirements for traditional and regression-based norms. Assessment, 23, 191-202. doi:10.1177/1073191115580638
Viglione, D. J., & Giromini, L. (2016). The effects of using the International versus Comprehensive System norms for children, adolescents, and adults. Journal of Personality Assessment, 98, 391-397. doi:10.1080/00223891.2015.1136313
Zachary, R. A., & Gorsuch, R. L. (1985). Continuous norming: Implications for the WAIS-R. Journal of Clinical Psychology, 41, 86-94. doi:10.1002/1097-4679(198501)41:1<86::AID-JCLP2270410115>3.0.CO;2-W
Zhu, J., & Chen, H.-Y. (2011). Utility of inferential norming with smaller sample sizes. Journal of Psychoeducational Assessment, 29, 570-580. doi:10.1177/0734282910396323
