Persistent ethnic differences in test performance may be entirely an artifact of the method used to 'adjust' the test

It is well established among those who carry out, analyze, and report pre-employment performance testing that slope-based bias in those tests is rare. Why is this important? Look at the following three graphs from a recent study by Aguinis, Culpepper and Pierce (2010):

[Figure 1, Panels A-C: test score versus job performance, with separate regression lines for the majority and minority groups (images not shown).]

Figure 1. Illustration of the typical finding of no slope-based differences and intercept-based differences favoring the minority group (Panel A), and the possibility that there are slope-based differences (Panels B and C) together with no intercept-based differences (Panel B) and small intercept-based differences (Panel C).

Figure 1A shows the idealized scenario that is assumed to hold most of the time: the minority population has lower performance results than the majority population, but both have the same shaped distribution, and thus regression lines with the same slope. The minority's line is simply shifted down, giving it a lower y-intercept (the point where the line crosses the vertical axis on the left).

Figures 1B and 1C show what is assumed NOT to happen. In these cases, the minority group has a lower (flatter) slope, which actually brings its y-intercept closer to the majority's value.

In real-life situations, one might want to adjust for the y-intercept (a measure of overall, or average, performance) so that the minority and majority groups' lines sit at the same height. The myriad reasons for doing this are not important right now; just assume we believe it is fair to make sure that the absolute best person with purple skin has the same chance of getting a job as the absolute best person with green skin (where one is the majority and one is the minority). If there is a systematic performance difference, it may well be because of something consistent between the groups that we don't care about but want to adjust for.

Psychometrics experts have long contended that we can do this by simply adjusting the y-intercept (shifting the line vertically) without negative consequence. If, however, they are wrong, and the real-life situation looks more like B or C in the figures above, this would be bad. It would mean that people in the minority group who perform at the top of their game would still be under-measured, less likely to get the job, and subject to bias against them.
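To make the geometry concrete, here is a minimal simulation in Python of the Panel B scenario (my own sketch, with invented slopes, sample sizes, and noise levels, not anything taken from the paper). The minority's true slope is flatter, but we fit the model the field typically fits: one common slope plus a group intercept adjustment, and then look at what that model predicts for minority applicants at the extremes.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical "Panel B" population: same intercept, flatter minority slope.
n_maj, n_min = 2000, 300                        # invented sample sizes
x_maj = rng.normal(0, 1, n_maj)                 # test scores, majority
x_min = rng.normal(0, 1, n_min)                 # test scores, minority
y_maj = 0.50 * x_maj + rng.normal(0, 1, n_maj)  # performance, true slope .50
y_min = 0.25 * x_min + rng.normal(0, 1, n_min)  # performance, true slope .25

# The usual model: one common slope plus a group intercept adjustment.
x = np.concatenate([x_maj, x_min])
y = np.concatenate([y_maj, y_min])
g = np.concatenate([np.zeros(n_maj), np.ones(n_min)])  # 1 = minority
X = np.column_stack([np.ones_like(x), x, g])
b0, b1, b2 = np.linalg.lstsq(X, y, rcond=None)[0]

for score in (-2.0, 2.0):                       # minority applicants at the extremes
    fitted = b0 + b1 * score + b2               # what the adjusted model predicts
    truth = 0.25 * score                        # their actual expected performance
    print(f"test score {score:+.0f}: model says {fitted:+.2f}, truth is {truth:+.2f}")
```

With a flatter true slope, the common-slope model is wrong at both ends of the scale (it mis-predicts in one direction at one end and the opposite direction at the other), and no amount of intercept shifting can fix that; the error lives in the slope.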

The established wisdom appears to be wrong

Well, it turns out that the very study that provided these pretty graphs, "Revival of Test Bias Research in Preemployment Testing" by Herman Aguinis, Steven Culpepper and Charles Pierce (published in the Journal of Applied Psychology), strongly suggests that we've gotten this wrong all along. It is not safe to assume that there is no bias in slope in these tests; in fact, there is reason to expect that there usually, or at least often, is a slope difference, despite the fact that the opposite has been "well established."

... these established conclusions are not consistent with recent ... research showing that [testing for] slope-based differences is usually conducted with insufficient levels of statistical power, which can lead to the incorrect conclusion that bias does not exist... Also, these established conclusions are not consistent with expectations ... that sociohistorical-cultural and social psychological mechanisms are likely to lead to slope-based differences across groups.

... and not just y-intercept differences. Meaning, an adjustment assuming no difference in slope would result in a bias against the group with the shallower slope when it comes to actually doling out jobs or promotions.

The study provides a new and very sophisticated look at the statistics underlying this sort of analysis and strongly suggests that "...intercept-based differences favoring minority group members is a result of a statistical artifact. In fact .... we would expect to find artifactual intercept-based differences favoring the minority group even if these differences do not exist in the population."
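The artifact is easy to reproduce. Here is a bare-bones sketch (my own toy numbers, and a deliberately simplified version of the paper's argument): build a population with a single, completely unbiased regression line, give the groups different means on the underlying ability, measure that ability with a test of reliability .80, and fit the standard intercept-difference model. An intercept gap "favoring" the minority appears out of nowhere.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000                                # large n so the gap below is not noise

# No true bias: one criterion line for everyone, y = T + noise, where T is
# the underlying ability construct.
g = (rng.random(n) < 0.2).astype(float)    # 1 = minority (20%, invented)
T = rng.normal(-1.0 * g, 1.0)              # minority mean 1 SD lower on T
y = T + rng.normal(0, 1, n)                # criterion: the SAME line for both groups

# The test measures T with reliability .80 (classical measurement error):
x = T + rng.normal(0, 0.5, n)              # var(T)/(var(T)+0.25) = 0.80

# Fit the standard intercept-difference model: y ~ x + group dummy.
X = np.column_stack([np.ones(n), x, g])
b0, b1, b2 = np.linalg.lstsq(X, y, rcond=None)[0]
print(f"within-group slope {b1:.3f}, minority intercept shift {b2:.3f}")
# Prints a shift of about -0.20: the minority line sits below the majority
# line, which the field reads as the test "over-predicting" (favoring)
# minority performance. No bias was built in; measurement error made it.
```

In this toy setup the phantom shift works out to (1 - reliability) times the group mean difference times the true slope, so the noisier the test and the bigger the mean gap, the more "pro-minority" bias it appears to have.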

An important conclusion of this study, and a rather startling one for most people in the field or who rely on psychometric research to justify their race-based (racist) agendas, is that intercept-based differences (actual overall mean differences) in performance on tests "... are smaller than they are believed to be or even nonexistent ..." which in turn is consistent with the findings of a number of recent studies that have brought the whole methodology into question.

Can you say "paradigm shift?"

An inadequate history

So-called Industrial and Organizational Psychology (I/O research, or psychometrics) has a long history, and the last couple of decades of that history have involved two overarching trends: 1) the relative isolation of the field into publication in highly specialized journals, with many of the research teams very comfortably referencing each other and shutting out external criticism, and 2) a growing strong belief (and I have carefully selected that word ... belief) in the validity of the methods and the accumulated evidence based on those methods.

But it may well turn out that much of this internal self-love is based on a poorly assembled house of cards. For example, Aguinis, Culpepper and Pierce document a large number of prior studies that were done with inadequate sample sizes. They point out that Lent, Auerbach, and Levin (1971) found that the median sample size of 406 studies in Personnel Psychology between 1954 and 1969 was 68; studies in human resource selection from Personnel Psychology between 1950 and 1979 had similarly low sample sizes, according to Monahan and Muchinsky (1983). Dozens of studies published in the Journal of Applied Psychology, the Journal of Occupational and Organizational Psychology, and Personnel Psychology were similarly flawed (Salgado 1998; Russell et al. 1994).

So that is the breadth of the problem. The intensity of the problem is exemplified in a specific case outlined by Aguinis, Culpepper and Pierce. They had a look at a paper by Rotundo and Sackett (1999) in which the authors concluded that "the sample size used in the present study was double the largest tabled value in the Stone-Romero and Anderson article, and the predictor reliabilities were in the .80 to .90 range. . . . We suspect that the power to detect a small effect size in the present study would be reasonably high." (ibid page 821) It wasn't. Aguinis, Culpepper and Pierce computed the statistical power (using a standard method) of Rotundo and Sackett's results at 0.101. The usual benchmark for this statistic is to be larger than 0.80.
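You can get a feel for the power problem with a quick simulation. This is a sketch under invented assumptions: the subgroup sizes echo the Rotundo and Sackett example, but the slopes, error variance, and everything else are mine. The paper's own power computation, which also folds in unreliability and range restriction, is the one that yields .101.

```python
import numpy as np

rng = np.random.default_rng(1)

def interaction_power(n_min, n_maj, slope_min, slope_maj, n_sims=1000):
    """Simulated power of the usual moderated-regression test for a slope
    difference (the score-by-group interaction term), at alpha = .05."""
    n = n_min + n_maj
    g = np.concatenate([np.ones(n_min), np.zeros(n_maj)])
    hits = 0
    for _ in range(n_sims):
        x = rng.normal(0, 1, n)
        y = np.where(g == 1, slope_min, slope_maj) * x + rng.normal(0, 1, n)
        X = np.column_stack([np.ones(n), x, g, x * g])
        beta = np.linalg.lstsq(X, y, rcond=None)[0]
        resid = y - X @ beta
        se = np.sqrt(resid @ resid / (n - 4) * np.linalg.inv(X.T @ X)[3, 3])
        hits += abs(beta[3] / se) > 1.96       # large-sample .05 cutoff
    return hits / n_sims

# Subgroup sizes from the Rotundo-and-Sackett example; the true slopes
# (a difference of .05) and everything else are invented:
print(interaction_power(1_212, 17_020, slope_min=0.20, slope_maj=0.25))
```

With these made-up numbers the detection rate comes out well under the .80 benchmark: even 18,000-plus people leave a small, real slope difference undetected most of the time.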

The point here is simple: Psychometricians can arm-wave all they like about how many studies have been done, how those studies repeat each other and give similar results, and even how key well-endowed studies (those with large samples) support the broader range of lesser studies. But the arm-waving looks rather silly when one looks at the plethora of bad studies using questionable data interpreted optimistically, to the extent that one wonders if there is some sort of built-in denial. Or worse.

A bad methodology can have victims

And this is not a problem of small magnitude. Aguinis, Culpepper and Pierce spell out what it would have taken for the touted Rotundo and Sackett study to work:

Statistical power would increase to what is usually considered the acceptable level of .80 if some of the design and measurement characteristics are improved. For example ... increasing the sample size in the African American subgroup from 1,212 to 32,000, increasing the sample size in the Whites group from 17,020 to 90,000...

Clearly, we should not be impressed with numbers like "one thousand" when numbers of an order of magnitude more are needed to make the strong and important claims that are often made.

And the effect on the humans under study of what appears to be systematic inadequacy in the entire field is astounding. Aguinis et al. re-examined the Rotundo and Sackett study to see who would be affected, and how, if the resulting model were used to hire individuals after taking a test. If the biases inherent in the analysis were ignored,

... there would be 20.6% of false negatives in the African American subgroup and 1.42% of false positives in the White subgroup. ... about 250 African Americans (out of a total of 1,212) would be denied employment incorrectly and about 242 (out of a total of 17,020) White applicants would be offered employment incorrectly.
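Here is roughly how that accounting works, as a sketch. This is not the Aguinis and Smith calculator; the subgroup lines and score distributions are invented, and only the subgroup sizes and the cutoff of 0 come from the quoted example. Still, with these toy numbers the rates land in the same ballpark as the quoted ones.

```python
import numpy as np

rng = np.random.default_rng(7)

# One hypothetical Panel-C-like configuration (lines and score means invented):
n_min, n_maj = 1_212, 17_020
x_min = rng.normal(-0.8, 1, n_min)            # minority test scores
x_maj = rng.normal(0.0, 1, n_maj)             # majority test scores

def line_min(x): return 0.10 + 0.15 * x       # minority subgroup line
def line_maj(x): return 0.00 + 0.30 * x       # majority subgroup line

# The common line actually used for selection, fitted to everyone pooled
# (fitted here to the subgroup-line values; adding mean-zero noise would
# converge to the same common line):
x_all = np.concatenate([x_min, x_maj])
y_all = np.concatenate([line_min(x_min), line_maj(x_maj)])
slope, intercept = np.polyfit(x_all, y_all, 1)

cutoff = 0.0                                   # criterion cutoff from the example
common_min = intercept + slope * x_min
common_maj = intercept + slope * x_maj
fn = np.mean((line_min(x_min) >= cutoff) & (common_min < cutoff))
fp = np.mean((line_maj(x_maj) < cutoff) & (common_maj >= cutoff))
print(f"minority false negatives: {fn:.1%}, majority false positives: {fp:.1%}")
```

The asymmetry is the point: the common line quietly converts a slope difference into a pile of wrongly rejected minority applicants whose own subgroup line puts them above the cutoff, plus a smaller pile of wrongly accepted majority applicants.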

It's the slope, stupid. Oh, and the sample size and the underlying assumptions and a few other things ...

So, getting it right, and more pointedly, decades of insisting that it has been gotten right when it wasn't, is highly consequential.

A large portion of the recognized problem has to do with sample size, relative sample size (between groups), and statistical power. These characteristics of a study are inherent to the testing method itself rather than to the groups being tested. An example that serves as a metaphor (but not as an exact statistical homologue) would be as follows. Suppose you are in a baseball league and the final playoffs are approaching. The teams from two cities have a competition to decide which city will host the final games. It is a home-run derby of sorts, where the team that can hit the ball the farthest in an open field wins. But there is a special rule: one batter is allowed in the competition for every 100,000 people living in each city. So, if this were Saint Paul and Minneapolis (the latter is much larger than the former), Saint Paul would have only a few batters while Minneapolis would have many. As a result, the playoff games would usually be held in Minneapolis. The reason for this is simple: the chance of getting an outlier -- a hit that is exceptionally long or short -- is greater with more attempts. Similarly, the outer portions (lower or higher) of an x-y pairing of data will be more extreme (more lower and more higher) in larger samples. And these more extreme values affect slope (and slope affects intercept). This example simply illustrates that something as simple as sample size can matter to the outcome of an analysis, in a way that appears to say something meaningful about the underlying population, but where that "meaning" is an artifact, not a reality.
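The metaphor is loose, but the statistical core of it is checkable in a few lines (again my own sketch, with toy numbers): hold the true slope fixed and watch how wildly the estimated slope swings at small sample sizes.

```python
import numpy as np

rng = np.random.default_rng(3)

def slope_spread(n, true_slope=0.3, n_sims=2000):
    """Standard deviation of the fitted slope when the true slope is fixed."""
    est = np.empty(n_sims)
    for i in range(n_sims):
        x = rng.normal(0, 1, n)
        y = true_slope * x + rng.normal(0, 1, n)
        est[i] = np.polyfit(x, y, 1)[0]        # fitted slope
    return est.std()

for n in (30, 100, 1000, 10000):
    print(f"n = {n:>5}: slope estimate sd ~ {slope_spread(n):.3f}")
# The population line never changes, yet at n = 30 the fitted slope routinely
# misses by about 0.2 in either direction; at n = 10,000 it barely moves.
```

Two groups drawn from identical populations can easily show "different" sample slopes when one group is a tenth the size of the other, and a genuinely different slope can just as easily hide.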

So, as a result, a purely statistical artifact ... a feature built into the system of measurement and analysis ... appears to have been written off as nonexistent in psychometrics. But it exists.

In the case of the present study, statistical effects cause the slope of smaller distributions to flatten out compared to those of larger distributions. In any event, the authors suggest that future studies evaluate the statistical power of the tests being used and be more careful about drawing conclusions with inadequate sample sizes. The study also recommends more dramatic shifts in approach:

... we suggest a new approach to human resource selection that examines how members of various groups are affected by testing and also approach testing from a different perspective. [Involving] a change in direction in human resource selection research including an expanded view of the staffing process that considers in situ performance and the role of time and context. In situ performance is the "specification of the broad range of effects--situational, contextual, strategic, and environmental--that may affect individual, team, or organizational performance" [in order] to lead to a better understanding of why and conditions under which "tests often function differently in one ethnic group population than the other"

The tests were biased anyway, but it is worse than previously thought

There is a second set of factors linked to bias in test results, which is well summarized and discussed in the article at hand: The sociohistorical-cultural explanation. I refer you to page 5 of the paper for more details, but briefly, this involves performance differences caused by two factors in minority individuals who would otherwise perform the same as majority individuals: 1) Real differences in ethnically shaped views of what matters for success and 2) performance bias owing to added pressures of being the minority who is required to act as the majority.

For the present, suffice it to say that these effects can also result in biases that have not been properly controlled for, and more specifically, slope differences.

When it comes down to it, our concern is that psychometrics is making a consistent, widespread and damaging pair of errors: a Type I error in concluding that intercept-based bias favoring minorities exists when it may not, and a Type II error in concluding that slope-based bias does not exist when it very well may, and thus adjusting in a way that is inappropriate. The Aguinis, Culpepper and Pierce paper provides a new statistical proof that a widespread mistake in analysis "... can lead to the conclusion that a test is biased in favor of minority group members when in fact such a difference does not exist in the population of scores."

The paper is here for you to read. A press release regarding the paper is here.

Citations:

Aguinis, H., Culpepper, S., & Pierce, C. (2010). Revival of test bias research in preemployment testing. Journal of Applied Psychology, 95(4), 648-680. DOI: 10.1037/a0018714

Lent, R. H., Auerbach, H. A., & Levin, L. S. (1971). Research design and validity assessment. Personnel Psychology, 24, 247-274.

Monahan, C. I., & Muchinsky, P. M. (1983). Three decades of personnel selection research: A state-of-the-art analysis and evaluation. Journal of Occupational Psychology, 56, 215-225.

Salgado, J. F. (1998). Sample size in validity studies of personnel selection. Journal of Occupational and Organizational Psychology, 71, 161-164.


In my professional opinion as a statistician: "Ugh!"

The interpretation of the intercept is wrong, and the slope estimates are obviously biased - how did they (the psychometricians) manage to miss such basic errors for so long? If they had bothered to look in a statistics textbook on regression and experimental design, this problem would have been fixed a long time ago.

I don't have stats past A-level, and haven't used even that for a decade, but this post still had me swearing in disbelief, just from an informed layman's knowledge. I wonder how completely this extends to other groups considered within psychometric analysis?

By stripey_cat (not verified) on 04 Aug 2010 #permalink

stripey_cat, what was the political status of the group at the time the psychometric measure was developed? There is a conservatism inherent in the process of changing tools over time that will tend to enshrine the original biases of the tool-makers.

Actually, the bias, remarkably, works no matter who is in the majority. This is one of the interesting things in the paper. If we stopped using these tests for 35 years, distributed lots of birth control to white-skinned Americans, and encouraged brown people to have lots of babies, THEN re-instituted the test when 80% of Americans were brown, 12% white, and 8% some other color, it would still work (as a sinister tool) but on behalf of the new brown majority.

The minority stress effect would now apply to white people; the statistical minority effects know no skin color ... they just flatten out the smaller group unless their sample size goes way up (which could be done with sampling of the kind that never is done) and so on.

That is not to say that, for instance, SES, quality of education and other effects would not matter. They still would, but that is not the subject under discussion here.

Of course, if the 12% white minority was still in charge, different statistics would be used to avoid these issues!

Hmm. I meant the more general use of bad stats and assumptions to bolster the reputation of old tools. Age tends to lend its own (unearned) legitimacy, so making big fixes doesn't happen often.

Absolutely. Especially in statistics, because most practitioners are not experts in the way stats should be used in their fields. Going back just a few years, it was very difficult to find someone who had published a few or more papers using statistics in any social science field (including the harder end such as physical anthro) who understood that they needed to use bootstrapping. Rather, they felt comfortable relying on concepts developed a hundred years ago and not changed much since, even though the concepts were developed the way they were explicitly because the computational requirements of real statistics were undoable by a single individual working with pencil and paper. (I didn't mention that the present paper, being a methodological one, uses bootstrapping and simulation.)
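(For the curious: bootstrapping a regression slope takes a dozen lines of modern code. The data below are made up; the only number taken from the discussion above is n = 68, the median sample size of those old validation studies.)

```python
import numpy as np

rng = np.random.default_rng(11)

# Made-up data at n = 68, the median sample size mentioned above:
n = 68
x = rng.normal(0, 1, n)
y = 0.3 * x + rng.normal(0, 1, n)

# The bootstrap: resample rows with replacement, refit, repeat.
boot = np.empty(5000)
for i in range(5000):
    idx = rng.integers(0, n, n)
    boot[i] = np.polyfit(x[idx], y[idx], 1)[0]   # refitted slope

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"bootstrap 95% CI for the slope: [{lo:.2f}, {hi:.2f}]")
# The interval is wide enough to drive a truck through -- which is the point:
# at these sample sizes, "we found no slope difference" is nearly automatic.
```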

Very interesting article! I guess this just shows confirmation bias is a big problem in all areas of science. Statistics are a great tool, but they don't help if you don't use them properly. Here the statistical problems are certainly real. Did these guys (I/O researchers) not bother to hire a professional statistician? Problems with test and criterion score reliability and range restriction are obvious. There's insufficient power to detect a slope-based difference if it exists, and such a difference could also result in a bias in the intercept. It should be pointed out the bias will exist but be in the other direction if the mean test score is higher for the minority vs. the majority group.

Yet, the authors have their own ideological biases. In the absence of any real statistical evidence that a slope-based difference exists, one cannot proceed under the assumption that it does. Sociohistorical-cultural and social psychological "explanations" are, in fact, only unproven hypotheses. They may affect predictor and criterion scores differently, but no real data is presented allowing us to predict the direction of a slope-based difference if one exists. Thus, the idea that a slope-based difference is "likely to exist" is ideologically, not statistically, based.

I guess it just depends on which glasses you put on: the "racist society has it in for minorities" or the "liberal/egalitarian society has it in for whites". And that bothers me.

Greg is correct - the bias is purely mathematical against the smaller group, and no politics are involved. There is quite possibly politics in (not) noticing the bias and (not) correcting it.

The errors are in the basic understanding and application of regression, which should be evident to a moderately trained statistician even before the bootstrapping and simulation. That's not a knock against Aguinis; he uses those tools to quantify the magnitude of the error and bias.

By Tomato Addict (not verified) on 05 Aug 2010 #permalink

NeuroGuy, you are wrong, and do please look at the "about" section of this blog before you get yourself booted off. I have a policy against pseudonymous commenters peppering my site with biased rhetoric about race, racism, gender bias, and science denialism.

Now, on to what you've said, just in case you are a valid commenter and not one of those guys referred to in the about page!

First, the authors of this article do NOT claim that there are slope effects in the absence of evidence for slope effects. The entire paper is all about demonstrating that they exist. Yes, they do claim that earlier papers that claim there are not slope effects (most prior research in the field, as it turns out) cannot make that claim, but they ALSO demonstrate that we can expect that there usually are such effects.

Sociohistorical-cultural and social psychological "explanations" are, in fact, only unproven hypotheses. They may affect predictor and criterion scores differently, but no real data is presented allowing us to predict the direction of a slope-based difference if one exists.

This paper is available for anyone to read. You might give that a go. You would have to follow up on some of the references cited therein, but their case is pretty good.

You are conflating socio-historical effects with statistical effects. Both have an impact, both are discussed and well documented in this paper.

The only thing I see going on here that is ideological is your comment. The rest is science. If you don't like the science, don't use a science-pretentious handle like "neuroguy"!

This is a perfect example of tribalism in science. Excellent post.

I have a policy against pseudonymous commenters peppering my site with biased rhetoric about race,racism, gender bias, and science denialism.

It's your blog and you can do as you wish. However flinging around accusations without evidence is not my idea of proper decorum. I challenge you to identify a specific statement of mine that constitutes biased rhetoric about race, racism, gender bias, or science denialism. Show me exactly where I have stated a belief in the inferiority of one race or gender, or specifically what well-founded scientific fact or theory I have denied.

First, the authors of this article do NOT claim that there are slope effects in the absence of evidence for slope effects. The entire paper is all about demonstrating that they exist.

No, it isn't. I've read the entire paper. The paper is mainly about demonstrating lack of sufficient power to detect slope effects in the previous research. The authors do this very well. Lack of sufficient power in previous studies however does not constitute evidence they exist.

Yes, they do claim that earlier papers that claim there are not slope effects (most prior research in the field, as it turns out) cannot make that claim, but they ALSO demonstrate that we can expect that there usually are such effects.

Saying "we expect to see such differences based on XYZ" is a hypothesis, not a demonstration. A demonstration takes actual data.

This paper is available for anyone to read. You might give that a go. You would have to follow up on some of the references cited therein, but their case is pretty good.

No, it isn't, and I have followed up on some of the references. The reason why their case isn't very good is they are one-sidedly looking only at the effect of minority group membership on test performance, but not on outcomes.

Let's look at the stereotype threat. It is absolutely true, as Brown and Day say, that "the extent to which stereotype threat influences predictive validity will depend on the degree to which stereotype threat differentially influences predictor and criterion scores". In other words, the stereotype threat can result in minorities performing worse on tests (predictors), but also in actual performance (criteria). This could result in a slope-based difference overpredicting minority performance, underpredicting minority performance, or neither (if the two effects cancel each other out). The same for the sociohistorical-cultural explanation - fear of "acting white" or "acting gringo" would adversely affect not only test scores, but performance as well.

The only thing I see going on here that is ideological is your comment. The rest is science. If you don't like the science, don't use a science-pretentious handle like "neuroguy"!

Sorry, I won't rise to that bait either. In real science, you actually have to produce data in support of what you are claiming, and you don't get a pass based on political considerations. If you expect to get a pass, you, and not I, are the ideologue. Let's see some real data supporting the existence of slope-based differences. But according to the paper all previous studies were severely under-powered and unable to detect it, if it were there. So a better study would be in order. I will accept that slope-based differences exist if and when a scientifically valid study comes out demonstrating they exist. That is the proper scientific attitude.

"An important conclusion of this study, and a rather startling one for most people in the field or who rely on psychometric research to justify their race-based agendas, is that intercept-based differences (actual overall mean differences) in performance on tests "... are smaller than they are believed to be or even nonexistent ..." which in turn is consistent with the findings of a number of recent studies that have brought the whole methodology into question. Can you say "paradigm shift?""

Your title and the statement above grossly misrepresent the article. Nowhere do the authors suggest that "intercept-based differences (actual overall mean differences) in performance on tests '... are smaller than they are believed to be or even nonexistent ...'" or that "ethnic differences in test performance may be entirely an artifact of the method used to 'adjust' the test." NOWHERE.

What the authors are talking about is predictive bias, which is a function of test scores and test validity. Under the current model, research shows that test scores over-predict school/career performance for Blacks and Hispanics. That is, Blacks and Hispanics perform worse than their test scores predict, relative to whites. According to the authors this predictive pro-minority bias, which is based on intercepts, may not exist. To quote:

"In spite of these established conclusions there are many reasons...why intercept-based bias favoring minority group members may be smaller than it is believed to be or not exist at all. Regarding the finding that no differences in slopes exist, Monte Carlo simulations and literature reviews have revealed that conclusions regarding the absence of slope differences across groups may not be warranted."

They say NOTHING remotely to the effect of "test performance may be entirely an artifact of the method." In fact, they note that the differences in test performance between groups contributes to the likelihood of predictive bias.

In addition to saying that the predictive (intercept-based) pro-Black/Hispanic bias might not exist, the authors suggest that there might be a predictive bias based on slopes. To add to your above quote:

"For the second analysis, for which there is a difference of only .01 in validity coefficients across groups, we also used a desired selection cutoff of 0 for the criterion as input in the Aguinis and Smith (2007) calculator. Errors due to using a common regression line instead of the subgroup-based regression lines would lead to 18.02% of false positives for the African American group and 14.4% of false negatives for the White group. Given the sample sizes, about 1,134 African Americans (out of a total of 6,296) would be incorrectly offered employment and about 2,451 Whites (out of a total of 17,020) would be rejected incorrectly."

They give reasons why there might be a predictive slope bias against Blacks and Hispanics (what happened to Asians?), but, of course, there might instead be a slope bias against whites. Either way, the paper did not directly concern mean test scores or imply that "ethnic differences in test performance may be entirely an artifact."

Greg,

Some of your comments are even more off.

Greg said: "First, the authors of this article do NOT claim that there are slope effects in the absence of evidence for slope effects."

The authors do not claim that there is a slope bias in absence of evidence because they do not claim there is one. They claim that a slope bias cannot be ruled out and they suggest that such a bias is, in their view, likely. Their methodological reasoning for the likelihood of the bias is pretty much the same as the methodological reasoning for the likelihood of genetically based population differences in intelligence -- once the possibility is shown, what are the chances of there being none?

Greg said: "Actually, the bias, remarkably, works no matter who is in the majority. This is one of the interesting things in the paper...."

The authors suggest that there might be a predictive slope bias against Blacks and Hispanics, given their sociohistorical-cultural and social psychological particularities -- particularities which do not generalize. (And yes, Neuroguy's point is valid.) To quote:

"The expectation that there are slope-based differences across groups is not based on differences in socioeconomic status but, rather, on sociohistoricalâ cultural and social psychological explanations. Next, we provide examples of the types of mechanisms that may cause slope-based differences across ethnic-based groups."

The examples given don't generalize. Jews (as we both know) and East Asians do not appear to suffer from them; why would you suppose whites would (especially from b)? The examples:

a. "Members of the minority group interpret discrimination against them as more or less permanent and institutionalized and developâa folk theory of getting ahead which differs in some respects from that of Euro-Americansâ

b. "Moreover, there are family and community pressures to not âact Whiteâ (in the case of African American communities) or âact gringoâ (in the case of Latino communities)."

c. "Social psychological explanations for why slope-based differencesare expected across groups rely on the stereotype threat literature"

Basically, the authors think it's likely that there will be a slope bias. And they think it's likely the bias will be against Blacks and Hispanics on the basis of a-c -- reasons that have already been trotted out a million times (and have failed) to explain the test score gap. Regardless, there is no reason to think this would generalize.

Neuroguy: I challenge you to identify a specific statement of mine that constitutes biased rhetoric about race, racism, gender bias, or science denialism. Show me exactly where I have stated a belief in the inferiority of one race or gender, or specifically what well-founded scientific fact or theory I have denied.

From reading your comments I guessed, and it is only a guess, that your intention is to obfuscate the nature of the original paper and my post about it. The clues to this include your restating what the original paper had said and your immediate accusation that it is all political. Those are typical elements of the pattern to which I refer.

Had I had the sort of conclusive evidence that you demand in your second comment, I would have simply deleted your comment and banned you from the blog. But I was not sure if you were being evil or sloppy, and chose to give you a chance.

No, it isn't. I've read the entire paper. The paper is mainly about demonstrating lack of sufficient power to detect slope effects in the previous research. The authors do this very well. Lack of sufficient power in previous studies however does not constitute evidence they exist.

Try reading the entire paper.

This paper is available for anyone to read....No, it isn't,

Yes it is.

I think this may be at the heart of the problem. You are reading some other paper.

"Chuck" .... First, no more sock puppeting. Pick a name, use it.

Second, as I said above to NeuroGuy, the paper is indeed available. Your characterizations of both the paper and my comments are pretty much fantasy. I can't see anything you've said that is worth responding to other than "do actually read the paper and the post."

And when you do read the paper, read all of it. Where you see something that I've said that you think isn't in there, I assure you that there is indeed a problem, but it's your reading comprehension that is at issue.

And, finally, do read the policy I have on this blog about anonymous people using this as a place to deposit your racist garbage. I'm going to think about what to do with your comments, and I may simply decide to just delete them. I have a hard time sustaining the argument that your comments are any kind of valid attempt to understand or argue a point.

And, again, I see nothing in a moderately quick read of your comments that tells me anything other than that you are making up the contradictions you are claiming. If this paper was of limited access we'd have a serious problem here of intellectual honesty, but since people can read it for themselves, it is not that serious.

I'm pretty sure that both you and "Neuroguy" only read the press release and the abstract. Bad form, Lesacre. I mean Chuck.

The examples given don't generalize. Jews (as we both know) and East Asians do not appear to suffer from them; why would you suppose whites would (especially from b)?

Chuck, this would be an example of where you are annoying. Read above, beyond, and around the cited sentences. For instance, read about the work in the Netherlands comparing native Dutch to Turks and Yugoslavs, and in Israel, and so on.

Dear reader: I apologize that you had to read these comments. Chuck and Neuroguy are members of the pro-race-based-thinking (a kind way of saying "racist") mafia, and they (as well as a handful of others) make it their business to find blog posts that criticize racist psychometric research and load the blog posts up with relevant-looking comments that are 90 percent distortions. I think there are a couple of people who might want to read these comments so I'll leave them up for a couple of days, but I'm obligated to take them down eventually.

I have another 450 odd miles to go and need to grab some of thee grand olde continental breakfast now, then hit the road. I will address this a little later - ok, several hours from now actually.

I just wanted to jump in to ask that "neuro"guy shut the fuck up with the pretentious use of a handle that implies you know anything about science. Specifically, that you know anything about social sciences. You don't. This is shown clearly in your ignorant fucking commentary about this paper and rather pisses me off. Politics != science, no matter how "sciency" you try to sound.

Both you and "Chuck" need to re-read the paper, then ask a relatively clever child to read the paper and explain it to you.

Oh no, I stayed in one of those "fancy" motel 8's that has a waffle iron and everything - though apparently I didn't spray enough of the junk and my waffle mostly stuck.

I would have been happier with wrapped danishes, if they were the cheese kind.

I need to grocery shop, but when I get back I will write a rather more thorough post about this paper and ignorant asshats who can't fucking read. I did already write about the awesome 12 hours on the road with an eight year old and a two year old - including on the fly repairs in the middle of nowhere, in Ohio. Goodtimes...Goodtimes...

Why on Earth did the authors choose a dashed line for the majority? Less people...less ink...they should have used the dashed line for the minority.

I know that's a silly comment on a weighty issue, but it irked me.

Because the paper explores variation in the characteristics of the minority as variables are changed. The majority line is the background.

Besides, you are just letting your majorinumericonormative bias show.