By: Sneha Jose and Jiwon Kim

[Images: portrait of Alfred Binet; IQ score bell curve]
Alfred Binet (1857–1911)

In 1904, the French psychologist Alfred Binet was commissioned by the French government to find a method to differentiate between children who were intellectually normal and those who were inferior (audiblox, 2009). The goal was to determine which children would attend special schools, where they would receive more individual attention, so that the education of intellectually normal children would not be disrupted. Faced with this task, Binet and his colleague Theodore Simon began developing a set of questions. The questions focused on attention, memory and problem-solving skills, abilities that were not taught in school. Binet also took into consideration that children of the same age differ in intellectual ability. Based on this observation, he proposed the concept of a mental age: a measure of intelligence based on the average abilities of children in a given age group.

The Binet-Simon scale was the first intelligence test, and it became the basis for the intelligence tests still in use today. However, Binet himself cautioned against misuse of the scale and misunderstanding of its implications. According to Binet, the scale was to serve only as a guide for identifying children in schools who required special education. He insisted that intelligence is influenced by a number of factors, changes over time, and can only be compared among children with similar backgrounds (Siegler, 1992).

The Stanford-Binet test brought the Binet-Simon scale to the United States. Stanford University psychologist Lewis Terman took Binet's original test and standardized it using a sample of American participants. The adapted test was first published in 1916 and soon became the standard intelligence test used in the U.S. It was this test that introduced the intelligence quotient (IQ) as the representation of an individual's score. Even today, the Stanford-Binet remains a popular assessment tool.
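The early Stanford-Binet expressed the intelligence quotient literally as a quotient: mental age divided by chronological age, times 100. A minimal sketch of that classic ratio formula (the function name is ours, for illustration):

```python
def ratio_iq(mental_age: float, chronological_age: float) -> float:
    """Classic 'ratio IQ': mental age over chronological age, times 100.

    A child performing exactly at the average level for their age
    therefore scores 100.
    """
    return 100.0 * mental_age / chronological_age

# A ten-year-old performing at the level of a typical twelve-year-old:
print(ratio_iq(12, 10))  # 120.0
```

Modern tests no longer use this quotient directly, but the convention that 100 means "average for one's age" survives from it.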

IQ is a score derived from a set of standardized tests developed to measure a person's cognitive abilities in relation to his or her age group ("Intelligence 2," 2004). An intelligence test is also referred to as a cognitive assessment. It is often administered to obtain more information about a student's intellectual strengths and weaknesses and overall cognitive potential. The test gives general information, in several areas, about a student's abilities compared to others of the same age. The tests are intended to predict how well, and in what ways, a child will learn new information. However, other factors must always be considered: a high IQ does not guarantee success, just as a low IQ does not guarantee failure ("School psychologist," 2006).

An IQ test measures how quickly and easily a person can learn and process information. When an individual takes an IQ test, his or her scores are compared with the scores of peers, typically of the same age. Once this comparison is complete, the test taker is given an IQ score. Although scores are scaled differently depending on the particular IQ test used, the average score is usually around 100 ("How does an," 2000).
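The peer comparison just described can be sketched as a "deviation IQ" computation: standardize the raw score against an age-matched norm group, then rescale so the average is 100. The function and the sample peer scores below are illustrative assumptions, using the common convention that one standard deviation equals 15 IQ points:

```python
from statistics import mean, stdev

def deviation_iq(raw_score, peer_scores, sd_points=15):
    """Standardize a raw score against an age-peer norm group, then
    rescale so the mean is 100 and one standard deviation is
    (conventionally) 15 IQ points."""
    z = (raw_score - mean(peer_scores)) / stdev(peer_scores)
    return 100 + sd_points * z

# Hypothetical raw scores from a norm group of same-age peers:
peers = [40, 45, 50, 55, 60]
print(deviation_iq(50, peers))  # exactly average, so 100.0
```

This is why an IQ score is meaningless without a norm group: the same raw score maps to different IQs depending on how one's age peers perform.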
Most IQ tests consist of 10 to 14 sections, each focusing on a particular cognitive ability such as comprehension, vocabulary, letter-number sequencing, spatial ability and reasoning. Why does the test focus on these areas? The reasoning is that, since they are abilities rather than skills, a person cannot manipulate the outcome of the IQ test ("How does an," 2000).
In other words, a test taker should not be able to study for an IQ test. As a result, the IQ test is meant to provide a completely fair and unbiased assessment of each test taker. Despite these intentions, however, the IQ test has its critics. Many studies claim that the test is targeted at Caucasians and therefore supplies biased information about other races. In addition, the test does not measure one important skill: creativity. Yet despite these criticisms, the IQ test is still commonly used as a way to predict a person's academic and financial success ("How does an," 2000).

IQ test scores are not always valid. Most psychologists acknowledge that IQ tests are not error-free psychological instruments, and no IQ test can guarantee 100% accuracy of its predictions. However, psychometricians generally regard IQ tests as having high statistical reliability. High reliability implies that, while test takers may receive varying scores when taking the same test on different occasions, or different scores on different intelligence quotient tests taken at the same age, the scores generally agree. An IQ score is therefore surrounded by an error band that shows where the test taker's true score is likely to lie. For modern tests, the standard error of measurement is about 3 points; in other words, the odds are about 2 out of 3 that a person's true IQ falls in the range from 3 points below to 3 points above the obtained IQ. Equivalently, there is about a 95% chance that the true IQ falls in the range from 4-5 points below to 4-5 points above the obtained IQ, depending on the test in question. Clinical psychologists generally regard IQ tests as having sufficient statistical validity for many clinical purposes.
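The error band described above can be sketched as follows, assuming normally distributed measurement error. The function name and the default SEM of 3 points are illustrative; the exact band width depends on the particular test's standard error of measurement:

```python
def iq_error_band(observed_iq, sem=3.0, z=1.0):
    """Error band around an observed IQ score.

    With z = 1 the band covers roughly 2 out of 3 cases; with
    z = 1.96, roughly 95% (assuming normally distributed error).
    """
    return (observed_iq - z * sem, observed_iq + z * sem)

print(iq_error_band(110))          # (107.0, 113.0), the "2 out of 3" band
print(iq_error_band(110, z=1.96))  # roughly (104.1, 115.9), the ~95% band
```

Reporting a band rather than a single number makes it harder to over-interpret small score differences between two test takers.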

Age and health in IQ testing performance
Much research shows that as people age, their performance on IQ tests decreases. One study published in 2006, led by Grover C. Gilmore of Case's Mandel School of Applied Social Sciences, tested a group of college students around 20 years old and a group of older individuals with an average age of 70. Gilmore and his fellow researchers ran two experiments of a type referred to as coding experiments. The results did not come as a shock: they supported the speculation that IQ test scores decline as age increases. The first of the two tests showed that the college students were 34% faster than the older adults at the coding tasks, which, when applied to the workplace or an academic setting, is quite a large difference. A second factor that has been speculated to affect IQ test scores is poor health. Studies have shown that individuals deficient in micronutrients have significant difficulty developing cognitive ability, which correlates with lower scores on IQ tests. For example, one study showed that individuals who were deficient in iodine scored on average 12 points lower on IQ tests. Considering that the average IQ score is 100, a 12-point drop is substantial, and it shows that even a single micronutrient deficiency can cause a significant decrease in test scores.
Although IQ tests cannot guarantee 100% accuracy, they are widely used all over the world. Intelligence is an ambiguous thing to measure. It can be developed to its fullest possible expression with correct information, practice and experience. Note also that IQ tests can only identify your current level of mental alertness, not a constant intelligence factor ("Are you a," 2004).

Video Clip

This short video clip illustrates an experiment conducted at Kyoto University, showing that chimpanzees have startling photographic memory and can easily beat humans at certain memory tasks.

References

audiblox. (2009, May 9). IQ test: Where does it come from and what does it measure? Retrieved from

Intelligence and IQ. (2003). Retrieved from

Are you a genius? IQ test scores claim to classify... (n.d.). Retrieved from

Intelligence 2. (2004). Retrieved from

IQ Brain. (n.d.). Age and health: How they affect IQ test scores. Retrieved from

School psychologist. (2006). Retrieved from