Descriptive Statistics

Descriptive statistics can be useful for two purposes: 1) to provide basic information about variables in a dataset and 2) to highlight potential relationships between variables. The three most common descriptive statistics can be displayed graphically or pictorially and are measures of central tendency, dispersion, and association.

Graphical/Pictorial Methods

There are several graphical and pictorial methods that enhance researchers' understanding of individual variables and the relationships between variables. Graphical and pictorial methods provide a visual representation of the data. Some of these methods include:

Histograms

Scatter plots

Geographic Information Systems (GIS)

Sociograms

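As a brief illustration of the first two methods, the sketch below draws a histogram of one variable and a scatter plot of two variables using Python with matplotlib (an assumed tooling choice; the variable names and toy data are hypothetical):

```python
import random
import matplotlib.pyplot as plt

random.seed(0)

# Toy data: one variable for the histogram, two related variables for the scatter plot
incomes = [random.gauss(50_000, 15_000) for _ in range(200)]
years_of_education = [random.uniform(8, 20) for _ in range(200)]
earnings = [2_000 * yrs + random.gauss(0, 10_000) for yrs in years_of_education]

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

ax1.hist(incomes, bins=20)                       # histogram: distribution of a single variable
ax1.set_title("Histogram of incomes")

ax2.scatter(years_of_education, earnings, s=10)  # scatter plot: relationship between two variables
ax2.set_title("Education vs. earnings")

plt.tight_layout()
plt.show()
```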

Glossary terms related to graphical and pictorial methods:

GIS
Histogram
Scatter Plot
Sociogram

Measures of Central Tendency

Measures of central tendency are the most basic and, often, the most informative description of a population's characteristics. They describe the "average" member of the population of interest. There are three measures of central tendency:

Mean -- the sum of a variable's values divided by the total number of values
Median -- the middle value when a variable's values are arranged in order
Mode -- the value that occurs most often

Example:
The incomes of five randomly selected people in the United States are $10,000, $10,000, $45,000, $60,000, and $1,000,000.

Mean Income = (10,000 + 10,000 + 45,000 + 60,000 + 1,000,000) / 5 = $225,000
Median Income = $45,000
Modal Income = $10,000

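A minimal sketch of the same calculation using Python's standard statistics module (the variable name incomes is just for illustration):

```python
from statistics import mean, median, mode

incomes = [10_000, 10_000, 45_000, 60_000, 1_000_000]

print(mean(incomes))    # 225000 -- pulled far upward by the single $1,000,000 value
print(median(incomes))  # 45000  -- the middle value of the sorted list
print(mode(incomes))    # 10000  -- the value that occurs most often
```
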
The mean is the most commonly used measure of central tendency. Medians are generally used when a few values are extremely different from the rest of the values (this is called a skewed distribution). For example, the median income is often the best measure of the average income because, while most individuals earn between $0 and $200,000, a handful of individuals earn millions.


Glossary terms related to measures of central tendency:

Average
Central Tendency
Confidence Interval
Mean
Median
Mode
Moving Average
Point Estimate
Univariate Analysis

Measures of Dispersion

Measures of dispersion provide information about the spread of a variable's values. There are four key measures of dispersion:

Range is simply the difference between the smallest and largest values in the data. The interquartile range is the difference between the values at the 75th percentile and the 25th percentile of the data.

Variance is the most commonly used measure of dispersion. It is calculated by taking the average of the squared differences between each value and the mean.

Standard deviation, another commonly used statistic, is the square root of the variance.

Skew is a measure of whether some values of a variable are extremely different from the majority of the values. For example, income is skewed because most people make between $0 and $200,000, but a handful of people earn millions. A variable is positively skewed if the extreme values are higher than the majority of values. A variable is negatively skewed if the extreme values are lower than the majority of values.

Example:
The incomes of five randomly selected people in the United States are $10,000, $10,000, $45,000, $60,000, and $1,000,000:

Range = 1,000,000 - 10,000 = 990,000
Variance = [(10,000 - 225,000)² + (10,000 - 225,000)² + (45,000 - 225,000)² + (60,000 - 225,000)² + (1,000,000 - 225,000)²] / 5 = 150,540,000,000
Standard Deviation = Square Root (150,540,000,000) = 387,995
Skew = Income is positively skewed

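The same figures can be reproduced with Python's standard statistics module; pvariance and pstdev divide by the number of values (5 here), matching the calculation above:

```python
from statistics import mean, median, pvariance, pstdev

incomes = [10_000, 10_000, 45_000, 60_000, 1_000_000]

print(max(incomes) - min(incomes))  # 990000 -- range
print(pvariance(incomes))           # 150540000000 -- population variance, as computed above
print(round(pstdev(incomes)))       # 387995 -- standard deviation
# The mean (225,000) lies well above the median (45,000), a sign of positive skew.
print(mean(incomes) > median(incomes))  # True
```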

Glossary terms related to measures of dispersion:

Confidence Interval
Distribution
Kurtosis
Point Estimate
Quartiles
Range
Skewness
Standard Deviation
Univariate Analysis
Variance

Measures of Association

Measures of association indicate whether two variables are related. Two measures are commonly used:

Chi-Square

To test for associations, a chi-square is calculated in the following way: Suppose a researcher wants to know whether there is a relationship between gender and two types of jobs, construction worker and administrative assistant. To perform a chi-square test, the researcher counts up the number of female administrative assistants, the number of female construction workers, the number of male administrative assistants, and the number of male construction workers in the data. These counts are compared with the number that would be expected in each category if there were no association between job type and gender (this expected count is based on statistical calculations). If there is a large difference between the observed values and the expected values, the chi-square test is significant, which indicates there is an association between the two variables.

*The chi-square test can also be used as a goodness-of-fit test, to assess whether data from a sample come from a population with a specific distribution, as an alternative to the Anderson-Darling and Kolmogorov-Smirnov goodness-of-fit tests. As such, the chi-square test is not restricted to nominal data; for continuous (non-binned) data, however, the results depend on how the bins or classes are created and on the size of the sample.

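A minimal sketch of the contingency-table test described above, assuming SciPy is available (the 2x2 counts are made up purely for illustration):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical counts of people in each job type, by gender
#                 admin. assistant  construction worker
observed = np.array([[90, 10],   # female
                     [20, 80]])  # male

chi2, p, dof, expected = chi2_contingency(observed)
print(expected)  # counts expected if gender and job type were unrelated
print(chi2, p)   # a large chi-square (small p-value) indicates an association
```
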
Correlation

Correlation measures the strength and direction of the linear relationship between two variables. The most widely used measure is Pearson's correlation coefficient (also called the product moment correlation coefficient), which ranges from -1 to +1: values near +1 indicate a strong positive relationship, values near -1 indicate a strong negative relationship, and values near 0 indicate little or no linear relationship.

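As a sketch, Python's standard statistics module (Python 3.10 or later) can compute Pearson's correlation coefficient directly; the height and weight values below are hypothetical:

```python
from statistics import correlation

heights_cm = [150, 160, 165, 170, 180, 185]
weights_kg = [50, 55, 60, 65, 75, 82]

r = correlation(heights_cm, weights_kg)
print(round(r, 3))  # close to +1: in this toy sample, taller people tend to weigh more
```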

Glossary terms related to measures of association:

Association
Chi Square
Correlation
Correlation Coefficient
Measures of Association
Pearson's Correlational Coefficient
Product Moment Correlation Coefficient