In the literal meaning of the terms, a parametric statistical test is one that makes assumptions about the parameters (defining properties) of the population distribution(s) from which one's data are drawn, while a nonparametric test is one that makes no such assumptions. In this strict sense, "nonparametric" is essentially a null category, since virtually all statistical tests assume one thing or another about the properties of the source population(s).
For practical purposes, you can think of "parametric" as referring to tests, such as t-tests and the analysis of variance, that assume the underlying source population(s) to be normally distributed; they generally also assume that one's measures derive from an equal-interval scale. And you can think of "nonparametric" as referring to tests that do not make these particular assumptions. Examples of nonparametric tests include
 the various forms of chi-square tests (Chapter 8),
 the Fisher Exact Probability test (Subchapter 8a),
 the Mann-Whitney Test (Subchapter 11a),
 the Wilcoxon Signed-Rank Test (Subchapter 12a),
 the Kruskal-Wallis Test (Subchapter 14a),
 and the Friedman Test (Subchapter 15a).
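To make the parametric/nonparametric contrast concrete, here is a minimal sketch that runs a parametric test (the independent-samples t-test) and its nonparametric counterpart (the Mann-Whitney test) on the same two samples. The use of SciPy and the sample data are assumptions for illustration only; the text itself does not prescribe any particular software or dataset.

```python
# A sketch contrasting a parametric test with its nonparametric
# counterpart on the same data. SciPy and the sample values below are
# hypothetical choices, not part of the original text.
from scipy import stats

# Two small independent samples, treated as equal-interval measures.
group_a = [12, 15, 14, 10, 13, 17, 16, 11]
group_b = [18, 21, 16, 22, 19, 24, 17, 20]

# Parametric: the t-test assumes normally distributed source populations.
t_stat, t_p = stats.ttest_ind(group_a, group_b)

# Nonparametric: the Mann-Whitney test compares ranks and drops the
# normality assumption.
u_stat, u_p = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")

print(f"t-test:       t = {t_stat:.3f}, p = {t_p:.4f}")
print(f"Mann-Whitney: U = {u_stat:.1f}, p = {u_p:.4f}")
```

With clearly separated samples like these, both tests point to the same conclusion; the two approaches tend to diverge when the data are skewed, heavy-tailed, or only ordinal, which is where the nonparametric option earns its keep.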
Nonparametric tests are sometimes spoken of as "distribution-free" tests, although this too is something of a misnomer, since such tests still make some assumptions about the underlying distributions, even if they do not require normality.
