
Replicability and generalizability are important considerations when analyzing research findings. Result replicability measures the extent to which results will remain the same when a new sample is drawn, while generalizability refers to the ability to generalize the results from one study to the population (Guan, Xiang, & Keating, 2004). If results are not replicable, they will not be generalizable. Replicability is important because it indicates whether results reflect a genuine effect or a fluke. Measures of replicability can be obtained using either external or internal methods. External replicability analysis requires drawing a completely new sample and replicating the study. Internal replicability analysis, by contrast, reuses the data from the original sample.

When the bootstrap is utilized for descriptive purposes, the variance in statistic estimates across resamples is examined (Thompson, 1999). The descriptive analysis focuses on the standard deviation of the statistic across resamples, which serves as an estimate of the standard error (SE) of the parameter being measured (Thompson, 1999). By examining the fluctuations in a statistic across resamples, a measure of the stability of the results can be observed (Fan, 2003). For example, a researcher could investigate the mean or another statistic across resamples. By looking at the SE, the researcher gains insight into how much the means vary across resamples, which provides information on result replicability.
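This descriptive use of the bootstrap can be illustrated with a minimal sketch (not drawn from the cited sources; the function name, the `scores` data, and the resample count are illustrative):

```python
import random
import statistics

def bootstrap_se(sample, n_resamples=2000, statistic=statistics.mean, seed=42):
    """Estimate the SE of a statistic as the standard deviation of that
    statistic computed across many resamples drawn with replacement."""
    rng = random.Random(seed)
    estimates = [
        statistic(rng.choices(sample, k=len(sample)))  # resample, same size
        for _ in range(n_resamples)
    ]
    # The spread of the resampled statistics is the bootstrap SE estimate.
    return statistics.stdev(estimates)

# Hypothetical test scores for illustration.
scores = [72, 85, 91, 68, 77, 88, 95, 81, 74, 90]
print(round(bootstrap_se(scores), 2))
```

A small SE relative to the statistic suggests the result is stable across resamples; a large SE signals that a new sample could yield a noticeably different estimate.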

Furthermore, the bootstrap method can be used parametrically or non-parametrically. Parametric bootstrapping makes assumptions about the distribution of the population being examined (Boos & Stefanski, 2010). Parameters of the assumed distribution are estimated first (Boos & Stefanski, 2010). Distribution assumptions can be mathematically derived and are based on a set of assumptions about the population (Beasley & Rodgers, 2009). Oftentimes, when conducting parametric bootstraps, researchers assume the distribution is normal, or bell-shaped (Beasley & Rodgers, 2009). Next, random samples are drawn from the fitted distribution to estimate the probability of a variable obtaining a given value in an interval or to estimate the probability density function (Zientek & Thompson, 2007).
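A minimal sketch of the parametric approach under a normality assumption (not from the cited sources; the function name and the `data` values are illustrative):

```python
import random
import statistics

def parametric_bootstrap_se(sample, n_resamples=2000, seed=0):
    """Parametric bootstrap: assume the population is normal, estimate its
    parameters from the sample, then resample from the fitted normal
    distribution rather than from the observed data themselves."""
    rng = random.Random(seed)
    mu = statistics.mean(sample)      # estimated mean of assumed normal
    sigma = statistics.stdev(sample)  # estimated SD of assumed normal
    n = len(sample)
    means = [
        statistics.mean(rng.gauss(mu, sigma) for _ in range(n))
        for _ in range(n_resamples)
    ]
    return statistics.stdev(means)

data = [12.1, 9.8, 11.4, 10.2, 13.0, 9.5, 11.9, 10.7]
print(round(parametric_bootstrap_se(data), 2))
```

Note that every resampled value comes from the fitted normal curve, so the quality of the result depends directly on whether the normality assumption holds.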

In non-parametric bootstrapping, the researcher does not make any theoretical assumptions about the data distribution. Instead, an empirical estimation of the sampling distribution is created through resampling with replacement (Beasley & Rodgers, 2009). When conducting a non-parametric bootstrap, a large number of resamples, each the same size as the original sample, are drawn with replacement (Zientek & Thompson, 2007). With replacement, a given value can appear in the same resample multiple times. Replacement is important because it increases the possible number of unique resamples, which provides more data on replicability (Beasley & Rodgers, 2009). Non-parametric bootstrapping is utilized more often in research because the procedure does not require theoretical assumptions about distribution shape (Amiri, Von Rosen, & Zwanzig, 2008).
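The replacement mechanic can be made concrete with a short sketch (not from the cited sources; the `sample` values are illustrative):

```python
import random
from collections import Counter

rng = random.Random(7)
sample = [4, 8, 15, 16, 23, 42]

# One non-parametric resample: same size as the original, drawn with replacement.
resample = rng.choices(sample, k=len(sample))
counts = Counter(resample)

# Because draws are made with replacement, a value can appear more than once
# in the resample while other values are left out entirely.
print(sorted(resample))
print(dict(counts))
```

Without replacement, every same-size resample would simply reproduce the original sample; replacement is what generates the variation across resamples that the bootstrap analyzes.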

Bootstrap Procedure

When conducting the bootstrap procedure, the researcher must consider the characteristics of the original sample (Zientek & Thompson, 2007). Smaller samples, numerous variables, and small effect sizes can all affect replicability (Zientek & Thompson, 2007). For example, a small sample size or a sample that is mainly composed of outliers will create an inaccurate pseudo-population, no matter how many resamples are drawn (Zientek & Thompson, 2007). If the original sample is not representative, the bootstrap procedure cannot give an accurate estimate of result replicability in the population. A larger sample can help combat the likelihood of drawing a...
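This limitation can be demonstrated with a small sketch (not from the cited sources; the simulated "population" and the deliberately unrepresentative sample are illustrative): because the bootstrap treats the sample as a pseudo-population, its estimates stay centered on the sample, not the population.

```python
import random
import statistics

rng = random.Random(3)

# Simulated population with mean near 100.
population = [rng.gauss(100, 15) for _ in range(10_000)]

# A small, unrepresentative sample drawn only from the upper tail (outliers).
biased_sample = sorted(population)[-10:]

# Resampling with replacement from the biased sample: no number of
# resamples can correct for the unrepresentative original sample.
boot_means = [
    statistics.mean(rng.choices(biased_sample, k=len(biased_sample)))
    for _ in range(1000)
]

print(round(statistics.mean(population), 1))  # population mean, near 100
print(round(statistics.mean(boot_means), 1))  # bootstrap mean, far above it
```

The bootstrap distribution looks perfectly well-behaved, yet it is centered far from the population value, which is exactly the failure mode described above.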
