Data Completeness Bias
Searching a data set for differences between groups on particular outcomes, or in subgroups of patients, without explicit a priori hypotheses.
Related Glossary Terms:
- Internal Validity: Whether a study provides valid results depends on whether it was designed and conducted well enough that the study findings accurately represent the direction and magnitude of the underlying true effect (i.e., studies that have higher internal validity have a lower likelihood of bias/systematic error).
- Bias: Systematic deviation from the underlying truth due to a feature of the design or conduct of a research study (for example, overestimation of a treatment effect due to failure to randomize).
- Incorporation Bias: Occurs when investigators use a reference standard that incorporates the diagnostic test that is the subject of investigation. The result is a bias toward making the test appear more powerful in differentiating target positive from target negative patients than it actually is.
- Spectrum Bias: Ideally, diagnostic test properties will be assessed in a population in which the spectrum of disease in the target positive patients includes all those in whom clinicians might be uncertain about the diagnosis, and the target negative patients include all those with conditions easily confused with the target condition. Spectrum bias may occur when the accuracy of a diagnostic test is assessed in a population that differs from this ideal. Examples of spectrum bias include situations in which a substantial proportion of the target positive population have advanced disease and the target negative participants are 'normal' or asymptomatic. Such situations typically occur in diagnostic case-control studies (for instance, comparing those with advanced disease to normal individuals). Such studies are liable to yield an overly sanguine estimate of the usefulness of the test.
- Trim-and-Fill Method: When publication bias is suspected in a systematic review, investigators may attempt to estimate the true intervention effect by removing, or trimming, small positive studies that do not have a negative study counterpart and then calculating a supposed true effect from the resulting symmetric funnel plot. The investigators then replace the positive studies they have removed and add hypothetical studies that mirror these positive studies to create a symmetric funnel plot that retains the new pooled effect estimate. This method allows the calculation of an adjusted confidence interval and an estimate of the number of missing trials.
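The trim, re-estimate, and fill steps described in the Trim-and-Fill entry can be sketched in code. This is a minimal toy illustration, not the full Duval and Tweedie estimator: it assumes the number of unmatched small positive studies, `k`, is already known, whereas the actual method estimates `k` from funnel-plot asymmetry, and it uses a simple fixed-effect (inverse-variance) pooled estimate.

```python
# Toy sketch of the trim-and-fill idea. Assumption: k, the number of
# small positive studies lacking a negative counterpart, is given.
# Effects could be, e.g., log odds ratios; weights are inverse variances.

def pooled(effects, variances):
    """Fixed-effect (inverse-variance weighted) pooled estimate."""
    weights = [1.0 / v for v in variances]
    return sum(w * e for w, e in zip(weights, effects)) / sum(weights)

def trim_and_fill(effects, variances, k):
    # 1. Trim: drop the k most extreme positive effects.
    order = sorted(range(len(effects)), key=lambda i: effects[i])
    keep = order[:len(effects) - k]
    centre = pooled([effects[i] for i in keep],
                    [variances[i] for i in keep])
    # 2. Fill: restore the trimmed studies and add hypothetical mirror
    #    images reflected about the new centre, with the same variances.
    extreme = order[len(effects) - k:]
    filled_effects = effects + [2 * centre - effects[i] for i in extreme]
    filled_variances = variances + [variances[i] for i in extreme]
    # 3. Adjusted pooled estimate from the augmented (symmetric) data set.
    return pooled(filled_effects, filled_variances)
```

With five studies of equal precision and effects `[0.1, 0.2, 0.3, 0.8, 0.9]` and `k = 2`, the naive pooled estimate is 0.46, while the filled estimate shifts down to 0.20, illustrating how mirroring the unmatched positive studies pulls the pooled effect toward the symmetric centre.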