To assess the validity of a given scientific phenomenon, it is crucial to obtain ratings and observations from multiple individuals (e.g., scorers and observers). The degree of agreement between these raters/observers reflects the difference between an observed value and the true value. The statistical measurement of agreement has become one of the most important qualitative measures used in assessing inter-rater reliability. Moreover, “agreement between two different methods, graders or raters of ordered categorical measures is an important subject in any field of science”. Consequently, professional researchers and graduate students are aware of this and use different statistical approaches both to validate their findings and to limit the impact of rater variability on the quality of their scientific research.
Definition of Cohen’s Kappa measure:
There are several methods for calculating the probability of agreement between either a fixed or a random number of appraisers. The most commonly used methods for assessing agreement are Fleiss' kappa and Cohen's kappa. The main difference between the two is that Cohen's kappa is limited to two raters only, whereas Fleiss' (1971) kappa is applicable to multiple raters; “that is the reason Fleiss presents an extension. However, Fleiss does not agree with Cohen for the two-rater case due to the stronger assumption on chance agreement”. In this paper, I will focus on presenting the importance of Cohen's kappa (1960) technique for ensuring validity and reliability in qualitative research.
The kappa statistic (K) was proposed by Cohen (1960); “the method is the most popular coefficient of rater agreement. Furthermore, Kappa (K) indicates the degree to which two raters agree beyond chance. It is a summary measure of agreement in a rater X rater cross-classification”. This statistical method is widely used in various fields of science, such as the medical, engineering, and behavioral sciences. In addition, the kappa method can also be used “to classify (assign) objects...
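To make the definition concrete, the sketch below computes Cohen's kappa for two raters directly from its standard formula, K = (p_o − p_e) / (1 − p_e), where p_o is the observed proportion of agreement and p_e is the chance agreement expected from each rater's marginal label frequencies. The rating data here are hypothetical, invented purely for illustration.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters beyond chance."""
    n = len(rater_a)
    # Observed agreement: proportion of items both raters label identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement from each rater's marginal frequencies.
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical ratings of 10 items into two categories by two raters.
a = ["yes", "yes", "no", "yes", "no", "yes", "no", "no", "yes", "yes"]
b = ["yes", "no", "no", "yes", "no", "yes", "yes", "no", "yes", "yes"]
print(round(cohens_kappa(a, b), 3))  # → 0.583
```

Here the raters agree on 8 of 10 items (p_o = 0.8), but because both assign “yes” often, chance alone would yield p_e = 0.52, so kappa credits only the agreement beyond that baseline.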