Cohen's Kappa is a statistical measure used to evaluate the level of agreement between two raters who classify items into mutually exclusive categories. Unlike simple percentage agreement, Cohen's Kappa accounts for the possibility that agreement occurs by chance. This makes it a more robust and reliable measure for assessing inter-rater reliability, particularly in fields such as nursing where accurate and consistent evaluations are critical.
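The standard formula is kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed proportion of agreement and p_e is the proportion of agreement expected by chance given each rater's category frequencies. As a minimal sketch of how this works in practice, the following Python example computes kappa from two hypothetical raters' labels (the data and variable names here are illustrative, not drawn from the text):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Compute Cohen's Kappa for two raters' categorical labels."""
    n = len(rater_a)
    # Observed agreement: proportion of items both raters labeled identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected (chance) agreement: sum over categories of the product
    # of each rater's marginal proportion for that category.
    counts_a = Counter(rater_a)
    counts_b = Counter(rater_b)
    categories = set(counts_a) | set(counts_b)
    p_e = sum((counts_a[c] / n) * (counts_b[c] / n) for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical ratings of 10 patient assessments as "stable" or "at risk".
rater_a = ["stable", "stable", "at risk", "stable", "at risk",
           "stable", "at risk", "stable", "stable", "at risk"]
rater_b = ["stable", "at risk", "at risk", "stable", "at risk",
           "stable", "stable", "stable", "stable", "at risk"]

print(f"Cohen's Kappa: {cohens_kappa(rater_a, rater_b):.2f}")
```

In this illustrative run the two raters agree on 8 of 10 items (80% raw agreement), but because some of that agreement would be expected by chance alone, the kappa value comes out lower, which is exactly the correction the statistic is designed to provide.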