Inter-rater reliability can be measured with several statistical methods. The most common are Cohen's Kappa, which quantifies agreement between two raters on categorical data while correcting for agreement expected by chance, and the Intraclass Correlation Coefficient (ICC), which is used for continuous data. Both yield a numerical value summarizing the level of agreement among raters, with values closer to 1 indicating greater reliability.
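As a rough illustration, the sketch below computes both measures in Python on hypothetical ratings. The rater arrays, the icc2_1 helper, and the choice of ICC variant (ICC(2,1), two-way random effects, single rater, absolute agreement) are assumptions made for this example rather than anything prescribed above; Cohen's kappa is taken from scikit-learn, and the ICC is computed directly from the two-way ANOVA mean squares.

```python
# Hypothetical data only: the rating arrays below are invented for illustration.
import numpy as np
from sklearn.metrics import cohen_kappa_score

# --- Cohen's kappa: two raters assigning categorical labels to 10 items ---
rater_a = ["yes", "no", "yes", "yes", "no", "no", "yes", "no", "yes", "yes"]
rater_b = ["yes", "no", "no", "yes", "no", "yes", "yes", "no", "yes", "yes"]
kappa = cohen_kappa_score(rater_a, rater_b)
print(f"Cohen's kappa: {kappa:.3f}")

# --- ICC(2,1): continuous scores, rows = subjects, columns = raters ---
def icc2_1(scores: np.ndarray) -> float:
    """Two-way random-effects, single-rater, absolute-agreement ICC."""
    n, k = scores.shape                      # n subjects rated by k raters
    grand = scores.mean()
    # Sums of squares from the two-way ANOVA decomposition
    ss_rows = k * ((scores.mean(axis=1) - grand) ** 2).sum()   # between subjects
    ss_cols = n * ((scores.mean(axis=0) - grand) ** 2).sum()   # between raters
    ss_total = ((scores - grand) ** 2).sum()
    ss_err = ss_total - ss_rows - ss_cols
    # Mean squares
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    # Shrout & Fleiss ICC(2,1)
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )

# Five subjects scored by three raters on a continuous scale (hypothetical)
ratings = np.array([
    [9.0, 8.5, 9.5],
    [6.0, 6.5, 5.5],
    [8.0, 7.5, 8.0],
    [4.0, 5.0, 4.5],
    [7.0, 7.0, 7.5],
])
print(f"ICC(2,1): {icc2_1(ratings):.3f}")
```

In both cases the output is a single coefficient: values near 1 indicate strong agreement, while values near 0 indicate agreement no better than chance.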