I am a fourth-year student working on my honors thesis, and I have run into a problem calculating a pooled kappa statistic in JMP. I understand that Cohen's kappa is normally calculated between two raters whose ratings are cross-tabulated in a contingency table, but I want a measure of concordance between two groups of raters. The first group highlighted here rated 11 criteria ("C Attn R", "tCLOppR", ...), shown in the columns, on a Likert scale of 1-3 (0-3 for the last criterion). The scale reflects the frequency of a behavior: 0 = not available, 1 = not used consistently, 3 = used consistently throughout the session.

Regarding an overall measure of agreement between two groups of raters, I suggest that you pool the ratings within each group and then calculate kappa on the pooled group-level ratings (a sketch of this approach follows at the end of the thread). There are also other platforms in JMP for analyzing agreement; please see the example at www.jmp.com/support/help/13/Example_of_an_Attribute_Gauge_Chart.shtml#304555, which involves several raters and several items (i.e., parts). From the data you have shown, it looks like you will need to restructure your table so that the rated items are rows and each rater's ratings form a column.

Oh, it turns out that Fit Y by X has an agreement (kappa) statistic! But that's not a cross-analysis, is it? Thank you, Karen! P.S. The FDA guide... that is interesting! I did find Cohen's kappa under Analyze > Quality and Process > Variability / Attribute Gauge Chart.
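To make the "pool within each group, then compare the groups" suggestion concrete, here is a minimal sketch in Python, outside of JMP. It assumes pandas and scikit-learn are available; the rater column names and the toy 0-3 ratings are invented for illustration and are not the original data. Recall that kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement between the two (here pooled) ratings and p_e is the agreement expected by chance.

# Sketch of the "pool within group, then compute kappa between groups" idea.
# Assumptions: pandas and scikit-learn are installed; the column names
# (rater_A1, rater_A2, rater_B1, rater_B2) and the ratings are hypothetical.
import pandas as pd
from sklearn.metrics import cohen_kappa_score

# One row per rated item (criterion x session), one column per rater.
df = pd.DataFrame({
    "rater_A1": [1, 3, 2, 0, 3, 1],
    "rater_A2": [1, 3, 3, 0, 2, 1],
    "rater_B1": [1, 2, 2, 0, 3, 1],
    "rater_B2": [2, 2, 2, 1, 3, 1],
})

def pool_group(frame, columns):
    """Collapse a group's raters to one rating per item using the per-item
    mode (consensus); ties fall back to the lower rating here. The median
    is another reasonable choice for an ordered 0-3 scale."""
    return frame[columns].mode(axis=1)[0].astype(int)

group_a = pool_group(df, ["rater_A1", "rater_A2"])
group_b = pool_group(df, ["rater_B1", "rater_B2"])

# Unweighted kappa between the two pooled (group-level) ratings.
kappa = cohen_kappa_score(group_a, group_b)

# For an ordered 0-3 scale, a weighted kappa may be more appropriate,
# since it penalizes near-misses (e.g., 2 vs 3) less than large disagreements.
weighted_kappa = cohen_kappa_score(group_a, group_b, weights="quadratic")

print(f"pooled kappa (unweighted): {kappa:.3f}")
print(f"pooled kappa (quadratic weights): {weighted_kappa:.3f}")

Within JMP itself, the same comparison of the two pooled columns could be made with the Fit Y by X agreement statistic mentioned above, or all raters could be entered at once in the Attribute Gauge Chart platform linked in the reply.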
