08 Apr Attribute Agreement Analysis
However, a bug tracking system is not a continuous gage. The assigned values are either correct or incorrect; there is no (or should not be) gray area. If codes, locations and severity levels are defined effectively, there is exactly one attribute in each of these categories for a given defect. Analytically, this technique is a wonderful idea. In practice, however, it can be difficult to execute judiciously. First, there is always the question of sample size. For attribute data, relatively large samples are required to estimate percentages with reasonably narrow confidence intervals. If an appraiser examines 50 different defect scenarios – twice – and the agreement rate is 96 percent (48 agreements out of 50), the 95 percent confidence interval ranges from 86.29 to 99.51 percent.
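The intervals quoted here are consistent with an exact (Clopper-Pearson) binomial confidence interval. As a minimal sketch – assuming Python with SciPy, neither of which the article specifies – the calculation can be reproduced like this:

```python
from scipy.stats import beta

def clopper_pearson(agreements: int, trials: int, confidence: float = 0.95):
    """Exact (Clopper-Pearson) confidence interval for a binomial proportion."""
    alpha = 1.0 - confidence
    lower = beta.ppf(alpha / 2, agreements, trials - agreements + 1) if agreements > 0 else 0.0
    upper = beta.ppf(1 - alpha / 2, agreements + 1, trials - agreements) if agreements < trials else 1.0
    return lower, upper

# 48 agreements out of 50 trials: roughly (0.8629, 0.9951)
print(clopper_pearson(48, 50))
# 96 agreements out of 100 trials: roughly (0.9012, 0.9890)
print(clopper_pearson(96, 100))
```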
That is a fairly wide margin of error, especially given the effort of selecting the scenarios, reviewing them in depth, making sure a master value is assigned to each one, and then convincing the appraisers to do the job – twice. If the number of scenarios is increased to 100, the 95 percent confidence interval for a 96 percent agreement rate narrows to a range of 90.1 to 98.9 percent (Figure 2).

The audit should help determine which specific individuals and codes are the main sources of problems, and the attribute agreement assessment should help determine the relative contributions of repeatability and reproducibility problems for those specific codes (and individuals). In addition, many bug tracking systems have accuracy problems in the location assigned to a defect, because what gets recorded is where the defect was detected, not where it originated. Where a defect is found does little to identify its causes, so the accuracy of the location assignment should also be part of the audit.

First, the analyst should confirm that the data are indeed attribute data. One can argue that assigning a code – that is, binning a defect into a category – is a decision that characterizes the defect with an attribute. Either a category is correctly assigned to a defect, or it is not. Similarly, the correct source location is either assigned to the defect or it is not. These are "yes or no" and "correct assignment or incorrect assignment" answers. This part is fairly straightforward.
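To make the audit concrete, here is a minimal sketch of one way the scoring could be tabulated. All names and data in it are hypothetical: each scenario carries a known master category, and each appraiser rates every scenario twice. Repeatability is scored as agreement between an appraiser's two trials, and accuracy as agreement of both trials with the master value.

```python
# Hypothetical ratings: scenario -> (master category, {appraiser: (trial 1, trial 2)})
ratings = {
    "BUG-101": ("UI",    {"alice": ("UI", "UI"),       "bob": ("UI", "Logic")}),
    "BUG-102": ("Logic", {"alice": ("Logic", "Logic"), "bob": ("Data", "Data")}),
    "BUG-103": ("Data",  {"alice": ("Data", "UI"),     "bob": ("Data", "Data")}),
}

def score(ratings):
    stats = {}  # appraiser -> [repeatable, accurate, total]
    for master, by_appraiser in ratings.values():
        for appraiser, (t1, t2) in by_appraiser.items():
            s = stats.setdefault(appraiser, [0, 0, 0])
            s[0] += (t1 == t2)            # agrees with own earlier call
            s[1] += (t1 == t2 == master)  # agrees with self and with the master
            s[2] += 1
    return stats

for appraiser, (rep, acc, n) in sorted(score(ratings).items()):
    print(f"{appraiser}: repeatability {rep}/{n}, accuracy vs. master {acc}/{n}")
```

Per-code breakdowns follow the same pattern, keyed by the master category instead of the appraiser.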
If repeatability is the main problem, appraisers are confused or undecided about certain criteria. If reproducibility is the main problem, appraisers have strong opinions about certain conditions, but those opinions differ from one appraiser to the next.
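The two failure modes leave different fingerprints in the data. In the hypothetical sketch below, both appraisers agree perfectly with themselves but poorly with each other: the signature of a reproducibility problem rather than a repeatability problem.

```python
from itertools import combinations

# Hypothetical two-trial ratings per appraiser for the same five scenarios.
trials = {
    "alice": [("UI", "UI"), ("UI", "UI"), ("Logic", "Logic"), ("Data", "Data"), ("UI", "UI")],
    "bob":   [("Logic", "Logic"), ("UI", "UI"), ("UI", "UI"), ("Data", "Data"), ("Logic", "Logic")],
}

def within(appraiser):
    """Repeatability: fraction of scenarios where trial 1 matches trial 2."""
    pairs = trials[appraiser]
    return sum(t1 == t2 for t1, t2 in pairs) / len(pairs)

def between(a, b):
    """Reproducibility signal: fraction of scenarios where two appraisers' first trials match."""
    return sum(x[0] == y[0] for x, y in zip(trials[a], trials[b])) / len(trials[a])

for name in trials:
    print(f"{name}: within-appraiser agreement {within(name):.0%}")   # 100% for both
for a, b in combinations(trials, 2):
    print(f"{a} vs {b}: between-appraiser agreement {between(a, b):.0%}")  # 40%
```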