A structured abstract summarizes the paper in about a page. It is longer than a typical short abstract, but it follows a fixed format. Here is an example, from Roy Maxion:
Background. Keystroke biometrics seeks to identify users on the basis of their typing rhythms. This identification is usually performed by anomaly-detection systems built on machine-learning methods. Researchers often evaluate these systems on user data comprising multiple repetitions of a password, generally on the order of 40 per user. However, as the number of repetitions increases, a practice effect may emerge.
Aim. We wish to answer four questions: (1) What effect does practice have on the time needed to type a password? (2) How does practice influence hold times and digram latencies? (3) Do the effects of practice influence biometric-system accuracy? (4) How do practice effects change system performance at the user level?
Data. The data used in our studies were gathered from 51 subjects. Each subject typed a 10-character password, .tie5Roanl, 400 times.
Methods. We quantify practice effects by fitting exponential practice curves to the data. We conduct a series of evaluations to demonstrate the effects of practice on a machine-learning-based biometric system.
Results. Our findings include: (1) on average, users type the password 0.97 seconds faster at the end of the study than at the beginning, a 30% speed-up; (2) 50% of users require over 214 repetitions to become practiced; (3) practice slightly changes hold times but significantly alters digram latencies; (4) a system’s error rate is significantly lower for practiced typists; (5) most users see a decrease in their error rate as they become more practiced, but roughly a quarter of our subjects see an increase.
Conclusions. User practice has significant implications for keystroke biometrics. Since unpracticed users are more variable in their typing, they are more likely to provoke misclassification errors than practiced users. Researchers or practitioners using data from unpracticed users without accounting for practice effects may obtain overly pessimistic error rates when evaluating their systems. Obtaining accurate error-rate estimates may require, for example, selecting stimulus strings that incur minimal practice effects, collecting sufficient data so any practice effects are fully observed, or devising new algorithmic methods that adjust the data for practice effects.
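The curve-fitting step mentioned in the Methods paragraph can be sketched as follows. The model form, parameter values, and synthetic data below are illustrative assumptions, not details taken from the study; a common choice is an exponential practice curve in which typing time decays toward an asymptote as the repetition count grows.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical exponential practice curve: typing time starts at
# roughly a + b seconds and decays toward the asymptote a at rate c
# as the repetition index n increases.
def practice_curve(n, a, b, c):
    return a + b * np.exp(-c * n)

# Synthetic timings standing in for one subject's 400 password
# repetitions (true parameters chosen arbitrarily for illustration).
rng = np.random.default_rng(0)
n = np.arange(1, 401)
times = practice_curve(n, 2.3, 1.0, 0.01) + rng.normal(0, 0.05, n.size)

# Fit the three parameters to the observed times.
(a, b, c), _ = curve_fit(practice_curve, n, times, p0=(2.0, 1.0, 0.01))
print(f"asymptote={a:.2f}s, initial slowdown={b:.2f}s, rate={c:.4f}")
```

The fitted asymptote `a` estimates a subject's fully practiced typing time, and the rate `c` determines how many repetitions are needed before the subject is effectively practiced.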