Background and Objectives
Survey-based evaluation of a system, such as measuring users' satisfaction or patient-reported outcomes, entails a set of burdens that limit the feasibility, frequency, extendability, and continuity of the evaluation. Automating the evaluation process, that is, reducing the evaluators' burden in questionnaire curation and minimizing the need for explicit user attention when collecting their attitudes, can make the evaluation more feasible, repeatable, extendable, continuous, and even flexible for improvement. An automated evaluation process can also be enhanced with features such as the ability to handle heterogeneity across evaluation cases. Here, we present the design of a system that enables semi-automated evaluation. The design is presented and partially implemented in the context of health information systems, but it can be applied to other information system contexts as well.
Method
We followed a design research methodology to design the system, which was divided into four components, each brought to a certain level of maturity. Methods already implemented and validated in previous studies were embedded within the components and extended with improved automation proposals or new features.
Results
A system was designed, comprising four major components: Evaluation Aspects Elicitation, User Survey, Benchmark Path Model, and Alternative Metrics Replacement. All components have reached the essential maturity stages of problem identification, identification of solution objectives, and overall design. In the overall design, the primary flow, process entities, data entities, and events for each component are identified and illustrated. Parts of some components have already been verified and demonstrated in real-world cases.
Conclusion
A system can be developed to minimize human burden, for both evaluators and respondents, in survey-based evaluation. This system automates finding items to evaluate, creating a questionnaire based on those items, surveying users' attitudes toward those items, modeling the relations between the evaluation items, and incrementally changing the model to rely on automatically collected metrics, usually implicit indicators gathered from the users, instead of requiring explicit expression of their attitudes. The system offers minimal human burden, frequent repetition, continuity and real-time reporting, incremental upgrades in response to environmental changes, proper handling of heterogeneity, and a higher degree of objectivity.
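The pipeline described above can be illustrated with a highly simplified sketch. All function names, the toy scoring logic, and the weighting scheme below are illustrative assumptions for exposition only; they are not the system's actual implementation, and the real components (e.g., the Benchmark Path Model) are substantially richer than the mean-score placeholder used here.

```python
# Hypothetical, minimal sketch of the four-component pipeline:
# elicitation -> survey -> benchmark model -> metric replacement.

def elicit_items(corpus):
    """Evaluation Aspects Elicitation (toy): derive distinct items to evaluate."""
    return sorted(set(corpus))

def survey(items, respondents):
    """User Survey (toy): collect explicit attitudes (e.g., 1-5) per item."""
    return {item: [r[item] for r in respondents if item in r] for item in items}

def benchmark_model(responses):
    """Benchmark Path Model (toy placeholder): mean attitude per item."""
    return {item: sum(v) / len(v) for item, v in responses.items() if v}

def replace_with_metrics(model, implicit_metrics, weight=0.5):
    """Alternative Metrics Replacement (toy): incrementally shift an item's
    score from explicit survey answers toward an implicit metric."""
    return {item: (1 - weight) * score + weight * implicit_metrics.get(item, score)
            for item, score in model.items()}

items = elicit_items(["usability", "speed", "usability"])
respondents = [{"usability": 4, "speed": 3}, {"usability": 5, "speed": 4}]
model = benchmark_model(survey(items, respondents))       # {"speed": 3.5, "usability": 4.5}
# A hypothetical implicit indicator (e.g., derived from interaction logs):
updated = replace_with_metrics(model, {"speed": 4.5})     # speed -> 4.0
```

In this sketch, raising `weight` over successive evaluation rounds models the incremental shift from explicit survey responses toward automatically collected metrics, which is how the burden on respondents is gradually reduced.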