Description (eng)
This Master's thesis explores the conceptual and practical development of User Interface Sounds within the context of the research project Smart Companion 2 (= SC2), hosted by St. Pölten UAS. The primary objective of SC2 is to enhance and maintain the individual autonomy of persons aged 60 to 85. This is achieved through various interventions, including the reduction of fall risk, enabled by a commercial vacuum cleaner robot equipped with an "Amazon Echo Dot" and additional sensors. SC2 is thereby capable, for example, of identifying an unconscious person and initiating an emergency call if necessary. A total of four scenarios involving User Interface Sounds (= UI Sounds) are examined with the aim of augmenting the dialogues of the "Echo Dot." The core of this work focuses on the specific requirements for the sound design of UI Sounds. These sounds were developed in a Digital Audio Workstation (= DAW), then implemented and evaluated within the framework of Human-Centered Design (= HCD) using a Mixed Methods Research Design (= MMR). Despite the small sample size, MMR enabled an in-depth evaluation through the balanced weighting of qualitative and quantitative findings. In the study, test users favored 8 of the 20 UI Sounds across the four scenarios; these preferences were determined both quantitatively and qualitatively via a moderated user test. The results offer initial insights into further sound design requirements aimed at increasing the acceptance of SC2 and enhancing the user experience. It must be emphasized that the study has several limitations. For a more comprehensive and valid assessment of the implemented measures and their effects, further studies, particularly field studies, are necessary.