One piece of feedback I have recently been given is that I need to go back to my study plan and re-evaluate how it all links together. This in turn will help me strengthen my research question and formulate the aim of my PhD project more clearly. I will not address the points one by one, but will instead reflect on the areas that are represented in all of them.
- Technology: While I initially looked into all technologies developed to support people living with dementia, monitoring technologies have become a focus of my research. These technologies mainly benefit caregivers, either by supporting them in monitoring people living with dementia or by relieving them of some of these duties. They are associated with the (perceived) risk people with dementia may be living in, and raise the question of whether they enhance the independence of people living with dementia by enabling them to stay independent for longer, or whether they infringe on their privacy, which may have an impact on their autonomy as well. I see smart, intelligent, context-aware or connected technologies as an extension of this, as they address the same problem area and potentially have the same implications with regard to personal rights.
- Dementia: So far I have not addressed any particular type of dementia, but use the term as generically as I find it in the literature on the topic of technology. I am aware of the many different types of dementia and how they differ, and will narrow down the topic as I see appropriate.
- Empathy: This has so far not been an explicit item in my research, but I wonder if it should be. The studies undertaken so far – and those planned – all have an element of this. In the initial study I specifically chose names and descriptions to evoke empathy and asked participants to imagine themselves in the situation; I used dementia-specific technologies to develop empathy myself; the story I wrote is made relatable, i.e. it aims to create an empathic connection; and I aim to work with people who know people living with dementia and are thereby able to relate to the person. Especially if I shift my focus from learning from users of technologies to designers (or even both?), empathy becomes a tool I use. What I need to become clear about is to what end I employ this tool. What is it I want to change? People's opinions? Ways of working? Is it just a tool to elicit responses? This is something I need to dig deeper into.
- Future: All of the things I have done and intend to do have a focus towards the future – individual futures, technological futures, short- and mid-term futures. This is firstly related to the shift in technology that I see happening and the wish to learn more about the outcomes this may have, but also because I do not yet see this happening much. Technology development seems very much caught up in the now, with little regard for where the technologies may go or how they may develop.
- Values: So far I have usually used the quite generic term 'values' to explain what I am interested in. What my research has started to focus on is the complex and complicated relationship between privacy and autonomy with regard to monitoring technologies. I am also interested in what lies 'beyond privacy' – what does this entail? How does it play out in the everyday? Is privacy a value in itself that is worth protecting, or are there other factors that play into it, e.g. quality of life or relationships with others? The question I am interested in goes beyond privacy in the info-sec sense: Is encrypting data enough, or do other factors play into the understanding of privacy that will hinder the use or acceptance of the devices/technologies currently developed/proposed?
Critical design methods share this focus on future developments and the aim of critiquing the values/biases in designs, which makes them a useful tool for this project. Nonetheless, I wonder whether I have put too much emphasis on critical design methods so far, and whether my aim goes beyond learning about the methods to learning about the technologies and exploring their future developments.