How It All Began
Throughout my time at University I always enjoyed the design side of technology. This led me to pick a dissertation strongly focused on designing for users with unique needs.
How It All Changed?
When beginning the project I had very little knowledge of IoT voice-assistant technologies, so I began to delve into the capabilities of Amazon Alexa. In discussion with my dissertation supervisor (a Speech and Language specialist), she laid out the scope of what we were planning to achieve.
Unfortunately, what was expected and what was actually achievable within the limitations of Alexa at the time did not line up!
The New Project!
After realising the issues with the originally proposed project, things quickly developed into a large-scale, user-centred design process involving all the good stuff I had hoped for: gathering requirements from users, analysing the data, and finally designing and implementing something truly useful and fit for purpose.
Data Gathering:
It was time to start talking to the people that mattered: the users!
Workshops:
Two three-hour workshops were used to gather all the information. The aim of the workshops was to gain a sense of the perceptions People with Parkinson's and their carers have of Internet of Things technologies, and to explore ways in which these could be used to support people with Parkinson's Disease.
So as not to take away from the overall flow of the workshops, each one was audio recorded for later in-depth analysis.
Workshop Format:
The format consisted of 3 activities:
1. First a group activity was conducted to allow the participants to explore the priorities that people with Parkinson's would like to see in research conducted with them.
2. Second, a group time-lining activity was performed to explore what a typical day looks like for People with Parkinson's. Participants wrote their day-to-day activities on post-it notes, stuck them in order from the start of the day to the end, and labelled each one as "Easy" or "Difficult" along the y-axis.
3. Finally, participants completed a storyboard representing a character and how they tracked and self-managed their Parkinson's symptoms. This involved asking what symptoms and what types of data would be monitored, and what technologies they felt could do this.
What Did I Find?
In order to answer this, I needed to analyse the data.
As mentioned earlier, each workshop was audio recorded and so it was important to get this information out of the audio and into a solid workable form. 
Transcription:
In order to use the audio data it needed to be physically documented, so I performed a manual transcription of the audio. This was a very time-consuming but very rewarding process. It could often take up to 3 hours per hour of audio, and there were 9 hours in total! However, once done, this information would be the foundation on which to build a successful and useful tool!
What Does It All Mean? (Analysis):
With all the data transcribed, I could now move on to pulling out the good bits!
1. Thematic Analysis:
I found the most appropriate way to use the transcribed data was to perform Thematic Analysis. This involved systematically working through the data and labelling anything interesting.
Once labelled, the final theming step could begin. For this step, I went through all the data again and began to turn the initial labels into designated themes and sub-themes, while adding any new themes I came across on the second reading.

An example of some themes found throughout the transcription.

Thematic Maps:
After the themes had been discovered and labelled I could then produce a thematic map; the first one is usually very large and complex! I then had to refine this complex map to produce a small subset of themes that could be reviewed and turned into ideas for a design.

This is the complex thematic map that will need to be refined to pull out the key themes that can be used for idea creation.

This is the simplified thematic map, which shows the key themes that can now be used for ideation.

What are the options?
All the data is now laid out in front of me and it's time to consider what can be created and what will be useful for the participants, based on their input.
Some Ideation:
1. A Yoga skill that would walk the user through various exercises.
2. A Memory Game for Amazon Alexa that would aim to help improve memory retention. (It's to be noted here that participants discussed
3. A Medication Tracker that would track when you have taken medication, then tell you when you last took it and when the next dose is due upon request.
The Results are In!
The final choice for application was to create a memory training game for the Amazon Alexa.
This decision was made based on the recurring theme of memory as a priority; this theme was discussed in both workshops and was highlighted by participants through activity one.
It was also found while researching that memory games requiring you to repeat phrases often help with vocal performance, which links back to the original project concept of speech and language therapy.

What Does the Skill Do?

The application (Skill for Alexa) is a memory training "game". It consists of "Players" remembering sequences of numbers and/or words for as long as they can while the sequence increases in size. Failure to repeat the sequence correctly results in GAME OVER!
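The core mechanic described above can be sketched in a few lines of Node.js. This is an illustrative sketch, not the skill's actual code: the function names (`nextSequence`, `checkAnswer`) and the word pool are my own for demonstration.

```javascript
// Illustrative sketch of the memory game's core loop (not the skill's
// actual code): the sequence grows by one item each round, and an
// incorrect repetition means GAME OVER.

const WORD_POOL = ['apple', 'river', 'candle', 'mumps', 'garden'];

// Extend the current sequence with one new random item from the pool.
function nextSequence(sequence, pool = WORD_POOL) {
  const item = pool[Math.floor(Math.random() * pool.length)];
  return [...sequence, item];
}

// A round is survived only if the player's answer matches exactly,
// item for item and in order.
function checkAnswer(sequence, answer) {
  return sequence.length === answer.length &&
    sequence.every((item, i) => item === answer[i]);
}

// The player repeats the sequence correctly...
console.log(checkAnswer(['apple', 'river'], ['apple', 'river'])); // true
// ...but one wrong word ends the game.
console.log(checkAnswer(['apple', 'river'], ['apple', 'rivers'])); // false
```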

The gamification design attempts to keep the individuals motivated and make the training feel less like a chore.

On top of this, there is also a high-score system that helps track improvement from previous days and gives users something to beat, keeping them coming back and training.
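The high-score check itself is simple; a sketch under my own naming (in the real skill this state would live in Alexa's session or persistent attributes, here it is a plain object):

```javascript
// Sketch of the high-score logic described above (illustrative names,
// not the skill's actual code). A beaten high score is recorded so the
// speech output can congratulate the player.

function updateHighScore(attributes, score) {
  const previousBest = attributes.highScore ?? 0;
  if (score > previousBest) {
    // New personal best: store it and flag it for the response.
    return { ...attributes, highScore: score, beatHighScore: true };
  }
  return { ...attributes, beatHighScore: false };
}

let attrs = { highScore: 5 };
attrs = updateHighScore(attrs, 7);
console.log(attrs.highScore); // 7 — the new target to beat next session
```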

How On Earth Do I Make This?! (Designing)
I may have come up with an idea, but it's far from reality yet! The next step was to design the software and application itself.
This stage includes laying out exactly what the application should do and attempting to experience it from a user's perspective as if it's already complete.
This stage came with an interesting difference from most designs I had previously done, as the user interface is completely audio based. This made it more important than ever to use design methods that could visualise the audio UI and prevent me from making it too complex.


Functional and Non-Functional Requirements:

Defining these is key to application development: functional requirements set a minimum for what the app needs to do to work, while non-functional requirements define how the app should feel and perform while doing it.

Use Cases:

After creating solid requirements, some use cases can be created. These look at multiple common uses of the product by the user, helping to detect where errors can occur and to map the general flow of the application.
They are even more important for Alexa development specifically, as they help define "Intents", which relate to everything the user can do within the skill.
User Experience Flow Diagrams:
Alongside the above, I created flow diagrams to really visualise how the application would work for the user. I found these really helpful: as the interface is entirely audio based with no physical UI, being able to see the flow helped me avoid making things too complex to navigate.
Amazon Alexa Specific Design:
When designing for Alexa there are a few specific things that need to be considered, such as "Intents", "Utterances" and "Slots". These are the sole interface of Alexa interaction and so need to be planned out.
Utterances:
These are what the user says in order to trigger an "Intent". They can vary vastly between people, so I used "scripts" to come up with a large number of varied inputs. Scripts let you experiment with example user interactions and generate natural ways of starting intents.
Intents:
Intents are triggered by the utterances and run specific parts of the application. As seen in the table above, saying "Tell me the rules" will run the part of the application called "HearRules".
Slots:
Slots allow for variation in data. For example, not everyone has the same name, so when answering with your name, a dynamic slot stores the name given for the application to use.
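These three pieces come together in the skill's interaction model. Below is a hedged sketch of how that mapping might look, written as a JavaScript object for readability (in practice it lives in the skill's JSON model). "HearRules" is from the example above; "SetNameIntent", its sample phrasings, and the `AMAZON.FirstName` slot type are illustrative assumptions, not the project's actual model.

```javascript
// Sketch of how utterances, intents, and slots relate in an Alexa
// interaction model (illustrative — the project's real model is not
// reproduced here). In a deployed skill this is the JSON model.

const interactionModel = {
  intents: [
    {
      name: 'HearRules',
      // Several utterances can all trigger the same intent.
      samples: [
        'tell me the rules',
        'what are the rules',
        'explain the rules',
      ],
    },
    {
      name: 'SetNameIntent', // hypothetical name-setting intent
      // The {name} slot captures whatever name the user says.
      samples: ['my name is {name}', 'call me {name}'],
      slots: [{ name: 'name', type: 'AMAZON.FirstName' }],
    },
  ],
};

console.log(interactionModel.intents.map((i) => i.name));
```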

What If It All Goes Wrong?
As with anything in life, sometimes it all just goes wrong :( In this case a "Contingency Plan" sets out what to do in my moment of panic!
Let's get making! (Implementation)
Now that designing was done and I had planned all the necessary parts of the application, as well as what to do if things went wrong, it was time to start making the prototype!
I'll skip some of the codey complexity here, but I began to create the Alexa Skill using Node.js and the Alexa SDK. 
I followed the requirements and implemented each function phase by phase whilst iteratively testing along the way using "Bespoken". 
"Bespoken" is a hosting platform where I could use command line interactions to manually trigger intents and see the results in JSON, this prevented a lot of headaches for people working around me as I didn't need to keep talking to Alexa and having her reply. (THANKS JSON!)

An example of bespoken being used to see input and output and the application working.
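To give a flavour of the code behind those intents: in the Node.js Alexa SDK, each intent is backed by a handler pairing a `canHandle` check with a `handle` function. Below is a stripped-down sketch of what a "HearRules" handler could look like, with a minimal mock standing in for the SDK's real `handlerInput` so it can run standalone; the spoken text and the mock are my own, not the skill's actual code.

```javascript
// Illustrative ask-sdk-style handler for the HearRules intent.
// The rules text is invented for the sketch.
const HearRulesHandler = {
  canHandle(handlerInput) {
    const request = handlerInput.requestEnvelope.request;
    return request.type === 'IntentRequest' &&
      request.intent.name === 'HearRules';
  },
  handle(handlerInput) {
    return handlerInput.responseBuilder
      .speak('Repeat the sequence back to me. One mistake and it is game over!')
      .reprompt('Say start to begin.')
      .getResponse();
  },
};

// Minimal mock of the SDK's handlerInput, just enough to exercise
// the handler without a real Alexa request.
function mockInput(intentName) {
  const response = {};
  return {
    requestEnvelope: {
      request: { type: 'IntentRequest', intent: { name: intentName } },
    },
    responseBuilder: {
      speak(text) { response.outputSpeech = text; return this; },
      reprompt(text) { response.reprompt = text; return this; },
      getResponse() { return response; },
    },
  };
}

const input = mockInput('HearRules');
console.log(HearRulesHandler.canHandle(input)); // true
console.log(HearRulesHandler.handle(input).outputSpeech);
```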

Implementation Issues:
With implementation come hurdles! As I was very new to Alexa programming, I was still learning how things worked. This caused some unexpected errors during the creation of the first prototype.
1. Name Looping - This was the first problem I encountered. It caused the app to loop forever when asking the user their name. When the user sets their name they are asked for a confirmation; this was supposed to assist with the voice issues experienced by People with Parkinson's. However, upon confirming their name, the app would set their name to the word they confirmed with, such as "Yes", and then loop forever!
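The fix, roughly, was to stop the confirmation turn from ever writing into the name. A sketch of the repaired logic with invented names (`handleNameTurn` and the session shape are mine, not the original code):

```javascript
// Sketch of the repaired name-confirmation logic (illustrative, not
// the original skill code). The bug: whatever the user said during
// confirmation — e.g. "Yes" — was stored as the name, so the skill
// asked "Is Yes your name?" forever. The fix: while awaiting
// confirmation, the reply is only ever treated as yes/no.

function handleNameTurn(session, spokenValue) {
  if (session.awaitingConfirmation) {
    if (spokenValue.toLowerCase() === 'yes') {
      // Confirmed: keep the stored name, never overwrite it here.
      return { ...session, awaitingConfirmation: false, nameConfirmed: true };
    }
    // Not confirmed: discard the candidate and ask again from scratch.
    return { name: null, awaitingConfirmation: false, nameConfirmed: false };
  }
  // First pass: store the candidate name and ask the user to confirm.
  return {
    name: spokenValue,
    awaitingConfirmation: true,
    nameConfirmed: false,
  };
}

let session = { name: null, awaitingConfirmation: false, nameConfirmed: false };
session = handleNameTurn(session, 'Bob');  // skill asks: "Is Bob your name?"
session = handleNameTurn(session, 'Yes');  // confirms; name stays "Bob"
console.log(session.name); // "Bob" — not "Yes", so no more looping
```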
Does it Work? (Acceptance Testing)
For my implementation process, I iteratively tested after each phase of added functionality, using the aforementioned "Bespoken" software and acceptance tests.
Acceptance tests provide assurance that each of the skill's core functionalities works to an acceptable level. The entire product is broken down into individual segments to be tested, allowing errors to be easily detected and fixed before actual user testing begins.

An example of my acceptance test table with some passed tests

Does it Work for People With Parkinson's? (Expert User Testing)
For expert user testing I tested the prototype with two of the participants from the initial workshops.
User 1:
I visited their home and set the device up to test, observe and interview.
User 2:
This second user tested remotely using the Alexa BETA functionality and an instruction pack provided for set up. This emulated a real world scenario where the user would need to set things up themselves.
To gather data I used 2 main methods:

Observation:

For this method I created an observation form and filled it in with key information about the user's interactions with the app.


Questionnaire:

I designed the questionnaire to use the Likert scale format. This is because it's accessible, requiring no complex motor movements and only a simple mark to score. It's also very easy to understand, as it simply uses a rating from 0-4.

This questionnaire was also replicated online and provided with a link for remote tests.

Some Funny Errors!
During testing there were a few comical moments that the participants had a laugh at, and I always love to share these:
Bad Words (Voice Recognition):
Specifically when playing the words game, Alexa would mishear words even when said correctly. This is down to some of Alexa's funny pronunciations and the difficulty of deciding whether a word should be said the way Alexa says it or the way you would say it!
An Example - One of the words was "Mumps", but Alexa would almost always hear it as "Mums" and say you were wrong! (Sorry everyone!)
General Failure:
One of the users accidentally said the word "Start" during a misunderstanding, which caused Alexa to say "Is Start your name?"
To which the user responded "Bob is my name", to which she then responded "Is Name your name?" Everything crumbled away from there!
What Did They Think?
When looking at the results of testing I categorised them into 6 key areas. But, to summarise:
Overall Skill: 
The final Skill worked well overall and users agreed that specifically the numbers game would help keep them "Mentally Sharp" and they could see it improving their memory.
They also felt a sense of accomplishment when beating their high score which made them feel great!
The Ease Of Use:
It was not as easy as expected. This seems to be user specific: Participant 1 struggled with the conversational interface and the lack of cues for when to speak, whereas Participant 2 did not. This could relate to their different generations, technical backgrounds and stages of Parkinson's.
Wrapping Up:
To close things off, here are two brief video demonstrations of the final prototype in testing.
It was an absolute pleasure to work with all the people who participated in this project. Overall, some great data was collected and it was amazing to see people so eager to get involved with new technology.
I think there is definitely room for more work to be done in this field in the future, and hopefully it will benefit many people living with conditions like Parkinson's.