Write about the evaluation

Possible dimensions for evaluation:

  1. show how easy it is for new people to participate. For example, show total training time, how often they go back to consult the documentation, and how often they delete things they have created because they made mistakes (a tracking-data sketch follows below). We could also run a survey of new users. New users will include: Jordan, Craig, Hilary, Gopal (anyone else?).
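A minimal sketch of how these counts could be pulled from the tracking data; the event names, field names and CSV layout are assumptions, not the actual tracking schema:

import csv
from collections import defaultdict

# New users to evaluate (from the list above).
NEW_USERS = {"Jordan", "Craig", "Hilary", "Gopal"}

def new_user_metrics(tracking_csv="tracking_data.csv"):
    doc_visits = defaultdict(int)   # user -> documentation consultations
    deletions = defaultdict(int)    # user -> deletions of things they created
    with open(tracking_csv, newline="") as f:
        for row in csv.DictReader(f):
            user = row["user"]
            if user not in NEW_USERS:
                continue
            if row["event"] == "documentation_view":
                doc_visits[user] += 1
            elif row["event"] == "delete_own_page":
                deletions[user] += 1
    return {u: (doc_visits[u], deletions[u]) for u in NEW_USERS}

print(new_user_metrics())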
  2. show how people find relevant tasks. For example, show how they use search, how they get to the tasks they want to do, etc.
If the tracked data shows enough variation, we could report the conversion rate of individual components (a sketch of this computation follows the figure below).
FindingRelevantTasks.png
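A minimal sketch of how the conversion rate per component could be derived from the tracking data, where conversion means a component view that leads to a task sign-up; the field names and CSV layout are assumptions:

import csv
from collections import defaultdict

def conversion_rates(tracking_csv="tracking_data.csv"):
    views = defaultdict(int)    # component -> number of views
    signups = defaultdict(int)  # component -> task sign-ups from that component
    with open(tracking_csv, newline="") as f:
        for row in csv.DictReader(f):
            if row["action"] == "view":
                views[row["component"]] += 1
            elif row["action"] == "task_signup":
                signups[row["component"]] += 1
    # Conversion rate = sign-ups per view for each component.
    return {c: signups[c] / views[c] for c in views if views[c] > 0}

print(conversion_rates())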
We can add additional heat maps to illustrate where users focus their attention (see the sketch after the screenshots).
Heatmap G16 Workshop.png
Heatmap Mainpage.png
Heatmap FrameworkDesign.png
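If we want heat maps from our own tracking data in addition to the screenshots above, a minimal sketch could render one from logged click coordinates; the coordinate list and the page size are placeholders:

import numpy as np
import matplotlib.pyplot as plt

# Placeholder click positions in page pixel coordinates (x, y).
clicks = np.array([(120, 80), (130, 85), (400, 300), (410, 310), (415, 305)])

# 2D histogram over an assumed 1280x1024 pixel page area.
heat, _, _ = np.histogram2d(clicks[:, 0], clicks[:, 1],
                            bins=64, range=[[0, 1280], [0, 1024]])

# Transpose so rows correspond to y; flip the y extent to match screen coordinates.
plt.imshow(heat.T, cmap="hot", interpolation="bilinear",
           extent=[0, 1280, 1024, 0])
plt.title("Click heat map (sketch)")
plt.savefig("heatmap_sketch.png")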
  3. show how people track their own tasks. Here we would run a survey asking how easy it is for people to track their tasks.
Survey with ratings from 1 to 10 for the following categories:
  1. My Task Tab
  2. Task Alert
  3. Person Page
  4. Expertise

Need to show collaboration touch points:

  1. is more than one person signed up for each task? (a computation sketch follows the figure below)
Nr of Participants   Tasks in %
0                    0
1                    10
2                    25
3                    50
4                    10
>4                   5
Data Source: JSON Serialized Tasks.
Results from 15 September:
Paper persons per task.png
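A minimal sketch of how this distribution could be computed from the JSON serialized tasks; the file name and the "participants" field are assumptions about the serialization format:

import json
from collections import Counter

def participants_per_task(path="tasks.json"):
    with open(path) as f:
        tasks = json.load(f)  # assumed: a list of task objects
    counts = Counter()
    for task in tasks:
        n = len(task.get("participants", []))
        counts[">4" if n > 4 else str(n)] += 1
    total = sum(counts.values())
    # Percentage of tasks for each participant count, as in the table above.
    return {k: 100.0 * v / total for k, v in counts.items()}

print(participants_per_task())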
  2. is more than one person editing the metadata of tasks?
Nr of Meta Data Editors   Tasks in %
1                         10
2                         25
3                         50
4                         10
>4                        5
Data Source: Tracking Data.
  3. is more than one person editing the content text of tasks?
Nr of Content Editors   Tasks in %
1                       10
2                       25
3                       50
4                       10
>4                      5
Data Source: Wiki Logs.
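Both editor questions (metadata editors from the tracking data, content editors from the wiki logs) could use the same counting step once the raw logs are reduced to (task, editor) pairs; how those pairs are extracted from the raw logs is left open. A minimal sketch:

from collections import defaultdict, Counter

def editors_per_task_distribution(edit_events):
    editors = defaultdict(set)  # task -> set of distinct editors
    for task, editor in edit_events:
        editors[task].add(editor)
    counts = Counter()
    for people in editors.values():
        n = len(people)
        counts[">4" if n > 4 else str(n)] += 1
    total = sum(counts.values())
    # Percentage of tasks for each number of distinct editors.
    return {k: 100.0 * v / total for k, v in counts.items()}

# Hypothetical example input:
events = [("TaskA", "alice"), ("TaskA", "bob"), ("TaskB", "alice")]
print(editors_per_task_distribution(events))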
  4. is each person collaborating with more than one other person?
Nr of Collaborators   Persons in %
1                     10
2                     25
3                     50
4                     10
>4                    5
Data Source: JSON Serialized Tasks.
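A minimal sketch of how the collaborators per person could be derived from the JSON serialized tasks, counting two persons as collaborators if they are signed up to at least one common task; the file name and the "participants" field are assumptions:

import json
from collections import defaultdict, Counter
from itertools import combinations

def collaborators_per_person(path="tasks.json"):
    with open(path) as f:
        tasks = json.load(f)
    partners = defaultdict(set)  # person -> set of distinct collaborators
    for task in tasks:
        for a, b in combinations(task.get("participants", []), 2):
            partners[a].add(b)
            partners[b].add(a)
    counts = Counter()
    for others in partners.values():
        n = len(others)
        counts[">4" if n > 4 else str(n)] += 1
    total = sum(counts.values())
    # Percentage of persons per number of collaborators, as in the table above.
    return {k: 100.0 * v / total for k, v in counts.items()}

print(collaborators_per_person())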

