I will now explain my current plan for the project. Notice that I say current here – this may change throughout the course of the project: I may narrow in on a topic of interest, or branch out to investigate anomalous research findings.
I will be building the end product – the dissertation and software – via a process of iterations, much like an iterative Software Lifecycle. The Literature Survey is ongoing – running from the beginning of the project to its end – feeding into all parts of the dissertation, and indeed this Proposal, as shown in the Gantt chart (Figure ). The literature I cite sometimes supports points I wish to make; at other times it guides my next area of research, reinforces findings, offers comparison and contrast with other research, and probably serves other roles I have not yet thought of. Most importantly, I will be looking at who cites each paper or article, preferring sources that are peer-reviewed.
As well as this literature research, I will also have an ongoing Product Literature Survey – looking at existing software out there that is related to my current area of interest.
Central to this idea of iteration is my desired method of performing user studies: I will first run what I have called a “Pilot” – a short, shallow trial User Study that focuses not on the research question itself, but on the particular experimental design I would like to use in my actual User Study. By employing a Pilot I can get a feel for the experimental design – perhaps discovering variables I had not previously considered, which may require me to increase my sample size or simplify the experiment in order to mitigate their effect on the dependent variable I wish to test for. These are all problems reported by Yates (2012), including basic teething problems in getting the experiment to flow smoothly. At a more exploratory level, the Pilot may simply let me see what is out there: it may help not to look for anything in particular initially, and observe what happens.
At this stage, with the help of discussions with my Project Supervisor, I have some ideas about how to gather data in User Studies, and these Pilots could prove to be a useful testbed for such tools. I hypothesise that the novice developer “thrashing” described by Lopez et al. (2012) can be observed as shorter pauses between editing and experimentation, and I could measure this via the mouse position relative to the IDE, clicks, and key-presses, using tools built into Elm plus a small extension to stream this over the Internet to my storage facilities (Czaplicki, 2013b).
As you will see in the Gantt chart (Figure ), I have included Testing & Implementation under the same heading, as I will be doing Test Driven Development. My experience on Placement at PicoChip, my job as a Software Engineer at Altran, and my reading have convinced me that this way of developing saves time and improves code quality by enforcing modularity so that the code can be tested (Martin, 2008; Hunt & Thomas, 1999).
I will now talk about the resources I require for the completion of this dissertation, including the availability of these resources.
I will require users for my user study. These users must be proficient in at least one programming language: declarative programming languages are a niche within the already specialised discipline of programming, so some basic knowledge is required in order to see useful patterns in User Interface Design. Suitable candidates are First and Second Year Computer Science students from most Universities in the UK. Their availability is limited – Christmas holidays and coursework deadlines mean that certain periods of the term are particularly busy for them. At Bath, suitable periods are therefore November, January to mid-February (inclusive), and mid-March to April (inclusive). It will be useful to identify free periods at other nearby Universities to hedge my bets, and to obtain a decent random assignment of users so I can form equivalent groups in my experiments.
The ACM Digital Library, accessible via the Bath portal either from University or from home via single sign-on, is a valuable resource for research papers, articles and references. The Cited-By feature will allow me to assess the popularity and ranking of each resource. Another valuable resource is the Psychology of Programming Interest Group, a “[group of] people from diverse communities to explore common interests in the psychological aspects of programming and in the computational aspects of psychology”, with peer-reviewed papers on topics particularly relevant to my area of research.
I will require regular access to the Internet, Emacs with haskell-mode installed, and Elm version 0.10 (Czaplicki, 2013a). I will also need git for software source control, and bitbucket.org for online, private backups of my work. I require LaTeX to type up my dissertation, and have chosen texlive on Ubuntu 12.04.3 as my development environment of choice. The full development environment is installed on my laptop at the house I am staying in in Bath. I am also able to replicate this environment to a satisfactory level at Bath University on any computer with access via PuTTY/SSH or similar to LCPU, as all the above software can be installed and run on my Bath University account.
I am using Chromium Version 28.0.1500.71 on Ubuntu 12.04 (28.0.1500.71-0ubuntu1.12.04.1) to run the Elm IDE. This is an important dependency that may make it difficult to get Users in User Studies to run a functionally equivalent browser: only recent versions of Chrome, Chromium, Firefox, Opera and Safari (not Internet Explorer) support Elm web programs.
In conducting User Studies, I will be interacting with people and collecting data from them, so I must be considerate and mindful of those I talk to and the information I handle.
An Ethical Checklist such as the one Bath University uses as its template (Bath, 2013) may help ensure that I treat each participant with care and respect. I can also learn from the discoveries of others: in my reading I came across a paper (also mentioned earlier) that highlighted concerns participants under study had, and detailed ways to mitigate those concerns so that participants feel informed and safe (Yates, 2012).
The problem area of user-interface programming – and, more generally, the activity of programming in a context such as a software engineering environment – encompasses several realms of interest. Through my survey of literature, my research has touched upon the above-mentioned terms, and I have discovered some thought-provoking problems in the field of programming. The concept of ‘Programming’ embodies other concepts – art-forms, engineering processes, science, language, and mathematics, among others. To me, programming is a creative endeavour unlike any other, in which the programmer wields materials of no substance – the code – by manipulating symbols on a screen that represent states in the machine being used. There are many programming languages, and all Turing-complete languages reduce to the same language – that of a Turing Machine. So why do we have so many programming languages?
Beware of the Turing tar-pit in which everything is possible but nothing of interest is easy. (Perlis, 1982)
Different languages lend themselves to different ways of thinking about problems. They may place emphasis on one feature, for example list manipulation, and hide others, such as types. The language or programming environment may make the effect of changes explicit as they are encoded, as opposed to queuing up a block of changes that the programmer must apply manually.
I would like to draw your attention in particular to the terms Abstraction, Cognitive offloading, Feedback, Loss of information?/Augmented reality?, Thrashing, and “Programming blind”. These are, at present, my topics of interest, and my literature review has up to this point been heavily influenced by them.
In the process of surveying literature relevant (and sometimes irrelevant) to this dissertation, recurring conceptual patterns were observed. One particular instance is that several authors seem to fall victim to the trap of claiming their creation is “easy to use”, “better”, or “simpler than x” without providing any supporting evidence.
Perhaps these are incidents of ‘experimenter bias’ – where the evaluator is naturally predisposed to a positive appraisal of their own findings. One way to avoid this is to have one set of people perform the data capture and another set perform the data analysis. Nevertheless, these patterns emerge, and present numerous opportunities for experimentation and subsequent evidence supporting or contradicting these claims. Experiments may see if the same conclusions are reached as the above-mentioned authors, accounting for the ‘evaluator effect’ (Hertzum & Jacobsen, 2001).
Whether this particular route is taken for experimentation hinges on pilot studies that will be conducted concurrently with the Literature Survey, each inextricably shaping the other’s direction of investigation and inquiry.
The catalyst to this whole dissertation was a talk about the concept of a highly reactive development environment – where changes in the code result in instantaneous updates to the runtime, ‘on-the-fly’. This was presented in Bret Victor’s “Inventing on Principle” (Victor, 2012). In his presentation Bret makes several assertions about the ‘traditional’ style of coding, one statement of which is that “most of the developer’s time is spent looking at the code, blindly without an immediate connection to the thing they’re making”. He argues that “so much of creation is discovery, and you can’t discover anything if you can’t see what you’re doing” – alluding to his earlier statement that the compile-run-debug cycle is much like this.
Evan Czaplicki, in the thesis of which Elm is the product (Czaplicki, 2012), makes similar claims – “[Elm] makes it quick and easy to create and combine text, images, and video into rich multimedia displays.” The focus of the thesis is not to evaluate Elm’s usability but to establish a context for Functional Reactive Programming and describe the implementation; even so, he makes other usability claims without evidence – “[non-declarative frameworks for graphical user interfaces] mire programmers in the many small, nonessential details of handling user input and modifying the display.”, “FRP makes GUI programming much more manageable”, and, in a section entitled The Benefits of Functional GUIs, “In Elm, divisions between data code, display code, and user interaction code arise fairly naturally, helping programmers write robust GUI code”. If these claims are true, there is all the more evidence that Elm should be a language of choice for GUI programmers, but experiments must be done to determine this.
And perhaps this rapid development cycle is not always suitable – in their 2012 paper, Lopez et al. show that novices tend to “thrash” about, trying out many ideas that may or may not be a solution, and executing “poorly directed, ineffective problem solving …failing to realise they are doing it in good time, and fail to break out of it”, whereas experts think much more about the problem at hand before proceeding with a solution (Lopez et al., 2012).
Perhaps a further direction of investigation is an experiment to test whether Elm’s auto-updating IDE leads to a lack of critical thinking. One operationalisation is pauses reported as ‘thinking’ during development, where a pause is disambiguated as ‘thinking’ by the experimenter asking the participant why they did not interact with the computer for more than 10 seconds, and the participant reporting that they were planning, designing, or engaged in a similar activity. Along this line of thinking, a study of the relationship between speech pauses and cognitive load (Khawaja et al., 2008) found, across 48 mixed-gender participants, statistically significant indicators of cognitive load in the analysis of speech pauses. Perhaps this concept of pauses can be applied to the activity of programming. However, the planned method of disambiguating pauses via self-reporting would not be suitable according to these authors – “such measures can be either physically or psychologically intrusive and disrupt the normal flow of the interaction” – although a paper cited by Khawaja et al. (2008) claims that “although self-ratings may appear questionable, it has been demonstrated that people are quite capable of giving a numerical indication of their perceived mental burden (Gopher & Braune, 1984)”. Indeed, a pilot study by McKay and Kölling (2012) structures the self-reporting by having users evaluate an IDE as they use it, against a set of subject-specific heuristics the authors designed. They showed that this customised set of heuristics guided the user more effectively than Nielsen’s heuristics in evaluating usability, so one could develop a custom set of heuristics for evaluating the usability of Elm.
From the Elm thesis (Czaplicki, 2012), the language syntax and rapid feedback seem simple enough that it is conceivable (or at the very least possible, and of experimental interest) to let the user customise the UI layout to their liking. Letting the user shape the UI in concert with a UI programmer is covered in the study of the interface development environment “Mobi-D” in military and medical applications (Puerta, 1997), with success in those fields. It is worth speculating how Elm would fit into the development cycle that Puerta’s paper outlines, as this may inspire potential user interface enhancements to the Elm IDE for A/B testing. It must be noted, however, that Mobi-D does not appear to have re-emerged since the paper was written.
My goal is to answer these questions by conducting user studies, leveraging Elm with extensions to do A/B testing, to assess its effectiveness (or ineffectiveness) at enhancing User Interface Design.
The primary direction I mentioned (as echoed in my Proposal) was A/B testing of Elm vs. another language (e.g. JavaScript) – the language being the independent variable – using the same Concurrent FRP IDE (held constant).
He also suggested a potential experiment to test just the paradigm, eliminating the IDE from the experiment above – perhaps as a Pilot study.
We also spoke about ideas for pilot studies – asking “What might be surprising insights into declarative programming languages for User Interface Design – the case of Elm?”.
Speak-aloud protocols, where I prompt/facilitate the user to say what is on their mind when they, for example, pause for more than 10 seconds – a measurement I set out to look for during an experiment.
I might ask:
“I notice you have paused for at least 10 seconds – why?”
“I thought the code would do X, but it did Y.”
“Why did you think it would do X?”
…
I must ask the participant questions designed so that they are not leading.
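To know when to prompt, I could detect such pauses in the IDE itself. A minimal Elm 0.10 sketch, assuming a pause means 10 seconds without a mouse or keyboard update (the name `recentlyActive` is my own):

```elm
import Mouse
import Keyboard
import Time

-- Becomes False once neither the mouse nor the keyboard has produced
-- an update for 10 seconds – the point at which I would prompt.
recentlyActive : Signal Bool
recentlyActive =
    Time.since (10 * Time.second)
               (lift2 (\m ks -> (m, ks)) Mouse.position Keyboard.keysDown)

main = lift asText recentlyActive
```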
Leon suggested I gather a rich data set, as it is very difficult to take notes and prompt the user during an experiment. Perhaps record video.
Devise a Pilot study, answering these 3 questions:
Also see paper Leon will send me on “Thematic analysis & Psychology”
Using a per-participant questionnaire (See ), I captured video and audio data of participants while they completed the task of extending a Mario game to make Mario fly.
I am using Thematic Analysis (Braun & Clarke, 2006) to code the data…
Prompting “What are you thinking about?” etc. seemed to place additional cognitive load on the user, as they spent longer resuming than when not prompted. This caused noise in assessing the actual cognitive load incurred during completion of the task. Were the signs of struggling due simply to not understanding the language, or to the difficulty of the task?
In particular, the majority of instances where users paused turned out to involve confusion about the semantics and syntax of the language.
Track the user mouse and keyboard movements in a 3-tuple: `(Time t, (Mouse.x, Mouse.y), Keypress k)`
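A minimal Elm 0.10 sketch of capturing this tuple, assuming the set of keys currently held down (`Keyboard.keysDown`) stands in for individual key-presses (the name `inputEvents` is my own):

```elm
import Mouse
import Keyboard
import Time

-- Timestamp every change to the mouse position or the keys held down,
-- yielding (Time, ((x, y), [keyCode])) – the 3-tuple described above.
inputEvents : Signal (Time, ((Int, Int), [Int]))
inputEvents =
    Time.timestamp (lift2 (\pos keys -> (pos, keys)) Mouse.position Keyboard.keysDown)

main = lift asText inputEvents
```

Each value of this signal could then be streamed to the server for storage.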
It doesn’t have to be implemented this way. I could extend Model Adjustment 1 to define blocks of code as tokens in themselves, and capture how long the cursor is static on that particular token.
Leon suggested a further refinement of this idea to narrow the data (in fact, just capturing mouse and keyboard movements will result in an explosion in the volume of data – contrary to what I intend to achieve). His refinement was to define regions of interest in the code pane, and capture data only when the mouse or keyboard cursor is within a region.
Use the `if cursor in region then log (Time t, (Mouse.x, Mouse.y), Keypress k)` functionality as a lens to focus on significant portions of the video capture.
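A minimal Elm 0.10 sketch of this filter, assuming a single rectangular region of interest in the code pane (the bounds and the names `region` and `inRegion` are illustrative):

```elm
import Mouse

-- Hypothetical region of interest in the code pane, in pixels.
region = { left = 40, top = 100, right = 440, bottom = 300 }

inRegion : (Int, Int) -> Bool
inRegion (x, y) =
    x >= region.left && x <= region.right &&
    y >= region.top  && y <= region.bottom

-- Only positions inside the region survive; all other movement is
-- dropped at the source, avoiding the explosion in data volume.
trackedPositions : Signal (Int, Int)
trackedPositions = keepIf inRegion (0, 0) Mouse.position

main = lift asText trackedPositions
```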
We then discussed some questions that might lead my direction of study in the next steps of my research:
Is the mouse/cursor position a proxy for someone’s attention as they carry out the task?
Often when I’m coding I’ll leave the cursor where it is but think about other regions of code. I don’t necessarily move the keyboard/mouse cursor to the section of code I’m thinking about. Instead, I use it as a ‘bookmark’ to track what I’m currently implementing, and may scroll around to other parts.
The result of the dissertation will be a list of observed cognitive easing/loading that each language produces for users, much like an advantage/disadvantage comparison:
Elm | JavaScript |
---|---|
+ … | + … |
+ … | - … |
- … | - … |
- … | + … |
+ … | |
Design a task in JavaScript to go inside this adjusted model (incorporating Model Adjustments 1 and 2).
This will require a degree of “implementation juggling” to balance code length and difficulty against the same task in Elm, in such a way that it does not create noise in the thing being studied: cognitive load.
Keep the reactivity constant, and compare the differences in ease between JS and Elm.
If time is available, run another Pilot study on this task and adjusted model.
Needs to be more objective! Why? What will I modify?
I will now identify what the requirements are for the project.
Write software to assist the capture of objective data to inform me of the user’s activities as they use the Elm IDE.
Perform Pilot and User Studies
I must perform Pilot and User Studies in an iterative fashion, each one learning and building upon discoveries made in prior ones, starting vague and getting more and more focused on a particular facet of User Interface Design and/or Declarative programming as an activity.
Priority: High
I must use these studies to inform experimental and software design to disambiguate and filter data collected in the experiment, and to exercise hypotheses.
Priority: High
Source code
The software must be written clearly and simply.
Priority: High
The software must have suitable, concise comments which explain the program’s intent, but only where the code alone is not enough.
Priority: High
Activity recording
The activity-recording feature must not slow the user’s interaction with the IDE by more than 1 ms compared with running without it.
Priority: High
There should be software to visualise the usage data
Priority: Medium
More detail on what I will modify. How will I modify?
Discussed progress made and what hypotheses to form that may usefully model cognitive load.
I have implemented full-screen mouse tracking that stores to a database a tuple `(t, (x, y))` for every mouse move, producing a list in JSON, along the lines of:
`[{"uniq-userid": [125125, [67, 321]]}, {"uniq-userid": [125126, [67, 322]]}, ...]`
I am ready to demo this (See Action 1.)
The only issue worth tweaking is that user activity data is captured separately from the error output, so I will need to collate the data afterwards or find some way to feed it into the same data store.
2 Hypotheses
1. The regions I define in the code (see the green boxes in the figure above), e.g. for mouse-tracking, are meaningful.
2. The frequency of semantic and syntactic errors made will differ as a function of the language under study.
These need narrowing as they are too broad to test. Explode them into multiple, tighter hypotheses.
They are valid because they are well-founded – i.e. I have good reason to believe that the number of errors made is an indication of cognitive load, and good reason to believe that the selected regions will show more mouse activity (or whatever activity I suspect indicates higher cognitive load), as they are harder regions of code or they pertain to achieving the set task.
Refine Mouse logging
When we get an error, we timestamp it and append it to a log file, so that it can later be collated with the Firebase data to determine when errors were made.
I’ll need to insert a layer between `compile :: Snap ()` and `serveHtml :: MonadSnap m => H.Html -> m ()` that performs the logging. It will have type signature `TypedHtml -> H.Html`. See the functions `compile` and `serveHtml` in Server.hs (See ).
`if inRegion (x, y) then Just (x, y) else Nothing` – where `inRegion` tests membership of some 2×2 square, as in the region-filter sketch above.
See https://github.com/spanners/laska/blob/master/Signals.elm
DONE Design a task in JS and Elm
DONE Determine what to do with mouse (for example) data.
What makes code difficult to understand and work with?
[Programming is] manipulating symbols blindly ~ Bret Victor
Do a 2×2 study, defining regions in the code and monitoring mouse clicks. Regions can be either Simple or Hard in complexity (exhibiting or not exhibiting one of the above ‘difficult’ properties), and either task-oriented or not – that is, the code does or does not need to be changed to achieve the task set for the user:
Elm | |
---|---|
Simple/Task | Hard/Task |
Simple/Not-Task | Hard/Not-Task |

JavaScript | |
---|---|
Simple/Task | Hard/Task |
Simple/Not-Task | Hard/Not-Task |
Look at total and/or mean time in each of these areas for comparison.
My study will be between-subjects instead of within-subjects.
That is, I will study different users for different languages. If a user has completed the task in Elm, I cannot have them complete the task in JavaScript, and vice versa.
I will necessarily make a compromise here:
Between-subjects:
I lose the ability to hold programmer competence constant, so it becomes a confounding variable.
I gain the ability to ignore learned experience in completing the task – the participant is different every time, so they will not have done this task before; thus this is not a confounding variable.
Within-subjects has the converse of the above methodological properties.
DONE Reorder divs so embedded div is on top of editor div.
This turned out (I am fairly certain) to be due to codemirror.js binding mouse clicks. It was solved by using Elm’s `Mouse.isDown`. Using `Mouse.isDown` has the added benefit of tracking mouse selects and drags, because it logs `(x,y)` when the mouse is down and `(x,y)` again when it is up.
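A minimal Elm 0.10 sketch of this logging, assuming we sample the position on every change of `Mouse.isDown` (the name `endpoints` is my own):

```elm
import Mouse

-- Fires once on press (True) and once on release (False), pairing the
-- button state with the position at that instant – enough to recover
-- clicks, selects, and drags.
endpoints : Signal (Bool, (Int, Int))
endpoints =
    sampleOn Mouse.isDown (lift2 (\down pos -> (down, pos)) Mouse.isDown Mouse.position)

main = lift asText endpoints
```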
DONE Create a task that features Hard/Simple x Task/Not-task (See )
Implement Region filtering functionality so mouse activity is only logged when the clicks occur within defined region(s)
I have instead defined bounding boxes corresponding to the regions I want to track, used as a mouse-data filter – that is, I capture all click data for the whole frame, then filter it by comparing x,y co-ordinates with my bounding boxes: if a click is in a box, keep it; otherwise discard it.
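A minimal Elm 0.10 sketch of this post-hoc filter (the record shapes and the names `Click`, `Box`, `inBox` and `filterClicks` are my own assumptions):

```elm
-- A logged click and a rectangular bounding box.
type Click = { t : Float, x : Int, y : Int }
type Box   = { left : Int, top : Int, right : Int, bottom : Int }

inBox : Box -> Click -> Bool
inBox b c =
    c.x >= b.left && c.x <= b.right && c.y >= b.top && c.y <= b.bottom

-- Keep a click only if it falls inside at least one tracked region.
filterClicks : [Box] -> [Click] -> [Click]
filterClicks boxes clicks =
    filter (\c -> any (\b -> inBox b c) boxes) clicks
```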
DONE Integrate JS task into IDE
DONE Perform pilot study
WIP Visualise mouse data
Describe how I extended the Elm IDE
Using the Elm IDE
The task I chose for Pilot Study 1 was too difficult: the difficulty of the task itself created noise that masked the cognitive load incurred by the language. I could improve this by simplifying the task in a way that is ‘language agnostic’, i.e. not idiomatic of either Elm or JavaScript (the two languages I am comparing). Something like the following will never be that easy in JavaScript:
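As a minimal illustrative sketch of the kind of snippet meant here (my own stand-in example, not the pilot-task code itself), a complete reactive Elm program that displays the live mouse position is a single definition:

```elm
import Mouse

-- The rendered text updates automatically whenever the mouse moves;
-- no event-listener plumbing is needed.
main = lift asText Mouse.position
```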
I observed some things in Pilot Study 1, and, through using the Elm IDE I extended, some further things before Pilot Study 2.
1H.
A 2×2×2 study – that is, 2 Languages (Elm and JavaScript), 2 Region difficulties (Hard and Simple) and 2 Region relevances (Relevant and Not relevant) – will be done to determine whether the number of mouse clicks per region differs across these variables.
See Figure for the visualisation of participant 15 completing the Elm version of the task.
Operationalisation of thrashing (the concept), i.e. cementing the concept with a metric that models cognitive load (does it? We don’t know – further work after this analysis may determine whether it is a plausible indicator of cognitive load).
Leon suggested an improvement over this experimental method: take people who are new (as in never having used JS or Elm), train them up in either JS or Elm, and then run the same task. That way, their levels of ability are comparable.
My current method creates quite a bit of noise in the data, because I rely on self-reported levels of expertise in JS/functional languages. I don’t know how to adjust the data to account for this. I could group the analyses into categories, i.e. those who reported being experts at JS, those who reported never having used it, those who reported being experts in at least one FP language, and those who reported being new.
Talk about “phases” in a programmer’s activities during task-completion:
(Not necessarily distinct and in sequence — more often interleaved)
X, but how?

Not capturing window resizing is problematic – participant 15 (See Figure ) very likely had a much shorter window height than I have used here. I suspect this because of the cluster of mouse clicks in the same range of the x axis as the Compile button but much further up the y axis; I have no way to be sure, as I did not log window dimensions.
Time (min) | Clicks |
---|---|
38.717217 | 183 |
8.034583 | 130 |
7.878533 | 39 |
23.672500 | 25 |
29.754533 | 391 |
14.993517 | 78 |
48.960367 | 769 |
6.354050 | 71 |
7.878533 | 39 |
29.698267 | 501 |
40.302217 | 803 |
12.319317 | 65 |
17.106933 | 79 |
12.958300 | 119 |
Instead of χ2, consider just using multiple regression with dummy variables (binary predictors) (See Table )
Condition | d1 | d2 | d3 | d4 | d5 | d6 | d7 |
---|---|---|---|---|---|---|---|
relevant × hard × Elm | 1 | 0 | 0 | 0 | 0 | 0 | 0 |
relevant × hard × JS | 0 | 1 | 0 | 0 | 0 | 0 | 0 |
relevant × easy × Elm | 0 | 0 | 1 | 0 | 0 | 0 | 0 |
relevant × easy × JS | 0 | 0 | 0 | 1 | 0 | 0 | 0 |
irrelevant × hard × Elm | 0 | 0 | 0 | 0 | 1 | 0 | 0 |
irrelevant × hard × JS | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
irrelevant × easy × Elm | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
irrelevant × easy × JS | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
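Taking irrelevant × easy × JS as the baseline (all dummies zero), the regression would then be – as a sketch, assuming the per-region click count is the response:

\[ \mathit{clicks}_i = \beta_0 + \sum_{j=1}^{7} \beta_j\, d_{ji} + \varepsilon_i \]

where each \(\beta_j\) estimates the difference between condition \(j\) and the baseline.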
Bath, U. (2013) ‘Research ethics framework checklist’, [online] Available from: http://www.bath.ac.uk/research/pdf/ethics/EIRA1ethicsform.doc.
Braun, V. and Clarke, V. (2006) ‘Using thematic analysis in psychology’, Qualitative Research in Psychology, 3(2), pp. 77–101, [online] Available from: http://www.tandfonline.com/doi/abs/10.1191/1478088706qp063oa.
Czaplicki, E. (2013a) ‘Elm 0.10’, [online] Available from: http://elm-lang.org/blog/announce/0.10.elm.
Czaplicki, E. (2012) ‘Elm: Concurrent FRP for Functional GUIs’, Senior thesis, Harvard University.
Czaplicki, E. (2013b) ‘What is functional reactive programming?’, [online] Available from: http://elm-lang.org/learn/What-is-FRP.elm (Accessed 1 October 2013).
Gopher, D. and Braune, R. (1984) ‘On the psychophysics of workload: Why bother with subjective measures?’, Human Factors: The Journal of the Human Factors and Ergonomics Society, SAGE Publications, 26(5), pp. 519–532.
Hertzum, M. and Jacobsen, N. E. (2001) ‘The evaluator effect: A chilling fact about usability evaluation methods’, International Journal of Human-Computer Interaction, Taylor & Francis, 13(4), pp. 421–443.
Hunt, A. and Thomas, D. (1999) The pragmatic programmer: from journeyman to master, Boston, MA, USA, Addison-Wesley Longman Publishing Co., Inc.
Khawaja, M. A., Ruiz, N. and Chen, F. (2008) ‘Think before you talk: An empirical study of relationship between speech pauses and cognitive load’, In Proceedings of the 20th Australasian Conference on Computer-Human Interaction: Designing for Habitus and Habitat, OZCHI ’08, New York, NY, USA, ACM, pp. 335–338, [online] Available from: http://doi.acm.org/10.1145/1517744.1517814.
Lopez, T., Petre, M. and Nuseibeh, B. (2012) ‘Thrashing, tolerating and compromising in software development’, In Jing, Y. (ed.), Psychology of Programming Interest Group Annual Conference (PPIG-2012), London Metropolitan University, UK, London Metropolitan University, pp. 70–81.
Martin, R. C. (2008) Clean code: A handbook of agile software craftsmanship, 1st ed. Upper Saddle River, NJ, USA, Prentice Hall PTR.
McKay, F. and Kölling, M. (2012) ‘Evaluation of subject-specific heuristics for initial learning environments: A pilot study’, In Proceedings of the 24th Psychology of Programming Interest Group Annual Conference 2012, London Metropolitan University, pp. 128–138.
Perlis, A. J. (1982) ‘Epigrams on programming’, SIGPLAN Notices, 17(9), pp. 7–13.
Puerta, A. R. (1997) ‘A Model-Based Interface Development Environment’, IEEE Softw., Los Alamitos, CA, USA, IEEE Computer Society Press, 14(4), pp. 40–47, [online] Available from: http://dx.doi.org/10.1109/52.595902.
Victor, B. (2012) ‘Inventing on principle’, In Proceedings of the Canadian University Software Engineering Conference (CUSEC), [online] Available from: http://vimeo.com/36579366 (Accessed 15 March 2014).
Yates, R. (2012) ‘Conducting field studies in software engineering: An experience report’, In Jing, Y. (ed.), Psychology of Programming Interest Group Annual Conference (PPIG-2012), London Metropolitan University, UK, London Metropolitan University, pp. 82–85.
All code available here: https://github.com/spanners/elm-lang.org.
JavaScript task here: http://mouth.crabdance.com:8000/_edit/task/MovingBox.js
Copyright (c) 2012-2013 Evan Czaplicki
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following
disclaimer in the documentation and/or other materials provided
with the distribution.
* Neither the name of Evan Czaplicki nor the names of other
contributors may be used to endorse or promote products derived
from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
This study aims to assess how Functional Reactive Programming Languages are used. To do this, we will be asking you to modify a Mario game to get him to fly. The session will take no more than 1 hour.
During the session, you will be introduced to Elm, a functional reactive programming language, as well as being shown what we want you to create. We’ll also present you with a questionnaire to see what experience you’ve had with Functional programming (or similar concepts) before. Finally we’ll give you another questionnaire to ask how you think the session went, and the level of workload in the task.
The session will be recorded on video and then the audio from the session will be transcribed anonymously in order to find any problems that you had during the session. During this process, the data will be stored securely.

Important Information
All data collected during this study will be recorded such that your individual results are anonymous and cannot be traced back to you. Your results will not be passed to any third party and are not being collected for commercial reasons. Participation in this study does not involve physical or mental risks outside of those encountered in everyday life. All procedures and information can be taken at face value and no deception is involved. You have the right to withdraw from the study at any time and to have any data about you destroyed. If you do decide to withdraw, please inform the experimenter.
By signing this form you acknowledge that you have read the information given above and understand the terms and conditions of this study.
Name | Age | Sex | Occupation |
................ | ... | ... | .................. |
Signed
Date
Experimenter: Simon Buist, Dept. of Computer Science. EMAIL ADDRESS
For the purposes of this questionnaire, we consider a piece of software to be an application for which you have received/conceived of a specification, and coded a solution that meets this specification.
.............................................................
.............................................................
.............................................................
.............................................................
.............................................................
.............................................................
.............................................................
.............................................................
.............................................................
If you want to have the study as a whole explained to you, please do so now. However we ask that you refrain from discussing this with potential future participants.